Parallel Sampling of DP Mixture Models using Sub-Cluster Splits
John W. Fisher III*
CSAIL, MIT
fisher@csail.mit.edu
Jason Chang*
CSAIL, MIT
jchang7@csail.mit.edu
Abstract
We present an MCMC sampler for Dirichlet process mixture models that can be parallelized to achieve significant computational gains. We combine a non-ergodic, restricted Gibbs iteration with split/merge proposals in a manner that produces an ergodic Markov chain. Each cluster is augmented with two sub-clusters to construct likely split moves. Unlike some previous parallel samplers, the proposed sampler enforces the correct stationary distribution of the Markov chain without the need for finite approximations. Empirical results illustrate that the new sampler exhibits better convergence properties than current methods.
1 Introduction
Dirichlet process mixture models (DPMMs) are widely used in the machine learning community
(e.g. [28, 32]). Among other things, the elegant theory behind DPMMs has extended finite mixture
models to include automatic model selection in clustering problems. One popular method for posterior inference in DPMMs is to draw samples of latent variables using a Markov chain Monte Carlo
(MCMC) scheme. Extensions to the DPMM such as the Hierarchical Dirichlet processes [29] and
the dependent Dirichlet process [18] also typically employ sampling-based inference.
Posterior sampling in complex models such as DPMMs is often difficult because samplers that
propose local changes exhibit poor convergence. Split and merge moves, first considered in DPMMs
by [13], attempt to address these convergence issues. Alternatively, approximate inference methods
such as the variational algorithms of [3] and [15] can be used. While variational algorithms do
not have the limiting guarantees of MCMC methods and may also suffer from similar convergence
issues, they are appealing for use in large datasets as they lend themselves to parallelization. Here,
we develop a sampler for DPMMs that: (1) preserves limiting guarantees; (2) proposes splits and
merges to improve convergence; (3) can be parallelized to accommodate large datasets; and (4)
is applicable to a wide variety of DPMMs (conjugate and non-conjugate). To our knowledge, no
current sampling algorithms satisfy all of these properties simultaneously.
While we focus on DP mixture models here, similar methods can be extended for mixture models
with other priors (finite Dirichlet distributions, Pitman-Yor Processes, etc.).
2 Related Work
Owing to the wealth of literature on DPMM samplers, we focus on the most relevant work in our
overview. Other sampling algorithms (e.g. [17]) and inference methods (e.g. [3]) are not discussed.
The majority of DPMM samplers fit into one of two categories: collapsed-weight samplers that marginalize over the mixture weights, and instantiated-weight samplers that explicitly represent them. Capabilities of current algorithms, which we now discuss, are summarized in Table 1.
*Jason Chang was partially supported by the Office of Naval Research Multidisciplinary Research Initiative (MURI) program, award N000141110688. John Fisher was partially supported by the Defense Advanced Research Projects Agency, award FA8650-11-1-7154.
Table 1: Capabilities of MCMC Sampling Algorithms

                               CW   [11, 12]  [7, 24]  [5, 9, 13]  [14]  [19, 31]  Proposed Method
Exact Model                    ✓       ✗        ✓         ✓         ✓       ✓           ✓
Splits & Merges                ✗       ✗        ✗         ✓         ✓       ✗           ✓
Intra-cluster Parallelizable   ✗       ✗        ✗         ✗         ✗       ✓           ✓
Inter-cluster Parallelizable   ✗       ✓        ✓         ✗         ✗       ✗           ✓
Non-conjugate Priors           ✓       ✓        ✓         ✗         ✓       ✗           ✓
Collapsed-weight (CW) samplers using both conjugate (e.g. [4, 6, 20, 22, 30]) and non-conjugate
(e.g. [21, 23]) priors sample the cluster labels iteratively one data point at a time. When a conjugate
prior is used, one can also marginalize out cluster parameters. However, as noted by multiple authors
(e.g. [5, 13, 17]), these methods often exhibit slow convergence. Additionally, due to the particular
marginalization schemes, these samplers cannot be parallelized.
Instantiated-weight (IW) samplers explicitly represent cluster weights, typically using a finite approximation to the DP (e.g., [11, 12]). Recently, [7] and [24] have eliminated the need for this approximation; however, IW samplers still suffer from convergence issues. If cluster parameters are marginalized, it can be very unlikely for a single point to start a new cluster. When cluster parameters are instantiated, samples of parameters from the prior are often a poor fit to the data. However, IW samplers are often useful because they can be parallelized across each data point conditioned on the weights and parameters. We refer to this type of algorithm as 'inter-cluster parallelizable', since the cluster label for each point within a cluster can be sampled in parallel.
The recent works of [19] and [31] present an alternative parallelization scheme for CW samplers. They observe that multiple clusters can be grouped into 'super-clusters' and that each super-cluster can be sampled independently. We refer to this type of implementation as 'intra-cluster parallelizable', since points in different super-clusters can be sampled in parallel, but points within a cluster cannot. This distinction is important as many problems of interest contain far more data points than clusters, and the greatest computational gain may come from inter-cluster parallelizable algorithms. Due to their particular construction, current algorithms group super-clusters solely based on the size of each super-cluster. In the sequel, we show empirically that this can lead to slow convergence and demonstrate how data-based super-clusters improve upon these methods.
Recent CW samplers consider larger moves to address convergence issues. Green and Richardson [9] present a reversible jump MCMC sampler that proposes splitting and merging components. While a general framework is presented, proposals are model-dependent and generic choices are not specified. Proposed splits are unlikely to fit the posterior since auxiliary variables governing the split cluster parameters and weights are proposed independent of the data. Jain and Neal [13, 14] construct a split by running multiple restricted Gibbs scans for a single cluster in conjugate and non-conjugate models. While each restricted scan improves the constructed split, it also increases the amount of computation needed. As such, it is not easy to determine how many restricted scans are needed. Dahl [5] proposes a split scheme for conjugate models by reassigning the labels of a cluster sequentially. All current split samplers construct a proposed move to be used in a Metropolis-Hastings framework. If the split is rejected, considerable computation is wasted, and all information contained in learning the split is forgotten. In contrast, the proposed method of fitting sub-clusters iteratively learns likely split proposals with the auxiliary variables. Additionally, we show that split proposals can be computed in parallel, allowing for very efficient implementations.
3 Dirichlet Process Mixture Model Samplers
In this section we give a brief overview of DPMMs. For a more in-depth understanding, we refer the reader to [27]. A graphical model for the DPMM is shown in Figure 1a, where $i$ indexes a particular data point, $x$ is the vector of observed data, $z$ is the vector of cluster indices, $\pi$ is the infinite vector of mixture weights, $\alpha$ is the concentration parameter for the DP, $\theta$ is the vector of the cluster parameters, and $\lambda$ is the hyperparameter for the corresponding DP base measure.
3.1 Instantiated-Weight Samplers using Approximations to the Dirichlet Process
The constructive proof of the Dirichlet process [26] shows that a DP can be sampled by iteratively scaling an infinite sequence of Beta random variables. Therefore, posterior MCMC inference in a DPMM could, in theory, alternate between the following samplers:

$$(\pi_1, \dots, \pi_\infty) \sim p(\pi \mid z, \alpha), \tag{1}$$

$$\theta_k \mathrel{\dot\sim} f_x(x_{\{k\}}; \theta_k)\, f_\theta(\theta_k; \lambda), \quad \forall k \in \{1, \dots, \infty\}, \tag{2}$$

$$z_i \mathrel{\dot\sim} \sum_{k=1}^{\infty} \pi_k\, f_x(x_i; \theta_k)\, \mathbb{1}[z_i = k], \quad \forall i \in \{1, \dots, N\}, \tag{3}$$

where $\dot\sim$ denotes sampling from a distribution proportional to the right side, $x_{\{k\}}$ denotes the (possibly empty) set of data labeled $k$, and $f_\theta(\cdot)$ denotes a particular form of the probability density function of $\theta$. We use $f_x(x_{\{k\}}; \theta_k)$ to denote the product of likelihoods for all data points in cluster $k$. When conjugate priors are used, the posterior distribution for cluster parameters is in the same family as the prior:

$$p(\theta_k \mid x, z, \lambda) \propto f_\theta(\theta_k; \lambda)\, f_x(x_{\{k\}}; \theta_k) \propto f_\theta(\theta_k; \lambda^*_k), \tag{4}$$

where $\lambda^*_k$ denotes the posterior hyperparameters for cluster $k$. Unfortunately, the infinite-length sequences of $\pi$ and $\theta$ clearly make this procedure impossible.
As an approximation, authors have considered the truncated stick-breaking representation [11] and the finite symmetric Dirichlet distribution [12]. These approximations become more accurate when the truncation is much larger than the true number of components. However, the true number of clusters is often unknown. When cluster parameters are explicitly sampled, these algorithms may additionally suffer from slow convergence. In particular, a broad prior will often result in a very small probability of creating new clusters, since the probability of generating a parameter from the prior that fits a single data point is small.
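To make the truncation concrete, the sketch below draws mixture weights from a truncated stick-breaking construction in the spirit of [11]. It is a minimal illustration of ours, not part of any cited algorithm; the truncation level T and the NumPy interface are our own choices.

```python
import numpy as np

def truncated_stick_breaking(alpha, T, rng=np.random.default_rng(0)):
    """Draw T mixture weights by iteratively scaling Beta(1, alpha) sticks.

    The final stick is set to 1 so the last component absorbs all
    remaining mass and the weights sum to one (the truncation).
    """
    sticks = rng.beta(1.0, alpha, size=T)
    sticks[-1] = 1.0  # truncate: last component takes the leftover mass
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - sticks[:-1])))
    return sticks * leftover

weights = truncated_stick_breaking(alpha=1.0, T=100)
assert np.isclose(weights.sum(), 1.0)
```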
3.2 Collapsed-Weight Samplers using the Chinese Restaurant Process
Alternatively, the weights can be marginalized to form a collapsed-weight sampler. By exchangeability, a label can be drawn using the Chinese Restaurant Process (CRP) [25], which assigns a new customer (i.e., data point) to a particular table (i.e., cluster) with the following predictive distribution:

$$p(z_i \mid x, z_{\setminus i}; \alpha) \propto \sum_k N_{k \setminus i}\, f_x(x_i; \lambda^*_{k \setminus i})\, \mathbb{1}[z_i = k] + \alpha\, f_x(x_i; \lambda)\, \mathbb{1}[z_i = \hat{k}], \tag{5}$$

where $\setminus i$ denotes all indices excluding $i$, $N_{k \setminus i}$ is the number of elements in $z_{\setminus i}$ with label $k$, $\hat{k}$ is a new cluster label, and $f_x(\cdot; \lambda)$ denotes the distribution of $x$ when marginalizing over parameters. When a non-conjugate prior is used, a computationally expensive Metropolis-Hastings step (e.g., [21, 23]) must be used when sampling the label for each data point.
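As a concrete illustration of one collapsed-weight Gibbs step (Equation 5) for a conjugate model, consider the sketch below. The callables `pred_lik` and `prior_lik`, standing in for the posterior-predictive and prior-predictive densities $f_x$, are placeholders the model must supply; everything else follows the equation directly.

```python
import numpy as np

def crp_gibbs_step(i, x, z, alpha, pred_lik, prior_lik,
                   rng=np.random.default_rng(0)):
    """Resample z[i] from the CRP predictive (Equation 5).

    pred_lik(x_i, members) -> f_x(x_i; lambda*_{k\\i}), posterior predictive
    prior_lik(x_i)         -> f_x(x_i; lambda), prior predictive
    """
    others = np.delete(np.arange(len(z)), i)
    labels = np.unique(z[others])
    weights = []
    for k in labels:
        members = others[z[others] == k]
        # existing cluster: N_{k\i} * f_x(x_i; lambda*_{k\i})
        weights.append(len(members) * pred_lik(x[i], x[members]))
    weights.append(alpha * prior_lik(x[i]))  # new cluster: alpha * f_x(x_i; lambda)
    probs = np.asarray(weights) / np.sum(weights)
    choice = rng.choice(len(probs), p=probs)
    z[i] = labels[choice] if choice < len(labels) else z.max() + 1
    return z
```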
4 Exact Parallel Instantiated-Weight Samplers
We now present a novel alternative to the instantiated-weight samplers that does not require any finite model approximations. The detailed balance property underlies most MCMC sampling algorithms. In particular, if one desires to sample from a target distribution, $\pi(z)$, satisfying detailed balance for an ergodic Markov chain guarantees that simulations of the chain will uniquely converge to the target distribution of interest. We now consider the atypical case of simulating from a non-ergodic chain with a transition distribution that satisfies detailed balance.
Definition 4.1 (Detailed Balance). Let $\pi(z)$ denote the target distribution. If a Markov chain is constructed with a transition distribution $q(\hat{z} \mid z)$ that satisfies $\pi(z)\, q(\hat{z} \mid z) = \pi(\hat{z})\, q(z \mid \hat{z})$, then the chain is said to satisfy the detailed balance condition and $\pi(z)$ is guaranteed to be a stationary distribution of the chain.
We define a restricted sampler as one that satisfies detailed balance (e.g., using the Hastings ratio [10]) but does not result in an ergodic chain. We note that without ergodicity, detailed balance does not imply uniqueness of, or convergence to, the stationary distribution. One key observation of this work is that multiple restricted samplers can be combined to form an ergodic chain.
Figure 1: (a) DPMM graphical model. (b) Augmented super-cluster space. (c) Super-cluster example. In (a)-(b), auxiliary variables are dotted. In (c), nodes represent clusters, arrows point to neighbors, and colors represent the implied super-clusters.
In particular, we consider a sampler that is restricted to only sample labels belonging to non-empty clusters. Such a sampler is not ergodic because it cannot create new clusters. However, when mixed with a sampler that proposes splits, the resulting chain is ergodic and yields a valid sampler. We now consider the restricted Gibbs sampler; the coupled split sampler is discussed in Section 5.
4.1 Restricted DPMM Gibbs Sampler with Super-Clusters
A property stemming from the definition of the Dirichlet process is that the measure for every finite partitioning of the measurable space is distributed according to a Dirichlet distribution [8]. While the DP places an infinite-length prior on the labels, any realization of $z$ will belong to a finite number of clusters. Supposing $z_i \in \{1, \dots, K\}, \forall i$, we show in the supplement that the posterior distribution of the mixture weights, $\pi$, conditioned on the cluster labels can be expressed as

$$(\pi_1, \dots, \pi_K, \tilde{\pi}_{K+1}) \sim \mathrm{Dir}(N_1, \dots, N_K, \alpha), \tag{6}$$

where $N_k = \sum_i \mathbb{1}[z_i = k]$ is the number of points in cluster $k$, and $\tilde{\pi}_{K+1} = \sum_{k=K+1}^{\infty} \pi_k$ is the sum of all empty mixture weights. This relationship has previously been noted in the literature (c.f. [29]).
In conjunction with Definition 4.1, this leads to the following iterated restricted Gibbs sampler:

$$(\pi_1, \dots, \pi_K, \tilde{\pi}_{K+1}) \sim \mathrm{Dir}(N_1, \dots, N_K, \alpha), \tag{7}$$

$$\theta_k \mathrel{\dot\sim} f_x(x_{\{k\}}; \theta_k)\, f_\theta(\theta_k; \lambda), \quad \forall k \in \{1, \dots, K\}, \tag{8}$$

$$z_i \mathrel{\dot\sim} \sum_{k=1}^{K} \pi_k\, f_x(x_i; \theta_k)\, \mathbb{1}[z_i = k], \quad \forall i \in \{1, \dots, N\}. \tag{9}$$
We note that each of these steps can be parallelized and, because the mixture parameters are explicitly represented, this procedure works for conjugate and non-conjugate priors. When non-conjugate
priors are used, any proposal that leaves the stationary distribution invariant can be used (c.f. [23]).
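The following sketch runs one restricted sweep (Equations 7-9) for a 1D Gaussian DPMM. The known within-cluster standard deviation, the Normal prior on means, and the assumption that all K clusters are currently occupied are simplifications of ours for illustration. The label step on the last lines is independent across data points, which is what makes the sweep inter-cluster parallelizable.

```python
import numpy as np

def restricted_gibbs_sweep(x, z, K, alpha, mu0, s0, s,
                           rng=np.random.default_rng(0)):
    """One restricted sweep (Eqs. 7-9) for a 1D Gaussian DPMM with known
    within-cluster std s and a N(mu0, s0^2) prior on cluster means.
    No new clusters are created here; splits are proposed separately.
    Assumes every cluster 0..K-1 is currently non-empty."""
    N = np.array([(z == k).sum() for k in range(K)])
    # Eq. 7: weights of the K occupied clusters (leftover mass discarded)
    pi = rng.dirichlet(np.concatenate([N, [alpha]]))[:K]
    # Eq. 8: conjugate posterior draw of each cluster mean
    mu = np.empty(K)
    for k in range(K):
        xk = x[z == k]
        prec = 1.0 / s0**2 + len(xk) / s**2
        mean = (mu0 / s0**2 + xk.sum() / s**2) / prec
        mu[k] = rng.normal(mean, prec ** -0.5)
    # Eq. 9: label step; rows are independent, hence parallelizable
    logp = np.log(pi)[None, :] - 0.5 * ((x[:, None] - mu[None, :]) / s) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])
    return z, pi, mu
```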
Similar to previous super-cluster methods, we can also restrict each cluster to only consider moving to a subset of other clusters. The super-clusters of [19] and [31] are formed using a size-biased sampler. This can lead to slower convergence, since clusters with similar data may not be in the same super-cluster. By observing that any similarly restricted Gibbs sampler satisfies detailed balance, any randomized algorithm that assigns finite probability to any super-cluster grouping can be used. As shown in Figure 1b, we augment the sample space with super-cluster groups, $g$, that group similar clusters together. Conditioned on $g$, Equation 9 is altered to only consider labels within the super-cluster that the data point currently belongs to. The super-cluster sampling procedure is described in Algorithm 1. Here, $D$ denotes an arbitrary distance measure between probability distributions. In our experiments, we use the symmetric version of the KL-divergence (the J-divergence). When the J-divergence is difficult to calculate, any distance measure can be substituted; for example, in the case of multinomial distributions, we use the J-divergence for the categorical distribution as a proxy. An illustration of the implied super-cluster grouping from the algorithm is shown in Figure 1c, and a visualization of an actual super-cluster grouping is shown in Figure 2. Notice that the super-cluster groupings using [19] are essentially random, while our super-clusters are grouped by similar data.
Algorithm 1 Sampling Super-Clusters with Similar Clusters
1. Form the adjacency matrix, $A$, where $A_{k,m} = \exp[-D(f_x(\cdot; \theta_k), f_x(\cdot; \theta_m))]$.
2. For each cluster $k$, sample a random neighbor $k'$ according to $k' \mathrel{\dot\sim} \sum_m A_{k,m} \mathbb{1}[k' = m]$.
3. Form the super-cluster groups, $g$, by finding the connected components of the resulting graph.
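The sketch below implements Algorithm 1 above for Gaussian clusters, using the closed-form J-divergence between 1D Gaussians; the Gaussian family and the use of SciPy's connected-components routine are our own choices for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def j_divergence_gauss(m1, v1, m2, v2):
    """Symmetric KL (J-divergence) between two 1D Gaussians N(m, v)."""
    kl12 = 0.5 * np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / (2 * v2) - 0.5
    kl21 = 0.5 * np.log(v1 / v2) + (v2 + (m1 - m2) ** 2) / (2 * v1) - 0.5
    return kl12 + kl21

def sample_super_clusters(means, variances, rng=np.random.default_rng(0)):
    """Algorithm 1: group clusters with similar data into super-clusters."""
    K = len(means)
    A = np.empty((K, K))
    for k in range(K):           # step 1: adjacency matrix
        for m in range(K):
            A[k, m] = np.exp(-j_divergence_gauss(means[k], variances[k],
                                                 means[m], variances[m]))
    edges = np.zeros((K, K), dtype=int)
    for k in range(K):           # step 2: each cluster picks one neighbor
        nbr = rng.choice(K, p=A[k] / A[k].sum())
        edges[k, nbr] = edges[nbr, k] = 1
    # step 3: connected components define the super-cluster groups
    _, groups = connected_components(edges, directed=False)
    return groups
```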
Figure 2: (left) A visualization of the proposed algorithm. Each set of uniquely colored ellipses indicates one cluster; solid ellipses indicate regular clusters and dotted ellipses indicate sub-clusters. The color of each data point indicates super-cluster membership. (right) Inferred clusters and super-clusters from [19].
5 Parallel Split/Merge Moves via Sub-Clusters
The preceding section showed that an exact MCMC sampling algorithm can be constructed by alternating between a restricted Gibbs sampler and split moves. While any split proposal (e.g., [5, 13, 14]) can result in an ergodic chain, we now develop efficient split moves that are compatible with conjugate and non-conjugate priors and that can be parallelized. We will augment the space with auxiliary variables, noting that samples of the non-auxiliary variables can be obtained by drawing samples from the joint space and simply discarding any auxiliary values.
5.1 Augmenting the Space with Auxiliary Variables
Since the goal is to design a model that is tailored toward splitting clusters, we augment each regular cluster with two explicit sub-clusters (herein referred to as the 'left' and 'right' sub-clusters). Each data point is then attributed with a sub-cluster label, $\bar{z}_i \in \{\ell, r\}$, indicating whether it comes from the left or right sub-cluster. Additionally, each sub-cluster has an associated pair of weights, $\bar{\pi}_k = \{\bar{\pi}_{k,\ell}, \bar{\pi}_{k,r}\}$, and parameters, $\bar{\theta}_k = \{\bar{\theta}_{k,\ell}, \bar{\theta}_{k,r}\}$. These auxiliary variables are named in a similar fashion to their regular-cluster counterparts because of the similarities between sub-clusters and regular clusters. One naïve choice for the auxiliary parameter distributions is

$$p(\bar{\pi}_k) = \mathrm{Dir}(\bar{\pi}_{k,\ell}, \bar{\pi}_{k,r}; \alpha/2, \alpha/2), \qquad p(\bar{\theta}_k) = f_\theta(\bar{\theta}_{k,\ell}; \lambda)\, f_\theta(\bar{\theta}_{k,r}; \lambda), \tag{10}$$

$$p(\bar{z} \mid \bar{\pi}, \bar{\theta}, x, z) = \prod_k \prod_{\{i; z_i = k\}} \frac{\bar{\pi}_{k,\bar{z}_i}\, f_x(x_i; \bar{\theta}_{k,\bar{z}_i})}{\bar{\pi}_{k,\ell}\, f_x(x_i; \bar{\theta}_{k,\ell}) + \bar{\pi}_{k,r}\, f_x(x_i; \bar{\theta}_{k,r})}. \tag{11}$$
The corresponding graphical model is shown in Figure 3a. It would be advantageous if the form of the posterior for the auxiliary variables matched those of the regular clusters in Equations 7-9. Unfortunately, because the normalization in Equation 11 depends on $\bar{\pi}$ and $\bar{\theta}$, this choice of auxiliary distributions does not result in the posterior distributions for $\bar{\pi}$ and $\bar{\theta}$ that one would expect. We note that this problem only arises in the auxiliary space, where $x$ generates the auxiliary label $\bar{z}$ (in contrast to the regular space, where $z$ generates $x$). Additional details are provided in the supplement. Consequently, we alter the distribution over sub-cluster parameters to be

$$p(\bar{\theta}_k \mid x, z, \bar{\pi}) \propto f_\theta(\bar{\theta}_{k,\ell}; \lambda)\, f_\theta(\bar{\theta}_{k,r}; \lambda) \prod_{\{i; z_i = k\}} \left[ \bar{\pi}_{k,\ell}\, f_x(x_i; \bar{\theta}_{k,\ell}) + \bar{\pi}_{k,r}\, f_x(x_i; \bar{\theta}_{k,r}) \right]. \tag{12}$$
It is easily verified that this choice results in the following conditional posterior distributions:

$$(\bar{\pi}_{k,\ell}, \bar{\pi}_{k,r}) \sim \mathrm{Dir}(N_{k,\ell} + \alpha/2,\ N_{k,r} + \alpha/2), \quad \forall k \in \{1, \dots, K\}, \tag{13}$$

$$\bar{\theta}_{k,s} \mathrel{\dot\sim} f_x(x_{\{k,s\}}; \bar{\theta}_{k,s})\, f_\theta(\bar{\theta}_{k,s}; \lambda), \quad \forall k \in \{1, \dots, K\},\ \forall s \in \{\ell, r\}, \tag{14}$$

$$\bar{z}_i \mathrel{\dot\sim} \sum_{s \in \{\ell, r\}} \bar{\pi}_{z_i,s}\, f_x(x_i; \bar{\theta}_{z_i,s})\, \mathbb{1}[\bar{z}_i = s], \quad \forall i \in \{1, \dots, N\}, \tag{15}$$

which essentially match the distributions for regular-cluster parameters in Equations 7-9. We note that the joint distribution over the augmented space cannot be expressed analytically, as a result of only specifying Equation 12 up to a proportionality constant that depends on $\bar{\pi}$, $x$, and $z$. The corresponding graphical model is shown in Figure 3b.
5.2 Restricted Gibbs Sampling in Augmented Space
Restricted sampling in the augmented space can be performed in a similar fashion as before. One
can draw a sample from the space of K regular clusters by sampling all the regular- and sub-cluster
parameters conditioned on labels and data from Equations 7, 8, 13, and 14. Conditioned on these
parameters, one can sample a regular-cluster label followed by a sub-cluster label for each data point
from Equations 9 and 15. All of these steps can be computed in parallel.
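As a sketch of the augmented-space sweep, the following resamples sub-cluster weights and labels (Equations 13 and 15) for the same 1D Gaussian setting used earlier. Here `mu_bar`, a dict mapping each cluster to its (left, right) sub-cluster means, is assumed to have been refreshed via Equation 14 by analogous conjugate draws.

```python
import numpy as np

def subcluster_sweep(x, z, zbar, mu_bar, alpha, s,
                     rng=np.random.default_rng(0)):
    """Sample sub-cluster weights (Eq. 13) and labels (Eq. 15) per cluster.
    zbar[i] is 0 for the 'left' and 1 for the 'right' sub-cluster;
    mu_bar[k] = (left mean, right mean), assumed drawn via Eq. 14."""
    for k in np.unique(z):
        idx = np.where(z == k)[0]
        n_l = int((zbar[idx] == 0).sum())
        n_r = len(idx) - n_l
        w_l, w_r = rng.dirichlet([n_l + alpha / 2, n_r + alpha / 2])
        logp = np.stack(
            [np.log(w_l) - 0.5 * ((x[idx] - mu_bar[k][0]) / s) ** 2,
             np.log(w_r) - 0.5 * ((x[idx] - mu_bar[k][1]) / s) ** 2], axis=1)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        zbar[idx] = np.array([rng.choice(2, p=row) for row in p])
    return zbar
```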
Figure 3: Graphical models for the augmented DPMMs: (a) the unmatched and (b) the matched augmented sub-cluster models. Auxiliary variables are dotted.
5.3 Metropolis-Hastings Sub-Cluster Split Moves
A pair of inferred sub-clusters contains a likely split of the corresponding regular cluster. We exploit these auxiliary variables to propose likely splits. Similar to previous methods, we use a Metropolis-Hastings (MH) MCMC method [10] for proposed splits. A new set of random variables, $\{\hat{\pi}, \hat{z}, \hat{\theta}, \hat{\bar{\pi}}, \hat{\bar{\theta}}, \hat{\bar{z}}\}$, is proposed via some proposal distribution, $q$, and accepted with probability

$$\min\left[1,\ \frac{p(\hat{\pi}, \hat{z}, \hat{\theta}, x)\, p(\hat{\bar{\pi}}, \hat{\bar{\theta}}, \hat{\bar{z}} \mid x, \hat{z})}{p(\pi, z, \theta, x)\, p(\bar{\pi}, \bar{\theta}, \bar{z} \mid x, z)} \cdot \frac{q(\pi, z, \theta, \bar{\pi}, \bar{\theta}, \bar{z} \mid \hat{\pi}, \hat{z}, \hat{\theta}, \hat{\bar{\pi}}, \hat{\bar{\theta}}, \hat{\bar{z}})}{q(\hat{\pi}, \hat{z}, \hat{\theta}, \hat{\bar{\pi}}, \hat{\bar{\theta}}, \hat{\bar{z}} \mid \pi, z, \theta, \bar{\pi}, \bar{\theta}, \bar{z})}\right] = \min[1, H], \tag{16}$$
where $H$ is the 'Hastings ratio'. Because of the required reverse proposal in the Hastings ratio, we must propose both merges and splits. Unfortunately, because the joint likelihood for the augmented space cannot be analytically expressed, the Hastings ratio for an arbitrary proposal distribution cannot be computed. A very specific proposal distribution, which we now discuss, does result in a tractable Hastings ratio. A split or merge move, denoted by $Q$, is first selected at random. In our examples, all possible splits and merges are considered since the number of clusters is much smaller than the number of data points. When this is not the case, any randomized proposal can be used.
Conditioned on $Q = Q_{\text{split-}c}$, which splits cluster $c$ into $m$ and $n$, or $Q = Q_{\text{merge-}mn}$, which merges clusters $m$ and $n$ into $c$, a new set of variables is sampled as follows. For a split ($Q = Q_{\text{split-}c}$):

$$(\hat{z}_{\{m\}}, \hat{z}_{\{n\}}) = \text{split-}c(z, \bar{z}), \tag{17}$$

$$(\hat{\pi}_m, \hat{\pi}_n) = \pi_c \cdot (u_m, u_n), \quad (u_m, u_n) \sim \mathrm{Dir}(\hat{N}_m, \hat{N}_n), \tag{18}$$

$$(\hat{\theta}_m, \hat{\theta}_n) \sim q(\hat{\theta}_m, \hat{\theta}_n \mid x, \hat{z}, \bar{z}), \tag{19}$$

$$\hat{v}_m, \hat{v}_n \sim p(\hat{v}_m, \hat{v}_n \mid x, \hat{z}). \tag{20}$$

For a merge ($Q = Q_{\text{merge-}mn}$), the corresponding quantities are $\hat{z}_{\{c\}} = \text{merge-}mn(z)$, $\hat{\pi}_c = \hat{\pi}_m + \hat{\pi}_n$, $\hat{\theta}_c \sim q(\hat{\theta}_c \mid x, \hat{z}, \bar{z})$, and $\hat{v}_c \sim p(\hat{v}_c \mid x, \hat{z})$. Here, $\bar{v}_k = \{\bar{\pi}_k, \bar{\theta}_k, \bar{z}_{\{k\}}\}$ denotes the set of auxiliary variables for cluster $k$, the function split-$c(\cdot)$ splits the labels of cluster $c$ based on the sub-cluster labels, and merge-$mn(\cdot)$ merges the labels of clusters $m$ and $n$. The proposal of cluster parameters is written in a general form so that users can specify their own proposal for non-conjugate priors. All other cluster parameters remain the same.
Sampling auxiliary variables from Equation 20 will be discussed shortly. Assuming that this can be performed, we show in the supplement that the resulting Hastings ratio for a split is

$$H_{\text{split-}c} = \frac{\alpha \prod_{k \in \{m,n\}} \Gamma(\hat{N}_k)\, f_\theta(\hat{\theta}_k; \lambda)\, f_x(x_{\{k\}}; \hat{\theta}_k) \,/\, q(\hat{\theta}_k \mid x, \hat{z}, \bar{z})}{\Gamma(N_c)\, f_\theta(\theta_c; \lambda)\, f_x(x_{\{c\}}; \theta_c) \,/\, q(\theta_c \mid x, z, \bar{z})} = \frac{\alpha \prod_{k \in \{m,n\}} \Gamma(\hat{N}_k)\, f_x(x_{\{k\}}; \lambda)}{\Gamma(N_c)\, f_x(x_{\{c\}}; \lambda)}. \tag{21}$$
The first expression can be used for non-conjugate models, and the second expression can be used in
conjugate models where new cluster parameters are sampled directly from the posterior distribution.
We note that these expressions do not have any residual normalization terms and can be computed exactly, even though the joint distribution of the augmented space cannot be expressed analytically.
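For the conjugate case, the second expression in Equation 21 reduces to marginal likelihoods and Gamma functions, as in the sketch below; `log_marglik`, the log marginal likelihood of a set of points under the conjugate prior, is a placeholder to be supplied by the model.

```python
import numpy as np
from scipy.special import gammaln

def log_hastings_split(x_m, x_n, alpha, log_marglik):
    """Log Hastings ratio for a split (Eq. 21, conjugate form).

    x_m, x_n: data in the two proposed sub-clusters.
    log_marglik(xs) -> log f_x(xs; lambda), the marginal likelihood of a
    set of points with cluster parameters integrated out.
    """
    x_c = np.concatenate([x_m, x_n])
    return (np.log(alpha)
            + gammaln(len(x_m)) + gammaln(len(x_n)) - gammaln(len(x_c))
            + log_marglik(x_m) + log_marglik(x_n) - log_marglik(x_c))
```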
Unfortunately, the Hastings ratio for a merge move is slightly more complicated. We discuss these
complications following the explanation of sampling the auxiliary variables in the next section.
5.4 Deferred Metropolis-Hastings Sampling
The preceding section showed that sampling a split according to Equations 17-20 results in an accurate MH framework. However, sampling the auxiliary variables from Equation 20 is not straightforward. This step is equivalent to sampling cluster parameters and labels for a 2-component mixture model, which is known to be difficult. One typically samples from this space using an MCMC procedure; in fact, that is precisely what the restricted Gibbs sampler is doing. We therefore sample from Equation 20 by running a restricted Gibbs sampler for each newly proposed set of sub-clusters until they have burned in. We monitor the data likelihood for cluster $m$, $L_m = f_x(x_{\{m,\ell\}}; \bar{\theta}_{m,\ell}) \cdot f_x(x_{\{m,r\}}; \bar{\theta}_{m,r})$, and declare burn-in once $L_m$ begins to oscillate.
Furthermore, due to the implicit marginalization of auxiliary variables, the restricted Gibbs sampler and split moves that act on clusters that were not recently split do not depend on the proposed auxiliary variables. As such, these proposals can be computed before the auxiliary variables are even proposed. The sampling of auxiliary variables for a recently split cluster is deferred to the restricted Gibbs sampler while the other sampling steps run concurrently. Once a set of proposed sub-clusters has burned in, the corresponding clusters can be proposed to split again.
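The paper does not spell out a specific oscillation test, so the following is a simple heuristic of ours: treat the sub-cluster likelihood trace as burned in once it stops changing monotonically within a recent window.

```python
def burned_in(likelihood_trace, window=5):
    """Heuristic burn-in check: declare burn-in once the data likelihood
    L_m stops moving monotonically and begins to oscillate."""
    recent = likelihood_trace[-window:]
    if len(recent) < window:
        return False
    diffs = [b - a for a, b in zip(recent, recent[1:])]
    return any(d < 0 for d in diffs) and any(d > 0 for d in diffs)
```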
5.5 Merge Moves with Random Splits
The Hastings ratio for a merge depends on the proposed auxiliary variables for the reverse split. Since proposed splits are deterministic conditioned on the sub-cluster labels, the Hastings ratio will be zero if the proposed sub-cluster labels for a merge do not match those of the current clusters. We show in the supplement that as the number of data points grows, the acceptance ratio for a merge move quickly decays. With only 256 data points, the acceptance ratio for a merge proposal over 1,000 trials in a 1D Gaussian mixture model did not exceed $10^{-16}$. We therefore approximate all merges with an automatic rejection. Unfortunately, this can lead to slow convergence in certain situations. Fortunately, there is a very simple sampler that is good at proposing merges: a data-independent, random split proposal generated from the prior, with a corresponding merge move. A split is constructed by sampling a random cluster, $c$, followed by a random partitioning of its data points from a Dirichlet-Multinomial distribution. In general, these data-independent splits will be nonsensical and result in a rejection. However, the corresponding merge moves are accepted with much higher probability than the sub-cluster merges. We refer the interested reader to the supplement for additional details.
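A minimal sketch of the data-independent random split: partition a cluster's members with a single draw from a symmetric Dirichlet over the two sides. The two-sided Dir(α/2, α/2) form mirrors the sub-cluster prior; treating it as the exact proposal of the supplement is an assumption of ours.

```python
import numpy as np

def random_split(idx, alpha, rng=np.random.default_rng(0)):
    """Data-independent split of cluster members idx (Section 5.5).

    idx: np.ndarray of member indices. Draw side proportions from
    Dir(alpha/2, alpha/2), then assign each point to a side
    independently (a Dirichlet-Multinomial partition)."""
    w = rng.dirichlet([alpha / 2, alpha / 2])
    side = rng.random(len(idx)) < w[0]
    return idx[side], idx[~side]
```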
6 Results
In this section, we compare the proposed method against other MCMC sampling algorithms. We consider three versions of the proposed algorithm: sub-clusters without and with super-clusters (SubC and SubC+SupC) and an approximate method that does not wait for the convergence of sub-clusters before splitting (SubC+SupC Approx). We note that while we do not expect this last version to converge to the correct distribution, empirical results show that it is similar in average performance. We compare the proposed methods against four other methods: the finite symmetric Dirichlet approximate model (FSD) with 100 components, a Rao-Blackwellized Gibbs sampler (Gibbs), a Rao-Blackwellized version of the original super-cluster work of [19] (Gibbs+SupC), and the current state-of-the-art split/merge sampler [5] (Gibbs+SAMS). In our implementations, the concentration parameter is not resampled, though one could easily use a slice sampler if desired.
We first compare these algorithms on synthetic Gaussian data with a Normal Inverse-Wishart prior. 100,000 data points are simulated from ten 2D Gaussian clusters. The average log likelihood of multiple sample paths obtained by running the algorithms without parallelization, for different numbers of initial clusters K and concentration parameters α, is shown in the first two columns of Figure 4. In this high-data regime, α should have little effect on the resulting clusters. However, we find that the samplers without split/merge proposals (FSD, Gibbs, Gibbs+SupC) perform very poorly when the initial number of clusters and the concentration parameter are small. We also find that the super-cluster method, Gibbs+SupC, performs even worse than regular Gibbs sampling. This is likely due to super-clusters not being grouped by similar data, since data points not being able to move between different super-clusters can hinder convergence. In contrast, the proposed super-cluster method does not suffer from the same convergence problems, and is comparable to SubC because there is a small number of clusters. Finally, the approximate sub-cluster method shows significant gains when only one initial cluster is used, but performs approximately the same with more initial clusters.
Next, we consider parallelizing the algorithms using 16 cores, shown in the last column of Figure 4. The four inter-cluster parallelizable algorithms, SubC, SubC+SupC, SubC+SupC Approx, and FSD, exhibit an order of magnitude speedup, while the intra-cluster parallelizable algorithm Gibbs+SupC only has minor gains. As expected, parallelization does not aid the convergence of the algorithms, only the speed at which they converge.

Figure 4: Synthetic data results for various initial clusters K, concentration parameters α, and numbers of cores.

Figure 5: Log likelihood vs. computation time for real data. All parallel algorithms use 16 cores.
We now show results on real data. We test a Gaussian model with a Normal Inverse-Wishart prior on the MNIST dataset [16], first running PCA to reduce the 70,000 training and test images to 50 dimensions. Results on the MNIST dataset are shown in Figure 5a. We additionally test the algorithm on multinomial data with a Dirichlet prior on the following datasets: Associated Press [2] (2,246 documents, 10,473-word dictionary), Enron Emails [1] (39,861 documents, 28,102-word dictionary), New York Times articles [1] (300,000 documents, 102,660-word dictionary), and PubMed abstracts [1] (8,200,000 documents, 141,043-word dictionary). Results are shown in Figures 5b-e. In contrast to HDP models, each document is treated as a single draw from a multinomial distribution. We note that on the PubMed dataset, we had to increase the approximation level of FSD to 500 components after observing that SubC inferred approximately 400 clusters. On real data, it is clearly evident that the other algorithms have issues with convergence.
In fact, in the allotted time, no algorithm besides the proposed methods converges to the same log likelihood from the two different initializations on the larger datasets. The presented sub-cluster methods converge faster to a better sample than the other algorithms converge to a worse sample. On the small Associated Press dataset, the proposed methods actually perform slightly worse than the Gibbs methods. Approximately 20 clusters are inferred for this dataset, resulting in approximately 100 observations per cluster. In these small-data regimes, it is important to marginalize over as many variables as possible. We believe that because the Gibbs methods marginalize over the cluster parameters and weights, they achieve better performance compared to the sub-cluster methods and FSD, which explicitly instantiate them. This is not an issue with larger datasets.
7 Conclusion
We have presented a novel sampling algorithm for Dirichlet process mixture models. By alternating between a restricted Gibbs sampler and a split proposal, finite approximations to the DPMM are not needed and efficient inter-cluster parallelization can be achieved. Additionally, the proposed method for constructing splits based on fitting sub-clusters is, to our knowledge, the first parallelizable split algorithm for mixture models. Results on both synthetic and real data demonstrate that the speed of the sampler is orders of magnitude faster than other exact MCMC methods. Publicly available source code used in this work can be downloaded at http://people.csail.mit.edu/jchang7/.
References
[1] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[2] D. M. Blei, T. L. Griffiths, M. I. Jordan, and J. B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In NIPS, 2003.
[3] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1:121-144, 2005.
[4] C. A. Bush and S. N. MacEachern. A semiparametric Bayesian model for randomised block designs. Biometrika, 83:275-285, 1996.
[5] D. B. Dahl. An improved merge-split sampler for conjugate Dirichlet process mixture models. Technical report, University of Wisconsin - Madison Dept. of Statistics, 2003.
[6] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90(430):577-588, 1995.
[7] S. Favaro and Y. W. Teh. MCMC for normalized random measure mixture models. Statistical Science, 2013.
[8] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230, 1973.
[9] P. J. Green and S. Richardson. Modelling heterogeneity with and without the Dirichlet process. Scandinavian Journal of Statistics, pages 355-375, 2001.
[10] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97-109, 1970.
[11] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96:161-173, 2001.
[12] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the Dirichlet process. Canadian Journal of Statistics, 30:269-283, 2002.
[13] S. Jain and R. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13:158-182, 2004.
[14] S. Jain and R. Neal. Splitting and merging components of a nonconjugate Dirichlet process mixture model. Bayesian Analysis, 2(3):445-472, 2007.
[15] K. Kurihara, M. Welling, and Y. W. Teh. Collapsed variational Dirichlet process mixture models. In International Joint Conference on Artificial Intelligence, 2007.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[17] P. Liang, M. I. Jordan, and B. Taskar. A permutation-augmented sampler for DP mixture models. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[18] D. Lin, E. Grimson, and J. W. Fisher III. Construction of dependent Dirichlet processes based on Poisson processes. In NIPS, 2010.
[19] D. Lovell, R. P. Adams, and V. K. Mansinghka. Parallel Markov chain Monte Carlo for Dirichlet process mixtures. In Workshop on Big Learning, NIPS, 2012.
[20] S. N. MacEachern. Estimating normal means with a conjugate style Dirichlet process prior. In Communications in Statistics: Simulation and Computation, 1994.
[21] S. N. MacEachern and P. Müller. Estimating mixture of Dirichlet process models. Journal of Computational and Graphical Statistics, 7(2):223-238, June 1998.
[22] R. Neal. Bayesian mixture modeling. In Proceedings of the 11th International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis, 1992.
[23] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, June 2000.
[24] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169-186, 2008.
[25] J. Pitman. Combinatorial stochastic processes. Technical report, U.C. Berkeley Dept. of Statistics, 2002.
[26] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, pages 639-650, 1994.
[27] E. B. Sudderth. Graphical Models for Visual Object Recognition and Tracking. PhD thesis, Massachusetts Institute of Technology, 2006.
[28] E. B. Sudderth, A. B. Torralba, W. T. Freeman, and A. S. Willsky. Describing visual scenes using transformed Dirichlet processes. In NIPS, 2006.
[29] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[30] M. West, P. Müller, and S. N. MacEachern. Hierarchical priors and mixture models, with application in regression and density estimation. Aspects of Uncertainty, pages 363-386, 1994.
[31] S. A. Williamson, A. Dubey, and E. P. Xing. Parallel Markov chain Monte Carlo for nonparametric mixture models. In ICML, 2013.
[32] E. P. Xing, R. Sharan, and M. I. Jordan. Bayesian haplotype inference via the Dirichlet process. In ICML, 2004.
Lexical and Hierarchical Topic Regression
Viet-An Nguyen
Computer Science
University of Maryland
College Park, MD
vietan@cs.umd.edu
Jordan Boyd-Graber
iSchool & UMIACS
University of Maryland
College Park, MD
jbg@umiacs.umd.edu
Philip Resnik
Linguistics & UMIACS
University of Maryland
College Park, MD
resnik@umd.edu
Abstract
Inspired by a two-level theory from political science that unifies agenda setting and ideological framing, we propose supervised hierarchical latent Dirichlet allocation (SHLDA), which jointly captures documents' multi-level topic structure and their polar response variables. Our model extends the nested Chinese restaurant process to discover tree-structured topic hierarchies and uses both per-topic hierarchical and per-word lexical regression parameters to model response variables. SHLDA improves prediction on political affiliation and sentiment tasks in addition to providing insight into how topics under discussion are framed.
1 Introduction: Agenda Setting and Framing in Hierarchical Models
How do liberal-leaning bloggers talk about immigration in the US? What do conservative politicians
have to say about education? How do Fox News and MSNBC differ in their language about the gun
debate? Such questions concern not only what, but how things are talked about.
In political communication, the question of 'what' falls under the heading of agenda setting theory, which concerns the issues introduced into political discourse (e.g., by the mass media) and their influence over public priorities [1]. The question of 'how' concerns framing: the way the presentation of an issue reflects or encourages a particular perspective or interpretation [2]. For example, the rise of the 'innocence frame' in the death penalty debate, emphasizing the irreversible consequence of
mistaken convictions, has led to a sharp decline in the use of capital punishment in the US [3].
In its concern with the subjects or issues under discussion in political discourse, agenda setting
maps neatly to topic modeling [4] as a means of discovering and characterizing those issues [5].
Interestingly, one line of communication theory seeks to unify agenda setting and framing by viewing
frames as a second-level kind of agenda [1]: just as agenda setting is about which objects of
discussion are salient, framing is about the salience of attributes of those objects. The key is that
what communications theorists consider an attribute in a discussion can itself be an object, as well.
For example, 'mistaken convictions' is one attribute of the death penalty discussion, but it can also
be viewed as an object of discussion in its own right.
This two-level view leads naturally to the idea of using a hierarchical topic model to formalize
both agendas and frames within a uniform setting. In this paper, we introduce a new model to do
exactly that. The model is predictive: it represents the idea of alternative or competing perspectives
via a continuous-valued response variable. Although inspired by the study of political discourse,
associating texts with 'perspectives' is more general and has been studied in sentiment analysis, discovery of regional variation, and value-sensitive design. We show experimentally that the model's
hierarchical structure improves prediction of perspective in both a political domain and on sentiment
analysis tasks, and we argue that the topic hierarchies exposed by the model are indeed capturing
structure in line with the theory that motivated the work.
1. For each node k ∈ [1, ∞) in the tree
   (a) Draw topic φ_k ∼ Dir(β_k)
   (b) Draw regression parameter η_k ∼ N(μ, σ)
2. For each word type v ∈ [1, V], draw τ_v ∼ Laplace(0, ω)
3. For each document d ∈ [1, D]
   (a) Draw level distribution θ_d ∼ GEM(m, π)
   (b) Draw table distribution ψ_d ∼ GEM(α)
   (c) For each table t ∈ [1, ∞), draw a path c_{d,t} ∼ nCRP(γ)
   (d) For each sentence s ∈ [1, S_d], draw a table indicator t_{d,s} ∼ Mult(ψ_d)
       i. For each token n ∈ [1, N_{d,s}]
          A. Draw level z_{d,s,n} ∼ Mult(θ_d)
          B. Draw word w_{d,s,n} ∼ Mult(φ_{c_{d,t_{d,s}}, z_{d,s,n}})
   (e) Draw response y_d ∼ N(η^T z̄_d + τ^T w̄_d, ρ), where
       i. z̄_{d,k} = (1/N_{d,·}) Σ_{s=1}^{S_d} Σ_{n=1}^{N_{d,s}} 1[k_{d,s,n} = k]
       ii. w̄_{d,v} = (1/N_{d,·}) Σ_{s=1}^{S_d} Σ_{n=1}^{N_{d,s}} 1[w_{d,s,n} = v]

Figure 1: SHLDA's generative process and plate diagram (the plate diagram is omitted here). Words w are explained by the topic hierarchy φ, and response variables y are explained by per-topic regression coefficients η and global lexical coefficients τ. A sketch of the response draw in step 3(e) appears below.
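To make step 3(e) concrete, the sketch below draws the response y_d from a document's node and word counts; the dense count-vector representation and the treatment of ρ as a variance are our own simplifications for illustration.

```python
import numpy as np

def response_draw(node_counts, word_counts, eta, tau, rho,
                  rng=np.random.default_rng(0)):
    """Draw y_d ~ N(eta^T zbar_d + tau^T wbar_d, rho) (Figure 1, step 3e).

    node_counts[k] and word_counts[v] count the document's tokens
    assigned to tree node k and word type v; both are normalized by the
    token total. rho is treated as a variance here (an assumption)."""
    n_tokens = node_counts.sum()
    zbar = node_counts / n_tokens
    wbar = word_counts / n_tokens
    return rng.normal(eta @ zbar + tau @ wbar, np.sqrt(rho))
```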
2 SHLDA: Combining Supervision and Hierarchical Topic Structure
Jointly capturing supervision and hierarchical topic structure falls under a class of models called supervised hierarchical latent Dirichlet allocation. These models take as input a set of D documents, each of which is associated with a response variable y_d, and output a hierarchy of topics which is informed by y_d. Zhang et al. [6] introduce the SHLDA family, focusing on a categorical response. In contrast, our novel model (which we call SHLDA for brevity) uses continuous responses. At its core, SHLDA's document generative process resembles a combination of hierarchical latent Dirichlet allocation [7, HLDA] and the hierarchical Dirichlet process [8, HDP]. HLDA uses the nested Chinese restaurant process (nCRP(γ)), combined with an appropriate base distribution, to induce an unbounded tree-structured hierarchy of topics: general topics at the top, specific at the bottom. A document is generated by traversing this tree, at each level creating a new child (hence a new path) with probability proportional to γ or otherwise respecting the 'rich-get-richer' property of a CRP. A drawback of HLDA, however, is that each document is restricted to only a single path in the tree. Recent work relaxes this restriction through different priors: nested HDP [9], nested Chinese franchises [10], or recursive CRPs [11]. In this paper, we address this problem by allowing documents to have multiple paths through the tree by leveraging information at the sentence level, using the two-level structure used in HDP. More specifically, in the HDP's Chinese restaurant franchise metaphor, customers (i.e., tokens) are grouped by sitting at tables, and each table takes a dish (i.e., topic) from a flat global menu. In our SHLDA, dishes are organized in a tree-structured global menu by using the nCRP as a prior. Each path in the tree is a collection of L dishes (one for each level) and is called a combo. SHLDA groups the sentences of a document by assigning them to tables and associates each table with a combo, and thus models each document as a distribution over combos.¹
In SHLDA's metaphor, customers come into a restaurant and sit at tables in groups, where each group is a sentence. A sentence w_{d,s} enters restaurant d and selects a table t (and its associated combo) with probability proportional to the number of sentences S_{d,t} at that table; or, it sits at a new table with probability proportional to α. After choosing the table (indexed by t_{d,s}), if the table is new, the group will select a combo of dishes (i.e., a path, indexed by c_{d,t}) from the tree menu. Once a combo is in place, each token in the sentence chooses a 'level' (indexed by z_{d,s,n}) in the combo, which specifies the topic (φ_{k_{d,s,n}} ≡ φ_{c_{d,t_{d,s}}, z_{d,s,n}}) producing the associated observation (Figure 2).

SHLDA also draws on supervised LDA [12, SLDA], associating each document d with an observable continuous response variable y_d that represents the author's perspective toward a topic, e.g., positive vs. negative sentiment, conservative vs. liberal ideology, etc. This lets us infer a multi-level topic structure informed by how topics are 'framed' with respect to positions along the y_d continuum.

¹We emphasize that, unlike in HDP where each table is assigned to a single dish, each table in our metaphor is associated with a combo, i.e., a collection of L dishes. We also use combo and path interchangeably.
Figure 2: SHLDA's restaurant franchise metaphor.

Table 1: Notation used in this paper

S_d           # sentences in document d
S_{d,t}       # groups (i.e., sentences) sitting at table t in restaurant d
N_{d,s}       # tokens in w_{d,s}
N_{d,·,l}     # tokens in w_d assigned to level l
N_{d,·,>l}    # tokens in w_d assigned to levels greater than l
N_{d,·,≥l}    ≡ N_{d,·,l} + N_{d,·,>l}
M_{c,l}       # tables at level l on path c
C_{c,l,v}     # tokens of word type v assigned to level l on path c
C_{d,x,l,v}   # tokens of word type v in v_{d,x} assigned to level l
φ_k           topic at node k
η_k           regression parameter at node k
τ_v           regression parameter of word type v
c_{d,t}       path assignment for table t in restaurant d
t_{d,s}       table assignment for group w_{d,s}
z_{d,s,n}     level assignment for w_{d,s,n}
k_{d,s,n}     node assignment for w_{d,s,n} (i.e., the node at level z_{d,s,n} on path c_{d,t_{d,s}})
L             height of the tree
C⁺            set of all possible paths (including new ones) of the tree
Unlike SLDA, we model the response variables using a normal linear regression that contains both per-topic hierarchical and per-word lexical regression parameters. The hierarchical regression parameters are just like topics' regression parameters in SLDA: each topic k (here, a tree node) has a parameter η_k, and the model uses the empirical distribution over the nodes that generated a document as the regressors. However, the hierarchy in SHLDA makes it possible to discover relationships between topics and the response variable that SLDA's simple latent space obscures. Consider, for example, a topic model trained on Congressional debates. Vanilla LDA would likely discover a healthcare category. SLDA [12] could discover a pro-Obamacare topic and an anti-Obamacare topic. SHLDA could do that and capture the fact that there are alternative perspectives, i.e., that the healthcare issue is being discussed from two ideological perspectives, along with characterizing how the higher-level topic is discussed by those on both sides of that ideological debate.

Sometimes, of course, words are strongly associated with extremes on the response-variable continuum regardless of the underlying topic structure. Therefore, in addition to hierarchical regression parameters, we include global lexical regression parameters to model the interaction between specific words and response variables. We denote the regression parameter associated with a word type v in the vocabulary as τ_v, and use the normalized frequency of v in the documents as its regressor.

Including both hierarchical and lexical parameters is important. For detecting ideology in the US, 'liberty' is an effective indicator of conservative speakers regardless of context; however, 'cost' is a conservative-leaning indicator in discussions about environmental policy but liberal-leaning in debates about foreign policy. For sentiment, 'wonderful' is globally a positive word; however, 'unexpected' is a positive descriptor of books but a negative one of a car's steering. SHLDA captures these properties in a single model.
3 Posterior Inference and Optimization
Given documents with observed words w = {w_{d,s,n}} and response variables y = {y_d}, the inference task is to find the posterior distribution over: the tree structure, including the topic φ_k and regression parameter η_k for each node k; the combo assignment c_{d,t} for each table t in document d; the table assignment t_{d,s} for each sentence s in document d; and the level assignment z_{d,s,n} for each token w_{d,s,n}. We approximate SHLDA's posterior using stochastic EM, which alternates between a Gibbs sampling E-step and an optimization M-step. More specifically, in the E-step, we integrate out θ, ψ, and φ to construct a Markov chain over (t, c, z) and alternately sample each of them from their conditional distributions. In the M-step, we optimize the regression parameters η and τ using L-BFGS [13]. Before describing each step in detail, let us define the following probabilities; for more thorough derivations, please see the supplement.
• First, define v_{d,x} as a set of tokens (e.g., a token, a sentence, or a set of sentences) in document d. The conditional density of v_{d,x} being assigned to path c, given all other assignments, is

$$f_c^{-d,x}(v_{d,x}) = \prod_{l=1}^{L} \frac{\Gamma\!\left(C^{-d,x}_{c,l,\cdot} + V \beta_l\right)}{\Gamma\!\left(C^{-d,x}_{c,l,\cdot} + C_{d,x,l,\cdot} + V \beta_l\right)} \prod_{v=1}^{V} \frac{\Gamma\!\left(C^{-d,x}_{c,l,v} + C_{d,x,l,v} + \beta_l\right)}{\Gamma\!\left(C^{-d,x}_{c,l,v} + \beta_l\right)}, \tag{1}$$

where the superscript $-d,x$ denotes the same count excluding the assignments of v_{d,x}, and marginal counts are represented with dots. For a new path c^{new}, if a node does not yet exist, $C^{-d,x}_{c^{new},l,v} = 0$ for all word types v.
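In implementation, Equation 1 is evaluated in log space with `gammaln`; the sketch below assumes dense level-by-vocabulary count matrices of our own devising, with `C_path` the path's counts excluding the tokens being moved and `C_new` the counts of those tokens.

```python
import numpy as np
from scipy.special import gammaln

def log_path_predictive(C_path, C_new, beta):
    """Log of Eq. (1): probability of assigning the tokens counted in
    C_new (L x V) to a path whose existing counts, excluding those
    tokens, are C_path (L x V). beta[l] is the symmetric Dirichlet
    hyperparameter at level l."""
    L, V = C_path.shape
    total = 0.0
    for l in range(L):
        total += gammaln(C_path[l].sum() + V * beta[l])
        total -= gammaln(C_path[l].sum() + C_new[l].sum() + V * beta[l])
        total += (gammaln(C_path[l] + C_new[l] + beta[l])
                  - gammaln(C_path[l] + beta[l])).sum()
    return total
```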
• Second, define the conditional density of the response variable y_d of document d, given v_{d,x} being assigned to path c and all other assignments, as

$$g_c^{-d,x}(y_d) = \mathcal{N}\!\left(y_d;\ \frac{1}{N_{d,\cdot}}\left( \sum_{w_{d,s,n} \in \{w_d \setminus v_{d,x}\}} \eta_{c_{d,t_{d,s}}, z_{d,s,n}} + \sum_{l=1}^{L} \eta_{c,l} \cdot C_{d,x,l,\cdot} + \sum_{s=1}^{S_d} \sum_{n=1}^{N_{d,s}} \tau_{w_{d,s,n}} \right),\ \rho\right), \tag{2}$$

where N_{d,·} is the total number of tokens in document d. For a new node at level l on a new path c^{new}, we integrate over all possible values of η_{c^{new},l}.
Sampling t: For each group w_{d,s}, we need to sample a table t_{d,s}. The conditional distribution of a table t given w_{d,s} and the other assignments is proportional to the number of sentences sitting at t times the probability of w_{d,s} and y_d being observed under this assignment:

$$P(t_{d,s} = t \mid \text{rest}) \propto P(t_{d,s} = t \mid t_d^{-s}) \cdot P(w_{d,s}, y_d \mid t_{d,s} = t, w^{-d,s}, t^{-d,s}, z, c, \eta)$$

$$\propto \begin{cases} S^{-d,s}_{d,t} \cdot f^{-d,s}_{c_{d,t}}(w_{d,s}) \cdot g^{-d,s}_{c_{d,t}}(y_d), & \text{for an existing table } t; \\[4pt] \alpha \cdot \sum_{c \in \mathcal{C}^+} P(c_{d,t^{new}} = c \mid c^{-d,s}) \cdot f^{-d,s}_{c}(w_{d,s}) \cdot g^{-d,s}_{c}(y_d), & \text{for a new table } t^{new}. \end{cases} \tag{3}$$

For a new table t^{new}, we need to sum over all possible paths C⁺ of the tree, including new ones. For example, the set C⁺ for the tree shown in Figure 2 consists of four existing paths (ending at one of the four leaf nodes) and three possible new paths (a new leaf off of one of the three internal nodes).
The prior probability of path c is:

  P(c_{d,t^{new}} = c \mid c^{-d,s}) \propto
  \begin{cases}
  \prod_{l=2}^{L} \frac{M^{-d,s}_{c,l}}{M^{-d,s}_{c,l-1} + \gamma_{l-1}}, & \text{for an existing path } c; \\[4pt]
  \frac{\gamma_{l^*}}{M^{-d,s}_{c^{new},l^*} + \gamma_{l^*}} \cdot \prod_{l=2}^{l^*} \frac{M^{-d,s}_{c^{new},l}}{M^{-d,s}_{c^{new},l-1} + \gamma_{l-1}}, & \text{for a new path } c^{new} \text{ which consists of an existing path from the root to a node at level } l^* \text{ and a new node.}
  \end{cases}    (4)
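A hedged sketch of the table update combining Equations 3 and 4 follows: existing tables are scored by their sentence counts and path likelihoods, while the new-table option sums over all candidate paths. All array names are hypothetical.

import numpy as np

def log_sum_exp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def sample_table(S_dt, logf_tab, logg_tab, log_prior_c, logf_c, logg_c, alpha):
    # Sample t_{d,s} per Equation 3. S_dt[t] counts sentences at existing
    # table t; logf_tab/logg_tab score w_{d,s} and y_d under each table's
    # current path; the *_c arrays score every candidate path c in C+
    # (Equation 4 supplies log_prior_c) for a brand-new table.
    log_w = np.log(S_dt) + logf_tab + logg_tab
    log_new = np.log(alpha) + log_sum_exp(log_prior_c + logf_c + logg_c)
    log_w = np.append(log_w, log_new)
    p = np.exp(log_w - log_w.max())
    return np.random.choice(len(p), p=p / p.sum())  # last index means t_new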
Sampling z: After assigning a sentence w_{d,s} to a table, we assign each token w_{d,s,n} to a level to
choose a dish from the combo. The probability of assigning w_{d,s,n} to level l is

  P(z_{d,s,n} = l \mid rest) \propto P(z_{d,s,n} = l \mid z_d^{-s,n}) \cdot P(w_{d,s,n}, y_d \mid z_{d,s,n} = l, w^{-d,s,n}, z^{-d,s,n}, t, c, \eta)    (5)
The first factor captures the probability that a customer in restaurant d is assigned to level l, conditioned on the level assignments of all other customers in restaurant d, and is equal to

  P(z_{d,s,n} = l \mid z_d^{-s,n}) = \frac{m\pi + N^{-d,s,n}_{d,\cdot,l}}{\pi + N^{-d,s,n}_{d,\cdot,\ge l}} \prod_{j=1}^{l-1} \frac{(1-m)\pi + N^{-d,s,n}_{d,\cdot,>j}}{\pi + N^{-d,s,n}_{d,\cdot,\ge j}},
The second factor is the probability of observing w_{d,s,n} and y_d, given that w_{d,s,n} is assigned to level
l: P(w_{d,s,n}, y_d \mid z_{d,s,n} = l, w^{-d,s,n}, z^{-d,s,n}, t, c, \eta) = f^{-d,s,n}_{c_{d,t_{d,s}}}(w_{d,s,n}) \cdot g^{-d,s,n}_{c_{d,t_{d,s}}}(y_d).
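The truncated stick-breaking prior above reduces to simple count ratios; a sketch, with hypothetical count arrays indexed by level (0-based):

import numpy as np

def level_prior(counts_at, counts_gt, counts_ge, m, pi):
    # First factor of Equation 5 for l = 1..L. counts_at[l], counts_gt[l]
    # and counts_ge[l] hold N_{d,.,l}, N_{d,.,>l} and N_{d,.,>=l} for this
    # document, excluding the current token.
    L = len(counts_at)
    p = np.empty(L)
    for l in range(L):
        prob = (m * pi + counts_at[l]) / (pi + counts_ge[l])
        for j in range(l):  # probability of continuing past levels below l
            prob *= ((1 - m) * pi + counts_gt[j]) / (pi + counts_ge[j])
        p[l] = prob
    return p / p.sum()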
Sampling c: After assigning customers to tables and levels, we also sample path assignments for
all tables. This is important since it can change the assignments of all customers sitting at a table,
which leads to a well-mixed Markov chain and faster convergence. The probability of assigning table
t in restaurant d to a path c is

  P(c_{d,t} = c \mid rest) \propto P(c_{d,t} = c \mid c^{-d,t}) \cdot P(w_{d,t}, y_d \mid c_{d,t} = c, w^{-d,t}, c^{-d,t}, t, z, \eta)    (6)

where we slightly abuse the notation by using w_{d,t} \equiv \cup_{\{s \mid t_{d,s} = t\}} w_{d,s} to denote the set of customers
in all the groups sitting at table t in restaurant d. The first factor is the prior probability of a path
given all tables' path assignments c^{-d,t}, excluding table t in restaurant d, and is given in Equation 4.
The second factor in Equation 6 is the probability of observing w_{d,t} and y_d given the new path
assignments, P(w_{d,t}, y_d \mid c_{d,t} = c, w^{-d,t}, c^{-d,t}, t, z, \eta) = f_c^{-d,t}(w_{d,t}) \cdot g_c^{-d,t}(y_d).
Optimizing \eta and \tau: We optimize the regression parameters \eta and \tau via the likelihood,

  \mathcal{L}(\eta, \tau) = -\frac{1}{2\rho} \sum_{d=1}^{D} \big(y_d - \eta^T \bar{z}_d - \tau^T \bar{w}_d\big)^2 - \frac{1}{2\sigma} \sum_{k=1}^{K^+} (\eta_k - \mu)^2 - \frac{1}{\omega} \sum_{v=1}^{V} |\tau_v|,    (7)

where K^+ is the number of nodes in the tree.2 This maximization is performed using L-BFGS [13].
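A sketch of this M-step as it could be fed to an off-the-shelf L-BFGS routine; the smoothed absolute value is one convenient stand-in for the non-differentiable |\tau_v| term, not necessarily the device used in the paper.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, K, zbar, wbar, y, rho, sigma, mu, omega):
    # Negative of Equation 7. zbar (D x K) holds empirical topic
    # proportions, wbar (D x V) word frequencies; params stacks eta and tau.
    eta, tau = params[:K], params[K:]
    resid = y - zbar.dot(eta) - wbar.dot(tau)
    nll = (resid ** 2).sum() / (2 * rho)
    nll += ((eta - mu) ** 2).sum() / (2 * sigma)
    nll += np.sqrt(tau ** 2 + 1e-8).sum() / omega  # smooth surrogate for |tau_v|
    return nll

# usage sketch:
# res = minimize(neg_log_likelihood, x0,
#                args=(K, zbar, wbar, y, rho, sigma, mu, omega),
#                method='L-BFGS-B')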
4 Data: Congress, Products, Films
We conduct our experiments using three datasets: Congressional floor debates, Amazon product
reviews, and movie reviews. For all datasets, we remove stopwords, add bigrams to the vocabulary,
and filter the vocabulary using tf-idf.3 A sketch of the bigram selection appears after the dataset list.
• U.S. Congressional floor debates: We downloaded debates of the 109th US Congress from GovTrack4 and preprocessed them as in Thomas et al. [14]. To remove uninterestingly non-polarized
debates, we ignore bills with less than 20% "Yea" votes or less than 20% "Nay" votes. Each
document d is a turn (a continuous utterance by a single speaker, i.e. speech segment [14]), and
its response variable y_d is the first dimension of the speaker's DW-NOMINATE score [15], which
captures the traditional left-right political distinction.5 After processing, our corpus contains 5,201
turns in the House, 3,060 turns in the Senate, and 5,000 words in the vocabulary.6
• Amazon product reviews: From a set of Amazon reviews of manufactured products such as
computers, MP3 players, GPS devices, etc. [16], we focused on the 50 most frequently reviewed
products. After filtering, this corpus contains 37,191 reviews with a vocabulary of 5,000 words.
We use the rating associated with each review as the response variable y_d.7
• Movie reviews: Our third corpus is a set of 5,006 reviews of movies [17], again using review
ratings as the response variable y_d, although in this corpus the ratings are normalized to the range
from 0 to 1. After preprocessing, the vocabulary contains 5,000 words.
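As referenced above, the bigram selection of footnote 3 can be reproduced with standard collocation tools; a sketch using NLTK, where the thresholds come from the footnote and everything else is illustrative:

from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

def select_bigrams(tokenized_docs, min_count=10, chi2_threshold=5.0):
    # Keep bigrams occurring at least 10 times whose Pearson chi-squared
    # statistic exceeds 5 (significance level 0.025); selected bigrams are
    # then added to the vocabulary as single word types.
    words = [w for doc in tokenized_docs for w in doc]
    finder = BigramCollocationFinder.from_words(words)
    finder.apply_freq_filter(min_count)
    measures = BigramAssocMeasures()
    return [bg for bg, score in finder.score_ngrams(measures.chi_sq)
            if score > chi2_threshold]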
5 Evaluating Prediction
SHLDA's response variable predictions provide a formally rigorous way to assess whether it is an
improvement over prior methods. We evaluate effectiveness in predicting values of the response
variables for unseen documents in the three datasets. For comparison we consider these baselines:
• Multiple linear regression (MLR) models the response variable as a linear function of multiple
features (or regressors). Here, we consider two types of features: topic-based features and lexically-based features. Topic-based MLR, denoted by MLR-LDA, uses the topic distributions learned by
vanilla LDA as features [12], while lexically-based MLR, denoted by MLR-VOC, uses the frequencies
of words in the vocabulary as features. MLR-LDA-VOC uses both features.
• Support vector regression (SVM) is a discriminative method [18] that uses LDA topic distributions
(SVM-LDA), word frequencies (SVM-VOC), and both (SVM-LDA-VOC) as features.8
• Supervised topic model (SLDA): we implemented SLDA using Gibbs sampling. The version of
SLDA we use is slightly different from the original SLDA described in [12], in that we place a
Gaussian prior N(0, 1) over the regression parameters to perform L2-norm regularization.9
For parametric models (LDA and SLDA), which require the number of topics K to be specified beforehand, we use K ∈ {10, 30, 50}.
Footnotes:
2 The superscript + denotes that this number is unbounded and varies during the sampling process.
3 To find bigrams, we begin with bigram candidates that occur at least 10 times in the corpus and use Pearson's \chi^2-test to filter out those that have \chi^2-value less than 5, which corresponds to a significance level of 0.025. We then treat selected bigrams as single word types and add them to the vocabulary.
4 http://www.govtrack.us/data/us/109/
5 Scores were downloaded from http://voteview.com/dwnomin_joint_house_and_senate.htm
6 Data will be available after blind review.
7 The ratings can range from 1 to 5, but skew positive.
8 http://svmlight.joachims.org/
9 This performs better than unregularized SLDA in our experiments.
Table 2: Regression results for Pearson's correlation coefficient (PCC, higher is better (↑)) and mean squared
error (MSE, lower is better (↓)). Results on Amazon product reviews and movie reviews are averaged over 5
folds. Subscripts denote the number of topics for parametric models. For SVM-LDA-VOC and MLR-LDA-VOC,
only best results across K ∈ {10, 30, 50} are reported. Best results are in bold.

                  Floor Debates                        Amazon Reviews    Movie Reviews
                  House-Senate      Senate-House
Models            PCC↑    MSE↓     PCC↑    MSE↓       PCC↑    MSE↓      PCC↑    MSE↓
SVM-LDA10         0.173   0.861    0.080   1.247      0.157   1.241     0.327   0.970
SVM-LDA30         0.172   0.840    0.155   1.183      0.277   1.091     0.365   0.938
SVM-LDA50         0.169   0.832    0.215   1.135      0.245   1.130     0.395   0.906
SVM-VOC           0.336   1.549    0.131   1.467      0.373   0.972     0.584   0.681
SVM-LDA-VOC       0.256   0.784    0.246   1.101      0.371   0.965     0.585   0.678
MLR-LDA10         0.163   0.735    0.068   1.151      0.143   1.034     0.328   0.957
MLR-LDA30         0.160   0.737    0.162   1.125      0.258   1.065     0.367   0.936
MLR-LDA50         0.150   0.741    0.248   1.081      0.234   1.114     0.389   0.914
MLR-VOC           0.322   0.889    0.191   1.124      0.408   0.869     0.568   0.721
MLR-LDA-VOC       0.319   0.873    0.194   1.120      0.410   0.860     0.581   0.702
SLDA10            0.154   0.729    0.090   1.145      0.270   1.113     0.383   0.953
SLDA30            0.174   0.793    0.128   1.188      0.357   1.146     0.433   0.852
SLDA50            0.254   0.897    0.245   1.184      0.241   1.939     0.503   0.772
SHLDA             0.356   0.753    0.303   1.076      0.413   0.891     0.597   0.673
We use symmetric Dirichlet priors in both LDA and SLDA, initialize the Dirichlet hyperparameters to 0.5, and use slice sampling [19] for updating hyperparameters. For
SLDA, the variance of the regression is set to 0.5. For SHLDA, we use trees with maximum depth
of three. We slice sample m, \pi, \beta and \gamma, and fix \mu = 0, \sigma = 0.5, \omega = 0.5 and \rho = 0.5. We found
that the following set of initial hyperparameters works reasonably well for all the datasets in our
experiments: m = 0.5, \pi = 100, \vec{\beta} = (1.0, 0.5, 0.25), \vec{\gamma} = (1, 1), \alpha = 1. We also set the regression
parameter of the root node to zero, which speeds inference (since it is associated with every document)
and because it is reasonable to assume that it would not change the response variable.
To compare the performance of different methods, we compute Pearson's correlation coefficient
(PCC) and mean squared error (MSE) between the true and predicted values of the response variables
and average over 5 folds. For the Congressional debate corpus, following Yu et al. [20], we use
documents in the House to train and test on documents in the Senate and vice versa.
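Both metrics are standard; for completeness, a minimal sketch:

import numpy as np

def evaluate(y_true, y_pred):
    # Pearson correlation coefficient (higher is better) and mean squared
    # error (lower is better), as reported in Table 2.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pcc = np.corrcoef(y_true, y_pred)[0, 1]
    mse = np.mean((y_true - y_pred) ** 2)
    return pcc, mse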
Results and analysis. Table 2 shows the performance of all models on our three datasets. Methods
that only use topic-based features such as SVM-LDA and MLR-LDA do poorly. Methods only based
on lexical features like SVM-VOC and MLR-VOC outperform methods that are based only on topic
features significantly for the two review datasets, but are comparable or worse on congressional
debates. This suggests that reviews have more highly discriminative words than political speeches
(Table 3). Combining topic-based and lexically-based features improves performance, which supports
our choice of incorporating both per-topic and per-word regression parameters in SHLDA.
In all cases, SHLDA achieves strong performance results. For the two cases where SHLDA was
second best in MSE score (Amazon reviews and House-Senate), it outperforms other methods in PCC.
Doing well in PCC for these two datasets is important since achieving low MSE is relatively easier due
to the response variables' bimodal distribution in the floor debates and positively-skewed distribution
in Amazon reviews. For the floor debate dataset, the results of the House-Senate experiment are
generally better than those of the Senate-House experiment, which is consistent with previous
results [20] and is explained by the greater number of debates in the House.
6 Qualitative Analysis: Agendas and Framing/Perspective
Although a formal coherence evaluation [21] remains a goal for future work, a qualitative look at
the topic hierarchy uncovered by the model suggests that it is indeed capturing agenda/framing
structure as discussed in Section 1. In Figure 3, a portion of the topic hierarchy induced from the
Congressional debate corpus, Nodes A and E illustrate agendas, issues introduced into political
discourse, associated with a particular ideology: Node A focuses on the hardships of the poorer
victims of hurricane Katrina and is associated with Democrats, and text associated with Node E
discusses a proposed constitutional amendment to ban flag burning and is associated with Republicans.
Nodes C and D, children of a neutral "tax" topic, reveal how parties frame taxes as gains in terms of
new social services (Democrats) and losses for job creators (Republicans).
[Figure 3 (node word lists omitted): Topics discovered from Congressional floor debates. Many
first-level topics are bipartisan (purple), while lower-level topics are associated with specific
ideologies (Democrats blue, Republicans red). For example, the "tax" topic (B) is bipartisan, but its
Democratic-leaning child (D) focuses on social goals supported by taxes ("children", "education",
"health care"), while its Republican-leaning child (C) focuses on business implications ("death tax",
"jobs", "businesses"). Colors and the numbers beneath each topic show the regression parameter \eta
associated with that topic.]
Figure 4 shows the topic structure discovered by SHLDA in the review corpus. Nodes at higher levels
are relatively neutral, with relatively small regression parameters.10 These nodes have general topics
with no specific polarity. However, the bottom level clearly illustrates polarized positive/negative
perspective. For example, Node A concerns washbasins for infants, and has two polarized children
nodes: reviewers take a positive perspective when their children enjoy the product (Node B: "loves",
"splash", "play") but have negative reactions when it leaks (Node C: "leak(s/ed/ing)").
[Figure 4 (node word lists omitted): Topics discovered from Amazon reviews. Higher topics are general,
while lower topics are more specific. The polarity of the review is encoded in the color: red (negative)
to blue (positive). Many of the first-level topics have no specific polarity and are associated with a
broad class of products such as "routers" (Node D). However, the lowest topics in the hierarchy are
often polarized; one child topic of "router" focuses on upgradable firmware such as "tomato" and
"ddwrt" (Node E, positive) while another focuses on poor "tech support" and "customer service"
(Node F, negative). The number below each topic is the regression parameter learned with that topic.]
In addition to the per-topic regression parameters, SHLDA also associates each word with a lexical
regression parameter \tau. Table 3 shows the top ten words with highest and lowest \tau. The results are
unsurprising, although the lexical regression for the Congressional debates is less clear-cut than other
datasets. As we saw in Section 5, for similar datasets, SHLDA's context-specific regression is more
useful when global lexical weights do not readily differentiate documents.
10 All of the nodes at the second level have slightly negative values for the regression parameters, mainly due
to the very skewed distribution of the review ratings in Amazon.
Dataset          Top 10 words with positive weights             Top 10 words with negative weights
Floor Debates    bringing, private property, illegally,         bush administration, strong opposition,
                 tax relief, regulation, mandates,              ranking, republicans, republican
                 constitutional, committee report,              leadership, secret, discriminate,
                 illegal alien                                  majority, undermine
Amazon Reviews   highly recommend, pleased, love, loves,        waste, returned, return, stopped, leak,
                 perfect, easy, excellent, amazing,             junk, useless, returning, refund,
                 glad, happy                                    terrible
Movie Reviews    hilarious, fast, schindler, excellent,         bad, unfortunately, supposed, waste,
                 motion pictures, academy award, perfect,       mess, worst, acceptable, awful,
                 journey, fortunately, ability                  suppose, boring

Table 3: Top words based on the global lexical regression coefficient, \tau. For the floor debates, positive \tau's are
Republican-leaning while negative \tau's are Democrat-leaning.
7 Related Work
SHLDA joins a family of LDA extensions that introduce hierarchical topics, supervision, or both.
Owing to limited space, we focus here on related work that combines the two. Petinot et al. [22]
propose hierarchical Labeled LDA (hLLDA), which leverages an observed document ontology to learn
topics in a tree structure; however, hLLDA assumes that the underlying tree structure is known a
priori. SSHLDA [23] generalizes hLLDA by allowing the document hierarchy labels to be partially
observed, with unobserved labels and topic tree structure then inferred from the data. Boyd-Graber
and Resnik [24] used hierarchical distributions within topics to learn topics across languages. In
addition to these "upstream" models [25], Perotte et al. [26] propose a "downstream" model called
HSLDA, which jointly models documents' hierarchy of labels and topics. HSLDA's topic structure
is flat, however, and the response variable is a hierarchy of labels associated with each document,
unlike SHLDA's continuous response variable. Finally, another related body of work includes
models that jointly capture topics and other facets such as ideologies/perspectives [27, 28] and
sentiments/opinions [29], albeit with discrete rather than continuously valued responses.
Computational modeling of sentiment polarity is a voluminous field [30], and many computational
political science models describe agendas [5] and ideology [31]. Looking at framing or bias at
the sentence level, Greene and Resnik [32] investigate the role of syntactic structure in framing,
Yano et al. [33] look at lexical indications of sentence-level bias, and Recasens et al. [34] develop
linguistically informed sentence-level features for identifying bias-inducing words.
8 Conclusion
We have introduced SHLDA, a model that associates a continuously valued response variable with
hierarchical topics to capture both the issues under discussion and alternative perspectives on those
issues. The two-level structure improves predictive performance over existing models on multiple
datasets, while also adding potentially insightful hierarchical structure to the topic analysis. Based on
a preliminary qualitative analysis, the topic hierarchy exposed by the model plausibly captures the
idea of agenda setting, which is related to the issues that get discussed, and framing, which is related
to authors' perspectives on those issues. We plan to analyze the topic structure produced by SHLDA
with political science collaborators and more generally to study how SHLDA and related models can
help analyze and discover useful insights from political discourse.
Acknowledgments
This research was supported in part by NSF under grant #1211153 (Resnik) and #1018625 (Boyd-Graber and Resnik). Any opinions, findings, conclusions, or recommendations expressed here are
those of the authors and do not necessarily reflect the view of the sponsor.
References
[1] McCombs, M. The agenda-setting role of the mass media in the shaping of public opinion. North,
2009(05-12):21, 2002.
[2] McCombs, M., S. Ghanem. The convergence of agenda setting and framing. In Framing public life. 2001.
[3] Baumgartner, F. R., S. L. De Boef, A. E. Boydstun. The decline of the death penalty and the discovery of
innocence. Cambridge University Press, 2008.
[4] Blei, D. M., A. Ng, M. Jordan. Latent Dirichlet allocation. JMLR, 3, 2003.
[5] Grimmer, J. A Bayesian hierarchical topic model for political texts: Measuring expressed agendas in
Senate press releases. Political Analysis, 18(1):1-35, 2010.
[6] Zhang, J. Explore objects and categories in unexplored environments based on multimodal data. Ph.D.
thesis, University of Hamburg, 2012.
[7] Blei, D. M., T. L. Griffiths, M. I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric
inference of topic hierarchies. J. ACM, 57(2), 2010.
[8] Teh, Y. W., M. I. Jordan, M. J. Beal, et al. Hierarchical Dirichlet processes. JASA, 101(476), 2006.
[9] Paisley, J. W., C. Wang, D. M. Blei, et al. Nested hierarchical Dirichlet processes. arXiv:1210.6738, 2012.
[10] Ahmed, A., L. Hong, A. Smola. The nested Chinese restaurant franchise process: User tracking and
document modeling. In ICML. 2013.
[11] Kim, J. H., D. Kim, S. Kim, et al. Modeling topic hierarchies with the recursive Chinese restaurant process.
In CIKM, pages 783-792. 2012.
[12] Blei, D. M., J. D. McAuliffe. Supervised topic models. In NIPS. 2007.
[13] Liu, D., J. Nocedal. On the limited memory BFGS method for large scale optimization. Math. Prog., 1989.
[14] Thomas, M., B. Pang, L. Lee. Get out the vote: Determining support or opposition from Congressional
floor-debate transcripts. In EMNLP. 2006.
[15] Lewis, J. B., K. T. Poole. Measuring bias and uncertainty in ideal point estimates via the parametric
bootstrap. Political Analysis, 12(2), 2004.
[16] Jindal, N., B. Liu. Opinion spam and analysis. In WSDM. 2008.
[17] Pang, B., L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to
rating scales. In ACL. 2005.
[18] Joachims, T. Making large-scale SVM learning practical. In Adv. in Kernel Methods - SVM. 1999.
[19] Neal, R. M. Slice sampling. Annals of Statistics, 31:705-767, 2003.
[20] Yu, B., D. Diermeier, S. Kaufmann. Classifying party affiliation from political speech. JITP, 2008.
[21] Chang, J., J. Boyd-Graber, C. Wang, et al. Reading tea leaves: How humans interpret topic models. In
NIPS. 2009.
[22] Petinot, Y., K. McKeown, K. Thadani. A hierarchical model of web summaries. In HLT. 2011.
[23] Mao, X., Z. Ming, T.-S. Chua, et al. SSHLDA: A semi-supervised hierarchical topic model. In EMNLP.
2012.
[24] Boyd-Graber, J., P. Resnik. Holistic sentiment analysis across languages: Multilingual supervised latent
Dirichlet allocation. In EMNLP. 2010.
[25] Mimno, D. M., A. McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial
regression. In UAI. 2008.
[26] Perotte, A. J., F. Wood, N. Elhadad, et al. Hierarchically supervised latent Dirichlet allocation. In NIPS.
2011.
[27] Ahmed, A., E. P. Xing. Staying informed: Supervised and semi-supervised multi-view topical analysis of
ideological perspective. In EMNLP. 2010.
[28] Eisenstein, J., A. Ahmed, E. P. Xing. Sparse additive generative models of text. In ICML. 2011.
[29] Jo, Y., A. H. Oh. Aspect and sentiment unification model for online review analysis. In WSDM. 2011.
[30] Pang, B., L. Lee. Opinion Mining and Sentiment Analysis. Now Publishers Inc, 2008.
[31] Monroe, B. L., M. P. Colaresi, K. M. Quinn. Fightin' words: Lexical feature selection and evaluation for
identifying the content of political conflict. Political Analysis, 16(4):372-403, 2008.
[32] Greene, S., P. Resnik. More than words: Syntactic packaging and implicit sentiment. In NAACL. 2009.
[33] Yano, T., P. Resnik, N. A. Smith. Shedding (a thousand points of) light on biased language. In NAACL-HLT
Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk. 2010.
[34] Recasens, M., C. Danescu-Niculescu-Mizil, D. Jurafsky. Linguistic models for analyzing and detecting
biased language. In ACL. 2013.
A Novel Two-Step Method for Cross Language
Representation Learning
Min Xiao and Yuhong Guo
Department of Computer and Information Sciences
Temple University, Philadelphia, PA 19122, USA
{minxiao, yuhong}@temple.edu
Abstract
Cross language text classification is an important learning task in natural language
processing. A critical challenge of cross language learning arises from the fact that
words of different languages are in disjoint feature spaces. In this paper, we propose a two-step representation learning method to bridge the feature spaces of different languages by exploiting a set of parallel bilingual documents. Specifically,
we first formulate a matrix completion problem to produce a complete parallel
document-term matrix for all documents in two languages, and then induce a low
dimensional cross-lingual document representation by applying latent semantic
indexing on the obtained matrix. We use a projected gradient descent algorithm
to solve the formulated matrix completion problem with convergence guarantees.
The proposed method is evaluated by conducting a set of experiments with cross
language sentiment classification tasks on Amazon product reviews. The experimental results demonstrate that the proposed learning method outperforms a number of other cross language representation learning methods, especially when the
number of parallel bilingual documents is small.
1 Introduction
Cross language text classification is an important natural language processing task that exploits a
large amount of labeled documents in an auxiliary source language to train a classification model for
classifying documents in a target language where labeled data is scarce. An effective cross language
learning system can greatly reduce the manual annotation effort in the target language for learning
good classification models. Previous work in the literature has demonstrated successful performance
of cross language learning systems on various cross language text classification problems, including
multilingual document categorization [2], cross language fine-grained genre classification [14], and
cross-lingual sentiment classification [18, 16].
The challenge of cross language text classification lies in the language barrier. That is, documents
in different languages are expressed with different word vocabularies and thus have disjoint feature
spaces. A variety of methods have been proposed in the literature to address cross language text
classification by bridging the cross language gap, including transforming the training or test data
from one language domain into the other language domain by using machine translation tools or
bilingual lexicons [18, 6, 23], and constructing cross-lingual representations by using readily available auxiliary resources such as bilingual word pairs [16], comparable corpora [10, 20, 15], and
other multilingual resources [3, 14].
In this paper, we propose a two-step learning method to induce cross-lingual feature representations for cross language text classification by exploiting a set of unlabeled parallel bilingual documents. First we construct a concatenated bilingual document-term matrix where each document is
represented in the concatenated vocabulary of two languages. In such a matrix, a pair of parallel
documents are represented as a row vector filled with observed word features from both the source
language domain and the target language domain, while a non-parallel document in a single language is represented as a row vector filled with observed word features only from its own language
and has missing values for the word features from the other language. We then learn the unobserved
feature entries of this sparse matrix by formulating a matrix completion problem and solving it using a projected gradient descent optimization algorithm. By doing so, we expect to automatically
capture important and robust low-rank information based on the word co-occurrence patterns expressed both within each language and across languages. Next we perform latent semantic indexing
over the recovered document-term matrix and induce a low-dimensional dense cross-lingual representation of the documents, on which standard monolingual classifiers can be applied. To evaluate
the effectiveness of the proposed learning method, we conduct a set of experiments with cross language sentiment classification tasks on multilingual Amazon product reviews. The empirical results
show that the proposed method significantly outperforms a number of cross language learning methods. Moreover, the proposed method produces good performance even with a very small number of
unlabeled parallel bilingual documents.
2 Related Work
Many works in the literature address cross language text classification by first translating documents
from one language domain into the other one via machine translation tools or bilingual lexicons
and then applying standard monolingual classification algorithms [18, 23], domain adaptation techniques [17, 9, 21], or multi-view learning methods [22, 2, 1, 13, 12]. For example, [17] proposed
an expectation-maximization based self-training method, which first initializes a monolingual classifier in the target language with the translated labeled documents from the source language and
then retrains the model by adding unlabeled documents from the target language with automatically
predicted labels. [21] proposed an instance and feature bi-weighting method by first translating
documents from one language domain to the other one and then simultaneously re-weighting instances and features to address the distribution difference across domains. [22] proposed to use
the co-training method for cross language sentiment classification on parallel corpora. [2] proposed a multi-view majority voting method to categorize documents in multiple views produced
from machine translation tools. [1] proposed a multi-view co-classification method for multilingual
document categorization, which minimizes both the training loss for each view and the prediction
disagreement between different language views. Our proposed approach in this paper shares similarity with these approaches in exploiting parallel data produced by machine translation tools. But our
approach only requires a small set of unlabeled parallel documents, while these approaches require
at least translating all the training documents in one language domain.
Another important group of cross language text classification methods in the literature construct cross-lingual representations by exploiting bilingual word pairs [16, 7], parallel corpora
[10, 20, 15, 19, 8], and other resources [3, 14]. [16] proposed a cross-language structural correspondence learning method to induce language-independent features by using pivot word pairs
produced by word translation oracles. [10] proposed a cross-language latent semantic indexing
(CL-LSI) method to induce cross-lingual representations by performing LSI over a dual-language
document-term matrix, where each dual-language document contains its original words and the
corresponding translation text. [20] proposed a cross-lingual kernel canonical correlation analysis
(CL-KCCA) method. It first learns two projections (one for each language) by conducting kernel
canonical correlation analysis over a paired bilingual corpus and then uses them to project documents from language-specific feature spaces to the shared multilingual semantic feature space.
[15] employed cross-lingual oriented principal component analysis (CL-OPCA) over concatenated
parallel documents to learn a multilingual projection by simultaneously minimizing the projected
distance between parallel documents and maximizing the projected covariance of documents across
languages. Some other work uses multilingual topic models such as the coupled probabilistic latent
semantic analysis and the bilingual latent Dirichlet allocation to extract latent cross-lingual topics
as interlingual representations [19]. [14] proposed to use language-specific part-of-speech (POS)
taggers to tag each word and then map those language-specific POS tags to twelve universal POS
tags as interlingual features for cross language fine-grained genre classification. Similar to the multilingual semantic representation learning approaches such as CL-LSI, CL-KCCA and CL-OPCA,
our two-step learning method exploits parallel documents. But different from these methods which
apply operations such as LSI, KCCA, and OPCA directly on the original concatenated document-term matrix, our method first fills the missing entries of the document-term matrix using matrix
completion, and then performs LSI over the recovered low-rank matrix.
3 Approach
In this section, we present the proposed two-step learning method for learning cross-lingual document representations. We assume a subset of unlabeled parallel documents from the two languages
are given, which can be used to capture the co-occurrence of terms across languages and build connections between the vocabulary sets of the two languages. We first construct a unified document-term matrix for all documents from the auxiliary source language domain and the target language
domain, whose columns correspond to the word features from the unified vocabulary set of the two
languages. In this matrix, each pair of parallel documents is represented as a fully observed row
vector, and each non-parallel document is represented as a partially observed row vector where only
entries corresponding to words in its own language vocabulary are observed. Instead of learning a
low-dimensional cross-lingual document representation from this matrix directly, we perform a twostep learning procedure: First we learn a low-rank document-term matrix by automatically filling the
missing entries via matrix completion. Next we produce cross-lingual representations by applying
the latent semantic indexing method over the learned matrix.
Let M^0 \in R^{t \times d} be the unified document-term matrix, which is partially filled with observed nonnegative feature values, where t is the number of documents and d is the size of the unified vocabulary.
We use \Omega to denote the index set of the observed features in M^0, such that (i,j) \in \Omega if and only if
M^0_{ij} is observed; and use \hat{\Omega} to denote the index set of the missing features in M^0, such that
(i,j) \in \hat{\Omega} if and only if M^0_{ij} is unobserved. For the i-th document in the data set from one
language, if the document does not have a parallel translation in the other language, then all the
features in row M^0_{i:} corresponding to the words in the vocabulary of the other language are viewed
as missing features.
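A minimal sketch of this construction, with dense arrays for clarity (real document-term matrices would be sparse); the input names are hypothetical:

import numpy as np

def build_unified_matrix(X_src_only, X_tgt_only, X_par_src, X_par_tgt):
    # Build M0 (t x d) over the concatenated vocabulary together with the
    # indicator Y of observed entries (Y_ij = 1 iff (i,j) in Omega). Each
    # parallel pair fills one fully observed row; each monolingual document
    # leaves the other language's vocabulary block missing.
    n_s, d_s = X_src_only.shape
    n_t, d_t = X_tgt_only.shape
    rows_par = np.hstack([X_par_src, X_par_tgt])
    rows_src = np.hstack([X_src_only, np.zeros((n_s, d_t))])
    rows_tgt = np.hstack([np.zeros((n_t, d_s)), X_tgt_only])
    M0 = np.vstack([rows_par, rows_src, rows_tgt])
    Y = np.vstack([np.ones_like(rows_par),
                   np.hstack([np.ones((n_s, d_s)), np.zeros((n_s, d_t))]),
                   np.hstack([np.zeros((n_t, d_s)), np.ones((n_t, d_t))])])
    return M0, Y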
3.1 Matrix Completion
Note that the document-term matrix M^0 has a large fraction of missing features and the only bridge
between the vocabulary sets of the two languages is the small set of parallel bilingual documents.
Learning from this partially observed matrix directly by treating missing features as zeros certainly
will lose a lot of information. On the other hand, a fully observed document-term matrix is naturally
low-rank and sparse, as the vocabulary set is typically very large and each document only contains
a small fraction of the words in the vocabulary. Thus we propose to automatically fill the missing
entries of M^0 based on the feature co-occurrence information expressed in the observed data, by
conducting matrix completion to recover a low-rank and sparse matrix. Specifically, we formulate
the matrix completion as the following optimization problem

  \min_M \ rank(M) + \lambda \|M\|_1
  subject to M_{ij} = M^0_{ij}, \forall (i,j) \in \Omega; \quad M_{ij} \ge 0, \forall (i,j) \in \hat{\Omega}    (1)
where \|\cdot\|_1 denotes the \ell_1 norm and is used to enforce sparsity. The rank function however is non-convex and difficult to optimize. We can relax it to its convex envelope, the convex trace norm \|M\|_*.
Moreover, instead of using the equality constraints in (1), we propose to minimize a regularization
loss function, c(M_{ij}, M^0_{ij}), to cope with observation noise for all the observed feature entries.
Meanwhile, we also add regularization terms over the missing features, c(M_{ij}, 0), \forall (i,j) \in \hat{\Omega}, to
avoid overfitting. In particular, we use the least squared loss function c(x, y) = \frac{1}{2}\|x - y\|^2. Hence
we obtain the following relaxed convex optimization problem for matrix completion

  \min_M \ \mu\|M\|_* + \lambda\|M\|_1 + \sum_{(i,j) \in \Omega} c(M_{ij}, M^0_{ij}) + \rho \sum_{(i,j) \in \hat{\Omega}} c(M_{ij}, 0) \quad subject to M \ge 0    (2)

With nonnegativity constraints M \ge 0, the non-smooth \ell_1 norm regularizer in the objective function
of (2) is equivalent to a smooth linear function \|M\|_1 = \sum_{ij} M_{ij}. Nevertheless, with the non-smooth trace norm \|M\|_*, the optimization problem (2) remains convex but non-smooth.
Moreover, the matrix M in cross-language learning tasks is typically very large, and thus a scalable
optimization algorithm needs to be developed to conduct efficient optimization. In the next section, we
will present a scalable projected gradient descent algorithm to solve this minimization problem.
Algorithm 1 Projected Gradient Descent Algorithm
Input: M^0, \mu, \rho \le 1, 0 < \tau < \min(2, 2/\rho), \lambda.
Initialize M as the nonnegative projection of the rank-1 approximation of M^0.
while not converged do
  1. gradient descent: M = M - \tau \nabla g(M).
  2. shrink: M = S_{\tau\mu}(M).
  3. project onto feasible set: M = \max(M, 0).
end while
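A compact Python rendering of Algorithm 1 follows, using a dense SVD for readability; at the scale of real document-term matrices a truncated or randomized SVD would be substituted, and the stopping rule here is one reasonable choice rather than one prescribed by the paper.

import numpy as np

def matrix_completion(M0, Y, mu, lam, rho, tau=1.0, tol=1e-6, max_iter=500):
    # Projected gradient descent for problem (2). Y indicates observed
    # entries; requires 0 < tau < min(2, 2/rho).
    U, s, Vt = np.linalg.svd(M0, full_matrices=False)
    M = np.maximum(s[0] * np.outer(U[:, 0], Vt[0]), 0)  # rank-1 init, projected
    Yhat = 1.0 - Y
    for _ in range(max_iter):
        grad = lam + (M - M0) * Y + rho * M * Yhat       # Equation (4)
        M_new = M - tau * grad                           # 1. gradient descent
        U, s, Vt = np.linalg.svd(M_new, full_matrices=False)
        M_new = (U * np.maximum(s - tau * mu, 0)) @ Vt   # 2. shrink S_{tau mu}
        M_new = np.maximum(M_new, 0)                     # 3. project onto M >= 0
        if np.linalg.norm(M_new - M, 'fro') <= tol * max(1.0, np.linalg.norm(M, 'fro')):
            return M_new
        M = M_new
    return M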
3.2 Latent Semantic Indexing
After solving (2) for an optimal low-rank solution M^*, we can use each row of the sparse matrix
M^* as a vector representation for each document in the concatenated vocabulary space of the two
languages. However, exploiting such a matrix representation directly for cross language text classification lacks sufficient capacity for handling feature noise and sparseness, as each document is
represented using a very small set of words in the vocabulary set. We thus propose to apply a latent
semantic indexing (LSI) method on M^* to produce a low-dimensional semantic representation of
the data. LSI uses singular value decomposition to discover the important associative relationships
of word features [10], and create a reduced-dimension feature space. Specifically, we first perform
singular value decomposition over M^*, M^* = USV^T, and then obtain a low dimensional representation matrix Z via a projection Z = M^* V_k, where V_k contains the top k right singular vectors of
M^*. Cross-language text classification can then be conducted over Z using monolingual classifiers.
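The second step then amounts to a truncated SVD projection; a sketch:

import numpy as np

def lsi_projection(M_star, k):
    # Z = M* V_k, where V_k holds the top-k right singular vectors of the
    # completed matrix M*; rows of Z are the cross-lingual representations.
    U, s, Vt = np.linalg.svd(M_star, full_matrices=False)
    return M_star @ Vt[:k].T

Equivalently, Z = U_k S_k, so scikit-learn's TruncatedSVD yields an equivalent projection (up to sign) without forming the full decomposition.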
4 Optimization Algorithm
4.1 Projected Gradient Descent Algorithm
A number of algorithms have been developed to solve matrix completion problems in the literature [4, 11]. We use a projected gradient descent algorithm to solve the non-smooth convex optimization problem in (2). This algorithm takes the objective function f(M) in (2) as a composition
of a non-smooth term and a convex smooth term, f(M) = \mu\|M\|_* + g(M), where

  g(M) = \lambda\|M\|_1 + \sum_{(i,j) \in \Omega} c(M_{ij}, M^0_{ij}) + \rho \sum_{(i,j) \in \hat{\Omega}} c(M_{ij}, 0).    (3)
It first initializes M as the nonnegative projection of the rank-1 approximation of M^0, and then
iteratively updates M using a projected gradient descent procedure. In each iteration, we perform
three steps to update M. First, we take a gradient descent step M = M - \tau \nabla g(M) with stepsize \tau
and gradient function

  \nabla g(M) = \lambda E + (M - M^0) \circ Y + \rho M \circ \hat{Y}    (4)

where E is a t \times d matrix with all 1s; Y and \hat{Y} are t \times d indicator matrices such that Y_{ij} = 1 if
and only if (i,j) \in \Omega, and \hat{Y} = E - Y; and "\circ" denotes the Hadamard product. Next we perform a
shrinkage operation M = S_\epsilon(M) over the resulting matrix from the first step to minimize its rank.
The shrinkage operator is based on singular value decomposition,

  S_\epsilon(M) = U \Sigma^{(\epsilon)} V^T, \quad M = U \Sigma V^T, \quad \Sigma^{(\epsilon)} = \max(\Sigma - \epsilon, 0),    (5)

where \epsilon = \tau\mu. Finally we project the resulting matrix into the nonnegative feasible set by M =
\max(M, 0). This update procedure provably converges to an optimal solution. The overall algorithm
is given in Algorithm 1.
4.2 Convergence Analysis
Let h(\cdot) = I(\cdot) - \tau \nabla g(\cdot) be the gradient descent operator used in the gradient descent step, and
let P_C(\cdot) = \max(\cdot, 0) be the projection operator, while S_\epsilon(\cdot) is the shrinkage operator. Below we
prove the convergence of the projected gradient descent algorithm.
Lemma 1. Let E be a t \times d matrix with all 1s, and Q = E - \tau(Y + \rho\hat{Y}). For \tau \in (0, \min(2, 2/\rho)), the
operator h(\cdot) is non-expansive, i.e., for any M and M' \in R^{t \times d}, \|h(M) - h(M')\|_F \le \|M - M'\|_F.
Moreover, \|h(M) - h(M')\|_F = \|M - M'\|_F if and only if h(M) - h(M') = M - M'.

Proof. Note that for \tau \in (0, \min(2, 2/\rho)), we have -1 < Q_{ij} < 1, \forall (i,j). Then following the
gradient definition in (4), we have

  \|h(M) - h(M')\|_F = \|(M - M') \circ Q\|_F = \Big( \sum_{ij} (M_{ij} - M'_{ij})^2 Q_{ij}^2 \Big)^{1/2} \le \|M - M'\|_F.

The inequalities become equalities if and only if h(M) - h(M') = M - M'.
Lemma 2. [11, Lemma 1] The shrinkage operator S_\epsilon(\cdot) is non-expansive, i.e., for any M and
M' \in R^{t \times d}, \|S_\epsilon(M) - S_\epsilon(M')\|_F \le \|M - M'\|_F. Moreover, \|S_\epsilon(M) - S_\epsilon(M')\|_F = \|M - M'\|_F
if and only if S_\epsilon(M) - S_\epsilon(M') = M - M'.
Lemma 3. The projection operator P_C(\cdot) is non-expansive, i.e., \|P_C(M) - P_C(M')\|_F \le \|M - M'\|_F. Moreover, \|P_C(M) - P_C(M')\|_F = \|M - M'\|_F if and only if P_C(M) - P_C(M') = M - M'.

Proof. For any given entry index (i,j), there are four cases:
• Case 1: M_{ij} \ge 0, M'_{ij} \ge 0. We have (P_C(M_{ij}) - P_C(M'_{ij}))^2 = (M_{ij} - M'_{ij})^2.
• Case 2: M_{ij} \ge 0, M'_{ij} < 0. We have (P_C(M_{ij}) - P_C(M'_{ij}))^2 = M_{ij}^2 < (M_{ij} - M'_{ij})^2.
• Case 3: M_{ij} < 0, M'_{ij} \ge 0. We have (P_C(M_{ij}) - P_C(M'_{ij}))^2 = M'^2_{ij} < (M_{ij} - M'_{ij})^2.
• Case 4: M_{ij} < 0, M'_{ij} < 0. We have (P_C(M_{ij}) - P_C(M'_{ij}))^2 = 0 \le (M_{ij} - M'_{ij})^2.
Therefore,

  \|P_C(M) - P_C(M')\|_F = \Big( \sum_{ij} (P_C(M_{ij}) - P_C(M'_{ij}))^2 \Big)^{1/2} \le \Big( \sum_{ij} (M_{ij} - M'_{ij})^2 \Big)^{1/2} = \|M - M'\|_F,

and \|P_C(M) - P_C(M')\|_F = \|M - M'\|_F if and only if P_C(M) - P_C(M') = M - M'.
Theorem 1. The sequence {M^k} generated by the projected gradient descent iterations in Algorithm 1 with 0 < \tau < \min(2, 2/\rho) converges to M^*, which is an optimal solution of (2).

Proof. Since h(\cdot), S_\epsilon(\cdot) and P_C(\cdot) are all non-expansive, the composite operator P_C(S_\epsilon(h(\cdot))) is
non-expansive as well. This theorem can then be proved following [11, Theorem 4].
5 Experiments
In this section, we evaluate the proposed two-step learning method by conducting extensive cross
language sentiment classification experiments on multilingual Amazon product reviews.
5.1 Experimental Setting
Dataset We used the multilingual Amazon product reviews dataset [16], which contains three
categories (Books (B), DVD (D), Music (M)) of product reviews in four different languages (English
(E), French (F), German (G), Japanese (J)). For each category of the product reviews, there are 2000
positive and 2000 negative English reviews, and 1000 positive and 1000 negative reviews for each
of the other three languages. In addition, there are another 2000 unlabeled parallel reviews between
English and each of the other three languages. Each review is preprocessed into a unigram bag-of-word feature vector with TF-IDF values. We focused on cross-lingual learning between English and
the other three languages and constructed 18 cross language sentiment classification tasks (EFB,
FEB, EFD, FED, EFM, FEM, EGB, GEB, EGD, GED, EGM, GEM, EJB, JEB, EJD, JED, EJM,
JEM), each for one combination of selected source language, target language and category. For
example, the task EFB uses English Books reviews as the source language data and uses French
Books reviews as the target language data.
5
Table 1: Average classification accuracies (%) and standard deviations (%) over 10 runs for the 18
cross language sentiment classification tasks.

TASK   TBOW          CL-LSI        CL-KCCA       CL-OPCA       TSL
EFB    67.31±0.96    79.56±0.21    77.56±0.14    76.55±0.31    81.92±0.20
FEB    66.82±0.43    76.66±0.34    73.45±0.13    74.43±0.53    79.51±0.21
EFD    67.80±0.94    77.82±0.66    78.19±0.09    70.54±0.41    81.97±0.33
FED    66.15±0.65    76.61±0.25    74.93±0.07    72.49±0.47    78.09±0.32
EFM    67.84±0.43    75.39±0.40    78.24±0.12    73.69±0.49    79.30±0.30
FEM    66.08±0.52    76.33±0.27    73.38±0.12    73.46±0.50    78.53±0.46
EGB    67.23±0.68    77.59±0.21    79.14±0.12    74.72±0.54    79.22±0.31
GEB    67.16±0.55    77.64±0.19    74.15±0.09    74.78±0.39    78.65±0.23
EGD    66.79±0.80    79.22±0.22    76.73±0.10    74.59±0.66    81.34±0.24
GED    66.27±0.69    77.78±0.26    74.26±0.08    74.83±0.45    79.34±0.23
EGM    67.65±0.45    73.81±0.49    79.18±0.05    74.45±0.59    79.39±0.39
GEM    66.74±0.55    77.28±0.51    72.31±0.08    74.15±0.42    79.02±0.34
EJB    63.15±0.69    72.68±0.35    69.46±0.11    71.41±0.48    72.57±0.52
JEB    66.85±0.68    74.63±0.42    67.99±0.18    73.41±0.41    77.17±0.36
EJD    65.47±0.50    72.55±0.28    74.79±0.11    71.84±0.41    76.60±0.49
JED    66.42±0.55    75.18±0.27    72.44±0.16    75.42±0.52    79.01±0.50
EJM    67.62±0.75    73.44±0.50    73.54±0.11    74.96±0.86    76.21±0.40
JEM    66.51±0.51    72.38±0.50    70.00±0.18    72.64±0.66    77.15±0.58
Approaches We compared the proposed two-step learning (TSL) method with the following four
methods: TBOW, CL-LSI, CL-OPCA and CL-KCCA. The Target Bag-Of-Word (TBOW) baseline
method trains a supervised monolingual classifier in the original bag-of-word feature space with the
labeled training data from the target language domain. The Cross-Lingual Latent Semantic Indexing
(CL-LSI) method [10] and the Cross-Lingual Oriented Principal Component Analysis (CL-OPCA)
method [15] first learn cross-lingual representations with all data from both language domains by
performing LSI or OPCA and then train a monolingual classifier with labeled data from both language domains in the induced low-dimensional feature space. The Cross-Lingual Kernel Canonical
Component Analysis (CL-KCCA) method [20] first induces two language projections by using unlabeled parallel data and then trains a monolingual classifier on labeled data from both language
domains in the projected low-dimensional space. For all experiments, we used a linear support vector
machine (SVM) as the monolingual classification model. For implementation, we used the libsvm
package [5] with default parameter setting.
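As a rough sketch of how such a baseline pipeline fits together, the snippet below runs LSI (truncated SVD) on a document-term matrix and trains a linear SVM on the labeled part. The random placeholder data and the use of scikit-learn's TruncatedSVD and LinearSVC (standing in for libsvm) are our assumptions, not the authors' code.

```python
# Illustrative CL-LSI-style pipeline: LSI projection, then a linear SVM.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

X_all = np.random.rand(200, 5000)        # toy document-term matrix (both languages)
y_lab = np.random.randint(0, 2, 100)     # sentiment labels for the labeled subset

lsi = TruncatedSVD(n_components=50)      # k = 50, the dimensionality chosen for CL-LSI
Z = lsi.fit_transform(X_all)             # low-dimensional shared representation

clf = LinearSVC()                        # linear SVM with default parameters
clf.fit(Z[:100], y_lab)                  # train on the labeled documents
predictions = clf.predict(Z[100:])       # classify the remaining documents
```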
5.2 Classification Accuracy
For each of the 18 cross language sentiment classification tasks, we used all documents from the two
languages and the additional 2000 unlabeled parallel documents for representation learning. Then
we used all documents in the auxiliary source language and randomly chose 100 documents from
the target language as labeled data for classification model training, and used the remaining data in
the target language as test data. For the proposed method, TSL, we fixed two of the model parameters to $10^{-6}$ and 1, chose the value of one trade-off parameter from {0.01, 0.1, 1, 10}, chose the value of a second from $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$, and chose the dimension k value from {20, 50, 100, 200, 500}. We used the first task EFB to perform model parameter selection by running the algorithm 3 times based on random selections of 100 labeled target training data. This gave us the following parameter setting: 0.1 and $10^{-4}$ for the two tuned parameters, and k = 50. We
used the same procedure to select the dimensionality of the learned semantic representations for the
other three approaches, CL-LSI, CL-OPCA and CL-KCCA, which produced k = 50 for CL-LSI
and CL-OPCA, and k = 100 for CL-KCCA. We then used the selected model parameters for all
the 18 tasks and ran each experiment for 10 times based on random selections of 100 labeled target
documents. The average classification accuracies and standard deviations are reported in Table 1.
We can see that the proposed two-step learning method, TSL, outperforms the other four comparison methods in general. The target baseline TBOW performs poorly on all 18 tasks, which implies
that 100 labeled target training documents are far from enough to obtain a robust sentiment classifier
[Figure 1 shows nine panels (EFB, EFD, EFM, EGB, EGD, EGM, EJB, EJD, EJM), each plotting test accuracy (y-axis) against the number of unlabeled parallel documents (x-axis, 500-2000) for CL-LSI, CL-KCCA, CL-OPCA and TSL.]
Figure 1: Average test classification accuracies (%) and standard deviations (%) over 10 runs with different numbers of unlabeled parallel documents for adapting a classification system from English to French, German and Japanese.
in the target language domain. All the other three cross-lingual representation learning methods,
CL-LSI, CL-KCCA and CL-OPCA, consistently outperform this baseline method across all the
18 tasks, which demonstrates that the labeled training data from the source language domain is
useful for classifying the target language data under a unified data representation. Nevertheless, the
improvements achieved by these three methods over the baseline are much smaller than those of the proposed TSL method. Across all the 18 tasks, TSL increases the average test accuracy over the baseline TBOW method by at least 8.59% (on the EJM task) and up to 14.61% (on the EFB task). Moreover,
TSL also outperforms both CL-KCCA and CL-OPCA across all the 18 tasks, outperforms CL-LSI
on 17 out of the 18 tasks and achieves comparable performance with CL-LSI on the remaining
one task (EJB). All these results demonstrate the efficacy and robustness of the proposed two-step
representation learning method for cross language text classification.
5.3 Impact of the Size of Unlabeled Parallel Data
All four cross-lingual adaptation learning methods, CL-LSI, CL-KCCA, CL-OPCA and TSL,
exploit unlabeled parallel reviews for learning cross-lingual representations. Next we investigated
the performance of these methods with respect to different numbers of unlabeled parallel reviews.
We tested a set of different numbers, $n_p \in \{200, 500, 1000, 2000\}$. For each number $n_p$ in the set, we randomly chose $n_p$ parallel documents from all the 2000 unlabeled parallel reviews to conduct
experiments using the same setting from the previous experiments. Each experiment was repeated
10 times based on random selections of labeled target training data. The average test classification
accuracies and standard deviations are plotted in Figure 1 and Figure 2. Figure 1 presents the results
for the 9 cross-lingual classification tasks that adapt classification systems from English to French,
German and Japanese, while Figure 2 presents the results for the other 9 cross-lingual classification
tasks that adapt classification systems from French, German and Japanese to English.
[Figure 2 shows nine panels (FEB, FED, FEM, GEB, GED, GEM, JEB, JED, JEM), each plotting test accuracy (y-axis) against the number of unlabeled parallel documents (x-axis, 500-2000) for CL-LSI, CL-KCCA, CL-OPCA and TSL.]
Figure 2: Average test classification accuracies and standard deviations over 10 runs with different numbers of unlabeled parallel documents for adapting a classification system from French, German and Japanese to English.
From these results, we can see that the performance of all four methods generally improves as the amount of unlabeled parallel data increases. The proposed method, TSL, nevertheless outperforms the
other three cross-lingual adaptation learning methods across the range of different $n_p$ values for 16
out of the 18 cross language sentiment classification tasks. For the remaining two tasks, EFM and
EGM, it has similar performance with the CL-KCCA method while significantly outperforming the
other two methods. Moreover, for the 9 tasks that make adaptation from English to the other three
languages, the TSL method achieves strong performance with only 200 unlabeled parallel documents, while the performance of the other three methods drops significantly as the number of unlabeled parallel documents decreases. These results demonstrate the robustness and efficacy of the proposed method compared to the other methods.
6 Conclusion
In this paper, we developed a novel two-step method to learn cross-lingual semantic data representations for cross language text classification by exploiting unlabeled parallel bilingual documents. We
first formulated a matrix completion problem to infer unobserved feature values of the concatenated
document-term matrix in the space of unified vocabulary set from the source and target languages.
Then we performed latent semantic indexing over the completed low-rank document-term matrix to
produce a low-dimensional cross-lingual representation of the documents. Monolingual classifiers
were then used to conduct cross language text classification based on the learned document representation. To investigate the effectiveness of the proposed learning method, we conducted extensive
experiments with tasks of cross language sentiment classification on Amazon product reviews. Our
experimental results demonstrated that the proposed two-step learning method significantly outperforms the other four comparison methods. Moreover, the proposed approach needs far fewer parallel documents to produce a good cross language text classification system.
References
[1] M. Amini and C. Goutte. A co-classification approach to learning from multilingual corpora. Machine Learning, 79:105-121, 2010.
[2] M. Amini, N. Usunier, and C. Goutte. Learning from multiple partially observed views - an application to multilingual text categorization. In NIPS, 2009.
[3] B. A.R., A. Joshi, and P. Bhattacharyya. Cross-lingual sentiment analysis for Indian languages using linked wordnets. In Proc. of COLING, 2012.
[4] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[5] C. Chang and C. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011.
[6] W. Dai, Y. Chen, G. Xue, Q. Yang, and Y. Yu. Translated learning: Transfer learning across different feature spaces. In NIPS, 2008.
[7] A. Gliozzo. Exploiting comparable corpora and bilingual dictionaries for cross-language text categorization. In Proc. of ICCL-ACL, 2006.
[8] J. Jagarlamudi, R. Udupa, H. Daumé III, and A. Bhole. Improving bilingual projections via sparse covariance matrices. In Proc. of EMNLP, 2011.
[9] X. Ling, G. Xue, W. Dai, Y. Jiang, Q. Yang, and Y. Yu. Can Chinese web pages be classified with English data source? In Proc. of WWW, 2008.
[10] M. Littman, S. Dumais, and T. Landauer. Automatic cross-language information retrieval using latent semantic indexing. In Cross-Language Information Retrieval, chapter 5, pages 51-62. Kluwer Academic Publishers, 1998.
[11] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming: Series A and B, 128, Issue 1-2, 2011.
[12] X. Meng, F. Wei, X. Liu, M. Zhou, G. Xu, and H. Wang. Cross-lingual mixture model for sentiment classification. In Proc. of ACL, 2012.
[13] J. Pan, G. Xue, Y. Yu, and Y. Wang. Cross-lingual sentiment classification via bi-view non-negative matrix tri-factorization. In Proc. of PAKDD, 2011.
[14] P. Petrenz and B. Webber. Label propagation for fine-grained cross-lingual genre classification. In Proc. of the NIPS xLiTe workshop, 2012.
[15] J. Platt, K. Toutanova, and W. Yih. Translingual document representations from discriminative projections. In Proc. of EMNLP, 2010.
[16] P. Prettenhofer and B. Stein. Cross-language text classification using structural correspondence learning. In Proc. of ACL, 2010.
[17] L. Rigutini and M. Maggini. An EM based training algorithm for cross-language text categorization. In Proc. of the Web Intelligence Conference, 2005.
[18] J. Shanahan, G. Grefenstette, Y. Qu, and D. Evans. Mining multilingual opinions through classification and translation. In AAAI Spring Symp. on Explor. Attit. and Affect in Text, 2004.
[19] W. Smet, J. Tang, and M. Moens. Knowledge transfer across multilingual corpora via latent topics. In Proc. of PAKDD, 2011.
[20] A. Vinokourov, J. Shawe-Taylor, and N. Cristianini. Inferring a semantic representation of text via cross-language correlation analysis. In NIPS, 2002.
[21] C. Wan, R. Pan, and J. Li. Bi-weighting domain adaptation for cross-language text classification. In Proc. of IJCAI, 2011.
[22] X. Wan. Co-training for cross-lingual sentiment classification. In Proc. of ACL-IJCNLP, 2009.
[23] K. Wu, X. Wang, and B. Lu. Cross language text categorization using a bilingual lexicon. In Proc. of IJCNLP, 2008.
Learning word embeddings efficiently with
noise-contrastive estimation
Koray Kavukcuoglu
DeepMind Technologies
koray@deepmind.com
Andriy Mnih
DeepMind Technologies
andriy@deepmind.com
Abstract
Continuous-valued word embeddings learned by neural language models have recently been shown to capture semantic and syntactic information about words very
well, setting performance records on several word similarity tasks. The best results
are obtained by learning high-dimensional embeddings from very large quantities
of data, which makes scalability of the training method a critical factor.
We propose a simple and scalable new approach to learning word embeddings
based on training log-bilinear models with noise-contrastive estimation. Our approach is simpler, faster, and produces better results than the current state-of-theart method. We achieve results comparable to the best ones reported, which were
obtained on a cluster, using four times less data and more than an order of magnitude less computing time. We also investigate several model types and find that
the embeddings learned by the simpler models perform at least as well as those
learned by the more complex ones.
1 Introduction
Natural language processing and information retrieval systems can often benefit from incorporating
accurate word similarity information. Learning word representations from large collections of unstructured text is an effective way of capturing such information. The classic approach to this task
is to use the word space model, representing each word with a vector of co-occurrence counts with
other words [16]. Representations of this type suffer from data sparsity problems due to the extreme dimensionality of the word count vectors. To address this, Latent Semantic Analysis performs
dimensionality reduction on such vectors, producing lower-dimensional real-valued word embeddings.
Better real-valued representations, however, are learned by neural language models which are trained
to predict the next word in the sentence given the preceding words. Such representations have been
used to achieve excellent performance on classic NLP tasks [4, 18, 17]. Unfortunately, few neural
language models scale well to large datasets and vocabularies due to use of hidden layers and the
cost of computing normalized probabilities.
Recently, a scalable method for learning word embeddings using light-weight tree-structured neural
language models was proposed in [10]. Although tree-structured models can be trained quickly, they
are considerably more complex than the traditional (flat) models and their performance is sensitive
to the choice of the tree over words [13]. Inspired by the excellent results of [10], we investigate
a simpler approach based on noise-contrastive estimation (NCE) [6], which enables fast training
without the complexity of working with tree-structured models. We compound the speedup obtained
by using NCE to eliminate the normalization costs during training, by using very simple variants of
the log-bilinear model [14], resulting in parameter update complexity linear in the word embedding
dimensionality.
We evaluate our approach on two analogy-based word similarity tasks [11, 10] and show that despite the considerably shorter training times our models outperform the Skip-gram model from [10]
trained on the same 1.5B-word Wikipedia dataset. Furthermore, we can obtain performance comparable to that of the huge Skip-gram and CBOW models trained on a 125-CPU-core cluster after
training for only four days on a single core using four times less training data. Finally, we explore
several model architectures and discover that the simplest architectures learn embeddings that are at
least as good as those learned by the more complex ones.
2 Neural probabilistic language models
Neural probabilistic language models (NPLMs) specify the distribution for the target word w, given
a sequence of words h, called the context. In statistical language modelling, w is typically the next
word in the sentence, while the context h is the sequence of words that precede w. Though some
models such as recurrent neural language models [9] can handle arbitrarily long contexts, in this
paper, we will restrict our attention to fixed-length contexts. Since we are interested in learning
word representations as opposed to assigning probabilities to sentences, we do not need to restrict
our models to predicting the next word, and can, for example, predict w from the words surrounding
it as was done in [4].
Given a context h, an NPLM defines the distribution for the word to be predicted using the scoring
function $s_\theta(w, h)$ that quantifies the compatibility between the context and the candidate target word. Here $\theta$ are the model parameters, which include the word embeddings. The scores are converted
to probabilities by exponentiating and normalizing:
$$P_\theta^h(w) = \frac{\exp(s_\theta(w, h))}{\sum_{w'} \exp(s_\theta(w', h))}. \qquad (1)$$
Unfortunately, both evaluating $P_\theta^h(w)$ and computing the corresponding likelihood gradient require normalizing over the entire vocabulary, which means that maximum likelihood training of such
models takes time linear in the vocabulary size, and thus is prohibitively expensive for all but the
smallest vocabularies.
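To see where the cost comes from, consider a direct (and deliberately naive) implementation of Eq. 1; the sketch is ours, with the score array assumed to be precomputed, and the denominator is the part that scales with the vocabulary:

```python
import numpy as np

def word_probability(scores, w):
    """Eq. 1 computed directly: scores[v] holds s_theta(v, h) for every word v.
    The normalizer sums over the whole vocabulary, hence O(|V|) per evaluation."""
    return np.exp(scores[w]) / np.sum(np.exp(scores))
```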
There are two main approaches to scaling up NPLMs to large vocabularies. The first one involves
using a tree-structured vocabulary with words at the leaves, resulting in training time logarithmic
in the vocabulary size [15]. Unfortunately, this approach is considerably more involved than ML
training and finding well-performing trees is non-trivial [13]. The alternative is to keep the model but
use a different training strategy. Using importance sampling to approximate the likelihood gradient
was the first such method to be proposed [2, 3], and though it could produce substantial speedups, it
suffered from stability problems. Recently, a method for training unnormalized probabilistic models,
called noise-contrastive estimation (NCE) [6], has been shown to be a stable and efficient way of
training NPLMs [14]. As it is also considerably simpler than the tree-based prediction approach, we
use NCE for training models in this paper. We will describe NCE in detail in Section 3.1.
3 Scalable log-bilinear models
We are interested in highly scalable models that can be trained on billion-word datasets with vocabularies of hundreds of thousands of words within a few days on a single core, which rules out most
traditional neural language models such as those from [1] and [4]. We will use the log-bilinear language model (LBL) [12] as our starting point, which unlike traditional NPLMs, does not have a hidden layer and works by performing linear prediction in the word feature vector space. In particular,
we will use a more scalable version of LBL [14] that uses vectors instead of matrices for its context
weights to avoid the high cost of matrix-vector multiplication. This model, like all other models
we will describe, has two sets of word representations: one for the target words (i.e. the words
being predicted) and one for the context words. We denote the target and the context representations
for word w with $q_w$ and $r_w$ respectively. Given a sequence of context words $h = w_1, \ldots, w_n$, the
model computes the predicted representation for the target word by taking a linear combination of
the context word feature vectors:
$$\hat{q}(h) = \sum_{i=1}^{n} c_i \odot r_{w_i}, \qquad (2)$$
where $c_i$ is the weight vector for the context word in position i and $\odot$ denotes element-wise multiplication. The context can consist of words preceding, following, or surrounding the word being
predicted. The scoring function then computes the similarity between the predicted feature vector
and one for word w:
$$s_\theta(w, h) = \hat{q}(h)^\top q_w + b_w, \qquad (3)$$
where $b_w$ is a bias that captures the context-independent frequency of word w. We will refer to this
model as vLBL, for vector LBL.
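A small NumPy sketch of the vLBL score defined by Eqs. 2 and 3; the array layout and function names are our own, chosen for illustration, not the authors' implementation:

```python
import numpy as np

def vlbl_score(h, w, R, Q, C, b):
    """Score s(w, h) of Eqs. 2-3 (illustrative sketch; names are ours).
    h: list of n context word ids; R, Q: context/target embedding matrices;
    C: n position-dependent weight vectors; b: per-word bias vector."""
    q_hat = np.sum([C[i] * R[wi] for i, wi in enumerate(h)], axis=0)  # Eq. 2
    return q_hat @ Q[w] + b[w]                                        # Eq. 3
```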
vLBL can be made even simpler by eliminating the position-dependent weights and computing the predicted feature vector simply by averaging the context word feature vectors: $\hat{q}(h) = \frac{1}{n} \sum_{i=1}^{n} r_{w_i}$.
The result is something like a local topic model, which ignores the order of context words, potentially
forcing it to capture more semantic information, perhaps at the expense of syntax. The idea of simply
averaging context word feature vectors was introduced in [8], where it was used to condition on large
contexts such as entire documents. The resulting model can be seen as a non-hierarchical version of
the CBOW model of [10].
As our primary concern is learning word representations as opposed to creating useful language
models, we are free to move away from the paradigm of predicting the target word from its context
and, for example, do the reverse. This approach is motivated by the distributional hypothesis, which
states that words with similar meanings often occur in the same contexts [7] and thus suggests looking for word representations that capture their context distributions. The inverse language modelling
approach of learning to predict the context from the word is a natural way to do that. Some classic
word-space models such as HAL and COALS [16] follow this approach by representing the context
distribution using a bag-of-words but they do not learn embeddings from this information.
Unfortunately, predicting an n-word context requires modelling the joint distribution of n words,
which is considerably harder than modelling the distribution of a single word. We make the task
tractable by assuming that the words in different context positions are conditionally independent
given the current word w:
$$P_\theta^w(h) = \prod_{i=1}^{n} P_{i,\theta}^w(w_i). \qquad (4)$$
Though this assumption can be easily relaxed without giving up tractability by introducing some
Markov structure into the context distribution, we leave investigating this direction as future work.
The context word distributions $P_{i,\theta}^w(w_i)$ are simply vLBL models that condition on the current word and are defined by the scoring function
$$s_{i,\theta}(w_i, w) = (c_i \odot r_w)^\top q_{w_i} + b_{w_i}. \qquad (5)$$
The resulting model can be seen as a Naive Bayes classifier parameterized in terms of word embeddings. As this model performs inverse language modelling, we will refer to it as ivLBL.
As with our traditional language model, we also consider the simpler version of this model without
position-dependent weights, defined by the scoring function
>
si,? (wi , w) = rw
qwi + bwi .
(6)
The resulting model is the non-hierarchical counterpart of the Skip-gram model [10]. Note that
unlike the tree-based models, such as those in the above paper, which only learn conditional embeddings for words, in our models each word has both a conditional and a target embedding which can
potentially capture complementary information. Tree-based models replace target embeddings with
parameter vectors associated with the tree nodes, as opposed to individual words.
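The ivLBL scores of Eqs. 5 and 6 admit an equally small sketch (again with assumed names and array layout):

```python
def ivlbl_score(wi, i, w, R, Q, C, b, position_dependent=True):
    """Score of context word wi in position i given the current word w
    (Eq. 5 with position weights, Eq. 6 without; illustrative sketch).
    R, Q, C, b are NumPy arrays as in the vLBL sketch above."""
    r = C[i] * R[w] if position_dependent else R[w]
    return r @ Q[wi] + b[wi]
```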
3.1 Noise-contrastive estimation
We train our models using noise-contrastive estimation, a method for fitting unnormalized models
[6], adapted to neural language modelling in [14]. NCE is based on the reduction of density estimation to probabilistic binary classification. The basic idea is to train a logistic regression classifier to
discriminate between samples from the data distribution and samples from some "noise" distribution, based on the ratio of probabilities of the sample under the model and the noise distribution. The
main advantage of NCE is that it allows us to fit models that are not explicitly normalized making
the training time effectively independent of the vocabulary size. Thus, we will be able to drop the
normalizing factor from Eq. 1, and simply use $\exp(s_\theta(w, h))$ in place of $P_\theta^h(w)$ during training. The
perplexity of NPLMs trained using this approach has been shown to be on par with those trained
with maximum likelihood learning, but at a fraction of the computational cost.
Suppose we would like to learn the distribution of words for some specific context h, denoted by
$P^h(w)$. To do that, we create an auxiliary binary classification problem, treating the training data as positive examples and samples from a noise distribution $P_n(w)$ as negative examples. We are free
to choose any noise distribution that is easy to sample from and compute probabilities under, and
that does not assign zero probability to any word. We will use the (global) unigram distribution of
the training data as the noise distribution, a choice that is known to work well for training language
models. If we assume that noise samples are k times more frequent than data samples, the probability
that the given sample came from the data is $P^h(D = 1|w) = \frac{P_d^h(w)}{P_d^h(w) + k P_n(w)}$. Our estimate of this probability is obtained by using our model distribution in place of $P_d^h$:
$$P^h(D = 1|w, \theta) = \frac{P_\theta^h(w)}{P_\theta^h(w) + k P_n(w)} = \sigma(\Delta s_\theta(w, h)), \qquad (7)$$
where $\sigma(x)$ is the logistic function and $\Delta s_\theta(w, h) = s_\theta(w, h) - \log(k P_n(w))$ is the difference in the scores of word w under the model and the (scaled) noise distribution. The scaling factor k in front of $P_n(w)$ accounts for the fact that noise samples are k times more frequent than data samples.
Note that in the above equation we used $s_\theta(w, h)$ in place of $\log P_\theta^h(w)$, ignoring the normalization
term, because we are working with an unnormalized model. We can do this because the NCE
objective encourages the model to be approximately normalized and recovers a perfectly normalized
model if the model class contains the data distribution [6].
We fit the model by maximizing the log-posterior probability of the correct labels D averaged over
the data and noise samples:
$$J^h(\theta) = \mathbb{E}_{P_d^h}\!\left[\log P^h(D = 1|w, \theta)\right] + k\, \mathbb{E}_{P_n}\!\left[\log P^h(D = 0|w, \theta)\right] = \mathbb{E}_{P_d^h}\!\left[\log \sigma(\Delta s_\theta(w, h))\right] + k\, \mathbb{E}_{P_n}\!\left[\log\left(1 - \sigma(\Delta s_\theta(w, h))\right)\right], \qquad (8)$$
In practice, the expectation over the noise distribution is approximated by sampling. Thus, we
estimate the contribution of a word / context pair w, h to the gradient of Eq. 8 by generating k noise
samples $\{x_i\}$ and computing
$$\frac{\partial}{\partial \theta} J^{h,w}(\theta) = \left(1 - \sigma(\Delta s_\theta(w, h))\right) \frac{\partial}{\partial \theta} \log P_\theta^h(w) - \sum_{i=1}^{k} \sigma(\Delta s_\theta(x_i, h)) \frac{\partial}{\partial \theta} \log P_\theta^h(x_i). \qquad (9)$$
Note that the gradient in Eq. 9 involves a sum over k noise samples instead of a sum over the entire
vocabulary, making the NCE training time linear in the number of noise samples and independent
of the vocabulary size. As we increase the number of noise samples k, this estimate approaches
the likelihood gradient of the normalized model, allowing us to trade off computation cost against
estimation accuracy [6].
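A sketch of how the estimate in Eq. 9 might be assembled is given below. The callback-style interface (a score function, a score-gradient function, and a noise sampler) is purely our assumption for the sake of illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_gradient(w, h, k, score, score_grad, sample_noise, log_pn):
    """Monte Carlo estimate of Eq. 9 (sketch; interface names are ours).
    score(w, h): unnormalized score s(w, h); score_grad(w, h): its gradient
    w.r.t. the parameters; sample_noise(): draws a word id from the unigram
    noise distribution; log_pn(w): log-probability under that distribution."""
    delta = lambda x: score(x, h) - (np.log(k) + log_pn(x))
    grad = (1.0 - sigmoid(delta(w))) * score_grad(w, h)   # data term
    for _ in range(k):                                    # k noise samples
        x = sample_noise()
        grad -= sigmoid(delta(x)) * score_grad(x, h)      # noise terms
    return grad
```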
NCE shares some similarities with a training method for non-probabilistic neural language models
that involves optimizing a margin-based ranking objective [4]. As that approach is non-probabilistic,
it is outside the scope of this paper, though it would be interesting to see whether it can be used to
learn competitive word embeddings.
4 Evaluating word embeddings
Using word embeddings learned by neural language models outside of the language modelling context is a relatively recent development. An early example of this is the multi-layer neural network
of [4] trained to perform several NLP tasks which represented words exclusively in terms of learned
word embeddings. [18] provided the first comparison of several word embeddings learned with different methods and showed that incorporating them into established NLP pipelines can boost their
performance.
Recently the focus has shifted towards evaluating such representations more directly, instead of measuring their effect on the performance of larger systems. Microsoft Research (MSR) has released
two challenge sets: a set of sentences each with a missing word to be filled in [20] and a set of
analogy questions [11], designed to evaluate semantic and syntactic content of word representations respectively. Another dataset, consisting of semantic and syntactic analogy questions has been
released by Google [10].
In this paper we will concentrate on the two analogy-based challenge sets, which consist of questions
of the form "a is to b as c is to ?", denoted as a : b → c : ?. The task is to identify the held-out
fourth word, with only exact word matches deemed correct. Word embeddings learned by neural
language models have been shown to perform very well on these datasets when using the following
vector-similarity-based protocol for answering the questions. Suppose $\vec{w}$ is the representation vector for word w normalized to unit norm. Then, following [11], we answer a : b → c : ? by finding the word $d^*$ with the representation closest to $\vec{b} - \vec{a} + \vec{c}$ according to cosine similarity:
$$d^* = \arg\max_x \frac{(\vec{b} - \vec{a} + \vec{c})^\top \vec{x}}{\|\vec{b} - \vec{a} + \vec{c}\|}. \qquad (10)$$
We discovered that reproducing the results reported in [10] and [11] for publicly available word
embeddings required excluding b and c from the vocabulary when looking for $d^*$ using Eq. 10, though that was not clear from the papers. To see why this is necessary, we can rewrite Eq. 10 as
$$d^* = \arg\max_x \left(\vec{b}^\top \vec{x} - \vec{a}^\top \vec{x} + \vec{c}^\top \vec{x}\right) \qquad (11)$$
and notice that setting x to b or c maximizes the first or third term respectively (since the vectors are
normalized), resulting in a high similarity score. This equation suggests the following interpretation
of $d^*$: it is simply the word with the representation most similar to $\vec{b}$ and $\vec{c}$ and dissimilar to $\vec{a}$, which
makes it quite natural to exclude b and c themselves from consideration.
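A minimal sketch of this evaluation protocol, including the exclusion of b and c, could look as follows (the id-based interface is our assumption):

```python
import numpy as np

def answer_analogy(a, b, c, E):
    """Answer 'a is to b as c is to ?' via Eq. 10. E is a (V, d) matrix of
    word embeddings normalized to unit norm; a, b, c are word ids (sketch)."""
    target = E[b] - E[a] + E[c]
    sims = E @ (target / np.linalg.norm(target))  # cosine similarities
    sims[[b, c]] = -np.inf                        # exclude b and c, as in the text
    return int(np.argmax(sims))
```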
5 Experimental evaluation
5.1 Datasets
We evaluated our word embeddings on two analogy-based word similarity tasks released recently
by Google and Microsoft Research that we described in Section 4. We could not train on the data
used for learning the embeddings in the original papers as it was not readily available. [10] used the
proprietary Google News corpus consisting of 6 billion words, while the 320-million-word training
set used in [11] is a compilation of several Linguistic Data Consortium corpora, some of which
are available only to their subscribers.
Instead, we decided to use two freely-available datasets: the April 2013 dump of English Wikipedia
and the collection of about 500 Project Gutenberg texts that form the canonical training data for
the MSR Sentence Completion Challenge [19]. We preprocessed Wikipedia by stripping out the
XML formatting, mapping all words to lowercase, and replacing all digits with 7, leaving us with
1.5 billion words. Keeping all words that occurred at least 10 times resulted in a vocabulary of
about 872 thousand words. Such a large vocabulary was used to demonstrate the scalability of our
method as well as to ensure that the models will have seen almost all the words they will be tested
on. When preprocessing the 47M-word Gutenberg dataset, we kept all words that occurred 5 or
more times, resulting in an 80-thousand-word vocabulary. Note that many words used for testing
the representations are missing from this dataset, which greatly limits the accuracy achievable when
using it. To make our results directly comparable to those in other papers, we report accuracy scores
computed using Eq. 10, excluding the second and the third word in the question from consideration,
as explained in Section 4.
5.2 Details of training
All models were trained on a single core, using minibatches of size 100 and the initial learning
rate of $3 \times 10^{-2}$. No regularization was used. Initially we used a validation-set based learning
rate adaptation scheme described in [14], which halves the learning rate whenever the validation set
Table 1: Accuracy in percent on word similarity tasks. The models had 100D word embeddings
and were trained to predict 5 words on both sides of the current word on the 1.5B-word Wikipedia
dataset. Skip-gram(*) is our implementation of the model from [10]. ivLBL is the inverse language
model without position-dependent weights. NCEk denotes NCE training using k noise samples.
MODEL        | GOOGLE SEMANTIC | GOOGLE SYNTACTIC | GOOGLE OVERALL | MSR  | TIME (HOURS)
Skip-gram(*) | 28.0            | 36.4             | 32.6           | 31.7 | 12.3
ivLBL+NCE1   | 28.4            | 42.1             | 35.9           | 34.9 | 3.1
ivLBL+NCE2   | 30.8            | 44.1             | 38.0           | 36.2 | 4.0
ivLBL+NCE3   | 34.2            | 43.6             | 39.4           | 36.3 | 5.1
ivLBL+NCE5   | 37.2            | 44.7             | 41.3           | 36.7 | 7.3
ivLBL+NCE10  | 38.9            | 45.0             | 42.2           | 36.0 | 12.2
ivLBL+NCE25  | 40.0            | 46.1             | 43.3           | 36.7 | 26.8
Table 2: Accuracy in percent on word similarity tasks for large models. The Skip-gram† and CBOW† results are from [10]. ivLBL models predict 5 words before and after the current word. vLBL models predict the current word from the 5 preceding and 5 following words.

MODEL       | EMBED. DIM. | TRAINING SET SIZE | GOOGLE SEM. | GOOGLE SYN. | GOOGLE OVERALL | MSR  | TIME (DAYS)
Skip-gram†  | 300         | 1.6B              | 52.2        | 55.1        | 53.8           | -    | 2.0
Skip-gram†  | 300         | 785M              | 56.7        | 52.2        | 55.5           | -    | 2.5
Skip-gram†  | 1000        | 6B                | 66.1        | 65.1        | 65.6           | -    | 2.5×125
ivLBL+NCE25 | 300         | 1.5B              | 61.2        | 58.4        | 59.7           | 48.8 | 1.2
ivLBL+NCE25 | 300         | 1.5B              | 63.6        | 61.8        | 62.6           | 52.4 | 4.1
ivLBL+NCE25 | 300×2       | 1.5B              | 65.2        | 63.0        | 64.0           | 54.2 | 4.1
ivLBL+NCE25 | 100         | 1.5B              | 52.6        | 48.5        | 50.3           | 39.2 | 1.2
ivLBL+NCE25 | 100         | 1.5B              | 55.9        | 50.1        | 53.2           | 42.3 | 2.9
ivLBL+NCE25 | 100×2       | 1.5B              | 59.3        | 54.2        | 56.5           | 44.6 | 2.9
CBOW†       | 300         | 1.6B              | 16.1        | 52.6        | 36.1           | -    | 0.6
CBOW†       | 1000        | 6B                | 57.3        | 68.9        | 63.7           | -    | 2×140
vLBL+NCE5   | 300         | 1.5B              | 40.3        | 55.4        | 48.5           | 48.7 | 0.3
vLBL+NCE5   | 100         | 1.5B              | 45.0        | 56.8        | 51.5           | 52.3 | 2.0
vLBL+NCE5   | 300         | 1.5B              | 54.2        | 64.8        | 60.0           | 58.1 | 2.0
vLBL+NCE5   | 600         | 1.5B              | 57.3        | 66.0        | 62.1           | 59.1 | 2.0
vLBL+NCE5   | 600×2       | 1.5B              | 60.5        | 67.1        | 64.1           | 60.8 | 3.0
perplexity failed to improve after some time, but found that it led to poor representations despite
achieving low perplexity scores, which was likely due to undertraining. The linear learning rate
schedule described in [10] produced better results. Unfortunately, using it requires knowing in
advance how many passes through the data will be performed, which is not always possible or
convenient. Perhaps more seriously, this approach might result in undertraining of representations
for rare words because all representation share the same learning rate.
AdaGrad [5] provides an automatic way of dealing with this issue. Though AdaGrad has already
been used to train neural language models in a distributed setting [10], we found that it helped
to learn better word representations even using a single CPU core. We reduced the potentially
prohibitive memory requirements of AdaGrad, which requires storing a running sum of squared
gradient values for each parameter, by using the same learning rate for all dimensions of a word
embedding. Thus we store only one extra number per embedding vector, which is helpful when
training models with hundreds of millions of parameters.
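A sketch of such a memory-light AdaGrad step for one embedding vector is shown below. The paper does not say exactly how the single scalar is formed from the per-dimension gradients, so accumulating the squared gradient norm is our assumption:

```python
import numpy as np

def adagrad_embedding_step(R, accum, word, grad, lr=3e-2, eps=1e-8):
    """One AdaGrad update with a single learning rate per embedding vector.
    accum[word] is one scalar per word instead of one value per dimension."""
    accum[word] += np.sum(grad ** 2)                      # scalar accumulator
    R[word] -= (lr / (np.sqrt(accum[word]) + eps)) * grad
```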
5.3 Results
Inspired by the excellent performance of tree-based models of [10], we started by comparing the
best-performing model from that paper, the Skip-gram, to its non-hierarchical counterpart, ivLBL
without position-dependent weights, proposed in Section 3, trained using NCE. As there is no publicly available Skip-gram implementation, we wrote our own. Our implementation is faithful to the
description in the paper, with one exception. To speed up training, instead of predicting all context
words around the current word, we predict only one context word, sampled at random using the
Table 3: Results for various models trained for 20 epochs on the 47M-word Gutenberg dataset
using NCE5 with AdaGrad. (D) and (I) denote models with and without position-dependent weights
respectively. For each task, the left (right) column gives the accuracy obtained using the conditional
(target) word embeddings. nL (nR) denotes n words on the left (right) of the current word.
MODEL    | CONTEXT SIZE | SEMANTIC    | SYNTACTIC   | OVERALL     | MSR         | TIME (HOURS)
vLBL(D)  | 5L+5R        | 2.4 / 2.5   | 24.7 / 23.8 | 14.6 / 14.2 | 23.4 / 23.1 | 2.6
vLBL(D)  | 10L          | 2.6 / 2.8   | 22.1 / 14.8 | 12.9 / 9.3  | 20.9 / 9.0  | 2.6
vLBL(D)  | 10R          | 1.9 / 2.3   | 13.1 / 24.1 | 8.4 / 14.2  | 8.8 / 23.0  | 2.6
vLBL(I)  | 5L+5R        | 2.8 / 2.6   | 27.5 / 29.6 | 16.4 / 17.5 | 22.9 / 24.2 | 2.3
vLBL(I)  | 10L          | 2.7 / 2.8   | 23.5 / 16.1 | 14.0 / 10.1 | 19.8 / 10.1 | 2.3
vLBL(I)  | 10R          | 2.4 / 2.3   | 16.2 / 24.6 | 9.9 / 14.6  | 10.0 / 20.3 | 2.1
ivLBL(D) | 5L+5R        | 3.0 / 2.8   | 15.1 / 13.0 | 9.5 / 8.1   | 14.5 / 14.0 | 1.2
ivLBL(I) | 5L+5R        | 2.9 / 2.6   | 26.8 / 26.8 | 15.9 / 15.8 | 21.4 / 21.0 | 1.2
non-uniform weighting scheme from the paper. Note that our models are also trained using the same
context-word sampling approach. To make the comparison fair, we did not use AdaGrad for our
models in these experiments, using the linear learning rate schedule as in [10] instead.
Table 1 shows the results on the word similarity tasks for the two models trained on the Wikipedia
dataset. We ran NCE training several times with different numbers of noise samples to investigate the
effect of this parameter on the representation quality and training time. The models were trained for
three epochs, which in our experience provided a reasonable compromise between training time and
representation quality.¹ All NCE-trained models outperformed the Skip-gram. Accuracy steadily
increased with the number of noise samples used, as did the training time. The best compromise
between running time and performance seems to be achieved with 5 or 10 noise samples.
We then experimented with training models using AdaGrad and found that it significantly improved
the quality of embeddings obtained when training with 10 or 25 noise samples, increasing the semantic score for the NCE25 model by over 10 percentage points. Encouraged by this, we trained
two ivLBL models with position-independent weights and different embedding dimensionalities
for several days using this approach. As some of the best results in [10] were obtained with the
CBOW model, we also trained its non-hierarchical counterpart from Section 3, vLBL with positionindependent weights, using 100/300/600-dimensional embeddings and NCE with 5 noise samples,
for shorter training times. Note that due to the unavailability of the Google News dataset used in that
paper, we trained on Wikipedia. The scores for ivLBL and vLBL models were obtained using the
conditional word and target word representations respectively, while the scores marked with d ? 2
were obtained by concatenating the two word representations, after normalizing them.
The results, reported in Table 2, show that our models substantially outperform their hierarchical
counterparts when trained using comparable amounts of time and data. For example, the 300D
ivLBL model trained for just over a day achieves accuracy scores 3-9 percentage points better than
the 300D Skip-gram trained on the same amount of data for almost twice as long. The same model
trained for four days achieves accuracy scores that are only 2-4 percentage points lower than those
of the 1000D Skip-gram trained on four times as much data using 75 times as many CPU cycles.
By computing word similarity scores using the conditional and the target word representations concatenated together, we can bring the accuracy gap down to 2 percentage points at no additional
computational cost. The accuracy achieved by vLBL models as compared to that of CBOW models
follows a similar pattern. Once again our models achieve better accuracy scores faster and we can
get within 3 percentage points of the result obtained on a cluster using much less data and far less
computation.
To determine whether we were crippling our models by using position-independent weights, we
evaluated all model architectures described in Section 3 on the Gutenberg corpus. The models were
trained for 20 epochs using NCE5 and AdaGrad. We report the accuracy obtained with both conditional and target representations (left and right columns respectively) for each of the models in Table 3.
¹We checked this by training the Skip-gram model for 10 epochs, which did not result in a substantial increase in accuracy.
Table 4: Accuracy on the MSR Sentence Completion Challenge dataset.
MODEL          | CONTEXT SIZE | LATENT DIM | PERCENT CORRECT
LSA [19]       | sentence     | 300        | 49
Skip-gram [10] | 10L+10R      | 640        | 48.0
LBL [14]       | 10L          | 300        | 54.7
ivLBL          | 5L+5R        | 100        | 51.0
ivLBL          | 5L+5R        | 300        | 55.2
ivLBL          | 5L+5R        | 600        | 55.5
Perhaps surprisingly, the results show that representations learned with position-independent
weights, designated with (I), tend to perform better than the ones learned with position-dependent
weights. The difference is small for traditional language models (vLBL), but is quite pronounced
for the inverse language model (ivLBL). The best-performing representations were learned by the
traditional language model with the context surrounding the word and position-independent weights.
Sentence completion: We also applied our approach to the MSR Sentence Completion Challenge
[19], where the task is to complete each of the 1,040 test sentences by picking the missing word
from the list of five candidate words. Using the 47M-word Gutenberg dataset, preprocessed as in
[14], as the training set, we trained several ivLBL models with NCE5 to predict 5 words preceding
and 5 following the current word. To complete a sentence, we compute the probability of the 10
words around the missing word (using Eq. 4) for each of the candidate words and pick the one
producing the highest value. The resulting accuracy scores, given in Table 4 along with those of
several baselines, show that ivLBL models perform very well. Even the model with the lowest
embedding dimensionality of 100, achieves 51.0% correct, compared to 48.0% correct reported in
[10] for the Skip-gram model with 640D embeddings. The 55.5% correct achieved by the model
with 600D embeddings is also better than the best single-model score on this dataset in the literature
(54.7% in [14]).
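The candidate-scoring procedure can be sketched as follows; the interface for the per-position log-probabilities of Eq. 4 is an assumption made for illustration:

```python
def complete_sentence(candidates, context_words, log_prob):
    """Pick the candidate word maximizing the probability of the surrounding
    context under the ivLBL model (Eq. 4). log_prob(i, wi, w) should return
    log P^w_{i,theta}(wi); all names here are ours (sketch)."""
    def score(w):
        return sum(log_prob(i, wi, w) for i, wi in enumerate(context_words))
    return max(candidates, key=score)
```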
6 Discussion
We have proposed a new highly scalable approach to learning word embeddings which involves
training lightweight log-bilinear language models with noise-contrastive estimation. It is simpler
than the tree-based language modelling approach of [10] and produces better-performing embeddings faster. Embeddings learned using a simple single-core implementation of our method achieve
accuracy scores comparable to the best reported ones, which were obtained on a large cluster using
four times as much data and almost two orders of magnitude as many CPU cycles. The scores we
report in this paper are also easy to compare to, because we trained our models only on publicly
available data.
Several promising directions remain to be explored. [8] have recently proposed a way of learning
multiple representations for each word by clustering the contexts the word occurs in and allocating
a different representation for each cluster, prior to training the model. As ivLBL predicts the context
from the word, it naturally allows using multiple context representations per current word, resulting
in a more principled approach to the problem based on mixture modeling. Sharing representations
between the context and the target words is also worth investigating as it might result in better-estimated rare word representations.
Acknowledgments
We thank Volodymyr Mnih for his helpful comments.
References
[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155, 2003.
[2] Yoshua Bengio and Jean-Sébastien Senécal. Quick training of probabilistic neural nets by importance sampling. In AISTATS'03, 2003.
[3] Yoshua Bengio and Jean-Sébastien Senécal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713-722, 2008.
[4] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2010.
[6] M.U. Gutmann and A. Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13:307-361, 2012.
[7] Zellig S Harris. Distributional structure. Word, 1954.
[8] Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 873-882, 2012.
[9] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, 2010.
[10] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. International Conference on Learning Representations 2013, 2013.
[11] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. Proceedings of NAACL-HLT, 2013.
[12] A. Mnih and G. Hinton. Three new graphical models for statistical language modelling. Proceedings of the 24th International Conference on Machine Learning, pages 641-648, 2007.
[13] Andriy Mnih and Geoffrey Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, volume 21, 2009.
[14] Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751-1758, 2012.
[15] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In AISTATS'05, pages 246-252, 2005.
[16] Magnus Sahlgren. The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces. PhD thesis, Stockholm, 2006.
[17] R. Socher, C.C. Lin, A.Y. Ng, and C.D. Manning. Parsing natural scenes and natural language with recursive neural networks. In International Conference on Machine Learning (ICML), 2011.
[18] J. Turian, L. Ratinov, and Y. Bengio. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, 2010.
[19] G. Zweig and C.J.C. Burges. The Microsoft Research Sentence Completion Challenge. Technical Report MSR-TR-2011-129, Microsoft Research, 2011.
[20] Geoffrey Zweig and Chris J.C. Burges. A challenge set for advancing language modeling. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pages 29-36, 2012.
4,604 | 5,166 | Training and Analyzing Deep Recurrent Neural
Networks
Michiel Hermans, Benjamin Schrauwen
Ghent University, ELIS department
Sint Pietersnieuwstraat 41,
9000 Ghent, Belgium
michiel.hermans@ugent.be
Abstract
Time series often have a temporal hierarchy, with information that is spread out
over multiple time scales. Common recurrent neural networks, however, do not
explicitly accommodate such a hierarchy, and most research on them has been
focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing
time series. Here, each layer is a recurrent network which receives the hidden
state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the
structure of time series. We show that they reach state-of-the-art performance for
recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent
time scales.
1 Introduction
The last decade, machine learning has seen the rise of neural networks composed of multiple layers,
which are often termed deep neural networks (DNN). In a multitude of forms, DNNs have been shown to
be powerful models for tasks such as speech recognition [17] and handwritten digit recognition [4].
Their success is commonly attributed to the hierarchy that is introduced due to the several layers.
Each layer processes some part of the task we wish to solve, and passes it on to the next. In this
sense, the DNN can be seen as a processing pipeline, in which each layer solves a part of the task
before passing it on to the next, until finally the last layer provides the output.
One type of network that debatably falls into the category of deep networks is the recurrent neural
network (RNN). When folded out in time, it can be considered as a DNN with indefinitely many
layers. The comparison to common deep networks falls short, however, when we consider the functionality of the network architecture. For RNNs, the primary function of the layers is to introduce
memory, not hierarchical processing. New information is added in every "layer" (every network iteration), and the network can pass this information on for an indefinite number of network updates,
essentially providing the RNN with unlimited memory depth. Whereas in DNNs input is only presented at the bottom layer, and output is only produced at the highest layer, RNNs generally receive
input and produce output at each time step. As such, the network updates do not provide hierarchical processing of the information per se, only in the respect that older data (provided several time
steps ago) passes through the recursion more often. There is no compelling reason why older data
would require more processing steps (network iterations) than newly received data. More likely, the
recurrent weights in an RNN learn during the training phase to select what information they need to
pass onwards, and what they need to discard. Indeed, this quality forms the core motivation of the
so-called Long Short-term memory (LSTM) architecture [11], a special form of RNN.
Figure 1: Schematic illustration of a DRNN. Arrows represent connection matrices, and white,
black and grey circles represent input frames, hidden states, and output frames respectively. Left:
Standard RNN, folded out in time. Middle: DRNN of 3 layers folded out in time. Each layer can
be interpreted as an RNN that receives the time series of the previous layer as input. Right: The two
alternative architectures that we study in this paper, where the looped arrows represent the recurrent
weights. Either only the top layer connects to the output (DRNN-1O), or all layers do (DRNN-AO).
One potential weakness of a common RNN is that we may need complex, hierarchical processing of
the current network input, but this information only passes through one layer of processing before
going to the output. Secondly, we may need to process the time series at several time scales. If
we consider for example speech, at the lowest level it is built up of phonemes, which exist on a
very short time-scale. Next, on increasingly longer time scales, there are syllables, words, phrases,
clauses, sentences, and at the highest level for instance a full conversation. Common RNNs do not
explicitly support multiple time scales, and any temporal hierarchy that is present in the input signal
needs to be embedded implicitly in the network dynamics.
In past research, some hierarchical architectures employing RNNs have been proposed [3, 5, 6].
Especially [5] is interesting in the sense that they construct a hierarchy of RNNs, which all operate on different time-scales (using subsampling). The authors limit themselves to artificial tasks,
however. The architecture we study in this paper has been used in [8]. Here, the authors employ
stacked bi-directional LSTM networks, and train it on the TIMIT phoneme dataset [7] in which they
obtain state-of-the-art performance. Their paper is strongly focused on reaching good performance,
however, and little analysis on the actual contribution of the network architecture is provided.
The architecture we study in this paper is essentially a common DNN (a multilayer perceptron) with
temporal feedback loops in each layer, which we call a deep recurrent neural network (DRNN).
Each network update, new information travels up the hierarchy, and temporal context is added in
each layer (see Figure 1). This basically combines the concept of DNNs with RNNs. Each layer
in the hierarchy is a recurrent neural network, and each subsequent layer receives the hidden state
of the previous layer as input time series. As we will show, stacking RNNs automatically creates
different time scales at different levels, and therefore a temporal hierarchy.
In this paper we will study character-based language modelling and provide a more in-depth analysis
of how the network architecture relates to the nature of the task. We suspect that DRNNs are well-suited to capture temporal hierarchies, and character-based language modeling is an excellent real-world task to validate this claim, as the distribution of characters is highly nonlinear and covers
both short- and long-term dependencies. As we will show, DRNNs embed these different timescales
directly in their structure, and they are able to model long-term dependencies. Using only stochastic
gradient descent (SGD) we are able to get state-of-the-art performance for recurrent networks on
a Wikipedia-based text corpus, which was previously only obtained using the far more advanced
Hessian-free training algorithm [19].
2 Deep RNNs
2.1 Hidden state evolution
We define a DRNN with $L$ layers, and $N$ neurons per layer. Suppose we have an input time series $s(t)$ of dimensionality $N_{in}$, and a target time series $y^*(t)$. In order to simplify notation we will not explicitly write out bias terms, but augment the corresponding variables with an element equal to one. We use the notation $\hat{x} = [x; 1]$.
We denote the hidden state of the $i$-th layer with $a_i(t)$. Its update equation is given by:
$$a_i(t) = \tanh\left(W_i a_i(t-1) + Z_i \hat{a}_{i-1}(t)\right) \quad \text{if } i > 1$$
$$a_i(t) = \tanh\left(W_i a_i(t-1) + Z_i \hat{s}(t)\right) \quad \text{if } i = 1.$$
Here, $W_i$ and $Z_i$ are the recurrent connections and the connections from the lower layer or input time series, respectively. A schematic drawing of the DRNN is presented in Figure 1.
Note that the network structure inherently offers different time scales. The bottom layer has fading
memory of the input signal. The next layer has fading memory of the hidden state of the bottom
layer, and consequently a fading memory of the input which reaches further in the past, and so on
for each additional layer.
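As a concrete illustration of this recurrence, the following minimal NumPy sketch advances an L-layer DRNN by one time step. The function name, the list-based parameter layout and the explicit [x; 1] augmentation are illustrative assumptions, not code from the paper.

import numpy as np

def drnn_step(s_t, a_prev, W, Z):
    # One time step of an L-layer DRNN.
    # s_t:    input frame, shape (N_in,)
    # a_prev: list of L hidden states from time t-1, each of shape (N,)
    # W:      list of L recurrent matrices, W[i] of shape (N, N)
    # Z:      list of L input matrices; Z[0] is (N, N_in + 1), Z[i > 0] is (N, N + 1)
    a_new = []
    x = s_t
    for i in range(len(W)):
        x_aug = np.append(x, 1.0)  # the hat-augmentation [x; 1] absorbs the bias
        a_new.append(np.tanh(W[i] @ a_prev[i] + Z[i] @ x_aug))
        x = a_new[-1]              # the next layer receives this hidden state
    return a_new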
2.2 Generating output
The task we consider in this paper is a classification task, and we use a softmax function to generate
output. The DRNN generates an output which we denote by y(t). We will consider two scenarios:
that where only the highest layer in the hierarchy couples to the output (DRNN-1O), and that where
all layers do (DRNN-AO). In the two respective cases, y(t) is given by:
$$y(t) = \mathrm{softmax}\left(U \hat{a}_L(t)\right), \qquad (1)$$
where $U$ is the matrix with the output weights, and
$$y(t) = \mathrm{softmax}\left(\sum_{i=1}^{L} U_i \hat{a}_i(t)\right), \qquad (2)$$
such that $U_i$ corresponds to the output weights of the $i$-th layer. The two resulting architectures are
depicted in the right part of Figure 1.
The reason that we use output connections at each layer is twofold. First, like any deep architecture,
DRNNs suffer from a pathological curvature in the cost function. If we use backpropagation through
time, the error will propagate from the top layer down the hierarchy, but it will have diminished in
magnitude once it reaches the lower layers, such that they are not trained effectively. Adding output
connections at each layer amends this problem to some degree as the training error reaches all layers
directly.
Secondly, having output connections at each layer provides us with a crude measure of its role in
solving the task. We can for instance measure the decay of performance by leaving out an individual layer?s contribution, or study which layer contributes most to predicting characters in specific
instances.
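A matching sketch of the two output variants in equations (1) and (2); the boolean flag and helper names are assumptions made for illustration.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def drnn_output(a, U, all_layers):
    # a: list of L hidden states at time t
    # DRNN-AO (eq. 2): U is a list of L matrices, one per layer
    # DRNN-1O (eq. 1): U is a single matrix acting on the top layer only
    if all_layers:
        logits = sum(U[i] @ np.append(a[i], 1.0) for i in range(len(a)))
    else:
        logits = U @ np.append(a[-1], 1.0)
    return softmax(logits)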
2.3 Training setup
In all experiments we used stochastic gradient descent. To avoid extremely large gradients near
bifurcations, we applied the often-used trick of normalizing the gradient before using it for weight
updates. This simple heuristic seems to be effective to prevent gradient explosions and sudden jumps
of the parameters, while not diminishing the end performance. We write the number of batches we
train on as $T$. The learning rate is set at an initial value $\eta_0$, and drops linearly with each subsequent weight update. Suppose $\theta(j)$ is the set of all trainable parameters after $j$ updates, and $\nabla\theta(j)$ is the gradient of a cost function w.r.t. this parameter set, as computed on a randomly sampled part of the training set. Parameter updates are given by:
$$\theta(j+1) = \theta(j) - \eta_0 \left(1 - \frac{j}{T}\right) \frac{\nabla\theta(j)}{\|\nabla\theta(j)\|}. \qquad (3)$$
In the case where we use output connections at the top layer only, we use an incremental layer-wise
method to train the network, which was necessary to reach good performance. We add layers one
by one and at all times an output layer only exists at the current top layer. When adding a layer, the
previous output weights are discarded and new output weights are initialised connecting from the
new top layer. In this way each layer has at least some time during training in which it is directly
coupled to the output, and as such can be trained effectively. Over the course of each of these training
stages we used the same training strategy as described before: training the full network with BPTT
and linearly reducing the learning rate to zero before a new layer is added. Notice the difference to
common layer-wise training schemes where only a single layer is trained at a time. We always train
the full network after each layer is added.
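A minimal sketch of the update in equation (3), assuming all trainable parameters are flattened into one vector; the guard against a zero gradient norm is an added safety check rather than something stated in the paper.

import numpy as np

def sgd_update(theta, grad, j, T, eta0=0.5):
    # Normalized-gradient SGD with a linearly decaying step size (eq. 3).
    # j: number of updates performed so far, T: total number of updates.
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return theta
    return theta - eta0 * (1.0 - j / T) * grad / norm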
3 Text prediction
In this paper we consider next character prediction on a Wikipedia text-corpus [19] which was
made publicly available1 . The total set is about 1.4 billion characters long, of which the final 10
million is used for testing. Each character is represented by one-out-of-N coding. We used 95 of
the most common characters2 (including small letters, capitals, numbers and punctuation), and one
?unknown? character, used to map any character not part of the 95 common ones, e.g. Cyrillic and
Chinese characters. We need time in the order of 10 days to train a single network, largely due to
the difficulty of exploiting massively parallel computing for SGD. Therefore we only tested three
network instantiations3 . Each experiment was run on a single GPU (NVIDIA GeForce GTX 680,
4GB RAM).
The task is as follows: given a sequence of text, predict the probability distribution of the next
character. The performance metric used is the average number of bits-per-character (BPC), given by $\mathrm{BPC} = -\langle \log_2 p_c \rangle$, where $p_c$ is the probability as predicted by the network of the correct next character.
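The metric is a one-liner; this sketch assumes an array holding, for every position in the test text, the probability the model assigned to the correct next character.

import numpy as np

def bits_per_character(p_correct):
    # BPC = -<log2 p_c>, averaged over all test positions
    return float(-np.mean(np.log2(p_correct)))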
3.1 Network setups
The challenge in character-level language modelling lies in the great diversity and sheer number of
words that are used. In the case of Wikipedia this difficulty is exacerbated due to the large number
of names of persons and places, scientific jargon, etc. In order to capture this diversity we need large
models with many trainable parameters.
All our networks have a number of neurons selected such that in total they each had approximately
4.9 million trainable parameters, which allowed us to make a comparison to other published work
[19]. We considered three networks: a common RNN (2119 units), a 5-layer DRNN-1O (727 units
per layer), and a 5-layer DRNN-AO (706 units per layer)⁴. Initial learning rates $\eta_0$ were chosen at 0.5, except for the top layer of the DRNN-1O, where we picked $\eta_0 = 0.25$ (as we observed that the nodes started to saturate if we used a too high learning rate).
The RNN and the DRNN-AO were trained over $T = 5 \times 10^5$ parameter updates. The network with output connections only at the top layer had a different number of parameter updates per training stage, $T = \{0.5, 1, 1.5, 2, 2.5\} \times 10^5$, for the 5 layers respectively. As such, for each additional
layer the network is trained for more iterations. All gradients are computed using backpropagation
through time (BPTT) on 75 randomly sampled sequences in parallel, drawn from the training set.
All sequences were 250 characters long, and the first 50 characters were disregarded during the
backwards pass, as they may have insufficient temporal context. In the end the DRNN-AO sees the
full training set about 7 times in total, and the DRNN-1O about 10 times.
The matrices $W_i$ and $Z_{i>1}$ were initialised with elements drawn from $\mathcal{N}(0, N^{-1/2})$. The input weights $Z_1$ were drawn from $\mathcal{N}(0, 1)$. We chose to have the same number of neurons for every
layer, mostly to reduce the number of parameters that need to be optimised. Output weights were
always initialised on zero.
¹ http://www.cs.toronto.edu/~ilya/mrnns.tar.gz
² In [19] only 86 characters are used, but most of the additional characters in our set are exceedingly rare, such that cross-entropy is not affected meaningfully by this difference.
³ In our experience the networks are so large that there is very little difference in performance for different initialisations.
⁴ The decision for 5 layers is based on a previous set of experiments (results not shown).
Model                                  BPC test
RNN                                    1.610
DRNN-AO                                1.557
DRNN-1O                                1.541
MRNN                                   1.55
PAQ                                    1.51
Hutter Prize (current record) [12]     1.276
Human level (estimated) [18]           0.6 - 1.3

Table 1: Results on the Wikipedia character prediction task. The first three numbers are our measurements, the next two the results on the same dataset found in [19]. The bottom two numbers were not measured on the same text corpus.

[Figure 2: bar plot of the increase in BPC on the test set (0 to 2) against the removed layer (1 to 5).]
Figure 2: Increase in BPC on the test set from removing the output contribution of a single layer of the DRNN-AO.

3.2 Results
Performance and text generation
The resulting BPCs for our models and comparative results in literature are shown in Table 1. The
common RNN performs worst, and the DRNN-1O the best, with the DRNN-AO slightly worse. Both
DRNNs perform well and are roughly similar to the state-of-the-art for recurrent networks with the
same number of trainable parameters5 , which was established with a multiplicative RNN (MRNN),
trained with Hessian-free optimization in the course of 5 days on a cluster of 8 GPUs6 . The same
authors also used the PAQ compression algorithm [14] as a comparison, which we included in the
list. In the table we also included two results which were not measured on the same dataset (or even
using the same criteria), but which give an estimation of the true number of BPC for natural text.
To check how each layer influences performance in the case of the DRNN-AO, we performed tests
in which the output of a single layer is set to zero. This can serve as a sanity check to ensure
that the model is efficiently trained. If for instance removing the top layer output contribution
does not significantly harm performance, this essentially means that it is redundant (as it does no
preprocessing for higher layers). Furthermore we can use this test to get an overall indication of
which role a particular layer has in producing output. Note that these experiments only have a limited
interpretability, as the individual layer contributions are likely not independent. Perhaps some layers
provide strong negative output bias which compensates for strong positive bias of another, or strong
synergies might exist between them.
First we measure the increase in test BPC by removing a single layer's output contribution, which
can then be used as an indicator for the importance of this layer for directly generating output. In
Figure 2 we show the result. The contribution of the top layer is the most important, and that of the
bottom layer the second most important. The intermediate layers contribute less to the direct output and seem
to be more important in preprocessing the data for the top layer.
As in [19], we also used the networks in a generative mode, where we use the output probabilities
of the DRNN-AO to recursively sample a new input character in order to complete a given sentence.
We too used the phrase "The meaning of life is ". We performed three tests: first we generated
text with an intact network, next we see how the text quality deteriorates when we leave out the
contributions of the bottom and top layer respectively7 (by setting it equal to zero before adding up
⁵ This similarity might reflect limitations caused by the network size. We also performed a long-term experiment with a DRNN-AO with 9.6 million trainable parameters, which resulted in a test BPC of 1.472 after 1,000,000 weight updates (training for over a month). More parameters offer more raw storage power, and hence provide a straightforward manner in which to increase performance.
⁶ This would suggest a computational cost of roughly 4 times ours, but an honest comparison is hard to make as the authors did not specify explicitly how much data their training algorithm went through in total. Likely the cost ratio is smaller than 4, as we use a more modern GPU.
⁷ Leaving out the contributions of intermediate layers only has a minimal effect on the subjective quality of the produced text.
Intact network: The meaning of life is the "decorator of Rose". The Ju along with its perspective character survive, which coincides with his eromine, water and colorful art called "Charles VIII". "In "Inferno" (also 220: "The second Note Game Magazine", a comic at the Old Boys at the Earl of Jerusalem for two years) focused on expanded domestic differences from 60 mm Oregon launching, and are permitted to exchange guidance.

First-layer contribution removed: The meaning of life is man sistasteredsteris bus and nuster eril'n ton nis our ousNmachiselle here hereds'd toppstes impedred wisv."-hor ens htls betwez rese, and Intantored wren in thoug and elit toren on the marcel, gos infand foldedsamps que help sasecre hon Roser and ens in respoted we frequen enctuivat herde pitched pitchugismissedre and loseflowered

Top-layer contribution removed: The meaning of life is impossible to unprecede "Pok.{* PRER)!?KGOREMFHEAZ CTX=R M 'S=6 5?&+??=7xp*= 5FJ4?13/TxI JX=?b28O=&4+E9F=&Z26 ?R&N== Z8&A=58=84&T=RESTZINA=L&95Y 2O59&FP85=&&#=&H=S=Z IO =T @?CBOM=6&9Y1= 9 5

Table 2: Three examples of text generated by the DRNN-AO. The first is generated by the intact network, the second by leaving out the contribution of the first layer, and the third by leaving out the contribution of the top layer.
[Figure 3: two log-scale panels over nr. of presented characters (20 to 100); left y-axis: normalised average distance; right y-axis: average increase in BPC; legends: RNN, DRNN-1O, DRNN-AO and layers 1 to 5.]
Figure 3: Left panel: normalised average distance between hidden states of a perturbed and unperturbed network as a function of presented characters. The perturbation is a single typo at the first
character. The coloured full lines are for the individual layers of the DRNN-1O, and the coloured
dashed lines are those of the layers of the DRNN-AO. Distances are normalised on the distance of
the occurrence of the typo. Right panel: Average increase in BPC between a perturbed and unperturbed network as a function of presented characters. The perturbation is by replacing the initial
context (see text), and the result is shown for the text having switched back to the correct context.
Coloured lines correspond to the individual contributions of the layers in the DRNN-AO.
layer contributions and applying the softmax function). Resulting text samples are shown in Table
2. The text sample of the intact network shows short-term correct grammar, phrases, punctuation
and mostly existing words. The text sample with the bottom layer output contribution disabled very
rapidly becomes "unstable", and starts to produce long strings of rare characters, indicating that the contribution of the bottom layer is essential in modeling some of the most basic statistics of the Wikipedia text corpus. We verified this further by using such a random string of characters as initialization of the intact network, and observed that it consistently fell back to producing "normal"
text. The text sample with the top layer disabled is interesting in the sense that it produces roughly
word-length strings of common characters (letters and spaces), of which substrings resemble common syllables. This suggests that the top layer output contribution captures text statistics longer than
word-length sequences.
Time scales
In order to gauge at what time scale each individual layer operates, we have performed several
experiments on the models. First of all we considered an experiment in which we run the DRNN
on two identical text sequences from the test set, but after 100 characters we introduce a typo in
one of them (by replacing it by a character randomly sampled from the full set). We record the
hidden states after the typo as a function of time for both the perturbed and unperturbed network and measure the Euclidean distance between them as a function of time, to see how long the effect of the typo remains present in each layer.
[Figure 4: two panels over nr. presented characters (50 to 500); top: per-layer output traces for the closing parenthesis (-5 to 15); bottom: its total predicted probability (0 to 0.4).]
Figure 4: Network output example for a particularly long phrase between parentheses (296 characters), sampled from the test set. The vertical dashed lines indicate the opening and closing parentheses in the input text sequence. Top panel: output traces for the closing parenthesis character for each layer in the DRNN-AO. Coloring is identical to that of Figure 3. Bottom panel: total predicted output probability of the closing parenthesis sign of the DRNN-AO.
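A sketch of this typo test, under the assumption that a hypothetical run_drnn(inputs) returns, for every time step, the list of per-layer hidden-state vectors; all names here are illustrative.

import numpy as np

def typo_perturbation_distances(run_drnn, text, char_to_onehot, rng):
    # Compare a clean run against one where the character at position 100
    # is replaced by a randomly sampled character from the full set.
    clean = [char_to_onehot[c] for c in text]
    perturbed = list(clean)
    perturbed[100] = char_to_onehot[rng.choice(sorted(char_to_onehot))]
    h_clean, h_pert = run_drnn(clean), run_drnn(perturbed)
    # Euclidean distance per layer for every step from the typo onwards
    return [[np.linalg.norm(hc - hp) for hc, hp in zip(t_c, t_p)]
            for t_c, t_p in zip(h_clean[100:], h_pert[100:])]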
Next we measured the length of the context that the DRNNs effectively employ. In order to do so
we measured the average difference in BPC between normal text and a perturbed copy, in which we
replaced the first 100 characters by text randomly sampled from elsewhere in the test set. This will
give an indication of how long the lack of correct context lingers after the text sequence switched.
All measurements were averaged over 50,000 instances. Results are shown in Figure 3. The left
panel shows how fast each individual layer in the DRNNs forgets the typo-perturbation. It appears
that the layer-wise time scales behave quite differently in the case of the DRNN-1O and the DRNN-AO. The DRNN-AO has very short time-scales in the three bottom layers and longer memory only
appears for the two top ones, whereas in the DRNN-1O, the bottom two layers have relatively short
time scales, but the top three layers have virtually the same, very long time scale. This is almost
certainly caused by the way in which we trained the DRNN-1O, such that intermediate layers already
assumed long memory when they were at the top of the hierarchy. The effect of the perturbation of
the normal RNN is also shown. Even though it decays faster at the start, the effect of the perturbation
remains present in the network for a long period as well.
The right panel of Figure 3 depicts the effect on switching the context on the actual prediction
accuracy, which gives some insight in what the actual length of the context used by the networks
is. Both DRNNs seem to recover more slowly from the context switch than the RNN, indicating
that they employ a longer context for prediction. The time scales of the individual layers of the
DRNN-AO are also depicted (by using the perturbed hidden states of an individual layer and the
unperturbed states for the other layers for generating output), which largely confirms the result from
the typo-perturbation test.
The results shown here verify that a temporal hierarchy develops when training a DRNN. We have
also performed a test to see what the time scales of an untrained DRNN are (by performing the typo
test), which showed that here the differences in time-scales for each layer were far smaller (results
not shown). The big differences we see in the trained DRNNs are hence a learned property.
Long-term interactions: parentheses
In order to get a clearer picture on some of the long-term dependencies the DRNNs have learned we
look at their capability of closing parentheses, even when the phrase between parentheses is long.
To see how well the networks remember the opening of a parenthesis, we observe the DRNN-AO
output for the closing parenthesis-character8 . In Figure 4 we show an example for an especially long
phrase between parentheses. We both show the output probability and the individual layers' output contribution for the closing parenthesis (before they are added up and sent to the softmax function). The output of the top layer for the closing parenthesis is increased strongly for the whole duration of the phrase, and is reduced immediately after it is closed.
⁸ Results on the DRNN-1O are qualitatively similar.
The total output probability shows a similar pattern, showing momentary high probabilities for the
closing parenthesis only during the parenthesized phrase, and extremely low probabilities elsewhere.
These results are quite consistent over the test set, with some notable exceptions. When several sentences appear between parentheses (which occasionally happens in the text corpus), the network
reduces the closing bracket probability (i.e., essentially "forgets" it) as soon as a full stop appears⁹.
Similarly, if a sentence starts with an opening bracket it will not increase closing parenthesis probability at all, essentially ignoring it. Furthermore, the model seems not able to cope with nested
parentheses (perhaps because they are quite rare). The fact that the DRNN is able to remember the
opening parenthesis for sequences longer than it has been trained on indicates that it has learned
to model parentheses as a pseudo-stable attractor-like state, rather than memorizing parenthesized
phrases of different lengths.
In order to see how well the networks can close parentheses when they operate in the generative
mode, we performed a test in which we initialize it with a 100-character phrase drawn from the test
set ending in an opening bracket and observe in how many cases the network generates a closing
bracket. A test is deemed unsuccessful if the closing parenthesis doesn't appear in 500 characters,
or if it produces a second opening parenthesis. We averaged the results over 2000 initializations.
The DRNN-AO performs best in this test; only failing in 12% of the cases. The DRNN-1O fails in
16%, and the RNN in 28%.
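A sketch of this generative-mode bracket test; sample_next_char is a hypothetical sampler that draws one character from the model's predictive distribution given the text so far.

def closes_parenthesis(sample_next_char, context, max_chars=500):
    # context: a 100-character phrase from the test set ending in '('
    text = context
    for _ in range(max_chars):
        c = sample_next_char(text)
        if c == ')':
            return True   # success: the parenthesis was closed
        if c == '(':
            return False  # failure: a second opening bracket appeared
        text += c
    return False          # failure: no ')' within max_chars characters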
The results presented in this section hint at the fact that DRNNs might find it easier to learn long-term relations between input characters than common RNNs. This could lead to testing DRNNs on the
tasks introduced in [11]. These tasks are challenging in the sense that they require to retain very
long memory of past input, while being driven by so-called distractor input. It has been shown that
LSTMs and later common RNNs trained with Hessian-free methods [16] and Echo State Networks
[13] are able to model such long-term dependencies. These tasks, however, purely focus on memory
depth, and very little additional processing is required, let alone hierarchical processing. Therefore
we do not suspect that DRNNs pose a strong advantage over common RNNs for these tasks in
particular.
4 Conclusions and Future Work
We have shown that using a deep recurrent neural network (DRNN) is beneficial for character-level language modeling, reaching state-of-the-art performance for recurrent neural networks on a
Wikipedia text corpus, confirming the observation that deep recurrent architectures can boost performance [8]. We also present experimental evidence for the appearance of a hierarchy of time-scales
present in the layers of the DRNNs. Finally we have demonstrated that in certain cases the DRNNs
can have extensive memory of several hundred characters long.
The training method we obtained on the DRNN-1O indicates that supervised pre-training for deep
architectures is helpful, which on its own can provide an interesting line of future research. Another
one is to extend common pre-training schemes, such as the deep belief network approach [9] and
deep auto-encoders [10, 20] for DRNNs. The results in this paper can potentially contribute to the
ongoing debate on training algorithms, especially whether SGD or second order methods are more
suited for large-scale machine learning problems [2]. Therefore, applying second order techniques
such as Hessian-free training [15] on DRNNs seems an attractive line of future research in order to
obtain a solid comparison.
Acknowledgments
This work is partially supported by the interuniversity attraction pole (IAP) Photonics@be of the
Belgian Science Policy Office and the ERC NaResCo Starting grant. We would like to thank Sander
Dieleman and Philemon Brakel for helping with implementations. All experiments were performed
using Theano [1].
⁹ It is consistently resilient against points appearing in abbreviations such as "e.g.," and "dr." though.
References
[1] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010.
[2] L. Bottou and O. Bousquet. The tradeoffs of large-scale learning. Optimization for Machine Learning, page 351, 2011.
[3] W.-Y. Chen, Y.-F. Liao, and S.-H. Chen. Speech recognition with hierarchical recurrent neural networks. Pattern Recognition, 28(6):795–805, 1995.
[4] D. Ciresan, U. Meier, L. Gambardella, and J. Schmidhuber. Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220, 2010.
[5] S. El Hihi and Y. Bengio. Hierarchical recurrent neural networks for long-term dependencies. Advances in Neural Information Processing Systems, 8:493–499, 1996.
[6] S. Fernández, A. Graves, and J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI 2007, Hyderabad, India, January 2007.
[7] J. Garofolo, National Institute of Standards and Technology (US), Linguistic Data Consortium, Information Science and Technology Office, United States, and Defense Advanced Research Projects Agency. TIMIT Acoustic-Phonetic Continuous Speech Corpus. Linguistic Data Consortium, 1993.
[8] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In To appear in ICASSP 2013, 2013.
[9] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] G. E. Hinton. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] M. Hutter. The human knowledge compression prize, 2006.
[13] H. Jaeger. Long short-term memory in echo state networks: Details of a simulation study. Technical report, Jacobs University, 2012.
[14] M. Mahoney. Adaptive weighing of context models for lossless data compression. Florida Tech., Melbourne, USA, Tech. Rep., 2005.
[15] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning, pages 735–742, 2010.
[16] J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning, volume 46, page 68. Omnipress, Madison, WI, 2011.
[17] A. Mohamed, G. Dahl, and G. Hinton. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):14–22, 2012.
[18] C. E. Shannon. Prediction and entropy of printed English. Bell System Technical Journal, 30(1):50–64, 1951.
[19] I. Sutskever, J. Martens, and G. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, pages 1017–1024, 2011.
[20] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103, 2008.
4,605 | 5,167 | Extracting regions of interest from biological images
with convolutional sparse block coding
Marius Pachitariu¹, Adam Packer², Noah Pettit², Henry Dalgleish², Michael Hausser² and Maneesh Sahani¹
¹ Gatsby Unit, UCL, UK {marius, maneesh}@gatsby.ucl.ac.uk
² The Wolfson Institute for Biomedical Research, UCL, UK {a.packer, noah.pettit.10, henry.dalgleish.09, m.hausser}@ucl.ac.uk
Abstract
Biological tissue is often composed of cells with similar morphologies replicated
throughout large volumes and many biological applications rely on the accurate
identification of these cells and their locations from image data. Here we develop
a generative model that captures the regularities present in images composed of
repeating elements of a few different types. Formally, the model can be described
as convolutional sparse block coding. For inference we use a variant of convolutional matching pursuit adapted to block-based representations. We extend the K-SVD learning algorithm to subspaces by retaining several principal vectors from
the SVD decomposition instead of just one. Good models with little cross-talk
between subspaces can be obtained by learning the blocks incrementally. We
perform extensive experiments on simulated images and the inference algorithm
consistently recovers a large proportion of the cells with a small number of false
positives. We fit the convolutional model to noisy GCaMP6 two-photon images
of spiking neurons and to Nissl-stained slices of cortical tissue and show that it recovers cell body locations without supervision. The flexibility of the block-based
representation is reflected in the variability of the recovered cell shapes.
1 Introduction
For evolutionary reasons, biological tissue at all spatial scales is composed of repeating patterns.
This is because successful biological motifs are reused and multiplied by evolutionary pressures. At
a small spatial scale eukaryotic cells contain only a few types of major organelles like mitochondria
and vacuoles and several dozen minor organelles like vesicles and ribosomes. Each of the organelles
is replicated a large number of times within each cell and has a distinctive visual appearance. At
the scale of whole cells, most tissue types like muscle and epithelium are composed primarily of
single cell types. Some of the more diverse biological tissues are probably in the brain where gray
matter contains different types of neurons and glia, often spatially overlapping. Repetition is also
encouraged at large spatial scales. Striate muscles are made out of similar axially-aligned fibers
called sarcomers and human cortical surfaces are highly folded inside the skull producing repeating
surface patterns called gyri and sulci.
Much biological data at all spatial scales comes in the form of two- or three-dimensional images.
Non-invasive techniques like magnetic resonance imaging allow visualization of details on the order
of one millimeter. Cells in tissue can be seen with light microscopy and cellular organelles can
be seen with the electron microscope. Given the stereotypical nature of biological motifs, these
images often appear as collections of similar elements over a noisy background, as shown in figure
1(a). We developed a generative image model that automatically discovers the repeating motifs, and
segments biological images into the most common elements that form them. We apply the model
to two-dimensional images composed of several hundred cells of possibly different types, such as
images of cortical tissue expressing fluorescent GCaMP6, a calcium indicator, taken with a two-photon microscope in vivo. We also apply the model to Nissl-stained cortical tissue imaged in slice.
Figure 1: a. Mean image of a two-photon recording of calcium-based fluorescence. b. Same image as in (a) after subtractive and divisive normalization locally.
Each experimental exposure can contain hundreds of cells and many exposures are usually taken
over a single experimental session. Our main aim is to automate the cell detection stage, because
tracing cell contours by hand can be a laborious and inexact process, especially given the multitude
of confounds usually present in these images. One confound clearly visible in figure 1(a) is the
large variation in contrast and luminance over a single image. A second confound, also visible in
figure 1(a), is that many cells tend to cluster together and press their boundaries against each other.
Assigning pixels to the correct cell can be difficult. A third confound is that calcium, the marker
which the fluorescent images report, is present in the entire neuropil (in the dendrites and axons of
the cells). Activation of calcium in the neuropil makes a noisy background for the estimation of cell
somata. Given such large confounds, a properly-formulated image model is needed to resolve the
ambiguities as well as the human eye can resolve them.
1.1 Background on automated extraction of cell somata
Histological examination of biological tissue with light-microscopy is an important application for
techniques of cell identification and segmentation. Most algorithms for identifying cell somata
from such images are based on hand-crafted filtering and thresholding techniques. For example,
[1] proposes a pipeline of as many as fourteen separate steps, each of which is meant to deal with
some particular dimension of variability in the images. Our approach is to instead propose a fully
generative model of the biological tissue which encapsulates our beliefs about the stereotypical
structure of such images. Inference in the model inverts the generative model (or, in other words, deconvolves the image) and thereby replaces the filtering and thresholding techniques usually
employed. Learning the parameters of the generative model replaces the hand-crafting of the filters
and thresholds.
For one image type we use here, fluorescent images of neuronal tissue, the approach of [2] is closer
in spirit to our methodology of model design and inference. The authors propose an independent
components analysis (ICA) model of the movies which expresses their beliefs that all the pixels belonging to a cell should brighten together, but only rarely. The model effectively uses the temporal
correlations between pixels to segment each image, much like [3] but the pipeline of [3] is manual and not model-designed like that of [2]. Both of these studies are different from our approach,
because we aim to recover cell bodies from single images alone. The method of [2] applies well
to small fields of view and large coherent fluorescence fluctuations in single cells, but fails when
applied to our data with large fields of view containing hundreds of small neurons. The failure is
due to long-range spatial correlations between many thousands of pixels which overcome the noisy
correlations between the few dozen pixels belonging to each cell. Consequently, the independent
components extracted by the algorithm of [2]¹ have large spatial domains as can be seen in supplemental figure 1. Our approach is robust to large non-local correlations because we analyze the mean image alone. One advantage is that the resulting model can be applied not just to data from functional imaging experiments but to data from any imaging technique.
¹ available online at http://www.snl.salk.edu/~emukamel/
1.2 Background on convolutional image models
Our proposed image model is a novel extension of a family of recent algorithms based on sparse
coding that are commonly used in object recognition experiments [4], [5], [6], [7], [8]. A starting
point for our model was the convolutional matching pursuit (MP) implementation of [5] (but see [6]
for more details). The authors show that convolutional MP learns a diverse set of basis functions
from natural images. Most of these basis functions are edges, but some have a globular appearance
and others represent curved edges and corners. Their implied generative model of an image is
to pick out randomly a few basis functions and place them at random locations. While this is a
poor generative model for natural images, it is much better suited to biological images which are
composed of many repeating and seemingly randomly distributed elements of a few different types.
One disadvantage of convolutional MP as described by [6] is that it uses fixed templates for each
dictionary element. Although it seems like the cells in figure 1(b) might be well described by
a single ring shape, there are size and shape variations which could be better captured by more
flexible templates. In general, we expect the repeating elements in a biological image to have similar
appearances to a first approximation, but patterned variability is unavoidable. A better model of the
image of a single cell might be to assume it was generated by combining a few different prototypes
with different coefficients, effectively interpolating between the prototypes. We group the prototypes
related to a single object into blocks and every image is formed by activating a small number of
such blocks. We call this model sparse block coding. Note that the blocking principle is common in
natural image modelling, where Gabor filters in quadrature are combined with different coefficients
to produce edges of different spatial phases. Independent subspace analysis (ISA [7]) also entails
distributing basis functions into non-overlapping blocks. However, in our formulation the blocks are
either activated or not, while ISA assumes a continuous distribution on the activations of each block.
This property of sparse block coding makes it valuable in making hard assignments of inferred cell
locations, rather than giving a continuous coefficient for each location.
Closer to our formulation, [8] have used a similar sparse block coding model on natural movie
patches and added a temporal smoothness prior on the activation probabilities of blocks in consecutive movie frames. The expensive variational iterative techniques used by [8] for inference
and learning in small image patches are computationally infeasible for the convolutional model of
large images we present here. Instead, we use a convolutional block pursuit technique which is an
extension of standard matching pursuit and has similarly low computational complexity even for
arbitrarily large blocks and arbitrarily large images.
2 Model
2.1 Convolutional sparse block coding
Following [8], we distinguish between identity and attribute variables in the generative model of
each object in an image. An object can be a cell, a cell fragment or any other spatially-localized
object. Identity variables $h^k_{xy}$, where $(x, y)$ is the location of the object and $k$ the type of object, are Bernoulli-distributed with very small prior probabilities. Each of the objects also has several continuous-valued attribute variables $x^{kl}_{xy}$, with $l$ indexing the attribute. In the generative model these attributes are given a broad uniform probability and specify the coefficients with which a set of basis functions $A^{kl}$ are combined at spatial location $(x, y)$ before being linearly combined with
objects generated at other locations. The full description of the generative process is best captured
in terms of two-dimensional convolutions by the following set of equations
$$h^k_{xy} \sim \mathrm{Bernoulli}(p)$$
$$x^{kl}_{xy} \sim \mathcal{N}(0, \sigma_x^2)$$
$$y \sim \sum_{k,l} A^{kl} \ast \left( x^{kl} \odot h^k \right) + \mathcal{N}(0, \sigma_y),$$
where $\sigma_y$ is the (small) noise variance for the image, $\sigma_x$ is the (large) prior variance for the coefficients, $p$ is a small activation probability specific to each object type, $h^k$ and $x^{kl}$ represent the full two-dimensional maps of the binary and continuous coefficients respectively, $\odot$ represents the elementwise or Hadamard product and $\ast$ denotes two-dimensional convolution where the result is
taken to have the same dimensions as the input image.² The joint log-likelihood (or negative energy) can now be derived easily:
$$\mathcal{L}(x, h, A) = -\frac{\left\| y - \sum_{k,l} A^{kl} \ast \left( x^{kl} \odot h^k \right) \right\|^2}{2\sigma_y^2} - \frac{\sum_{klxy} \left( x^{kl}_{xy} \right)^2}{2\sigma_x^2} + \sum_{kxy} \left[ h^k_{xy} \log(p) + (1 - h^k_{xy}) \log(1-p) \right] + \text{constants} \qquad (1)$$
In practice, we used $\sigma_x = \infty$ as we found that it gave similar results to finite values of $\sigma_x$. This model can be fit by alternately optimizing the cost function in equation 1 over the unobserved variables $x$ and $h$ and the parameters $A$. The prior bias parameter $p$ will not be optimized over but instead will be adjusted so as to guarantee a mean number of elements per image. We also set $\|A^{kl}\| = 1$ without loss of generality, since the absolute values of $x$ can scale to compensate.
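For concreteness, a sketch of ancestral sampling from this generative model. It treats sigma_y as a standard deviation and, since sigma_x is taken to be broad, simply draws unit-variance coefficients; both choices, like the function names, are assumptions of this illustration.

import numpy as np
from scipy.signal import convolve2d

def sample_image(A, p, sigma_y, H, W, rng):
    # A: nested list, A[k][l] is the (d, d) basis function of attribute l in block k
    # p: list of per-type activation probabilities
    y = rng.normal(0.0, sigma_y, size=(H, W))         # pixel noise
    for k, block in enumerate(A):
        h_k = rng.random((H, W)) < p[k]               # Bernoulli identity map h^k
        for A_kl in block:
            x_kl = rng.normal(0.0, 1.0, size=(H, W))  # attribute coefficients x^kl
            y += convolve2d(x_kl * h_k, A_kl, mode='same')
    return y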
2.2 Inference by convolutional block pursuit
Given a set of basis functions $A^{kl}$ and an image $y$, we would like to infer the most likely locations of objects of each type in an image. This inference is generally NP-hard but good solutions can nonetheless be obtained with greedy methods like matching pursuit (MP). In standard matching pursuit, a sequential process is followed where at each step a basis function $A^{kl}$ is chosen which, if activated, increases most the log-likelihood of equation 1. In our model, at each step we activate a full block $k$ which includes multiple templates $A^{kl}$. Due to the quadratic nature of equation 1, for a proposal $h^k_{xy} = 1$ we can easily compute the MAP estimate for each $x^k_{xy}$ given the current residual image $y_{\mathrm{res}} = y - \sum_{k,l} A^{kl} \ast (x^{kl} \odot h^k)$. Here we understand $x^k_{xy}$ as a vector concatenating $x^{kl}_{xy}$ for all $l$.
all l. The MAP estimate for xkxy is
?1 k
? kxy = (Ak )T Ak
x
vxy
k
kl
v (l) = A? ? yres
xy
xy
where A?kl is the basis function Akl rotated by 180 degrees and the matrix Ak contains as columns
the vectorized basis functions Akl . The corresponding increase in likelihood in equation 1 is
$$\Delta L^k_{xy} = \frac{(\hat{x}^k_{xy})^T\, v^k_{xy}}{2\sigma_y^2} - \log\frac{1-p}{p}.$$
Inference stops when the activation penalty $\log\frac{1-p}{p}$ from the prior overcomes the data term for all possible objects $k$ at all possible locations $(x, y)$.
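These quantities assemble into one greedy step of convolutional block pursuit, sketched below with the array conventions of the sampling sketch above. This is our own illustration of the procedure, not the authors' implementation, and it recomputes everything from scratch rather than using the caching described next.

```python
import numpy as np
from scipy.signal import correlate2d

def block_pursuit_step(y_res, A, p, sigma_y):
    """Return (gain, k, (i, j), x_hat) for the best single block to activate.
    y_res: (H, W) residual image; A: (K, L, d, d) unit-norm basis functions."""
    K, L, _, _ = A.shape
    penalty = np.log((1.0 - p) / p)
    best = None
    for k in range(K):
        # v^k_{xy}(l): correlation == convolution with the 180-degree rotation
        v = np.stack([correlate2d(y_res, A[k, l], mode="same") for l in range(L)])
        Ak = A[k].reshape(L, -1).T                  # columns: vectorized A^{kl}
        G_inv = np.linalg.inv(Ak.T @ Ak)            # ((A^k)^T A^k)^{-1}
        x_hat = np.einsum("ml,lij->mij", G_inv, v)  # MAP coefficients everywhere
        gain = np.einsum("lij,lij->ij", x_hat, v) / (2 * sigma_y**2) - penalty
        i, j = np.unravel_index(np.argmax(gain), gain.shape)
        if best is None or gain[i, j] > best[0]:
            best = (gain[i, j], k, (i, j), x_hat[:, i, j])
    return best  # keep activating while the best gain stays positive
```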
A simple trick common to all matching pursuit algorithms [9], [6] allows us to save computation when sequentially calculating $v^{kl}_{xy} = \tilde{A}^{kl} \ast y_{res}$ by keeping track of $v$ and updating it after each new coefficient is turned on:
$$v_{new} = v - G_{(\cdot\cdot\cdot\cdot),(k\,\cdot\,xy)}\, \hat{x}^k_{xy},$$
where $G$ is the grand Gram matrix of all basis functions $A^{kl}_{xy}$ at all positions $(x, y)$, and the indexing means that every dot runs over all possible values of that index. Because the basis functions are much smaller in length and width than the entire image, most entries in the Gram matrix are actually 0. In practice, we do not keep track of these and instead keep track only of $G_{(k'l'x'y'),(klxy)}$ for $|x - x'| < d$ and $|y - y'| < d$, where $d$ is the width and length of the basis function. We also keep track during inference of $\hat{x}$ and $\Delta L^k_{xy}$ and only need to update these quantities at positions $(x, y)$ around the extracted object. These caching techniques make the complexity of the inference scale linearly with the number of objects in each image, regardless of image or object size.
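Rather than materializing the sparse grand Gram matrix, the sketch below performs the equivalent local bookkeeping: it subtracts the newly explained patch from the residual and re-correlates only the window where $v$ can have changed. This is a simplification of ours; it assumes an odd filter width d and an activation at least d pixels away from the image border.

```python
import numpy as np
from scipy.signal import correlate2d

def update_after_activation(y_res, v, A, k, xy, x_hat):
    """In-place local update of the residual y_res (H, W) and of the
    correlation maps v (K, L, H, W) after activating block k at xy."""
    K, L, d, _ = A.shape
    x0, y0 = xy
    r, w = d // 2, d - 1
    patch = np.tensordot(x_hat, A[k], axes=(0, 0))  # sum_l x_hat[l] * A[k, l]
    y_res[x0 - r:x0 + r + 1, y0 - r:y0 + r + 1] -= patch
    # correlations change only within +-(d - 1) of the activation
    xs, xe, ys, ye = x0 - w, x0 + w + 1, y0 - w, y0 + w + 1
    crop = y_res[xs - r:xe + r, ys - r:ye + r]
    for kk in range(K):
        for l in range(L):
            v[kk, l, xs:xe, ys:ye] = correlate2d(crop, A[kk, l], mode="valid")
```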
Thus, our algorithm benefits from the computational efficacy of matching pursuit. One additional
computation lies in determining the inverse of $(A^k)^T A^k$ for each $k$. This cost is negligible, since each block contains a small number of attributes and we only need to do the inversions once per iteration. Every iteration of block pursuit requires updating $v$, $\hat{x}$ and $\Delta L^k_{xy}$ locally around the extracted block, which is several times more expensive than the corresponding update in simple matching
pursuit. However, this cost is also negligible compared to the cost of finding the best block at each
iteration: the single most intensive operation during inference is the loop through all the elements
in all the convolutional maps to find the block which most increases the likelihood if activated. All
the other update operations are local around the extracted block, and thus negligible. In practice for
the datasets we use (for example, 18 images of 256 by 256 pixels each), a model can be learned in
minutes on a modern CPU and inference on a single large image takes under one second.
2.3 Learning with block K-SVD
Given the inferred active blocks and their coefficients, we would like to adapt the parameters of the
basis functions Akl so as to maximize the cost function in eq 1. This can most easily be accomplished
by gradient descent (GD). Unfortunately, for general dictionary learning setups gradient descent can
produce suboptimal solutions, where a proportion of the basis function fail to learn meaningful
structure [10]. Similarly, for our block-based representations we found that gradient descent often
mixed together subspaces that should have been separated (see fig 2(c)). We considered the option
of estimating the subspaces in each Ak sequentially where we run a couple of iterations of learning
with a single subspace in each Ak and then every couple of iterations we increase the number of
subspaces we estimate for Ak . This incremental approach always resulted in demixed subspaces
like those in figure 2(a). Note also that the standard approach in MP-based models is to extract
a fixed number of coefficients per image, but in our database of biological images there are large
variations in the number of cells present in each image so we needed the inference method to be
flexible enough to accommodate varying numbers of objects. To control the total number of active
coefficients, we adjusted during learning the prior activation probability p whenever the average
number of active elements was too small or too large compared to our target mean activation rate.
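The paper states that p is adjusted to hit the target rate but not by what rule; a simple multiplicative controller such as the following sketch would do, where the step size is entirely our assumption.

```python
def adjust_prior(p, mean_active, target, step=1.25):
    """Nudge the activation prior so the mean number of extracted elements
    per image tracks the target (the step size is an assumption of ours)."""
    if mean_active > target:
        return p / step          # raise the penalty, extract fewer elements
    if mean_active < target:
        return min(p * step, 0.5)
    return p
```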
Although incremental gradient descent worked well, it tended to be slow in practice. A popular
learning algorithm that was proposed to accelerate patch-based dictionary learning is K-SVD [10].
In every iteration of K-SVD, coefficients are extracted for all the image patches in the training
set. Then the algorithm modifies each basis function sequentially to exactly minimize the squared
reconstruction cost. The convolutional MP implementation of [6] indeed uses K-SVD for learning
and we here show how K-SVD can be adapted to block-based representations.
At every iteration of K-SVD, given a set of active basis functions per image obtained with an inference method, the objective is to minimize the reconstruction cost with respect to the basis functions
and coefficients simultaneously [10]. We consider each basis function Akl sequentially, extract all
image patches {yi }i where that basis function is active and assume all coefficients for the other basis
functions are fixed. In the convolutional setting, these patches are extracted from locations in the
images where each basis function is active [6]. We add back the contribution of basis function Akl
to each patch in {yi }i and now make the observation that to minimize the reconstruction error with
a single basis function A?kl we must find the direction in pixel space where most of the variance in
{yi }i lies. This can be done with an SVD decomposition followed by retaining the first principal
vector A?kl . The new reconstructions for each patch yi are yi ? A?kl (A?kl )T yi and with this new
residual we move on to the next basis function to be reestimated.
By analogy, in block K-SVD we are given a set of active blocks per image, each block consisting of
K basis functions. We consider each block Ak sequentially, extract all image patches {yi }i where
that block is active and assume all coefficients for the other blocks are fixed. We add back the
contribution of block Ak to each patch in {yi }i and like before perform an SVD decomposition
of these residuals. However, we are now looking for a K-dimensional subspace where most of
the variance in {yi }i lies and this is exactly achieved by considering the first K principal vectors
returned by SVD. The reconstructions for each patch are yi ? A?k (A?k )T yi where A?k are the first
K principal vectors. On a more technical note, after each iteration of K-SVD we centered the
parameters spatially so that the center of mass of the first direction of variability in each block was
aligned to the center of its window, otherwise the basis functions did not center by themselves.
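A minimal sketch of this update for a single block is given below, assuming the patches have already been prepared as described (the other blocks' contributions subtracted, this block's contribution added back) and ignoring the overlap bias discussed next.

```python
import numpy as np

def block_ksvd_update(patches, n_attrs):
    """One block K-SVD step. patches: (n, d*d) matrix whose rows are the
    vectorized patches where this block is active. Returns the re-estimated
    block (first n_attrs principal directions) and the new residuals."""
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    A_new = Vt[:n_attrs].T          # columns are the new unit-norm A^{kl}
    coeffs = patches @ A_new        # (A^k)^T y_i
    recons = coeffs @ A_new.T       # y_i <- A^k (A^k)^T y_i
    return A_new, patches - recons  # residuals passed on to the next block
```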
Although K-SVD was an order of magnitude faster than GD and converged in practice, we noted
that in the convolutional setting K-SVD is biased. This is because at the step of re-estimating a
block Ak from a set of patches {yi }i , some of these patches may be spatially overlapping in the
full image. Therefore, the subspaces in Ak are driven to explain the residual at some pixels multiple
times. One way around the problem would be to enforce non-overlapping windows during inference,
Figure 2: a. Typical recovered parameters with incremental gradient descent learning on GCaMP6
fluorescent images. Each column is a block and is sorted in the order of variance from the SVD
decomposition. Left columns capture the structure of cell somatas, while right columns represent
dendrite fragments. b. Like (a) but with incremental block K-SVD. Similar subspaces are recovered
with ten times fewer iterations. c. and d. Typical failure modes of learning with non-incremental
gradient descent and block K-SVD, respectively. The subspaces from (a) appear mixed together. e.
Subspaces obtained from Nissl-stained slices of cortex.
but in our images many cell pairs touch and would in fact require overlapping windows. Instead,
we decided to fine-tune the parameters returned by block K-SVD with a few iterations of gradient
descent which worked well in practice and in simulations recovered good model parameters with
little further computational effort.
3 Results
3.1 Qualitative results on fluorescent images of neurons
The main applications of our work are to Nissl-stained slices and to fields of neurons and neuropil
imaged with a two-photon microscope (figure 1(a)). The neurons were densely labeled with a fluorescent calcium indicator GCaMP6 in a small area of the mouse somatosensory (barrel) cortex.
While the mice were either anesthetized or awake, their whiskers were stimulated which activated
corresponding barrel cortex neurons, leading to an influx of calcium into the cells and consequently
an increase in fluorescence which was reported by the two-photon microscope. Although cell somas
receive a large influx of calcium, dendrites and axons can also be seen. Individual images of the
fluorescence can be very noisy purely due to the low number of photons released over each exposure. Better spatial accuracy can be obtained at the expense of temporal accuracy or at the expense
of a smaller field of view. In practice, cell locations can be identified based on the mean images
recorded over the duration of an entire experiment, in our case 1000 or 5000 frames. Using 18 images like the one in figure 1(b) we learned a full model with two types of objects each with three
subspaces. One of the object types, the left column in figure 2(a) was clearly a model of single
neurons. The right column of figure 2(a) represented small pieces of dendrite that were also highly
fluorescent. Note how within a block each of the two objects includes dimensions of variability that
capture anisotropies in the shape of the cell or dendritic fragments. Figure 3(a) shows in alternating
odd rows patches from the training set identified by the algorithm to contain cells and the respective
reconstructions in the even rows. Note that while most cells are ring-shaped, some appear filled and
some appear to be larger and the model?s flexibility is sufficient to capture these variations. Figure
2(c) shows a typical failure for gradient based learning that motivated us to use incremental block
learning. The two subspaces recovered in figure 2(a) are mixed in figure 2(c) and the likelihood
from equation 1 is correspondingly lower.
3.2 Simulated data
We ran extensive experiments on simulated data to assess the algorithm?s ability to learn and infer
cell locations. There are two possible failure modes: the inference algorithm might not be accurate
enough or the learning algorithm might not recover good parameters. We address each of these
failure modes separately. We wanted to have simulated data as similar as possible to the real data so
we first fitted a model to the GCaMP6 data. We then took the learned model and generated a new
dataset from it using the same number of objects of each type and similar amounts of Gaussian noise
as the real images. To generate diverse shapes of cells, we fit a K-dimensional multivariate Gaussian
Figure 3: a. Patches from the GCaMP6 training images (odd rows) and their reconstructions (even
rows) with the subspaces shown in figure 2(b). b. One area from a Nissl-stained image together with
a human segmentation (open circles) and the model segmentation (stars). Larger zoom versions are
available in the supplementary material.
to the posteriors of each block on the real data and generated coefficients from this model for the
simulated images. Supplemental figure 6 shows a simulated image and it can be seen to resemble
images in the training set. Note that we are not modelling some of the structured variability in the
noise, for example the blood vessels and dendrites visible in figure 1(b). This structured variability
is the likely reason why the model performs better on simulated than on real images.
3.2.1 Inference quality of convolutional block pursuit
We kept the ground truths for the simulated dataset and investigated how well we can recover cell
locations when we know perfectly what the simulation parameters were. There is one free parameter
in our model that we cannot learn automatically which is the average number of extracted objects
per image. We varied this parameter and report ROC curves for true positives and false positives as
we vary the number of extracted coefficients. Sometimes we observed that cells were identified not
exactly at the correct location but one or a few pixels away. Such small deviations are acceptable
in practice, so we considered inferred cells as correctly identified if they were within four pixels of
the correct location (cells were 8-16 pixels in diameter). We enforced that a true cell could only be
identified once. If the algorithm made two predictions within ±4 pixels of a true cell, only the first
of these was considered a true positive. Figure 4(a) reports the typical performance of convolutional
block pursuit. We also investigated the quality of inference without considering the full structure of
the subspaces in each object. Using a single subspace per object is equivalent to matching pursuit,
achieved significantly worse performance and saturated at a smaller number of true positives because
the model could not recognize some of the variations in cell shape.
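The scoring rule can be sketched as below; the use of the Chebyshev (per-coordinate) distance for the "within ±4 pixels" criterion is our reading of the text and could equally well be Euclidean.

```python
import numpy as np

def score_detections(pred_xy, true_xy, tol=4):
    """Count true/false positives, visiting predictions in extraction order;
    each ground-truth cell can be claimed at most once."""
    true_xy = np.asarray(true_xy, dtype=float)
    claimed = np.zeros(len(true_xy), dtype=bool)
    tp = fp = 0
    for pred in pred_xy:
        dist = np.abs(true_xy - np.asarray(pred, dtype=float)).max(axis=1)
        hits = np.flatnonzero((dist <= tol) & ~claimed)
        if hits.size:
            claimed[hits[0]] = True
            tp += 1
        else:
            fp += 1
    return tp, fp
```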
3.2.2 Learning quality of K-SVD + gradient descent
We next tested how well the algorithm recovers the generative parameters. We assume that the
model knows how many object types there are and how many attributes each object type has. To
compare the various learning strategies we could in principle just evaluate the joint log-likelihood
of equation 1. However the differences, although consistent, were relatively small and hard to interpret. More relevant to us is the ROC performance in recovering correctly cell locations. Block
K-SVD consistently recovers good parameters but does not perform quite as well as the true parameters because of its bias (figure 4(b)). However refinement with GD consistently recovers the best
parameters which approach the performance of the true generative parameters. We also asked how
well the model recovers the parameters when the true number of objects per image is unknown, by
running several experiments with different mean numbers of objects per image. The performance of
the learned subspaces is reported in figure 4(c). Although the correct number of elements per image
was 600, learning with as few as 200 or as many as 1400 objects resulted in equally well-performing
models. If performance on simulated data is at all indicative of behavior on real data, we conclude
that our algorithm is not sensitive to the only free parameter in the model.
[Figure 4: five ROC panels of true positives versus false positives. (a) Inference with known parameters: B1P (MP), B2P, B3P, B3P-learn, Oracle. (b) Learning + inference: K-SVD, K-SVD + GD, known parameters. (c) Learning with X elements per image, X = 200 to 1400 (true value 600), versus known parameters. (d) Compare against humans, GCaMP6 fluorescence: BP1, BP2, BP3, Human1, Human2, Oracle. (e) Compare against humans, Nissl stains: BP1, BP2, BP4, Human1, Human2, Oracle.]
Figure 4: ROC curves show the model's behavior on simulated data (a-c) and on manually-segmented GCaMP6 images (d) and Nissl-stained images (e). a. Inference with block pursuit with all three subspaces per object (B3P) as well as block pursuit with only the first or first two principal subspaces (B1P and B2P). We also show for comparison the performance of B3P with model parameters identified by learning. Notice the small number of false negatives when a large proportion of the cells are identified. The cells not identified were too dim to pick out even with a large number of false positives allowed, hence the quick saturation of the ROC curve. b. Ten runs of block K-SVD followed by gradient descent. Refining with GD improved performance. c. Not knowing the average number of elements per image does not make a difference on simulated data.
3.3 Comparison with human segmentation on biological images
We compare the segmentation of the model with manual segmentations on one example each of the
GCaMP6 and Nissl-stained images (figures 4(d) and 4(e)). The human segmenters were instructed
to locate cells in approximately the order of confidence, thus producing an ordering similar to the
ordering returned by the algorithm. As we retain more cells from that ordering we can build ROC
curves showing the agreement of the humans with each other, and of the model?s segmentation to
the humans?. We found that using multiple templates per block helped the model agree more with
the human segmentations. In the case of the Nissl-stain, block coding with four templates identified
fifty more cells than matching pursuit. Although the model generally performs below inter-human
agreement, the gap is sufficiently small to warrant practical use. In addition, a post-hoc analysis
suggests that many of the model's false positives are in fact cells that were not selected in the manual
segmentations. Examples of these false positives can be seen both in figure 3(b) and in figures in
the supplementary material. As we anticipated in the introduction, a standard method based on
thresholded and localized correlation maps only reached 25 true positives at 50 false positives and
is not shown in figure 4(d).
4 Conclusions
We have presented an image model that can be used to automatically and effectively infer the locations and shapes of cells from biological image data. This application of generative image models is
to our knowledge novel and should allow automating many types of biological studies. Our contribution to the image modelling literature is to extend the sparse block coding model presented in [8]
to the convolutional setting where each block is allowed to be present at any location in an image.
We also derived convolutional block pursuit, a greedy inference algorithm which scales gracefully
to images of large dimensions with many possible object types in the generative model. For learning
the model, we extended the K-SVD learning algorithm to the block-based and convolutional representation. We identified a bias in convolutional K-SVD and used gradient descent to fine-tune the
model parameters towards good local optima.
On simulated data, convolutional block pursuit recovers with good accuracy cell locations in simulated biological images and the learning rule recovers well and consistently the parameters of the
generative model. Using the block pursuit algorithm recovers significantly more cells than simple
matching pursuit. On data from calcium imaging experiments and nissl-stained tissue, the model
succeeds in recovering cell locations and learns good models of the variability among different cell
shapes.
References
[1] M Oberlaender, VJ Dercksen, R Egger, M Gensel, B Sakmann, and HC Hege. Automated three-dimensional detection and counting of neuron somata. J Neuroscience Methods, 180:147-160, 2009.
[2] EA Mukamel, A Nimmerjahn, and MJ Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63:747-760, 2009.
[3] I Ozden, HM Lee, MR Sullivan, and SSH Wang. Identification and clustering of event patterns from in vivo multiphoton optical recordings of neuronal ensembles. J Neurophysiol, 100:495-503, 2008.
[4] K Kavukcuoglu, P Sermanet, YL Boureau, K Gregor, M Mathieu, and Y LeCun. Learning convolutional feature hierarchies for visual recognition. Advances in Neural Information Processing, 2010.
[5] K Gregor, A Szlam, and Y LeCun. Structured sparse coding via lateral inhibition. Advances in Neural Information Processing, 2011.
[6] A Szlam, K Kavukcuoglu, and Y LeCun. Convolutional matching pursuit and dictionary training. arXiv, page 1010.0422v1, 2010.
[7] A Hyvarinen, J Hurri, and PO Hoyer. Natural Image Statistics. Springer, 2009.
[8] P Berkes, RE Turner, and M Sahani. A structured model of video produces primary visual cortical organisation. PLoS Computational Biology, 5, 2009.
[9] SG Mallat and Z Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415, 1993.
[10] M Aharon, M Elad, and A Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311-4322, 2006.
Mapping cognitive ontologies to and from the brain
Yannick Schwartz, Bertrand Thirion, and Gael Varoquaux
Parietal Team, Inria Saclay Ile-de-France
Saclay, France
firstname.lastname@inria.fr
Abstract
Imaging neuroscience links brain activation maps to behavior and cognition via
correlational studies. Due to the nature of the individual experiments, based on
eliciting neural response from a small number of stimuli, this link is incomplete,
and unidirectional from the causal point of view. To come to conclusions on the
function implied by the activation of brain regions, it is necessary to combine a
wide exploration of the various brain functions and some inversion of the statistical inference. Here we introduce a methodology for accumulating knowledge
towards a bidirectional link between observed brain activity and the corresponding function. We rely on a large corpus of imaging studies and a predictive engine.
Technically, the challenges are to find commonality between the studies without
denaturing the richness of the corpus. The key elements that we contribute are
labeling the tasks performed with a cognitive ontology, and modeling the long
tail of rare paradigms in the corpus. To our knowledge, our approach is the first
demonstration of predicting the cognitive content of completely new brain images.
To that end, we propose a method that predicts the experimental paradigms across
different studies.
1 Introduction
Functional brain imaging, in particular fMRI, is the workhorse of brain mapping, the systematic
study of which areas of the brain are recruited during various experiments. To date, 33K papers on
PubMed mention "fMRI", revealing an accumulation of activation maps related to specific tasks or
cognitive concepts. From this literature has emerged the notion of brain modules specialized to a
task, such as the celebrated fusiform face area (FFA) dedicated to face recognition [1]. However,
the link between the brain images and high-level notions from psychology is mostly done manually,
due to the lack of co-analysis framework. The challenges in quantifying observations across experiments, let alone at the level of the literature, leads to incomplete pictures and well-known fallacies.
For instance a common trap is that of reverse inferences [2]: attributing a cognitive process to a
brain region, while the individual experiments can only come to the conclusion that it is recruited
by the process under study, and not that the observed activation of the region demonstrates the engagement of the cognitive process. Functional specificity can indeed only be measured by probing a
large variety of functions, which exceeds the scale of a single study. Beyond this lack of specificity,
individual studies are seldom comprehensive, in the sense that they do not recruit every brain region.
Prior work on such large scale cognitive mapping of the brain has mostly relied on coordinate-based
meta-analyses, which forgo activation maps and pool results across publications via the reported Talairach coordinates of activation foci [3, 4]. While the underlying thresholding of statistical maps
and extraction of local maxima leads to a substantial loss of information, the value of this approach
lies in the large amount of studies covered: Brainmap [3], that relies on manual analysis of the
literature, comprises 2 298 papers, while Neurosynth [4], that uses text mining, comprises 4 393
papers. Such large corpuses can be used to evaluate the occurrence of the cognitive and behavioral
terms associated with activations and formulate reverse inference as a Bayesian inversion on standard (forward) fMRI inference [2, 4]. On the opposite end of the spectrum, [5] shows that using a
machine-learning approach on studies with different cognitive content can predict this content from
the images, thus demonstrating principled reverse inference across studies. Similarly, [6] have used
image-based classification to challenge the vision that the FFA is by itself specific of faces. Two
trends thus appear in the quest for explicit correspondences between brain regions and cognitive
concepts. One is grounded on counting term frequency on a large corpus of studies described by
coordinates. The other uses predictive models on images. The first approach can better define functional specificity by avoiding the sampling bias inherent to small groups of studies; however each
study in a coordinate-based meta-analysis brings only very limited spatial information [7].
Our purpose here is to outline a strategy to accumulate knowledge from a brain functional image
database in order to provide grounds for principled bidirectional reasoning from brain activation
to behavior and cognition. To increase the breadth in co-analysis and scale up from [5], which
used only 8 studies with 22 different cognitive concepts, we have to tackle several challenges. A
first challenge is to find commonalities across studies, without which we face the risk of learning
idiosyncrasies of the protocols. For this very reason we choose to describe studies with terms that
come from a cognitive paradigm ontology instead of a high-level cognitive process one. This setting
enables not only to span the terms across all the studies, but also to use atypical studies that do
not clearly share cognitive processes. A second challenge is that of diminishing statistical power
with increasing number of cognitive terms under study. Finally, a central goal is to ensure some
sort of functional specificity, which is hindered by the data scarcity and ensuing biases in an image
database.
In this paper, we gather 19 studies, comprising 131 different conditions, which we labeled with
19 different terms describing experimental paradigms. We perform a brain mapping experiment
across these studies, in which we consider both forward and reverse inference. Our contributions
are two-fold: on the one hand we show empirical results that outline specific difficulties of such
co-analysis, on the second hand we introduce a methodology using image-based classification and
a cognitive-paradigm ontology that can scale to large set of studies. The paper is organized as
following. In section 2, we introduce our methodology for establishing correspondence between
studies and performing forward and reverse inference across them. In section 3, we present our data,
a corpus of studies and the corresponding paradigm descriptions. In section 4 we show empirically
that our approach can predict these descriptions in unseen studies, and that it gives promising maps
for brain mapping. Finally, in section 5, we discuss the empirical findings in the wider context of
meta-analyses.
2 Methodology: annotations, statistics and learning
2.1 Labeling activation maps with common terms across studies
A standard task-based fMRI study results in activation maps per subject that capture the brain response to each experimental condition. They are combined to single out responses to high-level
cognitive functions in so-called contrast maps, for which the inference is most often performed at
the group level, across subjects. These contrasts can oppose different experimental conditions, some
to capture the effect of interest while others serve to cancel out non-specific effects. For example,
to highlight computation processes, one might contrast visual calculation with visual sentences, to
suppress the effect of the stimulus modality (visual instructions), and the explicit stimulus (reading
the numbers).
When considering a corpus of different studies, finding correspondences between the effects highlighted by the contrasts can be challenging. Indeed, beyond classical localizers, capturing only very
wide cognitive domains, each study tends to investigate fairly unique questions, such as syntactic
structure in language rather than language in general [8]. Combining the studies requires engineering meta-contrasts across studies. For this purpose, we choose to affect a set of terms describing
the content of each condition. Indeed, there are important ongoing efforts in cognitive science and
neuroscience to organize the scientific concepts into formal ontologies [9]. Taking the ground-level
objects of these gives a suitable family of terms, a taxonomy to describe the experiments.
2.2 Forward inference: which regions are recruited by tasks containing a given term?
Armed with the term labels, we can use the standard fMRI analysis framework and ask using a
General Linear Model (GLM) across studies, for each voxel of the subject-level activation images, whether it is significantly related to a term in the corpus of images. If $x \in \mathbb{R}^p$ is the observed activation map with $p$ voxels, the GLM tests $P(x_i \neq 0 \mid T)$ for each voxel $i$ and term $T$. This test relies on a linear model that assumes that the response in each voxel is a combination of the different factors and on
classical statistics:
$$x = Y\beta + \epsilon,$$
where $Y$ is the design matrix yielding the occurrence of terms and $\beta$ the term effects. Here, we
assemble term-versus-rest contrasts, that test for the specific effect of the term. The benefit of the
GLM formulation is that it estimates the effect of each term partialing out the effects of the other
terms, and thus imposes some form of functional specificity in the results. Term co-occurrence in
the corpus can however lead to collinearity of the regressors.
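A minimal NumPy sketch of this across-study GLM with term-versus-rest contrasts is given below; it returns uncorrected t-maps, leaving out the FWER correction applied in section 4, and the array shapes are our assumption about how the maps are stacked.

```python
import numpy as np

def term_glm(X, Y):
    """X: (n_maps, n_voxels) activation maps; Y: (n_maps, n_terms) binary
    term-occurrence design. Returns term effects and per-voxel t-maps for
    each term-versus-rest contrast c = e_j."""
    beta = np.linalg.pinv(Y) @ X                  # (n_terms, n_voxels)
    resid = X - Y @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(Y)
    sigma2 = (resid ** 2).sum(axis=0) / dof       # per-voxel noise variance
    var_scale = np.diag(np.linalg.pinv(Y.T @ Y))  # c^T (Y^T Y)^{-1} c per term
    t_maps = beta / np.sqrt(sigma2[None, :] * var_scale[:, None])
    return beta, t_maps
```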
2.3 Reverse inference: which regions are predictive of tasks containing a given term?
Poldrack 2006 [2] formulates reverse inferences as reasoning on P(T |x), the probability of a term
T being involved in the experiment given the activation map x. For coordinate-based meta analysis,
as all that is available is the presence or the absence of significant activations at a given position, the
information on $x$ boils down to $\{i, x_i \neq 0\}$. Approaches to build a reverse inference framework upon this description have relied on Bayesian inversion to go from $P(x_i \neq 0 \mid T)$, as output by the GLM, to $P(T \mid x_i \neq 0)$ [2, 4]. In terms of predictive models on images, this approach can be understood as a naive Bayes predictor: the distribution of the different voxels are learned independently conditional to each term, and Bayes' rule is used for prediction. Learning voxel-level parameters
independently is a limitation as it makes it harder to capture distributed effects, such as large-scale
functional networks, that can be better predictors of stimuli class than localized regions [6]. However, learning the full distribution of x is ill-posed, as x is high-dimensional. For this reason, we
must resort to statistical learning tools.
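For reference, the Bayesian inversion used by the coordinate-based approaches fits in a few lines; the uniform prior over terms used by default below is an assumption of ours, not a choice documented in [2, 4].

```python
import numpy as np

def reverse_posterior(p_act_given_term, term_prior=None):
    """P(T | x_i != 0) obtained by Bayes' rule from P(x_i != 0 | T).
    p_act_given_term: (n_terms, n_voxels) forward activation probabilities."""
    n_terms = p_act_given_term.shape[0]
    prior = (np.full(n_terms, 1.0 / n_terms)
             if term_prior is None else np.asarray(term_prior))
    joint = p_act_given_term * prior[:, None]
    return joint / joint.sum(axis=0, keepdims=True)
```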
We choose to use an $\ell_2$-regularized logistic regression to directly estimate the conditional probability
P(T |x) under a linear model. The choice of linear models is crucial to our brain-mapping goals,
as their decision frontier is fully represented by a brain map¹ $w \in \mathbb{R}^p$. However, as the images are
spatially smooth, neighboring voxels carry similar information, and we use feature clustering with
spatially-constrained Ward clustering [10] to reduce the dimensionality of the problem. We further
reduce the dimensionality by selecting the most significant features with a one-way ANOVA. We
observe that the classification performance is not hindered if we reduce the data from 48K voxels
to 15K parcels² and then select the 30% most significant features. We
to these parameters, and our choice is motivated by computational concerns. We indeed use a
leave-one-study out cross validation scheme, nested with a 10-fold stratified shuffle split to set the
$\ell_2$ regularization parameter. As a result, we need to estimate 1200 models per term label, which
amounts to over 20K in total. The dimension reduction helps making the approach computationally
tractable.
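The pipeline can be sketched with scikit-learn as below; the connectivity matrix encoding voxel adjacency is assumed to be given, n_parcels matches the 15K figure above, and C stands in for the value that the nested 10-fold stratified shuffle split would select.

```python
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def make_term_classifier(connectivity, n_parcels=15000, C=1.0):
    """One binary presence/absence classifier for a single term."""
    return Pipeline([
        # spatially-constrained Ward clustering of voxels into parcels
        ("ward", FeatureAgglomeration(n_clusters=n_parcels, linkage="ward",
                                      connectivity=connectivity)),
        # keep the 30% most significant parcels (one-way ANOVA)
        ("anova", SelectPercentile(f_classif, percentile=30)),
        ("logistic", LogisticRegression(penalty="l2", C=C)),
    ])
```

One such classifier is trained per term, one-versus-all within each category, under the leave-one-study-out scheme.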
The learning task is rendered difficult by the fact that it is highly multi-class, with a small number
of samples in some classes. To divide the problem into simpler learning tasks, we use the fact that our
terms are derived from an ontology, and thus can be grouped by parent category. In each category,
we apply a strategy similar to one-versus-all: we train a classifier to predict the presence of each
term, opposed to the others. The benefits of this approach are i) that it is suited to the presence of
multiple terms for a map, and ii) that the features it highlights are indeed selective for the associated
term only.
Finally, an additional challenge faced by the predictive learning task is that of strongly imbalanced
classes: some terms are very frequent, while others hardly present. In such a situation, an empirical
risk minimizer will mostly model the majority class. Thus we add sample weights inverse of the
¹ In this regard, the Naive Bayes prediction strategy does not yield clear cut maps, as its decision boundary is a conic section.
² Reducing even further down to 2K parcels does not impact the classification performance, however the brain maps $w$ are then less spatially resolved.
CATEGORY             TERMS
Stimulus modality    visual, auditory
Explicit stimulus    words, shapes, digits, abstract patterns, non-vocal sounds, scramble, face
Instructions         attend, read, move, track, count, discriminate, inhibit
Overt response       saccades, none, button press

Table 1: Subset of CogPO terms and categories that are present in our corpus
population imbalance in the training set. This strategy is commonly used to compensate for covariate shift [11]. However, as our test set is drawn from the same corpus, and thus shows the same
imbalance, we apply an inverse bias in the decision rule of the classifier by shifting the probability
output by the logistic model: if P is the probability of the term presence predicted by the logistic,
we use: $P_{biased} = \pi_{term} P$, where $\pi_{term}$ is the fraction of train samples containing the term.
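Both corrections fit in a few lines, as sketched below; keeping the usual 0.5 decision threshold after the $\pi_{term}$ shift is our assumption, and the variable names are ours.

```python
import numpy as np

def inverse_frequency_weights(y):
    """Training sample weights inversely proportional to class frequency."""
    pos = y.mean()
    return np.where(y == 1, 0.5 / pos, 0.5 / (1.0 - pos))

def biased_presence(clf, X, pi_term, threshold=0.5):
    """Shift the logistic output by the term's training frequency."""
    prob = clf.predict_proba(X)[:, 1]   # P(term | x) from the weighted model
    return pi_term * prob > threshold   # P_biased = pi_term * P
```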
3 An image database
3.1 Studies
We need a large collection of task fMRI datasets to cover the cognitive space. We also want to avoid
particular biases regarding imaging methods or scanners, and therefore prefer images from different
teams. We use 19 studies, mainly drawn from the OpenfMRI project [12], which despite remaining
small in comparison to coordinate databases, is as of now the largest open database for task fMRI.
The datasets include risk-taking tasks [13, 14], classification tasks [15, 16, 17], language tasks [18, 8,
19], stop-signal tasks [20], cueing tasks [21], object recognition tasks [22, 23], functional localizers
tasks [24, 25], and finally a saccades & arithmetic task [26]. The database accounts for 486 subjects,
131 activation map types, and 3 826 individual maps, the number of subjects and map types varying
across the studies. To avoid biases due to heterogeneous data analysis procedures, we re-process
from scratch all the studies with the SPM (Statistical Parametric Mapping) software.
3.2 Annotating
To tackle highly-multiclass problems, computer vision greatly benefits from the WordNet ontology
[27] to standardize annotation of pictures, but also to impose structure on the classes. The neuroscience community recognizes the value of such vocabularies and develops ontologies to cover the
different aspects of the field such as protocols, paradigms, brain regions and cognitive processes.
Among the many initiatives, CogPO (The Cognitive Paradigm Ontology) [9] aims to represent the
cognitive paradigms used in fMRI studies. CogPO focuses on the description of the experimental
conditions characteristics, namely the explicit stimuli and their modality, the instructions, and the
explicit responses and their modality. Each of those categories use standard terms to specify the
experimental condition. As an example a stimulus modality may be auditory or visual, the explicit
stimulus a non-vocal sound or a shape. We use this ontology to label with the appropriate terms all
the experimental conditions from the database. The categories and terms that we use are listed in
Table 1.
4 Experimental results
4.1 Forward inference
In our corpus, the occurrence of some terms is too correlated and gives rise to co-linear regressors.
For instance, we only have visual or auditory stimulus modalities. While a handful of contrasts
display both stimulus modalities, the fact that a stimulus is not auditory mostly amounts to it being
visual. For this reason, we exclude from our forward inference visual, which will be captured by
negative effects on auditory, and digits, that amounts mainly to the instruction being count. We
fit the GLM using a design matrix comprising all the remaining terms, and consider results with
p-values corrected for multiple comparisons at a 5% family-wise error rate (FWER). To evaluate the
spatial layout of the different CogPO categories, we report the different term effects as outlines in
the brain, and show the 5% top values for each term to avoid clutter in Figure 3. Forward inference
outlines many regions relevant to the terms, such as the primary visual and auditory systems on the
stimulus modality maps, or pattern and object-recognition areas in the ventral stream, on the explicit
stimulus maps.
It can be difficult to impose a functional specificity in forward inference because of several phenomena: i) the correlation present in the design matrix makes it hard to separate highly associated
(often anti-correlated) factors, as can be seen in Fig. 1, right. ii) the assumption inherent to this
model that a certain factor is expressed identically across all experiments where it is present. This
assumption ignores modulations and interactions effects that are very likely to occur; however their
joint occurrence is related to the protocol, making it impossible to disentangle these factors with
the database used here. iii) important confounding effects are not modeled, such as the effect of
attention. Indeed the count map captures networks related to visuo-spatial orientation and attention:
a dorsal attentional network, and a salience network (insulo-cingulate network [28]) in Figure 3.
4.2 Reverse inference
The promise of predictive modeling on a large statistical map database is to provide principled reverse inference, going from observations of neural activity to well-defined cognitive processes. The
classification model however requires a careful setting to be specific to the intended effect. Figure 1
highlights some confounding effects that can be captured by a predictive model: two statistical maps originating from the same study are closer than two maps labeled as sharing a same experimental condition, in the sense of a Euclidean distance. We mitigate the capture of undesired effects with different strategies. First we use term labels that span across studies, and refrain from using those that
were not present in at least two. We ensure this way that no term is attached to a specific study.
Second, we only test the classifiers on previously unseen studies and if possible subjects, using for
example a leave-one-study out cross validation scheme. A careless classification setting can very
easily lead to training a study detector.
Figure 2 summarizes the highly multi-class and imbalanced problem that we face: the distribution
of the number of samples per class displays a long tail. To find non-trivial effects we need to be able
to detect the under-represented terms as well as possible. As a reference method, we use a K-NN,
as it is in general a fairly good approach for highly multi-class problems. Its training is independent
of the term label structure and predicts the map labels instead. It subsequently assigns to a new map
terms that are present in more than half of its nearest neighbors from the training³. We compare this
approach to training independent predictive models for each term and use three types of classifiers:
a naive Bayes, a logistic regression, and a weighted logistic regression. Figure 2 shows the results
for each method in terms of precision and recall, standard information-retrieval metrics. Note that
the performance scores mainly follow the class representation, i.e. the number of samples per class
in the train set. Considering that rare occurrences are also those that are most likely to provide
new insight, we want a model that promotes recall over precision in the tail of the term frequency
distribution. On the other hand, well represented classes are easier to detect and correspond to
massive, well-known mental processes. For these, we want to favor precision, i.e. not affecting the
corresponding term to other processes, as these term are fairly general and non-descriptive.
Overall the K-NN has the worst performance, both in precision and recall. It confirms the idea
outlined in Figure 1, that an Euclidean distance alone is not appropriate to discriminate underlying
brain functions because of overwhelming confounding effects⁴. Similarly, the naive Bayes performs poorly, with very high recall and low precision scores, which leads to a lack of functional specificity.
On the contrary, the methods using a logistic regression show better results, and yield performance
scores above the chance levels which are represented by the red horizontal bars for the leave-onestudy out cross validation scheme in Figure 2. Interestingly, switching the cross validation scheme to
a leave-one-laboratory out does not change the performance significantly. This result is important, as
it confirms that the classifiers do not rely on specificities from the stimuli presentation in a research
group to perform the prediction. We mainly use data drawn from 2 different groups in this work,
and use those data in turn to train and test a logistic regression model. The predicitions scores for
³ K was chosen in a cross-validation loop, varying between 5 and 20. Such small numbers for K are useful to avoid penalizing under-represented terms of rare classes in the vote of the KNN. For this reason we do not explore above K=20, in respect to the small number of occurrences of the faces term.
⁴ Note that the picture does not change when $\ell_1$ distances are used instead of $\ell_2$ distances.
[Figure 1, left panel: histogram of the number of map pairs as a function of the distance between two maps, with separate curves for all pairs, pairs sharing the same label, the same study, and the same contrast.]
Figure 1: (Left) Histogram of the distance
between maps owing to their commonalities:
study of origin, functional labels, functional
contrast. (Right) Correlation of the design
matrix.
the terms present in both groups are displayed in Figure 2, with the chance levels represented by the
green horizontal bars for this cross validation scheme.
We evaluate the spatial layout of maps representing CogPO categories for reverse inference as well,
and report boundaries of the 5% top values from the weighted logistic coefficients. Figure 3 reports
the outlined regions that include motor cortex activations in the instructions category, and activations
in the auditory cortex and FFA respectively for the words and faces terms in the explicit stimulus
category. Despite being very noisy, those regions report findings consistent with the literature and
complementary to the forward inference maps. For instance, the move instruction map comprises
the motor cortex, unlike for forward inference. Similarly, the saccades over response map segments
the intra-parietal sulci and the frontal eye fields, which corresponds to the well known signature of
saccades, unlike the corresponding forward inference map, which is very non-specific of saccades⁵.
5 Discussion and conclusion
Linking cognitive concepts to brain maps can give solid grounds to the diffuse knowledge derived
in imaging neuroscience. Common studies provide evidence on which brain regions are recruited in
given tasks. However coming to conclusions on the tasks in which regions are specialized requires
data accumulation across studies to overcome the small coverage in cognitive domain of the tasks
assessed in a single study. In practice, such a program faces a variety of roadblocks. Some are
technical challenges, such as building a statistical predictive engine that can overcome the curse of dimensionality, while others are core to meta-analysis. Indeed, finding correspondence between
studies is a key step to going beyond idiosyncrasies of the experimental designs. Yet the framework
should not discard rare but repeatable features of the experiments as these provide richness to the
description of brain function.
We rely on ontologies to solve the correspondence problem. It is an imperfect solution, as the
labeling is bound to be inexact, but it brings the benefit of several layers of descriptions and thus
enable us to fraction the multi-class learning task in simpler tasks. A similar strategy based on
WordNet was essential to progress in object recognition in the field of computer vision [27]. Previous
work [5] showed high classification scores for several mental states across multiple studies, using
cross-validation with a leave-one-subject out strategy. However, as this work did not model common
factors across studies, the mental state was confounded by the study. In every study, a subject was
represented by a single statistical map, and there is therefore no way to validate whether the study or
the mental state was actually predicted. As figure 1 shows, predicting studies is much easier albeit of
little neuroscientific interest. Interestingly, [5] also explores the ability of a model to be predictive on
two different studies sharing the same cognitive task, and a few subjects. When using the common
subjects, their model performs worse than without these subjects, as it partially mistakes cognitive
⁵ This failure of forward inference is probably due to the small sample size of saccades.
Figure 2: Precision and recall for all terms per classification method, and term representation in
the database. The * denotes a leave-one-laboratory out cross validation scheme, associated with
the green bars representing the chance levels. The other methods use a leave-one-study out cross
validation, whose chance levels are represented by the red horizontal bars.
tasks for subjects. This performance drop illustrates that a classifier is not necessarily specific to the
desired effect, and in this case detects subjects in place of tasks to a certain degree. To avoid this
loophole, we included in our corpus only studies that had terms in common with at least on other
study and performed cross-validation by leaving a study out, and thus predicting from completely
new activation maps. The drawback is that it limits directly the number of terms that we can attempt
to predict given a database, and explains why we have fewer terms than [5] although we have more
than twice as many studies. Indeed, in [5], the terms cannot be disambiguated from the studies.
Our labeled corpus is riddled with very infrequent terms, giving rise to class-imbalance problems
in which the rare occurrences are the most difficult to model. Interestingly, though coordinate
databases such as Neurosynth [4] cover a larger set of studies and a broader range of cognitive
processes, they suffer from a similar imbalance bias, which is given by the state of the literature.
Indeed, looking at the terms in Neurosynth that are the closest to the ones we use in this work,
we find that motor is cited in 1090 papers, auditory in 558, word in 660, and the number goes as low
as 55 and 31 for saccade and calculation respectively. Consequently, these databases may also
yield inconsistent results. For instance, the reverse inference map corresponding to the term digits
is empty, whereas the forward inference map is well defined (see footnote 6). Neurosynth draws from almost
5K studies while our work is based on 19 studies; however, unlike Neurosynth, we are able to
benefit from the different contrasts and subjects in our studies, which provides us with 3,826 training
samples. In this regard, our approach is particularly interesting and can hope to achieve competitive
results with far fewer studies.
This paper shows the first demonstration of zero-shot learning for the prediction of tasks from brain
activity: a paradigm description is given for images from unseen studies, acquired on different scanners, in different institutions, on different cognitive domains. More importantly than the prediction
per se, we lay the foundation of a framework to integrate and co-analyze many studies. This data
6. http://neurosynth.org/terms/digits
[Figure 3 panels: forward inference atlas (left) and reverse inference atlas (right); rows show instructions and terms (count, inhibit, discriminate, read, move, track, attend) on slices x=-46, y=-60, z=49.]
Figure 3: Maps for the forward inference (left) and the reverse inference (right) for each term category. To minimize clutter, we set the outline so as to encompass 5% of the voxels in the brain on
each figure, thus highlighting only the salient features of the maps. In reverse inference, to reduce
the visual effect of the parcellation, maps were smoothed with a σ of 2 voxels.
accumulation, combined with the predictive model can provide good proxies of reverse inference
maps, giving regions whose activation supports certain cognitive functions. These maps should, in
principle, be better suited for causal interpretation than maps estimated from standard brain mapping
correlational analysis. In future work, we plan to control the significance of the reverse inference
maps, which show promising results but would probably benefit from thresholding out non-significant
regions. In addition, we hope that further progress, in terms of spatial and cognitive resolution in
mapping the brain to cognitive ontologies, will come from enriching the database with new studies
that will bring more images, and new low- and high-level concepts.
Acknowledgments
This work was supported by the ANR grants BrainPedia ANR-10-JCJC 1408-01 and IRMGroup
ANR-10-BLAN-0126-02, as well as the NSF grant NSF OCI-1131441 for the OpenfMRI project.
References
[1] N. Kanwisher, J. McDermott, and M. M. Chun, "The fusiform face area: a module in human extrastriate cortex specialized for face perception," J Neurosci, vol. 17, p. 4302, 1997.
[2] R. Poldrack, "Can cognitive processes be inferred from neuroimaging data?," Trends in Cognitive Sciences, vol. 10, p. 59, 2006.
[3] A. Laird, J. Lancaster, and P. Fox, "BrainMap," Neuroinformatics, vol. 3, p. 65, 2005.
[4] T. Yarkoni, R. Poldrack, T. Nichols, D. V. Essen, and T. Wager, "Large-scale automated synthesis of human functional neuroimaging data," Nature Methods, vol. 8, p. 665, 2011.
[5] R. Poldrack, Y. Halchenko, and S. Hanson, "Decoding the large-scale structure of brain function by classifying mental states across individuals," Psychological Science, vol. 20, p. 1364, 2009.
[6] S. Hanson and Y. Halchenko, "Brain reading using full brain support vector machines for object recognition: there is no face identification area," Neural Computation, vol. 20, p. 486, 2008.
[7] G. Salimi-Khorshidi, S. M. Smith, J. R. Keltner, T. D. Wager, et al., "Meta-analysis of neuroimaging data: a comparison of image-based and coordinate-based pooling of studies," NeuroImage, vol. 45, p. 810, 2009.
[8] C. Pallier, A. Devauchelle, and S. Dehaene, "Cortical representation of the constituent structure of sentences," Proc Natl Acad Sci, vol. 108, p. 2522, 2011.
[9] J. Turner and A. Laird, "The cognitive paradigm ontology: design and application," Neuroinformatics, vol. 10, p. 57, 2012.
[10] V. Michel, A. Gramfort, G. Varoquaux, E. Eger, C. Keribin, and B. Thirion, "A supervised clustering approach for fMRI-based inference of brain states," Pattern Recognition, vol. 45, p. 2041, 2012.
[11] H. Shimodaira, "Improving predictive inference under covariate shift by weighting the log-likelihood function," Journal of Statistical Planning and Inference, vol. 90, p. 227, 2000.
[12] R. Poldrack, D. Barch, J. Mitchell, T. Wager, A. Wagner, J. Devlin, C. Cumba, and M. Milham, "Towards open sharing of task-based fMRI data: the OpenfMRI project (in press)," Frontiers in Neuroinformatics.
[13] T. Schonberg, C. Fox, J. Mumford, C. Congdon, C. Trepel, and R. Poldrack, "Decreasing ventromedial prefrontal cortex activity during sequential risk-taking: an fMRI investigation of the balloon analog risk task," Frontiers in Neuroscience, vol. 6, 2012.
[14] S. Tom, C. Fox, C. Trepel, and R. Poldrack, "The neural basis of loss aversion in decision-making under risk," Science, vol. 315, p. 515, 2007.
[15] A. Aron, M. Gluck, and R. Poldrack, "Long-term test-retest reliability of functional MRI in a classification learning task," NeuroImage, vol. 29, p. 1000, 2006.
[16] K. Foerde, B. Knowlton, and R. Poldrack, "Modulation of competing memory systems by distraction," Proc Natl Acad Sci, vol. 103, p. 11778, 2006.
[17] R. Poldrack, J. Clark, E. Pare-Blagoev, D. Shohamy, J. Creso Moyano, C. Myers, and M. Gluck, "Interactive memory systems in the human brain," Nature, vol. 414, p. 546, 2001.
[18] G. Xue and R. Poldrack, "The neural substrates of visual perceptual learning of words: implications for the visual word form area hypothesis," J Cognitive Neurosci, vol. 19, p. 1643, 2007.
[19] L. Vagharchakian, G. Dehaene-Lambertz, C. Pallier, and S. Dehaene, "A temporal bottleneck in the language comprehension network," J Neurosci, vol. 32, p. 9089, 2012.
[20] G. Xue, A. Aron, and R. Poldrack, "Common neural substrates for inhibition of spoken and manual responses," Cerebral Cortex, vol. 18, p. 1923, 2008.
[21] A. Kelly, L. Q. Uddin, B. B. Biswal, F. Castellanos, and M. Milham, "Competition between functional brain networks mediates behavioral variability," NeuroImage, vol. 39, p. 527, 2008.
[22] J. Haxby, I. Gobbini, M. Furey, A. Ishai, J. Schouten, and P. Pietrini, "Distributed and overlapping representations of faces and objects in ventral temporal cortex," Science, vol. 293, p. 2425, 2001.
[23] K. Duncan, C. Pattamadilok, I. Knierim, and J. Devlin, "Consistency and variability in functional localisers," NeuroImage, vol. 46, p. 1018, 2009.
[24] P. Pinel, B. Thirion, S. Meriaux, A. Jobert, J. Serres, D. L. Bihan, J. B. Poline, and S. Dehaene, "Fast reproducible identification and large-scale databasing of individual functional cognitive networks," BMC Neuroscience, vol. 8, p. 91, 2007.
[25] P. Pinel and S. Dehaene, "Genetic and environmental contributions to brain activation during calculation," NeuroImage, vol. in press, 2013.
[26] A. Knops, B. Thirion, E. M. Hubbard, V. Michel, and S. Dehaene, "Recruitment of an area involved in eye movements during mental arithmetic," Science, vol. 324, p. 1583, 2009.
[27] J. Deng, A. Berg, K. Li, and L. Fei-Fei, "What does classifying more than 10,000 image categories tell us?," in Computer Vision - ECCV 2010, p. 71, 2010.
[28] W. W. Seeley, V. Menon, A. F. Schatzberg, J. Keller, G. H. Glover, H. Kenna, A. L. Reiss, and M. D. Greicius, "Dissociable intrinsic connectivity networks for salience processing and executive control," J Neurosci, vol. 27, p. 2349, 2007.
Geometric optimisation on positive definite matrices
with application to elliptically contoured distributions
Reshad Hosseini
School of ECE, College of Engineering
University of Tehran, Tehran, Iran
Suvrit Sra
Max Planck Institute for Intelligent Systems
Tübingen, Germany
Abstract
Hermitian positive definite (hpd) matrices recur throughout machine learning,
statistics, and optimisation. This paper develops (conic) geometric optimisation
on the cone of hpd matrices, which allows us to globally optimise a large class of
nonconvex functions of hpd matrices. Specifically, we first use the Riemannian
manifold structure of the hpd cone for studying functions that are nonconvex
in the Euclidean sense but are geodesically convex (g-convex), hence globally
optimisable. We then go beyond g-convexity, and exploit the conic geometry
of hpd matrices to identify another class of functions that remain amenable to
global optimisation without requiring g-convexity. We present key results that
help recognise g-convexity and also the additional structure alluded to above. We
illustrate our ideas by applying them to likelihood maximisation for a broad family
of elliptically contoured distributions: for this maximisation, we derive novel,
parameter-free fixed-point algorithms. To our knowledge, ours are the most general
results on geometric optimisation of hpd matrices known so far. Experiments show
the advantages of using our fixed-point algorithms.
1 Introduction
The geometry of Hermitian positive definite (hpd) matrices is remarkably rich and forms a foundational pillar of modern convex optimisation [21] and of the rapidly evolving area of convex algebraic
geometry [4]. The geometry exhibited by hpd matrices, however, goes beyond what is typically
exploited in these two areas. In particular, hpd matrices form a convex cone which is also a differentiable Riemannian manifold that is also a CAT(0) space (i.e., a metric space of nonpositive
curvature [7]). This rich structure enables "geometric optimisation" with hpd matrices, which allows
solving many problems that are nonconvex in the Euclidean sense but convex in the manifold sense
(see §2 or [29]), or have enough metric structure (see §3) to permit efficient optimisation.
This paper develops (conic) geometric optimisation¹ (GO) for hpd matrices. We present key results
that help recognise geodesic convexity (g-convexity); we also present sufficient conditions that put a
class of even non g-convex functions within the grasp of GO. To our knowledge, ours are the most
general results on geometric optimisation with hpd matrices known so far.
Motivation for GO. We begin by noting that the widely studied class of geometric programs is
ultimately nothing but the 1D version of GO on hpd matrices. Given that geometric programming
has enjoyed great success in numerous applications (see, e.g., the survey of Boyd et al. [6]), we
hope GO also gains broad applicability. For this paper, GO arises naturally while performing
maximum likelihood parameter estimation for a rich class of elliptically contoured distributions
1. To our knowledge the name "geometric optimisation" has not been previously attached to hpd matrix
optimisation, perhaps because so far only scattered few examples were known. Our theorems provide a starting
point for recognising and constructing numerous problems amenable to geometric optimisation.
(ECDs) [8, 13, 20]. Perhaps the best known GO problem is the task of computing the Karcher /
Fr?echet-mean of hpd matrices: a topic that has attracted great attention within matrix theory [2, 3, 27],
computer vision [10], radar imaging [22; Part II], and medical imaging [11, 31]?we refer the reader
to the recent book [22] for additional applications, references, and details. Another GO problem
arises as a subroutine in nearest neighbour search over hpd matrices [12]. Several other areas involve
GO problems: statistics (covariance shrinkage) [9], nonlinear matrix equations [17], Markov decision
processes and the wider encompassing area of nonlinear Perron-Frobenius theory [18].
Motivating application. We use ECDs as a platform for illustrating our ideas for two reasons:
(i) ECDs are important in a variety of settings (see the recent survey [23]); and (ii) they offer an
instructive setup for presenting key ideas from the world of geometric optimisation.
Let us therefore begin by recalling some basics. An ECD with density on R^d takes the form²

    E_φ(x; S) ∝ det(S)^{-1/2} φ(x^T S^{-1} x),  ∀ x ∈ R^d,  (1)

where S ∈ P_d (i.e., the set of d × d symmetric positive definite matrices) is the scatter matrix, while
φ : R → R_{++} is a positive density generating function (dgf). If ECDs have a finite covariance matrix,
then the scatter matrix is proportional to the covariance matrix [8].
Example 1. With φ(t) = e^{−t/2}, density (1) reduces to the multivariate normal density. For the choice

    φ(t) = t^{α−d/2} exp(−(t/b)^β),  (2)

where α, b and β are fixed positive numbers, density (1) yields the rich class called Kotz-type
distributions that are known to have powerful modelling abilities [15; §3.2]; they include as special
cases multivariate power exponentials, elliptical gamma, multivariate W-distributions, for instance.
MLE. Let (x_1, ..., x_n) be i.i.d. samples from an ECD E_φ(S). Up to constants, the log-likelihood is

    L(S) = −(n/2) log det S + Σ_{i=1}^n log φ(x_i^T S^{-1} x_i).  (3)
Equivalently, we may consider the minimisation problem

    min_{S≻0}  Φ(S) := (n/2) log det(S) − Σ_i log φ(x_i^T S^{-1} x_i).  (4)
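For concreteness, here is a minimal sketch (not the authors' code) of evaluating Φ(S) for the Kotz-type dgf (2); alpha, b, beta are the parameters of (2):

    import numpy as np

    def kotz_neg_loglik(S, X, alpha, b, beta):
        # X: (n, d) samples; S: (d, d) hpd scatter matrix; returns Phi(S).
        n, d = X.shape
        # t_i = x_i^T S^{-1} x_i, via one linear solve instead of inversion
        t = np.einsum('ij,ij->i', X, np.linalg.solve(S, X.T).T)
        _, logdet = np.linalg.slogdet(S)
        log_phi = (alpha - d / 2) * np.log(t) - (t / b) ** beta
        return 0.5 * n * logdet - log_phi.sum()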
Problem (4) is in general difficult as Φ may be nonconvex and may have multiple local minima.
Since statistical estimation theory relies on having access to global optima, it is important to be able
to solve (4) to global optimality. These difficulties notwithstanding, using GO ideas, we identify a
rich class of ECDs for which we can indeed solve (4) optimally. Some examples already exist in
the literature [16, 23, 30]; this paper develops techniques that are strictly more general and subsume
previous examples, while advancing the broader idea of geometric optimisation.
We illustrate our ideas by studying the following two main classes of dgfs in (1):
(i) Geodesically Convex (GC): This class contains functions for which the negative log-likelihood
Φ(S) is g-convex, i.e., convex along geodesics in the manifold of hpd matrices. Some members
of this class have been previously studied (though sometimes without recognising or directly
exploiting the g-convexity);
(ii) Log-Nonexpansive (LN): This is a new class that we introduce in this paper. It exploits the
"non-positive curvature" property of the manifold of hpd matrices.
There is a third important class: LC, the class of log-convex dgfs φ. Though, since (4) deals with
−log φ, the optimisation problem is still nonconvex. We describe class LC only in [28], primarily
due to paucity of space and also because the first two classes contain our most novel results. These
classes of dgfs are neither mutually disjoint nor proper subsets of each other. Each captures unique
analytic or geometric structure crucial to efficient optimisation. Class GC characterises the "hidden"
convexity found in several instances of (4), while LN is a novel class of models that might not have
this hidden convexity, but nevertheless admit global optimisation.
Contributions. The key contributions of this paper are the following:
• New results that characterise and help recognise g-convexity (Thm. 1, Cor. 2, Cor. 3, Thm. 4).
Though initially motivated by ECDs, our matrix-theoretic proofs are more generally applicable and
should be of wider interest. All technical proofs, and several additional results that help recognise
g-convexity are in the longer version of this paper [28].
2
For simplicity we describe only mean zero families; the extension to the general case is trivial.
2
? New fixed-point theory for solving GO problems, including some that might even lack g-convexity.
Here too, our results go beyond ECDs?in fact, they broaden the class of problems that admit
fixed-point algorithms in the metric space (Pd , ?T )?Thms. 11 and 14 are the key results here.
Our results on geodesic convexity subsume the more specialised results reported recently in [29].
We believe our matrix-theoretic proofs, though requiring slightly more advanced machinery, are
ultimately simpler and more widely applicable. Our fixed-point theory offers a unified framework
that not only captures the well-known M-estimators of [16], but applies to a larger class of problems
than possible using previous methods. Our experiments illustrate the computational benefits of one of
the resulting algorithms.
2 Geometric optimisation with geodesic convexity: class GC
Geodesic convexity (g-convexity) is a classical concept in mathematics and is used extensively in
the study of Hadamard manifolds and metric spaces of nonpositive curvature [7, 24] (i.e., spaces
whose distance function is g-convex). This concept has been previously studied in nonlinear optimisation [25], but its full importance and applicability in statistical applications and optimisation is only
recently emerging [29, 30].
We begin our presentation by recalling some definitions; please see [7, 24] for extensive details.
Definition 2 (gc set). Let M denote a d-dimensional connected C² Riemannian manifold. A set
X ⊆ M is called geodesically convex if any two points of X are joined by a geodesic lying in
X. That is, if x, y ∈ X, then there exists a path γ : [0, 1] → X such that γ(0) = x and γ(1) = y.
Definition 3 (gc function). Let X ⊆ M be a gc set. A function φ : X → R is geodesically convex
if for any x, y ∈ X and a unit-speed geodesic γ : [0, 1] → X with γ(0) = x and γ(1) = y, we have

    φ(γ(t)) ≤ (1 − t)φ(γ(0)) + tφ(γ(1)) = (1 − t)φ(x) + tφ(y).  (5)
The power of gc functions in the context of solving (4) comes into play because the set P_d (the
convex cone of positive definite matrices) is also a differentiable Riemannian manifold where
geodesics between points can be computed efficiently. Indeed, the tangent space to P_d at any point
can be identified with the set of Hermitian matrices, and the inner product on this space leads to
a Riemannian metric on P_d. At any point A ∈ P_d, this metric is given by the differential form
ds = ||A^{-1/2} dA A^{-1/2}||_F; also, between A, B ∈ P_d there is a unique geodesic [1; Thm. 6.1.6]
    A #_t B := γ(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2},  t ∈ [0, 1].  (6)
The midpoint of this path, namely A #_{1/2} B, is called the matrix geometric mean, which is an object
of great interest in numerous areas [1-3, 10, 22]. As per convention, we denote it simply by A#B.
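As a quick illustration, here is a sketch of computing the geodesic (6) numerically (assuming A, B are hpd; scipy's fractional_matrix_power handles the matrix powers):

    import numpy as np
    from scipy.linalg import fractional_matrix_power as fmp

    def geodesic(A, B, t):
        # A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
        Ah, Aih = fmp(A, 0.5), fmp(A, -0.5)
        return Ah @ fmp(Aih @ B @ Aih, t) @ Ah

    # The matrix geometric mean A#B is the midpoint geodesic(A, B, 0.5).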
Example 4. Let z ∈ C^d be any vector. The function φ(X) := z* X^{-1} z is gc.
Proof. Since φ is continuous, it suffices to verify midpoint convexity: φ(X#Y) ≤ ½φ(X) + ½φ(Y)
for X, Y ∈ P_d. Since (X#Y)^{-1} = X^{-1} # Y^{-1} and X^{-1} # Y^{-1} ⪯ (X^{-1} + Y^{-1})/2 ([1; 4.16]), it follows
that φ(X#Y) = z*(X#Y)^{-1} z ≤ ½(z* X^{-1} z + z* Y^{-1} z) = ½(φ(X) + φ(Y)).
We are ready to state our first main theorem, which vastly generalises the above example and provides
a foundational tool for recognising and constructing gc functions.
Theorem 1. Let Φ : P_d → P_k be a strictly positive linear map. For A, B ∈ P_d we have

    Φ(A #_t B) ⪯ Φ(A) #_t Φ(B),  t ∈ [0, 1].  (7)
Proof. Although positive linear maps are well-studied objects (see e.g., [1; Ch. 4]), we did not find
an explicit proof of (7) in the literature, so we provide a proof in the longer version [28].
A useful corollary of Thm. 1 is the following (notice this corollary subsumes Example 4).
Corollary 2. For positive definite matrices A, B ∈ P_d and matrices 0 ≠ X ∈ C^{d×k} we have

    tr X*(A #_t B)X ≤ [tr X* A X]^{1−t} [tr X* B X]^t,  t ∈ (0, 1).  (8)
Proof. Use the map A ↦ tr X* A X in Thm. 1.
Note: Cor. 2 actually constructs a log-g-convex function, from which g-convexity is immediate.
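A quick numerical sanity check of (8) on random hpd matrices (a spot-check, not a proof; it inlines the geodesic helper from the sketch above for self-containment):

    import numpy as np
    from scipy.linalg import fractional_matrix_power as fmp

    def geo(A, B, t):
        Ah, Aih = fmp(A, 0.5), fmp(A, -0.5)
        return Ah @ fmp(Aih @ B @ Aih, t) @ Ah

    rng = np.random.default_rng(0)
    d, k, t = 5, 3, 0.3
    M1, M2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
    A, B = M1 @ M1.T + np.eye(d), M2 @ M2.T + np.eye(d)
    X = rng.standard_normal((d, k))
    lhs = np.trace(X.T @ geo(A, B, t) @ X)
    rhs = np.trace(X.T @ A @ X) ** (1 - t) * np.trace(X.T @ B @ X) ** t
    assert lhs <= rhs + 1e-10   # inequality (8)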
A notable corollary to Thm. 1 that subsumes a nontrivial result [14; Lem. 3.2] is mentioned below.
Corollary 3. Let X_i ∈ C^{d×k} with k ≤ d be such that rank([X_i]_{i=1}^m) = k. Then the function
φ(S) := log det(Σ_i X_i* S X_i) is gc on P_d.
Proof. By our assumption on the X_i, the map Φ = S ↦ Σ_i X_i* S X_i is strictly positive. Thus, from
Thm. 1 it follows that Φ(S#R) = Σ_i X_i*(S#R)X_i ⪯ Φ(S) # Φ(R). Since log det is monotonic
and the determinant is multiplicative, the previous inequality yields

    φ(S#R) = log det Φ(S#R) ≤ ½ log det(Φ(S)) + ½ log det(Φ(R)) = ½φ(S) + ½φ(R).
We are now ready to state our second main theorem.
Theorem 4. Let h : P_k → R be a gc function that is nondecreasing in the Löwner order. Let r ∈ {±1},
and let Φ : P_d → P_k be a strictly positive linear map. Then, φ(S) = h(Φ(S^r)) − log det(S) is gc.
Proof. Since φ is continuous, it suffices to prove midpoint geodesic convexity. Since r ∈ {±1},
(S#R)^r = S^r # R^r; thus, from Thm. 1 and since h is matrix nondecreasing, it follows that

    h(Φ((S#R)^r)) = h(Φ(S^r # R^r)) ≤ h(Φ(S^r) # Φ(R^r)).  (9)

Since h is also gc, inequality (9) further yields

    h(Φ(S^r) # Φ(R^r)) ≤ ½ h(Φ(S^r)) + ½ h(Φ(R^r)).  (10)

Since −log det(S#R) = −½(log det(S) + log det(R)), on combining with (10) we obtain

    φ(S#R) ≤ ½φ(S) + ½φ(R),

as desired. Notice also that if h is strictly gc, then φ(S) is also strictly gc.
Finally, we state a corollary of Thm. 4 helpful towards recognising geodesic convexity of ECDs.
We mention here that a result equivalent to Cor. 5 was recently also discovered in [30]. Thm. 4 is
more general and uses a completely different argument founded on matrix-theoretic results; our
techniques may also be of wider independent interest.
Corollary 5. Let h : R_{++} → R be nondecreasing and gc (i.e., h(x^{1−λ} y^λ) ≤ (1 − λ)h(x) + λh(y)).
Then, for r ∈ {±1}, the function φ : P_d → R : S ↦ Σ_i h(x_i^T S^r x_i) − log det(S) is gc.
2.1 Application to ECDs in class GC
We begin with a straightforward corollary of the above discussion.
Corollary 6. For the following distributions the negative log-likelihood (4) is gc: (i) Kotz with α ≤ d/2
(its special cases include Gaussian, multivariate power exponential, multivariate W-distribution with
shape parameter smaller than one, elliptical gamma with shape parameter ν ≤ d/2); (ii) Multivariate-t;
(iii) Multivariate Pearson type II with positive shape parameter; (iv) Elliptical multivariate logistic
distribution.³
If the log-likelihood is strictly gc then (4) cannot have multiple solutions. Moreover, for any local
optimisation method that computes a solution to (4), geodesic convexity ensures that this solution is
globally optimal. Therefore, the key question to answer is: does (4) have a solution?
Note that answering this question is nontrivial even in special cases [16, 30]. We provide below a
fairly general result that helps establish existence.
3. The dgfs of the different distributions are given here for the reader's convenience. Multivariate power
exponential: φ(t) = exp(−t^β/b), β > 0; Multivariate W-distribution: φ(t) = t^{β−1} exp(−t^β/b), β > 0;
Elliptical gamma: φ(t) = t^{ν−d/2} exp(−t/b), ν > 0; Multivariate t: φ(t) = (1 + t/ν)^{−(ν+d)/2}, ν > 0;
Multivariate Pearson type II: φ(t) = (1 − t)^ν, ν > −1, 0 ≤ t ≤ 1; Elliptical multivariate logistic:
φ(t) = exp(−√t)/(1 + exp(−√t))².
Theorem 7. If Φ(S) satisfies the following properties: (i) −log φ(t) is lower semi-continuous (lsc)
for t > 0, and (ii) Φ(S) → ∞ as ||S|| → ∞ or ||S^{-1}|| → ∞, then Φ(S) attains its minimum.
Proof. Consider the metric space (P_d, d_R), where d_R is the Riemannian distance,

    d_R(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F,  A, B ∈ P_d.  (11)
If Φ(S) → ∞ as ||S|| → ∞ or as ||S^{-1}|| → ∞, then Φ(S) has bounded lower-level sets in (P_d, d_R).
It is a well-known result in variational analysis that if a function is lsc and has bounded lower-level
sets in a metric space, then it attains its minimum [26]. Since −log φ(t) is lsc and
log det(S^{-1}) is continuous, Φ(S) is lsc on (P_d, d_R). Therefore it attains its minimum.
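For reference, a sketch of computing the Riemannian distance (11) numerically; since the eigenvalues of A^{-1/2} B A^{-1/2} coincide with the generalized eigenvalues of the pair (B, A), no matrix square roots are needed:

    import numpy as np
    from scipy.linalg import eigh

    def riemannian_dist(A, B):
        # Generalized eigenvalues w solve B v = w A v, i.e. w = eig(A^{-1} B).
        w = eigh(B, A, eigvals_only=True)
        return float(np.sqrt(np.sum(np.log(w) ** 2)))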
A key consequence of Thm. 7 is its ability to show existence of solutions to (4) for a variety of
different ECDs. Let us look at an application to Kotz-type distributions below. For these distributions,
the function Φ(S) assumes the form

    K(S) = (n/2) log det(S) + (d/2 − α) Σ_{i=1}^n log(x_i^T S^{-1} x_i) + Σ_{i=1}^n (x_i^T S^{-1} x_i / b)^β.  (12)

Lemma 8 shows that K(S) → ∞ whenever ||S^{-1}|| → ∞ or ||S|| → ∞.
Lemma 8. Let the data X = {x_1, ..., x_n} span the whole space and satisfy, for α < d/2, the condition

    |X ∩ L| / |X| < d_L / (d − 2α),  (13)

where L is an arbitrary subspace of dimension d_L < d and |X ∩ L| is the number of datapoints
that lie in the subspace L. If ||S^{-1}|| → ∞ or ||S|| → ∞, then K(S) → ∞.
Proof. If ||S^{-1}|| → ∞, then, since the data span the whole space, it is possible to find a datapoint x_1 such
that t_1 = x_1^T S^{-1} x_1 → ∞. Since

    lim_{t→∞} c_1 log(t) + t^{c_2} + c_3 = ∞

for constants c_1, c_3 and c_2 > 0, it follows that K(S) → ∞ whenever ||S^{-1}|| → ∞.
If ||S|| → ∞ and ||S^{-1}|| is bounded, then the third term in the expression for K(S) is bounded. Assume
that d_L is the number of eigenvalues of S that go to ∞ and |X ∩ L| is the number of datapoints that lie
in the subspace spanned by the corresponding eigenvectors. Then, in the limit when these eigenvalues λ of S go to ∞, K(S)
converges to the limit

    lim_{λ→∞} (n/2) d_L log λ + (d/2 − α)|X ∩ L| log λ^{-1} + c.

Hence, if (n/2) d_L − (d/2 − α)|X ∩ L| > 0, then K(S) → ∞ and the proof is complete.
It is important to note that the overlap condition (13) can be fulfilled easily by assuming that the number
of datapoints is larger than their dimensionality and that they are noisy. Using Lemma 8, we can invoke
Thm. 7 to immediately state the following result.
Theorem 9 (Existence, Kotz distribution). If the data samples satisfy condition (13), then the Kotz negative
log-likelihood has a minimiser.
As previously mentioned, once existence is ensured, one may use any local optimisation method to
minimise (4) and obtain the desired mle. This brings us to the next question: what if Φ(S) is neither
convex nor g-convex? The ideas introduced in Sec. 3 below offer a partial answer.
3 Geometric optimisation for class LN
Without convexity or g-convexity, in general at best we might obtain local minima. However, as
alluded to previously, the set Pd of hpd matrices possesses remarkable geometric structure that allows
us to extend global optimisation to a rich class beyond just gc functions. To our knowledge, this class
of ECDs was beyond the grasp of previous methods [16, 29, 30]. We begin with a key definition.
Definition 5 (Log-nonexpansive). Let f : R_{++} → (0, ∞). We say f is log-nonexpansive (LN) on a
compact interval I ⊂ R_{++} if there exists a fixed constant 0 ≤ q ≤ 1 such that

    |log f(t) − log f(s)| ≤ q |log t − log s|,  ∀ s, t ∈ I.  (14)

If q < 1, we say f is log-contractive. Finally, if for every s ≠ t it holds that

    |log f(t) − log f(s)| < |log t − log s|,  ∀ s, t, s ≠ t,

we say f is weakly log-contractive (wlc); an important point to note here is the absence of a fixed q.
Next we study existence, uniqueness, and computation of solutions to (4). To that end, momentarily
ignore the constraint S ≻ 0, to see that the first-order necessary optimality condition for (4) is

    ∂Φ(S)/∂S = 0  ⟺  (n/2) S^{-1} + Σ_{i=1}^n [φ'(x_i^T S^{-1} x_i) / φ(x_i^T S^{-1} x_i)] S^{-1} x_i x_i^T S^{-1} = 0.  (15)

Defining h ≡ −φ'/φ, condition (15) may be rewritten more compactly as

    S = (2/n) Σ_{i=1}^n x_i h(x_i^T S^{-1} x_i) x_i^T = (2/n) X h(D_S) X^T,  (16)

where D_S := Diag(x_i^T S^{-1} x_i) and X = [x_1, ..., x_n]. If (16) has a positive definite solution, then
it is a candidate mle; if it is unique, then it is the desired solution (observe that for a Gaussian,
h(t) ≡ 1/2, and as expected (16) reduces to the sample covariance matrix).
But how should we solve (16)? This question is in general highly nontrivial to answer because (16) is
a difficult nonlinear equation in matrix variables. This is the point where the class LN introduced above
comes into play. More specifically, we solve (16) via a fixed-point iteration. Introduce therefore the
nonlinear map G : P_d → P_d that maps S to the right-hand side of (16); then, starting with a feasible
S_0 ≻ 0, simply perform the iteration

    S_{k+1} ← G(S_k),  k = 0, 1, ...,  (17)

which is shown more explicitly as Alg. 1 below.
Algorithm 1 Fixed-point iteration for the mle
Input: Observations x_1, ..., x_n; function h
Initialize: k ← 0; S_0 ← I
while not converged do
    S_{k+1} ← (2/n) Σ_{i=1}^n x_i h(x_i^T S_k^{-1} x_i) x_i^T
end while
return S_k
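A runnable sketch of Algorithm 1 for the Kotz-type mle (an illustration, not the authors' reference code). For dgf (2), h(t) = −φ'(t)/φ(t) = (d/2 − α)/t + (β/b)(t/b)^{β−1}:

    import numpy as np

    def kotz_mle(X, alpha, b, beta, tol=1e-8, max_iter=500):
        # X: (n, d) samples; returns an estimate of the scatter matrix S.
        n, d = X.shape
        S = np.eye(d)
        for _ in range(max_iter):
            # t_i = x_i^T S^{-1} x_i for all i, via one linear solve
            t = np.einsum('ij,ij->i', X, np.linalg.solve(S, X.T).T)
            h = (d / 2 - alpha) / t + (beta / b) * (t / b) ** (beta - 1)
            S_new = (2.0 / n) * (X.T * h) @ X   # (2/n) sum_i h(t_i) x_i x_i^T
            if np.linalg.norm(S_new - S) <= tol * np.linalg.norm(S):
                return S_new
            S = S_new
        return S

For 0 < β < 2 and α < d/2, the theory below (Thm. 13 and Cor. 17) guarantees that these iterates converge to the unique fixed point; as a sanity check, replacing h by the Gaussian choice h ≡ 1/2 recovers the sample covariance matrix in one step.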
The most interesting twist to analysing iteration (17) is that the map G is usually not contractive with
respect to the Euclidean metric. But the metric geometry of P_d alluded to previously suggests that it
might be better to analyse the iteration using a non-Euclidean metric. Unfortunately, the Riemannian
distance (11) on P_d, while canonical, also turns out to be unsuitable. This impasse is broken by
selecting a more suitable "hyperbolic distance" that captures the crucial non-Euclidean geometry of
P_d, while still respecting its convex conical structure.
Such a suitable choice is provided by the Thompson metric, an object of great interest in nonlinear
matrix equations [17], which is known to possess geometric properties suitable for analysing convex
cones, of which P_d is a shining example [18]. On P_d, the Thompson metric is given by

    δ_T(X, Y) := ||log(Y^{-1/2} X Y^{-1/2})||,  (18)

where ||·|| is the usual operator 2-norm, and 'log' is the matrix logarithm.
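A sketch of computing (18): since Y^{-1/2} X Y^{-1/2} and Y^{-1} X share a spectrum, δ_T is the largest |log| of the generalized eigenvalues of the pair (X, Y):

    import numpy as np
    from scipy.linalg import eigh

    def thompson(X, Y):
        # Generalized eigenvalues w solve X v = w Y v, i.e. w = eig(Y^{-1} X).
        w = eigh(X, Y, eigvals_only=True)
        return float(np.max(np.abs(np.log(w))))

Monitoring thompson(S_new, S) across the iterates of Algorithm 1 is one way to watch the contraction established below.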
The core properties of (18) that prove useful for analysing fixed-point iterations are listed below; for proofs please see [17, 19].
Proposition 10. Unless noted otherwise, all matrices below are assumed to be hpd.

    δ_T(X^{-1}, Y^{-1}) = δ_T(X, Y)  (19a)
    δ_T(B* X B, B* Y B) = δ_T(X, Y),  B ∈ GL_n(C)  (19b)
    δ_T(X^t, Y^t) ≤ |t| δ_T(X, Y),  for t ∈ [−1, 1]  (19c)
    δ_T(Σ_i w_i X_i, Σ_i w_i Y_i) ≤ max_{1≤i≤m} δ_T(X_i, Y_i),  w_i ≥ 0, w ≠ 0  (19d)
    δ_T(X + A, Y + A) ≤ (α/(α + β)) δ_T(X, Y),  A ⪰ 0,  (19e)

where α = max{||X||, ||Y||} and β = λ_min(A).
We need one more crucial result (see [28] for a proof), which we state below. This theorem should be
of wider interest as it enlarges the class of maps that one can study using the Thompson metric.
Theorem 11. Let X ∈ C^{d×p}, where p ≤ d and rank(X) = p. Let A, B ∈ P_d. Then,

    δ_T(X* A X, X* B X) ≤ δ_T(A, B).  (20)
We now show how to use Prop. 10 and Thm. 11 to analyse contractions on P_d.
Proposition 12. Let h be an LN function. Then the map G in (17) is nonexpansive in δ_T. Moreover, if
h is wlc, then G is weakly contractive in δ_T.
Proof. Let S, R ≻ 0 be arbitrary. Then we have the following chain of inequalities:

    δ_T(G(S), G(R)) = δ_T((2/n) X h(D_S) X^T, (2/n) X h(D_R) X^T)
                    ≤ δ_T(h(D_S), h(D_R))
                    ≤ max_{1≤i≤n} δ_T(h(x_i^T S^{-1} x_i), h(x_i^T R^{-1} x_i))
                    ≤ max_{1≤i≤n} δ_T(x_i^T S^{-1} x_i, x_i^T R^{-1} x_i)
                    ≤ δ_T(S^{-1}, R^{-1}) = δ_T(S, R),

where the first inequality follows from (19b) and Thm. 11; the second follows since
h(D_S) and h(D_R) are diagonal (cf. (19d)); the third follows since h is LN (with q ≤ 1); the fourth from another application of
Thm. 11; while the final equality is via (19a). This proves nonexpansivity. If in addition h is weakly
log-contractive and S ≠ R, then the third inequality above is strict, that is,

    δ_T(G(S), G(R)) < δ_T(S, R)  for all S, R with S ≠ R.
Consequently, we obtain the following main convergence theorem for (17).
Theorem 13. If G is weakly contractive and (16) has a solution, then this solution is unique and
iteration (17) converges to it.
When h is merely LN (not wlc), it is still possible to show uniqueness of (16) up to a constant. Our
proof depends on the following new property of δ_T, which again should be of broader interest.
Theorem 14. Let G be nonexpansive in the δ_T metric, that is, δ_T(G(X), G(Y)) ≤ δ_T(X, Y), and let F
be weakly contractive, that is, δ_T(F(X), F(Y)) < δ_T(X, Y); then G + F is also weakly contractive.
Observe that the property proved in Thm. 14 is a striking feature of the nonpositive curvature of
P_d; clearly, such a result does not usually hold in Banach spaces. As a consequence, Thm. 14 helps
establish the following "robustness" result for iteration (17).
Theorem 15. If h is LN, and S_1 ≠ S_2 are solutions to the nonlinear equation (16), then iteration
(17) converges to a solution, and S_1 ∝ S_2.
As an illustrative example of these results, consider the problem of finding the minimiser of the
Kotz-type negative log-likelihood. The convergence of the iterative algorithm (17)
can be obtained from Thm. 15. But for the Kotz distribution we can show a stronger result, which
helps obtain geometric convergence rates for the fixed-point iteration.
Lemma 16. If c > 0 and −1 < b < 1, the function h(x) = x + c·x^b is weakly log-contractive.
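A numeric spot-check of Lemma 16 (not a proof; the parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    c, b = 0.7, 0.4                       # any c > 0 and -1 < b < 1
    h = lambda x: x + c * x ** b
    s, t = np.sort(rng.uniform(0.01, 100.0, size=(10000, 2)), axis=1).T
    lhs = np.abs(np.log(h(t)) - np.log(h(s)))
    rhs = np.abs(np.log(t) - np.log(s))
    assert np.all(lhs <= rhs + 1e-12)     # |log h(t) - log h(s)| < |log t - log s|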
According to this lemma, the function h in the iteration (16) for the Kotz-type distributions with 0 < β < 2
and α < d/2 is wlc. Based on Thm. 9, K(S) has a minimum. Therefore, we have the following.
Corollary 17. The iterative algorithm (16) for the Kotz-type distribution with 0 < β < 2 and α < d/2
converges to a unique fixed point.
4 Numerical results
We briefly highlight the numerical performance of our fixed-point iteration. The key message here
is that our fixed-point iterations solve nonconvex likelihood maximisation problems that involve a
complicating hpd constraint. But since the fixed-point iterations always generate hpd iterates, no
extra eigenvalue computation is needed, which leads to substantial computational advantages. In
contrast, a nonlinear solver must perform constrained optimisation, which can be unduly expensive.
[Figure 1 panels: log Φ(S) − Φ(S_min) versus log running time (seconds) for the fixed-point iteration and fmincon, for d = 4, 16, 32.]
Figure 1: Running-time comparison of the fixed-point iteration with MATLAB's fmincon for maximising a Kotz likelihood (see text for details). The plots show (from left to right) running times for estimating S ∈ P_d, for d ∈ {4, 16, 32}. Larger d was not tried because fmincon does not scale.
[Figure 2 panels: log Φ(S) − Φ(S_min) versus log running time (seconds) for the fixed-point iteration and fmincon, for three values of β.]
Figure 2: In the Kotz-type distribution, when β gets close to zero or 2, the contraction factor becomes smaller, which could impact the convergence rate. This figure shows running-time variation for Kotz-type distributions with fixed d = 16 and α = 2, for different values of β: β = 0.1, β = 1, β = 1.7.
We show two short experiments (Figs. 1 and 2) illustrating the scalability of the fixed-point iteration with
increasing dimensionality of the input matrix, and for varying β parameter of the Kotz distribution; this
parameter influences the convergence rate of the fixed-point iteration. For three different dimensions
d = 4, d = 16, and d = 32, we sample 10,000 datapoints from a Kotz-type distribution with
α = 0.5, β = 2, and a random covariance matrix. The convergence speed is shown as blue curves
in Figure 1. For comparison, the results of constrained optimisation (red curves) using MATLAB's
optimisation toolbox are shown. The fixed-point algorithm clearly outperforms MATLAB's toolbox,
especially as dimensionality increases. These results indicate that the fixed-point approach can be very
competitive. Also note that the problems are nonconvex with an open constraint set; this precludes
direct application of simple approaches such as gradient projection (since projection requires closed
sets; moreover, projection also requires eigenvector decompositions). Additional comparisons in the
longer version [28] show that the fixed-point iteration also significantly outperforms sophisticated
manifold optimisation techniques [5], especially for increasing data dimensionality.
5 Conclusion
We developed geometric optimisation for minimising potentially nonconvex functions over the set of
positive definite matrices. We showed key results that help recognise geodesic convexity; we also
introduced the class of log-nonexpansive functions that contains functions that need not be g-convex,
but can still be optimised efficiently. Key to our ideas here was a careful construction of fixed-point
iterations in a suitably chosen metric space. We motivated, developed, and applied our results to
the task of maximum likelihood estimation for various elliptically contoured distributions, covering
classes and examples substantially beyond what had been known so far in the literature. We believe
that the general geometric optimisation techniques that we developed in this paper will prove to be of
wider use and interest beyond our motivating application. Developing a more extensive geometric
optimisation numerical package is part of our ongoing project.
References
[1] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[2] R. Bhatia and R. L. Karandikar. The matrix geometric mean. Technical Report isid/ms/2011/02, Indian Statistical Institute, 2011.
[3] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, 438(4):1700-1710, 2013.
[4] G. Blekherman and P. A. Parrilo, editors. Semidefinite Optimization and Convex Algebraic Geometry. SIAM, 2013.
[5] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt: a Matlab toolbox for optimization on manifolds. arXiv preprint 1308.5200, 2013.
[6] S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi. A tutorial on geometric programming. Optimization and Engineering, 8(1):67-127, 2007.
[7] M. R. Bridson and A. Haefliger. Metric Spaces of Non-Positive Curvature. Springer, 1999.
[8] S. Cambanis, S. Huang, and G. Simons. On the theory of elliptically contoured distributions. Journal of Multivariate Analysis, 11(3):368-385, 1981.
[9] Y. Chen, A. Wiesel, and A. Hero. Robust shrinkage estimation of high-dimensional covariance matrices. IEEE Transactions on Signal Processing, 59(9):4097-4107, 2011.
[10] G. Cheng and B. Vemuri. A novel dynamic system in the space of SPD matrices with applications to appearance tracking. SIAM Journal on Imaging Sciences, 6(1):592-615, 2013.
[11] G. Cheng, H. Salehian, and B. C. Vemuri. Efficient recursive algorithms for computing the mean diffusion tensor and applications to DTI segmentation. In European Conference on Computer Vision (ECCV), volume 7, pages 390-401, 2012.
[12] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Jensen-Bregman LogDet divergence for efficient similarity computations on positive definite tensors. IEEE TPAMI, 2012.
[13] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman and Hall/CRC, 1999.
[14] L. Gurvits and A. Samorodnitsky. A deterministic algorithm for approximating mixed discriminant and mixed volume, and a combinatorial corollary. Disc. Comp. Geom., 27(4), 2002.
[15] K.-T. Fang, S. Kotz, and K. W. Ng. Symmetric Multivariate and Related Distributions. Chapman & Hall, 1990.
[16] J. T. Kent and D. E. Tyler. Redescending M-estimates of multivariate location and scatter. The Annals of Statistics, 19(4):2102-2119, Dec. 1991.
[17] H. Lee and Y. Lim. Invariant metrics, contractions and nonlinear matrix equations. Nonlinearity, 21:857-878, 2008.
[18] B. Lemmens and R. Nussbaum. Nonlinear Perron-Frobenius Theory. Cambridge Univ. Press, 2012.
[19] Y. Lim and M. Pálfia. Matrix power means and the Karcher mean. J. Functional Analysis, 262:1498-1514, 2012.
[20] R. J. Muirhead. Aspects of Multivariate Statistical Theory. John Wiley, 1982.
[21] Y. Nesterov and A. S. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1994.
[22] F. Nielsen and R. Bhatia, editors. Matrix Information Geometry. Springer, 2013.
[23] E. Ollila, D. Tyler, V. Koivunen, and H. V. Poor. Complex elliptically symmetric distributions: survey, new results and applications. IEEE Transactions on Signal Processing, 60(11):5597-5625, 2011.
[24] A. Papadopoulos. Metric Spaces, Convexity and Nonpositive Curvature. Europ. Math. Soc., 2005.
[25] T. Rapcsák. Geodesic convexity in nonlinear optimization. J. Optim. Theory and Appl., 69(1):169-183, 1991.
[26] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer, 1998.
[27] S. Sra. Positive definite matrices and the symmetric Stein divergence. arXiv:1110.1773, Oct. 2012.
[28] S. Sra and R. Hosseini. Conic geometric optimisation on the manifold of positive definite matrices. arXiv preprint, 2013.
[29] A. Wiesel. Geodesic convexity and covariance estimation. IEEE Transactions on Signal Processing, 60(12):6182-6189, 2012.
[30] T. Zhang, A. Wiesel, and S. Greco. Multivariate generalized Gaussian distribution: convexity and graphical models. arXiv preprint arXiv:1304.3206, Nov. 2013.
[31] H. Zhu, H. Zhang, J. Ibrahim, and B. Peterson. Statistical analysis of diffusion tensors in diffusion-weighted magnetic resonance imaging data. Journal of the American Statistical Association, 102(480):1085-1102, 2007.
A Segment-based Automatic Language
Identification System
Yeshwant K. Muthusamy & Ronald A. Cole
Center for Spoken Language Understanding
Oregon Graduate Institute of Science and Technology
Beaverton OR 97006-1999
Abstract
We have developed a four-language automatic language identification system for high-quality speech. The system uses a neural network-based
segmentation algorithm to segment speech into seven broad phonetic categories. Phonetic and prosodic features computed on these categories are
then input to a second network that performs the language classification.
The system was trained and tested on separate sets of speakers of American English, Japanese, Mandarin Chinese and Tamil. It currently performs
with an accuracy of 89.5% on the utterances of the test set.
1 INTRODUCTION
Automatic language identification is the rapid automatic determination of the language being spoken, by any speaker, saying anything. Despite several important
applications of automatic language identification, this area has suffered from a lack
of basic research and the absence of a standardized, public-domain database of
languages.
It is well known that languages have characteristic sound patterns. Languages have
been described subjectively as "singsong", "rhythmic", "guttural", "nasal", etc. The
key to solving the problem of automatic language identification is the detection and
exploitation of such differences between languages.
We assume that each language in the world has a unique acoustic structure, and that
this structure can be defined in terms of phonetic and prosodic features of speech.
Phonetic, or segmental, features include the inventory of phonetic segments
and their frequency of occurrence in speech. Prosodic information consists of the
relative durations and amplitudes of sonorant (vowel-like) segments, their spacing
in time, and patterns of pitch change within and across these segments.
To the extent that these assumptions are valid, languages can be identified automatically by segmenting speech into broad phonetic categories, computing segmentbased features that capture the relevant phonetic and prosodic structure, and training a classifier to associate the feature measurements with the spoken language.
We have developed a language identification system that uses a neural network to
segment speech into a sequence of seven broad phonetic categories. Information
about these categories is then used to train a second neural network to discriminate
among utterances spoken by native speakers of American English, Japanese, Mandarin Chinese and Tamil. When tested on utterances produced by six new speakers
from each language, the system correctly identifies the language being spoken 89.5%
of the time.
2 SYSTEM OVERVIEW
The following steps transform an input utterance into a decision about what language was spoken.
Data Capture
The speech is recorded using a Sennheiser HMD 224 noise-canceling microphone,
low-pass filtered at 7.6 kHz and sampled at 16 kHz.
Signal Representations
A number of waveform and spectral parameters are computed in preparation for
further processing. The spectral parameters are generated from a 128-point discrete
Fourier transform computed on a 10 ms Hanning window. All parameters are
computed every 3 ms.
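As a rough illustration of this front end, the following Python sketch (our own; the paper predates these libraries) computes framewise DFT magnitudes over 10 ms Hanning windows with a 3 ms hop. The sampling rate and window sizes follow the text; the function name and use of NumPy are assumptions.

```python
import numpy as np

def framewise_dft(signal, fs=16000, win_ms=10, hop_ms=3, nfft=128):
    """Compute DFT magnitude features every hop_ms over win_ms Hanning windows."""
    win = int(fs * win_ms / 1000)   # 160 samples at 16 kHz
    hop = int(fs * hop_ms / 1000)   # 48 samples at 16 kHz (one 3 ms frame)
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        # 128-point DFT of the windowed frame (truncated to nfft samples);
        # the paper keeps 64 coefficients from this transform.
        spec = np.abs(np.fft.rfft(seg[:nfft], n=nfft))
        frames.append(spec[:64])
    return np.array(frames)  # one row per 3 ms frame
```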
The waveform parameters consist of estimates of (i) zc8000: the zero-crossing rate
of the waveform in a 10 ms window, (ii) ptp700 and ptp8000: the peak-to-peak
amplitude of the waveform in a 10 ms window in two frequency bands (0-700 Hz
and 0-8000 Hz), and (iii) pitch: the presence or absence of pitch in each 3 ms frame.
The pitch estimate is derived from a neural network pitch tracker that locates pitch
periods in the filtered (0-700 Hz) waveform [2]. The spectral parameters consist
of (i) DFT coefficients, (ii) sda700 and sda8000: estimates of averaged spectral
difference in two frequency bands, (iii) sdf: spectral difference in adjacent 9 ms
intervals, and (iv) cm1000: the center-of-mass of the spectrum in the region of the
first formant.
Broad Category Segmentation
Segmentation is performed by a fully-connected, feedforward, three-layer neural
network that assigns 7 broad phonetic category scores to each 3 ms time frame of
the utterance. The broad phonetic categories are: VOC (vowel), FRIC (fricative),
STOP (pre-vocalic stop), PRVS (pre-vocalic sonorant), INVS (inter-vocalic sonorant), POVS (post-vocalic sonorant), and CLOS (silence or background noise). A
Viterbi search, which incorporates duration and bigram probabilities, uses these
frame-based output activations to find the best scoring sequence of broad phonetic
category labels spanning the utterance. The segmentation algorithm is described
in greater detail in [3].
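The paper does not give the search in code; the sketch below shows one standard way such a Viterbi decoding over per-frame category scores could look, with hypothetical bigram (transition) log-probabilities. Duration modeling is omitted for brevity, and all names are our own.

```python
import numpy as np

def viterbi_segment(frame_scores, log_trans, log_prior):
    """frame_scores: (T, C) per-frame log-scores for C broad categories.
    log_trans: (C, C) bigram log-probabilities; log_prior: (C,) initial log-probs.
    Returns the best-scoring category label for each frame."""
    T, C = frame_scores.shape
    delta = log_prior + frame_scores[0]     # best score ending in each state
    back = np.zeros((T, C), dtype=int)      # backpointers
    for t in range(1, T):
        cand = delta[:, None] + log_trans   # cand[i, j]: best path via i -> j
        back[t] = np.argmax(cand, axis=0)
        delta = cand[back[t], np.arange(C)] + frame_scores[t]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta)
    for t in range(T - 2, -1, -1):          # trace back the best path
        path[t] = back[t + 1, path[t + 1]]
    return path
```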
Language Classification
Language classification is performed by a second fully-connected feedforward network that uses phonetic and prosodic features derived from the time-aligned broad
category sequence. These features, described below, are designed to capture the
phonetic and prosodic differences between the four languages.
3 FOUR-LANGUAGE HIGH-QUALITY SPEECH DATABASE
The data for this research consisted of natural continuous speech recorded in a laboratory by 20 native speakers (10 male and 10 female) of each of American English,
Mandarin Chinese, Japanese and Tamil. The speakers were asked to speak a total
of 20 utterances!: 15 conversational sentences of their choice, two questions of their
choice, the days of the week, the months of the year and the numbers 0 through 10.
The objective was to have a mix of unconstrained- and restricted-vocabulary speech.
The segmentation algorithm was trained on just the conversational sentences, while
the language classifier used all utterances from each speaker.
4 NEURAL NETWORK SEGMENTATION
4.1 SEGMENTER TRAINING
4.1.1 Training and Test Sets
Five utterances from each of 16 speakers per language were used to train and test
the segmenter. The training set had 50 utterances from 10 speakers (5 male and 5
female) from each of the 4 languages, for a total of 200 utterances. The development
test set had 10 utterances from a different set of 2 speakers (1 male and 1 female)
from each language, for a total of 40 utterances. The final test set had 20 utterances
from yet another set of 4 speakers (2 male and 2 female) from each language for a
total of 80 utterances. The average duration of the utterances in the training set
was 4.7 secs and that of the test sets was 5.7 secs.
4.1.2 Network Architecture
The segmentation network was a fully-connected, feed-forward network with 304
input units, 18 hidden units and 7 output units. The number of hidden units was
determined experimentally. Figure 1 shows the network configuration and the input
features.
[1] Five speakers in Japanese and one in Tamil provided only 10 utterances each.
[Figure: network diagram with 304 input units, 18 hidden units, and 7 output units (VOC, FRIC, CLOS, STOP, PRVS, INVS, POVS); the inputs comprise 64 DFT coefficients plus 30 samples each of the zero-crossing, peak-to-peak amplitude, averaged spectral difference, spectral difference, pitch, and center-of-mass parameters.]
Figure 1: Segmentation Network
4.1.3 Feature Measurements
The feature measurements used to train the network include the 64 DFT coefficients
at the frame to be classified and 30 samples each of zc8000, ptp700, ptp8000, sda700,
sda8000, sdf, pitch and cm1000, for a total of 304 features. These samples were
taken from a 330 ms window centered on the frame, with more samples being taken
in the immediate vicinity of the frame than near the ends of the window.
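A sketch of how such a 304-dimensional input vector might be assembled follows; the nonuniform sample offsets are illustrative (the paper does not list the exact spacing), and the parameter arrays are assumed to be aligned at 3 ms frames.

```python
import numpy as np

def frame_input_vector(dft, params, t, n_samples=30, half_width_frames=55):
    """dft: (T, 64) DFT coefficients; params: list of 8 length-T arrays
    (zc8000, ptp700, ptp8000, sda700, sda8000, sdf, pitch, cm1000).
    Returns the 64 + 8*30 = 304 features for frame t."""
    # Offsets covering roughly +/-165 ms (55 frames at 3 ms). Quadratic spacing
    # with rounding repeats small offsets, giving denser sampling near the
    # center, as in the paper; the exact spacing here is our own choice.
    half = n_samples // 2
    mag = (np.arange(1, half + 1) / half) ** 2 * half_width_frames
    offsets = np.round(np.concatenate([-mag[::-1], mag])).astype(int)
    idx = np.clip(t + offsets, 0, len(dft) - 1)
    feats = [dft[t]]                  # 64 spectral coefficients at frame t
    for p in params:                  # 30 context samples per parameter
        feats.append(np.asarray(p)[idx])
    return np.concatenate(feats)      # 304-dimensional input
```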
4.1.4 Hand-labeling
Both the training and test utterances were hand-labeled with 7 broad phonetic
category labels and checked by a second labeler for correctness and consistency.
4.1.5 Coarse Sampling of Frames
As it was not computationally feasible to train on every 3 ms frame in each utterance, only a few frames were chosen at random from each segment. To ensure an approximately equal number of frames from each category, fewer frames were sampled from the more frequent categories such as vowels and closures.
4.1.6 Network Training
The networks were trained using backpropagation with conjugate gradient optimization [1]. Training was continued until the performance of the network on the
development test set leveled off.
4.2 SEGMENTER EVALUATION
Segmentation performance was evaluated on the 80-utterance final test set. The
segmenter output was compared to the hand-labels for each 3 ms time frame. First
choice accuracy was 85.1% across the four languages. When scored on the middle 80% and middle 60% of each segment, the accuracy rose to 86.9% and 88.0%
respectively, pointing to the presence of boundary errors.
5 LANGUAGE IDENTIFICATION
5.1 CLASSIFIER TRAINING
5.1.1 Training and Test Sets
The training set contained 12 speakers from each language, with 10 or 20 utterances
per speaker, for a total of 930 utterances. The development test set contained a
different group of 2 speakers per language with 20 utterances from each speaker, for
a total of 160 utterances. The final test set had 6 speakers per language, with 10
or 20 utterances per speaker, for a total of 440 utterances. The average duration of
the utterances in the training set was 5.1 seconds and that of the test sets was 5.5
seconds.
5.1.2 Feature Development
Several passes were needed through the iterative process of feature development
and network training before a satisfactory feature set was obtained. Much of the
effort was concentrated on statistical and linguistic analysis of the languages with
the objective of determining the distinguishing characteristics among them. For
example, the knowledge that Mandarin Chinese was the only monosyllabic tonal
language in the set (the other three being stress languages), led us to design features
that attempted to capture the large variation in pitch within and across segments for
Mandarin Chinese utterances. Similarly, the presence of sequences of equal-length
broad category segments in Japanese utterances led us to design an "inter-segment duration difference" feature. The final set of 80 features is described below; a sketch of computing several of them appears after the list. All the features are computed over the entire length of an utterance and use the time-aligned broad category sequence provided by the segmentation algorithm. The numbers in parentheses refer to the number of values generated.
- Intra-segment pitch variation: Average of the standard deviations of the pitch within all sonorant segments (VOC, PRVS, INVS, POVS) (4 values)
- Inter-segment pitch variation: Standard deviation of the average pitch in all sonorant segments (4 values)
- Frequency of occurrence (number of occurrences per second of speech) of triples of segments. The following triples were chosen based on statistical analyses of the training data: VOC-INVS-VOC, CLOS-PRVS-VOC, VOC-POVS-CLOS, STOP-VOC-FRIC, STOP-VOC-CLOS, and FRIC-VOC-CLOS (6 values)
- Frequency of occurrence of each of the seven broad phonetic labels (7 values)
- Frequency of occurrence of all segments (number of segments per second) (1 value)
- Frequency of occurrence of all consonants (STOPs and FRICs) (1 value)
- Frequency of occurrence of all sonorants (4 values)
- Ratio of number of sonorant segments to total number of segments (1 value)
- Fraction of the total duration of the utterance devoted to each of the seven broad phonetic labels (7 values)
- Fraction of the total duration of the utterance devoted to all sonorants (1 value)
- Frequency of occurrence of voiced consonants (1 value)
- Ratio of voiced consonants to total number of consonants (1 value)
- Average duration of the seven broad phonetic labels (7 values)
- Standard deviation of the duration of the seven broad phonetic labels (7 values)
- Segment-pair ratios: conditional probability of occurrence of selected pairs of segments. The segment-pairs were selected based on histogram plots generated on the training set. Examples of selected pairs: POVS-FRIC, VOC-FRIC, INVS-VOC, etc. (27 values)
- Inter-segment duration difference: Average absolute difference in durations between successive segments (1 value)
- Standard deviation of the inter-segment duration differences (1 value)
- Average distance between the centers of successive vowels (1 value)
- Standard deviation of the distances between centers of successive vowels (1 value)
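To make the feature definitions concrete, here is a sketch (our own) computing a few of them from a time-aligned segment sequence; segment boundaries in seconds and the seven label names are assumed inputs, and only a handful of the 80 features are shown.

```python
import numpy as np

LABELS = ["VOC", "FRIC", "STOP", "PRVS", "INVS", "POVS", "CLOS"]

def segment_features(segs, total_dur):
    """segs: list of (label, start_sec, end_sec) from the segmenter.
    Returns a dict with a few of the paper's segment-based features."""
    feats = {}
    for lab in LABELS:
        durs = [e - s for (l, s, e) in segs if l == lab]
        feats["freq_" + lab] = len(durs) / total_dur      # occurrences per second
        feats["avg_dur_" + lab] = np.mean(durs) if durs else 0.0
        feats["std_dur_" + lab] = np.std(durs) if durs else 0.0
        feats["dur_frac_" + lab] = sum(durs) / total_dur  # fraction of utterance
    # Inter-segment duration difference: average |dur_i - dur_{i+1}|
    d = [e - s for (_, s, e) in segs]
    feats["inter_seg_dur_diff"] = (
        float(np.mean(np.abs(np.diff(d)))) if len(d) > 1 else 0.0)
    return feats
```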
5.2 LANGUAGE IDENTIFICATION PERFORMANCE
5.2.1 Single Utterances
During the feature development phase, the 2 speakers-per-language development
test set was used. The classifier performed at an accuracy of 90.0% on this small
test set. For final evaluation, the development test set was combined with the
original training set to form a 14 speakers-per-language training set. The performance of the classifier on the 6 speakers-per-language final test set was 79.6%. The
individual language performances were English 75.8%, Japanese 77.0%, Mandarin
Chinese 78.3%, and Tamil 88.0%. This result was obtained with training and test
set utterances that were approximately 5.4 seconds long on the average.
5.2.2 Concatenated Utterances
To observe the effect of training and testing with longer durations of speech per
utterance, a series of experiments were conducted in which pairs and triples of
utterances from each speaker were concatenated end-to-end (with 350 ms of silence
in between to simulate natural pauses) in both the training and test sets. It is to
be noted that the total duration of speech used in training and testing remained
unchanged for all these experiments. Table 1 summarizes the performance of the
Table 1: Percentage Accuracy on Varying Durations of Speech Per Utterance

Avge. Duration of        Avge. Duration of Test Utts. (sec)
Training Utts. (sec)       5.3      10.6      15.2
       5.7                79.6      71.8      67.9
       11.4               73.6      86.8      85.5
       17.1               71.2      85.0      89.5
classifier when trained and tested on different durations of speech per utterance.
The rows of the table show the effect of testing on progressively longer utterances
for a given training set, while the columns of the table show the effect of training
on progressively longer utterances for a given test set. Not surprisingly, the best
performance is obtained when the classifier is trained and tested on three utterances
concatenated together.
6 DISCUSSION
The results indicate that the system performs better on longer utterances. This
is to be expected given the feature set, since the segment-based statistical features
tend to be more reliable with a larger number of segments. Also, it is interesting
to note that we have obtained an accuracy of 89.5% without using any spectral
information in the classifier feature set. All of the features are based on the broad
phonetic category segment sequences provided by the segmenter.
It should be noted that approximately 15% of the utterances in the training and test
sets consisted of a fixed vocabulary: the days of the week, the months of the year
and the numbers zero through ten. It is likely that the inclusion of these utterances
inflated classification performance. Nevertheless, we are encouraged by the 10.5%
error rate, given the small number of speakers and utterances used to train the
system.
Acknowledgements
This research was supported in part by NSF grant No. IRI-9003110, a grant from
Apple Computer, Inc., and by a grant from DARPA to the Department of Computer
Science & Engineering at the Oregon Graduate Institute. We thank Mark Fanty
for his many useful comments.
References
[1] E. Barnard and R. A. Cole. A neural-net training program based on conjugate-gradient optimization. Technical Report CSE 89-014, Department of Computer Science, Oregon Graduate Institute of Science and Technology, 1989.
[2] E. Barnard, R. A. Cole, M. P. Vea, and F. A. Alleva. Pitch detection with a
neural-net classifier. IEEE Transactions on Signal Processing, 39(2):298-307,
February 1991.
[3] Y. K. Muthusamy, R. A. Cole, and M. Gopalakrishnan. A segment-based approach to automatic language identification. In Proceedings 1991 IEEE International Conference on Acoustics, Speech, and Signal Processing, Toronto,
Canada, May 1991.
Estimating the Unseen:
Improved Estimators for Entropy and other
Properties
Paul Valiant†
Brown University
Providence, RI 02912
pvaliant@gmail.com
Gregory Valiant*
Stanford University
Stanford, CA 94305
valiant@stanford.edu
Abstract
Recently, Valiant and Valiant [1, 2] showed that a class of distributional properties, which includes such practically relevant properties as entropy, the number
of distinct elements, and distance metrics between pairs of distributions, can be
estimated given a sublinear sized sample. Specifically, given a sample consisting
of independent draws from any distribution over at most n distinct elements, these
properties can be estimated accurately using a sample of size $O(n/\log n)$. We
propose a novel modification of this approach and show: 1) theoretically, this estimator is optimal (to constant factors, over worst-case instances), and 2) in practice,
it performs exceptionally well for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. Perhaps unsurprisingly, the key
step in our approach is to first use the sample to characterize the "unseen" portion
of the distribution. This goes beyond such tools as the Good-Turing frequency
estimation scheme, which estimates the total probability mass of the unobserved
portion of the distribution: we seek to estimate the shape of the unobserved portion
of the distribution. This approach is robust, general, and theoretically principled;
we expect that it may be fruitfully used as a component within larger machine
learning and data analysis systems.
1 Introduction
What can one infer about an unknown distribution based on a random sample? If the distribution
in question is relatively "simple" in comparison to the sample size (for example, if our sample consists of 1000 independent draws from a distribution supported on 100 domain elements), then the empirical distribution given by the sample will likely be an accurate representation of the true distribution. If, on the other hand, we are given a relatively small sample in relation to the size and complexity of the distribution (for example, a sample of size 100 drawn from a distribution supported on 1000 domain elements), then the empirical distribution may be a poor approximation
of the true distribution. In this case, can one still extract accurate estimates of various properties of
the true distribution?
Many real?world machine learning and data analysis tasks face this challenge; indeed there are
many large datasets where the data only represent a tiny fraction of an underlying distribution we
hope to understand. This challenge of inferring properties of a distribution given a "too small"
sample is encountered in a variety of settings, including text data (typically, no matter how large the
corpus, around 30% of the observed vocabulary only occurs once), customer data (many customers
or website users are only seen a small number of times), the analysis of neural spike trains [15],
* http://theory.stanford.edu/~valiant/ A portion of this work was done while at Microsoft Research.
† http://cs.brown.edu/people/pvaliant/
and the study of genetic mutations across a population[1]. Additionally, many database management
tasks employ sampling techniques to optimize query execution; improved estimators would allow
for either smaller sample sizes or increased accuracy, leading to improved efficiency of the database
system (see, e.g. [6, 7]).
We introduce a general and robust approach for using a sample to characterize the "unseen" portion of the distribution. Without any a priori assumptions about the distribution, one cannot know what the unseen domain elements are. Nevertheless, one can still hope to estimate the "shape" or histogram of the unseen portion of the distribution: essentially, we estimate how many unseen domain
elements occur in various probability ranges. Given such a reconstruction, one can then use it to
estimate any property of the distribution which only depends on the shape/histogram; such properties are termed symmetric and include entropy and support size. In light of the long history of
work on estimating entropy by the neuroscience, statistics, computer science, and information theory communities, it is compelling that our approach (which is agnostic to the property in question)
outperforms these entropy-specific estimators.
Additionally, we extend this intuition to develop estimators for properties of pairs of distributions,
the most important of which are the distance metrics. We demonstrate that our approach can accurately estimate the total variational distance (also known as statistical distance or $\ell_1$ distance)
between distributions using small samples. To illustrate the challenge of estimating variational distance (between distributions over discrete domains) given small samples, consider drawing two samples, each consisting of 1000 draws from a uniform distribution over 10,000 distinct elements. Each
sample can contain at most 10% of the domain elements, and their intersection will likely contain
only 1% of the domain elements; yet from this, one would like to conclude that these two samples
must have been drawn from nearly identical distributions.
1.1 Previous work: estimating distributions, and estimating properties
There is a long line of work on inferring information about the unseen portion of a distribution,
beginning with independent contributions from both R.A. Fisher and Alan Turing during the 1940s.
Fisher was presented with data on butterflies collected over a 2 year expedition in Malaysia, and
sought to estimate the number of new species that would be discovered if a second 2 year expedition
were conducted [8]. (His answer was "≈ 75.") At nearly the same time, as part of the British WWII effort to understand the statistics of the German Enigma ciphers, Turing and I.J. Good were working
on the related problem of estimating the total probability mass accounted for by the unseen portion of
a distribution [9]. This resulted in the Good-Turing frequency estimation scheme, which continues
to be employed, analyzed, and extended by our community (see, e.g. [10, 11]).
More recently, in similar spirit to this work, Orlitsky et al. posed the following natural question:
given a sample, what distribution maximizes the likelihood of seeing the observed species frequencies, that is, the number of species observed once, twice, etc.? [12, 13] (What Orlitsky et al. term
the pattern of a sample, we call the fingerprint, as in Definition 1.) Orlitsky et al. show that such
likelihood maximizing distributions can be found in some specific settings, though the problem of
finding or approximating such distributions for typical patterns/fingerprints may be difficult. Recently, Acharya et al. showed that this maximum likelihood approach can be used to yield a nearoptimal algorithm for deciding whether two samples originated from identical distributions, versus
distributions that have large distance [14].
In contrast to this approach of trying to estimate the "shape/histogram" of a distribution, there has been nearly a century of work proposing and analyzing estimators for particular properties of distributions. In Section 3 we describe several standard, and some recent, estimators for entropy, though we refer the reader to [15] for a thorough treatment. There is also a large literature on estimating support size (also known as the "species problem", and the related "distinct elements" problem), and
we refer the reader to [16] and to [17] for several hundred references.
Over the past 15 years, the theoretical computer science community has spent significant effort
developing estimators and establishing worst-case information theoretic lower bounds on the sample
size required for various distribution estimation tasks, including entropy and support size (e.g. [18,
19, 20, 21]).
[1] Three recent studies (appearing in Science last year) found that very rare genetic mutations are especially abundant in humans, and observed that better statistical tools are needed to characterize this "rare events" regime, so as to resolve fundamental problems about our evolutionary process and selective pressures [3, 4, 5].
The algorithm we present here is based on the intuition of the estimator described in our theoretical
work [1]. That estimator is not practically viable, and additionally, requires as input an accurate
upper bound on the support size of the distribution in question. Both the algorithm proposed in this
current work and that of [1] employ linear programming, though these programs differ significantly
(to the extent that the linear program of [1] does not even have an objective function and simply
defines a feasible region). Our proof of the theoretical guarantees in this work leverages some of
the machinery of [1] (in particular, the ?Chebyshev bump construction?) and achieves the same
theoretical worst-case optimality guarantees. See Appendix A for further theoretical and practical
comparisons with the estimator of [1].
1.2 Definitions and examples
We begin by defining the fingerprint of a sample, which essentially removes all the label-information
from the sample. For the remainder of this paper, we will work with the fingerprint of a sample,
rather than with the sample itself.
Definition 1. Given a sample $X = (x_1, \ldots, x_k)$, the associated fingerprint, $F = (F_1, F_2, \ldots)$, is the "histogram of the histogram" of the sample. Formally, $F$ is the vector whose ith component, $F_i$, is the number of elements in the domain that occur exactly i times in sample X.
For estimating entropy, or any other property whose value is invariant to relabeling the distribution
support, the fingerprint of a sample contains all the relevant information (see [21] for a formal proof
of this fact). We note that in some of the literature, the fingerprint is alternately termed the pattern,
histogram, histogram of the histogram or collision statistics of the sample.
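Computing a fingerprint from a sample is straightforward; a minimal sketch (our own, using collections.Counter) follows, checked against the animal sample of Example 3 below.

```python
from collections import Counter

def fingerprint(sample):
    """F[i-1] = number of domain elements appearing exactly i times (i >= 1)."""
    counts = Counter(sample)           # element -> multiplicity
    mult = Counter(counts.values())    # multiplicity -> number of elements
    m = max(mult) if mult else 0
    return [mult.get(i, 0) for i in range(1, m + 1)]

# The sample from Example 3 yields F = (2, 0, 1, 0, 1)
sample = ["mouse", "mouse", "bird", "cat", "mouse",
          "bird", "bird", "mouse", "dog", "mouse"]
assert fingerprint(sample) == [2, 0, 1, 0, 1]
```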
In analogy with the fingerprint of a sample, we define the histogram of a distribution, a representation
in which the labels of the domain have been removed.
Definition 2. The histogram of a distribution D is a mapping $h_D : (0, 1] \to \mathbb{N} \cup \{0\}$, where $h_D(x)$ is equal to the number of domain elements that each occur in distribution D with probability x. Formally, $h_D(x) = |\{\alpha : D(\alpha) = x\}|$, where $D(\alpha)$ is the probability mass that distribution D assigns to domain element $\alpha$. We will also allow for "generalized histograms" in which $h_D$ does not necessarily take integral values.
Since $h(x)$ denotes the number of elements that have probability x, we have $\sum_{x : h(x) \neq 0} x \cdot h(x) = 1$, as the total probability mass of a distribution is 1. Any symmetric property is a function of only the histogram of the distribution:
- The Shannon entropy $H(D)$ of a distribution D is defined to be
  $$H(D) := -\sum_{\alpha \in \mathrm{sup}(D)} D(\alpha) \log_2 D(\alpha) = -\sum_{x : h_D(x) \neq 0} h_D(x)\, x \log_2 x.$$
- The support size is the number of domain elements that occur with positive probability:
  $$|\mathrm{sup}(D)| := |\{\alpha : D(\alpha) > 0\}| = \sum_{x : h_D(x) \neq 0} h_D(x).$$
We provide an example to illustrate the above definitions:
Example 3. Consider a sequence of animals, obtained as a sample from the distribution of animals
on a certain island, X = (mouse, mouse, bird, cat, mouse, bird, bird, mouse, dog, mouse). We
have F = (2, 0, 1, 0, 1), indicating that two species occurred exactly once (cat and dog), one species
occurred exactly three times (bird), and one species occurred exactly five times (mouse).
Consider the following distribution of animals:
Pr(mouse) = 1/2, Pr(bird) = 1/4, Pr(cat) = Pr(dog) = Pr(bear) = Pr(wolf) = 1/16.
The associated histogram of this distribution is $h : (0, 1] \to \mathbb{Z}$ defined by $h(1/16) = 4$, $h(1/4) = 1$, $h(1/2) = 1$, and for all $x \notin \{1/16, 1/4, 1/2\}$, $h(x) = 0$.
As we will see in Example 5 below, the fingerprint of a sample is intimately related to the Binomial
distribution; the theoretical analysis will be greatly simplified by reasoning about the related Poisson
distribution, which we now define:
Definition 4. We denote the Poisson distribution of expectation $\lambda$ as $Poi(\lambda)$, and write $poi(\lambda, j) := \frac{e^{-\lambda} \lambda^j}{j!}$ to denote the probability that a random variable with distribution $Poi(\lambda)$ takes value j.
Example 5. Let D be the uniform distribution with support size 1000. Then hD (1/1000) = 1000,
and for all x ?= 1/1000, hD (x) = 0. Let X be a sample consisting of 500 independent draws
from D. Each element of the domain, in expectation, will occur 1/2 times in X, and thus the
number of occurrences of each domain element in the sample X will be roughly distributed as
P oi(1/2). (The exact distribution will be Binomial(500, 1/1000), though the Poisson distribution is an accurate approximation.) By linearity of expectation, the expected fingerprint satisfies
E[Fi ] ? 1000 ? poi(1/2, i). Thus we expect to see roughly 303 elements once, 76 elements twice, 13
elements three times, etc., and in expectation 607 domain elements will not be seen at all.
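A quick numerical check of Example 5 (a sketch of our own) using the Poisson approximation to the expected fingerprint:

```python
import math

def poi(lam, j):
    """Poisson pmf: poi(lam, j) = e^{-lam} * lam^j / j!"""
    return math.exp(-lam) * lam**j / math.factorial(j)

n, k = 1000, 500
expected_F = [n * poi(k / n, i) for i in range(1, 4)]
print([round(f) for f in expected_F])  # -> [303, 76, 13], as in Example 5
print(round(n * poi(k / n, 0)))        # -> 607 elements unseen in expectation
```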
2
Estimating the unseen
Given the fingerprint F of a sample of size k, drawn from a distribution with histogram h, our high-level approach is to find a histogram h′ that has the property that if one were to take k independent draws from a distribution with histogram h′, the fingerprint of the resulting sample would be similar to the observed fingerprint F. The hope is then that h and h′ will be similar, and, in particular, have similar entropies, support sizes, etc.
As an illustration of this approach, suppose we are given a sample of size k = 500, with fingerprint F = (301, 78, 13, 1, 0, 0, ...); recalling Example 5, we recognize that F is very similar to the expected fingerprint that we would obtain if the sample had been drawn from the uniform distribution over support 1000. Although the sample only contains 391 unique domain elements, we might be justified in concluding that the entropy of the true distribution from which the sample was drawn is close to $H(Unif(1000)) = \log_2(1000)$.
In general, how does one obtain a "plausible" histogram from a fingerprint in a principled fashion? We must start by understanding how to obtain a plausible fingerprint from a histogram.
Given a distribution D, and some domain element $\alpha$ occurring with probability $x = D(\alpha)$, the probability that it will be drawn exactly i times in k independent draws from D is $\Pr[\mathrm{Binomial}(k, x) = i] \approx poi(kx, i)$. By linearity of expectation, the expected ith fingerprint entry will roughly satisfy
$$E[F_i] \approx \sum_{x : h_D(x) \neq 0} h_D(x)\, poi(kx, i). \qquad (1)$$
This mapping between histograms and expected fingerprints is linear in the histogram, with coefficients given by the Poisson probabilities. Additionally, it is not hard to show that $\mathrm{Var}[F_i] \le E[F_i]$, and thus the fingerprint is tightly concentrated about its expected value. This motivates a "first moment" approach. We will, roughly, invert the linear map from histograms to expected fingerprint entries, to yield a map from observed fingerprints to plausible histograms h′.
There is one additional component of our approach. For many fingerprints, there will be a large space of equally plausible histograms. To illustrate, suppose we obtain fingerprint F = (10, 0, 0, 0, ...), and consider the two histograms given by the uniform distributions with respective support sizes 10,000 and 100,000. Given either distribution, the probability of obtaining the observed fingerprint from a set of 10 samples is > .99, yet these distributions are quite different and have very different entropy values and support sizes. They are both very plausible; which distribution should we return? To resolve this issue in a principled fashion, we strengthen our initial goal of "returning a histogram that could have plausibly generated the observed fingerprint": we instead return the simplest histogram that could have plausibly generated the observed fingerprint. Recall the example above, where we observed only 10 distinct elements, but to explain the data we could either infer an additional 9,900 unseen elements, or an additional 99,000. In this sense, inferring "only" 9,900 additional unseen elements is the simplest explanation that fits the data, in the spirit of Occam's razor.[2]
[2] The practical performance seems virtually unchanged if one returns the "plausible" histogram of minimal entropy, instead of minimal support size (see Appendix B).
2.1 The algorithm
We pose this problem of finding the simplest plausible histogram as a pair of linear programs. The first linear program will return a histogram h′ that minimizes the distance between its expected fingerprint and the observed fingerprint, where we penalize the discrepancy between $F_i$ and $E[F_i]$ in proportion to the inverse of the standard deviation of $F_i$, which we estimate as $1/\sqrt{1 + F_i}$, since Poisson distributions have variance equal to their expectation. The constraint that h′ corresponds to
a histogram simply means that the total probability mass is 1, and all probability values are nonnegative. The second linear program will then find the histogram h′′ of minimal support size, subject to the constraint that the distance between its expected fingerprint and the observed fingerprint is not much worse than that of the histogram found by the first linear program.
To make the linear programs finite, we consider a fine mesh of values $x_1, \ldots, x_\ell \in (0, 1]$ that between them discretely approximate the potential support of the histogram. The variables of the linear program, $h'_1, \ldots, h'_\ell$, will correspond to the histogram values at these mesh points, with variable $h'_i$ representing the number of domain elements that occur with probability $x_i$, namely $h'(x_i)$.
A minor complicating issue is that this approach is designed for the challenging "rare events" regime, where there are many domain elements each seen only a handful of times. By contrast, if there is a domain element that occurs very frequently, say with probability 1/2, then the number of times it occurs will be concentrated about its expectation of k/2 (and the trivial empirical estimate will be accurate), though fingerprint $F_{k/2}$ will not be concentrated about its expectation, as it will take an integer value of either 0, 1 or 2. Hence we will split the fingerprint into the "easy" and "hard" portions, and use the empirical estimator for the easy portion and our linear programming approach for the hard portion. The full algorithm is below (see our websites or Appendix D for Matlab code).
Algorithm 1. ESTIMATE UNSEEN
Input: Fingerprint $F = F_1, F_2, \ldots, F_m$, derived from a sample of size k,
vector $x = x_1, \ldots, x_\ell$ with $0 < x_i \le 1$, and error parameter $\alpha > 0$.
Output: List of pairs $(y_1, h'_{y_1}), (y_2, h'_{y_2}), \ldots$, with $y_i \in (0, 1]$ and $h'_{y_i} \ge 0$.
- Initialize the output list of pairs to be empty, and initialize a vector F′ to be equal to F.
- For i = 1 to k,
  If $\sum_{j \in \{i - \lceil\sqrt{i}\rceil, \ldots, i + \lceil\sqrt{i}\rceil\}} F_j \le 2\sqrt{i}$ [i.e. if the fingerprint is "sparse" at index i],
  set $F'_i = 0$, and append the pair $(i/k, F_i)$ to the output list.
- Let $v_{opt}$ be the objective function value returned by running Linear Program 1 on input F′, x.
- Let h be the histogram returned by running Linear Program 2 on input F′, x, $v_{opt}$, $\alpha$.
- For all i s.t. $h_i > 0$, append the pair $(x_i, h_i)$ to the output list.
Linear Program 1. FIND PLAUSIBLE HISTOGRAM
Input: Fingerprint $F = F_1, F_2, \ldots, F_m$, derived from a sample of size k,
vector $x = x_1, \ldots, x_\ell$ consisting of a fine mesh of points in the interval (0, 1].
Output: vector $h' = h'_1, \ldots, h'_\ell$, and objective value $v_{opt} \in \mathbb{R}$.
Let $h'_1, \ldots, h'_\ell$ and $v_{opt}$ be, respectively, the solution assignment and corresponding objective function value of the following linear program, with variables $h'_1, \ldots, h'_\ell$:
$$\text{Minimize: } \sum_{i=1}^{m} \frac{1}{\sqrt{1 + F_i}} \left| F_i - \sum_{j=1}^{\ell} h'_j \cdot poi(k x_j, i) \right|$$
$$\text{Subject to: } \sum_{j=1}^{\ell} x_j h'_j = \sum_i i F_i / k, \quad \text{and } \forall j,\ h'_j \ge 0.$$
Linear Program 2. FIND SIMPLEST PLAUSIBLE HISTOGRAM
Input: Fingerprint $F = F_1, F_2, \ldots, F_m$, derived from a sample of size k,
vector $x = x_1, \ldots, x_\ell$ consisting of a fine mesh of points in the interval (0, 1],
optimal objective function value $v_{opt}$ from Linear Program 1, and error parameter $\alpha > 0$.
Output: vector $h' = h'_1, \ldots, h'_\ell$.
Let $h'_1, \ldots, h'_\ell$ be the solution assignment of the following linear program, with variables $h'_1, \ldots, h'_\ell$:
$$\text{Minimize: } \sum_{j=1}^{\ell} h'_j$$
$$\text{Subject to: } \sum_{i=1}^{m} \frac{1}{\sqrt{1 + F_i}} \left| F_i - \sum_{j=1}^{\ell} h'_j \cdot poi(k x_j, i) \right| \le v_{opt} + \alpha, \quad \sum_{j=1}^{\ell} x_j h'_j = \sum_i i F_i / k, \quad \text{and } \forall j,\ h'_j \ge 0.$$
Theorem 1. There exists a constant $C_0 > 0$ and an assignment of the parameter $\alpha := \alpha(k)$ of Algorithm 1 such that for any $c > 0$, for sufficiently large n, given a sample of size $k = c \frac{n}{\log n}$ consisting of independent draws from a distribution D over a domain of size at most n, with probability at least $1 - e^{-n^{\Omega(1)}}$ over the randomness in the selection of the sample, Algorithm 1[3], when run with a sufficiently fine mesh $x_1, \ldots, x_\ell$, returns a histogram h′ such that $|H(D) - H(h')| \le \frac{C_0}{\sqrt{c}}$.
[3] For simplicity, we prove this statement for Algorithm 1 with the second bullet step of the algorithm modified as follows: there is an explicit cutoff N such that the linear programming approach is applied to fingerprint entries $F_i$ for $i \le N$, and the empirical estimate is applied to fingerprints $F_i$ for $i > N$.
The above theorem characterizes the worst-case performance guarantees of the above algorithm in terms of entropy estimation. The proof of Theorem 1 is rather technical, and we provide the complete proof, together with a high-level overview of the key components, in Appendix C. In fact, we prove a stronger theorem, guaranteeing that the histogram returned by Algorithm 1 is close (in a specific metric) to the histogram of the true distribution; this stronger theorem then implies that Algorithm 1 can accurately estimate any statistical property that is sufficiently Lipschitz continuous with respect to the specific metric on histograms.
The information theoretic lower bounds of [1] show that there is some constant $C_1$ such that for sufficiently large k, no algorithm can estimate the entropy of (worst-case) distributions of support size n to within ±0.1 with any probability of success greater than 0.6 when given a sample of size at most $k = C_1 \frac{n}{\log n}$. Together with Theorem 1, this establishes the worst-case optimality of Algorithm 1 (to constant factors).
3 Empirical results
In this section we demonstrate that Algorithm 1 performs well in practice. We begin by briefly
discussing the five entropy estimators to which we compare our estimator in Figure 1. The first
three are standard, and are, perhaps, the most commonly used estimators [15]. We then describe two
recently proposed estimators that have been shown to perform well [22].
The "naive" estimator: the entropy of the empirical distribution, namely, given a fingerprint F derived from a set of k samples,
$$H^{naive}(F) := \sum_i F_i \cdot \tfrac{i}{k} \left| \log_2 \tfrac{i}{k} \right|.$$
The Miller-Madow corrected estimator [23]: the naive estimator $H^{naive}$ corrected to try to account for the second derivative of the logarithm function, namely
$$H^{MM}(F) := H^{naive}(F) + \frac{\left(\sum_i F_i\right) - 1}{2k},$$
though we note that the numerator of the correction term is sometimes replaced by various related quantities, see [24].
The jackknifed naive estimator [25, 26]:
$$H^{JK}(F) := k \cdot H^{naive}(F) - \frac{k-1}{k} \sum_{j=1}^{k} H^{naive}(F^{-j}),$$
where $F^{-j}$ is the fingerprint given by removing the contribution of the jth sample.
The coverage adjusted estimator (CAE) [27]: Chao and Shen proposed the CAE, which is specifically designed to apply to settings in which there is a significant component of the distribution that is unseen, and was shown to perform well in practice in [22].[4] Given a fingerprint F derived from a set of k samples, let $P_s := 1 - F_1/k$ be the Good-Turing estimate of the probability mass of the "seen" portion of the distribution [9]. The CAE adjusts the empirical probabilities according to $P_s$, then applies the Horvitz-Thompson estimator for population totals [28] to take into account the probability that the elements were seen. This yields:
$$H^{CAE}(F) := -\sum_i F_i \cdot \frac{(i/k) P_s \log_2\!\left((i/k) P_s\right)}{1 - \left(1 - (i/k) P_s\right)^k}.$$
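For reference, a sketch (our own) of the simplest fingerprint-based estimators above; the jackknife is omitted since it needs the sample itself rather than its fingerprint.

```python
import numpy as np

def H_naive(F, k):
    """Empirical (plug-in) entropy from a fingerprint, in bits."""
    i = np.arange(1, len(F) + 1)
    p = i / k
    return float(np.sum(np.asarray(F) * p * np.abs(np.log2(p))))

def H_miller_madow(F, k):
    """Naive estimate plus the (#observed species - 1)/(2k) correction."""
    return H_naive(F, k) + (sum(F) - 1) / (2 * k)

def H_cae(F, k):
    """Coverage adjusted estimator of Chao and Shen [27]."""
    Ps = 1.0 - F[0] / k if len(F) > 0 else 1.0  # Good-Turing "seen" mass
    total = 0.0
    for i, Fi in enumerate(F, start=1):
        q = (i / k) * Ps
        if Fi == 0 or q <= 0:
            continue
        total += Fi * q * np.log2(q) / (1.0 - (1.0 - q) ** k)
    return -total
```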
The Best Upper Bound estimator [15]: The final estimator to which we compare ours is the Best Upper Bound (BUB) estimator of Paninski. This estimator is obtained by searching for a minimax linear estimator, with respect to a certain error metric. The linear estimators of [2] can be viewed as a variant of this estimator with provable performance bounds.[5] The BUB estimator requires, as input, an upper bound on the support size of the distribution from which the samples are drawn; if the bound provided is inaccurate, the performance degrades considerably, as was also remarked in [22]. In our experiments, we used Paninski's implementation of the BUB estimator (publicly available on his website), with default parameters. For the distributions with finite support, we gave the true support size as input, and thus we are arguably comparing our estimator to the best-case performance of the BUB estimator.
See Figure 1 for the comparison of Algorithm 1 with these estimators.
[4] One curious weakness of the CAE is that its performance is exceptionally poor on some simple large instances. Given a sample of size k from a uniform distribution over k elements, it is not hard to show that the bias of the CAE is $\Omega(\log k)$. This error is not even bounded! For comparison, even the naive estimator has error bounded by a constant in the limit as $k \to \infty$ in this setting. This bias of the CAE is easily observed in our experiments as the "hump" in the top row of Figure 1.
[5] We also implemented the linear estimators of [2], though found that the BUB estimator performed better.
[Figure: 18 panels of RMSE versus sample size (log scale) for the Naive, Miller-Madow, Jackknifed, CAE, BUB, and Unseen entropy estimators, on Unif[n], MixUnif[n], Zipf[n], Zipf2[n], Geom[n], and MixGeomZipf[n], for n = 1,000, 10,000, and 100,000.]
Figure 1: Plots depicting the square root of the mean squared error (RMSE) of each entropy estimator over 500 trials, plotted as a function of the sample size; note the logarithmic scaling of the x-axis. The samples are drawn from six classes of distributions: the uniform distribution, Unif[n], which assigns probability $p_i = 1/n$ for $i = 1, 2, \ldots, n$; an even mixture of Unif[n/5] and Unif[4n/5], which assigns probability $p_i = \frac{5}{2n}$ for $i = 1, \ldots, n/5$ and probability $p_i = \frac{5}{8n}$ for $i = n/5 + 1, \ldots, n$; the Zipf distribution Zipf[n], which assigns probability $p_i = \frac{1/i}{\sum_{j=1}^{n} 1/j}$ for $i = 1, 2, \ldots, n$ and is commonly used to model naturally occurring "power law" distributions, particularly in natural language processing; a modified Zipf distribution with power-law exponent 0.6, Zipf2[n], which assigns probability $p_i = \frac{1/i^{0.6}}{\sum_{j=1}^{n} 1/j^{0.6}}$ for $i = 1, 2, \ldots, n$; the geometric distribution Geom[n], which has infinite support and assigns probability $p_i = (1/n)(1 - 1/n)^i$ for $i = 1, 2, \ldots$; and lastly an even mixture of Geom[n/2] and Zipf[n/2]. For each distribution, we considered three settings of the parameter n: n = 1,000 (left column), n = 10,000 (center column), and n = 100,000 (right column). In each plot, the sample size ranges over the interval $[n^{0.6}, n^{1.25}]$.
All experiments were run in Matlab. The error parameter $\alpha$ in Algorithm 1 was set to 0.5 for all trials, and the vector $x = x_1, x_2, \ldots$ used as the support of the returned histogram was chosen to be a coarse geometric mesh, with $x_1 = 1/k^2$ and $x_i = 1.1 x_{i-1}$. The experimental results are essentially unchanged if the parameter $\alpha$ is varied within the range [0.25, 1], or if $x_1$ is decreased, or if the mesh is made more fine (see Appendix B). Appendix D contains our Matlab implementation of Algorithm 1 (also available from our websites).
The unseen estimator performs far better than the three standard estimators, dominates the CAE estimator for larger sample sizes and on samples from the Zipf distributions, and also dominates the BUB estimator, even for the uniform and Zipf distributions for which the BUB estimator received the true support sizes as input.
[Figure: three panels, "Estimating Distance (d=0)", "Estimating Distance (d=0.5)", and "Estimating Distance (d=1)", each plotting the estimated L1 distance versus sample size for the Naive and Unseen estimators.]
Figure 2: Plots depicting the estimated total variation distance ($\ell_1$ distance) between two uniform distributions on n = 10,000 points, in three cases: the two distributions are identical (left plot, d = 0), the supports overlap on half their domain elements (center plot, d = 0.5), and the distributions have disjoint supports (right plot, d = 1). The estimate of the distance is plotted along with error bars at plus and minus one standard deviation; our results are compared with those for the naive estimator (the distance between the empirical distributions). The unseen estimator can be seen to reliably distinguish between the d = 0, d = 1/2, and d = 1 cases even for samples as small as several hundred.
3.1 Estimating $\ell_1$ distance and number of words in Hamlet
The other two properties that we consider do not have such widely-accepted estimators as entropy,
and thus our evaluation of the unseen estimator will be more qualitative. We include these two examples here because they are of a substantially different flavor from entropy estimation, and highlight
the flexibility of our approach.
Figure 2 shows the results of estimating the total variation distance ($\ell_1$ distance). Because total variation distance is a property of two distributions instead of one, fingerprints and histograms are two-dimensional objects in this setting (see Section 4.6 of [29]), and Algorithm 1 and the linear programs are extended accordingly, replacing single indices by pairs of indices, and Poisson coefficients by corresponding products of Poisson coefficients.
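A sketch (our own) of the two-dimensional fingerprint: entry (i, j) counts domain elements seen i times in the first sample and j times in the second.

```python
from collections import Counter
import numpy as np

def fingerprint_2d(sample_a, sample_b):
    """F[i, j] = number of elements occurring i times in sample_a and
    j times in sample_b (the (0, 0) entry is empty by construction)."""
    ca, cb = Counter(sample_a), Counter(sample_b)
    support = set(ca) | set(cb)
    max_a = max(ca.values(), default=0)
    max_b = max(cb.values(), default=0)
    F = np.zeros((max_a + 1, max_b + 1), dtype=int)
    for elem in support:
        F[ca.get(elem, 0), cb.get(elem, 0)] += 1
    return F
```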
Finally, in contrast to the synthetic tests above, we also evaluated our estimator on a real-data problem which may be seen as emblematic of the challenges in a wide gamut of natural language processing problems: given a (contiguous) fragment of Shakespeare's Hamlet, estimate the number of distinct words in the whole play. We use this example to showcase the flexibility of our linear programming approach: our estimator can be customized to particular domains in powerful and principled ways by adding or modifying the constraints of the linear program. To estimate the histogram of word frequencies in Hamlet, we note that the play is of length ≈ 25,000, and thus the minimum probability with which any word can occur is 1/25,000. Thus, in contrast to our previous approach of using Linear Program 2 to bound the support of the returned histogram, we instead simply modify the input vector x of Linear Program 1 to contain only probability values ≥ 1/25,000, and forgo running Linear Program 2. The results are plotted in Figure 3. The estimates converge towards the true value of 4,268 distinct words extremely rapidly, and are slightly negatively biased, perhaps reflecting the fact that words appearing close together are correlated.
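The domain-specific modification amounts to truncating the mesh; a sketch, using the geometric mesh from the Figure 1 caption ($x_1 = 1/k^2$, ratio 1.1) and the hypothetical helper find_plausible_histogram from the Linear Program 1 sketch above.

```python
def geometric_mesh(k, ratio=1.1, x_min=None, x_max=1.0):
    """Coarse geometric grid starting at 1/k^2, optionally truncated at x_min."""
    x, xi = [], 1.0 / k**2
    while xi <= x_max:
        if x_min is None or xi >= x_min:
            x.append(xi)
        xi *= ratio
    return x

# For Hamlet: every word has probability at least 1/25,000, so restrict the
# mesh and run only Linear Program 1 (no Linear Program 2).
x = geometric_mesh(k=10000, x_min=1.0 / 25000)
# h, v_opt = find_plausible_histogram(F, x, k)  # F from the observed passage
```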
In contrast to Hamlet's charge that "there are more things in heaven and earth...than are dreamt of in your philosophy," we can say that there are almost exactly as many things in Hamlet as can be dreamt of from 10% of Hamlet.
[Figure: "Estimating # Distinct Words in Hamlet" — estimates from the Naive, CAE, and Unseen estimators plotted against the length of the passage, from 0 to 25,000 words.]
Figure 3: Estimates of the total number of distinct word forms in Shakespeare's Hamlet (excluding stage directions and proper nouns) as a function of the length of the passage from which the estimate is inferred. The true value, 4,268, is shown as the horizontal line.
References
[1] G. Valiant and P. Valiant. Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Symposium on Theory of Computing (STOC), 2011.
[2] G. Valiant and P. Valiant. The power of linear estimators. In IEEE Symposium on Foundations of Computer Science (FOCS), 2011.
[3] M. R. Nelson et al. An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people. Science, 337(6090):100-104, 2012.
[4] J. A. Tennessen et al. Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science, 337(6090):64-69, 2012.
[5] A. Keinan and A. G. Clark. Recent explosive human population growth has resulted in an excess of rare genetic variants. Science, 336(6082):740-743, 2012.
[6] F. Olken and D. Rotem. Random sampling from database files: a survey. In Proceedings of the Fifth International Workshop on Statistical and Scientific Data Management, 1990.
[7] P. J. Haas, J. F. Naughton, S. Seshadri, and A. N. Swami. Selectivity and cost estimation for joins based on random sampling. Journal of Computer and System Sciences, 52(3):550-569, 1996.
[8] R.A. Fisher, A. Corbet, and C.B. Williams. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of the British Ecological Society, 12(1):42-58, 1943.
[9] I. J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40(16):237-264, 1953.
[10] D. A. McAllester and R.E. Schapire. On the convergence rate of Good-Turing estimators. In Conference on Learning Theory (COLT), 2000.
[11] A. Orlitsky, N.P. Santhanam, and J. Zhang. Always Good Turing: Asymptotically optimal probability estimation. Science, 302(5644):427-431, October 2003.
[12] A. Orlitsky, N. Santhanam, K. Viswanathan, and J. Zhang. On modeling profiles instead of values. Uncertainty in Artificial Intelligence, 2004.
[13] J. Acharya, A. Orlitsky, and S. Pan. The maximum likelihood probability of unique-singleton, ternary, and length-7 patterns. In IEEE Symp. on Information Theory, 2009.
[14] J. Acharya, H. Das, A. Orlitsky, and S. Pan. Competitive closeness testing. In COLT, 2011.
[15] L. Paninski. Estimation of entropy and mutual information. Neural Comp., 15(6):1191-1253, 2003.
[16] J. Bunge and M. Fitzpatrick. Estimating the number of species: A review. Journal of the American Statistical Association, 88(421):364-373, 1993.
[17] J. Bunge. Bibliography of references on the problem of estimating support size, available at http://www.stat.cornell.edu/~bunge/bibliography.html.
[18] Z. Bar-Yossef, R. Kumar, and D. Sivakumar. Sampling algorithms: lower bounds and applications. In STOC, 2001.
[19] T. Batu. Testing Properties of Distributions. Ph.D. thesis, Cornell, 2001.
[20] M. Charikar, S. Chaudhuri, R. Motwani, and V.R. Narasayya. Towards estimation error guarantees for distinct values. In SODA, 2000.
[21] T. Batu, L. Fortnow, R. Rubinfeld, W.D. Smith, and P. White. Testing that distributions are close. In IEEE Symposium on Foundations of Computer Science (FOCS), 2000.
[22] V.Q. Vu, B. Yu, and R.E. Kass. Coverage-adjusted entropy estimation. Statistics in Medicine, 26(21):4039-4060, 2007.
[23] G. Miller. Note on the bias of information estimates. Information Theory in Psychology II-B, ed. H. Quastler (Glencoe, IL: Free Press): pp. 95-100, 1955.
[24] S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87-107, 1996.
[25] S. Zahl. Jackknifing an index of diversity. Ecology, 58:907-913, 1977.
[26] B. Efron and C. Stein. The jackknife estimate of variance. Annals of Statistics, 9:586-596, 1981.
[27] A. Chao and T.J. Shen. Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10:429-443, 2003.
[28] D.G. Horvitz and D.J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260):663-685, 1952.
[29] P. Valiant. Testing Symmetric Properties of Distributions. SIAM J. Comput., 40(6):1927-1968, 2011.
Factorized Asymptotic Bayesian Inference for Latent Feature Models
Kohei Hayashi
National Institute of Informatics
JST, ERATO, Kawarabayashi Large Graph Project
kohei-h@nii.ac.jp

Ryohei Fujimaki
NEC Laboratories America
rfujimaki@nec-labs.com
Abstract
This paper extends factorized asymptotic Bayesian (FAB) inference for latent feature models (LFMs). FAB inference has not been applicable to models, including LFMs, without a specific condition on the Hessian matrix of a complete log-likelihood, which is required to derive a "factorized information criterion" (FIC). Our asymptotic analysis of the Hessian matrix of LFMs shows that the FIC of LFMs has the same form as those of mixture models. FAB/LFMs have several desirable properties (e.g., automatic hidden states selection and parameter identifiability) and empirically perform better than state-of-the-art Indian Buffet processes in terms of model selection, prediction, and computational efficiency.
1 Introduction
Factorized asymptotic Bayesian (FAB) inference is a recently-developed Bayesian approximate inference method for model selection of latent variable models [5, 6]. FAB inference maximizes a computationally tractable lower bound of a "factorized information criterion" (FIC), which converges to the marginal log-likelihood in the large sample limit. In applications to mixture models (MMs) and hidden Markov models, previous work has shown that FAB inference achieves as good or even better model selection accuracy than state-of-the-art non-parametric Bayesian (NPB) methods and variational Bayesian (VB) methods, with less computational cost. One interesting characteristic of FAB inference is that it estimates both models (e.g., the number of mixed components for MMs) and parameter values without priors (i.e., it asymptotically ignores priors), and it does not have a hand-tunable hyper-parameter. With respect to the trade-off between controllability and automation, FAB inference places more importance on automation.
Although FAB inference is a promising model selection method, as yet it has only been applicable to models satisfying a specific condition: the Hessian matrix of the complete log-likelihood (i.e., of the log-likelihood over both observed and latent variables) must be block diagonal, with only a part of the observed samples contributing to individual sub-blocks. Such models include basic latent variable models such as MMs [6]. The application of FAB inference to more advanced models that do not satisfy the condition remains to be accomplished.

This paper extends the FAB framework to latent feature models (LFMs) [9, 17]. Model selection for LFMs (i.e., determination of the dimensionality of latent features) has been addressed by NPB and VB methods [10, 3]. Although they have shown promising performance in such applications as link prediction [16], their high computational costs restrict their applications to large-scale data.
Our asymptotic analysis of the Hessian matrix of the log-likelihood shows that FICs for LFMs have the same form as those for MMs, despite the fact that LFMs do not satisfy the condition explained above (see Lemma 1). Consequently, as with FAB/MMs, FAB/LFMs offer several desirable properties, such as FIC convergence to the marginal log-likelihood, automatic hidden state selection, and monotonic increase of the lower FIC bound through iterative optimization. Further, we conduct two analyses in Section 3: 1) we relate FAB E-steps to the convex concave procedure (CCCP) [29], and, inspired by this analysis, we propose a shrinkage acceleration method which drastically reduces computational cost in practice; and 2) we show that FAB/LFMs have parameter identifiability, which offers a natural guide to the merging post-processing of latent features. Rigorous proofs and assumptions with respect to the main results are given in the supplementary materials.
Notation  In this paper, we denote the (i, j)-th element, the i-th row vector, and the j-th column vector of a matrix A by a_{ij}, a_i, and a_{·j}, respectively.
1.1 Related Work
FIC for MMs  Suppose we have N × D observed data X and N × K latent variables Z. FIC considers the following alternative representation of the marginal log-likelihood:

$$\log p(X \mid \mathcal{M}) = \max_{q} \sum_{Z} q(Z) \log \frac{p(X, Z \mid \mathcal{M})}{q(Z)}, \qquad p(X, Z \mid \mathcal{M}) = \int p(X, Z \mid \mathcal{P})\, p(\mathcal{P} \mid \mathcal{M})\, d\mathcal{P}, \tag{1}$$

where q(Z) is a variational distribution on Z; M and P are a model and its parameter, respectively. In the case of MMs, log p(X, Z|P) can be factorized into log p(Z) and log p(X|Z) = Σ_k log p_k(X|z_{·k}), where p_k is the k-th observation distribution (we here omit parameters for notational simplicity.) We can then approximate p(X, Z|M) by individually applying Laplace's method [28] to log p(Z) and log p_k(X|z_{·k}):

$$p(X, Z \mid \mathcal{M}) \approx p(X, Z \mid \hat{\mathcal{P}})\, \frac{(2\pi)^{D_Z/2}}{N^{D_Z/2} \det|F_Z|^{1/2}} \prod_{k=1}^{K} \frac{(2\pi)^{D_k/2}}{(\sum_n z_{nk})^{D_k/2} \det|F_k|^{1/2}}, \tag{2}$$

where P̂ is the maximum likelihood estimator (MLE) of p(X, Z|P).¹ D_Z and D_k are the parameter dimensionalities of p(Z) and p_k(X|z_{·k}), respectively. F_Z and F_k are −∇∇ log p(Z)|_{P̂}/N and −∇∇ log p_k(X|z_{·k})|_{P̂}/(Σ_n z_{nk}), respectively. Under conditions for asymptotic ignoring of log det|F_Z| and log det|F_k|, substituting Eq. (2) into (1) gives the FIC for MMs as follows:

$$\mathrm{FIC}_{\mathrm{MM}} \equiv \max_{q} \mathbb{E}_q\Big[\log p(X, Z \mid \hat{\mathcal{P}}) - \frac{D_Z}{2}\log N - \sum_k \frac{D_k}{2} \log \sum_n z_{nk}\Big] + H(q), \tag{3}$$

where H(q) is the entropy of q(Z). The most important term in FIC_MM (3) is log(Σ_n z_{nk}), which offers such theoretically desirable properties for FAB inference as automatic shrinkage of irrelevant latent variables and parameter identifiability [6].
Direct optimization of FIC_MM is difficult because: (i) evaluation of E_q[log Σ_n z_{nk}] is computationally infeasible, and (ii) the MLE is not available in principle. Instead, FAB optimizes a tractable lower bound of the FIC [6]. For (i), since −log Σ_n z_{nk} is a convex function, its linear approximation at Nπ̃_k > 0 yields the lower bound:

$$-\sum_k \frac{D_k}{2}\, \mathbb{E}_q\Big[\log \sum_n z_{nk}\Big] \geq -\sum_k \frac{D_k}{2}\Big(\log N\tilde{\pi}_k + \frac{\sum_n \mathbb{E}_q[z_{nk}]/N - \tilde{\pi}_k}{\tilde{\pi}_k}\Big), \tag{4}$$

where 0 < π̃_k ≤ 1 is a linearization parameter. For (ii), since, from the definition of the MLE, the inequality log p(X, Z|P̂) ≥ log p(X, Z|P) holds for any P, we optimize P along with q. Alternating maximization of the lower bound with respect to q, P, and π̃ guarantees a monotonic increase in the FIC lower bound [6].
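The direction of the inequality in (4) is just the tangent bound of the logarithm. A quick numeric check in Python (a sketch of ours; all names are our choices, not the paper's):

```python
# log(s) <= log(N * pi_t) + (s / N - pi_t) / pi_t for s > 0, 0 < pi_t <= 1,
# by concavity of the logarithm; pi_t plays the role of pi-tilde_k.
import numpy as np

N, pi_t = 100, 0.3
for s in [1.0, 10.0, 30.0, 90.0]:   # candidate values of sum_n z_nk
    lhs = np.log(s)
    rhs = np.log(N * pi_t) + (s / N - pi_t) / pi_t
    assert lhs <= rhs + 1e-12
```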
Infinite LFMs and Indian Buffet Process  The IBP [10, 11] is a nonparametric prior over infinite LFMs. It enables us to express an infinite number of latent features, making it possible to adjust model complexity on the basis of observations. Infinite IBPs are still being actively studied in terms of both applications (e.g., link prediction [16]) and model representations (e.g., latent attribute models [19]). Since naive Gibbs sampling requires unrealistic computational cost, acceleration algorithms such as accelerated sampling [2] and VB [3] have been developed. Reed and Ghahramani [22] have recently proposed an efficient MAP estimation framework for an IBP model via submodular optimization, referred to as maximum-expectation IBP (MEIBP). Similarly to FIC, "MAD-Bayes" [1] considers asymptotics of MMs and LFMs, but it is based on the limiting case in which the noise variance goes to zero, which yields a prior-derived regularization term.

¹While p(X|P) is a non-regular model, p(X, Z|P) is a regular model (i.e., the Fisher information is non-singular at the ML estimator), and F_k and F_Z have their inverses at P̂.

2 FIC and FAB Algorithm for LFMs
LFMs assume underlying relationships for X with binary features Z ∈ {0, 1}^{N×K} and linear bases W ∈ R^{D×K} such that, for n = 1, …, N,

$$x_n = W z_n + b + \epsilon_n, \tag{5}$$

where ε_n ∼ N(0, Λ^{−1}) is Gaussian noise with the diagonal precision matrix Λ ≡ diag(λ), and b ∈ R^D is a bias term. For later convenience, we define the centered observation X̃ = X − 1b^⊤. Z follows a Bernoulli prior distribution z_{nk} ∼ Bern(π_k) with a mean parameter π_k. The parameter set P is defined as P ≡ {W, b, λ, π}. Also, we denote the parameters with respect to the d-th dimension as θ_d = (w_d, b_d, λ_d). As in other FAB frameworks, the log-priors of P are assumed to be constant with respect to N, i.e., lim_{N→∞} log p(P|M)/N = 0.

In the case of MMs, we implicitly use the facts that: A1) the parameters of p_k(X|z_{·k}) are mutually independent for k = 1, …, K (in other words, ∇∇ log p(X|Z) is block diagonal, having K blocks), and A2) the number of observations which contribute to ∇∇ log p_k(X|z_{·k}) is Σ_n z_{nk}. These conditions naturally yield the FAB regularization term log Σ_n z_{nk} through the Laplace approximation of MMs (2). However, since θ_d is shared by all latent features in LFMs, A1 and A2 are not satisfied. In the next section, we address this issue and derive the FIC for LFMs.

2.1 FICs for LFMs

The following lemma plays the most important role in our derivation of FICs for LFMs.

Lemma 1. Let F^{(d)} be the Hessian matrix of the negated log-likelihood with respect to θ_d, i.e., −∇∇ log p(x_{·d}|Z, θ_d). Under some mild assumptions (see the supplementary materials), the following equality holds:

$$\log \det |F^{(d)}| = \sum_k \log \frac{\sum_n z_{nk}}{N} + O_p(1). \tag{6}$$

An important fact is that the log Σ_n z_{nk} term naturally appears in log det|F^{(d)}| without A1 and A2. Lemma 1 induces the following theorem, which states an asymptotic approximation of the marginal complete log-likelihood, log p(X, Z|M).

Theorem 2. If Lemma 1 holds and the joint marginal log-likelihood is bounded for a sufficiently large N, it can be asymptotically approximated as:

$$\log p(X, Z \mid \mathcal{M}) = J(Z, \hat{\mathcal{P}}) + O_p(1), \tag{7}$$

$$J(Z, \mathcal{P}) \equiv \log p(X, Z \mid \mathcal{P}) - \frac{|\mathcal{P}| - DK}{2} \log N - \frac{D}{2} \sum_k \log \sum_n z_{nk}. \tag{8}$$

It is worth noting that, if we evaluate the model complexity of θ_d (i.e., log det|F^{(d)}|) by N, that is, if we apply Laplace's method without Lemma 1, Eq. (7) falls into the Bayesian Information Criterion [23]; Lemma 1 tells us that the model complexity relevant to θ_d increases not as O(K log N) but as O(Σ_k log Σ_n z_{nk}).

By substituting approximation (7) into Eq. (1), we obtain the FIC of the LFM as follows:

$$\mathrm{FIC}_{\mathrm{LFM}} \equiv \max_q \mathbb{E}_q[J(Z, \hat{\mathcal{P}})] + H(q). \tag{9}$$

It is interesting that FIC_LFM (9) and FIC_MM (3) have exactly the same representation, despite the fact that LFMs do not satisfy A1 and A2. This indicates the wide applicability of FICs and suggests that FIC representation of approximated marginal log-likelihoods is feasible not only for MMs but also for more general (discrete) latent variable models.

Since the asymptotic constant terms of Eq. (7) are not affected by the expectation over q(Z), the difference between the FIC and the marginal log-likelihood is asymptotically constant; in other words, the distance between log p(X|M)/N and FIC_LFM/N is asymptotically small.

Corollary 3. For N → ∞, log p(X|M) = FIC_LFM + O_p(1) holds.

2.2 FAB/LFM Algorithm
As in the case of MMs (3), FIC_LFM is not available in practice, and we employ the lower bounding techniques (i) and (ii). For LFMs, we further introduce a mean-field approximation on Z, i.e., we restrict the class of q(z_n) to a factorized form: q(z_n) = Π_k q̃(z_{nk}|μ_{nk}), where q̃(z|μ) is a Bernoulli distribution with mean parameter μ = E_q[z]. Rather than making the FIC lower bound looser (the equality (1) no longer holds), this restriction gives the variational distribution a closed-form solution. Note that this approximation does not cause significant performance degradation in VB contexts [20, 25]. The VB extension of the IBP [3] also uses this factorized assumption.

By applying (i), (ii), and the mean-field approximation, we obtain the lower bound:

$$\mathcal{L}(q, \mathcal{P}, \tilde{\pi}) = \mathbb{E}_q\big[\log p(X \mid Z, \mathcal{P}) + \log p(Z \mid \pi) + \text{RHS of (4)}\big] - \frac{2D + K}{2} \log N + \sum_n H(q(z_n)). \tag{10}$$

The FAB algorithm alternatingly maximizes L(q, P, π̃) with respect to {{μ_n}, P, π̃}. Notice that the algorithm described below monotonically increases L in every single step, and therefore we are guaranteed to obtain a local maximum. This monotonic increase in L gives us a natural stopping condition with a tolerance δ: if (L_t − L_{t−1})/N < δ, then stop the algorithm, where L_t denotes the value of L at the t-th iteration.
FAB E-step  In the FAB E-step, we update μ_n in a way similar to variational mean-field inference in a restricted Boltzmann machine [20]. Taking the gradient of L with respect to μ_n and setting it to zero yields the following fixed-point equations:

$$\mu_{nk} = g\Big(c_{nk} + \eta(\pi_k) - \frac{D}{2N\tilde{\pi}_k}\Big), \tag{11}$$

where g(x) = (1 + exp(−x))^{−1} is the sigmoid function, c_{nk} = w_{·k}^⊤ Λ (x̃_n − Σ_{l≠k} μ_{nl} w_{·l} − (1/2) w_{·k}), and η(π_k) = log(π_k/(1 − π_k)) is the natural parameter of the prior of z_{·k}. Update equation (11) is a form of coordinate descent, and every update is guaranteed to increase the lower bound [25]. After several iterations of Eq. (11) over k = 1, …, K, we are able to obtain a local maximum of E_q[z_n] = μ_n and E_q[z_n z_n^⊤] = μ_n μ_n^⊤ + diag(μ_n − μ_n²).
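The fixed-point update (11) is straightforward to implement. A minimal sketch (ours, with variable names and sweep count that are not the paper's):

```python
# Sketch of the fixed-point E-step of Eq. (11). Xc stands for the centered
# data X - 1 b^T; lam is the vector of noise precisions lambda_d.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fab_e_step(mu, Xc, W, lam, pi, pi_t, sweeps=3):
    N, D = Xc.shape
    eta = np.log(pi) - np.log1p(-pi)        # natural parameter of Bern(pi_k)
    for _ in range(sweeps):
        for k in range(W.shape[1]):
            # residual with feature k removed: x_n - sum_{l != k} mu_nl w_l
            resid = Xc - mu @ W.T + np.outer(mu[:, k], W[:, k])
            c_k = (resid - 0.5 * W[:, k]) * lam @ W[:, k]    # c_nk of Eq. (11)
            mu[:, k] = sigmoid(c_k + eta[k] - D / (2.0 * N * pi_t[k]))
    return mu
```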
?
One unique term in Eq. (11) is ? 2ND??k , which originated in the log n znk term in Eq. (8). In
?k (or equivalent to ?k by Eq. (12)) is, the smaller ?nk is. And a
updating ?nk (11), the smaller ?
smaller ?nk is likely to induce a smaller ?
?k (see Eq. (12)). This results in the shrinking of irrelevant
features, and therefore FAB/LFMs are capable of automatically selecting feature dimensionality
K. This regularization effect is induced independently of prior (i.e., asymptotic ignorance of prior)
and is known as ?model induced regularization? which is caused by Bayesian marginalization in
singular models [18]. Notice that Eq. (11) offers another shrinking effect, by means of ?(?k ), which
is a prior-based regularization. We empirically show that the latter shrinking effect is too weak to
mitigate over-?tting and the FAB algorithm achieves faster convergence, with respect to N , to the
true model (see Section 4.) Note that if we only use the effect of ?(?k ) (i.e. setting D/2N ?
?k = 0),
then update equation (11) is equivalent to that of variational EM.
FAB M-step  The FAB M-step is equivalent to the M-step in the EM algorithm of LFMs; the solutions for W, λ, and b are given in closed form and are exactly the same as those of PPCA [24] (see the supplementary materials.) For π̃ and π, we obtain the following solutions:

$$\tilde{\pi}_k = \pi_k = \sum_n \mu_{nk} / N. \tag{12}$$
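A sketch of the M-step follows. Eq. (12) gives the π update directly; since the paper defers the closed forms for W, b, and λ to its supplementary materials, the expected-complete-data least-squares solutions below are our reconstruction, not a quotation:

```python
# Sketch of the M-step: pi from Eq. (12); W, b, lam from weighted least
# squares under E_q[z_n] = mu_n and E_q[z_n z_n^T] = mu mu^T + diag(mu - mu^2).
import numpy as np

def fab_m_step(mu, X):
    N, K = mu.shape
    pi = mu.sum(axis=0) / N                    # Eq. (12): pi_k = pi~_k
    Ez = np.column_stack([mu, np.ones(N)])     # augment with a bias column
    G = Ez.T @ Ez
    G[:K, :K] += np.diag((mu - mu**2).sum(axis=0))   # add Var_q[z_nk] terms
    Wb = np.linalg.solve(G, Ez.T @ X).T        # row d holds (w_d, b_d)
    W, b = Wb[:, :K], Wb[:, K]
    resid = X - mu @ W.T - b
    lam = N / (resid**2 + (mu - mu**2) @ (W**2).T).sum(axis=0)
    return pi, W, b, lam
```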
Shrinkage step  As we have explained, in principle the FAB regularization term D/(2Nπ̃_k) in Eq. (11) automatically eliminates irrelevant latent features. While the elimination does not change the value of E_q[log p(X|Z, P)], removing such features from the model increases L due to a decrease in model complexity. We eliminate shrunken features after the FAB E-step, in view of the fact that LFMs approximate X by Σ_k μ_{·k} w_{·k}^⊤ + 1b^⊤. When Σ_n μ_{nk}/N = 0, the k-th feature does not affect the approximation (Σ_l z_{·l} w_{·l}^⊤ = Σ_{l≠k} z_{·l} w_{·l}^⊤), and we simply remove it. When Σ_n μ_{nk}/N = 1, w_{·k} can be seen as a new bias (Σ_l z_{·l} w_{·l}^⊤ = Σ_{l≠k} z_{·l} w_{·l}^⊤ + 1w_{·k}^⊤), and we update b^{new} = b + w_{·k} and then remove it.
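In code form, the two removal rules might look as follows (the tolerance is our addition for floating-point arithmetic; it is not part of the paper):

```python
# Sketch of the shrinkage step: drop never-on features, fold always-on
# features into the bias.
import numpy as np

def shrink(mu, W, b, tol=1e-8):
    rate = mu.mean(axis=0)                      # sum_n mu_nk / N
    b = b + W[:, rate > 1.0 - tol].sum(axis=1)  # always-on features -> bias
    keep = (rate > tol) & (rate < 1.0 - tol)
    return mu[:, keep], W[:, keep], b
```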
Algorithm 1 The FAB algorithm for LFMs
1: Initialize {μ_n}
2: while not converged do
3:   Update P
4:   accelerateShrinkage({μ_n})
5:   for k = 1, …, K do
6:     Update {μ_{nk}} by Eq. (11)
7:   end for
8:   Shrink unnecessary latent features
9:   if (L_t − L_{t−1})/N < δ then
10:    {{μ′_n}, W′} ← merge({μ_n}, W)
11:    if dim(W′) = dim(W) then converge
12:    else {μ_n} ← {μ′_n}, W ← W′
13:   end if
14: end while
Algorithm 2 accelerateShrinkage
input {μ_n}
1: for k = 1, …, K do
2:   c_{·k} ← (X̃ − Σ_{l≠k} μ_{·l} w_{·l}^⊤ − (1/2) 1w_{·k}^⊤) Λ w_{·k}
3:   for t = 1, …, T_shrink do
4:     Update {μ_{nk}} by Eq. (11)
5:     Update π and π̃ by Eq. (12)
6:   end for
7: end for
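A sketch of this inner loop in Python (the clipping guard is ours; T_shrink = 100 matches the experimental setting in Section 4):

```python
# Sketch of accelerateShrinkage (Algorithm 2): c_k is computed once per
# feature, then Eq. (11) and Eq. (12) are iterated T_shrink times so that
# the -D/(2 N pi~_k) penalty can drive weak features toward zero early.
import numpy as np

def accelerate_shrinkage(mu, Xc, W, lam, T_shrink=100):
    N, D = Xc.shape
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for k in range(W.shape[1]):
        resid = Xc - mu @ W.T + np.outer(mu[:, k], W[:, k])
        c_k = (resid - 0.5 * W[:, k]) * lam @ W[:, k]     # fixed in inner loop
        for _ in range(T_shrink):
            pi_k = np.clip(mu[:, k].mean(), 1e-12, 1 - 1e-12)  # Eq. (12)
            eta = np.log(pi_k) - np.log1p(-pi_k)
            mu[:, k] = sig(c_k + eta - D / (2.0 * N * pi_k))
    return mu
```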
Figure 1: Time evolution of K (top) and L/N
(bottom) in FAB with and without shrinkage acceleration (D = 50 and K = 5). Different lines
represent different random starts.
This model shrinkage also works to avoid ill-conditioning of the FIC: if there are latent features that are never activated (Σ_n μ_{nk}/N = 0) or always activated (Σ_n μ_{nk}/N = 1), the FIC will no longer be an approximation of the marginal log-likelihood. Algorithm 1 summarizes the whole procedure of FAB/LFMs. Note that details regarding the sub-routines accelerateShrinkage() and merge() are explained in Section 3.
3 Analysis and Refinements
CCCP Interpretation and Shrinkage Acceleration  Here we interpret the alternating updates of μ and π̃ as a convex concave procedure (CCCP) [29] and consider eliminating irrelevant features in early steps to reduce computational cost. By substituting the optimality condition π̃_k = Σ_n μ_{nk}/N (12) into the lower bound, we obtain

$$\mathcal{L}(q) = \sum_k \Big(-\frac{D}{2}\Big) \log\Big(\sum_n \mu_{nk}\Big) + \sum_n (c_n + \eta)^{\top} \mu_n + H(q) + \text{const.} \tag{13}$$

The first and second terms are convex and concave with respect to μ_{nk}, respectively. The CCCP solves Eq. (13) by iteratively linearizing the first term around μ^{t−1}_{nk}. By setting the derivative of the "linearized" objective to zero, we obtain the CCCP update as follows:

$$\mu^{t}_{nk} = g\Big(c_{nk} + \eta(\pi_k) - \frac{D}{2\sum_n \mu^{t-1}_{nk}}\Big). \tag{14}$$

By taking Nπ̃_k = Σ_n μ^{t−1}_{nk} into account, Eq. (14) is equivalent to Eq. (11).

This new view of the FAB optimization gives us an important insight for accelerating the algorithm. By considering the FAB optimization as an alternating maximization in terms of P and μ (π̃ is removed), it is natural to take multiple CCCP steps (14). Such multiple CCCP steps in each FAB EM step are expected to accelerate the shrinkage effect discussed in the previous section, because the regularization in terms of −D/(2 Σ_n μ_{nk}) causes that effect. Eventually, it is expected to reduce the total computational cost, since we may be able to remove irrelevant latent features in earlier iterations. We summarize the whole routine of accelerateShrinkage(), based on the CCCP, in Algorithm 2. Note that, in practice, we update π along with π̃ for further acceleration of the shrinkage. We empirically confirmed that Algorithm 2 significantly reduces computational costs (see Section 4 and Figure 1.) Further discussion of this update (an exponentiated gradient descent interpretation) can be found in the supplementary materials.
Identifiability and Merge Post-processing  Parameter identifiability is an important theoretical aspect of learning algorithms for latent variable models. It has been known [26, 27] that generalization error significantly worsens if the mapping between parameters and functions is not one-to-one (i.e., is non-identifiable.) Let us consider the LFM case of K = 2. If w_{·1} = w_{·2}, then any combination of μ_{n1} and μ_{n2} = 2μ − μ_{n1} will have the same representation: E_q[E_x[x̃_{nd}|θ_d]] = w_{d1}(μ_{n1} + μ_{n2}) = 2w_{d1}μ, and therefore the MLE is non-identifiable.

The following theorem shows that FAB inference resolves such non-identifiability in LFMs.

Theorem 4. Let P* and q* be stationary points of L such that 0 < Σ_n μ*_{nk}/N < 1 for k = 1, …, K and |x̃_n^⊤ Λ* w*_{·k}| < ∞ for k = 1, …, K, n = 1, …, N. Then w*_{·k} = w*_{·l} is a sufficient condition for Σ_n μ*_{nk}/N = Σ_n μ*_{nl}/N.

For the ill-conditioned situation described above, the FAB algorithm thus has a unique solution that balances the sizes of the latent features. In the large sample limit, both FAB and EM reach the same ML value. The point is that, for LFMs, ML solutions are not unique, and EM is likely to choose large-K solutions because of this non-identifiability issue. On the other hand, FAB prefers small-K ML solutions on the basis of the regularizer. In addition, Theorem 4 gives us an important insight about post-processing of latent features. If w*_{·k} = w*_{·l}, then E_q[log p(X, Z|M*)] is unchanged regardless of how mass is divided between μ_{nk} and μ_{nl}, while model complexity is smaller if we have only one latent feature. Therefore, if w*_{·k} = w*_{·l}, merging these two latent features increases L, i.e., w**_{·k} = 2w*_{·k} and μ**_{·k} = (μ*_{·k} + μ*_{·l})/2. In practice, we search for such overlapping features on the basis of the Euclidean distance matrix of the w*_{·k} for k = 1, …, K, and merge them if the lower bound increases after the post-processing. We empirically found that a few merging operations were likely to occur in real-world data sets. The algorithm of merge() is summarized in the supplementary materials.
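A sketch of such a merge() pass (the distance threshold is our illustrative choice; the paper only requires that the lower bound increase, a check omitted here):

```python
# Detect near-identical columns of W and merge a pair as suggested by
# Theorem 4: w_k <- 2 w_k, mu_k <- (mu_k + mu_l) / 2.
import numpy as np

def merge_candidates(W, thresh=1e-3):
    K = W.shape[1]
    dists = np.linalg.norm(W[:, :, None] - W[:, None, :], axis=0)
    return [(k, l) for k in range(K) for l in range(k + 1, K)
            if dists[k, l] < thresh]

def merge_pair(mu, W, k, l):
    mu[:, k] = 0.5 * (mu[:, k] + mu[:, l])
    W[:, k] = 2.0 * W[:, k]
    keep = [j for j in range(W.shape[1]) if j != l]
    return mu[:, keep], W[:, keep]
```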
4 Experiments
We evaluated FAB/LFMs in terms of computational speed, model selection accuracy, and prediction performance with respect to missing values. We compared FAB inference and the variational EM algorithm (see Section 2.2) with an IBP that utilizes fast Gibbs sampling [2], a VB [3] with finite K, and MEIBP [22]. IBP and MEIBP select the model which maximizes posterior probability. For VB, we performed inference with K = 2, …, D and selected the model having the highest free energy. EM selects K using the shrinkage effect of π, as explained in Section 2.2.

All the methods were implemented in Matlab (for IBP, VB, and MEIBP, we used the original codes released by the authors), so the computational performance was fairly compared. For FAB and EM, we set δ = 10^{−4} (this was not sensitive) and T_shrink = 100 (FAB only); {μ_n} were initialized randomly and uniformly between 0 and 1; the initial number of latent features was set to min(N, D), as for MEIBP. Since the software for IBP, VB, and MEIBP does not learn the standard deviation of the noise (1/√λ in FAB), we fixed it to 1 for the artificial simulations, which is the true standard deviation of the toy data, and to 0.75 for real data, following the original papers [2, 22]. We set the other parameters to the software default values. For example, α, a hyperparameter of IBP, was set to 3, which might cause overestimation of K. As common preprocessing, we normalized X (i.e., the sample variance is 1) in all experiments.
Artificial Simulations  We first conducted artificial simulations with fully-observed synthetic data generated by model (5) with fixed λ_k = 1 and π_k = 0.5. Figure 1 shows the results of a comparison between FAB with and without shrinkage acceleration.²

²We also investigated the effect of merge post-processing, but none was observed in this small example.
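For reference, the data-generating process here is exactly model (5); a minimal generator in Python (a sketch of ours; sizes are arbitrary apart from λ_d = 1 and π_k = 0.5):

```python
# Generate synthetic LFM data: x_n = W z_n + b + eps_n.
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 1000, 30, 5

W = rng.normal(size=(D, K))                   # linear bases
b = rng.normal(size=D)                        # bias term
Z = (rng.random((N, K)) < 0.5).astype(float)  # z_nk ~ Bern(0.5)
X = Z @ W.T + b + rng.normal(size=(N, D))     # unit-variance Gaussian noise
```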
Figure 2: Comparative evaluation of the artificial simulations in terms of N vs. elapsed time (left) and selected K (right). Each error bar shows the standard deviation over 10 trials (D = 30).
Figure 3: Learned Ws in block data.
Clearly, our shrinkage acceleration significantly reduced computational cost by eliminating irrelevant features in the early steps, while both algorithms achieved roughly the same objective value L and model selection performance at convergence. Figure 2 shows the results of a comparison between FAB (with acceleration) and the other methods. While MEIBP was much faster than FAB in terms of elapsed computational time, FAB achieved the most accurate estimation of K, especially for large N.
Block Data  We next demonstrate the performance of FAB/LFMs in terms of learning features. We used the block data, a synthetic data set originally used in [10]. Observations were generated by combining four distinct patterns (i.e., K = 4; see Figure 3) with Gaussian noise, on 6 by 6 pixels (i.e., D = 36). We present the results for N = 2000 samples with noise standard deviation 0.3 and no missing values (more results can be found in the supplementary materials.) Figure 3 compares the estimated features of each method in the early learning phase (at the 5th iteration) and after convergence (the result displayed is the example which has the median log-likelihood over 10 trials.) Note that we omitted MEIBP since we observed that its parameter setting was very sensitive for this data. While EM and IBP retain irrelevant features, FAB successfully extracts the true patterns without irrelevant features.
Real World Data  We finally evaluated predictive performance using the real data sets described in Table 1. We randomly removed 30% of the data with 5 different random seeds, treated the removed entries as missing values, and measured the predictive and training log-likelihood (PLL and TLL) on them. Table 1 summarizes the results with respect to elapsed computational time (hours), selected K, PLL, and TLL. Note that, for cases where the computational time for a method exceeded 50 hours, we stopped the program after that iteration.³ Since MEIBP is a method for non-negative data, we omitted its results for data sets containing negative values. Also, since MEIBP did not finish the first iteration within 50 hours for the yaleB and USPS data, we set its initial K to 100 for those sets. FAB consistently achieved good predictive performance (higher PLL) with low computational cost. Although MEIBP ran faster than FAB when the initial value of K was set appropriately (i.e., on yaleB and USPS), the PLLs of FAB were much better than those of MEIBP. In terms of K, FAB typically achieved a more compact and better model representation than the others (smaller K). Another important observation is that FAB had much smaller differences between TLL and PLL than the others, which suggests that FAB's unique regularization worked well for mitigating over-fitting.

³We totally omitted VB because of its long computational time.
Table 1: Results on real-world data sets (N × D shown under each data set name). The best result (e.g., the smallest K in model selection) and those not significantly worse than it were highlighted in boldface in the original. We used a one-sided t-test with 95% confidence. *We exclude the results of MEIBP for yaleB and USPS from the t-test because of the different experimental settings (initial K was smaller than for the others; see the body text for details.)
Data (N × D)      Method   Time (h)   K             PLL             TLL
Sonar [4]         FAB      < 0.01     4.4 ± 1.1     -1.25 ± 0.02    -1.14 ± 0.03
(208 × 49)        EM       < 0.01     48.8 ± 0.5    -4.04 ± 0.46    -0.08 ± 0.07
                  IBP      3.3        69.6 ± 4.8    -4.48 ± 0.15     0.13 ± 0.02
                  MEIBP    < 0.01     45.4 ± 1.7    -18.10 ± 1.90   -15.60 ± 1.80
Libras [4]        FAB      < 0.01     19.0 ± 0.7    -0.63 ± 0.03    -0.42 ± 0.03
(360 × 90)        EM       0.01       75.6 ± 8.6    -0.68 ± 0.11     0.76 ± 0.24
                  IBP      4.8        36.4 ± 1.1    -0.18 ± 0.01     0.13 ± 0.01
                  MEIBP    0.05       40.8 ± 1.3    -11.30 ± 2.00   -10.70 ± 1.80
Auslan [14]       FAB      0.04       6.0 ± 0.7     -1.34 ± 0.15    -0.92 ± 0.02
(16180 × 22)      EM       0.2        22 ± 0        -1.79 ± 0.27    -0.78 ± 0.02
                  IBP      50.2       73 ± 5        -4.54 ± 0.08     0.08 ± 0.01
                  MEIBP    N/A        N/A           N/A              N/A
EEG [12]          FAB      1.6        11.2 ± 1.6    -0.93 ± 0.02    -0.76 ± 0.04
(120576 × 32)     EM       3.7        32 ± 0        -0.88 ± 0.09    -0.59 ± 0.01
                  IBP      53.0       46.4 ± 4.4    -3.16 ± 0.03    -0.26 ± 0.05
                  MEIBP    N/A        N/A           N/A              N/A
Piano [21]        FAB      19.4       58.0 ± 3.5    -0.83 ± 0.01    -0.63 ± 0.02
(57931 × 161)     EM       50.1       158.6 ± 3.4   -0.82 ± 0.02    -0.45 ± 0.01
                  IBP      55.8       89.6 ± 4.2    -1.83 ± 0.02    -0.84 ± 0.05
                  MEIBP    14.3       48.4 ± 3.2    -7.14 ± 0.52    -6.90 ± 0.50
yaleB [7]         FAB      2.2        77.2 ± 7.9    -0.37 ± 0.02    -0.29 ± 0.03
(2414 × 1024)     EM       50.9       929 ± 20      -4.60 ± 1.20     0.80 ± 0.27
                  IBP      51.7       94.2 ± 7.5    -0.54 ± 0.02    -0.35 ± 0.02
                  MEIBP*   7.2        69.8 ± 2.7    -1.18 ± 0.02    -1.12 ± 0.02
USPS [13]         FAB      11.2       110.2 ± 5.1   -0.96 ± 0.01    -0.64 ± 0.02
(110000 × 256)    EM       45.7       256 ± 0       -1.06 ± 0.01    -0.36 ± 0.01
                  IBP      61.6       181.0 ± 4.8   -2.59 ± 0.08    -0.76 ± 0.01
                  MEIBP*   1.9        22.0 ± 2.7    -1.35 ± 0.03    -1.31 ± 0.03
For the large sample data sets (EEG, Piano, USPS), the PLLs of FAB and EM were competitive with one another; this is reasonable since, for large N, both ideally achieve the maximum likelihood, while FAB did so with a much smaller K (see the identifiability discussion in Section 3). In small-N scenarios, on the other hand, the FIC approximation would not be accurate, and FAB could perform worse than NPB methods (though we observed such a case only for Libras.)
5 Summary
We have considered here an FAB framework for LFMs that offers fully automated model selection, i.e., selecting the number of latent features. While LFMs do not satisfy the assumptions that naturally induce FIC/FAB on MMs, we have shown that they have the same "degree" of model complexity as the approximated marginal log-likelihood, and we have derived FIC/FAB in a form similar to that for MMs. In addition, our proposed accelerating mechanism for shrinking models drastically reduces total computational time. Experimental comparisons of FAB inference with existing methods, including state-of-the-art IBP methods, have demonstrated the superiority of FAB/LFM.
Acknowledgments
The authors would like to thank Finale Doshi-Velez for providing Piano and EEG data sets. This
work was supported by JSPS KAKENHI Grant Number 25880028.
References
[1] T. Broderick, B. Kulis, and M. I. Jordan. MAD-Bayes: MAP-based asymptotic derivations from Bayes. In ICML, 2013.
[2] F. Doshi-Velez and Z. Ghahramani. Accelerated sampling for the Indian buffet process. In ICML, 2009.
[3] F. Doshi-Velez, K. T. Miller, J. Van Gael, and Y. W. Teh. Variational inference for the Indian buffet process. In AISTATS, 2009.
[4] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[5] R. Fujimaki and K. Hayashi. Factorized asymptotic Bayesian hidden Markov model. In ICML, 2012.
[6] R. Fujimaki and S. Morinaga. Factorized asymptotic Bayesian inference for mixture modeling. In AISTATS, 2012.
[7] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23:643-660, 2001.
[8] Z. Ghahramani. Factorial learning and the EM algorithm. In NIPS, 1995.
[9] Z. Ghahramani, T. L. Griffiths, and P. Sollich. Bayesian nonparametric latent feature models (with discussion). In 8th Valencia International Meeting on Bayesian Statistics, 2006.
[10] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process, 2005.
[11] T. L. Griffiths and Z. Ghahramani. The Indian buffet process: An introduction and review. JMLR, 12:1185-1224, 2011.
[12] U. Hoffmann, G. Garcia, J. M. Vesin, K. Diserens, and T. Ebrahimi. A boosting approach to P300 detection with application to brain-computer interfaces. In International IEEE EMBS Conference on Neural Engineering, pages 97-100, 2005.
[13] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550-554, 1994.
[14] M. W. Kadous. Temporal Classification: Extending the Classification Paradigm to Multivariate Time Series. PhD thesis, School of Computer Science & Engineering, University of New South Wales, 2002.
[15] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-63, 1997.
[16] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In NIPS, 2009.
[17] K. T. Miller. Bayesian Nonparametric Latent Feature Models. PhD thesis, University of California, Berkeley, 2011.
[18] S. Nakajima, M. Sugiyama, and D. Babacan. On Bayesian PCA: Automatic dimensionality selection and analytic solution. In ICML, 2011.
[19] K. Palla, D. A. Knowles, and Z. Ghahramani. An infinite latent attribute model for network data. In ICML, 2012.
[20] C. Peterson and J. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019, 1987.
[21] G. E. Poliner and D. P. W. Ellis. A discriminative model for polyphonic piano transcription. EURASIP Journal of Advances in Signal Processing, 2007(1):154, 2007.
[22] C. Reed and Z. Ghahramani. Scaling the Indian buffet process via submodular maximization. In ICML, 2013.
[23] G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461-464, 1978.
[24] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622, 1999.
[25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[26] S. Watanabe. Algebraic analysis for nonidentifiable learning machines. Neural Computation, 13(4):899-933, 2001.
[27] S. Watanabe. Algebraic Geometry and Statistical Learning Theory (Cambridge Monographs on Applied and Computational Mathematics). Cambridge University Press, 2009.
[28] R. Wong. Asymptotic Approximation of Integrals (Classics in Applied Mathematics). SIAM, 2001.
[29] A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915-936, 2003.
[30] R. S. Zemel and G. E. Hinton. Learning population codes by minimizing description length. Neural Computation, 7(3):11-18, 1994.
Tracking Time-varying Graphical Structure
David Danks
Carnegie Mellon University
Pittsburgh, PA 15213
ddanks@andrew.cmu.edu
Erich Kummerfeld
Carnegie Mellon University
Pittsburgh, PA 15213
ekummerf@andrew.cmu.edu
Abstract
Structure learning algorithms for graphical models have focused almost exclusively on stable environments in which the underlying generative process does not
change; that is, they assume that the generating model is globally stationary. In
real-world environments, however, such changes often occur without warning or
signal. Real-world data often come from generating models that are only locally
stationary. In this paper, we present LoSST, a novel, heuristic structure learning algorithm that tracks changes in graphical model structure or parameters in a
dynamic, real-time manner. We show by simulation that the algorithm performs
comparably to batch-mode learning when the generating graphical structure is
globally stationary, and significantly better when it is only locally stationary.
1 Introduction
Graphical models are used in a wide variety of domains, both to provide compact representations
of probability distributions for rapid, efficient inference, and also to represent complex causal structures. Almost all standard algorithms for learning graphical model structure [9, 10, 12, 3] assume
that the underlying generating structure does not change over the course of data collection, and so
the data are i.i.d. (or can be transformed into i.i.d. data). In the real world, however, generating
structures often change and it can be critical to quickly detect the structure change and then learn
the new one.
In many of these real-world contexts, we also do not have the luxury of collecting large amounts of
data and then retrospectively determining when (if ever) the structure changed. That is, we cannot
learn in "batch mode," but must instead learn the novel structure in an online manner, processing the
data as it arrives. Current online learning algorithms can detect and handle changes in the learning
environment, but none are capable of general, graphical model structure learning.
In this paper, we develop a heuristic algorithm that fills this gap: it assumes only that our data are
locally i.i.d., and learns graphical model structure in an online fashion. In the next section, we
quickly survey related methods and show that they are individually insufficient for this task. We
then present the details of our algorithm and provide simulation evidence that it can successfully
learn graphical model structure in an online manner. Importantly, when there is a stable generating
structure, the algorithm's performance is indistinguishable from that of a standard batch-mode structure learning algorithm. Thus, using this algorithm incurs no additional costs in "normal" structure
learning situations.
2 Related work
We focus here on graphical models based on directed acyclic graphs (DAGs) over random variables
with corresponding quantitative components, whether Bayesian networks or recursive Structural
Equation Models (SEMs) [3, 12, 10]. All of our observations in this paper, as well as the core algorithm, are readily adaptable to learn structure for models based on undirected graphs, such as
Markov random fields or Gaussian graphical models [6, 9].
Standard graphical model structure learning algorithms divide into two rough types. Bayesian/score-based methods aim to find the model M that maximizes P(M|Data) but, in practice, score the
models using a decomposable measure based on P (Data|M ) and the number of parameters in M
[3]. Constraint-based structure learning algorithms leverage the fact that every graphical model
predicts a pattern of (conditional) independencies over the variables, though multiple models can
predict the same pattern. Those algorithms (e.g., [10, 12]) find the set of graphical models that best
predict the (conditional) independencies in the data.
Both types of structure learning algorithms assume that the data come from a single generating
structure, and so neither is directly usable for learning when structure change is possible. They learn
from the sufficient statistics, but neither has any mechanism for detecting change, responding to it, or
learning the new structure. Bayesian learning algorithms (or various approximations to them) are
often used for online learning, but precisely because case-by-case Bayesian updating yields the same
output as batch-mode processing (assuming the data are i.i.d.). Since we are focused on situations
in which the underlying structure can change, we do not want the same output.
One could instead look to online learning methods that track some environmental feature. The
classic TDL algorithm, TD(0) [13], provides a dynamic estimate Et (X) of a univariate random
variable X using a simple update rule: E_{t+1}(X) ← E_t(X) + α(X_t − E_t(X)), where X_t is the value of X at time t. The static α parameter encodes the learning rate, and trades off convergence
rate and robustness to noise (in stable environments). In general, TDL methods are good at tracking
slow-moving environmental changes, but perform suboptimally during times of either high stability
or dramatic change, such as when the generating model structure abruptly changes.
Both Bayesian [1] and frequentist [4] online changepoint detection (CPD) algorithms are effective
at detecting abrupt changes, but do so by storing substantial portions of the input data. For example,
a Bayesian CPD [1] outputs the probability of a changepoint having occurred r timesteps ago, and
so the algorithm must store more than r datapoints. Furthermore, CPD algorithms assume a model
of the environment that has only abrupt changes separated by periods of stability. Environments that
evolve slowly but continuously will have their time-series discretized in seemingly arbitrary fashion,
or not at all.
Two previous papers have aimed to learn time-indexed graph structures from time-series data,
though both require full datasets as input, so cannot function in real-time [14, 11]. Talih and Hengartner (2005) take an ordered data set and divide it into a fixed number of (possibly empty) data
intervals, each with an associated undirected graph that differs by one edge from its neighbors. In
contrast with our work, they focus on a particular type of graph structure change (single edge addition or deletion), operate solely in "batch mode," and use undirected graphs instead of directed acyclic graph models. Siracusa and Fisher III (2009) use a Bayesian approach to find the posterior
uncertainty over the possible directed edges at different points in a time-series. Our approach differs
by using frequentist methods instead of Bayesian ones (since we would otherwise need to maintain
a probability distribution over the superexponential number of graphical models), and by being able
to operate in real-time on an incoming data stream.
3 Locally Stationary Structure Tracker (LoSST) Algorithm
Given a set of continuous variables V, we assume that there is, at each time r, a true underlying generative model G_r over V. G_r is assumed to be a recursive Structural Equation Model (SEM): a pair ⟨G, F⟩, where G denotes a DAG over V, and F is a set of linear equations of the form V_i = Σ_{V_j ∈ pa(V_i)} a_{ji} · V_j + ε_i, where pa(V_i) denotes the variables V_j ∈ G such that V_j → V_i, and the ε_i are normally distributed noise/error terms. In contrast to previous work on structure learning, we assume only that the generating process is locally stationary: for each time r, data are generated i.i.d. from G_r, but it is not necessarily the case that G_r = G_s for r ≠ s. Notice that G_r can change in both structure (i.e., adding, removing, or reorienting edges) and parameters (i.e., changes in the a_{ji}'s or the ε_i distributions).
At a high level, the Locally Stationary Structure Tracker (LoSST) algorithm takes, at each timestep r, a new datapoint as input and outputs a graphical model M_r. Obviously, a single datapoint is insufficient to learn graphical model structure. The LoSST algorithm instead tracks the locally stationary sufficient statistics (for recursive SEMs: the means, covariances, and sample size) in an online fashion, and then dynamically (re)learns the graphical model structure as appropriate. The LoSST algorithm processes each datapoint only once, and so LoSST can also function as a single-pass, graphical model structure learner for very large datasets.
Let X^r be the r-th multivariate datapoint and let X_i^r be the value of V_i for that datapoint. To track the potentially changing generating structure, the datapoints must potentially be differentially weighted. In particular, datapoints should be weighted more heavily after a change occurs. Let a_r ∈ (0, ∞) be the weight on X^r, and let b_r = Σ_{k=1}^r a_k be the sum of those weights over time.
The weighted mean of $V_i$ after datapoint r is $\mu_i^r = \sum_{k=1}^{r} \frac{a_k}{b_r} X_i^k$, which can be computed in an
online fashion using the update equation:

$\mu_i^{r+1} = \frac{b_r}{b_{r+1}} \mu_i^r + \frac{a_{r+1}}{b_{r+1}} X_i^{r+1}$    (1)
The (weighted) covariance between $V_i$ and $V_j$ after datapoint r is provably equal to
$C^r_{V_i,V_j} = \sum_{k=1}^{r} \frac{a_k}{b_r} (X_i^k - \mu_i^r)(X_j^k - \mu_j^r)$. Let $\delta_i = \mu_i^{r+1} - \mu_i^r = \frac{a_{r+1}}{b_{r+1}} (X_i^{r+1} - \mu_i^r)$. The update equation for
$C^{r+1}$ can be written (after some algebra) as:

$C^{r+1}_{X_i,X_j} = \frac{1}{b_{r+1}} \left[ b_r C^r_{X_i,X_j} + b_r \delta_i \delta_j + a_{r+1} (X_i^{r+1} - \mu_i^{r+1})(X_j^{r+1} - \mu_j^{r+1}) \right]$    (2)
If $a_k = c$ for all k and some constant $c > 0$, then the estimated covariance matrix is identical to the
batch-mode estimated covariance matrix. If $a_r = \alpha b_r$, then the learning is the same as if one uses
TD(0) learning for each covariance with a learning rate of $\alpha$.
The sample size $S^r$ is more complicated, since datapoints are weighted differently and so the "effective" sample size can differ from the actual sample size (though it should always be less-than-or-equal). Because $X^{r+1}$ comes from the current generating structure, it should always contribute 1 to
the effective sample size. In addition, $X^{r+1}$ is weighted $\frac{a_{r+1}}{a_r}$ more than $X^r$. If we adjust the natural
sample size update equation to satisfy these two constraints, then the update equation becomes:

$S^{r+1} = \frac{a_r}{a_{r+1}} S^r + 1$    (3)

If $a_{r+1} \ge a_r$ for all r (as in the method we use below), then $S^{r+1} \le S^r + 1$. If $a_{r+1} = a_r$ for all r,
then $S^r = r$; that is, if the datapoint weights are constant, then $S^r$ is the true sample size.
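A minimal sketch of these three updates in Python (using numpy; the function and variable names are ours, not the authors'):

import numpy as np

def update_sufficient_stats(mu, C, b, S, a_prev, a_next, x):
    # Fold datapoint x (weight a_next) into the weighted mean (eq. 1),
    # weighted covariance (eq. 2), and effective sample size (eq. 3);
    # a_prev is the weight of the previous datapoint.
    b_next = b + a_next
    delta = (a_next / b_next) * (x - mu)          # mu^{r+1} - mu^r
    mu_next = mu + delta                          # eq. (1)
    resid = x - mu_next
    C_next = (b * C + b * np.outer(delta, delta)
              + a_next * np.outer(resid, resid)) / b_next  # eq. (2)
    S_next = (a_prev / a_next) * S + 1.0          # eq. (3)
    return mu_next, C_next, b_next, S_next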
Sufficient statistics tracking ($\mu^{r+1}$, $C^{r+1}$, and $S^{r+1}$) thus requires remembering only their previous values and $b_r$, assuming that $a_{r+1}$ can be efficiently computed. The $a_{r+1}$ weights are based on
the "fit" between the current estimated covariance matrix and the input data: poor fit implies that
a change in the underlying generating structure is more likely. For multivariate Gaussian data, the
"fit" between $X^{r+1}$ and the current estimated covariance matrix $C^r$ is given by the Mahalanobis
distance $D^{r+1}$ [8]: $D^{r+1} = (X^{r+1} - \mu^r)(C^r)^{-1}(X^{r+1} - \mu^r)^T$.
A large Mahalanobis distance (i.e., poor fit) for some datapoint could indicate simply an outlier;
inferring that the underlying generating structure has changed requires large Mahalanobis distances
over multiple datapoints. The likelihood of the (weighted) sequence of $D^r$'s is analytically intractable, and so we cannot use the $D^r$ values directly. We instead base the $a_{r+1}$ weights on the
(weighted) pooled p-value of the individual p-values for the Mahalanobis distance of each datapoint.
The Mahalanobis distance of a V-dimensional datapoint from a covariance matrix estimated from
a sample of size N is distributed as Hotelling's $T^2$ with parameters $p = V$ and $m = N - 1$. The
p-value for the Mahalanobis distance $D^{r+1}$ is thus: $p^{r+1} = T^2(x > D^{r+1} \mid p = V, m = S^r - 1)$,
where $S^r$ is the effective sample size. Let $\Phi(x, y)$ be the cdf of a Gaussian with mean 0 and variance
y evaluated at x. Then Liptak's method for weighted pooling of the individual p-values [7] gives
the following definition:^1

$\pi^{r+1} = \Phi\left( \sum_{i=1}^{r} a_i \Phi^{-1}(p_i, 1),\ \sum_{i=1}^{r} a_i^2 \right) = \Phi(\gamma^{r+1}, \eta^{r+1}),$

where the update equations for $\gamma$ and $\eta$ are $\gamma^{r+1} = \gamma^r + a_r \Phi^{-1}(p_r, 1)$ and $\eta^{r+1} = \eta^r + a_r^2$.
^1 $\pi^{r+1}$ cannot include $p^{r+1}$ without being circular: $p^{r+1}$ would have to be appropriately weighted by $a_{r+1}$,
but that weight depends on $\pi^{r+1}$.
There are many ways to convert the pooled p-value $\pi^{r+1}$ into a weight $a_{r+1}$. We use the strategy:
if $\pi^{r+1}$ is greater than some threshold T (i.e., the data sequence is sufficiently likely given the
current model), then keep the weight constant; if $\pi^{r+1}$ is less than T, then increase $a_{r+1}$ linearly and
inversely to $\pi^{r+1}$ up to a maximum of $\beta a_r$ at $\pi^{r+1} = 0$. Mathematically, this transformation is:

$a_{r+1} = a_r \cdot \max\left(1, \frac{\beta T - \beta \pi^{r+1} + \pi^{r+1}}{T}\right)$    (4)

Efficient computation of $a_{r+1}$ thus only requires additionally tracking $\gamma^r$, $\eta^r$, and $\pi^r$.
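A sketch of this weight computation (assuming scipy; the conversion from Hotelling's $T^2$ to an F statistic is the standard relationship $T^2(p, m) = \frac{pm}{m-p+1} F(p, m-p+1)$, and the helper name is ours). The datapoint's p-value enters the pool with its own weight, and the resulting pooled value sets the weight of the next datapoint, so there is no circularity:

import numpy as np
from scipy import stats

def weight_and_pool(x, mu, C, S, gamma, eta, a_r, T=0.05, beta=3.0):
    # Pool the p-value of datapoint x (weighted by a_r) into gamma/eta,
    # compute pi^{r+1}, and return the weight for the NEXT datapoint.
    diff = x - mu
    D = float(diff @ np.linalg.solve(C, diff))       # Mahalanobis distance
    V, m = len(x), S - 1.0                           # T^2 params: p = V, m = S - 1
    f_stat = D * (m - V + 1.0) / (V * m)             # T^2 -> F conversion (needs S > V)
    p = float(np.clip(stats.f.sf(f_stat, V, m - V + 1.0), 1e-12, 1.0))
    gamma = gamma + a_r * stats.norm.ppf(p)          # gamma^{r+1}
    eta = eta + a_r ** 2                             # eta^{r+1}
    pi = stats.norm.cdf(gamma, scale=np.sqrt(eta))   # pi^{r+1}, variance eta
    a_next = a_r * max(1.0, (beta * T - beta * pi + pi) / T)   # eq. (4)
    return a_next, pi, gamma, eta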
We can efficiently track the relevant sufficient statistics in an online fashion, and so the only remaining step is to learn the corresponding graphical model. The implementation in this paper uses the
PC algorithm [12], a standard constraint-based structure learning algorithm. A range of alternative
structure learning algorithms could be used instead, depending on the assumptions one is able to
make.
Learning graphical model structure is computationally expensive [2] and so one should balance the
accuracy of the current model against the computational cost of relearning. More precisely, graph
relearning^2 should be most frequent after an inferred underlying change, though there should be a
non-zero chance of relearning even when the structure appears to be relatively stable (since the
structure could be slowly drifting).
In practice, the LoSST algorithm probabilistically relearns based on the inverse^3 of $\pi^r$: the probability of relearning at time r + 1 is a noisy-OR gate with the probability of relearning at time r,
and a weighted $(1 - \pi^{r+1})$. Mathematically, $P_{r+1}(\mathrm{relearn}) = P_r(\mathrm{relearn}) + \alpha(1 - \pi^{r+1}) - P_r(\mathrm{relearn}) \cdot \alpha(1 - \pi^{r+1})$, where $\alpha \in [0, 1]$ modifies the frequency of graph relearning: large values
result in more frequent relearning and small values result in fewer. If a relearning event is triggered
at datapoint r, then a new graphical model structure and parameters are learned, and $P_r(\mathrm{relearn})$
is set to 0. In general, $\pi^r$ is lower when changepoints are detected, so $P_r(\mathrm{relearn})$ will increase
more quickly around changepoints, and graph relearning will become more frequent. During times
of stability, $\pi^r$ will be comparatively large, resulting in a slower increase of $P_r(\mathrm{relearn})$ and thus
less frequent graph relearning.
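As a small sketch of this decision rule (names are ours):

import numpy as np

def maybe_relearn(P_relearn, pi, alpha, rng):
    # Noisy-OR accumulation of the relearning probability; on a relearn
    # event the probability resets to 0.
    q = alpha * (1.0 - pi)
    P_relearn = P_relearn + q - P_relearn * q
    if rng.random() < P_relearn:
        return 0.0, True
    return P_relearn, False

# usage: P, do_relearn = maybe_relearn(P, pi, alpha=0.005, rng=np.random.default_rng())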
3.1 Convergence vs. diligence in LoSST
LoSST is capable of exhibiting different long-run properties, depending on its parameters. Convergence is a standard desideratum: if there is a stable structure in the limit, then the algorithm's
output should stabilize on that structure. In contexts in which the true structure can change, another
desirable property for learning algorithms is diligence: if the generating structure has a change of
given size (that manifests in the data), then the algorithm should detect and respond to that change
within a fixed number of datapoints (regardless of the amount of previous data). Both diligence and
convergence are desirable methodological virtues, but they are provably incompatible: no learning
algorithm can be both diligent and convergent [5]. Intuitively, they are incompatible because they
must respond differently to improbable datapoints: convergent algorithms must tolerate them (since
such data occur with probability 1 in the infinite limit), while diligent algorithms must regard them
as signals that the structure has changed.
If $\beta = 1$, then LoSST is a convergent algorithm, since it follows that $a_{r+1} = a_r$ for all r (which
is a sufficient condition for convergence). For $\beta > 1$, the behavior of LoSST depends on T. If
$T < 0$, then we again have $a_{r+1} = a_r$ for all r, and so LoSST is convergent. LoSST is also provably
convergent if T is time-indexed such that $T_r = f(S_r)$ for some f with (0, 1] range, where
$\sum_{i=1}^{\infty}(1 - f(i))$ converges.^4
^2 Recall that the sufficient statistics are updated after every datapoint.
^3 Recall that $\pi^r$ is a pooled p-value, so low values indicate unlikely data.
^4 Proof sketch: $\sum_{i=r}^{\infty}(1 - q_i)$ can be shown to be an upper bound on the probability that $(1 - \pi_i) > q_i$
will occur for some i in $[r, \infty)$, where $q_i$ is the i-th element of the sequence Q of lower threshold values.
Any sequence Q s.t. $\sum_{i=1}^{\infty}(1 - q_i) < 1$ will then guarantee that an infinite amount of unbiased data will
be accumulated in the infinite limit. This provides probability 1 convergence for LoSST, since the structure
learning method has probability 1 convergence in the limit. If Q is prepended with arbitrary strictly positive
threshold values, the first element of Q will still be reached infinitely many times with probability 1 in the
infinite limit, and so LoSST will still converge with probability 1, even using these expanded sequences.
In contrast, if $T > 1$ and $\beta > 1$, then LoSST is provably diligent.^5 We conjecture that there are
sequences of time-indexed $T_r < 1$ that will also yield diligent versions of LoSST, analogously to
the condition given above for convergence.
Interestingly, if $\beta > 1$ and $0 < T < 1$, then LoSST is neither convergent nor diligent, but rather
strikes a balance between the desiderata. In particular, these versions (a) tend to converge towards
stable structures, but provably do not actually converge since they remain sensitive to outliers; and
(b) respond quickly to change in generating structure, but only exponentially fast in the number of
previous datapoints, rather than within a fixed interval. The full behavior of LoSST in this parameter
regime, including the extent and sensitivity of trade-offs, is an open question for future research.
For the simulations below, unsystematic investigation led to $T = 0.05$ and $\beta = 3$, which seemed to
appropriately trade off convergence vs. diligence in that context.
4 Simulation results
We used synthetic data to evaluate the performance of LoSST given known ground truth. All simulations used scenarios in which either the ground truth parameters or ground truth graph (and parameters) changed during the course of data collection. Before the first changepoint, there should be
no significant difference between LoSST and a standard batch-mode learner, since those datapoints
are globally i.i.d. Performance on these datapoints thus provides information about the performance
cost (if any) of online learning using LoSST, relative to traditional algorithms. After a changepoint,
one is interested both in the absolute performance of LoSST (i.e., can it track the changes?) and in
its performance relative to a standard batch-mode algorithm (i.e., what performance gain does it provide?). We used the PC algorithm [12] as our baseline batch-mode learning algorithm; we conjecture
that any other standard graphical model structure learning algorithm would perform similarly, given
the graphs and sample sizes in our simulations.
In order to directly compare the performance of LoSST and PC, we imposed a fixed "graph relearning" schedule^6 on LoSST. The first set of simulations used datasets with 2000 datapoints, where the
SEM graph and parameters both changed after the first 1000 datapoints. 500 datasets were generated
for each of a range of $\langle$#variables, MaxDegree$\rangle$ pairs,^7 where each dataset used two different,
randomly generated SEMs of the specified size and degree.
Figures 1(a-c) show the mean edge addition, removal, and orientation errors (respectively) by
LoSST as a function of time, and Figures 1(d-f) show the means of $\#errors_{PC} - \#errors_{LoSST}$
for each error type (i.e., higher numbers imply LoSST outperforms PC). In all Figures, each
$\langle$variable, degree$\rangle$ pair is a distinct line. As expected, LoSST was basically indistinguishable from
PC for the first 1000 datapoints; the lines in Figures 1(d-f) for that interval are all essentially zero.
After the underlying generating model changes, however, there are significant differences. The PC
algorithm performs quite poorly because the full dataset is essentially a mixture from two different
distributions which induces a large number of spurious associations. In contrast, the LoSST algorithm finds large Mahalanobis distances for those datapoints, which lead to higher weights, which
lead it to learn (approximately) the new underlying graphical model. In practice, LoSST typically
stabilized on a new model by roughly 250 datapoints after the changepoint.
The second set of simulations was identical to the first (500 runs each for various pairs of variable
number and edge degree), except that the graph was held constant throughout and only the SEM
parameters changed after 1000 datapoints. Figures 2(a-c) and 2(d-f) report, for these simulations, the
same measures as Figures 1(a-c) and 1(d-f). Again, LoSST and PC performed basically identically
for the first 1000 datapoints. Performance after the parameter change did not follow quite the same
pattern as before, however. LoSST again does much better on edge addition and orientation errors,
but performed significantly worse on edge removal errors for the first 200 points following the change.
^5 Proof sketch: By equation (4), $T > 1$ and $\beta > 1$ imply $\beta - \frac{\beta - 1}{T} > 1$, so $a_{r+1} \ge a_r \left( \beta - \frac{\beta - 1}{T} \right) > a_r$ for all
r. This last strict inequality implies that the effective sample size has a finite upper bound ($= \frac{\beta T - \beta + 1}{(\beta - 1)(T - 1)}$ if
$\pi_r = 1$ for all r), and the majority of the effective sample comes from recent data points. These two conditions
are jointly sufficient for diligence.
^6 LoSST relearned graphs and PC was rerun after datapoints {25, 50, 100, 200, 300, 500, 750, 1000, 1025,
1050, 1100, 1200, 1300, 1500, 1750, 2000}.
^7 Specifically, $\langle 4, 3 \rangle$, $\langle 8, 3 \rangle$, $\langle 10, 3 \rangle$, $\langle 10, 7 \rangle$, $\langle 15, 4 \rangle$, $\langle 15, 9 \rangle$, $\langle 20, 5 \rangle$, and $\langle 20, 12 \rangle$.
Figure 1: Structure & parameter changes: (a-c) LoSST errors; (d-f) LoSST improvement over PC
Figure 2: Parameter changes: (a-c) LoSST errors; (d-f) LoSST improvement over PC
When a change occurs, PC initially responds by adding edges to the output, while LoSST
responds by being more cautious in its inferences (since the effective sample size shrinks after a
change). The short-term impact on each algorithm is thus: PC's output tends to be a superset of
the original edges, while LoSST outputs fewer edges. As a result, PC can outperform LoSST for
a brief time on the edge removal metric in these types of cases in which the change involves only
parameters, not graph structure.
The third set of simulations was designed to explore in detail the performance with probabilistic
relearning. We randomly generated a single dataset with 10,000 datapoints, where the underlying
SEM graph and parameters changed after every 1000 datapoints. Each SEM had 10 variables and
maximum degree of 7. We then ran LoSST with probabilistic relearning ($\alpha = .005$) 500 times
on this dataset.

Figure 3: (a) LoSST expected relearnings; (b-d) Expected edge additions, removals, and flips,
against constant relearning

Figure 4: (a) Effective sample size during LoSST run on BLS data; (b) Pooled p-values; (c) Mahalanobis distances

Figure 3(a) shows the (observed) expected number of "relearnings" in each 25-
datapoint window. As expected, there are substantial relearning peaks after each structure shift, and
the expected number of relearnings persisted at roughly 0.1 per 25 datapoints throughout the stable
periods. Figures 3(b-d) provide error information: the smooth green lines indicate the mean edge
addition, removal, and orientation errors (respectively) during learning, and the blocky blue lines
indicate the LoSST errors if graph relearning occurred after every datapoint. Although there are
many fewer graph relearnings with the probabilistic schedule, overall errors did not significantly
increase.
5 Application to US price index volatility
To test the performance of the LoSST algorithm on real-world data, we applied it to seasonally
adjusted price index data from the U.S. Bureau of Labor Statistics. We limited the data to commodities/services with data going back to at least 1967, resulting in a data set of 6 variables: Apparel,
Food, Housing, Medical, Other, and Transportation. The data were collected monthly from 1967-2011, resulting in 529 data points. Because of significant trends in the indices over time, we used
month-to-month differences.
Figure 4(a) shows the change in effective sample size, where the key observation is that change
detection prompts significant drops in the effective sample size. Figures 4(b) and 4(c) show the
pooled p-value and Mahalanobis distance for each month, which are the drivers of sample size
changes. The Great Moderation was a well-known macroeconomic phenomenon between 1980 and
2007 in which the U.S. financial market underwent a slow but steady reduction in volatility. LoSST
appears to detect exactly such a shift in the volatility of the relationships between these price indexes,
though it detects another shift shortly after 2000.^8 This real-world case study also demonstrates the
importance of using pooled p-values, as that is why LoSST does not respond to the single-month
spike in Mahalanobis distance in 1995, but does respond to the extended sequence of slightly above
average Mahalanobis distances around 1980.
6 Discussion and future research
The LoSST algorithm is suitable for locally stationary structures, but there are obviously limits. In
particular, it will perform poorly if the generating structure changes very rapidly, or if the datapoints
are a random-order mixture from multiple structures. An important future research direction is to
characterize and then improve LoSST?s performance on more rapidly varying structures. Various
heuristic aspects of LoSST could also potentially be replaced by more normative procedures, though
as noted earlier, many will not work without substantial revision (e.g., obvious Bayesian methods).
This algorithm can also be extended to have the current learned model influence the ar weights.
Suppose particular graphical edges or adjacencies have not changed over a long period of time, or
have been stable over multiple relearnings. In that case, one might plausibly conclude that those
connections are less likely to change, and so much greater error should be required to relearn those
connections. In practice, this extension would require the $a_r$ weights to vary across $\langle V_i, V_j \rangle$ pairs,
which significantly complicates the mathematics and memory requirements of the sufficient statistic
tracking. It is an open question whether the (presumably) improved tracking would compensate for
the additional computational and memory cost in particular domains.
We have focused on SEMs, but there are many other types of graphical models; for example,
Bayesian networks have the same graph-type but are defined over discrete variables with conditional
probability tables. Tracking the sufficient statistics for Bayes net structure learning is substantially
more costly, and we are currently investigating ways to learn the necessary information in a tractable,
online fashion. Similarly, our graph learning relies on constraint-based structure learning since the
relevant scores in score-based methods (such as [3]) do not decompose in a manner that is suitable
for online learning. We are thus investigating alternative scores, as well as heuristic approximations
to principled score-based search.
There are many real-world contexts in which batch-mode structure learning is either infeasible or
inappropriate. In particular, the real world frequently involves dynamically varying structures that
our algorithms must track over time. The online structure learning algorithm presented here has
great potential to perform well in a range of challenging contexts, and at little cost in "traditional"
settings.
Acknowledgments
Thanks to Joe Ramsey and Rob Tillman for help with the simulations, and three anonymous reviewers for helpful comments. DD was partially supported by a James S. McDonnell Foundation Scholar
Award.
^8 This shift is almost certainly due to the U.S. recession that occurred in March to November of that year.
References
[1] R. P. Adams and D. J. C. MacKay. Bayesian online changepoint detection. Technical report, University of Cambridge, Cambridge, UK, 2007. arXiv:0710.3742v1 [stat.ML].
[2] D. M. Chickering. Learning Bayesian networks is NP-complete. In Proceedings of AI and Statistics, 1995.
[3] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507-554, 2002.
[4] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection algorithm. IEEE Transactions on Signal Processing, 8:2961-2974, 2005.
[5] E. Kummerfeld and D. Danks. Model change and methodological virtues in scientific inference. Technical report, Carnegie Mellon University, Pittsburgh, Pennsylvania, 2013.
[6] S. L. Lauritzen. Graphical models. Clarendon Press, 1996.
[7] T. Liptak. On the combination of independent tests. Magyar Tud. Akad. Mat. Kutato Int. Kozl., 3:171-197, 1958.
[8] P. C. Mahalanobis. On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India, 2:49-55, 1936.
[9] A. McCallum, D. Freitag, and F. C. N. Pereira. Maximum entropy Markov models of information extraction and segmentation. In Proceedings of ICML-2000, pages 591-598, 2000.
[10] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[11] M. R. Siracusa and J. W. Fisher III. Tractable Bayesian inference of time-series dependence structure. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, 2009.
[12] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.
[13] R. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44, 1988.
[14] M. Talih and N. Hengartner. Structural learning with time-varying components: tracking the cross-section of financial time series. Journal of the Royal Statistical Society - Series B: Statistical Methodology, 67(3):321-341, 2005.
Sparse Precision Matrix Estimation with Calibration
Tuo Zhao
Department of Computer Science
Johns Hopkins University
Han Liu
Department of Operations Research and Financial Engineering
Princeton University
Abstract
We propose a semiparametric method for estimating the sparse precision matrix of a
high dimensional elliptical distribution. The proposed method calibrates the regularization when estimating each column of the precision matrix. Thus it not only
is asymptotically tuning free, but also achieves an improved finite sample performance. Theoretically, we prove that the proposed method achieves the parametric rates of convergence in both parameter estimation and model selection. We
present numerical results on both simulated and real datasets to support our theory
and illustrate the effectiveness of the proposed estimator.
1 Introduction
We study the precision matrix estimation problem: let $X = (X_1, ..., X_d)^T$ be a d-dimensional random vector following some distribution with mean $\mu \in \mathbb{R}^d$ and covariance matrix $\Sigma \in \mathbb{R}^{d \times d}$, where
$\Sigma_{kj} = \mathbb{E}X_k X_j - \mathbb{E}X_k \mathbb{E}X_j$. We want to estimate $\Omega = \Sigma^{-1}$ from n independent observations. To
make the estimation manageable in high dimensions ($d/n \to \infty$), we assume that $\Omega$ is sparse. That
is, many off-diagonal entries of $\Omega$ are zeros.
Existing literature in machine learning and statistics usually assumes that X follows a multivariate Gaussian distribution, i.e., $X \sim N(0, \Sigma)$. Such a distributional assumption naturally connects
sparse precision matrices with Gaussian graphical models (Dempster, 1972), and has motivated
numerous applications (Lauritzen, 1996). To estimate sparse precision matrices for Gaussian distributions, many methods in the past decade have been proposed based on the sample covariance
estimator. Let $x_1, ..., x_n \in \mathbb{R}^d$ be n independent observations of X; the sample covariance estimator is defined as

$S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T \quad \text{with} \quad \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i.$    (1.1)
Banerjee et al. (2008); Yuan and Lin (2007); Friedman et al. (2008) take advantage of the Gaussian
likelihood, and propose the graphical lasso (GLASSO) estimator by solving

$\hat{\Omega} = \arg\min_{\Omega} -\log|\Omega| + \mathrm{tr}(S\Omega) + \lambda \sum_{j,k} |\Omega_{kj}|,$

where $\lambda > 0$ is the regularization parameter. Scalable software packages for GLASSO have been
developed, such as huge (Zhao et al., 2012).
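For readers who want to try this baseline, scikit-learn also ships an implementation; a minimal usage sketch (the penalty value 0.1 is arbitrary and the data matrix is a random placeholder):

import numpy as np
from sklearn.covariance import GraphicalLasso

X = np.random.randn(200, 10)                # placeholder n-by-d data matrix
model = GraphicalLasso(alpha=0.1).fit(X)    # alpha plays the role of lambda
Omega_hat = model.precision_                # estimated sparse precision matrix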
In contrast, Cai et al. (2011); Yuan (2010) adopt the pseudo-likelihood approach to estimate the precision matrix. Their estimators follow a column-by-column estimation scheme, and possess better
theoretical properties. More specifically, given a matrix $A \in \mathbb{R}^{d \times d}$, let $A_{\cdot j} = (A_{1j}, ..., A_{dj})^T$
denote the $j$-th column of A, $\|A_{\cdot j}\|_1 = \sum_k |A_{kj}|$ and $\|A_{\cdot j}\|_\infty = \max_k |A_{kj}|$. Cai et al. (2011)
obtain the CLIME estimator by solving

$\hat{\Omega}_{\cdot j} = \arg\min_{\Omega_{\cdot j}} \|\Omega_{\cdot j}\|_1 \ \ \text{s.t.} \ \ \|S\Omega_{\cdot j} - I_{\cdot j}\|_\infty \le \lambda, \ \forall j = 1, ..., d.$    (1.2)
Computationally, (1.2) can be reformulated and solved by general linear program solvers. Theoretically, let $\|A\|_1 = \max_j \|A_{\cdot j}\|_1$ be the matrix $\ell_1$ norm of A, and $\|A\|_2$ be the largest singular
value of A (i.e., the spectral norm of A). Cai et al. (2011) show that if we choose

$\lambda \ge \|\Omega\|_1 \sqrt{\frac{\log d}{n}},$    (1.3)

the CLIME estimator achieves the following rates of convergence under the spectral norm,

$\|\hat{\Omega} - \Omega\|_2^2 = O_P\left( \|\Omega\|_1^{4-4q} s^2 \left( \frac{\log d}{n} \right)^{1-q} \right),$    (1.4)

where $q \in [0, 1)$ and $s = \max_j \sum_k |\Omega_{kj}|^q$.
Despite these good properties, the CLIME estimator in (1.2) has three drawbacks: (1) The theoretical justification heavily relies on the subgaussian tail assumption. When this assumption is violated,
the inference can be unreliable; (2) All columns are estimated using the same regularization parameter, even though these columns may have different sparseness. As a result, more estimation bias is
introduced to the denser columns to compensate for the sparser columns. In other words, the estimation is not calibrated (Liu et al., 2013); (3) The selected regularization parameter in (1.3) involves
the unknown quantity $\|\Omega\|_1$. Thus we have to carefully tune the regularization parameter over a
refined grid of potential values in order to get a good finite-sample performance. To overcome the
above three drawbacks, we propose a new sparse precision matrix estimation method, named EPIC
(Estimating Precision mIatrix with Calibration).
To relax the Gaussian assumption, our EPIC method adopts an ensemble of the transformed
Kendall's tau estimator and Catoni's M-estimator (Kruskal, 1958; Catoni, 2012). Such a semiparametric combination makes EPIC applicable to the elliptical distribution family. The elliptical
family (Cambanis et al., 1981; Fang et al., 1990) contains many multivariate distributions such as
Gaussian, multivariate t-distribution, Kotz distribution, multivariate Laplace, Pearson type II and
VII distributions. Many of these distributions do not have subgaussian tails, thus the commonly
used sample covariance-based sparse precision matrix estimators often fail miserably.
Moreover, our EPIC method adopts a calibration framework proposed in Gautier and Tsybakov
(2011), which reduces the estimation bias by calibrating the regularization for each column. Meanwhile, the optimal regularization parameter selection under such a calibration framework does not
require any prior knowledge of unknown quantities (Belloni et al., 2011). Thus our EPIC estimator is asymptotically tuning free (Liu and Wang, 2012). Our theoretical analysis shows that if the
underlying distribution has a finite fourth moment, the EPIC estimator achieves the same rates of
convergence as (1.4). Numerical experiments on both simulated and real datasets show that EPIC
outperforms existing precision matrix estimation methods.
2 Background
We first introduce some notations used throughout this paper. Given a vector $v = (v_1, \ldots, v_d)^T \in \mathbb{R}^d$, we define the following vector norms:

$\|v\|_1 = \sum_j |v_j|, \quad \|v\|_2^2 = \sum_j v_j^2, \quad \|v\|_\infty = \max_j |v_j|.$
Given a matrix $A \in \mathbb{R}^{d \times d}$, we use $A_{\cdot j} = (A_{1j}, ..., A_{dj})^T$ to denote the $j$-th column of A. We
define the following matrix norms:

$\|A\|_1 = \max_j \|A_{\cdot j}\|_1, \quad \|A\|_2 = \max_j \psi_j(A), \quad \|A\|_F^2 = \sum_{k,j} A_{kj}^2, \quad \|A\|_{\max} = \max_{k,j} |A_{kj}|,$

where the $\psi_j(A)$'s are the singular values of A.
We then briefly review the elliptical family. As a generalization of the Gaussian distribution, it has
the following definition.
Definition 2.1 (Fang et al. (1990)). Given $\mu \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d \times d}$, where $\Sigma \succeq 0$ and
$\mathrm{rank}(\Sigma) = r \le d$, we say that a d-dimensional random vector $X = (X_1, ..., X_d)^T$ follows an
elliptical distribution with parameters $\mu$, $\xi$, and $\Sigma$, if X has a stochastic representation

$X \overset{d}{=} \mu + \xi B U,$

such that $\xi \ge 0$ is a continuous random variable independent of U, $U \in \mathbb{S}^{r-1}$ is uniformly distributed on the unit sphere in $\mathbb{R}^r$, and $\Sigma = BB^T$.
Since we are interested in precision matrix estimation, we assume that $\max_j \mathbb{E}X_j^2$ is finite. Note
that the stochastic representation in Definition 2.1 is not unique, and existing literature usually imposes the constraint $\max_j \Sigma_{jj} = 1$ to make the distribution identifiable (Fang et al., 1990). However,
such a constraint does not necessarily make $\Sigma$ the covariance matrix. Here we present an alternative
representation as follows.
Proposition 2.2. If X has the stochastic representation $X = \mu + \xi B U$ as in Definition 2.1, given
$\Sigma = BB^T$, $\mathrm{rank}(\Sigma) = r$, and $\mathbb{E}(\xi^2) = \alpha < \infty$, X can be rewritten as $X = \mu + \zeta A U$, where
$\zeta = \xi \sqrt{r/\alpha}$, $A = B \sqrt{\alpha/r}$, and $\Sigma = AA^T$. Moreover we have

$\mathbb{E}(\zeta^2) = r, \quad \mathbb{E}(X) = \mu, \quad \text{and} \quad \mathrm{Cov}(X) = \Sigma.$
After the reparameterization in Proposition 2.2, the distribution is identifiable with $\Sigma$ defined as the
conventional covariance matrix.
Remark 2.3. $\Sigma$ has the decomposition $\Sigma = \Gamma Z \Gamma$, where Z is the Pearson correlation matrix,
and $\Gamma = \mathrm{diag}(\gamma_1, ..., \gamma_d)$ with $\gamma_j$ as the standard deviation of $X_j$. Since $\Gamma$ is a diagonal matrix,
the precision matrix $\Omega$ also has a similar decomposition $\Omega = \Gamma^{-1} \Theta \Gamma^{-1}$, where $\Theta = Z^{-1}$ is the inverse
correlation matrix.
3 Method
We propose a three-step method: (1) We first use the transformed Kendall's tau estimator and
Catoni's M-estimator to obtain $\hat{Z}$ and $\hat{\Gamma}$ respectively. (2) We then plug $\hat{Z}$ into the calibrated inverse correlation matrix estimation to obtain $\hat{\Theta}$. (3) At last, we assemble $\hat{\Gamma}$ and $\hat{\Theta}$ to obtain $\hat{\Omega}$.
3.1 Correlation Matrix and Standard Deviation Estimation
To estimate Z, we adopt the transformed Kendall's tau estimator proposed in Liu et al. (2012). Given
n independent observations $x_1, ..., x_n$, where $x_i = (x_{i1}, ..., x_{id})^T$, we calculate the Kendall's tau
statistic by

$\hat{\tau}_{kj} = \begin{cases} \frac{2}{n(n-1)} \sum_{i < i'} \mathrm{sign}\left( (x_{ij} - x_{i'j})(x_{ik} - x_{i'k}) \right) & \text{if } j \ne k; \\ 1 & \text{otherwise.} \end{cases}$

After a simple transformation, we obtain a correlation matrix estimator $\hat{Z} = [\hat{Z}_{kj}] = \left[ \sin\left( \frac{\pi}{2} \hat{\tau}_{kj} \right) \right]$
(Liu et al., 2012; Zhao et al., 2013).
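A direct, if naive ($O(d^2)$ pairs), sketch of this estimator; note that scipy's kendalltau is the tie-corrected version, which coincides with the definition above for continuous data without ties:

import numpy as np
from scipy.stats import kendalltau

def kendall_correlation(X):
    # X: n-by-d data matrix; returns Z_hat = sin(pi/2 * tau_hat)
    n, d = X.shape
    tau = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            t, _ = kendalltau(X[:, j], X[:, k])
            tau[j, k] = tau[k, j] = t
    return np.sin(np.pi * tau / 2.0)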
To estimate $\Gamma = \mathrm{diag}(\gamma_1, ..., \gamma_d)$, we adopt the Catoni's M-estimator proposed in Catoni (2012).
We define

$\psi(t) = \mathrm{sign}(t) \log(1 + |t| + t^2/2),$

where $\mathrm{sign}(0) = 0$. Let $\hat{m}_j$ be the estimator of $\mathbb{E}X_j^2$; we solve

$\sum_{i=1}^{n} \psi\left( (x_{ij} - \hat{\mu}_j) \sqrt{\frac{2}{n K_{\max}}} \right) = 0, \qquad \sum_{i=1}^{n} \psi\left( (x_{ij}^2 - \hat{m}_j) \sqrt{\frac{2}{n K_{\max}}} \right) = 0,$

where $K_{\max}$ is an upper bound of $\max_j \mathrm{Var}(X_j)$ and $\max_j \mathrm{Var}(X_j^2)$. Since $\psi(t)$ is a strictly
increasing function in t, $\hat{\mu}_j$ and $\hat{m}_j$ are unique and can be obtained by the efficient Newton-Raphson
method (Stoer et al., 1993). Then we can obtain $\hat{\gamma}_j$ using $\hat{\gamma}_j = \sqrt{\hat{m}_j - \hat{\mu}_j^2}$.
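A Newton-Raphson sketch for one such estimating equation (helper names are ours; the derivative $\psi'(t) = (1+|t|)/(1+|t|+t^2/2)$ follows directly from the definition of $\psi$):

import numpy as np

def psi(t):
    return np.sign(t) * np.log1p(np.abs(t) + t * t / 2.0)

def catoni_estimate(z, K_max, tol=1e-10, max_iter=100):
    # Solve sum_i psi((z_i - theta) * s) = 0 with s = sqrt(2 / (n * K_max))
    n = len(z)
    s = np.sqrt(2.0 / (n * K_max))
    theta = float(np.mean(z))              # initialize at the sample mean
    for _ in range(max_iter):
        u = (z - theta) * s
        f = np.sum(psi(u))
        # psi'(t) = (1 + |t|) / (1 + |t| + t^2/2); chain rule brings in -s
        fprime = -s * np.sum((1.0 + np.abs(u)) / (1.0 + np.abs(u) + u * u / 2.0))
        step = f / fprime
        theta -= step
        if abs(step) < tol:
            break
    return theta

With this helper, $\hat{\mu}_j$ = catoni_estimate(X[:, j], K_max), $\hat{m}_j$ = catoni_estimate(X[:, j]**2, K_max), and $\hat{\gamma}_j = \sqrt{\hat{m}_j - \hat{\mu}_j^2}$.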
3.2 Calibrated Inverse Correlation Matrix Estimation
We plug $\hat{Z}$ into the following convex program,

$(\hat{\Theta}_{\cdot j}, \hat{\tau}_j) = \arg\min_{\Theta_{\cdot j}, \tau_j} \|\Theta_{\cdot j}\|_1 + c\,\tau_j \ \ \text{s.t.} \ \ \|\hat{Z}\Theta_{\cdot j} - I_{\cdot j}\|_\infty \le \lambda \tau_j, \ \|\Theta_{\cdot j}\|_1 \le \tau_j, \ \forall j = 1, ..., d,$    (3.1)

where c can be an arbitrary constant (e.g. c = 0.5). $\tau_j$ works as an auxiliary variable to calibrate the
regularization.
Remark 3.1. If we know $\tau_j = \|\Theta_{\cdot j}\|_1$ in advance, we can consider a simple variant of the CLIME
estimator,

$\hat{\Theta}_{\cdot j} = \arg\min_{\Theta_{\cdot j}} \|\Theta_{\cdot j}\|_1 \ \ \text{s.t.} \ \ \|S\Theta_{\cdot j} - I_{\cdot j}\|_\infty \le \lambda \tau_j, \ \forall j = 1, ..., d.$

Since we do not have any prior knowledge of the $\tau_j$'s, we consider the following replacement

$(\hat{\Theta}_{\cdot j}, \hat{\tau}_j) = \arg\min_{\Theta_{\cdot j}, \tau_j} \|\Theta_{\cdot j}\|_1 \ \ \text{s.t.} \ \ \|S\Theta_{\cdot j} - I_{\cdot j}\|_\infty \le \lambda \tau_j, \ \tau_j = \|\Theta_{\cdot j}\|_1, \ \forall j = 1, ..., d.$    (3.2)

As can be seen, (3.2) is nonconvex due to the constraint $\tau_j = \|\Theta_{\cdot j}\|_1$. Thus no global optimum can
be guaranteed in polynomial time.
From a computational perspective, (3.1) can be viewed as a convex relaxation of (3.2). Both the
objective function and the constraint in (3.1) contain $\tau_j$ to prevent choosing $\tau_j$ either too large
or too small. Due to complementary slackness, (3.1) eventually encourages the regularization
to be proportional to the $\ell_1$ norm of each column (weak sparseness). Therefore the estimation is
calibrated.
By introducing the decomposition $\Theta_{\cdot j} = \Theta^+_{\cdot j} - \Theta^-_{\cdot j}$ with $\Theta^+_{\cdot j}, \Theta^-_{\cdot j} \ge 0$, we can reformulate (3.1)
as a linear program as follows,

$(\hat{\Theta}^+_{\cdot j}, \hat{\Theta}^-_{\cdot j}, \hat{\tau}_j) = \arg\min_{\Theta^+_{\cdot j}, \Theta^-_{\cdot j}, \tau_j} \mathbf{1}^T \Theta^+_{\cdot j} + \mathbf{1}^T \Theta^-_{\cdot j} + c\,\tau_j$    (3.3)

subject to

$\begin{bmatrix} \hat{Z} & -\hat{Z} & -\boldsymbol{\lambda} \\ -\hat{Z} & \hat{Z} & -\boldsymbol{\lambda} \\ \mathbf{1}^T & \mathbf{1}^T & -1 \end{bmatrix} \begin{bmatrix} \Theta^+_{\cdot j} \\ \Theta^-_{\cdot j} \\ \tau_j \end{bmatrix} \le \begin{bmatrix} I_{\cdot j} \\ -I_{\cdot j} \\ 0 \end{bmatrix}, \quad \Theta^+_{\cdot j} \ge 0, \ \Theta^-_{\cdot j} \ge 0, \ \tau_j \ge 0,$

where $\boldsymbol{\lambda} = (\lambda, ..., \lambda)^T \in \mathbb{R}^d$. (3.3) can be solved by existing linear program solvers, and further
accelerated by parallel computing techniques.
Remark 3.2. Though (3.1) looks more complicated than (1.2), it is not necessarily more computationally difficult. After the reparameterization, (3.3) contains 2d + 1 parameters to optimize, which
is of a similar scale to the linear program formulation of the CLIME method in Cai et al. (2011).
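A sketch of one column solve using scipy's LP interface (the variable stacking $[\Theta^+_{\cdot j}; \Theta^-_{\cdot j}; \tau_j]$ and the function name are ours):

import numpy as np
from scipy.optimize import linprog

def epic_column(Z_hat, j, lam, c=0.5):
    # Solve (3.3) for column j; returns (Theta_col, tau_j)
    d = Z_hat.shape[0]
    ones = np.ones(d)
    e_j = np.zeros(d)
    e_j[j] = 1.0
    cost = np.concatenate([ones, ones, [c]])
    A_ub = np.block([
        [Z_hat, -Z_hat, -lam * ones[:, None]],
        [-Z_hat, Z_hat, -lam * ones[:, None]],
        [ones[None, :], ones[None, :], -np.ones((1, 1))],
    ])
    b_ub = np.concatenate([e_j, -e_j, [0.0]])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 1), method="highs")
    x = res.x
    return x[:d] - x[d:2 * d], x[-1]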
Our EPIC method does not guarantee the symmetry of the estimator $\hat{\Theta}$. Thus we need the following
symmetrization method to obtain the symmetric replacement $\tilde{\Theta}$:

$\tilde{\Theta}_{kj} = \hat{\Theta}_{kj} I(|\hat{\Theta}_{kj}| \le |\hat{\Theta}_{jk}|) + \hat{\Theta}_{jk} I(|\hat{\Theta}_{kj}| > |\hat{\Theta}_{jk}|).$
3.3 Precision Matrix Estimation
Once we obtain the estimated inverse correlation matrix $\tilde{\Theta}$, we can recover the precision matrix
estimator by the ensemble rule,

$\hat{\Omega} = \hat{\Gamma}^{-1} \tilde{\Theta} \hat{\Gamma}^{-1}.$
Remark 3.3. A possible alternative is to directly estimate $\Omega$ by plugging a covariance estimator

$\hat{S} = \hat{\Gamma} \hat{Z} \hat{\Gamma}$    (3.4)

into (3.1) instead of $\hat{Z}$, but this direct estimation procedure makes the regularization parameter
selection sensitive to $\mathrm{Var}(X_j^2)$.
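Putting the symmetrization and ensemble steps together (a sketch; names are ours):

import numpy as np

def assemble_precision(Theta_hat, gamma_hat):
    # Symmetrize by keeping the entry of smaller magnitude, then apply
    # the ensemble rule Omega_hat = Gamma^{-1} Theta_tilde Gamma^{-1}
    keep = np.abs(Theta_hat) <= np.abs(Theta_hat.T)
    Theta_tilde = np.where(keep, Theta_hat, Theta_hat.T)
    inv_gamma = 1.0 / np.asarray(gamma_hat)
    return Theta_tilde * np.outer(inv_gamma, inv_gamma)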
4 Statistical Properties
In this section, we study statistical properties of the EPIC estimator. We define the following class
of sparse symmetric matrices,

$\mathcal{U}_q(s, M) = \left\{ \Omega \in \mathbb{R}^{d \times d} \mid \Omega \succ 0, \ \Omega = \Omega^T, \ \max_j \sum_k |\Omega_{kj}|^q \le s, \ \|\Omega\|_1 \le M \right\},$

where $q \in [0, 1)$ and (s, d, M) can scale with the sample size n. We also impose the following
additional conditions:

(A.1) $\Omega \in \mathcal{U}_q(s, M)$
(A.2) $\max_j |\mu_j| \le \mu_{\max}$, $\max_j \gamma_j \le \gamma_{\max}$, $\min_j \gamma_j \ge \gamma_{\min}$
(A.3) $\max_j \mathbb{E}X_j^4 \le K$

where $\mu_{\max}$, K, $\gamma_{\max}$, and $\gamma_{\min}$ are constants.
Before we proceed with our main results, we first present the following key lemma.
Lemma 4.1. Suppose that X follows an elliptical distribution with mean $\mu$ and covariance $\Sigma = \Gamma Z \Gamma$. Assume that (A.1)-(A.3) hold. Given the transformed Kendall's tau estimator and Catoni's M-estimator defined in Section 3, there exist universal constants $\kappa_1$ and $\kappa_2$ such that for large enough
n,

$P\left( \max_j |\hat{\gamma}_j^{-1} - \gamma_j^{-1}| \le \kappa_2 \sqrt{\frac{\log d}{n}} \right) \ge 1 - \frac{1}{d^3},$

$P\left( \max_{j,k} |\hat{Z}_{kj} - Z_{kj}| \le \kappa_1 \sqrt{\frac{\log d}{n}} \right) \ge 1 - \frac{1}{d^3}.$
Lemma 4.1 implies that both the transformed Kendall's tau estimator and Catoni's M-estimator possess
good concentration properties, which enable us to obtain a consistent estimator of $\Omega$.
The next theorem presents the rates of convergence under the matrix $\ell_1$ norm, spectral norm, Frobenius norm, and max norm.
Theorem 4.2. Suppose that X follows an elliptical distribution. Assume (A.1)-(A.3) hold. There
exist universal constants $C_1$, $C_2$, and $C_3$ such that by taking

$\lambda = \kappa_1 \sqrt{\frac{\log d}{n}},$    (4.1)

for large enough n and p = 1, 2, we have

$\|\hat{\Omega} - \Omega\|_p^2 \le C_1 M^{4-4q} s^2 \left( \frac{\log d}{n} \right)^{1-q},$

$\frac{1}{d} \|\hat{\Omega} - \Omega\|_F^2 \le C_2 M^{4-2q} s \left( \frac{\log d}{n} \right)^{1-q/2},$

$\|\hat{\Omega} - \Omega\|_{\max} \le C_3 M^2 \sqrt{\frac{\log d}{n}},$

with probability at least $1 - 3\exp(-3\log d)$. Moreover, when the exact sparsity holds (i.e., q = 0),
let $E = \{(k, j) \mid \Omega_{kj} \ne 0\}$ and $\hat{E} = \{(k, j) \mid \hat{\Omega}_{kj} \ne 0\}$; then we have $P(E \subseteq \hat{E}) \to 1$ if there
exists a large enough constant $C_4$ such that

$\min_{(k,j) \in E} |\Omega_{kj}| \ge C_4 M^2 \sqrt{\frac{\log d}{n}}.$
The rates of convergence in Theorem 4.2 are comparable to those in Cai et al. (2011).
Remark 4.3. The selected tuning parameter $\lambda$ in (4.1) does not involve any unknown quantity.
Therefore our EPIC method is asymptotically tuning free.
5 Numerical Simulations
In this section, we compare the proposed EPIC method with other methods including

(1) GLASSO.RC: GLASSO + $\hat{S}$ defined in (3.4) as the input covariance matrix
(2) CLIME.RC: CLIME + $\hat{S}$ as the input covariance matrix
(3) CLIME.SC: CLIME + S defined in (1.1) as the input covariance matrix
We consider three different settings for the comparison: (1) d = 100; (2) d = 200; (3) d = 400. We
adopt the following three graph generation schemes, as illustrated in Figure 1, to obtain precision
matrices.
Figure 1: Three different graph patterns: (a) chain, (b) Erdős-Rényi, (c) scale-free. To ease the visualization, we choose d = 100.
We then generate n = 200 independent samples from the t-distribution^1 with 5 degrees of freedom,
mean 0 and covariance $\Sigma = \Omega^{-1}$. For the EPIC estimator, we set c = 0.5 in (3.1). For the Catoni's
M-estimator, we set $K_{\max} = 10^2$.
To evaluate the performance in parameter estimation, we repeatedly split the data into a training set
of $n_1 = 160$ samples and a validation set of $n_2 = 40$ samples for 10 times. We tune $\lambda$ over a refined
grid; the selected optimal regularization parameter is

$\hat{\lambda} = \arg\min_{\lambda} \sum_{k=1}^{10} \|\hat{\Omega}^{(\lambda,k)} \hat{\Sigma}^{(k)} - I\|_{\max},$

where $\hat{\Omega}^{(\lambda,k)}$ denotes the estimated precision matrix using the regularization parameter $\lambda$ and the
training set in the $k$-th split, and $\hat{\Sigma}^{(k)}$ denotes the estimated covariance matrix using the validation
set in the $k$-th split. Table 1 summarizes our experimental results averaged over 200 simulations. We
see that EPIC outperforms the competing estimators throughout all settings.
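The selection loop for this criterion is short; a sketch in which fit_epic, X_train, and Sigma_val are hypothetical placeholders for a full column-wise EPIC fit (e.g., built on epic_column above) and the per-split data:

import numpy as np

lam_grid = np.logspace(-2, 0, 20)            # hypothetical grid of candidate lambdas
scores = []
for lam in lam_grid:
    err = 0.0
    for k in range(10):
        Omega = fit_epic(X_train[k], lam)    # hypothetical: EPIC fit on split k
        d = Omega.shape[0]
        err += np.max(np.abs(Omega @ Sigma_val[k] - np.eye(d)))
    scores.append(err)
lam_best = lam_grid[int(np.argmin(scores))]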
To evaluate the performance in model selection, we calculate the ROC curve of each obtained regularization path. Figure 2 summarizes ROC curves of all methods averaged over 200 simulations.
We see that EPIC also outperforms the competing estimators throughout all settings.
6 Real Data Example
To illustrate the effectiveness of the proposed EPIC method, we adopt the breast cancer data,^2 which
is analyzed in Hess et al. (2006). The data set contains 133 subjects with 22,283 gene expression
levels. Among the 133 subjects, 99 have achieved residual disease (RD) and the remaining 34 have
achieved pathological complete response (pCR). Existing results have shown that the pCR subjects
have higher chance of cancer-free survival in the long term than the RD subjects. Thus we are
interested in studying the response states of patients (with RD or pCR) to neoadjuvant (preoperative)
chemotherapy.
^1 The marginal variances of the distribution vary from 0.5 to 2.
^2 Available at http://bioinformatics.mdanderson.org/.
Figure 2: Average ROC curves of different methods on the chain (a-c), Erdős-Rényi (d-f), and scale-free (g-i) models. We can see that EPIC uniformly outperforms the competing estimators throughout all settings.
We randomly divide the data into a training set of 83 RD and 29 pCR subjects, and a testing set of the
remaining 16 RD and 5 pCR subjects. Then by conducting a Wilcoxon test between two categories
for each gene, we further reduce the dimension by choosing the 113 most significant genes with the
smallest p-values. We assume that the gene expression data in each category is elliptically distributed,
and the two categories have the same covariance matrix $\Sigma$ but different means $\mu^{(k)}$, where k = 0
for RD and k = 1 for pCR. In Cai et al. (2011), the sample mean is adopted to estimate the $\mu^{(k)}$'s, and
CLIME.RC is adopted to estimate $\Omega = \Sigma^{-1}$. In contrast, we adopt the Catoni's M-estimator to
estimate the $\mu^{(k)}$'s, and EPIC is adopted to estimate $\Omega$. We classify a sample x to pCR if

$\left( x - \frac{\hat{\mu}^{(1)} + \hat{\mu}^{(0)}}{2} \right)^T \hat{\Omega} \left( \hat{\mu}^{(1)} - \hat{\mu}^{(0)} \right) \ge 0,$

and to RD otherwise. We use the testing set to evaluate the performance of CLIME.RC and EPIC.
For the tuning parameter selection, we use a 5-fold cross validation on the training data to pick $\lambda$
with the minimum classification error rate.
To evaluate the classification performance, we use the criteria of specificity, sensitivity, and Matthews
Correlation Coefficient (MCC). More specifically, let the $y_i$'s and $\hat{y}_i$'s be the true labels and predicted labels
Table 1: Quantitative comparison of EPIC, GLASSO.RC, CLIME.RC, and CLIME.SC on the chain,
Erdős-Rényi, and scale-free models. We see that EPIC outperforms the competing estimators
throughout all settings.

Spectral Norm: $\|\hat{\Omega} - \Omega\|_2$

Model         d     EPIC             GLASSO.RC        CLIME.RC         CLIME.SC
Chain         100   0.8405(0.1247)   1.1880(0.1003)   0.9337(0.5389)   3.2991(0.0512)
Chain         200   0.9147(0.1009)   1.3433(0.0870)   1.0716(0.4939)   3.7303(0.4477)
Chain         400   1.0058(0.1231)   1.4842(0.0760)   1.3567(0.3706)   3.8462(0.4827)
Erdős-Rényi   100   0.9846(0.0970)   1.6037(0.2289)   1.6885(0.1704)   3.7158(0.0663)
Erdős-Rényi   200   1.1944(0.0704)   1.6105(0.0680)   1.7507(0.0389)   3.5209(0.0419)
Erdős-Rényi   400   1.9010(0.0462)   2.2613(0.1133)   2.6884(0.5988)   4.1342(0.1079)
Scale-free    100   0.9779(0.1379)   1.6619(0.1553)   2.1327(0.0986)   3.4548(0.0513)
Scale-free    200   2.9278(0.3367)   4.0882(0.0962)   4.5820(0.0604)   8.8904(0.0575)
Scale-free    400   1.1816(0.1201)   1.8304(0.0710)   2.1191(0.0629)   3.4249(0.0849)

Frobenius Norm: $\|\hat{\Omega} - \Omega\|_F$

Model         d     EPIC             GLASSO.RC        CLIME.RC         CLIME.SC
Chain         100   3.3108(0.1521)   4.5664(0.1034)   3.4406(0.4319)   16.282(0.1346)
Chain         200   5.0309(0.1833)   7.2154(0.0831)   5.4776(0.2586)   23.403(0.2727)
Chain         400   7.5134(0.1205)   11.300(0.1851)   7.8357(1.2217)   33.504(0.1341)
Erdős-Rényi   100   3.5122(0.0796)   3.9600(0.1459)   4.4212(0.1065)   13.734(0.0629)
Erdős-Rényi   200   6.3000(0.0868)   7.3385(0.0994)   7.3501(0.1589)   20.151(0.1899)
Erdős-Rényi   400   11.489(0.0858)   12.594(0.1633)   13.026(0.4124)   30.030(0.1289)
Scale-free    100   2.6369(0.1125)   3.1154(0.1001)   3.1363(0.1014)   10.717(0.0844)
Scale-free    200   4.1280(0.1389)   7.7543(0.0934)   7.8916(0.0556)   16.370(0.1490)
Scale-free    400   5.3440(0.0511)   6.3741(0.0723)   5.7643(0.0625)   20.687(0.1373)
of the testing samples; we define

$\mathrm{Specificity} = \frac{TN}{TN + FP}, \qquad \mathrm{Sensitivity} = \frac{TP}{TP + FN},$

$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$

where

$TP = \sum_i I(\hat{y}_i = y_i = 1), \quad FP = \sum_i I(\hat{y}_i = 1, y_i = 0),$
$TN = \sum_i I(\hat{y}_i = y_i = 0), \quad FN = \sum_i I(\hat{y}_i = 0, y_i = 1).$
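These counts and scores in code (a small sketch):

import numpy as np

def classification_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    TP = np.sum((y_pred == 1) & (y_true == 1))
    FP = np.sum((y_pred == 1) & (y_true == 0))
    TN = np.sum((y_pred == 0) & (y_true == 0))
    FN = np.sum((y_pred == 0) & (y_true == 1))
    specificity = TN / (TN + FP)
    sensitivity = TP / (TP + FN)
    mcc = (TP * TN - FP * FN) / np.sqrt(
        (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    return specificity, sensitivity, mcc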
Table 2 summarizes the performance of both methods over 100 replications. We see that EPIC
outperforms CLIME.RC on the specificity. The overall classification performance measured by
MCC shows that EPIC has a 4% improvement over CLIME.RC.
Table 2: Quantitative comparison of EPIC and CLIME.RC in the breast cancer data analysis.

Method      Specificity       Sensitivity       MCC
CLIME.RC    0.7412(0.0131)    0.7911(0.0251)    0.4905(0.0288)
EPIC        0.7935(0.0211)    0.8087(0.0324)    0.5301(0.0375)
References
Banerjee, O., El Ghaoui, L. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research 9 485-516.
Belloni, A., Chernozhukov, V. and Wang, L. (2011). Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika 98 791-806.
Cai, T., Liu, W. and Luo, X. (2011). A constrained $\ell_1$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 594-607.
Cambanis, S., Huang, S. and Simons, G. (1981). On the theory of elliptically contoured distributions. Journal of Multivariate Analysis 11 368-385.
Catoni, O. (2012). Challenging the empirical mean and empirical variance: a deviation study. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 48 1148-1185.
Dempster, A. P. (1972). Covariance selection. Biometrics 157-175.
Fang, K.-T., Kotz, S. and Ng, K. W. (1990). Symmetric Multivariate and Related Distributions, Monographs on Statistics and Applied Probability, 36. London: Chapman and Hall Ltd. MR1071174.
Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432-441.
Gautier, E. and Tsybakov, A. B. (2011). High-dimensional instrumental variables regression and confidence sets. Tech. rep., ENSAE ParisTech.
Hess, K. R., Anderson, K., Symmans, W. F., Valero, V., Ibrahim, N., Mejia, J. A., Booser, D., Theriault, R. L., Buzdar, A. U., Dempsey, P. J. et al. (2006). Pharmacogenomic predictor of sensitivity to preoperative chemotherapy with paclitaxel and fluorouracil, doxorubicin, and cyclophosphamide in breast cancer. Journal of Clinical Oncology 24 4236-4244.
Kruskal, W. H. (1958). Ordinal measures of association. Journal of the American Statistical Association 53 814-861.
Lauritzen, S. L. (1996). Graphical models, vol. 17. Oxford University Press.
Liu, H., Han, F., Yuan, M., Lafferty, J. and Wasserman, L. (2012). High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics 40 2293-2326.
Liu, H. and Wang, L. (2012). Tiger: A tuning-insensitive approach for optimally estimating Gaussian graphical models. Tech. rep., Massachusetts Institute of Technology.
Liu, H., Wang, L. and Zhao, T. (2013). Multivariate regression with calibration. arXiv preprint arXiv:1305.2238.
Stoer, J., Bulirsch, R., Bartels, R., Gautschi, W. and Witzgall, C. (1993). Introduction to numerical analysis, vol. 2. Springer New York.
Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming. The Journal of Machine Learning Research 11 2261-2286.
Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19-35.
Zhao, T., Liu, H., Roeder, K., Lafferty, J. and Wasserman, L. (2012). The huge package for high-dimensional undirected graph estimation in R. The Journal of Machine Learning Research 9 1059-1062.
Zhao, T., Roeder, K. and Liu, H. (2013). Positive semidefinite rank-based correlation matrix estimation with application to semiparametric graph estimation. Journal of Computational and Graphical Statistics. To appear.
4,613 | 5,174 | A* Lasso for Learning a Sparse Bayesian Network
Structure for Continuous Variables
Seyoung Kim
Lane Center for Computational Biology
Carnegie Mellon University
Pittsburgh, PA 15213
sssykim@cs.cmu.edu
Jing Xiang
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
jingx@cs.cmu.edu
Abstract
We address the problem of learning a sparse Bayesian network structure for continuous variables in a high-dimensional space. The constraint that the estimated
Bayesian network structure must be a directed acyclic graph (DAG) makes the
problem challenging because of the huge search space of network structures. Most
previous methods were based on a two-stage approach that prunes the search
space in the first stage and then searches for a network structure satisfying the
DAG constraint in the second stage. Although this approach is effective in a lowdimensional setting, it is difficult to ensure that the correct network structure is not
pruned in the first stage in a high-dimensional setting. In this paper, we propose
a single-stage method, called A* lasso, that recovers the optimal sparse Bayesian
network structure by solving a single optimization problem with A* search algorithm that uses lasso in its scoring system. Our approach substantially improves
the computational efficiency of the well-known exact methods based on dynamic
programming. We also present a heuristic scheme that further improves the efficiency of A* lasso without significantly compromising the quality of solutions.
We demonstrate our approach on data simulated from benchmark Bayesian networks and real data.
1 Introduction
Bayesian networks have been popular tools for representing the probability distribution over a large
number of variables. However, learning a Bayesian network structure from data has been known
to be an NP-hard problem [1] because of the constraint that the network structure has to be a directed acyclic graph (DAG). Many of the exact methods that have been developed for recovering the
optimal structure are computationally expensive and require exponential computation time [15, 7].
Approximate methods based on heuristic search are more computationally efficient, but they recover
a suboptimal structure. In this paper, we address the problem of learning a Bayesian network structure for continuous variables in a high-dimensional space and propose an algorithm that recovers the
exact solution with less computation time than the previous exact algorithms, and with the flexibility
of further reducing computation time without a significant decrease in accuracy.
Many of the existing algorithms are based on scoring each candidate graph and finding a graph with
the best score, where the score decomposes for each variable given its parents in a DAG. Although
methods may differ in the scoring method that they use (e.g., MDL [9], BIC [14], and BDe [4]),
most of these algorithms, whether exact methods or heuristic search techniques, have a two-stage
learning process. In Stage 1, candidate parent sets for each node are identified while ignoring the
DAG constraint. Then, Stage 2 employs various algorithms to search for the best-scoring network
structure that satisfies the DAG constraint by limiting the search space to the candidate parent sets
from Stage 1. For Stage 1, methods such as sparse candidate [2], max-min parents children [17], and
total conditioning [11] algorithms have been previously proposed. For Stage 2, exact methods based
on dynamic programming [7, 15] and A* search algorithm [19] as well as inexact methods such as
heuristic search technique [17] and linear programming formulation [6] have been developed. These
approaches have been developed primarily for discrete variables, and regardless of whether exact or
inexact methods are used in Stage 2, Stage 1 involved exponential computation time and space.
For continuous variables, L1 -regularized Markov blanket (L1MB) [13] was proposed as a two-stage
method that uses lasso to select candidate parents for each variable in Stage 1 and performs heuristic
search for DAG structure and variable ordering in Stage 2. Although a two-stage approach can
reduce the search space by pruning candidate parent sets in Stage 1, Huang et al. [5] observed that
applying lasso in Stage 1 as in L1MB is likely to miss the true parents in a high-dimensional setting,
thereby limiting the quality of the solution in Stage 2. They proposed the sparse Bayesian network
(SBN) algorithm that formulates the problem of Bayesian network structure learning as a singlestage optimization problem and transforms it into a lasso-type optimization to obtain an approximate
solution. Then, they applied a heuristic search to refine the solution as a post-processing step.
In this paper, we propose a new algorithm, called A* lasso, for learning a sparse Bayesian network structure with continuous variables in high-dimensional space. Our method is a single-stage
algorithm that finds the optimal network structure with a sparse set of parents while ensuring the
DAG constraint is satisfied. We first show that a lasso-based scoring method can be incorporated
within dynamic programming (DP). While previous approaches based on DP required identifying
the exponential number of candidate parent sets and their scores for each variable in Stage 1 before
applying DP in Stage 2 [7, 15], our approach effectively combines the score computation in Stage
1 within Stage 2 via lasso optimization. Then, we present A* lasso which significantly prunes the
search space of DP by incorporating the A* search algorithm [12], while guaranteeing the optimality
of the solution. Since in practice, A* search can still be expensive compared to heuristic methods,
we explore heuristic schemes that further limit the search space of A* lasso. We demonstrate in
our experiments that this heuristic approach can substantially improve the computation time without
significantly compromising the quality of the solution, especially on large Bayesian networks.
2 Background on Bayesian Network Structure Learning
A Bayesian network is a probabilistic graphical model defined over a DAG G with a set of p = |V| nodes V = {v1, ..., vp}, where each node vj is associated with a random variable Xj [8]. The probability model associated with G in a Bayesian network factorizes as

p(X_1, \ldots, X_p) = \prod_{j=1}^{p} p(X_j \mid \mathrm{Pa}(X_j)),

where p(Xj | Pa(Xj)) is the conditional probability distribution for Xj given its parents Pa(Xj), with directed edges from each node in Pa(Xj) to Xj in G. We assume continuous random variables and use a linear regression model for the conditional probability distribution of each node,

X_j = \mathrm{Pa}(X_j)^T \beta_j + \epsilon,

where \beta_j = \{\beta_{jk} \text{ for } X_k \in \mathrm{Pa}(X_j)\} is the vector of unknown parameters to be estimated from data and \epsilon is the noise, distributed as N(0, 1).
Given a dataset X = [x1, ..., xp], where xj is a vector of n observations for random variable Xj, our goal is to estimate the graph structure G and the parameters \beta_j jointly. We formulate this problem as that of obtaining a sparse estimate of the \beta_j's, under the constraint that the overall graph structure G should not contain directed cycles. Then, the nonzero elements of the \beta_j's indicate the presence of edges in G. We obtain an estimate of the Bayesian network structure and parameters by minimizing the negative log likelihood of the data with a sparsity-enforcing L1 penalty:

\min_{\beta_1, \ldots, \beta_p} \sum_{j=1}^{p} \| x_j - x_{-j} \beta_j \|_2^2 + \lambda \sum_{j=1}^{p} \| \beta_j \|_1 \quad \text{s.t. } G \in \mathrm{DAG}, \qquad (1)
where x_{-j} represents all columns of X excluding xj, assuming all other variables are candidate parents of node vj. Given the estimate of \beta_j, the set of parents for node vj can be found as the support of \beta_j, S(\beta_j) = \{v_i \mid \beta_{ji} \neq 0\}. The \lambda is the regularization parameter that determines the amount of sparsity in the \beta_j's and can be determined by cross-validation. We notice that if the acyclicity constraint is ignored, Equation (1) decomposes into individual lasso estimations for each node:

\mathrm{LassoScore}(v_j \mid V \backslash v_j) = \min_{\beta_j} \| x_j - x_{-j} \beta_j \|_2^2 + \lambda \| \beta_j \|_1,
where V\vj represents the set of all nodes in V excluding vj. The above lasso optimization problem can be solved efficiently with the shooting algorithm [3]. However, the main challenge in optimizing Equation (1) arises from ensuring that the \beta_j's satisfy the DAG constraint.
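As a point of reference, a minimal coordinate-descent solver for this per-node lasso problem might look as follows. This is an illustrative sketch of our own, not the authors' code; the function name, iteration cap, and tolerance are arbitrary choices.

```python
import numpy as np

def shooting_lasso(x_j, X_rest, lam, n_iters=200, tol=1e-6):
    """Coordinate-descent ("shooting"-style) solver for
    min_b ||x_j - X_rest @ b||_2^2 + lam * ||b||_1.
    Returns (objective value, b); the objective value is the lasso score."""
    n, p = X_rest.shape
    b = np.zeros(p)
    col_sq = (X_rest ** 2).sum(axis=0)   # squared column norms, reused
    r = x_j - X_rest @ b                 # full residual
    for _ in range(n_iters):
        b_old = b.copy()
        for k in range(p):
            if col_sq[k] == 0.0:
                continue
            r += X_rest[:, k] * b[k]     # residual without coordinate k
            rho = X_rest[:, k] @ r
            # soft-threshold at lam/2 (the 1/2 comes from the squared loss)
            b[k] = np.sign(rho) * max(abs(rho) - lam / 2.0, 0.0) / col_sq[k]
            r -= X_rest[:, k] * b[k]     # restore full residual
        if np.max(np.abs(b - b_old)) < tol:
            break
    return r @ r + lam * np.abs(b).sum(), b
```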
3 A* Lasso for Bayesian Network Structure Learning
3.1 Dynamic Programming with Lasso
The problem of learning a Bayesian network structure that satisfies the constraint of no directed cycles can be cast as that of learning an optimal ordering of variables [8]. Once the optimal variable ordering is given, the constraint of no directed cycles can be trivially enforced by constraining the parents of each variable in the local conditional probability distribution to be a subset of the nodes that precede the given node in the ordering. We let \pi^V = [\pi^V_1, \ldots, \pi^V_{|V|}] denote an ordering of the nodes in V, where \pi^V_j indicates the node v \in V in the jth position of the ordering, and \pi^V_{\prec v_j} denote the set of nodes in V that precede node vj in ordering \pi^V.

[Figure 1: Search space of variable orderings for three variables V = {v1, v2, v3}: the subset lattice with states {}, {v1}, {v2}, {v3}, {v1,v2}, {v1,v3}, {v2,v3}, {v1,v2,v3}.]

Algorithms based on DP have been developed to learn the optimal variable ordering for Bayesian networks [16]. These approaches are based on the observation that the score of the optimal ordering of the full set of nodes V can be decomposed into (a) the optimal score for the first node in the ordering, given a choice of the first node, and (b) the score of the optimal ordering of the nodes excluding the first node. The optimal variable ordering can be constructed by recursively applying this decomposition to select the first node in the ordering and to find the optimal ordering of the set of remaining nodes U \subseteq V. This recursion is given as follows, with an initial call of the recursion with U = V:

\mathrm{OptScore}(U) = \min_{v_j \in U} \; \mathrm{OptScore}(U \backslash v_j) + \mathrm{BestScore}(v_j \mid V \backslash U) \qquad (2)

\pi^U_1 = \operatorname*{argmin}_{v_j \in U} \; \mathrm{OptScore}(U \backslash v_j) + \mathrm{BestScore}(v_j \mid V \backslash U), \qquad (3)
where BestScore(vj |V \U ) is the optimal score of vj under the optimal choice of parents from V \U .
In order to obtain BestScore(vj |V \U ) in Equations (2) and (3), for the case of discrete variables,
many previous approaches enumerated all possible subsets of V as candidate sets of parents for node
vj to precompute BestScore(vj |V \U ) in Stage 1 before applying DP in Stage 2 [7, 15]. While this
approach may perform well in a low-dimensional setting, in a high-dimensional setting, a two-stage
method is likely to miss the true parent sets in Stage 1, which in turn affects the performance of Stage
2 [5]. In this paper, we consider the high-dimensional setting and present a single-stage method that
applies lasso to obtain BestScore(vj |V \U ) within DP as follows:
\mathrm{BestScore}(v_j \mid V \backslash U) = \mathrm{LassoScore}(v_j \mid V \backslash U) = \min_{\beta_j, \, S(\beta_j) \subseteq V \backslash U} \| x_j - x_{-j} \beta_j \|_2^2 + \lambda \| \beta_j \|_1.
The constraint S(\beta_j) \subseteq V\backslash U in the above lasso optimization can be trivially maintained by setting the \beta_{jk} for v_k \in U to 0 and optimizing only over the other \beta_{jk}'s. When applying the recursion in Equations (2) and (3), DP takes advantage of the overlapping subproblems to prune the search space of orderings, since the problem of computing OptScore(U) for U \subset V can appear as a subproblem
of scoring orderings of any larger subsets of V that contain U .
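To make the recursion concrete, the following is a hypothetical prefix-based restatement of Equations (2) and (3) as a bitmask DP. The `lasso_score(v, prefix_mask)` oracle (any lasso solver with candidate parents restricted to the already-ordered prefix) is an assumption of this sketch, and the code deliberately visits all 2^p states, which is exactly the cost the A* formulation below attacks.

```python
import numpy as np

def dp_optimal_ordering(p, lasso_score):
    """Prefix-based restatement of the DP in Eqs. (2)-(3).
    lasso_score(v, prefix_mask) -> BestScore(v | nodes in prefix_mask),
    i.e. the lasso objective for node v with candidate parents restricted
    to the nodes already placed in the ordering. Visits all 2^p subsets."""
    full = (1 << p) - 1
    g = np.full(1 << p, np.inf)     # g[Q]: best score of any ordering of Q
    g[0] = 0.0
    back = np.full(1 << p, -1, dtype=np.int64)  # last node used to reach Q
    for Q in range(1 << p):         # adding a bit always increases Q, so
        if not np.isfinite(g[Q]):   # plain increasing order is valid
            continue
        for v in range(p):
            if Q >> v & 1:
                continue            # v is already in the prefix
            Q2 = Q | (1 << v)
            s = g[Q] + lasso_score(v, Q)
            if s < g[Q2]:
                g[Q2], back[Q2] = s, v
    order, Q = [], full             # recover ordering from back-pointers
    while Q:
        v = int(back[Q])
        order.append(v)
        Q ^= 1 << v
    return g[full], order[::-1]
```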
The problem of finding the optimal variable ordering can be viewed as that of finding the shortest path from the start state to the goal state in a search space given as a subset lattice. The search space consists of a set of states, each of which is associated with one of the 2^{|V|} possible subsets of nodes in V. The start state is the empty set {} and the goal state is the set of all variables V. A valid move in this search space is defined from a state for subset Qs to another state for subset Qs' only if Qs' contains one additional node relative to Qs. Each move to the next state corresponds to adding a node at the end of the ordering of the nodes in the previous state. The cost of such a move is given by BestScore(v|Qs), where {v} = Qs'\Qs. Each path from the start state to the goal state gives one possible ordering of nodes. Figure 1 illustrates the search space, where each state is associated with a Qs. DP finds the shortest path from the start state to the goal state that corresponds to the optimal variable ordering by considering all possible paths in this search space and visiting all 2^{|V|} states.
3.2 A* Lasso for Pruning the Search Space
As discussed in the previous section, DP considers all 2^{|V|} states in the subset lattice to find the
optimal variable ordering. Thus, it is not sufficiently efficient to be practical for problems with
more than 20 nodes. On the other hand, a greedy algorithm is computationally efficient because
it explores a single variable ordering by greedily selecting the most promising next state based on
BestScore(v|Qs ), but it returns a suboptimal solution. In this paper, we propose A* lasso that
incorporates the A* search algorithm [12] to construct the optimal variable ordering in the search
space of the subset lattice. We show that this strategy can significantly prune the search space
compared to DP, while maintaining the optimality of the solution.
When selecting the next move in the process of constructing a path in the search space, instead of
greedily selecting the move, A* search also accounts for the estimate of the future cost given by a
heuristic function h(Qs ) that will be incurred to reach the goal state from the candidate next state.
Although the exact future cost is not known until A* search constructs the full path by reaching
the goal state, a reasonable estimate of the future cost can be obtained by ignoring the directed
acyclicity constraint. It is well-known that A* search is guaranteed to find the shortest path if the
heuristic function h(Qs ) is admissible [12], meaning that h(Qs ) is always an underestimate of the
true cost of reaching the goal state. Below, we describe an admissible heuristic for A* lasso.
While exploring the search space, A* search algorithm assigns a score f (Qs ) to each state and
its corresponding subset Qs of variables for which the ordering has been determined. A* search
algorithm computes this score f (Qs ) as the sum of the cost g(Qs ) that has been incurred so far to
reach the current state from the start state and an estimate of the cost h(Qs ) that will be incurred to
reach the goal state from the current state:
f (Qs ) = g(Qs ) + h(Qs ).
(4)
More specifically, given the ordering \pi^{Q_s} of variables in Qs that has been constructed along the path from the start state to the state for Qs, the cost that has been incurred so far is defined as

g(Q_s) = \sum_{v_j \in Q_s} \mathrm{LassoScore}(v_j \mid \pi^{Q_s}_{\prec v_j}) \qquad (5)

and the heuristic function for the estimate of the future cost to reach the goal state is defined as:

h(Q_s) = \sum_{v_j \in V \backslash Q_s} \mathrm{LassoScore}(v_j \mid V \backslash v_j) \qquad (6)
Note that the heuristic function is admissible, or an underestimate of the true cost, since the constraint of no directed cycles is ignored and each variable in V \Qs is free to choose any variables in
V as its parents, which lowers the lasso objective value.
When the search space is a graph where multiple paths can reach the same state, we can further improve efficiency if the heuristic function has the property of consistency in addition to admissibility. A consistent heuristic always satisfies h(Qs) \le h(Qs') + LassoScore(vk | Qs), where LassoScore(vk | Qs) is the cost of moving from state Qs to state Qs' with {vk} = Qs'\Qs. Consistency ensures that the first path found by A* search to reach a given state is always the shortest path to that state [12]. This allows us to prune the search when we reach the same state via a different path later in the search. The following proposition states that our heuristic function is consistent.
Proposition 1. The heuristic in Equation (6) is consistent.

Proof. For any successor state Qs' of Qs, let {vk} = Qs'\Qs.

h(Q_s) = \sum_{v_j \in V \backslash Q_s} \mathrm{LassoScore}(v_j \mid V \backslash v_j)
       = \sum_{v_j \in V \backslash Q_s, \, v_j \neq v_k} \mathrm{LassoScore}(v_j \mid V \backslash v_j) + \mathrm{LassoScore}(v_k \mid V \backslash v_k)
       \le h(Q_{s'}) + \mathrm{LassoScore}(v_k \mid Q_s),

where LassoScore(vk | Qs) is the true cost of moving from state Qs to Qs'. The inequality holds because vk has fewer parents to choose from in LassoScore(vk | Qs) than in LassoScore(vk | V\vk). Thus, our heuristic in Equation (6) is consistent.
Input: X, V, \lambda
Output: Optimal variable ordering \pi^V
Initialize OPEN to an empty priority queue;
Initialize CLOSED to an empty set;
Compute LassoScore(vj | V\vj) for all vj \in V;
OPEN.insert((Qs = {}, f(Qs) = h({}), g(Qs) = 0, \pi^{Qs} = [ ]));
while true do
    (Qs, f(Qs), g(Qs), \pi^{Qs}) \leftarrow OPEN.pop();
    if h(Qs) = 0 then
        return \pi^V \leftarrow \pi^{Qs};
    end
    foreach v \in V\Qs do
        Qs' \leftarrow Qs \cup {v};
        if Qs' \notin CLOSED then
            Compute LassoScore(v | Qs) with the lasso shooting algorithm;
            g(Qs') \leftarrow g(Qs) + LassoScore(v | Qs);
            h(Qs') \leftarrow h(Qs) - LassoScore(v | V\v);
            f(Qs') \leftarrow g(Qs') + h(Qs');
            \pi^{Qs'} \leftarrow [\pi^{Qs}, v];
            OPEN.insert((Qs', f(Qs'), g(Qs'), \pi^{Qs'}));
            CLOSED \leftarrow CLOSED \cup {Qs'};
        end
    end
end
Algorithm 1: A* lasso for learning a Bayesian network structure
Given a consistent heuristic, many paths that go through the same state can be pruned by maintaining
an OPEN list and a CLOSED list during A* search. In practice, the OPEN list can be implemented
with a priority queue and the CLOSED list can be implemented with a hash table. The OPEN list is
a priority queue that maintains all the intermediate results (Qs, f(Qs), g(Qs), \pi^{Qs}) for partial constructions of the variable ordering up to Qs at the frontier of the search, sorted according to the score f(Qs). During search, A* lasso pops from the OPEN list the partial construction of the ordering with the lowest score f(Qs), visits the successor states by adding another node to the ordering \pi^{Qs},
and queues the results onto the OPEN list. Any state that has been popped by A* lasso is placed
in the CLOSED list. The states that have been placed in the CLOSED list are not considered again,
even if A* search reaches these states through different paths later in the search.
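A minimal sketch of this OPEN/CLOSED bookkeeping in Python, assuming the same hypothetical `lasso_score(v, prefix_mask)` oracle as in the DP sketch and precomputed unconstrained scores `h0[v] = LassoScore(v | V\v)`; it uses `heapq` as the priority queue, with a lazy-deletion variant of the CLOSED check.

```python
import heapq

def astar_lasso(p, lasso_score, h0):
    """A* search over the subset lattice, mirroring Algorithm 1.
    h0[v] = LassoScore(v | V \\ v), so h(Q) is the sum of h0 over nodes
    not in Q -- the admissible, consistent heuristic of Eq. (6)."""
    h_full = sum(h0)
    open_heap = [(h_full, 0.0, 0, ())]  # entries: (f, g, prefix_mask, order)
    closed = set()
    while open_heap:
        f, g, Q, order = heapq.heappop(open_heap)
        if Q == (1 << p) - 1:           # h(Q) = 0: goal state reached
            return g, list(order)
        if Q in closed:                 # a cheaper path already expanded Q
            continue
        closed.add(Q)
        for v in range(p):
            if Q >> v & 1:
                continue
            Q2 = Q | (1 << v)
            if Q2 in closed:
                continue
            g2 = g + lasso_score(v, Q)  # exact edge cost via a lasso solve
            h2 = (f - g) - h0[v]        # h(Q2) = h(Q) - h0[v]
            heapq.heappush(open_heap, (g2 + h2, g2, Q2, order + (v,)))
    raise RuntimeError("search space exhausted")
```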
The full algorithm for A* lasso is given in Algorithm 1. As in DP with lasso, A* lasso is a single-stage algorithm that solves lasso within A* search. Every time A* lasso moves from state Qs to the next state Qs' in the search space, LassoScore(vj | Qs) for {vj} = Qs'\Qs is computed with the shooting algorithm and added to g(Qs) to obtain g(Qs'). The heuristic score h(Qs') can be precomputed from LassoScore(vj | V\vj) for all vj \in V for a simple look-up during A* search.
3.3 Heuristic Schemes for A* Lasso to Improve Scalability
Although A* lasso substantially prunes the search space compared to DP, it is not sufficiently efficient for large graphs, because it still considers a large number of states in the exponentially large
search space. One simple strategy for further pruning the search space would be to limit the size of
the priority queue in the OPEN list, forcing A* lasso to discard less promising intermediate results
first. In this case, limiting the queue size to one is equivalent to a greedy algorithm with a scoring
function in Equation (4). In our experiments, we found that such a naive strategy substantially reduced the quality of solutions because the best-scoring intermediate results tend to be the results at
the early stage of the exploration. They are at the shallow part of the search space near the start state
because the admissible heuristic underestimates the true cost.
Instead, given a limited queue size, we propose to distribute the intermediate results to be discarded
across different depths/layers of the search space. For example, given the depth of the search space
|V|, if we need to discard k intermediate results, we discard k/|V| intermediate results at each depth. In our experiments, we found that this heuristic scheme substantially improves the computation time of A* lasso with only a small reduction in the quality of the solution. We also considered other strategies, such as inflating heuristics [10] and pruning edges in preprocessing with lasso, but such strategies substantially reduced the quality of solutions.
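The depth-balanced discarding described above could be realized as follows. This is our reading of the scheme (the paper gives no pseudocode for it), and the entry layout matches the A* sketch earlier.

```python
import heapq
from collections import defaultdict

def prune_open(open_heap, p, queue_limit):
    """Keep at most queue_limit entries overall, discarding the worst
    entries evenly across depths (prefix sizes) rather than globally,
    so shallow low-f states cannot crowd out deep partial orderings."""
    per_depth = max(1, queue_limit // p)
    buckets = defaultdict(list)
    for entry in open_heap:
        f, g, Q, order = entry
        buckets[bin(Q).count("1")].append(entry)   # depth = |prefix|
    kept = []
    for depth, entries in buckets.items():
        entries.sort(key=lambda e: e[0])           # best f-scores first
        kept.extend(entries[:per_depth])
    heapq.heapify(kept)
    return kept
```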
Table 1: Comparison of computation time of the different methods. Entries are computation time with the number of states visited in parentheses; a dash marks settings that were not run or are identical to full A* lasso (see text).

Dataset (Nodes)  | DP            | A* lasso       | A* Qlimit 1000 | A* Qlimit 200 | A* Qlimit 100 | A* Qlimit 5 | L1MB  | SBN
Dsep (6)         | 0.20 (64)     | 0.14 (15)      | -              | -             | -             | 0.17 (11)   | 2.65  | 8.76
Asia (8)         | 1.07 (256)    | 0.26 (34)      | -              | -             | -             | 0.22 (12)   | 2.79  | 8.9
Bowling (9)      | 2.42 (512)    | 0.48 (94)      | -              | -             | -             | 0.23 (13)   | 2.85  | 8.75
Inversetree (11) | 8.44 (2048)   | 1.68 (410)     | -              | 1.8 (423)     | 1.16 (248)    | 0.2 (16)    | 3.03  | 8.56
Rain (14)        | 1216 (1.60e4) | 76.64 (2938)   | 64.38 (1811)   | 13.97 (461)   | 7.88 (270)    | 1.67 (17)   | 12.26 | 10.19
Cloud (16)       | 1.6e4 (6.6e4) | 137.36 (2660)  | 108.39 (1945)  | 26.16 (526)   | 9.92 (244)    | 2.14 (19)   | 4.72  | 14.56
Funnel (18)      | 4.2e4 (2.6e5) | 1527.0 (2.3e4) | 88.87 (2310)   | 25.19 (513)   | 11.53 (248)   | 2.73 (21)   | 4.76  | 10.08
Galaxy (20)      | 1.3e5 (1.0e6) | 2.40e4 (8.2e4) | 110.05 (3093)  | 27.59 (642)   | 12.02 (323)   | 3.03 (23)   | 6.59  | 11.0
Factor (27)      | -             | -              | 1389.7 (3912)  | 125.91 (801)  | 59.92 (397)   | 3.96 (30)   | 9.04  | 13.91
Insurance (27)   | -             | -              | 2874.2 (3448)  | 442.65 (720)  | 202.9 (395)   | 16.31 (33)  | 10.96 | 29.45
Water (32)       | -             | -              | 2397.0 (3442)  | 301.67 (687)  | 130.71 (343)  | 12.14 (38)  | 32.73 | 14.96
Mildew (35)      | -             | -              | 3928.8 (3737)  | 802.76 (715)  | 339.04 (368)  | 29.3 (36)   | 15.25 | 116.33
Alarm (37)       | -             | -              | 2732.3 (3426)  | 384.87 (738)  | 158.0 (378)   | 12.42 (42)  | 7.91  | 39.78
Barley (48)      | -             | -              | 10766.0 (4072) | 1869.4 (807)  | 913.46 (430)  | 109.14 (52) | 23.25 | 483.33
Hailfinder (56)  | -             | -              | 9752.0 (3939)  | 2580.5 (816)  | 1058.3 (390)  | 112.61 (57) | 44.36 | 826.41

Table 2: A* lasso computation time (number of states visited) under different edge strengths, with columns giving the (l, u) range of \beta_{jk} \sim \pm\mathrm{Uniform}[l, u].

Dataset (Nodes)  | (1.2, 1.5)     | (1, 1.2)        | (0.8, 1)
Dsep (6)         | 0.14 (15)      | 0.14 (16)       | 0.17 (30)
Asia (8)         | 0.26 (34)      | 0.23 (37)       | 0.29 (59)
Bowling (9)      | 0.48 (94)      | 0.49 (103)      | 0.54 (128)
Inversetree (11) | 1.68 (410)     | 2.09 (561)      | 2.25 (620)
Rain (14)        | 76.64 (2938)   | 66.93 (2959)    | 97.26 (4069)
Cloud (16)       | 137.36 (2660)  | 229.12 (7805)   | 227.43 (8858)
Funnel (18)      | 1526.7 (22930) | 2060.2 (33271)  | 3744.4 (40644)
Galaxy (20)      | 24040 (82132)  | 66710 (168492)  | 256490 (220821)
4 Experiments
4.1 Simulation Study
We perform simulation studies in order to evaluate the accuracy of the estimated structures and
measure the computation time of our method. We created several small networks under 20 nodes and
obtained the structure of several benchmark networks between 27 and 56 nodes from the Bayesian
Network Repository (the left-most column in Table 1). In addition, we used the tiling technique [18]
to generate two networks of approximately 300 nodes so that we could evaluate our method on
larger graphs. Given the Bayesian network structures, we set the parameters \beta_j for each conditional probability distribution of node vj such that \beta_{jk} \sim \pm\mathrm{Uniform}[l, u] for predetermined values of u and l if node vk is a parent of node vj, and \beta_{jk} = 0 otherwise. We then generated data from each Bayesian network by forward sampling with noise \epsilon \sim N(0, 1) in the regression model, given the true variable ordering. All data were mean-centered.
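This simulation setup can be reproduced with a short forward-sampling routine. In the sketch below, the toy chain, seed, and sample size are arbitrary choices, while the \pm Uniform edge weights, unit-variance noise, and mean-centering follow the description above.

```python
import numpy as np

def sample_network(order, parents, n, low, high, rng):
    """Forward-sample n observations from a linear Gaussian Bayesian
    network. order: nodes in topological order; parents[j]: list of
    parents of node j; beta_jk ~ +/- Uniform[low, high]; noise ~ N(0, 1)."""
    p = len(order)
    X = np.zeros((n, p))
    for j in order:
        pa = parents[j]
        beta = rng.uniform(low, high, len(pa)) * rng.choice([-1, 1], len(pa))
        X[:, j] = X[:, pa] @ beta + rng.standard_normal(n)
    X -= X.mean(axis=0)          # mean-center, as in the experiments
    return X

rng = np.random.default_rng(0)
# toy chain v0 -> v1 -> v2 with beta ~ +/- Uniform[1.2, 1.5]
X = sample_network([0, 1, 2], {0: [], 1: [0], 2: [1]}, n=200,
                   low=1.2, high=1.5, rng=rng)
```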
We compare our method to several other methods including DP with lasso for an exact method,
L1MB for heuristic search, and SBN for an optimization-based approximate method. We downloaded the software implementations of L1MB and SBN from the authors' websites. For L1MB, we increased the authors' recommended number of evaluations from 2500 to 10 000 in the Stage 2 heuristic
search for all networks except the two larger networks of around 300 nodes (Alarm 2 and Hailfinder
2), where we used two different settings of 50 000 and 100 000 evaluations. We also evaluated A*
lasso with the heuristic scheme with the queue sizes of 5, 100, 200, and 1000.
DP, A* lasso, and A* lasso with a limited queue size require a selection of the regularization parameter \lambda by cross-validation. In order to determine the optimal value of \lambda, for each candidate value of \lambda we trained a model on a training set, performed an ordinary least squares re-estimation of the non-zero elements of \beta_j to remove the bias introduced by the L1 penalty, and computed prediction errors on the validation set. Then, we selected the value of \lambda that gives the smallest prediction error as the optimal \lambda. We used a training set of 200 samples for relatively small networks with under
60 nodes and a training set of 500 samples for the two large networks with around 300 nodes. We used a validation set of 500 samples. For L1MB and SBN, we used a similar strategy to select the regularization parameters, mainly following the strategy suggested by the authors and their software implementation.

[Figure 2: Precision/recall curves for the recovery of skeletons of benchmark Bayesian networks. Panels: Factors, Alarm, Barley, Hailfinder, Insurance, Mildew, Water, Alarm 2, and Hailfinder 2; methods compared: L1MB, SBN, and A* lasso with queue limits 100, 200, and 1000 (for the two largest networks, L1MB with 5e4/1e5 evaluations and A* lasso with queue limits 5 and 100).]

[Figure 3: Precision/recall curves for the recovery of v-structures of benchmark Bayesian networks; same panels and methods as Figure 2.]
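The \lambda selection with OLS debiasing described earlier in this subsection could look like the sketch below. Here `fit_lasso_network` stands in for any per-node lasso network fit (an assumed interface, not the authors' code), returning a (parents, coefficients) pair for each node.

```python
import numpy as np

def select_lambda(X_tr, X_val, lambdas, fit_lasso_network):
    """Fit on training data, OLS-refit the nonzero coefficients of each
    node to remove the L1 bias, and pick the lambda with the smallest
    validation prediction error."""
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        model = fit_lasso_network(X_tr, lam)   # -> [(parents, beta), ...]
        err = 0.0
        for j, (pa, _) in enumerate(model):
            if len(pa) == 0:
                err += (X_val[:, j] ** 2).sum()
                continue
            # OLS refit on the selected support (debiasing step)
            beta, *_ = np.linalg.lstsq(X_tr[:, pa], X_tr[:, j], rcond=None)
            resid = X_val[:, j] - X_val[:, pa] @ beta
            err += (resid ** 2).sum()
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```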
We present the computation time for the different methods in Table 1. For DP, A* lasso, and A* lasso
with limited queue sizes, we also record the number of states visited in the search space in parentheses in Table 1. All methods were implemented in Matlab and were run on computers with 2.4
GHz processors. We used a dataset generated from a true model with \beta_{jk} \sim \pm\mathrm{Uniform}[1.2, 1.5].
It can be seen from Table 1 that DP considers all 2^{|V|} possible states in the search space, which grows
exponentially with the number of nodes. It is clear that A* lasso visits significantly fewer states
than DP, visiting about 10% of the number of states in DP for the funnel and galaxy networks. We
were unable to obtain the computation time for A* lasso and DP for some of the larger graphs in
Table 1 as they required significantly more time. Limiting the size of the queue in A* lasso reduces
both the computation time and the number of states visited. For smaller graphs, we do not report the
computation time for A* lasso with limited queue size, since it is identical to the full A* lasso. We
notice that the computation time for A* lasso with a small queue of 5 or 100 is comparable to that
of L1MB and SBN.
In general, we found that the extent of pruning of the search space by A* lasso compared to DP
depends on the strengths of the edges (the \beta_j values) in the true model. We applied DP and A* lasso to datasets of 200 samples generated from each of the networks under each of the three settings for the true edge strengths: \pm\mathrm{Uniform}[1.2, 1.5], \pm\mathrm{Uniform}[1, 1.2], and \pm\mathrm{Uniform}[0.8, 1]. As can be
seen from the computation time and the number of states visited by DP and A* lasso in Table 2, as
the strengths of edges increase, the number of states visited by A* lasso and the computation time
tend to decrease. The results in Table 2 indicate that the efficiency of A* lasso is affected by the
signal-to-noise ratio.
In order to evaluate the accuracy of the Bayesian network structures recovered by each method, we make use of the fact that two Bayesian network structures are indistinguishable if they belong to the same equivalence class, where an equivalence class is defined as the set of networks with the same skeleton and v-structures. The skeleton of a Bayesian network is defined as the edge connectivities ignoring edge directions, and a v-structure is defined as the local graph structure over three variables in which two variables point to the third (i.e., A \rightarrow B \leftarrow C). We evaluated the performance of the different methods by comparing the estimated network structure with the true network structure in terms of skeleton and v-structures and computing the precision and recall.

The precision/recall curves for the skeleton and v-structures of the models estimated by the different methods are shown in Figures 2 and 3, respectively. Each curve was obtained as an average over the results from 30 different datasets for the two large graphs (Alarm 2 and Hailfinder 2) and from 50 different datasets for all the other Bayesian networks. All data were simulated under the setting \beta_{jk} \sim \pm\mathrm{Uniform}[0.4, 0.7]. For the benchmark Bayesian networks, we used A* lasso with different queue sizes, including 100, 200, and 1000, whereas for the two large networks (Alarm 2 and Hailfinder 2) that require more computation time, we used A* lasso with queue sizes of 5 and 100. As can be seen in Figures 2 and 3, all methods perform relatively well at identifying the true skeletons, but find it significantly more challenging to recover the true v-structures. We find that although increasing the size of the queue in A* lasso generally improves the performance, even with smaller queue sizes A* lasso outperforms L1MB and SBN in most of the networks. While A* lasso with a limited queue size performs consistently well on smaller networks, it significantly outperforms the other methods on the larger graphs such as Alarm 2 and Hailfinder 2, even with a queue size of 5 and even when the number of evaluations for L1MB is increased to 50 000 and 100 000. This demonstrates that while limiting the queue size in A* lasso does not guarantee the optimality of the solution, it reduces the computation time dramatically without substantially compromising the quality of the solution. In addition, we compare the performance of the different methods in terms of prediction errors on independent test datasets in Figure 4. We find that the prediction errors of A* lasso are consistently lower even with a limited queue size.

[Figure 4: Prediction errors for benchmark Bayesian networks. The x-axis labels indicate the benchmark networks: 1: Factors, 2: Alarm, 3: Barley, 4: Hailfinder, 5: Insurance, 6: Mildew, 7: Water, 8: Alarm 2, and 9: Hailfinder 2.]

4.2 Analysis of S&P Stock Data

We applied the methods to the daily stock price data of the S&P 500 companies to learn a Bayesian network that models the dependencies in prices among different stocks. We obtained the stock prices of 125 companies over 1500 time points between Jan 3, 2007 and Dec 17, 2012. We estimated a Bayesian network using the first 1000 time points with the different methods, and then computed prediction errors on the last 500 time points. For L1MB, we used two settings for the number of evaluations, 50 000 and 100 000. We applied A* lasso with different queue limits of 5, 100, and 200. The prediction accuracies for the various methods are shown in Figure 5. Our method obtains lower prediction errors than the other methods, even with the smaller queue sizes.

[Figure 5: Prediction errors for S&P stock price data, comparing L1MB (5e4 and 1e5 evaluations), SBN, and A* lasso with queue limits 5, 100, and 200.]

5 Conclusions
In this paper, we considered the problem of learning a Bayesian network structure and proposed
A* lasso that guarantees the optimality of the solution while reducing the computational time of
the well-known exact methods based on DP. We proposed a simple heuristic scheme that further
improves the computation time but does not significantly reduce the quality of the solution.
Acknowledgments
This material is based upon work supported by an NSF CAREER Award No. MCB-1149885, Sloan
Research Fellowship, and Okawa Foundation Research Grant to SK and by a NSERC PGS-D to JX.
References
[1] David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Learning from Data, pages 121-130. Springer, 1996.
[2] Nir Friedman, Iftach Nachman, and Dana Pe'er. Learning Bayesian network structure from massive datasets: the "Sparse Candidate" algorithm. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 206-215. Morgan Kaufmann Publishers Inc., 1999.
[3] Wenjiang J Fu. Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical Statistics, 7(3):397-416, 1998.
[4] David Heckerman, Dan Geiger, and David M Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197-243, 1995.
[5] Shuai Huang, Jing Li, Jieping Ye, Adam Fleisher, Kewei Chen, Teresa Wu, and Eric Reiman. A sparse structure learning algorithm for Gaussian Bayesian network identification from high-dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6):1328-1342, 2013.
[6] Tommi Jaakkola, David Sontag, Amir Globerson, and Marina Meila. Learning Bayesian network structure using LP relaxations. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[7] Mikko Koivisto and Kismat Sood. Exact Bayesian structure discovery in Bayesian networks. Journal of Machine Learning Research, 5:549-573, 2004.
[8] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[9] Wai Lam and Fahiem Bacchus. Learning Bayesian belief networks: An approach based on the MDL principle. Computational Intelligence, 10(3):269-293, 1994.
[10] Maxim Likhachev, Geoff Gordon, and Sebastian Thrun. ARA*: Anytime A* with provable bounds on sub-optimality. Advances in Neural Information Processing Systems (NIPS), 16, 2003.
[11] Jean-Philippe Pellet and Andre Elisseeff. Using Markov blankets for causal structure learning. The Journal of Machine Learning Research, 9:1295-1342, 2008.
[12] Stuart Jonathan Russell, Peter Norvig, John F Canny, Jitendra M Malik, and Douglas D Edwards. Artificial Intelligence: A Modern Approach, volume 74. Prentice Hall, Englewood Cliffs, 1995.
[13] Mark Schmidt, Alexandru Niculescu-Mizil, and Kevin Murphy. Learning graphical model structure using L1-regularization paths. In Proceedings of the National Conference on Artificial Intelligence, volume 22, page 1278, 2007.
[14] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461-464, 1978.
[15] Ajit Singh and Andrew Moore. Finding optimal Bayesian networks by dynamic programming. Technical Report 05-106, School of Computer Science, Carnegie Mellon University, 2005.
[16] Marc Teyssier and Daphne Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence, pages 584-590, 2005.
[17] Ioannis Tsamardinos, Laura E Brown, and Constantin F Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, 2006.
[18] Ioannis Tsamardinos, Alexander Statnikov, Laura E Brown, and Constantin F Aliferis. Generating realistic large Bayesian networks by tiling. In The Nineteenth International FLAIRS Conference, pages 592-597, 2006.
[19] Changhe Yuan, Brandon Malone, and Xiaojian Wu. Learning optimal Bayesian networks using A* search. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2186-2191. AAAI Press, 2011.
4,614 | 5,175 | On model selection consistency of M-estimators with
geometrically decomposable penalties
Jason D. Lee, Yuekai Sun
Institute for Computational and Mathematical Engineering
Stanford University
{jdl17,yuekai}@stanford.edu
Jonathan E. Taylor
Department of Statistics
Stanford University
jonathan.taylor@stanford.edu
Abstract
Penalized M-estimators are used in diverse areas of science and engineering to fit
high-dimensional models with some low-dimensional structure. Often, the penalties are geometrically decomposable, i.e. can be expressed as a sum of support
functions over convex sets. We generalize the notion of irrepresentable to geometrically decomposable penalties and develop a general framework for establishing
consistency and model selection consistency of M-estimators with such penalties.
We then use this framework to derive results for some special cases of interest in
bioinformatics and statistical learning.
1 Introduction
The principle of parsimony is used in many areas of science and engineering to promote "simple"
models over more complex ones. In machine learning, signal processing, and high-dimensional
statistics, this principle motivates the use of sparsity inducing penalties for model selection and
signal recovery from incomplete/noisy measurements. In this work, we consider M-estimators of
the form:

\underset{\theta \in \mathbf{R}^p}{\text{minimize}} \;\; \ell^{(n)}(\theta) + \lambda\rho(\theta), \quad \text{subject to } \theta \in S, \qquad (1.1)

where \ell^{(n)} is a convex, twice continuously differentiable loss function, \rho is a penalty function, and S \subseteq \mathbf{R}^p is a subspace. Many commonly used penalties are geometrically decomposable, i.e. can
be expressed as a sum of support functions over convex sets. We describe this notion of decomposable in Section 2 and then develop a general framework for analyzing the consistency and model
selection consistency of M-estimators with geometrically decomposable penalties. When specialized to various statistical models, our framework yields some known and some new model selection
consistency results.
This paper is organized as follows: First, we review existing work on consistency and model selection consistency of penalized M-estimators. Then, in Section 2, we describe the notion of geometrically decomposable and give some examples of geometrically decomposable penalties. In Section
3, we generalize the notion of irrepresentable to geometrically decomposable penalties and state our
main result (Theorem 3.4). We prove our main result in the Supplementary Material and develop a
converse result concerning the necessity of the irrepresentable condition in the Supplementary Material. In Section 4, we use our main result to derive consistency and model selection consistency
results for the generalized lasso (total variation) and maximum likelihood estimation in exponential
families.
1.1 Consistency of penalized M-estimators
The consistency of penalized M-estimators has been studied extensively.\(^1\) The three most well-studied problems are (i) the lasso [2, 26], (ii) generalized linear models (GLMs) with the lasso
penalty [10], and (iii) inverse covariance estimators with sparsity inducing penalties (equivalent
to sparse maximum likelihood estimation for a Gaussian graphical model) [21, 20]. There are also
consistency results for M-estimators with group and structured variants of the lasso penalty [1, 7].
Negahban et al. [17] propose a unified framework for establishing consistency and convergence rates for M-estimators with penalties \rho that are decomposable with respect to a pair of subspaces M, \bar{M}:

\rho(x + y) = \rho(x) + \rho(y), \quad \text{for all } x \in M, \; y \in \bar{M}^\perp.
Many commonly used penalties such as the lasso, group lasso, and nuclear norm are decomposable
in this sense. Negahban et al. prove a general result that establishes the consistency of M-estimators
with decomposable penalties. Using their framework, they derive consistency results for special
cases like sparse and group sparse regression. The current work is in a similar vein as Negahban et
al. [17], but we focus on establishing the more stringent result of model selection consistency rather
than consistency. See Section 3 for a comparison of the two notions of consistency.
The model selection consistency of penalized M-estimators has also been extensively studied. The
most commonly studied problems are (i) the lasso [30, 26], (ii) GLMs with the lasso penalty [4, 19,
28], (iii) covariance estimation [15, 12, 20] and (more generally) structure learning [6, 14]. There are
also general results concerning M-estimators with sparsity inducing penalties [29, 16, 11, 22, 8, 18,
24]. Despite the extensive work on model selection consistency, to our knowledge, this is the first
work to establish a general framework for model selection consistency for penalized M-estimators.
2 Geometrically decomposable penalties
Let C \subseteq \mathbf{R}^p be a closed convex set. Then the support function over C is

h_C(x) = \sup_y \{ y^T x \mid y \in C \}. \qquad (2.1)

Support functions are sublinear and should be thought of as semi-norms. If C is a norm ball, i.e. C = \{x \mid \|x\| \le 1\}, then h_C is the dual norm:

\|y\|^* = \sup_x \{ x^T y \mid \|x\| \le 1 \}.

The support function is a supremum of linear functions, hence the subdifferential consists of the linear functions that attain the supremum:

\partial h_C(x) = \{ y \in C \mid y^T x = h_C(x) \}.
The support function (as a function of the convex set C) is also additive over Minkowski sums, i.e. if C and D are convex sets, then

h_{C+D}(x) = h_C(x) + h_D(x).

We use this property to express penalty functions as sums of support functions. E.g. if \rho is a norm and the dual norm ball can be expressed as a (Minkowski) sum of convex sets C_1, \ldots, C_k, then \rho can be expressed as a sum of support functions:

\rho(x) = h_{C_1}(x) + \cdots + h_{C_k}(x).
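A quick numerical illustration of these identities (a toy example of our own, not from the paper): the support function of the \ell_\infty ball is the \ell_1 norm, and support functions add over Minkowski sums.

```python
import numpy as np

rng = np.random.default_rng(1)

def h_polytope(x, vertices):
    """Support function of conv(vertices): h_C(x) = max_{y in C} y^T x,
    attained at a vertex for polytopes."""
    return max(v @ x for v in vertices)

# C = ell_inf ball in R^2, represented by its four vertices
C = [np.array(s, float) for s in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
x = rng.standard_normal(2)
assert np.isclose(h_polytope(x, C), np.abs(x).sum())   # h_C = ||x||_1

# Minkowski-sum additivity: h_{C+D}(x) = h_C(x) + h_D(x)
D = [np.array(s, float) for s in [(2, 0), (-2, 0)]]    # a segment
CD = [c + d for c in C for d in D]                     # vertices of C + D
assert np.isclose(h_polytope(x, CD), h_polytope(x, C) + h_polytope(x, D))
```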
If a penalty \rho can be expressed as

\rho(\theta) = h_A(\theta) + h_I(\theta) + h_{S^\perp}(\theta), \qquad (2.2)

where A and I are closed convex sets and S is a subspace, then we say \rho is a geometrically decomposable penalty. This form is general; if \rho can be expressed as a sum of support functions, i.e.

\rho(\theta) = h_{C_1}(\theta) + \cdots + h_{C_k}(\theta),

then we can set A, I, and S^\perp to be sums of the sets C_1, \ldots, C_k to express \rho in geometrically decomposable form (2.2). In many cases of interest, A + I is a norm ball and h_{A+I} = h_A + h_I is the dual norm. In our analysis, we assume
1. A and I are bounded.
2. I contains a relative neighborhood of the origin, i.e. 0 \in \mathrm{relint}(I).

\(^1\) Given the extensive work on consistency of penalized M-estimators, our review and referencing is necessarily incomplete.
We do not require A + I to contain a neighborhood of the origin. This generality allows for unpenalized variables.
The notation A and I should be read as "active" and "inactive": span(A) should contain the true
parameter vector and span(I) should contain deviations from the truth that we want to penalize. E.g.
if we know the sparsity pattern of the unknown parameter vector, then A should span the subspace
of all vectors with the correct sparsity pattern.
The third term enforces a subspace constraint \theta \in S because the support function of a subspace is the (convex) indicator function of the orthogonal complement:

h_{S^\perp}(x) = \mathbf{1}_S(x) = \begin{cases} 0 & x \in S \\ \infty & \text{otherwise.} \end{cases}
Such subspace constraints arise in many problems, either naturally (e.g. the constrained lasso [9]) or
after reformulation (e.g. group lasso with overlapping groups). We give three examples of penalized
M-estimators with geometrically decomposable penalties, i.e.

\underset{\theta \in \mathbf{R}^p}{\text{minimize}} \;\; \ell^{(n)}(\theta) + \lambda\rho(\theta), \qquad (2.3)
where \rho is a geometrically decomposable penalty. We also compare our notion of geometrically
decomposable to two other notions of decomposable penalties by Negahban et al. [17] and Van De
Geer [25] in the Supplementary Material.
2.1 The lasso and group lasso penalties
Two geometrically decomposable penalties are the lasso and group lasso penalties. Let A and I be complementary subsets of \{1, \ldots, p\}. We can decompose the lasso penalty component-wise to obtain

\|\theta\|_1 = h_{B_{\infty,A}}(\theta) + h_{B_{\infty,I}}(\theta),

where h_{B_{\infty,A}} and h_{B_{\infty,I}} are support functions of the sets

B_{\infty,A} = \{\theta \in \mathbf{R}^p \mid \|\theta\|_\infty \le 1 \text{ and } \theta_I = 0\}
B_{\infty,I} = \{\theta \in \mathbf{R}^p \mid \|\theta\|_\infty \le 1 \text{ and } \theta_A = 0\}.
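This component-wise split is easy to sanity-check numerically (toy code, not from the paper):

```python
import numpy as np

def h_box(x, support):
    """Support function of {y : ||y||_inf <= 1, y zero off `support`}:
    equals the ell_1 norm of x restricted to `support`."""
    return np.abs(x[support]).sum()

theta = np.array([0.5, -2.0, 0.0, 3.0])
A, I = [0, 1], [2, 3]                     # complementary index sets
assert np.isclose(np.abs(theta).sum(), h_box(theta, A) + h_box(theta, I))
```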
If the groups do not overlap, then we can also decompose the group lasso penalty group-wise (A and I are now sets of groups) to obtain

\sum_{g \in G} \|\theta_g\|_2 = h_{B_{(2,\infty),A}}(\theta) + h_{B_{(2,\infty),I}}(\theta).

h_{B_{(2,\infty),A}} and h_{B_{(2,\infty),I}} are support functions of the sets

B_{(2,\infty),A} = \{\theta \in \mathbf{R}^p \mid \max_{g \in G} \|\theta_g\|_2 \le 1 \text{ and } \theta_g = 0, \, g \notin A\}
B_{(2,\infty),I} = \{\theta \in \mathbf{R}^p \mid \max_{g \in G} \|\theta_g\|_2 \le 1 \text{ and } \theta_g = 0, \, g \notin I\}.
If the groups overlap, then we can duplicate the parameters in overlapping groups and enforce equality constraints.
2.2 The generalized lasso penalty
Another geometrically decomposable penalty is the generalized lasso penalty [23]. Let D \in \mathbf{R}^{m \times p} be a matrix and A and I be complementary subsets of \{1, \ldots, m\}. We can express the generalized lasso penalty in decomposable form:

\|D\theta\|_1 = h_{D^T B_{\infty,A}}(\theta) + h_{D^T B_{\infty,I}}(\theta). \qquad (2.4)

h_{D^T B_{\infty,A}} and h_{D^T B_{\infty,I}} are support functions of the sets

D^T B_{\infty,A} = \{x \in \mathbf{R}^p \mid x = D_A^T y, \, \|y\|_\infty \le 1\} \qquad (2.5)
D^T B_{\infty,I} = \{x \in \mathbf{R}^p \mid x = D_I^T y, \, \|y\|_\infty \le 1\}. \qquad (2.6)
We can also formulate any generalized lasso penalized M-estimator as a linearly constrained, lasso penalized M-estimator. After a change of variables, a generalized lasso penalized M-estimator is equivalent to

\underset{\alpha \in \mathbf{R}^m, \, \gamma \in \mathbf{R}^p}{\text{minimize}} \;\; \ell^{(n)}(D^\dagger \alpha + \gamma) + \lambda\|\alpha\|_1, \quad \text{subject to } \gamma \in N(D),

where N(D) is the nullspace of D. The lasso penalty can then be decomposed component-wise to obtain

\|\alpha\|_1 = h_{B_{\infty,A}}(\alpha) + h_{B_{\infty,I}}(\alpha).

We enforce the subspace constraint \gamma \in N(D) with the support function of N(D)^\perp. This yields the convex optimization problem

\underset{\alpha \in \mathbf{R}^m, \, \gamma \in \mathbf{R}^p}{\text{minimize}} \;\; \ell^{(n)}(D^\dagger \alpha + \gamma) + \lambda\big(h_{B_{\infty,A}}(\alpha) + h_{B_{\infty,I}}(\alpha) + h_{N(D)^\perp}(\gamma)\big).
There are many interesting applications of the generalized lasso in signal processing and statistical
learning. We refer to Section 2 in [23] for some examples.
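For instance, the 1-D fused lasso (total variation) penalty is the generalized lasso with a first-difference matrix D. The snippet below, an illustration of our own, builds such a D and checks the split in (2.4) for an arbitrary choice of active difference rows.

```python
import numpy as np

def first_difference(p):
    """D in R^{(p-1) x p} with rows e_{i+1} - e_i, so that ||D theta||_1
    is the 1-D total-variation penalty."""
    D = np.zeros((p - 1, p))
    for i in range(p - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

p = 6
D = first_difference(p)
theta = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 4.0])
A = np.abs(D @ theta) > 0                  # "active" difference rows
# h_{D_A^T B_inf}(theta) + h_{D_I^T B_inf}(theta) = ||D theta||_1
hA = np.abs(D[A] @ theta).sum()
hI = np.abs(D[~A] @ theta).sum()
assert np.isclose(hA + hI, np.abs(D @ theta).sum())
```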
2.3 "Hybrid" penalties
A large class of geometrically decomposable penalties are so-called "hybrid" penalties: infimal convolutions of penalties that promote solutions that are sums of simple components, e.g. \theta = \theta_1 + \theta_2, where \theta_1 and \theta_2 are simple. If the constituent simple penalties are geometrically decomposable, then the resulting hybrid penalty is also geometrically decomposable.

For example, let \rho_1 and \rho_2 be geometrically decomposable penalties, i.e. there are sets A_1, I_1, S_1 and A_2, I_2, S_2 such that

\rho_1(\theta) = h_{A_1}(\theta) + h_{I_1}(\theta) + h_{S_1^\perp}(\theta)
\rho_2(\theta) = h_{A_2}(\theta) + h_{I_2}(\theta) + h_{S_2^\perp}(\theta).

The M-estimator with penalty \rho(\theta) = \inf_\gamma \{\rho_1(\gamma) + \rho_2(\theta - \gamma)\} is equivalent to the solution of the convex optimization problem

\underset{\theta \in \mathbf{R}^{2p}}{\text{minimize}} \;\; \ell^{(n)}(\theta_1 + \theta_2) + \lambda\big(\rho_1(\theta_1) + \rho_2(\theta_2)\big). \qquad (2.7)
This is an M-estimator with a geometrically decomposable penalty:

\underset{\theta \in \mathbf{R}^{2p}}{\text{minimize}} \;\; \ell^{(n)}(\theta_1 + \theta_2) + \lambda\big(h_A(\theta) + h_I(\theta) + h_{S^\perp}(\theta)\big).

h_A, h_I and h_{S^\perp} are support functions of the sets

A = \{(\theta_1, \theta_2) \mid \theta_1 \in A_1 \subseteq \mathbf{R}^p, \, \theta_2 \in A_2 \subseteq \mathbf{R}^p\}
I = \{(\theta_1, \theta_2) \mid \theta_1 \in I_1 \subseteq \mathbf{R}^p, \, \theta_2 \in I_2 \subseteq \mathbf{R}^p\}
S = \{(\theta_1, \theta_2) \mid \theta_1 \in S_1 \subseteq \mathbf{R}^p, \, \theta_2 \in S_2 \subseteq \mathbf{R}^p\}.
There are many interesting applications of hybrid penalties in signal processing and statistical learning. Two examples are the Huber function, \rho(\theta) = \inf_{\theta = \theta_1 + \theta_2} \|\theta_1\|_1 + \|\theta_2\|_2^2, and the multitask group regularizer, \rho(\theta) = \inf_{\theta = B + S} \|B\|_{1,\infty} + \|S\|_1. See [27] for recent work on model selection
consistency in hybrid penalties.
3 Main result
We assume the unknown parameter vector \theta^\star is contained in the model subspace

M := \mathrm{span}(I)^\perp \cap S, \qquad (3.1)
and we seek estimates of \theta^\star that are "correct". We consider two notions of correctness: (i) an estimate \hat\theta is consistent (in the \ell_2 norm) if the estimation error in the \ell_2 norm decays to zero in probability as the sample size grows:

\|\hat\theta - \theta^\star\|_2 \overset{p}{\to} 0 \text{ as } n \to \infty,

and (ii) \hat\theta is model selection consistent if the estimator selects the correct model with probability tending to one as the sample size grows:

\Pr(\hat\theta \in M) \to 1 \text{ as } n \to \infty.
NOTATION: We use P_C to denote the orthogonal projector onto span(C) and γ_C to denote the gauge function of a convex set C containing the origin:
γ_C(x) = inf {λ ∈ R₊ | x ∈ λC}.
Further, we use κ(·) to denote the compatibility constant between a semi-norm φ and the ℓ₂ norm over the model subspace:
κ(φ) := sup_x {φ(x) | ‖x‖₂ ≤ 1, x ∈ M}.
Finally, we choose a norm ‖·‖* to make ‖∇ℓ^(n)(θ*)‖* small. This norm is usually the dual norm to the penalty.
Before we state our main result, we state our assumptions on the problem. Our two main assumptions are stated in terms of the Fisher information matrix:
Q^(n) = ∇²ℓ^(n)(θ*).
Assumption 3.1 (Restricted strong convexity). We assume the loss function ℓ^(n) is locally strongly convex with constant m over the model subspace, i.e.
ℓ^(n)(θ₁) − ℓ^(n)(θ₂) ≥ ∇ℓ^(n)(θ₂)ᵀ(θ₁ − θ₂) + (m/2)‖θ₁ − θ₂‖₂²  (3.2)
for some m > 0 and all θ₁, θ₂ ∈ B_r(θ*) ∩ M.
We require this assumption to make the maximum likelihood estimate unique over the model subspace; otherwise, we cannot hope for consistency. This assumption requires the loss function to be curved along certain directions in the model subspace and is very similar to Negahban et al.'s notion of restricted strong convexity [17] and Bühlmann and van de Geer's notion of compatibility [3]. Intuitively, this assumption means the "active" predictors are not overly dependent on each other.
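For a quadratic loss, the restricted strong convexity constant is simply the smallest eigenvalue of the Fisher information restricted to the model subspace. A minimal sketch, assuming M is spanned by an orthonormal basis U of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
Q = X.T @ X / n                  # sample Fisher information for squared loss

# Model subspace M spanned by the first three coordinates (orthonormal basis).
U = np.eye(p)[:, :3]
m = np.linalg.eigvalsh(U.T @ Q @ U).min()
print("restricted strong convexity constant m =", m)
```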
We also require ∇²ℓ^(n) to be locally Lipschitz continuous, i.e.
‖∇²ℓ^(n)(θ₁) − ∇²ℓ^(n)(θ₂)‖₂ ≤ L‖θ₁ − θ₂‖₂
for some L > 0 and all θ₁, θ₂ ∈ B_r(θ*) ∩ M. This condition automatically holds for all twice-continuously differentiable loss functions, hence we do not state it as an assumption.
To obtain model selection consistency results, we must first generalize the key notion of irrepresentability to geometrically decomposable penalties.
Assumption 3.2 (Irrepresentability). There exists τ ∈ (0, 1) such that
sup_z {V(P_{M⊥}(Q^(n) P_M (P_M Q^(n) P_M)† P_M z − z)) | z ∈ ∂h_A(B_r(θ*) ∩ M)} < 1 − τ,
where V is the infimal convolution of γ_I and 1_{S⊥}:
V(z) = inf_u {γ_I(u) + 1_{S⊥}(z − u)}.
If u_I(z) and u_{S⊥}(z) achieve V(z) (i.e. V(z) = γ_I(u_I(z))), then V(z) < 1 means u_I(z) ∈ relint(I). Hence the irrepresentable condition requires any z ∈ M⊥ to be decomposable into u_I + u_{S⊥}, where u_I ∈ relint(I) and u_{S⊥} ∈ S⊥.
Lemma 3.3. V is a bounded semi-norm over M⊥, i.e. V is finite and sublinear over M⊥.
Let ‖·‖* be an error norm, usually chosen to make ‖∇ℓ^(n)(θ*)‖* small. V is a bounded semi-norm over M⊥, hence there exists some τ̄ such that
V(P_{M⊥}(Q^(n) P_M (P_M Q^(n) P_M)† P_M x − x)) ≤ τ̄ ‖x‖*.  (3.3)
τ̄ surely exists because (i) ‖·‖* is a norm, so the set {x ∈ R^p | ‖x‖* ≤ 1} is compact, and (ii) V is finite over M⊥, so the left side of (3.3) is a continuous function of x. Intuitively, τ̄ quantifies how large the irrepresentable term can be compared to the error norm.
The irrepresentable condition is a standard assumption for model selection consistency and has been shown to be almost necessary for sign consistency of the lasso [30, 26]. Intuitively, the irrepresentable condition requires the active predictors to be not overly dependent on the inactive predictors. In the Supplementary Material, we show our (generalized) irrepresentable condition is also necessary for model selection consistency with some geometrically decomposable penalties.
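In the special case of the lasso (with M the span of the active coordinates A), the condition reduces to the familiar check ‖Q^(n)_{IA}(Q^(n)_{AA})⁻¹ sign(θ*_A)‖∞ ≤ 1 − τ, which the following sketch (our code) evaluates on a random design:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 20
X = rng.standard_normal((n, p))
Q = X.T @ X / n

A = np.array([0, 1, 2])                    # active set
I = np.setdiff1d(np.arange(p), A)
s = np.sign(rng.standard_normal(len(A)))   # sign pattern of theta*_A

irr = np.abs(Q[np.ix_(I, A)] @ np.linalg.solve(Q[np.ix_(A, A)], s)).max()
print("irrepresentable value:", irr, "(condition holds iff strictly below 1)")
```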
Theorem 3.4. Suppose Assumptions 3.1 and 3.2 are satisfied. If we select λ such that
λ > (2/τ)‖∇ℓ^(n)(θ*)‖*
and
λ < min{ m²τ / (2Lτ̄ κ(‖·‖*)(2κ(h_A) + (τ̄/τ)κ(‖·‖*))), mr / (2κ(h_A) + (τ̄/τ)κ(‖·‖*)) },
then the penalized M-estimator is unique, consistent (in the ℓ₂ norm), and model selection consistent, i.e. the optimal solution to (2.3) satisfies
1. ‖θ̂ − θ*‖₂ ≤ (2/m)(κ(h_A) + (2/τ)κ(‖·‖*))λ,
2. θ̂ ∈ M := span(I)⊥ ∩ S.
Remark 1. Theorem 3.4 makes a deterministic statement about the optimal solution to (2.3). To use this result to derive consistency and model selection consistency results for a statistical model, we must first verify Assumptions 3.1 and 3.2 are satisfied with high probability. Then, we must choose an error norm ‖·‖* and select λ such that
λ > (2/τ)‖∇ℓ^(n)(θ*)‖*
and
λ < min{ m²τ / (2Lτ̄ κ(‖·‖*)(2κ(h_A) + (τ̄/τ)κ(‖·‖*))), mr / (2κ(h_A) + (τ̄/τ)κ(‖·‖*)) }
with high probability.
In Section 4, we use this theorem to derive consistency and model selection consistency results for the generalized lasso and penalized likelihood estimation for exponential families.
4 Examples
We use Theorem 3.4 to establish the consistency and model selection consistency of the generalized lasso and a group lasso penalized maximum likelihood estimator in the high-dimensional setting. Our results are nonasymptotic, i.e. we obtain bounds in terms of sample size n and problem dimension p that hold with high probability.
4.1 The generalized lasso
Consider the linear model y = Xθ* + ε, where X ∈ R^{n×p} is the design matrix and θ* ∈ R^p are unknown regression parameters. We assume the columns of X are normalized so ‖x_i‖₂ ≤ √n, and ε ∈ R^n is i.i.d., zero mean, sub-Gaussian noise with parameter σ².
We seek an estimate of θ* with the generalized lasso:
minimize_{θ∈R^p} (1/2n)‖y − Xθ‖₂² + λ‖Dθ‖₁,  (4.1)
where D ∈ R^{m×p}. The generalized lasso penalty is geometrically decomposable:
‖Dθ‖₁ = h_{DᵀB∞,A}(θ) + h_{DᵀB∞,I}(θ).
h_{DᵀB∞,A} and h_{DᵀB∞,I} are support functions of the sets
DᵀB∞,A = {x ∈ R^p | x = Dᵀy, y_I = 0, ‖y‖∞ ≤ 1}
DᵀB∞,I = {x ∈ R^p | x = Dᵀy, y_A = 0, ‖y‖∞ ≤ 1}.
The sample Fisher information matrix is Q^(n) = (1/n)XᵀX. Q^(n) does not depend on θ, hence the Lipschitz constant of Q^(n) is zero. The restricted strong convexity constant is
m = λ_min(Q^(n)) = inf_x {xᵀQ^(n)x | ‖x‖₂ = 1}.
The model subspace is the set
span(DᵀB∞,I)⊥ = R(D_Iᵀ)⊥ = N(D_I),
where I is a subset of the row indices of D. The compatibility constants κ(ℓ₁), κ(h_A) are
κ(ℓ₁) = sup_x {‖x‖₁ | ‖x‖₂ ≤ 1, x ∈ N(D_I)}
κ(h_A) = sup_x {h_{DᵀB∞,A}(x) | ‖x‖₂ ≤ 1, x ∈ M} ≤ ‖D_A‖₂√|A|.
If we select λ > 2√2 (στ̄/τ)√(log p / n), then there exists c such that Pr(λ ≥ (2/τ)‖∇ℓ^(n)(θ*)‖*) ≥ 1 − 2 exp(−cλ²n). Thus the assumptions of Theorem 3.4 are satisfied with probability at least 1 − 2 exp(−cλ²n), and we deduce the generalized lasso is consistent and model selection consistent.
Corollary 4.1. Suppose y = Xθ* + ε, where X ∈ R^{n×p} is the design matrix, θ* are unknown coefficients, and ε is i.i.d., zero mean, sub-Gaussian noise with parameter σ². If we select λ > 2√2 σ(τ̄/τ)√(log p / n), then, with probability at least 1 − 2 exp(−cλ²n), the solution to the generalized lasso is unique, consistent, and model selection consistent, i.e. the optimal solution to (4.1) satisfies
1. ‖θ̂ − θ*‖₂ ≤ (2/m)(‖D_A‖₂√|A| + (2/τ)κ(ℓ₁))λ,
2. (Dθ̂)_i = 0 for any i such that (Dθ*)_i = 0.
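A small simulation in the spirit of Corollary 4.1 for the 1-D fused lasso (our sketch; it requires the cvxpy package, and the constant in λ is an ad hoc stand-in for the theoretical 2√2 στ̄/τ):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, p, sigma = 400, 50, 0.5
theta_star = np.repeat([0.0, 2.0, -1.0], [20, 15, 15])   # piecewise constant
X = rng.standard_normal((n, p))
y = X @ theta_star + sigma * rng.standard_normal(n)

D = np.diff(np.eye(p), axis=0)                # first differences
lam = 2.0 * sigma * np.sqrt(np.log(p) / n)    # ad hoc constant in front

theta = cp.Variable(p)
obj = cp.Minimize(cp.sum_squares(y - X @ theta) / (2 * n) + lam * cp.norm1(D @ theta))
cp.Problem(obj).solve()

# Model selection: estimated change points vs. true change points.
print("true jumps:", np.flatnonzero(D @ theta_star != 0))
print("estimated jumps:", np.flatnonzero(np.abs(D @ theta.value) > 1e-4))
```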
4.2 Learning exponential families with redundant representations
Suppose X is a random vector, and let φ be a vector of sufficient statistics. The exponential family associated with these sufficient statistics is the set of distributions of the form
Pr(x; θ) = exp(θᵀφ(x) − A(θ)).
Suppose we are given samples x^(1), . . . , x^(n) drawn i.i.d. from an exponential family with unknown parameters θ* ∈ R^p. We seek a maximum likelihood estimate (MLE) of the unknown parameters:
minimize_{θ∈R^p} ℓ_ML^(n)(θ) + λ‖θ‖_{2,1}, subject to θ ∈ S,  (4.2)
where ℓ_ML^(n) is the (negative) log-likelihood function
ℓ_ML^(n)(θ) = −(1/n) ∑_{i=1}^n log Pr(x^(i); θ) = −(1/n) ∑_{i=1}^n θᵀφ(x^(i)) + A(θ)
and ‖θ‖_{2,1} is the group lasso penalty
‖θ‖_{2,1} = ∑_{g∈G} ‖θ_g‖₂.
It is also straightforward to change the maximum likelihood estimator to the more computationally tractable pseudolikelihood estimator [13, 6] or the neighborhood selection procedure [15], and to include covariates [13]. For brevity, we only explain the details for the maximum likelihood estimator.
Many undirected graphical models can be naturally viewed as exponential families. Thus estimating the parameters of exponential families is equivalent to learning undirected graphical models, a problem of interest in many application areas such as bioinformatics.
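Algorithmically, (4.2) is typically attacked with proximal gradient methods; the key ingredient is the proximal operator of ‖·‖_{2,1}, i.e. blockwise soft-thresholding. A sketch assuming disjoint groups:

```python
import numpy as np

def prox_group_lasso(v, groups, t):
    """Blockwise soft-thresholding:
    argmin_theta 0.5*||theta - v||^2 + t * sum_g ||theta_g||_2."""
    out = v.copy()
    for g in groups:
        nrm = np.linalg.norm(v[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * v[g]
    return out

# One proximal gradient step on a smooth loss with gradient `grad` would read:
# theta = prox_group_lasso(theta - step * grad(theta), groups, step * lam)
```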
Below, we state a corollary that results from applying Theorem 3.4 to exponential families. Please see the supplementary material for the proof and definitions of the quantities involved.
Corollary 4.2. Suppose we are given samples x^(1), . . . , x^(n) drawn i.i.d. from an exponential family with unknown parameters θ*. If we select
λ > (2√(2L₁) τ̄ / τ) √((max_{g∈G} |g|) log|G| / n)
and the sample size n is larger than
max{ (32L₁L₂²τ̄² / τ⁴m⁴)(2 + τ̄/τ)⁴ (max_{g∈G} |g|) |A|² log|G|, (16L₁ / m²r²τ²)(2 + τ̄/τ)² (max_{g∈G} |g|) |A| log|G| },
then, with probability at least 1 − 2 max_{g∈G} |g| exp(−cλ²n), the penalized maximum likelihood estimator is unique, consistent, and model selection consistent, i.e. the optimal solution to (4.2) satisfies
1. ‖θ̂ − θ*‖₂ ≤ (2/m)(1 + 2/τ)√|A| λ,
2. θ̂_g = 0, g ∈ I, and θ̂_g ≠ 0 if ‖θ*_g‖₂ > (1/m)(1 + 2/τ)√|A| λ.
5 Conclusion
We proposed the notion of geometric decomposability and generalized the irrepresentable condition to geometrically decomposable penalties. This notion of decomposability builds on those by Negahban et al. [17] and Candès and Recht [5] and includes many common sparsity-inducing penalties. It also allows us to enforce linear constraints.
We developed a general framework for establishing the model selection consistency of M-estimators
with geometrically decomposable penalties. Our main result gives deterministic conditions on the
problem that guarantee consistency and model selection consistency; in this sense, it extends the
work of [17] from estimation consistency to model selection consistency. We combine our main
result with probabilistic analysis to establish the consistency and model selection consistency of the
generalized lasso and group lasso penalized maximum likelihood estimators.
Acknowledgements
We thank Trevor Hastie and three anonymous reviewers for their insightful comments. J. Lee was
supported by a National Defense Science and Engineering Graduate Fellowship (NDSEG) and an
NSF Graduate Fellowship. Y. Sun was supported by the NIH, award number 1U01GM102098-01.
J.E. Taylor was supported by the NSF, grant DMS 1208857, and by the AFOSR, grant 113039.
References
[1] F. Bach. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res., 9:1179-1225, 2008.
[2] P.J. Bickel, Y. Ritov, and A.B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. Ann. Statis., 37(4):1705-1732, 2009.
[3] P. Bühlmann and S. van de Geer. Statistics for high-dimensional data: Methods, theory and applications. 2011.
[4] F. Bunea. Honest variable selection in linear and logistic regression models via ℓ1 and ℓ1+ℓ2 penalization. Electron. J. Stat., 2:1153-1194, 2008.
[5] E. Candès and B. Recht. Simple bounds for recovering low-complexity models. Math. Prog. Ser. A, pages 1-13, 2012.
[6] J. Guo, E. Levina, G. Michailidis, and J. Zhu. Asymptotic properties of the joint neighborhood selection method for estimating categorical Markov networks. arXiv preprint.
[7] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In Int. Conf. Mach. Learn. (ICML), pages 433-440. ACM, 2009.
[8] A. Jalali, P. Ravikumar, V. Vasuki, S. Sanghavi, and UT ECE. On learning discrete graphical models using group-sparse regularization. In Int. Conf. Artif. Intell. Stat. (AISTATS), 2011.
[9] G.M. James, C. Paulson, and P. Rusmevichientong. The constrained lasso. Technical report, University of Southern California, 2012.
[10] S.M. Kakade, O. Shamir, K. Sridharan, and A. Tewari. Learning exponential families in high-dimensions: Strong convexity and sparsity. In Int. Conf. Artif. Intell. Stat. (AISTATS), 2010.
[11] M. Kolar, L. Song, A. Ahmed, and E. Xing. Estimating time-varying networks. Ann. Appl. Stat., 4(1):94-123, 2010.
[12] C. Lam and J. Fan. Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statis., 37(6B):4254, 2009.
[13] J.D. Lee and T. Hastie. Learning mixed graphical models. arXiv preprint arXiv:1205.5012, 2012.
[14] P.L. Loh and M.J. Wainwright. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. arXiv:1212.0478, 2012.
[15] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Ann. Statis., 34(3):1436-1462, 2006.
[16] Y. Nardi and A. Rinaldo. On the asymptotic properties of the group lasso estimator for linear models. Electron. J. Stat., 2:605-633, 2008.
[17] S.N. Negahban, P. Ravikumar, M.J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statist. Sci., 27(4):538-557, 2012.
[18] G. Obozinski, M.J. Wainwright, and M.I. Jordan. Support union recovery in high-dimensional multivariate regression. Ann. Statis., 39(1):1-47, 2011.
[19] P. Ravikumar, M.J. Wainwright, and J.D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Ann. Statis., 38(3):1287-1319, 2010.
[20] P. Ravikumar, M.J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat., 5:935-980, 2011.
[21] A.J. Rothman, P.J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electron. J. Stat., 2:494-515, 2008.
[22] Y. She. Sparse regression with exact clustering. Electron. J. Stat., 4:1055-1096, 2010.
[23] R.J. Tibshirani and J.E. Taylor. The solution path of the generalized lasso. Ann. Statis., 39(3):1335-1371, 2011.
[24] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. IEEE Trans. Inform. Theory, 59(4):2001-2016, 2013.
[25] S. van de Geer. Weakly decomposable regularization penalties and structured sparsity. arXiv preprint arXiv:1204.4813, 2012.
[26] M.J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (lasso). IEEE Trans. Inform. Theory, 55(5):2183-2202, 2009.
[27] E. Yang and P. Ravikumar. Dirty statistical models. In Adv. Neural Inf. Process. Syst. (NIPS), pages 827-835, 2013.
[28] E. Yang, P. Ravikumar, G.I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. arXiv:1301.4183, 2013.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol., 68(1):49-67, 2006.
[30] P. Zhao and B. Yu. On model selection consistency of lasso. J. Mach. Learn. Res., 7:2541-2563, 2006.
4,615 | 5,176 | A multi-agent control framework for co-adaptation in
brain-computer interfaces
Josh Merel1 , ? Roy Fox2 , Tony Jebara3, Liam Paninski4
Department of Neurobiology and Behavior, 3 Department of Computer Science,
4
Department of Statistics, Columbia University, New York, NY 10027
2
School of Computer Science and Engineering, Hebrew University, Jerusalem 91904, Israel
jsm2183@columbia.edu, royf@cs.huji.ac.il,
jebara@cs.columbia.edu, liam@stat.columbia.edu
?
1
Abstract
In a closed-loop brain-computer interface (BCI), adaptive decoders are used to
learn parameters suited to decoding the user?s neural response. Feedback to the
user provides information which permits the neural tuning to also adapt. We
present an approach to model this process of co-adaptation between the encoding model of the neural signal and the decoding algorithm as a multi-agent formulation of the linear quadratic Gaussian (LQG) control problem. In simulation
we characterize how decoding performance improves as the neural encoding and
adaptive decoder optimize, qualitatively resembling experimentally demonstrated
closed-loop improvement. We then propose a novel, modified decoder update rule
which is aware of the fact that the encoder is also changing and show it can improve simulated co-adaptation dynamics. Our modeling approach offers promise
for gaining insights into co-adaptation as well as improving user learning of BCI
control in practical settings.
1 Introduction
Neural signals from electrodes implanted in cortex [1], electrocorticography (ECoG) [2], and electroencephalography (EEG) [3] all have been used to decode motor intentions and control motor
prostheses. Standard approaches involve using statistical models to decode neural activity to control
some actuator (e.g. a cursor on a screen [4], a robotic manipulator [5], or a virtual manipulator [6]).
Performance of offline decoders is typically different from the performance of online, closed-loop
decoders where the user gets immediate feedback and neural tuning changes are known to occur
[7, 8]. In order to understand how decoding will be performed in closed-loop, it is necessary to
model how the decoding algorithm updates and neural encoding updates interact in a coordinated
learning process, termed co-adaptation.
There have been a number of recent efforts to learn improved adaptive decoders specifically tailored
for the closed loop setting [9, 10], including an approach relying on stochastic optimal control theory
[11]. In other contexts, emphasis has been placed on training users to improve closed-loop control
[12]. Some efforts towards modeling the co-adaptation process have sought to model properties
of different decoders when used in closed-loop [13, 14, 15], with emphasis on ensuring the stability of the decoder and tuning the adaptation rate. One recent simulation study also demonstrated
how modulating task difficulty can improve the rate of co-adaptation when feedback noise limits
performance [16]. However, despite speculation that exploiting co-adaptation will be integral to
state-of-the-art BCI [17], general models of co-adaptation and methods which exploit those models
to improve co-adaptation dynamics are lacking.
*These authors contributed equally.
We propose that we should be able to leverage our knowledge of how the encoder changes in order
to better update the decoder. In the current work, we present a simple model of the closed-loop coadaptation process and show how we can use this model to improve decoder learning on simulated
experiments. Our model is a novel control setting which uses a split Linear Quadratic Gaussian
(LQG) system. Optimal decoding is performed by Linear Quadratic Estimation (LQE), effectively
the Kalman filter model. Encoding model updates are performed by the Linear Quadratic Regulator
(LQR), the dual control problem of the Kalman filter. The system is split insofar as each agent has
different information available and each performs optimal updates given the state of the other side
of the system. We take advantage of this model from the decoder side by anticipating changes in
the encoder and pre-emptively updating the decoder to match the estimate of the further optimized
encoding model. We demonstrate that this approach can improve the co-adaptation process.
2 Model framework
2.1 Task model
For concreteness, we consider a motor-cortical neuroprosthesis setting. We assume a naive user,
placed into a BCI control setting, and propose a training scheme which permits the user and decoder
to adapt. We provide a visual target cue at a 3D location and the user controls the BCI via neural signals which, in a natural setting, relate to hand kinematics. The target position is moved each timestep
to form a trajectory through the 3D space reachable by the user's hand. The BCI user receives visual feedback via the displayed location of their decoded hand position. The user's objective is to control their cursor to be as close to the continuously moving target cursor as possible. A key feature of this scheme is that we know the "intention" of the user, assuming it corresponds to the target.
The complete graphical model of this system is provided in figure 1. x_t in our simulations is a three dimensional position vector (Cartesian coordinates) corresponding to the intended hand position. This variable could be replaced or augmented by other variables of interest (e.g. velocity). We randomly evolve the target signal using a linear-Gaussian drift model (eq. (1)). The neural encoding model is linear-Gaussian in response to intended position x_t and feedback x̂_{t-1} (eq. (2)), giving a vector of neural responses u_t (e.g. local field potential or smoothed firing rates of neural units). Since we do not observe the whole brain region, we must subsample the number of neural units from which we collect information. The transformation C is conceptually equivalent to electrode sampling and y_t is the observable neural response vector via the electrodes (eq. (3)). Lastly, x̂_t is the decoded hand position estimate, which also serves as visual feedback (eq. (4)).
x_t = P x_{t-1} + ω_t;  ω_t ∼ N(0, Q)  (1)
u_t = A x_t + B x̂_{t-1} + η_t;  η_t ∼ N(0, R)  (2)
y_t = C u_t + ε_t;  ε_t ∼ N(0, S)  (3)
x̂_t = F y_t + G x̂_{t-1}.  (4)
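To fix ideas, a minimal numpy simulation of the generative model (1)-(4) is sketched below; the dimensions, noise scales, electrode subsampling C, and the (here arbitrary, untrained) decoder F, G are all our own choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, du, dy, T = 3, 20, 8, 500                  # state, neural, electrode dims

P = 0.95 * np.eye(dx)                          # intention drift (eq. 1)
A = rng.standard_normal((du, dx))              # tuning to intention (eq. 2)
B = 0.1 * rng.standard_normal((du, dx))        # tuning to feedback (eq. 2)
C = np.eye(du)[:dy]                            # electrode subsampling (eq. 3)
F = 0.1 * rng.standard_normal((dx, dy))        # decoder gain (eq. 4)
G = 0.5 * np.eye(dx)                           # decoder mean dynamics (eq. 4)

x = np.zeros(dx)
xhat = np.zeros(dx)
for t in range(T):
    x = P @ x + rng.standard_normal(dx)                    # target/intention
    u = A @ x + B @ xhat + 0.1 * rng.standard_normal(du)   # neural response
    y = C @ u + 0.1 * rng.standard_normal(dy)              # electrode observation
    xhat = F @ y + G @ xhat                                # decoded feedback
```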
[Figure 1: Graphical model relating target signal (x_t), neural response (u_t), electrode observation of neural response (y_t), and decoded feedback signal (x̂_t).]
During training, the decoding system is allowed access to the target position, interpreted as the real intention x_t. The decoded x̂_t is only used as feedback, to inform the user of the gradually learned dynamics of the decoder. After training, the system is tested on a task with the same parameters of the trajectory dynamics, but with the actual intention only known to the user, and hidden from the decoder. A natural objective is to minimize tracking error, measured as accumulated mean squared error between the target and neurally decoded pose over time.
For contemporary BCI applications, the Kalman filter is a reasonable baseline decoder, so we do not consider even simpler models. However, for other applications one might wish to consider a model in
error in this case [18, 19].
Sections 2.2 and 2.3 describe the model presented in figure 1 as seen from the distinct viewpoints of the two agents involved: the encoder and the decoder. The encoder observes x_t and x̂_{t-1}, and selects A and B to generate a control signal u_t. The decoder observes y_t, and selects F and G to estimate the intention as x̂_t. We assume that both agents are free to perform unconstrained optimization on their parameters.
2.2 Encoding model and optimal decoder
Our encoding model is quite simple, with neural units responding in a linear-Gaussian fashion to intended position x_t and feedback x̂_{t-1} (eq. (2)). This is a standard model of neural responses for BCI. The matrices A and B effectively correspond to the tuning response functions of the neural units, and we will allow these parameters to be adjusted under the control of the user. The matrix C corresponds to the observation of the neural units by the electrodes, so we treat it as fixed (in our case C will down-sample the neurons). For this paper, we assume noise covariances are fixed and known, but this can be generalized. Given the encoder, the decoder will estimate the intention x_t, which follows a hidden Markov chain (eq. (1)). The observations available to the decoder are the electrode samples y_t (eqs. (2) and (3)):
y_t = CA x_t + CB x̂_{t-1} + ε̃_t;  ε̃_t ∼ N(0, R_C)  (5)
R_C = C R Cᵀ + S.  (6)
Given all the electrode samples up to time t, the problem of finding the most likely hidden intention is a Linear-Quadratic Estimation problem (figure 2), and its standard solution is the Kalman filter; this decoder is widely used in similar contexts. To choose an appropriate Kalman gain F and mean dynamics G, the decoding system needs a good model of the dynamics of the underlying intention process (P, Q of eq. (1)) and the electrode observations (CA, CB, and R_C of eqs. (5) & (6)). We can assume that P and Q are known since the decoding algorithm is controlled by the same experimenter who specifies the intention process for the training phase. We discuss the estimation of the observation model in section 4.
[Figure 2: Decoder's point of view: target signal (x_t) directly generates observed responses (y_t), with the encoding model collapsed to omit the full signal (u_t). Decoded feedback signal (x̂_t) is generated by the steady-state Kalman filter.]
[Figure 3: Encoder's point of view: target signal (x_t) and decoded feedback signal (x̂_{t-1}) generate neural response (u_t). The model of the decoder collapses over the responses (y_t), which are unseen by the encoder side.]
Given an encoding model, and assuming a very long horizon¹, there exist standard methods to optimize the stationary value of the decoder parameters [20]. The stationary covariance Σ of x_t given x̂_{t-1} is the unique positive-definite fixed point of the Riccati equation
Σ = PΣPᵀ − PΣ(CA)ᵀ(R_C + (CA)Σ(CA)ᵀ)⁻¹(CA)ΣPᵀ + Q.  (7)
The Kalman gain is then
F = Σ(CA)ᵀ((CA)Σ(CA)ᵀ + R_C)⁻¹  (8)
with mean dynamics
G = P − F(CA)P − F(CB).  (9)
¹Our task is control of the BCI for arbitrarily long duration, so it makes sense to look for the stationary decoder. Similarly, the BCI user will look for a stationary encoder. We could also handle the finite horizon case (see section 2.3 for further discussion).
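Numerically, eqs. (7)-(9) can be solved by iterating the Riccati map to its fixed point, as in the sketch below (our code and function name; one could equally call scipy.linalg.solve_discrete_are).

```python
import numpy as np

def stationary_decoder(P, Q, CA, CB, RC, iters=500):
    """Iterate the Riccati map (7), then form the Kalman gain (8)
    and mean dynamics (9)."""
    Sig = np.eye(P.shape[0])
    for _ in range(iters):
        S_y = RC + CA @ Sig @ CA.T
        Sig = P @ Sig @ P.T - P @ Sig @ CA.T @ np.linalg.solve(S_y, CA @ Sig @ P.T) + Q
    F = Sig @ CA.T @ np.linalg.inv(CA @ Sig @ CA.T + RC)   # eq. (8)
    G = P - F @ CA @ P - F @ CB                            # eq. (9)
    return F, G
```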
We estimate x̂_t using eq. (4), and this is the most likely value, as well as the expected value, of x_t given the electrode observations y_1, . . . , y_t. Using this estimate as the decoded intention is equivalent to minimizing the expectation of a quadratic cost
c_lqe = ∑_t (1/2)‖x_t − x̂_t‖².  (10)
2.3 Model of co-adaptation
At the same time as the decoder-side agent optimizes the decoder parameters F and G, the encoder-side agent can optimize the encoder parameters A and B. We formulate encoder updates for the BCI application as a standard LQR problem. This framework requires that the encoder-side agent has an intention model (same as eq. (1)) and a model of the decoder. The decoder model combines eqs. (3) and (4) into
x̂_t = FC u_t + G x̂_{t-1} + F ε_t.  (11)
This model is depicted in figure 3. We assume that the encoder has access to a perfect estimate of the intention-model parameters P and Q (task knowledge). We also assume that the encoder is free to change its parameters A and B arbitrarily given the decoder-side parameters (which it can estimate as discussed in section 4).
As a model of real neural activity, there must be some cost to increasing the power of the neural signal. Without such a cost, the solutions diverge. We add an additional cost term (a regularizer), which is quadratic in the magnitude of the neural response u_t and penalizes a large neural signal:
c_lqr = ∑_t (1/2)‖x_t − x̂_t‖² + (1/2) u_tᵀ R̃ u_t.  (12)
Since the decoder has no direct influence on this additional term, it can be viewed as optimizing for this target cost function as well. The LQR problem is solved similarly to eq. (7), by assuming a very long horizon and optimizing the stationary value of the encoder parameters [20].
We next formulate our objective function in terms of standard LQR parameters. The control depends on the joint process of the intention and the feedback (x_t, x̂_{t-1}), but the cost is defined between x_t and x̂_t. To compute the expected cost given x_t, x̂_{t-1} and u_t, we use eq. (11) to get
E‖x̂_t − x_t‖² = ‖FC u_t + G x̂_{t-1} − x_t‖² + const  (13)
= (G x̂_{t-1} − x_t)ᵀ(G x̂_{t-1} − x_t) + (FC u_t)ᵀ(FC u_t) + 2(G x̂_{t-1} − x_t)ᵀ(FC u_t) + const.
Equation (13) provides the error portion of the quadratic objective of the LQR problem. The standard solution for the stationary case involves computing the Hessian V of the cost-to-go in the joint state (x_t, x̂_{t-1}) as the unique positive-definite fixed point of the Riccati equation
V = P̃ᵀV P̃ − (Ñ + P̃ᵀV D̃)(R̃ + S̃ + D̃ᵀV D̃)⁻¹(Ñᵀ + D̃ᵀV P̃) + Q̃.  (14)
Here P̃ is the process dynamics for the joint state of x_t and x̂_{t-1} and D̃ is the controllability of this dynamics. Q̃, S̃ and Ñ are the cost parameters, which can be determined by inspection of eq. (13). R̃ is the Hessian of the neural response cost term, which is chosen in simulations so that the resulting increase in neural signal strength is reasonable.
P̃ = [P 0; 0 G],  D̃ = [0; FC],  Q̃ = [I −G; −Gᵀ GᵀG],  S̃ = (FC)ᵀ(FC),  Ñ = [−FC; GᵀFC].
In our formulation, the encoding model (A, B) is equivalent to the feedback gain
[A B] = −(D̃ᵀV D̃ + R̃ + S̃)⁻¹(Ñᵀ + D̃ᵀV P̃).  (15)
This is the optimal stationary control, and is generally not optimal for shorter planning horizons. In
the co-adaptation setting, the encoding model (At , Bt ) regularly changes to adapt to the changing
decoder. This means that (At , Bt ) is only used for one timestep (or a few) before it is updated. The
effective planning horizon is thus shortened from its ideal infinity, and now depends on the rate and
magnitude of the perturbations introduced in the encoding model. Eq. (14) can be solved for this
finite horizon, but here for simplicity we assume the encoder updates introduce small or infrequent
enough changes to keep the planning horizon very long, and the stationary control close to optimal.
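A companion sketch for the encoder side (our code, mirroring the decoder sketch above): assemble the block matrices read off from eq. (13), iterate the Riccati recursion (14) with a coarse fixed iteration count, and extract the gain (15).

```python
import numpy as np

def stationary_encoder(P, G, FC, Rtil, iters=500):
    """Solve eqs. (14)-(15) for the joint state (x_t, xhat_{t-1})."""
    dx, du = P.shape[0], FC.shape[1]
    Ptil = np.block([[P, np.zeros((dx, dx))], [np.zeros((dx, dx)), G]])
    Dtil = np.vstack([np.zeros((dx, du)), FC])
    Qtil = np.block([[np.eye(dx), -G], [-G.T, G.T @ G]])
    Stil = FC.T @ FC
    Ntil = np.vstack([-FC, G.T @ FC])

    V = np.eye(2 * dx)
    for _ in range(iters):
        H = Rtil + Stil + Dtil.T @ V @ Dtil
        K = np.linalg.solve(H, Ntil.T + Dtil.T @ V @ Ptil)
        V = Ptil.T @ V @ Ptil - (Ntil + Ptil.T @ V @ Dtil) @ K + Qtil
    AB = -np.linalg.solve(Rtil + Stil + Dtil.T @ V @ Dtil,
                          Ntil.T + Dtil.T @ V @ Ptil)      # eq. (15): [A B]
    return AB[:, :dx], AB[:, dx:]
```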
[Figure 4: panel (a) plots error (summed over x, y, z) against update iteration index; panel (b) plots ρ against encoder update iteration index.]
Figure 4: (a) Each curve plots single-trial changes in decoding mean squared error (MSE) over the whole timeseries as a function of the number of update half-iterations. The encoder is updated in even steps, the decoder in odd ones. Distinct curves are for multiple, random initializations of the encoder. (b) The corresponding changes in encoder parameter updates: the y-axis, ρ, is the correlation between the vectorized encoder parameters after each update and their final values.
3 Perfect estimation setting
We can consider co-adaptation in a hypothetical setting where each agent has instant access to a perfect estimate of the other's parameters as soon as they change. To keep this setting comparable to the setting of section 4, where parameter estimation is needed, we only allow each agent access to those variables that it could, in principle, estimate. We assume both agents know the parameters P and Q of the intention dynamics, that the encoder knows FC and G of eq. (11), and that the decoder knows CA, CB and R_C of eqs. (5) and (6). These are the same parameters needed by each agent for its own re-optimization. This process of parameter updates is performed by alternating between the decoder update equations (7)-(9) and the encoder update equations (14)-(15). Since the agents take turns minimizing the expected infinite-horizon objectives of eq. (12) given the other, this cost will tend to decrease, approximately converging.
Note that neither of these steps depends explicitly on the observed values of the neural signal u_t or the decoded output x̂_t. In other words, co-adaptation can be simulated without ever actually generating the stochastic process of intention, encoding and decoding. However, this process and the signal-feedback loop become crucial when estimation is involved, as in section 4. Then each agent's update indirectly depends on its observations through its estimated model of the other agent.
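Under perfect information, the whole process of this section is then a short alternation of the two hypothetical solvers sketched earlier (stationary_decoder and stationary_encoder are our names, not the paper's; P, Q, C, RC, Rtil are as before).

```python
# Perfect-information co-adaptation: alternate best responses.
F, G = stationary_decoder(P, Q, C @ A, C @ B, RC)          # decoder best-responds, eqs. (7)-(9)
for it in range(20):
    A, B = stationary_encoder(P, G, F @ C, Rtil)           # encoder best-responds, eqs. (14)-(15)
    F, G = stationary_decoder(P, Q, C @ A, C @ B, RC)      # decoder re-optimizes
```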
To examine the dynamics in this idealized setting, we hold fixed the target trajectory x_{1...T} as well as the realization of the noise terms. We initialize the simulation with a random encoding model and observe empirically that, as the encoder and the decoder are updated alternatingly, the error rapidly reduces to a plateau. As the improvement saturates, the joint encoder-decoder pair approximates a locally optimal solution to the co-adaptation problem. Figure 4(a) plots the error as a function of the number of model update iterations; the different curves correspond to distinct, random initializations of the encoder parameters A, B with everything else held fixed. We emphasize that for a fixed encoder, the first decoder update would yield the infinite-horizon optimal update if the encoder could not adapt, and the error can be interpreted relative to this initial optimal decoding (see supplementary fig. 1(a) for a depiction of the initial error and fig. 1(b) for the improvement from encoder adaptation). This method obtains optimized encoder-decoder pairs with moderate sensitivity to the initial parameters of the encoding model. Interpreted in the context of BCI, this suggests that the initial tuning of the observed neurons may affect the local optima attainable for BCI performance due to standard co-adaptation. We may also be able to optimize the final error by cleverly choosing updates to decoder parameters in a fashion which shifts which optimum is reached. Figure 4(b) displays the corresponding approximate convergence of the encoder parameters: as the error decreases, the encoder parameters settle to a stable set (the actual final values across initializations vary).
Parameters free from the standpoint of the simulation are the neural noise covariance R_C and the Hessian R̃ of the neural signal cost. We set these to reasonable values: the noise to a moderate level and the cost sufficiently high as to prevent an exceedingly large neural signal which would swamp the noise and yield arbitrarily low error (see supplement). In an experimental setting, these parameters would be set by the physical system and they would need to be estimated beforehand.
4 Partially observable setting with estimation
More realistic than the model of co-adaptation where the decoder-side and encoder-side agents automatically know each other's parameters is one where the rate of updating is limited by the partial knowledge each agent has about the other. In each timestep, each agent will update its estimate of the other agent's parameters, and then use the current estimates to re-optimize its own parameters. In this work we use a recursive least squares (RLS) estimator, which is presented in the supplement (section 3). RLS has a forgetting factor λ which regulates how quickly the routine expects the parameters it estimates to change. This co-adaptation process is detailed in procedure 1. We elect to use the same estimation routine for each agent and assume that the user performs ideal-observer style optimal estimation. In general, if more knowledge is available about how a real BCI user updates their estimates of the decoder parameters, such a model could easily be used. We could also explore in simulation how various suboptimal estimation models employed by the user affect co-adaptation.
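A generic RLS tracker with forgetting factor λ, of the kind each agent might run to follow the other's slowly drifting linear map (our sketch; the supplement's exact parameterization may differ):

```python
import numpy as np

class RLS:
    """Track W in z = W v + noise, discounting old data by forgetting factor lam."""
    def __init__(self, dim_in, dim_out, lam=0.98):
        self.W = np.zeros((dim_out, dim_in))
        self.Pinv = 1e3 * np.eye(dim_in)   # inverse sample covariance
        self.lam = lam

    def update(self, v, z):
        Pv = self.Pinv @ v
        k = Pv / (self.lam + v @ Pv)                 # gain vector
        self.W += np.outer(z - self.W @ v, k)        # correct toward residual
        self.Pinv = (self.Pinv - np.outer(k, Pv)) / self.lam
        return self.W
```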
As noted previously, we will assume the noise model is fixed and that the decoder side knows the neural signal noise covariance R_C (eq. (6)). The encoder side will use a scaled identity matrix as its estimate of the electrodes-decoder noise model. To jointly estimate the decoder parameters and the noise model, an EM-based scheme would be a natural approach (such estimation of the BCI user's internal model of the decoder has been treated explicitly in [21]).
Procedure 1 Standard co-adaptation
for t = 1 to lengthTraining do
  Encoder-side
    Get x_t and x̂_{t-1}
    Update encoder-side estimate of decoder F̂C, Ĝ (RLS)
    Update optimal encoder A, B using current decoder estimate F̂C, Ĝ (LQR)
    Encode current intention using A, B and send signal y_t
  Decoder-side
    Get x_t and y_t
    Update decoder-side estimate of encoder ĈA, ĈB (RLS)
    Update optimal decoder F, G using current encoder estimate ĈA, ĈB (LQE)
    Decode current signal using F, G and display as feedback x̂_t
end for
Standard co-adaptation yields improvements in decoding performance over time as the encoder and decoder agents estimate each other's parameters and update based on those estimates. Appropriately, that model will improve the encoder-decoder pair over time, as in the blue curves of figure 5 below.
5 Encoder-aware decoder updates
In this section, we present an approach to model the encoder updates from the decoder side. We will use this to "take an extra step" towards optimizing the decoder for what the anticipated future encoder ought to look like.
In the most general case, the encoder can update A_t and B_t in an unconstrained fashion at each timestep t. From the decoder side, we do not know C and therefore we cannot know FC, an estimate of which is needed by the user to update the encoder. However, the decoder sets F and can predict updates to [CA CB] directly, instead of to [A B] as the actual encoder does (equation (15)). We emphasize that this update is not actually how the user will update the encoder; rather, it captures how the encoder ought to change the signals observed by the decoder (from the decoder's perspective).
Figure 5: In each subplot, the blue line corresponds to decreasing error as a function of simulated time from standard co-adaptation (procedure 1). The green line corresponds to the improved one-step-ahead co-adaptation (procedure 2). Plots from left to right have decreasing RLS forgetting factor used by the encoder side to estimate the decoder parameters. Curves depict the median error across 20 simulations with confidence intervals of 25% and 75% quantiles. Error at each timestep is appropriately cross-validated: it corresponds to taking the encoder-decoder pair of that timestep and computing error on "test" data.
We can find the update [CA_pred CB_pred] by solving a modified version of the LQR problem presented in section 2.3, eq. (15):
[CA_pred CB_pred] = −(D̃′ᵀV D̃′ + R̃′ + S̃′)⁻¹(Ñ′ᵀ + D̃′ᵀV P̃),  (16)
with terms defined similarly to section 2.3, except
D̃′ = [0; F],  S̃′ = FᵀF,  Ñ′ = [−F; GᵀF].  (17)
We also note that the quadratic penalty used in this approximation has been transformed from a cost on the responses of all of the neural units to a cost only on the observed ones. R̃′ serves as a regularization parameter which now must be tuned so that the decoder-side estimate of the encoding update is reasonable. For simplicity we let R̃′ = αI for some coarsely tuned constant α, though in general this cost need not be a scaled identity matrix. Equations (16) & (17) only use information available at the decoder side, with terms dependent on FC having been replaced by terms dependent instead on F. These predictions will be used only to engineer decoder update schemes that can be used to improve co-adaptation (as in procedure 2).
Procedure 2 r-step-ahead co-adaptation
for t = 1 to lengthTraining do
  Encoder-side
    As in procedure 1
  Decoder-side
    Get x_t and y_t
    Update decoder-side estimate of encoder ĈA, ĈB (RLS)
    Update optimal decoder F, G using current encoder estimate ĈA, ĈB (LQE)
    for r = 1 to numStepsAhead do
      Anticipate encoder update CA_pred, CB_pred to updated decoder F, G (modified LQR)
      Update r-step-ahead optimal decoder F, G using CA_pred, CB_pred (LQE)
    end for
    Decode current signal using r-step-ahead F, G and display as feedback x̂_t
end for
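In terms of the earlier hypothetical helpers, the one-step-ahead (r = 1) decoder-side inner step might look as follows; note the anticipation step substitutes F for the unknown FC and uses the regularizer R̃′ = αI of eq. (17).

```python
# One-step-ahead decoder update (r = 1), using only decoder-side quantities.
F, G = stationary_decoder(P, Q, CA_hat, CB_hat, RC)       # LQE on current RLS estimate
CA_pred, CB_pred = stationary_encoder(                    # modified LQR, eqs. (16)-(17):
    P, G, F, alpha * np.eye(F.shape[1]))                  # F stands in for FC, R' = alpha*I
F, G = stationary_decoder(P, Q, CA_pred, CB_pred, RC)     # re-optimize for anticipated encoder
```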
The ability to compute decoder-side approximate encoder updates opens the opportunity to improve encoder-decoder update dynamics by anticipating encoder-side adaptation to guide the process towards faster convergence, and possibly to better solutions. For the current estimate of the encoder, we update the optimal decoder, anticipate the encoder update by the method of the section above, and then update the decoder in response to the anticipated encoder update. This procedure allows r-step-ahead updating as presented in procedure 2. Figure 5 demonstrates how the one-step-ahead scheme can improve the co-adaptation dynamics. It is not a priori obvious that this method would help: the decoder-side estimate of the encoder update is not identical to the actual update. An encoder-side agent more permissive of rapid changes in the decoder may better handle r-step-ahead co-adaptation. We have also tried r-step-ahead updates for r > 1. However, this did not outperform the one-step-ahead method, and in some cases yields a decline relative to standard co-adaptation.
These simulations are susceptible to the setting of the forgetting factor used by each agent in the RLS estimation, the initial uncertainty of the parameters, and the quadratic cost R̃′ used in the one-step-ahead approximation. The encoder-side RLS parameters in a real setting will be determined by the BCI user, and R̃′ should be tuned (as a regularization parameter).
The encoder-side forgetting factor would correspond roughly to the plasticity of the BCI user with respect to the task. A high forgetting factor permits the user to tolerate very large changes in the decoder, and a low forgetting factor corresponds to the user assuming more decoder stability. From left to right in the subplots of figure 5, the encoder-side forgetting factor decreases; the regime where augmenting co-adaptation may offer the most benefit corresponds to a user that is most uncertain about the decoder and willing to tolerate decoder changes. Whether or not co-adaptation gains are possible in our model depends upon the parameters of the system. Nevertheless, for appropriately selected parameters, attempting to augment the co-adaptation should not hurt performance even if the user were outside of the regime where the most benefit is possible. A real user will likely perform their half of co-adaptation sub-optimally relative to our idealized BCI user, and the structure of such suboptimalities will likely increase the opportunity for co-adaptation to be augmented. The timescale of these simulation results is unspecified, but would correspond to the timescale on which the biological neural encoding can change. This varies by task and implicated brain region, ranging from a few training sessions [22, 23] to days [24].
6 Conclusion
Our work represents a step in the direction of exploiting co-adaptation to jointly optimize the neural
encoding and the decoder parameters, rather than simply optimizing the decoder parameters without
taking the encoder parameter adaptation into account. We model the process of co-adaptation that
occurs in closed-loop BCI use between the user and decoding algorithm. Moreover, the results using
our modified decoding update demonstrate a proof of concept that reliable improvement can be
obtained relative to naive adaptive decoders by encoder-aware updates to the decoder in a simulated
system. It is still open how well methods based on this approach will extend to experimental data.
BCI is a two-agent system, and we may view co-adaptation as we have formulated it within multiagent control theory. As both agents adapt to reduce the error of the decoded intention given their
respective estimates of the other agent, a fixed point of this co-adaptation process is a Nash equilibrium. This equilibrium is only known to be unique in the case where the intention at each timestep is
independent [25]. In our more general setting, there may be more than one encoder-decoder pair for
which each is optimal given the other. Moreover, there may exist non-linear encoders with which
non-linear decoders can be in equilibrium. These connections will be explored in future work.
Obviously our model of the neural encoding and the process by which the neural encoding model
is updated are idealizations. Future experimental work will determine how well our co-adaptive
model can be applied to the real neuroprosthetic context. For rapid, low-cost experiments it might
be best to begin with a human, closed-loop experiments intended to simulate a BCI [26]. As the
Kalman filter is a standard decoder, it will be useful to begin experimental investigations with this
choice (as analyzed in this work). More complicated decoding schemes also appear to improve
decoding performance [23] by better accounting for the non-linearities in the real neural encoding,
and such methods scale to BCI contexts with many output degrees of freedom [27]. An important
extension of the co-adaptation model presented in this work is to non-linear encoding and decoding
schemes. Even in more complicated, realistic settings, we hope the framework presented here will
offer similar practical benefits for improving BCI control.
Acknowledgments
This project is supported in part by the Gatsby Charitable Foundation. Liam Paninski receives
support from a NSF CAREER award.
References
[1] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141-142, 2002.
[2] K. J. Miller et al., "Cortical activity during motor execution, motor imagery, and imagery-based online feedback," PNAS, vol. 107, no. 9, pp. 4430-4435, 2010.
[3] D. J. McFarland, W. A. Sarnacki, and J. R. Wolpaw, "Electroencephalographic (EEG) control of three-dimensional movement," Journal of Neural Engineering, vol. 7, no. 3, p. 036007, 2010.
[4] V. Gilja et al., "A high-performance neural prosthesis enabled by control algorithm design," Nat Neurosci, 2012.
[5] L. R. Hochberg et al., "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm," Nature, vol. 485, no. 7398, pp. 372-375, 2012.
[6] D. Putrino et al., "Development of a closed-loop feedback system for real-time control of a high-dimensional brain machine interface," Conf Proc IEEE EMBS, vol. 2012, pp. 4567-4570, 2012.
[7] S. Koyama et al., "Comparison of brain-computer interface decoding algorithms in open-loop and closed-loop control," Journal of Computational Neuroscience, vol. 29, no. 1-2, pp. 73-87, 2010.
[8] J. M. Carmena et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, p. E42, 2003.
[9] V. Gilja et al., "A brain machine interface control algorithm designed from a feedback control perspective," Conf Proc IEEE Eng Med Biol Soc, vol. 2012, pp. 1318-22, 2012.
[10] Z. Li, J. E. O'Doherty, M. A. Lebedev, and M. A. L. Nicolelis, "Adaptive decoding for brain-machine interfaces through Bayesian parameter updates," Neural Comput., vol. 23, no. 12, pp. 3162-204, 2011.
[11] K. Kowalski, B. He, and L. Srinivasan, "Dynamic analysis of naive adaptive brain-machine interfaces," Neural Comput., vol. 25, no. 9, pp. 2373-2420, 2013.
[12] C. Vidaurre, C. Sannelli, K.-R. Müller, and B. Blankertz, "Machine-learning based co-adaptive calibration for brain-computer interfaces," Neural Computation, vol. 23, no. 3, pp. 791-816, 2011.
[13] M. Lagang and L. Srinivasan, "Stochastic optimal control as a theory of brain-machine interface operation," Neural Comput., vol. 25, pp. 374-417, Feb. 2013.
[14] R. Héliot, K. Ganguly, J. Jimenez, and J. M. Carmena, "Learning in closed-loop brain-machine interfaces: Modeling and experimental validation," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 40, no. 5, pp. 1387-1397, 2010.
[15] S. Dangi, A. L. Orsborn, H. G. Moorman, and J. M. Carmena, "Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces," Neural Computation, pp. 1-39, Apr. 2013.
[16] Y. Zhang, A. B. Schwartz, S. M. Chase, and R. E. Kass, "Bayesian learning in assisted brain-computer interface tasks," Conf Proc IEEE Eng Med Biol Soc, vol. 2012, pp. 2740-3, 2012.
[17] S. Waldert et al., "A review on directional information in neural signals for brain-machine interfaces," Journal of Physiology Paris, vol. 103, no. 3-5, pp. 244-254, 2009.
[18] G. P. Papavassilopoulos, "Solution of some stochastic quadratic Nash and leader-follower games," SIAM J. Control Optim., vol. 19, pp. 651-666, Sept. 1981.
[19] E. Doi and M. S. Lewicki, "Characterization of minimum error linear coding with sensory and neural noise," Neural Computation, vol. 23, no. 10, pp. 2498-2510, 2011.
[20] M. Athans, "The discrete time linear-quadratic-Gaussian stochastic control problem," Annals of Economic and Social Measurement, vol. 1, pp. 446-488, September 1972.
[21] M. D. Golub, S. M. Chase, and B. M. Yu, "Learning an internal dynamics model from control demonstration," 30th International Conference on Machine Learning, 2013.
[22] R. Shadmehr, M. A. Smith, and J. W. Krakauer, "Error correction, sensory prediction, and adaptation in motor control," Annual Review of Neuroscience, vol. 33, no. March, pp. 89-108, 2010.
[23] L. Shpigelman, H. Lalazar, and E. Vaadia, "Kernel-ARMA for hand tracking and brain-machine interfacing during 3D motor control," in NIPS, pp. 1489-1496, 2008.
[24] A. C. Koralek, X. Jin, J. D. Long II, R. M. Costa, and J. M. Carmena, "Corticostriatal plasticity is necessary for learning intentional neuroprosthetic skills," Nature, vol. 483, no. 7389, pp. 331-335, 2012.
[25] T. Basar, "On the uniqueness of the Nash solution in linear-quadratic differential games," International Journal of Game Theory, vol. 5, no. 2-3, pp. 65-90, 1976.
[26] J. P. Cunningham et al., "A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces," Journal of Neurophysiology, vol. 105, no. 4, pp. 1932-1949, 2010.
[27] Y. T. Wong et al., "Decoding arm and hand movements across layers of the macaque frontal cortices," Conf Proc IEEE Eng Med Biol Soc, vol. 2012, pp. 1757-60, 2012.
4,616 | 5,177 | Probabilistic Movement Primitives
Alexandros Paraschos, Christian Daniel, Jan Peters, and Gerhard Neumann
Intelligent Autonomous Systems, Technische Universität Darmstadt
Hochschulstr. 10, 64289 Darmstadt, Germany
{paraschos,daniel,peters,neumann}@ias.tu-darmstadt.de
Abstract
Movement Primitives (MP) are a well-established approach for representing modular and re-usable robot movement generators. Many state-of-the-art robot learning successes are based on MPs, due to their compact representation of the inherently
continuous and high dimensional robot movements. A major goal in robot learning is to combine multiple MPs as building blocks in a modular control architecture to solve complex tasks. To this effect, a MP representation has to allow for
blending between motions, adapting to altered task variables, and co-activating
multiple MPs in parallel. We present a probabilistic formulation of the MP concept that maintains a distribution over trajectories. Our probabilistic approach
allows for the derivation of new operations which are essential for implementing
all aforementioned properties in one framework. In order to use such a trajectory
distribution for robot movement control, we analytically derive a stochastic feedback controller which reproduces the given trajectory distribution. We evaluate
and compare our approach to existing methods on several simulated as well as
real robot scenarios.
1 Introduction
Movement Primitives (MPs) are commonly used for representing and learning basic movements
in robotics, e.g., hitting and batting, grasping, etc. [1, 2, 3]. MP formulations are compact parameterizations of the robot's control policy. Modulating their parameters permits imitation and
reinforcement learning as well as adapting to different scenarios. MPs have been used to solve
many complex tasks, including 'Ball-in-the-Cup' [4], Ball-Throwing [5, 6], Pancake-Flipping [7]
and Tetherball [8].
The aim of MPs is to allow for composing complex robot skills out of elemental movements with a
modular control architecture. Hence, we require a MP architecture that supports parallel activation
and smooth blending of MPs for composing complex movements of sequentially [9] and simultaneously [10] activated primitives. Moreover, adaptation to a new task or a new situation requires
modulation of the MP to an altered desired target position, target velocity or via-points [3]. Additionally, the execution speed of the movement needs to be adjustable to change the speed of, for
example, a ball-hitting movement. As we want to learn the movement from data, another crucial requirement is that the parameters of the MPs should be straightforward to learn from demonstrations
as well as through trial and error for reinforcement learning approaches. Ideally, the same architecture is applicable for both stroke-based and periodic movements, and capable of representing
optimal behavior in deterministic and stochastic environments.
While many of these properties are implemented by one or more existing MP architectures [1, 11,
10, 2, 12, 13, 14, 15], no approach exists which exhibits all of these properties in one framework. For
example, [13] also offers a probabilistic interpretation of MPs by representing an MP as a learned
graphical model. However, this approach heavily depends on the quality of the used planner and the
movement cannot be temporally scaled. Rozo et al. [12, 16] use a combination of primitives, yet,
their control policy of the MP is based on heuristics and it is unclear how the combination of MPs
affects the resulting movements.
In this paper, we introduce the concept of probabilistic movement primitives (ProMPs) as a general
probabilistic framework for representing and learning MPs. Such a ProMP is a distribution over
trajectories. Working with distributions enables us to formulate the described properties by operations from probability theory. For example, modulation of a movement to a novel target can be
realized by conditioning on the desired target?s positions or velocities. Similarly, consistent parallel
activation of two elementary behaviors can be accomplished by a product of two independent trajectory probability distributions. Moreover, a trajectory distribution can also encode the variance of the
movement, and, hence, a ProMP can often directly encode optimal behavior in stochastic systems
[17]. Finally, a probabilistic framework allows us to model the covariance between trajectories of
different degrees of freedom, that can be used to couple the joints of the robot.
Such properties of trajectory distributions have so far not been properly exploited for representing
and learning MPs. The main reason for the absence of such an approach has been the difficulty of
extracting a policy for controlling the robot from a trajectory distribution. We show how this step can
be accomplished and derive a control policy that exactly reproduces a given trajectory distribution.
To the best of our knowledge, we present the first principled MP approach that can exploit the power
of operations from probability theory.
While the ProMPs' representation introduces many novel components, it incorporates many advantages from well-known previous movement primitive representations [18, 10], such as phase
variables for timing of the movement that enable temporal rescaling of movements, and the ability
to represent both rhythmic and stroke based movements. However, since ProMPs incorporate the
variance of demonstrations, the increased flexibility and advantageous properties of the representation come at the price of requiring multiple demonstrations to learn the primitives as opposed to past
approaches [18, 3] that can clone movements from a single demonstration.
2 Probabilistic Movement Primitives (ProMPs)
A movement primitive representation should exhibit several desirable properties, such as co-activation, adaptability, and optimality, in order to be a powerful MP representation. The goal of this paper is to unify these properties in one framework. We accomplish this objective by using a probabilistic formulation for MPs. We summarize all the properties and how they are implemented in our framework in Table 1. In this section, we will sequentially explain the importance of each of these properties and discuss their implementation in our framework. As a crucial part of our objective, we will introduce conditioning and a product of ProMPs as new operations that can be applied on the ProMPs due to the probabilistic formulation. Finally, we show how to derive a controller which follows a given trajectory distribution.

Table 1: Desirable properties and their implementation in the ProMP

Property               Implementation
Co-Activation          Product
Modulation             Conditioning
Optimality             Encode variance
Coupling               Mean, Covariance
Learning               Max. Likelihood
Temporal Scaling       Modulate Phase
Rhythmic Movements     Periodic Basis
2.1 Probabilistic Trajectory Representation
We model a single movement execution as a trajectory \tau = {q_t}_{t=0...T}, defined by the joint angles q_t over time. In our framework, a MP describes multiple ways to execute a movement, which naturally leads to a probability distribution over trajectories.
Encoding a Time-Varying Variance of Movements. Our movement primitive representation models the time-varying variance of the trajectories to be able to capture multiple demonstrations with high variability. Representing the variance information is crucial as it reflects the importance of single time points for the movement execution, and it is often a requirement for representing optimal behavior in stochastic systems [17].
We use a weight vector w to compactly represent a single trajectory. The probability of observing a trajectory \tau given the underlying weight vector w is given as a linear basis function model

    y_t = [q_t, \dot{q}_t]^T = \Phi_t^T w + \epsilon_y,    p(\tau | w) = \prod_t N(y_t | \Phi_t^T w, \Sigma_y),    (1)

where \Phi_t = [\phi_t, \dot{\phi}_t] defines the n x 2 dimensional time-dependent basis matrix for the joint positions q_t and velocities \dot{q}_t, n defines the number of basis functions, and \epsilon_y ~ N(0, \Sigma_y) is zero-mean i.i.d. Gaussian noise. By weighting the basis functions \Phi_t with the parameter vector w, we can represent the mean of a trajectory.
In order to capture the variance of the trajectories, we introduce a distribution p(w; \theta) over the weight vector w, with parameters \theta. The trajectory distribution p(\tau; \theta) can now be computed by marginalizing out the weight vector w, i.e., p(\tau; \theta) = \int p(\tau | w) p(w; \theta) dw. The distribution p(\tau; \theta) defines a Hierarchical Bayesian Model (HBM) whose parameters are given by the observation noise variance \Sigma_y and the parameters \theta of p(w; \theta).
Temporal Modulation. Temporal modulation is needed for a faster or slower execution of the movement. We introduce a phase variable z to decouple the movement from the time signal, as in previous non-probabilistic approaches [18]. The phase can be any function z(t) monotonically increasing with time. By modifying the rate of the phase variable, we can modulate the speed of the movement. Without loss of generality, we define the phase as z_0 = 0 at the beginning of the movement and as z_T = 1 at the end. The basis functions \phi_t now directly depend on the phase instead of time, such that \phi_t = \phi(z_t), and the corresponding derivative becomes \dot{\phi}_t = \phi'(z_t) \dot{z}_t.
Rhythmic and Stroke-Based Movements. The choice of the basis functions depends on the type of movement, which can be either rhythmic or stroke-based. For stroke-based movements, we use Gaussian basis functions b_i^G, while for rhythmic movements we use Von-Mises basis functions b_i^{VM} to model periodicity in the phase variable z, i.e.,

    b_i^G(z) = \exp( -(z_t - c_i)^2 / (2h) ),    b_i^{VM}(z) = \exp( \cos(2\pi (z_t - c_i)) / h ),    (2)

where h defines the width of the basis and c_i the center for the i-th basis function. We normalize the basis functions with \phi_i(z_t) = b_i(z) / \sum_j b_j(z).
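As a concrete illustration, the following Python sketch (our own minimal example, not from the paper; the number of basis functions, their centers, and the width h are arbitrary choices) evaluates the normalized stroke-based and rhythmic features of Eq. (2):

```python
import numpy as np

def gaussian_basis(z, centers, h):
    # Unnormalized stroke-based basis b_i^G(z) = exp(-(z - c_i)^2 / (2h)).
    return np.exp(-(z - centers) ** 2 / (2.0 * h))

def von_mises_basis(z, centers, h):
    # Unnormalized rhythmic basis b_i^VM(z) = exp(cos(2*pi*(z - c_i)) / h).
    return np.exp(np.cos(2.0 * np.pi * (z - centers)) / h)

def normalized_features(z, centers, h, rhythmic=False):
    # Normalized features phi_i(z) = b_i(z) / sum_j b_j(z).
    b = von_mises_basis(z, centers, h) if rhythmic else gaussian_basis(z, centers, h)
    return b / np.sum(b)

# Example: n = 10 basis functions on a phase z in [0, 1].
centers = np.linspace(0.0, 1.0, 10)
phi = normalized_features(0.3, centers, h=0.05)
print(phi.shape, phi.sum())  # (10,) 1.0
```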
Encoding Coupling between Joints. So far, we have considered each degree of freedom to be modeled independently. However, for many tasks we have to coordinate the movement of the joints. A common way to implement such coordination is via the phase variable z_t that couples the mean of the trajectory distribution [18]. Yet, it is often desirable to also encode higher-order moments of the coupling, such as the covariance of the joints at time point t. Hence, we extend our model to multiple dimensions. For each dimension i, we maintain a parameter vector w_i, and we define the combined weight vector w as w = [w_1^T, ..., w_n^T]^T. The basis matrix \Phi_t now extends to a block-diagonal matrix containing the basis functions and their derivatives for each dimension. The observation vector y_t consists of the angles and velocities of all joints. The probability of an observation y at time t is given by

    p(y_t | w) = N( [y_{1,t}^T, ..., y_{d,t}^T]^T | \Psi_t^T w, \Sigma_y ),  with  \Psi_t^T = blockdiag(\Phi_t^T, ..., \Phi_t^T),    (3)

where y_{i,t} = [q_{i,t}, \dot{q}_{i,t}]^T denotes the joint angle and velocity for the i-th joint. We now maintain a distribution p(w; \theta) over the combined parameter vector w. Using this distribution, we can also capture the covariance between joints.
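To make the multi-dimensional construction concrete, a small sketch (our own illustration, not from the paper) builds the transposed block-diagonal matrix \Psi_t^T that maps the stacked weight vector directly to the stacked joint observations; a Kronecker product with the identity is a convenient way to form identical diagonal blocks:

```python
import numpy as np

def multi_dof_basis(phi, phi_dot, n_dof):
    # phi, phi_dot: basis values and their time derivatives at the current
    # phase, each of length n. The 2 x n block maps one joint's weights w_i
    # to [q_i, qdot_i]; kron with the identity stacks n_dof identical blocks.
    block = np.vstack([phi, phi_dot])        # 2 x n
    return np.kron(np.eye(n_dof), block)     # (2 * n_dof) x (n * n_dof)

n, n_dof = 10, 7
phi, phi_dot = np.random.rand(n), np.random.rand(n)
Psi_T = multi_dof_basis(phi, phi_dot, n_dof)  # \Psi_t^T in the text
print(Psi_T.shape)  # (14, 70)
```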
Learning from Demonstrations. One crucial requirement of a MP representation is that the parameters of a single primitive are easy to acquire from demonstrations. To facilitate the estimation of the parameters, we will assume a Gaussian distribution p(w; \theta) = N(w | \mu_w, \Sigma_w) over the parameters w. Consequently, the distribution of the state p(y_t | \theta) for time step t is given by

    p(y_t; \theta) = \int N(y_t | \Psi_t^T w, \Sigma_y) N(w | \mu_w, \Sigma_w) dw = N( y_t | \Psi_t^T \mu_w, \Psi_t^T \Sigma_w \Psi_t + \Sigma_y ),    (4)

and, thus, we can easily evaluate the mean and the variance for any time point t. As a ProMP represents multiple ways to execute an elemental movement, we also need multiple demonstrations to learn p(w; \theta). The parameters \theta = {\mu_w, \Sigma_w} can be learned from multiple demonstrations by maximum likelihood estimation, for example, by using the expectation maximization algorithm for HBMs with Gaussian distributions [19].
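A common simplification of the full EM procedure is to fit one weight vector per demonstration by ridge-regularized least squares and then take empirical moments of the fitted weights. The sketch below follows that simplification (our own illustration; the regularizer value is an arbitrary choice, and the observation noise \Sigma_y is not estimated here):

```python
import numpy as np

def fit_weights(Y, Psi_T, reg=1e-6):
    # One demonstration: Y is (T, d) with y_t ~ N(Psi_T[t] @ w, Sigma_y),
    # Psi_T is (T, d, m). Returns the ridge least-squares estimate of w.
    m = Psi_T.shape[2]
    A, b = reg * np.eye(m), np.zeros(m)
    for t in range(Y.shape[0]):
        A += Psi_T[t].T @ Psi_T[t]
        b += Psi_T[t].T @ Y[t]
    return np.linalg.solve(A, b)

def fit_promp(demos, Psi_T):
    # demos: list of (T, d) arrays; needs more than one demonstration
    # for a meaningful covariance estimate.
    W = np.stack([fit_weights(Y, Psi_T) for Y in demos])
    return W.mean(axis=0), np.cov(W, rowvar=False)
```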
2.2 New Probabilistic Operators for Movement Primitives
The ProMPs allow for the formulation of new operators from probability theory, e.g., conditioning
for modulating the trajectory and a product of distributions for co-activating MPs. We will now
describe both operators in our general framework and, subsequently, discuss their implementation
for our specific choice of Gaussian distributions for p(w; \theta).
Modulation of Via-Points, Final Positions, or Velocities by Conditioning. The modulation of via-points and final positions are important properties of any MP framework, such that the MP can be adapted to new situations. In our probabilistic formulation, such operations can be described by conditioning the MP to reach a certain state y_t^* at time t. Conditioning is performed by adding a desired observation x_t^* = [y_t^*, \Sigma_y^*] to our probabilistic model and applying Bayes theorem, i.e., p(w | x_t^*) \propto N(y_t^* | \Psi_t^T w, \Sigma_y^*) p(w). The state vector y_t^* represents the desired position and velocity vector at time t, and \Sigma_y^* describes the accuracy of the desired observation. We can also condition on any subset of y_t^*. For example, by specifying a desired joint position q_1 for the first joint, the trajectory distribution will automatically infer the most probable joint positions for the other joints.

For Gaussian trajectory distributions, the conditional distribution p(w | x_t^*) for w is Gaussian with mean and variance

    \mu_w^{[new]} = \mu_w + \Sigma_w \Psi_t ( \Sigma_y^* + \Psi_t^T \Sigma_w \Psi_t )^{-1} ( y_t^* - \Psi_t^T \mu_w ),    (5)
    \Sigma_w^{[new]} = \Sigma_w - \Sigma_w \Psi_t ( \Sigma_y^* + \Psi_t^T \Sigma_w \Psi_t )^{-1} \Psi_t^T \Sigma_w.    (6)

Conditioning a ProMP to different target states is also illustrated in Figure 1(a). We can see that, despite the modulation of the ProMP by conditioning, the ProMP stays within the original distribution, and, hence, the modulation is also learned from the original demonstrations. Modulation strategies in current approaches such as the DMPs do not show this beneficial effect [18].
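Eqs. (5) and (6) translate directly into code. The following sketch is our own illustration; `Psi_T_t` denotes the d x m matrix \Psi_t^T that maps weights to the conditioned observation:

```python
import numpy as np

def condition_promp(mu_w, Sigma_w, Psi_T_t, y_star, Sigma_y_star):
    # Condition p(w) = N(mu_w, Sigma_w) on a desired observation y* with
    # accuracy Sigma_y* at one time step (Eqs. 5 and 6).
    S = Sigma_y_star + Psi_T_t @ Sigma_w @ Psi_T_t.T   # innovation covariance
    G = Sigma_w @ Psi_T_t.T @ np.linalg.inv(S)         # gain, Kalman-style
    mu_new = mu_w + G @ (y_star - Psi_T_t @ mu_w)
    Sigma_new = Sigma_w - G @ Psi_T_t @ Sigma_w
    return mu_new, Sigma_new
```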
Combination and Blending of Movement Primitives. Another beneficial probabilistic operation is to continuously combine and blend different MPs into a single movement. Suppose that we maintain a set of i different primitives that we want to combine. We can co-activate them by taking the product of distributions, i.e., p_{new}(\tau) \propto \prod_i p_i(\tau)^{\alpha^{[i]}}, where the \alpha^{[i]} \in [0, 1] factors denote the activation of the i-th primitive. This product captures the overlapping region of the active MPs, i.e., the part of the trajectory space where all MPs have high probability mass.

However, we also want to be able to modulate the activations of the primitives, for example, to continuously blend the movement execution from one primitive to the next. Hence, we decompose the trajectory into single time steps and use time-varying activation functions \alpha_t^{[i]}, i.e.,

    p^*(\tau) \propto \prod_t \prod_i p_i(y_t)^{\alpha_t^{[i]}},    p_i(y_t) = \int p_i(y_t | w^{[i]}) p_i(w^{[i]}) dw^{[i]}.    (7)

For Gaussian distributions p_i(y_t) = N(y_t | \mu_t^{[i]}, \Sigma_t^{[i]}), the resulting distribution p^*(y_t) is again Gaussian with variance and mean

    \Sigma_t^* = ( \sum_i \alpha_t^{[i]} (\Sigma_t^{[i]})^{-1} )^{-1},    \mu_t^* = \Sigma_t^* \sum_i \alpha_t^{[i]} (\Sigma_t^{[i]})^{-1} \mu_t^{[i]}.    (8)

Both terms, and their derivatives, are required to obtain the stochastic feedback controller which is finally used to control the robot. We illustrated the co-activation of two ProMPs in Figure 1(b) and the blending of two ProMPs in Figure 1(c).
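The per-time-step product of Eq. (8) is a standard product of Gaussians in information form; a minimal sketch (our own illustration):

```python
import numpy as np

def blend_step(mus, Sigmas, alphas):
    # Combine primitive marginals N(mu_i, Sigma_i) at one time step with
    # activations alpha_i, following Eq. (8).
    precision = sum(a * np.linalg.inv(S) for a, S in zip(alphas, Sigmas))
    Sigma_star = np.linalg.inv(precision)
    mu_star = Sigma_star @ sum(a * np.linalg.inv(S) @ m
                               for a, m, S in zip(alphas, mus, Sigmas))
    return mu_star, Sigma_star
```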
[Figure 1: three panels, (a) Conditioning, (b) Combination, (c) Blending, each plotting joint angle q [rad] over time [s], with legends Demonstration 1, Demonstration 2, and Combination/Blending; see the caption below.]
Figure 1: (a) Conditioning on different target states. The blue shaded area represents the learned trajectory distribution. We condition on different target positions, indicated by the 'x'-markers. The produced trajectories exactly reach the desired targets while keeping the shape of the demonstrations. (b) Combination of two ProMPs. The trajectory distributions are indicated by the blue and red shaded areas. Both primitives have to reach via-points at different points in time, indicated by the 'x'-markers. We co-activate both primitives with the same activation factor. The trajectory distribution generated by the resulting feedback controller now goes through all four via-points. (c) Blending of two ProMPs. We smoothly blend from the red primitive to the blue primitive. The activation factors are shown in the bottom. The resulting movement (green) first follows the red primitive and, subsequently, switches to following the blue primitive.
2.3 Using Trajectory Distributions for Robot Control

In order to fully exploit the properties of trajectory distributions, a policy for controlling the robot is needed that reproduces these distributions. To this effect, we analytically derive a stochastic feedback controller that can accurately reproduce the mean vectors \mu_t and the variances \Sigma_t for all t of a given trajectory distribution.
We follow a model-based approach. First, we approximate the continuous time dynamics of the system by a linearized discrete-time system with step duration dt,

    y_{t+dt} = (I + A_t dt) y_t + B_t dt u + c_t dt,    (9)

where the system matrices A_t, the input matrices B_t, and the drift vectors c_t can be obtained by first order Taylor expansion of the dynamical system (footnote 1). We assume a stochastic linear feedback controller with time-varying feedback gains is generating the control actions, i.e.,

    u = K_t y_t + k_t + \epsilon_u,    \epsilon_u ~ N(\epsilon_u | 0, \Sigma_u / dt),    (10)

where the matrix K_t denotes a feedback gain matrix and k_t a feed-forward component. We use a control noise which behaves like a Wiener process [21], and, hence, its variance grows linearly with the step duration dt (footnote 2). By substituting Eq. (10) into Eq. (9), we rewrite the next state of the system as

    y_{t+dt} = (I + (A_t + B_t K_t) dt) y_t + B_t dt (k_t + \epsilon_u) + c_t dt = F_t y_t + f_t + B_t dt \epsilon_u,
    with F_t = (I + (A_t + B_t K_t) dt),    f_t = B_t k_t dt + c_t dt.    (11)
For improved clarity, we will omit the time-index as subscript for most matrices in the remainder
of the paper. From Eq. (4) we know that the distribution for our current state y_t is Gaussian with mean \mu_t = \Psi_t^T \mu_w and covariance \Sigma_t = \Psi_t^T \Sigma_w \Psi_t (footnote 3). As the system dynamics are modeled by a Gaussian linear model, we can obtain the distribution of the next state p(y_{t+dt}) analytically from the forward model

    p(y_{t+dt}) = \int N(y_{t+dt} | F y_t + f, \Sigma_s dt) N(y_t | \mu_t, \Sigma_t) dy_t = N( y_{t+dt} | F \mu_t + f, F \Sigma_t F^T + \Sigma_s dt ),    (12)
Footnote 1: If inverse dynamics control [20] is used for the robot, the system reduces to a linear system where the terms A_t, B_t and c_t are constant in time.
Footnote 2: As we multiply the noise by B dt, we need to divide the covariance \Sigma_u of the control noise \epsilon_u by dt to obtain this desired behavior.
Footnote 3: The observation noise is omitted as it represents independent noise which is not used for predicting the next state.
where \Sigma_s dt = B \Sigma_u B^T dt represents the system noise matrix. Both sides of Eq. (12) are Gaussian distributions, where the left-hand side can also be computed from our desired trajectory distribution p(\tau; \theta). We match the mean and the variances of both sides with our control law, i.e.,

    \Sigma_{t+dt} = F \Sigma_t F^T + \Sigma_s dt,    \mu_{t+dt} = F \mu_t + (B k + c) dt,    (13)

where F is given in Eq. (11) and contains the time-varying feedback gains K. Using both constraints, we can now obtain the time-dependent gains K and k.
Derivation of the Controller Gains. By rearranging terms, the covariance constraint becomes

    \Sigma_{t+dt} - \Sigma_t = \Sigma_s dt + (A + BK) \Sigma_t dt + \Sigma_t (A + BK)^T dt + O(dt^2),    (14)

where O(dt^2) denotes all second order terms in dt. After dividing by dt and taking the limit of dt -> 0, the second order terms disappear and we obtain the time derivative of the covariance

    \dot{\Sigma}_t = \lim_{dt -> 0} ( \Sigma_{t+dt} - \Sigma_t ) / dt = (A + BK) \Sigma_t + \Sigma_t (A + BK)^T + \Sigma_s.    (15)
The matrix \dot{\Sigma}_t can also be obtained from the trajectory distribution, \dot{\Sigma}_t = \dot{\Psi}_t^T \Sigma_w \Psi_t + \Psi_t^T \Sigma_w \dot{\Psi}_t, which we substitute into Eq. (15). After rearranging terms, the equation reads

    M + M^T = BK \Sigma_t + (BK \Sigma_t)^T,    with M = \dot{\Psi}_t^T \Sigma_w \Psi_t - A \Sigma_t - \Sigma_s / 2.    (16)

Setting M = BK \Sigma_t and solving for the gain matrix K,

    K = B^{\dagger} ( \dot{\Psi}_t^T \Sigma_w \Psi_t - A \Sigma_t - \Sigma_s / 2 ) \Sigma_t^{-1},    (17)

yields the solution, where B^{\dagger} denotes the pseudo-inverse of the control matrix B.
Derivation of the Feed-Forward Controls. Similarly, we obtain the feed-forward control signal k by matching the mean of the trajectory distribution \mu_{t+dt} with the mean computed with the forward model. After rearranging terms, dividing by dt, and taking the limit of dt -> 0, we arrive at the continuous time constraint for the vector k,

    \dot{\mu}_t = (A + BK) \mu_t + B k + c.    (18)

We can again use the trajectory distribution p(\tau; \theta) to obtain \mu_t = \Psi_t^T \mu_w and \dot{\mu}_t = \dot{\Psi}_t^T \mu_w, and solve Eq. (18) for k,

    k = B^{\dagger} ( \dot{\mu}_t - (A + BK) \mu_t - c ).    (19)
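For completeness, a sketch of Eqs. (17) and (19) in code (our own illustration, assuming the system matrices A, B, c, the system noise \Sigma_s, and the basis matrices at time t are given; `Psi_T` and `Psi_dot_T` denote \Psi_t^T and \dot{\Psi}_t^T):

```python
import numpy as np

def promp_gains(mu_w, Sigma_w, Psi_T, Psi_dot_T, A, B, c, Sigma_s):
    # Feedback gain K (Eq. 17) and feed-forward term k (Eq. 19) that
    # reproduce the trajectory distribution at one time step.
    B_pinv = np.linalg.pinv(B)
    Sigma_t = Psi_T @ Sigma_w @ Psi_T.T
    M = Psi_dot_T @ Sigma_w @ Psi_T.T - A @ Sigma_t - 0.5 * Sigma_s
    K = B_pinv @ M @ np.linalg.inv(Sigma_t)
    mu_t, mu_dot_t = Psi_T @ mu_w, Psi_dot_T @ mu_w
    k = B_pinv @ (mu_dot_t - (A + B @ K) @ mu_t - c)
    return K, k
```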
Estimation of the Control Noise. In order to match a trajectory distribution, we also need to match the control noise matrix \Sigma_u which has been applied to generate the distribution. We first compute the system noise covariance \Sigma_s = B \Sigma_u B^T by examining the cross-correlation between time steps of the trajectory distribution. To do so, we compute the joint distribution p(y_t, y_{t+dt}) of the current state y_t and the next state y_{t+dt},

    p(y_t, y_{t+dt}) = N( [y_t; y_{t+dt}] | [\mu_t; \mu_{t+dt}], [\Sigma_t, C_t; C_t^T, \Sigma_{t+dt}] ),    (20)

where C_t = \Psi_t^T \Sigma_w \Psi_{t+dt} is the cross-correlation. We can again use our model to match the cross-correlation. The joint distribution for y_t and y_{t+dt} is obtained from our system dynamics by p(y_t, y_{t+dt}) = N(y_t | \mu_t, \Sigma_t) N(y_{t+dt} | F y_t + f, \Sigma_s dt), which yields

    p(y_t, y_{t+dt}) = N( [y_t; y_{t+dt}] | [\mu_t; F \mu_t + f], [\Sigma_t, \Sigma_t F^T; F \Sigma_t, F \Sigma_t F^T + \Sigma_s dt] ).    (21)

The noise covariance \Sigma_s can be obtained by matching both covariance matrices given in Eq. (20) and (21),

    \Sigma_s dt = \Sigma_{t+dt} - F \Sigma_t F^T = \Sigma_{t+dt} - F \Sigma_t \Sigma_t^{-1} \Sigma_t F^T = \Sigma_{t+dt} - C_t^T \Sigma_t^{-1} C_t.    (22)

The variance \Sigma_u of the control noise is then given by \Sigma_u = B^{\dagger} \Sigma_s B^{\dagger T}. As we can see from Eq. (22), the variance of our stochastic feedback controller does not depend on the controller gains and can be pre-computed before estimating the controller gains.
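The noise matching of Eq. (22) is likewise a few lines per time step; a sketch (our own illustration):

```python
import numpy as np

def estimate_control_noise(Sigma_t, Sigma_t_next, C_t, B, dt):
    # System noise from the cross-correlation (Eq. 22) and the induced
    # control noise Sigma_u = B^+ Sigma_s (B^+)^T.
    Sigma_s = (Sigma_t_next - C_t.T @ np.linalg.solve(Sigma_t, C_t)) / dt
    B_pinv = np.linalg.pinv(B)
    return Sigma_s, B_pinv @ Sigma_s @ B_pinv.T
```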
[Figure 2: robot postures at t = 0s, 0.25s, 0.5s, 0.75s, and 1.0s; x-axis and y-axis in meters. See the caption below.]
Figure 2: A 7-link planar robot has to reach a target position at T = 1.0s with its end-effector while passing a via-point at t1 = 0.25s (top) or t2 = 0.75s (middle). The plot shows the mean posture of the robot at different time steps in black and samples generated by the ProMP in gray. The ProMP approach was able to exactly reproduce the demonstrations, which have been generated by an optimal control law. The combination of both learned ProMPs is shown in the bottom. The resulting movement reached both via-points with high accuracy.
Figure 3: Robot Hockey. The robot shoots a hockey puck. We demonstrate ten straight shots for varying distances and ten shots for varying angles. The pictures show samples from the ProMP model for straight shots (b) and angled shots (c). Learning from the combined data set yields a model that represents variance in both distance and angle (d). Multiplying the individual models leads to a model that only reproduces shots where both models had probability mass, in the center at medium distance (e). The last picture shows the effect of conditioning on only left and right angles (f).
3 Experiments
We evaluated our approach on two different real robot tasks, one stroke-based movement and one rhythmic movement. Additionally, we illustrate our approach on a 7-link simulated planar robot.
For all real robot experiments we use a seven degrees of freedom KUKA lightweight robot arm. A
more detailed description of the experiments is given in the supplementary material.
7-link Reaching Task. In this task, a seven link planar robot has to reach a target position in
end-effector space. While doing so, it also has to reach a via-point at a certain time point. We
generated the demonstrations for learning the MPs with an optimal control law [22]. In the first set of
demonstrations, the robot has to reach the via-point at t1 = 0.25s. The reproduced behavior with the
ProMPs is illustrated in Figure 2(top). We learned the coupling of all seven joints with one ProMP.
The ProMP exactly reproduced the via-points in task space while exhibiting a large variability in
between the time points of the via-points. Moreover, the ProMP could also reproduce the coupling
of the joints from the optimal control law which can be seen by the small variance of the end-effector
in comparison to the rather large variance of the single joints at the via-points. The ProMP could
achieve an average cost value of a similar quality as the optimal controller. We also used a second set
of demonstrations where the first via-point was located at time step t2 = 0.75, which is illustrated
in Figure 2(middle). We combined the ProMPs learned from both demonstrations, which resulted
in the movement illustrated in Figure 2(bottom). The combination of both MPs accurately reaches
both via-points at t1 = 0.25 and t2 = 0.75.
[Figure 4: (a) photograph of the maracas task; (b) desired vs. feedback-controller trajectory distribution, q [rad] over time [s]; (c) blending between Demonstration 1, Demonstration 2, and the Combination, q [rad] over time [s]. See the caption below.]
Figure 4: (a) The maracas task. (b) Trajectory distribution for playing maracas (joint number 4). By modulating the speed of the phase signal z_t, the speed of the movement can be adapted. The plot shows the desired distribution in blue and the generated distribution from the feedback controller in green. Both distributions match. (c) Blending between two rhythmic movements (blue and red shaded areas) for playing maracas. The green shaded area is produced by continuously switching from the blue to the red movement.
Robot Hockey. In the hockey task, the robot has to shoot a hockey puck in different directions and
distances. The task setup can be seen in Figure 3(a). We record two different sets of demonstrations,
one that contains straight shots with varying distances while the second set contains shots with a
varying shooting angle. Both data sets contain ten demonstrations each. Sampling from the two
models generated by the different data sets yields shots that exhibit the demonstrated variance in
either angle or distance, as shown in Figure 3(b) and 3(c). When combining the two individual
primitives, the resulting model shoots only in the center at medium distance, i.e., the intersection
of both MPs. We also learn a joint distribution over the final puck position and the weight vectors
w and condition on the angle of the shot. The conditioning yields a model that shoots in different
directions, depending on the conditioning, see Figure 3(f).
Robot Maracas. A maracas is a musical instrument containing grains, such that shaking it produces sounds. Demonstrating fast movements can be difficult on the robot arm, due to the inertia
of the arm. Instead, we demonstrate a slower movement of ten periods to learn the motion. We
use this slow demonstration and change the phase after learning the model to achieve a shaking
movement of appropriate speed to generate the desired sound of the instrument. Using a variable
phase also allows us to change the speed of the motion during one execution to achieve different
sound patterns. We show an example movement of the robot in Figure 4(a). The desired trajectory
distribution of the rhythmic movement and the resulting distribution generated from the feedback
controller are shown in Figure 4(b). Both distributions match. We also demonstrated a second type
of rhythmic shaking movement which we use to continuously blend between both movements to
produce different sounds. One such transition between the two ProMPs is shown for one joint in
Figure 4(c).
4 Conclusion
Probabilistic movement primitives are a promising approach for learning, modulating, and re-using
movements in a modular control architecture. To effectively take advantage of such a control architecture, ProMPs support simultaneous activation, match the quality of the encoded behavior from the
demonstrations, are able to adapt to different desired target positions, and efficiently learn by imitation. We parametrize the desired trajectory distribution of the primitive by a Hierarchical Bayesian
Model with Gaussian distributions. The trajectory distribution can be easily obtained from demonstrations. Our probabilistic formulation allows for new operations for movement primitives, including conditioning and combination of primitives. Future work will focus on using the ProMPs in a
modular control architecture and improving upon imitation learning by reinforcement learning.
Acknowledgements
The research leading to these results has received funding from the European Community's Framework Programme CoDyCo (FP7-ICT-2011-9 Grant No. 600716), CompLACS (FP7-ICT-2009-6 Grant No. 270327), and GeRT (FP7-ICT-2009-4 Grant No. 248273).
References
[1] A. Ijspeert and S. Schaal. Learning Attractor Landscapes for Learning Motor Primitives. In Advances in
Neural Information Processing Systems 15, (NIPS). MIT Press, Cambridge, MA, 2003.
[2] M. Khansari-Zadeh and A. Billard. Learning Stable Non-Linear Dynamical Systems with Gaussian Mixture Models. IEEE Transactions on Robotics, 2011.
[3] J. Kober, K. Mülling, O. Kroemer, C. Lampert, B. Schölkopf, and J. Peters. Movement Templates for
Learning of Hitting and Batting. In International Conference on Robotics and Automation (ICRA), 2010.
[4] J. Kober and J. Peters. Policy Search for Motor Primitives in Robotics. Machine Learning, pages 1-33, 2010.
[5] A. Ude, A. Gams, T. Asfour, and J. Morimoto. Task-Specific Generalization of Discrete and Periodic
Dynamic Movement Primitives. Trans. Rob., (5), October 2010.
[6] B. da Silva, G. Konidaris, and A. Barto. Learning Parameterized Skills. In International Conference on
Machine Learning, 2012.
[7] P. Kormushev, S. Calinon, and D. Caldwell. Robot Motor Skill Coordination with EM-based Reinforcement Learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2010.
[8] C. Daniel, G. Neumann, and J. Peters. Learning Concurrent Motor Skills in Versatile Solution Spaces. In
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
[9] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto. Robot Learning from Demonstration by Constructing Skill Trees. International Journal of Robotics Research, 31(3):360-375, March 2012.
[10] A. d'Avella and E. Bizzi. Shared and Specific Muscle Synergies in Natural Motor Behaviors. Proceedings of the National Academy of Sciences (PNAS), 102(3):3076-3081, 2005.
[11] B. Williams, M. Toussaint, and A. Storkey. Modelling Motion Primitives and their Timing in Biologically Executed Movements. In Advances in Neural Information Processing Systems (NIPS), 2007.
[12] L. Rozo, S. Calinon, D. G. Caldwell, P. Jimenez, and C. Torras. Learning Collaborative Impedance-Based
Robot Behaviors. In AAAI Conference on Artificial Intelligence, 2013.
[13] E. Rueckert, G. Neumann, M. Toussaint, and W. Maass. Learned Graphical Models for Probabilistic
Planning provide a new Class of Movement Primitives. 2012.
[14] L. Righetti and A. Ijspeert. Programmable central pattern generators: an application to biped locomotion
control. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 2006.
[15] A. Paraschos, G. Neumann, and J. Peters. A probabilistic approach to robot trajectory generation. In
Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2013.
[16] S. Calinon, P. Kormushev, and D. Caldwell. Compliant Skills Acquisition and Multi-Optima Policy
Search with EM-based Reinforcement Learning. Robotics and Autonomous Systems (RAS), 61(4):369-379, 2013.
[17] E. Todorov and M. Jordan. Optimal Feedback Control as a Theory of Motor Coordination. Nature
Neuroscience, 5:1226-1235, 2002.
[18] S. Schaal, J. Peters, J. Nakanishi, and A. Ijspeert. Learning Movement Primitives. In International
Symposium on Robotics Research, (ISRR), 2003.
[19] A. Lazaric and M. Ghavamzadeh. Bayesian Multi-Task Reinforcement Learning. In Proceedings of the
27th International Conference on Machine Learning (ICML), 2010.
[20] J. Peters, M. Mistry, F. E. Udwadia, J. Nakanishi, and S. Schaal. A Unifying Methodology for Robot
Control with Redundant DOFs. Autonomous Robots, (1):1-12, 2008.
[21] H. Stark and J. Woods. Probability and Random Processes with Applications to Signal Processing. 3rd edition, August 2001.
[22] M. Toussaint. Robot Trajectory Optimization using Approximate Inference. In Proceedings of the 26th
International Conference on Machine Learning, (ICML), 2009.
4,617 | 5,178 | Variational Policy Search via Trajectory Optimization
Sergey Levine
Stanford University
svlevine@cs.stanford.edu

Vladlen Koltun
Stanford University and Adobe Research
vladlen@cs.stanford.edu
Abstract
In order to learn effective control policies for dynamical systems, policy search
methods must be able to discover successful executions of the desired task.
While random exploration can work well in simple domains, complex and highdimensional tasks present a serious challenge, particularly when combined with
high-dimensional policies that make parameter-space exploration infeasible. We
present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum
likelihood policy objective allows us to use standard trajectory optimization algorithms such as differential dynamic programming, interleaved with standard
supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks.
1 Introduction
Direct policy search methods have the potential to scale gracefully to complex, high-dimensional
control tasks [12]. However, their effectiveness depends on discovering successful executions of the
desired task, usually through random exploration. As the dimensionality and complexity of a task
increases, random exploration can prove inadequate, resulting in poor local optima. We propose to
decouple policy optimization from exploration by using a variational decomposition of a maximum
likelihood policy objective. In our method, exploration is performed by a model-based trajectory
optimization algorithm that is not constrained by the policy parameterization, but attempts to minimize both the cost and the deviation from the current policy, while the policy is simply optimized to
match the resulting trajectory distribution. Since direct model-based trajectory optimization is usually much easier than policy search, this method can discover low cost regions much more easily.
Intuitively, the trajectory optimization 'guides' the policy search toward regions of low cost.
The trajectory optimization can be performed by a variant of the differential dynamic programming
algorithm [4], and the policy is optimized with respect to a standard maximum likelihood objective.
We show that this alternating optimization maximizes a well-defined policy objective, and demonstrate experimentally that it can learn complex tasks in high-dimensional domains that are infeasible
for methods that rely on random exploration. Our evaluation shows that the proposed algorithm
produces good results on two challenging locomotion problems, outperforming prior methods.
2 Preliminaries
In standard policy search, we seek to find a distribution over actions u_t in each state x_t, denoted \pi_\theta(u_t | x_t), so as to minimize the sum of expected costs E[c(\tau)] = E[ \sum_{t=1}^T c(x_t, u_t) ], where \tau is a sequence of states and actions. The expectation is taken with respect to the system dynamics p(x_{t+1} | x_t, u_t) and the policy \pi_\theta(u_t | x_t), which is typically parameterized by a vector \theta.

An alternative to this standard formulation is to convert the task into an inference problem, by introducing a binary random variable O_t at each time step that serves as the indicator for 'optimality.'
We follow prior work and define the probability of O_t as p(O_t = 1 | x_t, u_t) \propto \exp( -c(x_t, u_t) ) [19]. Using the dynamics distribution p(x_{t+1} | x_t, u_t) and the policy \pi_\theta(u_t | x_t), we can define a dynamic Bayesian network that relates states, actions, and the optimality indicator. By setting O_t = 1 at all time steps and learning the maximum likelihood values for \theta, we can perform policy optimization [20]. The corresponding optimization problem has the objective

    p(O | \theta) = \int p(O | \tau) p(\tau | \theta) d\tau \propto \int \exp( -\sum_{t=1}^T c(x_t, u_t) ) p(x_1) \prod_{t=1}^T \pi_\theta(u_t | x_t) p(x_{t+1} | x_t, u_t) d\tau.    (1)
Although this objective differs from the classical minimum average cost objective, previous work
showed that it is nonetheless useful for policy optimization and planning [20, 19]. In Section 5, we
discuss how this objective relates to the classical objective in more detail.
3 Variational Policy Search
Following prior work [11], we can decompose \log p(O | \theta) by using a variational distribution q(\tau):

    \log p(O | \theta) = L(q, \theta) + D_{KL}( q(\tau) || p(\tau | O, \theta) ),

where the variational lower bound L is given by

    L(q, \theta) = \int q(\tau) \log \frac{p(O | \tau) p(\tau | \theta)}{q(\tau)} d\tau,

and the second term is the Kullback-Leibler (KL) divergence

    D_{KL}( q(\tau) || p(\tau | O, \theta) ) = - \int q(\tau) \log \frac{p(\tau | O, \theta)}{q(\tau)} d\tau = - \int q(\tau) \log \frac{p(O | \tau) p(\tau | \theta)}{q(\tau) p(O | \theta)} d\tau.    (2)
We can then optimize the maximum likelihood objective in Equation 1 by iteratively minimizing the KL divergence with respect to q(\tau) and maximizing the bound L(q, \theta) with respect to \theta. This is the standard formulation for expectation maximization [9], and has been applied to policy optimization in previous work [8, 21, 3, 11]. However, prior policy optimization methods typically represent q(\tau) by sampling trajectories from the current policy \pi_\theta(u_t | x_t) and reweighting them, for example by the exponential of their cost. While this can improve policies that already visit regions of low cost, it relies on random policy-driven exploration to discover those low cost regions. We propose instead to directly optimize q(\tau) to minimize both its expected cost and its divergence from the current policy \pi_\theta(u_t | x_t) when a model of the dynamics is available. In the next section, we show that, for a Gaussian distribution q(\tau), the KL divergence in Equation 2 can be minimized by a variant of the differential dynamic programming (DDP) algorithm [4].
4 Trajectory Optimization
DDP is a trajectory optimization algorithm based on Newton's method [4]. We build off of a variant of DDP called iterative LQR, which linearizes the dynamics around the current trajectory, computes the optimal linear policy under linear-quadratic assumptions, executes this policy, and repeats the process around the new trajectory until convergence [17]. We show how this procedure can be used to minimize the KL divergence in Equation 2 when q(\tau) is a Gaussian distribution over trajectories.
This derivation follows previous work [10], but is repeated here and expanded for completeness.
Iterative LQR is a dynamic programming algorithm that recursively computes the value function backwards through time. Because of the linear-quadratic assumptions, the value function is always quadratic, and the dynamics are Gaussian with the mean at f(x_t, u_t) and noise \epsilon. Given a trajectory (\hat{x}_1, \hat{u}_1), ..., (\hat{x}_T, \hat{u}_T) and defining \bar{x}_t = x_t - \hat{x}_t and \bar{u}_t = u_t - \hat{u}_t, the dynamics and cost function are then approximated as follows, with subscripts x and u denoting partial derivatives:

    \bar{x}_{t+1} \approx f_{xt} \bar{x}_t + f_{ut} \bar{u}_t
    c(x_t, u_t) \approx \bar{x}_t^T c_{xt} + \bar{u}_t^T c_{ut} + \frac{1}{2} \bar{x}_t^T c_{xxt} \bar{x}_t + \frac{1}{2} \bar{u}_t^T c_{uut} \bar{u}_t + \bar{u}_t^T c_{uxt} \bar{x}_t + c(\hat{x}_t, \hat{u}_t).
Under this approximation, we can recursively compute the Q-function as follows:

    Q_{xxt} = c_{xxt} + f_{xt}^T V_{xx,t+1} f_{xt}        Q_{xt} = c_{xt} + f_{xt}^T V_{x,t+1}
    Q_{uut} = c_{uut} + f_{ut}^T V_{xx,t+1} f_{ut}        Q_{ut} = c_{ut} + f_{ut}^T V_{x,t+1}
    Q_{uxt} = c_{uxt} + f_{ut}^T V_{xx,t+1} f_{xt}

as well as the value function and linear policy terms:

    V_{xt} = Q_{xt} - Q_{uxt}^T Q_{uut}^{-1} Q_{ut}        k_t = -Q_{uut}^{-1} Q_{ut}
    V_{xxt} = Q_{xxt} - Q_{uxt}^T Q_{uut}^{-1} Q_{uxt}     K_t = -Q_{uut}^{-1} Q_{uxt}.

The deterministic optimal policy is then given by

    g(x_t) = \hat{u}_t + k_t + K_t (x_t - \hat{x}_t).
By repeatedly computing the optimal policy around the current trajectory and updating \hat{x}_t and \hat{u}_t based on the new policy, iterative LQR converges to a locally optimal solution [17]. In order to use this algorithm to minimize the KL divergence in Equation 2, we introduce a modified cost function \bar{c}(x_t, u_t) = c(x_t, u_t) - \log \pi_\theta(u_t | x_t). The optimal trajectory for this cost function approximately minimizes the KL divergence when q(\tau) is a Dirac delta function (footnote 1), since

    D_{KL}( q(\tau) || p(\tau | O, \theta) ) = \int q(\tau) [ \sum_{t=1}^T c(x_t, u_t) - \log \pi_\theta(u_t | x_t) - \log p(x_{t+1} | x_t, u_t) ] d\tau + const.
However, we can also obtain a Gaussian q(\tau) by using the framework of linearly solvable MDPs [16] and the closely related concept of maximum entropy control [23]. The optimal policy \pi_G under this framework minimizes an augmented cost function, given by

    \hat{c}(x_t, u_t) = \bar{c}(x_t, u_t) - H(\pi_G),

where H(\pi_G) is the entropy of a stochastic policy \pi_G(u_t | x_t), and \bar{c}(x_t, u_t) includes the \log \pi_\theta(u_t | x_t) term as above. Ziebart [23] showed that the optimal policy can be written as

    \pi_G(u_t | x_t) = \exp( -Q_t(x_t, u_t) + V_t(x_t) ),

where V is a 'softened' value function given by

    V_t(x_t) = -\log \int \exp( -Q_t(x_t, u_t) ) du_t.

Under linear dynamics and quadratic costs, V has the same form as in the LQR derivation above, which means that \pi_G(u_t | x_t) is a linear Gaussian with mean g(x_t) and covariance Q_{uut}^{-1} [10]. Together with the linearized dynamics, the resulting policy specifies a Gaussian distribution over trajectories with Markovian independence:

    q(\tau) = \bar{p}(x_1) \prod_{t=1}^T \pi_G(u_t | x_t) \bar{p}(x_{t+1} | x_t, u_t),

where \pi_G(u_t | x_t) = N(g(x_t), Q_{uut}^{-1}), \bar{p}(x_1) is an initial state distribution, and \bar{p}(x_{t+1} | x_t, u_t) = N(f_{xt} \bar{x}_t + f_{ut} \bar{u}_t + \hat{x}_{t+1}, \Sigma_{ft}) is the linearized dynamics with Gaussian noise \Sigma_{ft}. This distribution also corresponds to a Laplace approximation for p(\tau | O, \theta), which is formed from the exponential of the second order Taylor expansion of \log p(\tau | O, \theta) [15].
Once we compute \pi_G(u_t | x_t) using iterative LQR/DDP, it is straightforward to obtain the marginal distributions q(x_t), which will be useful in the next section for minimizing the variational bound L(q, \theta). Using \mu_t and \Sigma_t to denote the mean and covariance of the marginal at time t and assuming that the initial state distribution at t = 1 is given, the marginals can be computed recursively as

    \mu_{t+1} = [ f_{xt}  f_{ut} ] [ \mu_t ;  \hat{u}_t + k_t + K_t (\mu_t - \hat{x}_t) ]

    \Sigma_{t+1} = [ f_{xt}  f_{ut} ] [ \Sigma_t, \Sigma_t K_t^T ;  K_t \Sigma_t, Q_{uut}^{-1} + K_t \Sigma_t K_t^T ] [ f_{xt}  f_{ut} ]^T + \Sigma_{ft}.
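The recursion above maps directly to code; the following sketch (our own illustration, with hypothetical argument names) propagates the Gaussian marginals forward given the gains and linearization from the backward pass:

```python
import numpy as np

def propagate_marginals(mu1, Sigma1, fx, fu, K, k, x_hat, u_hat, Quu, Sigma_f):
    # Forward propagation of the Gaussian marginals q(x_t) under the
    # time-varying linear Gaussian policy with covariance Quu[t]^{-1}.
    mus, Sigmas = [mu1], [Sigma1]
    for t in range(len(fx)):
        mu, Sigma = mus[-1], Sigmas[-1]
        u_mean = u_hat[t] + k[t] + K[t] @ (mu - x_hat[t])
        F = np.hstack([fx[t], fu[t]])
        cross = Sigma @ K[t].T
        Su = np.linalg.inv(Quu[t]) + K[t] @ Sigma @ K[t].T
        joint_Sigma = np.block([[Sigma, cross], [cross.T, Su]])
        mus.append(F @ np.concatenate([mu, u_mean]))
        Sigmas.append(F @ joint_Sigma @ F.T + Sigma_f[t])
    return mus, Sigmas
```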
Footnote 1: The minimization is not exact if the dynamics p(x_{t+1} | x_t, u_t) are not deterministic, but the result is very close if the dynamics have much lower entropy than the policy and exponentiated cost, which is often the case.
Algorithm 1 Variational Guided Policy Search
1: Initialize q(\tau) using DDP with cost \bar{c}(x_t, u_t) = \alpha_0 c(x_t, u_t)
2: for iteration k = 1 to K do
3:   Compute marginals (\mu_1, \Sigma_1), ..., (\mu_T, \Sigma_T) for q(\tau)
4:   Optimize L(q, \theta) with respect to \theta using standard nonlinear optimization methods
5:   Set \alpha_k based on annealing schedule, for example \alpha_k = \exp( \frac{K-k}{K} \log \alpha_0 + \frac{k}{K} \log \alpha_K )
6:   Optimize q(\tau) using DDP with cost \bar{c}(x_t, u_t) = \alpha_k c(x_t, u_t) - \log \pi_\theta(u_t | x_t)
7: end for
8: Return optimized policy \pi_\theta(u_t | x_t)
8: Return optimized policy ?? (ut |xt )
When the dynamics are nonlinear or the modified cost c?(xt , ut ) is nonquadratic, this solution only
approximates the minimum of the KL divergence. In practice, the approximation is quite good
when the dynamics and the cost c(xt , ut ) are smooth. Unfortunately, the policy term log ?? (ut |xt )
in the modified cost c?(xt , ut ) can be quite jagged early on in the optimization, particularly for
nonlinear policies. To mitigate this issue, we compute the derivatives of the policy not only along
the current trajectory, but also at samples drawn from the current marginals q(xt ), and average them
together. This averages out local perturbations in log ?? (ut |xt ) and improves the approximation.
In Section 8, we discuss more sophisticated techniques that could be used in future work to handle
highly nonlinear dynamics for which this approximation may be inadequate.
5 Variational Guided Policy Search
The variational guided policy search (variational GPS) algorithm alternates between minimizing the
KL divergence in Equation 2 with respect to q(?) as described in the previous section, and maximizing the bound L(q, ?) with respect to the policy parameters ?. Minimizing the KL divergence
reduces the difference between L(q, ?) and log p(O|?), so that the maximization of L(q, ?) becomes
a progressively better approximation for the maximization of log p(O|?). The method is summarized in Algorithm 1. The bound L(q, ?) can be maximized by a variety of standard optimization
methods, such as stochastic gradient descent (SGD) or LBFGS. The gradient is given by
Z
T
M T
X
1 XX
?L(q, ?) = q(?)
? log ?? (ut |xt )d? ?
? log ?? (uit |xit ),
(3)
M
t=1
i=1 t=1
where the samples (x_it, u_it) are drawn from the marginals q(x_t, u_t). When using SGD, new samples can be drawn at every iteration, since sampling from q(x_t, u_t) only requires the precomputed
marginals from the preceding section. Because the marginals are computed using linearized dynamics, we can be assured that the samples will not deviate drastically from the optimized trajectory,
regardless of the true dynamics. The resulting SGD optimization is analogous to a supervised learning task with an infinite training set. When using LBFGS, a new sample set can be generated every n
LBFGS iterations. We found that values of n from 20 to 50 produced good results.
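For illustration, here is a sketch of this sample-based gradient estimate; it assumes the joint state-action marginals q(x_t, u_t) are available as Gaussian means and covariances, and `grad_log_pi` (e.g., from an autodiff library) is a placeholder.

```python
import numpy as np

def sample_gradient(grad_log_pi, mu, Sigma, n_x, M, rng=None):
    """Monte Carlo estimate of the gradient in Eq. 3. mu[t], Sigma[t] are the
    mean and covariance of the joint marginal q(x_t, u_t); n_x is the state
    dimension; grad_log_pi(x, u) returns the gradient of log pi_theta(u|x)
    with respect to theta."""
    rng = np.random.default_rng() if rng is None else rng
    grad = 0.0
    for _ in range(M):
        for t in range(len(mu)):
            z = rng.multivariate_normal(mu[t], Sigma[t])  # sample (x_t, u_t)
            grad = grad + grad_log_pi(z[:n_x], z[n_x:])
    return grad / M
```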
When choosing the policy class, it is common to use deterministic policies with additive Gaussian
noise. In this case, we can optimize the policy more quickly and with many fewer samples by only
sampling states and evaluating the integral over actions analytically. Letting μ^π_xt, Σ^π_xt and μ^q_xt, Σ^q_xt
denote the means and covariances of π_θ(u_t|x_t) and q(u_t|x_t), we can write L(q, θ) as
L(q, θ) ≈ (1/M) Σ_{i=1}^{M} Σ_{t=1}^{T} ∫ q(u_t|x_it) log π_θ(u_t|x_it) du_t + const

       = (1/M) Σ_{i=1}^{M} Σ_{t=1}^{T} [ −(1/2)(μ^π_xit − μ^q_xit)^T (Σ^π_xit)^{-1} (μ^π_xit − μ^q_xit) − (1/2) log |Σ^π_xit| − (1/2) tr((Σ^π_xit)^{-1} Σ^q_xit) ] + const.
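Each summand above is, up to constants, the negative cross-entropy between the two Gaussians q(u_t|x_it) and π_θ(u_t|x_it); a minimal sketch of the per-state term, assuming full covariance matrices as NumPy arrays:

```python
import numpy as np

def cross_entropy_term(mu_pi, Sig_pi, mu_q, Sig_q):
    """Per-state contribution to L(q, theta), up to additive constants:
    -1/2 [ d^T Sig_pi^{-1} d + log|Sig_pi| + tr(Sig_pi^{-1} Sig_q) ],
    with d = mu_pi - mu_q."""
    d = mu_pi - mu_q
    Sig_pi_inv = np.linalg.inv(Sig_pi)
    return -0.5 * (d @ Sig_pi_inv @ d
                   + np.linalg.slogdet(Sig_pi)[1]
                   + np.trace(Sig_pi_inv @ Sig_q))
```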
Two additional details should be taken into account in order to obtain the best results. First, although
model-based trajectory optimization is more powerful than random exploration, complex tasks such
as bipedal locomotion, which we address in the following section, are too difficult to solve entirely
with trajectory optimization. To solve such tasks, we can initialize the procedure from a good initial
trajectory, typically provided by a demonstration. This trajectory is only used for initialization and
need not be reproducible by any policy, since it will be modified by subsequent DDP invocations.
Second, unlike the average cost objective, the maximum likelihood objective is sensitive to the magnitude of the cost. Specifically, the logarithm of Equation 1 corresponds to a soft minimum over all
likely trajectories under the current policy, with the softness of the minimum inversely proportional
to the cost magnitude. As the magnitude increases, this objective scores policies based primarily
on their best-case cost, rather than the average case. As the magnitude decreases, the objective becomes more similar to the classic average cost. Because of this, we found it beneficial to gradually
anneal the cost by multiplying it by α_k at the kth iteration, starting with a high magnitude to favor
aggressive exploration, and ending with a low magnitude to optimize average case performance. In
our experiments, α_k begins at 1 and is reduced exponentially to 0.1 by the 50th iteration.
Since our method produces both a parameterized policy π_θ(u_t|x_t) and a DDP solution π_G(u_t|x_t),
one might wonder why the DDP policy itself is not a suitable controller. The issue is that π_θ(u_t|x_t)
can have an arbitrary parameterization, and admits constraints on available information, stationarity,
etc., while π_G(u_t|x_t) is always a nonstationary linear feedback policy. This has three major advantages: first, only the learned policy may be usable at runtime if the information available at runtime
differs from the information during training, for example if the policy is trained in simulation and
executed on a physical system with limited sensors. Second, if the policy class is chosen carefully,
we might hope that the learned policy would generalize better than the DDP solution, as shown in
previous work [10]. Third, multiple trajectories can be used to train a single policy from different
initial states, creating a single controller that can succeed in a variety of situations.
6 Experimental Evaluation
We evaluated our method on two simulated planar locomotion tasks: swimming and bipedal walking. For both tasks, the policy sets joint torques on a simulated robot consisting of rigid links. The
swimmer has 3 links and 5 degrees of freedom, including the root position, and a 10-dimensional
state space that includes joint velocities. The walker has 7 links, 9 degrees of freedom, and 18
state dimensions. Due to the high dimensionality and nonlinear dynamics, these tasks represent a
significant challenge for direct policy learning. The cost function for the walker was given by
c(x, u) = w_u ‖u‖^2 + (v_x − v_x*)^2 + (p_y − p_y*)^2,
where v_x and v_x* are the current and desired horizontal velocities, p_y and p_y* are the current and
desired heights of the hips, and the torque penalty was set to w_u = 10^{-4}. The swimmer cost
excludes the height term and uses a lower torque penalty of w_u = 10^{-5}. As discussed in the
previous section, the magnitude of the cost was decreased by a factor of 10 during the first 50
iterations, and then remained fixed. Following previous work [10], the trajectory for the walker was
initialized with a demonstration from a hand-crafted locomotion system [22].
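A direct transcription of the walker cost is shown below; the desired values and the indices of the horizontal velocity and hip height within the state vector are illustrative assumptions, since the paper does not specify the state layout.

```python
import numpy as np

def walker_cost(x, u, v_star=1.0, p_star=1.0, w_u=1e-4, vx_idx=0, py_idx=1):
    """c(x, u) = w_u ||u||^2 + (v_x - v_x*)^2 + (p_y - p_y*)^2. The desired
    values v_star, p_star and the state indices are illustrative."""
    return (w_u * np.dot(u, u)
            + (x[vx_idx] - v_star) ** 2
            + (x[py_idx] - p_star) ** 2)
```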
The policy was represented by a neural network with one hidden layer and a soft rectifying nonlinearity of the form a = log(1 + exp(z)), with Gaussian noise at the output. Both the weights of the
neural network and the diagonal covariance of the output noise were learned as part of the policy
optimization. The number of policy parameters ranged from 63 for the 5-unit swimmer to 246 for
the 10-unit walker. Due to its complexity and nonlinearity, this policy class presents a challenge to
traditional policy search algorithms, which often focus on compact, linear policies [8].
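For concreteness, such a policy can be written in a few lines; the single-matrix layer parameterization and the learned log-standard-deviation below are our assumptions, not details given in the paper.

```python
import numpy as np

def softplus(z):
    # Soft rectifier a = log(1 + exp(z)), computed stably.
    return np.logaddexp(0.0, z)

def gaussian_policy(x, W1, b1, W2, b2, log_std):
    """One-hidden-layer policy: mean = W2 softplus(W1 x + b1) + b2, with
    learned diagonal Gaussian output noise of standard deviation exp(log_std)."""
    mean = W2 @ softplus(W1 @ x + b1) + b2
    return mean, np.exp(log_std)   # parameters of pi_theta(u|x)
```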
Figure 1 shows the average cost of the learned policies on each task, along with visualizations of
the swimmer and walker. Methods that sample from the current policy use 10 samples per iteration,
unless noted otherwise. To ensure a fair comparison, the vertical axis shows the average cost E[c(τ)]
rather than the maximum likelihood objective log p(O|θ). The cost was evaluated for both the
actual stochastic policy (solid line), and a deterministic policy obtained by setting the variance of
the Gaussian noise to zero (dashed line). Each plot also shows the cost of the initial DDP solution.
Policies with costs significantly above this amount do not succeed at the task, either falling in the
case of the walker, or failing to make forward progress in the case of the swimmer. Our method
learned successful policies for each task, and often converged faster than previous methods, though
performance during early iterations was often poor. We believe this is because the variational bound
L(q, θ) does not become a good proxy for log p(O|θ) until after several invocations of DDP, at which
point the algorithm is able to rapidly improve the policy.
[Figure 1 plots: average cost vs. iteration for four panels ("swimmer, 5 hidden units", "swimmer, 10 hidden units", "walker, 5 hidden units", "walker, 10 hidden units"); legend: DDP solution, variational GPS, GPS, adapted GPS, cost-weighted, cost-weighted 1000, DAGGER, weighted DAGGER, adapted DAGGER.]
Figure 1: Comparison of variational guided policy search (VGPS) with prior methods. The average
cost of the stochastic policy is shown with a solid line, and the average cost of the deterministic
policy without Gaussian noise is shown with a dashed line. The bottom-right panel shows plots of
the swimmer and walker, with the center of mass trajectory under the learned policy shown in blue,
and the initial DDP solution shown in black.
The first method we compare to is guided policy search (GPS), which uses importance sampling to
introduce samples from the DDP solution into a likelihood ratio policy search [10]. The GPS algorithm first draws a fixed number of samples from the DDP solution, and then adds on-policy samples
at each iteration. Like our method, GPS uses DDP to explore regions of low cost, but the policy optimization is done using importance sampling, which can be susceptible to degenerate weights in
high dimensions. Since standard GPS only samples from the initial DDP solution, these samples
are only useful if they can be reproduced by the policy class. Otherwise, GPS must rely on random
exploration to improve the solution. On the easier swimmer task, the GPS policy can reproduce the
initial trajectory and succeeds immediately. However, GPS is unable to find a successful walking
policy with only 5 hidden units, which requires modifications to the initial trajectory. In addition, although the deterministic GPS policy performs well on the walker with 10 hidden units, the stochastic
policy fails more often. This suggests that the GPS optimization is not learning a good variance for
the Gaussian policy, possibly because the normalized importance sampled estimator places greater
emphasis on the relative probability of the samples than their absolute probability.
The adaptive variant of GPS runs DDP at every iteration and adapts to the current policy, in the same
manner as our method. However, samples from this adapted DDP solution are then included in the
policy optimization with importance sampling, while our approach optimizes the variational bound
L(q, θ). In the GPS estimator, each sample τ_i is weighted by an importance weight dependent
on π_θ(τ_i), while the samples in our optimization are not weighted. When a sample has a low
probability under the current policy, it is ignored by the importance sampled optimizer. Because of
this, although the adaptive variant of GPS improves on the standard variant, it is still unable to learn
a walking policy with 5 hidden units, while our method quickly discovers an effective policy.
We also compared to an imitation learning method called DAGGER. DAGGER aims to learn a policy that imitates an oracle [14], which in our case is the DDP solution. At each iteration, DAGGER
adds samples from the current policy to a dataset, and then optimizes the policy to take the oracle
action at each dataset state. While adjusting the current policy to match the DDP solution may appear similar to our approach, we found that DAGGER performed poorly on these tasks, since the
on-policy samples initially visited states that were very far from the DDP solution, and therefore
the DDP action at these states was large and highly suboptimal. To reduce the impact of these
poor states, we implemented a variant of DAGGER which weighted the samples by their probability
under the DDP marginals. This variant succeeded on the swimming tasks and eventually found a
good deterministic policy for the walker with 10 hidden units, though the learned stochastic policy
performed very poorly. We also implemented an adapted variant, where the DDP solution is reoptimized at each iteration to match the policy (in addition to weighting), but this variant performed
worse. Unlike DAGGER, our method samples from a Gaussian distribution around the current DDP
solution, ensuring that all samples are drawn from good parts of the state space. Because of this, our
method is much less sensitive to poor or unstable initial policies.
Finally, we compare to an alternative variational policy search algorithm analogous to PoWER [8].
Although PoWER requires a linear policy parameterization and a specific exploration strategy, we
can construct an analogous non-linear algorithm by replacing the analytic M-step with nonlinear
optimization, as in our method. This algorithm is identical to ours, except that instead of using DDP
to optimize q(τ), the variational distribution is formed by taking samples from the current policy and
reweighting them by the exponential of their cost. We call this method "cost-weighted." The policy
is still initialized with supervised training to resemble the initial DDP solution, but otherwise this
method does not benefit from trajectory optimization and relies entirely on random exploration. This
kind of exploration is generally inadequate for such complex tasks. Even if the number of samples
per iteration is increased to 10^3 (denoted as "cost-weighted 1000"), this method still fails to solve
the harder walking task, suggesting that simply taking more random samples is not the solution.
These results show that our algorithm outperforms prior methods because of two advantages: we use
a model-based trajectory optimization algorithm instead of random exploration, which allows us to
outperform model-free methods such as the "cost-weighted" PoWER analog, and we decompose the
policy search into two simple optimization problems that can each be solved efficiently by standard
algorithms, which leaves us less vulnerable to local optima than more complex methods like GPS.
7 Previous Work
In optimizing a maximum likelihood objective, our method builds on previous work that frames
control as inference [20, 19, 13]. Such methods often redefine optimality in terms of a log evidence
probability, as in Equation 1. Although this definition differs from the classical expected return, our
evaluation suggests that policies optimized with respect to this measure also exhibit a good average
return. As we discuss in Section 5, this objective is risk seeking when the cost magnitude is high, and
annealing can be used to gradually transition from an objective that favors aggressive exploration
to one that resembles the average return. Other authors have also proposed alternative definitions
of optimality that include appealing properties like maximization of entropy [23] or computational
benefits [16]. However, our work is the first to our knowledge to show how trajectory optimization
can be used to guide policy learning within the control-as-inference framework.
Our variational decomposition follows prior work on policy search with variational inference [3, 11]
and expectation maximization [8, 21]. Unlike these methods, our approach aims to find a variational
distribution q(τ) that is best suited for control and leverages a known dynamics model. We present an
interpretation of the KL divergence minimization in Equation 2 as model-based exploration, which
can be performed with a variant of DDP. As shown in our evaluation, this provides our method
with a significant advantage over methods that rely on model-free random exploration, though at the
cost of requiring a differentiable model of the dynamics. Interestingly, our algorithm never requires
samples to be drawn from the current policy. This can be an advantage in applications where running
an unstable, incompletely optimized policy can be costly or dangerous.
Our use of DDP to guide the policy search parallels our previous Guided Policy Search (GPS)
algorithm [10]. Unlike the proposed method, GPS incorporates samples from DDP directly into
an importance-sampled estimator of the return. These samples are therefore only useful when the
policy class can reproduce them effectively. As shown in the evaluation of the walker with 5 hidden
units, GPS may be unable to discover a good policy when the policy class cannot reproduce the
initial DDP solution. Adaptive GPS addresses this issue by reoptimizing the trajectory to resemble
the current policy, but the policy is still optimized with respect to an importance-sampled return
estimate, which leaves it highly prone to local optima, and the theoretical justification for adaptation
is unclear. The proposed method justifies the reoptimization of the trajectory under a variational
framework, and uses standard maximum likelihood in place of the complex importance-sampled
objective.
We also compared our method to DAGGER [14], which uses a general-purpose supervised training
algorithm to train the current policy to match an oracle, which in our case is the DDP solution.
DAGGER matches actions from the oracle policy at states visited by the current policy, under the
assumption that the oracle can provide good actions in all states. This assumption does not hold
for DDP, which is only valid in a narrow region around the trajectory. To mitigate the locality of
the DDP solution, we weighted the samples by their probability under the DDP marginals, which
allowed DAGGER to solve the swimming task, but it was still outperformed by our method on the
walking task, even with adaptation of the DDP solution. Unlike DAGGER, our approach is relatively
insensitive to the instability of the learned policy, since the learned policy is not sampled.
Several prior methods also propose to improve policy search by using a distribution over high-value
states, which might come from a DDP solution [6, 1]. Such methods generally use this "restart"
distribution as a new initial state distribution, and show that optimizing a policy from such a restart
distribution also optimizes the expected return. Unlike our approach, such methods only use the
states from the DDP solution, not the actions, and tend to suffer from the increased variance of the
restart distribution, as shown in previous work [10].
8 Discussion and Future Work
We presented a policy search algorithm that employs a variational decomposition of a maximum
likelihood objective to combine trajectory optimization with policy search. The variational distribution is obtained using differential dynamic programming (DDP), and the policy can be optimized
with a standard nonlinear optimization algorithm. Model-based trajectory optimization effectively
takes the place of random exploration, providing a much more effective means for finding low cost
regions that the policy is then trained to visit. Our evaluation shows that this algorithm outperforms
prior variational methods and prior methods that use trajectory optimization to guide policy search.
Our algorithm has several interesting properties that distinguish it from prior methods. First, the policy search does not need to sample the learned policy. This may be useful in real-world applications
where poor policies might be too risky to run on a physical system. More generally, this property improves the robustness of our method in the face of unstable initial policies, where on-policy
samples have extremely high variance. By sampling directly from the Gaussian marginals of the
DDP-induced distribution over trajectories, our approach also avoids some of the issues associated
with unstable dynamics, requiring only that the task permit effective trajectory optimization.
By optimizing a maximum likelihood objective, our method favors policies with good best-case
performance. Obtaining good best-case performance is often the hardest part of policy search, since
a policy that achieves good results occasionally is easier to improve with standard on-policy search
methods than one that fails outright. However, modifying the algorithm to optimize the standard
average cost criterion could produce more robust controllers in the future.
The use of local linearization in DDP results in only approximate minimization of the KL divergence
in Equation 2 in nonlinear domains or with nonquadratic policies. While we mitigate this by averaging the policy derivatives over multiple samples from the DDP marginals, this approach could still
break down in the presence of highly nonsmooth dynamics or policies. An interesting avenue for
future work is to extend the trajectory optimization method to nonsmooth domains by using samples
rather than linearization, perhaps analogously to the unscented Kalman filter [5, 18]. This could also
avoid the need to differentiate the policy with respect to the inputs, allowing for richer policy classes
to be used. Another interesting avenue for future work is to apply model-free trajectory optimization techniques [7], which would avoid the need for a model of the system dynamics, or to learn the
dynamics from data, for example by using Gaussian processes [2]. It would also be straightforward
to use multiple trajectories optimized from different initial states to learn a single policy that is able
to succeed under a variety of initial conditions.
Overall, we believe that trajectory optimization is a very useful tool for policy search. By separating
the policy optimization and exploration problems into two separate phases, we can employ simpler
algorithms such as SGD and DDP that are better suited for each phase, and can achieve superior
performance on complex tasks. We believe that additional research into augmenting policy learning
with trajectory optimization can further advance the performance of policy search techniques.
Acknowledgments
We thank Emanuel Todorov, Tom Erez, and Yuval Tassa for providing the simulator used in our
experiments. Sergey Levine was supported by NSF Graduate Research Fellowship DGE-0645962.
References
[1] A. Bagnell, S. Kakade, A. Ng, and J. Schneider. Policy search by dynamic programming. In
Advances in Neural Information Processing Systems (NIPS), 2003.
[2] M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy
search. In International Conference on Machine Learning (ICML), 2011.
[3] T. Furmston and D. Barber. Variational methods for reinforcement learning. Journal of Machine Learning Research, 9:241–248, 2010.
[4] D. Jacobson and D. Mayne. Differential Dynamic Programming. Elsevier, 1970.
[5] S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In
International Symposium on Aerospace/Defense Sensing, Simulation, and Control, 1997.
[6] S. Kakade and J. Langford. Approximately optimal approximate reinforcement learning. In
International Conference on Machine Learning (ICML), 2002.
[7] M. Kalakrishnan, S. Chitta, E. Theodorou, P. Pastor, and S. Schaal. STOMP: stochastic trajectory optimization for motion planning. In International Conference on Robotics and Automation, 2011.
[8] J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on
Robotics and Automation, 2009.
[9] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT
Press, 2009.
[10] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine
Learning (ICML), 2013.
[11] G. Neumann. Variational inference for policy search in changing situations. In International
Conference on Machine Learning (ICML), 2011.
[12] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural
Networks, 21(4):682–697, 2008.
[13] K. Rawlik, M. Toussaint, and S. Vijayakumar. On stochastic optimal control and reinforcement
learning by approximate inference. In Robotics: Science and Systems, 2012.
[14] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction
to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011.
[15] L. Tierney and J. B. Kadane. Accurate approximations for posterior moments and marginal
densities. Journal of the American Statistical Association, 81(393):82–86, 1986.
[16] E. Todorov. Policy gradients in linearly-solvable MDPs. In Advances in Neural Information
Processing Systems (NIPS 23), 2010.
[17] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback
control of constrained nonlinear stochastic systems. In American Control Conference, 2005.
[18] E. Todorov and Y. Tassa. Iterative local dynamic programming. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2009.
[19] M. Toussaint. Robot trajectory optimization using approximate inference. In International
Conference on Machine Learning (ICML), 2009.
[20] M. Toussaint, L. Charlin, and P. Poupart. Hierarchical POMDP controller optimization by
likelihood maximization. In Uncertainty in Artificial Intelligence (UAI), 2008.
[21] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis. Learning model-free robot control by a
Monte Carlo EM algorithm. Autonomous Robots, 27(2):123–130, 2009.
[22] K. Yin, K. Loken, and M. van de Panne. SIMBICON: simple biped locomotion control. ACM
Transactions on Graphics, 26(3), 2007.
[23] B. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal
entropy. PhD thesis, Carnegie Mellon University, 2010.
via Iterative Improvement
Ashesh Jain, Brian Wojcik, Thorsten Joachims, Ashutosh Saxena
Department of Computer Science, Cornell University.
{ashesh,bmw75,tj,asaxena}@cs.cornell.edu
Abstract
We consider the problem of learning good trajectories for manipulation tasks. This
is challenging because the criterion defining a good trajectory varies with users,
tasks and environments. In this paper, we propose a co-active online learning
framework for teaching robots the preferences of its users for object manipulation
tasks. The key novelty of our approach lies in the type of feedback expected from
the user: the human user does not need to demonstrate optimal trajectories as training data, but merely needs to iteratively provide trajectories that slightly improve
over the trajectory currently proposed by the system. We argue that this co-active
preference feedback can be more easily elicited from the user than demonstrations
of optimal trajectories, which are often challenging and non-intuitive to provide
on high degrees of freedom manipulators. Nevertheless, theoretical regret bounds
of our algorithm match the asymptotic rates of optimal trajectory algorithms. We
demonstrate the generalizability of our algorithm on a variety of grocery checkout tasks, for which the preferences were not only influenced by the object being
manipulated but also by the surrounding environment.¹
1 Introduction
Mobile manipulator robots have arms with high degrees of freedom (DoF), enabling them to perform
household chores (e.g., PR2) or complex assembly-line tasks (e.g., Baxter). In performing these
tasks, a key problem lies in identifying appropriate trajectories. An appropriate trajectory not only
needs to be valid from a geometric standpoint (i.e., feasible and obstacle-free, the criterion that most
path planners focus on), but it also needs to satisfy the user's preferences.
Such user preferences over trajectories vary between users, between tasks, and between the environments the trajectory is performed in. For example, a household robot should move a glass of
water in an upright position without jerks while maintaining a safe distance from nearby electronic
devices. In another example, a robot checking out a kitchen knife at a grocery store should strictly
move it at a safe distance from nearby humans. Furthermore, straight-line trajectories in Euclidean
space may no longer be the preferred ones. For example, trajectories of heavy items should not
pass over fragile items but rather move around them. These preferences are often hard to describe
and anticipate without knowing where and how the robot is deployed. This makes it infeasible to
manually encode (e.g. [18]) them in existing path planners (such as [29, 35]) a priori.
In this work we propose an algorithm for learning user preferences over trajectories through interactive feedback from the user in a co-active learning setting [31]. Unlike in other learning settings,
where a human first demonstrates optimal trajectories for a task to the robot, our learning model
does not rely on the user's ability to demonstrate optimal trajectories a priori. Instead, our learning algorithm explicitly guides the learning process and merely requires the user to incrementally
improve the robot's trajectories. From these interactive improvements the robot learns a general
model of the user's preferences in an online fashion. We show empirically that a small number of
such interactions is sufficient to adapt a robot to a changed task. Since the user does not have to
demonstrate a (near) optimal trajectory to the robot, we argue that our feedback is easier to provide
and more widely applicable. Nevertheless, we will show that it leads to an online learning algorithm
with provable regret bounds that decay at the same rate as if optimal demonstrations were available.
¹ For more details and a demonstration video, visit: http://pr.cs.cornell.edu/coactive
Figure 1: Zero-G feedback: Learning trajectory preferences from sub-optimal zero-G feedback. (Left) Robot
plans a bad trajectory (waypoints 1-2-4) with knife close to flower. As feedback, user corrects waypoint 2 and
moves it to waypoint 3. (Right) User providing zero-G feedback on waypoint 2.
In our empirical evaluation, we learn preferences for a high DoF Baxter robot on a variety of grocery
checkout tasks. By designing expressive trajectory features, we show how our algorithm learns
preferences from online user feedback on a broad range of tasks for which object properties are of
particular importance (e.g., manipulating sharp objects with humans in vicinity). We extensively
evaluate our approach on a set of 16 grocery checkout tasks, both in batch experiments as well as
through robotic experiments wherein users provide their preferences on the robot. Our results show
that a robot trained using our algorithm not only quickly learns good trajectories on individual tasks,
but also generalizes well to tasks that it has not seen before.
2 Related Work
Teaching a robot to produce desired motions has been a long standing goal and several approaches
have been studied. Most of the past research has focused on mimicking expert demonstrations, for
example, autonomous helicopter flights [1], ball-in-a-cup experiment [17], planning 2-D paths [27,
25, 26], etc. Such a setting (learning from demonstration, LfD) is applicable to scenarios when it is
clear to an expert what constitutes a good trajectory. In many scenarios, especially involving high
DoF manipulators, this is extremely challenging to do [2].² This is because the users have to give
not only the end-effector's location at each time-step, but also the full configuration of the arm in a
way that is spatially and temporally consistent. In our setting, the user never discloses the optimal
trajectory (or provides optimal feedback) to the robot, but instead, the robot learns preferences from
sub-optimal suggestions on how the trajectory can be improved.
Some later works in LfD provided ways for handling noisy demonstrations, under the assumption
that demonstrations are either near optimal [39] or locally optimal [22]. Providing noisy demonstrations is different from providing relative preferences, which are biased and can be far from optimal.
We compare with an algorithm for noisy LfD learning in our experiments. A recent work [37] leverages user feedback to learn rewards of a Markov decision process. Our approach advances over [37]
and Calinon et al. [5] in that it models sub-optimality in user feedback and theoretically converges
to the user's hidden score function. We also capture the necessary contextual information for household
and assembly-line robots, while such context is absent in [5, 37]. Our application scenario of learning trajectories for high DoF manipulators performing tasks in the presence of different objects and
environmental constraints goes beyond the application scenarios that previous works have considered. We design appropriate features that consider robot configurations, object-object relations, and
temporal behavior, and use them to learn a score function representing the preferences in trajectories.
User preferences have been studied in the field of human-robot interaction. Sisbot et al. [34, 33] and
Mainprice et al. [23] planned trajectories satisfying user-specified preferences in the form of constraints
on the distance of the robot from the user, the visibility of the robot, and the user's arm comfort. Dragan et al. [8]
used functional gradients [29] to optimize for legibility of robot trajectories. We differ from these in
that we learn score functions reflecting user preferences from implicit feedback.
3 Learning and Feedback Model
We model the learning problem in the following way. For a given task, the robot is given a context
x that describes the environment, the objects, and any other input relevant to the problem. The robot
has to figure out what is a good trajectory y for this context. Formally, we assume that the user
has a scoring function s*(x, y) that reflects how much he values each trajectory y for context x.
The higher the score, the better the trajectory. Note that this scoring function cannot be observed
directly, nor do we assume that the user can actually provide cardinal valuations according to this
² Consider the following analogy: In search engine results, it is much harder for a user to provide the best web-pages for each query, but it is easier to provide relative ranking on the search results by clicking.
function. Instead, we merely assume that the user can provide us with preferences that reflect this
scoring function. The robot's goal is to learn a function s(x, y; w) (where w are the parameters to be
learned) that approximates the user's true scoring function s*(x, y) as closely as possible.
Interaction Model. The learning process proceeds through the following repeated cycle of interactions between robot and user.
Step 1: The robot receives a context x. It then uses a planner to sample a set of trajectories, and
ranks them according to its current approximate scoring function s(x, y; w).
Step 2: The user either lets the robot execute the top-ranked trajectory, or corrects the robot by
providing an improved trajectory ȳ. This provides feedback indicating that s*(x, ȳ) > s*(x, y).
Step 3: The robot now updates the parameter w of s(x, y; w) based on this preference feedback and
returns to step 1.
Regret. The robot's performance will be measured in terms of regret, REG_T = (1/T) Σ_{t=1}^{T} [s*(x_t, y_t*) − s*(x_t, y_t)], which compares the robot's trajectory y_t at each time step t
against the optimal trajectory y_t* maximizing the user's unknown scoring function s*(x, y), y_t* = argmax_y s*(x_t, y). Note that the regret is expressed in terms of the user's true scoring function s*,
even though this function is never observed. Regret characterizes the performance of the robot over
its whole lifetime, therefore reflecting how well it performs throughout the learning process. As we
will show in the following sections, we employ learning algorithms with theoretical bounds on the
regret for scoring functions that are linear in their parameters, making only minimal assumptions
about the difference in score between s*(x, ȳ) and s*(x, y) in Step 2 of the learning process.
User Feedback and Trajectory Visualization. Since the ability to easily give preference feedback
in Step 2 is crucial for making the robot learning system easy to use for humans, we designed two
feedback mechanisms that enable the user to easily provide improved trajectories.
(a) Re-ranking: We rank trajectories in order of their current predicted scores and visualize the ranking using OpenRave [7]. The user observes the trajectories sequentially and clicks on the first trajectory
which is better than the top ranked trajectory.
(b) Zero-G: This feedback allows users to improve trajectory waypoints by physically changing the
robot's arm configuration as shown in Figure 1. To enable effortless steering of the robot's arm to a desired configuration we leverage Baxter's zero-force gravity-compensation mode. Hence we refer to
this feedback as zero-G. This feedback is useful (i) for bootstrapping the robot, (ii) for avoiding
local maxima where the top trajectories in the ranked list are all bad but ordered correctly, and (iii)
when the user is satisfied with the top ranked trajectory except for minor errors. A counterpart of this
feedback is keyframe based LfD [2] where an expert demonstrates a sequence of optimal waypoints
instead of the complete trajectory.
Note that in both re-ranking and zero-G feedback, the user never reveals the optimal trajectory to
the algorithm but just provides a slightly improved trajectory.
4 Learning Algorithm
For each task, we model the user's scoring function s*(x, y) with the following parameterized family
of functions:

s(x, y; w) = w · φ(x, y)    (1)

w is a weight vector that needs to be learned, and φ(·) are features describing trajectory y for context
x. We further decompose the score function into two parts, one only concerned with the objects the
trajectory is interacting with, and the other with the object being manipulated and the environment.
s(x, y; w_O, w_E) = s_O(x, y; w_O) + s_E(x, y; w_E) = w_O · φ_O(x, y) + w_E · φ_E(x, y)    (2)

We now describe the features for the two terms, φ_O(·) and φ_E(·), in the following.
4.1 Features Describing Object-Object Interactions
This feature captures the interaction between objects in the environment with the object being manipulated. We enumerate the waypoints of trajectory y as y_1, .., y_N and the objects in the environment as
O = {o_1, .., o_K}. The robot manipulates the object ō ∈ O. A few of the trajectory waypoints would
be affected by the other objects in the environment. For example in Figure 2, o_1 and o_2 affect the
waypoint y_3 because of proximity. Specifically, we connect an object o_k to a trajectory waypoint if
the minimum distance to collision is less than a threshold or if o_k lies below ō. The edge connecting
y_j and o_k is denoted as (y_j, o_k) ∈ E.
Since it is the attributes [19] of the object that really matter in determining the trajectory quality,
we represent each object with its attributes. Specifically, for every object o_k, we consider a vector
of M binary variables [l_k^1, .., l_k^M], with each l_k^m ∈ {0, 1} indicating whether object o_k possesses
Figure 2: (Left) A grocery checkout environment with a few objects where the robot was asked to checkout
the flowervase on the left to the right. (Middle) There are two ways of moving it, "a" and "b"; both are sub-optimal
in that the arm is contorted in "a" but it tilts the vase in "b". Given such constrained scenarios, we need to reason
about such subtle preferences. (Right) We encode preferences concerned with object-object interactions in a
score function expressed over a graph. Here y_1, . . . , y_n are different waypoints in a trajectory. The shaded
nodes correspond to the environment (table node not shown here). Edges denote interactions between nodes.
property m or not. For example, if the set of possible properties is {heavy, fragile, sharp, hot,
liquid, electronic}, then a laptop and a glass table can have labels [0, 1, 0, 0, 0, 1] and [0, 1, 0, 0, 0, 0]
respectively. The binary variables l_k^p and l̄^q indicate whether o_k and ō possess properties p and q, respectively.³ Then, for every edge (y_j, o_k), we extract the following four features φ_oo(y_j, o_k): the projection
of the minimum distance to collision along the x, y and z (vertical) axes, and a binary variable that is 1 if
o_k lies vertically below ō, and 0 otherwise.
We now define the score s_O(·) over this graph as follows:

s_O(x, y; w_O) = Σ_{(y_j, o_k) ∈ E} Σ_{p,q=1}^{M} l_k^p l̄^q [w_pq · φ_oo(y_j, o_k)]    (3)
Here, the weight vector w_pq captures the interaction between objects with properties p and q. We obtain
w_O in eq. (2) by concatenating the vectors w_pq. More formally, if the vector at position i of w_O is w_uv,
then the vector corresponding to position i of φ_O(x, y) will be Σ_{(y_j, o_k) ∈ E} l_k^u l̄^v [φ_oo(y_j, o_k)].
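A sketch of this feature construction is given below, assuming the edge set E and the binary attribute labels are precomputed; `phi_oo` stands in for the four geometric features per edge (e.g., computed with a collision checker such as PQP).

```python
import numpy as np

def object_object_features(edges, labels, l_bar, phi_oo, M, D=4):
    """Aggregate phi_O(x, y) as in eq. (3). edges: list of (j, k) pairs in E;
    labels[k]: M binary attributes of object o_k; l_bar: attributes of the
    manipulated object; phi_oo(j, k): length-D array of geometric features
    for edge (y_j, o_k)."""
    phi_O = np.zeros((M, M, D))
    for (j, k) in edges:
        active = np.outer(labels[k], l_bar)                  # l_k^p * l_bar^q
        phi_O += active[:, :, None] * phi_oo(j, k)[None, None, :]
    return phi_O.reshape(-1)                                 # one block per (p, q)

# The object-object score is then s_O = w_O . object_object_features(...).
```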
4.2 Trajectory Features
We now describe the features φ_E(x, y), obtained by performing operations on a set of waypoints. They
comprise the following three types of features:
Robot Arm Configurations. While a robot can reach the same operational space configuration for
its wrist with different configurations of the arm, not all of them are preferred [38]. For example,
the contorted way of holding the flowervase shown in Figure 2 may be fine at that time instant, but
would present problems if our goal is to perform an activity with it, e.g. packing it after checkout.
Furthermore, humans like to anticipate a robot's moves; to gain users' confidence, the robot should
produce predictable and legible robot motion [8].
We compute features capturing the robot's arm configuration using the locations of its elbow and wrist,
w.r.t. its shoulder, in a cylindrical coordinate system, (r, θ, z). We divide a trajectory into three
parts in time and compute 9 features for each of the parts. These features encode the maximum and
minimum r, θ and z values for the wrist and elbow in that part of the trajectory, giving us 6 features.
Since joint locks may happen at the limits of the manipulator configuration, we also add 3
features for the location of the robot's elbow whenever the end-effector attains its maximum r, θ and z
values, respectively. We therefore obtain φ_robot(·) ∈ R^9 (3+3+3=9) features for each one-third part
and φ_robot(·) ∈ R^27 for the complete trajectory.
Orientation and Temporal Behavior of the Object to be Manipulated. Object orientation during
the trajectory is crucial in deciding its quality. For some tasks, the orientation must be strictly
maintained (e.g., moving a cup full of coffee); and for some others, it may be necessary to change
it in a particular fashion (e.g., pouring activity). Different parts of the trajectory may have different
requirements over time. For example, in the placing task, we may need to bring the object closer to
obstacles and be more careful.
We therefore divide the trajectory into three parts in time. For each part we store the cosine of the
object's maximum deviation, along the vertical axis, from its final orientation at the goal location.
To capture the object's oscillation along the trajectory, we obtain a spectrogram for each one-third part for
³ In this work, our goal is to relax the assumption of unbiased and close to optimal feedback. We therefore assume complete knowledge of the environment for our algorithm, and for the algorithms we compare against. In practice, such knowledge can be extracted using an object attribute labeling algorithm such as in [19].
the movement of the object in the x, y, z directions as well as for the deviation along the vertical axis (e.g.
Figure 3). We then compute the average power spectral density in the low and high frequency bands,
giving eight additional features for each one-third part. This gives us 9 (=1+4*2) features for each one-third part.
Together with one additional feature of the object's maximum deviation along the whole trajectory, we
get φ_obj(·) ∈ R^28 (=9*3+1).
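A sketch of one such spectrogram feature using SciPy is shown below; the sampling rate, the window defaults, and the low/high frequency split are our assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def oscillation_features(signal, fs=100.0, f_split=2.0):
    """Average power spectral density of one movement channel (e.g. the
    object's z-coordinate over one third of the trajectory), split into a
    low- and a high-frequency band at f_split Hz."""
    f, _, Sxx = spectrogram(signal, fs=fs)
    return np.array([Sxx[f <= f_split].mean(), Sxx[f > f_split].mean()])
```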
Object-Environment Interactions. This feature captures the temporal variation of the vertical and horizontal distances of the object ō from its surrounding surfaces. In detail, we divide the trajectory into three equal parts, and for each part we compute the object's: (i) minimum vertical distance from the nearest surface below it; (ii) minimum horizontal distance from the surrounding surfaces; (iii) minimum distance from the table on which the task is being performed; and (iv) minimum distance from the goal location. We also take an average, over all the waypoints, of the horizontal and vertical distances between the object and the nearest surfaces around it.⁴ To capture the temporal variation of the object's distance from its surroundings we plot a time-frequency spectrogram of the object's vertical distance from the nearest surface below it, from which we extract six features by dividing it into grids. This feature is expressive enough to differentiate whether an object just grazes over the table's edge (steep change in vertical distance) versus first going up and over the table and then moving down (relatively smoother change). Thus, the features obtained from object-environment interaction are φ_obj−env(·) ∈ R^20 (3*4+2+6=20).
Figure 3: (Top) A good and bad trajectory for moving a mug. The bad trajectory undergoes ups-and-downs. (Bottom) Spectrograms for movement in z-direction: (Right) Good trajectory, (Left) Bad trajectory.
The final feature vector is obtained by concatenating φ_obj−env, φ_obj and φ_robot, giving us φ_E(·) ∈ R^75.
4.3 Computing Trajectory Rankings
For obtaining the top trajectory (or a top few) for a given task with context x, we would like to
maximize the current scoring function s(x, y; w_O, w_E):
y* = argmax_y s(x, y; w_O, w_E).    (4)
Note that this poses two challenges. First, the trajectory space is continuous and needs to be discretized
to keep the argmax in (4) tractable. Second, for a given set {y^(1), . . . , y^(n)} of discrete trajectories,
we need to compute (4). Fortunately, the latter problem is easy to solve and simply amounts to sorting the trajectories by their trajectory scores s(x, y^(i); w_O, w_E). Two effective ways of solving the
former problem are either discretizing the robot's configuration space or directly sampling trajectories
from the continuous space. Both approaches [3, 4, 6, 36] have been studied previously. However,
for high DoF manipulators sampling based approaches [4, 6] maintain tractability of the problem,
hence we take this approach. More precisely, similar to Berg et al. [4], we sample trajectories using a rapidly-exploring random tree (RRT) [20].⁵ Since our primary goal is to learn a score function
on a sampled set of trajectories, we now describe our learning algorithm; for more literature on
sampling trajectories we refer the reader to [9].
4.4 Learning the Scoring Function
The goal is to learn the parameters w_O and w_E of the scoring function s(x, y; w_O, w_E) so that it
can be used to rank trajectories according to the user's preferences. To do so, we adapt the Preference Perceptron algorithm [31] as detailed in Algorithm 1. We call this algorithm the Trajectory
Preference Perceptron (TPP). Given a context x_t, the top-ranked trajectory y_t under the current parameters w_O and w_E, and the user's feedback trajectory ȳ_t, the TPP updates the weights in the
direction φ_O(x_t, ȳ_t) − φ_O(x_t, y_t) and φ_E(x_t, ȳ_t) − φ_E(x_t, y_t), respectively.
Despite its simplicity and even though the algorithm typically does not receive the optimal trajectory y_t* = argmax_y s*(x_t, y) as feedback, the TPP enjoys guarantees on the regret [31]. We
merely need to characterize by how much the feedback improves on the presented ranking using the following definition of expected α-informative feedback: E_t[s*(x_t, ȳ_t)] ≥ s*(x_t, y_t) +
⁴ We query the PQP collision checker plugin of OpenRave for these distances.
⁵ When RRT becomes too slow, we switch to a more efficient bidirectional-RRT. The cost function (or its approximation) we learn can be fed to trajectory optimizers like CHOMP [29] or optimal planners like RRT* [15] to produce reasonably good trajectories.
α(s*(x_t, y_t*) − s*(x_t, y_t)) − ξ_t. This definition states that the user feedback should have a
score of ȳ_t that is, in expectation over the user's choices, higher than that of y_t by a fraction
α ∈ (0, 1] of the maximum possible range s*(x_t, y_t*) − s*(x_t, y_t). If this condition is not fulfilled due to bias in the feedback, the slack variable ξ_t captures the amount of violation. In this
way any feedback can be described by an appropriate combination of α and ξ_t. Using these
two parameters, the proof by [31] can be adapted to show that the expected average regret of
the TPP is upper bounded by E[REG_T] ≤ O(1/(α√T) + 1/(αT) Σ_{t=1}^{T} ξ_t) after T rounds of feedback.
5 Experiments and Results
We now describe our data set, baseline algorithms and the evaluation metrics we use.
Following this, we present quantitative results (Section 5.2) and report robotic experiments on Baxter (Section 5.3).
Algorithm 1 Trajectory Preference Perceptron (TPP)
Initialize w_O^(1) ← 0, w_E^(1) ← 0
for t = 1 to T do
  Sample trajectories {y^(1), ..., y^(n)}
  y_t = argmax_y s(x_t, y; w_O^(t), w_E^(t))
  Obtain user feedback ȳ_t
  w_O^(t+1) ← w_O^(t) + φ_O(x_t, ȳ_t) − φ_O(x_t, y_t)
  w_E^(t+1) ← w_E^(t) + φ_E(x_t, ȳ_t) − φ_E(x_t, y_t)
end for
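A compact sketch of this loop with NumPy is shown below; `sample_trajectories`, `features`, and `get_user_feedback` are placeholders for the RRT sampler, the feature maps φ_O and φ_E, and the re-ranking/zero-G interface.

```python
import numpy as np

def tpp(contexts, sample_trajectories, features, get_user_feedback, d_O, d_E):
    """Trajectory Preference Perceptron. features(x, y) -> (phi_O, phi_E)."""
    w_O, w_E = np.zeros(d_O), np.zeros(d_E)
    for x in contexts:
        candidates = sample_trajectories(x)            # {y^(1), ..., y^(n)}
        feats = [features(x, y) for y in candidates]
        scores = [pO @ w_O + pE @ w_E for (pO, pE) in feats]
        top = int(np.argmax(scores))
        y_bar = get_user_feedback(x, candidates[top])  # improved trajectory
        pO_bar, pE_bar = features(x, y_bar)
        w_O += pO_bar - feats[top][0]                  # perceptron updates
        w_E += pE_bar - feats[top][1]
    return w_O, w_E
```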
5.1 Experimental Setup
Task and Activity Set for Evaluation. We
evaluate our approach on 16 pick-and-place
robotic tasks in a grocery store checkout
setting. To assess generalizability of our
approach, for each task we train and test on scenarios with different objects being manipulated,
and/or with a different environment. We evaluate the quality of trajectories after the robot has
grasped the items and while it moves them for checkout. Our work complements previous works on
grasping items [30, 21], pick and place tasks [11], and detecting bar codes for grocery checkout [16].
We consider the following three commonly occurring activities in a grocery store:
1) Manipulation centric: These activities primarily care for the object being manipulated. Hence
the object's properties and the way the robot moves it in the environment are more relevant. Examples
include moving common objects like a cereal box, Figure 4 (left), or moving fruits and vegetables,
which can be damaged when dropped/pushed into other items.
2) Environment centric: These activities also care for the interactions of the object being manipulated
with the surrounding objects. Our object-object interaction features allow the algorithm to learn
preferences on trajectories for moving fragile objects like glasses and egg cartons, Figure 4 (middle).
3) Human centric: Sudden movements by the robot put the human in danger of getting hurt. We
consider activities where a robot manipulates sharp objects, e.g., moving a knife with a human in
vicinity as shown in Figure 4 (right). In previous work, such relations were considered in the context
of scene understanding [10, 12].
Baseline algorithms. We evaluate the algorithms that learn preferences from online feedback under two settings: (a) untrained, where the algorithms learn preferences for the new task from scratch, without observing any previous feedback; (b) pre-trained, where the algorithms are pre-trained on other similar tasks and then adapt to the new task. We compare the following algorithms:
• Geometric: It plans a path, independent of the task, using a BiRRT [20] planner.
• Manual: It plans a path following certain manually coded preferences.
• TPP: This is our algorithm. We evaluate it under both the untrained and pre-trained settings.
• Oracle-svm: This algorithm leverages the expert's labels on trajectories (hence the name Oracle) and is trained using SVM-rank [13] in a batch manner. This algorithm is not realizable in practice, as it requires labeling over the large space of trajectories. We use it only in the pre-trained setting, and during prediction it just predicts once and does not learn further.
• MMP-online: This is an online implementation of the Maximum margin planning (MMP) [26, 28] algorithm. MMP attempts to make an expert's trajectory better than any other trajectory by a margin, and can be interpreted as a special case of our algorithm with 1-informative feedback. However, adapting MMP to our experiments poses two challenges: (i) we do not have knowledge of the optimal trajectory; and (ii) the state space of the manipulator we consider is too large, and discretizing makes learning via MMP intractable. We therefore train MMP from online user feedback observed on a set of trajectories. We further treat the observed feedback as optimal. At every iteration we train a structural support vector machine (SSVM) [14] using all previous feedback as training examples, and use the learned weights to predict trajectory scores for the next iteration. Since we learn on a set of trajectories, the argmax operation in SSVM remains tractable. We quantify closeness of trajectories by the ℓ2-norm of the difference in their feature representations, and choose the regularization parameter C for training SSVM in hindsight, to give an unfair advantage to MMP-online.

Figure 4: (Left) Manipulation centric: a box of cornflakes doesn't interact much with surrounding items and is indifferent to orientation. (Middle) Environment centric: an egg carton is fragile and should preferably be kept upright and closer to a supporting surface. (Right) Human centric: a knife is sharp and interacts with nearby soft items and humans. It should strictly be kept at a safe distance from humans.
Evaluation metrics. In addition to performing a user study on the Baxter robot (Section 5.3), we also designed a data set to quantitatively evaluate the performance of our online algorithm. An expert labeled 1300 trajectories on a Likert scale of 1–5 (where 5 is the best) on the basis of subjective human preferences. Note that these absolute ratings are never provided to our algorithms and are only used for the quantitative evaluation of the different algorithms. We quantify the quality of a ranked list of trajectories by its normalized discounted cumulative gain (nDCG) [24] at positions 1 and 3. While nDCG@1 is a suitable metric for autonomous robots that execute the top ranked trajectory, nDCG@3 is suitable for scenarios where the robot is supervised by humans.
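For reference, nDCG@k can be computed from the graded relevances of a ranked list (here, the expert's 1–5 ratings) as in the sketch below; this is the standard definition from [24], not code released with this work.

import math

def dcg_at_k(rels, k):
    # DCG@k = sum over positions i = 1..k of rel_i / log2(i + 1)
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels_in_ranked_order, k):
    ideal = dcg_at_k(sorted(rels_in_ranked_order, reverse=True), k)
    return dcg_at_k(rels_in_ranked_order, k) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([4, 5, 2], 3))  # ratings of the top-3 trajectories as ranked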
5.2 Results and Discussion

We now present the quantitative results on the data set of 1300 labeled trajectories.
How well does TPP generalize to new tasks? To study generalization of preference feedback
we evaluate performance of TPP-pre-trained (i.e., TPP algorithm under pre-trained setting) on a
set of tasks the algorithm has not seen before. We study generalization when: (a) only the object
being manipulated changes, e.g., an egg carton replaced by tomatoes, (b) only the surrounding
environment changes, e.g., rearranging objects in the environment or changing the start location of
tasks, and (c) when both change. Figure 5 shows nDCG@3 plots averaged over tasks for all types of
activities.6 TPP-pre-trained starts off with higher nDCG@3 values than TPP-untrained in all three cases. Further, as more feedback is received, the performance of both algorithms improves to eventually become (almost) identical. We further observe that generalizing to tasks with both a new environment and a new object is harder than when only one of them changes.
How does TPP compare to other algorithms? Despite the fact that TPP never observes optimal feedback, it performs better than the baseline algorithms, see Figure 5. It improves over Oracle-SVM in less than 5 feedbacks; Oracle-SVM is not updated online, since it requires the expert's labels on the test set and hence is impractical. MMP-online assumes every user feedback is optimal, and over iterations accumulates many contradictory training examples. This also highlights the sensitivity of MMP to sub-optimal demonstrations. We also compare against planners with manually coded preferences, e.g., keep a flower vase upright. However, some preferences are difficult to specify, e.g., not to move heavy objects over fragile items. We empirically found the resulting Manual algorithm produces poor trajectories, with an average nDCG@3 of 0.57 over all types of activities.

Table 1: Comparison of different algorithms and study of features in the untrained setting. The table contains average nDCG@1 (nDCG@3) values over 20 rounds of feedback.

Algorithms            | Manipulation centric | Environment centric | Human centric | Mean
Geometric             | 0.46 (0.48)          | 0.45 (0.39)         | 0.31 (0.30)   | 0.40 (0.39)
Manual                | 0.61 (0.62)          | 0.77 (0.77)         | 0.33 (0.31)   | 0.57 (0.57)
Features:
  Obj-obj interaction | 0.68 (0.68)          | 0.80 (0.79)         | 0.79 (0.73)   | 0.76 (0.74)
  Robot arm config    | 0.82 (0.77)          | 0.78 (0.72)         | 0.80 (0.69)   | 0.80 (0.73)
  Object trajectory   | 0.85 (0.81)          | 0.88 (0.84)         | 0.85 (0.72)   | 0.86 (0.79)
  Object environment  | 0.70 (0.69)          | 0.75 (0.74)         | 0.81 (0.65)   | 0.75 (0.69)
TPP (all features)    | 0.88 (0.84)          | 0.90 (0.85)         | 0.90 (0.80)   | 0.89 (0.83)
MMP-online            | 0.47 (0.50)          | 0.54 (0.56)         | 0.33 (0.30)   | 0.45 (0.46)
How helpful are different features? Table 1 shows the performance of the TPP algorithm in the untrained setting using different features. Individually, each feature captures several aspects indicating the goodness of trajectories, and combined together they give the best performance. Object trajectory features capture preferences related to the orientation of the object. Robot arm configuration and object environment features capture preferences by detecting undesirable contorted arm configurations and by maintaining a safe distance from surrounding surfaces, respectively. Object-object features by themselves can only learn, for example, to move an egg carton closer to a supporting surface, but might still move it with jerks or contorted arms. These features can be combined with other features to yield more expressive features. Nevertheless, by themselves they perform better than the Manual algorithm. Table 1 also compares TPP and MMP-online under the untrained setting.
6 Similar results were obtained with the nDCG@1 metric. We have not included it due to space constraints.

Figure 5: Study of generalization with change in object, environment, and both; y-axis: nDCG@3. Panels: (a) same environment, different object; (b) new environment, same object; (c) new environment, different object. Curves: Manual, Oracle-SVM, pre-trained MMP-online, untrained MMP-online, pre-trained TPP, and untrained TPP.
5.3 Robotic Experiment: User Study in learning trajectories
We perform a user study of our system on the Baxter robot on a variety of tasks of varying difficulty, thereby showing that our approach is practically realizable, and that the combination of re-rank and zero-G feedback allows users to train the robot in a few feedbacks.
Experiment setup: In this study, five users (not associated with this work) used our system to
train Baxter for grocery checkout tasks, using zero-G and re-rank feedback. Zero-G was provided
kinesthetically on the robot, while re-rank was elicited in a simulator (on a desktop computer). A set
of 10 tasks of varying difficulty levels was presented to the users one at a time, and they were instructed to provide feedback until they were satisfied with the top ranked trajectory. To quantify the quality of learning, each user evaluated their own trajectories (self score), the trajectories learned by the other users (cross score), and those predicted by Oracle-svm, on a Likert scale of 1–5 (where 5 is the best). We also recorded the time a user took for each task, from the start of training until the user was satisfied.
Results from user study. The study shows each user on average took 3 re-rank and 2 zero-G feedbacks to train Baxter (Table 2). Within 5 feedbacks the users were able to improve over Oracle-svm, Fig. 6 (Left), consistent with our previous analysis. Re-rank feedback was popular for easier tasks, Fig. 6 (Right). However, as difficulty increased the users relied more on zero-G feedback, which allows rectifying erroneous waypoints precisely. An average difference of 0.6 between users' self and cross scores suggests preferences varied only marginally across the users.

Table 2: Learning statistics for each user, averaged over all tasks. The number in parentheses is the standard deviation.

User | # Re-ranking feedback | # Zero-G feedback | Average time (min.) | Trajectory quality: self | cross
1    | 5.4 (4.1)             | 3.3 (3.4)         | 7.8 (4.9)           | 3.8 (0.6)                | 4.0 (1.4)
2    | 1.8 (1.0)             | 1.7 (1.3)         | 4.6 (1.7)           | 4.3 (1.2)                | 3.6 (1.2)
3    | 2.9 (0.8)             | 2.0 (2.0)         | 5.0 (2.9)           | 4.4 (0.7)                | 3.2 (1.2)
4    | 3.2 (2.0)             | 1.5 (0.9)         | 5.3 (1.9)           | 3.0 (1.2)                | 3.7 (1.0)
5    | 3.6 (1.0)             | 1.9 (2.1)         | 5.0 (2.3)           | 3.5 (1.3)                | 3.3 (0.6)

In terms of training time, each user took on average 5.5 minutes per task, which we believe is acceptable for most applications. Future research in human-computer interaction, visualization, and better user interfaces [32] could further reduce this time. Despite its limited size, through the user study we show our algorithm is realizable in practice on high-DoF manipulators. We hope this motivates researchers to build robotic systems capable of learning from non-expert users.

Figure 6: (Left) Average quality of the learned trajectory after every one-third of the total feedback. (Right) Bar chart showing the average number of feedbacks and time required for each task. Task difficulty increases from 1 to 10.
For more details and video, please visit: http://pr.cs.cornell.edu/coactive
6 Conclusion
In this paper we presented a co-active learning framework for training robots to select trajectories
that obey a user's preferences. Unlike in standard learning from demonstration approaches, our
framework does not require the user to provide optimal trajectories as training data, but can learn
from iterative improvements. Despite only requiring weak feedback, our TPP learning algorithm has
provable regret bounds and empirically performs well. In particular, we propose a set of trajectory
features for which the TPP generalizes well on tasks which the robot has not seen before. In addition
to the batch experiments, robotic experiments confirmed that incremental feedback generation is
indeed feasible and that it leads to good learning results already after only a few iterations.
Acknowledgments. We thank Shikhar Sharma for help with the experiments. This research was
supported by ARO, Microsoft Faculty fellowship and NSF Career award (to Saxena).
References
[1] P. Abbeel, A. Coates, and A. Y. Ng. Autonomous helicopter aerobatics through apprenticeship learning.
IJRR, 29(13), 2010.
[2] B. Akgun, M. Cakmak, K. Jiang, and A. L. Thomaz. Keyframe-based learning from demonstration. IJSR,
4(4):343–355, 2012.
[3] R. Alterovitz, T. Siméon, and K. Goldberg. The stochastic motion roadmap: A sampling framework for
planning with markov motion uncertainty. In RSS, 2007.
[4] J. V. D. Berg, P. Abbeel, and K. Goldberg. Lqg-mp: Optimized path planning for robots with motion
uncertainty and imperfect state information. In RSS, 2010.
[5] S. Calinon, F. Guenter, and A. Billard. On learning, representing, and generalizing a task in a humanoid
robot. IEEE Transactions on Systems, Man, and Cybernetics, 2007.
[6] D. Dey, T. Y. Liu, M. Hebert, and J. A. Bagnell. Contextual sequence prediction with application to
control library optimization. In RSS, 2012.
[7] R. Diankov. Automated Construction of Robotic Manipulation Programs. PhD thesis, CMU, RI, 2010.
[8] A. Dragan and S. Srinivasa. Generating legible motion. In RSS, 2013.
[9] C. J. Green and A. Kelly. Toward optimal sampling in the space of paths. In ISRR, 2007.
[10] Y. Jiang, M. Lim, and A. Saxena. Learning object arrangements in 3d scenes using human context. In
ICML, 2012.
[11] Y. Jiang, M. Lim, C. Zheng, and A. Saxena. Learning to place new objects in a scene. IJRR, 31(9), 2012.
[12] Y. Jiang, H. Koppula, and A. Saxena. Hallucinated humans as the hidden context for labeling 3d scenes.
In CVPR, 2013.
[13] T. Joachims. Training linear svms in linear time. In KDD, 2006.
[14] T. Joachims, T. Finley, and C. Yu. Cutting-plane training of structural svms. Mach Learn, 77(1), 2009.
[15] S. Karaman and E. Frazzoli. Incremental sampling-based algorithms for optimal motion planning. In
RSS, 2010.
[16] E. Klingbeil, D. Rao, B. Carpenter, V. Ganapathi, A. Y. Ng, and O. Khatib. Grasping with application to
an autonomous checkout robot. In ICRA, 2011.
[17] J. Kober and J. Peters. Policy search for motor primitives in robotics. Machine Learning, 84(1), 2011.
[18] H. S. Koppula and A. Saxena. Anticipating human activities using object affordances for reactive robotic
response. In RSS, 2013.
[19] H. S. Koppula, A. Anand, T. Joachims, and A. Saxena. Semantic labeling of 3d point clouds for indoor
scenes. In NIPS, 2011.
[20] S. M. LaValle and J. J. Kuffner. Randomized kinodynamic planning. IJRR, 20(5):378–400, 2001.
[21] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. In RSS, 2013.
[22] S. Levine and V. Koltun. Continuous inverse optimal control with locally optimal examples. In ICML,
2012.
[23] J. Mainprice, E. A. Sisbot, L. Jaillet, J. Cortés, R. Alami, and T. Siméon. Planning human-aware motions
using a sampling-based costmap planner. In ICRA, 2011.
[24] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to information retrieval, volume 1. Cambridge
University Press Cambridge, 2008.
[25] N. Ratliff. Learning to Search: Structured Prediction Techniques for Imitation Learning. PhD thesis,
CMU, RI, 2009.
[26] N. Ratliff, J. A. Bagnell, and M. Zinkevich. Maximum margin planning. In ICML, 2006.
[27] N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt. Boosting structured prediction for imitation learning. In NIPS, 2007.
[28] N. Ratliff, D. Silver, and J. A. Bagnell. Learning to search: Functional gradient techniques for imitation
learning. Autonomous Robots, 27(1):25–53, 2009.
[29] N. Ratliff, M. Zucker, J. A. Bagnell, and S. Srinivasa. Chomp: Gradient optimization techniques for
efficient motion planning. In ICRA, 2009.
[30] A. Saxena, J. Driemeyer, and A.Y. Ng. Robotic grasping of novel objects using vision. IJRR, 27(2), 2008.
[31] P. Shivaswamy and T. Joachims. Online structured prediction via coactive learning. In ICML, 2012.
[32] B. Shneiderman and C. Plaisant. Designing The User Interface: Strategies for Effective Human-Computer
Interaction. Addison-Wesley Publication, 2010.
[33] E. A. Sisbot, L. F. Marin, and R. Alami. Spatial reasoning for human robot interaction. In IROS, 2007.
[34] E. A. Sisbot, L. F. Marin-Urias, R. Alami, and T. Simeon. A human aware mobile robot motion planner.
IEEE Transactions on Robotics, 2007.
[35] I. A. Sucan, M. Moll, and L. E. Kavraki. The Open Motion Planning Library. IEEE Robotics & Automation Magazine, 19(4):72?82, 2012. http://ompl.kavrakilab.org.
[36] P. Vernaza and J. A. Bagnell. Efficient high dimensional maximum entropy modeling via symmetric
partition functions. In NIPS, 2012.
[37] A. Wilson, A. Fern, and P. Tadepalli. A bayesian approach for policy learning from trajectory preference
queries. In NIPS, 2012.
[38] F. Zacharias, C. Schlette, F. Schmidt, C. Borst, J. Rossmann, and G. Hirzinger. Making planned paths
look more human-like in humanoid robot manipulation planning. In ICRA, 2011.
[39] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning.
In AAAI, 2008.
4,619 | 5,180 | Forgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting
Angela J. Yu
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093
ajyu@ucsd.edu
Shunan Zhang
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093
s6zhang@ucsd.edu
Abstract
How humans achieve long-term goals in an uncertain environment, via repeated trials and noisy observations, is an important problem in cognitive science. We investigate this behavior in the context of a multi-armed bandit task. We compare human behavior to a variety of models that vary in their representational and computational complexity. Our result shows that subjects' choices, on a trial-to-trial basis, are best captured by a "forgetful" Bayesian iterative learning model [21] in combination with a partially myopic decision policy known as Knowledge Gradient [7]. This model accounts for subjects' trial-by-trial choices better than a number of other previously proposed models, including optimal Bayesian learning and risk minimization, ε-greedy and win-stay-lose-shift. It has the added benefit of being closer in performance to the optimal Bayesian model than all the other heuristic models that have the same computational complexity (all are significantly less complex than the optimal model). These results constitute an advancement in the theoretical understanding of how humans negotiate the tension between exploration and exploitation in a noisy, imperfectly known environment.
1 Introduction
How humans achieve long-term goals in an uncertain environment, via repeated trials and noisy
observations, is an important problem in cognitive science. The computational challenges consist of
the learning component, whereby the observer updates his/her representation of knowledge and uncertainty based on ongoing observations, and the control component, whereby the observer chooses
an action that balances between the short-term objective of acquiring reward and the long-term objective of gaining information about the environment. A classic task used to study such sequential
decision making problems is the multi-arm bandit paradigm [15]. In a standard bandit setting, people are given a limited number of trials to choose among a set of alternatives, or arms. After each
choice, an outcome is generated based on a hidden reward distribution specific to the arm chosen,
and the objective is to maximize the total reward after all trials. The reward gained on each trial both
has intrinsic value and informs the decision maker about the relative desirability of the arm, which
can help with future decisions. In order to be successful, decision makers have to balance their decisions between general exploration (selecting an arm about which one is ignorant) and exploitation
(selecting an arm that is known to have relatively high expected reward).
Because bandit problems elegantly capture the tension between exploration and exploitation that is manifest in real-world decision-making situations, they have received attention in many fields, including statistics [10], reinforcement learning [11, 19], economics (e.g. [1]), psychology and neuroscience [5, 4, 18, 12, 6]. There is no known analytical optimal solution to the general bandit problem,
though properties about the optimal solution of special cases are known [10]. For relatively simple,
finite-horizon problems, the optimal solution can be computed numerically via dynamic programming [11], though its computational complexity grows exponentially with the number of arms and
trials. In the psychology literature, a number of heuristic policies, with varying levels of complexity in the learning and control processes, have been proposed as possible strategies used by human
subjects [5, 4, 18, 12]. Most models assume that humans either adopt simplistic policies that retain
little information about the past and sidestep long-term optimization (e.g. win-stay-lose-shift and
ε-greedy), or switch between an exploration and an exploitation mode either randomly [5] or discretely
over time as more is learned about the environment [18].
In this work, we analyze a new model for human bandit choice behavior, whose learning component
is based on the dynamic belief model (DBM) [21], and whose control component is based on the
knowledge gradient (KG) algorithm [7]. DBM is a Bayesian iterative inference model that assumes
that there exist statistical patterns in a sequence of observations, and that they tend to change at a characteristic timescale [21]. DBM was proposed as a normative learning framework that is able to capture
the commonly observed sequential effect in human choice behavior, where choice probabilities (and
response times) are sensitive to the local history of preceding events in a systematic manner ? even
if the subjects are instructed that the design is randomized, so that any local trends arise merely by chance and are not truly predictive of upcoming stimuli [13, 8, 20, 3]. KG is a myopic approximation
to the optimal policy for sequential informational control problem, originally developed for operations research applications [7]; KG is known to be exactly optimal in some special cases of bandit
problems, such as when there are only two arms. Conditioned on the previous observations at each
step, KG chooses the option that maximizes the future cumulative reward gain, based on the myopic
assumption that the next observation is the last exploratory choice, and all remaining choices will
be exploitative (choosing the option with the highest expected reward by the end of the next trial).
Note that this myopic assumption is only used in reducing the complexity of computing the expected
value of each option, and not actually implemented in practice ? the algorithm may end up executing
arbitrarily many non-exploitative choices. KG tends to explore more when the number of trials left
is large, because finding an arm with even a slightly better reward rate than the currently best known
one can lead to a large cumulative advantage in future gain; on the other hand, when the number of
trials left is small, KG tends to stay with the currently best known option, as the relative benefit of
finding a better option diminishes against the risk of wasting limited time on a good option. KG has
been shown to outperform several established models, including the optimal Bayesian learning and
risk minimization, ?-greedy and win-stay-lose-shift, for human decision-making in bandit problems,
under two specific learning scenarios other than DBM [22].
In the following, we first describe the experiment, then describe all the learning and control models
that we consider. We then compare the performance of the models both in terms of agreement with
human behavior on a trial-to-trial basis, and in terms of computational optimality.
2 Experiment
We adopt data from [18], where a total of 451 subjects participated in the experiment as part of
"testweek" at the University of Amsterdam. In the experiment, each participant completed 20 bandit
problems in sequence; all problems had 4 arms and 15 trials. The reward rates were fixed for all
arms in each game, and were generated, prior to the start of data collection, independently from a
Beta(2, 2) distribution. All participants played the same reward rates, but the order of the games
was randomized. Participants were instructed that the reward rates in all games were drawn from
the same environment, and that the reward rates were drawn only once; participants were not told
the exact form of the Beta environment, i.e. Beta(2, 2). A screenshot of the experimental interface
is shown in Fig. 1a.
3 Models
There exist multiple levels of complexity and optimality in both the learning and the decision components of decision making models of bandit problems. For the learning component, we examine
whether people maintain any statistical representation of the environment at all, and if they do,
whether they only keep a mean estimate (running average) of the reward probability of the different options, or also uncertainty about those estimates; in addition, we consider the possibility that
they entertain trial-by-trial fluctuations of the reward probabilities. The decision component can also
2
Figure 1: (a) A screenshot of the experimental interface. The four panels correspond to the four arms, each of which can be chosen by clicking the corresponding button. In each panel, successes from previous trials are shown as green bars, and failures as red bars. At the top of each panel, the ratio of successes to failures, if defined, is shown. The top of the interface provides the count of the total number of successes up to the current trial, the index of the current trial, and the index of the current game. (b) Bayesian graphical model of FBM, assuming fixed reward probabilities: θ ∈ [0, 1], R_t ∈ {0, 1}. The inset shows an example of the Beta prior for the reward probabilities. The numbers in circles show example values for the variables. (c) Bayesian graphical model of DBM, assuming reward probabilities change from trial to trial: P(θ_t) = γ δ(θ_t = θ_{t−1}) + (1 − γ) P_0(θ_t).
differ in complexity in at least two respects: the objective the decision policy tries to optimize (e.g.
reward versus information), and the time-horizon over which the decision policy optimizes its objective (e.g. greedy versus long-term). In this section, we introduce models that incorporate different
combinations of learning and decision policies.
3.1 Bayesian Learning in Beta Environments
The observations are generated independently and identically (iid) from an unknown Bernoulli distribution for each arm. We consider two Bayesian learning scenarios below: the dynamic belief model (DBM), which assumes that the Bernoulli reward rates for all the arms can reset on any trial with probability 1 − γ, and the fixed belief model (FBM), a special case of DBM that assumes the reward rates to be stationary throughout each game. In either case, we assume the prior distribution that generates the Bernoulli rates is a Beta distribution, Beta(α, β), which is conjugate to the Bernoulli distribution and whose two hyper-parameters, α and β, specify the pseudo-counts associated with the prior.
3.1.1 Dynamic Belief Model
Under the dynamic belief model (DBM), the reward probabilities can undergo discrete changes at times during the experimental session, such that at any trial, the subject's prior belief is a mixture of the posterior belief from the previous trial and a generic prior. The subject's implicit task is then to track the evolving reward probability of each arm over the course of the experiment.
Suppose on each game we have K arms with reward rates θ_k, k = 1, ..., K, which are iid generated from Beta(α, β). Let S_k^t and F_k^t be the numbers of successes and failures obtained from the kth arm up to trial t, and let θ_k^t be the reward probability of arm k on trial t. We assume θ_k^t has a Markovian dependence on θ_k^{t−1}, such that there is a probability γ of them being the same, and a probability 1 − γ of θ_k^t being redrawn from the prior distribution Beta(α, β). The Bayesian ideal observer combines the sequentially developed prior belief about reward probabilities with the incoming stream of observations (successes and failures on each arm) to infer the new posterior distributions. The observation R_k^t is assumed to be Bernoulli: R_k^t ~ Bernoulli(θ_k^t). We use the notation q_k^t(θ_k^t) := Pr(θ_k^t | S_k^t, F_k^t) to denote the posterior distribution of θ_k^t given the observed sequence, also known as the belief state. On each trial, the new posterior distribution can be computed via Bayes' Rule:

    q_k^t(θ_k^t) ∝ Pr(R_k^t | θ_k^t) Pr(θ_k^t | S_k^{t−1}, F_k^{t−1})    (1)

where the prior probability is a weighted sum (parameterized by γ) of the last trial's posterior and the generic prior q^0 := Beta(α, β):

    Pr(θ_k^t = θ | S_k^{t−1}, F_k^{t−1}) = γ q_k^{t−1}(θ) + (1 − γ) q^0(θ)    (2)
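As an illustration of Equations 1–2, the following sketch performs the DBM update for a single arm with θ discretized on a grid; gamma plays the role of γ and the generic prior is Beta(α, β). This is a minimal re-implementation under those assumptions, not the authors' code; setting gamma = 1 recovers the FBM of the next section.

import numpy as np

theta = np.linspace(0.005, 0.995, 100)          # discretized grid for theta
alpha, beta = 2.0, 2.0
prior = theta**(alpha - 1) * (1 - theta)**(beta - 1)
prior /= prior.sum()                             # generic prior q^0 on the grid

def dbm_update(q_prev, r, gamma):
    """One trial of Bayesian updating under DBM for one arm (r is 0 or 1)."""
    pred = gamma * q_prev + (1 - gamma) * prior             # Eq. (2)
    post = theta**r * (1 - theta)**(1 - r) * pred           # Eq. (1)
    return post / post.sum()

q = prior.copy()
for r in [1, 1, 0, 1]:                           # an example reward sequence
    q = dbm_update(q, r, gamma=0.81)
print("posterior mean of theta:", float((theta * q).sum()))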
3.1.2 Fixed Belief Model

A simpler generative model (and a more correct one, given the true, stationary environment) is to assume that the statistical contingencies in the task remain fixed throughout each game, i.e. all bandit arms have fixed probabilities of giving a reward throughout the game. What the subjects would then learn about the task over the time course of the experiment is the true value of θ. We call this model the fixed belief model (FBM); it can be viewed as a special case of the DBM with γ = 1. In the Bayesian update rule, the prior on each trial is simply the posterior from the previous trial.

Figure 1b,c illustrates the graphical models of FBM and DBM, respectively.
3.2 Decision Policies

We consider four different decision policies. We first describe the optimal model, and then the three heuristic models with increasing levels of complexity.

3.2.1 The Optimal Model
The learning and decision problem for bandit problems can be viewed as a Markov Decision Process with a finite horizon [11], with the state being the belief state q^t = (q_1^t, q_2^t, q_3^t, q_4^t), which obviously provides the sufficient statistics for all the data seen up through trial t. Due to the low dimensionality of the bandit problem here (i.e. the small number of arms and of trials per game), the optimal policy, up to a discretization of the belief state, can be computed numerically using Bellman's dynamic programming principle [2]. Let V^t(q^t) be the expected total future reward on trial t. The optimal policy should satisfy the following iterative property:

    V^t(q^t) = max_k ( θ_k^t + E[V^{t+1}(q^{t+1})] )    (3)

and the optimal action, D^t, is chosen according to

    D^t(q^t) = argmax_k ( θ_k^t + E[V^{t+1}(q^{t+1})] )    (4)

We solve the equation using dynamic programming, backward in time from the last time step, whose value function and optimal policy are known for any belief state: always choose the arm with the highest expected reward, and the value function is just that expected reward. In the simulations, we compute the optimal policy off-line, for any conceivable setting of the belief state on each trial (up to a fine discretization of the belief state space), and then apply the computed policy to each sequence of choices and observations that each subject experiences. We use the term "the optimal solution" to refer to the specific solution under α = 2 and β = 2, which is the true experimental design.
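Under FBM, the belief state of each arm reduces to its success/failure counts, so for a problem of this size the Bellman recursion (Equations 3–4) can be memoized directly. The sketch below does this for a Beta(2, 2) prior; it is only an illustration (the computation grows quickly with the number of arms and trials), not the discretized solver used in the paper.

from functools import lru_cache

ALPHA, BETA, K, T = 2, 2, 4, 15

@lru_cache(maxsize=None)
def value(t, counts):
    """Returns (V^t, optimal arm) for belief state 'counts' = ((s_k, f_k), ...)."""
    if t == T:
        return 0.0, None
    best_v, best_k = -1.0, None
    for k in range(K):
        s, f = counts[k]
        p = (ALPHA + s) / (ALPHA + BETA + s + f)   # posterior mean reward of arm k
        win, lose = list(counts), list(counts)
        win[k], lose[k] = (s + 1, f), (s, f + 1)
        v = p * (1 + value(t + 1, tuple(win))[0]) \
            + (1 - p) * value(t + 1, tuple(lose))[0]
        if v > best_v:
            best_v, best_k = v, k
    return best_v, best_k

print(value(0, tuple((0, 0) for _ in range(K))))  # expected total reward, first choice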
3.2.2 Win-Stay-Lose-Shift
WSLS does not learn any abstract representation of the environment, and has a very simple decision
policy. It assumes that the decision-maker will keep choosing the same arm as long as it continues
to produce a reward, but shifts to other arms (with equal probabilities) following a failure to gain
reward. It starts off on the first trial randomly (equal probability at all arms).
3.2.3 ε-Greedy

The ε-greedy model assumes that decision-making is determined by a parameter ε that controls the balance between random exploration and exploitation. On each trial, with probability ε the decision-maker chooses randomly (exploration); otherwise she chooses the arm with the greatest estimated reward rate (exploitation). ε-Greedy keeps simple estimates of the reward rates, but does not track the uncertainty of those estimates. It is not sensitive to the horizon, maximizing the immediate gain at a constant rate and otherwise searching for information by random selection.
More concretely, ε-greedy adopts a stochastic policy:

    Pr(D^t = k | ε, θ^t) = (1 − ε)/M_t    if k ∈ argmax_{k'} θ_{k'}^t
                         = ε/(K − M_t)    otherwise

where M_t is the number of arms with the greatest estimated value on the tth trial.
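In code the policy is a one-line choice rule; the sketch below returns the full choice distribution (theta_hat stands for the current reward-rate estimates; the all-tied case is resolved as a uniform choice, one reasonable reading of the M_t = K situation).

import numpy as np

def eps_greedy_probs(theta_hat, eps):
    """Pr(D^t = k | eps, theta^t) under the epsilon-greedy rule above."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    best = theta_hat == theta_hat.max()
    m, K = int(best.sum()), theta_hat.size
    if m == K:                       # exploration and exploitation coincide
        return np.full(K, 1.0 / K)
    return np.where(best, (1 - eps) / m, eps / (K - m))

print(eps_greedy_probs([0.6, 0.6, 0.3, 0.2], eps=0.1))  # [0.45 0.45 0.05 0.05]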
3.2.4 Knowledge Gradient
The knowledge gradient (KG) algorithm [16] is an approximation to the optimal policy: it pretends that only one more exploratory measurement is allowed, and assumes all remaining choices will exploit what is known after the next measurement. It evaluates the expected change in each estimated reward rate, if a certain arm were to be chosen, based on the current belief state. Its approximate value function for choosing arm k on trial t given the current belief state q^t is

    v_k^{KG,t} = E[ max_{k'} θ_{k'}^{t+1} | D^t = k, q^t ] − max_{k'} θ_{k'}^t    (5)

The first term is the expected largest reward rate (the value of the subsequent exploitative choices) on the next step if the kth arm were to be chosen, with the expectation taken over all possible outcomes of choosing k; the second term is the expected largest reward given no more exploratory choices; their difference is the "knowledge gradient" of taking one more exploratory sample.

The KG decision rule is

    D^{KG,t} = argmax_k ( θ_k^t + (T − t − 1) v_k^{KG,t} )    (6)
The first term of Equation 6 denotes the expected immediate reward of choosing the kth arm on trial t, whereas the second term reflects the expected knowledge gain. The formula for calculating v_k^{KG,t} for binary bandit problems can be found in Chapter 5 of [14].
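For Bernoulli rewards with Beta beliefs, the expectation in Equation 5 involves only two outcomes for the chosen arm, so the KG decision can be computed in closed form. The sketch below does this under FBM, where each arm's belief is a Beta(a_k, b_k); it is a minimal illustration in the spirit of Chapter 5 of [14], not the fitted model.

import numpy as np

def kg_choice(a, b, t, T):
    """KG decision (Eq. 6) for Beta(a_k, b_k) beliefs at trial t of T."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean = a / (a + b)                       # current expected reward rates
    v = np.empty(len(a))
    for k in range(len(a)):
        up = (a[k] + 1) / (a[k] + b[k] + 1)  # posterior mean after a success
        down = a[k] / (a[k] + b[k] + 1)      # posterior mean after a failure
        others = np.delete(mean, k).max()
        # Eq. (5): expected next-step best minus the current best
        v[k] = mean[k] * max(up, others) + (1 - mean[k]) * max(down, others) \
               - mean.max()
    return int(np.argmax(mean + (T - t - 1) * v))

print(kg_choice([3, 2, 2, 2], [2, 2, 2, 2], t=5, T=15))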
3.3 Model Inference and Evaluation
Unlike previous modeling papers on human decision-making in the bandit setting [5, 4, 18, 12], which generally look at the average statistics of how people distribute their choices among the options, here we use a more stringent trial-by-trial measure of model agreement, i.e. how well each model captures the subject's choice. We calculate the per-trial likelihood of the subject's choice conditioned on the previously experienced actions and choices. For WSLS, it is 1 for a win-stay decision, 1/3 for a lose-shift decision (because the model predicts shifting to the other three arms with equal probability), and 0 otherwise. For probabilistic models, take ε-greedy for example: it is (1 − ε)/M if the subject chooses the option with the highest predicted reward, where M is the number of arms with the highest predicted reward; it is ε/(4 − M) for any other choice; and when M = 4, all arms are considered to have the highest predicted reward.

We use sampling to compute a posterior distribution over the following model parameters: the parameters of the prior Beta distribution (α and β) for all policies, γ for all DBM policies, and ε for ε-greedy. For this model fitting process, we infer the re-parameterization α/(α + β) and α + β, with a uniform prior on the former and a weakly informative prior on the latter, i.e. Pr(α + β) ∝ (α + β)^{−3/2}, as suggested by [9]. The re-parameterization has a psychological interpretation as the mean reward probability and the certainty. We use a uniform prior for γ and ε. Model inference uses a combined sampling algorithm, with Gibbs sampling of ε and Metropolis sampling of α, β and γ. All chains contained 3000 steps, with a burn-in size of 1000. All chains converged according to the R-hat measure [9]. We calculate the average per-trial likelihood (across trials, games, and subjects) under each model based on its maximum a posteriori (MAP) parameterization.
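Concretely, the agreement measure is just the model's predictive probability of the observed choice, averaged over trials; a minimal sketch (with hypothetical inputs) is:

import numpy as np

def mean_per_trial_likelihood(choices, predictive):
    """choices[t]: arm chosen on trial t; predictive[t]: the model's probability
    vector over arms for trial t, conditioned on the history before t."""
    return float(np.mean([predictive[t][c] for t, c in enumerate(choices)]))

probs = [np.full(4, 0.25),
         np.array([0.45, 0.45, 0.05, 0.05]),
         np.array([0.70, 0.10, 0.10, 0.10])]
print(mean_per_trial_likelihood([0, 1, 0], probs))  # (0.25 + 0.45 + 0.70) / 3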
We fit each model across all subjects, assuming that every subject shared the same prior belief about the environment (α and β), rate of exploration (ε), and rate of change (γ). For further analyses shown in the results section, we also fit the ε-greedy policy and the KG policy, together with both learning models, for each individual subject. All model inferences are based on a leave-one-out cross-validation containing 20 runs. Specifically, for each run, we train the model while withholding one game (sampled without replacement) from each subject, and test the model on the withheld game.
Figure: Average reward achieved by the KG model forward-playing the bandit problems with the same reward rates. KG achieves a similar reward distribution to the human performance, with KG playing at its maximum a posteriori probability (MAP) estimate, ε = .1 and γ = .8. KG achieves the same reward distribution as the optimal solution playing with the correct prior knowledge of the environment.

Figure 2: (a) Model agreement with data simulated by the optimal solution, measured as the average per-trial likelihood. All models (except the optimal) are fit to data simulated by the optimal solution under the correct Beta prior Beta(2, 2). Each bar shows the mean per-trial likelihood (across subjects, trials and games) of a decision policy coupled with a learning framework. For ε-greedy (eG) and KG, the error bars show the standard errors of the mean per-trial likelihood calculated across all tests in the cross-validation procedure (20-fold). WSLS does not rely on any learning framework. (b) Model agreement with human data based on a leave-one(game)-out cross-validation, where we randomly withhold one game from each subject for training, i.e. we train the model on a total of 19 × 451 games, with 19 games from each subject. For the current study, we implement the optimal policy under DBM using the estimated γ under the KG DBM model in order to reduce the computational burden. (c) Mean per-trial likelihood of the ε-greedy model (eG) and KG with individually fit parameters (for each subject), using cross-validation; the individualized DBM (ind. in the legend) assumes each person has his/her own Beta prior. (d) Trialwise agreement of eG and KG under individually fit MAP parameterization. The mean per-trial likelihood is calculated across all subjects for each trial, with the error bars showing the standard error of the mean per-trial likelihood across all tests.

4 Results

4.1 Model Agreement with the Optimal Policy

We first examine how well each of the decision policies agrees with the optimal policy on a trial-to-trial basis. Figure 2a shows the mean per-trial likelihood (averaged across all tests in the cross-validation procedure) of each model, when fit to data simulated by the optimal solution under the true design Beta(2,2). The KG algorithm, under either learning framework, is most consistent (over 90%) with the optimal algorithm (separately under FBM and DBM assumptions). This is not surprising given that KG is an approximation to the optimal policy. The inferred prior is Beta(1.93, 2.15), correctly recovering the actual environment. The simplest WSLS model, on the other hand, achieves model agreement well above 60%. In fact, the optimal model also almost always stays after a success; the only situation in which WSLS does not resemble the optimal decision occurs when it shifts away from an arm that the optimal policy would otherwise stay with. Because the optimal solution (which simulated the data) knows the true environment, DBM does not have an advantage over FBM.

4.2 Model Agreement with Human Data

Figure 2b shows the mean per-trial likelihood (averaged across all tests in the cross-validation procedure) of each model, when fit to the human data. KG with DBM outperforms the other models under consideration. The average posterior mean of γ across all tests is .81, with standard error .091. The average posterior means for α and β are .65 and 1.05, with standard errors .074 and .122, respectively. A γ value of .81 implies that the subjects behave as if they think the world changes on average about every 5 steps (calculated as 1/(1 − .81)).

We did a pairwise comparison between models on the mean per-trial likelihood of the subject's choice given each model's predictive distribution, using a pairwise t-test. The test between DBM-optimal and DBM-eG, and the test between DBM-optimal and FBM-optimal, are not significant at the .05 level. All other tests are significant. Table 1 shows the p-values for each pairwise comparison.
3 Citations, figures, tables, references
6
These instructions apply to everyone, regardless of the formatter being used.
054
055
056
057
058
059
060
061
062
063
P(stay|win)
P(shift|lose)
P(best value)
1
1
Human
Optimal
FBM KG
DBM KG0.8
FBM eG
DBM eG
WSLS 0.6
P(least known)
1
0.6
0.8
0.4
0.8
0.6
3
15
Trial
0.4
0.6
0.2
0.4
3
15
Trial
0.2
3
15
Trial
3
15
Trial
Figure
1: Averagepatterns
reward achieved
by thedata
KGand
model
playing
the bandit
problems
with
the
Figure
3: Behavioral
in the human
theforward
simulated
data from
all models.
The
four
same
reward
rates. KGprobability
achieves similar
reward
distribution
the human
withthe
KG
panels
show
the trial-wise
of staying
after
winning, as
shifting
after performance,
losing, choosing
065
playing
at its maximum
a posteriori
probability
(MAP)
estimate,
= .1exploitative
and ? = .8.choice
KG achieves
greatest
estimated
value on any
trial, choosing
the least
known
when? the
is not
066
the same
reward distribution
as are
the calculated
optimal solution
playing with
correct
knowledge
chosen,
respectively.
Probabilities
basedwhen
on simulated
data the
from
each prior
model
at their
067
the environment. and are averaged across all games and all participants. The optimal solution
MAPofparameterization,
068 shown here uses the correct prior Beta (2, 2).
069
New Roman is the preferred typeface throughout. Paragraphs are separated by 1/2 line space, with
070
no indentation.
071 optimal and DBM-eG, and the test between DBM-optimal and FBM-optimal, are not significant at
Paper
title
is other
17 point,
caps/lowerTable
case, 1bold,
centered
between
horizontal
rules.
Top rule is
072 the .05
level.
All
testsinitial
are significant.
shows
the p-values
for2each
pairwise
comparison.
4 points thick and bottom rule is 1 point thick. Allow 1/4 inch space above and below title to rules.
073
All pages should start at 1 inch (6 picas) from the top of the page.
074
Table
1: P-values
forboldface,
all pairwise
t tests.
075
For the final version, authors?
names
are set in
and each
name is centered above the correKG DB
KG FB
eG DB
eG FB
Op DB
076 KG FB
sponding
address.
The
lead
author?s
name
is
to
be
listed
first
(left-most),
co-authors?
names
eG DB eG FB Op DB Op FB eG DB eG FB Op DB Op FB eG FB
Op DB Opand
FB the
Op DB
Op FB Op
FB
.0001
.0000
.0001 are.0000
.0060
.0002one.0001
.5066 list
.0354
.1476
077 .0480(if different
address)
set to .0187
follow..0000
If there
is only
co-author,
both .0001
author .0036
and co-author
side by side.
078
079 Figure 2c shows the model agreement with human data, of ?-greedy and KG, when their parameters
Please pay special attention to the instructions in section 3 regarding figures, tables, acknowledg080 are individually
fit. KG with DBM with individual parameterization has the best performance under
ments, and references.
081 cross validation. ?-Greedy also has a great gain in model agreement when coupled with DBM.
082 In fact, under DBM, ?-greedy and KG have close performance in the overall model agreement.
2 Headings:
firstalevel
Figure 2d shows
systematic difference between the two models in their agreement with
083 However,
human
data
on
a
trial-by-trial
base: during early trials, subjects? behavior is more consistent with
084
First
level
headings
are
lower
case
first word
andKG.
proper nouns), flush left, bold and in
?-greedy,
whereas
during
later
trials,
it (except
is more for
consistent
with
085
point
size
12.
One
line
space
before
the
first
level
heading
and
1/2 line space after the first level
086 We next break down the overall behavioral performance into four finer measures: how often people
heading.
087 do win-stay and lose-shift, how often they exploit, and whether they use random selection or search
088 for the greatest amount of information during exploration. Figure 3 shows the results of model com2.1 Headings: second level
089 parisons on these additional behavioral criteria. We show the patterns of the subjects, the optimal
090 solution with Beta(2,2), KG and eG under both learning frameworks and the simplest WSLS.
The first panel, for example, shows the trialwise probability of staying with the same arm following a previous success. People do not stay with the same arm after an immediate reward, which is always the case for the optimal algorithm. Subjects also do not persistently explore, as predicted by ε-greedy. In fact, subjects explore more during early trials, and become more exploitative later on, similar to KG. As implied by Equation 5, KG calculates the probability of an arm surpassing the known best upon being chosen, and weights the knowledge gain more heavily in the early stage of the game. During the early trials, it sometimes chooses the second-best arm to maximize the knowledge gain. Under DBM, a previous success will cause the corresponding arm to appear more rewarding, resulting in a smaller knowledge gradient value; because knowledge is weighted more heavily during the early trials, the KG model then tends to choose the second-best arms that have a larger knowledge gain.
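To make this concrete, the following Python sketch (our own illustration; the precise form of Equation 5 appears earlier in the paper and may differ in detail) computes knowledge-gradient decision values for Bernoulli arms with Beta posteriors:

    import numpy as np

    def kg_scores(a, b, steps_left):
        # a[k], b[k]: Beta posterior counts for arm k; steps_left: remaining trials
        a, b = np.asarray(a, float), np.asarray(b, float)
        mu = a / (a + b)                          # posterior mean reward of each arm
        best = mu.max()
        scores = np.empty_like(mu)
        for k in range(len(mu)):
            mu_succ, mu_fail = mu.copy(), mu.copy()
            mu_succ[k] = (a[k] + 1) / (a[k] + b[k] + 1)   # mean if arm k pays off
            mu_fail[k] = a[k] / (a[k] + b[k] + 1)         # mean if it does not
            exp_best = mu[k] * mu_succ.max() + (1 - mu[k]) * mu_fail.max()
            scores[k] = mu[k] + steps_left * (exp_best - best)  # reward + knowledge gain
        return scores  # KG selects the arm argmax(scores)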
The second panel shows the trialwise probability of shifting away given a previous failure. When the horizon is approaching, it becomes increasingly important to stay with the arm that is known to be reasonably good, even if it may occasionally yield a failure. All algorithms, except for the naive WSLS algorithm, show a downward trend to shift after losing as the horizon approaches, along with human choices. ε-Greedy with DBM is closest to human behavior.

The third panel shows the probability of choosing the arm with the largest success ratio. KG under FBM mimics the optimal model in that the probability of choosing the highest success ratio increases over time; they both grossly overestimate subjects' tendency to select the highest success ratio, as well as predicting an unrealized upward trend. WSLS under-estimates how often subjects make this choice, while ε-greedy under DBM learning over-estimates it. It is KG under DBM, and ε-greedy with FBM, that are closest to subjects' behavior.
The fourth panel shows how often subjects choose to explore the least known option when they shift away from the choice with the highest expected reward. It is DBM with either KG or ε-greedy that provides the best fit.
In general, the KG model with DBM matches the second-order trend of human data the best, with
ε-greedy following closely behind. However, there still exists a gap on the absolute scale, especially
with respect to the probability of staying with a successful arm.
5 Discussion
Our analysis suggests that human behavior in the multi-armed bandit task is best captured by a
knowledge gradient decision policy supported by a dynamic belief model learning process. Human
subjects tend to explore more often than policies that optimize the specific utility of the bandit
problems, and KG with DBM attributes this tendency to the belief of a stochastically changing
environment, causing the sequential effects due to recent trial history. Concretely, we find that people
adopt a learning process that (erroneously) assumes the world to be non-stationary, and that they
employ a semi-myopic choice policy that is sensitive to the horizon but assumes one-step exploration
when comparing action values.
Our results indicate that all decision policies considered here capture human data much better under
the dynamic belief model than the fixed belief model. By assuming the world is changeable, DBM
discounts data from the distant past in favor of new data. Instead of attributing this discounting
behavior to biological limitations (e.g. memory loss), DBM explains it as the automatic engagement
of mechanisms that are critical for adapting to a changing environment. Indeed, there is previous
work suggesting that people approach bandit problems as if expecting a changing world [17]. This
is despite informing the subjects that the arms have fixed reward probabilities.
So far, our results also favor the knowledge gradient policy as the best model for human decision-making in the bandit task. It optimizes the semi-myopic goal of maximizing future cumulative
reward while assuming only one more time step of exploration and strict exploitation thereafter.
The KG model under the more general DBM has the largest proportion of correct predictions of
human data, and can capture the trial-wise dynamics of human behavior reasonably well. This
result implies that humans may use a normative way, as captured by KG, to explore by combining
immediate reward expectation and long-term knowledge gain, compared to the previously proposed
behavioral models that typically assume that exploration is random or arbitrary. In addition, KG
achieves similar behavioral patterns as the optimal model, and is computationally much less expensive (in particular being online and incurring a constant cost), making it a more plausible algorithm
for human learning and decision-making.
We observed that decision policies vary systematically in their abilities to predict human behavior
on different kinds of trials. In the real world, people might use hybrid policies to solve the bandit
problems; they might also use some smart heuristics, which dynamically adjusts the weight of the
knowledge gain to the immediate reward gain. Figure 2d suggests that subjects may be adopting
a strategy that is aggressively greedy at the beginning of the game, and then switches to a policy
that is both sensitive to the value of exploration and the impending horizon as the end of the game
approaches. One possibility is that subjects discount future rewards, which would result in a more
exploitative behavior than non-discounted KG, especially at the beginning of the game. These would
all be interesting lines of future inquiries.
Acknowledgments
We thank M Steyvers and E-J Wagenmakers for sharing the data. This material is based upon work
supported by, or in part by, the U. S. Army Research Laboratory and the U. S. Army Research Office
under contract/grant number W911NF1110391 and NIH NIDA B/START # 1R03DA030440-01A1.
References
[1] J. Banks, M. Olson, and D. Porter. An experimental analysis of the bandit problem. Economic Theory, 10:55–77, 1997.
[2] R. Bellman. On the theory of dynamic programming. Proceedings of the National Academy of Sciences, 1952.
[3] R. Cho, L. Nystrom, E. Brown, A. Jones, T. Braver, P. Holmes, and J. D. Cohen. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cognitive, Affective and Behavioral Neuroscience, 2:283–299, 2002.
[4] J. D. Cohen, S. M. McClure, and A. J. Yu. Should I stay or should I go? Exploration versus exploitation. Philosophical Transactions of the Royal Society B: Biological Sciences, 362:933–942, 2007.
[5] N. D. Daw, J. P. O'Doherty, P. Dayan, B. Seymour, and R. J. Dolan. Cortical substrates for exploratory decisions in humans. Nature, 441:876–879, 2006.
[6] A. Ejova, D. J. Navarro, and A. F. Perfors. When to walk away: The effect of variability on keeping options viable. In N. Taatgen, H. van Rijn, L. Schomaker, and J. Nerbonne, editors, Proceedings of the 31st Annual Conference of the Cognitive Science Society, Austin, TX, 2009.
[7] P. Frazier, W. Powell, and S. Dayanik. A knowledge-gradient policy for sequential information collection. SIAM Journal on Control and Optimization, 47:2410–2439, 2008.
[8] W. R. Garner. An informational analysis of absolute judgments of loudness. Journal of Experimental Psychology, 46:373–380, 1953.
[9] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall/CRC, Boca Raton, FL, 2nd edition, 2004.
[10] J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, 41:148–177, 1979.
[11] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[12] M. D. Lee, S. Zhang, M. Munro, and M. Steyvers. Psychological models of human and optimal performance in bandit problems. Cognitive Systems Research, 12:164–174, 2011.
[13] M. I. Posner and Y. Cohen. Components of visual orienting. Attention and Performance Vol. X, 1984.
[14] W. Powell and I. Ryzhov. Optimal Learning. Wiley, 1st edition, 2012.
[15] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
[16] I. Ryzhov, W. Powell, and P. Frazier. The knowledge gradient algorithm for a general class of online learning problems. Operations Research, 60:180–195, 2012.
[17] J. Shin and D. Ariely. Keeping doors open: The effect of unavailability on incentives to keep options viable. Management Science, 50:575–586, 2004.
[18] M. Steyvers, M. D. Lee, and E.-J. Wagenmakers. A Bayesian analysis of human decision-making on bandit problems. Journal of Mathematical Psychology, 53:168–179, 2009.
[19] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[20] M. C. Treisman and T. C. Williams. A theory of criterion setting with an application to sequential dependencies. Psychological Review, 91:68–111, 1984.
[21] A. J. Yu and J. D. Cohen. Sequential effects: Superstition or rational behavior? In Advances in Neural Information Processing Systems, volume 21, pages 1873–1880, Cambridge, MA, 2009. MIT Press.
[22] S. Zhang and A. J. Yu. Cheap but clever: Human active learning in a bandit setting. In Proceedings of the Cognitive Science Society Conference, 2013.
4,620 | 5,181 | Context-sensitive active sensing in humans
Sheeraz Ahmad
Department of Computer Science and Engineering
University of California San Diego
9500 Gilman Drive La Jolla, CA 92093
sahmad@cs.ucsd.edu
Angela J. Yu
Department of Cognitive Science
University of California San Diego
9500 Gilman Drive La Jolla, CA 92093
ajyu@ucsd.edu
He Huang
Department of Cognitive Science
University of California San Diego
9500 Gilman Drive La Jolla, CA 92093
heh001@ucsd.edu
Abstract
Humans and animals readily utilize active sensing, or the use of self-motion, to
focus sensory and cognitive resources on the behaviorally most relevant stimuli
and events in the environment. Understanding the computational basis of natural active sensing is important both for advancing brain sciences and for developing more powerful artificial systems. Recently, we proposed a goal-directed,
context-sensitive, Bayesian control strategy for active sensing, C-DAC (ContextDependent Active Controller) (Ahmad & Yu, 2013). In contrast to previously proposed algorithms for human active vision, which tend to optimize abstract statistical objectives and therefore cannot adapt to changing behavioral context or task
goals, C-DAC directly minimizes behavioral costs and thus, automatically adapts
itself to different task conditions. However, C-DAC is limited as a model of human
active sensing, given its computational/representational requirements, especially
for more complex, real-world situations. Here, we propose a myopic approximation to C-DAC, which also takes behavioral costs into account, but achieves
a significant reduction in complexity by looking only one step ahead. We also
present data from a human active visual search experiment, and compare the performance of the various models against human behavior. We find that C-DAC and
its myopic variant both achieve better fit to human data than Infomax (Butko &
Movellan, 2010), which maximizes expected cumulative future information gain.
In summary, this work provides novel experimental results that differentiate theoretical models for human active sensing, as well as a novel active sensing algorithm that retains the context-sensitivity of the optimal controller while achieving
significant computational savings.
1 Introduction
Both artificial and natural sensing systems face the challenge of making sense out of a continuous
stream of noisy sensory inputs. One critical tool the brain has at its disposal is active sensing, a goaldirected, context-sensitive control strategy that prioritizes sensing and processing resources toward
the most rewarding or informative aspects of the environment (Yarbus, 1967). Having a formal
understanding of active sensing is not only important for advancing neuroscientific progress but also
developing context-sensitive, interactive artificial agents.
The most well-studied aspect of human active sensing is saccadic eye movements. Early work
suggested that saccades are attracted to salient targets that differ from surround in one or more of
feature dimensions (Koch & Ullman, 1985; Itti & Koch, 2000); however, saliency has been found
to only account for a small fraction of human saccadic eye movement (Itti, 2005). More recently,
models of human active vision have incorporated top-down objectives, such as maximizing the expected future cumulative informational gain (Infomax) (Lee & Yu, 2000; Itti & Baldi, 2006; Butko &
Movellan, 2010), and maximizing the one-step look-ahead probability of finding the target (greedy
MAP)(Najemnik & Geisler, 2005). However, these are generic statistical objectives that do not
naturally adapt to behavioral context, such as changes in the relative cost of speed versus error, or
the energetic or temporal cost associated with switching from one sensing location/configuration
to another. We recently proposed the C-DAC (Context-Dependent Active Controller) algorithm
(Ahmad & Yu, 2013), which maps from Bayesian posterior beliefs about the environment into the
action space while optimizing directly with respect to context-sensitive, behavioral goals; C-DAC
was shown to result in better accuracy and lower search time, as compared to Infomax and greedy
MAP, in various simulated task environments.
In this paper, we investigate whether human behavior is better explained by taking into account
task-specific considerations, as in C-DAC, or whether it is sufficient to optimize a generic goal,
like that of Infomax. We compare C-DAC and Infomax performance to human data, in terms of
fixation choice and duration, from a visual search experiment. We exclude greedy MAP from this
comparison, based on the results from our recent work showing that it is an almost random, and thus
highly suboptimal strategy for the well-structured visual search task presented here.
At a theoretical level, both Infomax and C-DAC are offline algorithms involving iterative computation until convergence, and which compute a global policy that specifies the optimal action (relative
to their respective objectives) for every possible setting of previous actions and observations, most
of which may not be used often or at all. Both of these algorithms suffer the well-known curse
of dimensionality, and are thus difficult, if not impossible, to generalize to more complex, realworld problems. Humans seem capable of planning and decision-making in very high-dimensional
settings, while readily adapting to different behavioral context. It therefore behooves us to find a
computationally inexpensive strategy that is nevertheless context-sensitive. Here, we consider an
approximate algorithm that chooses actions online and myopically, by considering the behavioral
cost of looking only one step ahead (instead of an infinite horizon as in the optimal C-DAC policy).
In Sec. 2, we briefly summarize C-DAC and Infomax, as well as introduce the myopic approximation
to C-DAC. In Sec. 3, we describe the experiment, present the human behavioral data, and compare
the performance of different models to the human data. In Sec. 4, we simulate scenarios where CDAC and myopic C-DAC achieve a flexible trade-off between speed, accuracy and effort depending
on the task demands, whereas Infomax falls short; this forms experimentally testable predictions
for future investigations. We conclude in Sec. 5 with a discussion of the insights gained from both
the experiment and the models, as well as directions for future work.
2 The Models
In the following, we assume a basic active sensing scenario, which formally translates to a sequential
decision making process based on noisy inputs, where the observer can control both the sampling
location and duration. For example, in a visual search task, the observer controls where to look,
when to switch to a different sensing location, and when to stop searching and report the answer.
Although the framework discussed below applies to a broad range of active sensing problems, we
will use language specific to visual search for concreteness.
2.1 C-DAC
This model consists of both an inference strategy and a control/decision strategy. For inference,
we assume the observer starts with a prior belief over the latent variable (true target location), and
then updates her beliefs via Bayes rule upon receiving each new observation. The observer maintains a probability distribution over the k possible target locations, representing the corresponding
belief about the presence of the target in that location (belief state). Thus, if s is the target location (latent), λ_t := {λ_1, . . . , λ_t} is the sequence of fixation locations up to time t (known), and x_t := {x_1, . . . , x_t} is the sequence of observations up to time t (observed), the belief state and the belief update rule are:

    p_t := (P(s = 1 | x_t; λ_t), . . . , P(s = k | x_t; λ_t))
    p_t^i = P(s = i | x_t; λ_t) ∝ p(x_t | s = i; λ_t) P(s = i | x_{t−1}; λ_{t−1}) = f_{s,λ_t}(x_t) p_{t−1}^i        (1)

where f_{s,λ}(x_t) is the likelihood function, and p_0 the prior belief distribution over target location.
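For concreteness, the update in eq. 1 takes only a few lines once a likelihood is fixed; the Python sketch below (our illustration) assumes the Bernoulli likelihood later defined in eq. 7, with rate β at the fixated location:

    import numpy as np

    def update_belief(p, x, fixation, beta):
        # p: current belief over the k locations; x in {0, 1}: new observation
        # at the fixated location; beta: Bernoulli rate when the target is fixated
        p = np.asarray(p, float)
        is_fix = np.arange(len(p)) == fixation
        like = np.where(is_fix,
                        beta**x * (1 - beta)**(1 - x),      # target at fixated patch
                        (1 - beta)**x * beta**(1 - x))      # target elsewhere
        post = like * p
        return post / post.sum()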
For the decision component, C-DAC optimizes the mapping from the belief state to the action space
(continue, switch to one of the other sensing locations, stop and report the target location) with
respect to a behavioral cost function. If the target is at location s, and the observer declares it to be
at location δ, after spending τ units of time and making n_τ switches between potential target locations, then the total cost incurred is given by:

    l(τ, δ; λ_τ, s) = cτ + c_s n_τ + 1_{δ≠s}        (2)

where c is the cost per unit time, c_s is the cost per switch, and the cost of making a wrong response is 1 (since we can always make one of the costs to be unity via normalization). For any given policy π (mapping belief state to action), the expected cost is L_π := c E[τ] + c_s E[n_τ] + P(δ ≠ s). At any time t, the observer can either choose to stop and declare one of the locations to be the target, or choose to continue and look at location λ_{t+1}. Thus, the expected cost associated with stopping and declaring location i to be the target is:

    Q̄_t^i(p_t, λ_t) := E[l(t, i) | p_t, λ_t] = ct + c_s n_t + (1 − p_t^i)        (3)

And the minimum expected cost for continuing sensing at location j is:

    Q_t^j(p_t = p, λ_t) := c(t + 1) + c_s(n_t + 1_{j≠λ_t}) + min_{τ′, δ, λ_{τ′}} E[l(τ′, δ) | p_0 = p, λ_1 = j]        (4)

The value function V(p, i), or the expected cost incurred following the optimal policy (π*), starting with the prior belief p_0 = p and initial observation location λ_1 = i, is:

    V(p, i) := min_{τ, δ, λ_τ} E[l(τ, δ) | p_0 = p, λ_1 = i].        (5)
Then the value function satisfies the following recursive relation (Bellman, 1952), and the action
that minimizes the right hand side is the optimal action π*(p, k):

    V(p, k) = min{ min_i Q̄_1^i(p, k) , min_j [ c + c_s 1_{j≠k} + E[V(p′, j)] ] }        (6)
This can be solved using dynamic programming, or more specifically value iteration, whereby we
guess an initial value of the value function and iterate eq. 6 until convergence.
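As an illustration, the sketch below runs this value iteration for the special case of k = 2 locations, where the belief state reduces to a scalar p = P(s = 1); the grid discretization, nearest-grid projection, and function names are our own simplifications, not the paper's implementation:

    import numpy as np

    def cdac_value_iteration(beta, c, cs, n_grid=201, n_iter=300):
        # V[g, k]: cost-to-go when belief P(s=1) = grid[g] and fixation is k
        grid = np.linspace(0.0, 1.0, n_grid)
        V = np.zeros((n_grid, 2))
        for _ in range(n_iter):
            V_new = np.empty_like(V)
            for g, p1 in enumerate(grid):
                belief = np.array([p1, 1.0 - p1])
                stop = 1.0 - belief.max()          # stop and declare the likelier patch
                cont = np.empty(2)
                for j in range(2):                 # expected cost of fixating j next
                    ev = 0.0
                    for x in (0, 1):
                        like = np.where(np.arange(2) == j,
                                        beta**x * (1 - beta)**(1 - x),
                                        (1 - beta)**x * beta**(1 - x))
                        post = like * belief
                        px = post.sum()            # P(x | fixate j)
                        post /= px
                        gi = int(round(post[0] * (n_grid - 1)))  # snap to grid
                        ev += px * V[gi, j]
                    cont[j] = c + ev
                for k in range(2):
                    V_new[g, k] = min(stop,
                                      cont[0] + cs * (k != 0),
                                      cont[1] + cs * (k != 1))
            V = V_new
        return grid, V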
2.2 Infomax policy
Infomax (Butko & Movellan, 2010) presents a similar formulation in terms of belief state representation and Bayesian inference, however, for the control part, the goal is to maximize long term
information gain (or minimize cumulative future entropy of the posterior belief state). Thus, the
action-values, value function, and the resultant policy are:
    Q^im(p_t, j) = Σ_{t′=t+1}^T E[H(p_{t′}) | λ_{t+1} = j];   V^im(p_t, j) = min_j Q^im(p_t, j);   λ^im_{t+1} = argmin_j Q^im(p_t, j)
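The sketch below implements a one-step version of this rule (pick the fixation minimizing the expected posterior entropy after a single observation); the full policy of Butko & Movellan sums expected entropy over the whole remaining horizon, so this greedy variant is a simplification for illustration only:

    import numpy as np

    def greedy_infomax_fixation(p, beta):
        def entropy(q):
            q = q[q > 0]
            return -(q * np.log(q)).sum()
        p = np.asarray(p, float)
        k = len(p)
        expected_H = np.empty(k)
        for j in range(k):                    # candidate next fixation
            h = 0.0
            for x in (0, 1):
                like = np.where(np.arange(k) == j,
                                beta**x * (1 - beta)**(1 - x),
                                (1 - beta)**x * beta**(1 - x))
                post = like * p
                px = post.sum()               # P(x | fixate j)
                h += px * entropy(post / px)
            expected_H[j] = h
        return int(np.argmin(expected_H))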
Infomax does not directly prescribe when to stop, since there are only continuation actions and no
stopping action. A general heuristic used for such strategies is to stop when the confidence in one of
the locations being the target (the belief about that location) exceeds a certain threshold, which is a
free parameter challenging to set for any specific problem. In our recent work we used an optimistic
strategy for comparing Infomax with C-DAC by giving Infomax a stopping boundary that is fit to
the one computed by C-DAC. Here we present a novel theoretical result that gives an inner bound
of the stopping region, obviating the need to do a manual fit. The bound is sensitive to the sampling
cost c and the signal-to-noise ratio of the sensory input, and underestimates the size of the stopping
region.
Assuming that the observations are binary and Bernoulli distributed (i.i.d. conditioned on target and
fixation locations), i.e.:
    f_{s,λ}(x) = p(x | s = i; λ = j) = 1_{i=j} β^x (1 − β)^{1−x} + 1_{i≠j} (1 − β)^x β^{1−x}        (7)
We can state the following result:
Theorem 1. If p* is the solution of the equation:

    p(2β − 1)(1 − p) / (βp + (1 − β)(1 − p)) = c

where c is the cost per unit time as defined in sec. 2.1, then for all p^i > p*, the optimal action is to
stop and declare location i under the cost formulation of C-DAC.
Proof. The cost incurred for collecting each new sample is c. Therefore stopping is optimal when
the improvement in belief from collecting another sample is less than the cost incurred to collect that
sample. Formally, stopping and choosing i is optimal for the corresponding belief pi = p when:
    max_{p′∈P} p′ − p ≤ c

where P is the set of achievable beliefs starting from p. Furthermore, if we solve the above equation for equality, to find p*, then by problem construction, it is always optimal to stop for p > p* (stopping cost (1 − p) < (1 − p*)). Given the likelihood function f_{s,λ}(x) (eq. 7), we can use eq. 1 to simplify the above relation to:

    p(2β − 1)(1 − p) / (βp + (1 − β)(1 − p)) = c
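Rearranged, this is a quadratic in p, (2β − 1)p² − (2β − 1)(1 − c)p + c(1 − β) = 0, whose larger root is the threshold p*. A small Python sketch (ours) that solves it numerically:

    import numpy as np

    def stopping_threshold(beta, c):
        # larger root of (2b-1) p^2 - (2b-1)(1-c) p + c(1-b) = 0
        A = 2.0 * beta - 1.0
        roots = np.roots([A, -A * (1.0 - c), c * (1.0 - beta)])
        return float(roots[np.isreal(roots)].real.max())

    # e.g. stopping_threshold(0.68, 0.005) -> roughly 0.99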
2.3 Myopic C-DAC
This approximation attempts to optimize the contextual cost proposed in C-DAC, but only for one
step in the future. In other words, the planning is based on the inherent assumption that the next
action is the last action permissible, and so the goal is to minimize the cost incurred in this single
step. The actions thus available are, stop and declare the current location as the target, or choose
another sensing location before stopping. Similar to eq. 6, we can write the value function as:
    V(p, k) = min{ 1 − p^k , min_j [ c + c_s 1_{j≠k} + min_{l_j} (1 − E[p^{l_j}]) ] }        (8)
where j indexes the possible sensing locations, and l_j indexes the possible stopping actions for the
sensing location j.
Note that the value function computation does not involve any recursion, just a comparison between
simple-to-compute action values for different actions. For the visual search problem considered
below, because the stopping action is restricted to only the current sensing location, l_j = j, the
right-hand side simplifies to
    V(p, k) = min{ 1 − p^k , min_j [ c + c_s 1_{j≠k} + 1 − E[p^j] ] }
            = min{ 1 − p^k , min_j [ c + c_s 1_{j≠k} + 1 − p^j ] }        (9)
the last equality due to p being a martingale. It can be seen, therefore, that this myopic policy
overestimates the size of the stopping region: if there is only one step left, it is never optimal to continue
looking at the same location, since such an action would not lead to any improvement in expected
accuracy, but incur a unit cost of time c. Therefore, in the simulations below, just like for Infomax,
we set the stopping boundary for myopic C-DAC using the bound presented in Theorem 1.
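Putting the pieces together, one step of the resulting controller can be sketched as follows (eq. 9 for the continuation values, Theorem 1 for the stopping test; function and variable names are ours):

    import numpy as np

    def myopic_cdac_step(p, current, beta, c, cs, p_star):
        # p: belief over locations; current: index of the fixated location
        p = np.asarray(p, float)
        if p[current] > p_star:                   # Theorem 1 stopping bound
            return ('stop', current)
        switch_cost = cs * (np.arange(len(p)) != current)
        one_step = c + switch_cost + (1.0 - p)    # eq. 9, using E[p'_j] = p_j
        return ('fixate', int(np.argmin(one_step)))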
3 Case Study: Visual Search
In this section, we apply the different active sensing models discussed above to a simple visual
search task, and compare their performance with the observed human behavior in terms of accuracy
and fixation duration.
3.1 Visual search experiment
The task involves finding a target (the patch with dots moving to the left) amongst two distractors
(the patches with dots moving to the right), where a patch is a stimulus location possibly containing
the target. The definition of target versus distractor is counter-balanced across subjects. Fig. 1 shows
schematic illustration of the task at three time points in a trial. The display is gaze contingent, such
that only the location currently fixated is visible on the screen, allowing exact measurement of where
a subject obtains sensory input. At any time, the subject can declare the current fixation location to
be the target by pressing space bar. Target location for each trial is drawn independently from the
fixed underlying distribution (1/13, 3/13, 9/13), with the spatial configuration fixed during a block
and counter-balanced across blocks. As search behavior only systematically differed depending on
the probability of a patch containing a target, and not on its actual location, we average data across
all configurations of spatial statistics and differentiate the patches only by their prior likelihood of
containing the target; we call them patch 1, patch 3, and patch 9, respectively. The study had 11
participants, each presented with 6 blocks (counterbalanced for different likelihoods: 3! = 6), with
each block consisting of 90 trials, leading to a total of 5940 trials. Subjects were rewarded points
based on their performance, more if they got the answer correct (less if they got it wrong), and
penalized for total search time as well as the number of switches in sensing location.
Figure 1: Simple visual search task, with gaze contingent display.
3.2 Comparison of Model Predictions and Behavioral Data
In the model, we assume binary observations (eq. 7), which are more likely to be 1 if the location
contains the target, and more likely to be 0 if it contains a distractor (the probabilities sum to 1,
since the left and right-moving stimuli are statistically/perceptually symmetric). We assume that
within a block of trials, subjects learn about the spatial distribution of target location in that block
by inverting a Bayesian hidden Markov model, related to the Dynamic Belief Model (DBM) (Yu
& Cohen, 2009). This implies that the target location on each trial is generated from a categorical
distribution, whose underlying rates at the three locations are, with probability ?, the same as last
trial and, probability 1 ? ?, redrawn from a prior Dirichlet distribution. Even though the target
distribution is fixed in a block, we use DBM with ? = 0.8 to capture the general tendency of human
subjects to typically rely more on recent observations than distant ones in anticipating upcoming
stimuli. We assume that subjects choose the first fixation location on each trial as the option with
the highest prior probability of containing the target. The subsequent fixation decisions are made
following a given control policy (C-DAC, Infomax or Myopic C-DAC).
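A cheap stand-in for this inference, sufficient to convey the idea, is the common leaky-count approximation to DBM, in which past observations are exponentially discounted at rate α (the paper inverts the exact hidden Markov model; this sketch, with our variable names, is only an approximation):

    import numpy as np

    def dbm_predictive(target_history, alpha=0.8, pseudo=(1.0, 1.0, 1.0)):
        # target_history: winning patch indices (0, 1, 2) on past trials
        counts = np.array(pseudo, float)          # Dirichlet pseudo-counts
        for s in target_history:
            counts *= alpha                       # discount the distant past
            counts[s] += 1.0                      # absorb the new observation
        return counts / counts.sum()              # prior for the next trial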
We investigate how well these policies explain the emergence of a certain confirmation bias in humans: the tendency to favor the more likely (privileged) location when making a decision about
target location. We focus on this particular aspect of behavioral data because of two reasons: (1)
The more obvious aspects (e.g. where each policy would choose to fixate first) are also the more
trivial ones that all reasonable policies would display (e.g. the most probable one); (2) Confirmation
bias is a well studied, psychologically important phenomenon exhibited by humans in a variety of
choice and decision behavior (see (Nickerson, 1998), for a review), and is, therefore, important to
capture in its own right.
Figure 2: Confirmation bias in human data and model simulations. The parameters used for C-DAC
policy are (c, c_s, β) = (0.005, 0.1, 0.68). The stopping thresholds for both Infomax and myopic
C-DAC are set using the bound developed in Theorem 1. The spatial prior for each trial, used by
all three algorithms, is produced by running DBM on the actual experimental stimulus sequences
experienced by subjects. Units for fixation duration: millisecond (experiment), number of time-steps
(simulations)
Based on the experimental data (Fig. 2), we observe this bias in fixation choice and duration. Subjects are more likely to identify the 9 patch to contain the target, whether it is really there ('hits', left column) or not ('false alarms', middle column). This is not due to a potential motor bias (tendency
to assume the first fixation location contains the target, combined with first fixating the 9 patch most
often), as we only consider trials where the subject first fixates the relevant patch. The confirmation
bias is also apparent in fixation duration (right column), as subjects fixate the 9 patch shorter than
the 1 & 3 patches when it is the target (as though faster to confirm), and longer when it is not the
target (as though slower to be dissuaded). Again, only those trials where the first fixation landed
on the relevant patch are included. As shown in Figure 2, these confirmation bias phenomena are
captured by both C-DAC and myopic C-DAC, but not by Infomax.
Our results show that human behavior is best modeled by a control strategy (C-DAC or myopic CDAC) that takes into account behavior costs, e.g. related to time and switching. However, C-DAC
in its original formulation is arguably not very psychologically plausible. This is because C-DAC
requires using dynamic programming (recursing Bellman's optimality equation) offline to compute a
globally optimal policy over the continuous state space (belief state), so that the discretized state
space scales exponentially in the number of hypotheses. We have previously proposed families
of parametric and non-parametric approximations, but these still involve large representations, and
recursive solutions. On the other hand, myopic C-DAC incurs just a constant cost to compute the
policy online for only the current belief state, is consequently psychologically more plausible, and
provides a qualitative fit to the data with a simple threshold bound. We believe its performance can
be improved by using a tighter bound to approximate the stopping region. Infomax, on the other
hand, is not context sensitive, and our experiments suggest that even manually setting its threshold
to match that of C-DAC does not lead to substantial improvement in performance (not shown).
4 Model Predictions
With the addition of the parametric threshold to Infomax and myopic C-DAC, we discover the wider
disparity which we earlier observed between C-DAC and Infomax disappears for a large class of
parameter settings, since now the stopping boundary for Infomax is also context sensitive. Similar
claim holds for myopic C-DAC. However, one scenario where Infomax does not catch up to the full
context sensitivity of C-DAC, is when cost of switching from one sensing location to another comes
in to play. This is due to the rigid switching boundaries of Infomax. In contrast, myopic C-DAC
can adjust its switching boundary depending on context. We illustrate the same for the case when
(c, c_s, β) = (0.1, 0.1, 0.9) in Fig. 3.
Figure 3: Different policies for the environment (c, c_s, β) = (0.1, 0.1, 0.9), as defined on the belief
state (p1 , p2 ), under affine transform to preserve rotational symmetry. Blue: stop & declare. Green:
fixate location 1. Orange: fixate location 2. Brown: fixate location 3.
We show in Fig. 4 how the differences in policy space translate to behavioral differences in terms
of accuracy, search time, number of switches, and total behavioral cost (eq. 2). As with the previous results, we set the threshold using the bound developed in Theorem 1. Note that, as expected,
the performance of Infomax and Myopic C-DAC are closely matched on all measures for the case
cs = 0. The accuracy of C-DAC is poorer as compared to the other two, because the threshold
used for the other policies is more conservative (thus stopping and declaration happens at higher
confidence, leading to higher accuracy), but C-DAC takes less time to reach the decision. Looking
at the overall behavioral costs, we can see that although C-DAC loses in accuracy, it makes up at
other measures, leading to a comparable net cost. For the case when cs = 0.1, we notice that the
accuracy and search time are relatively unchanged for all the policies. However, C-DAC has a notable advantage in terms of number of switches, while the number of switches remain unchanged for
Infomax. This case exemplifies the context-sensitivity of C-DAC and Myopic C-DAC, as they both
reduce number of switches when switching becomes costly. When all these costs are combined we
see that C-DAC incurs the minimum overall cost, followed by Myopic C-DAC, and Infomax incurs
the highest cost due to its lack of flexibility for a changed context. Thus Myopic C-DAC, a very
simple approximation to a computationally complex policy C-DAC, still retains context sensitivity,
whereas Infomax with complexity comparable to C-DAC falls short.
Figure 4: Comparison between C-DAC, Infomax and Myopic C-DAC (MC-DAC) for two environments (c, c_s, β) = (0.005, 0, 0.68) and (0.005, 0.1, 0.68). For c_s > 0, the performance of C-DAC is
better than MC-DAC which in turn is better than Infomax.
5 Discussion
In this paper, we presented a novel visual search experiment that involves finding a target amongst
a set of distractors differentiated only by the stimulus characteristics. We found that the fixation
and choice behavior of subjects is modulated by top-down factors, specifically the likelihood of a
particular location containing the target. This suggests that any purely bottom-up, saliency based
model would be unable to fully explain human behavior. Subjects were found to exhibit a certain
confirmation bias: the tendency to systematically favor a location that is a priori judged more likely
to contain the target, compared to another location less likely to contain the target, even in the face
of identical sensory input and motor state. We showed that C-DAC, a context-sensitive policy we
recently introduced, can reproduce this bias. In contrast, a policy that aims to optimize statistical
objectives of task demands and ignores behavioral constraints (e.g. cost of time and switch), such as
Infomax (Lee & Yu, 2000; Itti & Baldi, 2006; Butko & Movellan, 2010), falls short. We proposed
a bound on the stopping threshold that allows us to set the decision boundary for Infomax, by
taking into account the time or sampling cost c, but that still does not sufficiently alleviate the
context-insensitivity of Infomax. This is most likely due to both a sub-optimal incorporation of
sampling cost and an intrinsic lack of sensitivity toward switching cost, because there is no natural
way to compare a unit of switching cost with a unit of information gain. To set the stage for future
experimental research, we also presented a set of predictions for scenarios where we expect the
various models to differ the most.
While C-DAC does a good job of matching human behavior, at least based on the behavioral metrics
considered here, we note that this does not necessarily imply that the brain implements C-DAC exactly. In particular, solving C-DAC exactly using dynamic programming requires a representational
complexity that scales exponentially with the dimensionality of the search problem (i.e. the number
of possible target locations), thus making it an impractical solution for more natural and complex
problems faced daily by humans and animals. For this reason, we proposed a myopic approximation
to C-DAC that scales linearly with search dimensionality, by eschewing a globally optimal solution that must be computed and maintained offline, in favor of an online, approximately and locally
optimal solution. This myopic C-DAC algorithm, by retaining context-sensitivity, was found to nevertheless reproduce critical fixation choice and duration patterns, such as the confirmation bias, seen
in human behavior. However, exact C-DAC was still better than myopic C-DAC at reproducing human data, leaving room for finding other approximations that explain brain computations even better.
One possibility is to find better approximations to the switching and stopping boundary, since these
together completely characterize any decision policy, and we previously showed that there might be
a systematic, monotonic relationship between the decision boundaries and the different cost parameters (Ahmad & Yu, 2013). We proposed one such bound on the stopping boundary here, and other
approximate bounds have been proposed for similar problems (Naghshvar & Javidi, 2012). Further
investigations are needed to find more inexpensive, yet context-sensitive active sensing policies,
that would not only provide a better explanation for brain computations, but yield better practical
algorithms for active sensing in engineering applications.
References
Ahmad, S., & Yu, A. (2013). Active sensing as bayes-optimal sequential decision-making. Uncertainty in Artificial Intelligence.
Bellman, R. (1952). On the theory of dynamic programming. PNAS, 38(8), 716-719.
Butko, N. J., & Movellan, J. R. (2010). Infomax control of eyemovements. IEEE Transactions on
Autonomous Mental Development, 2(2), 91-107.
Itti, L. (2005). Quantifying the contribution of low-level saliency to human eye movements in
dynamic scenes. Visual Cognition, 12(6), 1093-1123.
Itti, L., & Baldi, P. (2006). Bayesian surprise attracts human attention. In Advances in neural
information processing systems, vol. 19 (p. 1-8). Cambridge, MA: MIT Press.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual
attention. Vision Research, 40(10-12), 1489-506.
Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: towards the underlying neural
circuitry. Hum. Neurobiol.
Lee, T. S., & Yu, S. (2000). An information-theoretic framework for understanding saccadic behaviors. In Advance in neural information processing systems (Vol. 12). Cambridge, MA: MIT
Press.
Naghshvar, M., & Javidi, T. (2012). Active sequential hypothesis testing. arXiv preprint arXiv:1203.4626.
Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature,
434(7031), 387-91.
Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Review of
General Psychology, 2(2), 175.
Yarbus, A. F. (1967). Eye movements and vision. New York: Plenum Press.
Yu, A. J., & Cohen, J. D. (2009). Sequential effects: Superstition or rational behavior? Advances in
Neural Information Processing Systems, 21, 1873-80.
4,621 | 5,182 | Bellman Error Based Feature Generation using
Random Projections on Sparse Spaces
Mahdi Milani Fard, Yuri Grinberg, Amir massoud Farahmand, Joelle Pineau, Doina Precup
School of Computer Science
McGill University
Montreal, Canada
{mmilan1,ygrinb,amirf,jpineau,dprecup}@cs.mcgill.ca
Abstract
This paper addresses the problem of automatic generation of features for value
function approximation in reinforcement learning. Bellman Error Basis Functions
(BEBFs) have been shown to improve policy evaluation, with a convergence rate
similar to that of value iteration. We propose a simple, fast and robust algorithm
based on random projections, which generates BEBFs for sparse feature spaces.
We provide a finite sample analysis of the proposed method, and prove that projections logarithmic in the dimension of the original space guarantee a contraction
in the error. Empirical results demonstrate the strength of this method in domains
in which choosing a good state representation is challenging.
1 Introduction
Policy evaluation, i.e. computing the expected return of a given policy, is at the core of many reinforcement learning (RL) algorithms. In large problems, it is necessary to use function approximation
in order to perform this task; a standard choice is to hand-craft parametric function approximators,
such as a tile coding, radial basis functions or neural networks. The accuracy of parametrized policy evaluation depends crucially on the quality of the features used in the function approximator,
and thus often a lot of time and effort is spent on this step. The desire to make this process more
automatic has led to a lot of recent work on feature generation and feature selection in RL (e.g.
[1, 2, 3, 4, 5]).
An approach that offers good theoretical guarantees is to generate features in the direction of the
Bellman error of the current value estimates (Bellman Error Based features, or BEBF). Successively
adding exact BEBFs has been shown to reduce the error of a linear value function estimator at a
rate similar to value iteration, which is the best one could hope to achieve [6]. Unlike fitted value
iteration [7], which works with a fixed feature set, iterative BEBF generation gradually increases the
complexity of the hypothesis space by adding new features and thus does not diverge, as long as the
error in the generation does not cancel out the contraction effect of the Bellman operator [6]. Several
successful methods have been proposed for generating features related to the Bellman error [5, 1, 4,
6, 3]. In practice however, these methods can be computationally expensive when applied in high
dimensional input spaces.
With the emergence of more high-dimensional RL problems, it has become necessary to design and
adapt BEBF-based methods to be more scalable and computationally efficient. In this paper, we
present an algorithm that uses the idea of applying random projections specifically in very large and
sparse feature spaces (e.g. $10^5$ to $10^6$ dimensions). The idea is to iteratively project the original features into exponentially lower-dimensional spaces. Then, we apply linear regression in the smaller
spaces, using temporal difference errors as targets, in order to approximate BEBFs.
Random projections have been studied extensively in signal processing [8, 9] as well as machine
learning [10, 11, 12, 13]. In reinforcement learning, Ghavamzadeh et al. [14] have used random
projections in conjunction with LSTD and have shown that this can reduce the estimation error,
at the cost of a controlled bias. Instead of compressing the feature space for LSTD, we focus on
the BEBF generation setting, which offers better scalability and more flexibility in practice. Our
algorithm is well suited for sparse feature spaces, naturally occurring in domains with audio and
video inputs [15], and also in tile-coded and discretized spaces.
We carry out a finite sample analysis, which helps determine the sizes that should be used for the
projections. Our analysis holds for both finite and continuous state spaces and is easy to apply
with discretized or tile-coded features, which are popular in many RL applications. The proposed
method compares favourably, from a computational point of view, to many other feature extraction
methods in high dimensional spaces, as each iteration takes only poly-logarithmic time in the number
of dimensions. The method provides guarantees on the reduction of the error, yet needs minimal
domain knowledge, as we use agnostic random projections.
Our empirical analysis indicates that the proposed method provides similar results to L2 -regularized
LSTD, but scales much better in time complexity as the observed sparsity decreases. It significantly
outperforms L1 -regularized methods both in performance and computation time. The algorithm
seems robust to the choice of parameters and has small computational and memory complexity.
2 Notation and Background
Throughout this paper, column vectors are represented by lower case bold letters, and matrices are represented by bold capital letters. $|\cdot|$ denotes the size of a set, and $\mathcal{M}(\mathcal{X})$ is the set of measures on $\mathcal{X}$. $\|\cdot\|_0$ is Donoho's zero "norm", indicating the number of non-zero elements in a vector. $\|\cdot\|$ denotes the $L^2$ norm for vectors and the operator norm for matrices: $\|M\| = \sup_v \|Mv\|/\|v\|$. The Frobenius norm of a matrix is then defined as $\|M\|_F = \sqrt{\sum_{i,j} M_{i,j}^2}$. Also, we denote the Moore-Penrose pseudo-inverse of a matrix $M$ with $M^\dagger$. The weighted $L^2$ norm of a function is defined as $\|f(x)\|_{\rho(x)} = \sqrt{\int |f(x)|^2 \, d\rho(x)}$. We focus on spaces that are large, bounded and $k$-sparse. Our state is represented by a vector $x \in \mathcal{X}$ of $D$ features, having $\|x\| \le 1$. We assume that $x$ is $k$-sparse in some known or unknown basis $\Psi$: $\mathcal{X} \triangleq \{\Psi z, \text{ s.t. } \|z\|_0 \le k \text{ and } \|z\| \le 1\}$. Such spaces occur both naturally (e.g. image, audio and video signals [15]) as well as from most discretization-based methods (e.g., tile-coding).
2.1 Markov Decision Process
A Markov Decision Process (MDP) $M = (\mathcal{S}, \mathcal{A}, T, R)$ is defined by a (possibly infinite) set of states $\mathcal{S}$, a set of actions $\mathcal{A}$, a transition kernel $T : \mathcal{S} \times \mathcal{A} \to \mathcal{M}(\mathcal{S})$, where $T(\cdot|s, a)$ defines the distribution of the next state given that action $a$ is taken in state $s$, and a (possibly stochastic) bounded reward function $R : \mathcal{S} \times \mathcal{A} \to \mathcal{M}([0, R_{\max}])$. We assume discounted-reward MDPs, with the discount factor denoted by $\gamma \in [0, 1)$. At each discrete time step, the RL agent chooses an action and receives a reward. The environment then changes to a new state, according to the transition kernel.
A policy is a (possibly stochastic) function from states to actions. The value of a state $s$ for policy $\pi$, denoted by $V^\pi(s)$, is the expected value of the discounted sum of rewards ($\sum_t \gamma^t r_t$) if the agent starts in state $s$ and acts according to policy $\pi$. Let $R(s, \pi(s))$ be the expected reward at state $s$ under policy $\pi$. The value function satisfies:
$$V^\pi(s) = R(s, \pi(s)) + \gamma \int V^\pi(s') \, T(ds'|s, \pi(s)). \qquad (1)$$
Many methods have been developed for finding the value of a policy (policy evaluation) when the
transition and reward functions are known. Dynamic programming methods apply iteratively the Bellman operator $\mathcal{T}$ to an initial guess of the value function [16]:
$$\mathcal{T}V(s) = R(s, \pi(s)) + \gamma \int V(s') \, T(ds'|s, \pi(s)). \qquad (2)$$
When the transition and reward models are not known, one can use a finite sample set of transitions
to learn an approximate value function. When the state space is very large or continuous, the value
function is also approximated using a feature vector xs , which is a function of the state s. Often,
this approximation is linear: V (s) ? wT xs . To simplify the derivations, we use V (x) to directly
refer to the value estimate of a state with feature vector x.
Least-squares temporal difference learning (LSTD) and its variations [17, 18, 19] are among methods that learn a value function based on a finite sample, especially when function approximation is
needed. LSTD-type methods are efficient in their use of data, but can be computationally expensive,
as they rely on inverting a large matrix. Using LSTD in spaces induced by random projections is a
way of dealing with this problem [14]. As we show in our experiments, if the observation space is
sparse, we can also use conjugate gradient descent methods to solve the regularized LSTD problem.
Stochastic gradient descent methods are alternatives to LSTD in high-dimensional state spaces, as
their memory and computational complexity per time step are linear in the number of state features,
while providing convergence guarantees [20]. However, online gradient-type methods typically have
slow convergence rates and do not make efficient use of the data.
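For reference, the least-squares fixed point that LSTD solves can be written in a few lines; the sketch below is a generic batch implementation with a small ridge term of our own choosing (the data arrays are placeholders, not code from any cited work):

```python
import numpy as np

def lstd(X, Xn, r, gamma=0.99, ridge=1e-6):
    """Batch LSTD: solve A w = b with A = sum_t x_t (x_t - gamma x_{t+1})^T
    and b = sum_t x_t r_t, so that V(x) ~= w @ x."""
    X, Xn, r = np.asarray(X), np.asarray(Xn), np.asarray(r)
    A = X.T @ (X - gamma * Xn)                  # d x d
    b = X.T @ r                                 # d
    # A small ridge term keeps A invertible on short trajectories.
    return np.linalg.solve(A + ridge * np.eye(A.shape[0]), b)
```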
2.2 Bellman Error Based Feature Generation
In high-dimensional state spaces, direct estimation of the value function fails to provide good results
when using a small number of sampled transitions. Feature selection/extraction methods have thus
been used to build better approximation spaces for the value functions [1, 2, 3, 4, 5]. Among these,
we focus on methods that aim to generate features in the direction of the Bellman error, defined as:
$$e_V(\cdot) = \mathcal{T}V(\cdot) - V(\cdot). \qquad (3)$$
Let $S_n = ((x_t, r_t)_{t=1}^n)$ be a random sample of size $n$, collected on an MDP with a fixed policy. Given an estimate $V$ of the value function, temporal difference (TD) errors are defined to be:
$$\delta_t = r_t + \gamma V(x_{t+1}) - V(x_t). \qquad (4)$$
It is easy to show that the expectation of the temporal difference at xt equals the Bellman error at
that point [16]. TD-errors are thus proxies to estimating the Bellman error.
Using temporal differences, Menache et al. [21] introduced two algorithms to construct basis functions for linear function approximation. Keller et al. [3] applied neighbourhood component analysis
as a dimensionality reduction technique to construct a low dimensional state space based on the TD-error. In their work, they iteratively add features that would help predict the Bellman error. Parr et al.
[6] later showed that any BEBF extraction method with small angular error will provably tighten the
approximation error of the value function estimate. Online BEBF extraction methods have also been
studied in the RL literature. The incremental Feature Dependency Discovery (iFDD) is a fast online
algorithm to extract non-linear binary features for linear function approximation [5].
We note that these algorithms, although theoretically interesting, are difficult to apply to very large
state spaces or need specific domain knowledge to generate good features. The problem lies in
the large estimation error when predicting BEBFs in high-dimensional state spaces. Our proposed
solution leverages the use of simple random projections to alleviate this problem.
2.3 Random Projections and Inner Product
Random projections have been introduced in signal processing, as an efficient method for compressing very high-dimensional signals (such as images or video). It is well known that random projections of appropriate sizes preserve enough information to exactly reconstruct the original signal with
high probability [22, 9]. This is because random projections are norm and distance-preserving in
many classes of feature spaces.
There are several types of random projection matrices that can be used. In this work, we assume that
each entry in the projection matrix $\Phi_{D \times d}$ is an i.i.d. sample from a Gaussian distribution:
$$\Phi_{i,j} \sim \mathcal{N}(0, 1/d). \qquad (5)$$
Recently, it has been shown that random projections of appropriate sizes preserve linearity of a target
function on sparse feature spaces. A bound introduced in [11] and later tightened by [23] shows that
if a function is linear in a sparse space, it is almost linear in an exponentially smaller projected space.
An immediate lemma based on Theorem 2 of [23] bounds the bias induced by random projections:
Lemma 1. Let $\mathcal{X}$ be a $D$-dimensional $k$-sparse space and $\Phi_{D \times d}$ be a random projection according to Eqn 5. Fix $w \in \mathbb{R}^D$ and $1 > \xi_0 > 0$. Then, for $\epsilon_{prj}^{(\xi_0)} = \sqrt{\frac{48k}{d} \log \frac{4D}{\xi_0}}$, with probability $> 1 - \xi_0$:
$$\forall x \in \mathcal{X} : \left| (\Phi^T w)^T (\Phi^T x) - w^T x \right| \le \epsilon_{prj}^{(\xi_0)} \|w\| \|x\|. \qquad (6)$$
Hence, projections of size $\tilde{O}(k \log D)$ preserve the linearity up to an arbitrary constant. Along with
the analysis of the variance of the estimators, this helps bound the prediction error of the linear fit in
the compressed space.
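As a concrete illustration of the regression step this lemma supports, the following sketch (our own toy example; the dimensions, sparsity and noise level are arbitrary stand-ins) projects k-sparse inputs through a Gaussian matrix drawn as in Eqn 5 and fits OLS in the compressed space:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n, k = 20_000, 200, 5_000, 40      # ambient dim, compressed dim, samples, sparsity

# Gaussian projection as in Eqn 5: each entry i.i.d. N(0, 1/d).
Phi = rng.normal(0.0, np.sqrt(1.0 / d), size=(D, d))

# Synthetic inputs, all k-sparse on a shared support, and a linear target
# (stand-ins for the features x_t and the Bellman error w^T x).
support = rng.choice(D, size=k, replace=False)
Z = rng.normal(size=(n, k))              # nonzero entries of each x_t
w_true = rng.normal(size=k)              # target is linear in the sparse coordinates
y = Z @ w_true + 0.01 * rng.normal(size=n)

# Compressed features: since each x_t is supported on `support`,
# x_t^T Phi only touches the corresponding rows of Phi.
XPhi = Z @ Phi[support, :]               # n x d, never materializes the n x D matrix

# OLS in the compressed space: u = (X Phi)^+ y.
u, *_ = np.linalg.lstsq(XPhi, y, rcond=None)
pred = XPhi @ u
print(np.linalg.norm(pred - y) / np.linalg.norm(y))   # small residual despite d << D
```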
3 Compressed Linear BEBFs
In this work, we propose a new method to generate BEBFs using linear regression in a small space
induced by random projections. We first project the state features into a much smaller space and
then regress a hyperplane to the TD-errors. For simplicity, we assume that regardless of the current
estimate of the value function, the Bellman error is always linearly representable in the original feature space. This seems like a strong assumption, but is true, for example, in virtually any discretized
space, and is also likely to hold in very high dimensional feature spaces.¹
Linear function approximators can be used to estimate the value of a given state. Let $V_m$ be an estimated value function described in a linear space defined by a feature set $\{\phi_1, \ldots, \phi_m\}$. Parr et al. [6] show that if we add a new BEBF $\phi_{m+1} = e_{V_m}$ to the feature set, then (with mild assumptions) the approximation error on the new linear space shrinks by a factor of $\gamma$. They also show that if we can estimate the Bellman error within a constant angular error, $\cos^{-1}(\epsilon)$, the error will still shrink.
Estimating the Bellman error by regressing to temporal differences in high-dimensional sparse
spaces can result in large prediction error. This is due to the large estimation error of regression
in high dimensional spaces (over-fitting). However, as discussed in Lemma 1, random projections
were shown to exponentially reduce the dimension of a sparse feature space, only at the cost of a
controlled constant bias. A variance analysis along with proper mixing conditions can also bound
the estimation error due to the variance in MDP returns. The computational cost of the estimation is
also much smaller when the regression is applied in the compressed space.
3.1 General CBEBF Algorithm
In light of these results, we propose the Compressed Bellman Error Based Feature Generation algorithm (CBEBF). The algorithm iteratively constructs new features using compressed linear regression to the TD-errors, and uses these features with a policy evaluation algorithm to update the
estimate of the value function.
Algorithm 1 Compressed Bellman Error Based Feature Generation (CBEBF)
Input: Sample trajectory $S_n = ((x_t, r_t)_{t=1}^n)$, where $x_t$ is the observation received at time $t$, and $r_t$ is the observed reward; Number of BEBFs: $m$; Projection size schedule: $d_1, d_2, \ldots, d_m$
Output: $V(\cdot)$: estimate of the value function
Initialize $V(\cdot)$ to be 0 for all $x$.
Initialize the set of BEBF linear weights $\Xi \leftarrow \emptyset$.
for $i \leftarrow 1$ to $m$ do
    Generate projection $\Phi_{D \times d_i}$ according to Eqn 5.
    Calculate TD-errors: $\delta_t = r_t + \gamma V(x_{t+1}) - V(x_t)$.
    Apply compressed regression: let $u_{d_i \times 1}$ be the result of OLS regression in the compressed space, using $\Phi^T x_t$ as inputs and $\delta_t$ as outputs.
    Add $\Phi u$ to $\Xi$.
    Apply policy evaluation with features $\{f_v(x) = x^T v \mid v \in \Xi\}$ to update $V(\cdot)$.
end for
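The sketch below is our own paraphrase of this loop under simplifying assumptions: the data is a fixed batch of transitions, and the final policy-evaluation step is approximated by refitting a linear model on the generated features with a one-step Bellman target (the algorithm itself leaves the choice of policy-evaluation method open):

```python
import numpy as np

def cbebf(X, Xn, r, gamma, proj_sizes, rng=None):
    """X, Xn: n x D matrices of current/next state features; r: n rewards."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, D = X.shape
    bebfs = []                                  # BEBF weight vectors Phi @ u
    V, Vn = np.zeros(n), np.zeros(n)            # value estimates at x_t, x_{t+1}
    for d in proj_sizes:
        delta = r + gamma * Vn - V              # TD-errors (Eqn 4)
        Phi = rng.normal(0.0, np.sqrt(1.0 / d), size=(D, d))
        u, *_ = np.linalg.lstsq(X @ Phi, delta, rcond=None)   # compressed OLS
        bebfs.append(Phi @ u)
        B = np.column_stack(bebfs)              # D x m lifted feature weights
        # Policy-evaluation step (a simplified choice): one fitted Bellman
        # backup on the current feature set.
        w, *_ = np.linalg.lstsq(X @ B, r + gamma * Vn, rcond=None)
        V, Vn = X @ B @ w, Xn @ B @ w
    return bebfs
```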
The optimal number of BEBFs and the schedule of projection sizes need to be determined and are
subjects of future work. But we show in the next section that logarithmic size projections should be
enough to guarantee the reduction of error in value function prediction at each step. This makes the
algorithm very attractive when it comes to computational and memory complexity, as the regression
at each step is only on a small projected feature space. As we discuss in our empirical analysis, the
algorithm is fast and robust with respect to the selection of parameters.
¹ For the more general case, the analysis can be done with respect to the projected Bellman error [6]. We assume linearity of the Bellman error to simplify the derivations.
3.2 Simplified CBEBF as Regularized Value Iteration
Note that in CBEBF, we can use any type of value function approximation to estimate the value function in each iteration. To simplify the bias?variance analysis and avoid multiple levels of regression,
we present here a simplified version of the CBEBF algorithm (SCBEBF). In the simplified version,
instead of storing the features in each iteration, new features are added to the value function approximator with constant weight 1. Therefore, the value estimate is simply the sum of all generated
BEBFs. As compared to the general CBEBF, the simplified version trivially has lower computational complexity per iteration, as it avoids an extra level of regression based on the features. It also
avoids storing the features by simply keeping the sum of all previously generated coefficients.
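In code, the simplification amounts to keeping a single running weight vector instead of a feature list; one iteration might look like the following hedged sketch (same batch setting and naming as the CBEBF sketch above):

```python
import numpy as np

def scbebf_step(X, Xn, r, w_total, gamma, d, rng):
    """One SCBEBF iteration: the new BEBF enters with weight 1, so only the
    sum of all generated coefficients is stored, and V(x) = x @ w_total."""
    delta = r + gamma * (Xn @ w_total) - (X @ w_total)    # TD-errors
    Phi = rng.normal(0.0, np.sqrt(1.0 / d), size=(X.shape[1], d))
    u, *_ = np.linalg.lstsq(X @ Phi, delta, rcond=None)   # compressed OLS
    return w_total + Phi @ u
```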
It is important to note that once we use linear value function approximation, the entire BEBF generation process can be viewed as a regularized value iteration algorithm. Each iteration of the algorithm
is a regularized Bellman backup which is linear in the features. The coefficients of this linear backup
are confined to a lower-dimensional random subspace implicitly induced by the random projection
used in each iteration.
3.3 Finite Sample Analysis of Simplified CBEBF
This section provides a finite sample analysis of the simplified CBEBF algorithm. In order to provide
such analysis, we need to have an assumption on the range of observed TD-errors. This is usually
possible by assuming that the current estimate of the value function is bounded, which is easy to
enforce by truncating any estimate of the value function between 0 and $V_{\max} = R_{\max}/(1 - \gamma)$.
The following theorem shows how well we can estimate the Bellman error by regression to the TD-errors in a compressed space. It highlights the bias-variance trade-off with respect to the choice of the projection size.
Theorem 2. Let $\Phi_{D \times d}$ be a random projection according to Eqn 5. Let $S_n = ((x_t, r_t)_{t=1}^n)$ be a sample trajectory collected on an MDP with a fixed policy with stationary distribution $\rho$, in a $D$-dimensional $k$-sparse feature space, with $D > d \ge 10$. Let $\tau$ be the forgetting time of the chain (defined in the appendix). Fix any estimate $V$ of the value function, and the corresponding TD-errors $\delta_t$'s bounded by $\pm \delta_{\max}$. Assume that the Bellman error is linear in the features with parameter $w$. With compressed OLS regression we have $w_{ols}^{(\Phi)} = (X\Phi)^\dagger \delta$, where $X$ is the matrix containing the $x_t$'s and $\delta$ is the vector of TD-errors. Assume that $X$ is of rank larger than $d$. For any fixed $0 < \xi < 1/4$, with probability no less than $1 - \xi$, the prediction error $\left\| x^T \Phi w_{ols}^{(\Phi)} - e_V(x) \right\|_{\rho(x)}$ is bounded by:
$$12\,\epsilon_{prj}^{(\xi/4)} \|w\| \|x\|_\rho \sqrt{\frac{1}{d\nu}} + 4\,\epsilon_{prj}^{(\xi/4)} \|w\| \sqrt{\frac{d\kappa}{n\nu} \log \frac{d}{\xi}} + 2\,\zeta \delta_{\max} \|x\|_\rho \sqrt{\frac{\kappa d}{n\nu} \log \frac{\tau d}{\xi}} \qquad (7)$$
where $\epsilon_{prj}^{(\xi/4)}$ is according to Lemma 1, $\kappa$ and $\nu$ are the condition number and the smallest positive eigenvalue of the empirical gram matrix $\frac{1}{n} \Phi^T X^T X \Phi$, and we define the maximum norm scaling factor $\zeta = \max(1, \max_{z \in \mathcal{X}} \|z^T \Phi\| / \|z\|)$.
A detailed proof is included in the appendix. The sketch of the proof is as follows: Lemma 1
suggests that if the Bellman error is linear in the original features, the bias due to the projection can
be bounded within a controlled constant error with logarithmic size projections. If the Markov chain
uniformly quickly forgets its past, one can also bound the on-measure variance part of the error. The
variance terms, of course, go to 0 as the number of sampled transitions n goes to infinity.
Theorem 2 can be further simplified by using concentration bounds on random projections as defined
in Eqn 5. The norm of $\Phi$ can be bounded using the bounds discussed in Candès and Tao [8]; we have, with probability $1 - \delta_\Phi$:
$$\|\Phi\| \le \sqrt{D/d} + \sqrt{(2 \log(2/\delta_\Phi))/d} + 1 \quad \text{and} \quad \|\Phi^\dagger\| \le \left[ \sqrt{D/d} - \sqrt{(2 \log(2/\delta_\Phi))/d} - 1 \right]^{-1}.$$
Similarly, when $n > d$, we expect the smallest and biggest singular values of $X\Phi$ to be of order $\tilde{O}(\sqrt{n/d})$. Thus we have $\kappa = O(1)$ and $\nu = O(1/d)$. Projections are norm-preserving and thus $\zeta \simeq 1$. Assuming that $n = \tilde{O}(d^2)$, we can rewrite the bound on the error, up to logarithmic terms, as:
$$\tilde{O}\left( \|w\| \|x\|_{\rho(x)} \sqrt{k \log D / d} \right) + \tilde{O}\left( d / \sqrt{n} \right). \qquad (8)$$
combined variance terms that shrink with larger training sets (estimation error). We clearly observe
the trade-off with respect to the compressed dimension $d$. With the assumptions discussed above, we can see that a projection of size $d = \tilde{O}(k \log D)$ should be enough to guarantee an arbitrarily small bias, as long as $\|w\| \|x\|_{\rho(x)}$ is small. Thus, the bound is tight enough to prove a reduction in the error as new BEBFs are added to the feature set.
Note that this bound matches that of Ghavamzadeh et al. [14]. The variance term is of order $\sqrt{d/(n\nu)}$. Thus, the dependence on the smallest eigenvalue of the gram matrix makes the variance term of order $d/\sqrt{n}$ rather than the expected $\sqrt{d/n}$. We expect the use of ridge regression instead of OLS in the inner loop of the algorithm to remove this dependence and help with the convergence rate (see appendix).
As mentioned before, our simplified version of the algorithm does not store the generated BEBFs
(such that it could later apply value function approximation over them). It adds up all the features
with weight 1 to approximate the value function. Therefore our analysis is different from that of
Parr et al. [6]. The following lemma (simplification of results in Parr et al. [6]) provides a sufficient
condition for the shrinkage of the error in the value function prediction:
Lemma 3. Let $V^\pi$ be the value function of a policy $\pi$ imposing stationary measure $\rho$, and let $e_V$ be the Bellman error under policy $\pi$ for an estimate $V$. Given a BEBF $\phi$ satisfying:
$$\|\phi(x) - e_V(x)\|_{\rho(x)} \le \epsilon \|e_V(x)\|_{\rho(x)}, \qquad (9)$$
we have that:
$$\|V^\pi(x) - (V(x) + \phi(x))\|_{\rho(x)} \le (\gamma + \epsilon + \gamma\epsilon) \|V^\pi(x) - V(x)\|_{\rho(x)}. \qquad (10)$$
Theorem 2 (simplified in Equation (8)) does not state the error in terms of $\|e_V(x)\|_\rho = \|w^T x\|_\rho$, as needed by this lemma, but rather does it in terms of $\|w\| \|x\|_\rho$. Therefore, if there is a large gap between these terms, we cannot expect to see shrinkage in the error (we can only show that the error can be shrunk to a bounded uncontrolled constant). Ghavamzadeh et al. [14] and Maillard and Munos [10, 12] provide some discussion on the cases where $\|w^T x\|_\rho$ and $\|w\| \|x\|_\rho$ are expected to be close. These cases include when the features are rescaled orthonormal basis functions, and also with specific classes of wavelet functions.
The dependence on the norm of w is conjectured to be tight by the compressed sensing literature [24], making this bound asymptotically the best one can hope for. This dependence also points
out an interesting link between our method and L2 -regularized LSTD. We expect ridge regression to
be favourable in cases where the norm of the weight vector is small. The upper bound on the error
of compressed regression is also smaller when the norm of w is small.
Lemma 4. Assume the conditions of Theorem 2. Further assume for some constants $c_1, c_2, c_3 \ge 1$:
$$\|w\| \le c_1 \|w^T x\|_\rho \quad \text{and} \quad \|x\|_\rho \le c_2 \|w^T x\|_\rho \quad \text{and} \quad 1/\nu \le c_3 d. \qquad (11)$$
There exist universal constants $c_4$ and $c_5$, such that for any $\gamma < \epsilon_0 < 1$ and $0 < \xi < 1/4$, if:
$$d \ge c_1^2 c_2^2 c_3^2 c_4 \left( \frac{1+\gamma}{\epsilon_0 - \gamma} \right)^2 k \log \frac{D}{\xi} \quad \text{and} \quad n \ge \left( \kappa + \zeta^2 c_2^2 c_3^2 \delta_{\max}^2 \tau \right) c_5 \left( \frac{1+\gamma}{\epsilon_0 - \gamma} \right)^2 d^2 \log \frac{d}{\xi},$$
then with the addition of the estimated BEBF, we have that with probability $1 - \xi$:
$$\|V^\pi(x) - (V(x) + \phi(x))\|_{\rho(x)} \le \epsilon_0 \|V^\pi(x) - V(x)\|_{\rho(x)}. \qquad (12)$$
The proof is included in the appendix.
Lemma 4 shows that with enough sampled transitions, using random projections of size $d = \tilde{O}\left( \left( \frac{1+\gamma}{\epsilon_0 - \gamma} \right)^2 k \log D \right)$ guarantees contraction in the error by a factor of $\epsilon_0$. Using a union bound over $m$ iterations of the algorithm, we prove that projections of size $d = \tilde{O}\left( \left( \frac{1+\gamma}{\epsilon_0 - \gamma} \right)^2 k \log(mD) \right)$ and a sample of transitions of size $n = \tilde{O}\left( \left( \frac{1+\gamma}{\epsilon_0 - \gamma} \right)^2 d^2 \log(md) \right)$ suffice to shrink the error by a factor of $\epsilon_0^m$ after $m$ iterations.
4 Empirical Analysis
We conduct a series of experiments to evaluate the performance of our algorithm and compare it
against viable alternatives. Experiments are performed using a simulator that models an autonomous
helicopter in the flight regime close to hover [25]. Our goal is to evaluate the value function associated with the manually tuned policy provided with the simulator. We let the helicopter free fall
for 5 time-steps before the policy takes control. We then collect 100 transitions while the helicopter
hovers. We run this process multiple times to collect more trajectories on the policy.
The original state space of the helicopter domain consists of 12 continuous features. 6 of these
features corresponding to the velocities and position, capture most of the data needed for policy
evaluation. We use tile-coding on these 6 features as follows: 8 randomly positioned grids of size
16 × 16 × 16 are placed over forward, sideways and downward velocity. 8 grids of similar structure
are placed on features corresponding to the hovering coordinates. The constructed feature space is
thus of size 65536. Note that our choice of tile-coding for this domain is for demonstration purposes.
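For concreteness, here is a sketch of this style of sparse encoding; the offsets, value ranges and binning below are illustrative stand-ins rather than the exact configuration used in the experiments:

```python
import numpy as np

def tile_code(state, grids=8, bins=16, low=-1.0, high=1.0, rng=None):
    """Encode a low-dimensional state as a sparse binary vector using `grids`
    randomly offset grids with bins**dim cells each; one active cell per grid."""
    if rng is None:
        rng = np.random.default_rng(0)
    state = np.asarray(state, dtype=float)
    dim = state.size
    width = (high - low) / bins
    offsets = rng.uniform(0.0, width, size=(grids, dim))  # random grid shifts
    x = np.zeros(grids * bins ** dim)
    for g in range(grids):
        idx = np.clip(((state - low + offsets[g]) / width).astype(int), 0, bins - 1)
        cell = int(np.ravel_multi_index(idx, (bins,) * dim))
        x[g * bins ** dim + cell] = 1.0                   # one active cell per grid
    return x

x = tile_code([0.1, -0.3, 0.5])   # toy 3-d state -> 8 * 16^3 sparse binary features
```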
Since the true value function is not known in our case, we evaluate the performance of the algorithm by measuring the normalized return prediction error (NRPE) on a large test set. Let $U(x_i)$ be the empirical return observed for $x_i$ in a testing trajectory, and $\bar{U}$ be its average over the testing measure $\rho(x)$. We define $\text{NRPE}(V) = \|U(x) - V(x)\|_{\rho(x)} / \|U(x) - \bar{U}\|_{\rho(x)}$. Note that the best constant predictor has NRPE = 1.
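The metric is straightforward to compute; in the sketch below the testing measure is approximated by equal weights on the test points, a simplifying assumption:

```python
import numpy as np

def nrpe(U, V):
    """Normalized return prediction error: ||U - V|| / ||U - mean(U)||,
    with the testing measure rho approximated by uniform weights."""
    U, V = np.asarray(U, dtype=float), np.asarray(V, dtype=float)
    return np.linalg.norm(U - V) / np.linalg.norm(U - U.mean())
```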
We start by an experiment to observe the behaviour of the prediction error in SCBEBF as we run
more iterations of the algorithm. We collect 3000 sample transitions for training. We experiment
with 3 schedules for the projection size: (1) Fix d = 300 for 300 steps. (2) Fix d = 30 for 300 steps.
(3) Let d decrease with each iteration i: $d = \lfloor 300\, e^{-i/30} \rfloor$. Figure 1 (left) shows the error averaged
over 5 runs. When d is fixed to a large number, the prediction error drops rapidly, but then rises due
to over-fitting. This problem can be mitigated by using a smaller fixed projection size at the cost of
slower convergence. In our experiments, we find a gradual decreasing schedule to provide fast and
robust convergence with minimal over-fitting effects.
[Figure 1: Left panel plots NRPE against iteration for projection-size schedules d = 300, d = 30 and d = 300·exp(−i/30). Right panel plots NRPE against sample size for L1-LSTD, L2-LSTD, CLSTD and SCBEBF.]
Figure 1: Left: NRPE of SCBEBF for different number of projections, under different choices of
d, averaged over 5 runs. Right: Comparison of the prediction error of different methods for varying
sample sizes. 95% confidence intervals are tight (less than 0.005 in width) and are not shown.
We next compare SCBEBF against other alternatives. There are only a few methods that can be
compared against our algorithm due to the high dimensional feature space. We compare against
Compressed LSTD (CLSTD) [14], L2 -Regularized LSTD using a Biconjugate gradient solver (L2LSTD), and L1 -Regularized LSTD using LARS-TD [2] with a Biconjugate gradient solver in the
inner loop (L1-LSTD). These conjugate gradient solvers exploit the sparsity of the feature space to
converge faster to the solution of linear equations [26]. We avoided online and stochastic gradient
type methods as they are not very efficient in sample complexity.
We compare the described methods while increasing the size of the training set. The projection
schedule for SCBEBF is set to $d = \lfloor 500\, e^{-i/300} \rfloor$ for all sample sizes. The regularization parameter
of L2-LSTD was chosen among a small set of values using 1/5 of the training data as validation set.
Due to memory and time constraints, the optimal choice of parameters could not be set for CLSTD
and L1-LSTD. The maximum size of projection for CLSTD and the maximum number of non-zero
coefficients for L1-LSTD was set to 3000. CLSTD would run out of memory and L1-LSTD would
take multiple hours if we increase these limits.
The results, averaged over 5 runs, are shown in Figure 1 (right). We see that L2-LSTD outperforms
other methods, closely followed by SCBEBF. Not surprisingly, L1-LSTD and CLSTD are not competitive here as they are suboptimal with the mentioned constraints. This is a consequence of the
fact that these algorithms scale worse with respect to memory and time complexity.
We conjecture that L2-LSTD is benefiting from the sparsity of the features space, not only in running
time (due to the use of conjugate gradient solvers), but also in sample complexity. This makes L2LSTD an attractive choice when the features are observed in the sparse basis. However, if the
features are sparse in some unknown basis (observation is not sparse), then the time complexity of
any linear solver in the observation basis can be prohibitive. SCBEBF, however, scales much better
in such cases as the main computation is done in the compressed space.
[Figure 2: CPU time (seconds) against the number of non-zero features, comparing L2-LSTD and SCBEBF.]
Figure 2: Runtime of L2-LSTD and SCBEBF with varying observation sparsity.
To highlight this effect, we construct an experiment in which we gradually increase the number of
non-zero features using a change of basis. The error of both L2-LSTD and SCBEBF remain mostly
unchanged as predicted by the theory. We thus only compare the running times as we change the
observation sparsity. Figure 2 shows the CPU time used by each methods with sample size of 3000,
averaged over 5 runs (using Matlab on a 3.2GHz Quad-Core Intel Xeon processor). We run 100
iterations of SCBEBF with $d = \lfloor 300\, e^{-i/30} \rfloor$ (as in the first experiment), and set the regularization
parameter of L2-LSTD to the optimal value. We can see that the running time L2-LSTD quickly
becomes prohibitive with the decreased observation sparsity, whereas the running time of SCBEBF
grows very slowly (and linearly).
5 Discussion
We provided a simple, fast and robust feature extraction algorithm for policy evaluation in sparse
and high dimensional state spaces. Using recent results on the properties of random projections, we
proved that in sparse spaces, random projections of sizes logarithmic in the original dimension are
sufficient to preserve linearity. Therefore, BEBFs can be generated on compressed spaces induced
by small random projections. Our finite sample analysis provides guarantees on the reduction in
prediction error after the addition of such BEBFs.
Our assumption of the linearity of the Bellman error in the original feature space might be too strong
for some problems. We introduced this assumption to simplify the analysis. However, most of the
discussion can be rephrased in terms of the projected Bellman error, and we expect this approach to
carry through and provide more general results (e.g. see Parr et al. [6]).
Compared to other regularization approaches to RL [2, 27, 28], our random projection method does
not require complex optimization, and thus is faster and more scalable. If features are observed in
the sparse basis, then conjugate gradient solvers can be used for regularized value function approximation. However, CBEBF seems to have better performance with smaller sample sizes and provably
works under any observation basis.
Finding the optimal choice of the projection size schedule and the number of iterations is an interesting subject of future research. We expect the use of cross-validation to suffice for the selection
of the optimal parameters, due to the robustness that we observed in the results of the algorithm. A
tighter theoretical bound might also help provide an analytical, closed-form answer to how parameters should be selected. One would expect a slow reduction in the projection size to be favourable.
Acknowledgements: Financial support for this work was provided by Natural Sciences and Engineering Research Council Canada, through their Discovery Grants Program.
References
[1] D. Di Castro and S. Mannor. Adaptive bases for reinforcement learning. Machine Learning and Knowledge Discovery in Databases, pages 312?327, 2010.
[2] J.Z. Kolter and A.Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In International Conference on Machine Learning, 2009.
[3] P.W. Keller, S. Mannor, and D. Precup. Automatic basis function construction for approximate dynamic
programming and reinforcement learning. In International Conference on Machine Learning, 2006.
[4] P. Manoonpong, F. W?org?otter, and J. Morimoto. Extraction of reward-related feature space using
correlation-based and reward-based learning methods. Neural Information Processing. Theory and Algorithms, pages 414?421, 2010.
[5] A. Geramifard, F. Doshi, J. Redding, N. Roy, and J.P. How. Online discovery of feature dependencies. In
International Conference on Machine Learning, 2011.
[6] R. Parr, C. Painter-Wakefield, L. Li, and M. Littman. Analyzing feature generation for value-function
approximation. In International Conference on Machine Learning, 2007.
[7] J. Boyan and A.W. Moore. Generalization in reinforcement learning: Safely approximating the value
function. In Advances in Neural Information Processing Systems, 1995.
[8] E.J. Cand`es and T. Tao. Near-optimal signal recovery from random projections: Universal encoding
strategies. Information Theory, IEEE Transactions on, 52(12):5406?5425, 2006.
[9] E.J. Cand`es and M.B. Wakin. An introduction to compressive sampling. Signal Processing Magazine,
IEEE, 25(2):21?30, 2008.
[10] O.A. Maillard and R. Munos. Linear regression with random projections. Journal of Machine Learning
Research, 13:2735?2772, 2012.
[11] M.M. Fard, Y. Grinberg, J. Pineau, and D. Precup. Compressed least-squares regression on sparse spaces.
In AAAI, 2012.
[12] O.A. Maillard and R. Munos. Compressed least-squares regression. In Advances in Neural Information
Processing Systems, 2009.
[13] S. Zhou, J. Lafferty, and L. Wasserman. Compressed regression. In Proceedings of Advances in neural
information processing systems, 2007.
[14] M. Ghavamzadeh, A. Lazaric, O.A. Maillard, and R. Munos. LSTD with random projections. In Advances
in Neural Information Processing Systems, 2010.
[15] B.A. Olshausen, P. Sallee, and M.S. Lewicki. Learning sparse image codes using a wavelet pyramid
architecture. In Advances in neural information processing systems, 2001.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA,
1998.
[17] S.J. Bradtke and A.G. Barto. Linear least-squares algorithms for temporal difference learning. Machine
Learning, 22(1):33?57, 1996.
[18] J.A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, 49(2):
233?246, 2002.
[19] M.G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:
1107?1149, 2003. ISSN 1532-4435.
[20] H.R. Maei and R.S. Sutton. GQ (?): A general gradient algorithm for temporal-difference prediction
learning with eligibility traces. In Third Conference on Artificial General Intelligence, 2010.
[21] I. Menache, S. Mannor, and N. Shimkin. Basis function adaptation in temporal difference reinforcement
learning. Annals of Operations Research, 134(1):215?238, 2005.
[22] M.A. Davenport, M.B. Wakin, and R.G. Baraniuk. Detection and estimation with compressive measurements. Dept. of ECE, Rice University, Tech. Rep, 2006.
[23] M.M. Fard, Y. Grinberg, J. Pineau, and D. Precup. Random projections preserve linearity in sparse spaces.
School of Computer Science, Mcgill University, Tech. Rep, 2012.
[24] M.A. Davenport, P.T. Boufounos, M.B. Wakin, and R.G. Baraniuk. Signal processing with compressive
measurements. Selected Topics in Signal Processing, IEEE Journal of, 4(2):445?460, 2010.
[25] Andrew Y Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben Tse, Eric Berger, and Eric
Liang. Autonomous inverted helicopter flight via reinforcement learning. In Experimental Robotics IX,
pages 363?372. Springer, 2006.
[26] Richard Barrett, Michael Berry, Tony F Chan, James Demmel, June Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine, and Henk Van der Vorst. Templates for the solution of linear
systems: building blocks for iterative methods. Number 43. Society for Industrial and Applied Mathematics, 1987.
[27] A.M. Farahmand, M. Ghavamzadeh, and C. Szepesv?ari. Regularized policy iteration. In Advances in
Neural Information Processing Systems, 2010.
[28] J. Johns, C. Painter-Wakefield, and R. Parr. Linear complementarity for regularized policy evaluation and
improvement. In Advances in Neural Information Processing Systems, 2010.
4,622 | 5,183 | Reinforcement Learning in Robust Markov Decision
Processes
Huan Xu
Department of Mechanical Engineering
National University of Singapore
Singapore
mpexuh@nus.edu.sg
Shiau Hong Lim
Department of Mechanical Engineering
National University of Singapore
Singapore
mpelsh@nus.edu.sg
Shie Mannor
Department of Electrical Engineering
Technion, Israel
shie@ee.technion.ac.il
Abstract
An important challenge in Markov decision processes is to ensure robustness with
respect to unexpected or adversarial system behavior while taking advantage of
well-behaving parts of the system. We consider a problem setting where some
unknown parts of the state space can have arbitrary transitions while other parts
are purely stochastic. We devise an algorithm that is adaptive to potentially adversarial behavior and show that it achieves similar regret bounds as the purely
stochastic case.
1 Introduction
Markov decision processes (MDPs) [Puterman, 1994] have been widely used to model and solve
sequential decision problems in stochastic environments. Given the parameters of an MDP, namely,
the rewards and transition probabilities, an optimal policy can be computed. In practice, these
parameters are often estimated from noisy data and furthermore, they may change during the execution of a policy. Hence, the performance of the chosen policy may deteriorate significantly; see
[Mannor et al., 2007] for numerical experiments.
The robust MDP framework has been proposed to address this issue of parameter uncertainty (e.g.,
[Nilim and El Ghaoui, 2005] and [Iyengar, 2005]). The robust MDP setting assumes that the true
parameters fall within some uncertainty set U and seeks a policy that performs the best under the
worst realization of the parameters. These solutions, however, can be overly conservative since they
are based on worst-case realization. Variants of robust MDP formulations have been proposed to
mitigate the conservativeness when additional information on parameter distribution [Strens, 2000,
Xu and Mannor, 2012] or coupling among the parameters [Mannor et al., 2012] are known. A major
drawback of previous work on robust MDPs is that they all focused on the planning problem with
no effort to learn the uncertainty. Since in practice it is often difficult to accurately quantify the
uncertainty, the solutions to the robust MDP can be conservative if a too large uncertainty set is
used.
In this work, we make the first attempt to perform learning in robust MDPs. We assume that some of
the state-action pairs are adversarial in the sense that their parameters can change arbitrarily within
U from one step to another. However, others are benign in the sense that they are fixed and behave
purely stochastically. The learner, however, is given only the uncertainty set U and knows neither
the parameters nor the true nature of each state-action pair.
In this setting, a traditional robust MDP approach would be equivalent to assuming that all parameters are adversarial and therefore would always execute the minimax policy. This is too conservative
since it could be the case that most of the parameters are stochastic. Alternatively, one could use an
existing online learning algorithm such as UCRL2 [Jaksch et al., 2010] and assume that all parameters are stochastic. This, as we show in the next section, may lead to suboptimal performance when
some of the states are adversarial.
Instead, we propose an online learning approach to robust MDPs. We show that the cumulative
reward obtained from this method is as good as the minimax policy that knows the true nature of
each state-action pair. This means that by incorporating learning in robust MDPs, we can effectively
resolve the ?conservativeness due to not knowing the uncertainty? effect.
The rest of the paper is structured as follows. Section 2 discusses the key difficulties in our setting
and explains why existing solutions are not applicable. In subsequent sections, we present our
algorithm, its theoretical performance bound and its analysis. Sections 3 and 4 cover the finitehorizon case while Section 5 deals with the infinite-horizon case. We present some experiment
results in Section 6 and conclude in Section 7.
2 Problem setting
We consider an MDP M with a finite state space S and a finite action space A. Let S = |S| and A = |A|. Executing action a in state s results in a random transition according to a distribution $p_{s,a}(\cdot)$, where $p_{s,a}(s')$ gives the probability of transitioning to state $s'$, and accumulates an immediate reward r(s, a).
A robust MDP considers the case where the transition probability is determined in an adversarial way. That is, when action a is taken at state s, the transition probability $p_{s,a}(\cdot)$ can be an arbitrary element of the uncertainty set U(s, a). In particular, for different visits of the same (s, a), the realization of $p_{s,a}$ can be different, possibly depending on the history. This can model cases where the system dynamics are influenced by competitors or exogenous factors that are hard to model, or where the MDP is a simplification of a complicated dynamic system.
Previous research in robust MDPs focused exclusively on the planning problem. Here, the power of the adversary (the uncertainty set of the parameter) is precisely known, and the goal is to find the minimax policy: the policy with the best performance under the worst admissible parameters.
This paper considers the learning problem of robust MDPs. We ask the following question: suppose
the power of the adversary (the extent to which it can affect the system) is not completely revealed
to the decision maker, if we are allowed to play the MDP many times, can we still obtain an optimal
policy as if we knew the true extent of its power? Or, to put it another way, can we develop a procedure
that provides the exact amount of protection against the unknown adversary?
Our specific setup is as follows: for each $(s, a) \in \mathcal{S} \times \mathcal{A}$ an uncertainty set U(s, a) is given. However, not all states are adversarial. Only a subset $F \subseteq \mathcal{S} \times \mathcal{A}$ is truly adversarial, while all the other state-action pairs behave purely stochastically, i.e., with a fixed unknown $p_{s,a}$. Moreover, the set F is not known to the algorithm.
This setting differs from existing setups, and is challenging for the following reasons:
1. The adversarial actions ps,a are not directly observable.
2. The adversarial behavior is not constrained, except it must belong to the uncertainty set.
3. Ignoring the adversarial component results in sub-optimal behavior.
The first challenge precludes the use of algorithm based on stochastic games such as R-Max
[Brafman and Tennenholtz, 2002]. The R-Max algorithm deals with stochastic games where the
opponent's action-set for each state is known and the opponent's actions are always observable. In
our setting, only the outcome (i.e., the next-state and the reward) of each transition is observable.
The algorithm does not observe the action ps,a taken by the adversary. Indeed, because the set F is
unknown, even the action set of the adversary is unknown to the algorithm.
The second challenge is due to unconstrained adversarial behavior. For state-action pairs $(s, a) \in F$, the opponent is free to choose any $p_{s,a} \in U(s, a)$ for each transition, possibly depending on the history and the strategy of the decision maker (i.e., non-oblivious). This affects the sort of performance guarantee one can reasonably expect from any algorithm. In particular, when considering the regret against the best stationary policy "in hindsight", [Yu and Mannor, 2009] show that a small change in transition probabilities can cause large regret. Even with additional constraints on the allowed adversarial behavior, they showed that the regret bound still does not vanish with respect to the number of steps. Indeed, most results for adversarial MDPs [Even-Dar et al., 2005, Even-Dar et al., 2009, Yu et al., 2009, Neu et al., 2010, Neu et al., 2012] only deal with adversarial rewards while the transitions are assumed stochastic and fixed, which is considerably simpler than our setting.
Since it is not possible to achieve vanishing regret against the best stationary policy in hindsight, we choose to measure the regret against the performance of a minimax policy that knows exactly which state-actions are adversarial (i.e., the set F) as well as the true $p_{s,a}$ for all stochastic state-action pairs. Intuitively, this means that if the adversary chooses to play "nicely", we are still free to exploit this.
Finally, given that we are competing against the minimax policy, one might ask whether we could
simply apply existing algorithms such as UCRL2 [Jaksch et al., 2010] and treat every state-action
pair as stochastic. The following example shows that ignoring any adversarial behavior may lead to
large regret compared to the minimax policy.
[Figure 1: MDP diagram. From s0, action a1 yields average reward g*; a2 leads to s1 and a3 leads via s3 to s4. Solid transitions from s1 and s4 reach regions with average reward g* + ε, while dashed (adversarial) transitions lead toward a region with average reward g* − β.]
Figure 1: Example MDP with adversarial transitions.
Consider the MDP in Figure 1. Suppose that a UCRL2-like algorithm is used, where all transitions are assumed purely stochastic. There are 3 alternative policies, each corresponding to choosing action a1, a2 and a3 respectively in state s0. Action a1 leads to the optimal minimax average reward of g*. State s2 leads to an average reward of g* + ε for some ε > 0. State s1 has an adversarial transition, where both s2 and s4 are possible next states. s4 has a similar behavior, where it may either lead to g* + ε or a "bad" region with average reward g* − β for some 2ε < β < 3ε.
We consider two phases. In phase 1, the adversary behaves "benignly" by choosing all solid-line transitions. Since both a2 and a3 lead to similar outcomes, we assume that in phase 1, both a2 and a3 are chosen for T steps each. In phase 2, the adversary chooses the dashed-line transitions in both s1 and s4. Due to a2 and a3 having similar values (both g* + ε > g*) we can assume that a2 is always chosen in phase 2 (if a3 is ever chosen in phase 2 its value will quickly drop below that of a2).
Suppose that a2 also runs for T steps in phase 2. A little algebra (see the supplementary material for details) shows that at the end of phase 2 the expected value of s4 (from the learner's point of view) is g4 = g* + (3ε − β)/2 > g*, and therefore the expected value of s1 is g1 = g* + (3ε − β)/4 > g*. The total accumulated reward over both phases is however 3T g* + T(2ε − β): the two phase-1 runs collect 2T(g* + ε), while the phase-2 run of a2 collects only T(g* − β). Let c = β − 2ε > 0. This means that the overall total regret is cT, which is linear in T.
Note that in the above example, the expected value of a2 remains greater than the minimax value g* throughout phase 2, and therefore the algorithm will continue to prefer a2, even though the actual accumulated average value is already way below g*. The reason behind this is that the Markov property, which is crucial for UCRL2-like algorithms to work, has been violated due to s1 and s4 behaving in a non-independent way caused by the adversary.
3 Algorithm and main result
In this section, we present our algorithm and the main result for the finite-horizon case with the total
reward as the performance measure. Section 5 provides the corresponding algorithm and result for
the infinite-horizon average-reward case.
For simplicity, we assume without loss of generality a deterministic and known reward function
r(s, a). We also assume that rewards are bounded such that $r(s, a) \in [0, 1]$. It is straightforward,
by introducing additional states, to extend the algorithm and analysis to the case where the reward
function is random, unknown and even adversarial.
In the finite horizon case, we consider an episodic setting where each episode has a fixed and known
length T . The algorithm starts at a (possibly random) state s0 and executes T stages. After that,
a new episode begins, with an arbitrarily chosen start state (it can simply be the last state of the
previous episode). This goes on indefinitely.
Let $\pi$ be a finite-horizon (non-stationary) policy where $\pi_t(s)$ gives the action to be executed in state s at step t in an episode, where $t = 0, \ldots, (T-1)$. Let $P_t$ be a particular choice of $p_{s,a} \in U(s, a)$ for every $(s, a) \in F$ at step t. For each $t = 0, \ldots, (T-1)$, we define
$$V_t^\pi(s) = \min_{P_t, \ldots, P_{T-2}} \mathbb{E}_{P_t, \ldots, P_{T-2}} \left[ \sum_{t'=t}^{T-1} r(s_{t'}, \pi_{t'}(s_{t'})) \right] \quad \text{and} \quad V_t^*(s) = \max_\pi V_t^\pi(s),$$
where $s_t = s$ and $s_{t+1}, \ldots, s_{T-1}$ are random variables due to the random transitions. We assume that U is such that the minimum above exists (e.g., a compact set). It is not hard to show that, given state s, there exists a policy $\pi$ with $V_0^\pi(s) = V_0^*(s)$, and we can compute such a minimax policy if the algorithm knows F and $p_{s,a}$ for all $(s, a) \notin F$, from the literature on robust MDPs (e.g., [Nilim and El Ghaoui, 2005] and [Iyengar, 2005]).
The main message of this paper is that we can determine a policy as good as the minimax policy without knowing either F or $p_{s,a}$ for $(s, a) \notin F$. To make this formal, we define the regret (against the minimax performance) in episode i, for $i = 1, 2, \ldots$, as
$$\Delta_i = V_0^*(s_0^i) - \sum_{t=0}^{T-1} r(s_t^i, a_t^i),$$
where $s_t^i$ and $a_t^i$ denote the actual state visited and action taken at step t of episode i.¹ The total regret for m episodes, which we want to minimize, is thus defined as
$$\Delta(m) = \sum_{i=1}^{m} \Delta_i.$$
The main algorithm is given in Figure 2. OLRM is basically UCRL2 [Jaksch et al., 2010] with an
additional stochastic check to detect adversarial state-action pairs. Like UCRL2, the algorithm employs the "optimism under uncertainty" principle. We start by assuming that all states are stochastic.
If the adversary plays "nicely", nothing else would have to be done. The key challenge, however, is
to successfully identify the adversarial state-action pairs when they start to behave maliciously.
A similar scenario in the multi-armed bandit setting has been addressed by
[Bubeck and Slivkins, 2012]. They show that it is possible to achieve near-optimal regret without
knowing a priori whether a bandit is stochastic or adversarial. In [Bubeck and Slivkins, 2012], the
key is to check some consistency conditions that would be satisfied if the behavior is stochastic. We
use the same strategy and the question is then, which condition? We discuss this in section 3.2.
Note that the index k = 1, 2, . . . tracks the number of policies. A policy is executed until either a
new pair (s, a) fails the stochastic check, and hence deemed to be adversarial, or some state-action
pair has been executed too many times. In either case, we need to re-compute the current optimistic
policy (see Section 3.1 for the detail). Every time a new policy is computed we call it a new epoch.
While each episode has the same length (T ), each epoch can span multiple episodes, and an epoch
can begin in the middle of an episode.
3.1 Computing an optimistic policy
Figure 3 shows the algorithm for computing the optimistic minimax policy, where we treat all state-action pairs in the set F as adversarial, and (similar to UCRL2) use optimistic values for other
state-action pairs.
¹ We provide high-probability regret bounds for any single trial, from which the expected regret can be readily derived, if desired.
Input: S, A, T, δ, and for each (s, a), U(s, a)
1. Initialize the set F ← {}.
2. Initialize k ← 1.
3. Compute an optimistic policy $\tilde{\pi}$, assuming all state-action pairs in F are adversarial (Section 3.1).
4. Execute $\tilde{\pi}$ until one of the following happens:
   • The execution count of some state-action (s, a) has been doubled.
   • The executed state-action pair (s, a) fails the stochastic check (Section 3.2). In this case (s, a) is added to F.
5. Increment k. Go back to step 3.
Figure 2: The OLRM algorithm
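To make the control flow concrete, the following is a minimal Python sketch of the loop in Figure 2. It is illustrative only: the environment interface (env.reset / env.step) and the helpers compute_optimistic_policy and stochastic_check, which stand in for Figure 3 and the check of Section 3.2, are assumptions rather than part of the original specification.

```python
# Illustrative sketch of the OLRM control flow in Figure 2. The environment
# interface and the two helper functions are assumed for illustration.

def olrm(env, S, A, T, compute_optimistic_policy, stochastic_check):
    F = set()                                   # pairs deemed adversarial
    N = {(s, a): 0 for s in S for a in A}       # execution counts
    k = 1                                       # epoch index
    policy = compute_optimistic_policy(F, k, N)
    N_epoch = dict(N)                           # counts at start of epoch
    while True:                                 # episodes run indefinitely
        s = env.reset()
        for t in range(T):
            a = policy[t][s]
            s_next, r = env.step(a)
            N[(s, a)] += 1
            failed = (s, a) not in F and not stochastic_check(s, a, s_next)
            if failed:
                F.add((s, a))                   # deemed adversarial from now on
            if failed or N[(s, a)] >= 2 * max(1, N_epoch[(s, a)]):
                k += 1                          # new epoch, possibly mid-episode
                policy = compute_optimistic_policy(F, k, N)
                N_epoch = dict(N)
            s = s_next
```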
Here, to simplify notation, we frequently use V(·) to mean the vector whose elements are V(s)
for each s ∈ S. This applies both to value functions and to probability distributions over S. In
particular, we use p(·)V(·) to mean the dot product between two such vectors, i.e., Σ_s p(s)V(s).
We use N_k(s, a) to denote the total number of times the state-action pair (s, a) has been executed
before epoch k. The corresponding empirical next-state distribution based on these transitions is
denoted P̂_k(·|s, a). If (s, a) has never been executed before epoch k, we define N_k(s, a) = 1 and
take P̂_k(·|s, a) to be arbitrarily defined.
Input: S, A, T, δ, F, k, and, for each (s, a), U(s, a), P̂_k(·|s, a), and N_k(s, a).
1. Set V̂_{T−1}^k(s) = max_a r(s, a) for all s.
2. Repeat, for t = T − 2, …, 0:
   • For each (s, a) ∈ F, set
       Q̂_t^k(s, a) = min{ T − t,  min_{p∈U(s,a)} [ r(s, a) + p(·)V̂_{t+1}^k(·) ] }.
   • For each (s, a) ∉ F, set
       Q̂_t^k(s, a) = min{ T − t,  r(s, a) + P̂_k(·|s, a)V̂_{t+1}^k(·) + T √( (2S / N_k(s, a)) log(2SATk² / δ) ) }.
   • For each s, set
       V̂_t^k(s) = max_a Q̂_t^k(s, a)   and   π̂_t(s) = arg max_a Q̂_t^k(s, a).
3. Output π̂.

Figure 3: Algorithm for computing an optimistic minimax policy.
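For small, enumerable state and action spaces, the backward recursion of Figure 3 can be written as below. This is a sketch under stated assumptions: states and actions are indexed 0..S−1 and 0..A−1, and worst_case_value is a placeholder for the inner minimization over the known uncertainty set U(s, a).

```python
import numpy as np

# Sketch of the backward recursion of Figure 3 for tabular S and A.
# r[s, a] is the reward, P_hat[s, a] the empirical next-state distribution,
# N[s, a] the execution count, and worst_case_value(s, a, V) a placeholder
# evaluating min over p in U(s, a) of the dot product p . V.

def optimistic_minimax_policy(S, A, T, delta, F, k, r, P_hat, N,
                              worst_case_value):
    V = r.max(axis=1)                       # V_hat^k_{T-1}(s) = max_a r(s, a)
    policy = [None] * T
    policy[T - 1] = r.argmax(axis=1)        # greedy at the last stage
    for t in range(T - 2, -1, -1):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                if (s, a) in F:             # pessimistic over U(s, a)
                    q = r[s, a] + worst_case_value(s, a, V)
                else:                       # empirical value plus bonus
                    bonus = T * np.sqrt(2 * S / N[s, a]
                                        * np.log(2 * S * A * T * k**2 / delta))
                    q = r[s, a] + P_hat[s, a] @ V + bonus
                Q[s, a] = min(T - t, q)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy
```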
3.2 Stochasticity check

Every time a state-action pair (s, a) ∉ F is executed, the outcome is recorded and subjected to a "stochasticity check". Let n be the total number of times (s, a) has been executed (including the latest one)
and let s′_1, …, s′_n be the next states for each of these transitions. Let k_1, …, k_n be the epochs in which
each of these transitions happened, and let t_1, …, t_n be the steps within the episodes (i.e., the episode stages)
at which they happened. Let τ be the total number of steps executed by the algorithm (from
the beginning) so far. The stochastic check fails if:
  Σ_{j=1}^{n} P̂_{k_j}(·|s, a) V̂_{t_j+1}^{k_j}(·) − Σ_{j=1}^{n} V̂_{t_j+1}^{k_j}(s′_j) > 5T √( nS log(4SATτ² / δ) ).
The stochastic check follows the saying "if it is not broken, don't fix it": it checks whether
the value of the actual transitions from (s, a) falls below what is expected from the parameter estimates.
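As a sketch, the check can be implemented by accumulating, for each pair (s, a), the predicted and realized values of its past transitions; the record format below is an assumption for illustration.

```python
import numpy as np

# Sketch of the stochasticity check. `history` holds one record per past
# execution of (s, a): (P_hat_kj, V_next_kj, s_next), where P_hat_kj is the
# empirical next-state distribution at epoch k_j, V_next_kj is the value
# vector V_hat^{k_j}_{t_j+1}, and s_next is the observed next state.

def passes_stochastic_check(history, S, A, T, tau, delta):
    n = len(history)
    predicted = sum(P @ V for (P, V, _) in history)    # expected values
    realized = sum(V[s_next] for (_, V, s_next) in history)
    threshold = 5 * T * np.sqrt(n * S * np.log(4 * S * A * T * tau**2 / delta))
    return predicted - realized <= threshold           # fails if exceeded
```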
One can show that, with high probability, all stochastic state-action pairs always pass the stochastic check. Now consider an adversarial pair (s, a): if the adversary plays "nicely", the current policy
accumulates satisfactory reward and hence nothing needs to change, even if the transitions themselves fail to "look" stochastic; if the adversary plays "nasty", then the stochastic check detects
it, and subsequently protects against it.
3.3 Main result

The following theorem summarizes the performance of OLRM. Here and in the sequel, we use Õ
when log terms are omitted. Our result for the infinite-horizon case is similar (see Section 5).
Theorem 1. Given δ, T, S, A, the total regret of OLRM is

  Δ(m) ≤ Õ( S T^{3/2} √(Am) )

for all m, with probability at least 1 − δ.
Note that the above bound is with respect to the total number of episodes m. Since the total number of
steps is τ = mT, the regret bound in terms of τ is therefore Õ( S T √(Aτ) ). This gives the familiar
√τ regret of UCRL2. Also, the bound has the same dependencies on S and A as in UCRL2. The
horizon length T plays the role of the "diameter" in the infinite-horizon case, and again it has the
same dependency as its counterpart in UCRL2.
The result shows that even though the algorithm deals with unknown stochastic and potentially
adversarial states, it achieves the same regret bound as in the fully stochastic case. In the case where
all states are in fact stochastic, this reduces to the same UCRL2 result.
4 Analysis of OLRM
We briefly explain the roadmap of the proof of Theorem 1. The complete proof can be found in the
supplementary material.
Our proof starts with the following technical lemma.

Lemma 1. The following holds for every state-action pair (s, a) ∉ F and every t = 0, …, T − 1, in
all epochs k ≥ 1, with probability at least 1 − δ:

  | P̂_k(·|s, a) V̂_{t+1}^k(·) − p_{s,a}(·) V̂_{t+1}^k(·) | ≤ T √( (2S / N_k(s, a)) log(4SATk² / δ) ).
Proof sketch. Since (s, a) ∉ F is stochastic, we apply the bound of [Weissman et al., 2003] on
the ℓ₁ deviation between P̂_k(·|s, a) and p_{s,a}. The bound then follows from ‖V̂_{t+1}^k(·)‖_∞ ≤ T.
Using Lemma 1, we then show that, with high probability, all purely stochastic
state-action pairs always pass the stochastic check.

Lemma 2. The probability that any state-action pair (s, a) ∉ F gets added to the set F while running
the algorithm is at most 2δ.
Proof sketch. Each (s, a) ∉ F is purely stochastic. Suppose (s, a) has been executed n times and
s′_1, …, s′_n are the next states for these transitions. Recall that the check fails if

  Σ_{j=1}^{n} P̂_{k_j}(·|s, a) V̂_{t_j+1}^{k_j}(·) − Σ_{j=1}^{n} V̂_{t_j+1}^{k_j}(s′_j) > 5T √( nS log(4SATτ² / δ) ).
We can derive a high-probability bound that satisfies the stochastic check by applying the Azuma-Hoeffding inequality to the martingale difference sequence

  X_j = p_{s,a}(·) V̂_{t_j+1}^{k_j}(·) − V̂_{t_j+1}^{k_j}(s′_j),

followed by an application of Lemma 1.
We then show that all value estimates V̂_t^k are always optimistic.

Lemma 3. With probability at least 1 − δ, and assuming that no state-action pair (s, a) ∉ F has
been added to F, the following holds for every state s ∈ S, every t ∈ {0, …, T − 1}, and every
k ≥ 1:

  V̂_t^k(s) ≥ V_t*(s).
Proof sketch. The key challenge is to prove that state-action pairs in F (adversarial) that have not yet been
identified (i.e., all of whose past transitions passed the test) still have optimistic Q̂ values. This can be done
by, again, applying the Azuma-Hoeffding inequality.
Equipped with the previous three lemmas, we are now able to establish Theorem 1.
Proof sketch. Lemma 3 establishes that all value estimates V̂_t^k are optimistic. We can therefore bound the regret by bounding the difference between V̂_t^k and the actual rewards received by the
algorithm. If all state-action pairs are stochastic, this "optimistic gap" shrinks at the expected rate as the number of steps executed by
the algorithm grows.

For an adversarial state-action pair (s, a) ∈ F, we use the following facts to ensure the same: (i) if
(s, a) has been added to F (i.e., it failed the stochastic check), then all subsequent policies
evaluate its value correctly; (ii) all transitions occurring before (s, a) is added to F (if ever) must have passed
the stochastic check, and the check condition ensures that their behavior is consistent with what one
would expect if (s, a) were stochastic.
5 Infinite horizon case

In the infinite-horizon case, let P be a particular choice of p_{s,a} ∈ U(s, a) for every (s, a) ∈ F.
Given a (stationary) policy π, its average undiscounted reward (or "gain") is defined as

  g_P^π(s) = lim_{τ→∞} (1/τ) E_P [ Σ_{t=1}^{τ} r(s_t, π(s_t)) ],

where s_1 = s. The limit always exists for finite MDPs [Puterman, 1994]. We make the assumption
that, regardless of the choice of P, the resulting MDP is communicating and unichain.² In this case
g_P^π(s) is constant and independent of s, so we can drop the argument s.
We define the worst-case average reward of π over all possible P as g^π = min_P g_P^π. An optimal
minimax policy π* is any policy whose gain satisfies g^{π*} = g* = max_π g^π. We define the regret after
executing the MDP M for τ steps as

  Δ(τ) = τ g* − Σ_{t=1}^{τ} r(s_t, a_t).
The main algorithm for the infinite-horizon case, which we refer to as OLRM2, is essentially identical to OLRM. The main differences are in the computation of the optimistic policy and in the corresponding
stochastic check. The detailed algorithm is presented in the supplementary material.
The algorithms of [Tewari and Bartlett, 2007] can be used to compute an optimistic minimax
policy. In particular, for each (s, a) ∈ F, its transition function is chosen pessimistically from
U(s, a). For each (s, a) ∉ F, its transition function is chosen optimistically from the set

  { p : ‖p(·) − P̂_k(·|s, a)‖₁ ≤ ε },  where  ε = √( (2S / N_k(s, a)) log(4SAk² / δ) ).
² In more general settings, such as communicating or weakly communicating MDPs, although the optimal
policies (for a fixed P) always have constant gain, the optimal minimax policies (over all possible P) might
have non-constant gain. Additional assumptions on U, as well as a slight change in the definition of the regret,
are needed to deal with these cases. This is left for future research.
Let P̂_k(·|s, π̂_k(s)) be the minimax choice of transition functions for each s at which the minimax gain
g^{π̂_k} is attained. The bias h_k can be obtained by solving the following system of equations for h(·)
(see [Puterman, 1994]):

  ∀s ∈ S:  g^{π̂_k} + h(s) = r(s, π̂_k(s)) + P̂_k(·|s, π̂_k(s)) h(·).   (1)
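For a fixed policy with reward vector r_π and transition matrix P_π, the system (1) can be solved by pinning the bias at one state, since the bias is only defined up to an additive constant. The following sketch assumes a unichain MDP, as above.

```python
import numpy as np

# Illustrative solver for the gain/bias equations (1) under a fixed policy.
# r_pi has length S; P_pi is the S x S transition matrix of the policy.
# The bias is only defined up to an additive constant, so we pin h(0) = 0.

def gain_and_bias(r_pi, P_pi):
    S = len(r_pi)
    # Unknowns x = (g, h(0), ..., h(S-1)); equations g*1 + (I - P_pi) h = r_pi,
    # plus the normalization h(0) = 0.
    A = np.zeros((S + 1, S + 1))
    A[:S, 0] = 1.0
    A[:S, 1:] = np.eye(S) - P_pi
    A[S, 1] = 1.0
    b = np.concatenate([r_pi, [0.0]])
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[0], x[1:]                       # gain g, bias vector h
```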
The stochastic check for the infinite-horizon case is mostly identical to the finite-horizon one, except
that we replace T with the maximal span H̄ of the bias, defined as

  H̄ = max_{k ∈ {k_1, …, k_n}} [ max_s h_k(s) − min_s h_k(s) ].
The stochastic check fails if:

  Σ_{j=1}^{n} P̂_{k_j}(·|s, a) h_{k_j}(·) − Σ_{j=1}^{n} h_{k_j}(s′_j) > 5H̄ √( nS log(4SAτ² / δ) ).
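In code, the only changes relative to the finite-horizon check are the bias vectors in place of the value functions and H̄ in place of T; the record format is again an assumption for illustration.

```python
import numpy as np

# Sketch of the infinite-horizon check. Each record in `history` is
# (P_hat_kj, h_kj, s_next), with h_kj the bias vector of epoch k_j.

def passes_check_infinite(history, S, A, tau, delta):
    H_bar = max(h.max() - h.min() for (_, h, _) in history)  # maximal span
    n = len(history)
    predicted = sum(P @ h for (P, h, _) in history)
    realized = sum(h[s_next] for (_, h, s_next) in history)
    threshold = 5 * H_bar * np.sqrt(n * S * np.log(4 * S * A * tau**2 / delta))
    return predicted - realized <= threshold
```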
Let H be the maximal span of the bias of any optimal minimax policy. The following theorem summarizes the performance of OLRM2. The proof, deferred to the supplementary material, is similar to
that of Theorem 1.

Theorem 2. Given δ, S, A, the total regret of OLRM2 is

  Δ(τ) ≤ Õ( S H √(Aτ) )

for all τ, with probability at least 1 − δ.
6 Experiment

Figure 4: Total accumulated reward (total reward versus time steps, on the order of 10^6) for OLRM2, UCRL2, a standard robust MDP, and the optimal minimax policy. The vertical line marks the start of the "breakdown".
We run both our algorithm and UCRL2 on the example MDP of Figure 1 in the infinite-horizon case. Figure 4 shows the result for
g* = 0.18, with the example MDP's two remaining parameters set to 0.07 and 0.17. UCRL2 accumulates a smaller total reward than the optimal minimax policy, while our algorithm
actually accumulates a larger total reward than the minimax policy. We also include the result for a
standard robust MDP, which treats all state-action pairs as adversarial and therefore performs poorly.
Additional details are provided in the supplementary material.
7 Conclusion

We presented an algorithm for online learning of robust MDPs with unknown parameters, some
of which may be adversarial. We showed that it achieves a regret bound similar to the one for the fully stochastic case. A
natural extension is to allow learning of the uncertainty sets in adversarial states, where the true
uncertainty set is unknown. Our preliminary results show that very similar regret bounds can be
obtained when learning from a class of nested uncertainty sets.
Acknowledgments

This work is partially supported by the Ministry of Education of Singapore through AcRF Tier
Two grant R-265-000-443-112 and NUS startup grant R-265-000-384-133. The research leading to
these results has received funding from the European Research Council under the European Union's
Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 306638.
References

[Brafman and Tennenholtz, 2002] Brafman, R. I. and Tennenholtz, M. (2002). R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231.
[Bubeck and Slivkins, 2012] Bubeck, S. and Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. Journal of Machine Learning Research - Proceedings Track, 23:42.1-42.23.
[Even-Dar et al., 2005] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2005). Experts in a Markov decision process. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 401-408. MIT Press, Cambridge, MA.
[Even-Dar et al., 2009] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2009). Online Markov decision processes. Math. Oper. Res., 34(3):726-736.
[Iyengar, 2005] Iyengar, G. N. (2005). Robust dynamic programming. Math. Oper. Res., 30(2):257-280.
[Jaksch et al., 2010] Jaksch, T., Ortner, R., and Auer, P. (2010). Near-optimal regret bounds for reinforcement learning. J. Mach. Learn. Res., 99:1563-1600.
[Mannor et al., 2012] Mannor, S., Mebel, O., and Xu, H. (2012). Lightning does not strike twice: Robust MDPs with coupled uncertainty. In ICML.
[Mannor et al., 2007] Mannor, S., Simester, D., Sun, P., and Tsitsiklis, J. N. (2007). Bias and variance approximation in value function estimates. Manage. Sci., 53(2):308-322.
[McDiarmid, 1989] McDiarmid, C. (1989). On the method of bounded differences. In Surveys in Combinatorics, number 141 in London Mathematical Society Lecture Note Series, pages 148-188. Cambridge University Press.
[Neu et al., 2012] Neu, G., György, A., and Szepesvári, C. (2012). The adversarial stochastic shortest path problem with unknown transition probabilities. Journal of Machine Learning Research - Proceedings Track, 22:805-813.
[Neu et al., 2010] Neu, G., György, A., Szepesvári, C., and Antos, A. (2010). Online Markov decision processes under bandit feedback. In NIPS, pages 1804-1812.
[Nilim and El Ghaoui, 2005] Nilim, A. and El Ghaoui, L. (2005). Robust control of Markov decision processes with uncertain transition matrices. Oper. Res., 53(5):780-798.
[Puterman, 1994] Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience.
[Strens, 2000] Strens, M. (2000). A Bayesian framework for reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 943-950. ICML.
[Tewari and Bartlett, 2007] Tewari, A. and Bartlett, P. (2007). Bounded parameter Markov decision processes with average reward criterion. Learning Theory, pages 263-277.
[Weissman et al., 2003] Weissman, T., Ordentlich, E., Seroussi, G., Verdu, S., and Weinberger, M. J. (2003). Inequalities for the l1 deviation of the empirical distribution. Technical report, Information Theory Research Group, HP Laboratories.
[Xu and Mannor, 2012] Xu, H. and Mannor, S. (2012). Distributionally robust Markov decision processes. Math. Oper. Res., 37(2):288-300.
[Yu and Mannor, 2009] Yu, J. Y. and Mannor, S. (2009). Arbitrarily modulated Markov decision processes. In CDC, pages 2946-2953.
[Yu et al., 2009] Yu, J. Y., Mannor, S., and Shimkin, N. (2009). Markov decision processes with arbitrary reward processes. Math. Oper. Res., 34(3):737-757.
Projected Natural Actor-Critic
Philip S. Thomas, William Dabney, Sridhar Mahadevan, and Stephen Giguere
School of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003
{pthomas,wdabney,mahadeva,sgiguere}@cs.umass.edu
Abstract
Natural actor-critics form a popular class of policy search algorithms for finding
locally optimal policies for Markov decision processes. In this paper we address
a drawback of natural actor-critics that limits their real-world applicability: their
lack of safety guarantees. We present a principled algorithm for performing natural gradient descent over a constrained domain. In the context of reinforcement
learning, this allows for natural actor-critic algorithms that are guaranteed to remain within a known safe region of policy space. While deriving our class of
constrained natural actor-critic algorithms, which we call Projected Natural Actor-Critics (PNACs), we also elucidate the relationship between natural gradient descent and mirror descent.
1 Introduction

Natural actor-critics form a class of policy search algorithms for finding locally optimal policies
for Markov decision processes (MDPs) by approximating and ascending the natural gradient [1] of
an objective function. Despite the numerous successes of, and the continually growing interest in,
natural actor-critic algorithms, they have not achieved widespread use for real-world applications. A
lack of safety guarantees is a common reason for avoiding the use of natural actor-critic algorithms,
particularly for biomedical applications. Since natural actor-critics are unconstrained optimization
algorithms, there are no guarantees that they will avoid regions of policy space that are known to be
dangerous.
For example, proportional-integral-derivative (PID) controllers are the most widely used
control algorithms in industry, and have been studied in depth [2]. Techniques exist for determining
the set of stable gains (policy parameters) when a model of the system is available [3]. Policy search
can be used to find the optimal gains within this set (for some definition of optimality). A desirable
property of a policy search algorithm in this context would be a guarantee that it will remain within
the predicted region of stable gains during its search.
Consider a second example: functional electrical stimulation (FES) control of a human arm. By selectively stimulating muscles using subcutaneous probes, researchers have made significant strides
toward returning motor control to people suffering from paralysis induced by spinal cord injury [4].
There has been a recent push to develop controllers that specify how much and when to stimulate
each muscle in a human arm to move it from its current position to a desired position [5]. This
closed-loop control problem is particularly challenging because each person's arm has different dynamics due to differences in, for example, length, mass, strength, clothing, and amounts of muscle
atrophy, spasticity, and fatigue. Moreover, these differences are challenging to model. Hence, a
proportional-derivative (PD) controller, tuned to a simulation of an ideal human arm, required manual tuning to obtain desirable performance on a human subject with biceps spasticity [6].

Researchers have shown that policy search algorithms are a viable approach to creating controllers
that can automatically adapt to an individual's arm by training on a few hundred two-second reaching movements [7]. However, safety concerns have been raised in regard to both this specific application and other biomedical applications of policy search algorithms. Specifically, the existing
state-of-the-art gradient-based algorithms, including the current natural actor-critic algorithms, are
unconstrained and could potentially select dangerous policies. For example, it is known that certain
muscle stimulations can cause the dislocation of a subject's arm. Although we lack an accurate
model of each individual's arm, we can generate conservative safety constraints on the space of
policies. Once again, a desirable property of a policy search algorithm would be a guarantee that it
remains within a specified region of policy space (known-safe policies).
In this paper we present a class of natural actor-critic algorithms that perform constrained
optimization: given a known safe region of policy space, they search for a locally optimal policy while always remaining within the specified region. We call our class of algorithms Projected
Natural Actor-Critics (PNACs) since, whenever they generate a new policy, they project the policy
back to the set of safe policies. The interesting question is how the projection can be done in a
principled manner. We show that natural gradient descent (ascent), which is an unconstrained optimization algorithm, is a special case of mirror descent (ascent), which is a constrained optimization
algorithm. In order to create a projected natural gradient algorithm, we add constraints to the mirror descent algorithm that is equivalent to natural gradient descent. We apply this projected natural
gradient algorithm to policy search to create the PNAC algorithms, which we validate empirically.
2 Related Work
Researchers have addressed safety concerns like these before [8]. Bendrahim and Franklin [9]
showed how a walking biped robot can switch to a stabilizing controller whenever the robot leaves
a stable region of state space. Similar state-avoidant approaches to safety have been proposed by
several others [10, 11, 12]. These approaches do not account for situations where, over an unavoidable region of state space, the actions themselves are dangerous. Kuindersma et al. [13] developed
a method for performing risk-sensitive policy search, which models the variance of the objective
function for each policy and permits runtime adjustments of risk sensitivity. However, their approach does not guarantee that an unsafe region of state space or policy space will be avoided.
Bhatnagar et al. [14] presented projected natural actor-critic algorithms for the average reward setting. As in our projected natural actor-critic algorithms, they proposed computing the update to the
policy parameters and then projecting back to the set of allowed policy parameters. However, they
did not specify how the projection could be done in a principled manner. We show in Section 7
that the Euclidean projection can be arbitrarily bad, and argue that the projection that we propose is
particularly compatible with natural actor-critics (natural gradient descent).
Duchi et al. [15] presented mirror descent using the Mahalanobis norm for the proximal function,
which is very similar to the proximal function that we show to cause mirror descent to be equivalent
to natural gradient descent. However, their proximal function is not identical to ours and they did
not discuss any possible relationship between mirror descent and natural gradient descent.
3 Natural Gradients

Consider the problem of minimizing a differentiable function f : ℝⁿ → ℝ. The standard gradient descent approach is to select an initial x₀ ∈ ℝⁿ, compute the direction of steepest descent,
−∇f(x₀), and then move some amount in that direction (scaled by a step size parameter, α₀). This
process is then repeated indefinitely: x_{k+1} = x_k − α_k ∇f(x_k), where {α_k} is a step size schedule
and k ∈ {1, …}. Gradient descent has been criticized for its low asymptotic rate of convergence.
Natural gradients are a quasi-Newton approach to improving the convergence rate of gradient descent.

When computing the direction of steepest descent, gradient descent assumes that the vector x_k
resides in Euclidean space. However, in several settings it is more appropriate to assume that x_k
resides in a Riemannian space with metric tensor G(x_k), which is an n × n positive definite matrix
that may vary with x_k [16]. In this case, the direction of steepest descent is called the natural
gradient and is given by −G(x_k)⁻¹ ∇f(x_k) [1]. In certain cases (which include our policy search
application), following the natural gradient is asymptotically Fisher-efficient [16].
4 Mirror Descent

Mirror descent algorithms form a class of highly scalable online gradient methods that are useful
for constrained minimization of non-smooth functions [17, 18]. They have recently been applied to
value function approximation and basis adaptation for reinforcement learning [19, 20]. The mirror
descent update is

  x_{k+1} = ∇ψ_k*( ∇ψ_k(x_k) − α_k ∇f(x_k) ),   (1)

where ψ_k : ℝⁿ → ℝ is a continuously differentiable and strongly convex function called the proximal function, and where the conjugate of ψ_k is ψ_k*(y) ≜ max_{x∈ℝⁿ} { xᵀy − ψ_k(x) }, for any y ∈ ℝⁿ.
Different choices of ψ_k result in different mirror descent algorithms. A common choice for a fixed
ψ_k = ψ, ∀k, is the p-norm [20], and a common adaptive ψ_k is the Mahalanobis norm with a dynamic
covariance matrix [15].

Intuitively, the distance metric for the space that x_k resides in is not necessarily the same as that of
the space that ∇f(x_k) resides in. This suggests that it may not be appropriate to directly add x_k
and −α_k ∇f(x_k) in the gradient descent update. To correct this, mirror descent moves x_k into the
space of gradients (the dual space) with ∇ψ_k(x_k) before performing the gradient update. It takes
the result of this step in gradient space and returns it to the space of x_k (the primal space) with ∇ψ_k*.
Different choices of ψ_k amount to different assumptions about the relationship between the primal
and dual spaces at x_k.
5 Equivalence of Natural Gradient Descent and Mirror Descent

Theorem 5.1. The natural gradient descent update at step k with metric tensor G_k ≜ G(x_k),

  x_{k+1} = x_k − α_k G_k⁻¹ ∇f(x_k),   (2)

is equivalent to (1), the mirror descent update at step k, with ψ_k(x) = (1/2) xᵀ G_k x.

Proof. First, notice that ∇ψ_k(x) = G_k x. Next, we derive a closed form for ψ_k*:

  ψ_k*(y) = max_{x∈ℝⁿ} { xᵀy − (1/2) xᵀ G_k x }.   (3)

Since the function being maximized on the right-hand side is strictly concave, the x that maximizes
it is its critical point. Solving for this critical point, we get x = G_k⁻¹ y. Substituting this into (3), we
find that ψ_k*(y) = (1/2) yᵀ G_k⁻¹ y. Hence, ∇ψ_k*(y) = G_k⁻¹ y. Inserting the definitions of ∇ψ_k(x) and
∇ψ_k*(y) into (1), we find that the mirror descent update is

  x_{k+1} = G_k⁻¹ ( G_k x_k − α_k ∇f(x_k) ) = x_k − α_k G_k⁻¹ ∇f(x_k),

which is identical to (2).
Although researchers often use ψ_k that are norms, like the p-norm and the Mahalanobis norm, notice
that the ψ_k that results in natural gradient descent is not a norm. Also, since G_k depends on k, ψ_k is
an adaptive proximal function [15].
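The equivalence is easy to verify numerically. The following minimal sketch draws a random positive definite G and checks that one mirror descent step with ψ(x) = (1/2) xᵀGx matches one natural gradient step; the specific instance is, of course, an arbitrary illustration.

```python
import numpy as np

# Numerical sanity check of Theorem 5.1 on a random instance.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)            # positive definite metric tensor
x = rng.standard_normal(n)
grad = rng.standard_normal(n)          # stand-in for the gradient at x
alpha = 0.1

natural = x - alpha * np.linalg.solve(G, grad)
# Mirror descent: map to the dual with grad psi(x) = G x, take the gradient
# step there, and map back with grad psi*(y) = G^{-1} y.
mirror = np.linalg.solve(G, G @ x - alpha * grad)

assert np.allclose(natural, mirror)
```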
6 Projected Natural Gradients

When x is constrained to some set X, ψ_k in mirror descent is augmented with the indicator function
I_X, where I_X(x) = 0 if x ∈ X, and +∞ otherwise. The ψ_k that was shown to generate an
update equivalent to natural gradient descent, with the added constraint that x ∈ X, is
ψ_k(x) = (1/2) xᵀ G_k x + I_X(x). Hereafter, any references to ψ_k refer to this augmented version.

For this proximal function, the subdifferential of ψ_k(x) is ∂ψ_k(x) = G_k(x) + N̂_X(x) = (G_k + N̂_X)(x), where N̂_X(x) ≜ ∂I_X(x) and, in the middle term, G_k and N̂_X are relations and + denotes
Minkowski addition.¹ N̂_X(x) is the normal cone of X at x if x ∈ X, and ∅ otherwise [21]. Hence,

  ∇ψ_k*(y) = (G_k + N̂_X)⁻¹(y).   (4)

¹ Later, we abuse notation and switch freely between treating G_k as a matrix and as a relation. When it is a
matrix, G_k x denotes matrix-vector multiplication that produces a vector. When it is a relation, G_k(x) produces
the singleton {G_k x}.
Let Π_X^{G_k}(y) be the set of x ∈ X that are closest to y, where the length of a vector z is measured by (1/2) zᵀ G_k z.
More formally,

  Π_X^{G_k}(y) ≜ arg min_{x∈X} (1/2) (y − x)ᵀ G_k (y − x).   (5)

Lemma 6.1. Π_X^{G_k}(y) = (G_k + N̂_X)⁻¹(G_k y).
Proof. We write (5) without the explicit constraint that x ∈ X by appending the indicator function:

  Π_X^{G_k}(y) = arg min_{x∈ℝⁿ} h_y(x),

where h_y(x) = (1/2)(y − x)ᵀ G_k (y − x) + I_X(x). Since h_y is strictly convex over X and +∞
elsewhere, its critical point is its global minimizer. The critical point satisfies

  0 ∈ ∂h_y(x) = −G_k(y) + G_k(x) + N̂_X(x).

The globally minimizing x therefore satisfies G_k y ∈ G_k(x) + N̂_X(x) = (G_k + N̂_X)(x). Solving
for x, we find that x = (G_k + N̂_X)⁻¹(G_k y).
Combining Lemma 6.1 with (4), we find that ∇ψ_k*(y) = Π_X^{G_k}(G_k⁻¹ y). Hence, mirror descent with
the proximal function that produces natural gradient descent, augmented to include the constraint
that x ∈ X, is

  x_{k+1} = Π_X^{G_k}( G_k⁻¹ [ (G_k + N̂_X)(x_k) − α_k ∇f(x_k) ] )
          = Π_X^{G_k}( (I + G_k⁻¹ N̂_X)(x_k) − α_k G_k⁻¹ ∇f(x_k) ),

where I denotes the identity relation. Since x_k ∈ X, we know that 0 ∈ N̂_X(x_k), and hence the
update can be written as

  x_{k+1} = Π_X^{G_k}( x_k − α_k G_k⁻¹ ∇f(x_k) ),   (6)

which we call projected natural gradient (PNG).
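When X is a polyhedron {x : Ax ≤ b}, the projection in (6) is a small quadratic program in the G_k-metric. The following sketch uses a general-purpose solver for clarity; a dedicated QP solver would be the natural choice in practice.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of one PNG step (6) for X = {x : A x <= b}. The G-weighted
# projection is solved as a quadratic program with a generic solver.

def png_step(x, grad, G, A, b, alpha):
    y = x - alpha * np.linalg.solve(G, grad)      # unconstrained natural step
    obj = lambda z: 0.5 * (z - y) @ G @ (z - y)   # G-weighted distance to y
    jac = lambda z: G @ (z - y)
    cons = {"type": "ineq", "fun": lambda z: b - A @ z}
    return minimize(obj, x, jac=jac, constraints=cons, method="SLSQP").x
```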
7 Compatibility of Projection

The standard projected subgradient (PSG) descent method follows the negative gradient (as opposed
to the negative natural gradient) and projects back to X using the Euclidean norm. If f and X are
convex and the step size is decayed appropriately, PSG is guaranteed to converge to a global minimum,
x* ∈ X. Any such x* is a fixed point. This means that a small step in the negative direction of any
subdifferential of f at x* will project back to x*.

Our choice of projection, Π_X^{G_k}, results in PNG having the same fixed points (see Lemma 7.1). This
means that, when the algorithm is at x* and a small step is taken down the natural gradient to x′,
Π_X^{G_k} will project x′ back to x*. We therefore say that Π_X^{G_k} is compatible with the natural gradient.
For comparison, the Euclidean projection of x′ will not necessarily return x′ to x*.

Lemma 7.1. The sets of fixed points for PSG and PNG are equivalent.
Proof. A necessary and sufficient condition for x to be a fixed point of PSG is that −∇f(x) ∈
N̂_X(x) [22]. A necessary and sufficient condition for x to be a fixed point of PNG is

  x = Π_X^{G_k}( x − α_k G_k⁻¹ ∇f(x) )
    ⟺ x = (G_k + N̂_X)⁻¹( G_k x − α_k ∇f(x) )
    ⟺ G_k x − α_k ∇f(x) ∈ G_k(x) + N̂_X(x)
    ⟺ −∇f(x) ∈ N̂_X(x).
To emphasize the importance of using a compatible projection, consider the following simple example. Minimize the function f(x) = xᵀAx + bᵀx, where A = diag(1, 0.01) and b = [−0.2, −0.1]ᵀ,
subject to the constraints xᵀ1 ≤ 1 and x ≥ 0. We implemented three algorithms, and ran each for
1000 iterations using a fixed step size:
Figure 1: The thick diagonal line shows
one constraint and dotted lines show projections. Solid arrows show the directions of the natural gradient and the gradient
at the optimal solution, x*. The dashed
blue arrows show PNG-Euclid's projections, and emphasize that the projections
cause PNG-Euclid to move away from
the optimal solution.
1. PSG - projected subgradient descent using the Euclidean projection.
2. PNG - projected natural gradient descent using Π_X^{G_k}.
3. PNG-Euclid - projected natural gradient descent using the Euclidean projection.
The results are shown in Figure 1. Notice that PNG and PSG converge to the optimal solution, x*.
From this point, they both step in different directions, but project back to x*. However, PNG-Euclid
converges to a suboptimal solution (outside the domain of the figure). If X were a line segment
between the points that PNG-Euclid and PNG converge to, then PNG-Euclid would converge to the
pessimal solution within X, while PSG and PNG would converge to the optimal solution within X.
Also, notice that the natural gradient corrects for the curvature of the function and heads directly
towards the global unconstrained minimum. Since the natural methods in this example use metric
tensor G = A, which is the Hessian of f up to a constant factor, they are essentially an incremental form of Newton's
method. In practice, the Hessian is usually not known, and an estimate thereof is used.
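A reproduction sketch of this experiment follows. The step size of 0.05 is an assumption (the text only states that a fixed step size was used), and both projections are computed as small QPs, so the code is illustrative rather than a faithful reimplementation.

```python
import numpy as np
from scipy.optimize import minimize

# Reproduction sketch of the counterexample: f(x) = x' A x + b' x with
# A = diag(1, 0.01) and b = [-0.2, -0.1]', subject to sum(x) <= 1, x >= 0.

A = np.diag([1.0, 0.01])
b = np.array([-0.2, -0.1])
G = A                                  # metric tensor used in the example
alpha = 0.05                           # assumed fixed step size

def grad(x):
    return 2.0 * A @ x + b

def project(y, M):
    """Projection of y onto {x : x >= 0, sum(x) <= 1} in the M-metric."""
    cons = ({"type": "ineq", "fun": lambda z: 1.0 - z.sum()},
            {"type": "ineq", "fun": lambda z: z})
    res = minimize(lambda z: 0.5 * (z - y) @ M @ (z - y), np.zeros(2),
                   constraints=cons, method="SLSQP")
    return res.x

def run(direction, M, iters=1000):
    x = np.zeros(2)
    for _ in range(iters):
        x = project(x - alpha * direction(x), M)
    return x

x_psg = run(grad, np.eye(2))                                    # PSG
x_png = run(lambda x: np.linalg.solve(G, grad(x)), G)           # PNG
x_bad = run(lambda x: np.linalg.solve(G, grad(x)), np.eye(2))   # PNG-Euclid
print(x_psg, x_png, x_bad)
```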
8 Natural Actor-Critic Algorithms

An MDP is a tuple M = (S, A, P, R, d₀, γ), where S is a set of states, A is a set of actions,
P(s′|s, a) gives the probability density of the system entering state s′ when action a is taken in state
s, R(s, a) is the expected reward, r, when action a is taken in state s, d₀ is the initial state distribution, and γ ∈ [0, 1) is a reward discount parameter. A parameterized policy, π, is a conditional
probability density function: π(a|s, θ) is the probability density of action a in state s given a vector
of policy parameters, θ ∈ ℝⁿ.

Let J(θ) = E[ Σ_{t=0}^{∞} γᵗ r_t | θ ] be the discounted-reward objective, or let J(θ) be the average reward objective,
J(θ) = lim_{n→∞} (1/n) E[ Σ_{t=0}^{n} r_t | θ ]. Given an MDP, M, and a parameterized policy, π,
the goal is to find policy parameters that maximize one of these objectives. When the action set is
continuous, the search for globally optimal policy parameters becomes intractable, so policy search
algorithms typically search for locally optimal policy parameters.
Natural actor-critics, first proposed by Kakade [23], are algorithms that estimate and ascend the
natural gradient of J(θ), using the average Fisher information matrix as the metric tensor:

  G_k = G(θ_k) = E_{s∼d^π, a∼π} [ (∂ log π(a|s, θ_k) / ∂θ_k) (∂ log π(a|s, θ_k) / ∂θ_k)ᵀ ],

where d^π is a policy- and objective-function-dependent distribution over the state set [24].
There are many natural actor-critics, including Natural policy gradient utilizing the Temporal Differences (NTD) algorithm [25], Natural Actor-Critic using LSTD-Q(λ) (NAC-LSTD) [26], Episodic
Natural Actor-Critic (eNAC) [26], Natural Actor-Critic using Sarsa(λ) (NAC-Sarsa) [27], Incremental Natural Actor-Critic (INAC) [28], and Natural-Gradient Actor-Critic with Advantage Parameters
(NGAC) [14]. All of them form an estimate, typically denoted w_k, of the natural gradient of J(θ_k).
That is, w_k ≈ G(θ_k)⁻¹ ∇J(θ_k). They then perform the policy parameter update θ_{k+1} = θ_k + α_k w_k.
9 Projected Natural Actor-Critics

If we are given a closed convex set, Θ ⊆ ℝⁿ, of admissible policy parameters (e.g., the stable
region of gains for a PID controller), we may wish to ensure that the policy parameters remain
within Θ. The natural actor-critic algorithms described in the previous section do not provide such
a guarantee. However, their policy parameter update equations, which are natural gradient ascent
updates, can easily be modified to the projected natural gradient ascent update in (6) by projecting
the parameters back onto Θ using Π_Θ^{G(θ_k)}:

  θ_{k+1} = Π_Θ^{G(θ_k)}( θ_k + α_k w_k ).
Many of the existing natural policy gradient algorithms, including NAC-LSTD, eNAC, NAC-Sarsa,
and INAC, follow biased estimates of the natural policy gradient [29]. For our experiments, we
must use an unbiased algorithm, since the projection that we propose is compatible with the natural
gradient, but not necessarily with biased estimates thereof.

NAC-Sarsa and INAC are equivalent biased discounted-reward natural actor-critic algorithms with
per-time-step time complexity linear in the number of features. The former was derived by replacing
the LSTD-Q(λ) component of NAC-LSTD with Sarsa(λ), while the latter is the discounted-reward
version of NGAC. Both are similar to NTD, which is a biased average-reward algorithm. The
unbiased discounted-reward form of NAC-Sarsa was recently derived [29]. References to NAC-Sarsa hereafter refer to this unbiased variant. In our case studies we use the projected natural actor-critic using Sarsa(λ) (PNAC-Sarsa), the projected version of the unbiased NAC-Sarsa algorithm.
Notice that the projection Π_Θ^{G(θ_k)}, as defined in (5), is not merely the Euclidean projection back
onto Θ. For example, if Θ is the set of θ that satisfy Aθ ≤ b, for some fixed matrix A and vector b,
then the projection Π_Θ^{G(θ_k)} of y onto Θ is a quadratic program:

  minimize  f(θ) = −yᵀ G(θ_k) θ + (1/2) θᵀ G(θ_k) θ
  s.t.  Aθ ≤ b.
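A minimal sketch of this quadratic program, mirroring the PNG sketch from Section 6 and again using a general-purpose solver, might look as follows; any QP solver applies.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the compatible projection as the QP above: project the proposed
# parameters y = theta_k + alpha_k * w_k onto {theta : A theta <= b} in the
# metric of the (estimated) average Fisher information matrix G.

def project_policy_params(y, G, A, b):
    obj = lambda th: -y @ G @ th + 0.5 * th @ G @ th   # the QP objective above
    jac = lambda th: G @ (th - y)
    cons = {"type": "ineq", "fun": lambda th: b - A @ th}
    return minimize(obj, y, jac=jac, constraints=cons, method="SLSQP").x
```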
In order to perform this projection, we require an estimate of the average Fisher information matrix,
G(θ_k). If the natural actor-critic algorithm does not already include one (NAC-LSTD and NAC-Sarsa do not), then an estimate can be generated by selecting G₀ = βI, where β is a positive scalar
and I is the identity matrix, and then updating the estimate with

  G_{t+1} = (1 − μ_t) G_t + μ_t (∂ log π(a_t|s_t, θ_k) / ∂θ_k) (∂ log π(a_t|s_t, θ_k) / ∂θ_k)ᵀ,

where {μ_t} is a step size schedule [14]. Notice that we use both t and k subscripts, since many time steps
of the MDP may pass between updates to the policy parameters.
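A minimal sketch of this running estimate is given below; the schedule μ_t = 1/t is one valid choice for illustration, not one prescribed by the text.

```python
import numpy as np

# Sketch of the running estimate of the average Fisher information matrix.
# `grad_log_pi` is the vector d/dtheta log pi(a | s, theta) for the latest
# observed state-action pair.

class FisherEstimate:
    def __init__(self, n, beta=1.0):
        self.G = beta * np.eye(n)       # G_0 = beta * I with beta > 0
        self.t = 0

    def update(self, grad_log_pi):
        self.t += 1
        mu = 1.0 / self.t               # one valid step-size schedule
        self.G = (1 - mu) * self.G + mu * np.outer(grad_log_pi, grad_log_pi)
        return self.G
```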
10 Case Study: Functional Electrical Stimulation

In this case study, we searched for proportional-derivative (PD) gains to control a simulated human
arm undergoing FES. We used the Dynamic Arm Simulator 1 (DAS1) [30], a detailed biomechanical
simulation of a human arm undergoing functional electrical stimulation. In a previous study, a
controller created using DAS1 performed well on an actual human subject undergoing FES, although
it required some additional tuning in order to cope with biceps spasticity [6]. This suggests that it is
a reasonably accurate model of an ideal arm.

The DAS1 model, depicted in Figure 2a, has state s_t = (θ₁, θ₂, θ̇₁, θ̇₂, θ₁^target, θ₂^target), where
θ₁^target and θ₂^target are the desired joint angles, and the desired joint angle velocities are zero. The
goal is to, during a two-second episode, move the arm from its random initial state to a randomly
chosen stationary target. The arm is controlled by providing a stimulation in the interval [0, 1] to
each of six muscles. The reward function used was similar to that of Jagodnik and van den Bogert
[6], which punishes joint angle error and high muscle stimulation. We searched for locally optimal
PD gains using PNAC-Sarsa, where the policy was a PD controller with Gaussian noise added for
exploration.
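A sketch of such a policy is shown below. The gain layout (one row of Kp and Kd per muscle) and the noise scale are assumptions for illustration; the paper does not specify them.

```python
import numpy as np

# Illustrative Gaussian-exploration PD policy for the FES task. Kp and Kd
# are assumed 6 x 2 gain matrices (six muscles, two joint-angle errors);
# sigma is an assumed exploration noise scale.

def pd_policy(state, Kp, Kd, sigma, rng):
    th1, th2, dth1, dth2, th1_t, th2_t = state
    err = np.array([th1_t - th1, th2_t - th2])       # joint angle errors
    derr = -np.array([dth1, dth2])                   # target velocities are zero
    u = Kp @ err + Kd @ derr                         # one stimulation per muscle
    u += sigma * rng.standard_normal(u.shape)        # Gaussian exploration
    return np.clip(u, 0.0, 1.0)                      # stimulations lie in [0, 1]
```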
Although DAS1 does not model shoulder dislocation, we added safety constraints by limiting the
ℓ₁-norm of certain pairs of gains. The constraints were selected to limit the forces applied to the
humerus. These constraints can be expressed in the form Aθ ≤ b, where A is a matrix, b is a vector,
and θ is the vector of PD gains (policy parameters). We compared the performance of three algorithms:

1. NAC: NAC-Sarsa with no constraints on θ.
(Figure 2a) DAS1, the two-joint, six-muscle biomechanical model used. Antagonistic muscle pairs are
as follows, listed as (flexor, extensor): monoarticular shoulder muscles (a: anterior deltoid, b: posterior
deltoid); monoarticular elbow muscles (c: brachialis, d: triceps brachii (short head)); biarticular muscles
(e: biceps brachii, f: triceps brachii (long head)).

(Figure 2b) Mean return during the last 250,000 episodes of training using the three algorithms. Standard
deviation error bars from the 10 trials are provided. The NAC bar is red to emphasize that the final policy found by NAC resides in the dangerous region of
policy space.
2. PNAC: PNAC-Sarsa using the compatible projection, Π_Θ^{G(θ_k)}.
3. PNAC-E: PNAC-Sarsa using the Euclidean projection.
Since we are not promoting the use of one natural actor-critic over another, we did not focus on
finely tuning the natural actor-critics, nor on comparing the learning speeds of different natural actor-critics. Rather, to show the importance of the proper projection, we allowed PNAC-Sarsa to run for
a million episodes (far longer than required for convergence), after which we plot the mean sum of
rewards during the last quarter million episodes. Each algorithm was run ten times, and the results
averaged and plotted in Figure 2b. Notice that PNAC performs worse than the unconstrained NAC.
This happens because NAC leaves the safe region of policy space during its search, and converges
to a dangerous policy: one that reaches the goal quickly and with low total muscle force, but which
can cause large, short spikes in the muscle forces surrounding the shoulder, which violate our safety
constraints. We suspect that PNAC converges to a near-optimal policy within the region of policy
space that we have designated as safe. PNAC-E converges to a policy that is worse than that found
by PNAC because it uses an incompatible projection.
11 Case Study: uBot Balancing
In the previous case study, the optimal policy lay outside the designated safe region of policy space
(this is common when a single failure is so costly that adding a penalty to the reward function for
failure is impractical, since a single failure is unacceptable). We present a second case study in which
the optimal policy lies within the designated safe region of policy space, but where an unconstrained
search algorithm may enter the unsafe region during its search of policy space (at which point large
negative rewards return it to the safe region).
The uBot-5, shown in Figure 3, is an 11-DoF mobile manipulator developed at the University of
Massachusetts Amherst [31, 32]. During experiments, it often uses its arms to interact with the
world. Here, we consider the problem faced by the controller tasked with keeping the robot balanced
during such experiments. To allow for results that are easy to visualize in 2D, we use a PD controller
that observes only the current body angle, its time derivative, and the target angle (always vertical).
This results in the PD controller having only two gains (tunable policy parameters). We use a
crude simulation of the uBot-5 with random upper-body movements, and search for the PD gains
that minimize a weighted combination of the energy used and the mean angle error (distance from
vertical).
We constructed a set of conservative estimates of the region of stable gains, with which the uBot-5 should never fall, and used PNAC-Sarsa and NAC-Sarsa to search for the optimal gains. Each
training episode lasted 20 seconds, but was terminated early (with a large penalty) if the uBot-5 fell
over. Figure 3 (middle) shows performance over 100 training episodes. Using NAC-Sarsa, the PD
weights often left the conservative estimate of the safe region, which resulted in the uBot-5 falling
over. Figure 3 (right) shows one trial where the uBot-5 fell over four times (circled in red). The
Figure 3: Left: uBot-5 holding a ball. Middle: Mean (over 20 trials) returns over time using PNAC-Sarsa and NAC-Sarsa on the simulated uBot-5 balancing task. The shaded region depicts standard
deviations. Right: Trace of the two PD gains, θ₁ and θ₂, from a typical run of PNAC-Sarsa and
NAC-Sarsa. A marker is placed for the gains after each episode, and red markers denote episodes
where the simulated uBot-5 fell over.
resulting large punishments cause NAC-Sarsa to quickly return to the safe region of policy space.
Using PNAC-Sarsa, the simulated uBot-5 never fell. Both algorithms converge to gains that reside
within the safe region of policy space. We selected this example because it shows how, even if the
optimal solution resides within the safe region of policy space (unlike in the previous case study),
unconstrained RL algorithms may traverse unsafe regions of policy space during their search.
12 Conclusion
We presented a class of algorithms, which we call projected natural actor-critics (PNACs). PNACs
are a simple modification of existing natural actor-critic algorithms: they add a projection of newly
computed policy parameters back onto an allowed set of policy parameters (e.g., those of policies
that are known to be safe). We argued that a principled projection is the one that results from viewing
natural gradient descent, which is an unconstrained algorithm, as a special case of mirror descent,
which is a constrained algorithm.

We showed that the resulting projection is compatible with the natural gradient and gave a simple empirical example of why a compatible projection is important. This example also shows how
an incompatible projection can cause natural gradient descent to converge to a pessimal solution
in situations where a compatible projection results in convergence to an optimal solution. We then
applied a PNAC algorithm to a realistic constrained control problem with six-dimensional continuous states and actions. Our results support our claim that the use of an incompatible projection
can result in convergence to inferior policies. Finally, we applied PNAC to a simulated robot and
showed its substantial benefits over unconstrained natural actor-critic algorithms.
References

[1] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10:251-276, 1998.
[2] K. J. Åström and T. Hägglund. PID Controllers: Theory, Design, and Tuning. ISA: The Instrumentation, Systems, and Automation Society, 1995.
[3] M. T. Söylemez, N. Munro, and H. Baki. Fast calculation of stabilizing PID controllers. Automatica, 39(1):121-126, 2003.
[4] C. L. Lynch and M. R. Popovic. Functional electrical stimulation. In IEEE Control Systems Magazine, volume 28, pages 40-50.
[5] E. K. Chadwick, D. Blana, A. J. van den Bogert, and R. F. Kirsch. A real-time 3-D musculoskeletal model for dynamic simulation of arm movements. In IEEE Transactions on Biomedical Engineering, volume 56, pages 941-948, 2009.
[6] K. Jagodnik and A. van den Bogert. A proportional derivative FES controller for planar arm movement. In 12th Annual Conference International FES Society, Philadelphia, PA, 2007.
[7] P. S. Thomas, M. S. Branicky, A. J. van den Bogert, and K. M. Jagodnik. Application of the actor-critic architecture to functional electrical stimulation control of a human arm. In Proceedings of the Twenty-First Innovative Applications of Artificial Intelligence, 2009.
[8] T. J. Perkins and A. G. Barto. Lyapunov design for safe reinforcement learning. Journal of Machine Learning Research, 3:803-832, 2003.
[9] H. Bendrahim and J. A. Franklin. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems, 22:283-302, 1997.
[10] A. Arapostathis, R. Kumar, and S. P. Hsu. Control of Markov chains with safety bounds. In IEEE Transactions on Automation Science and Engineering, volume 2, pages 333-343, October 2005.
[11] E. Arvelo and N. C. Martins. Control design for Markov chains under safety constraints: A convex approach. CoRR, abs/1209.2883, 2012.
[12] P. Geibel and F. Wysotzki. Risk-sensitive reinforcement learning applied to control under constraints. Journal of Artificial Intelligence Research, 24:81-108, 2005.
[13] S. Kuindersma, R. Grupen, and A. G. Barto. Variational Bayesian optimization for runtime risk-sensitive control. In Robotics: Science and Systems VIII, 2012.
[14] S. Bhatnagar, R. S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11):2471-2482, 2009.
[15] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Technical Report UCB/EECS-2010-24, Electrical Engineering and Computer Sciences, University of California at Berkeley, March 2010.
[16] S. Amari and S. Douglas. Why natural gradient? In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 2, pages 1213-1216, 1998.
[17] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983.
[18] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 2003.
[19] S. Mahadevan and B. Liu. Sparse Q-learning with mirror descent. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2012.
[20] S. Mahadevan, S. Giguere, and N. Jacek. Basis adaptation for sparse nonlinear reinforcement learning. In Proceedings of the Conference on Artificial Intelligence, 2013.
[21] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, Princeton, New Jersey, 1970.
[22] J. Nocedal and S. Wright. Numerical Optimization. Springer, second edition, 2006.
[23] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14, pages 1531-1538, 2002.
[24] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pages 1057-1063, 2000.
[25] T. Morimura, E. Uchibe, and K. Doya. Utilizing the natural gradient in temporal difference reinforcement learning with eligibility traces. In International Symposium on Information Geometry and its Application, 2005.
[26] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71:1180-1190, 2008.
[27] P. S. Thomas and A. G. Barto. Motor primitive discovery. In Proceedings of the IEEE Conference on Development and Learning and Epigenetic Robotics, 2012.
[28] T. Degris, P. M. Pilarski, and R. S. Sutton. Model-free reinforcement learning with continuous action in practice. In Proceedings of the 2012 American Control Conference, 2012.
[29] P. S. Thomas. Bias in natural actor-critic algorithms. Technical Report UM-CS-2012-018, Department of Computer Science, University of Massachusetts at Amherst, 2012.
[30] D. Blana, R. F. Kirsch, and E. K. Chadwick. Combined feedforward and feedback control of a redundant, nonlinear, dynamic musculoskeletal system. Medical and Biological Engineering and Computing, 47:533-542, 2009.
[31] P. Deegan. Whole-Body Strategies for Mobility and Manipulation. PhD thesis, University of Massachusetts Amherst, 2010.
[32] S. R. Kuindersma, E. Hannigan, D. Ruiken, and R. A. Grupen. Dexterous mobility with the uBot-5 mobile manipulator. In Proceedings of the 14th International Conference on Advanced Robotics, 2009.
4,624 | 5,185 | (More) Efficient Reinforcement Learning via
Posterior Sampling
Osband, Ian
Stanford University
Stanford, CA 94305
iosband@stanford.edu
Van Roy, Benjamin
Stanford University
Stanford, CA 94305
bvr@stanford.edu
Russo, Daniel
Stanford University
Stanford, CA 94305
djrusso@stanford.edu
Abstract
Most provably-efficient reinforcement learning algorithms introduce optimism about poorly-understood states and actions to encourage exploration.
We study an alternative approach for efficient exploration: posterior sampling for reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of known duration. At the start of each episode, PSRL
updates a prior distribution over Markov decision processes and takes one
sample from this posterior. PSRL then follows the policy that is optimal
for this sample during the episode. The algorithm is conceptually simple,
computationally efficient and allows an agent to encode prior knowledge
in a natural way. We establish an $\tilde{O}(\tau S\sqrt{AT})$ bound on expected regret,
where $T$ is time, $\tau$ is the episode length and $S$ and $A$ are the cardinalities of the state and action spaces. This bound is one of the first for an
algorithm not based on optimism, and close to the state of the art for any
reinforcement learning algorithm. We show through simulation that PSRL
significantly outperforms existing algorithms with similar regret bounds.
1
Introduction
We consider the classical reinforcement learning problem of an agent interacting with its
environment while trying to maximize total reward accumulated over time [1, 2]. The agent's
environment is modeled as a Markov decision process (MDP), but the agent is uncertain
about the true dynamics of the MDP. As the agent interacts with its environment, it observes
the outcomes that result from previous states and actions, and learns about the system
dynamics. This leads to a fundamental tradeoff: by exploring poorly-understood states
and actions the agent can learn to improve future performance, but it may attain better
short-run performance by exploiting its existing knowledge.
Naive optimization using point estimates for unknown variables overstates an agent's knowledge, and can lead to premature and suboptimal exploitation. To offset this, the majority of
provably efficient learning algorithms use a principle known as optimism in the face of uncertainty [3] to encourage exploration. In such an algorithm, each state and action is afforded
some optimism bonus such that their value to the agent is modeled to be as high as is statistically plausible. The agent will then choose a policy that is optimal under this "optimistic"
model of the environment. This incentivizes exploration since poorly-understood states and
actions will receive a higher optimism bonus. As the agent resolves its uncertainty, the effect of optimism is reduced and the agent's behavior approaches optimality. Many authors
have provided strong theoretical guarantees for optimistic algorithms [4, 5, 6, 7, 8]. In fact,
almost all reinforcement learning algorithms with polynomial bounds on sample complexity
employ optimism to guide exploration.
We study an alternative approach to efficient exploration, posterior sampling, and provide
finite time bounds on regret. We model the agent's initial uncertainty over the environment
through a prior distribution.1 At the start of each episode, the agent chooses a new policy, which it follows for the duration of the episode. Posterior sampling for reinforcement
learning (PSRL) selects this policy through two simple steps. First, a single instance of the
environment is sampled from the posterior distribution at the start of an episode. Then,
PSRL solves for and executes the policy that is optimal under the sampled environment over
the episode. PSRL randomly selects policies according to the probability they are optimal;
exploration is guided by the variance of sampled policies as opposed to optimism.
The idea of posterior sampling goes back to 1933 [9] and has been applied successfully to
multi-armed bandits. In that literature, the algorithm is often referred to as Thompson
sampling or as probability matching. Despite its long history, posterior sampling was largely
neglected by the multi-armed bandit literature until empirical studies [10, 11] demonstrated
that the algorithm could produce state of the art performance. This prompted a surge of
interest, and a variety of strong theoretical guarantees are now available [12, 13, 14, 15].
Our results suggest this method has great potential in reinforcement learning as well.
PSRL was originally introduced in the context of reinforcement learning by Strens [16]
under the name "Bayesian Dynamic Programming",² where it appeared primarily as a
heuristic method. In reference to PSRL and other "Bayesian RL" algorithms, Kolter and
Ng [17] write "little is known about these algorithms from a theoretical perspective, and
it is unclear, what (if any) formal guarantees can be made for such approaches." Those
Bayesian algorithms for which performance guarantees exist are guided by optimism. BOSS
[18] introduces a more complicated version of PSRL that samples many MDPs, instead
of just one, and then combines them into an optimistic environment to guide exploration.
BEB [17] adds an exploration bonus to states and actions according to how infrequently
they have been visited. We show it is not always necessary to introduce optimism via a
complicated construction, and that the simple algorithm originally proposed by Strens [16]
satisfies strong bounds itself.
Our work is motivated by several advantages of posterior sampling relative to optimistic
algorithms. First, since PSRL only requires solving for an optimal policy for a single sampled MDP, it is computationally efficient both relative to many optimistic methods, which
require simultaneous optimization across a family of plausible environments [4, 5, 18], and
to computationally intensive approaches that attempt to approximate the Bayes-optimal
solutions directly [18, 19, 20]. Second, the presence of an explicit prior allows an agent to
incorporate known environment structure in a natural way. This is crucial for most practical applications, as learning without prior knowledge requires exhaustive experimentation
in each possible state. Finally, posterior sampling allows us to separate the algorithm from
the analysis. In any optimistic algorithm, performance is greatly influenced by the manner
in which optimism is implemented. Past works have designed algorithms, at least in part, to
facilitate theoretical analysis for toy problems. Although our analysis of posterior sampling
is closely related to the analysis in [4], this worst-case bound has no impact on the algorithm's actual performance. In addition, PSRL is naturally suited to more complex settings
where design of an efficiently optimistic algorithm might not be possible. We demonstrate
through a computational study in Section 6 that PSRL outperforms the optimistic algorithm
UCRL2 [4]: a competitor with similar regret bounds over some example MDPs.
2
Problem formulation
We consider the problem of learning to optimize a random finite horizon MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$ in repeated finite episodes of interaction. $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $R^M_a(s)$ is a probability distribution over the reward realized when selecting action $a$ while in state $s$ whose support is $[0,1]$, $P^M_a(s'|s)$ is the probability of transitioning to state $s'$ if action $a$ is selected while at state $s$, $\tau$ is the time horizon, and $\rho$ the initial state distribution. We define the MDP and all other random variables we will consider with
¹ For an MDP, this might be a prior over transition dynamics and reward distributions.
² We alter terminology since PSRL is neither Bayes-optimal, nor a direct approximation of this.
respect to a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We assume $\mathcal{S}$, $\mathcal{A}$, and $\tau$ are deterministic so the agent need not learn the state and action spaces or the time horizon.
A deterministic policy $\mu$ is a function mapping each state $s \in \mathcal{S}$ and $i = 1, \ldots, \tau$ to an action $a \in \mathcal{A}$. For each MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$ and policy $\mu$, we define a value function

$$V^M_{\mu,i}(s) := \mathbb{E}_{M,\mu}\Big[ \sum_{j=i}^{\tau} \bar{R}^M_{a_j}(s_j) \,\Big|\, s_i = s \Big],$$

where $\bar{R}^M_a(s)$ denotes the expected reward realized when action $a$ is selected while in state $s$, and the subscripts of the expectation operator indicate that $a_j = \mu(s_j, j)$ and $s_{j+1} \sim P^M_{a_j}(\cdot|s_j)$ for $j = i, \ldots, \tau$. A policy $\mu$ is said to be optimal for MDP $M$ if $V^M_{\mu,i}(s) = \max_{\mu'} V^M_{\mu',i}(s)$ for all $s \in \mathcal{S}$ and $i = 1, \ldots, \tau$. We will associate with each MDP $M$ a policy $\mu^M$ that is optimal for $M$.
The reinforcement learning agent interacts with the MDP over episodes that begin at times $t_k = (k-1)\tau + 1$, $k = 1, 2, \ldots$. At each time $t$, the agent selects an action $a_t$, observes a scalar reward $r_t$, and then transitions to $s_{t+1}$. If an agent follows a policy $\mu$ then when in state $s$ at time $t$ during episode $k$, it selects an action $a_t = \mu(s, t - t_k)$. Let $H_t = (s_1, a_1, r_1, \ldots, s_{t-1}, a_{t-1}, r_{t-1})$ denote the history of observations made prior to time $t$. A reinforcement learning algorithm is a deterministic sequence $\{\pi_k \,|\, k = 1, 2, \ldots\}$ of functions, each mapping $H_{t_k}$ to a probability distribution $\pi_k(H_{t_k})$ over policies. At the start of the $k$th episode, the algorithm samples a policy $\mu_k$ from the distribution $\pi_k(H_{t_k})$. The algorithm then selects actions $a_t = \mu_k(s_t, t - t_k)$ at times $t$ during the $k$th episode.
We define the regret incurred by a reinforcement learning algorithm $\pi$ up to time $T$ to be

$$\text{Regret}(T, \pi) := \sum_{k=1}^{\lceil T/\tau \rceil} \Delta_k,$$

where $\Delta_k$ denotes regret over the $k$th episode, defined with respect to the MDP $M^*$ by

$$\Delta_k = \sum_{s \in \mathcal{S}} \rho(s)\big(V^{M^*}_{\mu^*,1}(s) - V^{M^*}_{\mu_k,1}(s)\big),$$

with $\mu^* = \mu^{M^*}$ and $\mu_k \sim \pi_k(H_{t_k})$. Note that regret is not deterministic since it can depend on the random MDP $M^*$, the algorithm's internal random sampling and, through the history $H_{t_k}$, on previous random transitions and random rewards. We will assess and compare algorithm performance in terms of regret and its expectation.
3
Posterior sampling for reinforcement learning
The use of posterior sampling for reinforcement learning (PSRL) was first proposed by
Strens [16]. PSRL begins with a prior distribution over MDPs with states $\mathcal{S}$, actions $\mathcal{A}$ and horizon $\tau$. At the start of each $k$th episode, PSRL samples an MDP $M_k$ from the posterior distribution conditioned on the history $H_{t_k}$ available at that time. PSRL then computes and follows the policy $\mu_k = \mu^{M_k}$ over episode $k$.
Algorithm: Posterior Sampling for Reinforcement Learning (PSRL)
Data: Prior distribution $f$, $t = 1$
for episodes $k = 1, 2, \ldots$ do
    sample $M_k \sim f(\cdot|H_{t_k})$
    compute $\mu_k = \mu^{M_k}$
    for timesteps $j = 1, \ldots, \tau$ do
        sample and apply $a_t = \mu_k(s_t, j)$
        observe $r_t$ and $s_{t+1}$
        $t = t + 1$
    end
end
We show PSRL obeys performance guarantees intimately related to those for learning algorithms based upon OFU, as has been demonstrated for multi-armed bandit problems [15].
We believe that a posterior sampling approach offers some inherent advantages.
Optimistic algorithms require explicit construction of confidence bounds on $V^{M^*}_{\mu,1}(s)$ based on observed data, which is a complicated statistical problem even for simple models. In addition, even if strong confidence bounds for $V^{M^*}_{\mu,1}(s)$ were known, solving for the best optimistic policy may be computationally intractable. Algorithms such as UCRL2 [4] are computationally tractable, but must resort to separately bounding $R^M_a(s)$ and $P^M_a(s)$ with high probability for each $s, a$. These bounds allow a "worst-case" mis-estimation simultaneously in every state-action pair and consequently give rise to a confidence set which may be far too conservative.
By contrast, PSRL always selects policies according to the probability they are optimal.
Uncertainty about each policy is quantified in a statistically efficient way through the posterior distribution. The algorithm only requires a single sample from the posterior, which
may be approximated through algorithms such as Metropolis-Hastings if no closed form
exists. As such, we believe PSRL will be simpler to implement, computationally cheaper
and statistically more efficient than existing optimistic methods.
3.1
Main results
The following result establishes regret bounds for PSRL. The bounds have $\tilde{O}(\tau S\sqrt{AT})$ expected regret, and, to our knowledge, provide the first guarantees for an algorithm not based upon optimism:

Theorem 1. If $f$ is the distribution of $M^*$ then,

$$\mathbb{E}\big[\text{Regret}(T, \pi^{PS}_\tau)\big] = O\big(\tau S \sqrt{AT \log(SAT)}\big). \quad (1)$$
This result holds for any prior distribution on MDPs, and so applies to an immense class
of models. To accommodate this generality, the result bounds expected regret under the
prior distribution (sometimes called Bayes risk or Bayesian regret). We feel this is a natural
measure of performance, but should emphasize that it is more common in the literature to
bound regret under a worst-case MDP instance. The next result provides a link between
these notions of regret. Applying Markov's inequality to (1) gives convergence in probability.

Corollary 1. If $f$ is the distribution of $M^*$ then for any $\alpha > \frac{1}{2}$,

$$\frac{\text{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} \to 0 \quad \text{in probability}.$$
As shown in the appendix, this also bounds the frequentist regret for any MDP with non-zero probability. State-of-the-art guarantees similar to Theorem 1 are satisfied by the algorithms UCRL2 [4] and REGAL [5] for the case of non-episodic RL. Here UCRL2 gives regret bounds $\tilde{O}(DS\sqrt{AT})$ where $D = \max_{s' \neq s} \min_\pi \mathbb{E}[T(s'|M, \pi, s)]$ and $T(s'|M, \pi, s)$ is the first time step where $s'$ is reached from $s$ under the policy $\pi$. REGAL improves this result to $\tilde{O}(\Psi S\sqrt{AT})$ where $\Psi \le D$ is the span of the optimal value function. However, there is so far no computationally tractable implementation of this algorithm.

In many practical applications we may be interested in episodic learning tasks where the constants $D$ and $\Psi$ could be improved to take advantage of the episode length $\tau$. Simple modifications to both UCRL2 and REGAL will produce regret bounds of $\tilde{O}(\tau S\sqrt{AT})$, just as PSRL. This is close to the theoretical lower bounds in $\sqrt{SAT}$-dependence.
4
True versus sampled MDP
A simple observation, which is central to our analysis, is that, at the start of each $k$th episode, $M^*$ and $M_k$ are identically distributed. This fact allows us to relate quantities that depend on the true, but unknown, MDP $M^*$, to those of the sampled MDP $M_k$, which is
fully observed by the agent. We introduce $\sigma(H_{t_k})$ as the $\sigma$-algebra generated by the history up to $t_k$. Readers unfamiliar with measure theory can think of this as "all information known just before the start of period $t_k$." When we say that a random variable $X$ is $\sigma(H_{t_k})$-measurable, this intuitively means that although $X$ is random, it is deterministically known given the information contained in $H_{t_k}$. The following lemma is an immediate consequence of this observation [15].

Lemma 1 (Posterior Sampling). If $f$ is the distribution of $M^*$ then, for any $\sigma(H_{t_k})$-measurable function $g$,

$$\mathbb{E}[g(M^*)\,|\,H_{t_k}] = \mathbb{E}[g(M_k)\,|\,H_{t_k}]. \quad (2)$$

Note that taking the expectation of (2) shows $\mathbb{E}[g(M^*)] = \mathbb{E}[g(M_k)]$ through the tower property.
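A quick numerical illustration of Lemma 1 in a toy one-parameter model (a Bernoulli "MDP" with a Beta prior; all names and values here are illustrative assumptions): averaging $g$ over draws of the true parameter, and over posterior draws conditioned on the histories those draws generate, gives matching estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, b0, n, trials = 2.0, 2.0, 5, 200_000
g = lambda p: p ** 2                     # any fixed measurable function g

lhs = rhs = 0.0
for _ in range(trials):
    p_star = rng.beta(a0, b0)            # M* ~ f (the prior)
    k = rng.binomial(1, p_star, n).sum() # history H generated under M*
    p_k = rng.beta(a0 + k, b0 + n - k)   # M_k ~ f(.|H) (the posterior)
    lhs += g(p_star)                     # accumulates E[g(M*)]
    rhs += g(p_k)                        # accumulates E[g(M_k)]
print(lhs / trials, rhs / trials)        # agree up to Monte Carlo noise
```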
Recall, we have defined $\Delta_k = \sum_{s\in\mathcal{S}} \rho(s)\big(V^{M^*}_{\mu^*,1}(s) - V^{M^*}_{\mu_k,1}(s)\big)$ to be the regret over period $k$. A significant hurdle in analyzing this equation is its dependence on the optimal policy $\mu^*$, which we do not observe. For many reinforcement learning algorithms, there is no clean way to relate the unknown optimal policy to the states and actions the agent actually observes. The following result shows how we can avoid this issue using Lemma 1. First, define

$$\tilde{\Delta}_k = \sum_{s\in\mathcal{S}} \rho(s)\big(V^{M_k}_{\mu_k,1}(s) - V^{M^*}_{\mu_k,1}(s)\big) \quad (3)$$

as the difference in expected value of the policy $\mu_k$ under the sampled MDP $M_k$, which is known, and its performance under the true MDP $M^*$, which is observed by the agent.
"m
#
"m
#
X
X
?
E
?k = E
?k
(4)
k=1
k=1
and for any ? > 0 with probability at least 1 ? ?,
Mk
M?
?k = P
Proof. Note, ?k ? ?
s?S ?(s)(V?? ,1 (s) ? V?k ,1 (s)) ? [??, ? ]. By Lemma 1, E[?k ?
? k |Ht ] = 0. Taking expectations of these sums therefore establishes the claim.
?
k
This result bounds the agent?s regret in epsiode k by the difference between the agent?s
k
estimate V?Mk ,1
(stk ) of the expected reward in Mk from the policy it chooses, and the expected
?
M
reward V?k ,1 (stk ) in M ? . If the agent has a poor estimate of the MDP M ? , we expect it to
learn as the performance of following ?k under M ? differs from its expectation under Mk .
As more information is gathered, its performance should improve. In the next section, we
formalize these ideas and give a precise bound on the regret of posterior sampling.
5
Analysis
An essential tool in our analysis will be the dynamic programming, or Bellman, operator $T^M_\mu$, which for any MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$, stationary policy $\mu: \mathcal{S} \to \mathcal{A}$ and value function $V: \mathcal{S} \to \mathbb{R}$, is defined by

$$T^M_\mu V(s) := \bar{R}^M_{\mu(s)}(s) + \sum_{s' \in \mathcal{S}} P^M_{\mu(s)}(s'|s)\, V(s').$$

This operation returns the expected value of state $s$ where we follow the policy $\mu$ under the laws of $M$, for one time step. The following lemma gives a concise form for the dynamic programming paradigm in terms of the Bellman operator.

Lemma 2 (Dynamic programming equation). For any MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$ and policy $\mu: \mathcal{S} \times \{1, \ldots, \tau\} \to \mathcal{A}$, the value functions $V^M_\mu$ satisfy

$$V^M_{\mu,i} = T^M_{\mu(\cdot,i)} V^M_{\mu,i+1} \quad \text{for } i = 1, \ldots, \tau, \text{ with } V^M_{\mu,\tau+1} := 0. \quad (5)$$
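The recursion in Lemma 2 is a single backward pass; a tabular sketch (array shapes and names are our assumptions):

```python
import numpy as np

def bellman_op(P, R_bar, mu_i, V):
    """T^M_mu V(s) = R_bar[s, mu_i(s)] + sum_s' P[mu_i(s), s, s'] V(s')."""
    s_idx = np.arange(V.shape[0])
    return R_bar[s_idx, mu_i] + P[mu_i, s_idx, :] @ V

def evaluate_policy(P, R_bar, mu, tau):
    """Returns V[i - 1] = V^M_{mu, i} for i = 1..tau, with V_{tau+1} := 0."""
    S = R_bar.shape[0]
    V = np.zeros((tau + 2, S))              # V[tau + 1] = 0 by convention
    for i in range(tau, 0, -1):             # equation (5), applied backwards
        V[i] = bellman_op(P, R_bar, mu[i - 1], V[i + 1])
    return V[1:tau + 1]
```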
In order to streamline our notation we will let $V^k_{\mu,i}(s) := V^{M_k}_{\mu,i}(s)$, $V^*_{\mu,i}(s) := V^{M^*}_{\mu,i}(s)$, $T^k_\mu := T^{M_k}_\mu$, $T^*_\mu := T^{M^*}_\mu$ and $P^*_\mu(\cdot|s) := P^{M^*}_{\mu(s)}(\cdot|s)$.
5.1
Rewriting regret in terms of Bellman error

$$\mathbb{E}\big[\tilde{\Delta}_k \,\big|\, M^*, M_k\big] = \mathbb{E}\bigg[ \sum_{i=1}^{\tau} \big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i}) \,\bigg|\, M^*, M_k \bigg] \quad (6)$$
To see why (6) holds, simply apply the dynamic programming equation inductively:

$$(V^k_{\mu_k,1} - V^*_{\mu_k,1})(s_{t_k+1}) = \big(T^k_{\mu_k(\cdot,1)} V^k_{\mu_k,2} - T^*_{\mu_k(\cdot,1)} V^*_{\mu_k,2}\big)(s_{t_k+1})$$
$$= \big(T^k_{\mu_k(\cdot,1)} - T^*_{\mu_k(\cdot,1)}\big) V^k_{\mu_k,2}(s_{t_k+1}) + \sum_{s'\in\mathcal{S}} \big\{ P^*_{\mu_k(\cdot,1)}(s'|s_{t_k+1})\big(V^k_{\mu_k,2} - V^*_{\mu_k,2}\big)(s') \big\}$$
$$= \big(T^k_{\mu_k(\cdot,1)} - T^*_{\mu_k(\cdot,1)}\big) V^k_{\mu_k,2}(s_{t_k+1}) + \big(V^k_{\mu_k,2} - V^*_{\mu_k,2}\big)(s_{t_k+2}) + d_{t_k+1}$$
$$= \cdots = \sum_{i=1}^{\tau} \big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i}) + \sum_{i=1}^{\tau} d_{t_k+i},$$

where $d_{t_k+i} := \sum_{s'\in\mathcal{S}} \big\{ P^*_{\mu_k(\cdot,i)}(s'|s_{t_k+i})\big(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1}\big)(s') \big\} - \big(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1}\big)(s_{t_k+i+1})$.
This expresses the regret in terms of two factors. The first factor is the one-step Bellman error $\big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i})$ under the sampled MDP $M_k$. Crucially, (6) depends only on the Bellman error under the observed policy $\mu_k$ and the states $s_1, \ldots, s_T$ that are actually visited over the first $T$ periods. We go on to show the posterior distribution of $M_k$ concentrates around $M^*$ as these actions are sampled, and so this term tends to zero.

The second term captures the randomness in the transitions of the true MDP $M^*$. In state $s_t$ under policy $\mu_k$, the expected value of $\big(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1}\big)(s_{t_k+i+1})$ is exactly $\sum_{s'\in\mathcal{S}} \big\{ P^*_{\mu_k(\cdot,i)}(s'|s_{t_k+i})\big(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1}\big)(s') \big\}$. Hence, conditioned on the true MDP $M^*$ and the sampled MDP $M_k$, the term $\sum_{i=1}^{\tau} d_{t_k+i}$ has expectation zero.
5.2
Introducing confidence sets
The last section reduced the algorithm's regret to its expected Bellman error. We will proceed by arguing that the sampled Bellman operator $T^k_{\mu_k(\cdot,i)}$ concentrates around the true Bellman operator $T^*_{\mu_k(\cdot,i)}$. To do this, we introduce high probability confidence sets similar to those used in [4] and [5]. Let $\hat{P}^t_a(\cdot|s)$ denote the empirical distribution up to period $t$ of transitions observed after sampling $(s, a)$, and let $\hat{R}^t_a(s)$ denote the empirical average reward. Finally, define $N_{t_k}(s, a) = \sum_{t=1}^{t_k - 1} \mathbb{1}_{\{(s_t, a_t) = (s,a)\}}$ to be the number of times $(s, a)$ was sampled prior to time $t_k$. Define the confidence set for episode $k$:

$$\mathcal{M}_k := \Big\{ M : \big\|\hat{P}^t_a(\cdot|s) - P^M_a(\cdot|s)\big\|_1 \le \beta_k(s,a) \;\&\; \big|\hat{R}^t_a(s) - \bar{R}^M_a(s)\big| \le \beta_k(s,a) \;\; \forall (s,a) \Big\},$$

where $\beta_k(s,a) := \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s,a)\}}}$ is chosen conservatively so that $\mathcal{M}_k$ contains both $M^*$ and $M_k$ with high probability. It's worth pointing out that we have not tried to optimize this confidence bound, and it can be improved, at least by a numerical factor, with more careful analysis. Now, using that $\tilde{\Delta}_k \le \tau$, we can decompose regret as follows:
$$\sum_{k=1}^m \tilde{\Delta}_k \le \sum_{k=1}^m \tilde{\Delta}_k \mathbb{1}_{\{M_k, M^* \in \mathcal{M}_k\}} + \tau \sum_{k=1}^m \big[ \mathbb{1}_{\{M_k \notin \mathcal{M}_k\}} + \mathbb{1}_{\{M^* \notin \mathcal{M}_k\}} \big] \quad (7)$$
Now, since $\mathcal{M}_k$ is $\sigma(H_{t_k})$-measurable, by Lemma 1, $\mathbb{E}[\mathbb{1}_{\{M_k \notin \mathcal{M}_k\}}|H_{t_k}] = \mathbb{E}[\mathbb{1}_{\{M^* \notin \mathcal{M}_k\}}|H_{t_k}]$. Lemma 17 of [4] shows $\mathbb{P}(M^* \notin \mathcal{M}_k) \le 1/m$ for this choice of $\beta_k(s,a)$,³ which implies

$$\mathbb{E}\Big[\sum_{k=1}^m \Delta_k\Big] \le \mathbb{E}\Big[\sum_{k=1}^m \tilde{\Delta}_k \mathbb{1}_{\{M_k, M^* \in \mathcal{M}_k\}}\Big] + 2\tau \sum_{k=1}^m \mathbb{P}\{M^* \notin \mathcal{M}_k\}$$
$$\le \mathbb{E}\Big[\sum_{k=1}^m \mathbb{E}\big[\tilde{\Delta}_k \,|\, M^*, M_k\big] \mathbb{1}_{\{M_k, M^* \in \mathcal{M}_k\}}\Big] + 2\tau$$
$$\le \mathbb{E}\Big[\sum_{k=1}^m \sum_{i=1}^{\tau} \big|\big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big)V^k_{\mu_k,i+1}(s_{t_k+i})\big| \mathbb{1}_{\{M_k, M^* \in \mathcal{M}_k\}}\Big] + 2\tau$$
$$\le \tau\, \mathbb{E}\Big[\sum_{k=1}^m \sum_{i=1}^{\tau} \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\}\Big] + 2\tau. \quad (8)$$

We also have the worst-case bound $\sum_{k=1}^m \tilde{\Delta}_k \le T$. In the technical appendix we go on to provide a worst-case bound on $\min\{\tau \sum_{k=1}^m \sum_{i=1}^{\tau} \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\}, T\}$ of order $\tau S \sqrt{AT \log(SAT)}$, which completes our analysis.
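For reference, the confidence widths driving bound (8) are a one-line computation from the visit counts; a sketch (the function name is ours, the constant 14 comes from the definition of $\beta_k$):

```python
import numpy as np

def beta_k(N, S, A, m, t_k):
    """beta_k(s, a) = sqrt(14 S log(2 S A m t_k) / max{1, N_{t_k}(s, a)})
    for an (S, A) array N of pre-episode visit counts."""
    return np.sqrt(14.0 * S * np.log(2.0 * S * A * m * t_k)
                   / np.maximum(1.0, N))

# The clipped per-step width min{beta_k, 1} that appears in (8):
N = np.zeros((6, 2)); N[0, 1] = 10
print(np.minimum(beta_k(N, S=6, A=2, m=100, t_k=41), 1.0))
```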
6
Simulation results
We compare performance of PSRL to UCRL2 [4]: an optimistic algorithm with similar
regret bounds. We use the standard example of RiverSwim [21], as well as several randomly
generated MDPs. We provide results in both the episodic case, where the state is reset
every $\tau = 20$ steps, as well as the setting without episodic reset.

Figure 1: RiverSwim - continuous and dotted arrows represent the MDP under the actions "right" and "left".
RiverSwim consists of six states arranged in a chain as shown in Figure 1. The agent begins
at the far left state and at every time step has the choice to swim left or right. Swimming left
(with the current) is always successful, but swimming right (against the current) often fails.
The agent receives a small reward for reaching the leftmost state, but the optimal policy is
to attempt to swim right and receive a much larger reward. This MDP is constructed so
that efficient exploration is required in order to obtain the optimal policy. To generate the
random MDPs, we sampled 10-state, 5-action environments according to the prior.
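A sketch of the RiverSwim dynamics as described above; the failure probabilities below are illustrative assumptions in the spirit of [21], not values stated in this paper:

```python
import numpy as np

def make_riverswim(S=6, p_right=0.35, p_stay=0.6):
    """P[a, s, s'] and R[s, a] with a = 0 (swim left), a = 1 (swim right)."""
    P = np.zeros((2, S, S))
    R = np.zeros((S, 2))
    for s in range(S):
        P[0, s, max(s - 1, 0)] = 1.0           # left always succeeds
        P[1, s, min(s + 1, S - 1)] += p_right  # right often fails
        P[1, s, s] += p_stay
        P[1, s, max(s - 1, 0)] += 1.0 - p_right - p_stay
    R[0, 0] = 5.0 / 1000.0                     # small reward, leftmost state
    R[S - 1, 1] = 1.0                          # large reward, rightmost state
    return P, R
```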
We express our prior in terms of Dirichlet and normal-gamma distributions over the transitions and rewards respectively.⁴ In both environments we perform 20 Monte Carlo simulations and compute the total regret over 10,000 time steps. We implement UCRL2 with $\delta = 0.05$ and optimize the algorithm to take account of finite episodes where appropriate. PSRL outperformed UCRL2 across every environment, as shown in Table 1. In Figure 2, we show regret through time across 50 Monte Carlo simulations to 100,000 time-steps in the RiverSwim environment: PSRL's outperformance is quite extreme.

³ Our confidence sets are equivalent to those of [4] when the parameter $\delta = 1/m$.
⁴ These priors are conjugate to the multinomial and normal distribution. We used the values $\alpha = 1/S$, $\mu = \sigma^2 = 1$ and pseudocount $n = 1$ for a diffuse uniform prior.
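Sampling one MDP from these conjugate posteriors is direct. The sketch below uses the footnote's hyperparameters; the normal-gamma update is the standard conjugate one, written out here as an assumption rather than quoted from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_transitions(trans_counts, alpha=None):
    """One draw P_k(.|s, a) ~ Dirichlet(alpha + counts) per (s, a) row."""
    A, S, _ = trans_counts.shape
    alpha = 1.0 / S if alpha is None else alpha
    return np.array([[rng.dirichlet(alpha + trans_counts[a, s])
                      for s in range(S)] for a in range(A)])

def sample_mean_reward(n, r_sum, r_sq_sum, mu0=1.0, n0=1.0, a0=1.0, b0=1.0):
    """Normal-gamma posterior draw of an unknown reward mean (scalar case)."""
    r_bar = r_sum / max(n, 1)
    n_post = n0 + n
    a_post = a0 + n / 2.0
    b_post = (b0 + 0.5 * (r_sq_sum - n * r_bar ** 2)
              + 0.5 * n0 * n * (r_bar - mu0) ** 2 / n_post)
    prec = rng.gamma(a_post, 1.0 / b_post)     # precision ~ Gamma(a, 1/b)
    return rng.normal((n0 * mu0 + r_sum) / n_post,
                      1.0 / np.sqrt(n_post * prec))
```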
Table 1: Total regret in simulation. PSRL outperforms UCRL2 over different environments.

Algorithm | Random MDP, tau-episodes | Random MDP, infinite-horizon | RiverSwim, tau-episodes | RiverSwim, infinite-horizon
PSRL      | 1.04 x 10^4 | 7.30 x 10^3 | 6.88 x 10^1 | 1.06 x 10^2
UCRL2     | 5.92 x 10^4 | 1.13 x 10^5 | 1.26 x 10^3 | 3.64 x 10^3

6.1
Learning in MDPs without episodic resets
The majority of practical problems in reinforcement learning can be mapped to repeated episodic interactions for some length $\tau$. Even in cases where there is no actual reset of episodes, one can show that PSRL's regret is bounded against all policies which work over horizon $\tau$ or less [6]. Any setting with discount factor $\gamma$ can be learned for $\tau \propto (1-\gamma)^{-1}$.

One appealing feature of UCRL2 [4] and REGAL [5] is that they learn this optimal timeframe $\tau$. Instead of computing a new policy after a fixed number of periods, they begin a new episode when the total visits to any state-action pair is doubled. We can apply this same rule for episodes to PSRL in the $\infty$-horizon case, as shown in Figure 2. Using optimism with KL-divergence instead of $L1$ balls has also shown improved performance over UCRL2 [22], but its regret remains orders of magnitude more than PSRL on RiverSwim.
(a) PSRL outperforms UCRL2 by large margins. (b) PSRL learns quickly despite a misspecified prior.
Figure 2: Simulated regret on the $\infty$-horizon RiverSwim environment.
7
Conclusion
We establish posterior sampling for reinforcement learning not just as a heuristic, but as a provably efficient learning algorithm. We present $\tilde{O}(\tau S\sqrt{AT})$ Bayesian regret bounds, which are some of the first for an algorithm not motivated by optimism and are close to state of the art for any reinforcement learning algorithm. These bounds hold in expectation irrespective of prior or model structure. PSRL is conceptually simple, computationally efficient and can easily incorporate prior knowledge. Compared to feasible optimistic algorithms we believe that PSRL is often more efficient statistically, simpler to implement and computationally cheaper. We demonstrate that PSRL performs well in simulation over several domains. We believe there is a strong case for the wider adoption of algorithms based upon posterior sampling in both theory and practice.
Acknowledgments
Osband and Russo are supported by Stanford Graduate Fellowships courtesy of PACCAR
inc., and Burt and Deedee McMurty, respectively. This work was supported in part by
Award CMMI-0968707 from the National Science Foundation.
References
[1] A. N. Burnetas and M. N. Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222-255, 1997.
[2] P. R. Kumar and P. Varaiya. Stochastic systems: estimation, identification and adaptive control. Prentice-Hall, Inc., 1986.
[3] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[4] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563-1600, 2010.
[5] P. L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35-42. AUAI Press, 2009.
[6] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213-231, 2003.
[7] S. M. Kakade. On the sample complexity of reinforcement learning. PhD thesis, University of London, 2003.
[8] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
[9] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
[10] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Neural Information Processing Systems (NIPS), 2011.
[11] S. L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639-658, 2010.
[12] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. arXiv preprint arXiv:1209.3353, 2012.
[13] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. arXiv preprint arXiv:1209.3352, 2012.
[14] E. Kauffmann, N. Korda, and R. Munos. Thompson sampling: an asymptotically optimal finite time analysis. In International Conference on Algorithmic Learning Theory, 2012.
[15] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. CoRR, abs/1301.2609, 2013.
[16] M. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943-950, 2000.
[17] J. Z. Kolter and A. Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513-520. ACM, 2009.
[18] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, pages 956-963. ACM, 2005.
[19] A. Guez, D. Silver, and P. Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. arXiv preprint arXiv:1205.3109, 2012.
[20] J. Asmuth and M. L. Littman. Approaching Bayes-optimality using Monte-Carlo tree search. In Proc. 21st Int. Conf. Automat. Plan. Sched., Freiburg, Germany, 2011.
[21] A. L. Strehl and M. L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
[22] S. Filippi, O. Cappe, and A. Garivier. Optimism in reinforcement learning based on Kullback-Leibler divergence. CoRR, abs/1004.5229, 2010.
A
Relating Bayesian to frequentist regret
Let $\mathcal{M}$ be any family of MDPs with non-zero probability under the prior. Then, for any $\epsilon > 0$, $\alpha > \frac{1}{2}$:

$$\mathbb{P}\left( \frac{\text{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} > \epsilon \,\Big|\, M^* \in \mathcal{M} \right) \to 0.$$

This provides regret bounds even if $M^*$ is not distributed according to $f$. As long as the true MDP is not impossible under the prior, we will have an asymptotic frequentist regret close to the theoretical lower bounds in $T$-dependence of $O(\sqrt{T})$.

Proof. We have for any $\epsilon > 0$:

$$\frac{\mathbb{E}[\text{Regret}(T, \pi^{PS}_\tau)]}{T^\alpha} \ge \mathbb{E}\left[ \frac{\text{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} \,\Big|\, M^* \in \mathcal{M} \right] \mathbb{P}(M^* \in \mathcal{M}) \ge \epsilon\, \mathbb{P}\left( \frac{\text{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} \ge \epsilon \,\Big|\, M^* \in \mathcal{M} \right) \mathbb{P}(M^* \in \mathcal{M}).$$

Therefore via Theorem 1, for any $\alpha > \frac{1}{2}$:

$$\mathbb{P}\left( \frac{\text{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} \ge \epsilon \,\Big|\, M^* \in \mathcal{M} \right) \le \frac{1}{\epsilon\, \mathbb{P}(M^* \in \mathcal{M})} \cdot \frac{\mathbb{E}[\text{Regret}(T, \pi^{PS}_\tau)]}{T^\alpha} \to 0.$$

B    Bounding the sum of confidence set widths
We are interested in bounding $\min\{\tau \sum_{k=1}^m \sum_{i=1}^{\tau} \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\}, T\}$, which we claim is $O(\tau S\sqrt{AT \log(SAT)})$ for $\beta_k(s,a) := \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s,a)\}}}$.
Proof. In a manner similar to [4] we can say:

$$\tau \sum_{k=1}^m \sum_{i=1}^{\tau} \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s,a)\}}} \le \tau \sum_{k=1}^m \sum_{i=1}^{\tau} \mathbb{1}_{\{N_{t_k} \le \tau\}} + \tau \sum_{k=1}^m \sum_{i=1}^{\tau} \mathbb{1}_{\{N_{t_k} > \tau\}} \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s,a)\}}}.$$

Now, consider the event $(s_t, a_t) = (s,a)$ and $N_{t_k}(s,a) \le \tau$. This can happen fewer than $2\tau$ times per state-action pair. Therefore, $\sum_{k=1}^m \sum_{i=1}^{\tau} \mathbb{1}_{\{N_{t_k}(s,a) \le \tau\}} \le 2\tau SA$. Now, suppose $N_{t_k}(s,a) > \tau$. Then for any $t \in \{t_k, \ldots, t_{k+1}-1\}$, $N_t(s,a) + 1 \le N_{t_k}(s,a) + \tau \le 2N_{t_k}(s,a)$. Therefore:

$$\sum_{k=1}^m \sum_{t=t_k}^{t_{k+1}-1} \sqrt{\frac{\mathbb{1}_{\{N_{t_k}(s_t,a_t) > \tau\}}}{N_{t_k}(s_t,a_t)}} \le \sum_{k=1}^m \sum_{t=t_k}^{t_{k+1}-1} \sqrt{\frac{2}{N_t(s_t,a_t)+1}} = \sqrt{2} \sum_{t=1}^{T} \big(N_t(s_t,a_t)+1\big)^{-1/2}$$
$$\le \sqrt{2} \sum_{s,a} \sum_{j=1}^{N_{T+1}(s,a)} j^{-1/2} \le \sqrt{2} \sum_{s,a} \int_{x=0}^{N_{T+1}(s,a)} x^{-1/2}\, dx \le 2\sqrt{2} \sum_{s,a} \sqrt{N_{T+1}(s,a)} \le 2\sqrt{2SAT}.$$

Note that since all rewards and transitions are absolutely constrained $\in [0,1]$, our regret satisfies

$$\min\Big\{ \tau \sum_{k=1}^m \sum_{i=1}^{\tau} \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\},\; T \Big\} \le \min\Big\{ 2\tau^2 SA + \tau\sqrt{28 S^2 AT \log(SAT)},\; T \Big\}$$
$$\le \sqrt{2\tau^2 SAT} + \tau\sqrt{28 S^2 AT \log(SAT)} \le \tau S \sqrt{30\, AT \log(SAT)},$$

which is our required result.
4,625 | 5,186 | Adaptive Step-Size for Policy Gradient Methods
Matteo Pirotta
Dept. Elect., Inf., and Bio.
Politecnico di Milano, ITALY
Marcello Restelli
Dept. Elect., Inf., and Bio.
Politecnico di Milano, ITALY
Luca Bascetta
Dept. Elect., Inf., and Bio.
Politecnico di Milano, ITALY
matteo.pirotta@polimi.it
marcello.restelli@polimi.it
luca.bascetta@polimi.it
Abstract
In the last decade, policy gradient methods have significantly grown in popularity in the reinforcement-learning field. In particular, they have been largely employed in motor control and robotic applications, thanks to their ability to cope with continuous state and action domains and partially observable problems. Policy gradient research has mainly focused on the identification of effective gradient directions and the proposal of efficient estimation algorithms. Nonetheless, the performance of policy gradient methods is determined not only by the gradient direction, since convergence properties are strongly influenced by the choice of the step size: small values imply slow convergence rates, while large values may lead to oscillations or even divergence of the policy parameters. The step-size value is usually chosen by hand tuning, and still little attention has been paid to its automatic selection. In this paper, we propose to determine the learning rate by maximizing a lower bound to the expected performance gain. Focusing on Gaussian policies, we derive a lower bound that is a second-order polynomial of the step size, and we show how a simplified version of such a lower bound can be maximized when the gradient is estimated from trajectory samples. The properties of the proposed approach are empirically evaluated in a linear-quadratic regulator problem.
1
Introduction
Policy gradient methods have become established as the most effective reinforcement-learning techniques in robotic applications. Such methods perform a policy search to maximize the expected return of a policy in a parameterized policy class. The reasons for their success are many. Compared to several traditional reinforcement-learning approaches, policy gradients scale well to high-dimensional continuous state and action problems, and no changes to the algorithms are needed to face uncertainty in the state due to limited and noisy sensors. Furthermore, the policy representation can be properly designed for the given task, thus allowing to incorporate domain knowledge into the algorithm, useful to speed up the learning process and to prevent the unexpected execution of dangerous policies that may harm the system. Finally, they are guaranteed to converge to locally optimal policies.

Thanks to these advantages, from the 1990s policy gradient methods have been widely used to learn complex control tasks [1]. The research in these years has focused on obtaining good model-free estimators of the policy gradient using data generated during the task execution. The oldest policy gradient approaches are finite-difference methods [2], that estimate the gradient direction by resolving a regression problem based on the performance evaluation of policies associated to different small perturbations of the current parameterization. Finite-difference methods have some advantages: they are easy to implement, do not need assumptions on the differentiability of the policy w.r.t. the policy parameters, and are efficient in deterministic settings. On the other hand, when used on real systems, the choice of parameter perturbations may be difficult and critical for system safeness. Furthermore, the presence of uncertainties may significantly slow down the convergence rate. Such drawbacks have been overcome by likelihood ratio methods [3, 4, 5], since they do not need to generate policy parameter variations and quickly converge even in highly stochastic systems. Several
studies have addressed the problem of finding minimum variance estimators by the computation of optimal baselines [6]. To further improve the efficiency of policy gradient methods, natural gradient approaches (where the steepest ascent is computed w.r.t. the Fisher information metric) have been considered [7, 8]. Natural gradients still converge to locally optimal policies, are independent from the policy parameterization, need less data to attain a good gradient estimate, and are less affected by plateaus.
Once an accurate estimate of the gradient direction is obtained, policy parameters are updated by: $\theta_{t+1} = \theta_t + \alpha_t \nabla_\theta J|_{\theta=\theta_t}$, where $\alpha_t \in \mathbb{R}^+$ is the step size in the direction of the gradient. Although, given an unbiased gradient estimate, convergence to a local optimum can be guaranteed under mild conditions over the learning-rate values [9], their choice may significantly affect the convergence speed or the behavior during the transient. Updating the policy with large step sizes may lead to policy oscillations or even divergence [10], while trying to avoid such phenomena by using small learning rates determines a growth in the number of iterations that is unbearable in most real-world applications. In general unconstrained programming, the optimal step size for gradient ascent methods is determined through line-search algorithms [11], that require to try different values for the learning rate and evaluate the function value in the corresponding updated points. Such an approach is unfeasible for policy gradient methods, since it would require to perform a large number of policy evaluations. Despite these difficulties, up to now, little attention has been paid to the study of step-size computation for policy gradient algorithms. Nonetheless, some policy search methods based on expectation-maximization have been recently proposed; such methods have properties similar to the ones of policy gradients, but the policy update does not require to tune the step size [12, 13].
from a lower bound to the difference of performance between two policies, in Section 3 we derive a
lower bound in the case where the new policy is obtained from the old one by changing its parameters along the gradient direction. Such a new bound is a (polynomial) function of the step size, that,
for positive values of the step size, presents a single, positive maximum ( i.e., it guarantees improvement) which can be computed in closed form. In Section 4, we show how the bound simplifies to a
quadratic function of the step size when Gaussian policies are considered, and Section 5 studies how
the bound needs to be changed in approximated settings (e.g., model?free case) where the policy
gradient needs to be estimated directly from experience.
2
Preliminaries
A discrete-time continuous Markov decision process (MDP) is defined as a 6-tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, D \rangle$, where $\mathcal{S}$ is the continuous state space, $\mathcal{A}$ is the continuous action space, $\mathcal{P}$ is a Markovian transition model where $\mathcal{P}(s'|s,a)$ defines the transition density between states $s$ and $s'$ under action $a$, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to [0, R]$ is the reward function, such that $\mathcal{R}(s,a)$ is the expected immediate reward for the state-action pair $(s,a)$ and $R$ is the maximum reward value, $\gamma \in [0,1)$ is the discount factor for future rewards, and $D$ is the initial state distribution. The policy of an agent is characterized by a density distribution $\pi(\cdot|s)$ that specifies for each state $s$ the density distribution over the action space $\mathcal{A}$. To measure the distance between two policies we will use the norm:

$$\|\pi' - \pi\|_\infty = \sup_{s \in \mathcal{S}} \int_{\mathcal{A}} |\pi'(a|s) - \pi(a|s)|\, da,$$

that is, the supremum over the state space of the total variation between the distributions over the action space of policies $\pi'$ and $\pi$.
We consider infinite horizon problems where the future rewards are exponentially discounted with $\gamma$. For each state $s$, we define the utility of following a stationary policy $\pi$ as:

$$V^\pi(s) = \mathbb{E}_{\substack{a_t \sim \pi \\ s_t \sim \mathcal{P}}} \Big[ \sum_{t=0}^{\infty} \gamma^t \mathcal{R}(s_t, a_t) \,\Big|\, s_0 = s \Big].$$

It is known that $V^\pi$ solves the following recursive (Bellman) equation:

$$V^\pi(s) = \int_{\mathcal{A}} \pi(a|s) \Big( \mathcal{R}(s,a) + \gamma \int_{\mathcal{S}} \mathcal{P}(s'|s,a)\, V^\pi(s')\, ds' \Big)\, da.$$
Policies can be ranked by their expected discounted reward starting from the state distribution $D$:

$$J^\pi_D = \int_{\mathcal{S}} D(s)\, V^\pi(s)\, ds = \int_{\mathcal{S}} d^\pi_D(s) \int_{\mathcal{A}} \pi(a|s)\, \mathcal{R}(s,a)\, da\, ds,$$

where $d^\pi_D(s) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s \,|\, \pi, D)$ is the $\gamma$-discounted future state distribution for a starting state distribution $D$ [5]. Solving an MDP means finding a policy $\pi^*$ that maximizes the expected long-term reward: $\pi^* \in \arg\max_{\pi \in \Pi} J^\pi_D$. For any MDP there exists at least one deterministic optimal policy that simultaneously maximizes $V^\pi(s)$, $\forall s \in \mathcal{S}$. For control purposes, it is better to consider action values $Q^\pi(s,a)$, i.e., the value of taking action $a$ in state $s$ and following a policy $\pi$ thereafter:

$$Q^\pi(s,a) = \mathcal{R}(s,a) + \gamma \int_{\mathcal{S}} \mathcal{P}(s'|s,a) \int_{\mathcal{A}} \pi(a'|s')\, Q^\pi(s',a')\, da'\, ds'.$$
Furthermore, we define the advantage function:

$$A^\pi(s,a) = Q^\pi(s,a) - V^\pi(s),$$

that quantifies the advantage (or disadvantage) of taking action $a$ in state $s$ instead of following policy $\pi$. In particular, for each state $s$, we define the advantage of a policy $\pi'$ over policy $\pi$ as $A^{\pi'}_\pi(s) = \int_{\mathcal{A}} \pi'(a|s)\, A^\pi(s,a)\, da$ and, following [14], we define its expected value w.r.t. an initial state distribution $\mu$ as $A^{\pi'}_{\pi,\mu} = \int_{\mathcal{S}} d^\pi_\mu(s)\, A^{\pi'}_\pi(s)\, ds$.
We consider the problem of finding a policy that maximizes the expected discounted reward over a class of parameterized policies $\Pi_\theta = \{\pi_\theta : \theta \in \mathbb{R}^m\}$, where $\pi_\theta$ is a compact representation of $\pi(a|s,\theta)$. The exact gradient of the expected discounted reward w.r.t. the policy parameters [5] is:

$$\nabla_\theta J_\mu(\theta) = \frac{1}{1-\gamma} \int_{\mathcal{S}} d^{\pi_\theta}_\mu(s) \int_{\mathcal{A}} \nabla_\theta \pi(a|s,\theta)\, Q^{\pi_\theta}(s,a)\, da\, ds.$$

The policy parameters can be updated by following the direction of the gradient of the expected discounted reward: $\theta' = \theta + \alpha \nabla_\theta J_\mu(\theta)$. In the following, we will denote with $\|\nabla_\theta J_\mu(\theta)\|_1$ and $\|\nabla_\theta J_\mu(\theta)\|_2$ the $L1$- and $L2$-norm of the policy gradient vector, respectively.
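In practice this gradient is estimated from sampled trajectories; a minimal likelihood-ratio (REINFORCE-style) estimator in the spirit of [3, 4, 5], sketched here under our own variable names:

```python
import numpy as np

def reinforce_gradient(trajectories, score, gamma):
    """trajectories: list of [(s, a, r), ...] tuples collected under pi_theta;
    score(s, a) returns grad_theta log pi(a|s, theta)."""
    grads = []
    for traj in trajectories:
        g_log = sum(score(s, a) for s, a, _ in traj)
        ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(traj))
        grads.append(g_log * ret)       # score times discounted return
    return np.mean(grads, axis=0)       # sample-average gradient estimate
```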
3
Policy Gradient Formulation
In this section we provide a lower bound to the improvement obtained by updating the policy parameters along the gradient direction as a function of the step size. The idea is to start from the
general lower bound on the performance difference between any pair of policies introduced in [15]
and specialize it to the policy gradient framework.
Lemma 3.1 (Continuous MDP version of Corollary 3.6 in [15]). For any pair of stationary policies corresponding to parameters $\theta$ and $\theta'$ and for any starting state distribution $\mu$, the difference between the performance of policy $\pi_{\theta'}$ and policy $\pi_\theta$ can be bounded as follows:

$$J_\mu(\theta') - J_\mu(\theta) \ge \frac{1}{1-\gamma} \int_{\mathcal{S}} d^{\pi_\theta}_\mu(s)\, A^{\pi_{\theta'}}_{\pi_\theta}(s)\, ds - \frac{\gamma}{2(1-\gamma)^2}\, \|\pi_{\theta'} - \pi_\theta\|_\infty^2\, \|Q^{\pi_\theta}\|_\infty, \quad (1)$$

where $\|Q^{\pi_\theta}\|_\infty$ is the supremum norm of the $Q$-function: $\|Q^{\pi_\theta}\|_\infty = \sup_{s \in \mathcal{S}, a \in \mathcal{A}} Q^{\pi_\theta}(s,a)$.
As we can notice from the above bound, to maximize the performance improvement we need to find a new policy $\pi_{\theta'}$ that is associated with a large average advantage $A^{\pi_{\theta'}}_{\pi_\theta,\mu}$, but, at the same time, is not too different from the current policy $\pi_\theta$. Policy gradient approaches provide search directions characterized by increasing advantage values and, through the step size value, allow to control the difference between the new policy and the target one. Exploiting a lower bound to the first-order Taylor expansion, we can bound the difference between the current policy and the new policy, whose parameters are adjusted along the gradient direction, as a function of the step size $\alpha$.
Lemma 3.2. Let the update of the policy parameters be $\theta' = \theta + \alpha \nabla_\theta J_\mu(\theta)$. Then:

$$\pi(a|s,\theta') - \pi(a|s,\theta) \ge \alpha\, \nabla_\theta \pi(a|s,\theta)^T \nabla_\theta J_\mu(\theta) + \alpha^2 \inf_{c \in (0,1)} \left( \sum_{i,j=1}^{m} \frac{\partial^2 \pi(a|s,\theta)}{\partial\theta_i \partial\theta_j}\bigg|_{\theta + c\Delta\theta} \frac{\nabla_{\theta_i} J_\mu(\theta)\, \nabla_{\theta_j} J_\mu(\theta)}{1 + I(i=j)} \right),$$

where $\Delta\theta = \alpha \nabla_\theta J_\mu(\theta)$.
By combining the two previous lemmas, it is possible to derive the policy performance improvement obtained following the gradient direction.

Theorem 3.3. Let the update of the parameters be $\theta' = \theta + \alpha \nabla_\theta J_\mu(\theta)$. Then for any stationary policy $\pi(a|s,\theta)$ and any starting state distribution $\mu$, the difference in performance between $\pi_\theta$ and $\pi_{\theta'}$ is lower bounded by:

$$J_\mu(\theta') - J_\mu(\theta) \ge \alpha\, \|\nabla_\theta J_\mu(\theta)\|_2^2$$
$$+ \frac{\alpha^2}{1-\gamma} \int_{\mathcal{S}} d^{\pi_\theta}_\mu(s) \int_{\mathcal{A}} \inf_{c\in(0,1)} \left( \sum_{i,j=1}^m \frac{\partial^2 \pi(a|s,\theta)}{\partial\theta_i\partial\theta_j}\bigg|_{\theta+c\Delta\theta} \frac{\nabla_{\theta_i} J_\mu(\theta)\, \nabla_{\theta_j} J_\mu(\theta)}{1+I(i=j)} \right) Q^{\pi_\theta}(s,a)\, da\, ds$$
$$- \frac{\gamma \|Q^{\pi_\theta}\|_\infty}{2(1-\gamma)^2} \Bigg( \alpha \sup_{s\in\mathcal{S}} \int_{\mathcal{A}} \big|\nabla_\theta \pi(a|s,\theta)^T \nabla_\theta J_\mu(\theta)\big|\, da$$
$$+ \alpha^2 \sup_{s\in\mathcal{S}} \int_{\mathcal{A}} \sup_{c\in(0,1)} \left| \sum_{i,j=1}^m \frac{\partial^2 \pi(a|s,\theta)}{\partial\theta_i\partial\theta_j}\bigg|_{\theta+c\Delta\theta} \frac{\nabla_{\theta_i} J_\mu(\theta)\, \nabla_{\theta_j} J_\mu(\theta)}{1+I(i=j)} \right| da \Bigg)^2.$$
The above bound is a fourth-order polynomial of the step size, whose stationary points, being the roots of a third-order polynomial $ax^3 + bx^2 + cx + d$, can be expressed in closed form. It is worth noticing that, for positive values of $\alpha$, the bound presents a single stationary point that corresponds to a local maximum. In fact, since $a, b \le 0$ and $d \ge 0$, the Descartes rule of signs gives the existence and uniqueness of the real positive root.
4
The Gaussian Policy Model
In this section we consider the Gaussian policy model with fixed standard deviation $\sigma$, where the mean is a linear combination of the state feature vector $\phi(\cdot)$ using a parameter vector $\theta$ of size $m$:

$$\pi(a|s,\theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2} \left( \frac{a - \theta^T \phi(s)}{\sigma} \right)^2 \right).$$
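For this policy class both action sampling and the score $\nabla_\theta \log \pi(a|s,\theta)$ have simple closed forms; a sketch (the class and method names are ours), which also plugs directly into a likelihood-ratio gradient estimator:

```python
import numpy as np

class GaussianPolicy:
    """pi(a|s, theta) = N(theta^T phi(s), sigma^2) with fixed sigma."""
    def __init__(self, theta, sigma, phi):
        self.theta, self.sigma, self.phi = theta, sigma, phi

    def sample(self, s, rng):
        return rng.normal(self.theta @ self.phi(s), self.sigma)

    def score(self, s, a):
        """grad_theta log pi = (a - theta^T phi(s)) phi(s) / sigma^2."""
        f = self.phi(s)
        return (a - self.theta @ f) * f / self.sigma ** 2
```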
In the case of Gaussian policies, each second-order derivative of policy $\pi_\theta$ can be easily bounded.

Lemma 4.1. For any Gaussian policy $\pi(a|s,\theta) \sim \mathcal{N}(\theta^T\phi(s), \sigma^2)$, the second-order derivative of the policy can be bounded as follows:

$$\left| \frac{\partial^2 \pi(a|s,\theta)}{\partial\theta_i \partial\theta_j} \right| \le \frac{|\phi_i(s)\phi_j(s)|}{\sqrt{2\pi}\sigma^3}, \quad \forall\theta \in \mathbb{R}^m,\; \forall a \in \mathcal{A}.$$
This result allows to restate Lemma 3.2 in the case of Gaussian policies:

$$\pi(a|s,\theta') - \pi(a|s,\theta) \ge \alpha\, \nabla_\theta\pi(a|s,\theta)^T \nabla_\theta J_\mu(\theta) - \frac{\alpha^2}{2\sqrt{2\pi}\sigma^3} \left( |\nabla_\theta J_\mu(\theta)|^T |\phi(s)| \right)^2.$$
In the following we will assume that the features $\phi$ are uniformly bounded:

Assumption 4.1. All the basis functions are uniformly bounded by $M_\phi$: $|\phi_i(s)| < M_\phi$, $\forall s \in \mathcal{S}$, $\forall i = 1, \ldots, m$.
Exploiting Pinsker's inequality [16] (which upper bounds the total variation between two distributions with their Kullback-Leibler divergence), it is possible to provide the following upper bound to the supremum norm of the difference between two Gaussian policies.

Lemma 4.2. For any pair of stationary policies $\pi_\theta$ and $\pi_{\theta'}$, so that $\theta' = \theta + \alpha\nabla_\theta J_\mu(\theta)$, the supremum norm of their difference can be upper bounded as follows:

$$\|\pi_{\theta'} - \pi_\theta\|_\infty \le \frac{\alpha M_\phi}{\sigma}\, \|\nabla_\theta J_\mu(\theta)\|_1.$$
By plugging the results of Lemmas 4.1 and 4.2 into Equation (1) we can obtain a lower bound to the performance difference between a Gaussian policy $\pi_\theta$ and another policy along the gradient direction that is quadratic in the step size $\alpha$.

Theorem 4.3. For any starting state distribution $\mu$, and any pair of stationary Gaussian policies $\pi_\theta \sim \mathcal{N}(\theta^T\phi(s), \sigma^2)$ and $\pi_{\theta'} \sim \mathcal{N}(\theta'^T\phi(s), \sigma^2)$, so that $\theta' = \theta + \alpha\nabla_\theta J_\mu(\theta)$ and under Assumption 4.1, the difference between the performance of $\pi_{\theta'}$ and the one of $\pi_\theta$ can be lower bounded as follows:

$$J_\mu(\theta') - J_\mu(\theta) \ge \alpha\, \|\nabla_\theta J_\mu(\theta)\|_2^2 - \alpha^2 \left( \frac{1}{(1-\gamma)\sqrt{2\pi}\sigma^3} \int_{\mathcal{S}} d^{\pi_\theta}_\mu(s) \big( |\nabla_\theta J_\mu(\theta)|^T |\phi(s)| \big)^2 \int_{\mathcal{A}} Q^{\pi_\theta}(s,a)\, da\, ds + \frac{\gamma M_\phi^2}{2(1-\gamma)^2 \sigma^2}\, \|\nabla_\theta J_\mu(\theta)\|_1^2\, \|Q^{\pi_\theta}\|_\infty \right).$$

Since the linear coefficient is positive and the quadratic one is negative, the bound in Theorem 4.3 has a single maximum attained for some positive value of $\alpha$.
Corollary 4.4. The performance lower bound provided in Theorem 4.3 is maximized by choosing
the following step size:
$$\alpha^* = \frac{(1-\gamma)^2 \sqrt{2\pi}\,\sigma^3 \, \|\nabla_\theta J_\mu(\theta)\|_2^2}{\gamma \sqrt{2\pi}\,\sigma M_\phi^2 \, \|\nabla_\theta J_\mu(\theta)\|_1^2 \, \|Q^\pi\|_\infty + 2(1-\gamma) \int_S d_\mu^\pi(s) \left( |\nabla_\theta J_\mu(\theta)|^T |\phi(s)| \right)^2 \int_A Q^\pi(s,a) \, da \, ds},$$
which guarantees the following policy performance improvement:
$$J_\mu(\theta') - J_\mu(\theta) \geq \frac{1}{2} \alpha^* \|\nabla_\theta J_\mu(\theta)\|_2^2.$$
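The following sketch evaluates $\alpha^*$ from Corollary 4.4. It is an illustration only, with our own function and argument names; it assumes the caller can supply the two domain-dependent quantities (the sup-norm of $Q^\pi$ and the mixed integral), which is realistic only in small exact settings such as the LQG problem of Section 6.

```python
import numpy as np

def alpha_star(grad, sigma, gamma, M_phi, q_sup, mixed_integral):
    """Step size alpha* of Corollary 4.4 (exact setting).

    grad           : exact policy gradient (1-D array)
    q_sup          : ||Q^pi||_inf
    mixed_integral : int_S d^pi(s) (|grad|^T |phi(s)|)^2 int_A Q^pi(s,a) da ds,
                     assumed precomputed by the caller
    """
    num = (1 - gamma) ** 2 * np.sqrt(2 * np.pi) * sigma ** 3 * np.dot(grad, grad)
    den = (gamma * np.sqrt(2 * np.pi) * sigma * M_phi ** 2
           * np.sum(np.abs(grad)) ** 2 * q_sup
           + 2 * (1 - gamma) * mixed_integral)
    return num / den
```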
5 Approximate Framework
The solution for the tuning of the step size presented in the previous section depends on some
constants (e.g., the discount factor and the variance of the Gaussian policy) and requires the ability to
compute some quantities (e.g., the policy gradient and the supremum value of the Q-function). In
many real-world applications such quantities cannot be computed exactly (e.g., when the state-transition
model is unknown or too large for exact methods) and need to be estimated from experience samples.
In this section, we study how the step size can be chosen, when the gradient is estimated through
sample trajectories, to guarantee a performance improvement with high probability.
For the sake of simplicity, we consider a simplified version of the bound in Theorem 4.3, in order to obtain
a bound where the only element that needs to be estimated is the policy gradient $\nabla_\theta J_\mu(\theta)$.
Corollary 5.1. For any starting state distribution $\mu$, and any pair of stationary Gaussian policies
$\pi_\theta \sim N(\theta^T \phi(s), \sigma^2)$ and $\pi_{\theta'} \sim N(\theta'^T \phi(s), \sigma^2)$, such that $\theta' = \theta + \alpha \nabla_\theta J_\mu(\theta)$, under Assumption 4.1 the difference between the performance of $\pi_{\theta'}$ and $\pi_\theta$ is lower bounded by:
$$J_\mu(\theta') - J_\mu(\theta) \geq \alpha \|\nabla_\theta J_\mu(\theta)\|_2^2 - \alpha^2 \, \frac{R M_\phi^2 \, \|\nabla_\theta J_\mu(\theta)\|_1^2}{(1-\gamma)^2 \sigma^2} \left( \frac{|A|}{\sqrt{2\pi}\,\sigma} + \frac{\gamma}{2(1-\gamma)} \right),$$
which is maximized by the following step size value:
$$\alpha^* = \frac{(1-\gamma)^3 \sqrt{2\pi}\,\sigma^3 \, \|\nabla_\theta J_\mu(\theta)\|_2^2}{\left( \gamma \sqrt{2\pi}\,\sigma + 2(1-\gamma)|A| \right) R M_\phi^2 \, \|\nabla_\theta J_\mu(\theta)\|_1^2}.$$
Since we are assuming that the policy gradient $\nabla_\theta J_\mu(\theta)$ is estimated through trajectory samples,
the lower bound in Corollary 5.1 must take into consideration the associated approximation error.
Given a set of trajectories obtained following policy $\pi_\theta$, we can produce an estimate $\widehat{\nabla}_\theta J_\mu(\theta)$ of
the policy gradient, and we assume we are able to produce a vector $\epsilon = [\epsilon_1, \ldots, \epsilon_m]^T$, so that the $i$-th
component of the approximation error is bounded at least with probability $1 - \delta$:
$$P\left( \left| \nabla_{\theta_i} J_\mu(\theta) - \widehat{\nabla}_{\theta_i} J_\mu(\theta) \right| \geq \epsilon_i \right) \leq \delta.$$
Given the approximation error vector $\epsilon$, we can adjust the bound in Corollary 5.1 to produce a new
bound that holds at least with probability $(1-\delta)^m$. In particular, to preserve the inequality sign,
the estimated approximation error must be used to decrease the L2-norm of the policy gradient in
the first term (the one that provides the positive contribution to the performance improvement) and
to increase the L1-norm in the penalization term. To lower bound the L2-norm, we introduce the
vector $\underline{\widehat{\nabla}}_\theta J_\mu(\theta)$ whose components are a lower bound to the absolute value of the policy gradient,
built on the basis of the approximation error $\epsilon$:
$$\underline{\widehat{\nabla}}_\theta J_\mu(\theta) = \max\left( |\widehat{\nabla}_\theta J_\mu(\theta)| - \epsilon, \, 0 \right),$$
where $0$ denotes the $m$-sized vector of all zeros, and $\max$ denotes the component-wise maximum.
Similarly, to upper bound the L1-norm of the policy gradient, we introduce the vector $\overline{\widehat{\nabla}}_\theta J_\mu(\theta)$:
$$\overline{\widehat{\nabla}}_\theta J_\mu(\theta) = |\widehat{\nabla}_\theta J_\mu(\theta)| + \epsilon.$$
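In code, the two surrogate vectors are one-liners. This is a sketch with our own names; eps is the per-component error bound from above.

```python
import numpy as np

def pessimistic_gradients(grad_hat, eps):
    """Component-wise lower/upper surrogates used in Theorem 5.2.

    grad_lower replaces the gradient in the L2-norm term, grad_upper in the
    L1-norm penalization term; eps[i] bounds the approximation error of the
    i-th gradient component with probability at least 1 - delta.
    """
    grad_lower = np.maximum(np.abs(grad_hat) - eps, 0.0)
    grad_upper = np.abs(grad_hat) + eps
    return grad_lower, grad_upper
```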
Theorem 5.2. Under the same assumptions as Corollary 5.1, and provided that a
policy gradient estimate $\widehat{\nabla}_\theta J_\mu(\theta)$ is available, so that $P\left( \left| \nabla_{\theta_i} J_\mu(\theta) - \widehat{\nabla}_{\theta_i} J_\mu(\theta) \right| \geq \epsilon_i \right) \leq \delta$, the difference
between the performance of $\pi_{\theta'}$ and $\pi_\theta$ can be lower bounded at least with probability $(1-\delta)^m$:
$$J_\mu(\theta') - J_\mu(\theta) \geq \alpha \left\| \underline{\widehat{\nabla}}_\theta J_\mu(\theta) \right\|_2^2 - \alpha^2 \, \frac{R M_\phi^2 \left\| \overline{\widehat{\nabla}}_\theta J_\mu(\theta) \right\|_1^2}{(1-\gamma)^2 \sigma^2} \left( \frac{|A|}{\sqrt{2\pi}\,\sigma} + \frac{\gamma}{2(1-\gamma)} \right),$$
which is maximized by the following step size value:
$$\widehat{\alpha} = \frac{(1-\gamma)^3 \sqrt{2\pi}\,\sigma^3 \left\| \underline{\widehat{\nabla}}_\theta J_\mu(\theta) \right\|_2^2}{\left( \gamma \sqrt{2\pi}\,\sigma + 2(1-\gamma)|A| \right) R M_\phi^2 \left\| \overline{\widehat{\nabla}}_\theta J_\mu(\theta) \right\|_1^2}.$$
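A minimal sketch of the resulting step-size rule follows; all names are ours, and R (the uniform reward bound) and |A| (the measure of the bounded action space) are assumed known, as in the theorem.

```python
import numpy as np

def alpha_hat_star(grad_hat, eps, sigma, gamma, M_phi, R, A_volume):
    """Approximate step size of Theorem 5.2 from an estimated gradient."""
    grad_lower = np.maximum(np.abs(grad_hat) - eps, 0.0)
    grad_upper = np.abs(grad_hat) + eps
    num = ((1 - gamma) ** 3 * np.sqrt(2 * np.pi) * sigma ** 3
           * np.dot(grad_lower, grad_lower))
    den = ((gamma * np.sqrt(2 * np.pi) * sigma + 2 * (1 - gamma) * A_volume)
           * R * M_phi ** 2 * np.sum(grad_upper) ** 2)
    return num / den
```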
In the following, we will discuss how the approximation error of the policy gradient can be bounded.
Among the several methods that have been proposed over the years, we focus on two well-understood
policy-gradient estimation approaches: REINFORCE [3] and G(PO)MDP [4]/the policy gradient theorem (PGT) [5].
5.1 Approximation with REINFORCE gradient estimator
The REINFORCE approach [3] is the main exponent of the likelihood-ratio family. The episodic
REINFORCE gradient estimator is given by:
$$\widehat{\nabla}_\theta J^{RF}(\theta) = \frac{1}{N} \sum_{n=1}^N \left( \sum_{k=1}^H \nabla_\theta \log \pi(a_k^n; s_k^n, \theta) \right) \left( \sum_{l=1}^H \gamma^{l-1} r_l^n - b \right),$$
where $N$ is the number of $H$-step trajectories generated from a system by roll-outs and $b \in \mathbb{R}$ is
a baseline that can be chosen arbitrarily, but usually with the goal of minimizing the variance of the
gradient estimator. The main drawback of REINFORCE is its variance, which is strongly affected by
the length of the trajectory horizon $H$.
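A compact sketch of this estimator for the Gaussian policy of Section 4 follows. The data layout is our own (each trajectory is a list of (phi(s), a, r) tuples), and the paper's $\gamma^{l-1}$ with $l$ starting at 1 becomes gamma**l with l from 0 in 0-indexed Python.

```python
import numpy as np

def reinforce_gradient(trajectories, theta, sigma, gamma, baseline=0.0):
    """Episodic REINFORCE estimate of grad J(theta) for a Gaussian policy."""
    grads = []
    for traj in trajectories:
        # total score function of the trajectory: sum_k grad log pi(a_k|s_k)
        score = sum((a - theta @ phi) * phi / sigma ** 2 for phi, a, _ in traj)
        # discounted return of the whole trajectory
        ret = sum(gamma ** l * r for l, (_, _, r) in enumerate(traj))
        grads.append(score * (ret - baseline))
    return np.mean(grads, axis=0)
```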
The goal is to determine the number of trajectories N in order to obtain the desired accuracy of
the gradient estimate. To achieve this, we exploit the upper bound on the variance of the episodic
REINFORCE gradient estimator introduced in [17] for Gaussian policies.
Lemma 5.3 (Adapted from Theorem 2 in [17]). Given a Gaussian policy $\pi(a|s,\theta) \sim N(\theta^T \phi(s), \sigma^2)$, under the assumption of uniformly bounded rewards and basis functions (Assumption 4.1), we have the following upper bound on the variance of the $i$-th component of the episodic
REINFORCE gradient estimate $\widehat{\nabla}_\theta J^{RF}(\theta)$:
$$Var\left( \widehat{\nabla}_{\theta_i} J^{RF}(\theta) \right) \leq \frac{R^2 M_\phi^2 \, H \left( 1 - \gamma^H \right)^2}{N \sigma^2 (1-\gamma)^2}.$$
The result in the previous lemma, combined with Chebyshev's inequality, allows us to provide a
high-probability upper bound on the gradient approximation error of the episodic REINFORCE
gradient estimator.
Theorem 5.4. Given a Gaussian policy $\pi(a|s,\theta) \sim N(\theta^T \phi(s), \sigma^2)$, under the assumption of
uniformly bounded rewards and basis functions (Assumption 4.1), using the following number of
$H$-step trajectories:
$$N = \frac{R^2 M_\phi^2 \, H \left( 1 - \gamma^H \right)^2}{\delta \, \epsilon_i^2 \, \sigma^2 (1-\gamma)^2},$$
the gradient estimate $\widehat{\nabla}_\theta J^{RF}(\theta)$ generated by REINFORCE is such that, with probability $1 - \delta$:
$$\left| \widehat{\nabla}_{\theta_i} J^{RF}(\theta) - \nabla_{\theta_i} J_\mu(\theta) \right| \leq \epsilon_i.$$
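As a sketch, the sample-size rule can be computed directly. The helper name is ours, and the placement of delta follows the Chebyshev step used in the proof.

```python
import numpy as np

def reinforce_num_trajectories(R, M_phi, H, gamma, sigma, eps_i, delta):
    """Trajectories needed by Theorem 5.4 so that the i-th REINFORCE gradient
    component is eps_i-accurate with probability 1 - delta (Chebyshev applied
    to the variance bound of Lemma 5.3)."""
    var_num = R ** 2 * M_phi ** 2 * H * (1 - gamma ** H) ** 2
    return int(np.ceil(var_num / (delta * eps_i ** 2 * sigma ** 2 * (1 - gamma) ** 2)))
```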
5.2 Approximation with G(PO)MDP/PGT gradient estimator
Although the REINFORCE method is guaranteed to converge to the true gradient at the fastest possible pace, its large variance can be problematic in practice. Advances in likelihood-ratio gradient
estimators have produced new approaches that significantly reduce the variance of the estimate. Focusing on the class of "vanilla" gradient estimators, two main approaches have been proposed: the policy
gradient theorem (PGT) [5] and G(PO)MDP [4]. In [6], the authors show that, while the algorithms
look different, their gradient estimates are equal, i.e., $\widehat{\nabla}_\theta J^{PGT}(\theta) = \widehat{\nabla}_\theta J^{G(PO)MDP}(\theta)$. For this
reason, we can limit our attention to the PGT formulation:
$$\widehat{\nabla}_\theta J^{PGT}(\theta) = \frac{1}{N} \sum_{n=1}^N \sum_{k=1}^H \nabla_\theta \log \pi(a_k^n; s_k^n, \theta) \left( \sum_{l=k}^H \gamma^{l-1} r_l^n - b_l^n \right),$$
where the $b_l^n \in \mathbb{R}$ are baselines whose objective is to reduce the variance of the gradient estimate. Following the
procedure used to bound the approximation error of REINFORCE, we need an upper bound on the
variance of the PGT gradient estimate, which is provided by the following lemma (whose proof is
similar to the one used in [17] for the REINFORCE case).
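A sketch of the PGT/G(PO)MDP estimator under the same data layout as the REINFORCE sketch above. A single scalar baseline is used for simplicity, whereas the paper allows per-step baselines $b_l^n$.

```python
import numpy as np

def pgt_gradient(trajectories, theta, sigma, gamma, baseline=0.0):
    """PGT/G(PO)MDP estimate of grad J(theta) for a Gaussian policy.

    Each score function is paired only with the rewards that follow it,
    which is what lowers the variance relative to REINFORCE."""
    grads = []
    for traj in trajectories:
        H = len(traj)
        scores = [(a - theta @ phi) * phi / sigma ** 2 for phi, a, _ in traj]
        rewards = [r for _, _, r in traj]
        g = 0.0
        for k in range(H):
            future_ret = sum(gamma ** l * (rewards[l] - baseline) for l in range(k, H))
            g = g + scores[k] * future_ret
        grads.append(g)
    return np.mean(grads, axis=0)
```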
Lemma 5.5. Given a Gaussian policy $\pi(a|s,\theta) \sim N(\theta^T \phi(s), \sigma^2)$, under the assumption of uniformly bounded rewards and basis functions (Assumption 4.1), we have the following upper bound
on the variance of the $i$-th component of the PGT gradient estimate $\widehat{\nabla}_\theta J^{PGT}(\theta)$:
$$Var\left( \widehat{\nabla}_{\theta_i} J^{PGT}(\theta) \right) \leq \frac{R^2 M_\phi^2}{N (1-\gamma)^2 \sigma^2} \left( \frac{1 - \gamma^{2H}}{1 - \gamma^2} + H \gamma^{2H} - 2 \gamma^H \, \frac{1 - \gamma^H}{1 - \gamma} \right).$$
As expected, since the variance of the gradient estimate obtained with PGT is smaller than that of
REINFORCE, the upper bound on the PGT variance is also smaller than the REINFORCE one. In
particular, while the variance of REINFORCE grows linearly with the time horizon, with PGT
the dependence on the time horizon is significantly weaker. Finally, we can derive the upper bound
on the approximation error of the gradient estimated by PGT.
Theorem 5.6. Given a Gaussian policy $\pi(a|s,\theta) \sim N(\theta^T \phi(s), \sigma^2)$, under the assumption of
uniformly bounded rewards and basis functions (Assumption 4.1), using the following number of
$H$-step trajectories:
$$N = \frac{R^2 M_\phi^2}{\delta \, \epsilon_i^2 \, \sigma^2 (1-\gamma)^2} \left( \frac{1 - \gamma^{2H}}{1 - \gamma^2} + H \gamma^{2H} - 2 \gamma^H \, \frac{1 - \gamma^H}{1 - \gamma} \right),$$
the gradient estimate $\widehat{\nabla}_\theta J^{PGT}(\theta)$ generated by PGT is such that, with probability $1 - \delta$:
$$\left| \widehat{\nabla}_{\theta_i} J^{PGT}(\theta) - \nabla_{\theta_i} J_\mu(\theta) \right| \leq \epsilon_i.$$
 σ    | α = const:  1e-07   1e-06   1e-05   1e-04   1e-03 | α_t = α₀/t:  1e-05   1e-04 | α* (Cor. 4.4)
 0.50 |             itmax   itmax   17138    1675     -   |              itmax   itmax |  24106
 0.75 |             itmax   itmax    8669     697     -   |              itmax   itmax |   7271
 1.00 |             itmax   itmax    5120     499     -   |              itmax   itmax |   3279
 1.25 |             itmax   itmax    3348      -      -   |              itmax     -   |   1838
 1.50 |             itmax   23651    2342      -      -   |              itmax     -   |   1172
 1.75 |             itmax   17516    1714      -      -   |              itmax     -   |    813
 2.00 |             itmax   13480    1287      -      -   |              itmax     -   |    598
 5.00 |             21888    2163      -       -      -   |                -       -   |      1
 7.50 |              9740     849      -       -      -   |                -       -   |     58

Table 1: Convergence speed in the exact LQG scenario with γ = 0.95. The table reports the number of
iterations required by the exact gradient approach, starting from θ = 0, to learn the optimal policy
parameter θ* = −0.6037 with an accuracy of 0.01, for different step-size values. Three different
sets of experiments are shown: constant step size, decreasing step size, and the step size proposed in
Corollary 4.4. The table contains itmax when no convergence happens within 30,000 iterations, and −
when the algorithm diverges (θ < −1 or θ > 0). Best performances are reported in boldface.
                  Number of trajectories
        |     10,000        |     100,000       |     500,000
        |   it       θ      |   it       θ      |   it       θ
 RF     |   822   -0.0030   | 51,731  -0.3068   | 75,345  -0.4088
 PGT    | 29,761  -0.2176   | 63,985  -0.4013   | 83,983  -0.4558

Table 2: Convergence speed in the approximate LQG scenario with γ = 0.9. The table reports, starting
from θ = 0 and fixed σ = 1, the number of iterations performed before the proposed step size α̂
becomes 0, and the last value of the policy parameter. Results are shown for different numbers of
trajectories (of 20 steps each) used in the gradient estimation by REINFORCE and PGT.
6 Numerical Simulations and Discussion
In this section we show results of some numerical simulations of policy gradient in the
linear-quadratic Gaussian regulation (LQG) problem as formulated in [6]. The LQG problem is
characterized by a transition model $s_{t+1} \sim N(s_t + a_t, \sigma^2)$, a Gaussian policy $a_t \sim N(\theta \cdot s, \sigma^2)$,
and a quadratic reward $r_t = -0.5(s_t^2 + a_t^2)$. The range of the state and action spaces is bounded to
the interval $[-2, 2]$ and the initial state is drawn uniformly at random. This scenario is particularly
instructive since it allows all the terms involved in the bounds to be computed exactly. We first present
results in the exact scenario and then move to the approximate one.
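To make the setup concrete, here is a self-contained sketch of the approximate LQG experiment, reusing the pgt_gradient and alpha_hat_star sketches above. The eps value is a fixed placeholder (in the paper it would come from Theorem 5.6), and the constants R = 4, M_phi = 2, |A| = 4 follow from the [-2, 2] clipping and the reward r_t = -0.5(s_t^2 + a_t^2); the trajectory counts are kept small for illustration.

```python
import numpy as np

def lqg_run(n_iterations=50, n_traj=1000, H=20, gamma=0.9, sigma=1.0):
    rng = np.random.default_rng(0)
    theta = np.zeros(1)  # phi(s) = s, so the policy mean is theta[0] * s
    for _ in range(n_iterations):
        trajs = []
        for _ in range(n_traj):
            s, traj = rng.uniform(-2, 2), []
            for _ in range(H):
                a = np.clip(rng.normal(theta[0] * s, sigma), -2, 2)
                r = -0.5 * (s ** 2 + a ** 2)
                traj.append((np.array([s]), a, r))
                s = np.clip(rng.normal(s + a, sigma), -2, 2)
            trajs.append(traj)
        grad = pgt_gradient(trajs, theta, sigma, gamma)
        # eps = 0.1 is a hypothetical placeholder for the Theorem 5.6 bound
        alpha = alpha_hat_star(grad, eps=np.array([0.1]), sigma=sigma,
                               gamma=gamma, M_phi=2.0, R=4.0, A_volume=4.0)
        theta = theta + alpha * grad
    return theta
```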
Table 1 shows how the number of iterations required to learn a near-optimal value of the policy
parameter changes according to the standard deviation of the Gaussian policy and the step-size
value. As expected, very small values of the step size avoid divergence, but the learning
process needs many iterations to reach a good performance (this can be observed both when the step
size is kept constant and when it decreases). On the other hand, larger step-size values may lead to
divergence. In this example, the higher the policy variance, the lower the step-size value that avoids
divergence, since, in LQG, higher policy variance implies larger policy-gradient values.
Using the step size α* from Corollary 4.4, the policy gradient algorithm avoids divergence (since
it guarantees an improvement at each iteration), and the speed of convergence is strongly affected
by the variance of the Gaussian policy. In general, when the policies are nearly deterministic (small
variance in the Gaussian case), small changes in the parameters lead to large distances between
the policies, thus negatively affecting the lower bound in Equation 1. As we can see from the
expression of α* in Corollary 4.4, considering policies with high variance (which might be a problem in
real-world applications) allows larger step sizes to be taken safely, thus speeding up the learning process.
Nonetheless, increasing the variance beyond some threshold (making the policies nearly random) produces
very bad policies, so that changing the policy parameter has a small impact on the performance,
which in turn slows down the learning process. How to identify an optimal variance value is
an interesting direction for future research. Table 2 provides numerical results in the approximate
setting, showing the effect of varying the number of trajectories used to estimate the gradient with
REINFORCE and PGT. Increasing the number of trajectories reduces the uncertainty of the gradient
estimates, thus allowing larger step sizes and leading to better performance. Furthermore, the
smaller variance of PGT w.r.t. REINFORCE allows the former to achieve better performance.
However, even with a large number of trajectories, the approximation errors are still quite large,
preventing very high performance from being reached. For this reason, future studies will try to derive tighter
bounds. Further developments include extending these results to other policy models (e.g., Gibbs
policies) and to other policy gradient approaches (e.g., natural gradient).
References
[1] Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In Intelligent Robots and
Systems, 2006 IEEE/RSJ International Conference on, pages 2219-2225. IEEE, 2006.
[2] James C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. Automatic Control, IEEE Transactions on, 37(3):332-341, 1992.
[3] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, May 1992.
[4] Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of
Artificial Intelligence Research, 15:319-350, 2001.
[5] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient
methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems, 12(22), 2000.
[6] Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients.
Neural Networks, 21(4):682-697, 2008.
[7] Sham Kakade. A natural policy gradient. Advances in Neural Information Processing Systems,
14:1531-1538, 2001.
[8] Jan Peters and Stefan Schaal. Natural actor-critic. Neurocomputing, 71(7):1180-1190, 2008.
[9] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[10] P. Wagner. A reinterpretation of the policy oscillation phenomenon in approximate policy
iteration. Advances in Neural Information Processing Systems, 24, 2011.
[11] Jorge J. Moré and David J. Thuente. Line search algorithms with guaranteed sufficient decrease.
ACM Transactions on Mathematical Software (TOMS), 20(3):286-307, 1994.
[12] J. Kober and J. Peters. Policy search for motor primitives in robotics. In Advances in Neural
Information Processing Systems 22 (NIPS 2008), Cambridge, MA: MIT Press, 2009.
[13] Nikos Vlassis, Marc Toussaint, Georgios Kontes, and Savas Piperidis. Learning model-free
robot control by a Monte Carlo EM algorithm. Autonomous Robots, 27(2):123-130, 2009.
[14] S. M. Kakade. On the sample complexity of reinforcement learning. PhD thesis, University
College London, 2003.
[15] Matteo Pirotta, Marcello Restelli, Alessio Pecorino, and Daniele Calandriello. Safe policy
iteration. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 307-315. JMLR
Workshop and Conference Proceedings, May 2013.
[16] M. S. Pinsker. Information and Information Stability of Random Variables and Processes. Holden-Day Series in Time Series Analysis. Holden-Day, Inc., 1964.
[17] Tingting Zhao, Hirotaka Hachiya, Gang Niu, and Masashi Sugiyama. Analysis and improvement of policy gradient estimation. Neural Networks, 26:118-129, 2012.
Policy Shaping: Integrating Human Feedback
with Reinforcement Learning
Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L. Isbell, and Andrea Thomaz
College of Computing
Georgia Institute of Technology, Atlanta, GA 30332, USA
{sgriffith7, kausubbu, jkscholz}@gatech.edu,
{isbell, athomaz}@cc.gatech.edu
Abstract
A long-term goal of Interactive Reinforcement Learning is to incorporate non-expert human feedback
have approached this problem by mapping human information to rewards and values and iterating over them to compute better control policies. In this paper we
argue for an alternate, more effective characterization of human feedback: Policy
Shaping. We introduce Advise, a Bayesian approach that attempts to maximize
the information gained from human feedback by utilizing it as direct policy labels.
We compare Advise to state-of-the-art approaches and show that it can outperform
them and is robust to infrequent and inconsistent human feedback.
1 Introduction
A long-term goal of machine learning is to create systems that can be interactively trained or guided
by non-expert end-users. This paper focuses specifically on integrating human feedback with Reinforcement Learning. One way to address this problem is to treat human feedback as a shaping
reward [1-5]. Yet, recent papers have observed that a more effective use of human feedback is as
direct information about policies [6, 7]. Most techniques for learning from human feedback still,
however, convert feedback signals into a reward or a value. In this paper we introduce Policy Shaping, which formalizes the meaning of human feedback as policy feedback, and demonstrates how to
use it directly as policy advice. We also introduce Advise, an algorithm for estimating a human's
Bayes optimal feedback policy and a technique for combining this with the policy formed from the
agent's direct experience in the environment (Bayesian Q-Learning).
We validate our approach using a series of experiments. These experiments use a simulated human
teacher and allow us to systematically test performance under a variety of conditions of infrequent
and inconsistent feedback. The results demonstrate two advantages of Advise: 1) it is able to outperform state of the art techniques for integrating human feedback with Reinforcement Learning; and
2) by formalizing human feedback, we avoid ad hoc parameter settings and are robust to infrequent
and inconsistent feedback.
2 Reinforcement Learning
Reinforcement Learning (RL) defines a class of algorithms for solving problems modeled as a
Markov Decision Process (MDP). An MDP is specified by the tuple (S, A, T, R), which defines
the set of possible world states, S, the set of actions available to the agent in each state, A, the
transition function T : S × A → Pr[S], a reward function R : S × A → R, and a discount factor
0 ≤ γ ≤ 1. The goal of a Reinforcement Learning algorithm is to identify a policy, π : S → A,
which maximizes the expected reward from the environment. Thus, the reward function acts as a
single source of information that tells an agent what is the best policy for this MDP.
This paper used an implementation of the Bayesian Q-learning (BQL) Reinforcement Learning
algorithm [8], which is based on Watkins' Q-learning [9]. Q-learning is one way to find an optimal
policy from the environment reward signal. The policy for the whole state space is iteratively refined
by dynamically updating a table of Q-values. A specific Q-value, Q[s, a], represents a point estimate
of the long-term expected discounted reward for taking action a in state s.
Rather than keep a point estimate of the long-term discounted reward for each state-action pair,
Bayesian Q-learning maintains parameters that specify a normal distribution with unknown mean
and precision for each Q-value. This representation has the advantage that it approximates the
agent's uncertainty in the optimality of each action, which makes the problem of optimizing the
exploration/exploitation trade-off straightforward. Because the Normal-Gamma (NG) distribution
is the conjugate prior for the normal distribution, the mean and the precision are estimated using
an NG distribution with hyperparameters ⟨μ₀^{s,a}, λ^{s,a}, α^{s,a}, β^{s,a}⟩. These values are updated each
time an agent performs an action a in state s, accumulates reward r, and transitions to a new state
s'. Details on how these parameters are updated can be found in [8]. Because BQL is known to
under-explore, β^{s,a} is updated as shown in [10] using an additional parameter θ.
The NG distribution for each Q-value can be used to estimate the probability that each action a ∈ A_s
in a state s is optimal, which defines a policy, π_R, used for action selection. The optimal action can
be estimated by sampling each Q̂(s, a) and taking the argmax. A large number of samples can be
used to approximate the probability an action is optimal by simply counting the number of times an
action has the highest Q-value [8].
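As an illustration, the sampling procedure for one state can be sketched as follows. The function is our own, and it assumes each action's Normal-Gamma posterior has beta > 0 so the Gamma draw is well defined.

```python
import numpy as np

def prob_action_optimal(ng_params, n_samples=1000, rng=None):
    """Estimate P(a is optimal) for one state by sampling Q-values from each
    action's Normal-Gamma posterior and counting argmax wins.

    ng_params: list of (mu0, lam, alpha, beta) tuples, one per action.
    """
    rng = rng or np.random.default_rng()
    wins = np.zeros(len(ng_params))
    for _ in range(n_samples):
        qs = []
        for mu0, lam, alpha, beta in ng_params:
            tau = rng.gamma(alpha, 1.0 / beta)                    # precision ~ Gamma(alpha, rate beta)
            qs.append(rng.normal(mu0, 1.0 / np.sqrt(lam * tau)))  # mean | tau ~ N(mu0, 1/(lam*tau))
        wins[np.argmax(qs)] += 1
    return wins / n_samples
```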
3 Related Work
A key feature of Reinforcement Learning is the use of a reward signal. The reward signal can be
modified to suit the addition of a new information source (this is known as reward shaping [11]).
This is the most common way human feedback has been applied to RL [1-5]. However, several
difficulties arise when integrating human feedback signals that may be infrequent, or occasionally
inconsistent with the optimal policy, violating the necessary and sufficient condition that a shaping
function be potential-based [11]. Another difficulty is the ambiguity of translating a statement like
"yes, that's right" or "no, that's wrong" into a reward. Typically, past attempts have been a manual
process, yielding ad hoc approximations for specific domains. Researchers have also extended reward shaping to account for idiosyncrasies in human input. For example, adding a drift parameter
to account for the human tendency to give less feedback over time [1, 12].
Advancements in recent work sidestep some of these issues by showing human feedback can instead
be used as policy feedback. For example, Thomaz and Breazeal [6] added an UNDO function to the
negative feedback signal, which forced an agent to backtrack to the previous state after its value
update. Work by Knox and Stone [7, 13] has shown that a general improvement to learning from
human feedback is possible if it is used to directly modify the action selection mechanism of the
Reinforcement Learning algorithm. Although both approaches use human feedback to modify an
agent's exploration policy, they still treat human feedback as either a reward or a value. In our
work, we assume human feedback is not an evaluative reward, but is a label on the optimality of
actions. Thus the human's feedback is making a direct statement about the policy itself, rather than
influencing the policy through a reward.
In other work, rather than having the human input serve as a shaping reward, the human provides
demonstrations of the optimal policy. Several papers have shown how the policy information in
human demonstrations can be used for inverse optimal control [14, 15], to seed an agent's exploration [16, 17], and in some cases be used entirely in place of exploration [18, 19]. Our work
similarly focuses on people's knowledge of the policy, but instead of requiring demonstrations we
want to allow people to simply critique the agent's behavior ("that was right/wrong").
Our position that human feedback be used as direct policy advice is related to work in transfer learning [20, 21], in which an agent learns with "advice" about how it should behave. This advice is provided as first order logic rules and is also provided offline, rather than interactively during learning.
Our approach only requires very high-level feedback (right/wrong) and is provided interactively.
4 Policy Shaping
In this section, we formulate human feedback as policy advice, and derive a Bayes optimal algorithm
for converting that feedback into a policy. We also describe how to combine the feedback policy with
the policy of an underlying Reinforcement Learning algorithm. We call our approach Advise.
4.1 Model Parameters
We assume a scenario where the agent has access to communication from a human during its learning
process. In addition to receiving environmental reward, the agent may receive a "right"/"wrong"
label after performing an action. In related work, these labels are converted into shaping rewards
(e.g., "right" becomes +1 and "wrong" −1), which are then used to modify Q-values, or to bias
action selection. In contrast, we use this label directly to infer what the human believes is the
optimal policy in the labeled state.
Using feedback in this way is not a trivial matter of pruning actions from the search tree. Feedback can be both inconsistent with the optimal policy and sparsely provided. Here, we assume a
human providing feedback knows the right answer, but noise in the feedback channel introduces inconsistencies between what the human intends to communicate and what the agent observes. Thus,
feedback is consistent, C, with the optimal policy with probability 0 < C < 1.¹
We also assume that a human watching an agent learn may not provide feedback after every single
action, thus the likelihood, L, of receiving feedback has probability 0 < L < 1. In the event
feedback is received, it is interpreted as a comment on the optimality of the action just performed.
The issue of credit assignment that naturally arises with learning from real human feedback is left
for future work (see [13] for an implementation of credit assignment in a different framework for
learning from human feedback).
4.2 Estimating a Policy from Feedback
Since it is possible that the human may know any number of different optimal actions in a state, the probability that an action, a, in a particular state, s, is optimal is independent of what labels were provided
to the other actions. Consequently, the probability that s, a is optimal can be computed using only the
"right" and "wrong" labels associated with it. We define Δ_{s,a} to be the difference between the number of "right" and "wrong" labels. The probability that s, a is optimal can be obtained using the binomial
distribution as:
$$\frac{C^{\Delta_{s,a}}}{C^{\Delta_{s,a}} + (1-C)^{\Delta_{s,a}}} \qquad (1)$$
Although many different actions may be optimal in a given state, we will assume for this paper that
the human knows only one optimal action, which is the one they intend to communicate. In that
case, an action, a, is optimal in state s only if no other action is optimal (i.e., whether it is optimal now
also depends on the labels provided to the other actions in the state). More formally:
$$C^{\Delta_{s,a}} \, (1-C)^{\sum_{j \neq a} \Delta_{s,j}} \qquad (2)$$
We take Equation 2 to be the probability of performing s, a according to the feedback policy, π_F
(i.e., the value of π_F(s, a)). This is the Bayes optimal feedback policy given the "right" and "wrong"
labels seen, the value for C, and the assumption that only one action is optimal per state. This is obtained by
application of Bayes' rule in conjunction with the binomial distribution and enforcing independence
conditions arising from our assumption that there is only one optimal action. A detailed derivation
of the above results is available in the Appendix, Sections A.1 and A.2.
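A small sketch of Equations 1-2 in code. The helper is our own, and Equation 2 is normalized here so the result is a proper distribution over the state's actions, which the text leaves implicit.

```python
import numpy as np

def feedback_policy(delta, C):
    """Feedback policy pi_F(s, .) from Equation 2.

    delta[a] is the (#right - #wrong) label count for action a in this state,
    and C the assumed feedback consistency, with 0 < C < 1.
    """
    delta = np.asarray(delta, dtype=float)
    total = delta.sum()
    # log of C^delta_a * (1 - C)^(sum_{j != a} delta_j), computed stably
    log_p = delta * np.log(C) + (total - delta) * np.log(1.0 - C)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()
```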
4.3 Reconciling Policy Information from Multiple Sources
Because the use of Advise assumes an underlying Reinforcement Learning algorithm will also be
used (e.g., here we use BQL), the policies derived from multiple information sources must be reconciled. Although there is a chance (since C < 1) that a human could make a mistake when s/he does provide
feedback, given sufficient time, with a likelihood of feedback L > 0.0 and a consistency of
feedback C ≠ 0.5, the total amount of information received from the human should be enough for
the agent to choose the optimal policy with probability 1.0. Of course, an agent will also be
learning on its own at the same time and therefore may converge to its own optimal policy much
sooner than it learns the human's policy. Before an agent is completely confident in either policy,
however, it has to determine what action to perform using the policy information each provides.
¹ Note that the consistency of feedback is not the same as the human's or the agent's confidence that the feedback is correct.
Figure 1: A snapshot of each domain used for the experiments. Pac-Man consisted of a 5x5 grid
world with the yellow Pac-Man avatar, two white food pellets, and a blue ghost. Frogger consisted
of a 4x4 grid world with the green Frogger avatar, two red cars, and two blue water hazards.
We combine the policies from multiple information sources by multiplying them together: π ∝
π_R × π_F. Multiplying distributions together is the Bayes optimal method for combining probabilities
from (conditionally) independent sources [22], and has been used to solve other machine learning
problems as well (e.g., [23]). Note that BQL can only approximately estimate the uncertainty that
each action is optimal from the environment reward signal. Rather than use a different combination
method to compensate for the fact that BQL converges too quickly, we introduced the exploration
tuning parameter, θ, from [10], that can be manually tuned until BQL performs close to optimal.
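The combination step itself is then a single element-wise product followed by renormalization. A sketch, with NumPy arrays over the actions of one state:

```python
def combine_policies(pi_R, pi_F):
    """Combine the BQL policy and the feedback policy by multiplication,
    pi proportional to pi_R x pi_F, then renormalize before sampling an action."""
    p = pi_R * pi_F
    return p / p.sum()
```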
5 Experimental Setup
We evaluate our approach using two game domains, Pac-Man and Frogger (see Fig. 1).
5.1 Pac-Man
Pac-Man consists of a 2-D grid with food, walls, ghosts, and the Pac-Man avatar. The goal is to
eat all the food pellets while avoiding moving ghosts (+500). Points are also awarded for each
food pellet (+10). Points are taken away as time passes (-1) and for losing the game (-500). Our
experiments used a 5 × 5 grid with two food pellets and one ghost. The action set consisted of the
four primary Cartesian directions. The state representation included Pac-Man's position, the position
and orientation of the ghost, and the presence of food pellets.
5.2 Frogger
Frogger consists of a 2-D map with moving cars, water hazards, and the Frogger avatar. The goal
is to cross the road without being run over or jumping into a water hazard (+500). Points are lost
as time passes (-1), for hopping into a water hazard (-500), and for being run over (-500). Each car
drives one space per time step. The car placement and direction of motion is randomly determined
at the start and does not change. As a car disappears off the end of the map it reemerges at the
beginning of the road and continues to move in the same direction. The cars moved only in one
direction, and they started out in random positions on the road. Each lane was limited to one car.
Our experiments used a 4 × 4 grid with two water hazards and two cars. The action set consisted
of the four primary Cartesian directions and a stay-in-place action. The state representation included
Frogger's position and the positions of the two cars.
5.3 Constructing an Oracle
We used a simulated oracle in the place of human feedback, because this allows us to systematically
vary the parameters of feedback likelihood, L, and consistency, C, and test different learning settings
in which human feedback is less than ideal. The oracle was created manually by a human before
the experiments by hand labeling the optimal actions in each state. For states with multiple optimal
actions, a small negative reward (-10) was added to the environment reward signal of the extra
optimal state-action pairs to preserve the assumption that only one action be optimal in each state.
6 Experiments
6.1 A Comparison to the State of the Art
In this evaluation we compare Policy Shaping with Advise to the more traditional Reward Shaping,
as well as recent Interactive Reinforcement Learning techniques. Knox and Stone [7, 13] tried eight
different strategies for combining feedback with an environmental reward signal and they found that
                        Ideal Case            Reduced Frequency       Reduced Consistency     Moderate Case
                      (L = 1.0, C = 1.0)      (L = 0.1, C = 1.0)      (L = 1.0, C = 0.55)     (L = 0.5, C = 0.8)
                       Pac-Man     Frogger     Pac-Man      Frogger    Pac-Man     Frogger     Pac-Man     Frogger
BQL + Action Biasing   0.58±0.02   0.16±0.05  -0.33±0.17   0.05±0.06   0.16±0.04   0.04±0.06   0.25±0.04   0.09±0.06
BQL + Control Sharing  0.34±0.03   0.07±0.06  -2.87±0.12  -0.32±0.13   0.01±0.12   0.02±0.07  -0.18±0.19   0.01±0.07
BQL + Reward Shaping   0.54±0.02   0.11±0.07  -0.47±0.30      0±0.08   0.14±0.04   0.03±0.07   0.17±0.12   0.05±0.07
BQL + Advise           0.77±0.02   0.45±0.04  -0.01±0.11   0.02±0.07   0.21±0.05   0.16±0.06   0.13±0.08   0.22±0.06

Table 1: Comparing the learning rates of BQL + Advise to BQL + Action Biasing, BQL + Control
Sharing, and BQL + Reward Shaping for four different combinations of feedback likelihood, L, and
consistency, C, across two domains. Each entry represents the average and standard deviation of the
cumulative reward in 300 episodes, expressed as the percent of the maximum possible cumulative
reward for the domain with respect to the BQL baseline. Negative values indicate performance
worse than the baseline. Bold values indicate the best performance for that case.
two strategies, Action Biasing and Control Sharing, consistently produced the best results. Both of
these methods use human feedback rewards to modify the policy, rather than shape the MDP reward
function. Thus, they still convert human feedback to a value but recognize that the information
contained in that value is policy information. As will be seen, Advise has similar performance
to these state of the art methods, but is more robust to a noisy signal from the human and other
parameter changes.
Action Biasing uses human feedback to bias the action selection mechanism of the underlying RL
algorithm. Positive and negative feedback is declared a reward rh , and ?rh , respectively. A table
of values, H[s, a] stores the feedback signal for s, a. The modified action selection mechanism is
? a)+ B[s, a]? H[s, a], where Q(s,
? a) is an estimate of the long-term expected
given as argmaxa Q(s,
discounted reward for s, a from BQL, and B[s, a] controls the influence of feedback on learning. The
value of B[s, a] is incremented by a constant b when feedback is received for s, a, and is decayed by
a constant d at all other time steps.
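A sketch of this selection rule and the B[s, a] update for a single state (function names are ours; clipping B at zero during decay is our assumption, since the text only says B is decayed by d):

```python
import numpy as np

def biased_action(q_hat, H, B):
    """Action Biasing selection: argmax_a Q_hat(s,a) + B[s,a] * H[s,a]
    for one state (all arguments are per-action vectors)."""
    return int(np.argmax(q_hat + B * H))

def update_influence(B, got_feedback, b=1.0, d=0.001):
    """B[s,a] grows by b when feedback arrives for s,a and decays by d on
    every other step (b = 1, d = 0.001 in the experiments below)."""
    return B + b if got_feedback else max(B - d, 0.0)
```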
Control Sharing modifies the action selection mechanism directly with the addition of a transition
between 1) the action that gains an agent the maximum known reward according to feedback, and
2) the policy produced using the original action selection method. The transition is defined as the
probability P(a = argmax_a H[s, a]) = min(B[s, a], 1.0). An agent transfers control to a feedback
policy as feedback is received, and begins to switch control to the underlying RL algorithm as
B[s, a] decays. Although feedback is initially interpreted as a reward, Control Sharing does not use
that information, and thus is unaffected if the value of rh is changed.
Reward Shaping, the traditional approach to learning from feedback, works by modifying the MDP
reward. Feedback is first converted into a reward, rh , or ?rh . The modified MDP reward function
is R? (s, a) ? R(s, a) + B[s, a] ? H[s, a]. The values to B[s, a] and H[s, a] are updated as above.
The parameters for each method were manually tuned before the experiments to maximize learning performance. We initialized the BQL hyperparameters to ⟨μ₀^{s,a} = 0, λ^{s,a} = 0.01, α^{s,a} = 1000, β^{s,a} = 0.0000⟩, which resulted in random initial Q-values. We set the BQL exploration parameter θ = 0.5 for Pac-Man and θ = 0.0001 for Frogger. We used a discount factor of γ = 0.99.
Action Biasing, Control Sharing, and Reward Shaping used a feedback influence of b = 1 and a
decay factor of d = 0.001. We set rh = 100 for Action Biasing in both domains. For Reward
Shaping we set rh = 100 in Pac-Man and rh = 1 in Frogger.²
We compared the methods using four different combinations of feedback likelihood, L, and consistency, C, in Pac-Man and Frogger, for a total of eight experiments. Table 1 summarizes the
quantitative results. Fig. 2 shows the learning curve for four cases.
In the ideal case of frequent and correct feedback (L = 1.0; C = 1.0), we see in Fig. 2 that Advise
does much better than the other methods early in the learning process. A human reward that does not
match both the feedback consistency and the domain may fail to eliminate unnecessary exploration
and produce learning rates similar to or worse than the baseline. Advise avoided these issues by not
converting feedback into a reward.
The remaining three graphs in Fig. 2 show one example from each of the non-ideal conditions
that we tested: reduced feedback consistency (L = 1.0; C = 0.55), reduced frequency (L = 0.1;
² We used the conversion rh = 1, 10, 100, or 1000 that maximized MDP reward in the ideal case to also
evaluate the three cases of non-ideal feedback.
[Figure 2: four panels of learning curves (Average Reward vs. Number of Episodes, 0-300) for BQL, BQL + Action Biasing, BQL + Control Sharing, BQL + Reward Shaping, and BQL + Advise: Frogger - Ideal Case (L = 1.0; C = 1.0), Frogger - Reduced Consistency (L = 1.0; C = 0.55), Pac-Man - Reduced Frequency (L = 0.1; C = 1.0), and Pac-Man - Moderate Case (L = 0.5; C = 0.8).]
Figure 2: Learning curves for each method in four different cases. Each line is the average with
standard error bars of 500 separate runs to a duration of 300 episodes. The Bayesian Q-learning
baseline (blue) is shown for reference.
C = 1.0), and a case that we call moderate (L = 0.5; C = 0.8). Action Biasing and Reward
Shaping³ performed comparably to Advise in two cases. Action Biasing does better than Advise in
one case in part because the feedback likelihood is high enough to counter Action Biasing's overly
influential feedback policy. This gives the agent an extra push toward the goal without becoming
detrimental to learning (e.g., causing loops). In its current form, Advise makes no assumptions
about the likelihood the human will provide feedback.
The cumulative reward numbers in Table 1 show that Advise always performed near or above the
BQL baseline, which indicates robustness to reduced feedback frequency and consistency. In contrast, Action Biasing, Control Sharing, and Reward Shaping blocked learning progress in several
cases with reduced consistency (the most extreme example is seen in column 3 of Table 1). Control
Sharing performed worse than the baseline in three cases. Action Biasing and Reward Shaping both
performed worse than the baseline in one case.
Thus having a prior estimate of the feedback consistency (the value of C) allows Advise to balance
what it learns from the human appropriately with its own learned policy. We could have provided
the known value of C to the other methods, but doing so would not have helped set rh , b, or d. These
parameters had to be tuned since they only slightly correspond to C. We manually selected their
values in the ideal case, and then used these same settings for the other cases. However, different
values for rh , b, and d may produce better results in the cases with reduced L or C. We tested this in
our next experiment.
6.2 How The Reward Parameter Affects Action Biasing
In contrast to Advise, Action Biasing and Control Sharing do not use an explicit model of the
feedback consistency. The optimal values to rh , b, and d for learning with consistent feedback may
be the wrong values to use for learning with inconsistent feedback. Here, we test how Action Biasing
performed with a range of values for rh for the case of moderate feedback (L = 0.5 and C = 0.8),
and for the case of reduced consistency (L = 1.0 and C = 0.55). Control Sharing was left out of
this evaluation because changing rh did not affect its learning rate. Reward Shaping was left out of
this evaluation due to the problems mentioned in Section 6.1. The conversion from feedback into
reward was set to either rh = 500 or 1000. Using rh = 0 is equivalent to the BQL baseline.
The results in Fig. 3 show that a large value for rh is appropriate for more consistent feedback;
a small value for rh is best for reduced consistency. This is clear in Pac-Man when a reward of
rh = 1000 led to better-than-baseline learning performance in the moderate feedback case, but
decreased learning rates dramatically below the baseline in the reduced consistency case. A reward
of zero produced the best results in the reduced consistency case. Therefore, rh depends on feedback
consistency.
This experiment also shows that the best value for rh is somewhat robust to a slightly reduced
consistency. A value of either r = 500 or 1000, in addition to r = 100 (see Fig. 2.d), can produce
good results with moderate feedback in both Pac-Man and Frogger. The use of a human influence
parameter B[s, a] to modulate the value for rh is presumably meant to help make Action Biasing
more robust to reduced consistency. The value for B[s, a] is, however, increased by b whenever
³ The results with Reward Shaping are misleading because it can end up in infinite loops when feedback is
infrequent or inconsistent with the optimal policy. In Frogger we had this problem for rh > 1.0, which forced
us to use rh = 1.0. This was not a problem in Pac-Man because the ghost can drive Pac-Man around the map;
instead of roaming the map on its own Pac-Man oscillated between adjacent cells until the ghost approached.
[Figure 3: four panels of learning curves (Average Reward vs. Number of Episodes, 0-300) for BQL + Action Biasing with reward values rh = 0, 500, and 1000: Frogger - Moderate Case (L = 0.5; C = 0.8), Frogger - Reduced Consistency (L = 1.0; C = 0.55), Pac-Man - Moderate Case (L = 0.5; C = 0.8), and Pac-Man - Reduced Consistency (L = 1.0; C = 0.55).]
Figure 3: How different feedback reward values affected BQL + Action Biasing. Each line shows the
average and standard error of 500 learning curves over a duration of 300 episodes. Reward values
of rh = 0, 500, and 1000 were used for the experiments. Results were computed for the moderate
feedback case (L = 0.5; C = 0.8) and the reduced consistency case (L = 1.0; C = 0.55).
feedback is received, and reduced by d over time; b and d are more a function of the domain than the
information in accumulated feedback. Our next experiment demonstrates why this is bad for IRL.
6.3 How Domain Size Affects Learning
Action Biasing, Control Sharing, and Reward Shaping use a "human influence" parameter, B[s, a],
that is a function of the domain size more than the amount of information in accumulated feedback.
To show this we held constant the parameter values and tested how the algorithms performed in a
larger domain. Frogger was increased to a 6×6 grid with four cars (see Fig. 4). An oracle was created
automatically by running BQL to 50,000 episodes 500 times, and then for each state choosing the
action with the highest value. The oracle provided moderate feedback (L = 0.5; C = 0.8) for the
33360 different states that were identified in this process.
Figure 4 shows the results. Whereas Advise still has a learning curve above the BQL baseline (as
it did in the smaller Frogger domain; see the last column of Table 1), Action Biasing, Control
Sharing, and Reward Shaping all had a negligible effect on learning, performing very similarly to the
BQL baseline. In order for those methods to perform as well as they did with the smaller version of
Frogger, the value for B[s, a] needs to be set higher and decayed more slowly by manually finding
new values for b and d. Thus, like rh, the optimal values of b and d are dependent on both the domain
and the quality of feedback. In contrast, the estimated feedback consistency, Ĉ, used by Advise
depends only on the true feedback consistency, C. For comparison, we next show how sensitive Advise
is to a suboptimal estimate of C.
6.4 Using an Inaccurate Estimate of Feedback Consistency
Interactions with a real human will mean that in most cases Advise will not have an exact estimate,
Ĉ, of the true feedback consistency, C. It is presumably possible to identify a value for Ĉ that is close
to the true value. Any deviation from the true value, however, may be detrimental to learning. This
experiment shows how an inaccurate estimate of C affected the learning rate of Advise. Feedback
was generated with likelihood L = 0.5 and a true consistency of C = 0.8. The estimated consistency
was either Ĉ = 1.0, 0.8, or 0.55.
The results are shown in Fig. 5. In both Pac-Man and Frogger, using Ĉ = 0.55 reduced the effectiveness of Advise. The learning curves are similar to the baseline BQL learning curves because using
an estimate of C near 0.5 is equivalent to not using feedback at all. In general, values for Ĉ below C
decreased the possible gains from feedback. In contrast, using an overestimate of C boosted learning
rates for these particular domains and this case of feedback quality. In general, however, overestimating
C can lead to a suboptimal policy, especially if feedback is provided very infrequently. Therefore, it
is desirable for Ĉ to be the closest possible overestimate of the true value, C.
7 Discussion
Overall, our experiments indicate that it is useful to interpret feedback as a direct comment on the
optimality of an action, without converting it into a reward or a value. Advise was able to outperform
tuned versions of Action Biasing, Control Sharing, and Reward Shaping. The performance of Action
Biasing and Control Sharing was not as good as Advise in many cases (as shown in Table 1) because
they use feedback as policy information only after it has been converted into a reward.
[Figure 4 pairs a map of the larger Frogger domain with learning curves (Average Reward vs. Number of Episodes, up to 50,000) for BQL, BQL + A.B., BQL + C.S., BQL + R.S., and BQL + Advise.]
Figure 4: The larger Frogger domain and the corresponding learning results for the case of moderate feedback (L = 0.5; C = 0.8). Each
line shows the average and standard error of
160 learning curves over a duration of 50,000
episodes.
[Figure 5 shows Pac-Man and Frogger learning curves (Average Reward vs. Number of Episodes, 0-300) for BQL + Advise with estimated C values of 1.0, 0.8, and 0.55.]
Figure 5: The effect of over- and underestimating the true feedback consistency, C, on
BQL + Advise in the case of moderate feedback
(L = 0.5, C = 0.8). A line shows the average
and standard error of 500 learning curves over a
duration of 300 episodes.
Action Biasing, Control Sharing, and Reward Shaping suffer because their use of "human influence"
parameters is disconnected from the amount of information in the accumulated feedback. Although
b and d were empirically optimized before the experiments, the optimal values of those parameters
are dependent on the convergence time of the underlying RL algorithm. If the size of the domain increased, for example, B[s, a] would have to be decayed more slowly because the number of episodes
required for BQL to converge would increase. Otherwise Action Biasing, Control Sharing, and Reward Shaping would have a negligible affect on learning. Control Sharing is especially sensitive
to how well the value of the feedback influence parameter, B[s, a], approximates the amount of
information in both policies. Its performance bottomed out in some cases with infrequent and inconsistent feedback because B[s, a] overestimated the amount of information in the feedback policy.
However, even if B[s, a] is set in proportion to the exact probability of the correctness of each policy
(i.e., calculated using Advise), Control Sharing does not allow an agent to simultaneously utilize
information from both sources.
Advise has only one input parameter, the estimated feedback consistency, Ĉ, in contrast to three.
Ĉ is a fundamental parameter that depends only on the true feedback consistency, C, and does not
change if the domain size is increased. When it has the right value for Ĉ, Advise represents the
exact amount of information in the accumulated feedback in each state, and then combines it with
the BQL policy using an amount of influence equivalent to the amount of information in each policy.
These advantages help make Advise robust to infrequent and inconsistent feedback, and fare well
with an inaccurate estimate of C.
A primary direction for future work is to investigate how to estimate Ĉ during learning. That is, a
static model of C may be insufficient for learning from real humans. An alternative approach is to
compute Ĉ online as a human interacts with an agent. We are also interested in addressing other
aspects of human feedback like errors in credit assignment. A good place to start is the approach
described in [13] which is based on using gamma distributions. Another direction is to investigate
Advise for knowledge transfer in a sequence of reinforcement learning tasks (cf. [24]). With these
extensions, Advise may be especially suitable for learning from humans in real-world settings.
8 Conclusion
This paper defined the Policy Shaping paradigm for integrating feedback with Reinforcement Learning. We introduced Advise, which tries to maximize the utility of feedback using a Bayesian approach to learning. Advise produced results on par with or better than the current state of the art
Interactive Reinforcement Learning techniques, showed where those approaches fail while Advise
is unaffected, and it demonstrated robustness to infrequent and inconsistent feedback. With these
advancements this paper may help to make learning from human feedback an increasingly viable
option for intelligent systems.
Acknowledgments
The first author was partly supported by a National Science Foundation Graduate Research Fellowship. This research is funded by the Office of Naval Research under grant N00014-14-1-0003.
References
[1] C. L. Isbell, C. Shelton, M. Kearns, S. Singh, and P. Stone, "A social reinforcement learning
agent," in Proc. of the 5th Intl. Conf. on Autonomous Agents, pp. 377-384, 2001.
[2] H. S. Chang, "Reinforcement learning with supervision by combining multiple learnings and
expert advices," in Proc. of the American Control Conference, 2006.
[3] W. B. Knox and P. Stone, "Tamer: Training an agent manually via evaluative reinforcement,"
in Proc. of the 7th IEEE ICDL, pp. 292-297, 2008.
[4] A. Tenorio-Gonzalez, E. Morales, and L. Villaseñor-Pineda, "Dynamic reward shaping: training
a robot by voice," in Advances in Artificial Intelligence - IBERAMIA, pp. 483-492, 2010.
[5] P. M. Pilarski, M. R. Dawson, T. Degris, F. Fahimi, J. P. Carey, and R. S. Sutton, "Online
human training of a myoelectric prosthesis controller via actor-critic reinforcement learning,"
in Proc. of the IEEE ICORR, pp. 1-7, 2011.
[6] A. L. Thomaz and C. Breazeal, "Teachable robots: Understanding human teaching behavior
to build more effective robot learners," Artificial Intelligence, vol. 172, no. 6-7, pp. 716-737,
2008.
[7] W. B. Knox and P. Stone, "Combining manual feedback with subsequent MDP reward signals
for reinforcement learning," in Proc. of the 9th Intl. Conf. on AAMAS, pp. 5-12, 2010.
[8] R. Dearden, N. Friedman, and S. Russell, "Bayesian Q-learning," in Proc. of the 15th AAAI,
pp. 761-768, 1998.
[9] C. Watkins and P. Dayan, "Q-learning: Technical note," Machine Learning, vol. 8, no. 3-4,
pp. 279-292, 1992.
[10] T. Matthews, S. D. Ramchurn, and G. Chalkiadakis, "Competing with humans at fantasy
football: Team formation in large partially-observable domains," in Proc. of the 26th AAAI,
pp. 1394-1400, 2012.
[11] A. Y. Ng, D. Harada, and S. Russell, "Policy invariance under reward transformations: Theory
and application to reward shaping," in Proc. of the 16th ICML, pp. 341-348, 1999.
[12] C. L. Isbell, M. Kearns, S. Singh, C. R. Shelton, P. Stone, and D. Kormann, "Cobot in LambdaMOO: An adaptive social statistics agent," JAAMAS, vol. 13, no. 3, pp. 327-354, 2006.
[13] W. B. Knox and P. Stone, "Reinforcement learning from simultaneous human and MDP reward," in Proc. of the 11th Intl. Conf. on AAMAS, pp. 475-482, 2012.
[14] A. Y. Ng and S. Russell, "Algorithms for inverse reinforcement learning," in Proc. of the 17th
ICML, 2000.
[15] P. Abbeel and A. Y. Ng, "Apprenticeship learning via inverse reinforcement learning," in Proc.
of the 21st ICML, 2004.
[16] C. Atkeson and S. Schaal, "Learning tasks from a single demonstration," in Proc. of the IEEE
ICRA, pp. 1706-1712, 1997.
[17] M. Taylor, H. B. Suay, and S. Chernova, "Integrating reinforcement learning with human
demonstrations of varying ability," in Proc. of the Intl. Conf. on AAMAS, pp. 617-624, 2011.
[18] L. P. Kaelbling, M. L. Littman, and A. W. Moore, "Reinforcement learning: A survey," JAIR,
vol. 4, pp. 237-285, 1996.
[19] W. D. Smart and L. P. Kaelbling, "Effective reinforcement learning for mobile robots," 2002.
[20] R. Maclin and J. W. Shavlik, "Creating advice-taking reinforcement learners," Machine Learning, vol. 22, no. 1-3, pp. 251-281, 1996.
[21] L. Torrey, J. Shavlik, T. Walker, and R. Maclin, "Transfer learning via advice taking," in Advances in Machine Learning I, Studies in Computational Intelligence (J. Koronacki, S. Wirzchon, Z. Ras, and J. Kacprzyk, eds.), vol. 262, pp. 147-170, Springer Berlin Heidelberg, 2010.
[22] C. Bailer-Jones and K. Smith, "Combining probabilities." GAIA-C8-TN-MPIA-CBJ-053,
2011.
[23] M. L. Littman, G. A. Keim, and N. Shazeer, "A probabilistic approach to solving crossword
puzzles," Artificial Intelligence, vol. 134, no. 1-2, pp. 23-55, 2002.
[24] G. Konidaris and A. Barto, "Autonomous shaping: Knowledge transfer in reinforcement learning," in Proc. of the 23rd ICML, pp. 489-496, 2006.
4,627 | 5,188 | Optimistic policy iteration and natural actor-critic:
A unifying view and a non-optimality result
Paul Wagner
Department of Information and Computer Science
Aalto University
FI-00076 Aalto, Finland
paul.wagner@aalto.fi
Abstract
Approximate dynamic programming approaches to the reinforcement learning
problem are often categorized into greedy value function methods and value-based
policy gradient methods. As our first main result, we show that an important subset
of the latter methodology is, in fact, a limiting special case of a general formulation of the former methodology; optimistic policy iteration encompasses not only
most of the greedy value function methods but also natural actor-critic methods,
and permits one to directly interpolate between them. The resulting continuum adjusts the strength of the Markov assumption in policy improvement and, as such,
can be seen as dual in spirit to the continuum in TD(λ)-style algorithms in policy evaluation. As our second main result, we show for a substantial subset of soft-greedy value function approaches that, while having the potential to avoid policy
oscillation and policy chattering, this subset can never converge toward an optimal policy, except in a certain pathological case. Consequently, in the context of
approximations (either in state estimation or in value function representation), the
majority of greedy value function methods seem to be deemed to suffer either from
the risk of oscillation/chattering or from the presence of systematic sub-optimality.
1 Introduction
We consider the reinforcement learning problem in which one attempts to find an approximately
optimal policy for controlling a stochastic nonlinear dynamical system. We focus on the setting in
which the target system is actively sampled during the learning process. Here the sampling policy
changes during the learning process in a manner that depends on the main policy being optimized.
This learning setting is often called interactive learning [e.g., 23, §3]. Many approaches to the
problem are value-based and build on the methodology of simulation-based approximate dynamic
programming [23, 4, 9, 19, 8, 21]. The majority of these methods are often categorized into greedy
value function methods (critic-only) and value-based policy gradient methods (actor-critic) [e.g.,
23, 13].
Within this interactive setting, the policy gradient approach has better convergence guarantees, with
the strongest case being for Monte Carlo evaluation with "compatible" value function approximation.
In this case, convergence with probability one (w.p.1) to a local optimum can be established for
arbitrary differentiable policy classes under mild assumptions [22, 13, 19]. On the other hand, while
the greedy value function approach is often considered to possess practical advantages in terms of
convergence speed and representational flexibility, its behavior in the proximity of an optimum is
currently not well understood. It is well known that interactively operated approximate hard-greedy
An extended version of this paper with full proofs and additional background material is available at
http://books.nips.cc/ and http://users.ics.aalto.fi/pwagner/.
value function methods can fail to converge to any single policy and instead become trapped in
sustained policy oscillation or policy chattering, which is currently a poorly understood phenomenon
[6, 7]. This applies to both non-optimistic and optimistic policy iteration (value iteration being a
special case of the latter). In general, the best guarantees for this methodology exist in the form of
sub-optimality bounds [6, 7]. The practical value of these bounds, however, is under question (e.g.,
[2; 7, §6.2.2]), as they can permit very bad solutions. Furthermore, it has been shown that these bounds are tight [7, §6.2.3; 12, §3.2].
A hard-greedy policy is a discontinuous function of its parameters, which has been identified as a
key source of problems [18, 10, 17, 22]. In addition to the observation that the class of stochastic
policies may often permit much simpler solutions [cf. 20], it is known that continuously stochastic
policies can also re-gain convergence: both non-optimistic and optimistic soft-greedy approximate
policy iteration using, for example, the Gibbs/Boltzmann policy class, is known to converge with
enough softness, ?enough? being problem-specific. This has been shown by Perkins & Precup [18]
and Melo et al. [14], respectively, although with no consideration of the quality of the obtained
solutions nor with an interpretation of how ?enough? relates to the problem at hand. Unfortunately,
the aforementioned sub-optimality bounds are also lost in this case (consider temperature τ → ∞);
while convergence is re-gained, the properties of the obtained solutions are rather unknown.
To summarize, there are considerable shortcomings in the current understanding of the learning
dynamics at the very heart of the approximate dynamic programming methodology. We share the
belief of Bertsekas [5, 6], expressed in the context of the policy oscillation phenomenon, that a
better understanding of these issues "has the potential to alter in fundamental ways our thinking about approximate DP."
In this paper, we provide insight into the convergence behavior and optimality of the generalized
optimistic form of the greedy value function methodology by reflecting it against the policy gradient
approach. While these two approaches are considered in the literature mostly separately, we are
motivated by the belief that it is eventually possible to fully unify them, so as to have the benefits and
insights from both in a single framework with no artificial (or historical) boundaries, and that such a
unification can eventually resolve the issues outlined above. These issues revolve mainly around the
greedy methodology, while at the same time, solid convergence results exist for the policy gradient
methodology; connecting these methodologies more firmly might well lead to a fuller understanding
of both.
After providing background in Section 2, we take the following steps in this direction. First, we
show that natural actor-critic methods from the policy gradient side are, in fact, a limiting special
case of optimistic policy iteration (Sec. 3). Second, we show that while having the potential to avoid
policy oscillation and chattering, a substantial subset of soft-greedy value function approaches can
never converge to an optimal policy, except in a certain pathological case (Sec. 4). We then conclude
with a discussion in a broader context and use the results to complete a high-level convergence and
optimality property map of the variants of the considered methodology (Sec. 5).
2 Background
A Markov decision process (MDP) is defined by a tuple M = (S, A, P, r), where S and A denote the state and action spaces. S_t ∈ S and A_t ∈ A denote random variables at time t. s, s′ ∈ S and a, b ∈ A denote state and action instances. P(s, a, s′) = P(S_{t+1} = s′ | S_t = s, A_t = a) defines the transition dynamics and r(s, a) ∈ ℝ defines the expected immediate reward function. Non-Markovian aggregate states, i.e., subsets of S, are denoted by y. A policy π(a|s, θ_k) ∈ Π is a stochastic mapping from states to actions, parameterized by θ_k ∈ Θ. Improvement is performed with respect to the performance metric $J(\theta) = \frac{1}{H} \sum_t^H \mathbb{E}[r(S_t, A_t) \mid \pi(\theta)]$. ∇_θ J(θ_k) ∈ Θ denotes a parameter gradient at θ_k. ∇_π J(θ_k) ∈ Π denotes the corresponding policy gradient in the selected policy space. We define the policy distance ‖π_u − π_v‖ as some p-norm of the action probability differences $(\sum_s \sum_a |\pi_u(a|s) - \pi_v(a|s)|^p)^{1/p}$. Action value functions $\tilde{Q}(s, a, \tilde{w}_k)$ and $Q(s, a, \tilde{w}_k)$, parameterized by $\tilde{w}_k$, are estimators of the γ-discounted cumulative reward $\sum_t \gamma^t \mathbb{E}[r(S_t, A_t) \mid S_0 = s, A_0 = a, \pi(\theta_k)]$ for some (s, a) when following some policy π(θ_k). The state value function $V(s, \tilde{w}_k)$ is an estimator of such cumulative reward that follows some s. We use ε to denote a small positive infinitesimal quantity.
We focus on the Gibbs (Boltzmann) policy class with a linear combination of basis functions φ:
$$\pi(a|s, \theta_k) = \frac{e^{\theta_k^\top \phi(s,a)}}{\sum_b e^{\theta_k^\top \phi(s,b)}}. \quad (1)$$
We shall use the term "semi-uniformly stochastic policy" for referring to a policy for which $\pi(a|s) = c_s \lor \pi(a|s) = 0$, $\forall s, a$, where $\forall s\, \exists c_s \in [0, 1]$. Note that both the uniformly stochastic policy and all deterministic policies are special cases of semi-uniformly stochastic policies.
For the value function, we focus on least-squares linear-in-parameters approximation with the same basis φ as in (1). We consider both advantage values [see 22, 19]
$$\tilde{Q}_k(s, a, \tilde{w}_k) = \tilde{w}_k^\top \Big( \phi(s, a) - \sum_b \pi(b|s, \theta_k)\, \phi(s, b) \Big) \quad (2)$$
and absolute action values
$$Q_k(s, a, \tilde{w}_k) = \tilde{w}_k^\top \phi(s, a). \quad (3)$$
Evaluation can be based on either Monte Carlo or temporal difference estimation. We focus on
optimistic policy iteration, which contains both non-optimistic policy iteration and value iteration as
special cases, and on the policy gradient counterparts of these.
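To make (1)–(3) concrete, the sketch below computes the Gibbs policy and the compatible (advantage) feature map for a small tabular setup; the array shapes and function names are our own illustrative choices, not code from the paper.

    import numpy as np

    def gibbs_policy(theta, phi):
        # phi[s, a, :]: basis features; returns pi[s, a] as in (1).
        logits = np.einsum('sad,d->sa', phi, theta)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(logits)
        return e / e.sum(axis=1, keepdims=True)

    def compatible_features(theta, phi):
        # phi(s, a) - sum_b pi(b|s) phi(s, b): the feature map inside (2).
        pi = gibbs_policy(theta, phi)
        mean = np.einsum('sb,sbd->sd', pi, phi)
        return phi - mean[:, None, :]

    def advantage(theta, phi, w_tilde):
        # Q~(s, a, w~) of (2); the absolute values of (3) would use phi directly.
        return np.einsum('sad,d->sa', compatible_features(theta, phi), w_tilde)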
In the general form of optimistic approximate policy iteration (e.g., [7, §6.4]; see also [6, §3.3]), a value function parameter vector w is gradually interpolated toward the most recent evaluation $\tilde{w}$:
$$w_{k+1} = w_k + \kappa_k (\tilde{w}_k - w_k), \quad \kappa_k \in (0, 1]. \quad (4)$$
Non-optimistic policy iteration is obtained with κ_k = 1, ∀k and "complete" evaluations $\tilde{w}_k$ (see below). The corresponding Gibbs soft-greedy policy is obtained by combining (1) and a temperature (softness) parameter τ with
$$\theta_{k+1} = w_{k+1} / \tau_k, \quad \tau_k \in (0, \infty). \quad (5)$$
Hard-greedy iteration is obtained in the limit as τ → 0.
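Read this way, one iteration of (4)–(5) is only a few lines. In the sketch below, evaluate is an assumed stand-in for any deep policy evaluation routine producing $\tilde{w}$ (e.g., LSTD(0) on the features above); it is not specified by the paper.

    import numpy as np

    def optimistic_soft_greedy(evaluate, d, kappa=0.5, tau=0.5, iters=100):
        # evaluate: maps a policy parameter vector theta to an evaluation w~.
        w = np.zeros(d)
        theta = np.zeros(d)
        for _ in range(iters):
            w_tilde = evaluate(theta)        # evaluate pi(theta)
            w = w + kappa * (w_tilde - w)    # (4): interpolate toward the evaluation
            theta = w / tau                  # (5): soft-greedy improvement
        return theta

    # toy evaluation that always returns the same w~, for illustration only
    target = np.array([1.0, -1.0])
    print(optimistic_soft_greedy(lambda th: target, d=2))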
In optimistic policy iteration, policy improvement is based on an incomplete evaluation. We distinguish between two dimensions of completeness, which are evaluation depth and evaluation accuracy. By evaluation depth, we refer to the look-ahead depth after which truncation with the previous value function estimate occurs. For example, LSPE(0) and LSTD(0) [e.g., 15] implement shallow and deep evaluation, respectively. With shallow evaluation, the current value function parameter vector w_k is required for look-ahead truncation when computing $\tilde{w}_{k+1}$. Inaccurate (noisy) evaluation necessitates additional caution in the policy improvement process and is the usual motivation for using (4) with κ < 1.
It is well known that greedy policy iteration can be non-convergent under approximations [4]. The
widely used projected equation approach can manifest convergence behavior that is complex and not
well understood, including bounded but potentially severe sustained policy oscillations [6, 7] (see
the extended version for further details). Similar consequences arise in the context of partial observability for approximate or incomplete state estimation [e.g., 20, 16]. A novel explanation to the
phenomenon in the non-optimistic case was recently proposed in [24, 25], where policy oscillation
was re-cast as sustained overshooting over an attractive stochastic policy. Policy convergence can
be established under various restrictions (see the extended version for further details). Most importantly to this paper, convergence can be established with continuously soft-greedy action selection
[18, 14], in which case, however, the quality of the obtained solutions is unknown.
In policy gradient reinforcement learning [22, 13, 19, 8], improvement is obtained via stochastic gradient ascent:
$$\theta_{k+1} = \theta_k + \alpha_k G(\theta_k)^{-1} \frac{\partial J(\theta_k)}{\partial \theta} = \theta_k + \alpha_k \Delta_k, \quad (6)$$
where α_k ∈ (0, ∞), G is a Riemannian metric tensor that ideally encodes the curvature of the policy parameterization, and Δ_k is some estimate of the gradient. With value-based policy gradient methods, using (1) together with either (2) or (3) fulfills the "compatibility condition" [22, 13]. With (2), the value function parameter vector $\tilde{w}_k$ becomes the natural gradient estimate for the evaluated policy π(θ_k), leading to natural actor-critic algorithms [11, 19], for which
$$\Delta_k = \tilde{w}_k. \quad (7)$$
For policy gradient learning with a "compatible" value function and Monte Carlo evaluation, convergence w.p.1 to a local optimum is established under standard assumptions [22, 13]. Temporal
difference evaluation can lead to sub-optimal results with a known sub-optimality bound [13, 8].
3 Forgetful natural actor-critic
In this section, we show that an important subset of natural actor-critic algorithms is a limiting
special case of optimistic policy iteration. A related connection was recently shown in [24, 25],
where a modified form of the natural actor-critic algorithm by Peters & Schaal [19] was shown
to correspond to non-optimistic policy iteration. In the following, we generalize and simplify this
result: by starting from the more general setting of optimistic policy iteration, we arrive at a unifying
view that both encompasses a broader range of greedy methods and permits interpolation between
the approaches directly with existing (unmodified) methodology.
We consider the Gibbs policy class from (1) and the linear-in-parameters advantage function from
(2), which form a "compatible" actor-critic setup. We assume deep policy evaluation (cf. Section 2).
We begin with the natural actor-critic (NAC) algorithm by Peters & Schaal [19] (cf. (6) and (7)) and
generalize it by adding a forgetting term:
$$\theta_{k+1} = \theta_k + \beta_k \Delta_k - \kappa_k \theta_k, \quad (8)$$
where β_k ∈ (0, ∞), κ_k ∈ (0, 1]. We refer to this generalized algorithm as the forgetful natural actor-critic algorithm, or NAC(κ). In the following, we show that this algorithm is, within the discussed context, equivalent to the general form of optimistic policy iteration in (4) and (5), with the following translation of the parameterization:
$$\beta_k = \frac{\kappa_k}{\tau_k}, \quad \text{or} \quad \tau_k = \frac{\kappa_k}{\beta_k}. \quad (9)$$
Taking the forgetting factor κ in (8) toward zero leads back toward the original natural actor-critic algorithm, with the implication that the original algorithm is a limiting special case of optimistic policy iteration.
Theorem 1. For the case of deep policy evaluation (Section 2), the natural actor-critic algorithm for the Gibbs policy class ((6), (7), (1), (2)) is a limiting special case of Gibbs soft-greedy optimistic policy iteration ((4), (5), (1), (2)).
Proof. The update rule for Gibbs soft-greedy optimistic policy iteration is given in (4) and (5). By moving the temperature to scale $\tilde{w}$ (assume w_0 to be scaled accordingly), we obtain
$$w'_{k+1} = w'_k + \kappa_k (\tilde{w}_k / \tau_k - w'_k) \quad (10)$$
$$\theta_{k+1} = w'_{k+1},$$
again with κ_k ∈ (0, 1], τ_k ∈ (0, ∞). Such a re-formulation effectively re-scales w and is possible only with deep policy evaluation (cf. Section 2), with which the non-scaled w is not needed by the policy evaluation process. We can now remove the redundant second line and rename w′ to θ:
$$\theta_{k+1} = \theta_k + \kappa_k (\tilde{w}_k / \tau_k - \theta_k). \quad (11)$$
Finally, we open up the last term and encapsulate κ/τ into β:
$$\theta_{k+1} = \theta_k + \kappa_k (\tilde{w}_k / \tau_k) - \kappa_k \theta_k \quad (12)$$
$$\phantom{\theta_{k+1}} = \theta_k + \beta_k \tilde{w}_k - \kappa_k \theta_k, \quad (13)$$
with β_k = κ_k / τ_k. Based on (7), we observe that (13) is equivalent to (8). The original natural actor-critic algorithm is obtained in the limit as κ_k → 0, which causes the forgetting term κ_k θ_k to vanish (the effective step size α can still be controlled with τ).
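The translation can be checked numerically. The sketch below feeds both parameterizations the same sequence of evaluations $\tilde{w}_k$, which is legitimate here because under deep evaluation both methods would evaluate the same policy at each step; the dummy evaluation sequence is an assumption for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    d, kappa, tau = 4, 0.3, 0.5
    beta = kappa / tau                                 # translation (9)
    evals = [rng.normal(size=d) for _ in range(20)]    # stand-in for w~_k

    theta_pi = np.zeros(d)    # optimistic policy iteration form, (11)
    theta_nac = np.zeros(d)   # forgetful natural actor-critic form, (8) with (7)
    for w_tilde in evals:
        theta_pi = theta_pi + kappa * (w_tilde / tau - theta_pi)
        theta_nac = theta_nac + beta * w_tilde - kappa * theta_nac
        assert np.allclose(theta_pi, theta_nac)        # (13) equals (8)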
This result has some interesting implications. First, it becomes apparent that the implicit effective step size in optimistic policy iteration is, in fact, α = κ/τ, i.e., it is inversely related to the temperature τ. If the interpolation factor κ is held fixed, a low temperature, which can lead to policy
oscillation, equals a long effective step size. This agrees with the interpretation of policy oscillation
as overshooting in [24, 25]. Likewise, a high temperature equals a short effective step size. In [18],
convergence is established for a high enough constant temperature. This result now becomes translated to showing that convergence is established with a short enough constant effective step size,1
which creates an interesting and more direct connection to convergence results for (batch) steepest
descent methods with a constant step size [e.g., 1, 3]. In addition, this connection might permit the
application of the results in the aforementioned literature to establish, in the considered context, a
constant step size convergence result for the natural actor-critic methodology.
Second, we see that the interpolation scheme in optimistic policy iteration, while originally introduced for the sake of countering an inaccurate value function estimate, actually goes in the direction
of the policy gradient methodology. Smooth interpolation between policy gradient and greedy value
function learning turns out to be possible by simply adjusting the interpolation factor κ while treating the temperature τ as an inverse of the step size (we return to provide an interpretation of the role of κ at a later point). Contrary to the related result in [24], no modifications to existing algorithms
are needed. This connection also allows the convergence results from the policy gradient literature
to be brought in (see Section 2): convergence w.p.1, under standard assumptions from the referred
literature, to an optimal solution is established in the limit for this class of approximate optimistic
policy iteration as the interpolation factor κ is taken toward zero and the step size requirements are inversely enforced on the temperature τ.
Third, we observe that in non-optimistic policy iteration (κ = 1), the forgetting term resets the
parameter vector to the origin at the beginning of every iteration, with the implication that solutions
that are not within the range of a single step from the origin in the direction of the natural gradient
cannot be reached in any number of iterations. The choice of the effective step size, which is
inversely controlled by the temperature, becomes again decisive: a step size that is too short (the
temperature is too high) will cause the algorithm to permanently undershoot the desired optimum,
thus trapping it in sustained sub-optimality, while a step size that is too long (the temperature is too
low) will cause it to overshoot, which can additionally trap it in sustained oscillation. Unfortunately,
even hitting the target exactly with a perfect step size will fail to lead to convergence and optimality
at the same time. Our next section examines these issues more closely.
4 Systematic non-optimality of soft-greedy methods
For greedy value function methods, using the hard-greedy policy class trivially prevents convergence to other than deterministic policies. Furthermore, the proximity of an attractive stochastic
policy can prevent convergence altogether and trap the process in oscillation (cf. Section 2). The
Gibbs soft-greedy policy class, on the other hand, can represent stochastic policies, fixed points do
exist [10, 17], and convergence toward some policy is guaranteed with sufficient softness [18, 14].
While convergence toward deterministic optimal decisions is trivially lost as soon as any softness
is introduced (τ ↛ 0, and assuming a bounded value function), one might hope that convergence
toward stochastic optimal decisions could still occur in some cases. Unfortunately, as we show in
the following, this is not the case: in the presence of any softness, this approach can never converge
toward any optimal policy (i.e., convergence and optimality become mutually exclusive), except in
a certain pathological case.
At this point, we wish to make clear that we are not arguing against the practical value of the greedy
value function methodology in (interactively) approximated problems; the methodology has some
clear merits, and the sub-optimality and oscillations could well be negligible in a given task. Instead,
we take the following result, together with existing literature on policy oscillations, as an indication
of a fundamental theoretical incompatibility of this methodology to this context: the way by which
this methodology deals with stochastic optima seems to be fundamentally flawed, and we believe
that a thorough understanding of this flaw will have, in addition to facilitating sound theoretical
advances, also immediate practical value by permitting correctly informed trade-off decisions.
Theorem 2. Assume an unbiased value function estimator (e.g., Monte Carlo evaluation). Now,
for Gibbs soft-greedy policy iteration ((1), (4) and (5)) using a linear-in-parameters value function
approximator ((2) or (3)), including optimistic and non-optimistic variants (any κ in (4)), there
cannot exist a fixed point at an optimum, except for the uniformly stochastic policy.
1 Note that the diminishing step size α_t in [18, Fig. 1] concerns policy evaluation, not policy improvement.
Proof outline. A fixed point of the update rule (4) must satisfy
$$\tilde{w}_k = w_k, \quad (14)$$
i.e., at a fixed point, the policy evaluation step $\tilde{w}_k := \mathrm{eval}(\pi(w_k / \tau_k))$ for the current parameter vector must yield the same parameter vector as its result:
$$\mathrm{eval}(\pi(w_k / \tau_k)) = w_k. \quad (15)$$
By applying (14) and (7), we have
$$w_k = \tilde{w}_k = \Delta_k = G(\theta_k)^{-1} \nabla_\theta J(\theta_k), \quad (16)$$
which shows that the fixed-point policy π(w_k/τ_k) in (15) is defined solely by its own (scaled) performance gradient.
For an optimal policy and an unbiased estimator, this parameter gradient must, by definition, map to the zero policy gradient, i.e., to $\nabla_\pi J(\theta_k) = 0$. Consequently, an optimal policy at a fixed point is defined solely by the zero policy gradient, making the policy equal to π(0), which is the uniformly stochastic policy. For the full proof, see the extended version.
Theorem 3. Consider the family of methods from Theorem 2. Assume a smooth policy gradient field ($\|\nabla_\pi J(\theta_u) - \nabla_\pi J(\theta_v)\| \to 0$ as $\|\pi_u - \pi_v\| \to 0$) and τ ↛ 0. First, the policy distance between a fixed point policy $\pi^f$ and an optimal policy $\pi^*$ cannot be vanishingly small ($\|\pi^f - \pi^*\| \not< \varepsilon$), except if the optimal policy $\pi^*$ is a semi-uniformly stochastic policy. Second, for bounded returns (γ ↛ 1 and r(s, a) ↛ ±∞, ∀s, a), the policy distance between a fixed point policy $\pi^f$ and an optimal policy $\pi^*$ cannot be vanishingly small ($\|\pi^f - \pi^*\| \not< \varepsilon$), except if the optimal policy $\pi^*$ is the uniformly stochastic policy.
Proof outline. For a policy $\tilde{\pi} = \pi(w_k / \tau_k)$ that is vanishingly close to an optimum, an unbiased parameter gradient Δ_k must, assuming a smooth gradient field, map to a policy gradient that is vanishingly close to zero, i.e., Δ_k must have a vanishingly small effect on $\tilde{\pi}$ with any finite step size:
$$\|\pi(w_k / \tau_k + \alpha \Delta_k) - \pi(w_k / \tau_k)\| < \varepsilon, \quad \forall \alpha > 0,\ \alpha \not\to \infty. \quad (17)$$
If $\tilde{\pi}$ is also a fixed point, then, by (16), we can substitute both w_k and Δ_k in (17) with $\tilde{w}_k$:
$$\|\pi(\tilde{w}_k / \tau_k + \alpha \tilde{w}_k) - \pi(\tilde{w}_k / \tau_k)\| < \varepsilon, \quad \forall \alpha > 0,\ \alpha \not\to \infty$$
$$\Leftrightarrow\ \|\pi((1/\tau_k + \alpha)\tilde{w}_k) - \pi((1/\tau_k)\tilde{w}_k)\| < \varepsilon, \quad \forall \alpha > 0,\ \alpha \not\to \infty. \quad (18)$$
We now see that $\tilde{\pi}$ is defined solely by a temperature-scaled version of a vanishingly small policy gradient, and that the condition in (17) is equivalent to stating that any finite decrease of the temperature must not have a non-vanishing effect on $\tilde{\pi}$. As only semi-uniformly stochastic policies are invariant to such temperature decreases, it follows that $\tilde{\pi}$ must be vanishingly close to such a policy. Furthermore, if assuming bounded returns, then no dimension of the term $\tilde{w}^\top \phi(s, a)$ can approach positive or negative infinity when $\tilde{w}$ is estimated using (2) or (3). Consequently, for τ ↛ 0, the uniformly stochastic policy π(0) becomes the only semi-uniformly stochastic policy that the Gibbs policy class in (1) can approach, with the implication that $\tilde{\pi}$ must be vanishingly close to the uniformly stochastic policy. For the full proof, see the extended version.
To interpret the preceding theorems, we observe that the gist of them is that, assuming a well-behaved gradient field, the closer the evaluated policy is to an optimum, the closer the target point of
the next greedy update will be to the origin (in policy parameter space). At a fixed point, the policy
parameter vector must equal the target point of the next update, causing convergence to or toward a
policy that is exactly optimal but not at the origin to be a contradiction (Theorem 2). Convergence to
or toward a policy that is vanishingly close to an optimum is also impossible, except if the optimum
is (semi-)uniformly stochastic (Theorem 3).
In practical terms, Theorem 2 states that even if the task at hand and the chosen hyperparameters
would allow convergence to some policy in a finite number of iterations, the resulting policy can
never contain optimal decisions, except for uniformly stochastic ones. Theorem 3 generalizes this result to the case of asymptotic convergence toward some limiting policy: for unbounded returns and any τ ↛ 0, it is impossible to have asymptotic convergence toward any optimal decision in any state, except for semi-uniformly stochastic decisions, and for bounded returns and any τ ↛ 0, it is impossible to have asymptotic convergence toward any non-uniform optimal decision in any state.
If convergence is to occur, then the limiting policy must reside "between" the origin and an optimum, i.e., the result must always undershoot the optimum that the learning process was influenced by. However, we can see in (15) that by decreasing the temperature τ, it is possible to shift this point of convergence further away from the origin and closer to the optimum: in the limit of τ → 0, (15) can permit the parameter vector $\tilde{w}$ to converge toward a point that approaches the origin while, at the same time, allowing the corresponding policy $\pi(\tilde{w}/\tau)$ to converge toward a policy that is arbitrarily close to a distant optimum (one can also see that with τ → 0, the inequality in (18) becomes satisfied for any $\tilde{w}_k$, due to α ↛ ∞). Unfortunately, as we already know, such manipulation of the distance of the fixed point from an optimum by adjusting τ can ruin convergence altogether in non-Markovian problems. Perkins & Precup [18] report negative convergence results for non-optimistic iteration (κ = 1) with a too low τ, while for optimistic iteration (κ < 1), Melo et al. [14] report a lack of positive results. Interestingly, this latter case is exactly what Theorem 1 addressed, showing that there actually is a way out and that it is by moving toward natural policy gradient iteration: decreasing the temperature τ toward zero causes the sub-optimality to vanish, while decreasing the interpolation factor κ at the same rate prevents the effective step size from exploding.
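The undershooting is easy to reproduce. Consider the following hypothetical aliased two-step problem of our own construction (not the paper's Figure 2a): starting in s1, action L yields reward 1 and moves to s2, while R terminates with reward 0; in s2, R yields reward 2 and L yields 0, both terminating; s1 and s2 share one observation. A memoryless policy with π(L) = p earns 3p − 2p², so the optimum is the stochastic policy p = 3/4. The sketch below iterates the soft-greedy update with exact occupancy-weighted action values and prints fixed points that stay strictly below 3/4, approaching it only as τ decreases.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def q_values(p):
        # Exact occupancy-weighted action values of the single aggregate state.
        d1 = 1.0 / (1.0 + p)                 # visit share of underlying state s1
        d2 = p / (1.0 + p)                   # visit share of underlying state s2
        q_l = d1 * (1.0 + 2.0 * (1.0 - p))   # L in s1: reward 1 plus V(s2) = 2*(1-p)
        q_r = d2 * 2.0                       # R in s2: reward 2
        return np.array([q_l, q_r])

    for tau in (1.0, 0.5, 0.2):
        w, kappa = np.zeros(2), 0.1
        for _ in range(5000):
            p = sigmoid((w[0] - w[1]) / tau)      # Gibbs policy over {L, R}
            w = w + kappa * (q_values(p) - w)     # update (4); theta = w / tau
        print(tau, sigmoid((w[0] - w[1]) / tau))  # fixed point, versus optimum 0.75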
Finally, we provide a brief discussion on some questions that may have occurred to the reader by
now. First, how does the preceding fit with the well-known soundness of greedy value function
methods in the Markovian case? The crucial difference between the Markovian case (fully observable and tabular) and the non-Markovian case (partially observable or non-tabular) follows from the
standard result for MDPs that states that in the former, all optima must be deterministic (with the
possibility of redundant stochastic optima) [e.g., 23, §A.2]. For the Gibbs policy class, deterministic policies reside at infinity in some direction in the parameter space, with two implications for
the Markovian case. First, the distance to an optimum never decreases. Consequently, the value
function, being a correction toward an optimum, never vanishes toward a "neutral" state. Second,
only the direction of an optimum is relevant, as the distance can be always assumed to be infinite.
This implies that in, and only in Markovian problems, the value function never ceases to retain all
necessary information about the current solution, while in non-Markovian problems, relying solely
on the value function can lead to losing track of the current solution.
Second, when moving toward an optimum at infinity, how can the value function / natural gradient (encoded by $\tilde{w} = \Delta$) stay non-zero and continue to properly represent action values while the corresponding policy gradient ∇_π J(θ) must approach zero at the same time? We note that the equivalence in (7) is between a value function and a natural gradient Δ. We then recall that the curvature of the Gibbs policy class turns into a plateau at infinity, onto which the policy becomes pushed when moving toward a deterministic optimum. The increasing discrepancy between $\Delta = G(\theta)^{-1} \nabla_\theta J(\theta) \not\to 0$ and $\nabla_\pi J(\theta) \to 0$ can be consumed by $G(\theta)^{-1}$ as it captures the curvature of this plateau.
5 Common ground
Figure 1 shows a map of relevant variants of optimistic policy iteration, parameterized as in (4). As
is well known, the hard-greedy variants of this methodology (seen on the left edge on the map) can
become trapped in non-converging cycles over potentially non-optimal policies (see Section 2 for
references and exceptions). For a continuously soft-greedy policy class (toward right on the map),
convergence can be established with enough softness [18, 14]. The natural actor-critic algorithm,
which is convergent and optimal, is placed to the lower left corner by Theorem 1, while the inevitable non-optimality of soft-greedy variants toward right follows from Theorems 2 and 3. The
exact (problem-dependent) place and shape of the line separating non-convergent and convergent
soft-greedy variants (dashed line on the map) remains an open problem.
The main value of Theorem 1 is in bringing the greedy value function and policy gradient methodologies closer to each other. In our context, the unifying NAC(κ) formulation in (8) permits interpolation between the methodologies using the κ parameter. As discussed at the end of Section 4, the
policy-forgetting term requires a Markovian problem for being justified: a greedy update implicitly
stands on a Markov assumption, and the κ parameter in (8) can be interpreted as adjusting the strength of this assumption. In this respect, the policy improvement parameter κ in NAC(κ) can be seen (inversely) as a dual in spirit to the policy evaluation parameter λ in TD(λ)-style algorithms. On the policy evaluation side, having λ = 0 obtains variance reduction by assuming and exploiting Markovianity of the problem, while λ = 1 obtains unbiased estimates also for non-Markovian problems. On the policy improvement side, with κ = 1, we have strictly greedy updates that gain in speed, as the policy can respond instantly to new opportunities appearing in the value function (for empirical observations of such a speed gain, see [11, 25]), and in representational flexibility, due to the lack of continuity constraints between successive policies (for a canonical example, consider fitted Q iteration). This comes at the price of either oscillation or non-optimality if the Markov assumption fails to hold, which is illustrated in Figure 2b for the problem in 2a. With κ → 0, we approach natural gradient updates that remain sound also in non-Markovian settings, which is illustrated in Figure 2c. The possibility to interpolate between the approaches might turn out useful in problems with partial Markovianity: a large κ in the NAC(κ) formulation can be used to quickly find the rough direction of the strongest attractors, after which gradually decreasing κ allows a convergent final ascent toward an optimum (a minimal sketch of such a schedule is given below, after Figure 2).

[Figure 1: The hyperparameter space of the general form of (approximate) optimistic policy iteration in (4), with known convergence and optimality properties (see text for assumptions). The map is parameterized by the interpolation factor κ ∈ (0, 1] and the temperature τ ∈ (0, ∞), with the natural actor-critic limit in the lower left corner. Its regions: non-optimistic hard-greedy: ✗ oscillation (Bertsekas, ...), ✗ non-optimality. Non-optimistic soft-greedy, small τ: ✗ non-convergence (Perkins & Precup), ✗ non-optimality (Theorems 2–3). Non-optimistic soft-greedy, large τ: ✓ convergence (Perkins & Precup), ✗ non-optimality (Theorems 2–3). Optimistic hard-greedy: ✗ chattering (Bertsekas, ...), ✗ non-optimality. Optimistic soft-greedy, large τ: ✓ convergence (Melo et al.), ✗ non-optimality (Theorems 2–3). Natural actor-critic (κ → 0): ✓ convergence, ✓ optimality (Theorem 1). The regions examined empirically in Figures 2b and 2c are marked with cf. labels.]

[Figure 2: Empirical illustration of the behavior of optimistic policy iteration ((1), (2), (4) and (5), with tabular φ) in the proximity of a stochastic optimum. The problem is shown in Fig. 2a. In Figures 2b and 2c, which plot θ(left) − θ(right) over 20 iterations, the optimum at θ(left) − θ(right) = log(2) is denoted by a solid green line, and the uniformly stochastic policy is denoted by a dashed red line. (a) A non-Markovian problem (adapted from [24]); the incoming arrow indicates the start state, and arrows leading out indicate termination with the shown reward. (b) Non-optimality or oscillation with τ ↛ 0; the plotted variants, marked in Fig. 1 (schematic), are κ = 0.2 with τ ∈ {1, 0.2, 0.05}. (c) Interpolation toward NAC with κ → 0 and τ → 0; the plotted variants, marked in Fig. 1 (schematic), are κ = τ ∈ {0.2, 0.05, 0.01} and NAC (τ = 1).]
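A minimal sketch of such a κ-annealing schedule, under the assumption of a simple 1/(1 + t) decay (the schedule itself is not prescribed by the paper):

    import numpy as np

    def nac_kappa_annealed(evaluate, d, tau=0.5, iters=200):
        # NAC(kappa) update (8): near-greedy while kappa is close to 1,
        # approaching the plain natural actor-critic update as kappa -> 0.
        theta = np.zeros(d)
        for t in range(iters):
            kappa = 1.0 / (1.0 + t)       # assumed annealing schedule
            beta = kappa / tau            # translation (9)
            delta = evaluate(theta)       # natural gradient estimate, (7)
            theta = theta + beta * delta - kappa * theta
        return theta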
Acknowledgments
This work has been financially supported by the Academy of Finland through project no. 254104,
and by the Foundation of Nokia Corporation.
8
References
[1] Armijo, L. (1966). Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1), 1–3.
[2] Baxter, J., & Bartlett, P. L. (2000). Reinforcement learning in POMDP's via direct gradient ascent. In Proceedings of the Seventeenth International Conference on Machine Learning, (pp. 41–48).
[3] Bertsekas, D. P. (1997). A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4), 913–926.
[4] Bertsekas, D. P. (2005). Dynamic Programming and Optimal Control. Athena Scientific.
[5] Bertsekas, D. P. (2010). Pathologies of temporal difference methods in approximate dynamic programming. In 49th IEEE Conference on Decision and Control, (pp. 3034–3039).
[6] Bertsekas, D. P. (2011). Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3), 310–335.
[7] Bertsekas, D. P., & Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Athena Scientific.
[8] Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., & Lee, M. (2009). Natural actor-critic algorithms. Automatica, 45(11), 2471–2482.
[9] Buşoniu, L., Babuška, R., De Schutter, B., & Ernst, D. (2010). Reinforcement learning and dynamic programming using function approximators. CRC Press.
[10] De Farias, D. P., & Van Roy, B. (2000). On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3), 589–608.
[11] Kakade, S. M. (2002). A natural policy gradient. In Advances in Neural Information Processing Systems.
[12] Kakade, S. M. (2003). On the Sample Complexity of Reinforcement Learning. Ph.D. thesis, University College London.
[13] Konda, V. R., & Tsitsiklis, J. N. (2004). On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4), 1143–1166.
[14] Melo, F. S., Meyn, S. P., & Ribeiro, M. I. (2008). An analysis of reinforcement learning with function approximation. In Proceedings of the 25th International Conference on Machine Learning, (pp. 664–671).
[15] Nedić, A., & Bertsekas, D. P. (2003). Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems: Theory and Applications, 13(1–2), 79–110.
[16] Pendrith, M. D., & McGarity, M. J. (1998). An analysis of direct reinforcement learning in non-Markovian domains. In Proceedings of the Fifteenth International Conference on Machine Learning.
[17] Perkins, T. J., & Pendrith, M. D. (2002). On the existence of fixed points for Q-learning and sarsa in partially observable domains. In Proceedings of the Nineteenth International Conference on Machine Learning, (pp. 490–497).
[18] Perkins, T. J., & Precup, D. (2003). A convergent form of approximate policy iteration. In Advances in Neural Information Processing Systems.
[19] Peters, J., & Schaal, S. (2008). Natural actor-critic. Neurocomputing, 71(7-9), 1180–1190.
[20] Singh, S. P., Jaakkola, T., & Jordan, M. I. (1994). Learning without state-estimation in partially observable Markovian decision processes. In Proceedings of the Eleventh International Conference on Machine Learning.
[21] Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
[22] Sutton, R. S., McAllester, D., Singh, S., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems.
[23] Szepesvári, C. (2010). Algorithms for Reinforcement Learning. Morgan & Claypool Publishers.
[24] Wagner, P. (2011). A reinterpretation of the policy oscillation phenomenon in approximate policy iteration. In Advances in Neural Information Processing Systems 24, (pp. 2573–2581).
[25] Wagner, P. (to appear). Policy oscillation is overshooting. Neural Networks. Author manuscript available at http://users.ics.aalto.fi/pwagner/.
4,628 | 5,189 | DESPOT: Online POMDP Planning with Regularization
Adhiraj Somani
Nan Ye
David Hsu
Wee Sun Lee
Department of Computer Science
National University of Singapore
adhirajsomani@gmail.com, {yenan,dyhsu,leews}@comp.nus.edu.sg
Abstract
POMDPs provide a principled framework for planning under uncertainty, but are
computationally intractable, due to the "curse of dimensionality" and the "curse of history". This paper presents an online POMDP algorithm that alleviates these
difficulties by focusing the search on a set of randomly sampled scenarios. A
Determinized Sparse Partially Observable Tree (DESPOT) compactly captures the
execution of all policies on these scenarios. Our Regularized DESPOT (R-DESPOT)
algorithm searches the DESPOT for a policy, while optimally balancing the size of
the policy and its estimated value obtained under the sampled scenarios. We give
an output-sensitive performance bound for all policies derived from a DESPOT,
and show that R-DESPOT works well if a small optimal policy exists. We also give
an anytime algorithm that approximates R-DESPOT. Experiments show strong
results, compared with two of the fastest online POMDP algorithms. Source code
along with experimental settings are available at http://bigbird.comp.nus.edu.sg/pmwiki/farm/appl/.
1 Introduction
Partially observable Markov decision processes (POMDPs) provide a principled general framework
for planning in partially observable stochastic environments. However, POMDP planning is computationally intractable in the worst case [11]. The challenges arise from three main sources. First,
a POMDP may have a large number of states. Second, as the state is not fully observable, the
agent must reason with beliefs, which are probability distributions over the states. Roughly, the size
of the belief space grows exponentially with the number of states. Finally, the number of action-observation histories that must be considered for POMDP planning grows exponentially with the
planning horizon. The first two difficulties are usually referred to as the "curse of dimensionality",
and the last one, the "curse of history". To address these difficulties, online POMDP planning (see
[17] for a survey) chooses one action at a time and interleaves planning and plan execution. At each
time step, the agent performs a D-step lookahead search. It plans the immediate next action for the
current belief only and reasons in the neighborhood of the current belief, rather than over the entire
belief space. Our work adopts this online planning approach.
Recently an online POMDP planning algorithm called POMCP has successfully scaled up to very
large POMDPs [18]. POMCP, which is based on Monte Carlo tree search, tries to break the two
curses by sampling states from the current belief and sampling histories with a black-box simulator. It uses the UCT algorithm [9] to control the exploration-exploitation trade-off during the online
lookahead search. However, UCT is sometimes overly greedy and suffers the worst-case performance of Ω(exp(exp(. . . exp(1) . . .)))¹ samples to find a sufficiently good action [4].
This paper presents a new algorithm for online POMDP planning. It enjoys the same strengths
as POMCP, breaking the two curses through sampling, but avoids POMCP's extremely poor
worst-case behavior by evaluating policies on a small number of sampled scenarios [13]. In each
planning step, the algorithm searches for a good policy derived from a Determinized Sparse Partially Observable Tree (DESPOT) for the current belief, and executes the policy for one step. A
DESPOT summarizes the execution of all policies under K sampled scenarios. It is structurally
similar to a standard belief tree, but contains only belief nodes reachable under the K scenarios
¹ Composition of D − 1 exponential functions.
(Figure 1). We can view a DESPOT as a sparsely
sampled belief tree. While a belief tree of height
D contains O(|A|^D |Z|^D) nodes, where |A| and
|Z| are the sizes of the action set and the observation set, respectively, a corresponding DESPOT
contains only O(|A|^D K) nodes, leading to dramatic improvement in computational efficiency
when K is small.
One main result of this work is an output-sensitive bound, showing that a small number of sampled
scenarios is sufficient to give a good estimate of the true value of any policy π, provided that the
size of π is small (Section 3).

Figure 1: A belief tree of height D = 2 (gray) and a corresponding DESPOT (black) obtained with
2 sampled scenarios. Every tree node represents a belief. Every colored dot represents a scenario.

Our Regularized DESPOT (R-DESPOT) algorithm interprets
this lower bound as a regularized utility function, which it uses to optimally balance the size of a
policy and its estimated performance under the sampled scenarios. We show that R-DESPOT computes a near-optimal policy whenever a small optimal policy exists (Section 4). For anytime online
planning, we give a heuristic approximation, Anytime Regularized DESPOT (AR-DESPOT), to the
R-DESPOT algorithm (Section 5). Experiments show strong results of AR-DESPOT, compared with
two of the fastest online POMDP algorithms (Section 6).
2 Related Work
There are two main approaches to POMDP planning: offline policy computation and online search.
In offline planning, the agent computes beforehand a policy contingent upon all possible future
scenarios and executes the computed policy based on the observations received. Although offline
planning algorithms have achieved dramatic progress in computing near-optimal policies (e.g., [15,
21, 20, 10]), they are difficult to scale up to very large POMDPs, because of the exponential number
of future scenarios that must be considered.
In contrast, online planning interleaves planning and plan execution. The agent searches for a single
best action for the current belief only, executes the action, and updates the belief. The process
then repeats at the new belief. A recent survey [17] lists three main categories of online planning
algorithms: heuristic search, branch-and-bound pruning, and Monte Carlo sampling. AR-DESPOT
contains elements of all three, and the idea of constructing DESPOTs through deterministic sampling
is related to those in [8, 13]. However, AR-DESPOT balances the size of a policy and its estimated
performance during the online search, resulting in improved performance for suitable planning tasks.
During the online search, most algorithms, including those based on Monte Carlo sampling (e.g.,
[12, 1]), explicitly represents the belief as a probability distribution over the state space. This,
however, limits their scalability for large state spaces, because a single belief update can take time
quadratic in the number of states. In contrast, DESPOT algorithms represent the belief as a set of
particles, just as POMCP [18] does, and do not perform belief update during the online search.
Online search and offline policy computation are complementary and can be combined, e.g., by
using approximate or partial policies computed offline as the default policies at the bottom of the
search tree for online planning (e.g., [2, 5]) or as macro-actions to shorten the search horizon [7].
3 Determinized Sparse Partially Observable Trees
3.1 POMDP Preliminaries
A POMDP is formally a tuple (S, A, Z, T, O, R), where S is a set of states, A is a set of actions, Z
is a set of observations, T(s, a, s′) = p(s′ | s, a) is the probability of transitioning to state s′ when the
agent takes action a in state s, O(s, a, z) = p(z|s, a) is the probability of observing z if the agent
takes action a and ends in state s, and R(s, a) is the immediate reward for taking action a in state s.
A POMDP agent does not know the true state, but receives observations that provide partial information on the state. The agent maintains a belief, often represented as a probability distribution
over S. It starts with an initial belief b0. At time t, it updates the belief bt according to Bayes'
rule by incorporating information from the action taken at time t − 1 and the resulting observation:
bt = τ(bt−1, at−1, zt). A policy π : B → A specifies the action a ∈ A at belief b ∈ B. The value of
a policy π at a belief b is the expected total discounted reward obtained by following π with initial
belief b:

$$V_\pi(b) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t R\big(s_t, \pi(b_t)\big) \,\Big|\, b_0 = b\Big],$$

for some discount factor γ ∈ [0, 1).
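As a concrete illustration of this Bayesian update, the following is a minimal sketch for a discrete POMDP; the dictionary-based representation of T, O, and the belief b is an assumption made here for readability, not an interface from the paper.

    def belief_update(b, a, z, states, T, O):
        # Compute b' = tau(b, a, z): b'(s') is proportional to
        # O(s', a, z) * sum_s T(s, a, s') * b(s), then normalized.
        b_new = {}
        for s_next in states:
            p = O.get((s_next, a, z), 0.0) * sum(
                T.get((s, a, s_next), 0.0) * b[s] for s in states)
            b_new[s_next] = p
        norm = sum(b_new.values())
        if norm == 0.0:
            raise ValueError("observation has zero probability under (b, a)")
        return {s: p / norm for s, p in b_new.items()}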
One way of online POMDP planning is to construct a belief tree (Figure 1), with the current belief
b0 as the initial belief at the root of the tree, and perform lookahead search on the tree for a policy ?
that maximizes V? (b0 ). Each node of the tree represents a belief. A node branches into |A| action
edges, and each action edge branches further into |Z| observation edges. If a node and its child
represent beliefs b and b′, respectively, then b′ = τ(b, a, z) for some a ∈ A and z ∈ Z. To search
a belief tree, we typically truncate it at a maximum depth D and perform a post-order traversal. At
each leaf node, we simulate a default policy to obtain a lower bound on its value. At each internal
node, we apply Bellman's principle of optimality to choose a best action:

$$V(b) = \max_{a \in A} \Big\{ \sum_{s \in S} b(s)\,R(s, a) + \gamma \sum_{z \in Z} p(z \mid b, a)\, V\big(\tau(b, a, z)\big) \Big\}, \qquad (1)$$
which recursively computes the maximum value of action branches and the average value of observation branches. The results are an approximately optimal policy π̂, represented as a policy tree,
and the corresponding value V_π̂(b0). A policy tree retains only the chosen action branches, but all
observation branches from the belief tree². The size of such a policy is the number of tree nodes.
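A depth-limited search following (1) can be sketched as below; it reuses the belief_update sketch above, substitutes a zero leaf value for the default-policy lower bound, and is exponential in the depth D, which is exactly the cost the DESPOT construction in Section 3.2 reduces.

    def tree_search_value(b, depth, states, actions, observations, R, T, O, gamma):
        # Depth-limited evaluation of Eq. (1) on the full belief tree.
        if depth == 0:
            return 0.0  # stand-in for a default-policy value at the leaves
        best = float("-inf")
        for a in actions:
            value = sum(b[s] * R[(s, a)] for s in states)
            for z in observations:
                # p(z | b, a) = sum over s, s' of b(s) T(s,a,s') O(s',a,z)
                pz = sum(b[s] * T.get((s, a, sn), 0.0) * O.get((sn, a, z), 0.0)
                         for s in states for sn in states)
                if pz > 0.0:
                    b_next = belief_update(b, a, z, states, T, O)
                    value += gamma * pz * tree_search_value(
                        b_next, depth - 1, states, actions, observations,
                        R, T, O, gamma)
            best = max(best, value)
        return best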
Our algorithms represent a belief as a set of particles, i.e., sampled states. We start with an initial
belief. At each time step, we search for a policy π̂, as described above. The agent executes the
first action a of π̂ and receives a new observation z. We then apply particle filtering to incorporate
information from a and z into an updated new belief. The process then repeats.
3.2 DESPOT
While a standard belief tree captures the execution of all policies under all possible scenarios, a
DESPOT captures the execution of all policies under a set of sampled scenarios (Figure 1). It contains
all the action branches, but only the observation branches under the sampled scenarios.
We define DESPOT constructively by applying a deterministic simulative model to all possible action
sequences under K scenarios sampled from an initial belief b0 . A scenario is an abstract simulation
trajectory starting with some state s0. Formally, a scenario for a belief b is a random sequence φ =
(s0, φ1, φ2, . . .), in which the start state s0 is sampled according to b and each φi is a real number
sampled independently and uniformly from the range [0, 1]. The deterministic simulative model is a
function g : S × A × R → S × Z, such that if a random number φ is distributed uniformly over [0, 1],
then (s′, z′) = g(s, a, φ) is distributed according to p(s′, z′ | s, a) = T(s, a, s′) O(s′, a, z′). When
we simulate this model for an action sequence (a1, a2, a3, . . .) under a scenario (s0, φ1, φ2, . . .), the
simulation generates a trajectory (s0, a1, s1, z1, a2, s2, z2, . . .), where (st, zt) = g(st−1, at, φt) for
t = 1, 2, . . .. The simulation trajectory traces out a path (a1, z1, a2, z2, . . .) from the root of the
standard belief tree. We add all the nodes and edges on this path to the DESPOT. Each DESPOT node
b contains a set Φb, consisting of all scenarios that it encounters. The start states of the scenarios in
Φb form a particle set that represents b approximately. We insert the scenario (s0, φ1, φ2, . . .) into
the set Φb0 and insert (st, φt+1, φt+2, . . .) into the set Φbt for the belief node bt reached at the end
of the subpath (a1 , z1 , a2 , z2 , . . . , at , zt ), for t = 1, 2, . . .. Repeating this process for every action
sequence under every sampled scenario completes the construction of the DESPOT.
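The construction can be sketched directly from these definitions; the functions b0_sampler and g below are placeholders for a concrete problem's initial-belief sampler and deterministic simulative model.

    import random
    from collections import defaultdict

    def make_scenario(b0_sampler, depth, rng):
        # A scenario: a start state plus a stream of uniform random numbers.
        return (b0_sampler(rng), [rng.random() for _ in range(depth)])

    def build_despot_nodes(g, scenarios, action_seqs):
        # Simulate every action sequence under every scenario. Each distinct
        # (a1, z1, ..., at, zt) path prefix is one DESPOT node; record which
        # scenarios pass through it.
        nodes = defaultdict(set)
        for k, (s0, phis) in enumerate(scenarios):
            for seq in action_seqs:
                s, path = s0, []
                for a, phi in zip(seq, phis):
                    s, z = g(s, a, phi)  # deterministic given phi
                    path.append((a, z))
                for t in range(len(path) + 1):
                    nodes[tuple(path[:t])].add(k)
        return nodes

Enumerating all |A|^D action sequences reproduces the O(|A|^D K) node count; the forward search of Section 5 instead grows the tree incrementally.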
A DESPOT is determined completely by the K scenarios, which are sampled randomly a priori.
Intuitively, a DESPOT is a standard belief tree with some observation branches removed. While
a belief tree of height D has O(|A|^D |Z|^D) nodes, a corresponding DESPOT has only O(|A|^D K)
nodes, because of reduced observation branching under the sampled scenarios. Hence the name
Determinized Sparse Partially Observable Tree (DESPOT).
To evaluate a policy π under sampled scenarios, define Vπ,φ as the total discounted reward of the
trajectory obtained by simulating π under a scenario φ. Then V̂π(b) = Σ_{φ∈Φb} Vπ,φ / |Φb| is an
estimate of Vπ(b), the value of π at b, under a set of scenarios, Φb. We then apply the usual belief
tree search from the previous subsection to a DESPOT to find a policy having good performance
under the sampled scenarios. We call this algorithm Basic DESPOT (B-DESPOT).
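Estimating V̂π(b) is then a plain average of simulated returns, as in this sketch; the policy is assumed here to map the action-observation history so far to an action.

    def estimate_policy_value(policy, g, R, scenarios, gamma):
        # V_hat(b) = (1/|Phi_b|) * sum over scenarios of the discounted
        # return obtained by simulating the policy under that scenario.
        total = 0.0
        for s, phis in scenarios:
            ret, discount, history = 0.0, 1.0, []
            for phi in phis:
                a = policy(tuple(history))
                ret += discount * R[(s, a)]
                s, z = g(s, a, phi)
                history.append((a, z))
                discount *= gamma
            total += ret
        return total / len(scenarios)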
The idea of using sampled scenarios for planning is exploited in hindsight optimization (HO) as
well [3, 22]. HO plans for each scenario independently and builds K separate trees, each with
O(|A|^D) nodes. In contrast, DESPOT captures all K scenarios in a single tree with O(|A|^D K)
nodes and allows us to reason with all scenarios simultaneously. For this reason, DESPOT can
provide stronger performance guarantees than HO.
² A policy tree can be represented more compactly by labeling each node by the action edge that follows and
then removing the action edge. We do not use this representation here.
4 Regularized DESPOT
To search a DESPOT for a near-optimal policy, B-DESPOT chooses a best action at every internal
node of the DESPOT, according to the scenarios it encounters. This, however, may cause overfitting:
the chosen policy optimizes for the sampled scenarios, but does not perform well in general, as
many scenarios are not sampled. To reduce overfitting, our R-DESPOT algorithm leverages the idea
of regularization, which balances the estimated performance of a policy under the sampled scenarios
and the policy size. If the subtree at a DESPOT node is too large, then the performance of a policy
for this subtree may not be estimated reliably with K scenarios. Instead of searching the subtree for
a policy, R-DESPOT terminates the search and uses a simple default policy from this node onwards.
To derive R-DESPOT, we start with two theoretical results. The first one provides an output-sensitive
lower bound on the performance of any arbitrary policy derived from a DESPOT. It implies that
despite its sparsity, a DESPOT contains sufficient information for approximate policy evaluation,
and the accuracy depends on the size of the policy. The second result shows that by optimizing
this bound, we can find a policy with small size and high value. For convenience, we assume that
R(s, a) ∈ [0, Rmax] for all s ∈ S and a ∈ A, but the results can be easily extended to accommodate
negative rewards. The proofs of both results are available in the supplementary material.
Formally, a policy tree derived from a DESPOT contains the same root as the DESPOT, but only one
action branch at each internal node. Let Πb0,D,K denote the class of all policy trees derived from
DESPOTs that have height D and are constructed from K sampled scenarios for belief b0. Like a
DESPOT, a policy tree π ∈ Πb0,D,K may not contain all observation branches. If the execution of
π encounters an observation branch not present in π, we simply follow the default policy from then
on. Similarly, we follow the default policy, when reaching a leaf node. We now bound the error on
the estimated value of a policy derived from a DESPOT.
Theorem 1 For any τ, α ∈ (0, 1), every policy tree π ∈ Πb0,D,K satisfies

$$V_\pi(b_0) \;\geq\; \frac{1-\alpha}{1+\alpha}\,\hat V_\pi(b_0) \;-\; \frac{R_{\max}}{(1+\alpha)(1-\gamma)} \cdot \frac{\ln(4/\tau) + |\pi| \ln\!\big(KD|A||Z|\big)}{\alpha K}, \qquad (2)$$

with probability at least 1 − τ, where V̂π(b0) is the estimated value of π under any set of K randomly
sampled scenarios for belief b0.
The second term on the right hand side (RHS) of (2) captures the additive error in estimating the
value of policy tree π, and depends on the size of π. We can make this error arbitrarily small
by choosing a suitably large K, the number of sampled scenarios. Furthermore, this error grows
logarithmically with |A| and |Z|, indicating that the approximation scales well with large action and
observation sets. The constant α can be tuned to tighten the bound. A smaller α value allows the first
term on the RHS of (2) to approximate V̂π better, but increases the additive error in the second term.
We have specifically constructed the bound in this multiplicative-additive form, due to Haussler [6],
in order to apply efficient dynamic programming techniques in R-DESPOT.
Now a natural idea is to search for a near-optimal policy π by maximizing the RHS of (2), which
guarantees the performance of π by accounting for both the estimated performance and the size of π.
Theorem 2 Let π* be an optimal policy at a belief b0. Let π be a policy derived from a DESPOT
that has height D and is constructed from K randomly sampled scenarios for belief b0 . For any
τ, α ∈ (0, 1), if π maximizes

$$\frac{1-\alpha}{1+\alpha}\,\hat V_\pi(b_0) \;-\; \frac{R_{\max}}{(1+\alpha)(1-\gamma)} \cdot \frac{|\pi| \ln\!\big(KD|A||Z|\big)}{\alpha K}$$

among all policies derived from the DESPOT, then

$$V_\pi(b_0) \;\geq\; \frac{1-\alpha}{1+\alpha}\,V_{\pi^*}(b_0) \;-\; \frac{R_{\max}}{(1+\alpha)(1-\gamma)} \left( \frac{\ln(8/\tau) + |\pi^*| \ln\!\big(KD|A||Z|\big)}{\alpha K} + (1-\alpha)\bigg( \sqrt{\frac{2\ln(2/\tau)}{K}} + \gamma^D \bigg) \right), \qquad (3)$$

with probability at least 1 − τ.
Theorem 2 implies that if a small optimal policy tree π* exists, then we can find a near-optimal
policy with high probability by maximizing (3). Note that π* is a globally optimal policy at b0. It
may or may not lie in Πb0,D,K. The expression in (3) can be rewritten in the form V̂π(b0) − λ|π|,
similar to that of regularized utility functions in many machine learning algorithms.
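To get a feel for the bound, the additive error term in (2) can be evaluated numerically; the parameter values below (K = 500 and D = 90 as in Section 6, Tag-sized |A| = 5 and |Z| = 30) are illustrative choices, not values prescribed by the theory.

    import math

    def penalty(policy_size, K, D, A, Z, tau=0.1, alpha=0.5,
                r_max=1.0, gamma=0.95):
        # Additive error term on the RHS of (2).
        return (r_max / ((1 + alpha) * (1 - gamma))) * (
            math.log(4 / tau) + policy_size * math.log(K * D * A * Z)
        ) / (alpha * K)

    # The penalty grows linearly in |pi|: small policy trees can be
    # evaluated reliably from K scenarios, large ones cannot.
    for size in (1, 10, 100):
        print(size, round(penalty(size, K=500, D=90, A=5, Z=30), 3))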
We now describe R-DESPOT, which consists of two main steps. First, R-DESPOT constructs a
DESPOT T of height D using K scenarios, just as B-DESPOT does. To improve online planning
performance, it may use offline learning to optimize the values for D and K. Second, R-DESPOT
performs bottom-up dynamic programming on T and derives a policy tree that maximizes (3).
For a given policy tree π derived from the DESPOT T, we define the regularized weighted discounted
utility (RWDU) for a node b of π:

$$\nu(b) = \frac{|\Phi_b|}{K}\,\gamma^{\Delta(b)}\,\hat V_{\pi_b}(b) - \lambda|\pi_b|,$$

where |Φb| is the number of scenarios passing through node b, γ is the discount factor, Δ(b) is
the depth of b in the tree π, πb is the subtree of π rooted at b, and λ is a fixed constant. Then the
regularized utility V̂π(b0) − λ|π| is simply ν(b0). We can compute ν(b) recursively:

$$\nu(b) = \hat R(b, a_b) + \sum_{b' \in \mathrm{CH}_\pi(b)} \nu(b') \qquad \text{and} \qquad \hat R(b, a_b) = \frac{1}{K} \sum_{\phi \in \Phi_b} \gamma^{\Delta(b)} R(s_\phi, a_b) - \lambda,$$

where ab is the chosen action of π at the node b, CHπ(b) is the set of child nodes of b in π, and sφ
is the start state associated with the scenario φ.
We now describe the dynamic programming procedure that searches for an optimal policy in T . For
any belief node b in T, let ν*(b) be the maximum RWDU of b under any policy tree π derived from
T. We compute ν*(b) recursively. If b is a leaf node of T, ν*(b) = (|Φb|/K) γ^Δ(b) V̂π0(b) − λ, for some
default policy π0. Otherwise,

$$\nu^*(b) = \max\left\{ \frac{|\Phi_b|}{K}\,\gamma^{\Delta(b)}\,\hat V_{\pi_0}(b) - \lambda,\;\; \max_{a}\Big\{ \hat R(b, a) + \sum_{b' \in \mathrm{CH}(b,a)} \nu^*(b') \Big\} \right\}, \qquad (4)$$
where CH(b, a) is the set of child nodes of b under the action branch a. The first maximization
in (4) chooses between executing the default policy or expanding the subtree at b. The second
maximization chooses among the different actions available. The value of an optimal policy for
the DESPOT T rooted at the belief b0 is then ν*(b0) and can be computed with bottom-up dynamic
programming in time linear in the size of T .
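A recursive sketch of this dynamic program is given below. The node interface (fields scenarios, depth, actions, and the callbacks default_value, r_hat, children) is an assumed in-memory representation of the DESPOT, not an API from the paper.

    def nu_star(node, K, gamma, lam, default_value, r_hat, children):
        # Eq. (4): either fall back to the default policy at this node,
        # or pick the best action branch and recurse on its children.
        # Returns (value, best_action); best_action is None when the
        # default policy is kept, which prunes the subtree.
        stay = (len(node.scenarios) / K) * (gamma ** node.depth) \
            * default_value(node) - lam
        best_v, best_a = stay, None
        for a in node.actions:
            v = r_hat(node, a) + sum(
                nu_star(c, K, gamma, lam, default_value, r_hat, children)[0]
                for c in children(node, a))
            if v > best_v:
                best_v, best_a = v, a
        return best_v, best_a

Because each node is visited once, the computation is linear in the size of T, as stated above.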
5 Anytime Regularized DESPOT
To further improve online planning performance for large-scale POMDPs, we introduce AR-DESPOT, an anytime approximation of R-DESPOT. AR-DESPOT applies heuristic search and branch-and-bound pruning to uncover the more promising parts of a DESPOT and then searches the partially
constructed DESPOT for a policy that maximizes the regularized utility in Theorem 2. A brief summary of AR-DESPOT is given in Algorithm 1. Below we provide some details on how AR-DESPOT
performs the heuristic search (Section 5.1) and constructs the upper and lower bounds for branch-and-bound pruning (Sections 5.2 and 5.3).
5.1 DESPOT Construction by Forward Search
AR-DESPOT incrementally constructs a DESPOT T using heuristic forward search [19, 10]. Initially,
T contains only the root node with associated belief b0 and a set Φb0 of scenarios sampled according
to b0. We then make a series of trials, each of which augments T by tracing a path from the root to a
leaf of T and adding new nodes to T at the end of the path. For every belief node b in T , we maintain
an upper bound U(b) and a lower bound L(b) on V̂π*(b), which is the value of the optimal policy
π* for b under the set of scenarios Φb. Similarly, we maintain bounds U(b, a) and L(b, a) on the
Q-value

$$\hat Q^*(b, a) = \frac{1}{|\Phi_b|} \sum_{\phi \in \Phi_b} R(s_\phi, a) \;+\; \gamma \sum_{b' \in \mathrm{CH}(b,a)} \frac{|\Phi_{b'}|}{|\Phi_b|}\,\hat V^*(b').$$

A trial starts at the root of T. In each step, it chooses the action branch a* that maximizes U(b, a) for the current node b and
then chooses the observation branch z* that maximizes the weighted excess uncertainty at the child
node b′ = τ(b, a*, z):

$$\mathrm{WEU}(b') = \frac{|\Phi_{b'}|}{|\Phi_b|}\,\mathrm{excess}(b'),$$

where excess(b′) = U(b′) − L(b′) − ε γ^−Δ(b′) [19] and ε is a constant specifying the desired gap
between the upper and lower bounds at the root b0. If the chosen node τ(b, a*, z*) has negative
Algorithm 1 AR-DESPOT
1: Set b0 to the initial belief.
2: loop
3:    T ← BuildDespot(b0).
4:    Compute an optimal policy π* for T using (4).
5:    Execute the first action a of π*.
6:    Receive observation z.
7:    Update the belief b0 ← τ(b0, a, z).

BuildDespot(b0)
1: Sample a set Φb0 of K random scenarios for b0.
2: Insert b0 into T as the root node.
3: while time permitting do
4:    b ← RunTrial(b0, T).
5:    Back up upper and lower bounds for every node on the path from b to b0.
6: return T

RunTrial(b, T)
1: if Δ(b) > D then
2:    return b
3: if b is a leaf node then
4:    Expand b one level deeper, and insert all new nodes into T as children of b.
5: a* ← arg max_{a∈A} U(b, a).
6: z* ← arg max_{z∈Z_{b,a*}} WEU(τ(b, a*, z)).
7: b ← τ(b, a*, z*).
8: if WEU(b) ≥ 0 then
9:    return RunTrial(b, T)
10: else
11:   return b
excess uncertainty, the trial ends. Otherwise it continues until reaching a leaf node of T . We then
expand the leaf node b one level deeper by adding new belief nodes for every action and every
observation as children of b. Finally we trace the path backward to the root and perform backup on
both the upper and lower bounds at each node along the way. For the lower-bound backup,

$$L(b) = \max_{a \in A} \left\{ \frac{1}{|\Phi_b|} \sum_{\phi \in \Phi_b} R(s_\phi, a) \;+\; \gamma \sum_{z \in Z_{b,a}} \frac{|\Phi_{\tau(b,a,z)}|}{|\Phi_b|}\, L\big(\tau(b, a, z)\big) \right\}, \qquad (5)$$

where Zb,a is the set of observations encountered when action a is taken at b under all scenarios in
Φb. We repeat the trials as long as time permits, thus making
the algorithm anytime.
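One backup step of (5) at an internal node can be sketched as follows; the node and child objects with scenarios, children, and lower fields are an assumed representation.

    def backup_lower_bound(node, R, gamma):
        # Eq. (5): per-action average immediate reward over the node's
        # scenarios, plus the scenario-weighted lower bounds of the
        # children reached under that action; keep the best action.
        n = len(node.scenarios)
        best = float("-inf")
        for a in node.actions:
            v = sum(R[(sc.state, a)] for sc in node.scenarios) / n
            for (a2, z), child in node.children.items():
                if a2 == a:
                    v += gamma * (len(child.scenarios) / n) * child.lower
            best = max(best, v)
        node.lower = best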
5.2 Initial Upper Bounds
There are several approaches for constructing the initial upper bound at a node b of a DESPOT. A
simple one is the uninformative bound of Rmax/(1 − γ). To obtain a tighter bound, we may exploit
domain-specific knowledge. Here we give a domain-independent construction, which is the average
upper bound over all scenarios in Φb. The upper bound for a particular scenario φ ∈ Φb is the maximum value achieved by any arbitrary policy under φ. Given φ, we have a deterministic planning
problem and solve it by dynamic programming on a trellis of D time slices. Trellis nodes represent
states, and edges represent actions at each time step. The path with highest value in the trellis gives
the upper bound under φ. Repeating this procedure for every φ ∈ Φb and taking the average gives
an upper bound on the value of b under the set Φb. It can be computed in O(K|S||A|D) time.
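This per-scenario bound is a finite-horizon deterministic dynamic program, sketched below; it assumes the deterministic simulative model g from Section 3.2 and a small enumerable state set.

    def scenario_upper_bound(s0, phis, g, R, states, actions, gamma):
        # Backward pass over a trellis of D time slices: value[s] is the
        # best discounted return from state s using the remaining phis.
        value = {s: 0.0 for s in states}
        for t in reversed(range(len(phis))):
            value = {
                s: max(R[(s, a)] + gamma * value[g(s, a, phis[t])[0]]
                       for a in actions)
                for s in states
            }
        return value[s0]

    def average_upper_bound(scenarios, g, R, states, actions, gamma):
        return sum(scenario_upper_bound(s0, phis, g, R, states, actions, gamma)
                   for s0, phis in scenarios) / len(scenarios)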
5.3 Initial Lower Bounds and Default Policies
To construct the lower bound at a node b, we may simulate any policy for N steps under the scenarios
in Φb and compute the average total discounted reward, all in O(|Φb|N) time. One possibility is
to use a fixed-action policy for this purpose. A better one is to handcraft a policy that chooses an
action based on the history of actions and observations, a technique used in [18]. However, it is
often difficult to handcraft effective history-based policies. We thus construct a policy using the
belief b: π(b) = f(m(b)), where m(b) is the mode of the probability distribution b and f : S → A
is a mapping that specifies the action at the state s ∈ S. It is much more intuitive to construct f,
and we can approximate m(b) easily by determining the most frequent state using Φb. Note that
although history-based policies satisfy the requirements of Theorem 1, belief-based policies do not.
The difference is, however, unlikely to be significant to affect performance in practice.
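The belief-based default policy described here amounts to a few lines; f is the hand-specified state-to-action mapping, and the mode is approximated by the most frequent start state among the node's scenarios.

    from collections import Counter

    def mode_default_policy(f):
        def policy(scenarios):
            # scenarios: iterable of (start_state, phis) pairs at the node
            mode_state, _ = Counter(s for s, _ in scenarios).most_common(1)[0]
            return f(mode_state)
        return policy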
6 Experiments
To evaluate AR-DESPOT experimentally, we compared it with four other algorithms. Anytime Basic DESPOT (AB-DESPOT) is AR-DESPOT without the dynamic programming step that computes
RWDU. It helps to understand the benefit of regularization. AEMS2 is an early successful online
POMDP algorithm [16, 17]. POMCP has scaled up to very large POMDPs [18]. SARSOP is a
state-of-the-art offline POMDP algorithm [10]. It helps to calibrate the best performance achievable
for POMDPs of moderate size. In our online planning tests, each algorithm was given exactly 1
second per step to choose an action. For AR-DESPOT and AB-DESPOT, K = 500 and D = 90.
The regularization parameter λ for AR-DESPOT was selected offline by running the algorithm with a
training set distinct from the online test set. The discount factor is γ = 0.95. For POMCP, we used
the implementation from the original authors³, but modified it in order to support very large number
of observations and strictly follow the 1-second time limit for online planning.
We evaluated the algorithms on four domains, including a very large one with about 1056 states
(Table 1). In summary, compared with AEMS2, AR-DESPOT is competitive on smaller POMDPs,
but scales up much better on large POMDPs. Compared with POMCP, AR-DESPOT performs better
than POMCP on the smaller POMDPs and scales up just as well.
We first tested the algorithms on Tag [15], a standard benchmark problem. In Tag, the agent's goal
is to find and tag a target that intentionally moves away. Both the agent and target operate in a grid
with 29 possible positions. The agent knows its own position but can observe the target's position
only if they are in the same location. The agent can either stay in the same position or move to
the four adjacent positions, paying a cost for each move. It can also perform the tag action and
is rewarded if it successfully tags the target, but is penalized if it fails. For POMCP, we used the
Tag implementation that comes with the package, but modified it slightly to improve its default
rollout policy. The modified policy always tags when the agent is in the same position as the robot,
providing better performance. For AR-DESPOT, we use a simple particle set default policy, which
moves the agent towards the mode of the target in the particle set. For the upper bound, we average
the upper bound for each particle as described in Section 5.2. The results (Table 1) show that AR-DESPOT gives comparable performance to AEMS2.
Theorem 1 suggests that AR-DESPOT may still perform well when the observation space is large,
if a good small policy exists. To examine the performance of AR-DESPOT on large observation
spaces, we experimented with an augmented version of Tag called LaserTag. In LaserTag, the
agent moves in a 7 × 11 rectangular grid with obstacles placed in 8 random cells. The behavior
of the agent and opponent are identical to that in Tag, except that in LaserTag the agent knows its
location before the game starts, whereas in Tag this happens only after the first observation is seen.
The agent is equipped with a laser that gives distance
estimates in 8 directions. The distance between 2 adjacent cells is considered one unit, and the laser reading
in each direction is generated from a normal distribution centered at the true distance of the agent from the
nearest obstacle in that direction, with a standard deviation of 2.5 units. The readings are discretized into
whole units, so an observation comprises a set of 8 integers. For a map of size 7 × 11, |Z| is of the order
of 10^6. The environment for LaserTag is shown in Figure 2. As can be seen from Table 1, AR-DESPOT
outperforms POMCP on this problem. We can also see the effect of regularization by comparing
AR-DESPOT with AB-DESPOT. It is not feasible to run AEMS2 or SARSOP on this problem in
reasonable time because of the very large observation space.

Figure 2: Laser Tag. The agent moves in a 7 × 11 grid with obstacles placed randomly in 8 cells. It is
equipped with a noisy laser that gives distance estimates in 8 directions.
To demonstrate the performance of AR-DESPOT on large state spaces, we experimented with the
RockSample problem [19]. The RockSample(n, k) problem mimics a robot moving in an n × n grid
containing k rocks, each of which may be good or bad. At each step, the robot either moves to an
adjacent cell, samples a rock, or senses a rock. Sampling gives a reward of +10 if the rock is good
and -10 otherwise. Both moving and sampling produce a null observation. Sensing produces an
observation in {good, bad}, with the probability of producing the correct observation decreasing
³ http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Applications.html
Table 1: Performance comparison, according to the average total discounted reward achieved. The results
for SARSOP and AEMS2 are replicated from [14] and [17], respectively. SARSOP and AEMS2 failed to run
on some domains, because their state space or observation space is too large. For POMCP, both results from
our own tests and those from [18] (in parentheses) are reported. We could not reproduce the earlier published
results, possibly because of the code modification and machine differences.
                       Tag            LaserTag        RS(7,8)         RS(11,11)       RS(15,15)       Pocman
No. States |S|         870            4,830           12,544          247,808         7,372,800       ~10^56
No. Actions |A|        5              5               13              16              20              4
No. Observations |Z|   30             ~1.5 x 10^6     3               3               3               1024
SARSOP                 -6.03 ± 0.12   -               21.47 ± 0.04    21.56 ± 0.11    -               -
AEMS2                  -6.19 ± 0.15   -               21.37 ± 0.22    -               -               -
POMCP                  -7.14 ± 0.28   -19.58 ± 0.06   16.80 ± 0.30    18.10 ± 0.36    12.23 ± 0.32    294.16 ± 4.06
                                                      (20.71 ± 0.21)  (20.01 ± 0.23)  (15.32 ± 0.28)
AB-DESPOT              -6.57 ± 0.26   -11.13 ± 0.30   21.07 ± 0.32    21.60 ± 0.32    18.18 ± 0.30    290.34 ± 4.12
AR-DESPOT              -6.26 ± 0.28   -9.34 ± 0.26    21.08 ± 0.30    21.65 ± 0.32    18.57 ± 0.30    307.96 ± 4.22
exponentially with the agent's distance from the rock. A terminal state is reached when the agent
moves past the east edge of the map. For AR-DESPOT, we use a default policy derived from the
particle set as follows: a new state is created with the positions of the robot and the rocks unchanged,
and each rock is labeled as good or bad depending on whichever condition is more prevalent in the
particle set. The optimal policy for the resulting state is used as the default policy. The optimal
policy for all states is computed before the algorithm begins, using dynamic programming with the
same horizon length as the maximum depth of the search tree. For the initial upper bound, we use the
method described in Section 5.2. As in [18], we use a particle filter to represent the belief to examine
the behavior of the algorithms in very large state spaces. For POMCP, we used the implementation
in [18] but ran it on the same platform as AR-DESPOT. As the results for our runs of POMCP are
poorer than those reported in [18], we also reproduce their reported results in Table 1. The results
in Table 1 indicate that AR-DESPOT is able to scale up to very large state spaces. Regularization
does not appear beneficial to this problem, possibly because it is mostly deterministic, except for the
sensing action.
Finally, we implemented Pocman, the partially observable version of the video game Pacman, as
described in [18]. Pocman has an extremely large state space of approximately 10^56. We compute
an approximate upper bound for a belief by summing the following quantities for each particle in
it, and taking the average over all particles: reward for eating each pellet discounted by its distance
from pocman; reward for clearing the level discounted by the maximum distance to a pellet; default
per-step reward of −1 for a number of steps equal to the maximum distance to a pellet; penalty for
eating a ghost discounted by the distance to the closest ghost being chased (if any); penalty for dying
discounted by the average distance to the ghosts; and half the penalty for hitting a wall if pocman
tries to double back along its direction of movement. This need not always be an upper bound,
but AR-DESPOT can be modified to run even when this is the case. For the lower bound, we use
a history-based policy that chases a random ghost, if visible, when pocman is under the effect of a
powerpill, and avoids ghosts and doubling-back when it is not. This example shows that AR-DESPOT
can be used successfully even in cases of extremely large state space.
7 Conclusion
This paper presents DESPOT, a new approach to online POMDP planning. Our R-DESPOT algorithm
and its anytime approximation, AR-DESPOT, search a DESPOT for an approximately optimal policy,
while balancing the size of the policy and the accuracy on its value estimate. Theoretical analysis
and experiments show that the new approach outperforms two of the fastest online POMDP planning
algorithms. It scales up better than AEMS2, and it does not suffer the extremely poor worst-case
behavior of POMCP. The performance of AR-DESPOT depends on the upper and lower bounds
supplied. Effective methods for automatic construction of such bounds will be an interesting topic
for further investigation.
Acknowledgments. This work is supported in part by MoE AcRF grant 2010-T2-2-071, National
Research Foundation Singapore through the SMART IRG program, and US Air Force Research
Laboratory under agreement FA2386-12-1-4031.
References
[1] J. Asmuth and M.L. Littman. Approaching Bayes-optimality using Monte-Carlo tree search. In Proc. Int. Conf. on Automated Planning & Scheduling, 2011.
[2] D.P. Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, 3rd edition, 2005.
[3] E.K.P. Chong, R.L. Givan, and H.S. Chang. A framework for simulation-based network control via hindsight optimization. In Proc. IEEE Conf. on Decision & Control, volume 2, pages 1433-1438, 2000.
[4] P.-A. Coquelin and R. Munos. Bandit algorithms for tree search. In Proc. Uncertainty in Artificial Intelligence, 2007.
[5] S. Gelly and D. Silver. Combining online and offline knowledge in UCT. In Proc. Int. Conf. on Machine Learning, 2007.
[6] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78-150, 1992.
[7] R. He, E. Brunskill, and N. Roy. Efficient planning under uncertainty with macro-actions. J. Artificial Intelligence Research, 40(1):523-570, 2011.
[8] M. Kearns, Y. Mansour, and A.Y. Ng. Approximate planning in large POMDPs via reusable trajectories. In Advances in Neural Information Processing Systems (NIPS), volume 12, pages 1001-1007, 1999.
[9] L. Kocsis and C. Szepesvari. Bandit based Monte-Carlo planning. In Proc. Eur. Conf. on Machine Learning, pages 282-293, 2006.
[10] H. Kurniawati, D. Hsu, and W.S. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proc. Robotics: Science and Systems, 2008.
[11] O. Madani, S. Hanks, and A. Condon. On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision problems. In Proc. AAAI Conf. on Artificial Intelligence, pages 541-548, 1999.
[12] D. McAllester and S. Singh. Approximate planning for factored POMDPs using belief state simplification. In Proc. Uncertainty in Artificial Intelligence, 1999.
[13] A.Y. Ng and M. Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proc. Uncertainty in Artificial Intelligence, pages 406-415, 2000.
[14] S.C.W. Ong, S.W. Png, D. Hsu, and W.S. Lee. Planning under uncertainty for robotic tasks with mixed observability. Int. J. Robotics Research, 29(8):1053-1068, 2010.
[15] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In Proc. Int. Jnt. Conf. on Artificial Intelligence, pages 477-484, 2003.
[16] S. Ross and B. Chaib-Draa. AEMS: An anytime online search algorithm for approximate policy refinement in large POMDPs. In Proc. Int. Jnt. Conf. on Artificial Intelligence, pages 2592-2598, 2007.
[17] S. Ross, J. Pineau, S. Paquet, and B. Chaib-Draa. Online planning algorithms for POMDPs. J. Artificial Intelligence Research, 32(1):663-704, 2008.
[18] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems (NIPS), 2010.
[19] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In Proc. Uncertainty in Artificial Intelligence, pages 520-527, 2004.
[20] T. Smith and R. Simmons. Point-based POMDP algorithms: Improved analysis and implementation. In Proc. Uncertainty in Artificial Intelligence, 2005.
[21] M.T.J. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. J. Artificial Intelligence Research, 24:195-220, 2005.
[22] S.W. Yoon, A. Fern, R. Givan, and S. Kambhampati. Probabilistic planning via determinization in hindsight. In AAAI, pages 1010-1016, 2008.
4,629 | 519 | Adaptive Development of Connectionist Decoders
for Complex Error-Correcting Codes
Sheri L. Gish
Mario Blaum
IBM Research Division
Almaden Research Center
650 Harry Road
San Jose, CA 95120
Abstract
We present an approach for development of a decoder for any complex
binary error-correcting code (ECC) via training from examples of decoded
received words. Our decoder is a connectionist architecture. We describe
two separate solutions: a system-level solution (the Cascaded Networks
Decoder); and the ECC-Enhanced Decoder, a solution which simplifies
the mapping problem which must be solved for decoding. Although both
solutions meet our basic approach constraint for simplicity and compactness, only the ECC-Enhanced Decoder meets our second basic constraint
of being a generic solution.
1 INTRODUCTION
1.1 THE DECODING PROBLEM
An error-correcting code (ECC) is used to identify and correct errors in a received
binary vector which is possibly corrupted due to transmission across a noisy channel.
In order to use a selected error-correcting code, the information bits, or the bits
containing the message, are encoded into a valid ECC codeword by the addition of
a set of extra bits, the redundancy, determined by the properties of the selected
ECC. To decode a received word, there is a pre-processing step first in which a
syndrome is calculated from the word. The syndrome is a vector whose length is
equal to the redundancy. If the syndrome is the all-zero vector, then the received
word is a valid codeword (no errors). The non-zero syndromes have a one-to-one
relationship with the error vectors provided the number of errors does not exceed
the error-correcting capability of the code. (An error vector is a binary vector
equal in length to an ECC codeword with the error positions having a value of 1
while the rest of the positions have the value 0). The decoding process is defined as
the mapping of a syndrome to its associated error vector. Once an error vector is
found, the corrected codeword can be calculated by XORing the error vector with
the received word. For more background in error-correcting codes, the reader is
referred to any book in the field, such as [2, 9].
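For a small code this whole decoding pipeline fits in a few lines; the sketch below uses the (7,4) Hamming code rather than the Golay code, since its lookup table has only 8 entries, and it illustrates why table size explodes for longer, more powerful codes.

    import itertools
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code (column j is the
    # binary representation of j+1).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def syndrome(word):
        return tuple(H.dot(word) % 2)

    # Lookup table: syndrome -> minimum-weight error vector.
    table = {}
    for positions in itertools.chain([()], itertools.combinations(range(7), 1)):
        e = np.zeros(7, dtype=int)
        e[list(positions)] = 1
        table[syndrome(e)] = e

    def decode(received):
        # XOR the error vector out of the received word.
        return (received + table[syndrome(received)]) % 2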
ECC's differ in the number of errors which they can correct and also in the distance
(measured as a Hamming distance in codespace) which can be recognized between
the received word and a true codeword. Codes which can correct more errors and
cover greater distances are considered more powerful. However, in practice the
difficulty of developing an efficient decoder which can correct many errors prevents
the use of most ECC's in the solution of real world problems. Although decoding
can be done for any ECC via lookup table, this method quickly becomes intractable
as the length of codewords and the number of errors possibly corrected increase.
Development of an efficient decoder for a particular ECC is not straightforward.
Moreover, it was shown that decoding of a random code is an NP-hard problem [1, 4].
The purpose of our work is to develop an ECC decoder using the trainable machine
paradigm; i.e. we develop a decoder via training using examples of decoded received
words. To prove our concept, we have selected a binary block code, the (23,12,7)
Golay Code, which has "real world" complexity. The Golay Code corrects up to 3
errors and has minimum distance 7. A Golay codeword is 23 bits long (12 information bits, 11 bit redundancy); the syndrome is 11 bits long. There exist many
efficient decoding methods for the Golay code [2, 3, 9], but the code complexity
represents quite a challenge for our proposed approach.
1.2 A CONNECTIONIST ECC DECODER
We use a connectionist architecture as our ECC decoder; the input is a syndrome
(we assume that the straightforward step of syndrome calculation is pre-processing)
and the output is the portion of the error vector corresponding to the information
bits in the received word (we ignore the redundancy). The primary reason for our
choice of a connectionist architecture is its inherent simplicity and compactness;
a connectionist architecture solution is readily implemented in either hardware or
software solutions to complex real world problems. The particular architecture we
use is the multi-layer feedforward network with one hidden layer. There are full
connections only between adjacent layers. The number of nodes in the input layer
is the number of bits in the syndrome, and the number of nodes in the output layer
is the number of information bits in the ECC codeword. The number of nodes in the
hidden layer is a free parameter, but typically this number is no more than 1 or 2
nodes greater than the number of nodes in the input layer. Our activation function
is the logistic function and our training algorithm is backpropagation (see [10] for a
description of both). This architectural approach has been demonstrated to be both
cost-effective and a superior performer compared to classical statistical alternative
methods in the solution of complex mapping problems when it is used as a trainable
pattern classifier [6, 7].
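A minimal sketch of such a network for the Golay decoding task follows: 11 input nodes, 12 output nodes, a hidden layer of 12 as one plausible setting, logistic activations, and a plain backpropagation step with squared-error loss. All sizes and the learning rate are illustrative assumptions, not values reported in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 0.5, (11, 12)), np.zeros(12)  # syndrome -> hidden
    W2, b2 = rng.normal(0, 0.5, (12, 12)), np.zeros(12)  # hidden -> error bits

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_step(x, target, lr=0.5):
        # One backpropagation update on a (syndrome, error-bits) pair.
        global W1, b1, W2, b2
        h = logistic(x @ W1 + b1)
        y = logistic(h @ W2 + b2)
        dy = (y - target) * y * (1 - y)   # d(loss)/d(pre-activation), output
        dh = (dy @ W2.T) * h * (1 - h)    # backpropagated to the hidden layer
        W2 -= lr * np.outer(h, dy); b2 -= lr * dy
        W1 -= lr * np.outer(x, dh); b1 -= lr * dh
        return y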
There are two basic constraints which we have placed on our trainable connectionist
decoder. First, the final connectionist architecture must be simple and contain as
few nodes as possible. Second, the method we use to develop our decoder must be
able to be generalized to any binary ECC. To meet the second constraint, we ensured
that the training dataset contained only examples of decoded words (i.e. no a priori
knowledge of code patterning or existing decoding algorithms was included), and
also that the training dataset was as small a subset of the possible error vectors as
was required to obtain generalization by trained networks.
2 RESULTS
2.1 THE CASCADED NETWORKS DECODER
Using our basic approach, we have developed two separate solutions. One, the
Cascaded Networks Decoder (see Figure 1), is a system-level solution which parses
the decoding problem into a set of more tractable problems, each addressed by a
separate network. These smaller networks each solve either simple classification
problems (binary decisions) or are specialized decoders. Performance of the Cascaded Networks Decoder is 95% correct for the Golay code (tested on all 2^11 possible
error vectors), and the whole system is small and compact. However, this solution
does not meet our constraint that the solution method be generic, since the parsing
of the original problem does require some a priori knowledge about the ECC, and
the training of each network is done on a separate, self-contained schedule.
2.2 THE ECC-ENHANCED DECODER
The approach taken by the Cascaded Networks Decoder simplifies the solution
strategy of the decoding problem, while the ECC-Enhanced Decoder simplifies the
mapping problem to be solved by the decoder. In the ECC-Enhanced Decoder,
both the input syndrome and the output error vector are encoded as codewords
of an ECC. Such encoding should serve to separate the inputs in input space and
the outputs in output space, creating a "region-to-region" mapping which is much
easier than the "point-to-point" mapping required without encoding [8]. In addition,
the decoding of the network output compensates for some level of uncertainty in
the network's performance; an output vector within a small distance of the target
vector will be corrected to the actual target by the ECC. This enhances training
procedures [5, 8].
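At the system level the ECC-Enhanced Decoder is a three-stage pipeline, sketched below; encode_in, network, and decode_out stand for the input-ECC encoder, the trained network, and the output-ECC decoder respectively (the concrete code choices are discussed in the following subsections).

    def ecc_enhanced_decode(syndrome_bits, encode_in, network, decode_out):
        x = encode_in(syndrome_bits)            # separate inputs in code space
        soft = network(x)                       # soft bit estimates in [0, 1]
        hard = [1 if v > 0.5 else 0 for v in soft]
        # Snap to the nearest codeword, correcting small network errors.
        return decode_out(hard)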
We have found that the ECC-Enhanced Decoder method meets all of our constraints
for a connectionist architecture. However, we also have found that choosing the best
ECC for encoding the input and for encoding the output represents two critical and
quite separate problems which must be solved in order for the method to succeed.
2.2.1 Choosing the Input ECC Encoding
The goal for the chosen ECC into which the input is encoded is to achieve maximum
separation of input patterns in code space. The major constraint is the size of the
codeword (number of bits which the length of the redundancy must be), because
longer codewords increase the complexity of training and the size (in number of
Figure 1: Cascaded Networks Decoder. A system-level solution incorporating 5
cascaded neural networks, mapping an 11-bit syndrome to a 12-bit error vector.
nodes) of the connectionist architecture. To determine the effect of different types
of ECC's on the separation of input patterns in code space, we constructed a 325
pattern training dataset (mapping 11 bit syndrome to 12 bit error vector) and
encoded only the inputs using 4 different ECC's. The candidate ECC's (with the
size of redundancy required to encode the 11 bit syndrome) were
- Hamming (bit level, 4 bit redundancy)
- Extended Hamming (bit level, 5 bit redundancy)
- Reed Solomon (4 bit byte level, 2 byte redundancy)
- Fire (bit level, 11 bit redundancy)
We trained 5 networks (1 with no encoding of input, 1 each with a different ECC
encoding) using this training dataset. Empirically, we had determined that this
training dataset is slightly too small to achieve generalization for this task; we
trained each network until its performance level on a 435 pattern test dataset (different patterns from the training dataset but encoded identically) degraded 20%.
We then analyzed the effect of the input encoding on the patterning of error positions we observed for the output vectors.
The results of our analysis are illustrated in Figures 2 and 3. These bar graphs
look only at output vectors found to have 2 or more errors, and show the proximity
of error positions within an output vector. Each bar corresponds to the maximum
distance of error positions within a vector (adjacent positions have a distance of
1). The bar height represents the total frequency of vectors with a given maximum
distance; each bar is color-coded to break down the frequency by total number of
errors per vector. This type of measurement shows the degree of burst (clustering of
error positions) in the errors; knowing whether or not one has burst errors influences
the likelihood of correction of those errors by an ECC (for instance, Fire codes are
burst correcting codes).
Figure 2: Bar graphs of output errors made by the decoder. There was no
encoding of the input in this instance. Training dataset results are on left, test
dataset results are on right.
Our analysis shows that the Reed Solomon ECC is the only input encoding which
separated the input patterns in a way which made use of an output pattern ECC
encoding effective (resulted in more burst-type errors, decreased the total number of
error positions in output vectors which had errors). The 11 bit redundancy required
by the Fire code for input encoding increased complexity so that this solution was
worse than the others in terms of performance. Thus, we have chosen the Reed
Solomon ECC for input encoding in our ECC-Enhanced Decoder.
2.2.2 Choosing the Output ECC Encoding
The goal for the chosen ECC into which the output is encoded is correction of
the maximum number of errors made by the decoder. Like the constraint imposed
on the chosen ECC for input encoding, the ECC selected for encoding the output
Figure 3: Bar graphs of effects of different ECC input encodings on output errors
made by the decoder. Training dataset results are on left, test dataset results are on
right. Top row is Hamming code encoding, bottom row is Reed Solomon encoding.
should add as small a redundancy as possible. However, there is another even more
important constraint on the choice of ECC for output encoding: decoding simplicity.
The major advantage gained from encoding the output is the correction of slight
uncertainty in the performance of the decoder, and this advantage is gained after
the output is decoded. Thus, any ECC selected for output encoding should be one
which can be decoded efficiently.
The error separation results we gained from our analysis of the effects of input
encoding were used to guide our choices for an ECC into which the output would
be encoded. We chose our ECC from the 4 candidates we considered for the input
(these ECC's all can be decoded efficiently). The redundancy cost for encoding a
12 bit error vector was the same as in the 11 bit input case for the Reed Solomon
and Fire codes, but was increased by 1 bit for the Hamming codes. Based on the
result that a Reed Solomon encoding of the input both increased the amount of
burst errors and decreased the total number of errors per output vector, we chose the Hamming code and the Fire code for our output encoding ECC. Both encodings yielded excellent performance on the Golay code decoding problem; the Fire code output encoding resulted in better generalization by the network and thus better performance (87% correct) than the Hamming code output encoding (84% correct).
4,630 | 5,190 | Approximate Dynamic Programming Finally
Performs Well in the Game of Tetris
Victor Gabillon
INRIA Lille - Nord Europe,
Team SequeL, FRANCE
victor.gabillon@inria.fr
Mohammad Ghavamzadeh*
INRIA Lille - Team SequeL
& Adobe Research
mohammad.ghavamzadeh@inria.fr
Bruno Scherrer
INRIA Nancy - Grand Est,
Team Maia, FRANCE
bruno.scherrer@inria.fr
Abstract
Tetris is a video game that has been widely used as a benchmark for various optimization techniques including approximate dynamic programming (ADP) algorithms. A look at the literature of this game shows that while ADP algorithms that have been (almost) entirely based on approximating the value function (value function based) have performed poorly in Tetris, the methods that search directly in the space of policies by learning the policy parameters using an optimization black box, such as the cross entropy (CE) method, have achieved the best reported results. This makes us conjecture that Tetris is a game in which good policies are easier to represent, and thus, learn than their corresponding value functions. So, in order to obtain a good performance with ADP, we should use ADP algorithms that search in a policy space, instead of the more traditional ones that search in a value function space. In this paper, we put our conjecture to test by applying such an ADP algorithm, called classification-based modified policy iteration (CBMPI), to the game of Tetris. Our experimental results show that for the first time an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris in both small 10 × 10 and large 10 × 20 boards. Although the CBMPI's results are similar to those of the CE method in the large board, CBMPI uses considerably fewer (almost 1/6) samples (calls to the generative model) than CE.
1
Introduction
Tetris is a popular video game created by Alexey Pajitnov in 1985. The game is played on a grid originally composed of 20 rows and 10 columns, where pieces of 7 different shapes fall from the top (see Figure 1). The player has to choose where to place each falling piece by moving it horizontally and rotating it. When a row is filled, it is removed and all the cells above it move one line down. The goal is to remove as many rows as possible before the game is over, i.e., when there is no space available at the top of the grid for the new piece.
In this paper, we consider the variation of the game in which the player knows only the current falling piece, and not the next several coming pieces. This game constitutes an interesting optimization benchmark in which the goal is to find a controller (policy) that maximizes the average (over multiple games) number of lines removed in a game (score).[1] This optimization problem is known to be computationally hard. It contains a huge number of board configurations (about 2^200 ≈ 1.6 × 10^60), and even in the case that the sequence of pieces is known in advance, finding the optimal strategy is an NP hard problem [4].
[Figure 1: A screen-shot of the game of Tetris with its seven pieces (shapes).]
Approximate dynamic programming (ADP) and reinforcement learning (RL) algorithms have been used in Tetris. These algorithms formulate Tetris as a Markov decision process (MDP) in which the state is defined by the current board configuration plus the falling piece, the actions are the
Footnote *: Mohammad Ghavamzadeh is currently at Adobe Research, on leave of absence from INRIA.
Footnote 1: Note that this number is finite because it was shown that Tetris is a game that ends with probability one [3].
possible orientations of the piece and the possible locations that it can be placed on the board,[2] and the reward is defined such that maximizing the expected sum of rewards from each state coincides with maximizing the score from that state. Since the state space is large in Tetris, these methods use value function approximation schemes (often linear approximation) and try to tune the value function parameters (weights) from game simulations. The first application of ADP in Tetris seems to be by Tsitsiklis and Van Roy [22]. They used the approximate value iteration algorithm with two state features: the board height and the number of holes in the board, and obtained a low score of 30 to 40. Bertsekas and Ioffe [1] proposed the λ-Policy Iteration (λ-PI) algorithm (a generalization of value and policy iteration) and applied it to Tetris. They approximated the value function as a linear combination of a more elaborate set of 22 features and reported the score of 3,200 lines. The exact same empirical study was revisited recently by Scherrer [16], who corrected an implementation bug in [1], and reported more stable learning curves and the score of 4,000 lines. At least three other ADP and RL papers have used the same set of features, which we refer to as the "Bertsekas features", in the game of Tetris. Kakade [11] applied a natural policy gradient method to Tetris and reported a score of about 6,800 lines. Farias and Van Roy [6] applied a linear programming algorithm to the game and achieved the score of 4,700 lines. Furmston and Barber [8] proposed an approximate Newton method to search in a policy space and were able to obtain a score of about 14,000.
Despite all the above applications of ADP in Tetris (and possibly more), for a long time, the best Tetris controller was the one designed by Dellacherie [5]. He used a heuristic evaluation function to give a score to each possible strategy (in a way similar to the value function in ADP), and eventually returned the one with the highest score. Dellacherie's evaluation function is made of 6 high-quality features with weights chosen by hand, and achieved a score of about 5,000,000 lines [19]. Szita and Lőrincz [18] used the "Bertsekas features" and optimized the weights by running a black box optimizer based on the cross entropy (CE) method [15]. They reported the score of 350,000 lines averaged over 30 games, outperforming the ADP and RL approaches that used the same features. More recently, Thiery and Scherrer [20] selected a set of 9 features (including those of Dellacherie's) and optimized the weights with the CE method. This led to the best publicly known controller (to the best of our knowledge) with the score of around 35,000,000 lines.
Due to the high variance of the score and its sensitivity to some implementation details [19], it is difficult to have a precise evaluation of Tetris controllers. However, our brief tour d'horizon of the literature, and in particular the work by Szita and Lőrincz [18] (optimizing the "Bertsekas features" by CE), indicates that ADP algorithms, even with relatively good features, have performed far worse than the methods that directly search in the space of policies (such as CE and genetic algorithms). It is important to note that almost all these ADP methods are value function based algorithms that first define a value function representation (space) and then search in this space for a good function, which later gives us a policy.
The main motivation of our work comes from the above observation. This observation makes us conjecture that Tetris is a game whose policy space is easier to represent, and as a result to search in, than its value function space. Therefore, in order to obtain a good performance with ADP algorithms in this game, we should use those ADP methods that search in a policy space, instead of the more traditional ones that search in a value function space. Fortunately, a class of such ADP algorithms,
called classification-based policy iteration (CbPI), has been recently developed and analyzed [12, 7, 13, 9, 17]. These algorithms differ from the standard value function based ADP methods in how the greedy policy is computed. Specifically, at each iteration CbPI algorithms approximate the entire greedy policy as the output of a classifier, while in the standard methods, at every given state, the required action from the greedy policy is individually calculated based on the approximation of the value function of the current policy. Since CbPI methods search in a policy space (defined by a classifier) instead of a value function space, we believe that they should perform better than their value function based counterparts in problems in which good policies are easier to represent than their corresponding value functions. In this paper, we put our conjecture to test by applying an algorithm in this class, called classification-based modified policy iteration (CBMPI) [17], to the game of Tetris, and compare its performance with the CE method and the λ-PI algorithm. The choice of CE and λ-PI is because the former has achieved the best known results in Tetris and the latter's performance is among the best reported for value function based ADP algorithms. Our extensive experimental results show that for the first time an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris in both small 10 × 10 and large 10 × 20 boards. Although
Footnote 2: The total number of actions at a state depends on the falling piece, with a maximum of 32, i.e. |A| ≤ 32.
Input: parameter space Θ, number of parameter vectors n, proportion ρ ≤ 1, noise η
Initialize: Set the parameter μ = 0 and σ² = 100I (I is the identity matrix)
for k = 1, 2, . . . do
    Generate a random sample of n parameter vectors {θ_i}_{i=1}^n ∼ N(μ, σ²I)
    For each θ_i, play L games and calculate the average number of rows removed (score) by the controller
    Select the ⌊ρn⌋ parameters with the highest score, θ'_1, . . . , θ'_⌊ρn⌋
    Update μ and σ:  μ(j) = (1/⌊ρn⌋) Σ_{i=1}^{⌊ρn⌋} θ'_i(j)   and   σ²(j) = (1/⌊ρn⌋) Σ_{i=1}^{⌊ρn⌋} [θ'_i(j) − μ(j)]² + η
Figure 2: The pseudo-code of the cross-entropy (CE) method used in our experiments.
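For intuition, here is a minimal runnable sketch of this loop. The score function is a hypothetical stand-in for playing L games of Tetris with controller θ_i and averaging the rows removed (here a toy quadratic so the example runs on its own); everything else follows Figure 2.

import numpy as np

def score(theta):                      # stand-in for L games of Tetris
    return -np.sum((theta - 3.0) ** 2)

def cross_entropy(dim, n=100, rho=0.1, eta=4.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma2 = np.zeros(dim), 100.0 * np.ones(dim)
    for _ in range(iters):
        # sample n parameter vectors from the diagonal Gaussian N(mu, sigma2)
        thetas = rng.normal(mu, np.sqrt(sigma2), size=(n, dim))
        scores = np.array([score(t) for t in thetas])
        elite = thetas[np.argsort(scores)[-int(rho * n):]]   # top rho*n
        mu = elite.mean(axis=0)
        sigma2 = elite.var(axis=0) + eta                     # noise term eta
    return mu

print(np.round(cross_entropy(dim=5), 2))   # should approach [3. 3. 3. 3. 3.]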
the CBMPI's results are similar to those achieved by the CE method in the large board, CBMPI uses considerably fewer (almost 1/6) samples (calls to the generative model of the game) than CE. In Section 2, we briefly describe the algorithms used in our experiments. In Section 3, we outline the setting of each algorithm in our experiments and report our results followed by discussion.
2
Algorithms
In this section, we briefly describe the algorithms used in our experiments: the cross entropy (CE) method, classification-based modified policy iteration (CBMPI) [17] and its slight variation direct policy iteration (DPI) [13], and λ-policy iteration (see [16] for a description of λ-PI). We begin by defining some terms and notations. A state s in Tetris consists of two components: the description of the board b and the type of the falling piece p. All controllers rely on an evaluation function that gives a value to each possible action at a given state. Then, the controller chooses the action with the highest value. In ADP, algorithms aim at tuning the weights such that the evaluation function approximates well the optimal expected future score from each state. Since the total number of states is large in Tetris, the evaluation function f is usually defined as a linear combination of a set of features φ, i.e., f(·) = φ(·)θ. We can think of the parameter vector θ as a policy (controller) whose performance is specified by the corresponding evaluation function f(·) = φ(·)θ. The features used in Tetris for a state-action pair (s, a) may depend on the description of the board b' resulting from taking action a in state s, e.g., the maximum height of b'. Computing such features requires the knowledge of the game's dynamics, which is known in Tetris.
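The action-selection rule this describes is simply an argmax over a linear scoring of afterstates. A small sketch follows, where phi is a hypothetical stub standing in for the real Tetris features computed on the resulting board.

import numpy as np

def phi(state, action):
    # Hypothetical 3-feature stub; in Tetris this would be computed on the
    # board b' obtained by placing the falling piece according to `action`.
    return np.array([1.0, float(action), float(action) ** 2])

def act(state, actions, theta):
    # Choose the action maximizing f(s, a) = phi(s, a) . theta
    values = [phi(state, a) @ theta for a in actions]
    return actions[int(np.argmax(values))]

theta = np.array([0.0, 1.0, -0.1])    # example weights
print(act(state=None, actions=range(10), theta=theta))  # -> 5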
2.1
Cross Entropy Method
Cross-entropy (CE) [15] is an iterative method whose goal is to optimize a function f parameterized by a vector θ ∈ Θ by direct search in the parameter space Θ. Figure 2 contains the pseudo-code of the CE algorithm used in our experiments [18, 20]. At each iteration k, we sample n parameter vectors {θ_i}_{i=1}^n from a multivariate Gaussian distribution N(μ, σ²I). At the beginning, the parameters of this Gaussian have been set to cover a wide region of Θ. For each parameter θ_i, we play L games and calculate the average number of rows removed by this controller (an estimate of the evaluation function). We then select the ⌊ρn⌋ of these parameters with the highest score, θ'_1, . . . , θ'_⌊ρn⌋, and use them to update the mean μ and variance σ² of the Gaussian distribution, as shown in Figure 2. This updated Gaussian is used to sample the n parameters at the next iteration. The goal of this update is to sample more parameters from the promising part of Θ at the next iteration, and eventually converge to a global maximum of f.
2.2
Classification-based Modified Policy Iteration (CBMPI)
Modified policy iteration (MPI) [14] is an iterative algorithm to compute the optimal policy of an MDP that starts with initial policy π_1 and value v_0, and generates a sequence of value-policy pairs
v_k = (T_{π_k})^m v_{k−1}   (evaluation step),      π_{k+1} = G[(T_{π_k})^m v_{k−1}]   (greedy step),
where G v_k is a greedy policy w.r.t. v_k, T_{π_k} is the Bellman operator associated with the policy π_k, and m ≥ 1 is a parameter. MPI generalizes the well-known value and policy iteration algorithms for the values m = 1 and m = ∞, respectively. CBMPI [17] is an approximation of MPI that uses an explicit representation for the policies π_k, in addition to the one used for the value functions v_k. The idea is similar to the classification-based PI algorithms [12, 7, 13] in which we search for the greedy policy in a policy space Π (defined by a classifier) instead of computing it from the estimated value function (as in the standard implementation of MPI). As described in Figure 3, CBMPI begins with an arbitrary initial policy π_1 ∈ Π and value function v_0 ∈ F.[3] At each iteration k, a new value func-
Footnote 3: Note that the function space F and policy space Π are defined by the choice of the regressor and classifier.
Input: value function space F, policy space Π, state distribution μ
Initialize: Set π_1 ∈ Π and v_0 ∈ F to an arbitrary policy and value function
for k = 1, 2, . . . do
    • Perform rollouts:
        Construct the rollout set D_k = {s^(i)}_{i=1}^N, with s^(i) drawn i.i.d. from μ
        for all states s^(i) ∈ D_k do
            Perform a rollout and return v̂_k(s^(i)) (using Equation 1)
        Construct the rollout set D'_k = {s^(i)}_{i=1}^{N'}, with s^(i) drawn i.i.d. from μ
        for all states s^(i) ∈ D'_k and actions a ∈ A do
            for j = 1 to M do
                Perform a rollout and return R_k^j(s^(i), a) (using Equation 4)
            Q̂_k(s^(i), a) = (1/M) Σ_{j=1}^M R_k^j(s^(i), a)
    • Approximate value function:
        v_k ∈ argmin_{v ∈ F} L̂_k^F(μ̂; v)   (regression; see Equation 2)
    • Approximate greedy policy:
        π_{k+1} ∈ argmin_{π ∈ Π} L̂_k^Π(μ̂; π)   (classification; see Equation 3)
Figure 3: The pseudo-code of the CBMPI algorithm.
tion v_k is built as the best approximation of the m-step Bellman operator (T_{π_k})^m v_{k−1} in F (evaluation step). This is done by solving a regression problem whose target function is (T_{π_k})^m v_{k−1}. To set up the regression problem, we build a rollout set D_k by sampling N states i.i.d. from a distribution μ. For each state s^(i) ∈ D_k, we generate a rollout (s^(i), a_0^(i), r_0^(i), s_1^(i), . . . , a_{m−1}^(i), r_{m−1}^(i), s_m^(i)) of size m, where a_t^(i) = π_k(s_t^(i)), and r_t^(i) and s_{t+1}^(i) are the reward and next state induced by this choice of action. From this rollout, we compute an unbiased estimate v̂_k(s^(i)) of (T_{π_k})^m v_{k−1}(s^(i)) as
v̂_k(s^(i)) = Σ_{t=0}^{m−1} γ^t r_t^(i) + γ^m v_{k−1}(s_m^(i)),   (γ is the discount factor),   (1)
and use it to build a training set {(s^(i), v̂_k(s^(i)))}_{i=1}^N. This training set is then used by the regressor to compute v_k as an estimate of (T_{π_k})^m v_{k−1}. The regressor finds a function v ∈ F that minimizes the empirical error
L̂_k^F(μ̂; v) = (1/N) Σ_{i=1}^N [ v̂_k(s^(i)) − v(s^(i)) ]².   (2)
The greedy step at iteration k computes the policy π_{k+1} as the best approximation of G[(T_{π_k})^m v_{k−1}] by minimizing the cost-sensitive empirical error (cost-sensitive classification)
L̂_k^Π(μ̂; π) = (1/N') Σ_{i=1}^{N'} [ max_{a ∈ A} Q̂_k(s^(i), a) − Q̂_k(s^(i), π(s^(i))) ].   (3)
To set up this cost-sensitive classification problem, we build a rollout set D'_k by sampling N' states i.i.d. from a distribution μ. For each state s^(i) ∈ D'_k and each action a ∈ A, we build M independent rollouts of size m + 1, i.e., (s^(i), a, r_0^(i,j), s_1^(i,j), a_1^(i,j), . . . , a_m^(i,j), r_m^(i,j), s_{m+1}^(i,j)) for j = 1, . . . , M, where for t ≥ 1, a_t^(i,j) = π_k(s_t^(i,j)), and r_t^(i,j) and s_{t+1}^(i,j) are the reward and next state induced by this choice of action. From these rollouts, we compute an unbiased estimate of Q_k(s^(i), a) as Q̂_k(s^(i), a) = (1/M) Σ_{j=1}^M R_k^j(s^(i), a), where each rollout estimate is defined as
R_k^j(s^(i), a) = Σ_{t=0}^m γ^t r_t^(i,j) + γ^{m+1} v_{k−1}(s_{m+1}^(i,j)).   (4)
If we remove the regressor from CBMPI and only use the m-truncated rollouts R_k^j(s^(i), a) = Σ_{t=0}^m γ^t r_t^(i,j) to compute Q̂_k(s^(i), a), then CBMPI becomes the direct policy iteration (DPI) algorithm [13] that we also use in our experiments (see [17] for more details on the CBMPI algorithm).
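A compact sketch of the two rollout estimators (Equations 1 and 4) is given below. The functions step, pi, and v_prev are hypothetical stubs standing in for the Tetris generative model, the current policy, and the previous value function.

def v_hat(s, pi, step, v_prev, m, gamma=1.0):
    # Equation 1: m-step return from s following pi, plus tail value.
    total = 0.0
    for t in range(m):
        r, s = step(s, pi(s))
        total += (gamma ** t) * r
    return total + (gamma ** m) * v_prev(s)

def rollout_Q(s, a, pi, step, v_prev, m, gamma=1.0):
    # Equation 4: take action a first, then follow pi for m more steps.
    r, s = step(s, a)
    total = r
    for t in range(1, m + 1):
        r, s = step(s, pi(s))
        total += (gamma ** t) * r
    return total + (gamma ** (m + 1)) * v_prev(s)

# Tiny demo with a deterministic stub model that always pays reward 1:
step = lambda s, a: (1.0, s + 1)
print(v_hat(0, pi=lambda s: 0, step=step, v_prev=lambda s: 0.0, m=3))  # 3.0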
In our implementation of CBMPI (DPI) in Tetris (Section 3), we use the same rollout set (D_k = D'_k) and rollouts for the classifier and regressor. This is mainly to be more sample efficient. Fortunately, we observed that this does not affect the overall performance of the algorithm. We set the discount factor γ = 1. Regressor: We use linear function approximation for the value function, i.e., v̂_k(s^(i)) = φ(s^(i))w, where φ(·) and w are the feature and weight vectors, and minimize the empirical error L̂_k^F(μ̂; v) using the standard least-squares method. Classifier: The training set of the classifier is of size N' with s^(i) ∈ D'_k as input and (max_a Q̂_k(s^(i), a) − Q̂_k(s^(i), a_1), . . . , max_a Q̂_k(s^(i), a) − Q̂_k(s^(i), a_|A|)) as output. We use policies of the form π_u(s) = argmax_a ψ(s, a)u, where ψ is the policy feature vector (possibly different from the value function feature vector φ) and u is the policy parameter vector. We compute the next policy π_{k+1} by minimizing the empirical error L̂_k^Π(μ̂; π_u), defined by (3), using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm [10]. In order to evaluate a policy u in CMA-ES, we only need to compute L̂_k^Π(μ̂; π_u), and given the training set, this procedure does not require any simulation of the game. This is in contrast with policy evaluation in CE, which requires playing several games, and it is the main reason that we obtain the same performance as CE with CBMPI using almost 1/6 the number of samples (see Section 3.2).
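To make the last point concrete, here is a sketch of how a candidate policy parameter u can be scored inside CMA-ES purely from the precomputed training set, with no game simulation; the array shapes are illustrative assumptions.

import numpy as np

def policy_loss(u, Q_hat, psi_all):
    # Q_hat: (N', |A|) array of precomputed action-value estimates.
    # psi_all: (N', |A|, d) array of policy features psi(s, a).
    chosen = np.argmax(psi_all @ u, axis=1)            # pi_u(s) per state
    best = Q_hat.max(axis=1)
    picked = Q_hat[np.arange(Q_hat.shape[0]), chosen]
    return np.mean(best - picked)                      # Equation 3

rng = np.random.default_rng(0)
Q_hat = rng.normal(size=(5, 3))
psi_all = rng.normal(size=(5, 3, 4))
print(policy_loss(rng.normal(size=4), Q_hat, psi_all))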
3
Experimental Results
In this section, we evaluate the performance of CBMPI (DPI) and compare it with CE and λ-PI. CE is the state-of-the-art method in Tetris with a huge performance advantage over ADP/RL methods [18, 19, 20]. In our experiments, we show that for a well-selected set of features, CBMPI improves over all the previously reported ADP results. Moreover, its performance is comparable to that of the CE method, while using considerably fewer samples (calls to the generative model of the game).
3.1
Experimental Setup
In our experiments, the policies learned by the algorithms are evaluated by their score (average number of rows removed in a game) averaged over 200 games in the small 10 × 10 board and over 20 games in the large 10 × 20 board. The performance of each algorithm is represented by a learning curve whose value at each iteration is the average score of the policies learned by the algorithm at that iteration in 100 separate runs of the algorithm. In addition to their score, we also evaluate the algorithms by the number of samples they use. In particular, we show that CBMPI/DPI use 6 times fewer samples than CE. As discussed in Section 2.2, this is due to the fact that although the classifier in CBMPI/DPI uses a direct search in the space of policies (for the greedy policy), it evaluates each candidate policy using the empirical error of Eq. 3, and thus, does not require any simulation of the game (other than those used to estimate the Q̂_k's in its training set). In fact, the budget B of CBMPI/DPI is fixed in advance by the number of rollouts N M and the rollout's length m as B = (m + 1)N M |A|. In contrast, CE evaluates a candidate policy by playing several games, a process that can be extremely costly (sample-wise), especially for good policies in the large board.
In our CBMPI/DPI experiments, we set the number of rollouts per state-action pair M = 1, as this value has shown the best performance. Thus, we only study the behavior of CBMPI/DPI as a function of m and N. In CBMPI, the parameter m balances between the errors in evaluating the value function and the policy. For large values of m, the size of the rollout set decreases as N = O(B/m), which in turn decreases the accuracy of both the regressor and classifier. This leads to a trade-off between long rollouts and the number of states in the rollout set. The solution to this trade-off (a bias/variance tradeoff in the estimation of the Q̂_k's) strictly depends on the capacity of the value function space F. A rich value function space leads to solving the trade-off for small values of m, while a poor space, or no space in the case of DPI, suggests large values of m, but not too large to still guarantee a large enough N. We sample the rollout states in CBMPI/DPI from the trajectories generated by a very good policy for Tetris, namely the DU controller [20]. Since the DU policy is good, this rollout set is biased towards boards with small height. We noticed from our experiments that the performance can be significantly improved if we use boards with different heights in the rollout sets. This means that better performance can be achieved with a more uniform sampling distribution, which is consistent with what we can learn from the CBMPI and DPI performance bounds. We set the initial value function parameter to w = 0 and select the initial policy π_1 (policy parameter u) randomly. We also set the CMA-ES parameters (classifier parameters) to ρ = 0.5, η = 0, and n equal to 15 times the number of features.
5
In the CE experiments, we set ρ = 0.1 and η = 4, the best parameters reported in [20]. We also set n = 1000 and L = 10 in the small board and n = 100 and L = 1 in the large board.
Set of Features: We use the following features, plus a constant offset feature, in our experiments:[4]
(i) Bertsekas features: First introduced by [2], this set of 22 features has been mainly used in the ADP/RL community and consists of: the number of holes in the board, the height of each column, the difference in height between two consecutive columns, and the maximum height of the board.
(ii) Dellacherie-Thiery (D-T) features: This set consists of the six features of Dellacherie [5], i.e., the landing height of the falling piece, the number of eroded piece cells, the row transitions, the column transitions, the number of holes, and the number of board wells; plus 3 additional features proposed in [20], i.e., the hole depth, the number of rows with holes, and the pattern diversity feature. Note that the best policies reported in the literature have been learned using this set of features.
(iii) RBF height features: These new 5 features are defined as exp(−|c − ih/4|² / (2(h/5)²)), i = 0, . . . , 4, where c is the average height of the columns and h = 10 or 20 is the total number of rows in the board.
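These five features are straightforward to compute; a short sketch, assuming only the average column height c and the board height h as inputs:

import math

def rbf_height_features(c, h):
    # Gaussian bumps centered at fractions 0, 1/4, 1/2, 3/4, 1 of the
    # board height h, evaluated at the average column height c.
    return [math.exp(-abs(c - i * h / 4.0) ** 2 / (2 * (h / 5.0) ** 2))
            for i in range(5)]

print([round(x, 3) for x in rbf_height_features(c=7.0, h=20.0)])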
3.2
Experiments
We first run the algorithms on the small board to study the role of their parameters and to select the best features and parameters (Section 3.2.1). We then use the selected features and parameters and apply the algorithms to the large board (Figure 5 (d)). Finally, we compare the best policies found in our experiments with the best controllers reported in the literature (Tables 1 and 2).
3.2.1
Small (10 × 10) Board
Here we run the algorithms with two different feature sets: Dellacherie-Thiery (D-T) and Bertsekas.
D-T features: Figure 4 shows the learning curves of the CE, λ-PI, DPI, and CBMPI algorithms. Here we use D-T features for the evaluation function in CE, the value function in λ-PI, and the policy in DPI and CBMPI. We ran CBMPI with different feature sets for the value function, and "D-T plus the 5 RBF features" achieved the best performance (Figure 4 (d)).[5] The budget of CBMPI and DPI is set to B = 8,000,000 per iteration. The CE method reaches the score 3000 after 10 iterations using an average budget B = 65,000,000. λ-PI with the best value of λ only manages to score 400. In Figure 4 (c), we report the performance of DPI for different values of m. DPI achieves its best performance for m = 5 and m = 10 by removing 3400 lines on average. As explained in Section 3.1, having short rollouts (m = 1) in DPI leads to poor action-value estimates Q̂, while having too long rollouts (m = 20) decreases the size of the training set of the classifier, N. CBMPI outperforms the other algorithms, including CE, by reaching the score of 4300 for m = 2. The value of m = 2 corresponds to N = 8,000,000 / ((2 + 1) × 32) ≈ 84,000. Note that unlike DPI, CBMPI achieves good performance with very short rollouts m = 1. This indicates that CBMPI is able to approximate the value function well, and as a result, to build a more accurate training set for its classifier than DPI.
The results of Figure 4 show that an ADP algorithm, namely CBMPI, outperforms the CE method using a similar budget (80 vs. 65 million after 10 iterations). Note that CBMPI takes fewer iterations to converge than CE. More generally, Figure 4 confirms the superiority of the policy search and classification-based PI methods over value function based ADP algorithms (λ-PI). This suggests that the D-T features are more suitable to represent the policies than the value functions in Tetris.
Bertsekas features: Figures 5 (a)-(c) show the performance of the CE, λ-PI, DPI, and CBMPI algorithms. Here all the approximations in the algorithms are with the Bertsekas features. CE achieves the score of 500 after about 60 iterations and outperforms λ-PI with a score of 350. It is clear that the Bertsekas features lead to much weaker results than those obtained by the D-T features in Figure 4 for all the algorithms. We may conclude then that the D-T features are more suitable than the Bertsekas features to represent both value functions and policies in Tetris. In DPI and CBMPI, we managed to obtain results similar to CE, only after multiplying the per iteration budget B used in the D-T experiments by 10. However, CBMPI and CE use the same number of samples, 150,000,000, when they converge after 2 and 60 iterations, respectively (see Figure 5). Note that DPI and CBMPI obtain the same performance, which means that the use of a value function approximation by CBMPI
does not lead to a significant performance improvement over DPI. At the end, we tried several values of m in this setting, among which m = 10 achieved the best performance for both DPI and CBMPI.
Footnote 4: For a precise definition of the features, see [19] or the documentation of their code [21].
Footnote 5: Note that we use D-T+5 features only for the value function of CBMPI, and thus, we have a fair comparison between CBMPI and DPI. To have a fair comparison with λ-PI, we ran this algorithm with D-T+5 features, and it only raised its performance to 800, still far from CBMPI's performance.
[Figure 4: Learning curves (average lines removed vs. iterations) of (a) the cross-entropy (CE) method; (b) λ-PI with λ ∈ {0, 0.4, 0.7, 0.9}; (c) DPI with budget B = 8,000,000 per iteration and m ∈ {1, 2, 5, 10, 20}; and (d) CBMPI with budget B = 8,000,000 per iteration and m ∈ {1, 2, 5, 10, 20}, using the 9 Dellacherie-Thiery (D-T) features on the small 10 × 10 board. The results are averaged over 100 runs of the algorithms.]
3.2.2
Large (10 × 20) Board
We now use the best parameters and features from the small board experiments, run the CE, DPI, and CBMPI algorithms in the large board, and report their results in Figure 5 (d). The per iteration budget of DPI and CBMPI is set to B = 16,000,000. While λ-PI with per iteration budget 620,000, at its best, achieves the score of 2500 (due to space limitation, we do not report these results here), DPI and CBMPI, with m = 10, reach the scores of 12,000,000 and 21,000,000 after 3 and 6 iterations, respectively. CE matches the performance of CBMPI with the score of 20,000,000 after 8 iterations. However, this is achieved with almost 6 times more samples, i.e., after 8 iterations, CBMPI and CE use 256,000,000 and 1,700,000,000 samples, respectively.
Comparison of the best policies: So far the reported scores for each algorithm were averaged over the policies learned in 100 separate runs. Here we select the best policies observed in all our experiments and compute their scores more accurately by averaging over 10,000 games. We then compare these results with the best policies reported in the literature, i.e., DU and BDU [20], in both small and large boards in Table 1. The DT-10 and DT-20 policies, whose weights and features are given in Table 2, are policies learned by CBMPI with D-T features in the small and large boards, respectively. As shown in Table 1, DT-10 removes 5000 lines and outperforms DU, BDU, and DT-20 in the small board. Note that DT-10 is the only policy among these four that has been learned in the small board. In the large board, DT-20 obtains the score of 51,000,000 and not only outperforms the other three policies, but also achieves the best reported result in the literature (to the best of our knowledge).
[Figure 5: Learning curves (average lines removed vs. iterations). (a) The cross-entropy (CE) method and (b) λ-PI with λ ∈ {0, 0.4, 0.7, 0.9}, using the 22 Bertsekas features on the small 10 × 10 board. (c) DPI (dash-dotted line) and CBMPI (dashed line) with budget B = 80,000,000 per iteration and m = 10, also with the Bertsekas features on the small board. (d) DPI (dash-dotted line) and CBMPI (dashed line) with m ∈ {5, 10} and CE (solid line), using the 9 Dellacherie-Thiery (D-T) features on the large 10 × 20 board.]
Boards \ Policies      | DU         | BDU        | DT-10      | DT-20
Small (10 × 10) board  | 3800       | 4200       | 5000       | 4300
Large (10 × 20) board  | 31,000,000 | 36,000,000 | 29,000,000 | 51,000,000
Table 1: Average (over 10,000 games) score of the DU, BDU, DT-10, and DT-20 policies.
feature            | weight (DT-10) | weight (DT-20)
landing height     | -2.18          | -2.68
eroded piece cells |  2.42          |  1.38
row transitions    | -2.17          | -2.41
column transitions | -3.31          | -6.32
holes              |  0.95          |  2.03
board wells        | -2.22          | -2.71
hole depth         | -0.81          | -0.43
rows with holes    | -9.65          | -9.48
diversity          |  1.27          |  0.89
Table 2: The weights of the 9 Dellacherie-Thiery features in the DT-10 and DT-20 policies.
4
Conclusions
The game of Tetris has always been challenging for approximate dynamic programming (ADP) algorithms. Surprisingly, much simpler black box optimization methods, such as cross entropy (CE), have produced controllers far superior to those learned by the ADP algorithms. In this paper, we applied a relatively novel ADP algorithm, called classification-based modified policy iteration (CBMPI), to Tetris. Our results showed that for the first time an ADP algorithm (CBMPI) performed extremely well in both small 10 × 10 and large 10 × 20 boards and achieved performance either better (in the small board) or equal with considerably fewer samples (in the large board) than the state-of-the-art CE methods. In particular, the best policy learned by CBMPI obtained the performance of 51,000,000 lines on average, a new record in the large board of Tetris.
References
[1] D. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Technical report, MIT, 1996.
[2] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[3] H. Burgiel. How to Lose at Tetris. Mathematical Gazette, 81:194-200, 1997.
[4] E. Demaine, S. Hohenberger, and D. Liben-Nowell. Tetris is hard, even to approximate. In Proceedings of the Ninth International Computing and Combinatorics Conference, pages 351-363, 2003.
[5] C. Fahey. Tetris AI, Computer plays Tetris, 2003. http://colinfahey.com/tetris/tetris.html.
[6] V. Farias and B. Van Roy. Tetris: A study of randomized constraint sampling. Springer-Verlag, 2006.
[7] A. Fern, S. Yoon, and R. Givan. Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes. Journal of Artificial Intelligence Research, 25:75-118, 2006.
[8] T. Furmston and D. Barber. A unifying perspective of parametric policy search methods for Markov decision processes. In Proceedings of the Advances in Neural Information Processing Systems, pages 2726-2734, 2012.
[9] V. Gabillon, A. Lazaric, M. Ghavamzadeh, and B. Scherrer. Classification-based policy iteration with a critic. In Proceedings of ICML, pages 1049-1056, 2011.
[10] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9:159-195, 2001.
[11] S. Kakade. A natural policy gradient. In Proceedings of the Advances in Neural Information Processing Systems, pages 1531-1538, 2001.
[12] M. Lagoudakis and R. Parr. Reinforcement Learning as Classification: Leveraging Modern Classifiers. In Proceedings of ICML, pages 424-431, 2003.
[13] A. Lazaric, M. Ghavamzadeh, and R. Munos. Analysis of a Classification-based Policy Iteration Algorithm. In Proceedings of ICML, pages 607-614, 2010.
[14] M. Puterman and M. Shin. Modified policy iteration algorithms for discounted Markov decision problems. Management Science, 24(11), 1978.
[15] R. Rubinstein and D. Kroese. The cross-entropy method: A unified approach to combinatorial optimization, Monte-Carlo simulation, and machine learning. Springer-Verlag, 2004.
[16] B. Scherrer. Performance Bounds for λ-Policy Iteration and Application to the Game of Tetris. Journal of Machine Learning Research, 14:1175-1221, 2013.
[17] B. Scherrer, M. Ghavamzadeh, V. Gabillon, and M. Geist. Approximate modified policy iteration. In Proceedings of ICML, pages 1207-1214, 2012.
[18] I. Szita and A. Lőrincz. Learning Tetris Using the Noisy Cross-Entropy Method. Neural Computation, 18(12):2936-2941, 2006.
[19] C. Thiery and B. Scherrer. Building Controllers for Tetris. International Computer Games Association Journal, 32:3-11, 2009.
[20] C. Thiery and B. Scherrer. Improvements on Learning Tetris with Cross Entropy. International Computer Games Association Journal, 32, 2009.
[21] C. Thiery and B. Scherrer. MDPTetris features documentation, 2010. http://mdptetris.gforge.inria.fr/doc/feature_functions_8h.html.
[22] J. Tsitsiklis and B. Van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 22:59-94, 1996.
4,631 | 5,191 | Reward Mapping for Transfer in Long-Lived Agents
Xiaoxiao Guo
Computer Science and Eng.
University of Michigan
guoxiao@umich.edu
Satinder Singh
Computer Science and Eng.
University of Michigan
baveja@umich.edu
Richard Lewis
Department of Psychology
University of Michigan
rickl@umich.edu
Abstract
We consider how to transfer knowledge from previous tasks (MDPs) to a current task in long-lived and bounded agents that must solve a sequence of tasks over a finite lifetime. A novel aspect of our transfer approach is that we reuse reward functions. While this may seem counterintuitive, we build on the insight of recent work on the optimal rewards problem that guiding an agent's behavior with reward functions other than the task-specifying reward function can help overcome computational bounds of the agent. Specifically, we use good guidance reward functions learned on previous tasks in the sequence to incrementally train a reward mapping function that maps task-specifying reward functions into good initial guidance reward functions for subsequent tasks. We demonstrate that our approach can substantially improve the agent's performance relative to other approaches, including an approach that transfers policies.
1
Introduction
We consider agents that live for a long time in a sequential decision-making environment. While
many different interpretations are possible for the notion of long-lived, here we consider agents
that have to solve a sequence of tasks over a continuous lifetime. Thus, our problem is closely
related to that of transfer learning in sequential decision-making, which can be thought of as a
problem faced by agents that have to solve a set of tasks. Transfer learning [18] has explored the
reuse across tasks of many different components of a reinforcement learning (RL) architecture,
including value functions [16, 5, 8], policies [9, 20], and models of the environment [1, 17]. Other
transfer approaches have considered parameter transfer [19], selective reuse of sample trajectories
from previous tasks [7], as well as reuse of learned abstract representations such as options [12, 6].
A novel aspect of our transfer approach in long-lived agents is that we will reuse reward functions.
At first blush, it may seem odd to consider using a reward function different from the one specifying
the current task in the sequence (indeed, in most RL research rewards are considered an immutable
part of the task description). But there is now considerable work on designing good reward functions,
including reward-shaping [10], inverse RL [11], optimal rewards [13] and preference-elicitation [3].
In this work, we specifically build on the insight of the optimal rewards problem (ORP; described in more detail in the next section) that guiding an agent's behavior with reward functions other than the task-specifying reward function can help overcome computational bounds in the agent architecture. We base our work on an algorithm from Sorg et al. [14] that learns good guidance reward functions incrementally in a single-task setting.
Our main contribution in this paper is a new approach to transfer in long-lived agents in which we
use good guidance reward functions learned on previous tasks in the sequence to incrementally train
a reward mapping function that maps task-specifying reward functions into good initial guidance
reward functions for subsequent tasks. We demonstrate that our approach can substantially improve
a long-lived agent's performance relative to other approaches, first on an illustrative grid world
domain, and second on a networking domain from prior work [9] on the reuse of policies for transfer.
In the grid world domain only the task-specifying reward function changes with tasks, while in the
networking domain both the reward function and the state transition function change with tasks.
2
Background: Optimal Rewards for Bounded Agents in Single Tasks
We consider sequential decision-making environments formulated as controlled Markov processes
(CMPs); these are defined via a state space S, an action space A, and a transition function T that
determines a distribution over next states given a current state and action. A task in such a CMP is
defined via a reward function R that maps state-action pairs to scalar values. The objective of the
agent in a task is to execute the optimal policy, i.e., to choose actions in such a way as to optimize
utility defined as the expected value of cumulative reward over some lifetime. A CMP and reward
function together define a Markov decision process or MDP; hence tasks in this paper are MDPs.
There are many approaches to planning an optimal policy in MDPs. Here we will use UCT [4] which
incrementally plans the action to take in the current state. It simulates a number of trajectories from
the current state up to some maximum depth, choosing actions at each point based on the sum of an
estimated action-value that encourages exploitation and a reward bonus that encourages exploration.
It has theoretical guarantees of convergence and works well in practice on a variety of large-scale
planning problems. We use UCT in this paper because it is one of the state of the art algorithms in
RL planning and because there exists a good optimal reward finding algorithm for it [14].
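The selection rule inside those simulated trajectories is the familiar UCB-style trade-off. The sketch below is a simplified rendering of that rule only; the per-node bookkeeping and tree construction are omitted, and the exploration constant c is a free choice, not a value taken from this paper.

import math

def uct_select(actions, value, count, node_visits, c=1.0):
    # Pick the action maximizing estimated value plus an exploration bonus
    # that shrinks as the action is tried more often at this tree node.
    def ucb(a):
        if count[a] == 0:
            return float("inf")        # try untried actions first
        return value[a] + c * math.sqrt(math.log(node_visits) / count[a])
    return max(actions, key=ucb)

value = {0: 0.5, 1: 0.7}; count = {0: 10, 1: 2}
print(uct_select([0, 1], value, count, node_visits=12))  # favors action 1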
Optimal Rewards Problem (ORP). In almost all of RL research, the reward function is considered part of the task specification and thus unchangeable. The optimal reward framework of Singh
et al. [13] stems from the observation that a reward function plays two roles simultaneously in RL
problems. The first role is that of evaluation in that the task-specifying reward function is used by
the agent designer to evaluate the actual behavior of the agent. The second is that of guidance in that
the reward function is also used by the RL algorithm implemented by the agent to determine its behavior (e.g., via Q-learning [21] or UCT planning [4]). The optimal rewards problem separates these
two roles into two separate reward functions, the task-specifying objective reward function used to
evaluate performance, and an internal reward function used to guide agent behavior. Given a CMP
M, an objective reward function R^o, an agent A parameterized by an internal reward function, and a space of possible internal reward functions R̄, an optimal internal reward function R^{i*} is defined as follows (throughout, superscript o will denote objective evaluation quantities and superscript i will denote internal quantities):
R^{i*} = arg max_{R^i ∈ R̄} E_{h ∼ ⟨A(R^i), M⟩} [ U^o(h) ],
where A(R^i) is the agent with internal reward function R^i, h ∼ ⟨A(R^i), M⟩ is a random history (trajectory of alternating states and actions) obtained by the interaction of agent A(R^i) with CMP M, and U^o(h) is the objective utility (as specified by R^o) to the agent designer of interaction history h. The optimal internal reward function will depend on the agent A's architecture and its limitations,
and this distinguishes ORP from other reward-design approaches such as inverse-RL. When would
the optimal internal reward function be different from the objective reward function? If an agent is
unbounded in its capabilities with respect to the CMP then the objective reward function is always an
optimal internal reward function. More crucially though, in the realistic setting of bounded agents,
optimal internal reward functions may be quite different from objective reward functions. Singh
et al. [13] and Sorg et al. [14] provide many examples and some theory of when a good choice of
internal reward can mitigate agent bounds, including bounds corresponding to limited lifetime to
learn [13], limited memory [14], and limited resources for planning (the specific bound of interest
in this paper).
PGRD: Solving the ORP on-line while planning. Computing R^{i*} can be computationally nontrivial. We will use Sorg et al.'s [14, 15] policy gradient reward design (PGRD) method, which is based on the insight that any planning algorithm can be viewed as procedurally translating the internal reward function R^i into behavior; that is, R^i are indirect parameters of the agent's policy. PGRD cheaply computes the gradient of the objective utility with respect to the R^i parameters through UCT planning. Specifically, it takes a simulation model of the CMP and an objective reward function, and uses UCT to simultaneously plan actions with respect to the current internal reward function as well as to update the internal reward function in the direction of the gradient of the objective utility for use in the next planning step.
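For intuition only, the following sketch shows what it means to score an internal reward parameter by the objective utility it induces. This naive evaluate-by-simulation loop is not PGRD itself (PGRD instead differentiates this objective through UCT rather than evaluating candidates one by one), and plan_action, env_step, and objective_reward are hypothetical stubs.

def objective_return(theta_i, start_state, plan_action, env_step,
                     objective_reward, horizon):
    # Run the bounded planner guided by the internal reward theta_i,
    # but accumulate the OBJECTIVE reward it earns along the way.
    s, total = start_state, 0.0
    for _ in range(horizon):
        a = plan_action(s, theta_i)       # planner uses internal reward
        total += objective_reward(s, a)   # designer evaluates objectively
        s = env_step(s, a)
    return total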
(a) Conventional Agent
(b) Non-transfer ORP Agent
Task Sequence
Task Sequence
time
t1
t2
t3
evaluation
reward
?o1
?o2
?o3
tn
time
t1
t2
evaluation
reward
? o1
? o2
? i1
?i 2
t3
tn
?o3
?on
Environment
? on
Agent
guidance
reward
Environment
Critic-Agent
Agent (Actor-Agent)
Actor-Agent
? i3
Agent
?i n
ActorAgent
(ActorAgent)
(c) Reward Mapping Transfer ORP Agent
(d) Sequential Transfer ORP Agent
Task Sequence
time
t1
t2
t3
tn
evaluation
reward
? o1
?o2
? o3
?on
Task Sequence
Environment
Agent
t1
evaluation
reward
?o1
t2
?o2
t3
tn
?o3
? on
Environment
reward
mapping
Critic-Agent
Actor-Agent
time
for all j,
?ij =f?(?oj)
initialize
initialize
? i1
?i 2
initialize
?i 3
Agent
guidance
reward
initialize
?i n
Critic-Agent
?i 1
?i 2
initialize
?i 3
initialize
?i n
initialize
Actor-Agent
ActorAgent
ActorAgent
Figure 1: The four agent types compared in this paper. In each figure, time flows from left to right. The sequence of objective reward parameters and task durations for n tasks are shown in the environment portion of each figure. In figures (b-d) the agent portion of the figure is further split into a critic-agent and an actor-agent; figure (a) does not have this split because it is the conventional agent. The critic-agent translates the objective reward parameters θ^o into the internal reward parameters θ^i. The actor-agent is a UCT agent in all our implementations. The critic-agent component varies across the figures and is crucial to understanding the differences among the agents (see text for detailed descriptions).
3 Four Agent Architectures for the Long-Lived Agent Problem
Long-Lived Agent's Objective Utility. We will consider the case where objective rewards are linear functions of objective reward features. Formally, the j-th task is defined by objective reward function $R^o_j(s, a) = \theta^o_j \cdot \phi^o(s, a)$, where $\theta^o_j$ is the parameter vector for the j-th task, $\phi^o$ are the task-independent objective reward features of state and action, and $\cdot$ denotes the inner product. Note that the features are constant across tasks while the parameters vary. The j-th task lasts for $t_j$ time steps. Given some agent A, the expected objective utility achieved for a particular task sequence $\{\theta^o_j, t_j\}_{j=1}^K$ is $\mathbb{E}_{h \sim \langle A, M \rangle}\big[\sum_{j=1}^K U^o(h_j)\big]$, where for ease of exposition we denote the history during task j simply as $h_j$. In general, there may be a distribution over task sequences, and the expected objective utility would then be a further expectation over such a distribution.
In some transfer or other long-lived agent research, the emphasis is on learning in that the agent is
assumed to lack complete knowledge of the CMP and the task specifications. Our emphasis here
is on planning in that the agent is assumed to know the CMP perfectly as well as the task specifications as they change. If the agent were unbounded in planning capacity, there would be nothing
interesting left to consider because the agent could simply find the optimal policy for each new task
and execute it. What makes our problem interesting therefore is that our UCT-based planning agent
is computationally limited: the depth and number of trajectories feasible are small enough (relative
3
to the size of the CMP) that it cannot find near-optimal actions. This sets up the potential for both
the use of the ORP and of transfer across tasks. Note that basic UCT does use a reward function but
does not use an initial value function or policy and hence changing a reward function is a natural
and consequential way to influence UCT. While non-trivial modifications of UCT could allow use of
value functions and/or policies, we do not consider them here. In addition, in our setting a model of
the CMP is available to the agent and so there is no scope for transfer by reuse of model knowledge.
Thus, our reuse of reward functions may well be the most consequential option available in UCT.
Next we discuss four different agent architectures represented graphically in Figure 1, starting with a conventional agent that ignores both the potential of transfer and that of ORP, followed by three different agents that exploit them to varying degrees.
Conventional Agent. Figure 1(a) shows the baseline conventional UCT-based agent that ignores the possibility of transfer and treats each task separately. It also ignores ORP and treats each task's objective reward as the internal reward for UCT planning during that task.
The remaining three agents will all consider the ORP, and share the following details: the space of internal reward functions $\mathcal{R}$ is the space of all linear functions of internal reward features $\phi^i(s, a)$, i.e., $\mathcal{R} = \{\theta \cdot \phi^i(s, a)\}_{\theta \in \Theta}$, where $\Theta$ is the space of possible parameters $\theta$ (in this paper all finite vectors). Note that the internal reward features $\phi^i$ and the objective reward features $\phi^o$ do not have to be identical (a minimal sketch of this parameterization follows).
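As a minimal sketch of the linear parameterization, where `features` is an assumed callable supplying $\phi^i(s, a)$:

```python
import numpy as np

def make_linear_reward(theta, features):
    """Internal reward R(s, a) = theta . phi^i(s, a). `features` maps a
    (state, action) pair to the internal feature vector (an assumption
    for illustration; the paper does not fix an interface)."""
    return lambda s, a: float(np.dot(theta, features(s, a)))
```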
Non-Transfer ORP Agent. Figure 1(b) shows the non-transfer agent that ignores the possibility of
transfer but exploits ORP. It initializes the internal reward function to the objective reward function
of each new task as it starts and then uses PGRD to adapt the internal reward function while acting
in that task. Nothing is transferred across task boundaries. This agent was designed to help separate
the contributions of ORP and transfer to performance gains.
Reward-Mapping-Transfer ORP Agent. Figure 1(c) shows the reward-mapping agent that incorporates our main new idea. It exploits both transfer and ORP via incrementally learning a reward mapping function. A reward mapping function f maps objective reward function parameters to internal reward function parameters: $\forall j, \theta^i_j = f(\theta^o_j)$. The reward mapping function is used to initialize the internal reward function at the beginning of each new task. PGRD is used to continually adapt the initialized internal reward function throughout each task.

The reward mapping function is incrementally trained as follows: when task j ends, the objective reward function parameters $\theta^o_j$ and the adapted internal reward function parameters $\hat{\theta}^i_j$ are used as an input-output pair to update the reward mapping function. In our work, we use nonparametric kernel regression to learn the reward mapping function. Pseudocode for a general reward mapping agent is presented in Algorithm 1; a sketch of the kernel-regression mapping follows Algorithm 1 below.
Sequential-Transfer ORP Agent. Figure 1(d) shows the sequential-transfer agent. It also exploits both transfer and ORP. However, it does not use a reward mapping function but instead continually updates the internal reward function across task boundaries using PGRD. The internal reward function at the end of a task becomes the initial internal reward function at the start of the next task, achieving a simple form of sequential transfer.
4 Empirical Evaluation
The four agent architectures are compared to demonstrate that the reward mapping approach can substantially improve the bounded agent's performance, first on an illustrative grid-world domain, and second on a network routing domain from prior work [9] on the transfer of policies.
4.1 Food-and-Shelter Domain
The purposes of the experiments in this domain are (1) to systematically explore the relative benefits of the use of ORP, and of transfer (with and without the use of the reward-mapping function), each in isolation and together, (2) to explore the sensitivity and dependence of these relative benefits on parameters of the long-lived setting such as mean duration of tasks, and (3) to visualize what is learned by the reward mapping function.
Algorithm 1 General pseudocode for Reward Mapping Agent (Figure 1(c))
 1: Input: $\{\theta^o_j, t_j\}_{j=1}^k$, where j is the task indicator, $t_j$ is the task duration, and $\theta^o_j$ are the objective reward function parameters specifying task j.
 2: for t = 1, 2, 3, ... do
 3:   if a new task j starts then
 4:     obtain current objective reward parameters $\theta^o_j$
 5:     compute: $\theta^i_j = f(\theta^o_j)$
 6:     initialize the internal reward function using $\theta^i_j$
 7:   end if
 8:   $a_t$ := planning($s_t$; $\theta^i_j$)  (select action using UCT guided by reward function $\theta^i_j$)
 9:   ($s_{t+1}$, $r_{t+1}$) := takeAction($s_t$, $a_t$)
10:   $\theta^i$ := updateInternalRewardFunction($\theta^i$, $s_t$, $a_t$, $s_{t+1}$, $r_{t+1}$)  (via PGRD)
11:   if current task ends then
12:     obtain current internal reward parameters as $\hat{\theta}^i_j$
13:     update reward mapping function f using training pair ($\theta^o_j$, $\hat{\theta}^i_j$)
14:   end if
15: end for
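The paper specifies nonparametric kernel regression for f but not the kernel or bandwidth; the following is a minimal Nadaraya-Watson sketch under those assumptions (Gaussian kernel, hand-picked bandwidth, zero initialization before any training pairs exist).

```python
import numpy as np

class RewardMapping:
    """Kernel-regression reward mapping f: theta_o -> theta_i, trained
    incrementally with the (theta_o, adapted theta_i) pairs of Algorithm 1."""
    def __init__(self, dim_i, bandwidth=0.3):
        self.dim_i = dim_i            # dimension of the internal parameters
        self.h = bandwidth            # Gaussian kernel bandwidth (assumed)
        self.X, self.Y = [], []       # stored training pairs

    def update(self, theta_o, theta_i):            # called when a task ends
        self.X.append(np.asarray(theta_o, dtype=float))
        self.Y.append(np.asarray(theta_i, dtype=float))

    def __call__(self, theta_o):                   # called when a task starts
        if not self.X:
            return np.zeros(self.dim_i)            # assumed default init
        d2 = np.array([np.sum((x - theta_o) ** 2) for x in self.X])
        w = np.exp(-d2 / (2.0 * self.h ** 2))
        w = w / w.sum()                            # Nadaraya-Watson weights
        return w @ np.vstack(self.Y)               # weighted average of theta_i
```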
[Figure 2 panels: (a) Food-and-Shelter Domain, a 3 by 3 maze showing the agent, the shelter, and the possible food locations; (b) Network Routing Domain, a graph of routers labeled A through R with link weights in {1, 2, 3}.]
Figure 2: Domains used in empirical evaluation; the network routing domain comes from [9].
The environment is a simple 3 by 3 maze with three left-to-right corridors. Thick black lines indicate
impassable walls. The position of the shelter and possible positions of food are shown in Figure 2.
Dynamics. The shelter breaks down with a probability of 0.1 at each time step. Once the shelter
is broken, it remains broken until repaired by the agent. Food appears at the rightmost column of
one of the three corridors and can be eaten by the agent when the agent is at the same location with
the food. When food is eaten, new food reappears in a different corridor. The agent can move in
four cardinal directions, and every movement action has a probability of 0.1 to result in movement
in a random direction; if the direction is blocked by a wall or the boundary, the action results in no
movement. The agent eats food and repairs the shelter automatically whenever collocated with food and shelter, respectively. The discount factor is γ = 0.95.
State. A state is a tuple (l, f, h), where l is the location of the agent, f is the location of the food,
and h indicates whether the shelter is broken.
Objective Reward Function. At each time step, the agent receives a positive reward of e (the eat-bonus) for eating food and a negative reward of b (the broken-cost) if the shelter is broken. Thus, the objective reward function's parameters are $\theta^o_j = (e_j, b_j)$, where $e_j \in [0, 1]$ and $b_j \in [-1, 0]$. Different tasks will require the agent to behave in different ways. For example, if $(e_j, b_j) = (1, 0)$, the agent should explore the maze to eat more food. If $(e_j, b_j) = (0, -1)$, the agent should remain at the shelter's location in order to repair the shelter as it breaks.
[Figure 3: bar chart and two scatter plots of average objective reward per time step; the left panel is grouped by mean task duration (50, 200, 500), and the middle and right panels are plotted against milliseconds per decision, for the Reward Mapping, Sequential Transfer, Non-Transfer, and Conventional agents.]
Figure 3: (Left) Performance of four agents in food-and-shelter domain at three different mean task durations. (Middle and Right) Comparing performance while accounting for computational overhead of learning and using the reward mapping function. See text for details.

Space of Internal Reward Functions. The internal reward function is $R^i_j(s) = R^o_j(s) + \theta^i_j \phi^i(s)$, where $R^o_j(s)$ is the objective reward function, $\phi^i(s) = 1 - \frac{1}{n_l(s)}$ is the inverse-recency feature, and $n_l(s)$ is the number of time steps since the agent's last visit to the location in state s. Since there is exactly one internal reward parameter, $\theta^i_j$ is a scalar. A positive $\theta^i_j$ encourages the agent to visit locations not visited recently, and a negative $\theta^i_j$ encourages the agent to visit locations visited recently.
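A minimal sketch of this feature, assuming a dictionary that records the last visit time of each location:

```python
def inverse_recency(last_visit, t, location):
    """phi^i(s) = 1 - 1/n_l(s), where n_l(s) counts the time steps since
    the agent last visited `location`. Never-visited locations receive the
    maximum value of 1 under this convention (our assumption)."""
    n = t - last_visit.get(location, float('-inf'))   # steps since last visit
    return 1.0 - 1.0 / max(n, 1.0)
```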
Results: Performance advantage of reward mapping. 100 sequences of 200 tasks were generated, with Poisson-distributed task durations and with objective reward function parameters sampled uniformly from their ranges. The agents used UCT with depth 2 and 500 trajectories; the conventional agent is thereby bounded, as evidenced by its poor performance (see Figure 3).
[Figure 4: two heat maps over eat bonus (rows) and broken cost (columns). Top: "Optimal Internal Reward for UCT"; Bottom: "Reward Mapping learned after 50 tasks".]
Figure 4: Reward mapping function visualization. Top: Optimal mapping; Bottom: Mapping found by the Reward Mapping agent after 50 tasks.

The left panel in Figure 3 shows average objective reward per time step (with standard error bars). There are three sets of four bars each, where each bar within a set is for a different architecture (see legend), and each set is for a different mean task duration (50, 200, and 500 from left to right). For each task duration the reward mapping agent does best and the conventional agent does the worst. These results demonstrate that transfer helps performance and that transfer via the new reward mapping approach can substantially improve a bounded long-lived agent's performance relative to transfer via the competing method of sequential transfer. As task durations get longer, the ratio of the reward-mapping agent's performance to the non-transfer agent's performance gets smaller, though it remains > 1 (by visually taking the ratio of the corresponding bars). This is expected because the longer the task duration, the more time PGRD has to adapt to the task, and thus the less the better initialization provided by the reward mapping function matters.

In addition, the sequential transfer agent does better than the non-transfer agent for the shortest task duration of 50, while the situation reverses for the longest task duration of 500. This is intuitive and significant as follows. Recall that the initialization of the internal reward function from the final internal reward function of the previous task can hurt performance in the sequential transfer setting if the current task requires quite different behavior from the previous, but it can help if two successive tasks are similar. Correcting the internal reward function could cost a large number of steps. These effects are exacerbated by longer task durations because the agent then has longer to adapt its internal reward function to each task. In general, as task duration increases, the non-transfer agent improves but the sequential transfer agent worsens.
Results: Performance comparison considering computational overhead. The above results ignore the computational overhead incurred by learning and using the reward mapping function. The two rightmost plots in Figure 3 show the average objective reward per time step as a function of milliseconds per decision for the four agent architectures, for a range of depth {1, ..., 6} and trajectory-count {200, 300, ..., 600} parameters for UCT. The plots show that for the entire range of time-per-decision, the best performing agents are reward-mapping agents; in other words, it is not better to spend the overhead time of the reward-mapping on additional UCT search. This can be seen by observing that the highest dot at any vertical column on the x-axis belongs to the reward mapping agent. Thus, the overhead of the reward mapping function in the reward mapping agent is insignificant relative to the computational cost of UCT (this last cost is all the conventional agent incurs).
Results: Reward mapping visualization. Using a fixed set of tasks (as described above) with mean duration of 500, we estimated the optimal internal reward parameter (the coefficient of the inverse-recency feature) for UCT by a brute-force grid search. The optimal internal reward parameter is visualized as a function of the two parameters of the objective reward function (broken cost and eat bonus) in Figure 4, top. Negative coefficients (light color squares) for the inverse-recency feature discourage exploration while positive coefficients (dark color squares) encourage exploration. As would be expected, the top right corner (high penalty for broken shelter and low reward for eating) discourages exploration while the bottom left corner (high reward for eating and low cost for broken shelter) encourages exploration. Figure 4, bottom, visualizes the learned reward mapping function after training on 50 tasks. There is a clearly similar pattern to the optimal mapping in the upper graph, though it has not captured the finer details.
4.2 Network Routing Domain
The purposes of the following experiments are to (1) compare the performance of our agents to a competing policy transfer method [9] from a closely related setting on a networking application domain defined by the competing method; (2) demonstrate that our reward mapping and other agents can be extended to a multi-agent setting as required by this domain; and (3) demonstrate that the reward-mapping approach can be extended to handle task changes that involve changes to the transition function as well as the objective reward.
The network routing domain [9] (see Figure 2(b)) is defined from the following components. (1) A
set of routers, or nodes. Every router has a queue to store packets. In our experiments, all queues
are of size three. (2) A set of links between two routers. All links are bidirectional and full-duplex,
and every link has a weight (uniformly sampled from {1,2,3}) to indicate the cost of transmitting a
packet. (3) A set of active packets. Every packet is a tuple (source, destination, alive-time), where
source is the node which generated the packet, destination is the node that the packet is sent to, and
alive-time is the time period that the packet has existed in the network. When a packet is delivered
to its destination node, the alive-time is the end-to-end delay. (4) A set of packet generators. Every
node has a packet generator that specifies a stochastic method to generate packets. (5) A set of power consumption functions. Every node's power consumption at time t is the number of packets in its queue multiplied by a scalar parameter sampled uniformly in the range [0, 0.5].
Actions, dynamics, and states. Every node makes its routing decision separately and has its own action space (these determine which neighbor the first packet in the queue is sent to). If multiple packets reach the same node simultaneously, they are inserted into the queue in random order. Packets that arrive after the queue is full cause network congestion and result in packet loss. The global state at time t consists of the contents of all queues at all nodes at t.
Transition function. In a departure from the original definition of the routing domain, we parameterize the transition function to allow a comparison of agents' performance when transition functions change. Originally, the state transition function in the routing problem was determined by the fixed network topology and by the parameters of the packet generators, which determined among other things the destinations of packets. In our modification, nodes in the network are partitioned into three groups ($G_1$, $G_2$, and $G_3$) and the probabilities that the destination of a packet belongs to each group of nodes ($p_{G_1}$, $p_{G_2}$, and $p_{G_3}$) are parameters we manipulate to change the state transition function.
Objective reward function. The objective reward function is a linear combination of three objective reward features: the delay, measured as the sum of the inverse end-to-end delay of all packets received at all nodes at time t; the loss, measured as the number of lost packets at time t; and the power, measured as the sum of the power consumption of all nodes at time t. The weights of these three features are the parameters of the objective reward function. The weight for the delay feature is in (0, 1), while the weights for both loss and power are in (-0.2, 0); different choices of these weights correspond to different objective reward functions.
Internal reward function. The internal reward function for the agent at node k is $R^i_{j,k}(s, a) = R^o_j(s, a) + \theta^i_{j,k} \cdot \phi^i_k(s, a)$, where $R^o_j(s, a)$ is the objective reward function and $\phi^i_k(s, a)$ is a binary feature vector with one binary feature for each (packet destination, action) pair. It sets the bits corresponding to the destination of the first packet in node k's queue at state s and action a to 1; all other bits are set to 0. The internal reward features are capable of representing arbitrary policies (and thus we also implemented classical policy gradient with these features using OLPOMDP [2] but found it to be far slower than the use of PGRD with UCT and hence don't present those results here).
Extension of Reward Mapping Agent to handle transition function changes. The parameters
describing the transition function are concatenated with the parameters defining the objective reward
function and used as input to the reward mapping function (whose output remains the initial internal
reward function).
Competing policy transfer method. The competing
policy transfer agent from [9] reuses policy knowledge
across tasks based on a model-based average-reward
RL algorithm. Their method keeps a library of policies derived from previous tasks and for each new task
chooses an appropriate policy from the library and then
improves the initial policy with experience. Their policy selection criterion was designed for the case when
only the linear reward parameters change. However,
in our experiments, tasks could differ in three different
ways: (1) only reward functions change, (2) only transition functions change, and (3) both reward functions
and transition functions change. Their policy selection
criterion is applied to cases (1) and (3). For case (2),
when only transition functions change, their method is
modified to select the library-policy whose transition
function parameters are closest to the new transition
function parameters.
Handling Multi-Agency. Every node's agent observes the full state of the environment. All agents make decisions independently at each time step. Nodes do not know other nodes' policies, but can observe how the other nodes have acted in the past and use the empirical counts of past actions to sample other nodes' actions accordingly during UCT planning (see the sketch below).
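A minimal sketch of this empirical action model; the add-one smoothing is our assumption, since the paper does not specify one:

```python
import random
from collections import Counter, defaultdict

class EmpiricalActionModel:
    """During UCT simulations, each node samples the other nodes' actions
    in proportion to how often those actions were observed in the past."""
    def __init__(self, actions):
        self.actions = list(actions)
        # Start every count at 1 (assumed smoothing) to avoid zero weights.
        self.counts = defaultdict(lambda: Counter({a: 1 for a in self.actions}))

    def observe(self, node, action):
        self.counts[node][action] += 1

    def sample(self, node):
        c = self.counts[node]
        return random.choices(self.actions,
                              weights=[c[a] for a in self.actions])[0]
```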
[Figure 5: bar chart of average objective reward per time step for the Reward Mapping, Sequential Transfer, Non-Transfer, Conventional, and Policy Transfer agents, grouped by condition (R only, T only, R and T).]
Figure 5: Performance on the network routing domain. (Left) Tasks differ in objective reward functions (R) only. (Middle) Tasks differ in transition function (T) only. (Right) Tasks differ in both objective reward and transition (R and T) functions. See text for details.
Results: Performance advantage of Reward Mapping Agent. Three sets of 100 task sequences were generated: one in which the tasks differed in objective reward function only, another in which they differed in state transition function only, and a third in which they differed in both. Figure 5 compares the average objective reward per time step for all four agents defined above as well as the competing policy transfer agent on the three sets. In all cases, the reward-mapping agent works best and the conventional agent worst. The competing policy transfer agent is second best when only the reward function changes, which is just the setting for which it was designed.
5 Conclusion and Discussion
Reward functions are a particularly consequential locus for knowledge transfer; reward functions specify what the agent is to do but not how, and can thus transfer across changes in the environment dynamics (transition function), unlike previously explored loci for knowledge transfer such as value functions, policies, or models. Building on work on the optimal reward problem for single-task settings, our main algorithmic contribution for our long-lived agent setting is to take good guidance reward functions found for previous objective rewards and learn a mapping used to effectively initialize the guidance reward function for subsequent tasks. We demonstrated that our reward mapping approach can outperform alternative approaches; current and future work is focused on greater theoretical understanding of the general conditions under which this is true.
Acknowledgments. This work was supported by NSF grant IIS-1148668. Any opinions, findings,
conclusions, or recommendations expressed here are those of the authors and do not necessarily
reflect the views of the sponsors.
References
[1] Christopher G. Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation, pages 3557–3564, 1997.
[2] Peter L. Bartlett and Jonathan Baxter. Stochastic optimization of controlled partially observable Markov decision processes. In Proceedings of the 39th IEEE Conference on Decision and Control, volume 1, pages 124–129, 2000.
[3] Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, pages 363–369, 2000.
[4] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Machine Learning: ECML, pages 282–293. Springer, 2006.
[5] George Konidaris and Andrew Barto. Autonomous shaping: Knowledge transfer in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 489–496, 2006.
[6] George Konidaris and Andrew G. Barto. Building portable options: Skill transfer in reinforcement learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, volume 2, pages 895–900, 2007.
[7] Alessandro Lazaric, Marcello Restelli, and Andrea Bonarini. Transfer of samples in batch reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, pages 544–551, 2008.
[8] Yaxin Liu and Peter Stone. Value-function-based transfer for reinforcement learning using structure mapping. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, volume 21(1), page 415, 2006.
[9] Sriraam Natarajan and Prasad Tadepalli. Dynamic preferences in multi-criteria reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[10] Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 278–287, 1999.
[11] Andrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 663–670, 2000.
[12] Theodore J. Perkins and Doina Precup. Using options for knowledge transfer in reinforcement learning. University of Massachusetts, Amherst, MA, USA, Tech. Rep., 1999.
[13] Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70–82, 2010.
[14] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Reward design via online gradient ascent. Advances in Neural Information Processing Systems, 23, 2010.
[15] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Optimal rewards versus leaf-evaluation heuristics in planning agents. In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence, 2011.
[16] Fumihide Tanaka and Masayuki Yamamura. Multitask reinforcement learning on the distribution of MDPs. In Proceedings IEEE International Symposium on Computational Intelligence in Robotics and Automation, volume 3, pages 1108–1113, 2003.
[17] Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring instances for model-based reinforcement learning. In Machine Learning and Knowledge Discovery in Databases, pages 488–505. 2008.
[18] Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009.
[19] Matthew E. Taylor, Shimon Whiteson, and Peter Stone. Transfer via inter-task mappings in policy search reinforcement learning. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, page 37, 2007.
[20] Lisa Torrey and Jude Shavlik. Policy transfer via Markov logic networks. In Inductive Logic Programming, pages 234–248. Springer, 2010.
[21] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
Visual Tracking
Naiyan Wang
Dit-Yan Yeung
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
winsty@gmail.com
dyyeung@cse.ust.hk
Abstract
In this paper, we study the challenging problem of tracking the trajectory of a
moving object in a video with possibly very complex background. In contrast to
most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning
architectures, by putting more emphasis on the (unsupervised) feature learning
problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust
against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural
network which is constructed from the encoder part of the trained autoencoder as
a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the
moving object. Comparison with the state-of-the-art trackers on some challenging
benchmark video sequences shows that our deep learning tracker is more accurate
while maintaining low computational cost with real-time performance when our
MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).
1 Introduction
Visual tracking, also called object tracking, refers to automatic estimation of the trajectory of an
object as it moves around in a video. It has numerous applications in many domains, including
video surveillance for security, human-computer interaction, and sports video analysis. While a
certain application may require multiple moving objects be tracked, the typical setting is to treat
each object separately. After the object to track is identified either manually or automatically in the
first video frame, the goal of visual tracking is to automatically track the trajectory of the object
over the subsequent frames. Although existing computer vision techniques may offer satisfactory
solutions to this problem under well-controlled environments, the problem can be very challenging
in many practical applications due to factors such as partial occlusion, cluttered background, fast
and abrupt motion, dramatic illumination changes, and large variations in viewpoint and pose.
Most existing trackers adopt either the generative or the discriminative approach. Generative trackers, like other generative models in machine learning, assume that the object being tracked can be
described by some generative process and hence tracking corresponds to finding the most probable candidate among possibly infinitely many. The motivation behind generative trackers is to
develop image representations which can facilitate robust tracking. They have been inspired by
recent advances in fast algorithms for robust estimation and sparse coding, such as the alternating direction method of multipliers (ADMM) and accelerated gradient methods. Some popular
generative trackers include incremental visual tracking (IVT) [18], which represents the tracked object based on principal component analysis (PCA), and the l1 tracker (L1T) [16], which assumes
that the tracked object can be represented by a sparse combination of overcomplete basis vectors.
Many extensions [26, 25, 4, 21] have also been proposed. On the other hand, the discriminative
approach treats tracking as a binary classification problem which learns to explicitly distinguish
the object being tracked from its background. Some representative trackers in this category are the
online AdaBoost (OAB) tracker [6], multiple instance learning (MIL) tracker [3], and structured
output tracker (Struck) [8]. While generative trackers usually produce more accurate results under
less complex environments due to the richer image representations used, discriminative trackers are
more robust against strong occlusion and variations since they explicitly take the background into
consideration. We refer the reader to a recent paper [23] which empirically compares many existing
trackers based on a common benchmark.
From the learning perspective, visual tracking is challenging because it has only one labeled instance
in the form of an identified object in the first video frame. In the subsequent frames, the tracker has
to learn variations of the tracked object with only unlabeled data available. With no prior knowledge
about the object being tracked, it is easy for the tracker to drift away from the target. To address
this problem, some trackers taking the semi-supervised learning approach have been proposed [12,
7]. An alternative approach [22] first learns a dictionary of image features (such as SIFT local
descriptors) from auxiliary data and then transfers the knowledge learned to online tracking.
Another issue is that many existing trackers make use of image representations that may not be good
enough for robust tracking in complex environments. This is especially the case for discriminative
trackers which usually put more emphasis on improving the classifiers rather than the image features
used. While many trackers simply use raw pixels as features, some attempts have used more informative features, such as Haar features, histogram features, and local binary patterns. However, these
features are all handcrafted offline but not tailor-made for the tracked object. Recently, deep learning
architectures have been used successfully to give very promising results for some complicated tasks,
including image classification [14] and speech recognition [10]. The key to success is to make use
of deep architectures to learn richer invariant features via multiple nonlinear transformations. We
believe that visual tracking can also benefit from deep learning for the same reasons.
In this paper, we propose a novel deep learning tracker (DLT) for robust visual tracking. We attempt
to combine the philosophies behind both generative and discriminative trackers by developing a
robust discriminative tracker which uses an effective image representation learned automatically.
There are some key features which distinguish DLT from other existing trackers. First, it uses a
stacked denoising autoencoder (SDAE) [20] to learn generic image features from a large image
dataset as auxiliary data and then transfers the features learned to the online tracking task. Second,
unlike some previous methods which also learn features from auxiliary data, the learned features in
DLT can be further tuned to adapt to specific objects during the online tracking process. Because
DLT makes use of multiple nonlinear transformations, the image representations obtained are more
expressive than those of previous methods based on PCA. Moreover, since representing the tracked
object does not require solving an optimization problem as in previous trackers based on sparse
coding, DLT is significantly more efficient and hence is more suitable for real-time applications.
2 Particle Filter Approach for Visual Tracking
The particle filter approach [5] is commonly used for visual tracking. From the statistical perspective, it is a sequential Monte Carlo importance sampling method for estimating the latent state variables of a dynamical system based on a sequence of observations. Suppose $s_t$ and $y_t$ denote the latent state and observation variables, respectively, at time t. Mathematically, object tracking corresponds to the problem of finding the most probable state for each time step t based on the observations up to the previous time step:

$$s_t = \arg\max_{s_t} p(s_t \mid y_{1:t-1}) = \arg\max_{s_t} \int p(s_t \mid s_{t-1})\, p(s_{t-1} \mid y_{1:t-1})\, ds_{t-1}. \qquad (1)$$

When a new observation $y_t$ arrives, the posterior distribution of the state variable is updated according to Bayes' rule:

$$p(s_t \mid y_{1:t}) = \frac{p(y_t \mid s_t)\, p(s_t \mid y_{1:t-1})}{p(y_t \mid y_{1:t-1})}. \qquad (2)$$
What is specific to the particle filter approach is that it approximates the true posterior state distribution $p(s_t \mid y_{1:t})$ by a set of n samples, called particles, $\{s_i^t\}_{i=1}^n$ with corresponding importance weights $\{w_i^t\}_{i=1}^n$ which sum to 1. The particles are drawn from an importance distribution $q(s^t \mid s^{1:t-1}, y^{1:t})$ and the weights are updated as follows:

$$w_i^t = w_i^{t-1} \cdot \frac{p(y_t \mid s_i^t)\, p(s_i^t \mid s_i^{t-1})}{q(s^t \mid s^{1:t-1}, y^{1:t})}. \qquad (3)$$
For the choice of the importance distribution $q(s^t \mid s^{1:t-1}, y^{1:t})$, it is often simplified to a first-order Markov process $q(s^t \mid s^{t-1})$ in which state transition is independent of the observation. Consequently, the weights are updated as $w_i^t = w_i^{t-1}\, p(y_t \mid s_i^t)$. Note that the sum of weights may no longer be equal to 1 after each weight update step. In case it is smaller than a threshold, resampling is applied to draw n particles from the current particle set in proportion to their weights, and their weights are then reset to 1/n. If the weight sum is above the threshold, linear normalization is applied to ensure that the weights sum to 1.
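A minimal sketch of one such sequential importance resampling step under the simplification above; `transition` draws from $q(s^t \mid s^{t-1})$ and `likelihood` evaluates $p(y_t \mid s^t)$, both supplied by the caller, and the resampling threshold is a free parameter:

```python
import numpy as np

def particle_filter_step(particles, weights, transition, likelihood, thresh=0.5):
    """Propagate particles, reweight by the observation likelihood, and
    resample when the (unnormalized) weight sum decays below the threshold."""
    n = len(particles)
    particles = np.array([transition(s) for s in particles])
    weights = weights * np.array([likelihood(s) for s in particles])
    if weights.sum() < thresh:                  # resample in proportion to weights
        idx = np.random.choice(n, size=n, p=weights / weights.sum())
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    else:
        weights = weights / weights.sum()       # linear normalization to sum to 1
    return particles, weights
```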
For object tracking, the state variable $s_t$ usually represents the six affine transformation parameters which correspond to translation, scale, aspect ratio, rotation, and skewness. In particular, each dimension of $q(s^t \mid s^{t-1})$ is modeled independently by a normal distribution. For each frame, the tracking result is simply the particle with the largest weight. While many trackers also adopt the same particle filter approach, the main difference lies in the formulation of the observation model $p(y_t \mid s_i^t)$. Apparently, a good model should be able to distinguish well the tracked object from the background while still being robust against various types of object variation. For discriminative trackers, the formulation often sets the probability to be exponentially related to the confidence of the classifier output.
The particle filter framework is the dominant approach in visual tracking for several reasons. First,
it is more general than the Kalman filter approach by going beyond the Gaussian distribution. Moreover, it approximates the posterior state distribution by a set of particles instead of just a single point
such as the mode. For visual tracking, this property makes it easier for the tracker to recover from
incorrect tracking results. A tutorial on using particle filters for visual tracking can be found in [2].
Some recent work, e.g., [15], further improves the particle filter framework for visual tracking.
3 The DLT Tracker
We now present our DLT tracker. During the offline training stage, unsupervised feature learning is
carried out by training an SDAE with auxiliary image data to learn generic natural image features.
Layer-by-layer pretraining is first applied and then the whole SDAE is fine-tuned. During the online
tracking process, an additional classification layer is added to the encoder part of the trained SDAE
to result in a classification neural network. More details are provided in the rest of this section.
3.1 Offline Training with Auxiliary Data
3.1.1 Dataset and Preprocessing
We use the Tiny Images dataset [19] as auxiliary data for offline training. The dataset was collected
from the web by providing non-abstract English nouns to seven search engines, covering many of
the objects and scenes found in the real world. From the almost 80 million tiny images each of
size 32 × 32, we randomly sample 1 million images for offline training. Since most state-of-the-art
trackers included in our empirical comparison use only grayscale images, we have converted all the
sampled images to grayscale (but our method can also use the color images directly if necessary).
Consequently, each image is represented by a vector of 1024 dimensions corresponding to 1024
pixels. The feature value of each dimension is linearly scaled to the range [0, 1] but no further
preprocessing is applied.
3.1.2 Learning Generic Image Features with a Stacked Denoising Autoencoder
The basic building block of an SDAE is a one-layer neural network called a denoising autoencoder (DAE), which is a more recent variant of the conventional autoencoder. It learns to recover a data sample from its corrupted version. In so doing, robust features are learned since the neural network contains a "bottleneck", a hidden layer with fewer units than the input units. We show the architecture of the DAE in Fig. 1(a).
Let there be a total of k training samples. For the i-th sample, let $x_i$ denote the original data sample and $\tilde{x}_i$ be the corrupted version of $x_i$, where the corruption could be masking corruption, additive Gaussian noise, or salt-and-pepper noise. For the network weights, let $W$ and $W'$ denote the weights for the encoder and decoder, respectively, which may be tied though it is not necessary. Similarly, $b$ and $b'$ refer to the bias terms. A DAE learns by solving the following (regularized) optimization problem:

$$\min_{W, W', b, b'} \sum_{i=1}^k \|x_i - \hat{x}_i\|_2^2 + \lambda \left( \|W\|_F^2 + \|W'\|_F^2 \right), \qquad (4)$$

where

$$h_i = f(W \tilde{x}_i + b), \qquad \hat{x}_i = f(W' h_i + b'). \qquad (5)$$
Here λ is a parameter which balances the reconstruction loss and weight penalty terms, $\|\cdot\|_F$ denotes the Frobenius norm, and $f(\cdot)$ is a nonlinear activation function, typically the logistic sigmoid function or the hyperbolic tangent function. By reconstructing the input from a corrupted version of it, a DAE is more effective than the conventional autoencoder in discovering robust features, since the corruption prevents the autoencoder from simply learning the identity mapping.
To further enhance the learning of meaningful features, sparsity constraints [9] are imposed on the mean activation values of the hidden units. If the logistic sigmoid activation function is used, the output of each unit may be regarded as the probability of it being active. Let $\rho_j$ denote the target sparsity level of the j-th unit and $\hat{\rho}_j$ its average empirical activation rate. The cross-entropy of $\rho$ and $\hat{\rho}$ can then be introduced as an additional penalty term to Eqn. 4:

$$H(\rho \,\|\, \hat{\rho}) = -\sum_{j=1}^m \big[ \rho_j \log(\hat{\rho}_j) + (1 - \rho_j) \log(1 - \hat{\rho}_j) \big], \qquad \hat{\rho} = \frac{1}{k} \sum_{i=1}^k h_i, \qquad (6)$$

where m is the number of hidden units. After the pretraining phase, the SDAE can be unrolled to form a feedforward neural network. The whole network is fine-tuned using the classical backpropagation algorithm. To increase the convergence rate, either the simple momentum method or more advanced optimization techniques such as the L-BFGS or conjugate gradient method can be applied.
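As a rough sketch of one DAE layer in PyTorch (not the authors' MATLAB implementation): additive Gaussian corruption as in Eqn. 5 (with the noise variance of 0.0004 quoted in Section 4.1), the reconstruction loss of Eqn. 4 with the weight penalty delegated to the optimizer's weight_decay, and the cross-entropy sparsity penalty of Eqn. 6 with an assumed trade-off weight beta, which the paper does not specify.

```python
import torch
import torch.nn.functional as F

class DAE(torch.nn.Module):
    """One denoising autoencoder layer: encode a corrupted input, decode,
    and reconstruct the clean input (Eqns. 4-5)."""
    def __init__(self, n_in, n_hid):
        super().__init__()
        self.enc = torch.nn.Linear(n_in, n_hid)
        self.dec = torch.nn.Linear(n_hid, n_in)

    def forward(self, x, noise_std=0.02):              # variance 0.02**2 = 0.0004
        x_tilde = x + noise_std * torch.randn_like(x)  # additive Gaussian noise
        h = torch.sigmoid(self.enc(x_tilde))
        return torch.sigmoid(self.dec(h)), h

def dae_loss(x, x_hat, h, rho=0.05, beta=0.1):
    """Reconstruction error plus the cross-entropy sparsity penalty of
    Eqn. 6; beta is an assumed weighting."""
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)      # mean activation per unit
    sparsity = (-rho * torch.log(rho_hat)
                - (1 - rho) * torch.log(1 - rho_hat)).sum()
    return F.mse_loss(x_hat, x, reduction='sum') + beta * sparsity

# The lambda * ||W||_F^2 term of Eqn. 4 corresponds to weight_decay here:
# opt = torch.optim.SGD(dae.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
```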
For the network architecture, we use overcomplete filters in the first layer. This is a deliberate choice since it has been found that an overcomplete basis can usually capture the image structure better. This is in line with the neurophysiological mechanism in the V1 visual cortex [17]. The number of units is then reduced by half whenever a new layer is added until there are only 256 hidden units, serving as the bottleneck of the autoencoder. The whole structure of the SDAE is depicted in Fig. 1(b). To further speed up pretraining in the first layer and to learn localized features, we divide each 32 × 32 tiny image into five 16 × 16 patches (upper left, upper right, lower left, lower right, and the center one which overlaps with the other four), and then train five DAEs, each of which has 512 hidden units. After that, we initialize a large DAE with the weights of the five small DAEs and then train the large DAE normally. Some randomly selected filters in the first layer are shown in Fig. 2. As expected, most of the filters play the role of highly localized edge detectors.
3.2 Online Tracking Process
The object to track is specified by the location of its bounding box in the first frame. Some negative examples are collected from the background at a short distance from the object. A sigmoid
classification layer is then added to the encoder part of the SDAE obtained from offline training.
The overall network architecture is shown in Fig. 1(c). When a new video frame arrives, we first
draw particles according to the particle filter approach. The confidence pi of each particle is then
determined by making a simple forward pass through the network. An appealing characteristic of
this approach is that the computational cost of this step is very low even though it has high accuracy.
4
Figure 1: Some key components of the network architecture: (a) denoising autoencoder; (b) stacked denoising autoencoder; (c) network for online tracking.
Figure 2: Some filters in the first layer of the learned SDAE.
If the maximum confidence of all particles in a frame is below a predefined threshold τ, it may indicate significant appearance change of the object being tracked. To address this issue, the whole network can be tuned again in case this happens. We note that the threshold τ should be set to maintain a tradeoff. If τ is too small, the tracker cannot adapt well to appearance changes. On the other hand, if τ is too large, even an occluding object or the background may be mistakenly treated as the tracked object and hence lead to drifting of the target. A minimal sketch of this tracking loop follows.
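The per-frame logic can be sketched as below, assuming hypothetical helpers: `crop(frame, s)` warps out the image patch for particle state s, `net.confidence` scores a patch with the classification network, and `net.finetune` updates the network on fresh positive and negative examples. None of these names come from the paper; they stand in for the corresponding steps in the text.

```python
import numpy as np

def track_frame(net, frame, particles, weights, tau=0.9):
    """Score each particle with the classification network, report the most
    confident particle as the tracking result, and trigger fine-tuning when
    the best confidence falls below the threshold tau."""
    # crop / sample_background are assumed helpers (see lead-in above)
    conf = np.array([net.confidence(crop(frame, s)) for s in particles])
    weights = weights * conf                  # classifier output as p(y_t | s_t)
    best = particles[int(np.argmax(conf))]
    if conf.max() < tau:                      # suspected appearance change
        net.finetune(pos=crop(frame, best),
                     neg=sample_background(frame, best))
    return best, weights / weights.sum()
```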
4 Experiments
We empirically compare DLT with some state-of-the-art trackers in this section using 10 challenging
benchmark video sequences. These trackers are: MTT [26], CT [24], VTD [15], MIL [3], a recent variant of L1T [4], TLD [13], and IVT [18]. We use the original implementations of these trackers
provided by their authors. In case a tracker can only deal with grayscale video, the rgb2gray
function provided by the MATLAB Image Processing Toolbox is used to convert the color video
to grayscale. To accelerate the computation, we also utilize GPU computation provided by the
MATLAB Parallel Computing Toolbox in both offline training and online tracking. The codes and
supplemental material are provided on the project page: http://winsty.net/dlt.html.
4.1 DLT Implementation Details
We use the gradient method with momentum for optimization. The momentum parameter is set to 0.9. For offline training of the SDAE, we inject Gaussian noise with a variance of 0.0004 to generate the corrupted input. We set λ = 0.0001, the target sparsity level ρ to 0.05, and the mini-batch size to 100. For online tuning, we use a larger λ value of 0.002 to avoid overfitting and a smaller mini-batch size of 10. The threshold τ is set to 0.9. The particle filter uses 1000 particles. For other parameters, such as the affine parameters in the particle filter and the search window size in the other methods, we perform grid search to determine the best values. The same setting is applied to all other methods compared if applicable.
4.2 Quantitative Comparison
We use two common performance metrics for quantitative comparison: success rate and central-pixel error. Let $BB_T$ denote the bounding box produced by a tracker and $BB_G$ the ground-truth bounding box. For each video frame, a tracker is considered successful if the overlap percentage $\frac{\mathrm{area}(BB_T \cap BB_G)}{\mathrm{area}(BB_T \cup BB_G)} > 50\%$. As for the central-pixel error, it is defined as the Euclidean distance (in pixels) between the centers of $BB_T$ and $BB_G$. The quantitative comparison results are summarized in Table 1. For each row, which corresponds to one of 10 video sequences, the best result is shown
in red and second best in blue. We also report the central-pixel errors over all frames for each video
sequence. Since TLD can report that the tracked object is missing in some frames, we exclude it
from the central-pixel error comparison. On average, DLT is the best according to both performance
metrics. For most video sequences, it is among the best two methods. We also list the running time
of each sequence in detail in Table 2. Thanks to advances of the GPU technology, our tracker can
achieve an average frame rate of 15fps (frames per second) which is sufficient for many real-time
applications.
          Ours        MTT         CT          VTD         MIL         L1T         TLD      IVT
car4      100(6.0)    100(3.4)    24.7(95.4)  35.2(41.5)  24.7(81.8)  30.8(16.8)  0.2(-)   100(4.2)
car11     100(1.2)    100(1.3)    70.7(6.0)   65.6(23.9)  68.4(19.3)  100(1.3)    29.8(-)  100(3.2)
davidin   66.1(7.1)   68.6(7.8)   25.3(15.3)  49.4(27.1)  17.7(13.1)  27.3(17.5)  44.4(-)  92.0(3.9)
trellis   93.6(3.3)   66.3(33.7)  23.0(80.4)  30.1(81.3)  25.9(71.7)  62.1(37.6)  48.9(-)  44.3(44.7)
woman     67.1(9.4)   19.8(257.8) 16.0(109.6) 17.1(133.6) 12.2(123.7) 21.1(138.2) 5.8(-)   21.5(111.2)
animal    87.3(10.2)  88.7(11.1)  85.9(10.8)  91.5(10.8)  63.4(16.1)  85.9(10.4)  63.4(-)  81.7(10.8)
shaking   88.4(11.5)  12.3(28.1)  92.3(10.9)  99.2(5.2)   26.0(28.6)  0.5(90.8)   15.6(-)  1.1(138.4)
singer1   100(3.3)    35.6(34.0)  10.3(16.8)  99.4(3.4)   10.3(26.0)  100(3.7)    53.6(-)  96.3(7.9)
surfer    86.5(4.6)   83.8(6.9)   13.5(18.7)  90.5(5.5)   44.6(14.7)  75.7(9.5)   40.5(-)  90.5(5.9)
bird2     65.9(16.8)  9.2(92.8)   58.2(19.7)  13.3(151.1) 69.4(16.3)  45.9(57.5)  31.6(-)  10.2(104.1)
average   85.5(7.3)   58.4(47.6)  42.0(38.4)  59.1(48.4)  36.3(41.1)  54.9(40.1)  33.4(-)  63.8(43.4)
Table 1: Comparison of 8 trackers on 10 video sequences. The first number denotes the success rate
(in percentage), while the number in parentheses denotes the central-pixel error (in pixels).
car4    car11   davidin   trellis   woman   animal   shaking   singer1   surfer   bird2   Average
15.27   16.04   13.20     17.30     20.92   10.93    12.72     15.18     14.17    14.36   15.01
Table 2: Comparison of running time on 10 video sequences (in fps).
[Figure 3 plots: per-frame central-pixel error curves (center error vs. frame number) for the ten sequences car4, car11, davidin, trellis, woman, animal, shaking, singer1, surfer, and bird2.]
Figure 3: Frame-by-frame comparison of 7 trackers on 10 video sequences in terms of central-pixel
error (in pixels).
4.3 Qualitative Comparison
Fig. 4 shows some key frames with bounding boxes reported by all eight trackers for each of the
10 video sequences. More detailed results for the complete video sequences can be found in the
supplemental material.
In both the car4 and car11 sequences, the tracked objects are cars moving on an open road. For car4,
the challenge is that the illumination changes greatly near the entrance of a tunnel. For car11, the
environment is very dark with illumination in the cluttered background. Since the car being tracked
is a rigid object, its shape does not change much and hence generative trackers like IVT, L1T and
MTT generally perform well for these two sequences. DLT can also track the car accurately.
In the davidin and trellis sequences, each tracker has to track a face in indoor and outdoor environments, respectively. Both sequences are challenging because the illumination and pose vary
drastically along the video. Moreover, out-of-plane rotation occurs in some frames. As a consequence, all trackers drift or even fail to different degrees. Generally speaking, DLT and MTT yield
the best results, followed by IVT.
In the woman sequence, we track a woman walking in the street. The woman is severely occluded
several times by the parked cars. TLD first fails at frame 63 because of the pose change. All other
trackers compared fail when the woman walks close to the car at about frame 130. DLT can follow
the target accurately.
In the animal sequence, the target is a fast-moving animal with motion blur. Most methods can track the target through to the end; only MIL and TLD fail in some frames. TLD is also misled by some
similar objects in the background, e.g., in frame 41.
Both the shaking and singer1 sequences are recordings on the stage with illumination changes. For
shaking, the pose of the head being tracked also changes. L1T, IVT and TLD totally fail before
frame 10, while MTT and MIL show some drifting effects thereafter. VTD and DLT give satisfactory results, followed by CT. Compared to shaking, the singer1 sequence is easier to track. All
trackers except MTT can track the object but CT and MIL do not support scale change and hence
the results are less satisfactory.
In the surfer sequence, the goal is to track the head of a surfer while its pose changes along the video
sequence. All trackers manage to track it, except that TLD shows an incorrect scale and both CT
and MIL drift slightly.
The bird2 sequence is very challenging since the pose of the bird changes drastically when it is
occluded. Most trackers fail or drift at about frame 15 with the exception of L1T, TLD and DLT.
However, after the bird turns, L1T and TLD totally fail but CT and MIL can recover to some degree.
DLT can track the bird accurately along the entire sequence.
5 Discussions
Our proposed method is similar in spirit to that of [22] but there are some key differences that are
worth noting. First, we learn generic image features from a larger and more general dataset rather
than a smaller set with only some chosen image categories. Second, we learn the image features from
raw images automatically instead of relying on handcrafted SIFT features. Third, further learning is
allowed during the online tracking process of our method so as to adapt better to the specific object
being tracked.
For the choice of deep network architecture, we note that another potential candidate is the popular convolutional neural network (CNN) model. The resulting tracker would be similar to previous
patch (or fragment) based methods [1, 11] which have been shown to be robust against partial occlusion. Nevertheless, current research on CNN focuses on learning shift-invariant features for such
tasks as image classification and object detection. However, the nature of object tracking is very different in that it has to learn shift-variant but similarity-preserving features to overcome the drifting
problem. As of now, there is very little relevant work, with the possible exception of [11] which
tries to improve the pooling step in the sparse coding literature to address this issue. This could be
an interesting future research direction to pursue.
6 Concluding Remarks
In this paper, we have successfully taken deep learning to a new territory of challenging applications.
Noting that the key to success for deep learning architectures is the learning of useful features, we
first train a stacked denoising autoencoder using many auxiliary natural images to learn generic
image features. This alleviates the problem of not having much labeled data in visual tracking
applications. After offline training, the encoder part of the SDAE is used as a feature extractor
[Figure 4 panels: tracking results with reported bounding boxes on car4, car11, davidin, trellis, woman, singer1, shaking, animal, surfer, and bird2.]
Figure 4: Comparison of 8 trackers on 10 video sequences in terms of the bounding box reported.
during the online tracking process to train a classification neural network to distinguish the tracked
object from the background. This can be regarded as knowledge transfer from offline training using
auxiliary data to online tracking. Since further tuning is allowed during the online tracking process,
both the feature extractor and the classifier can adapt to appearance changes of the moving object.
Through quantitative and qualitative comparison with state-of-the-art trackers on some challenging
benchmark video sequences, we demonstrate that our deep learning tracker gives very encouraging
results while having low computational cost.
As the first work on applying deep neural networks to visual tracking, many opportunities remain
open for further research. As discussed above, it would be an interesting direction to investigate a
shift-variant CNN. Also, the classification layer in our current tracker is just a linear classifier for
simplicity. Extending it to more powerful classifiers, as in other discriminative trackers, may provide
more room for further performance improvement.
Acknowledgment
This research has been supported by General Research Fund 621310 from the Research Grants
Council of Hong Kong.
References
[1] A. Adam, E. Rivlin, and I. Shimshoni. Robust fragments-based tracking using the integral histogram. In CVPR, pages 798–805, 2006.
[2] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174–188, 2002.
[3] B. Babenko, M. Yang, and S. Belongie. Robust object tracking with online multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1619–1632, 2011.
[4] C. Bao, Y. Wu, H. Ling, and H. Ji. Real time robust L1 tracker using accelerated proximal gradient approach. In CVPR, pages 1830–1837, 2012.
[5] A. Doucet, D. N. Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, New York, 2001.
[6] H. Grabner, M. Grabner, and H. Bischof. Real-time tracking via on-line boosting. In BMVC, pages 47–56, 2006.
[7] H. Grabner, C. Leistner, and H. Bischof. Semi-supervised on-line boosting for robust tracking. In ECCV, pages 234–247, 2008.
[8] S. Hare, A. Saffari, and P. H. Torr. Struck: Structured output tracking with kernels. In ICCV, pages 263–270, 2011.
[9] G. Hinton. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade, pages 599–619. 2012.
[10] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
[11] X. Jia, H. Lu, and M. Yang. Visual tracking via adaptive structural local sparse appearance model. In CVPR, pages 1822–1829, 2012.
[12] Z. Kalal, J. Matas, and K. Mikolajczyk. P-N learning: Bootstrapping binary classifiers by structural constraints. In CVPR, pages 49–56, 2010.
[13] Z. Kalal, K. Mikolajczyk, and J. Matas. Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7):1409–1422, 2012.
[14] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[15] J. Kwon and K. Lee. Visual tracking decomposition. In CVPR, pages 1269–1276, 2010.
[16] X. Mei and H. Ling. Robust visual tracking using l1 minimization. In ICCV, pages 1436–1443, 2009.
[17] B. Olshausen and D. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3326, 1997.
[18] D. Ross, J. Lim, R. Lin, and M. Yang. Incremental learning for robust visual tracking. International Journal of Computer Vision, 77(1):125–141, 2008.
[19] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
[20] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408, 2010.
[21] D. Wang, H. Lu, and M. Yang. Online object tracking with sparse prototypes. IEEE Transactions on Image Processing, 22(1), 2013.
[22] Q. Wang, F. Chen, J. Yang, W. Xu, and M. Yang. Transferring visual prior for online object tracking. IEEE Transactions on Image Processing, 21(7):3296–3305, 2012.
[23] Y. Wu, J. Lim, and M. Yang. Online object tracking: A benchmark. In CVPR, 2013.
[24] K. Zhang, L. Zhang, and M.-H. Yang. Real-time compressive tracking. In ECCV, pages 864–877, 2012.
[25] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja. Low-rank sparse learning for robust visual tracking. In ECCV, pages 470–484, 2012.
[26] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja. Robust visual tracking via multi-task sparse learning. In CVPR, pages 2042–2049, 2012.
Learning the Local Statistics of Optical Flow
Dan Rosenbaum^1, Daniel Zoran^2, Yair Weiss^{1,2}
^1 CSE, ^2 ELSC, Hebrew University of Jerusalem
{danrsm,daniez,yweiss}@cs.huji.ac.il
Abstract
Motivated by recent progress in natural image statistics, we use newly available
datasets with ground truth optical flow to learn the local statistics of optical flow
and compare the learned models to prior models assumed by computer vision
researchers. We find that a Gaussian mixture model (GMM) with 64 components
provides a significantly better model for local flow statistics when compared to
commonly used models. We investigate the source of the GMM's success and
show it is related to an explicit representation of flow boundaries. We also learn
a model that jointly models the local intensity pattern and the local optical flow.
In accordance with the assumptions often made in computer vision, the model
learns that flow boundaries are more likely at intensity boundaries. However,
when evaluated on a large dataset, this dependency is very weak and the benefit of
conditioning flow estimation on the local intensity pattern is marginal.
1 Introduction
[Figure 1 panels: Sintel MPI; KITTI]
Figure 1: Samples of frames and flows from new flow databases. We leverage these newly available
resources to learn the statistics of optical flow and compare this to assumptions used by computer
vision researchers.
The study of natural image statistics is a longstanding research topic with both scientific and engineering interest. Recent progress in this field has been achieved by approaches that systematically
compare different models of natural images with respect to numerical criteria such as log likelihood
on held-out data or coding efficiency [1, 10, 14]. Interestingly, the best models in terms of log likelihood, when used as priors in image restoration tasks, also yield state-of-the-art performance [14].
Many problems in computer vision require good priors. A notable example is the computation of
optical flow: a vector at every pixel that corresponds to the two dimensional projection of the motion
at that pixel. Since local motion information is often ambiguous, nearly all optical flow estimation
algorithms work by minimizing a cost function that has two terms: a local data term and a "prior" term (see, e.g., [13, 11] for some recent reviews).
Given the success in image restoration tasks, where learned priors give state-of-the-art performance,
one might expect a similar story in optical flow estimation. However, with the notable exception
of [9] (which served as a motivating example for this work and is discussed below) there have been
very few attempts to learn priors for optical flow by modeling local statistics. Instead, the state-of-the-art methods still use priors that were formulated by computer vision researchers. In fact, two
of the top performing methods in modern optical flow benchmarks use a hand-defined smoothness
constraint that was suggested over 20 years ago [6, 2].
One big difference between image statistics and flow statistics is the availability of ground truth
data. Whereas for modeling image statistics one merely needs a collection of photographs (so that
the amount of data is essentially unlimited these days), for modeling flow statistics one needs to
obtain the ground truth motion of the points in the scene. In the past, the lack of availability of
ground truth data did not allow for learning an optical flow prior from examples. In the last two
years, however, two ground truth datasets have become available. The Sintel dataset (figure 1)
consists of a thousand pairs of frames from a highly realistic computer graphics film with a wide
variety of locations and motion types. Although it is synthetic, the work in [3] convincingly shows
that both in terms of image statistics and in terms of flow statistics, the synthetic frames are highly
similar to real scenes. The KITTI dataset (figure 1) consists of frames taken from a vehicle driving
in a European city [5]. The vehicle was equipped with accurate range finders as well as accurate
localization of its own motion, and the combination of these two sources allows computing optical
flow for points that are stationary in the world. Although this is real data, it is sparse (only about
50% of the pixels have ground truth flow).
In this paper we leverage the availability of ground truth datasets to learn explicit statistical models
of optical flow. We compare our learned model to the assumptions made by computer vision algorithms for estimating flow. We find that a Gaussian mixture model with 64 components provides a
significantly better model for local flow statistics when compared to commonly used models. We
investigate the source of the GMM's success and show that it is related to an explicit representation
of flow boundaries. We also learn a model that jointly models the local intensity pattern and the
local optical flow. In accordance with the assumptions often made in computer vision, the model
learns that flow boundaries are more likely at intensity boundaries. However, when evaluated on a
large dataset, this dependency is very weak and the benefit of conditioning flow estimation on the
local intensity pattern is marginal.
1.1 Priors for optical flow
One of the earliest methods for optical flow that is still used in applications is the celebrated LucasKanade algorithm [7]. It overcomes the local ambiguity of motion analysis by assuming that the
optical flow is constant within a small image patch and finds this constant motion by least-squares
estimation. Another early algorithm that is still widely used is the method of Horn and Schunck [6].
It finds the optical flow by minimizing a cost function that has a data term and a "smoothness" term. Denoting by u the horizontal flow and v the vertical flow, the smoothness term is of the form:

$$J_{HS} = \sum_{x,y} u_x^2 + u_y^2 + v_x^2 + v_y^2$$

where $u_x, u_y$ are the spatial derivatives of the horizontal flow u and $v_x, v_y$ are the spatial derivatives
of the vertical flow v. When combined with modern optimization methods, this algorithm is often
among the top performing methods on modern benchmarks [11, 5].
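As a concrete illustration, the smoothness term can be computed as in the following sketch, which assumes forward differences for the spatial derivatives.

```python
import numpy as np

def horn_schunck_penalty(u, v):
    # J_HS: sum of squared forward differences of both flow components.
    ux, uy = np.diff(u, axis=1), np.diff(u, axis=0)
    vx, vy = np.diff(v, axis=1), np.diff(v, axis=0)
    return (ux**2).sum() + (uy**2).sum() + (vx**2).sum() + (vy**2).sum()

u = np.ones((8, 8))          # a constant (translational) flow ...
v = np.ones((8, 8))
print(horn_schunck_penalty(u, v))   # ... has zero smoothness cost
```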
Rather than using a quadratic smoothness term, many authors have advocated using more robust
terms that would be less sensitive to outliers in smoothness. Thus the Black and Anandan [2] algorithm uses:
$$J_{BA} = \sum_{x,y} \rho(u_x) + \rho(u_y) + \rho(v_x) + \rho(v_y)$$

where $\rho(t)$ is a function that grows slower than a quadratic. Popular choices for $\rho$ include the Lorentzian, the truncated quadratic, and the absolute value $\rho(x) = |x|$ (or a differentiable approximation to it, $\rho(x) = \sqrt{\epsilon + x^2}$) [11]. Both the Lorentzian and the absolute value robust smoothness
terms were shown to outperform quadratic smoothness in [11] and the absolute value was better
among the two robust terms.
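The robust penalties mentioned above can be sketched as follows; the scale parameters are illustrative assumptions, and the square-root form is the usual differentiable (Charbonnier-style) approximation to the absolute value.

```python
import numpy as np

def lorentzian(t, sigma=1.0):
    return np.log1p(0.5 * (t / sigma) ** 2)

def truncated_quadratic(t, tau=1.0):
    return np.minimum(t ** 2, tau)

def charbonnier(t, eps=1e-3):
    # Differentiable approximation to |t|: sqrt(eps + t^2).
    return np.sqrt(eps + t ** 2)

def robust_smoothness(u, v, rho=charbonnier):
    # J_BA with a pluggable penalty rho applied to each flow derivative.
    ux, uy = np.diff(u, axis=1), np.diff(u, axis=0)
    vx, vy = np.diff(v, axis=1), np.diff(v, axis=0)
    return sum(rho(d).sum() for d in (ux, uy, vx, vy))

u, v = np.random.randn(8, 8), np.random.randn(8, 8)
print(robust_smoothness(u, v, rho=lorentzian))
```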
Several authors have also suggested that the smoothness term be based on the local intensity pattern,
since motion discontinuities are more likely to occur at intensity boundaries. Ren [8] modified
the weights in the Lucas and Kanade least-squares estimation so that pixels that are on different
sides of an intensity boundary will get lower weights. In the context of Horn and Schunck, several
authors suggest using weights to the horizontal and vertical flow derivatives, where the weights had
an inverse relationship with the image derivatives: large image derivatives lead to low weight in the
flow smoothness (see [13] and references within for different variations on this idea). Perhaps the
simplest such regularizer is of the form:
$$J_{HSI} = \sum_{x,y} w(I_x)\,(u_x^2 + v_x^2) + w(I_y)\,(u_y^2 + v_y^2) \qquad (1)$$
As we discuss below, this prior can be seen as a Gaussian prior on the flow that is conditioned on
the intensity.
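A sketch of this intensity-weighted smoothness is given below, using the exponential weight w(I_x) = exp(−I_x²/σ²) that is adopted later in section 4; σ is a free parameter and its value here is illustrative.

```python
import numpy as np

def weighted_smoothness(u, v, image, sigma=0.1):
    # Eq. (1): smoothness weights shrink where image derivatives are large.
    Ix, Iy = np.diff(image, axis=1), np.diff(image, axis=0)
    wx, wy = np.exp(-(Ix / sigma) ** 2), np.exp(-(Iy / sigma) ** 2)
    ux, uy = np.diff(u, axis=1), np.diff(u, axis=0)
    vx, vy = np.diff(v, axis=1), np.diff(v, axis=0)
    return (wx * (ux**2 + vx**2)).sum() + (wy * (uy**2 + vy**2)).sum()

rng = np.random.default_rng(0)
img = rng.random((8, 8))
u, v = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
print(weighted_smoothness(u, v, img))
```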
In contrast to all the previously discussed priors, Roth and Black [9] suggested learning a prior from
a dataset. They used a training set of optical flow obtained by simulating the motion of a camera in
natural range images. The prior learned by their system was similar to a robust smoothness prior,
but the filters are not local derivatives but rather more random-looking high pass filters. They did not
observe a significant improvement in performance when using these filters, and standard derivative
filters are still used in most smoothness based methods.
Given the large number of suggested priors, a natural question to ask is: what is the best prior to use?
One way to answer this question is to use these priors as a basis for an optical flow estimation algorithm and see which algorithm gives the best performance. Although such an approach is certainly
informative it is difficult to get a definitive answer using it. For example, Sun et al. [11] reported that
adding a non-local smoothness term to a robust smoothness prior significantly improved results on
the Middlebury benchmark, while Geiger et al. [5] reported that this term decreased performance on
KITTI benchmark. Perhaps the main difficulty with this approach is that the prior is only one part of
an optical flow estimation algorithm. It is always combined with a non-convex likelihood term and
optimized using a nonlinear optimization algorithm. Often the parameters of the optimization have
a very large influence on the performance of the algorithm.
In this paper we take an alternative approach. Motivated by recent advances in natural image statistics and the availability of new datasets, we compare different priors in terms of (1) log likelihood
on held-out data and (2) inference performance with tractable posteriors. Our results allow us to
rigorously compare different prior assumptions.
2
Comparing priors as density models
In order to compare different prior models as density models, we generate a training set and test
set of optical flow patches from the ground truth databases. Denoting by f a single vector that
concatenates all the optical flow in a patch (e.g. if we consider 8 ? 8 patches, f is a vector of length
128 where the first 64 components denote u and the last 64 components denote v). Given a prior
probability model Pr(f ; ?) we use the training set to estimate the free parameters of the model ? and
then we measure the log likelihood of held out patches from the test set.
From Sintel, we divided the pairs of frames for which ground truth is available into 708 pairs which
we used for training and 333 pairs which we used for testing. The data is divided into scenes and we
made sure that different scenes are used in training and testing. We created a second test set from
the KITTI dataset by choosing a subset of patches for which full ground truth flow was available.
Since we only consider full patches, this set is smaller and hence we use it only for testing, not for
training.
The priors we compared are:
• Lucas and Kanade. This algorithm is equivalent to the assumption that the observed flow is
generated by a constant (u0 , v0 ) that is corrupted by IID Gaussian noise. If we also assume
that u0 , v0 have a zero mean Gaussian distribution, Pr(f ) is a zero mean multidimensional
Gaussian with covariance given by $\sigma_p^2 O O^T + \sigma_n^2 I$, where O is a binary $128 \times 2$ matrix, $\sigma_p$ is the standard deviation of $u_0, v_0$, and $\sigma_n$ is the standard deviation of the noise.
• Horn and Schunck. By exponentiating $J_{HS}$ we see that $\Pr(f; \theta)$ is a multidimensional Gaussian with inverse covariance matrix $\lambda D^T D$, where D is a $256 \times 128$ derivative matrix that computes the derivatives of the flow field at each pixel and $\lambda$ is the weight given to the prior relative to the data term. This inverse covariance matrix is not positive definite, so we use $\lambda D^T D + \epsilon I$ and determine $\lambda, \epsilon$ using maximum likelihood.
• L1. We exponentiate $J_{BA}$ and obtain a multidimensional Laplace distribution. As in Horn and Schunck, this distribution is not normalizable, so we multiply it by an IID Laplacian prior on each component with variance $1/\epsilon$. This again gives two free parameters $(\lambda, \epsilon)$
which we find using maximum likelihood. Unlike the Gaussian case, the solution of the
ML parameters and the normalization constant cannot be done in closed form, and we use
Hamiltonian Annealed Importance Sampling [10].
• Gaussian Mixture Models (GMM). Motivated by the success of GMMs in modeling natural
image statistics [14] we use the training set to estimate GMM priors for optical flow. Each
mixture component is a multidimensional Gaussian with full covariance matrix and zero
mean and we vary the number of components between 1 and 64. We train the GMM using
the standard Expectation-Maximization (EM) algorithm using mini-batches. Even with a
few mixture components, the GMM has far more free parameters than the previous models
but note that we are measuring success on held out patches so that models that overfit
should be penalized.
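As an illustration of the evaluation protocol, the following sketch builds the Lucas–Kanade covariance described above and scores held-out samples under the resulting Gaussian; the parameter values are stand-ins for the maximum-likelihood fits.

```python
import numpy as np
from scipy.stats import multivariate_normal

def lk_covariance(patch_dim=64, sigma_p=10.0, sigma_n=1.0):
    # f stacks u (first 64 entries) and v (last 64); O maps (u0, v0) to a
    # constant flow over the patch.
    d = 2 * patch_dim
    O = np.zeros((d, 2))
    O[:patch_dim, 0] = 1.0
    O[patch_dim:, 1] = 1.0
    return sigma_p**2 * O @ O.T + sigma_n**2 * np.eye(d)

cov = lk_covariance()
model = multivariate_normal(mean=np.zeros(cov.shape[0]), cov=cov)
test_patches = np.random.multivariate_normal(np.zeros(cov.shape[0]), cov, size=100)
print(model.logpdf(test_patches).mean())   # mean held-out log likelihood
```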
A summary of our results is shown in figure 2, where we show the mean log likelihood on the Sintel test set. One interesting thing that can be seen is that the local statistics validate some assumptions commonly used by computer vision researchers. For example, the Horn and Schunck smoothness prior is as good as the optimal Gaussian prior (GMM1) even though it uses local first
derivatives. Also, the robust prior (L1) is much better than Horn and Schunck. However, as the number of Gaussians increases, the GMM becomes significantly better than a robust prior on local derivatives.
A closer inspection of our results is shown in figure 3. Each figure shows the histogram of log likelihood of held out patches: the more shifted the histogram is to the right, the better the performance.
It can be seen that the GMM is indeed much better than the other priors including cases where the
test set is taken from KITTI (rather than Sintel) and when the patch size is 12 × 12 rather than 8 × 8.
[Figure 2 bar plot: mean log-likelihood (0–5) for the models LK, HS, L1, GMM1, GMM2, GMM4, GMM8, GMM16, GMM64.]
Figure 2: mean log likelihood of the different models for 8 × 8 patches extracted from held out data
from Sintel. The GMM outperforms the models that are assumed by computer vision researchers.
2.1 Comparing models using tractable inference
A second way of comparing the models is by their ability to restore corrupted patches of optical
flow. We are not claiming that optical flow restoration is a real-world application (although using
priors to "fill in" holes in optical flow is quite common, e.g. [12, 8]). Rather, we use it because
for the models we are discussing the inference can either be done in closed form or using convex
optimization, so we would expect that better priors will lead to better performance.
We perform two flow restoration tasks. In "flow denoising" we take the ground truth flow and add IID Gaussian noise to all flow vectors. In "flow inpainting" we add a small amount of noise to all
[Figure 3 plots: log-likelihood histograms (log-likelihood vs. log(fraction of patches)) for LK, HS, L1, and GMM64 on the Sintel and KITTI test sets, for 8 × 8 and 12 × 12 patches.]
two different patch sizes. As can be seen, the GMM outperforms other models in all four cases.
flow vectors and a very big amount of noise to some of the flow vectors (essentially meaning that
these flow vectors are not observed). For the Gaussian models and the GMM models the Bayesian
Least Squares (BLS) estimator of f given y can be computed in closed form. For the Laplacian
model, we use MAP estimation which leads to a convex optimization problem. Since MAP may be
suboptimal for this case, we optimize the parameters $\lambda, \epsilon$ for MAP inference performance.
Results are shown in figures 4,5. The standard deviation of the ground truth flow is approximately
11.6 pixels and we add noise with standard deviations of 10, 20, and 30 pixels. Consistent with the
log likelihood results, L1 outperforms the Gaussian methods but is outperformed by the GMM. For
small noise values the difference between L1 and the GMM is small, but as the amount of noise
increases L1 becomes similar in performance to the Gaussian methods and is much worse than the
GMM.
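For the GMM, the closed-form BLS estimate is a responsibility-weighted sum of per-component Wiener filters; the following sketch assumes a zero-mean GMM prior and IID Gaussian noise of variance s2, with random stand-ins for the learned parameters.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_bls_denoise(y, weights, covs, s2):
    # Posterior over components: pi_k * N(y; 0, Sigma_k + s2*I), normalized.
    d = y.shape[0]
    log_r = np.array([np.log(w) + multivariate_normal(np.zeros(d), C + s2 * np.eye(d)).logpdf(y)
                      for w, C in zip(weights, covs)])
    r = np.exp(log_r - log_r.max())
    r /= r.sum()
    # BLS estimate: responsibility-weighted per-component Wiener filters.
    return sum(ri * (C @ np.linalg.solve(C + s2 * np.eye(d), y))
               for ri, C in zip(r, covs))

rng = np.random.default_rng(0)
d, K = 8, 3
covs = [A @ A.T + 0.1 * np.eye(d) for A in rng.standard_normal((K, d, d))]
weights = np.ones(K) / K
y = rng.standard_normal(d)
print(gmm_bls_denoise(y, weights, covs, s2=1.0))
```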
3 The secret of the GMM
We now take a deeper look at how the GMM models optical flow patches. The first (and not surprising) thing we found is that the covariance matrices learned by the model are block diagonal (so that
the u and v components are independent given the assignment to a particular component).
More insight can be gained by considering the GMM as a local subspace model: a patch which
is generated by component k is generated as a linear combination of the eigenvectors of the kth
covariance. The coefficients of the linear combination have energy that decays with the eigenvalue:
so each patch can be well approximated by the leading eigenvectors of the corresponding covariance.
Unlike global subspace models, different subspace models can be used for different patches, and
during inference with the model one can infer which local subspace is most likely to have generated
the patch.
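The subspace view can be made concrete with a small sketch that extracts, for one component, the leading eigenvectors capturing 99% of its variance (the criterion used in figure 6); the covariance here is a random stand-in.

```python
import numpy as np

def leading_eigenvectors(cov, frac=0.99):
    # Eigen-decompose the component covariance and keep enough leading
    # eigenvectors to capture `frac` of the variance.
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]     # sort descending
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), frac)) + 1
    return vecs[:, :k], vals[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((16, 16))
vecs, vals = leading_eigenvectors(A @ A.T)
print(vecs.shape, vals.shape)
```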
Figure 6 shows the dominant leading eigenvectors of all 32 covariance matrices in the GMM32
model: the eigenvectors of u are followed by the eigenvectors of v. The number of eigenvectors
displayed in each row is set so that they capture 99% of the variance in that component. The rows
are organized by decreasing mixing weight. The right hand half of each row shows (u,v) patches
that are sampled from that Gaussian.
[Figure 4 plots: PSNR vs. log(fraction of patches) for LK, HS, L1, and GMM64; denoising with noise σ = 10, 20, 30 and inpainting with hole sizes 2 × 2, 4 × 4, 6 × 6.]
Figure 4: Denoising with different noise values and inpainting with different hole sizes.
Figure 5: Visualizing denoising performance (σ = 30).
It can be seen that the first 10 components or so model very smooth components (in fact the samples
appear to be completely flat). A closer examination of the eigenvalues shows that these ten components correspond to smooth motions of different speeds. This can also be seen by comparing the
v samples on the top row which are close to gray with those in the next two rows which are much
closer to black or white (since the models are zero mean, black and white are equally likely for any
component).
As can be seen in the figure, almost all the energy in the first components is captured by uniform
motions. Thus these components are very similar to the non-local smoothness assumption suggested in [11]: they not only assume that derivatives are small but that the entire 8 × 8 patch is constant. However, unlike the suggestion in [11] to enforce non-local smoothness
by applying a median filter at all pixels, the GMM only applies non-local smoothness at a subset of
patches that are inferred to be generated by such components.
As we go down in the figure towards more rare components. we see that the components no longer
model flat components but rather motion boundaries. This can be seen both in the samples (rightmost
rows) and also in the leading eigenvectors (shown on the left) which each control one side of a
boundary. For example, the bottom row of the figure illustrates a component that seems to generate
primarily diagonal motion boundaries.
Interestingly, such local subspace models of optical flow have also been suggested by Fleet et al. [4].
They used synthetic models of moving occlusion boundaries and bars to learn linear subspace models of the flow. The GMM seems to support their intuition that learning separate linear subspace
models for flat vs motion boundary is a good idea. However, unlike the work of Fleet et al. the
separation into ?flat? vs. ?motion boundary? was learned in an unsupervised fashion directly from
the data.
[Figure 6 layout: leading eigenvectors (u, v) on the left, patch samples (u, v) on the right, one row per component.]
Figure 6: The eigenvectors and samples of the GMM components. GMM is better because it explicitly models edges and flat patches separately.
4 A joint model for optical flow and intensity
As mentioned in the introduction, many authors have suggested modifying the smoothness assumption by conditioning it on the local intensity pattern and giving a higher penalty for motion discontinuities in the absence of intensity discontinuities. We therefore ask, does conditioning on the local
intensity give better log likelihood on held out flow patches? Does it give better performance in
tractable inference tasks?
We evaluated two flow models that are conditioned on the local intensity pattern. The first one is a
conditional Gaussian (eq. 1) with exponential weights, i.e. $w(I_x) = \exp(-I_x^2/\sigma^2)$, and the variance parameter $\sigma^2$ is optimized to maximize performance. The second one is a Gaussian mixture model
that simultaneously models both intensity and flow.
The simultaneous GMM we use includes a 200 component GMM to model the intensity together
with a 64 component GMM to model the flow. We allow a dependence between the hidden variable
of the intensity GMM and that of the flow GMM. This is equivalent to a hidden Markov model
(HMM) with 2 hidden variables: one represents the intensity component and one represents the
flow component (figure 8). We learn the HMM using the EM algorithm. Initialization is given
by independent GMMs learned for the intensity (we actually use the one learned by [14] which is
available on their website) and for the flow. The intensity GMM is not changed during the learning.
Conditioned on the intensity pattern, the flow distribution is still a GMM with 64 components (as in
the previous section) but the mixing weights depend on the intensity.
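Concretely, conditioning works by mixing the rows of the learned transition matrix with the intensity responsibilities, as in the following sketch; all arrays here are random stand-ins for the learned model.

```python
import numpy as np

def conditional_flow_weights(intensity_resp, transition):
    # intensity_resp: posterior over the 200 intensity components for a patch.
    # transition[i, j] = P(h_flow = j | h_intensity = i).
    w = intensity_resp @ transition
    return w / w.sum()

rng = np.random.default_rng(2)
q = rng.dirichlet(np.ones(200))            # stand-in intensity responsibilities
T = rng.dirichlet(np.ones(64), size=200)   # stand-in 200 x 64 transition matrix
print(conditional_flow_weights(q, T).shape)   # (64,)
```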
Given these two conditional models, we now ask: will the conditional models give better performance than the unconditional ones? The answer, shown in figure 7, was surprising (to us). Conditioning on the intensity gives basically zero improvement
in flow denoising only for very large amounts of noise. Note that for all models shown in this figure,
the denoised estimate is the Bayesian Least Squares (BLS) estimate, and is optimal given the learned
models.
To investigate this effect, we examine the transition matrix between the intensity components and
the flow components (figure 8). If intensity and flow were independent, we would expect all rows
of the transition matrix to be the same. If an intensity boundary always lead to a flow boundary,
we would expect the bottom rows of the matrix to have only one nonzero element. By examining
the learned transition matrix we find that while there is a dependency structure, it is not very strong.
7
Regardless of whether the intensity component corresponds to a boundary or not, the most likely
flow components are flat. When there is an intensity boundary, the flow boundary in the same
orientation becomes more likely. However, even though it is more likely than in the unconditioned
case, it is still less likely than the flat components.
To rule out that this effect is due to a local optimum found by EM, we conducted additional experiments whereby the emission probabilities were held fixed to the GMMs learned independently for
flow and motion and each patch in the training set was assigned one intensity and one flow component. We then estimated the joint distribution over flow and motion components by simply counting
the relative frequency in the training set. The results were nearly identical to those found by EM.
In summary, while our learned model supports the standard intuition that motion boundaries are
more likely at intensity boundaries, it suggests that when dealing with a large dataset with high
variability, there is very little benefit (if any) in conditioning flow models on the local intensity.
[Figure 7: HMM diagram (h_intensity generating intensity, h_flow generating flow, with a link between h_intensity and h_flow), alongside log-likelihood histograms and denoising (σ = 90) PSNR curves for HS, HSI, GMM, and HMM.]
Figure 7: The hidden Markov model we use to jointly model intensity and flow. Both log likelihood
and inference evaluations show almost no improvement of conditioning flow on intensity.
[Figure 8: left, the learned transition matrix between h_intensity (200 components) and h_flow (64 components); right, rows of the matrix compared to the unconditional mixing weights.]
Figure 8: Left: the transition matrix learned by the HMM. Right: comparing rows of the matrix
to the unconditional mixing weights. Conditioned on an intensity boundary, motion boundaries
become more likely but are still less likely than a flat motion.
5 Discussion
Optical flow has been an active area of research for over 30 years in computer vision, with many
methods based on assumed priors over flow fields. In this paper, we have leveraged the availability
of large ground truth databases to learn priors from data and compare our learned models to the
assumptions typically made by computer vision researchers. We find that many of the assumptions
are actually supported by the statistics (e.g. the Horn and Schunck model is close to the optimal Gaussian model, robust models are better, intensity discontinuities make motion discontinuities
more likely). However, a learned GMM model with 64 components significantly outperforms the
standard models used in computer vision, primarily because it explicitly distinguishes between flat
patches and boundary patches and then uses a different form of nonlocal smoothness for the different
cases.
Acknowledgments
Supported by the Israeli Science Foundation, Intel ICRI-CI and the Gatsby Foundation.
References
[1] M. Bethge. Factorial coding of natural images: how effective are linear models in removing higher-order dependencies? 23(6):1253–1268, June 2006.
[2] Michael J. Black and P. Anandan. A framework for the robust estimation of optical flow. In ICCV, pages 231–236, 1993.
[3] Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. A naturalistic open source movie for optical flow evaluation. In ECCV (6), pages 611–625, 2012.
[4] David J. Fleet, Michael J. Black, Yaser Yacoob, and Allan D. Jepson. Design and use of linear models for image motion analysis. International Journal of Computer Vision, 36(3):171–193, 2000.
[5] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, pages 3354–3361, 2012.
[6] Berthold K. P. Horn and Brian G. Schunck. Determining optical flow. Artificial Intelligence, 17(1):185–203, 1981.
[7] Bruce D. Lucas, Takeo Kanade, et al. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, 1981.
[8] Xiaofeng Ren. Local grouping for optical flow. In CVPR, 2008.
[9] Stefan Roth and Michael J. Black. On the spatial statistics of optical flow. International Journal of Computer Vision, 74(1):33–50, 2007.
[10] J. Sohl-Dickstein and B. J. Culpepper. Hamiltonian annealed importance sampling for partition function estimation. 2011.
[11] Deqing Sun, Stefan Roth, and Michael J. Black. Secrets of optical flow estimation and their principles. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2432–2439. IEEE, 2010.
[12] Li Xu, Zhenlong Dai, and Jiaya Jia. Scale invariant optical flow. In Computer Vision – ECCV 2012, pages 385–399. Springer, 2012.
[13] Henning Zimmer, Andrés Bruhn, and Joachim Weickert. Optic flow in harmony. International Journal of Computer Vision, 93(3):368–388, 2011.
[14] Daniel Zoran and Yair Weiss. Natural images, gaussian mixtures and dead leaves. In NIPS, pages 1745–1753, 2012.
Third-Order Edge Statistics: Contour Continuation,
Curvature, and Cortical Connections
Steven W. Zucker, Computer Science, Yale University, New Haven, CT 06520, zucker@cs.yale.edu
Matthew Lawlor, Applied Mathematics, Yale University, New Haven, CT 06520, matthew.lawlor@yale.edu
Abstract
Association field models have attempted to explain human contour grouping performance, and to explain the mean frequency of long-range horizontal connections
across cortical columns in V1. However, association fields only depend on the
pairwise statistics of edges in natural scenes. We develop a spectral test of the sufficiency of pairwise statistics and show there is significant higher order structure.
An analysis using a probabilistic spectral embedding reveals curvature-dependent
components.
1 Introduction
Natural scene statistics have been used to explain a variety of neural structures. Driven by the
hypothesis that early layers of visual processing seek an efficient representation of natural scene
structure, decorrelating or reducing statistical dependencies between subunits provides insight into
retinal ganglion cells [17], cortical simple cells [13, 2], and the firing patterns of larger ensembles
[18]. In contrast to these statistical models, the role of neural circuits can be characterized functionally [3, 14] by positing roles such as denoising, structure enhancement, and geometric computations.
Such models are based on evidence of excitatory connections among co-linear and co-circular neurons [5], as well as the presence of co-linearity and co-circularity of edges in natural images [8],
[7]. The fact that statistical relationships have a geometric structure is not surprising: To the extent
that the natural world consists largely of piecewise smooth objects, the boundaries of those objects
should consist of piecewise smooth curves.
Common patterns between excitatory neural connections, co-occurrence statistics, and the geometry
of smooth surfaces suggest that the functional and statistical approaches can be linked. Statistical
questions about edge distributions in natural images have differential geometric analogues, such as
the distribution of intrinsic derivatives in natural objects. From this perspective, previous studies
of natural image statistics have primarily examined "second-order" differential properties of curves;
i.e., the average change in orientation along curve segments in natural scenes. The pairwise statistics
suggest that curves tend toward co-linearity, in that the (average) change in orientation is small.
Similarly, for long-range horizontal connections, cells with similar orientation preference tend to be
connected to each other.
Is this all there is? From a geometric perspective, do curves in natural scenes exhibit continuity in
curvatures, or just in orientation? Are edge statistics well characterized at second-order? Does the
same hold for textures?
To answer these questions one needs to examine higher-order statistics of natural scenes, but this
is extremely difficult computationally. One possibility is to design specialized patterns, such as intensity textures [16], but it is difficult to generalize such results into visual cortex. We make use
of natural invariances in image statistics to develop a novel spectral technique based on preserving
a probabilistic distance. This distance characterizes what is beyond association field models (discussed next) to reveal the "third-order" structure in edge distributions. It has different implications
for contours and textures and, more generally, for learning.
[Figure 1 schematic, panels (A)–(E): Natural Images → X-Y-θ Edges → Conditional Co-occurrence Probabilities → Embeddings → Edge Clusters → Likely edge combinations in natural images.]
Figure 1: Outline of paper: We construct edge maps from a large database of natural images, and
estimate the distribution of edge triplets. To visualize this distribution, we construct an embedding
which reveals likely triplets of edges. Clusters in this embedded space consist of curved lines.
2 Edge Co-occurrence Statistics
Edge co-occurrence probabilities are well studied [1, 8, 6, 11]. Following them, we use random
variables indicating edges at given locations and orientations. More precisely, an edge at position and orientation r_i = (x_i, y_i, θ_i), denoted X_{r_i}, is a {0, 1}-valued random variable. Co-occurrence statistics examine various aspects of pairwise marginal distributions, which we denote by P(X_{r_i}, X_{r_j}).
The image formation process endows scene statistics with a natural translation invariance. If the
camera were allowed to rotate randomly about the focal axis, natural scene statistics would also
have a rotational invariance. For computational convenience, we enforce this rotational invariance
by randomly rotating our images. Thus,
$$P(X_{r_1}, \ldots, X_{r_n}) = P(X_{T(r_1)}, \ldots, X_{T(r_n)})$$
where T is a roto-translation.
We can then estimate joint distributions of nearby edges by looking at patches of edges centered at a (position, orientation) location r_n and rotating the patch into a canonical orientation and position that we denote r_0. Let T(r_n) = r_0. Then

$$P(X_{r_1}, \ldots, X_{r_n}) = P(X_{T(r_1)}, \ldots, X_{r_0})$$
Several examples of statistics derived from the distribution of P(X_{r_i}, X_{r_0}) are shown in Fig. 2.
These are pairwise statistics of oriented edges in natural images. The most important visible feature
of these pairwise statistics is that of good continuation: Conditioned on the presence of an edge at
the center, edges of similar orientation and horizontally aligned with the edge at the center have high
probability. Note that all of the above implicitly or explicitly enforced rotation invariance, either by
[Figure 2 panels: August and Zucker, 2000; Geisler et al., 2001; Elder & Goldberg, 2002.]
Figure 2: Association fields derive from image co-occurrence statistics. Here we show three attempts
to characterize them. Different authors consider probabilities or likelihoods; Elder further conditions
on boundaries. We simply interpret them as illustrating the probability (likelihood) of an edge near
a horizontal edge at the center position.
Figure 3: Two approximately equally likely triples of edges under the pairwise independence assumption of Elder et. al. Conditional independence is one of several possible pairwise distributional
assumptions. Intuitively, however, the second triple is much more likely. We examine third-order
statistics to demonstrate that this is in fact the case.
only examining relative orientation with respect to a reference orientation or by explicit rotation of
the images.
It is critical to estimate the degree to which these pairwise statistics characterize the full joint distribution of edges (Fig. 3). Many models for neural firing patterns imply relatively low order joint
statistics. For example, spin-glass models [15] imply pairwise statistics are sufficient, while Markov
random fields have an order determined by the size of neighborhood cliques.
3 Contingency Table Analysis
To test whether the joint distribution of edges can be well described by pairwise statistics, we
performed a contingency table analysis of edge triples at two different threshold levels from images in the van Hateren database. We computed estimated joint distributions for each triple of edges in an 11 × 11 × 8 patch, not constructed to have an edge at the center. Using a χ² test, we computed the probability that each edge triple distribution could occur under the hypothesis H₀: {no three-way interaction}. This is a test of the hypothesis that

$$\log P(X_{r_i}, X_{r_j}, X_{r_k}) = f(X_{r_i}, X_{r_j}) + g(X_{r_j}, X_{r_k}) + h(X_{r_i}, X_{r_k})$$

for each triple (X_{r_i}, X_{r_j}, X_{r_k}), and includes the cases of independent edges, conditionally independent edges, and other pairwise interactions. For almost all triples, this probability was extremely
small. (The few edge triples for which the null hypothesis cannot be rejected consisted of edges that
were spaced very far apart, which are far more likely to be nearly statistically independent of one
another.)
n = 150,705,016
percentage of triples where p_{H₀} > .05:  threshold = .05: 0.0082%;  threshold = .1: 0.0067%

4 Counting Triple Probabilities
We chose a random sampling of black and white images from the van Hateren image dataset [10]. They were randomly rotated and then filtered using oriented Gabor filters covering 8 angles from [0, π). Each Gabor has a carrier period of 1.5 pixels per radian and an envelope standard deviation
of 5 pixels. The filters were convolved in near quadrature pairs, squared and summed.
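A minimal sketch of this filtering stage in Python/NumPy. The 8 orientations on [0, π), the carrier period, and the 5-pixel envelope follow the text; the kernel support and the exact Gabor parameterization are our assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(theta, sigma=5.0, freq=1.0 / 1.5, size=21):
    """Even/odd (near-quadrature) Gabor kernels at orientation theta."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the carrier
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # isotropic Gaussian envelope
    even = env * np.cos(2 * np.pi * freq * xr)
    odd = env * np.sin(2 * np.pi * freq * xr)
    return even - even.mean(), odd                   # make the even kernel mean-free

def orientation_energy(image, n_orient=8):
    """Quadrature-pair energy for each of n_orient orientations in [0, pi)."""
    energies = []
    for theta in np.arange(n_orient) * np.pi / n_orient:
        even, odd = gabor_pair(theta)
        e = fftconvolve(image, even, mode="same")
        o = fftconvolve(image, odd, mode="same")
        energies.append(e**2 + o**2)                 # squared and summed
    return np.stack(energies, axis=-1)               # shape (H, W, n_orient)
```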
Figure 4: Example image (a) and edges (b) for statistical analysis. Note: color corresponds to orientation.
To restrict analysis to the statistics of curves, we applied local non-maxima suppression across orientation columns in a direction normal to the given orientation. This threshold is a heuristic attempt
to exclude non-isolated curves due to dense textures. We note that previous studies in pairwise edge
statistics have used similar heuristics or hand labeling of edges to eliminate textures. The resulting
edge maps were subsampled to eliminate statistical dependence due to overlapping filters.
Thresholding the edge map yields X : U → {0, 1}, where U ⊂ R² × S is a discretization of R² × S. We treat X as a function or a binary vector as convenient. We randomly select 21 × 21 × 8 image patches with an oriented edge at the center, and denote these characteristic patches by V_i.
Since edges are significantly less frequent than their absence, we focus on (positive) edge co-occurrence statistics. For simplicity, we denote P(X_{r_i} = 1, X_{r_j} = 1, X_{r_k} = 1) by E[X_{r_i} X_{r_j} X_{r_k}]. In addition, we will denote the event X_{r_i} = 1 by Y_{r_i}. (A small orientation anisotropy has been
reported in natural scenes (e.g., [9]), but does not appear in our data because we effectively averaged
over orientations by randomly rotating the images.)
We compute the matrix M⁺, where

$$M^+_{ij} = E[X_{r_i} X_{r_j} \mid Y_{r_0}], \qquad M^+ \approx \frac{1}{n} \sum_{i=1}^{n} V_i V_i^\top$$

Figure 5: Histogram of edge probabilities. The threshold to include an edge in M⁺ is p > 0.2, and is marked in red.
where V_i is a (vectorized) random patch of edges centered around an edge with orientation θ_i = 0. In addition, we only compute pairwise probabilities for edges of high marginal probability (Fig. 5).
5 Visualizing Triples of Edges
By analogy with the pairwise analysis above, we seek to find those edge triples that frequently co-occur. But this is significantly more challenging. For pairwise statistics, one simply fixes an edge to lie in the center and "colors" the other edge by the joint probability of the co-occurring pair (Fig. 2).
No such plot exists for triples of edges. Even after conditioning, there are over 12 million edge
triples to consider.
Our trick: Embed edges in a low dimensional space such that the distance between the edges represents the relative likelihood of co-occurrence. We shall do this in a manner such that distance in
Embedded Space ≈ Relative Probability.
As before, let X_{r_i} be a binary random variable, where X_{r_i} = 1 means there is an edge at location r_i = (x_i, y_i, θ_i). We define a distance between edges

$$D_+^2(r_i, r_j) = E[X_{r_i}^2 \mid Y_{r_0}] - 2E[X_{r_i} X_{r_j} \mid Y_{r_0}] + E[X_{r_j}^2 \mid Y_{r_0}] = M^+_{ii} - 2M^+_{ij} + M^+_{jj}$$
The first and the last terms represent pairwise co-occurrence probabilities; i.e., these are the association field. The middle term represents the interaction between X_{r_i} and X_{r_j} conditioned on the presence of X_{r_0}. Thus this distance is zero if the edges always co-occur in images, given the horizontal edge at the origin, and is large if the pair of edges frequently occur with the horizontal edge
but rarely together. (The relevance to learning is discussed below.)
We will now show how, for natural images, edges can be placed in a low dimensional space where
the distance in that space will be proportional to this probabilistic distance.
6 Dimensionality Reduction via Spectral Theorem
We exploit the fact that M⁺ is symmetric and introduce the spectral expansion

$$M^+ = \sum_{l=1}^{n} \lambda_l\, \phi_l(i)\, \phi_l(j)$$

where φ_l is an eigenvector of M⁺ with eigenvalue λ_l.
Define the spectral embedding ψ : (x_i, y_i, θ_i) → R^n,

$$\psi(r_i) = \bigl( \sqrt{\lambda_1}\,\phi_1(i),\; \sqrt{\lambda_2}\,\phi_2(i),\; \ldots,\; \sqrt{\lambda_n}\,\phi_n(i) \bigr) \qquad (1)$$
The Euclidean distance between embedded points is then
$$\|\psi(r_i) - \psi(r_j)\|^2 = \langle \psi(r_i), \psi(r_i)\rangle - 2\langle \psi(r_i), \psi(r_j)\rangle + \langle \psi(r_j), \psi(r_j)\rangle = M^+_{ii} - 2M^+_{ij} + M^+_{jj} = D_+^2(r_i, r_j)$$

ψ maps edges to points in an embedded space where squared distance is equal to relative probability.
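Computationally, ψ is a truncated eigendecomposition of M⁺; a minimal sketch, where the number of retained dimensions is a free choice guided by the spectrum in Fig. 6:

```python
import numpy as np

def spectral_embedding(M, n_dims=3):
    """Embed edges so that squared Euclidean distance equals D_+^2."""
    lam, phi = np.linalg.eigh(M)           # symmetric M+: real spectrum
    order = np.argsort(lam)[::-1]          # sort by decreasing eigenvalue
    lam, phi = lam[order], phi[:, order]
    lam = np.clip(lam, 0.0, None)          # guard tiny negative eigenvalues
    return phi[:, :n_dims] * np.sqrt(lam[:n_dims])   # rows are psi(r_i)
```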
The usefulness of this embedding comes from the fact that the spectrum of M⁺ decays rapidly (Fig. 6). Therefore we truncate ψ, including only dimensions with high eigenvalues. This gives a
dramatic reduction in dimensionality, and allows us to visualize the relationship between triples of
edges (Fig. 7). In particular, a cluster, say, C, of edges in embedding space all have high probability
of co-occurring, and the diameter of the cluster
$$d = \max_{i,j \in C} D_+^2(r_i, r_j)$$
bounds the conditional co-occurrence probability of all edges in the cluster.
$$E[X_{r_i} X_{r_j} \mid Y_{r_0}] \;\ge\; \frac{2p - d}{2}$$
[Figure 6 plot: spectrum of the co-occurrence kernel; λ (vertical) against eigenvalue index 0–40 (horizontal).]
Figure 6: Spectrum of M⁺. Other spectra are similar. Note rapid decay of the spectrum indicating
the diffusion distance is well captured by embedding using only the first few eigenfunctions.
[Figure 7 plots: top row, "Spectral embedding colored by embedding coordinates"; bottom row, "Edge map colored by embedding coordinates"; columns correspond to φ₂, φ₃, φ₄.]
Figure 7: Display of third-order edge structure showing how oriented edges are related to their
spectral embeddings. (top) Spectral embeddings. Note clusters of co-occurring edges. (bottom)
Edge distributions. The eigenvectors of M⁺ are used to color both the edges and the embedding. The color in each figure can be interpreted as a coordinate given by one of the φ vectors. Edges that share colors (coordinates) in all dimensions (φ₂, φ₃, φ₄) are close in probabilistic distance, which
implies they have a high probability of co-occurring along with the edge in the center. Compare with
Fig. 2 where red edges all have high probability of occurring with the center, but no information is
known about their co-occurrence probability.
where p = min_i E[X_{r_i} | Y_{r_0}]. For our embeddings p > .2; see Fig. 5.
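The bound is immediate from the definition of D⁺²; since the X_{r_i} are binary, E[X²_{r_i} | Y_{r_0}] = E[X_{r_i} | Y_{r_0}] =: p_i, and a one-line rearrangement gives:

```latex
E[X_{r_i} X_{r_j} \mid Y_{r_0}]
  = \tfrac{1}{2}\bigl(p_i + p_j - D_+^2(r_i, r_j)\bigr)
  \;\ge\; \tfrac{1}{2}\,(2p - d),
\qquad p = \min_i p_i,\quad d = \max_{i,j \in C} D_+^2(r_i, r_j).
```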
To highlight information not contained in the association field, we normalized our probability matrix
by its row sums, and removed all low-probability edges. Embedding the mapping from R² × S → R^m reveals the cocircular structure of edge triples in the image data (Fig. 7). The colors along each
column correspond, so similar colors map to nearby points along the dimension corresponding to
the row. Under this dimensionality reduction, each small cluster in diffusion space corresponds to
half of a cocircular field. In effect, the coloring by φ₂ shows good continuation in orientation (with our crude quantization) while the coloring by φ₄ shows co-circular connections. In effect, then, the
association field is the union of co-circular connections, which also follows from marginalizing the
third-order structure away. We used 40,000 (21 × 21 × 8) patches.
Shown in Fig. 7 are low dimensional projections of the diffusion map and their corresponding colorings in R² × S. To provide a neural interpretation of these results, let each point in R² × S represent a neuron with a receptive field centered at the point (x, y) with preferred orientation θ. Each cluster
then signifies those neurons that have a high probability of co-firing given that the central neuron
fires, so clusters in diffusion coordinates should be ?wired? together by the Hebbian postulate. Such
curvature-based facilitation can explain the non-monotonic variance in excitatory long-range horizontal connections in V1 [3, 4]. It may also have implications for the receptive fields of V2 neurons.
As clusters of co-circular V1 cells are correlated in their firing, it may be efficient to represent them
with a single cell with excitatory feedforward connections. This predicts that efficient coding models
that take high order interactions into account should exhibit cells tuned to curved boundaries.
7 Implications for Inhibition and Texture
Our approach also has implications beyond excitatory connections for boundary facilitation. We
repeated our conditional spectral embedding, but now conditioned on the absence of an edge at the
center (Fig. 8). This could provide a model for inhibition, as clusters of edges in this embedding
are likely to co-occur conditioned on the absence of an edge at the center. We find that the embedding has no natural clustering. Compared to excitatory connections, this suggests that inhibition is
relatively unstructured, and agrees with many neurobiological studies.
Figure 8: Embeddings conditioned on the absence of an edge at the center location. Note how
less structured it is, compared to the positive embeddings. As such it could serve as a model for
inhibitory connections, which span many orientations.
Finally, we repeated this third-order analysis (but without local non-maxima suppression) on a structured model for isotropic textures on 3D surfaces and again found a curvature dependency (Fig. 9).
Every 3-D surface has a pair of associated dense texture flows in the image plane that correspond to
the slant and tilt directions of the surface. For isotropic textures, the slant direction corresponds to
the most likely orientation signaled by oriented filters.
As this is a representation of a dense vector field, it is more difficult to interpret than the edge map.
We therefore applied k-means clustering in the embedded space and segmented the resulting vector
field. The resulting clusters show two-sided continuation of the texture flow with a fixed tangential
curvature (Fig. 10).
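A sketch of this segmentation step, assuming scikit-learn's k-means; the number of clusters is our placeholder:

```python
from sklearn.cluster import KMeans

def segment_flow(embedding, n_clusters=8, seed=0):
    """Cluster embedded orientations; each cluster groups co-occurring
    edges, here used to segment the dense texture flow (Fig. 10)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(embedding)       # cluster index per edge
```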
In summary, then, we have developed a method for revealing third-order orientation structure by
spectral methods. It is based on a diffusion metric that makes third-order terms explicit, and yields
a Euclidean distance measure by which edges can be clustered. Given that long-range horizontal
connections are consistent with these clusters, how biological learning algorithms converge to them
remains an open question. Given that research in computational neuroscience is turning to third-order [12] and specialized interactions, this question now becomes more pressing.
[Figure 9 plots, panels (a) and (b): slant-orientation maps and embeddings along the coordinates φ₂, φ₃, φ₄.]
Figure 9: (top) Oriented textures provide information about surface shape. (bottom) As before,
we looked at the conditional co-occurrence matrices of edge orientations over a series of randomly
generated shapes. Slant orientations and embedding colored by each eigenvector. The edge map is
thresholded to contain only orientations of high probability. The resulting embedding ψ(V_i) of those orientations is shown below. The eigenvectors of M⁺ are used to color both the orientations and
the embedding. Clusters of orientations in this embedding have a high probability of co-occurring
along with the edge in the center.
Figure 10: Clustering of dense texture flows. Color corresponds to the cluster index. Clusters were
separated into different figures so as to minimize the x, y overlap of the orientations. Embedding on
the right is identical to the embeddings above, but viewed along the φ₃, φ₄ axes.
References
[1] Jonas August and Steven W. Zucker. The curve indicator random field: Curve organization via edge correlation. In Perceptual Organization for Artificial Vision Systems, pages 265–288. Springer, 2000.
[2] A. J. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[3] O. Ben-Shahar and S. Zucker. Geometrical computations explain projection patterns of long-range horizontal connections in visual cortex. Neural Computation, 16(3):445–476, 2004.
[4] William H. Bosking, Ying Zhang, Brett Schofield, and David Fitzpatrick. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. The Journal of Neuroscience, 17(6):2112–2127, 1997.
[5] Heather J. Chisum, François Mooser, and David Fitzpatrick. Emergent properties of layer 2/3 neurons reflect the collinear arrangement of horizontal connections in tree shrew visual cortex. The Journal of Neuroscience, 23(7):2947–2960, 2003.
[6] James H. Elder and Richard M. Goldberg. Ecological statistics of gestalt laws for the perceptual organization of contours. Journal of Vision, 2(4), 2002.
[7] J. H. Elder and R. M. Goldberg. The statistics of natural image contours. In Proceedings of the IEEE Workshop on Perceptual Organisation in Computer Vision. Citeseer, 1998.
[8] W. S. Geisler, J. S. Perry, B. J. Super, and D. P. Gallogly. Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41(6):711–724, 2001.
[9] Bruce C. Hansen and Edward A. Essock. A horizontal bias in human visual processing of orientation and its correspondence to the structural components of natural scenes. Journal of Vision, 4(12), 2004.
[10] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings: Biological Sciences, 265(1394):359–366, Mar 1998.
[11] Norbert Krüger. Collinearity and parallelism are statistically significant second-order relations of complex cell responses. Neural Processing Letters, 8(2):117–129, 1998.
[12] Ifije E. Ohiorhenuan and Jonathan D. Victor. Information-geometric measure of 3-neuron firing patterns characterizes scale-dependence in cortical networks. Journal of Computational Neuroscience, 30(1):125–141, 2011.
[13] Bruno A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[14] T. K. Sato, I. Nauhaus, and M. Carandini. Traveling waves in visual cortex. Neuron, 75(2):218–229, 2012.
[15] Elad Schneidman, Michael J. Berry, Ronen Segev, and William Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, 2006.
[16] Gašper Tkačik, Jason S. Prentice, Jonathan D. Victor, and Vijay Balasubramanian. Local statistics in natural scenes predict the saliency of synthetic textures. Proceedings of the National Academy of Sciences, 107(42):18149–18154, 2010.
[17] J. H. van Hateren. A theory of maximizing sensory information. Biological Cybernetics, 68(1):23–29, 1992.
[18] William E. Vinje and Jack L. Gallant. Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287(5456):1273–1276, 2000.
What Are the Invariant Occlusive Components of
Image Patches? A Probabilistic Generative Approach
Georgios Exarchakis
Redwood Center for Theoretical Neuroscience,
The University of California, Berkeley, US
exarchakis@berkeley.edu
Zhenwen Dai
University of Sheffield, UK, and
FIAS, Goethe-University Frankfurt, Germany
z.dai@sheffield.ac.uk
Jörg Lücke
Cluster of Excellence Hearing4all, University of Oldenburg, Germany,
and BCCN Berlin, Technical University Berlin, Germany
joerg.luecke@uni-oldenburg.de
Abstract
We study optimal image encoding based on a generative approach with non-linear
feature combinations and explicit position encoding. By far most approaches to
unsupervised learning of visual features, such as sparse coding or ICA, account
for translations by representing the same features at different positions. Some
earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of
components to encode image patches. Here, we for the first time apply a model
with non-linear feature superposition and explicit position encoding for patches.
By avoiding linear superpositions, the studied model represents a closer match to
component occlusions which are ubiquitous in natural images. In order to account
for occlusions, the non-linear model encodes patches qualitatively very different
from linear models by using component representations separated into mask and
feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts
the components, and that it can correctly identify the occlusive components with
the hidden variables of the model. On natural image patches, the model learns
component masks and features for typical image components. By using reverse
correlation, we estimate the receptive fields associated with the model's hidden
units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that
capture occlusions and invariances can be trained efficiently on image patches, and
that the resulting encoding represents an alternative model for the neural encoding
of images in the primary visual cortex.
1 Introduction
Probabilistic generative models are used to mathematically formulate the generation process of observed data. Based on a good probabilistic model of the data, we can infer the processes that have
generated a given data point, i.e., we can estimate the hidden causes of the generation. These hidden
causes are usually the objects we want to infer knowledge about, be it for medical data, biological processes, or sensory data such as acoustic or visual data. However, real data are usually very
complex, which makes the formulation of an exact data model infeasible. Image data are a typical
example of such complex data. The true generation process of images involves, for instance, different objects with different features at different positions, mutual occlusions, object shades, lighting
[Figure 1 schematic: mask and feature parameters of Component 1 and Component 2, each translated to its position and combined, with occlusion, over a background.]
Figure 1: An illustration of the generation process of our model.
conditions and reflections due to self-structure and nearby objects. Even if a generative model can
capture some of these features, an inversion of the model using Bayes' rule very rapidly becomes
analytically and computationally intractable. As a consequence, generative modelers make compromises to allow for trainability and applicability of their generative approaches.
Two properties that have, since long, been identified as crucial for models of images are object occlusions [1?5] and the invariance of object identity to translations [6?13]. However, models incorporating both occlusions and invariances suffer from a very pronounced combinatorial complexity.
They could, so far, only be trained with very low dimensional hidden spaces [2, 14, 15]. At first
glance, occlusion modeling is, furthermore, mathematically more inconvenient. For these reasons,
many studies including style and content models [16], other bi-linear models [17, 18], invariant
sparse coding [19, 20], or invariant NMF [21] do not model occlusions. Analytical and computation
reasons are often explicitly stated as the main motivation for the use of the linear superposition of
components (see, e.g., [16, 17]).
In this work, we for the first time study the encoding of natural image patches using a model with
both non-linear feature combinations and translation invariances.
2 A Generative Model with Non-linear and Invariant Components
The model used to study image patch encoding assumes an exclusive component combination, i.e.,
for each pixel exclusively one cause is made responsible. It thus shares the property of exclusiveness
with visual occlusions. The model will later be shown to capture occluding components. We will,
however, not model explicit occlusion using a depth variable (compare [2]) but will focus on the
exclusiveness property. The applied model is a novel version of the invariant occlusive components
model studied for mid-level vision earlier [22]. We first briefly reiterate the basic model in the
following and discuss the main differences of the new version afterwards.
We consider image patches y⃗ with D² observed scalar variables, y⃗ = (y_1, ..., y_{D²}). An image patch is assumed to contain a subset from a set of H components. Each component h can be located at a different position denoted by an index variable x_h ∈ {1, ..., D²}, which is associated with a set of permutation matrices that covers all the possible planar translations {T_1, ..., T_{D²}} (similar formulations have also been used in sprite models [14, 15]). Each component h is modeled to appear in an image patch with probability π_h ∈ (0, 1). Following [22], we do not model component presence and absence explicitly but, for mathematical convenience, assign the special "position" −1
to all the components which are not chosen to generate the patch. Assuming a uniform distribution
for the positions, the prior distribution for components and their positions is thus given by:
$$p(\vec{x} \mid \vec{\pi}) = \prod_h p(x_h \mid \pi_h), \qquad p(x_h \mid \pi_h) = \begin{cases} 1 - \pi_h, & x_h = -1 \\ \pi_h / D^2, & \text{otherwise} \end{cases} \qquad (1)$$

where the hidden variable x⃗ = (x_1, ..., x_H) contains the information on presence/absence and position of all the image components.
In contrast to linear models, the studied approach requires two sets of parameters for the encoding of image components: component masks and component features. Component masks describe
where an image component is located, and component features describe what a component encodes
(compare [2, 3, 14, 15]). High values of mask parameters α⃗_h encode the pixels most associated with a component h, but the encoding has to be understood relative to a global component position. The feature parameters w⃗_h encode the values of a component's features. Fig. 1 shows an example
of the mask and feature parameters for two typical low-level visual features. Given a particular position, the mask and feature parameters of the component are transformed to the target position by multiplying with a corresponding translation matrix, as in T_{x_h} α⃗_h and T_{x_h} w⃗_h. When generating an image
patch, two or more components may occupy the same pixel, but according to occlusion the pixel
value is exclusively determined by only one of them. This exclusiveness is formulated by defining
a mask variable m⃗ = (m_1, ..., m_{D²}). For a pixel at a position d, m_d determines which component is responsible for the pixel value. Therefore, m_d takes a value from the set of present components Ω = {h | x_h ≠ −1} plus a special value "0" indicating background, and the prior distribution of m⃗ is defined as:
$$p(\vec{m} \mid \vec{x}, A) = \prod_{d=1}^{D^2} p(m_d \mid \vec{x}, A), \qquad p(m_d = h \mid \vec{x}, A) = \begin{cases} \dfrac{\alpha_0}{\alpha_0 + \sum_{h' \in \Omega} (T_{x_{h'}} \vec{\alpha}_{h'})_d}, & h = 0 \\[2mm] \dfrac{(T_{x_h} \vec{\alpha}_h)_d}{\alpha_0 + \sum_{h' \in \Omega} (T_{x_{h'}} \vec{\alpha}_{h'})_d}, & h \in \Omega \end{cases} \qquad (2)$$
where A = (α⃗_1, ..., α⃗_H) contains the mask parameters for all the components, and α₀ defines the mask parameter for background. The mask variable m_d chooses the component h with a high likelihood if the translated mask parameter of the corresponding component is high at the position d. Note that m_d follows a mixture model given the presence/absence and positions of all the components x⃗. This can be thought of as an approximation to the distribution of mask variables marginalizing the depth orderings and pixel transparency in the exact occlusive model (see Supplement A for a comparison). After drawing the values of the hidden variables x⃗ and m⃗, an image patch can be generated
with a Gaussian noise model, which is given by:
$$p(\vec{y} \mid \vec{m}, \vec{x}, \Theta) = \prod_{d=1}^{D^2} p(y_d \mid m_d, \vec{x}, \Theta), \qquad p(y_d \mid m_d = h, \vec{x}, \Theta) = \begin{cases} \mathcal{N}(y_d;\, B,\, \sigma_B^2), & h = 0 \\ \mathcal{N}(y_d;\, (T_{x_h} \vec{w}_h)_d,\, \sigma^2), & h \in \Omega \end{cases} \qquad (3)$$

where σ² is the variance of components, and Θ = (π⃗, W, A, σ², α₀, B, σ_B²) are all the model parameters. The background distribution is a Gaussian distribution with mean B and variance σ_B².
Compared to an occlusive model with exact EM (see Supplement A), our approach will use the
exclusiveness approximation and a truncated posterior approximation in order to make learning
tractable.
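To make the generation process of Eqs. (1)–(3) concrete, the following is a minimal sampling sketch; implementing the permutation matrices T_{x_h} as circular shifts of the flattened D × D grid is our simplification:

```python
import numpy as np

def sample_patch(pi, W, A, alpha0, B, sigma, sigma_B, D, rng):
    """Draw one patch from the generative model (Eqs. 1-3).
    pi: (H,) presence priors; W, A: (H, D*D) feature and mask parameters."""
    H = len(pi)
    present = rng.random(H) < pi                     # Eq. (1): presence
    pos = rng.integers(0, D * D, size=H)             # uniform positions
    resp = [alpha0 * np.ones(D * D)]                 # background "mask" alpha_0
    feats = [np.full(D * D, B)]                      # background mean B
    for h in np.flatnonzero(present):                # translated masks/features
        resp.append(np.roll(A[h], pos[h]))
        feats.append(np.roll(W[h], pos[h]))
    resp = np.stack(resp)                            # (K+1, D*D)
    probs = resp / resp.sum(axis=0)                  # Eq. (2): p(m_d = k)
    y = np.empty(D * D)
    for d in range(D * D):
        k = rng.choice(len(probs), p=probs[:, d])    # draw mask variable m_d
        y[d] = rng.normal(feats[k][d],
                          sigma_B if k == 0 else sigma)  # Eq. (3)
    return y.reshape(D, D)
```

Calling it with `rng = np.random.default_rng(0)` generates one patch per call; sampling m_d pixel-wise directly mirrors the exclusiveness assumption of the model.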
The model described in (1) to (3) has been optimized for the encoding of image patches. First,
feature variables are scalar to encode light intensities or input from the lateral geniculate nucleus (LGN)
rather than color features for mid-level vision. Second, to capture the frequency of presence for
individual components, we implement the learning of the prior parameter of presence π⃗. Third, the
pre-selection function in the variational approximation (see below) has been adapted to the usage
of scalar valued features. As a scalar value is much less distinctive than the sophisticated image
features used in [22], the pre-selection of components has been changed to the complete component
instead of only salient features.
3 Efficient Likelihood Optimization
Given a set of image patches Y = (y⃗^(1), ..., y⃗^(N)), learning is formulated as inferring the best model parameters w.r.t. the log-likelihood L = log p(Y | Θ). Following the Expectation Maximization (EM) approach, the parameter update equations are derived. The equations for the mask parameters α⃗_h and feature parameters w⃗_h are the same as in [22]. Additionally, we derived the update equation for the prior parameter of presence:
$$\pi_h = \frac{1}{N} \sum_{n=1}^{N} \sum_{\vec{x}:\, x_h \neq -1} p(\vec{x} \mid \vec{y}^{(n)}, \Theta) \qquad (4)$$
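In code, this update simply accumulates, for each component, the posterior mass of the truncated states in which the component is present; a sketch:

```python
import numpy as np

def update_pi(posteriors, states, H):
    """posteriors: list of (K_n,) normalized weights per data point;
    states: list of (K_n, H) position vectors (entry -1 = absent)."""
    pi = np.zeros(H)
    for q, X in zip(posteriors, states):
        pi += q @ (X != -1)                 # Eq. (4): mass on x_h != -1
    return pi / len(posteriors)
```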
By learning the prior parameters π_h, the probabilities of individual components' presence can be
estimated. This allows us to gain more insights about the statistics of image components. In the
update equations, a posterior distribution has been estimated for each data point, which corresponds
to the E-step of an EM algorithm. The posterior distribution of our model can be decomposed as:
$$p(\vec{m}, \vec{x} \mid \vec{y}, \Theta) = p(\vec{x} \mid \vec{y}, \Theta) \prod_{d=1}^{D^2} p(m_d \mid \vec{x}, \vec{y}, \Theta), \qquad (5)$$
in which p(x⃗ | y⃗, Θ) and p(m_d | x⃗, y⃗, Θ) are estimated separately. Computing the exact distribution of p(x⃗ | y⃗, Θ) is intractable, as it includes the combinatorics of the presence/absence of components
and their positions. An efficient posterior approximation, Expectation Truncation (ET), has been
successfully employed. ET approximates the posterior distribution as a truncated distribution [23]:
$$p(\vec{x} \mid \vec{y}, \Theta) \approx \frac{p(\vec{y}, \vec{x} \mid \Theta)}{\sum_{\vec{x}' \in \mathcal{K}^n} p(\vec{y}, \vec{x}' \mid \Theta)}, \quad \text{if } \vec{x} \in \mathcal{K}^n, \qquad (6)$$

and zero otherwise. If K^n is chosen to be small but to contain the states with most posterior probability mass, the computation of the posterior distribution becomes tractable while a high accuracy
Figure 2: Numerical experiments on artificial data. (a) Eight samples of the generated data sets. (b) The parameters of the eight components used to generate the data set. The 1st row contains the binary transparency parameters and the 2nd row shows the feature parameters. (c) The learned model parameters (H = 9). The top plot shows the learned prior probabilities π⃗. The 1st row shows the mask parameters A; the 2nd row shows the feature parameters W; the 3rd row gives a good visualization of only the frequently used elements/pixels (setting the feature parameter w_{hd} of the elements/pixels with α_{hd} < 0.5 to zero). (d) The result of inference given an image patch (shown on the left). The right side shows the four components inferred to be present (each takes a column). The 1st and 2nd rows show the mask and feature parameters shifted according to the MAP inference x⃗_MAP, and the 3rd row shows the inferred posterior p(m_d | x⃗_MAP, y⃗, Θ). All the plots are heat map (jet color map) visualizations of scalar values.
of the approximations can be maintained [23]. To select a proper subspace K^n, Γ features (pixel intensities) are chosen according to their mask parameters. Based on the chosen features, a score value S(x_h) is computed for each component at each position (see [22]). We select H' components, denoted as H, as the candidates that may appear in the given image, according to the probability p(y⃗, x̂_h | Θ). Here x̂_h corresponds to the vector x⃗ with x_h = x*_h and the rest of the components absent (x_{h'} = −1, h' ≠ h), where x*_h is the best position of the component h w.r.t. S(x_h). This is different from the earlier work [22], where K^n is constructed directly according to S(x_h). For each component, we select the set of its candidate positions X_h, x_h ∈ X_h, which contains the p best positions w.r.t. S(x_h). Then the truncated subspace K^n is defined as:
$$\mathcal{K}^n = \Bigl\{ \vec{x} \;\Big|\; \Bigl( \sum_j s_j \le \gamma \text{ and } s_i = 0 \;\forall i \notin \mathcal{H} \Bigr) \text{ or } \sum_{j'} s_{j'} \le 1 \Bigr\}, \qquad (7)$$
where s_h represents the presence/absence state of the component h (s_h = 0 if x_h = −1 or x_h ∉ X_h, and s_h = 1 if x_h ∈ X_h). To avoid converging to local optima, we used the directional annealing scheme [22] for our learning algorithm.
4 Numerical Experiments on Artificial Data
The goal of the experiment on artificial data is to verify that the model and inference method can
recover the correct parameters, and to investigate inference on data generated according to occlusions with an explicit depth variable. We generated 4 × 4 gray-scale image patches. In the data set, eight different components are used, namely four vertical "bars" and four horizontal "bars"; each bar has a different intensity and a binary vector indicating its "transparency" (1 for non-transparent and 0 for transparent, see Fig. 2b). When generating an image patch, a subset of components is selected according to their prior probabilities π_h = 0.25, and the selected components are combined according to a random depth ordering (flat priors on the ordering). A component with smaller depth will occlude the components with larger depth, and for each image patch we sample a new depth ordering. For the pixels in which all the selected components are transparent, the value is determined according to the background with zero intensity (B = 0). All the pixels generated by components are subject to a Gaussian noise with σ = 0.02 and the pixels belonging to the background have a Gaussian noise with σ_B = 0.001. In total, we generated N = 1,000 image patches. Fig. 2a shows
eight samples. The artificial data is similar to data generated by the occlusive components analysis
model (OCA; [2]), except for the use of scalar features and the assumption of shift-invariance.
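A sketch of this bar-data generator; the exact intensity values are our placeholders, while the patch size, priors, noise levels, and random depth ordering follow the text:

```python
import numpy as np

def make_bars_data(n=1000, D=4, pi=0.25, sigma=0.02, sigma_b=0.001, seed=0):
    """4x4 patches of occluding vertical/horizontal bars (Sec. 4)."""
    rng = np.random.default_rng(seed)
    masks, vals = [], np.linspace(0.2, 1.0, 2 * D)      # 8 distinct intensities
    for i in range(D):                                  # 4 vertical, 4 horizontal
        v = np.zeros((D, D)); v[:, i] = 1; masks.append(v)
        h = np.zeros((D, D)); h[i, :] = 1; masks.append(h)
    patches = np.empty((n, D, D))
    for t in range(n):
        on = np.flatnonzero(rng.random(2 * D) < pi)     # present bars
        y = rng.normal(0.0, sigma_b, (D, D))            # background, B = 0
        for h in rng.permutation(on):                   # random depth ordering;
            m = masks[h].astype(bool)                   # later bars occlude earlier
            y[m] = vals[h] + rng.normal(0.0, sigma, m.sum())
        patches[t] = y
    return patches
```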
Fig. 2c shows the learned model parameters on the generated data set. We learned nine components
(H = 9). The initial feature value W was set to randomly selected data points. The initial mask
parameter A was independently and uniformly drawn from the interval (0, 1). The initial annealing
temperature was set to T = 5. After keeping constant for 20 iterations, the temperature linearly
decreased to 1 in 100 iterations. For the robustness of learning, σ decreased together with the temperature from 0.2 to 0.02, and an additive Gaussian noise with zero mean and σ_W = 0.04 was injected into W, with σ_W gradually decreased to zero. The algorithm terminated when the temperature
was equal to 1 and the difference of the pseudo data log-likelihood of two consecutive iterations was
sufficiently small (less than 0.1%). The approximation parameters used in learning were H' = 8, γ = 4, p = 2 and Γ = 3. In this result, all the eight generative components have been successfully
learned. The 2nd to last component (see Fig. 2c) is a dummy component (low π_h, i.e., very rarely
used). Its single pixel structure is therefore an artifact. With the learned parameters, the model could
infer the present components, their positions and the pixel-to-component assignment. Fig. 2d shows
a typical example. Given an image patch on the left, the present components and their positions
are correctly inferred. Furthermore, as shown on the 3rd row, the posterior probabilities of the
mask variable p(m_d | x⃗, y⃗, Θ) give a clear assignment of the contributing component for each pixel.
This information is potentially very valuable for tasks like parts-based object segmentation or to
infer the depth ordering among the components. We assess the reliability of our learning algorithm
by repeating the learning procedure with the same configuration but different random parameter
initializations. The algorithm recovers all the generative components in 11 out of 20 repeated runs.
The 9 runs not recovering all bars did still recover reasonable solutions with usually 7 bars out of
8 bars represented. In general, optima of bar stimuli seem to have much more pronounced local
optima, e.g., compared to image patches.
5 Numerical Experiments on Image Patches
After we verified the inference and learning algorithm on artificial data, it was applied to patches of
natural images. As training set we used N = 100,000 patches of size 16 × 16 pixels extracted at
random positions from random images of the van Hateren natural image database [24]. We modeled
the sensitivity of neurons in the LGN using a difference-of-Gaussians (DoG) filter for different
positions, i.e., we processed all patches by convolving them with a DoG kernel. Following earlier
studies (see [5] for references), the ratio between the standard deviation of the positive and the
negative Gaussian was chosen to be 1/3 and the amplitudes chosen to obtain a mean-free center-surround filter. Fig. 3a shows some samples of the image patches after preprocessing.
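A sketch of this preprocessing, assuming a finite 15-pixel kernel support (our choice); the 1:3 standard-deviation ratio and the mean-free constraint follow the text:

```python
import numpy as np
from scipy.signal import fftconvolve

def dog_kernel(sigma=1.0, ratio=3.0, size=15):
    """Mean-free difference-of-Gaussians (center-surround) kernel."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    k = g(sigma) - g(ratio * sigma)       # positive center, negative surround
    return k - k.mean()                   # enforce zero DC response

def preprocess(image, patch=16, n_patches=100):
    """DoG-filter an image and extract random 16x16 patches."""
    rng = np.random.default_rng(0)
    filt = fftconvolve(image, dog_kernel(), mode="valid")
    ys, xs = (rng.integers(0, s - patch, n_patches) for s in filt.shape)
    return np.stack([filt[i:i+patch, j:j+patch] for i, j in zip(ys, xs)])
```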
Our algorithm learned H = 100 components from the natural image data set. The model parameters
were initialized in the same way as for artificial data. The annealing temperature was initialized with
T = 10 and kept constant for 10 iterations; the temperature then linearly decreased to 1 in 100 iterations. σ decreased together with the temperature from 0.5 to 0.2, and an additive Gaussian noise with zero mean and σ_W = 0.2 was injected into W, with σ_W gradually decreased to zero. The approximation parameters used for learning were H' = 6, γ = 4, p = 2 and Γ = 50. After 134 iterations, the
model parameters had essentially converged.
Figs. 3bc show the learned mask parameters and the learned feature values for all the 100 components. Mask parameters define the frequently used areas within a component, and feature parameters
reveal the appearance of a component on image patches. As can be observed, image components
are very differently represented from linear models. See the component in Fig. 3d as an example:
mask parameters are localized and all positive; feature parameters have positive and negative values
across the whole patch. Masks and features can be combined to resemble a familiar Gabor function via point-wise multiplication (see Fig. 3d). All the above shown component representations are
sorted in descending order according to the learned prior probabilities of occurrence π⃗ (see Fig. 3e).
6 Estimation of Receptive Fields
For visualization, mask and feature parameters can be combined via point-wise multiplication. To
more systematically and quantitatively interpret the learned components and to compare them to
biological experimental findings, we estimated the predicted receptive fields (RFs). RF estimates
were computed with reverse correlation based on the model inference results. Reverse correlation
can be defined as a procedure to find the best linear approximation of the components' presence given an image patch y⃗^(n). More formally, we search for a set of predicted receptive fields R⃗_h, h ∈ {1, ..., H}, that minimizes the following cost function:
$$f = \frac{1}{N} \sum_n \sum_{\vec{x} \in \mathcal{K}^n} p(\vec{x} \mid \vec{y}^{(n)}, \Theta) \sum_h \bigl( \vec{R}_h^\top \bar{T}_{x_h} \vec{y}^{(n)} - s_h \bigr)^2 \;+\; \lambda \sum_h \vec{R}_h^\top \vec{R}_h, \qquad (8)$$
where y⃗^(n) is the nth stimulus and λ is the coefficient for the L2 regularization. s_h is a binary variable representing the presence/absence state of the component h, where s_h = 0 if x_h = −1, and s_h = 1
Figure 3: The invariant occlusive components from natural image patches. (a) shows 20 samples of
the pre-processed image patches. (b) shows the mask parameter and (c) shows the feature parameter.
(d) shows an example of the relation with the learned model parameters and the estimated RFs. (e)
shows the learned prior probabilities π⃗. (f) shows the estimated receptive fields (RFs). The RFs were fitted with 2-dimensional Gabor and DoG functions. The dashed lines mark the RFs that have a more globular structure. The solid lines mark the RFs that were fitted accurately by a Gabor function. The dotted lines mark the RFs that were not approximated very well by the fitted function. All the shown model parameters in (b-c) and receptive fields in (f) are sorted in descending order according to π⃗. The plots (a-d) and (f) are heat map visualizations with local scaling on individual fields (jet color map), and (a), (c) and (f) fix light green to be zero.
otherwise. As our model allows the components to be at different locations, the reverse correlation is computed by shifting the stimuli according to the inferred location of each component. T̄_{x_h} represents the transformation matrix applied to the stimulus for the component h, which is the opposite transformation of the inferred transformation T_{x_h} (T̄_{x_h} T_{x_h} = 1). For the absent components, the stimulus is used without any transformation (T̄_{−1} = 1).
Due to the intractability of computing an exact posterior distribution, given a data point, the cost
function only sums across the truncated subspace K^n in the variational approximation (see Sec. 3). By setting the derivative of the cost function to zero, R⃗_h can be estimated as:
$$\vec{R}_h = \Bigl( \lambda N \mathbf{1} + \sum_n \bigl\langle (\bar{T}_{x_h} \vec{y}^{(n)})(\bar{T}_{x_h} \vec{y}^{(n)})^\top \bigr\rangle_{q_n} \Bigr)^{-1} \sum_n \bigl\langle s_h\, \bar{T}_{x_h} \vec{y}^{(n)} \bigr\rangle_{q_n} \qquad (9)$$
where ⟨·⟩_{q_n} denotes the expectation value w.r.t. the posterior distribution p(x⃗ | y⃗^(n), Θ) and 1 is an identity matrix. When solving for R⃗_h, we often observe that many of the eigenvalues of the data covariance matrix Σ_n ⟨(T̄_{x_h} y⃗^(n))(T̄_{x_h} y⃗^(n))^⊤⟩_{q_n} are close to zero, which makes the solution of R⃗_h very unstable. Therefore, we introduce an L2 regularization to the cost function. The regularization coefficient λ is chosen between the minimum and maximum element of the data covariance matrix. The estimated receptive fields are not sensitive to the value of the regularization coefficient λ as long as λ is large enough to resolve the numerical instability (see Supplement for a comparison of the receptive fields estimated with different λ values). From the experiments with artificial data and
natural image patches, we observed that the L2 regularization successfully eliminated the numerical
stability problem.
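A sketch of the regularized estimator of Eq. (9), approximating the posterior expectations with the truncated states and, as above, simplifying the inverse transformations T̄_{x_h} to index rolls (our simplification):

```python
import numpy as np

def estimate_rf(data, posteriors, positions, presences, lam):
    """Ridge-regularized reverse correlation for one component h.
    data: (N, P) patches; per patch n: posteriors[n] (K_n,) weights,
    positions[n] (K_n,) inferred shifts of h, presences[n] (K_n,) in {0,1}."""
    P = data.shape[1]
    C = lam * len(data) * np.eye(P)          # ridge term  lambda * N * 1
    b = np.zeros(P)
    for y, q, pos, s in zip(data, posteriors, positions, presences):
        for qk, xk, sk in zip(q, pos, s):
            yb = np.roll(y, -xk)             # apply inverse shift (T-bar)
            C += qk * np.outer(yb, yb)       # accumulate <y y^T>_{q_n}
            b += qk * sk * yb                # accumulate <s_h y>_{q_n}
    return np.linalg.solve(C, b)             # Eq. (9)
```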
Fig. 3f shows the RFs estimated according to our model. For further analysis, we matched the RFs
using Gabor functions and DoG functions as was suggested in [5]. If we factored in the occurrence
probabilities, we found that the model considered about 17% of all components of the patches to be
globular, 56% to be Gabor-like and 27% to have another structure (see Supplement for details). The
prevalence of "center-on" globular fields may be a consequence of the prevalence of convex object
shapes.
7 Discussion
The encoding of image patches investigated in this study separates feature and position information
of visual components. Functionally, such an encoding has been found very useful, e.g., for the construction of object recognition systems. Many state-of-the-art systems for visual object classification
make use of convolutional neural networks [12, 25, 26]. Such networks compute the responses of
a set of filters for all positions in a predefined area and use the maximal response for further processing ([12] for a review). If we identify the predefined area with one image patch as processed by
our approach, then the encoding studied here is to some extent similar to convolutional networks:
(A) it uses like convolutional networks one set of component parameters for all positions; and (B) a
hidden component variable of the generative model integrates or ?pools? the information across all
positions. As the here studied approach is based on a generative data model, the integration across
positions can directly be interpreted as inversion of the generation process. Crucially, the inversion
can take occlusions of visual features into account while convolutional networks do not model occlusions. Furthermore, the generative model uses a probabilistic encoding, i.e., it assigns probabilities
to positions and features of a joint feature and position space. Ambiguous visual input can therefore
be represented appropriately. In contrast, convolutional networks use one position for each feature
as representation. In this sense a convolutional encoding could be regarded as MAP estimate for the
feature position while the generative integration could be interpreted as probabilistic pooling. Many
bilinear models have also been applied to image patches, e.g., [17, 18]. Such studies do report that
neurally plausible receptive fields (RFs) in the form of Gabor functions emerge [17, 18]. Likewise,
invariant versions of NMF [21] or ICA (in the form of ISA [9]) have been applied to image patches.
In addition to Gabors, we observed in our study a large variety of further types of RFs. Gabor filters
with different orientations, phase and frequencies, as well as globular fields and fields with more
complex structures (Fig. 3f). Gabors have been studied since several decades, globular and more
complex fields have attracted attention in the last couple of years. In particular, globular fields have
attracted attention [5, 27, 28] as they have been reported together with Gabors in macaques and
other species ([29] and [5] for further references). Such fields have been associated with occlusions
before [5, 28, 30]; and our study now for the first time reports globular fields for an occlusive and
translation invariant approach. The results may be taken as further evidence of the connection between occlusions and globular fields. However, also linear convolutional approaches have recently
reported such fields [19, 31]. Linear approaches seem to require a high degree of overcompleteness
or specific priors while globular fields naturally emerge for occlusion-like non-linearities. More concretely: for non-invariant linear sparse coding, globular fields only emerged from a sufficiently high
degree of overcompleteness onwards [32, 33] or with specific prior settings and overcompleteness
[27]; for non-invariant occlusive models [5, 30] globular fields always emerge alongside Gabors
for any overcompleteness. The results reported here can be taken as confirming this observation
for position invariant encoding. The invariant non-linear model assigns high degrees of occurrences
(high ?h ) to Gabor-like and to globular fields (first rows in Fig. 3f). Components with more complex
structures are assigned lower occurrence frequencies. In total the model assumes a fraction between
10 and 20% of all data components to be globular. Such high percentages may be related to the
high percentages of globular fields (≈16–23%) measured in vivo ([29] and [5] for references). In
contrast, the highest degrees of occurrences, e.g., for convolutional matching pursuit [31] seems to
be assigned exclusively to Gabor features. Globular fields only emerge (alongside other non-Gabor
fields) for higher degrees of overcompleteness. A direct comparison in terms of occurrence frequencies is difficult because the linear models to not infer occurrence frequencies from data. The closest
match to such frequencies would be an (inverse) sparsity which is set by hand for almost all linear
approaches. The reason is the use of MAP-based point-estimates while our approach uses a more
probabilistic posterior estimate.
Because of their separate encoding of features and positions, all models with separate position encoding can represent high degrees of over-completeness. Convolutional matching pursuit [31] shows
results for up to 64 filters of size 8 × 8. With 8 horizontal and 8 vertical shifts, the number of non-invariant components would amount to 8 × 8 × 64 = 3136. Convolutional sparse coding [19] reports results by assuming 128 components for 9 × 9 patches. The number of non-invariant components would therefore amount to 10,368. For our network we obtained results for up to 100 components of size 16 × 16. With 16 horizontal and 16 vertical shifts this amounts to 25,600 non-invariant components. In terms of components per observed variable, invariant models are therefore
now computationally feasible in a regime the visual cortex is estimated to operate in [33].
The hidden units associated with component features are fully translation invariant. In terms of neural encoding, their insensitivity to stimulus shifts would therefore place them into the category of
V1 complex cells. Also globular fields or fields that seem sensitive to structures such as corners
would warrant such units the label "complex cell". No hidden variable in the model can directly be
associated with simple cell responses. However, a possible neural network implementation of the
model is an explicit representation of component features at different positions. The weight sharing
of the model would be lost but units with explicit non-invariant representation could correspond to
simple cells. While such a correspondence can connect our predictions to experimental studies of
simple cells, recently developed approaches for the estimation of translation invariant cell responses
[34, 35] can represent a more direct connection. To approximately implement the non-linear generative model neurally, the integration of information would have to be a very active process. In
contrast to passive pooling mechanisms across units representing linear filters (such as simple cells),
it would involve neural units with explicit position encoding. Such units would control or ?gate?
the information transfer from simple cells to downstream complex cells. As such our probabilistic
model can be related to ideas of active control units for individual components [6, 7, 10, 11, 36] (also
compare [37]). A notable difference to all these models is that the here studied approach allows to
interpret active control as optimal inference w.r.t. a generative model of translations and occlusions.
Future work can go in different directions. Different transformations could be considered or learned
[37], explicit modeling in time could be incorporated (compare [17]), and/or further hierarchical
stages could be considered. The crucial challenge all such developments face is computational
intractability due to large combinatorial hidden spaces. Based on the presented results, we believe,
however, that advances in analytical and computational training technology will enable an increasingly sophisticated modeling of image patches in the future.
Acknowledgement.
We thank Richard E. Turner for helpful discussions and acknowledge funding by DFG grant LU 1196/4-2.
References
[1] D. Mumford and B. Gidas. Stochastic models for generic images. Q. Appl. Math., 59:85–111, 2001.
[2] J. Lücke, R. Turner, M. Sahani, and M. Henniges. Occlusive Components Analysis. NIPS, 22:1069–77, 2009.
[3] N. LeRoux, N. Heess, J. Shotton, and J. Winn. Learning a generative model of images by factoring appearance and shape. Neural Computation, 23:593–650, 2011.
[4] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. NIPS, 25:1745–1753, 2012.
[5] J. Bornschein, M. Henniges, and J. Lücke. Are V1 receptive fields shaped by low-level visual occlusions? A comparative study. PLoS Computational Biology, 9(6):e1003062, 2013.
[6] G. E. Hinton. A parallel computation that assigns canonical object-based frames of reference. In Proc. IJCAI, pages 683–685, 1981.
[7] C. H. Anderson and D. C. Van Essen. Shifter circuits: a computational strategy for dynamic aspects of visual processing. PNAS, 84(17):6297–6301, 1987.
[8] M. Lades, J. Vorbrüggen, J. Buhmann, J. Lange, C. v. d. Malsburg, R. Würtz, and W. Konen. Distortion invariant object recognition in the dynamic link architecture. IEEE Transactions on Computers, 42(3):300–311, 1993.
[9] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–20, 2000.
[10] D. W. Arathorn. Map-Seeking Circuits in Visual Cognition – A Computational Mechanism for Biological and Machine Vision. Stanford Univ. Press, Stanford, California, 2002.
[11] J. Lücke, C. Keck, and C. von der Malsburg. Rapid convergence to feature layer correspondences. Neural Computation, 20(10):2441–2463, 2008.
[12] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pages 253–6, 2010.
[13] Y. Hu, K. Zhai, S. Williamson, and J. Boyd-Graber. Modeling Images using Transformed Indian Buffet Processes. In ICML, 2012.
[14] N. Jojic and B. Frey. Learning flexible sprites in video layers. In CVPR, 2001.
[15] C. K. I. Williams and M. K. Titsias. Greedy learning of multiple objects in images using robust statistics and factorial learning. Neural Computation, 16:1039–62, 2004.
[16] J. B. Tenenbaum and W. T. Freeman. Separating Style and Content with Bilinear Models. Neural Computation, 12(6):1247–83, 2000.
[17] P. Berkes, R. E. Turner, and M. Sahani. A structured model of video reproduces primary visual cortical organisation. PLoS Computational Biology, 5(9):e1000495, 2009.
[18] C. F. Cadieu and B. A. Olshausen. Learning intermediate-level representations of form and motion from natural movies. Neural Computation, 24(4):827–866, 2012.
[19] K. Kavukcuoglu, P. Sermanet, Y. L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional feature hierarchies for visual recognition. NIPS, 23:14, 2010.
[20] K. Gregor and Y. LeCun. Efficient learning of sparse invariant representations. CoRR, abs/1105.5307, 2011.
[21] J. Eggert, H. Wersing, and E. Körner. Transformation-invariant representation and NMF. In 2004 IEEE International Joint Conference on Neural Networks, pages 2535–39, 2004.
[22] Z. Dai and J. Lücke. Unsupervised learning of translation invariant occlusive components. In CVPR, pages 2400–2407, 2012.
[23] J. Lücke and J. Eggert. Expectation truncation and the benefits of preselection in training generative models. Journal of Machine Learning Research, 11:2855–900, 2010.
[24] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265:359–66, 1998.
[25] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, volume 25, pages 1106–1114, 2012.
[27] M. Rehn and F. T. Sommer. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience, 22(2):135–46, 2007.
[28] J. Lücke. Receptive field self-organization in a model of the fine-structure in V1 cortical columns. Neural Computation, 21(10):2805–45, 2009.
[29] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455–63, 2002.
[30] G. Puertas, J. Bornschein, and J. Lücke. The maximal causes of natural scenes are edge filters. In NIPS, volume 23, pages 1939–1947, 2010.
[31] A. Szlam, K. Kavukcuoglu, and Y. LeCun. Convolutional matching pursuit and dictionary training. arXiv preprint arXiv:1010.0422, 2010.
[32] B. A. Olshausen, C. F. Cadieu, and D. K. Warland. Learning real and complex overcomplete representations from the statistics of natural images. In Proc. SPIE, volume 7446, page 74460S, 2009.
[33] B. A. Olshausen. Highly overcomplete sparse coding. In Proc. of HVEI, page 86510S, 2013.
[34] M. Eickenberg, R. J. Rowekamp, M. Kouh, and T. O. Sharpee. Characterizing responses of translation-invariant neurons to natural stimuli: maximally informative invariant dimensions. Neural Computation, 24(9):2384–421, 2012.
[35] B. Vintch, A. Zaharia, J. A. Movshon, and E. P. Simoncelli. Efficient and direct estimation of a neural subunit model for sensory coding. In Proc. of NIPS, pages 3113–3121, 2012.
[36] B. Olshausen, C. Anderson, and D. Van Essen. A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J Neuroscience, 13(11):4700–4719, 1993.
[37] R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473–1492, 2010.
[38] M. J. D. Powell. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7(2):155–162, 1964.
Action from Still Image Dataset and Inverse Optimal
Control to Learn Task Specific Visual Scanpaths
Stefan Mathe(1,3) and Cristian Sminchisescu(2,1)
(1) Institute of Mathematics of the Romanian Academy of Science
(2) Department of Mathematics, Faculty of Engineering, Lund University
(3) Department of Computer Science, University of Toronto
stefan.mathe@imar.ro, cristian.sminchisescu@math.lth.se
Abstract
Human eye movements provide a rich source of insight into human visual information processing. The complex interplay between the task and the
visual stimulus is believed to determine human eye movements, yet it is not fully
understood, making it difficult to develop reliable eye movement prediction systems. Our work makes three contributions towards addressing this problem. First,
we complement one of the largest and most challenging static computer vision
datasets, VOC 2012 Actions, with human eye movement recordings collected under the primary task constraint of action recognition, as well as, separately, for
context recognition, in order to analyze the impact of different tasks. Our dataset
is unique among the eyetracking datasets of still images in terms of large scale
(over 1 million fixations recorded in 9157 images) and different task controls. Second, we propose Markov models to automatically discover areas of interest (AOI)
and introduce novel sequential consistency metrics based on them. Our methods
can automatically determine the number, the spatial support and the transitions
between AOIs, in addition to their locations. Based on such encodings, we quantitatively show that, given unconstrained real-world stimuli, task instructions have
a significant influence on human visual search patterns, which are nevertheless stable across
subjects. Finally, we leverage powerful machine learning techniques and computer vision features in order to learn task-sensitive reward functions from eye
movement data, within models that allow us to effectively predict human visual
search patterns based on inverse optimal control. The methodology achieves state-of-the-art scanpath modeling results.
1 Introduction
Eye movements provide a rich source of insight into human visual information processing
and result from the complex interplay between the visual stimulus, prior knowledge of the visual
world, and the task. This complexity poses a challenge to current models, which often require
a complete specification of the cognitive processes and of the way visual input is integrated by
them [4, 20]. The advent of modern eyetracking systems, powerful machine learning techniques,
and visual features opens up the prospect of learning eye movement models directly from large real
human eye movement datasets collected under task constraints. This trend is still in its infancy; here
we aim to advance it on several fronts:
• We introduce a large-scale dataset of human eye movements collected under the task constraints of both action and context recognition from a single image, for the VOC 2012 Actions dataset. The eye movement data is introduced in §3 and is publicly available at
http://vision.imar.ro/eyetracking-voc-actions/.
• We present a model to automatically discover areas of interest (AOIs) from eyetracking data, in
§4. The model integrates both spatial and sequential eye movement information, in order to better
Figure 1: Saliency maps obtained from the gaze patterns of 12 viewers under action recognition (left
image in pair) and context recognition (right, in pair), from a single image. Note that human gaze
significantly depends on the task (see tab. 1b for quantitative results). The visualization also suggests
the existence of stable consistently fixated areas of interest (AOIs). See fig. 2 for illustration.
constrain estimates and to automatically identify the spatial support and the transitions between
AOIs in addition to their locations. We use the proposed AOI discovery tools to study inter-subject
consistency and show that, on this dataset, task instructions have a significant influence on human
visual attention patterns, both spatial and sequential. Our findings are presented in §5.
• We leverage the large amounts of collected fixations and saccades in order to develop a novel, fully
trainable eye movement prediction model. The method combines inverse reinforcement learning
and advanced computer vision descriptors in order to learn task-sensitive reward functions based on
human eye movements. The model has the important property of being able to efficiently predict
scanpaths of arbitrary length, by integrating information over a long time horizon. This leads to
significantly improved estimates. §6.2 presents the model and its assessment.
2 Related Work
Human gaze pattern annotations have been collected for both static images [11, 13, 14, 12, 26, 18]
and for video [19, 23, 15]; see [24] for a recent overview. Most of the available image datasets
have been collected under free viewing, and the few task-controlled ones [14, 7] have been designed
for small-scale studies. In contrast, our dataset is both task-controlled and more than one order
of magnitude larger than the existing image databases. This makes it suitable for applying machine
learning techniques to saliency modeling and eye movement prediction.
The influence of task on eye movements has been investigated in early human vision studies[25, 3]
for picture viewing, but these groundbreaking studies have been fundamentally qualitative. Statistical properties like the saccade amplitude and the fixation duration have been shown to be influenced
by the task[5]. A quantitative analysis of task influence on visual search in the context of action
recognition from video appears in our prior work[19].
Human visual saliency prediction has received significant interest in computer vision (see [2] for an
overview). Recently, the trend has been to learn saliency models from fixation data in images[13, 22]
and video[15, 19]. The prediction of eye movements has been less studied. In contrast, predefined
visual saliency measures can be used to obtain scanpaths[11] in conjunction with non-maximum
suppression. Eye movements have also been modeled explicitly by maximizing the expected future
information gain [20, 4] (as one step in [20] or until the goal is reached in [4]). These methods operate
on pre-specified reward functions, which limits their applicability. The method we propose shares
some resemblance with these latter methods, in that we also aim at maximizing the future expected
reward; however, our reward function is learned instead of being pre-specified, and we work in an
inverse optimal control setting, which allows, in principle, an arbitrary time horizon. We are not
aware of any eye movement models that are learned from eye movement data.
3 Action from a Single Image – New Human Eye Movement Dataset
One objective of this work is to introduce eye movement recordings for the PASCAL VOC image
dataset used for action recognition. Presented in [10], it is one of the largest and most challenging
Figure 2: Illustration of areas of interest (AOI) obtained from scanpaths of subjects on three stimuli
for the action (left) and context (right) recognition tasks. Ellipses depict states, scaled to match the
learned spatial support, whereas dotted arrows illustrate high probability saccades. Visual search
patterns are highly consistent both spatially and sequentially and are strongly influenced by task.
See fig. 3 and tab. 1 for quantitative results on spatial and sequential consistency.
available datasets of real world actions in static images. It contains 9157 images, covering 10 classes
(jumping, phoning, playing instrument, reading, riding bike, riding horse, running, taking photo,
using computer, walking). Several persons may appear in each image. Multiple actions may be
performed by the same person and some instances belong to none of the 10 target classes.
Human subjects: We have collected data from 12 volunteers (5 male and 7 female) aged 22 to 46.
Task: We split the subjects into two groups based on the given task. The first, action group (8 subjects), was asked to recognize the actions in the image and indicate them from the labels provided
by the PASCAL VOC dataset. To assess the effects of task on visual search, we asked the members of the second, context group (4 subjects), to find which of 8 contextual elements occur in the
background of each image. Two of these contextual elements – furniture, painting/wallpaper – are
typical of indoor scenes, while the remaining 6 – body of water, building, car/truck, mountain/hill,
road, tree – occur mostly outdoors.
Recording protocol: The recording setup is identical to the one used in [19]. Before each image
was shown, participants were required to fixate a target in the center of a uniform background on the
screen. We asked subjects in the action group to solve a multi-target "detect and classify" task: press
a key each time they have identified a person performing an action from the given set and also list
the actions they have seen. The exposure time for this task was 3 seconds.¹ Their multiple-choice
answers were recorded through a set of check-boxes displayed immediately following each image
exposure. Participants in the context group underwent a similar protocol, having a slightly lower
exposure time of 2.5 seconds. The images were shown to each subject in a different random order.
Dataset statistics: The dataset contains 1,085,381 fixations. The average scanpath length is 10.0 for
the action subjects and 9.5 for the context subjects, including the initial central fixation. The time
elapsed from stimulus display until the first three key presses, averaged over trials in which they
occur, are 1, 1.6 and 1.9 seconds, respectively.
4 Automatic Discovery of Areas of Interest and Transitions using HMMs
Human fixations tend to cluster on salient regions that generally correspond to objects and object
parts (fig. 1). Such areas of interest (AOI) offer an important tool for human visual pattern analysis,
e.g. in evaluating inter-subject consistency[19] or the prediction quality of different saliency models.
Manually specifying AOIs is both time-consuming and subjective. In this section, we propose a
model to automatically discover the AOI locations, their spatial support and the transitions between
them, from human scanpaths recorded for a given image. While this may appear straightforward,
we are not aware of a similar model in the literature.
In deriving the model, we aim at four properties. First, we want to be able to exploit not only
human fixations, but also constraints from saccades. Consider the case of several human subjects
fixating the face of a person and the book she is reading. Based on fixations alone, it can be difficult
to separate the book and the person's face into two distinct AOIs due to proximity. Nevertheless,
frequent saccades between the book and the person's face provide valuable hints for hypothesizing
two distinct, semantically meaningful AOIs. Second, we wish to adapt to an unknown and varying
number of AOIs in different images. Third, we want to estimate not only the center of the AOI, but
also the spatial support and location uncertainty. Finally, we wish to find the transition probabilities
between AOIs. To meet such criteria in a visual representation, we use a statistical model.
¹ Protocol may result in multiple keypresses per image. Exposure times were set empirically in a pilot study.
                         task
consistency measure      action recognition   context recognition
agreement                92.2% ± 1.1%         81.3% ± 1.5%
cross-stimulus control   64.0% ± 0.7%         59.1% ± 0.9%
random baseline          50.0% ± 0.0%         50.0% ± 0.0%

(a)

[(b): ROC curves (detection rate vs. false alarm rate) comparing inter-subject agreement against the cross-stimulus control, shown for action recognition (left) and context recognition (right).]
Figure 3: (a) Spatial inter-subject consistency for the tasks of action and context recognition, with
standard deviations across subjects. (b) ROC curves for predicting the fixations of one subject from
the fixations of the other subjects in the same group on the same image (blue) or on an image (green)
randomly selected from the dataset. See tab. 1 for sequential consistency results.
Image Specific Human Gaze Model: We model human gaze patterns in an image as a Hidden
Markov Model (HMM) where states $\{s_i\}_{i=1}^{n}$ correspond to AOIs fixated by the subjects and transitions correspond to saccades. The observations are the fixation coordinates $z = (x, y)$. The
emission probability for AOI $i$ is a Gaussian: $p(z|s_i) = \mathcal{N}(z|\mu_i, \Sigma_i)$, where $\mu_i$ and $\Sigma_i$ model the
center and the spatial extent of the area of interest (AOI) $i$. In training, we are given a set of scanpaths $\{\zeta^j = (z_1, z_2, \ldots, z_{t_j})\}_{j=1}^{k}$ and we find the parameters $\theta = \{\mu_i, \Sigma_i\}_{i=1}^{n}$ that maximize the
joint log likelihood $\sum_{j=1}^{k} \log p(\zeta^j|\theta)$, using EM [9]. We obtain AOIs, for each image and task, by
training the HMM using the recorded human eye scanpaths. We compute the number of states $N^*$
that maximizes the leave-one-out cross-validation likelihood over the scanpaths within the training
set, with $N \in [1, 10]$. We then re-train the model with $N^*$ states over the entire set of scanpaths.
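A minimal sketch of this per-image training loop is given below, using hmmlearn's GaussianHMM as a stand-in for the EM procedure of [9]; the function and variable names are ours, and `scanpaths` is assumed to be a list of (t_j, 2) arrays of fixation coordinates for one image.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_aoi_hmm(scanpaths, max_states=10, seed=0):
    """Fit the per-image AOI model: select N* by leave-one-out CV, then re-train."""
    def train(paths, n):
        X = np.concatenate(paths)              # all fixations, stacked
        lengths = [len(p) for p in paths]      # scanpath boundaries for hmmlearn
        hmm = GaussianHMM(n_components=n, covariance_type="full",
                          random_state=seed)
        return hmm.fit(X, lengths)

    def loo_loglik(n):
        # Hold out one scanpath at a time; score it under a model fit on the rest.
        total = 0.0
        for i in range(len(scanpaths)):
            rest = scanpaths[:i] + scanpaths[i + 1:]
            total += train(rest, n).score(scanpaths[i])
        return total

    n_star = max(range(1, max_states + 1), key=loo_loglik)
    return train(scanpaths, n_star)            # final model over all scanpaths
```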
Results: Fig. 2 shows several HMMs trained from the fixations of subjects performing action recognition. On average, the model discovers 8.0 AOIs for action recognition and 5.6 for context recognition. The recovered AOIs are task dependent and tend to center on objects and object parts with
high task relevance, such as phones, books, hands or legs. Context recognition AOIs generally appear
on the background and have larger spatial support, in agreement with the scale of the corresponding
structures. There is a small subset of AOIs that is common to both tasks. Most of these AOIs fall
on faces, an effect that has also been noted in [6]. Interestingly, some AOI transitions suggest the
presence of cognitive routines aimed at establishing relevant relationships between object parts, e.g.
whether a person is looking at the manipulated object (fig. 2).
The HMM allows us to visualize and analyze the sequential inter-subject consistency (§5) among
subjects. It also allows us to evaluate the performance of eye movement prediction models (§6.2).
5 Consistency Analysis
Qualitative studies in human vision[25, 16] have advocated a high degree of agreement between the
gaze patterns of humans in answering questions regarding static stimuli and have shown that gaze
patterns are highly task dependent, although such findings have not yet been confirmed by large-scale quantitative analysis. In this section, we confirm these effects on our large-scale dataset for
action and context recognition, from a single image. We first study spatial consistency using saliency
maps, then analyze sequential consistency in terms of AOI ordering under various metrics.
Spatial Consistency: In this section, we evaluate the spatial inter-subject agreement in images.
Evaluation Protocol: To measure the inter-subject agreement, we predict the regions fixated by a
particular subject from a saliency map derived from the fixations of the other subjects on the same
image. Samples represent image pixels and each pixel's score is given by the empirical saliency map derived
from the training subjects [14]. Labels are 1 at pixels fixated by the test subject, and 0 elsewhere. For an
unbiased cross-stimulus control, we check how well a subject's fixations on one stimulus can be
predicted from those of the other subjects on a different, unrelated stimulus. The average precision
for predicting fixations on the same stimulus is expected to be much greater than on different stimuli.
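A hedged sketch of this leave-one-out evaluation follows, assuming `fix_maps` is a list of binary H × W fixation maps (one per subject) for a single image; the Gaussian blur standing in for the empirical saliency map, and its bandwidth, are our assumptions rather than details given in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.metrics import roc_auc_score

def loo_spatial_auc(fix_maps, sigma=25.0):
    """Leave-one-out AUC: predict each subject's fixated pixels from the others."""
    aucs = []
    for s in range(len(fix_maps)):
        others = [m for i, m in enumerate(fix_maps) if i != s]
        # empirical saliency map of the remaining subjects (blurred fixation counts)
        saliency = gaussian_filter(np.sum(others, axis=0).astype(float), sigma)
        labels = fix_maps[s].ravel()   # 1 at pixels fixated by the test subject
        scores = saliency.ravel()      # per-pixel saliency score
        aucs.append(roc_auc_score(labels, scores))
    return float(np.mean(aucs))
```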
Findings: Area under the curve (AUC) measured for the two subject groups and the corresponding
ROC curves are shown in fig. 3. We find good inter-subject agreement for both tasks, consistent with
previously reported results for both images and video [14, 19].
Sequential Consistency using AOIs: Next we evaluate the degree to which scanpaths agree in
the order in which interesting locations are fixated. We do this as a three-step process. First,
we map each fixation to an AOI obtained with the HMM presented in §4, converting scanpaths to
sequences of symbols. Then, we define two metrics for comparing scanpaths, and compute inter-subject agreement in a leave-one-out fashion, for each.
Matching fixations to AOIs: We assign a subject's fixation to an AOI if it falls within an ellipse
corresponding to the AOI's spatial support (fig. 2); if no match is found, the fixation is marked as null.
However, to account for noise, we allow the spatial support to be inflated by a scale factor (a sketch of this test follows below). The dashed blue
curve in fig. 4c-left shows the fraction (AOIP) of fixations of each human subject whose 2D positions
fall inside AOIs derived from the scanpaths of the other subjects, as a function of the scale factor.
Through the rest of this section, we report results with the threshold set to twice the estimated AOI scale,
which ensures a 75% fixation match rate across subjects in both task groups.
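The following sketch illustrates one plausible implementation of the matching test, assuming `mus` and `Sigmas` are the AOI centers and covariances from the trained HMM; interpreting the inflated support as a Mahalanobis-distance threshold is our assumption, not a detail stated in the text.

```python
import numpy as np

def match_fixation(z, mus, Sigmas, scale=2.0):
    """Return the index of the AOI whose (inflated) ellipse contains z, else None."""
    best, best_d = None, np.inf
    for i, (mu, Sigma) in enumerate(zip(mus, Sigmas)):
        d2 = (z - mu) @ np.linalg.inv(Sigma) @ (z - mu)  # squared Mahalanobis distance
        if d2 <= scale ** 2 and d2 < best_d:             # inside ellipse scaled by `scale`
            best, best_d = i, d2
    return best
```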
AOI-based inter-subject consistency: Once we have converted each scanpath to a sequence of symbols, we define two metrics for inter-subject agreement. Given two sequences of symbols, the AOI
transition (AOIT) metric is defined as the number of consecutive non-null symbol pairs (AOI transitions) that the two sequences have in common. The second metric (AOIS) is obtained by sequence
alignment, as in [19], and represents the longest common subsequence of the two scanpaths.
Both metrics are normalized by the length of the longest scanpath (both are sketched in code below). To measure inter-subject agreement, we match the scanpath of each subject i to the scanpaths belonging to the other subjects, under
the two metrics defined above. The value of the metric for the best match defines the leave-one-out
agreement for subject i. We then average over all subjects.
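The two metrics can be sketched as follows, assuming scanpaths have already been converted to lists of AOI ids with `None` marking unmatched fixations; counting common transitions as a set (ignoring multiplicity) is our reading of the definition.

```python
def aoit(a, b):
    """Fraction of consecutive non-null AOI transitions shared by two scanpaths."""
    ta = {(x, y) for x, y in zip(a, a[1:]) if x is not None and y is not None}
    tb = {(x, y) for x, y in zip(b, b[1:]) if x is not None and y is not None}
    return len(ta & tb) / max(len(a), len(b))

def aois(a, b):
    """Longest common subsequence of AOI ids, via dynamic programming."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            L[i + 1][j + 1] = (L[i][j] + 1
                               if a[i] == b[j] and a[i] is not None
                               else max(L[i][j + 1], L[i + 1][j]))
    return L[m][n] / max(m, n)
```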
Baselines: In addition to inter-subject agreement, we define three baselines. First, for cross-stimulus
control, we evaluate agreement as in the case of spatial consistency, when the test and reference
scanpaths correspond to different randomly selected images. Second, for the random baseline, we
generate for each image a set of 100 random scanpaths, where fixations are uniformly distributed
across the image. The average metric assigned to these scanpaths with respect to the subjects represents the baseline for sequential inter-subject agreement, in the absence of bias. Third, we randomize
the order of each subject?s fixations in each image, while keeping their locations fixed, and compute
inter-subject agreement with respect to the original scanpaths of the rest of the subjects. The initial
central fixation is left unchanged during randomization. This baseline is intended to measure the
amount of observed consistency due to the fixation order.
Findings: Both metrics reveal considerable inter-subject agreement (table 1), with values significantly higher than for cross-stimulus control and the random baselines. When each subject's fixations are randomized, the fraction of matched saccades (AOIT) drops sharply, suggesting that sequential effects have a significant share in the overall inter-subject agreement. The AOIS metric is
less sensitive to these effects, as it allows for gaps in matching AOI sequences.²
Influence of Task: We will next study the task influence on human visual patterns. We compare the
visual patterns of the two subject groups using saliency map and sequential AOI metrics.
Evaluation Protocol: For each image, we derive a saliency map from the fixations of subjects doing
action recognition, and report the average p-statistic at the locations fixated by subjects performing
context recognition. We also compute agreement under the AOI-based metrics between the scanpaths of subjects performing context recognition, and subjects from the action recognition group.
Findings: Only 44.1% of fixations made during context recognition fall onto action recognition
AOIs, with an average p-value of 0.28 with respect to the action recognition fixation distribution.
Only 10% of the context recognition saccades have also been made by active subjects, and the
AOIS metric between context and active subjects? scanpaths is 23.8%. This indicates significant
differences between the subject groups in terms of their visual search patterns.
6 Task-Specific Human Gaze Prediction
In this section, we show that it is possible to effectively predict task-specific human gaze patterns,
both spatially and sequentially. To achieve this, we combine the large amounts of information available in our dataset with state-of-the-art visual features and machine learning techniques.
² Although harder to interpret numerically, the negative log likelihood of scanpaths under HMMs also defines a valid sequential consistency measure. We observe the following values for the action recognition task: agreement 9.2, agreement (random order) 13.1, cross-stimulus control 25.8, random baseline 46.6.
                              action recognition                      context recognition
consistency measure           AOIP          AOIT         AOIS         AOIP          AOIT         AOIS
agreement                     79.9% ± 1.9%  34.0% ± 1.3% 39.9% ± 1.0% 76.4% ± 2.6%  35.6% ± 0.9%  44.9% ± 0.4%
agreement (random order)      79.9% ± 1.9%  21.8% ± 0.7% 31.0% ± 0.7% 76.4% ± 2.6%  23.2% ± 0.3%  35.5% ± 0.3%
cross-stimulus control        29.4% ± 0.8%  4.9% ± 0.3%  13.9% ± 0.3% 40.0% ± 2.1%  7.9% ± 0.5%   19.6% ± 0.2%
random scanpaths              15.5% ± 0.1%  1.5% ± 0.0%  2.5% ± 0.0%  31.9% ± 0.1%  4.2% ± 0.0%   7.6% ± 0.0%
Table 1: Sequential inter-subject consistency measured using AOIs (fig. 2), for both task groups.
A large fraction of each subject's fixations falls onto AOIs derived from the scanpaths of the other
subjects (AOIP). Significant inter-subject consistency exists in terms of AOI transitions (AOIT) and
scanpath alignment score (AOIS).
6.1 Task-Specific Human Visual Saliency Prediction
We first study the prediction of human visual saliency maps. Human fixations typically fall onto
image regions that are meaningful for the visual task (fig. 2). These regions often contain objects
and object parts that have similar identities and configurations for each semantic class involved, e.g.
the configuration of the legs while running. We exploit this repeatability and represent each human
fixation by HoG descriptors[8]. We then train a sliding window detector with human fixations and
compare it with competitive approaches reported in the literature.
Evaluation Protocol: For each subject group, we obtain positive examples from fixated locations
across the training portion of the dataset. Negative examples are extracted similarly at random
image locations positioned at least 3° away from all human fixations. We extract 7 HoG descriptors with different grid configurations and concatenate them, then represent the resulting descriptor
using an explicit, approximate χ² kernel embedding [17]. We train a linear SVM to obtain a detector, which we run in sliding-window fashion over the test set in order to predict saliency maps (this pipeline is sketched in code after this paragraph).
We evaluate the detector under the AUC metric and the spatial KL divergence criterion presented
in [19]. We use three baselines for comparison. The first two are the uniform saliency map and
the central bias map (with intensity inversely proportional to distance from center). As an upper
bound on performance, we also compute saliency maps derived from the fixations recorded from
subjects. The KL divergence score for this baseline is derived by splitting the human subjects into
two groups and computing the KL divergence between the saliency maps derived from these two
groups, while the AUC metric is computed in a leave-one-out fashion, as for spatial consistency. We
compare the model with two state-of-the-art predictors. The first is the bottom-up saliency model
of Itti & Koch [11]. The second is a learned saliency predictor introduced by Judd et al. [13], which
integrates low- and mid-level features with several high-level object detectors, such as cars and people,
and is capable of optimally weighting these features given a training set of human fixations. Note that
many of these objects often occur in the VOC 2012 actions dataset.
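A simplified sketch of the detector training pipeline follows, collapsing the 7-descriptor HoG concatenation to a single grid and using scikit-learn's AdditiveChi2Sampler as the explicit approximate χ² embedding; `pos_patches` and `neg_patches` are assumed to be lists of grayscale patches around fixated and random locations, respectively.

```python
import numpy as np
from skimage.feature import hog
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def descriptors(patches):
    # single-grid HOG stand-in for the paper's 7 concatenated grid configurations
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_fixation_detector(pos_patches, neg_patches):
    X = np.vstack([descriptors(pos_patches), descriptors(neg_patches)])
    y = np.hstack([np.ones(len(pos_patches)), np.zeros(len(neg_patches))])
    # explicit approximate additive chi-squared embedding, then a linear SVM
    detector = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC(C=1.0))
    return detector.fit(X, y)
# at test time, decision_function values over sliding-window descriptors
# give the predicted saliency map
```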
Findings: Itti & Koch's model is not designed to predict task-specific saliency and cannot handle task
influences on visual attention (fig. 4). Judd's model can adapt to some extent by adjusting feature
weights, which were trained on our dataset. Out of the evaluated models, we find that the task-specific HoG detector performs best under both metrics, especially under the spatial KL divergence,
which is relevant for computer vision applications [19]. Its flexibility stems from its large-scale
training using human fixations, the usage of general-purpose computer vision features (as opposed,
e.g., to the specific object detectors used by Judd et al. [13]), and in part from the use of a powerful
nonlinear kernel for which good linear approximations are available [17, 1].
6.2 Scanpath Prediction via Maximum Entropy Inverse Reinforcement Learning
We now consider the problem of eye movement prediction under specific task constraints. Models
of human visual saliency can be used to generate scanpaths, e.g. [11]. However, current models are
designed to predict saliency for the free-viewing condition and do not capture the focus induced by
the cognitive task. Others [20, 4] hypothesize that the reward driving eye movements is the expected
future information gain.
Here we take a markedly different approach. Instead of specifying the reward function, we learn it
directly from large amounts of human eye movement data, by exploiting policies that operate over
long time horizons. We cast the problem as Inverse Reinforcement Learning (IRL), where we aim
to recover the intrinsic reward function that induces, with high probability, the scanpaths recorded
from human subjects solving a specific visual recognition task. Our learned model can imitate
(a) human visual saliency prediction

                        action recognition      context recognition
feature                 KL       AUC            KL       AUC
baselines:
uniform baseline        12.00    0.500          11.02    0.500
central bias            9.59     0.780          8.82     0.685
human                   6.14     0.922          5.90     0.813
predictors:
HOG detector*           8.54     0.736          8.10     0.646
Itti & Koch [11]        16.53    0.533          15.04    0.512
Judd et al. [13]*       11.00    0.715          9.66     0.636

(b) eye movement prediction

                        action recognition             context recognition
feature                 AOIP     AOIT     AOIS         AOIP     AOIT     AOIS
baselines:
human scanpaths         79.9%    34.0%    39.9%        76.4%    35.6%    44.9%
random scanpaths        15.5%    1.5%     2.5%         31.9%    4.2%     7.6%
predictors:
IRL*                    35.6%    6.6%     18.4%        44.9%    11.6%    25.7%
Renninger et al. [20]   24.4%    2.0%     14.6%        40.3%    7.0%     23.9%
Itti & Koch [11]        28.6%    2.7%     16.8%        42.9%    7.5%     24.1%

[(c): plots of the AOIP, AOIT and AOIS scores as a function of the AOI scale factor, comparing the inter-subject agreement, cross-stimulus, random and cross-task baselines with the Itti & Koch, Renninger et al. and IRL predictors.]
Figure 4: Task-specific human gaze prediction performance on the VOC 2012 actions dataset. (a)
Our trained HOG detector outperforms existing saliency models, when evaluated under both the KL
divergence and AUC metrics. (b-c) Learning techniques can also be used to predict eye movements
under task constraints. Our proposed Inverse Reinforcement Learning (IRL) model better matches
observed human visual search scanpaths when compared with two existing methods, under each of
the AOI-based metrics we introduce. Methods marked by "*" have been trained on our dataset.
useful saccadic strategies associated with cognitive processes involved in complex tasks such as
action recognition, but avoids the difficulty of explicitly specifying these processes.
Problem Formulation: We model a scanpath $\zeta$ as a sequence of states $s_t = (x_t, y_t)$ and actions
$a_t = (\Delta x, \Delta y)$, where states correspond to fixations, represented by their visual angular coordinates
with respect to the center of the screen, and actions model saccades, represented as displacement
vectors expressed in visual degrees. We rely on a maximum entropy IRL formulation [27] to model
the distribution over the set $\Xi^{(s,T)}$ of all possible scanpaths of length $T$ starting from state $s$ for a
given image as:

$$p_\theta^{(s,T)}(\zeta) = \frac{1}{Z^{(T)}(s)} \exp\left[\sum_{t=1}^{T} r_\theta(s_t, a_t)\right], \quad \forall \zeta \in \Xi^{(s,T)} \qquad (1)$$

where $r_\theta(s_t, a_t)$ is the reward function associated with taking the saccadic action $a_t$ while fixating
at position $s_t$, $\theta$ are the model parameters and $Z^{(T)}(s)$ is the partition function for paths of length $T$
starting with state $s$, see (3). The reward function $r_\theta(s_t, a_t) = \mathbf{f}^\top(s_t)\,\theta^{a_t}$ is the inner product between
a feature vector $\mathbf{f}(s_t)$ extracted at image location $s_t$ and a vector of weights corresponding to action
$a_t$. Note that reward functions in our formulation depend on the subject's action. This enables the
model to encode saccadic preferences conditioned on the current observation, in addition to planning
future actions by maximizing the cumulative reward along the entire scanpath, as implied by (1).
In our formulation, the goal of Maximum Entropy IRL is to find the weights $\theta$ that maximize the
likelihood of the demonstrated scanpaths across all the images in the dataset. For a single image and
given the set of human scanpaths $E$, all starting at the image center $s_c$, the likelihood is:

$$\mathcal{L}_\theta = \frac{1}{|E|} \sum_{\zeta \in E} \log p_\theta^{(s_c,T)}(\zeta) \qquad (2)$$

This maximization problem can be solved using a two-step dynamic programming formulation. In
the backward step, we compute the state and state-action partition functions for each possible state
$s$ and action $a$, and for each scanpath length $i = 1, \ldots, T$:
"
(i)
Z? (s)
=
X
???
(s,i)
exp
i
X
#
r? (st , at ) ,
(i)
Z? (s, a)
t=1
=
X
(s,i)
???
s.t.
a1 =a
7
exp
" i
X
t=1
#
r? (st , at )
(3)
(i)
The optimal policy ?? at the ith fixation is:
(i)
(T ?i+1)
?? (a|s) = Z?
(T ?i+1)
(s, a)/Z?
(s)
(4)
This policy induces the maximum entropy distribution $p_\theta^{(s_c,T)}$ over scanpaths for the image and is
used in the forward step to efficiently compute the expected mean feature count for each action
$a$, which is $\bar{\mathbf{f}}_\theta^a = \mathbb{E}_{\zeta \sim p_\theta^{(s_c,T)}}\left[\sum_{t=1}^{T} \mathbf{f}(s_t) \cdot \mathbb{I}[a_t = a]\right]$, where $\mathbb{I}[\cdot]$ is the indicator function. The
gradient of the likelihood function (2) with respect to the parameters $\theta^a$ is:

$$\frac{\partial \mathcal{L}_\theta}{\partial \theta^a} = \hat{\mathbf{f}}^a - \bar{\mathbf{f}}_\theta^a \qquad (5)$$

where $\hat{\mathbf{f}}^a = \frac{1}{|E|} \sum_{\zeta \in E} \sum_t \mathbf{f}(s_t) \cdot \mathbb{I}[a_t = a]$ is the empirical feature count along training scanpaths.
Eqs. (1)–(5) are defined for a given input image. The likelihood and its gradient over the training
set are obtained by summing up the corresponding quantities. In our formulation, policies encode
the image-specific strategy of the observer, based on a task-specific reward function that is learned
across all images. We thus learn two different IRL models, for action and context analysis. Note
that we restrict ourselves to scanpaths of length T starting from the center of the screen and do not
predefine goal states. We set T, by validation, to the average scanpath length in the dataset.
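The backward step of eqs. (3)–(4) can be sketched as a soft value iteration over a discrete grid, as below; here `R` holds the rewards f(s)ᵀθᵃ, and the deterministic successor table `NEXT` (with -1 for saccades leaving the grid) is our assumption about how the displacement actions are implemented. Log-space arithmetic is used for numerical stability.

```python
import numpy as np
from scipy.special import logsumexp

def backward_policies(R, NEXT, T):
    """R: (S, A) reward matrix; NEXT: (S, A) successor state ids (-1 = invalid)."""
    S, A = R.shape
    log_Z = np.zeros(S)                     # log Z^(0)(s) = 0 (empty path)
    policies = []
    for _ in range(T):
        # log Z^(i)(s, a) = R[s, a] + log Z^(i-1)(successor state), eq. (3)
        log_Zsa = np.where(NEXT >= 0,
                           R + log_Z[np.clip(NEXT, 0, S - 1)], -np.inf)
        log_Z = logsumexp(log_Zsa, axis=1)              # log Z^(i)(s)
        policies.append(np.exp(log_Zsa - log_Z[:, None]))  # eq. (4)
    # fixation i uses Z^(T-i+1): the policy for the first fixation has the
    # longest remaining horizon, so reverse the list
    return list(reversed(policies)), log_Z
```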
Experimental Procedure: We use a fine grid with 0.25° stepsize for the state space. The space of all
possible saccades on this grid is too large to be practical (≈ 10^5). We obtain a reduced vocabulary
of 1,000 actions by clustering saccades in the training set using k-means (see the sketch below). We then encode all
scanpaths in this discrete (state, action) space, with an average positional error of 0.47°. We extract
HoG features at each grid point and augment them with the output of our saliency detector. We
optimize the weight vector θ in the IRL framework and use a BFGS solver for fast convergence.
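The action vocabulary construction is a direct k-means step; a minimal sketch, assuming `saccades` stacks all training displacement vectors into an (N, 2) array, with the seed and `n_init` values being our choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_action_vocabulary(saccades, n_actions=1000, seed=0):
    """Cluster training saccades into a discrete action set."""
    km = KMeans(n_clusters=n_actions, n_init=10, random_state=seed).fit(saccades)
    # representative displacement per action, and the per-saccade action ids
    return km.cluster_centers_, km.labels_
```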
Findings: A trained MaxEnt IRL eye movement predictor performs better than the bottom-up models
of Itti & Koch [11] and Renninger et al. [20] (fig. 4b-c). The model is particularly powerful for predicting saccades (see the AOIT metric), as it can match more than twice the number of AOI transitions
generated by bottom-up models for the action recognition task. It also outperforms the other models
under the AOIP and AOIS metrics. Note that the latter only captures the overall ranking among
AOIs, as defined by the order in which these are fixated. A gap still remains to human performance,
underlining the difficulty of predicting eye movements in real-world images and for complex tasks
such as action recognition. For context recognition, prediction scores are generally closer to the
human baseline. This is, at least in part, facilitated by the often larger size of background structures
as compared to the humans or the manipulated objects involved in actions (fig. 2).
7 Conclusions
We have collected a large set of eye movement recordings for VOC 2012 Actions, one of the most
challenging datasets for action recognition in still images. Our data is obtained under the task
constraints of action and context recognition and is made publicly available. We have leveraged this
large amount of data (1 million human fixations) in order to develop Hidden Markov Models that
allow us to determine fixated AOI locations, their spatial support and the transitions between them
automatically from eyetracking data. This technique has made it possible to develop novel evaluation
metrics and to perform quantitative analysis regarding inter-subject consistency and the influence of
task on eye movements. The results reveal that, given real-world unconstrained image stimuli, the
task has a significant influence on the observed eye movements, both spatially and sequentially. At
the same time, such patterns are stable across subjects.
We have also introduced a novel eye movement prediction model that combines state-of-the-art
reinforcement learning techniques with advanced computer vision operators to learn task-specific
human visual search patterns. To our knowledge, the method is the first to learn eye movement
models from human eyetracking data. When measured under various evaluation metrics, the model
shows superior performance to existing bottom-up eye movement predictors. To close the human
performance gap, better image features, and more complex joint state and action spaces, within
reinforcement learning schemes, will be explored in future work.
Acknowledgments: Work supported in part by CNCS-UEFISCDI under CT-ERC-2012-1.
References
[1] E. Bazavan, F. Li, and C. Sminchisescu. Fourier kernel learning. In European Conference on Computer Vision, 2012.
[2] A. Borji and L. Itti. State-of-the-art in visual attention modelling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 2011.
[3] G. T. Buswell. How People Look at Pictures: A Study of the Psychology of Perception in Art. Chicago University Press, 1935.
[4] N. J. Butko and J. R. Movellan. Infomax control of eye movements. IEEE Transactions on Autonomous Mental Development, 2:91–107, 2010.
[5] M. S. Castelhano, M. L. Mack, and J. M. Henderson. Viewing task influences eye movement control during active scene perception. Journal of Vision, 9, 2008.
[6] M. Cerf, E. P. Frady, and C. Koch. Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9, 2009.
[7] M. Cerf, J. Harel, W. Einhäuser, and C. Koch. Predicting human gaze using low-level saliency combined with face detection. In Advances in Neural Information Processing Systems, 2007.
[8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE International Conference on Computer Vision and Pattern Recognition, 2005.
[9] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 1977.
[10] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
[11] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 2000.
[12] T. Judd, F. Durand, and A. Torralba. Fixations on low resolution images. In IEEE International Conference on Computer Vision, 2009.
[13] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In IEEE International Conference on Computer Vision, 2009.
[14] K. A. Ehinger, B. Sotelo, A. Torralba, and A. Oliva. Modeling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17, 2009.
[15] W. Kienzle, B. Schölkopf, F. Wichmann, and M. Franz. How to find interesting locations in video: a spatiotemporal interest point detector learned from human eye movements. In DAGM, 2007.
[16] M. F. Land and B. W. Tatler. Looking and Acting. Oxford University Press, 2009.
[17] F. Li, G. Lebanon, and C. Sminchisescu. Chebyshev approximations to the histogram χ² kernel. In IEEE International Conference on Computer Vision and Pattern Recognition, 2012.
[18] E. Marinoiu, D. Papava, and C. Sminchisescu. Pictorial human spaces: How well do humans perceive a 3d articulated pose? In IEEE International Conference on Computer Vision, 2013.
[19] S. Mathe and C. Sminchisescu. Dynamic eye movement datasets and learnt saliency models for visual action recognition. In European Conference on Computer Vision, 2012.
[20] L. W. Renninger, J. Coughlan, P. Verghese, and J. Malik. An information maximization model of eye movements. In Advances in Neural Information Processing Systems, pages 1121–1128, 2004.
[21] R. Subramanian, H. Katti, N. Sebe, M. Kankanhalli, and T.-S. Chua. An eye fixation database for saliency detection in images. In European Conference on Computer Vision, 2010.
[22] A. Torralba, A. Oliva, M. Castelhano, and J. Henderson. Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 2006.
[23] E. Vig, M. Dorr, and D. D. Cox. Space-variant descriptor sampling for action recognition based on saliency and eye movements. In European Conference on Computer Vision, 2012.
[24] S. Winkler and R. Subramanian. Overview of eye tracking datasets. In International Workshop on Quality of Multimedia Experience, 2013.
[25] A. Yarbus. Eye Movements and Vision. New York Plenum Press, 1967.
[26] K. Yun, Y. Peng, D. Samaras, G. J. Zelinsky, and T. L. Berg. Studying relationships between human gaze, description and computer vision. In IEEE International Conference on Computer Vision and Pattern Recognition, 2013.
[27] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence, 2008.
4,637 | 5,197 | Action is in the Eye of the Beholder: Eye-gaze Driven
Model for Spatio-Temporal Action Localization
Nataliya Shapovalova*     Michalis Raptis†     Leonid Sigal‡     Greg Mori*
*Simon Fraser University     †Comcast     ‡Disney Research
{nshapova,mori}@cs.sfu.ca     mraptis@cable.comcast.com     lsigal@disneyresearch.com
Abstract
We propose a weakly-supervised structured learning approach for recognition and
spatio-temporal localization of actions in video. As part of the proposed approach,
we develop a generalization of the Max-Path search algorithm which allows us to
efficiently search over a structured space of multiple spatio-temporal paths while
also incorporating context information into the model. Instead of using spatial
annotations in the form of bounding boxes to guide the latent model during training, we utilize human gaze data in the form of a weak supervisory signal. This is
achieved by incorporating eye gaze, along with the classification, into the structured loss within the latent SVM learning framework. Experiments on a challenging benchmark dataset, UCF-Sports, show that our model is more accurate,
in terms of classification, and achieves state-of-the-art results in localization. In
addition, our model can produce top-down saliency maps conditioned on the classification label and localized latent paths.
1 Introduction
Structured prediction models for action recognition and localization are emerging as prominent alternatives to more traditional holistic bag-of-words (BoW) representations. The obvious advantage
of such models is the ability to localize, spatially and temporally, an action (and actors) in potentially long and complex scenes with multiple subjects. Early alternatives [3, 7, 14, 27] address this
challenge using sub-volume search, however, this implicitly assumes that the action and actor(s) are
static within the frame. More recently, [9] and [18, 19] propose figure-centric approaches that can
track an actor by searching over the space of spatio-temporal paths in video [19] and by incorporating person detection into the formulation [9]. However, all successful localization methods, to date,
require spatial annotations in the form of partial poses [13], bounding boxes [9, 19] or pixel level
segmentations [7] for learning. Obtaining such annotations is both time consuming and unnatural;
often it is not easy for a human to decide which spatio-temporal segment corresponds to an action.
One alternative is to proceed in a purely unsupervised manner and try to mine for most discriminant
portions of the video for classification [2]. However, this often results in overfitting due to the relatively small and constrained nature of the datasets, as discriminant portions of the video, in training,
may correspond to regions of background and be unrelated to the motion of interest (e.g., grass may
be highly discriminative for the "kicking" action because in the training set most instances come from
soccer, but clearly "kicking" can occur on nearly any surface). Bottom-up perceptual saliency, computed from eye-gaze of observers (obtained using an eye tracker), has recently been introduced as
another promising alternative to annotation and supervision [11, 21]. It has been shown that traditional BoW models computed over the salient regions of the video result in superior performance,
compared to dense sampling of descriptors. However, this comes at the expense of losing the ability to
localize actions. Bottom-up saliency models usually respond to numerous unrelated low-level stimuli [25] (e.g., textured cluttered backgrounds, large motion gradients from subjects irrelevant to the
action, etc.) which often fall outside the region of the action (and can confuse classifiers).
In this paper we posit that a good spatio-temporal model for action recognition and localization
should have three key properties: (1) be figure-centric, to allow for subject and/or camera motion,
(2) discriminative, to facilitate classification and localization, and (3) perceptually semantic, to mitigate overfitting to accidental statistical regularities in a training set. To avoid reliance on spatial
annotation of actors we utilize human gaze data (collected by having observers view corresponding videos [11]) as weak supervision in learning1 . Note that such weak annotation is more natural,
effortless (from the point of view of an annotator) and can be done in real-time. By design, gaze
gives perceptually semantic interest regions; however, while semantic, gaze, much like bottom-up
saliency, is not necessarily discriminative. Fig. 1(b) shows that while for some (typically fast) actions like "diving", gaze may be well aligned with the actor and hence discriminative, for others, like
"golf" and "horse riding", gaze may either drift to salient but non-discriminant regions (the ball), or
simply fall on background regions that are prominent or of intrinsic aesthetic value to the observer.
To deal with complexities of the search and ambiguities in the weak-supervision, given by gaze, we
formulate our model in a max-margin framework where we attempt to infer latent smooth spatiotemporal path(s) through the video that simultaneously maximize classification accuracy and pass
through regions of high gaze concentration. During learning, this objective is encouraged in the
latent Structural SVM [26] formulation through a real-valued loss that penalizes misclassification
and, for correctly classified instances, misalignment with salient regions induced by the gaze. In
addition to classification and localization, we show that our model can provide top-down action-specific saliency by predicting a distribution over gaze conditioned on the action label and inferred
spatio-temporal path. Having less (annotation) information available at training time, our model
shows state-of-the-art classification and localization accuracy on the UCF-Sports dataset and is the
first, to our knowledge, to propose top-down saliency for the action classification task.
2 Related works
Action recognition: The literature on vision-based action recognition is too vast to cover here. We focus
on the most relevant approaches and point the reader to recent surveys [20, 24] for a more complete
overview. The most prominent action recognition models to date utilize visual BoW representations [10, 22] and extensions [8, 15]. Such holistic models have proven to be surprisingly good at
recognition, but are, by design, incapable of spatial or temporal localization of actions.
Saliency and eye gaze: Work in cognitive science suggests that control inputs to the attention mechanism can be grouped into two categories: stimulus-driven (bottom-up) and goal-driven (top-down)
[4]. Recent work in action recognition [11, 21] looks at bottom-up saliency as a way to sparsify
descriptors and to bias BoW representations towards more salient portions of the video. In [11] and
[21] multiple subjects were tasked with viewing videos while their gaze was recorded. A saliency
model is then trained to predict the gaze and is used to either prune or weight the descriptors. However, the proposed saliency-based sampling is purely bottom-up, and still lacks the ability to localize
actions in either space or time². In contrast, our model is designed with spatio-temporal localization in mind and uses gaze data as weak supervision during learning. In [16] and [17] the authors use
an "objectness" saliency operator and a person detector as weak supervision respectively; however, in
both cases the saliency is bottom-up and task independent. The top-down discriminative saliency,
based on the distribution of gaze in our approach, allows our model to focus on perceptually salient regions that are also discriminative. Similar in spirit, in [5] gaze and action labels are simultaneously
inferred in an ego-centric action recognition setting. While conceptually similar, the model in [5] is
significantly different both in terms of formulation and use. The model of [5] is generative and relies
on the existence of object detectors.
Sub-volume search: Spatio-temporal localization of actions is a difficult task, largely due to the
computational complexity of search involved. One way to alleviate this computational complexity
is to model the action as an axis aligned rectangular 3D volume. This allows spatio-temporal search
to be formulated efficiently using convolutions in the Fourier [3] or Clifford Fourier [14] domain. In
[28] an efficient spatio-temporal branch-and-bound approach was proposed as an alternative. However,
the assumption of a single fixed axis-aligned volumetric representation is limiting and only applicable
¹ We assume no gaze data is available for test videos.
² Similar observations have been made in the object detection domain [25], where purely bottom-up saliency has been shown to produce responses on textured portions of the background, outside of the object of interest.
Figure 1: Graphical model representation is illustrated in (a). Term $\psi(x, h)$ captures information
about context (all the video excluding regions defined by latent variables $h$); terms $\phi(x, h_i)$ capture
information about latent regions. Inferred latent regions should be discriminative and match high
density regions of eye gaze data. In (b) ground truth eye gaze density, computed from fixations of
multiple subjects, is overlaid over images from sequences of 3 different action classes (see Sect. 1).
for well defined and relatively static actions. In [7] an extension to multiple sub-volumes that model
parts of the action is proposed and amounts to a spatio-temporal part-based (pictorial structure)
model. While part-based model of [7] allows for greater flexibility, the remaining axis-aligned
nature of part sub-volumes is still largely appropriate for recognition in scenarios where camera and
subject are relatively static. This constraint is slightly relaxed in [12] where a part-based model built
on dense trajectory clustering is proposed. However, [12] relies on sophisticated pre-processing
which requires building long feature trajectories over time, which is difficult to do for fast motions
or less textured regions.
Most closely related approaches to our work come from [9, 18, 19]. In [18] Tran and Yuan show
that a rectangular axis-aligned volume constraint can be relaxed by efficiently searching over the
space of smooth paths within the spatio-temporal volume. The resulting Max-Path algorithm is
applied to object tracking in video. In [19] this approach is further extended by incorporating MaxPath inference into a max-margin structured output learning framework, resulting in an approach
capable of localizing actions. We generalize Max-Path idea by allowing multiple smooth paths and
context within a latent max-margin structured output learning. In addition, our model is trained to
simultaneously localize and classify actions. Alternatively, [9] uses latent SVM to jointly detect
an actor and recognize actions. In practice, [9] relies on human detection for both inference and
learning and only sub-set of frames can be localized due to the choice of the features (HOG3D).
Similarly, [2] relies on person detection and distributed partial pose representation, in the form of
poselets, to build a spatio-temporal graph for action recognition and localization. We want to stress
that [2, 9, 18, 19] require bounding box annotations for actors in learning. In contrast, we focus on
a weaker and more natural source of data, gaze, to formulate our learning criteria.
3 Recognizing and Localizing Actions in Videos
Our goal is to learn a model that can jointly localize and classify human actions in video. This problem is often tackled in the same manner as object recognition and localization in images. However,
extension to a temporal domain comes with many challenges. The core challenges we address are:
(i) dealing with motion of the actor within the frame, resulting from camera motion or the actor's own motion
in the world; (ii) complexity of the resulting spatio-temporal search, that needs to search over the
space of temporal paths; (iii) ability to model coarse temporal progression of the action and action
context, and (iv) learning in absence of direct annotations for actor(s) position within the frame.
To this end, we propose a model that has the ability to localize temporally and spatially discriminative regions of the video and encode the context in which these regions occur. The output of the
model indicates the absence or presence of a particular action in the video sequence while simultaneously extracting the most discriminative and perceptually salient spatio-temporal video regions.
During the training phase, the selection of these regions is implicitly driven by eye gaze fixations
collected by a sample of viewers. As a consequence, our model is able to perform top-down video
saliency detection conditioned on the performed action and localized action region.
3.1 Model Formulation
Given a set of video sequences $\{x_1, \ldots, x_n\} \in \mathcal{X}$ and their associated labels $\{y_1, \ldots, y_n\}$, with $y_i \in \{-1, 1\}$, our purpose is to learn a mapping $f : \mathcal{X} \rightarrow \{-1, 1\}$. Additionally, we introduce auxiliary latent variables $\{h_1, \ldots, h_n\}$, where $h_i = \{h_{i1}, \ldots, h_{iK}\}$ and $h_{ik} \in \emptyset \cup \{(l_j, t_j, r_j, b_j)_{j=T_s}^{T_e}\}$ denotes the left, top, right and bottom coordinates of spatio-temporal paths of bounding boxes that are defined from frame $T_s$ up to $T_e$. The latent variables $h$ specify the spatio-temporal regions selected by our model. Our function is then defined $y_x^*(w) = f(x; w)$, where

$$(y_x^*(w), h_x^*(w)) = \operatorname*{argmax}_{(y,h) \in \{-1,1\} \times \mathcal{H}} F(x, y, h; w), \qquad F(x, y, h; w) = w^T \Psi(x, y, h), \qquad (1)$$
where $w$ is a parameter of the model, and $\Psi(x, y, h) \in \mathbb{R}^d$ is a joint feature map. Video sequences in which the action of interest is absent are treated as zero vectors in the Hilbert space induced by the feature map $\Psi$, similar to [1]. In contrast, the feature map of videos where the action of interest is present is decomposed into two components: a) the latent regions and b) context regions. As a consequence, the scoring function is written:

$$F(x, y=1, h; w) = w^T \Psi(x, y=1, h) = w_0^T \psi(x, h) + \sum_{k=1}^{K} w_k^T \phi(x, h_k) + b \qquad (2)$$
where K is the number of latent regions of the action model and b is the bias term. A graphical
representation of the model is illustrated in Fig. 1(a).
Latent regions potential $w_k^T \phi(x, h_k)$: This potential function measures the compatibility of latent spatio-temporal region $h_k$ with the action model. More specifically, $\phi(x, h_k)$ returns the sum of normalized BoW histograms extracted from the bounding boxes defined by the latent variable $h_k = (l_j, t_j, r_j, b_j)_{j=T_s}^{T_e}$ at each corresponding frame.

Context potential $w_0^T \psi(x, h)$: We define context as the entire video sequence excluding the latent regions; our aim is to capture any information that is not directly produced by the appearance and motion of the actor. The characteristics of the context are encoded in $\psi(x, h)$ as a sum of normalized BoW histograms at each frame of the video excluding the regions indicated by latent variables $h$.
Many action recognition scoring functions recently proposed [9, 12, 16] include the response of a
global BoW statistical representation of the entire video. While such formulations are simpler, since
the response of the global representation is independent from the selection of the latent variables,
they are also somewhat unsatisfactory from the modeling point of view. First, the visual information
that corresponds to the latent region of interest implicitly gets to be counted twice. Second, it is
impossible to decouple and analyze importance of foreground and contextual information separately.
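To make Eq. 2 concrete, the following is a minimal sketch (in Python/NumPy; our illustration, not the authors' code) of evaluating the scoring function from precomputed per-frame histograms. The array layout is an assumption made for the example:

    import numpy as np

    def score(w0, w_ks, b, psi_frames, phi_regions):
        # F(x, y=1, h; w) = w0^T psi(x,h) + sum_k w_k^T phi(x,h_k) + b   (Eq. 2)
        # psi_frames : (T, D) per-frame normalized BoW histograms of the video
        #              with the latent-region boxes masked out (context feature).
        # phi_regions: list of K arrays, each (T_k, D); per-frame normalized BoW
        #              histograms inside region k's bounding boxes.
        total = w0 @ psi_frames.sum(axis=0) + b             # context potential
        for w_k, hists in zip(w_ks, phi_regions):           # latent-region potentials
            total += w_k @ hists.sum(axis=0)
        return float(total)

Because both potentials are sums of per-frame histograms, the score decomposes over frames; this decomposability is what the dynamic program in Sect. 3.2 exploits.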
3.2 Inference
Given the model parameters $w$ and an unseen video $x$, our goal is to infer the binary action label $y^*$ as well as the location of latent regions $h^*$ (Eq. 1). The scoring function for the case of $y = -1$ is equal to zero due to the trivial zero vector feature map (Sect. 3.1). However, estimating the optimal value of the scoring function for the case of $y = 1$ involves maximization over the latent variables. The search space over even a single (non-smooth) spatio-temporal path of variable-size bounding boxes in a video sequence of width $M$, height $N$ and length $T$ is exponential: $O((MN)^{2T})$. Therefore, we restrict the search space by introducing a number of assumptions. We constrain the search space to smooth spatio-temporal paths³ of fixed-size bounding boxes [18]. These constraints allow the inference of the optimal latent variables for a single region using dynamic programming, similar to the Max-Path algorithm proposed by Tran and Yuan [18].
Algorithm 1 summarizes the dynamic programming process, considering both the context and the latent region contributions. The time and space complexity of this algorithm is $O(MNT)$. However, without introducing further constraints on the latent variables, the extension of this forward message passing procedure to multiple latent regions results in an algorithm exponential in the number of regions, because of the implicit dependency of the latent variables through the context
³ The feasible positions of the bounding box in a frame are constrained by its location in the previous frame.
Algorithm 1 MaxCPath: Inference of Single Latent Region with Context
Input:  R(t): the context local response without the presence of a bounding box;
        Q0(u, v, t): the context local response excluding the bounding box at location (u, v);
        Q1(u, v, t): the latent region local response
Output: S(t): score of the best path till frame t;  L(t): end point of the best path till t;
        P(u, v, t): the best path record for tracing back
Initialize S* = -inf;  S(u, v, 0) = -inf for all u, v;  l* = null
for t = 1 to T do                       // Forward Process; Backward Process: t = T to 1
    for each (u, v) in [1..M] x [1..N] do
        (u0, v0) = argmax_{(u', v') in Nb(u, v)} S(u', v', t - 1)
        if S(u0, v0, t - 1) > sum_{i=1..T} R(i) then    // extend the best neighbouring path
            S(u, v, t) = S(u0, v0, t - 1) + Q0(u, v, t) + Q1(u, v, t) - R(t)
            P(u, v, t) = (u0, v0, t - 1)
        else                                            // start a new path at frame t
            S(u, v, t) = Q0(u, v, t) + Q1(u, v, t) + sum_{i=1..T} R(i) - R(t)
        end if
        if S(u, v, t) > S* then
            S* = S(u, v, t);  l* = (u, v, t)
        end if
    end for
    S(t) = S*;  L(t) = l*
end for
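The pseudocode above maps directly onto a dynamic program. Below is a sketch (our reconstruction, not the authors' implementation) that assumes a fixed-size box, a precomputed combined response Q = Q0 + Q1, and a smoothness neighbourhood Nb(u, v) of one pixel:

    import numpy as np

    def max_c_path(R, Q):
        # R : (T,) per-frame context-only response
        # Q : (T, M, N) combined response Q0 + Q1 of the box at (u, v), frame t
        T, M, N = Q.shape
        base = R.sum()                            # score of a video with no path
        S = np.full((T, M, N), -np.inf)
        P = np.full((T, M, N, 3), -1, dtype=int)  # back-pointers for tracing
        best, best_end = -np.inf, None
        for t in range(T):
            for u in range(M):
                for v in range(N):
                    S[t, u, v] = base + Q[t, u, v] - R[t]   # start a new path here
                    if t > 0:
                        u0, u1 = max(0, u - 1), min(M, u + 2)
                        v0, v1 = max(0, v - 1), min(N, v + 2)
                        win = S[t - 1, u0:u1, v0:v1]
                        du, dv = np.unravel_index(np.argmax(win), win.shape)
                        if win[du, dv] > base:              # extending beats restarting
                            S[t, u, v] = win[du, dv] + Q[t, u, v] - R[t]
                            P[t, u, v] = (t - 1, u0 + du, v0 + dv)
                    if S[t, u, v] > best:
                        best, best_end = S[t, u, v], (t, u, v)
        path, node = [], best_end                 # trace the best path back
        while node is not None:
            path.append(node)
            t, u, v = node
            node = tuple(P[t, u, v]) if P[t, u, v][0] >= 0 else None
        return best, path[::-1]

The backward variant needed by Algorithm 2 runs the same recursion with the frame index reversed.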
Algorithm 2 Inference: Two Latent Regions with Context
Input:  R(t): the context local response without the presence of a bounding box;
        Q0(u, v, t): the context local response excluding the bounding box at location (u, v);
        Q1(u, v, t): the local response of the first latent region;
        Q2(u, v, t): the local response of the second latent region
Output: S*: the maximum score of the inference;  h1, h2: first and second latent regions
Initialize S* = -inf;  t* = null
(S1, L1, P1) = MaxCPath-Forward(R, Q0, Q1)
(S2, L2, P2) = MaxCPath-Backward(R, Q0, Q2)
for t = 1 to T - 1 do
    S = S1(t) + S2(t + 1) - sum_{i=1..T} R(i)
    if S > S* then
        S* = S;  t* = t
    end if
end for
h1 = traceBackward(P1, L1(t*))
h2 = traceForward(P2, L2(t* + 1))
term. Incorporating temporal ordering constraints between the $K$ latent regions leads to a polynomial time algorithm. More specifically, the optimal scoring function can be inferred by enumerating all potential end locations of each latent region and executing Algorithm 1 independently at each interval, in $O(MNT^K)$. For the special case of $K = 2$, we derive a forward/backward message process that remains linear in the size of the video volume: $O(MNT)$; see the summary in Algorithm 2. In our experimental validation a model with 2 latent regions proved to be sufficiently expressive.
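The split over t* in Algorithm 2 can be sketched as below; S1 and S2 are assumed to be the per-frame best-path score arrays produced by forward and backward variants of the MaxCPath sketch above, and the traceback of h1 and h2 then follows the stored pointers exactly as in Algorithm 2:

    import numpy as np

    def best_split(R, S1, S2):
        # S1[t]: best first-region path score confined to frames 1..t
        # S2[t]: best second-region path score confined to frames t..T
        # Each score already contains the full context term sum_i R(i), so
        # one copy is subtracted when the two halves are combined.
        splits = S1[:-1] + S2[1:] - R.sum()
        t_star = int(np.argmax(splits))
        return t_star, float(splits[t_star])

Scanning the T - 1 candidate splits keeps the whole two-region inference at O(MNT).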
3.3 Learning Framework
Identifying the spatio-temporal regions of the video sequences that will enable our model to detect
human action is a challenging optimization problem. While the introduction of latent variables in
discriminative models [6, 9, 12, 13, 23, 26] is natural for many applications (e.g., modeling body
parts) and has also offered excellent performance [6], it also leads to training formulations with nonconvex functions. In our training formulation we adopt large-margin latent structured output learning [26]; however, we also introduce a loss function that weakly supervises the selection of latent variables based on human gaze information. Our training set of videos $\{x_1, \ldots, x_n\}$ along with their action labels $\{y_1, \ldots, y_n\}$ contains 2D fixation points (sampled at much higher frequency than the video frame rate) of 16 subjects observing the videos [11]. We transform these measurements using kernel density estimation with a Gaussian kernel (with bandwidth set to the visual angle span of 2°) into a probability density function of gaze $g_i = \{g_i^1, \ldots, g_i^{T_i}\}$ at each frame of video $x_i$. Following the Latent Structural SVM formulation [26], our learning takes the following form:
$$\min_{w, \xi \geq 0} \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \qquad (3)$$

$$\text{s.t.} \quad \max_{h_i' \in \mathcal{H}} w^T \Psi(x_i, y_i, h_i') - w^T \Psi(x_i, \hat{y}_i, \hat{h}_i) \geq \Delta(y_i, g_i, \hat{y}_i, \hat{h}_i) - \xi_i, \quad \forall \hat{y}_i \in \{-1, 1\}, \ \forall \hat{h}_i \in \mathcal{H},$$
where $\Delta(y_i, g_i, \hat{y}_i, \hat{h}_i) \geq 0$ is an asymmetric loss function encoding the cost of an incorrect action label prediction but also of mislocalization of the eye gaze. The loss function is defined as follows:

$$\Delta(y_i, g_i, \hat{y}_i, \hat{h}_i) = \begin{cases} 1 - \frac{1}{K}\sum_{k=1}^{K} \delta(g_i, \hat{h}_{ik}) & \text{if } y_i = \hat{y}_i = 1, \\ 1 - \frac{1}{2}(y_i \hat{y}_i + 1) & \text{otherwise,} \end{cases} \qquad (4)$$

where $\delta(g_i, \hat{h}_{ik})$ indicates the minimum overlap of $\hat{h}_{ik}$ and a given eye gaze map $g_i$ over a frame:

$$\delta(g_i, \hat{h}_{ik}) = \min_j \delta_p(b_{ik}^j, g_i^j), \quad T_{s,k} \leq j \leq T_{e,k}, \qquad (5)$$

$$\delta_p(b_{ik}^j, g_i^j) = \begin{cases} 1 & \text{if } \sum_{b_{ik}^j} g_i^j \geq r, \quad 0 < r < 1, \\ \frac{1}{r} \sum_{b_{ik}^j} g_i^j & \text{otherwise,} \end{cases} \qquad (6)$$

where $b_{ik}^j$ is the bounding box at frame $j$ of the $k$-th latent region in video $x_i$. The parameter $r$ regulates the minimum amount of eye gaze "mass" that should be enclosed by each bounding box. The loss function can be easily incorporated in Algorithm 1 during the loss-augmented inference.
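For illustration, Eqs. 4-6 amount to the short computation below (our sketch; per-frame gaze maps are assumed to be normalized densities, and boxes are given in pixel coordinates):

    import numpy as np

    def delta_region(gaze, boxes, r):
        # Eq. 5-6: worst-frame fraction of gaze mass inside the region's boxes,
        # saturating at 1 once at least a fraction r of the mass is enclosed.
        # gaze : dict frame -> (H, W) density;  boxes : dict frame -> (t, l, b, rt)
        vals = []
        for j, (top, left, bot, right) in boxes.items():
            mass = gaze[j][top:bot, left:right].sum() / max(gaze[j].sum(), 1e-12)
            vals.append(1.0 if mass >= r else mass / r)
        return min(vals)

    def loss(y, y_hat, deltas):
        # Eq. 4; deltas holds delta_region(...) for each of the K latent regions.
        if y == 1 and y_hat == 1:
            return 1.0 - float(np.mean(deltas))
        return 1.0 - 0.5 * (y * y_hat + 1)

For a correctly classified positive video the loss falls to zero only when every latent region encloses at least the fraction r of the gaze mass in its worst frame.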
4 Gaze Prediction
Our model is based on the core assumption that a subset of perceptually salient regions of a video,
encoded by the gaze map, share discriminative idiosyncrasies useful for human action classification.
The loss function dictating the learning process enables the model's parameters (i.e., $w$) to encode this notion into our model⁴. If this assumption holds in practice, we can use the selected latent regions for prediction of top-down saliency within the latent region. We do so by regressing the
amount of eye gaze (probability density map over gaze) on a fixed grid, inside each bounding box
of the latent regions, by conditioning on the low-level features that construct the feature map $\phi_i$ and the
action label. In this way the latent regions select consistent salient portions of videos using top-down
knowledge about the action, and image content modulates the saliency prediction within that region.
Given the training data gaze g and the corresponding inferred latent variables h, we learn a linear
regression, per action class, that maps an augmented feature representation of the extracted bounding boxes of each latent region to a coarse description of the corresponding gaze distribution. Each bounding box is divided into a 4 × 4 grid and a BoW representation for each cell is computed; the augmented feature is constructed by concatenating these histograms. Similarly, the human gaze is summarized by a 16-dimensional vector by accumulating gaze density at each cell over a 4 × 4 grid.
For visualization, we further smooth the predictions to obtain a continuous and smooth gaze density
over the latent regions. We find our top-down saliency predictions to be quite good (see Sect. 5) in
most cases, which experimentally validates our initial model assumption.
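A minimal sketch of this per-class regressor (ours; ridge-regularized for numerical stability, whereas the paper specifies plain linear regression, and lam is an assumed hyperparameter):

    import numpy as np

    def fit_gaze_regressor(box_feats, gaze_cells, lam=1.0):
        # box_feats : (n, 16*D) concatenated 4x4-cell BoW histograms of boxes
        # gaze_cells: (n, 16)   gaze density accumulated over the same 4x4 grid
        X, Y = np.asarray(box_feats), np.asarray(gaze_cells)
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
        return W    # predict the 16-d gaze grid of a new box as feats @ W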
5 Experiments
We evaluate our model on the UCF-Sports dataset presented in [14]. The dataset contains 150 videos
extracted from broadcast television channels and includes 10 different action classes. The dataset
includes annotation of action classes as well as bounding boxes around the person of interest (which
we ignore for training but use to measure localization performance). We follow the evaluation setup
defined in the work of Lan et al. [9] and split the dataset into 103 training and 47 test samples. We
employ the eye gaze data made available by Mathe and Sminchisescu [11]. The data captures eye
movements of 16 subjects while they were watching the video clips from the dataset. The eye gaze
data are represented with a probability density function (Sect. 4).
Data representation: We extract HoG, HoF, and HoMB descriptors [12] on a dense spatio-temporal grid and at 4 different scales. These descriptors are clustered into 3 vocabularies of sizes 500, 500, and 300, respectively. For the baseline experiments, we use an ℓ1-normalized histogram representation. For the potentials described in Sect. 3.1, we represent latent regions/context with the sum of per-frame normalized histograms. Per-frame normalization, as opposed to global normalization over the spatio-temporal region, allows us to aggregate scores iteratively in Algorithm 1.
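As a sketch of this representation (assuming descriptors have already been quantized against the vocabularies above):

    import numpy as np

    def frame_bow(word_ids, vocab_size):
        # L1-normalized BoW histogram of one frame's quantized descriptors.
        h = np.bincount(np.asarray(word_ids), minlength=vocab_size).astype(float)
        return h / max(h.sum(), 1.0)

    def region_feature(frames_word_ids, vocab_size):
        # phi(x, h_k): sum of per-frame normalized histograms over the region.
        # Normalizing per frame, rather than over the whole region, keeps the
        # feature additive across frames, so Algorithm 1 can accumulate scores.
        return sum(frame_bow(ids, vocab_size) for ids in frames_word_ids)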
Baselines: We compare our model to several baseline methods. All our baselines are trained with
linear SVM, to make them comparable to our linear model, and use the same feature representation
⁴ Parameter r of the loss (Sect. 3.3) modulates the importance of gaze localization within the latent region.
Model                     | Accuracy K=1 | Accuracy K=2 | Localization K=1 | Localization K=2
--------------------------|--------------|--------------|------------------|-----------------
Baselines                 |              |              |                  |
  Global BoW              | 64.29        |      -       | N/A              |       -
  BoW with SS             | 65.95        |      -       | N/A              |       -
  BoW with TS             | 69.64        |      -       | N/A              |       -
Our Model                 |              |              |                  |
  Region                  | 77.98        | 82.14        | 26.4             | 20.8
  Region+Context          | 77.62        | 81.31        | 32.3             | 29.3
  Region+Global           | 76.79        | 80.71        | 29.6             | 30.4
State-of-the-art          |              |              |                  |
  Lan et al. [9]          | 73.1         |      -       | 27.8             |       -
  Tran and Yuan [19]      | N/A          |      -       | 54.3*            |       -
  Shapovalova et al. [16] | 75.3         |      -       | N/A              |       -
  Raptis et al. [12]      | 79.4         |      -       | N/A              |       -
Table 1: Action classification and localization results. Our model significantly outperforms the baselines and most of the state-of-the-art results (see text for discussion). * Note that the average localization score is calculated based only on the three classes reported in [19].
as described above. We report the performance of three baselines: (1) Global BoW, where the video is represented with just one histogram and all spatio-temporal structure is discarded. (2) BoW with spatial split (SS), where the video is divided by a 2 × 2 spatial grid into parts in order to capture spatial structure. (3) BoW with temporal split (TS), where the video is divided into 2 consecutive temporal segments. This setup allows the capture of the basic temporal structure of human action.
Our model: We evaluate three different variants of our model, which we call Region, Region+Global, and Region+Context. Region: includes only the latent regions, the potentials $\phi$ from our scoring function in Eq. 1, and ignores the context features $\psi$. Region+Global: the context potential $\psi$ is replaced with a Global BoW, like in our first baseline. Region+Context: represents our full model from Eq. 1. We test all our models with one and two latent regions.
Action classification and localization: Results of action classification are summarized in Table 1.
We train a model for each action separately in a standard one-vs-all framework. Table 1 shows that
all our models outperform the BoW baselines and the results of Lan et al. [9] and Shapovalova et
al. [16]. The Region and Region+Context models with two latent regions demonstrate superior
performance compared to Raptis et al. [12]. Our model with 1 latent region performs slightly worse than the model of Raptis et al. [12]; however, note that [12] used a non-linear SVM with a χ² kernel and 4 regions, while we work with a linear SVM only. Further, we can clearly see that having 2 latent regions
is beneficial, and improves the classification performance by roughly 4%. The addition of Global
BoW marginally decreases the performance, due to, we believe, over counting of image evidence
and hence overfitting. Context does not improve classification, but does improve localization.
We perform action localization by following the evaluation procedure of [9, 19] and estimate how
well the inferred latent regions capture the human⁵ performing the action. Given a video, for each frame we compute the overlap score between the latent region and the ground truth bounding box of the human. The overlap $O(b_k^j, b_{gt}^j)$ is defined by the "intersection-over-union" metric between the inferred and ground truth bounding boxes. The total localization score per video is computed as an average of the overlap scores of the frames: $\frac{1}{T}\sum_{j=1}^{T} O(b_k^j, b_{gt}^j)$. Note, since our latent regions may not span the entire video, instead of dividing by the number of frames $T$, we divide by the total length of the inferred latent regions. To be consistent with the literature [9, 19], we calculate the localization score of each test video given its ground truth action label.
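This protocol reduces to the following sketch (our illustration; boxes as (left, top, right, bottom) tuples keyed by frame index):

    def iou(a, b):
        # Intersection-over-union of two boxes (left, top, right, bottom).
        iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        area = lambda r: max(0, r[2] - r[0]) * max(0, r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def video_localization_score(pred_boxes, gt_boxes):
        # Average per-frame overlap, divided by the latent region's length
        # (not the full video length T), as described above.
        overlaps = [iou(pred_boxes[j], gt_boxes[j])
                    for j in pred_boxes if j in gt_boxes]
        return sum(overlaps) / max(len(pred_boxes), 1)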
Table 1 illustrates the average localization scores⁶. It is clear that our model with Context achieves
considerably better localization than without (Region) especially with two latent regions. This can
be explained by the fact that in UCF-Sports background tends to be discriminative for classification;
hence without proper context a latent region is likely to drift to the background (which reduces
localization score). Context in our model models the background and leaves the latent regions free
to select perceptually salient regions of the video. Numerically, our full model (Region+Context)
outperforms the model of Lan et al. [9] (despite [9] having person detections and actor annotations
⁵ Note that by definition the task of localizing a human is unnatural for our model, since it captures perceptually salient, fixed-size, discriminative regions for action classification, not human localization. This unfavorably biases localization results against our model; see Fig. 3 for a visual comparison between annotated person regions and our inferred discriminative salient latent regions.
⁶ It is worth mentioning that [19] and [9] have regions detected at different subsets of frames; thus in terms of localization, these methods are not directly comparable.
Average amount of gaze (left):
         Region            Region+Context
         K=1     K=2       K=1     K=2
Ave.     60.6    47.6      68.5    63.8

Top-down saliency prediction (right):
         Region, K=1           Region+Context, K=1
         Corr.    χ²           Corr.    χ²
Ours     0.36     1.64         0.36     1.55
[11]     0.44     1.43         0.46     1.31
Table 2: Average amount of gaze (left): the table shows the fraction of ground truth gaze captured by the latent region(s) on test videos; context improves the performance. Top-down saliency prediction (right): χ² distance and normalized cross-correlation between predicted and ground-truth gaze densities.
[Figure 2 contains three Precision-Recall plots, one per action class (Diving, Running, Horse-riding); each compares Our model against Tran & Yuan (2011) and Tran & Yuan (2012), with Recall on the horizontal axis and Precision on the vertical axis.]
Figure 2: Precision-Recall curves for localization: We compare our model (Region+Context with
K=1 latent region) to the methods from [18] and [19].
Figure 3: Localization and gaze prediction: First row: ground-truth gaze and person bounding box;
second row: predicted gaze and extent of the latent region in the frame.
at training). We cannot compare our average performance to Tran and Yuan [19] since their approach
is evaluated only on 3 action classes out of 10, but we provide their numbers in Table 1 for reference.
We build Precision-Recall (PR) curves for our model (Region+Context) and the results reported in [19] to better evaluate our method with respect to [19] (see Fig. 2). We refer to [19] for the experimental setup and evaluate the PR curves at an overlap threshold of 0.2. For the 3 classes in [19] our model performs considerably better for the "diving" action, similarly for "horse-riding", and marginally worse for "running".
Gaze localization and prediction: Since our model is driven by eye-gaze, we also measure how
much gaze our latent regions can actually capture on the test set and whether we can predict eyegaze saliency maps for the inferred latent regions. Evaluation of the gaze localization is performed
in a similar fashion to the evaluation of action localization described earlier. We estimate the amount of gaze that falls into each bounding box of the latent region, and then average the gaze amount over the length of all the latent regions of the video. Thus, each video has a gaze localization score $s_g \in [0, 1]$. Table 2 (left) summarizes average gaze localization for different variants of our model. Notably, we are able to capture around 60% of the gaze with latent regions when modeling context.
We estimate gaze saliency, as described in Sect. 4. Qualitative results of the gaze prediction are
illustrated in Fig. 3. For quantitative comparison we compute the normalized cross-correlation and χ² distance between the predicted and ground truth gaze; see Table 2 (right). We also evaluate the performance of the bottom-up gaze prediction of [11] within the inferred latent regions. The better results of the bottom-up approach can be explained by the superior low-level features used for learning [11]. Still, we can observe that for both approaches the full model (Region+Context) is more consistent with gaze prediction.
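The two metrics of Table 2 (right) can be computed as in the sketch below; the exact normalizations used in the paper are not stated, so this shows one standard choice:

    import numpy as np

    def compare_gaze(pred, gt, eps=1e-8):
        # Normalized cross-correlation and chi-squared distance between a
        # predicted and a ground-truth gaze density map (nonnegative arrays).
        p, g = pred.ravel().astype(float), gt.ravel().astype(float)
        pc, gc = p - p.mean(), g - g.mean()
        ncc = (pc @ gc) / (np.linalg.norm(pc) * np.linalg.norm(gc) + eps)
        p, g = p / (p.sum() + eps), g / (g.sum() + eps)
        chi2 = 0.5 * np.sum((p - g) ** 2 / (p + g + eps))
        return float(ncc), float(chi2)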
6 Conclusion
We propose a novel weakly-supervised structured learning approach for recognition and spatio-temporal localization of actions in video. A special case of our model, with two temporally ordered paths and context, can be solved in linear time. In addition, our approach does not require actor annotations for training. Instead we rely on gaze data for weak supervision, by incorporating it
into our structured loss. Further, we show how our model can be used to predict top-down saliency
in the form of gaze density maps. In the future, we plan to explore the benefits of searching over
region scale and focus on more complex spatio-temporal relationships between latent regions.
References
[1] M. Blaschko and C. Lampert. Learning to localize objects with structured output regression. In ECCV, 2008.
[2] C. Chen and K. Grauman. Efficient activity detection with max-subgraph search. In CVPR, 2012.
[3] K. G. Derpanis, M. Sizintsev, K. Cannons, and R. P. Wildes. Efficient action spotting based on a spacetime
oriented structure representation. In CVPR, 2010.
[4] D. V. Essen, B. Olshausen, C. Anderson, and J. Gallant. Pattern recognition, attention, and information
bottlenecks in the primate visual system. SPIE Conference on Visual Information Processing: From
Neurons to Chips, 1991.
[5] A. Fathi, Y. Li, and J. M. Rehg. Learning to recognize daily actions using gaze. In ECCV, 2012.
[6] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part based models. IEEE PAMI, 2010.
[7] Y. Ke, R. Sukthankar, and M. Hebert. Event detection in crowded videos. In ICCV, 2007.
[8] A. Kovashka and K. Grauman. Learning a Hierarchy of Discriminative Space-Time Neighborhood Features for Human Action Recognition. In CVPR, 2010.
[9] T. Lan, Y. Wang, and G. Mori. Discriminative figure-centric models for joint action localization and
recognition. In ICCV, 2011.
[10] I. Laptev. On space-time interest points. IJCV, 64, 2005.
[11] S. Mathe and C. Sminchisescu. Dynamic eye movement datasets and learnt saliency models for visual
action recognition. In ECCV, 2012.
[12] M. Raptis, I. Kokkinos, and S. Soatto. Discovering discriminative action parts from mid-level video
representations. In CVPR, 2012.
[13] M. Raptis and L. Sigal. Poselet key-framing: A model for human activity recognition. In CVPR, 2013.
[14] M. Rodriguez, J. Ahmed, and M. Shah. Action MACH: a spatio-temporal maximum average correlation
height filter for action recognition. In CVPR, 2008.
[15] M. Ryoo and J. Aggarwal. Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities. In ICCV, 2009.
[16] N. Shapovalova, A. Vahdat, K. Cannons, T. Lan, and G. Mori. Similarity constrained latent support vector
machine: An application to weakly supervised action classification. In ECCV, 2012.
[17] P. Siva and T. Xiang. Weakly supervised action detection. In BMVC, 2011.
[18] D. Tran and J. Yuan. Optimal spatio-temporal path discovery for video event detection. In CVPR, 2011.
[19] D. Tran and J. Yuan. Max-margin structured output regression for spatio-temporal action localization. In
NIPS, 2012.
[20] P. Turaga, R. Chellappa, V. Subrahmanian, and O. Udrea. Machine recognition of human activities: A
survey. IEEE Transactions on Circuits and Systems for Video Technology, 18(11):1473?1488, 2008.
[21] E. Vig, M. Dorr, and D. Cox. Space-variant descriptor sampling for action recognition based on saliency
and eye movements. In ECCV, 2012.
[22] H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features
for action recognition. In BMVC, 2009.
[23] Y. Wang and G. Mori. Hidden part models for human action recognition: Probabilistic vs. max-margin.
IEEE PAMI, 2010.
[24] D. Weinland, R. Ronfard, and E. Boyer. A survey of vision-based methods for action representation,
segmentation and recognition. Computer Vision and Image Understanding, 115(2):224?241, 2011.
[25] J. Yang and M.-H. Yang. Top-down visual saliency via joint crf and dictionary learning. In CVPR, 2012.
[26] C.-N. J. Yu and T. Joachims. Learning structural svms with latent variables. In ICML, 2009.
[27] J. Yuan, Z. Liu, and Y. Wu. Discriminative subvolume search for efficient action detection. In CVPR,
2009.
[28] J. Yuan, Z. Liu, and Y. Wu. Discriminative video pattern search for efficient action detection. IEEE PAMI,
33(9), 2011.
Objects, and Attributes Estimation
Vibhav Vineet
Oxford Brookes University, UK
vibhav.vineet@gmail.com
Carsten Rother
TU Dresden, Germany
carsten.rother@tu-dresden.de
Philip H.S. Torr
University of Oxford, UK
philip.torr@eng.ox.ac.uk
Abstract
Many methods have been proposed to solve the problems of recovering intrinsic
scene properties such as shape, reflectance and illumination from a single image,
and object class segmentation separately. While these two problems are mutually
informative, in the past not many papers have addressed this topic. In this work we
explore such joint estimation of intrinsic scene properties recovered from an image, together with the estimation of the objects and attributes present in the scene.
In this way, our unified framework is able to capture the correlations between
intrinsic properties (reflectance, shape, illumination), objects (table, tv-monitor),
and materials (wooden, plastic) in a given scene. For example, our model is able to
enforce the condition that if a set of pixels take same object label, e.g. table, most
likely those pixels would receive similar reflectance values. We cast the problem
in an energy minimization framework and demonstrate the qualitative and quantitative improvement in the overall accuracy on the NYU and Pascal datasets.
1 Introduction
Recovering scene properties (shape, illumination, reflectance) that led to the generation of an image
has been one of the fundamental problems in computer vision. Barrow and Tenenbaum [13] posed
this problem as representing each scene property with its own distinct "intrinsic" image. Over the years, many decomposition methods have been proposed [5, 16, 17], but most of them focussed on recovering a reflectance image and a shading¹ image without explicitly modelling illumination or shape. But in recent years a breakthrough in the research on intrinsic images came with the works
of Barron and Malik [1-4] who presented an algorithm that jointly estimated the reflectance, the
illumination and the shape. They formulate this decomposition problem as an energy minimization
problem that captures prior information about the structure of the world.
Further, recognition of objects and their material attributes is central to our understanding of the
world. A great deal of work has been devoted to estimating the objects and their attributes in the
scene: Shotton et al. [22] and Ladicky et al. [9] propose approaches to estimate the object labels at the pixel level. Separately, Adelson [20], Farhadi et al. [6], and Lazebnik et al. [23] define and estimate
the attributes at the pixel, object and scene levels. Some of these attributes are material properties
such as woollen, metallic, shiny, and some are structural properties such as rectangular, spherical.
While these methods for estimating the intrinsic images, objects and attributes have separately been
successful in generating good results on laboratory and real-world datasets, they fail to capture the
strong correlation existing between these properties. Knowledge about the objects and attributes
in the image can provide strong prior information about the intrinsic properties. For example, if a
set of pixels takes the same object label, e.g. table, most likely those pixels would receive similar
reflectance values. Thus recovering the objects and their attributes can help reduce the ambiguities
present in the world leading to better estimation of the reflectance and other intrinsic properties.
¹ Shading is the product of some shape and some illumination model, which includes effects such as shadows, indirect lighting, etc.
[Figure 1 panels: Input Image, Input Depth Image, Reflectance, Shading, Depth, Object (object-color coding), Attributes (attribute-color coding).]
Figure 1: Given a RGBD image, our algorithm jointly estimates the intrinsic properties such as
reflectance, shading and depth maps, along with the per-pixel object and attribute labels.
Additionally such a decomposition might be useful for per-pixel object and attribute segmentation
tasks. For example, using reflectance (which is illumination invariant) should improve the results when estimating per-pixel object and attribute labels [24]. Moreover, if a set of pixels has similar reflectance values, they are more likely to have the same object and attribute class.
Some of the previous research has looked at the correlation of objects and intrinsic properties by
propagating results from one step to the next. Osadchy et al. [18] use specular highlights to improve recognition of transparent, shiny objects. Liu et al. [15] recognize material categories utilizing the correlation between the materials and their reflectance properties (e.g. glass is often translucent). Weijer et al. [14] use knowledge of the objects present in the scene to better separate the illumination
from the reflectance images. However, the problem with these approaches is that the errors in one
step can propagate to the next steps with no possibility of recovery. Joint estimation of the intrinsic
images, objects and attributes can be used to overcome these issues. For instance, in the context of
joint object recognition and depth estimation such positive synergy effects have been shown in e.g.
[8].
In this work, our main contribution is to explore such synergy effects existing between the intrinsic
properties, objects and material attributes present in a scene (see Fig. 1). Given an image, our
algorithm jointly estimates the intrinsic properties such as reflectance, shading and depth maps,
along with per-pixel object and attribute labels. We formulate it in a global energy minimization
framework, and thus our model is able to enforce the consistency among these terms. Finally,
we use an approximate dual decomposition based strategy to efficiently perform inference in the
joint model consisting of both the continuous (reflectance, shape and illumination) and discrete
(objects and attributes) variables. We demonstrate the potential of our approach on the aNYU and
aPascal datasets, which are extended versions of the NYU [25] and Pascal [26] datasets with per-pixel attribute labels. We evaluate both the qualitative and quantitative improvements for the object
and attribute labelling, and the qualitative improvement for the intrinsic images estimation.
We introduce the problem in Sec. 2. Section 3 provides details about our joint model, Section 4
describes our inference and learning, and Sections 5 and 6 provide experimentation and discussion.
2 Problem Formulation
Our goal is to jointly estimate the intrinsic properties of the image, i.e. reflectance, shape and
illumination, along with estimating the objects and attributes at the pixel level, given an image
array C̄ = (C̄_1 ... C̄_V), where C̄_i ∈ R^3 is the i-th pixel's associated RGB value and
i ∈ V = {1...V}. Before going into the details of the joint formulation, we consider the formulations
for independently solving these problems. We first briefly describe the SIRFS (shape, illumination
and reflectance from shading) model [2] for estimating the intrinsic properties for a single given
object, and then a CRF model for estimating objects and attributes [12].
2.1 SIRFS model for a single, given object mask
We build on the SIRFS model [2] for estimating the intrinsic properties of an image. They formulate the problem of recovering the shape, illumination and reflectance as an energy minimization
problem given an image. Let R = (R_1 ... R_V) and Z = (Z_1 ... Z_V) be the reflectance and depth maps
respectively, where R_i ∈ R^3 and Z_i ∈ R^3, and let the illumination L be a 27-dimensional vector of
spherical harmonics [10]. Further, let S(Z, L) be a function that generates a shading image given
the depth map Z and the illumination L. Here S_i ∈ R^3 and subsumes all light-dependent properties,
e.g. shadows and inter-reflections (refer to [2] for details). The SIRFS model then minimizes the energy
minimize_{R,Z,L}  E^{SIRFS} = E^R(R) + E^Z(Z) + E^L(L)
subject to  C̄ = R ∘ S(Z, L)    (1)
where '∘' represents componentwise multiplication, and E^R(R), E^Z(Z) and E^L(L) are the costs
for the reflectance, depth and illumination respectively. The most likely solution is then estimated
using a multi-scale L-BFGS (a limited-memory approximation of the Broyden-Fletcher-Goldfarb-Shanno algorithm [2]) strategy, which in practice finds better local optima than other gradient descent
strategies. The SIRFS model is limited to estimating the intrinsic properties for a single object mask
within an image. The recently proposed Scene-SIRFS model [4] proposes an approach to recover
the intrinsic properties of a whole image by embedding a mixture of shapes in a soft segmentation
of the scene. In Sec. 3 we will also extend the SIRFS model to handle multiple objects. The main
difference to Scene-SIRFS is that we perform joint optimization over the object (and attribute)
labelling and the per-pixel intrinsic image properties.
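To make the rendering constraint concrete, here is a minimal sketch (not the authors' code) of scoring a candidate (R, Z, L) against C̄ = R ∘ S(Z, L). The shading function and the smoothness costs are simplified stand-ins for the real S(Z, L) and the E^R, E^Z terms of [2]; the weight lam is an arbitrary placeholder.

```python
import numpy as np

def shading_from_depth(Z, L):
    """Toy stand-in for S(Z, L): shade each pixel by the dot product of a
    normal estimated from depth gradients with a single light direction L.
    The real SIRFS model uses 27-dim spherical-harmonic illumination [10]."""
    gy, gx = np.gradient(Z)
    normals = np.dstack([-gx, -gy, np.ones_like(Z)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    s = np.clip(normals @ L, 0.0, None)   # per-pixel scalar shading
    return s[..., None]                   # broadcast over the RGB channels

def sirfs_energy(C, R, Z, L, lam=10.0):
    """Toy smoothness costs standing in for E^R and E^Z, plus a soft
    penalty enforcing the constraint C = R * S(Z, L) componentwise."""
    S = shading_from_depth(Z, L)
    data = np.sum((C - R * S) ** 2)       # rendering-constraint violation
    e_r = np.abs(np.diff(R, axis=0)).sum() + np.abs(np.diff(R, axis=1)).sum()
    e_z = (np.diff(Z, axis=0) ** 2).sum() + (np.diff(Z, axis=1) ** 2).sum()
    return lam * data + e_r + e_z

H, W = 8, 8
Z = np.random.rand(H, W)
R = np.random.rand(H, W, 3)
L = np.array([0.2, 0.3, 0.9])
C = R * shading_from_depth(Z, L)          # an exactly consistent observation
print(sirfs_energy(C, R, Z, L))           # the data term is zero here
```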
2.2 Multilabel Object and Attribute Model
The problem of estimating the per-pixel objects and attributes labels can also be formulated in a
CRF framework [12]. Let O = (O1 ...OV ) and A = (A1 ...AV ) be the object and attribute variables
associated with all V pixels, where each object variable Oi takes one out of K discrete labels such as
table, monitor, or floor. Each attribute variable Ai takes a label from the power set of the M attribute
labels, for example the subset of attribute labels can be Ai = {red, shiny, wet}. Efficient inference
is performed by first representing each attributes subset Ai by M binary attribute variables Am
i ?
th
th
m
{0, 1}, meaning that Am
=
1
if
the
i
pixel
takes
the
m
attribute
and
it
is
absent
when
A
=
0.
i
i
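As a concrete illustration of this encoding (a sketch; the small attribute vocabulary is made up for the example):

```python
ATTRIBUTES = ["red", "shiny", "wet", "wooden"]   # hypothetical vocabulary, M = 4

def encode_subset(subset):
    """Map an attribute subset A_i to M binary variables A_i^m in {0, 1}."""
    return [1 if a in subset else 0 for a in ATTRIBUTES]

print(encode_subset({"red", "shiny", "wet"}))    # -> [1, 1, 1, 0]
```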
Under this assumption, the most likely solution for the objects and attributes corresponds to
minimizing the following energy function:
E^{OA}(O, A) = Σ_{i∈V} ψ_i(O_i) + Σ_m Σ_{i∈V} ψ_{i,m}(A_i^m) + Σ_{i<j∈V} ψ_{ij}(O_i, O_j) + Σ_m Σ_{i<j∈V} ψ_{ij}(A_i^m, A_j^m)    (2)
Here ψ_i(O_i) and ψ_{i,m}(A_i^m) are the object and per-binary-attribute unary terms respectively.
Similarly, ψ_{ij}(O_i, O_j) and ψ_{ij}(A_i^m, A_j^m) are the pairwise terms defined over the object and
per-binary-attribute variables. Finally, the best configuration for the objects and attributes is estimated
using a mean-field based inference approach [12]. Further details about the form of the unary and
pairwise terms and the inference approach are described in our technical report [29].
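To fix ideas, here is a minimal sketch (not the authors' code) evaluating an energy of the form of Eq. 2 on a 4-connected grid with Potts pairwise terms; the random unaries stand in for learnt classifier responses and the weights are arbitrary:

```python
import numpy as np

def potts(a, b, w):
    """Pairwise Potts cost: pay w whenever the two labels disagree."""
    return 0.0 if a == b else w

def energy_oa(O, A, unary_o, unary_a, w_o=1.0, w_a=0.5):
    """E^OA(O, A) of Eq. 2: object unaries, per-binary-attribute unaries,
    and Potts pairwise terms over a 4-connected pixel grid."""
    H, W = O.shape
    M = A.shape[2]
    e = 0.0
    for i in range(H):
        for j in range(W):
            e += unary_o[i, j, O[i, j]]
            e += sum(unary_a[i, j, m, A[i, j, m]] for m in range(M))
            for di, dj in [(0, 1), (1, 0)]:          # count each edge once
                ni, nj = i + di, j + dj
                if ni < H and nj < W:
                    e += potts(O[i, j], O[ni, nj], w_o)
                    e += sum(potts(A[i, j, m], A[ni, nj, m], w_a)
                             for m in range(M))
    return e

rng = np.random.default_rng(0)
H, W, K, M = 4, 4, 3, 2
O = rng.integers(0, K, (H, W))
A = rng.integers(0, 2, (H, W, M))
print(energy_oa(O, A, rng.random((H, W, K)), rng.random((H, W, M, 2))))
```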
3 Joint Model for Intrinsic Images, Objects and Attributes
Now we provide the details of our formulation for jointly estimating the intrinsic images (R, Z, L)
along with the object (O) and attribute (A) properties given an image C̄, in a probabilistic framework. We define the posterior probability and the corresponding joint energy function E as:

P(R, Z, L, O, A | I) = 1/Z(I) exp{-E(R, Z, L, O, A | I)}
E(R, Z, L, O, A | I) = E^{SIRFSG}(R, Z, L | O, A) + E^{RO}(R, O) + E^{RA}(R, A) + E^{OA}(O, A)
subject to  C̄ = R ∘ S(Z, L)    (3)
We define E^{SIRFSG} = E^R(R) + E^Z(Z) + E^L(L), a new global energy term. The terms
E^{RO}(R, O) and E^{RA}(R, A) capture correlations between the reflectance and the object and/or
attribute labels assigned to the pixels. These terms take the form of higher order potentials defined
on image segments, i.e. regions of pixels generated using the unsupervised segmentation approach of
Felzenszwalb and Huttenlocher [21]. Let S denote the set of these image segments. These
terms are described in detail below.
3.1 SIRFS model for a scene
Given this representation of the scene, we model the scene-specific E^{SIRFSG} by a mixture of
reflectance and depth terms embedded into the segmentation of the image, plus an illumination term:
E^{SIRFSG}(R, Z, L | O, A) = Σ_{c∈S} [ E^R(R_c) + E^Z(Z_c) ] + E^L(L)    (4)

where R = {R_c} and Z = {Z_c}. Here E^R(R_c) and E^Z(Z_c) are the reflectance and depth terms
respectively, defined over segments c ∈ S. In the current formulation we have assumed a single model
of illumination L for the whole scene, which corresponds to a 27-dimensional vector of spherical
harmonics [2].
3.2 Reflectance, Objects term
The joint reflectance-object energy term E^{RO}(R, O) captures the relations between the objects
present in the scene and their reflectance properties. Our higher order term takes the following form:
E^{RO}(R, O) = Σ_{c∈S} λ_o^c ψ(R_c) + Σ_{c∈S} λ_r^c φ(O_c)    (5)
where R_c and O_c are the labellings of the subset of pixels c. Here λ_o^c ψ(R_c) is an object-dependent,
quality-sensitive higher order cost defined over the reflectance variables, and λ_r^c φ(O_c) is a
reflectance-dependent, quality-sensitive higher order cost defined over the object variables. The term
ψ(R_c) penalizes the variance of the reflectance values within a clique and takes the form
ψ(R_c) = ‖c‖^{θ_α} (θ_p + θ_v G^r(c)), where

G^r(c) = exp( -β ‖ Σ_{i∈c} (R_i - μ_c)^2 ‖ / ‖c‖ ).    (6)

Here ‖c‖ is the size of the clique, μ_c = Σ_{i∈c} R_i / ‖c‖, and θ_α, θ_p, θ_v, β are constants. Further,
in order to measure the quality of the reflectance assignment to the segment, we weight the higher
order cost ψ(R_c) with an object-dependent λ_o^c that measures the quality of the segment. In our
case, λ_o^c takes the following form:

λ_o^c = 1 if O_i = l, ∀i ∈ c;  λ_o otherwise,    (7)

where λ_o < 1 is a constant. This term allows variables within a segment to take different reflectance
values if the pixels in that segment take different object labels. Currently the term λ_o^c gives rise to a
hard constraint on the penalty, but it can be extended to one that penalizes the cost softly, as in [29].
Similarly, we enforce higher order consistency over the object labelling in a clique c ∈ S. The term
φ(O_c) takes the form of the pattern-based P^N-Potts model [7]:

φ(O_c) = γ_l^o if O_i = l, ∀i ∈ c;  γ_max^o otherwise,    (8)
where γ_l^o and γ_max^o are constants. Further, we weight this term with a reflectance-dependent,
quality-sensitive term λ_r^c. In our experiments we measure this term based on the variance of the
reflectance values over all constituent pixels of a segment, i.e. G^r(c) (defined earlier). Thus λ_r^c
takes the following form:

λ_r^c = 1 if G^r(c) < K;  λ_r otherwise,    (9)

where K and λ_r < 1 are constants. Essentially, this quality measurement allows the pixels within
a segment to take different object labels if the variation of the reflectance values within the segment
is above a threshold. To summarize, these two higher order terms enforce the cost of inconsistency
between the object and reflectance labels.
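A small sketch of the segment-quality computations of Eqs. 6 and 7 (the constants are arbitrary placeholders, not learnt values):

```python
import numpy as np

def g_r(R_c, beta=2.0):
    """Reflectance homogeneity G^r(c) of Eq. 6 for one segment c:
    exp(-beta * || sum_i (R_i - mu_c)^2 || / |c|)."""
    mu = R_c.mean(axis=0)
    return np.exp(-beta * np.linalg.norm(((R_c - mu) ** 2).sum(axis=0)) / len(R_c))

def psi(R_c, th_alpha=0.5, th_p=0.1, th_v=1.0):
    """Higher order cost psi(R_c) = |c|^th_alpha * (th_p + th_v * G^r(c))."""
    return len(R_c) ** th_alpha * (th_p + th_v * g_r(R_c))

def lambda_o(O_c, lam_o=0.6):
    """Eq. 7: weight 1 if the segment is pure in its object label,
    lam_o < 1 otherwise (mixed segments pay less reflectance cost)."""
    return 1.0 if len(set(O_c)) == 1 else lam_o

R_c = np.random.rand(50, 3)    # reflectance of 50 pixels in one segment
O_c = [2] * 50                 # a segment pure in object label 2
print(lambda_o(O_c) * psi(R_c))
```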
3.3 Reflectance, Attributes term
Similarly, we define the term E^{RA}(R, A), which enforces a higher order consistency between the
reflectance and attribute variables. It takes the following form:

E^{RA}(R, A) = Σ_m Σ_{c∈S} λ_{a,m}^c ψ(R_c) + Σ_m Σ_{c∈S} λ_r^c φ(A_c^m)    (10)

where λ_{a,m}^c ψ(R_c) and λ_r^c φ(A_c^m) are the higher order terms defined over the reflectance image
and the attribute image corresponding to the m-th attribute respectively. The forms of these terms
are similar to those defined for the object-reflectance higher order terms; they are further explained
in the supplementary material.
4 Inference and Learning
Given the above model, our optimization problem involves solving the following joint energy function
to get the most likely solution for (R, Z, L, O, A):

E(R, Z, L, O, A | I) = E^{SIRFSG}(R, Z, L) + E^{RO}(R, O) + E^{RA}(R, A) + E^{OA}(O, A)    (11)
However, this problem is very challenging since it consists of both the continuous variables
(R, Z, L) and discrete variables (O, A). Thus in order to minimize the function efficiently without losing accuracy we follow an approximate dual decomposition strategy [28].
We first introduce a set of duplicate variables for the reflectance (R^1, R^2, R^3), objects (O^1, O^2)
and attributes (A^1, A^2), and a set of new equality constraints to enforce consistency between these
duplicate variables. Our optimization problem thus takes the following form:

minimize_{R^1,R^2,R^3,Z,L,O^1,O^2,A^1,A^2}  E(R^1, Z, L) + E(O^1, A^1) + E(R^2, O^2) + E(R^3, A^2)
subject to  R^1 = R^2 = R^3;  O^1 = O^2;  A^1 = A^2    (12)
From now on we drop the distinguishing sub- and superscripts from the energy terms for simplicity
of notation. We formulate an unconstrained problem by introducing a set of Lagrange multipliers
λ_r^1, λ_r^2, λ_o, λ_a, and decompose the dual problem into four sub-problems:

E(R^1, Z, L) + E(O^1, A^1) + E(R^2, O^2) + E(R^3, A^2) + λ_r^1 (R^1 - R^2) + λ_r^2 (R^2 - R^3) + λ_o (O^1 - O^2) + λ_a (A^1 - A^2)
  = g_1(R^1, Z, L) + g_2(O^1, A^1) + g_3(O^2, R^2) + g_4(A^2, R^3),    (13)
where

g_1(R^1, Z, L) = minimize_{R^1,Z,L}  E(R^1, Z, L) + λ_r^1 R^1
g_2(O^1, A^1) = minimize_{O^1,A^1}  E(O^1, A^1) + λ_o O^1 + λ_a A^1
g_3(O^2, R^2) = minimize_{O^2,R^2}  E(O^2, R^2) - λ_o O^2 - λ_r^1 R^2
g_4(A^2, R^3) = minimize_{A^2,R^3}  E(A^2, R^3) - λ_a A^2 - λ_r^2 R^3    (14)
are the slave problems, which are optimized separately and efficiently while treating the dual
variables λ_r^1, λ_r^2, λ_o, λ_a as constant; the master problem then optimizes these dual variables to
enforce consistency. Next, we solve each of the sub-problems and the master problem.
Solving subproblem g_1(R^1, Z, L): The sub-problem g_1(R^1, Z, L) involves only the continuous
variables (R^1, Z, L). We follow a multi-scale L-BFGS strategy [2] to optimize this part. Each step
of the L-BFGS approach requires evaluating the gradient of g_1(R^1, Z, L) w.r.t. R^1, Z, L.
Solving subproblem g_2(O^1, A^1): The second sub-problem g_2(O^1, A^1) involves only the discrete
variables (O^1, A^1). The dual-variable-dependent terms add λ_o O^1 to the object unary potential
ψ_i(O^1) and λ_a A^1 to the attribute unary potential ψ_i(A^1). Let ψ'(O^1) and ψ'(A^1) be the updated
object and attribute unary potentials. We follow a filter-based mean-field strategy [11, 12] for the
optimization. In the mean-field framework, given the true distribution P = exp(-g_2(O^1, A^1)) / Z̃,
we find an approximate distribution Q, where the approximation is measured in terms of the
KL-divergence between the P and Q distributions. Here Z̃ is the normalizing constant. Based on the
model in Sec. 2.2, Q factorizes as Q_i(O_i^1, A_i^1) = Q_i^O(O_i^1) Π_m Q_{i,m}^A(A_{i,m}^1), where
Q_i^O is a multi-class distribution over the object variable and Q_{i,m}^A is a binary distribution over
{0, 1}. With this, the
mean-field updates for the object variables take the following form:

Q_i^O(O_i^1 = l) = (1 / Z_i^O) exp{ -ψ'_i(O_i^1 = l) - Σ_{l'∈1..K} Σ_{j≠i} Q_j^O(O_j^1 = l') ψ_{ij}(O_i^1 = l, O_j^1 = l') }    (15)
where ψ_{ij} is a Potts term modulated by a contrast-sensitive pairwise cost defined by a mixture of
Gaussian kernels [12], and Z_i^O is a per-pixel normalization factor. Given this form of the pairwise
terms, as in [12], we can efficiently evaluate the pairwise summations in Eq. 15 using K Gaussian
convolutions. The updates for the attribute variables take a similar form (refer to the supplementary material).
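A minimal sketch of one round of the update in Eq. 15 on a small dense CRF. For readability the pairwise sums are computed exactly with an explicit kernel matrix rather than with the Gaussian-filtering trick of [12]; the kernel and unaries are random stand-ins:

```python
import numpy as np

def meanfield_step(Q, unary, Kmat, mu):
    """One parallel mean-field update (Eq. 15) for all pixels.
    Q:     (N, K) current marginals Q_i(l)
    unary: (N, K) updated unary costs psi'_i(l)
    Kmat:  (N, N) pairwise kernel k(f_i, f_j) with zero diagonal
    mu:    (K, K) label compatibility, e.g. Potts mu[l, l'] = [l != l']"""
    msg = Kmat @ Q                      # (N, K): sum_{j != i} k_ij Q_j(l')
    pairwise = msg @ mu.T               # sum_{l'} mu(l, l') msg_i(l')
    logits = -unary - pairwise
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)   # per-pixel normalization Z_i

rng = np.random.default_rng(1)
N, K = 6, 3
unary = rng.random((N, K))
Kmat = rng.random((N, N))
Kmat = (Kmat + Kmat.T) / 2.0
np.fill_diagonal(Kmat, 0.0)
mu = 1.0 - np.eye(K)                    # Potts compatibility
Q = np.full((N, K), 1.0 / K)            # uniform initialization
for _ in range(5):
    Q = meanfield_step(Q, unary, Kmat, mu)
print(Q.round(3))
```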
Solving subproblems g_3(O^2, R^2), g_4(A^2, R^3): These two problems take the following forms:

g_3(O^2, R^2) = minimize_{O^2,R^2}  Σ_{c∈S} λ_o^c ψ(R_c^2) + Σ_{c∈S} λ_r^c φ(O_c^2) - λ_o O^2 - λ_r^1 R^2    (16)
g_4(A^2, R^3) = minimize_{A^2,R^3}  Σ_m Σ_{c∈S} λ_{a,m}^c ψ(R_c^3) + Σ_m Σ_{c∈S} λ_r^c φ(A_c^{2,m}) - λ_a A^2 - λ_r^2 R^3
Solving these two sub-problems requires optimization over both the continuous (R^2, R^3) and
discrete (O^2, A^2) variables. However, since these two sub-problems contain higher order terms
(described in Eq. 8) and dual-variable-dependent terms, we follow a simple coordinate descent
strategy to update the reflectance and the object (and attribute) variables iteratively. The optimization
of the object (and attribute) variables is performed in a mean-field framework, and a gradient
descent based approach is used for the reflectance variables.
Solving the master problem: The master problem then updates the dual variables λ_r^1, λ_r^2, λ_o, λ_a
given the current solutions from the slaves. Here we provide the update equation for λ_r^1; the updates
for the other dual variables take a similar form. The master calculates the gradient of the problem
E(R, Z, L, O, A | I) w.r.t. λ_r^1, and then iteratively updates λ_r^1 as:

λ_r^1 ← λ_r^1 + α_t ( ∇_{λ_r^1} g_1(R^1, Z, L) + ∇_{λ_r^1} g_3(O^2, R^2) )    (17)

where α_t is the step size at iteration t, and ∇_{λ_r^1} g_1, ∇_{λ_r^1} g_3 are the gradients w.r.t. λ_r^1. It
should be noted that we do not guarantee the convergence of our approach, since the sub-problems
g_1(.) and g_2(.) are solved approximately. Further details on our inference techniques are provided
in the supplementary material.
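Schematically, the full dual-decomposition loop is as follows (a sketch with scalar toy slaves; in the real system the slaves are the L-BFGS, mean-field and coordinate-descent procedures above, and the gradient w.r.t. each multiplier is the disagreement between the variable copies it couples, as in Eq. 17):

```python
def dual_decomposition(solve_slaves, num_iters=50, step0=1.0):
    """Subgradient master loop over the multipliers of Eq. 13.
    solve_slaves(lams) must return the slaves' current copies of each
    duplicated variable, e.g. {'R1': ..., 'R2': ..., 'R3': ..., ...}."""
    lams = {'r1': 0.0, 'r2': 0.0, 'o': 0.0, 'a': 0.0}
    for t in range(1, num_iters + 1):
        sol = solve_slaves(lams)
        step = step0 / t                               # diminishing step alpha_t
        lams['r1'] += step * (sol['R1'] - sol['R2'])   # Eq. 17 analogue
        lams['r2'] += step * (sol['R2'] - sol['R3'])
        lams['o']  += step * (sol['O1'] - sol['O2'])
        lams['a']  += step * (sol['A1'] - sol['A2'])
    return lams

def toy_slaves(lams):
    """Toy slaves: each copy is pulled toward a private target and
    shifted by the multipliers, mimicking the couplings in Eq. 14."""
    return {'R1': 1.0 - lams['r1'], 'R2': 2.0 + lams['r1'] - lams['r2'],
            'R3': 3.0 + lams['r2'], 'O1': 0.0 - lams['o'],
            'O2': 1.0 + lams['o'], 'A1': 0.0 - lams['a'],
            'A2': 1.0 + lams['a']}

print(dual_decomposition(toy_slaves))
```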
Learning: The model described above has many parameters coupling the individual terms. We use
a cross-validation strategy to estimate these parameters in a sequential manner, which gives an
efficient way of finding a good set of parameters. The unary potentials for the objects and attributes
are learnt using a modified TextonBoost model of Ladicky et al. [9], which uses colour, histogram of
oriented gradients (HOG), and location features.
5 Experiments
We demonstrate our joint estimation approach on both the per-pixel object and attribute labelling
tasks, and estimation of the intrinsic properties of the images. For the object and attribute labelling
tasks, we conduct experiments on the NYU 2 [25] and Pascal [26] datasets both quantitatively and
qualitatively. To this end, we annotate the NYU 2 and the Pascal datasets with per-pixel attribute
labels. As a baseline, we compare our joint estimation approach against the mean-field based method
[12] and the graph-cuts based α-expansion method [9]. We assess accuracy in terms of the
overall percentage of pixels correctly labelled, and the intersection/union score per class (defined
in terms of the true/false positives/negatives for a given class as TP/(TP+FP+FN)). Additionally,
we evaluate our approach for estimating better intrinsic properties of the images, though only
qualitatively, since it is extremely difficult to generate ground truth for the intrinsic properties (e.g.
reflectance, depth and illumination) of a general image. We compare our intrinsic-property results
against the model of Barron and Malik [2, 4] (see footnote 2), Gehler et al. [5] and the Retinex model [17].
Further, we also show visually how our approach is able to recover smoother, de-noised
depth maps compared to the raw depth provided by the Kinect [25]. In all these cases, we use the
code provided by the authors for AHCRF [9] and the mean-field approach [11, 12]. Details of all the
experiments are provided below.
5.1 aNYU 2 dataset
We first conduct experiments on the aNYU 2 RGBD dataset, an extended version of the indoor NYU
2 dataset [25]. The dataset consists of 725 training images, 100 validation images and 624 test images.
Further, the dataset consists of per-pixel object and attribute labels (see Fig. 1 and 3 for per-pixel
attribute labels). We select 15 object and 8 attribute classes that have sufficient number of instances
to train the unary classifier responses. The object labels corresponds to some indoor object classes
as floor, wall, .. and attribute labels corresponds to material properties of the objects as wooden,
painted, .... Further, since this dataset has depth from the Kinect depths, we use them to initialize
the depth maps Z for both our joint estimation approach and the Barron and Malik models [2-4].
We show quantitative and qualitative results in Tab. 1 and Fig. 3 respectively. As shown, our joint
approach achieves an improvement of almost 2.3% and 1.2% in the overall accuracy and average
intersection/union (I/U) score over the AHCRF model [9], and almost 1.5% improvement in the
Footnote 2: We extended the SIRFS [2] model to our Scene-SIRFS using a mixture of reflectance and depth maps and a single illumination model. These mixtures of reflectance and depth maps were embedded in a soft segmentation of the scene generated using the approach of Felzenszwalb et al. [21]. We call this model Barron and Malik [2,4].
Algorithm        | Av. I/U | Overall (% corr)
AHCRF [9]        | 28.88   | 51.06
DenseCRF [12]    | 29.66   | 50.70
Ours (OA+Intr)   | 30.14   | 52.23
(a) Object Accuracy

Algorithm        | Av. I/U | Overall (% corr)
AHCRF [9]        | 21.9    | 40.7
DenseCRF [12]    | 22.02   | 37.6
Ours (OA+Intr)   | 24.175  | 39.25
(b) Attribute Accuracy
Table 1: Quantitative results on aNYU 2 dataset for both the object segmentation (a), and attributes
segmentation (b) tasks. The table compares performance of our approach (last line) against three
baselines. The importance of our joint estimation for intrinsic images, objects and attributes is
confirmed by the better performance of our algorithm compared to the graph-cuts based (AHCRF)
method [9] and the mean-field based approach [12] for both tasks. Here intersection over union (I/U)
is defined as TP / (TP + FN + FP), and '% corr' as the total proportion of correctly labelled pixels.
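For reference, a straightforward implementation of these two scores (a sketch):

```python
import numpy as np

def iou_scores(pred, gt, num_classes):
    """Per-class intersection over union TP / (TP + FN + FP), plus the
    overall fraction of correctly labelled pixels ('% corr')."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fn + fp
        ious.append(tp / denom if denom > 0 else float('nan'))
    return ious, float(np.mean(pred == gt))

pred = np.array([0, 1, 1, 2, 2, 2])
gt   = np.array([0, 1, 2, 2, 2, 0])
print(iou_scores(pred, gt, num_classes=3))
```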
Figure 2: Given an image and its depth image from the aNYU dataset, these figures qualitatively
compare our algorithm's jointly estimated intrinsic properties (reflectance, shading, normals and
depth maps) against the model of Barron and Malik [2,4], the Retinex model [17] (second-to-last
column) and the approach of Gehler et al. [5] (last column).
average I/U over the model of [12] for the object class segmentation. Similarly, we observe an
improvement of almost 2.2% and 0.5% in the overall accuracy and I/U score over AHCRF [9],
and almost 2.1% and 1.6% in the overall accuracy and average I/U over the model of [12] for the
per-pixel attribute labelling task. These quantitative improvements suggest that our model is able to
improve the object and attribute labelling using the intrinsic-property information. Qualitatively,
we also observe an improvement in the output of both the object and attribute segmentation tasks, as
shown in Fig. 3.
Further, we show the qualitative improvement in the results of the intrinsic properties in Fig. 2.
As shown, our joint approach helps recover a better depth map than the noisy Kinect depth
maps, justifying the unification of reconstruction with object- and attribute-based recognition.
Further, our reflectance and shading images look visually much better than those of the Retinex
[17] and Gehler et al. [5] models, and similar to those of the Barron and Malik approach [2,4].
5.2 aPascal dataset
We also show experiments on the aPascal dataset, our extended Pascal dataset with per-pixel attribute
labels. We select a subset of 517 images with per-pixel object labels from the Pascal dataset and
annotate it with 7 material attribute labels at the pixel level. These attributes correspond to wooden,
skin, metallic, glass, shiny, etc. Further, for the Pascal dataset we do not have any initial depth
estimate; thus we start with a depth map in which every point in space is given the same constant
depth value.
Some quantitative and qualitative results are shown in Tab. 2 and Fig. 3 respectively. As shown, our
approach achieves an improvement of almost 2.0% and 0.5% in the I/U score for the object and
Algorithm        | Av. I/U | Overall (% corr)
AHCRF [9]        | 32.53   | 82.30
DenseCRF [12]    | 36.9    | 79.4
Ours (OA+Intr)   | 38.1    | 81.4
(a) Object Accuracy

Algorithm        | Av. I/U | Overall (% corr)
AHCRF [9]        | 17.4    | 95.1
DenseCRF [12]    | 18.28   | 96.2
Ours (OA+Intr)   | 18.85   | 96.7
(b) Attribute Accuracy
Table 2: Quantitative results on aPascal dataset for both the object segmentation (a), and attributes
segmentation (b) tasks. The table compares performance of our approach (last line) against three
baselines. The importance of our joint estimation for intrinsic images, objects and attributes is
confirmed by the better performance of our algorithm compared to the graph-cuts based (AHCRF)
method [9] and the mean-field based approach [12] for both tasks. Here intersection over union (I/U)
is defined as TP / (TP + FN + FP), and '% corr' as the total proportion of correctly labelled pixels.
attribute labelling tasks respectively over the model of [12]. We also observe qualitative improvements
in accuracy, as shown in Fig. 3.
Figure 3: Qualitative results on aNYU (first 2 lines) and aPascal (last line) dataset. From left to
right: input image, reflectance, depth images, ground truth, output from [9] (AHCRF), output from
[12], our output for the object segmentation. Last column shows our attribute segmentation output.
(Attributes for NYU dataset: wood, painted, cotton, glass, brick, plastic, shiny, dirty; Attributes for
Pascal dataset: skin, metal, plastic, wood, cloth, glass, shiny.)
6 Discussion and Conclusion
In this work, we have explored the synergy effects between the intrinsic properties of an image and
the objects and attributes present in the scene. We cast the problem in a joint energy minimization
framework; thus our model is able to encode the strong correlations between intrinsic properties
(reflectance, shape, illumination), objects (table, tv-monitor) and materials (wooden, plastic) in a
given scene. We have shown that dual-decomposition based techniques can be effectively applied to
given scene. We have shown that dual-decomposition based techniques can be effectively applied to
perform optimization in the joint model. We demonstrated its applicability on the extended versions
of the NYU and Pascal datasets. We achieve both the qualitative and quantitative improvements for
the object and attribute labeling, and qualitative improvement for the intrinsic images estimation.
Future directions include further exploration of integrating priors based on structural attributes
such as slanted or cylindrical into the joint intrinsic properties, objects and attributes
model. For instance, knowledge that the object is slanted would provide a prior for the depth and
distribution of the surface normals. Further, the possibility of incorporating a mixture of illumination
models to better model the illumination in a natural scene remains another future direction.
Acknowledgements. This work was supported by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. P.H.S. Torr is in receipt of
a Royal Society Wolfson Research Merit Award.
References
[1] Barron, J.T. & Malik, J. (2012) Shape, albedo, and illumination from a single image of an unknown object.
In IEEE CVPR, pp. 334-341. Providence, USA.
[2] Barron, J.T. & Malik, J. (2012) Color constancy, intrinsic images, and shape estimation. In ECCV, pp.
57-70. Florence, Italy.
8
[3] Barron, J.T. & Malik, J. (2012) High-frequency shape and albedo from shading using natural image statistics. In IEEE CVPR, pp. 2521-2528. CO, USA.
[4] Barron, J., & Malik, J. (2013) Intrinsic scene properties from a single RGB-D image. In IEEE CVPR.
[5] Gehler, P.V., Rother, C., Kiefel, M., Zhang, L. & Bernhard, S. (2011) Recovering intrinsic images with a
global sparsity prior on reflectance. In NIPS, pp. 765-773. Granada, Spain.
[6] Farhadi, A., Endres, I., Hoiem, D. & Forsyth D.A., (2009) Describing objects by their attributes. In IEEE
CVPR, pp. 1778-1785. Miami, USA.
[7] Kohli, P., Kumar, M.P., & Torr, P.H.S. (2009) P3 & beyond: move making algorithms for solving higher
order functions. In IEEE PAMI, pp. 1645-1656.
[8] Ladicky, L., Sturgess, P., Russell C., Sengupta, S., Bastnlar, Y., Clocksin, W.F., & Torr P.H.S. (2012) Joint
optimization for object class segmentation and dense stereo reconstruction. In IJCV, pp. 739-746.
[9] Ladicky, L., Russell C., Kohli P. & Torr P.H.S., (2009) Associative hierarchical CRFs for object class image
segmentation. In IEEE ICCV, pp. 739-746. Kyoto, Japan.
[10] Sloan, P.P., Kautz, J., & Snyder, J., (2002) Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In SIGGRAPH, pp. 527-536.
[11] Vineet, V., Warrell, J., & Torr, P.H.S. (2012) Filter-based mean-field inference for random fields with
higher-order terms and product label-spaces. In ECCV, pp. 31-44. Florence, Italy.
[12] Krähenbühl, P. & Koltun, V. (2011) Efficient inference in fully connected CRFs with Gaussian edge potentials. In NIPS, pp. 109-117. Granada, Spain.
[13] Barrow, H.G. & Tenenbaum, J.M. (1978) Recovering intrinsic scene characteristics from images. In A.
Hanson and E. Riseman, editors, Computer Vision Systems, pp. 3-26. Academic Press, 1978.
[14] Weijer, J.V.d., Schmid, C. & Verbeek, J. (2007) Using high-level visual information for color constancy.
In IEEE, ICCV, pp. 1-8.
[15] Liu, C., Sharan, L., Adelson, E.H., & Rosenholtz, R. (2010) Exploring features in a bayesian framework
for material recognition. In IEEE, CVPR, pp. 239-246.
[16] Horn, B.K.P. (1970) Shape from shading: a method for obtaining the shape of a smooth opaque object
from one view. Technical Report, MIT.
[17] Land, E.H., & McCann, J.J. (1971) Lightness and retinex theory. In JOSA.
[18] Osadchy, M., Jacobs, D.W. & Ramamoorthi, R. (2003) Using specularities for recognition. In IEEE ICCV.
[19] Adelson, E.H. (2000) Lightness perception and lightness illusions. The New Cognitive Neuroscience, 2nd
Ed. MIT Press, pp. 339-351.
[20] Adelson, E.H. (2001) On seeing stuff: the perception of materials by humans and machines. SPIE, vol.
4299, pp. 1-12.
[21] Felzenswalb, P.F., & Huttenlocker, D.P. (2004) Efficient graph-based image segmentation. In IJCV.
[22] Shotton, J., Winn, J., Rother, C., & Criminisi, A. (2003) TextonBoost for Image Understanding: MultiClass Object Recognition and Segmentation by Jointly Modeling Texture, Layout, and Context. In IEEE IJCV.
[23] Tighe, J. & Lazebnik, S. (2011) Understanding scenes on many levels. In IEEE ICCV pp. 335-342.
[24] LeCun, Y., Huang, F.J., & Bottou, L. (2004) Learning methods for generic object recognition with invariance to pose and lighting. In IEEE CVPR pp. 97-104.
[25] Silberman, N., Hoiem, D., Kohli, P., & Fergus, R. (2012) Indoor segmentation and support inference from
RGBD images. In ECCV, pp. 746-760.
[26] Everingham, M., Gool, L.J.V., Williams, C.K.I., Winn, J.M. & Zisserman, A. (2010) The pascal visual
object classes (VOC) challenge. In IEEE IJCV pp. 303-338.
[27] Cheng, M. M., Zheng, S., Lin, W.Y., Warrell, J., Vineet, V., Sturgess, P., Mitra, N., Crook, N., & Torr,
P.H.S. (2013) ImageSpirit: Verbal Guided Image Parsing. Oxford Brookes Technical Report.
[28] Tran-Dinh, Q., Necoara, I., & Diehl, M. (2013) Fast Inexact Decomposition Algorithms for Large-Scale
Separable Convex Optimization. In JOTA.
[29] Kohli, P., Ladicky, L., & Torr, P.H.S. (2008) Robust higher order potentials for enforcing label consistency. In IEEE CVPR, 2008.
4,639 | 5,199 | Decision Jungles:
Compact and Rich Models for Classification
Jamie Shotton
Sebastian Nowozin
Toby Sharp
John Winn
Microsoft Research
Pushmeet Kohli
Antonio Criminisi
Abstract
Randomized decision trees and forests have a rich history in machine learning and
have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data,
the number of nodes in decision trees will grow exponentially with depth. For
certain applications, for example on mobile or embedded processors, memory is
a limited resource, and so the exponential growth of trees limits their depth, and
thus their potential accuracy. This paper proposes decision jungles, revisiting the
idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows
these to be compact and powerful discriminative models for classification. Unlike
conventional decision trees that only allow one path to every node, a DAG in a
decision jungle allows multiple paths from the root to each leaf. We present and
compare two new node merging algorithms that jointly optimize both the features
and the structure of the DAGs efficiently. During training, node splitting and node
merging are driven by the minimization of exactly the same objective function,
here the weighted sum of entropies at the leaves. Results on varied datasets show
that, compared to decision forests and several other baselines, decision jungles
require dramatically less memory while considerably improving generalization.
1 Introduction
Decision trees have a long history in machine learning and were one of the first models proposed
for inductive learning [14]. Their use for classification and regression was popularized by the work
of Breiman [6]. More recently, they have become popular in fields such as computer vision and
information retrieval, partly due to their ability to handle large amounts of data and make efficient
predictions. This has led to successes in tasks such as human pose estimation in depth images [29].
Although trees allow making predictions efficiently, learning the optimal decision tree is an NP-hard
problem [15]. In his seminal work, Quinlan proposed efficient approximate methods for learning
decision trees [27, 28]. Some researchers have argued that learning optimal decision trees could
be harmful as it may lead to overfitting [21]. Overfitting may be reduced by controlling the model
complexity, e.g. via various stopping criteria such as limiting the tree depth, and post-hoc pruning.
These techniques for controlling model complexity impose implicit limits on the type of classification boundaries and feature partitions that can be induced by the decision tree. A number of
approaches have been proposed in the literature to regularize tree models without limiting their
modelling power. The work in [7] introduced a non-greedy Bayesian sampling-based approach for
constructing decision trees. A prior over the space of trees and their parameters induces a posterior
distribution, which can be used, for example, to marginalize over all tree models. There are similarities between the idea of randomly drawing multiple trees via a Bayesian procedure and construction
of random tree ensembles (forests) using bagging, a method shown to be effective in many applications [1, 5, 9]. Another approach to improve generalization is via large-margin tree classifiers [4].
While the above-mentioned methods can reduce overfitting, decision trees face a fundamental limitation: their exponential growth with depth. For large datasets where deep trees have been shown to
be more accurate than large ensembles (e.g. [29]), this exponential growth poses a problem for implementing tree models on memory-constrained hardware such as embedded or mobile processors.
In this paper, we investigate the use of randomized ensembles of rooted decision directed acyclic
graphs (DAGs) as a means to obtain compact and yet accurate classifiers. We call these ensembles
'decision jungles', after the popular 'decision forests'. We formulate the task of learning each DAG
in a jungle as an energy minimization problem. Building on the information gain measure commonly
used for training decision trees, we propose an objective that is defined jointly over the features of the
split nodes and the structure of the DAG. We then propose two minimization methods for learning
the optimal DAG. Both methods alternate between optimizing the split functions at the nodes of the
DAG and optimizing the placement of the branches emanating from the parent nodes. As detailed
later, they differ in the way they optimize the placement of branches.
We evaluate jungles on a number of challenging labelling problems. Our experiments below quantify
a substantially reduced memory footprint for decision jungles compared to standard decision forests
and several baselines. Furthermore, the experiments also show an important side-benefit of jungles:
our optimization strategy is able to achieve considerably improved generalization for only a small
extra cost in the number of features evaluated per test example.
Background and Prior Work. The use of rooted decision DAGs ('DAGs' for short) has been
explored by a number of papers in the literature. In [16, 26], DAGs were used to combine the
outputs of C × C binary 1-v-1 SVM classifiers into a single C-class classifier. More recently, in [3],
DAGs were shown to be a generalization of cascaded boosting.
DAGs were shown to be a generalization of cascaded boosting.
It has also been shown that DAGs lead to accurate predictions while having lower model complexity, less subtree replication, and less training-data fragmentation than decision trees. Most existing
algorithms for learning DAGs involve training a conventional tree that is later manipulated into a
DAG. For instance [17] merges same-level nodes which are associated with the same split function.
They report performance similar to that of C4.5-trained trees, but with a much reduced number of
nodes. Oliveira [23] used a local search method for constructing DAGs in which tree nodes are removed or merged together based on the similarity of the underlying sub-graphs and the corresponding
message-length reduction. A message-length criterion is also employed by the node merging algorithm in [24]. Chou [8] investigated k-means clustering for learning decision trees and DAGs
(similar to 'ClusterSearch' below), though did not jointly optimize the features with the DAG structure. Most existing work on DAGs has focused on showing how the size and complexity of the
learned tree model can be reduced without substantially degrading its accuracy. However, their use
for increasing test accuracy has attracted comparatively little attention [10, 20, 23].
In this paper we show how jungles, ensembles of DAGs, optimized so as to reduce a well defined
objective function, can produce results which are superior to those of analogous decision tree ensembles, both in terms of model compactness as well as generalization. Our work is related to [25],
where the authors achieve compact classification DAGs via post-training removal of redundant subtrees in forests. In contrast, our probabilistic node merging is applied directly and efficiently during
training, and both saves space as well as achieves greater generalization for multi-class classification.
Contributions. In summary, our contributions are: (i) we highlight that traditional decision trees
grow exponentially in memory with depth, and propose decision jungles as a means to avoid this;
(ii) we propose and compare two learning algorithms that, within each level, jointly optimize an
objective function over both the structure of the graph and the features; (iii) we show that not only
do the jungles dramatically reduce memory consumption, but can also improve generalization.
2 Forests and Jungles
Before delving into the details of our method for learning decision jungles, we first briefly discuss
how decision trees and forests are used for classification problems and how they relate to jungles.
Binary decision trees. A binary decision tree is composed of a set of nodes each with an in-degree
of 1, except the root node. The out-degree for every internal (split) node of the tree is 2 and for the
leaf nodes is 0. Each split node contains a binary split function ('feature') which decides whether an
Figure 1: Motivation and notation. (a) An example use of a rooted decision DAG for classifying
image patches as belonging to grass, cow or sheep classes. Using DAGs instead of trees reduces the
number of nodes and can result in better generalization. For example, differently coloured patches
of grass (yellow and green) are merged together into node 4, because of similar class statistics. This
may encourage generalization by representing the fact that grass may appear as a mix of yellow and
green. (b) Notation for a DAG, its nodes, features and branches. See text for details.
input instance that reaches that node should progress through the left or right branch emanating from
the node. Prediction in binary decision trees involves every input starting at the root and moving
down as dictated by the split functions encountered at the split nodes. Prediction concludes when
the instance reaches a leaf node, each of which contains a unique prediction. For classification trees,
this prediction is a normalized histogram over class labels.
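To make prediction with a DAG concrete, here is a minimal sketch (the node layout and feature test are illustrative, not the paper's implementation). The only structural difference from a tree is that several split nodes may route to the same child:

```python
def predict(dag, x):
    """Route instance x from the root to a leaf of a rooted decision DAG.
    dag: dict node_id -> node; split nodes hold 'feature', 'threshold',
    'left', 'right'; leaves hold a class 'hist'. Multiple split nodes may
    share the same 'left'/'right' child id, which is what makes it a DAG."""
    node = dag[0]                          # node 0 is the root
    while 'hist' not in node:
        branch = 'left' if x[node['feature']] <= node['threshold'] else 'right'
        node = dag[node[branch]]
    return node['hist']                    # empirical class distribution

# a tiny 2-level DAG: split nodes 1 and 2 share child 4
dag = {
    0: {'feature': 0, 'threshold': 0.5, 'left': 1, 'right': 2},
    1: {'feature': 1, 'threshold': 0.2, 'left': 3, 'right': 4},
    2: {'feature': 1, 'threshold': 0.8, 'left': 4, 'right': 5},
    3: {'hist': [0.9, 0.1]}, 4: {'hist': [0.5, 0.5]}, 5: {'hist': [0.1, 0.9]},
}
print(predict(dag, [0.3, 0.7]))            # -> [0.5, 0.5] via nodes 0 -> 1 -> 4
```

The shared child (node 4 here) plays the same role as the merged node of Fig. 1(a).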
Rooted binary decision DAGs. Rooted binary DAGs have a different architecture compared to
decision trees and were introduced by Platt et al. [26] as a way of combining binary classifier for
multi-class classification tasks. More specifically a rooted binary DAG has: (i) one root node, with
in-degree 0; (ii) multiple split nodes, with in-degree ? 1 and out-degree 2; (iii) multiple leaf nodes,
with in-degree ? 1 and out-degree 0. Note that in contrast to [26], if we have a C-class classification
problem, here we do not necessarily expect to have C DAG leaves. In fact, the leaf nodes are not
necessarily pure, and each leaf remains associated with an empirical class distribution.
Classification DAGs vs classification trees. We explain the relationship between decision trees and
decision DAGs using the image classification task illustrated in Fig. 1(a) as an example. We wish
to classify image patches into the classes: cow, sheep or grass. A labelled set of patches is used to
train a DAG. Since patches corresponding to different classes may have different average intensity,
the root node may decide to split them according to this feature. Similarly, the two child nodes may
decide to split the patches further based on their chromaticity. This results in grass patches with
different intensity and chromaticity (bright yellow and dark green) ending up in different subtrees.
However, if we detect that two such nodes are associated with similar class distributions (peaked
around grass in this case) and merge them, then we get a single node with training examples from
both grass types. This helps capture the degree of variability intrinsic to the training data, and reduce
the classifier complexity. While this is clearly a toy example, we hope it gives some intuition as to
why rooted DAGs are expected to achieve the improved generalization demonstrated in Section 4.
3 Learning Decision Jungles
We train each rooted decision DAG in a jungle independently, though there is scope for merging
across DAGs as future work. Our method for training DAGs works by growing the DAG one level
at a time.1 At each level, the algorithm jointly learns the features and branching structure of the
nodes. This is done by minimizing an objective function defined over the predictions made by the
child nodes emanating from the nodes whose split features are being learned.
Consider the set of nodes at two consecutive levels of the decision DAG (as shown in Fig. 1b). This
set consists of a set of parent nodes N_p and a set of child nodes N_c. We assume in this work a known
value for M = |N_c|; M is a parameter of our method and may vary per level. Let θ_i denote the
parameters of the split feature function f for parent node i ∈ N_p, and S_i denote the set of labelled
training instances (x, y) that reach node i. Given θ_i and S_i, we can compute the set of instances
from node i that travel through its left and right branches as S_i^L(θ_i) = {(x, y) ∈ S_i | f(θ_i, x) ≤ 0}
and S_i^R(θ_i) = S_i \ S_i^L(θ_i), respectively. We use l_i ∈ N_c to denote the current assignment of the
left outward edge from parent node i ∈ N_p to a child node, and similarly r_i ∈ N_c for the right
outward edge. Then, the set of instances that reach any child node j ∈ N_c is:
(Footnote 1: Jointly training all levels of the tree simultaneously remains an expensive operation [15].)
S_j({θ_i}, {l_i}, {r_i}) = ( ⋃_{i∈N_p : l_i=j} S_i^L(θ_i) ) ∪ ( ⋃_{i∈N_p : r_i=j} S_i^R(θ_i) ).    (1)
The objective function E associated with the current level of the DAG is a function of {S_j}_{j∈N_c}.
We can now formulate the problem of learning the parameters of the decision DAG as a joint
minimization of the objective over the split parameters {θ_i} and the child assignments {l_i}, {r_i}.
Thus, the task of learning the current level of a DAG can be written as:

min_{{θ_i},{l_i},{r_i}}  E({θ_i}, {l_i}, {r_i}).    (2)
Maximizing the Information Gain. Although our method can be used for optimizing any objective
E that decomposes over nodes, including in theory a regression-based objective, for the sake of
simplicity we focus in this work on the information gain objective commonly used for classification
problems. The information gain objective requires the minimization of the total weighted entropy
of instances, defined as:
E({θ_i}, {l_i}, {r_i}) = Σ_{j∈N_c} |S_j| H(S_j)    (3)

where S_j is defined in (1), and H(S) is the Shannon entropy of the class labels y in the training
instances (x, y) ∈ S.
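Concretely, the objective of Eqs. 1 and 3 can be evaluated as in the following sketch (data structures simplified; split(theta, x) returning True routes an instance to the left branch):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) of the class labels in S."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def objective(S_parent, thetas, left, right, split, num_children):
    """E of Eq. 3: route each parent's instances through its split function,
    pool them at the assigned children (Eq. 1), and sum |S_j| H(S_j).
    S_parent[i] is the list of (x, y) reaching parent i."""
    S_child = [[] for _ in range(num_children)]
    for i, S_i in enumerate(S_parent):
        for x, y in S_i:
            j = left[i] if split(thetas[i], x) else right[i]
            S_child[j].append(y)
        # children may receive instances from several parents: a DAG, not a tree
    return sum(len(S) * entropy(S) for S in S_child if S)

S_parent = [[((0.1,), 0), ((0.9,), 1)], [((0.2,), 0), ((0.8,), 1)]]
split = lambda th, x: x[0] <= th
print(objective(S_parent, [0.5, 0.5], left=[0, 0], right=[1, 1],
                split=split, num_children=2))   # -> 0 (pure children)
```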
Note that if the number of child nodes M is equal to twice the number of parent nodes, i.e.
M = 2|N_p|, then the DAG becomes a tree and we can optimize the parameters of the different nodes
independently, as done in standard decision tree training, to achieve optimal results.
3.1 Optimization
The minimization problem described in (2) is hard to solve exactly. We propose two local search
based algorithms for its solution: LSearch and ClusterSearch. As local optimizations, neither are
likely to reach a global minimum, but in practice both are effective at minimizing the objective. The
experiments below show that the simpler LSearch appears to be more effective.
LSearch. The LSearch method starts from a feasible assignment of the parameters, and then alternates between two coordinate descent steps. In the first (split-optimization) step, it sequentially goes
over every parent node k in turn and tries to find the split function parameters θ_k that minimize the
objective function, keeping the values of {l_i}, {r_i} and the split parameters of all other nodes fixed:

for k ∈ N_p:  θ_k ← argmin_{θ'_k} E({θ'_k} ∪ {θ_i}_{i∈N_p\{k}}, {l_i}, {r_i})

This minimization over θ'_k is done by random sampling in a manner similar to decision forest training [9]. In the second (branch-optimization) step, the algorithm redirects the branches emanating
from each parent node to different child nodes, so as to yield a lower objective:
for k ∈ N_p:
  l_k ← argmin_{l'_k ∈ N_c} E({θ_i}, {l'_k} ∪ {l_i}_{i∈N_p\{k}}, {r_i})
  r_k ← argmin_{r'_k ∈ N_c} E({θ_i}, {l_i}, {r'_k} ∪ {r_i}_{i∈N_p\{k}})
The algorithm terminates when no changes are made, and is guaranteed to converge. We found that
a greedy initialization of LSearch (allocating splits to the most energetic parent nodes first) resulted
in a lower objective after optimization than a random initialization. We also found that a stochastic
version of the above algorithm where only a single randomly chosen node was optimized at a time
resulted in similar reductions in the objective for considerably less compute.
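Putting the two steps together, the LSearch loop can be sketched as follows (a simplification, not the paper's implementation; E(thetas, left, right) is assumed to evaluate Eq. 3, e.g. via the sketch above, and sample_theta() draws random split parameters as in forest training [9]):

```python
import random

def lsearch(num_parents, num_children, E, sample_theta, num_samples=20):
    """Alternate the split-optimization and branch-optimization steps
    until the objective E stops improving."""
    thetas = [sample_theta() for _ in range(num_parents)]
    left = [random.randrange(num_children) for _ in range(num_parents)]
    right = [random.randrange(num_children) for _ in range(num_parents)]
    best = E(thetas, left, right)
    improved = True
    while improved:
        improved = False
        for k in range(num_parents):            # split-optimization step
            for _ in range(num_samples):
                cand = list(thetas)
                cand[k] = sample_theta()
                e = E(cand, left, right)
                if e < best:
                    thetas, best, improved = cand, e, True
        for k in range(num_parents):            # branch-optimization step
            for edges in (left, right):
                choice = edges[k]
                for j in range(num_children):
                    edges[k] = j
                    e = E(thetas, left, right)
                    if e < best:
                        best, choice, improved = e, j, True
                edges[k] = choice                # keep the best redirection
    return thetas, left, right
```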
ClusterSearch. The ClusterSearch algorithm also alternates between optimizing the branching variables and the split parameters, but differs in that it optimizes the branching variables more globally.
First, 2|N_p| temporary child nodes are built via conventional tree-based, training-objective minimization procedures. Second, the temporary nodes are clustered into M = |N_c| groups to produce a
DAG. Node clustering is done via the Bregman information objective optimization technique in [2].
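The clustering step can be illustrated with a simple greedy agglomeration over the temporary children's class histograms; this is only a stand-in for the Bregman-information clustering of [2], merging at each step the pair that least increases the weighted-entropy objective of Eq. 3:

```python
import numpy as np

def weighted_entropy(hist, n):
    """|S| * H(S) computed from a class histogram."""
    p = hist / hist.sum()
    p = p[p > 0]
    return -n * float(np.sum(p * np.log(p)))

def cluster_children(hists, counts, M):
    """Greedily merge temporary children down to M groups, at each step
    merging the pair whose union least increases sum_j |S_j| H(S_j)."""
    groups = [(np.asarray(h, float), float(n)) for h, n in zip(hists, counts)]
    while len(groups) > M:
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                (ha, na), (hb, nb) = groups[a], groups[b]
                delta = (weighted_entropy(ha + hb, na + nb)
                         - weighted_entropy(ha, na) - weighted_entropy(hb, nb))
                if best is None or delta < best[0]:
                    best = (delta, a, b)
        _, a, b = best
        ha, na = groups[a]
        hb, nb = groups[b]
        groups[a] = (ha + hb, na + nb)          # merge b into a
        groups.pop(b)
    return groups

hists = np.array([[9, 1], [8, 2], [1, 9], [2, 8]], float)
print(len(cluster_children(hists, hists.sum(axis=1), M=2)))   # -> 2
```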
4 Experiments and results
This section compares testing accuracy and computational performance of our decision jungles with
state-of-the-art forests of binary decision trees and their variants on several classification problems.
4.1 Classification Tasks and Datasets
We focus on semantic image segmentation (pixel-wise classification) tasks, where decision forests
have proven very successful [9, 19, 29]. We evaluate our jungle model on the following datasets:
(A) Kinect body part classification [29] (31 classes). We train each tree or DAG in the ensemble on
a separate 1000 training images with 250 example pixels randomly sampled per image. Following
[29], 3 trees or DAGs are used unless otherwise specified. We test on (a common set of) 1000
images drawn randomly from the MSRC-5000 test set [29]. We use a DAG merging schedule of
|N_c^D| = min(M, 2^{min(5,D)} · 1.2^{max(0,D-5)}), where M is a fixed constant maximum width and D is
the current level (depth) in the tree.
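For concreteness, the schedule can be computed as below (rounding down to an integer is our assumption; the paper does not specify it):

```python
def width(D, M):
    """Level width |N_c^D| = min(M, 2^min(5, D) * 1.2^max(0, D - 5))."""
    return min(M, int(2 ** min(5, D) * 1.2 ** max(0, D - 5)))

print([width(D, M=256) for D in range(12)])
# -> [1, 2, 4, 8, 16, 32, 38, 46, 55, 66, 79, 95]
```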
(B) Facial features segmentation [18] (8 classes including background). We train each of 3 trees or
DAGs in the ensemble on a separate 1000 training images using every pixel. We use a DAG merging
schedule of |N_c^D| = min(M, 2^D).
(C) Stanford background dataset [12] (8 classes). We train on all 715 labelled images, seeding
our feature generator differently for each of 3 trees or DAGs in the ensemble. Again, we use a DAG
merging schedule of |N_c^D| = min(M, 2^D).
(D) UCI data sets [22]. We use 28 classification data sets from the UCI corpus as prepared on the
libsvm data set repository.2 For each data set all instances from the training, validation, and test set,
if available, are combined to a large set of instances. We repeat the following procedure five times:
randomly permute the instances, and divide them 50/50 into training and testing set. Train on the
training set, evaluate the multiclass accuracy on the test set. We use 8 trees or DAGs per ensemble.
Further details regarding parameter choices can be found in the supplementary material.
For all segmentation tasks we use the Jaccard index (intersection over union) as adopted in PASCAL
VOC [11]. Note that this measure is stricter than e.g. the per class average metric reported in [29].
On the UCI dataset we report the standard classification accuracy numbers. In order to keep training
time low, the training sets are somewhat reduced compared to the original sources, especially for
(A). However, identical trends were observed in limited experiments with more training data.
4.2 Baseline Algorithms
We compare our decision jungles with several tree-based alternatives, listed below.
Standard Forests of Trees. We have implemented standard classification forests, as described in [9]
and building upon their publicly available implementation.
Baseline 1: Fixed-Width Trees (A). As a first variant on forests, we train binary decision trees
with an enforced maximum width M at each level, and thus a reduced memory footprint. This is
useful to tease out whether the improved generalization of jungles is due more to the reduced model
complexity or to the node merging. Training a tree with fixed width is achieved by ranking the leaf
nodes i at each level by decreasing value of E(Si ) and then greedily splitting only the M/2 nodes
with highest value of the objective. The leaves that are not split are discarded.
Baseline 2: Fixed-Width Trees (B). A related, second tree-based variant is obtained by greedily
optimizing the best split candidate for all leaf nodes, then ranking the leaves by reduction in the
Footnote 2: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Figure 2: Accuracy comparisons. Each graph compares Jaccard scores for jungles vs. standard
decision forests and three other baselines. (a, b, c) Segmentation accuracy as a function of the total
number of nodes in the ensemble (i.e. memory usage) for three different datasets. (d, e, f) Segmentation accuracy as a function of the maximum number of test comparisons per pixel (maximum depth
× size of ensemble), for the same datasets. Jungles achieve the same accuracy with fewer nodes.
Jungles also improve the overall generalization of the resulting classifier.
objective, and greedily taking only the M/2 splits that most reduce the objective.3 The leaf nodes
that are not split are discarded from further consideration.
Baseline 3: Priority Scheduled Trees. As a final variant, we consider priority-driven tree training. Current leaf nodes are ranked by the reduction in the objective that would be achieved by
splitting them. At each iteration, the top M nodes are split, optimal splits computed and the new
children added into the priority queue. This baseline is identical to the baseline 2 above, except that
nodes that are not split at a particular iteration are part of the ranking at subsequent iterations. This
can be seen as a form of tree pruning [13], and in the limit, will result in standard binary decision
trees. As shown later, the trees at intermediate iterations can give surprisingly good generalization.
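A sketch of this priority-driven scheme, under the same assumed interfaces as above; unlike the fixed-width baselines, leaves that are not split stay in the queue for later iterations.

    import heapq

    def priority_scheduled(leaves, gain, split_fn, M, iterations):
        """Baseline 3: keep unsplit leaves in a max-priority queue keyed by the
        objective reduction of their best split; each iteration pops the top M,
        splits them, and pushes the children back."""
        heap = [(-gain(leaf), i, leaf) for i, leaf in enumerate(leaves)]
        heapq.heapify(heap)
        tie = len(leaves)                     # tiebreaker: leaves may not compare
        for _ in range(iterations):
            batch = [heapq.heappop(heap) for _ in range(min(M, len(heap)))]
            for _, _, leaf in batch:
                for child in split_fn(leaf):
                    heapq.heappush(heap, (-gain(child), tie, child))
                    tie += 1
        return [leaf for _, _, leaf in heap]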
4.3 Comparative Experiments
Prediction Accuracy vs. Model Size. One of our two main hypotheses is that jungles can reduce the
amount of memory used compared to forests. To investigate this we compared jungles to the baseline
forests on three different datasets. The results are shown in Fig. 2 (top row). Note that the jungles
of merged DAGs achieve the same accuracy as the baselines with substantially fewer total nodes.
For example, on the Kinect dataset, to achieve an accuracy of 0.2, the jungle requires around 3000
nodes whereas the standard forest requires around 22000 nodes. We use the total number of nodes as
a proxy for memory usage; the two are strongly linked, and the proxy works well in practice. For
example, the forest of 3 trees occupied 80MB on the Kinect dataset vs. 9MB for a jungle of 3 DAGs.
On the Faces dataset the forest of 3 trees occupied 7.17MB vs. 1.72MB for 3 DAGs.
A second hypothesis is that merging provides a good way to regularize the training and thus increases
generalization. Firstly, observe how all tree-based baselines saturate and in some cases start to
overfit as the trees become larger. This is a common effect with deep trees and small ensembles.
Our merged DAGs appear to be able to avoid this overfitting (at least insofar as we have trained them here), and further, actually increase the generalization quite considerably.
3 In other words, baseline 1 optimizes the most energetic nodes, whereas baseline 2 optimizes all nodes and takes only the splits that most reduce the objective.
[Figure 3 appears here: (a) Kinect test segmentation accuracy vs. total number of nodes and (b) vs. max. no. feature evaluations per pixel, each for 1, 3, and 9 standard trees or merged DAGs; (c) Faces test segmentation accuracy vs. total number of nodes for standard trees and merged DAGs with M = 128, 256, and 512.]
Figure 3: (a, b) Effect of ensemble size on test accuracy. (a) plots accuracy against the total
number of nodes in the ensemble, whereas (b) plots accuracy against the maximum number of computations required at test time. For a fixed ensemble size jungles of DAGs achieve consistently
better generalization than conventional forests. (c) Effect of merging parameter M on test accuracy. The model width M has a regularizing effect on our DAG model. For other results shown on
this dataset, we set M = 256. See text for details.
Interestingly, the width-limited tree-based baselines perform substantially better than the standard
tree training algorithm, and in particular the priority scheduling appears to work very well, though
still inferior to our DAG model. This suggests that both reducing the model size and node merging
have a substantial positive effect on generalization.
Prediction Accuracy vs. Depth. We do not expect the reduction in memory given by merging to
come for free: there is likely to be a cost in terms of the number of nodes evaluated for any individual
test example. Fig. 2 (bottom row) shows this trade-off. The large gains in memory footprint and
accuracy come at a relatively small cost in the number of feature evaluations at test time. Again,
however, the improved generalization is also evident. The need to train deeper also has some effect
on training time. For example, training 3 trees for Kinect took 32mins vs. 50mins for 3 DAGs.
Effect of Ensemble Size. Fig. 3 (a, b) compares results for 1, 3, and 9 trees/DAGs in a forest/jungle.
Note from (a) that in all cases, a jungle of DAGs uses substantially less memory than a standard
forest for the same accuracy, and also that the merging consistently increases generalization. In
(b) we can see again that this comes at a cost in terms of test time evaluations, but note that the
upper-envelope of the curves belongs in several regions to DAGs rather than trees.
LSearch vs. ClusterSearch Optimization. In experiments we observed the LSearch algorithm to
perform better than the ClusterSearch optimization, both in terms of the objective achieved (reported
in the table below for the face dataset) and also in test accuracy. The difference is slight, yet very
consistent. In our experiments the LSearch algorithm was used with 250 iterations.
Number of nodes            2047    5631    10239   20223   30207   40191
LSearch objective          0.735   0.596   0.514   0.423   0.375   0.343
ClusterSearch objective    0.739   0.605   0.524   0.432   0.382   0.351
Effect of Model Width. We performed an experiment investigating changes to M , the maximum
tree width. Fig. 3 (c) shows the results. The merged DAGs consistently outperform the standard
trees both in terms of memory consumption and generalization, for all settings of M evaluated.
Smaller values of M improve accuracy while keeping memory constant, but must be trained deeper.
Qualitative Image Segmentation Results. Fig. 4 shows some randomly chosen segmentation results on both the Kinect and Faces data. On the Kinect data, forests of 9 trees are compared to
jungles of 9 DAGs. The jungles appear to give smoother segmentations than the standard forests,
perhaps more so than the quantitative results would suggest. On the Faces data, small forests of 3
trees are compared to jungles of 3 DAGs, with each model containing only 48k nodes in total.
Results on UCI Datasets. Figure 5 reports the test classification accuracy as a function of model
size for two UCI data sets. The full results for all UCI data sets are reported in the supplementary
material. Overall using DAGs allows us to achieve higher accuracies at smaller model sizes, but in
[Figure 4 appears here: example images arranged as input image, ground truth, standard-trees segmentation, and merged-DAGs segmentation.]
Figure 4: Qualitative results. A few example results on the Kinect body parts and face segmentation
tasks, comparing standard trees and merged DAGs with the same number of nodes.
[Figure 5 appears here: multiclass accuracy vs. total number of nodes for the "poker" and "mnist-60k" data sets (10 classes, 5 folds), comparing 8 standard trees with 8 merged DAGs.]
Figure 5: UCI classification results for two data sets, MNIST-60k and Poker, eight trees or DAGs
per ensemble. The MNIST result is typical in that the accuracy improvements of DAGs over trees
are small but achieved at a smaller number of nodes (memory). The largest UCI data set (Poker, 1M
instances) profits most from the use of randomized DAGs.
most cases the generalization performance is not improved or only slightly improved. The largest
improvements for DAGs over trees is reported for the largest dataset (Poker).
5 Conclusion
This paper has presented decision jungles as ensembles of rooted decision DAGs. These DAGs are
trained, level-by-level, by jointly optimizing an objective function over both the choice of split function and the structure of the DAG. Two local optimization strategies were evaluated, with an efficient
move-making algorithm producing the best results. Our evaluation on a number of diverse and challenging classification tasks has shown jungles to improve both memory efficiency and generalization
for several tasks compared to conventional decision forests and their variants.
We believe that decision jungles can be extended to regression tasks. We also plan to investigate
multiply rooted trees and merging between DAGs within a jungle.
Acknowledgements. The authors would like to thank Albert Montillo for initial investigation of
related ideas.
References
[1] Y. Amit and D. Geman. Randomized inquiries about shape; an application to handwritten digit recognition. Technical Report 401, Dept. of Statistics, University of Chicago, IL, Nov 1994.
[2] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, Oct. 2005.
[3] D. Benbouzid, R. Busa-Fekete, and B. Kégl. Fast classification using sparse decision DAGs. In Proc. Intl Conf. on Machine Learning (ICML), New York, NY, USA, 2012. ACM.
[4] K. P. Bennett, N. Cristianini, J. Shawe-Taylor, and D. Wu. Enlarging the margins in perceptron decision trees. Machine Learning, 41(3):295–313, 2000.
[5] L. Breiman. Random forests. Machine Learning, 45(1), 2001.
[6] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and Regression Trees. Chapman
and Hall/CRC, 1984.
[7] H. Chipman, E. I. George, and R. E. McCulloch. Bayesian CART model search. Journal of the American Statistical Association, 93:935–960, 1997.
[8] P. Chou. Optimal partitioning for classification and regression trees. IEEE Trans. PAMI, 13(4), 1991.
[9] A. Criminisi and J. Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer,
2013.
[10] T. Elomaa and M. Kääriäinen. On the practice of branching program boosting. In European Conf. on Machine Learning (ECML), 2001.
[11] M. Everingham, L. van Gool, C. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes
(VOC) Challenge. http://www.pascal-network.org/challenges/VOC/.
[12] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically consistent
regions. In Proc. IEEE ICCV, 2009.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[14] E. B. Hunt, J. Marin, and P. T. Stone. Experiments in Induction. Academic Press, New York, 1966.
[15] L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15–17, 1976.
[16] B. Kijsirikul, N. Ussivakul, and S. Meknavin. Adaptive directed acyclic graphs for multiclass classification. In Pacific Rim Intl Conference on Artificial Intelligence (PRICAI), 2002.
[17] R. Kohavi and C.-H. Li. Oblivious decision trees, graphs, and top-down pruning. In Intl Joint Conf. on Artificial Intelligence (IJCAI), 1995.
[18] P. Kontschieder, P. Kohli, J. Shotton, and A. Criminisi. GeoF: Geodesic forests for learning coupled
predictors. In Proc. IEEE CVPR, 2013.
[19] V. Lepetit and P. Fua. Keypoint recognition using randomized trees. IEEE Trans. PAMI, 2006.
[20] J. Mahoney and R. J. Mooney. Initializing ID5R with a domain theory: some negative results. Technical
Report 91-154, Dept. of Computer Science, University of Texas, Austin, TX, 1991.
[21] K. V. S. Murthy and S. L. Salzberg. On growing better decision trees from data. PhD thesis, John Hopkins
University, 1995.
[22] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI repository of machine learning databases. Technical
Report 28, University of California, Irvine, Department of Information and Computer Science, 1998.
[23] A. L. Oliveira and A. Sangiovanni-Vincentelli. Using the minimum description length principle to infer
reduced ordered decision graphs. Machine Learning, 12, 1995.
[24] J. J. Oliver. Decision graphs – an extension of decision trees. Technical Report 92/173, Dept. of Computer Science, Monash University, Victoria, Australia, 1992.
[25] A. H. Peterson and T. R. Martinez. Reducing decision trees ensemble size using parallel decision DAGs.
Intl Journ. on Artificial Intelligence Tools, 18(4), 2009.
[26] J. C. Platt, N. Cristianini, and J. Shawe-Taylor. Large margin DAGs for multiclass classification. In Proc. NIPS, pages 547–553, 2000.
[27] J. R. Quinlan. Induction of decision trees. Machine Learning, 1986.
[28] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
[29] J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R. Moore, P. Kohli, A. Criminisi,
A. Kipman, and A. Blake. Efficient human pose estimation from single depth images. IEEE Trans. PAMI,
2013.
Teaching Artificial Neural Systems to Drive:
Manual Training Techniques for Autonomous Systems
J. F. Shepanski and S. A. Macy
TRW, Inc.
One Space Park, 02/1779
Redondo Beach, CA 90278
Abstract
We have developed a methodology for manually training autonomous control systems based on artificial neural systems (ANS). In applications where the rule set governing an expert's decisions is difficult to formulate, ANS can be used to extract rules by associating the information an expert receives with the actions he takes. Properly constructed networks imitate rules of behavior that permit them to function autonomously when they are trained on the spanning set of possible situations. This training can be provided manually, either under the direct supervision of a system trainer, or indirectly using a background mode where the network assimilates training data as the expert performs his day-to-day tasks. To demonstrate these methods we have trained an ANS network to drive a vehicle through simulated freeway traffic.
Introduction
Computational systems employing fine-grained parallelism are revolutionizing the way we approach a number of long-standing problems involving pattern recognition and cognitive processing. The field spans a wide variety of computational networks, from constructs emulating neural functions to more crystalline configurations that resemble systolic arrays. Several titles are used to describe this broad area of research; we use the term artificial neural systems (ANS). Our concern in this work is the use of ANS for manually training certain types of autonomous systems where the desired rules of behavior are difficult to formulate.
Artificial neural systems consist of a number of processing elements interconnected in a weighted, user-specified fashion, the interconnection weights acting as memory for the system. Each processing element calculates an output value based on the weighted sum of its inputs. In addition, the input data is correlated with the output or desired output (specified by an instructive agent) in a training rule that is used to adjust the interconnection weights. In this way the network learns patterns or imitates rules of behavior and decision making.
The particular ANS architecture we use is a variation of Rumelhart et al.'s [1] multi-layer perceptron employing the generalized delta rule (GDR). Instead of a single multi-layer structure, our final network has a multiple-component or "block" configuration where one block's output feeds into another (see Figure 3). The training methodology we have developed is not tied to a particular training rule or architecture and should work well with alternative networks like Grossberg's adaptive resonance model [2].
© American Institute of Physics 1988
The equations describing the network are derived and described in detail by Rumelhart et al. [1]. In summary, they are:

Transfer function:
    $s_j = \sum_{i=0}^{n} w_{ji}\, o_i$    (1)

Weight adaptation rule:
    $\Delta w_{ji} = (1 - \alpha)\, \eta\, \delta_j o_i + \alpha\, \Delta w_{ji}^{\mathrm{previous}}$    (2)

Error calculation:
    $\delta_j = o_j (1 - o_j) \sum_{k=1}^{m} \delta_k w_{kj}$    (3)
where o_j is the output of processing element j or a sensor input, w_ji is the interconnection weight leading from element i to j, n is the number of inputs to j, Δw is the adjustment of w, η is the training constant, α is the training "momentum," δ_j is the calculated error for element j, and m is the fanout of a given element. Element zero is a constant input, equal to one, so that w_j0 is equivalent to the bias threshold of element j. The (1 - α) factor in equation (2) differs from the standard GDR formulation, but it is useful for keeping track of the relative magnitudes of the two terms. For the network's output layer, the summation in equation (3) is replaced with the difference between the desired and actual output value of element j.
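To make the update concrete, here is a minimal NumPy sketch of equations (1)-(3) for one weight layer, with the paper's (1 - α) momentum variant; the array shapes, helper names, and use of NumPy are our assumptions, not the authors' implementation.

    import numpy as np

    def sigmoid(s):
        return 1.0 / (1.0 + np.exp(-s))

    def gdr_step(w, o_in, desired, dw_prev, eta=0.2, alpha=0.0):
        # o_in[0] is the constant input 1, so w[:, 0] acts as the bias threshold.
        s = w @ o_in                              # eq. (1): s_j = sum_i w_ji o_i
        o = sigmoid(s)
        # Output layer: the sum in eq. (3) is replaced by (desired - actual);
        # a hidden layer would use delta = o * (1 - o) * (w_above.T @ delta_above).
        delta = o * (1.0 - o) * (desired - o)
        dw = (1.0 - alpha) * eta * np.outer(delta, o_in) + alpha * dw_prev  # eq. (2)
        return w + dw, dw

    # The paper's standard settings are eta = 0.2 and alpha = 0.
    w = np.zeros((3, 5)); dw = np.zeros_like(w)
    o_in = np.array([1.0, 0.3, -0.1, 0.7, 0.2])   # element zero fixed at one
    w, dw = gdr_step(w, o_in, desired=np.array([1.0, 0.0, 0.0]), dw_prev=dw)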
These networks are usually trained by presenting the system with sets of input/output data vectors in cyclic fashion, the entire cycle of database presentation repeated dozens of times. This method is effective when the training agent is a computer operating in batch mode, but would be intolerable for a human instructor. There are two developments that will help real-time human training. The first is a more efficient incorporation of data/response patterns into a network. The second, which we are addressing in this paper, is a suitable environment wherein a man and an ANS network can interact in a training situation with minimum inconvenience or boredom on the human's part. The ability to systematically train networks in this fashion is extremely useful for developing certain types of expert systems including automatic signal processors, autopilots, robots and other autonomous machines. We report a number of techniques aimed at facilitating this type of training, and we propose a general method for teaching these networks.
System Development
Our work focuses on the utility of ANS for system control. It began as an application of Barto and Sutton's associative search network [3]. Although their approach was useful in a number of ways, it fell short when we tried to use it for capturing the subtleties of human decision-making. In response we shifted our emphasis from constructing goal functions for automatic learning to methods for training networks using direct human instruction. An integral part of this is the development of suitable interfaces between humans, networks and the outside world or simulator. In this section we will report various approaches to these ends, and describe a general methodology for manually teaching ANS networks. To demonstrate these techniques we taught a network to drive a robot vehicle down a simulated highway in traffic. This application combines binary decision making and control of continuous parameters.
Initially we investigated the use of automatic learning based on goal functions [3] for training control systems. We trained a network-controlled vehicle to maintain acceptable following distances from cars ahead of it. On a graphics workstation, a one-lane circular track was
constructed and occupied by two vehicles: a network-controlled robot car and a pace car that varied its speed at random. Input data to the network consisted of the separation distance and the speed of the robot vehicle. The values of a goal function were translated into desired output for GDR training. Output controls consisted of three binary decision elements: 1) accelerate one increment of speed, 2) maintain speed, and 3) decelerate one increment of speed. At all times the desired output vector had exactly one of these three elements active. The goal function was quadratic with a minimum corresponding to the optimal following distance. Although it had no direct control over the simulation, the goal function positively or negatively reinforced the system's behavior.
The network was given complete control of the robot vehicle, and the human trainer had no influence except the ability to start and terminate training. This proved unsatisfactory because the initial system behavior--governed by random interconnection weights--was very unstable. The robot tended to run over the car in front of it before significant training occurred. By carefully halting and restarting training we achieved stable system behavior. At first the following distance maintained by the robot car oscillated as if the vehicle was attached by a spring to the pace car. This activity gradually damped. After about one thousand training steps the vehicle maintained the optimal following distance and responded quickly to changes in the pace car's speed.
Constructing composite goal functions to promote more sophisticated abilities proved difficult, even ill-defined, because there were many unspecified parameters. To generate goal functions for these abilities would be similar to conventional programming--the type of labor we want to circumvent using ANS. On the other hand, humans are adept at assessing complex situations and making decisions based on qualitative data, but their "goal functions" are difficult if not impossible to capture analytically. One attraction of ANS is that it can imitate behavior based on these elusive rules without formally specifying them. At this point we turned our efforts to manual training techniques.
The initially trained network was grafted into a larger system and augmented with additional inputs: distance and speed information on nearby pace cars in a second traffic lane, and an output control signal governing lane changes. The original network's ability to maintain a safe following distance was retained intact. This grafting procedure is one of two methods we studied for adding new abilities to an existing system. (The second, which employs a block structure, is described below.) The network remained in direct control of the robot vehicle, but a human trainer instructed it when and when not to change lanes. His commands were interpreted as the desired output and used in the GDR training algorithm. This technique, which we call coaching, proved useful and the network quickly correlated its environmental inputs with the teacher's instructions. The network became adept at changing lanes and weaving through traffic. We found that the network took on the behavior pattern of its trainer. A conservative teacher produced a timid network, while an aggressive trainer produced a network that tended to cut off other automobiles and squeeze through tight openings. Despite its success, the coaching method of training did not solve the problem of initial network instability.
The stability problem was solved by giving the trainer direct control over the simulation. The system configuration (Figure 1) allows the expert to exert control or release it to the network. During initial training the expert is in the driver's seat while the network acts in the role of
apprentice. It receives sensor information, predicts system commands, and compares its predictions against the desired output (i.e., the trainer's commands). Figure 2 shows the data and command flow in detail. Input data is processed through different channels and presented to the trainer and network. Where visual and audio formats are effective for humans, the network uses information in vector form. This differentiation of data presentation is a limitation of the system; removing it is a task for future research. The trainer issues control commands in accordance with his assigned task while the network takes the trainer's actions as desired system responses and correlates these with the input. We refer to this procedure as master/apprentice training; network training proceeds invisibly in the background as the expert proceeds with his day-to-day work. It avoids the instability problem because the network is free to make errors without the adverse consequence of throwing the operating environment into disarray.
[Figure 1 appears here: block diagram in which the world (with sensors) or a simulation supplies input to both the network and the expert; commands from the expert or the network drive the actuation.]
Figure 1. A scheme for manually training ANS networks. Input data is received by both the network and trainer. The trainer issues commands that are actuated (solid command line), or he coaches the network in how it ought to respond (broken command line).
[Figure 2 appears here: data-flow diagram with input data preprocessed separately for the human and for the network; the trainer's commands and the network's predicted commands feed the actuation, and a coaching/emphasis path feeds the training rule.]
Figure 2. Data and command flow in the training system. Input data is processed and presented to the trainer and network. In master/apprentice training (solid command line), the trainer's orders are actuated and the network treats his commands as the system's desired output. In coaching, the network's predicted commands are actuated (broken command line), and the trainer influences weight adaptation by specifying the desired system output and controlling the values of training constants; his "suggestions" are not directly actuated.
Once initial background training is complete, the expert proceeds in a more formal manner to teach the network. He releases control of the command system to the network in order to evaluate its behavior and weaknesses. He then resumes control and works through a
series of scenarios designed to train the network out of its bad behavior. By switching back and forth between human and network control, the expert assesses the network's reliability and teaches correct responses as needed. We find master/apprentice training works well for behavior involving continuous functions, like steering. On the other hand, coaching is appropriate for decision functions, like when the car ought to pass. Our methodology employs both techniques.
The Driving Network
The fully developed freeway simulation consists of a two-lane highway that is made of joined straight and curved segments which vary at random in length (and curvature). Several pace cars move at random speeds near the robot vehicle. The network is given the tasks of tracking the road, negotiating curves, returning to the road if placed far afield, maintaining safe distances from the pace cars, and changing lanes when appropriate. Instead of a single multi-layer structure, the network is composed of two blocks; one controls the steering and the other regulates speed and decides when the vehicle should change lanes (Figure 3). The first block receives information about the position and speed of the robot vehicle relative to other cars in its vicinity. Its output is used to determine the automobile's speed and whether the robot should change lanes. The passing signal is converted to a lane assignment based on the car's current lane position. The second block receives the lane assignment and data pertinent to the position and orientation of the vehicle with respect to the road. The output is used to determine the steering angle of the robot car.
[Figure 3 appears here: the two network blocks. Block 1 inputs: constant, speed, dist. ahead (PL), dist. ahead (OL), dist. behind (OL), rel. speed ahead (PL), rel. speed ahead (OL), rel. speed behind (OL); outputs: speed and a change-lanes signal, converted to a lane number. Block 2 inputs: constant, rel. orientation, lane number, lateral dist., curvature; output: steering angle.]
Figure 3. The two blocks of the driving ANS network. Heavy arrows indicate total interconnectivity between layers. PL designates the traffic lane presently occupied by the robot vehicle, OL refers to the other lane, curvature refers to the road, lane number is either 0 or 1, and relative orientation and lateral distance refer to the robot car's direction and position relative to the road's direction and center line, respectively.
The input data is displayed in pictorial and textual form to the driving instructor. He views the road and nearby vehicles from the perspective of the driver's seat or overhead. The network receives information in the form of a vector whose elements have been scaled to unitary order, O(1). Wide-ranging input parameters, like distance, are compressed using the hyperbolic tangent or logarithmic functions. In each block, the input layer is totally interconnected to both the output and a hidden layer. Our scheme trains in real time, and as we discuss later, it trains more smoothly with a small modification of the training algorithm.
Output is interpreted in two ways: as a binary decision or as a continuously varying parameter. The first simply compares the sigmoid output against a threshold. The second scales the output to an appropriate range for its application. For example, on the steering output element, a 0.5 value is interpreted as a zero steering angle. Left and right turns of varying degrees are initiated when this output is above or below 0.5, respectively.
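A short sketch of these two output readings; the ±30-degree steering range is an assumed illustration, not a value from the paper, and which side corresponds to left vs. right is our choice.

    def interpret_outputs(lane_out, steer_out, max_angle=30.0):
        """Binary decision: threshold the sigmoid output at 0.5. Continuous
        control: rescale so that 0.5 maps to a zero steering angle, with
        turns of increasing magnitude above and below 0.5."""
        change_lanes = lane_out > 0.5
        steering_angle = (steer_out - 0.5) * 2.0 * max_angle
        return change_lanes, steering_angle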
The network is divided into two blocks that can be trained separately. Besides being conceptually easier to understand, we find this component approach is easy to train systematically. Because each block has a restricted, well-defined set of tasks, the trainer can concentrate specifically on those functions without being concerned that other aspects of the network behavior are deteriorating.
We trained the system from the bottom up, first teaching the network to stay on the road, negotiate curves, change lanes, and how to return if the vehicle strayed off the highway. Block 2, responsible for steering, learned these skills in a few minutes using the master/apprentice mode. It tended to steer more slowly than a human but further training progressively improved its responsiveness.
We experimented with different training constants and "momentum" values. Large η values, about 1, caused weights to change too coarsely. η values an order of magnitude smaller worked well. We found no advantage in using momentum for this method of training; in fact, the system responded about three times more slowly when α = 0.9 than when the momentum term was dropped. Our standard training parameters were η = 0.2 and α = 0.
[Figure 4 appears here: two overhead sketches, (a) and (b), of the robot vehicle in traffic.]
Figure 4. Typical behavior of a network-controlled vehicle (dark rectangle) when trained by a) a conservative driver, and b) a reckless driver. Speed is indicated by the length of the arrows.
After Block 2 was trained, we gave steering control to the network and concentrated on teaching the network to change lanes and adjust speed. Speed control in this case was a continuous variable and was best taught using master/apprentice training. On the other hand, the binary decision to change lanes was best taught by coaching. About ten minutes of training were needed to teach the network to weave through traffic. We found that the network readily adapts the
behavioral pattern of its trainer. A conservative trainer generated a network that hardly ever passed, while an aggressive trainer produced a network that drove recklessly and tended to cut off other cars (Figure 4).
Discussion
One of the strengths of expert systems based on ANS is that the use of input data in the decision making and control process does not have to be specified. The network adapts its internal weights to conform to input/output correlations it discovers. It is important, however, that data used by the human expert is also available to the network. The different processing of sensor data for man and network may have important consequences; key information may be presented to the man but not the machine.
This difference in data processing is particularly worrisome for image data, where human ability to extract detail is vastly superior to our automatic image processing capabilities. Though we would not require an image processing system to understand images, it would have to extract relevant information from cluttered backgrounds. Until we have sufficiently sophisticated algorithms or networks to do this, our efforts at constructing expert systems which handle image data are handicapped.
Scaling input data to the unitary order of magnitude is important for training stability. This is evident from equations (1) and (2). The sigmoid transfer function ranges from 0.1 to 0.9 in approximately four units, that is, over an O(1) domain. If the system response must change in reaction to a large, O(n) swing of a given input parameter, the weight associated with that input will be trained toward an O(1/n) magnitude. On the other hand, if the same system responds to an input whose range is O(1), its associated weight will also be O(1). The weight adjustment equation does not recognize differences in weight magnitude; therefore relatively small weights will undergo wild magnitude adjustments and converge weakly. On the other hand, if all input parameters are of the same magnitude, their associated weights will reflect this and the training constant can be adjusted for gentle weight convergence. Because the outputs of hidden units are constrained between zero and one, O(1) is a good target range for input parameters. Both the hyperbolic tangent and logarithmic functions are useful for scaling wide-ranging inputs. A useful form of the latter is
$f(x) = \begin{cases} \beta\,[1 + \ln(x/\sigma)] & \text{if } \sigma < x \\ \beta x / \sigma & \text{if } -\sigma \le x \le \sigma \\ -\beta\,[1 + \ln(-x/\sigma)] & \text{if } x < -\sigma \end{cases}$    (4)
where σ > 0 defines the limits of the intermediate linear section, and β is a scaling factor. This symmetric logarithmic function is continuous in its first derivative, and useful when network behavior should change slowly as a parameter increases without bound. On the other hand, if the system should approach a limiting behavior, the tanh function is appropriate.
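A small NumPy sketch of equation (4); the symbol names σ and β are our reading of the garbled source, and the function name is ours.

    import numpy as np

    def symmetric_log(x, sigma=1.0, beta=1.0):
        """Eq. (4): linear on [-sigma, sigma], logarithmic outside, with a
        continuous first derivative at the seams."""
        x = np.asarray(x, dtype=float)
        return np.piecewise(
            x,
            [x > sigma, np.abs(x) <= sigma, x < -sigma],
            [lambda v: beta * (1.0 + np.log(v / sigma)),
             lambda v: beta * v / sigma,
             lambda v: -beta * (1.0 + np.log(-v / sigma))])

    print(symmetric_log(np.array([-100.0, -1.0, 0.5, 1.0, 100.0])))
    # approximately [-5.605, -1.0, 0.5, 1.0, 5.605]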
Weight adaptation is also complicated by relaxing the common practice of restricting interconnections to adjacent layers. Equation (3) shows that the calculated error for a hidden layer (given comparable weights, fanouts and output errors) will be one quarter or less of that of the output layer. This is caused by the slope factor, o_j(1 - o_j). The difference in error magnitudes is
not noticeable in networks restricted to adjacent layer interconnectivity. But when this constraint
is released, the effect of errors originating directly from an output unit has 4^d times the magnitude and effect of an error originating from a hidden unit removed d layers from the output layer. Compared to the corrections arising from the output units, those from the hidden units have little influence on weight adjustment, and the power of a multilayer structure is weakened. The system will train if we restrict connections to adjacent layers, but it trains slowly. To compensate for this effect we attenuate the error magnitudes originating from the output layer by the above factor. This heuristic procedure works well and facilitates smooth learning.
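One way to read this heuristic in code; this is our sketch, with d taken as the number of layers an error contribution skips (d = 0 for contributions from the adjacent layer).

    def attenuated_delta(o_j, contributions):
        """contributions: (delta_k, w_kj, d) triples, where d counts the layers
        between element j and the layer the error came from. Terms arriving
        over layer-skipping connections are scaled by 4**-d so they do not
        swamp corrections relayed through hidden units."""
        total = sum(d_k * w_kj * 4.0 ** (-d) for d_k, w_kj, d in contributions)
        return o_j * (1.0 - o_j) * total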
Though we have made progress in real-time learning systems using GDR, compared to humans, who can learn from a single data presentation, they remain relatively sluggish in learning and response rates. We are interested in improvements of the GDR algorithm or alternative architectures that facilitate one-shot or rapid learning. In the latter case we are considering least squares restoration techniques [4] and Grossberg and Carpenter's adaptive resonance models [3,5].
The construction of automated expert systems by observation of human personnel is attractive because of its efficient use of the expert's time and effort. Though the classic AI approach of rule-base inference is applicable when such rules are clear-cut and well organized, too often a human expert cannot put his decision-making process in words or specify the values of parameters that influence him. The attraction of ANS-based systems is that imitations of expert behavior emerge as a natural consequence of their training.
References
1) D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I, D. E. Rumelhart and J. L. McClelland (Eds.), chap. 8, (1986), Bradford Books/MIT Press, Cambridge
2) S. Grossberg, Studies of Mind and Brain, (1982), Reidel, Boston
3) A. Barto and R. Sutton, "Landmark Learning: An Illustration of Associative Search," Biological Cybernetics, 42, (1981), p. 1
4) A. Rosenfeld and A. Kak, Digital Picture Processing, Vol. 1, chap. 7, (1982), Academic Press, New York
5) G. A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-organizing Neural Pattern Recognition Machine," Computer Vision, Graphics, and Image Processing, 37, (1987), p. 54
Rule Induction through Integrated Symbolic and
Subsymbolic Processing
Clayton McMillan, Michael C. Mozer, Paul Smolensky
Department of Computer Science and
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Abstract
We describe a neural network, called RuleNet, that learns explicit, symbolic condition-action rules in a formal string manipulation domain.
RuleNet discovers functional categories over elements of the domain,
and, at various points during learning, extracts rules that operate on
these categories. The rules are then injected back into RuleNet and
training continues, in a process called iterative projection. By incorporating rules in this way, RuleNet exhibits enhanced learning and generalization performance over alternative neural net approaches. By
integrating symbolic rule learning and subsymbolic category learning,
RuleNet has capabilities that go beyond a purely symbolic system. We
show how this architecture can be applied to the problem of case-role
assignment in natural language processing, yielding a novel rule-based
solution.
1 INTRODUCTION
We believe that neural networks are capable of more than pattern recognition; they can
also perform higher cognitive tasks which are fundamentally rule-governed. Further we
believe that they can perform higher cognitive tasks better if they incorporate rules rather
than eliminate them. A number of well known cognitive models, particularly of language,
have been criticized for going too far in eliminating rules in fundamentally rule-governed
domains. We argue that with a suitable choice of high-level, rule-governed task, representation, processing architecture, and learning algorithm, neural networks can represent and
learn rules involving higher-level categories while simultaneously learning those categories. The resulting networks can exhibit better learning and task performance than neural
networks that do not incorporate rules, and have capabilities that go beyond those of a purely symbolic rule-learning algorithm.
We describe an architecture, called RuleNet, which induces symbolic condition-action
rules in a string mapping domain. In the following sections we describe this domain, the
task and network architecture, simulations that demonstrate the potential for this
approach, and finally, future directions of the research leading toward more general and
complex domains.
2 DOMAIN
We are interested in domains that map input strings to output strings. A string consists of n
slots, each containing a symbol. For example, the string abed contains the symbol e in
slot 3. The domains we have studied are intrinsically rule-based, meaning that the mapping function from input to output strings can be completely characterized by explicit,
mutually exclusive condition-action rules. These rules are of the general form "if certain
symbols are present in the input then perform a certain mapping from the input slots to the
output slots." The conditions do not operate directly on the input symbols, but rather on
categories defined over the input symbols. Input symbols can belong to multiple categories. For example, the words boy and girl are instances of the higher level category
HUMAN. We denote instances with lowercase bold font, and categories with uppercase
bold font. It should be apparent from context whether a letter string refers to a single
instance, such as boy, or a string of instances, such as abed.
Three types of conditions are allowed: 1) a simple condition, which states that an instance
of some category must be present in a particular slot of the input string, 2) a conjunction of
two simple conditions, and 3) a disjunction of two simple conditions. A typical condition
might be that an instance of the category W must be present in slot 1 of the input string and
an instance of category Y must be present in slot 3.
The action performed by a rule produces an output string in which the content of each slot
is either a fixed symbol or a function of a particular input slot, with the additional constraint that each input slot maps to at most one output slot. In the present work, this function of the input slots is the identity function. A typical action might be to switch the
symbols in slots 1 and 2 of the input, replace slot 3 with the symbol a, and copy slot 4 of
the input to the output string unchanged, e.g., abcd → baad.
We call rules of this general form second-order categorical permutation (SCP) rules. The
number of rules grows exponentially with the length of the strings and the number of input
symbols. An example of an SCP rule for strings of length four is:
if (input1 is an instance of W and input3 is an instance of Y) then
(output1 = input2, output2 = input1, output3 = a, output4 = input4)
where input_α and output_β denote input slot α and output slot β, respectively. As a shorthand for this rule, we write [∧ W_Y_ → 21a4], where the square brackets indicate this is a rule, the "∧" denotes a conjunctive condition, and the "_" denotes a wildcard symbol. A disjunction is denoted by "∨".
This formal string manipulation task can be viewed as an abstraction of several interesting
cognitive models in the connectionist literature, including case-role assignment (McClelland & Kawamoto, 1986), grapheme-phoneme mapping (Sejnowski & Rosenberg, 1987),
and mapping verb stems to the past tense (Rumelhart & McClelland, 1986).
[Figure 1 diagram: layers of units with complete connectivity and gating connections; legend: single unit, layer of units, m condition units, n pools of v category units, n pools of u hidden units, input.]
Figure 1: The RuleNet Architecture
3 TASK
RuleNet's task is to induce a compact set of rules that accurately characterizes a set of
training examples. We generate training examples using a predefined rule base. The rules
are over strings of length four and alphabets which are subsets of {a, b, c, d, e, f, g,
h, i, j, k, l}. For example, the rule [∨ Y_W_ → 4h21] may be used to generate the
exemplars:
hedk → kheh, cldk → khlc, gbdj → jhbg, gdbk → khdg
where category W consists of a, b, c, d, i, and category Y consists of f, g, h. Such
exemplars form the corpus used to train RuleNet. Exemplars whose input strings meet the
conditions of several rules are excluded. RuleNet's task is twofold: It must discover the
categories solely based upon the usage of their instances, and it must induce rules based
upon those categories.
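To make the rule encoding concrete, the following minimal Python sketch applies the example rule [∨ Y_W_ → 4h21] to the exemplars above and checks the outputs. The string-based encoding and function names are ours, introduced only for illustration; RuleNet itself represents conditions and actions in its weights.

W = set("abcdi")   # instances of category W
Y = set("fgh")     # instances of category Y

def condition(s):
    # disjunctive condition of [v Y_W_ -> 4h21]:
    # slot 1 holds an instance of Y, or slot 3 holds an instance of W
    return s[0] in Y or s[2] in W

def action(s, spec="4h21"):
    # digits copy input slots to the output; letters write fixed symbols
    return "".join(s[int(a) - 1] if a.isdigit() else a for a in spec)

for s, target in [("hedk", "kheh"), ("cldk", "khlc"),
                  ("gbdj", "jhbg"), ("gdbk", "khdg")]:
    assert condition(s) and action(s) == target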
The rule bases used to generate examples are minimal in the sense that no smaller set of
rules could have produced the examples. Therefore, in our simulations the target number
of rules to be induced is the same as the number used to generate the training corpus.
There are several traditional, symbolic systems, e.g., COBWEB (Fisher, 1987), that
induce rules for classifying inputs based upon training examples. It seems likely that,
given the correct representation, a system such as COBWEB could learn rules that would
classify patterns in our domain. However, it is not clear whether such a system could also
learn the action associated with each class. Classifier systems (Booker, et al., 1989) learn both conditions and actions, but there is no obvious way to map a symbol in slot α of the input to slot β of the output. We have also devised a greedy combinatoric algorithm for
inducing this type of rule, which has a number of shortcomings in comparison to RuleNet.
See McMillan (1992) for comparisons of RuleNet and alternative symbolic approaches.
4 ARCHITECTURE
RuleNet can implement SCP rules of the type outlined above. As shown in Figure 1,
RuleNet has five layers of units: an input layer, an output layer, a layer of category units, a
layer of condition units, and a layer of hidden units. The operation of RuleNet can be
divided into three functional components: categorization is performed in the mapping
from the input layer to the category layer via the hidden units, the conditions are evaluated
in the mapping from the category layer to the condition layer, and actions are performed in
the mapping from the input layer to the output layer, gated by the condition units.
The input layer is divided into n pools of units, one for each slot, and activates the category layer, which is also divided into n pools. Input pool α maps to category pool α. Units in category pool α represent possible categorizations of the symbol in input slot α. One or
more category units will respond to each input symbol. The activation of the hidden and
category units is computed with a logistic squashing function. There are m units in the
condition layer, one per rule. The activation of condition unit i, p_i, is computed as follows:

p_i = logistic(net_i) / Σ_j logistic(net_j)

The activation p_i represents the probability that rule i applies to the current input. The normalization enforces a soft winner-take-all competition among condition units. To the
degree that a condition unit wins, it enables a set of weights from the input layer to the output layer. These weights correspond to the action for a particular rule. There is one set of
weights, A_i, for each of the m rules. The activation of the output layer, y, is calculated from the input layer, x, as follows:

y = Σ_i p_i A_i x

Essentially, the transformation A_i for each rule i is applied to the input, and it contributes to the output to the degree that condition i is satisfied. Ideally, just one condition unit
will be fully activated by a given input, and the rest will remain inactive.
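Putting the three functional components together, one forward pass can be sketched in Python as follows. The variable names and shapes are our own assumptions; the computations simply mirror the equations above.

import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def rulenet_forward(x, W_hid, W_cat, C, b, A, a_bias):
    # x: input vector; W_hid, W_cat: input-to-hidden and hidden-to-category
    # weights; C, b: condition weights and biases (one row/entry per rule);
    # A, a_bias: per-rule action matrices and output bias vectors
    hid = logistic(W_hid @ x)       # hidden layer
    cat = logistic(W_cat @ hid)     # category layer
    q = logistic(C @ cat + b)       # condition-unit logistics
    p = q / q.sum()                 # soft winner-take-all normalization
    # each rule's action contributes in proportion to its condition unit
    return sum(p_i * (A_i @ x + bias_i)
               for p_i, A_i, bias_i in zip(p, A, a_bias))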
This architecture is based on the local expert architecture of Jacobs, Jordan, Nowlan, and
Hinton (1991), but is independently motivated in our work by the demands of the task
domain. RuleNet has essentially the same structure as the Jacobs network, where the
action substructure of RuleNet corresponds to their local experts and the condition substructure corresponds to their gating network. However, their goal of minimizing crosstalk between logically independent subtasks is quite different from ours.
4.1 Weight Templates
In order to interpret the weights in RuleNet as symbolic SCP rules, it is necessary to establish a correspondence between regions of weight space and SCP rules.
A weight template is a parameterized set of constraints on some weights (a manifold in weight space) that has a direct correspondence to an SCP rule. The strategy behind iterative projection is twofold: constrain gradient descent so that weights stay close to templates in weight space, and periodically project the learned weights to the nearest
template, which can then readily be interpreted as a set of SCP rules.
For SCP rules, there are three types of weight templates: one dealing with categorization,
one with rule conditions, and one with rule actions. Each type of template is defined over a
subset of the weights in RuleNet. The categorization templates are defined over the
weights from input to category units, the condition templates are defined over the weights
from category to condition units for each rule i, c_i, and the action templates are defined over the weights from input to output units for each rule i, A_i.
Category templates. The category templates specify that the mapping from each input slot
α to category pool α, for 1 ≤ α ≤ n, is uniform. This imposes category invariance across
the input string.
Condition templates. The weight vector c_i, which maps category activities to the activity of condition unit i, has vn elements, v being the number of category units per slot and n being the number of slots. The fact that the condition unit should respond to at most one category in each slot implies that at most one weight in each v-element subvector of c_i should be nonzero. For example, assuming there are three categories, N, X, and Y, the vector c_i that detects the simple condition "input2 is an instance of X" is: (000 0φ0 000 000), where φ is an arbitrary parameter. Additionally, a bias is required to ensure that the net input will be negative unless the condition is satisfied. Here, a bias value, b, of −0.5φ will suffice. For disjunctive and conjunctive conditions, weights in two slots should be equal to φ, the rest zero, and the appropriate bias is −0.5φ or −1.5φ, respectively. There is a weight template for each condition type and each combination of slots that takes part in a condition. We generalize these templates further in a variety of ways. For instance, in the case where each input symbol falls into exactly one category, if a constant ε_α is added to all weights of c_i corresponding to slot α and ε_α is also subtracted from b, the net input to condition unit i will be unaffected. Thus, the weight template must include the {ε_α}.
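As an illustration, a condition weight template can be instantiated as below. The helper and its argument conventions (1-based slots, 0-based category indices) are ours, not part of the original model.

import numpy as np

def condition_weights(kind, slot_cat_pairs, n, v, phi=1.0):
    # build the v*n condition weight vector c_i and bias b for a template;
    # kind is 'simple', 'or', or 'and'; slot_cat_pairs lists the (slot,
    # category) pairs taking part in the condition
    c = np.zeros(n * v)
    for slot, cat in slot_cat_pairs:
        c[(slot - 1) * v + cat] = phi
    b = {"simple": -0.5, "or": -0.5, "and": -1.5}[kind] * phi
    return c, b

# "input2 is an instance of X", with categories (N, X, Y) and n = 4 slots:
c, b = condition_weights("simple", [(2, 1)], n=4, v=3)
assert list(c) == [0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0] and b == -0.5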
Action templates. If we wish the actions carried out by the network to correspond to the
string manipulations allowed by our rule domain, it is necessary to impose some restrictions on the values assigned to the action weights for rule i, A_i. A_i has an n x n block form, where n is the length of input/output strings. Each block is a k x k submatrix, where k is the number of elements in the representation of each input symbol. The block at block-row β, block-column α of A_i copies input_α to output_β if it is the identity matrix. Thus, the weight templates restrict each block to being either the identity matrix or the zero matrix. If output_β is to be a fixed symbol, then block-row β must be all zero except for the output bias weights in block-row β.
The weight templates are defined over a submatrix A_iβ, the set of weights mapping the input to an output slot β. There are n+1 templates, one for the mapping of each input slot to the output, and one for the writing of a fixed symbol to the output. An additional constraint that only one block may be nonzero in block-column α of A_i ensures that input_α maps to at most one output slot.
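The block structure can be made concrete with a short sketch that builds the ideal action weights from a specification string, assuming k-dimensional one-hot symbol codes; the spec encoding follows the shorthand used earlier (digits copy input slots, a letter writes a fixed symbol through the output bias), and the helper itself is hypothetical.

import numpy as np

def action_matrix(spec, k, alphabet):
    # return (A, bias) for an action spec such as "21a4": block-row beta,
    # block-column alpha of A is the identity when output slot beta copies
    # input slot alpha; fixed symbols are written via the bias
    n = len(spec)
    A = np.zeros((n * k, n * k))
    bias = np.zeros(n * k)
    for beta, a in enumerate(spec):
        if a.isdigit():
            alpha = int(a) - 1
            A[beta * k:(beta + 1) * k, alpha * k:(alpha + 1) * k] = np.eye(k)
        else:
            bias[beta * k + alphabet.index(a)] = 1.0
    return A, bias

For example, action_matrix("21a4", k, alphabet) yields the template weights for the action of [∧ W_Y_ → 21a4].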
4.2 Constraints on Weight Changes
Recall that the strategy in iterative projection is to constrain weights to be close to the templates described above, in order that they may be readily interpreted as symbolic rules. We
use a combination of hard and soft constraints, some of which we briefly describe here.
To ensure that during learning every block in A_i approaches the identity or zero matrix, we constrain the off-diagonal terms to be zero and constrain weights along the diagonal of each block to be the same, thus limiting the degrees of freedom to one parameter within each block. All weights in c_i except the bias are constrained to positive or zero values.
Two soft constraints are imposed upon the network to encourage all-or-none categorization of input instances: A decay term is used on all weights in c_i except the maximum in
each slot, and a second cost term encourages binary activation of the category units.
4.3 Projection
The constraints described above do not guarantee that learning will produce weights that
correspond exactly to SCP rules. However, using projection, it is possible to transform the
condition and action weights such that the resulting network can be interpreted as rules.
The essential idea of projection is to take a set of learned weights, such as c_i, and compute
values for the parameters in each of the corresponding weight templates such that the
resulting weights match the learned weights. The weight template parameters are estimated using a least squares procedure, and the closest template, based upon a Euclidean
distance metric, is taken to be the projected weights.
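For the action weights, for instance, the projection of a single learned block can be sketched as below. This is a simplification we introduce for illustration: it treats only the identity-versus-zero choice for one k x k block, with the least-squares identity scale s = tr(B)/k.

import numpy as np

def project_block(B):
    # project a learned k x k block to the nearest action template
    # (zero matrix or scaled identity) under Euclidean distance
    k = B.shape[0]
    s = np.trace(B) / k                      # least-squares identity scale
    if np.linalg.norm(B) <= np.linalg.norm(B - s * np.eye(k)):
        return np.zeros_like(B)
    return s * np.eye(k)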
5 SIMULATIONS
We ran simulations on 14 different training sets, averaging the performance of the network
over at least five runs with different initial weights for each set. The training data were
generated from SCP rule bases containing 2-8 rules and strings of length four. Between
four and eight categories were used. Alphabets ranged from eight to 12 symbols. Symbols
were represented by either local or distributed activity vectors. Training set sizes ranged
from 3-15% of possible examples.
Iterative projection involved the following steps: (1) start with one rule (one set of c_i and A_i
weights), (2) perform gradient descent for 500-5,000 epochs, (3) project to the nearest set
of SCP rules and add a new rule. Steps (2) and (3) were repeated until the training set was
fully covered.
In virtually every run on each data set in which RuleNet converged to a set of rules that
completely covered the training set, the rules extracted were exactly the original rules used
to generate the training set. In the few remaining runs, RuleNet discovered an equivalent
set of rules.
It is instructive to examine the evolution of a rule set. The rightmost column of Figure 2
shows a set of five rules over four categories, used to generate 200 exemplars, and the left
portion of the Figure shows the evolution of the hypothesis set of rules learned by RuleNet
over 20,000 training epochs, projecting every 4000 epochs. At epoch 8000, RuleNet has
discovered two rules over two categories, covering 24.5% of the training set. At epoch
12,000, RuleNet has discovered three rules over three categories, covering 52% of the
training set. At epoch 20,000, RuleNet has induced five rules over four categories that
[Figure 2: Evolution of a Rule Set. The left three columns show the hypothesis rule sets and their category/instance tables at epochs 8000, 12,000, and 20,000; the rightmost column shows the original rules and categories, e.g. [∨ Y_W_ → 4h21], used to generate the training data.]
Table 1: Generalization performance of RuleNet (average of five runs), % of patterns correctly mapped

                       Data Set 1 (8 rules)  Data Set 2 (3 rules)  Data Set 3 (3 rules)  Data Set 4 (5 rules)
Architecture           train    test         train    test         train    test         train    test
RuleNet                100      100          100      100          100      100          100      100
Jacobs architecture    100      22           100      14           100      7            100      27
3-layer backprop       100      27           100      14           100      7            100      35
# of patterns in set   120      1635         45       1380         45       1380         75       1995
cover 100% of the training examples. A close comparison of these rules with the original
rules shows that they only differ in the arbitrary labels RuleNet has attached to the categories.
Learning rules can greatly enhance generalization. In cases where RuleNet learns the original rules, it can be expected to generalize perfectly to any pattern created by those rules.
We compared the performance of RuleNet to that of a standard three-layer backprop network (with 15 hidden units per rule) and a version of the Jacobs architecture, which in
principle has the capacity to perform the task. Four rule bases were tested, and roughly 5%
of the possible examples were used for training and the remainder were used for generalization testing. Outputs were thresholded to 0 or 1. The cleaned up outputs were compared
to the targets to determine which were mapped correctly. All three learn the training set
perfectly. However, on the test set, RuleNet's ability to generalize is 300% to 2000% better than the other systems (Table 1).
Finally, we applied RuleNet to case-role assignment, as considered by McClelland and
Kawamoto (1986). Case-role assignment is the problem of mapping syntactic constituents
of a sentence to underlying semantic, or thematic, roles. For example, in the sentence,
"The boy broke the window", boy is the subject at the syntactic level and the agent, or acting entity, at the semantic level. Window is the object at the syntactic level and the patient,
or entity being acted upon, at the semantic level. The words of a sentence can be represented as a string of n slots, where each slot is labeled with a constituent, such as subject,
and that slot is filled with the corresponding word, such as boy. The output is handled analogously. We used McClelland and Kawamoto's 152 sentences over 34 nouns and verbs as
RuleNet's training set. The five categories and six rules induced by RuleNet are shown in
Table 2, where S = subject, O = object, and wNP = noun in the with noun-phrase. We conjecture that RuleNet has induced such a small set of rules in part because it employs
Table 2: SCP Rules Induced by RuleNet in Case-Role Assignment

Rule                                                Sample of Sentences Handled Correctly
if O = VICTIM then wNP→modifier                     The boy ate the pasta with cheese.
if O = THING ∧ wNP = UTENSIL then wNP→instrument    The boy ate the pasta with the fork.
if S = BREAKER then S→instrument                    The rock broke the window.
if S = THING then S→patient                         The window broke. The fork moved.
if V = moved then self→patient                      The man moved.
if S = ANIMATE then food→patient                    The lion ate.
implicit conflict resolution, automatically assigning strengths to categories and conditions.
These rules cover 97% of the training set and perform the correct case-role assignments on
84% of the 1307 sentences in the test set.
6 DISCUSSION
RuleNet is but one example of a general methodology for rule induction in neural networks. This methodology involves five steps: 1) identify a fundamentally rule-governed
domain, 2) identify a class of rules that characterizes that domain, 3) design a general
architecture, 4) establish a correspondence between components of symbolic rules and
manifolds of weight space (weight templates), and 5) devise a weight-template-based
learning procedure.
Using this methodology, we have shown that RuleNet is able to perform both category and
rule learning. Category learning strikes us as an intrinsically subsymbolic process. Functional categories are often fairly arbitrary (consider the classification of words as nouns or
verbs) or have complex statistical structure (consider the classes "liberals" and "conservatives"). Consequently, real-world categories can seldom be described in terms of boolean
(symbolic) expressions; subsymbolic representations are more appropriate.
While category learning is intrinsically subsymbolic, rule learning is intrinsically a symbolic process. The integration of the two is what makes RuleNet a unique and powerful
system. Traditional symbolic machine learning approaches aren't well equipped to deal
with subsymbolic learning, and connectionist approaches aren't well equipped to deal
with the symbolic. RuleNet combines the strengths of each approach.
Acknowledgments
This research was supported by NSF Presidential Young Investigator award IRI-9058450, grant 9021 from the James S. McDonnell Foundation, and DEC external research grant 1250 to MM; NSF
grants IRI-8609599 and ECE-8617947 to PS; by a grant from the Sloan Foundation's computational
neuroscience program to PS; and by the Optical Connectionist Machine Program of the NSF Engineering Research Center for Optoelectronic Computing Systems at the University of Colorado at
Boulder.
References
Booker, L.B., Goldberg, D.E., and Holland, J.H. (1989). Classifier systems and genetic algorithms,
Artificial Intelligence 40:235-282.
Fisher, D.H. (1987). Knowledge acquisition via incremental concept clustering. Machine Learning
2:139-172.
Jacobs, R., Jordan, M., Nowlan, S., Hinton, G. (1991). Adaptive mixtures of local experts. Neural
Computation, 3:79-87.
McClelland, J. & Kawamoto, A. (1986). Mechanisms of sentence processing: assigning roles to constituents. In J.L. McClelland, D.E. Rumelhart, & the PDP Research Group, Parallel Distributed Processing: Explorations in the microstructure of cognition, Vol. 2. Cambridge, MA: MIT Press/Bradford Books.
McMillan, C. (1992). Rule induction in a neural network through integrated symbolic and subsymbolic processing. Unpublished Ph.D. Thesis. Boulder, CO: Department of Computer Science, University of Colorado.
Rumelhart, D., & McClelland, J. (1986). On learning the past tense of English verbs. In J.L. McClelland, D.E. Rumelhart, & the PDP Research Group, Parallel Distributed Processing: Explorations in the microstructure of cognition, Vol. 2. Cambridge, MA: MIT Press/Bradford Books.
Sejnowski, T. J. & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text,
Complex Systems, 1: 145-168.
Non-Linear Domain Adaptation with Boosting
Carlos Becker*    C. Mario Christoudias    Pascal Fua
CVLab, École Polytechnique Fédérale de Lausanne, Switzerland
firstname.lastname@epfl.ch
Abstract
A common assumption in machine vision is that the training and test samples
are drawn from the same distribution. However, there are many problems when
this assumption is grossly violated, as in bio-medical applications where different acquisitions can generate drastic variations in the appearance of the data due
to changing experimental conditions. This problem is accentuated with 3D data,
for which annotation is very time-consuming, limiting the amount of data that
can be labeled in new acquisitions for training. In this paper we present a multitask learning algorithm for domain adaptation based on boosting. Unlike previous
approaches that learn task-specific decision boundaries, our method learns a single decision boundary in a shared feature space, common to all tasks. We use
the boosting-trick to learn a non-linear mapping of the observations in each task,
with no need for specific a-priori knowledge of its global analytical form. This
yields a more parameter-free domain adaptation approach that successfully leverages learning on new tasks where labeled data is scarce. We evaluate our approach
on two challenging bio-medical datasets and achieve a significant improvement
over the state of the art.
1 Introduction
Object detection and segmentation approaches often assume that the training and test samples are
drawn from the same distribution. There are many problems in Computer Vision, however, where
this assumption can be grossly violated, such as in bio-medical applications that usually involve
expensive and complicated data acquisition processes that are not easily repeatable. As illustrated
in Fig. 1, this can result in newly-acquired data that is significantly different from the data used for
training. As a result, a classifier trained on data from one acquisition often cannot generalize well to
data obtained from a new one. Furthermore, although it is possible to expect supervised data from
a new acquisition, it is unreasonable to expect the practitioner to re-label large amounts of data for
each new image that is acquired, particularly in the case of 3D image stacks.
A possible solution is to treat each acquisition as a separate, but related classification problem, and
exploit their possible relationship to learn from the supervised data available across all of them.
Typically, each such classification problem is called a task, which is associated with a domain.
For example, for Fig. 1(a,b) the task is mitochondria segmentation in both acquisitions. However,
the domains are different, namely Striatum and Hippocampus EM stacks. Techniques in domain
adaptation [1] and more generally multi-task learning [2, 3] seek to leverage data from a set of
different yet related tasks or domains to help learn a classifier in a seemingly new task. In domain
adaptation, it is typically assumed that there is a fairly large amount of labeled data in one domain,
commonly referred to as the source domain, and that a limited amount of supervision is available in
the other, often called the target domain. Our goal is to exploit the labeled data in the source domain
to learn an accurate classifier in the target domain despite having only a few labeled samples in the
latter.
* This work was supported in part by the ERC grant MicroNano.
[Figure 1 panels: Mitochondria Segmentation (3D stacks): (a) Striatum, (b) Hippocampus; Path Classification (2D images to 3D stacks): (c) Aerial road images, (d) Neural Axons (OPF).]
Figure 1: (a,b) Slice cuts from two 3D Electron Microscopy acquisitions from different parts of the
brain of a rat. (c,d) 2D aerial road images and 3D neural axons from Olfactory Projection Fibers
(OPF). Top and bottom rows show example images and ground truth respectively.
The data acquisition problem is unique to many multi-task learning problems, however, in that each
task is in fact the same, but what has changed is that the features across different acquisitions have
undergone some unknown transformation. That is to say that each task can be well described by a
single decision boundary in some common feature space that preserves the task-relevant features and
discards the domain specific ones corresponding to unwanted acquisition artifacts. This contrasts the
more general multi-task setting where each task is comprised of both a common and task-specific
boundary, even when mapped to a common feature space, as illustrated in Fig. 2. A method that can
jointly optimize over the common decision boundary and shared feature space is therefore desired.
Linear latent variable methods such as those based on Canonical Correlation Analysis (CCA) [4,
5] can be applied to learn a shared feature space across the different acquisitions. However, the
situation is further complicated by the fact that the unknown transformations are generally nonlinear. Although kernel methods can be applied to model the non-linearity [4, 6, 7], this requires
the existence of a well-defined kernel function that can often be difficult to specify a priori. Also,
the computational complexity of kernel methods scales quadratically with the number of training
examples, limiting their application to large datasets.
In this paper we propose a solution to the data acquisition problem and devise a method that can
jointly solve for the non-linear decision boundary and transformations across tasks. As illustrated
in Fig. 2 our approach maps features from possibly high-dimensional, task-specific feature spaces
to a low-dimensional space common to all tasks. We assume that only the mappings are taskdependent and that in the shared space the problem is linearly separable and the decision boundary
is common to all tasks. We use the boosting-trick [8, 9, 10] to simultaneously learn the non-linear
task-specific mappings as well as the decision boundary, with no need for specific a-priori knowledge
of their global analytical form. This yields a more parameter-free domain adaptation approach that
successfully leverages learning on new tasks where labeled data is scarce.
We evaluate our approach on the two challenging bio-medical datasets depicted by Fig. 1. We
first consider the classification of curvilinear structures in 3D image stacks of Olfactory Projection
Fibers (OPF) [11] using labeled 2D aerial road images. We then perform mitochondria segmentation
in large 3D Electron Microscopy (EM) stacks of neural rat tissue, demonstrating the ability of our
algorithm to leverage labeled data from different data acquisitions on this challenging task. On both
datasets our approach obtains a significant improvement over using labeled data from either domain
alone and outperforms recent multi-task learning baseline methods.
2 Related Work
Initial ideas to multi-task learning exploited supervised data from related tasks to define a form of
regularization in the target problem [2, 12]. In this setting, related tasks, also sometimes referred to
(a) Standard Multi-task Learning
(b) Domain Adaptation
Figure 2: Illustration of the difference between (a) standard Multi-task Learning (MTL) and (b) our Domain Adaptation (DA) approach on two tasks. MTL assumes a single, pre-defined transformation Φ(x) : X → Z and learns shared and task-specific linear boundaries in Z, namely β_o, β_1 and β_2 ∈ Z. In contrast, our DA approach learns a single linear boundary β in a common feature space Z, and task-specific mappings φ_1(x), φ_2(x) : X → Z. Best viewed in color.
as auxiliary problems [13], are used to learn a latent representation and find discriminative features
shared across tasks. This representation is then transferred to the target task to help regularize the
solution and learn from fewer labeled examples. The success of these approaches crucially hinges
on the ability to define auxiliary tasks. Although this can be easily done in certain situations, e.g., as
in [13], in many cases it is unclear how to generate them and the solution can be limiting, especially
when provided only a few auxiliary problems. Unlike such methods, our approach is able to find an
informative shared representation even with as little as one related task.
More recent multi-task learning methods jointly optimize over both the shared and task-specific
components of each task [3, 14, 10, 15]. In [3] it was shown how the two step iterative optimization of [13] can be cast into a single convex optimization problem. In particular, for each task their
approach computes a linear decision boundary defined as a linear combination between a shared
hyperplane, shared across tasks, and a task-specific one in either the original or a kernelized feature
space. This idea was later further generalized to allow for more generic forms [14, 16, 17, 15], as
in [14] that investigated the use of a hierarchically combined decision boundary. The use of boosting for multi-task learning was explored in [10] as an alternative to kernel-based approaches. For
each task they optimize for a shared and task-specific decision boundary similar to [3], except nonlinearities are modeled using a boosted feature space. As with other methods, however, additional
parameters are required to control the degree of sharing between tasks that can be difficult to set,
especially when one or more tasks have only a few labeled samples.
For many problems, such as those common to domain adaptation [1], the decision problem is in fact
the same across tasks, however, the features of each task have undergone some unknown transformation. Feature-based approaches seek to uncover this transformation by learning a mapping between
the features across tasks [18, 19, 7]. A cross-domain Mahalanobis distance metric was introduced
in [18] that leverages across-task correspondences to learn a transformation from the source to target
domain. A similar method was later developed in [20] to handle cross-domain feature spaces of a
different dimensionality. Shared latent variable models have also been proposed to learn a shared
representation across multiple feature sources or tasks [4, 19, 6, 7, 21].
Feature-based methods generally rely on the kernel-trick to model non-linearities that requires the
selection of a pre-defined kernel function and is difficult to scale to large datasets. In this paper,
we exploit the boosting-trick [10] to handle non-linearities and learn a shared representation across
tasks, overcoming these limitations. This results in a more parameter-free, scalable domain adaptation approach that can leverage learning on new tasks where labeled data is scarce.
3 Our Approach
We consider the problem of learning a binary decision function from supervised data collected across
multiple tasks or domains. In our setting, each task is an instance of the same underlying decision
problem, however, its features are assumed to have undergone some unknown non-linear transformation.
Assume that we are given training samples X^t = {(x_i^t, y_i^t)}_{i=1}^{N^t} from t = 1, . . . , T tasks, where x_i^t ∈ R^D represents a feature vector for sample i in task t and y_i^t ∈ {−1, 1} its label. For each task, we seek to learn a non-linear transformation, φ_t(x^t), that maps x^t to a common, task-independent feature space, Z, accounting for any unwanted feature shift. Instead of relying on cleverly chosen kernel functions we model each transformation using a set of task-specific non-linear functions H^t = {h_1^t, . . . , h_M^t}, with h_j^t : R^D → R, to define φ_t : X^t → Z as φ_t(x^t) = [h_1^t(x^t), . . . , h_M^t(x^t)]^⊤.
A wide variety of task-specific feature functions can be explored within our framework. We consider
functions of the form,
h_j^t(x^t) = h_j(x^t − θ_j^t),   j = 1, . . . , M   (1)

where H = {h_1, . . . , h_M} are shared across tasks and θ_j^t ∈ R^D. This seems like an appropriate model in the case of feature shift between tasks, for example due to different acquisition parameters. Each h_j can be interpreted as a weak non-linear predictor of the task label and in practice M is large, possibly infinite. In what follows, we set H to be the set of regression trees or stumps [8] that in combination with Θ^t can be used to model highly complex, non-linear transformations.
Assuming that the problem is linearly separable in Z, the predictive function f_t(·) : R^D → R for each task can then be written as

f_t(x) = β^⊤ φ_t(x^t) = Σ_{j=1}^{M} β_j h_j(x^t − θ_j^t)   (2)

where β ∈ R^M is a linear decision boundary in Z that is common to all tasks. This contrasts
previous approaches to multi-task learning such as [3, 10] that learn a separate decision boundary
per task and, as we show later, is better suited for problems in domain adaptation.
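As a concrete illustration of Eq. (2), f_t can be evaluated as in the sketch below, with decision stumps standing in for the weak learners h_j; the stump parameterization is our own assumption, consistent with the weak learners discussed in Sec. 3.1.

import numpy as np

def predict(x, betas, stumps, thetas_t):
    # evaluate f_t(x) = sum_j beta_j * h_j(x - theta_j^t) for one task;
    # each weak learner is a stump h_j(z) = +1 if z[n_j] >= tau_j else -1
    f = 0.0
    for beta, (n, tau), theta in zip(betas, stumps, thetas_t):
        z = np.asarray(x) - theta       # task-specific feature shift
        f += beta * (1.0 if z[n] >= tau else -1.0)
    return f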
We learn the functions f_t(·) by minimizing the exponential loss on the training data across each task

(β*, Θ*) = argmin_{β,Θ} Σ_{t=1}^{T} L(β, Θ^t; X^t),   (3)

where

L(β, Θ^t; X^t) = Σ_{i=1}^{N^t} exp(−y_i^t f_t(x_i^t)) = Σ_{i=1}^{N^t} exp(−y_i^t Σ_{j=1}^{M} β_j h_j(x_i^t − θ_j^t)),   (4)

and Θ = [Θ^1, . . . , Θ^T] with Θ^t = [θ_1^t, . . . , θ_M^t].
The explicit minimization of Eq. (3) can be very difficult, since in practice, M can be prohibitively
large and the h_j's are typically discontinuous and highly non-linear. Luckily, this is a problem for
which boosting is particularly well suited [8], as it has been demonstrated to be an effective method
for constructing a highly accurate classifier from a possibly large collection of weak prediction
functions. Similar to the kernel-trick, the resulting boosting-trick [8, 9, 10] can be used to define a
non-linear mapping to a high dimensional feature space for which we assume the data to be linearly
separable. Unlike the kernel-trick, however, the boosting-trick defines an explicit mapping for which
β is assumed to be sparse [22, 10].
We propose to use gradient boosting [8, 9] to solve for f_t(·). Given any twice-differentiable loss
function, gradient boosting minimizes the loss in a stage-wise manner for iterations k = 1 to K. In
particular, we use the quadratic approximation introduced by [9]. When applied to minimize Eq. (3),
the goal at each boosting iteration is to find the weak learner ĥ ∈ H and the set of {θ̂^1, . . . , θ̂^T} that minimize

Σ_{t=1}^{T} Σ_{i=1}^{N^t} w_ik^t [ ĥ(x_i^t − θ̂^t) − r_ik^t ]^2,   (5)
where w_ik^t and r_ik^t can be computed by differentiating the loss of Eq. (4), obtaining w_ik^t = e^{−y_i^t f_t(x_i^t)} and r_ik^t = y_i^t. Once ĥ and {θ̂^1, . . . , θ̂^T} are found, a line-search procedure is applied to determine
Algorithm 1 Non-Linear Domain Adaptation with Boosting
Input: Training samples and labels for T tasks X^t = {(x_i^t, y_i^t)}_{i=1}^{N^t};
       number of iterations K, shrinkage factor 0 < ν ≤ 1
1: Set f_t(·) = 0 for all t = 1, . . . , T
2: for k = 1 to K do
3:   Let w_ik^t = e^{−y_i^t f_t(x_i^t)} and r_ik^t = y_i^t
4:   Find {ĥ(·), θ̂^1, . . . , θ̂^T} = argmin_{h∈H, θ^1,...,θ^T} Σ_{t=1}^{T} Σ_{i=1}^{N^t} w_ik^t [ h(x_i^t − θ^t) − r_ik^t ]^2
5:   Find ᾱ through line search: ᾱ = argmin_α Σ_{t=1}^{T} Σ_{i=1}^{N^t} exp(−y_i^t [ f_t(x_i^t) + α ĥ(x_i^t − θ̂^t) ])
6:   Set α̂ = ν ᾱ
7:   Update f_t(·) = f_t(·) + α̂ ĥ(· − θ̂^t) for all t = 1, . . . , T
8: end for
9: return f_t(·) for all t = 1, . . . , T
the optimal weighting for ĥ, and the predictive functions f_t(·) are updated, as described in Alg. 1.
Shrinkage may be applied to help regularize the solution, particularly when using powerful weak
learners such as regression trees [8].
Our proposed approach is summarized in Alg. 1. The main difficulty in applying this method is
in line 4, which finds the optimal values of ĥ and {θ̂^1, . . . , θ̂^T} that minimize Eq. (5). This can be
very expensive, depending on the type of weak learners employed. In the next section we show that
regression trees and boosted stumps can be used efficiently to minimize Eq. (5) at train time.
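The overall procedure can be condensed into the Python sketch below. The fit_weak callback stands in for the weak-learner search of line 4, and a coarse grid scan replaces the exact one-dimensional line search of line 5; both simplifications are ours.

import numpy as np

def boost_da(X, y, fit_weak, K=500, shrink=0.1):
    # X, y: per-task lists of sample matrices and +/-1 label vectors;
    # fit_weak(X, w, r) must return a callable h(X_t, t) evaluating the
    # learned weak learner with its task shift theta^t folded in
    T = len(X)
    F = [np.zeros(len(y[t])) for t in range(T)]    # f_t on training data
    ensemble = []
    for _ in range(K):
        w = [np.exp(-y[t] * F[t]) for t in range(T)]       # weights w_ik
        r = [y[t].astype(float) for t in range(T)]         # targets r_ik
        h = fit_weak(X, w, r)                              # Alg. 1, line 4
        H = [h(X[t], t) for t in range(T)]
        grid = np.linspace(0.0, 4.0, 201)
        losses = [sum(np.exp(-y[t] * (F[t] + a * H[t])).sum()
                      for t in range(T)) for a in grid]
        a_hat = shrink * grid[int(np.argmin(losses))]      # lines 5-6
        for t in range(T):                                 # line 7
            F[t] += a_hat * H[t]
        ensemble.append((a_hat, h))
    return ensemble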
3.1 Weak Learners
Regression trees have proven very effective when used as weak learners with gradient boosting [23].
An important advantage is that training regression trees needs practically no parameter tuning and
is very efficient when a greedy top-down approach is used [8].
Decision stumps represent a special case of single-level regression trees. Despite their simplicity,
they have been demonstrated to achieve a high performance in challenging tasks such as face and
object detection [24, 25]. In cases where feature dimensionality D is very large, decision stumps
may be preferred over regression trees to reduce training time.
Regression Trees: We use trees whose splits operate on a single dimension of the feature vector,
and follow the top-down greedy tree learning approach described in [8]. The top split is learned first,
seeking to minimize
argmin_{n∈{1,...,D}, a_1, a_2, {τ^1,...,τ^T}} Σ_{t=1}^{T} [ Σ_{i=1}^{N^t} 1_{x_i^t[n] ≥ τ^t} w_ik^t (a_1 − r_ik^t)^2 + Σ_{i=1}^{N^t} (1 − 1_{x_i^t[n] ≥ τ^t}) w_ik^t (a_2 − r_ik^t)^2 ],   (6)

where x[n] ∈ R denotes the value of the nth dimension of x and 1_{·} is the indicator function. The difference w.r.t. classic regression trees is that, besides learning the values of a_1, a_2 and n, our approach requires the tree to also learn a threshold τ^t ∈ R per task. Given that each split operates on a single attribute x[n], the resulting θ̂^t is sparse, and learned one component at a time as the tree is built.
Once the top split is learned, a new split is trained on each of its child leaves, in a recursive manner.
This process is repeated until the maximum depth L, given as a parameter, is reached, or there are
not enough samples to learn a new node at a given leaf.
Decision Stumps: Decision stumps consist of a single split and return values a_1, a_2 = ±1. If also r_ik^t = ±1, which is true when boosting with the exponential loss, then it can be demonstrated that
minimizing Eq (6) can be separated into T independent minimization problems for all D attributes
for each n. Once this is done, a quick search can be performed to determine the n that minimizes
Eq. (6). This makes decision stumps feasible for large-scale applications with very high dimensional
feature spaces.
In the special case of the exponential loss and decision stumps, it can be shown that Alg. 1 reduces
to a procedure similar to classic AdaBoost [26], with the exception that weak learner search is done
in the multi-task manner described above.
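For example, the per-task part of the stump search can be sketched as follows (one fixed polarity for brevity; the flipped stump is handled symmetrically). Because the per-task errors simply add, Eq. (6) for a fixed attribute n decomposes into T independent scans of this kind, and a final scan over n selects the best attribute.

import numpy as np

def best_task_threshold(X_t, w_t, r_t, n):
    # for one task and attribute n, return the threshold tau minimizing
    # the weighted squared error of a stump with leaf values +1 / -1
    v = X_t[:, n]
    best_tau, best_err = None, np.inf
    for tau in np.unique(v):
        pred = np.where(v >= tau, 1.0, -1.0)
        err = float((w_t * (pred - r_t) ** 2).sum())
        if err < best_err:
            best_err, best_tau = err, tau
    return best_tau, best_err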
4 Evaluation
We evaluated our approach on two challenging domain adaptation problems for which annotation
is very time-consuming, representative of the problems we seek to address. We first describe the
datasets, our experimental setup and baselines, and finally present and discuss the obtained results.
4.1 Datasets
Path Classification Tracing arbors of curvilinear structures is a well studied problem that finds
applications in a broad range of fields from neuroscience to photogrammetry. We consider the
detection of 3D curvilinear structures in 3D image stacks of Olfactory Projection Fibers (OPF)
using 2D aerial road images (see Fig. 1(c,d)). For this problem, the task is to predict whether a
tubular path between two image locations belongs to a curvilinear structure. We used a publicly-available dataset [11] of 2D aerial images of road networks as the source domain and 3D stacks of
Olfactory Projection Fibers (OPF) from the DIADEM challenge as the target domain. The source
domain consists of six fully-labeled 2D aerial road images and the target domain contains eight
fully-labeled 3D stacks. We aim at using large amounts of labeled data from 2D road images to
leverage learning in the 3D stacks. This is a clear scenario where transfer learning can be highly
beneficial, because labeling 2D images is much easier than annotating 3D stacks. Therefore, being
able to take advantage of 2D data is essential to reduce tedious 3D labeling effort.
Mitochondria Segmentation: Mitochondria are organelles that play an important role in cellular
functioning. The goal of this task is to segment mitochondria from large 3D Electron Microscopy
(EM) stacks of 5 nm voxel size, acquired from the brain of a rat. As in the path classification
problem, 3D annotations are time-consuming and exploiting already-annotated stacks is essential
to speed up analysis. The source domain is a fully-labeled EM stack from the Striatum region
of 853x506x496 voxels with 39 labeled mitochondria. The target domain consists of two stacks
acquired from the Hippocampus, one a training volume of size 1024x653x165 voxels and the other
a test volume that is 1024x883x165 voxels, with 10 and 42 labeled mitochondria in each respectively.
The target test volume is fully-labeled, while the training one is partially annotated, similar to a real
scenario. To capture contextual information, state-of-the-art methods typically use filter response
vectors of more than 100k dimensions, which in combination with the size of this dataset, makes the
use of linear latent space models difficult and the direct application of kernel methods infeasible.
4.2 Experimental Setup
For path classification we employ a dictionary whose codewords are Histogram of Gradient Deviations (HGD) descriptors, as in [11]. This is well suited for characterizing tubular structures and
can be applied in the same way to 2D and 3D images. This allows us, in theory, to apply a classifier trained on 2D images to 3D volumes. However, differences in appearance and geometry of the
structures may potentially adversely affect classifier accuracy when 2D-trained ones are applied to
3D stacks, which motivates domain adaptation. We use half of the target domain for training and
half for testing. 2500 positive and negative samples are extracted from each image through random
sampling, as in [11]. This results in balanced sets of 30k samples for training in the source domain,
and 20k for training and 20k for testing in the target domain.
To simulate the lack of training data, we randomly pick an equal number of positive and negative
samples for training from the target domain. The HGD codewords are extracted from the road
images and used for both domains to generate consistent feature vectors. We employ gradient
boosted trees, which in our experiments outperformed boosted stumps and kernel SVMs. For all
[Figure 3 plot: test error (2%–10%) versus number of training samples in TD (20–1000), with curves for Our Approach, Kernel CCA, Chapelle et al., Pooling, TD only, and Full TD.]
Figure 3: Path Classification: Median, lower and upper quartiles of the test error as the number of
training samples is varied. Our approach nears Full TD performance with as few as 70 training samples in the target domain and significantly outperforms the baseline methods. Best viewed in color.
the boosting-based baselines we set the maximum tree depth to L = 3, equivalent to a maximum of
8 leaves, and shrinkage ν = 0.1, as in [8]. The number of boosting iterations is set to K = 500. For
this dataset we report the test error computed as the percentage of mis-classified examples.
For mitochondria segmentation we use the boosting-based method of [27], which is optimized for 3D
stacks and whose source code is publicly available. This method is based on boosted stumps, which
makes it very efficient at both train and test time. Similar to [27], we group voxels into supervoxels to
reduce training and testing time, which yields 15k positive and 275k negative supervoxel samples in
the source domain. In the target domain it renders 12k negative training samples. To simulate a real
scenario, we create 10 different transfer learning problems using the samples from one mitochondria
at a time as positives, which translates into approximately 300 positive training supervoxels each.
We use the default parameters provided by the authors of [27] in their source code (K = 2000), and
we evaluate segmentation performance with the Jaccard Index, as in [27].
4.3 Baselines
On each dataset, we compare our approach against the following baselines: training with reference
or target domain data only (shown as SD only and TD only), training a single classifier with both target and source domain data (Pooling), and with the multi-task approach of [10] (shown as Chapelle
et al.). We evaluate performance with varying amounts of supervision in the target domain, and also
show the performance of a classifier trained with all the available labeled data, shown as Full TD,
which represents fully supervised performance on this domain and is useful in gauging the relative
performance improvement of each method.
We compare to linear Canonical Correlation Analysis (CCA) and Kernel CCA (KCCA) [4] for learning a shared latent space on the path classification dataset, and use a Radial Basis kernel function
for KCCA, which is a commonly used kernel. Its bandwidth is set to the mean distance across the
training observations. The data size and dimensionality of the mitochondria dataset is prohibitive
for these methods, and instead we compare to Mean-Variance Normalization (MVN) and Histogram
Matching (HM) that are common normalizations one might apply to compensate for acquisition artifacts. MVN normalizes each input 3D intensity patch to have a unit variance and zero-mean, useful
for compensating for linear brightness and contrast changes in the image. HM applies a non-linear
transformation and normalizes the intensity values of one data volume such that the histogram of its
intensities matches the other.
4.4 Results: Path Classification
The results of applying our method and the baselines for path classification are shown in Fig. 3. Our
approach outperforms the baselines, and the difference in performance is particularly accentuated
in the case of very few training samples. The next best competitor is the multi-task method of [10],
although it exhibits a much higher variance than our approach and performs poorly when only provided a few labeled target examples. This is also the case for KCCA. The results of linear CCA
are not shown in the plots because it yielded very low performance compared to the other baselines,
[Figure 4 plot: Jaccard Index (0.4–0.65) box plots for SD only, TD only, Pooling, Pooling + MVN, Pooling + HM, Chapelle et al., and Our Approach, with Full TD shown for reference.]
Figure 4: Mitochondria Segmentation: Box plot of the Jaccard index measure for our method and
the baselines over 10 runs on the target domain. Simple Mean-Variance Normalization (MVN)
and Histogram Matching (HM) although helpful are unable to fully correct for differences between
acquisitions. In contrast, our method yields a higher performance without the need for such priors
and is able to faithfully leverage the source domain data to learn from relatively few examples in the
target domain, outperforming the baseline methods.
achieving a 14% error rate with 1k labeled examples and its performance significantly degrading
with fewer training samples. Similarly, SD only performance is 16%.
Our approach is very close to Full TD in performance when using as few as 70 training samples, even
though the Full TD classifier was trained with 20k samples from the target domain. This highlights
the ability of our method to effectively leverage the large amounts of source-domain data. As shown
in Fig. 3, there is a clear tendency for all methods to converge at the value of Full TD, although
our approach does so significantly faster. The low performance of Chapelle et al. [10] suggests
that modeling the domain shift using shared and task-specific boundaries, as is commonly done in
multi-task learning methods, is not a good model for domain adaptation problems such as the ones
shown in Fig. 1. This gets accentuated by the parameter tuning required by [10], done through crossvalidation, that performs poorly when only afforded a few labeled samples in the target domain and
yields a longer training time. The method of [10] took 25 minutes to train, while our approach only
took between 2 and 15 minutes, depending on the amount of labeled target data.
4.5 Results: Mitochondria Segmentation
A box plot showing the distribution of the VOC scores throughout 10 different runs is shown in
Fig. 4. Our approach significantly outperforms the multi-task method of [10] and the other baselines, followed in performance by pooling with mean-variance normalization (MVN) and histogram
matching (HM). In contrast, our method yields higher performance and smaller variance over the
different runs without the need for such priors. From a practical point of view, our approach does
not require parameter tuning and cross-validation is not necessary. This can be a bottleneck in some
scenarios where large volumes of data are used for training. For this task, training our method took
less than an hour per run, while [10] took over 7 hours due to cross-validation.
5 Conclusion
In this paper we presented an approach for performing non-linear domain adaptation with boosting.
Our method learns a task-independent decision boundary in a common feature space, obtained via
a non-linear mapping of the features in each task. This contrasts recent approaches that learn taskspecific boundaries and is better suited for problems in domain adaptation where each task is of the
same decision problem, but whose features have undergone an unknown transformation. In this setting, we illustrated how the boosting-trick can be used to define task-specific feature mappings and
effectively model non-linearity, offering distinct advantages over kernel-based approaches both in
accuracy and efficiency. We evaluated our approach on two challenging bio-medical datasets where
it achieved a significant gain over using labeled data from either domain alone and outperformed
recent multi-task learning methods.
References
[1] Jiang, J.: A literature survey on domain adaptation of statistical classifiers. (2008)
[2] Caruana, R.: Multitask Learning. Machine Learning 28 (1997)
[3] Evgeniou, T., Micchelli, C., Pontil, M.: Learning Multiple Tasks with Kernel Methods. JMLR
6 (2005)
[4] Bach, F.R., Jordan, M.I.: Kernel Independent Component Analysis. JMLR 3 (2002) 1–48
[5] Ek, C.H., Torr, P.H., Lawrence, N.D.: Ambiguity Modelling in Latent Spaces. In: MLMI.
(2008)
[6] Salzmann, M., Ek, C.H., Urtasun, R., Darrell, T.: Factorized Orthogonal Latent Spaces. In:
AISTATS. (2010)
[7] Memisevic, R., Sigal, L., Fleet, D.J.: Shared Kernel Information Embedding for Discriminative Inference. PAMI (April 2012) 778–790
[8] Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer (2001)
[9] Zheng, Z., Zha, H., Zhang, T., Chapelle, O., Sun, G.: A General Boosting Method and Its
Application to Learning Ranking Functions for Web Search. In: NIPS. (2007)
[10] Chapelle, O., Shivaswamy, P., Vadrevu, S., Weinberger, K., Zhang, Y., Tseng, B.: Boosted
Multi-Task Learning. Machine Learning (2010)
[11] Turetken, E., Benmansour, F., Fua, P.: Automated Reconstruction of Tree Structures Using
Path Classifiers and Mixed Integer Programming. In: CVPR. (June 2012)
[12] Baxter, J.: A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research
(2000)
[13] Ando, R.K., Zhang, T.: A Framework for Learning Predictive Structures from Multiple Tasks
and Unlabeled Data. JMLR 6 (2005) 1817–1853
[14] Daumé, H.: Bayesian Multitask Learning with Latent Hierarchies. In: UAI. (2009)
[15] Kumar, A., Daumé, H.: Learning Task Grouping and Overlap in Multi-task Learning. In:
ICML. (2012)
[16] Xue, Y., Liao, X., Carin, L., Krishnapuram, B.: Multi-task Learning for Classification with
Dirichlet Process Priors. JMLR 8 (2007)
[17] Jacob, L., Bach, F., Vert, J.P.: Clustered Multi-task Learning: a Convex Formulation. In:
NIPS. (2008)
[18] Saenko, K., Kulis, B., Fritz, M., Darrell, T.: Adapting Visual Category Models to New Domains. In: ECCV. (2010)
[19] Shon, A.P., Grochow, K., Hertzmann, A., Rao, R.P.N.: Learning Shared Latent Structure for
Image Synthesis and Robotic Imitation. In: NIPS. (2006) 1233–1240
[20] Kulis, B., Saenko, K., Darrell, T.: What You Saw is Not What You Get: Domain Adaptation
Using Asymmetric Kernel Transforms. In: CVPR. (2011)
[21] Gopalan, R., Li, R., Chellappa, R.: Domain Adaptation for Object Recognition: An Unsupervised Approach. In: ICCV. (2011)
[22] Rosset, S., Zhu, J., Hastie, T.: Boosting as a Regularized Path to a Maximum Margin Classifier.
JMLR (2004)
[23] Caruana, R., Niculescu-Mizil, A.: An Empirical Comparison of Supervised Learning Algorithms. In: ICML. (2006)
[24] Viola, P., Jones, M.: Rapid Object Detection Using a Boosted Cascade of Simple Features. In:
CVPR. (2001)
[25] Ali, K., Fleuret, F., Hasler, D., Fua, P.: A Real-Time Deformable Detector. PAMI 34(2)
(February 2012) 225–239
[26] Freund, Y., Schapire, R.: A Short Introduction to Boosting. Journal of Japanese Society
for Artificial Intelligence 14(5) (1999) 771–780
[27] Becker, C., Ali, K., Knott, G., Fua, P.: Learning Context Cues for Synapse Segmentation. TMI
(2013) In Press.
4,643 | 5,201 | Modeling Clutter Perception using Parametric
Proto-object Partitioning
Wen-Yu Hua
Department of Statistics
Pennsylvania State University
wxh182@psu.edu
Chen-Ping Yu
Department of Computer Science
Stony Brook University
cheyu@cs.stonybrook.edu
Dimitris Samaras
Department of Computer Science
Stony Brook University
samaras@cs.stonybrook.edu
Gregory J. Zelinsky
Department of Psychology
Stony Brook University
Gregory.Zelinsky@stonybrook.edu
Abstract
Visual clutter, the perception of an image as being crowded and disordered, affects aspects of our lives ranging from object detection to aesthetics, yet relatively
little effort has been made to model this important and ubiquitous percept. Our
approach models clutter as the number of proto-objects segmented from an image, with proto-objects defined as groupings of superpixels that are similar in
intensity, color, and gradient orientation features. We introduce a novel parametric method of clustering superpixels by modeling mixture of Weibulls on Earth
Mover?s Distance statistics, then taking the normalized number of proto-objects
following partitioning as our estimate of clutter perception. We validated this
model using a new 90-image dataset of real world scenes rank ordered by human
raters for clutter, and showed that our method not only predicted clutter extremely
well (Spearman?s ? = 0.8038, p < 0.001), but also outperformed all existing clutter perception models and even a behavioral object segmentation ground truth. We
conclude that the number of proto-objects in an image affects clutter perception
more than the number of objects or features.
1 Introduction
Visual clutter, defined colloquially as a "confused collection" or a "crowded disorderly state", is
a dimension of image understanding that has implications for applications ranging from visualization and interface design to marketing and image aesthetics. In this study we apply methods from
computer vision to quantify and predict human visual clutter perception.
The effects of visual clutter have been studied most extensively in the context of an object detection
task, where models attempt to describe how increasing clutter negatively impacts the time taken to
find a target object in an image [19][25][29][18][6]. Visual clutter has even been suggested as a
surrogate measure for set size effect, the finding that search performance often degrades with the
number of objects in a scene [32]. Because human estimates of the number of objects in a scene
are subjective and noisy - one person might consider a group of trees to be an object (a forest or a
grove) while another person might label each tree in the same scene as an ?object?, or even each
trunk or branch of every tree - it may be possible to capture this seminal search relationship in an
objectively defined measure of visual clutter [21][25]. One of the earliest attempts to model visual
clutter used edge density, i.e. the ratio of the number of edge pixels in an image to image size
[19]. The subsequent feature congestion model ignited interest in clutter perception by estimating
Figure 1: How can we quantify set size or the number of objects in these scenes, and would this
object count capture the perception of scene clutter?
image complexity in terms of the density of intensity, color, and texture features in an image [25].
However, recent work has pointed out limitations of the feature congestion model [13][21], leading
to the development of alternative approaches to quantifying visual clutter [25][5][29][18].
Our approach is to model visual clutter in terms of proto-objects: regions of locally similar features
that are believed to exist at an early stage of human visual processing [24]. Importantly, proto-objects
are not objects, but rather the fragments from which objects are built. In this sense, our approach
finds a middle ground between features and objects. Previous work used blob detectors to segment
proto-objects from saliency maps for the purpose of quantifying shifts of visual attention [31], but
this method is limited in that it results in elliptical proto-objects that do not capture the complexity
or variability of shapes in natural scenes. Alternatively, it may be possible to apply standard image
segmentation methods to the task of proto-object discovery. While we believe this approach has
merit (see Section 4.3), it is also limited in that the goal of these methods is to approximate a human
segmented ground truth, where each segment generally corresponds to a complete and recognizable
object. For example, in the Berkeley Segmentation Dataset [20] people were asked to segment each
image into 2 to 20 equally important and distinguishable things, which results in many segments
being actual objects. However, one rarely knows the number of objects in a scene, and ambiguity in
what constitutes an object has even led some researchers to suggest that obtaining an object ground
truth for natural scenes is an ill-posed problem [21].
Our clutter perception model uses a parametric method of proto-object partitioning that clusters superpixels, and requires no object ground truth. In summary, we create a graph having superpixels as
nodes, then compute feature similarity distances between adjacent nodes. We use Earth Mover's Distance (EMD) [26] to perform pair-wise comparisons of feature histograms over all adjacent nodes,
and model the EMD statistics with a mixture of Weibulls to solve an edge-labeling problem, which
identifies and removes between-cluster edges to form isolated superpixel groups that are subsequently merged. We refer to these merged image fragments as proto-objects. Our approach is based
on the novel finding that EMD statistics can be modeled by a Weibull distribution (Section 2.2),
and this allows us to model such similarity distance statistics with a mixture of Weibull distribution,
resulting in extremely efficient and robust superpixel clustering in the context of our model. Our
method runs in linear time with respect to the number of adjacent superpixel pairs, and has an end-to-end run time of 15-20 seconds for a typical 0.5 megapixel image, a size that many supervised
segmentation methods cannot yet accommodate using desktop hardware [2][8][14][23][34].
2 Proto-object partitioning
2.1 Superpixel pre-processing and feature similarity
To merge similar fragments into a coherent proto-object region, the term fragment and the measure
of coherence (similarity) must be defined. We define an image fragment as a group of pixels that
share similar low-level image features: intensity, color, and orientation. This conforms with processing in the human visual system, and also makes a fragment analogous to an image superpixel,
which is a perceptually meaningful atomic region that contains pixels similar in color and texture
[30]. However, superpixel segmentation methods in general produce a fixed number of superpixels
from an image, and groups of nearby superpixels may belong to the same proto-object due to the intended over-segmentation. Therefore, we extract superpixels as image fragments for pre-processing,
and subsequently merge similar superpixels into proto-objects. We define that a pair of adjacent superpixels belong to a coherent proto-object if they are similar in all three low-level image features.
Thus we need to determine a similarity threshold for each of the three features that separates the
similarity distance values into "similar" and "dissimilar" populations, as detailed in Section 2.2.
In this work, the similarity statistics are based on comparing histograms of intensity, color, and orientation features from an image fragment. The intensity feature is a 1D 256-bin histogram, the color
feature is a 76×76 (8-bit color) 2D histogram using hue and saturation from the HSV colorspace,
and the orientation feature is a symmetrical 1D 360-bin histogram using gradient orientations, similar to the HOG feature [10]. All three feature histograms are normalized to have the same total mass,
such that bin counts sum to one.
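As an illustration, the three normalized histograms could be computed along the following lines. This is a sketch assuming an RGB patch and OpenCV for color conversion and gradients; the helper name and the handling of OpenCV's 0-179 hue range are our assumptions, not the authors' implementation:

```python
import numpy as np
import cv2

def fragment_histograms(patch_rgb):
    """Normalized intensity, hue-saturation, and gradient-orientation histograms."""
    gray = cv2.cvtColor(patch_rgb, cv2.COLOR_RGB2GRAY)
    hsv = cv2.cvtColor(patch_rgb, cv2.COLOR_RGB2HSV)

    # 1D, 256-bin intensity histogram
    h_int, _ = np.histogram(gray, bins=256, range=(0, 256))

    # 2D, 76x76 hue-saturation histogram
    h_col, _, _ = np.histogram2d(hsv[..., 0].ravel(), hsv[..., 1].ravel(),
                                 bins=76, range=[[0, 180], [0, 256]])

    # symmetric 1D, 360-bin gradient-orientation histogram
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.degrees(np.arctan2(gy, gx)) % 360.0
    h_ori, _ = np.histogram(angles, bins=360, range=(0, 360))

    # normalize each histogram to unit total mass
    unit = lambda h: h.astype(float) / max(h.sum(), 1)
    return unit(h_int), unit(h_col.ravel()), unit(h_ori)
```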
We use Earth Mover's Distance (EMD) to compute the similarity distance between feature histograms [26], which is known to be robust to partially matching histograms. For any pair of adjacent superpixels $v_a$ and $v_b$, their normalized feature similarity distances for each of the intensity,
color, and orientation features are computed as $x_{n;f} = \mathrm{EMD}(v_{a;f}, v_{b;f}) / \widehat{\mathrm{EMD}}_f$, where $x_{n;f}$ denotes the similarity (0 is exactly the same, and 1 means completely opposite) between the $n$th pair
($n = 1, \ldots, N$) of nodes $v_a$ and $v_b$ under feature $f \in \{i, c, o\}$ as intensity, color, and orientation.
$\widehat{\mathrm{EMD}}_f$ is the maximum possible EMD for each of the three image features; it is well defined in
this situation such that the largest difference between intensities is black to white, hues that are
180° apart, and a horizontal gradient against a vertical gradient. Therefore, $\widehat{\mathrm{EMD}}_f$ normalizes
$x_{n;f} \in [0, 1]$. In the subsequent sections, we explain our proposed method for finding the adaptive
similarity threshold from $x_f$, the vector of EMDs of all pairs of adjacent nodes.
2.2 EMD statistics and Weibull distribution
Any pair of adjacent superpixels is either similar enough to belong to the same proto-object, or belongs
to different proto-objects, as separated by the adaptive similarity threshold $\tau_f$ that is different
for every image. We formulate this as an edge labeling problem: given a graph $G = (V, E)$, where
$v_a \in V$ and $v_b \in V$ are two adjacent nodes (superpixels) having edge $e_{a,b} \in E$, $a \neq b$, between
them (also the $n$th edge of $G$), the task is to label the binary indicator variable $y_{n;f} = I(x_{n;f} < \tau_f)$
on edge $e_{a,b}$ such that $y_{n;f} = 1$ if $x_{n;f} < \tau_f$, which means $v_{a;f}$ and $v_{b;f}$ are similar (belong to the
same proto-object); otherwise $y_{n;f} = 0$ if $v_{a;f}$ and $v_{b;f}$ are dissimilar (belong to different proto-objects).
Once $\tau_f$ is computed, removing the edges such that $y_f = 0$ results in isolated clusters of locally similar image patches, which are the desired groups of proto-objects.
Intuitively, any pair of adjacent nodes is either within the same proto-object cluster or between
different clusters ($y_{n;f} \in \{1, 0\}$); therefore we consider two populations (the within-cluster edges
and the between-cluster edges) to be modeled from the density of $x_f$ in a given image. In theory,
this would mean that the density of $x_f$ is a distribution exhibiting bi-modality, such that the left
mode corresponds to the set of $x_f$ that are considered similar and coherent, while the right mode
contains the set of $x_f$ that represent dissimilarity. At first thought, applying k-means with $k = 2$
or a mixture of two Gaussians would allow estimation of the two populations. However, there is
no evidence showing that similarity distances follow symmetrical or normal distributions. In the
following, we argue that the similarity distances $x_f$ computed by EMD follow a Weibull distribution,
a member of the Exponential family that is skewed in shape.
We define $\mathrm{EMD}(P, Q) = \left(\sum_{i}^{m}\sum_{j}^{n} f_{ij}^{0}\, d_{ij}\right) / \left(\sum_{i}^{m}\sum_{j}^{n} f_{ij}^{0}\right)$, with an optimal flow $f_{ij}^{0}$ such that
$\sum_{j} f_{ij}^{0} \le p_i$, $\sum_{i} f_{ij}^{0} \le q_j$, $\sum_{i,j} f_{i,j}^{0} = \min(\sum_{i} p_i, \sum_{j} q_j)$, and $f_{ij}^{0} \ge 0$, where $P =
\{(x_1, p_1), \ldots, (x_m, p_m)\}$ and $Q = \{(y_1, q_1), \ldots, (y_n, q_n)\}$ are the two signatures to be compared,
and $d_{ij}$ denotes a dissimilarity metric (i.e., $L_2$ distance) between $x_i$ and $y_j$ in $\mathbb{R}^d$. When $P$ and
$Q$ are normalized to have the same total mass, EMD becomes identical to Mallows distance [17],
defined as $M_p(X, Y) = \left(\frac{1}{n}\sum_{i=1}^{n} |x_i - y_i|^p\right)^{1/p}$, where $X$ and $Y$ are sorted vectors of the same size,
and Mallows distance is an $L_p$-norm based distance measurement. Furthermore, $L_p$-norm based
distance metrics are Weibull distributed if the two feature vectors to be compared are correlated
and non-identically distributed [7]. We show that our feature assumptions are satisfied in Section
4.1. Hence, we can model each feature of $x_f$ as a mixture of two Weibull distributions separately,
and compute the corresponding $\tau_f$ as the boundary location between the two components of the
mixture. Although the Weibull distribution has been used in modeling actual image features such
as texture and edges [12][35], it has not been used to model EMD similarity distance statistics until
now.
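For 1D histograms of equal total mass, the Mallows equivalence above makes the EMD particularly simple: with $p = 1$ it reduces to the $L_1$ distance between the cumulative histograms. A minimal sketch of both quantities (illustrative, not the authors' code):

```python
import numpy as np

def emd_1d(hist_p, hist_q):
    """EMD between two 1D histograms of equal total mass: L1 distance of the CDFs."""
    return np.abs(np.cumsum(hist_p) - np.cumsum(hist_q)).sum()

def mallows(x, y, p=1):
    """Mallows distance M_p between two equally sized samples (sorted vectors)."""
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)
```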
2.3 Weibull mixture model
Our Weibull mixture model (WMM) takes the following general form:

$$W^K(x; \Theta) = \sum_{k=1}^{K} \lambda_k\, \phi(x; \theta_k), \qquad \phi(x; \alpha, \beta, c) = \frac{\beta}{\alpha}\left(\frac{x-c}{\alpha}\right)^{\beta-1} e^{-\left(\frac{x-c}{\alpha}\right)^{\beta}} \qquad (1)$$
where $\theta_k = (\alpha_k, \beta_k, c_k)$ is the parameter vector for the $k$th mixture component, $\phi$ denotes the
three-parameter Weibull pdf with the scale ($\alpha$), shape ($\beta$), and location ($c$) parameters, and the mixing
parameters $\lambda_k$ satisfy $\sum_k \lambda_k = 1$. In this case, our two-component WMM contains a 7-parameter
vector $\Theta = (\alpha_1, \beta_1, c_1, \alpha_2, \beta_2, c_2, \lambda)$ that yields the following complete form:
$$W^2(x; \Theta) = \lambda\, \frac{\beta_1}{\alpha_1}\left(\frac{x-c_1}{\alpha_1}\right)^{\beta_1-1} e^{-\left(\frac{x-c_1}{\alpha_1}\right)^{\beta_1}} + (1-\lambda)\, \frac{\beta_2}{\alpha_2}\left(\frac{x-c_2}{\alpha_2}\right)^{\beta_2-1} e^{-\left(\frac{x-c_2}{\alpha_2}\right)^{\beta_2}} \qquad (2)$$
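Under this notation, Eq. 2 maps directly onto scipy's three-parameter Weibull density, with shape $\beta$, scale $\alpha$, and location $c$; a sketch of the mixture pdf (the 7-tuple ordering follows $\Theta$ as defined above):

```python
import numpy as np
from scipy.stats import weibull_min

def wmm2_pdf(x, theta):
    """Two-component Weibull mixture density W^2(x; Theta) of Eq. 2.

    theta = (alpha1, beta1, c1, alpha2, beta2, c2, lam).
    """
    a1, b1, c1, a2, b2, c2, lam = theta
    return (lam * weibull_min.pdf(x, b1, loc=c1, scale=a1)
            + (1.0 - lam) * weibull_min.pdf(x, b2, loc=c2, scale=a2))
```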
To estimate the parameters of $W^2(x; \Theta)$, we tested two optimization methods: maximum likelihood
estimation (MLE) and nonlinear least squares minimization (NLS). Both MLE and NLS require an
initial parameter vector $\Theta^0$ to begin the optimization, and the choice of $\Theta^0$ is crucial to the convergence
to the optimal parameter vector $\hat{\Theta}$. In our case, the initial guess is quite well defined: for any node
of a specific feature $v_{j;f}$ and its set of adjacent neighbors $v_{j;f}^{N} = N(v_{j;f})$, the neighbor that is most
similar to $v_{j;f}$ is most likely to belong to the same cluster as $v_{j;f}$, especially under an
over-segmentation scenario. Therefore, the initial guess for the first mixture component $\theta_{1;f}^{0}$ is the
MLE of $\phi_{1;f}(\theta_{1;f}^{0}; x_f^0)$, such that $x_f^0 = \{\min(\mathrm{EMD}(v_{j;f}, v_{j;f}^{N}))\,|\,v_{j;f};\ j = 1, \ldots, z,\ f \in \{i, c, o\}\}$,
where $z$ is the total number of superpixels and $x_f^0 \subset x_f$. After obtaining $\theta_1^0 = (\alpha_1^0, \beta_1^0, c_1^0)$, several
$\theta_2^0$ can be computed for the re-start purpose via MLE from the data taken by $Pr(x_f | \theta_1^0) > p$, where
$Pr$ is the cumulative distribution function and $p$ is a range of percentiles. Together, they form the
complete initial guess parameter vector $\Theta^0 = (\alpha_1^0, \beta_1^0, c_1^0, \alpha_2^0, \beta_2^0, c_2^0, \lambda^0)$, where $\lambda^0 = 0.5$.
2.3.1 Parameter estimation
Maximum likelihood estimation (MLE) estimates the parameters by maximizing the log-likelihood
function of the observed samples. The log-likelihood function of $W^2(x; \Theta)$ is given by:

$$\ln L(\Theta; x) = \sum_{n=1}^{N} \ln\left\{\lambda\, \frac{\beta_1}{\alpha_1}\left(\frac{x_n-c_1}{\alpha_1}\right)^{\beta_1-1} e^{-\left(\frac{x_n-c_1}{\alpha_1}\right)^{\beta_1}} + (1-\lambda)\, \frac{\beta_2}{\alpha_2}\left(\frac{x_n-c_2}{\alpha_2}\right)^{\beta_2-1} e^{-\left(\frac{x_n-c_2}{\alpha_2}\right)^{\beta_2}}\right\} \qquad (3)$$
Due to the complexity of this log-likelihood function and the presence of the location parameters
$c_1$ and $c_2$, we adopt the Nelder-Mead method as a derivative-free MLE that performs parameter estimation by direct search [22][16], minimizing the negative log-likelihood function
of Eq. 3.
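A minimal sketch of this derivative-free MLE, assuming the wmm2_pdf helper above; the clipping constant guarding against log(0), and the absence of an explicit [0, 1] constraint on $\lambda$, are simplifications of our own:

```python
import numpy as np
from scipy.optimize import minimize

def fit_wmm2_mle(x, theta0):
    """Fit W^2 by minimizing the negative log-likelihood of Eq. 3 with Nelder-Mead."""
    def neg_log_lik(theta):
        dens = wmm2_pdf(x, theta)
        return -np.sum(np.log(np.clip(dens, 1e-300, None)))

    res = minimize(neg_log_lik, theta0, method='Nelder-Mead')
    return res.x, -res.fun  # parameter estimate and maximized log-likelihood
```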
For the NLS optimization method, $x_f$ is first approximated with a histogram, much like a box filter
that smoothes a curve. The appropriate histogram bin-width for data representation is computed
by $w = 2(\mathrm{IQR})\, n^{-1/3}$, where IQR is the interquartile range of the data with $n$ observations [15].
This allows us to fit a two-component WMM to the height of each bin with NLS as a curve
fitting problem, which is a robust alternative to MLE when the noise level can be reduced by some
approximation scheme. We then find the least squares minimizer by using the trust-region method
[27][28]. Both the Nelder-Mead MLE algorithm and the NLS method are detailed in the supplementary material.
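The NLS variant could be sketched as follows: the data are summarized by a density histogram with the bin width $w = 2(\mathrm{IQR})\, n^{-1/3}$ given above, and the residuals between the bin heights and the mixture density are minimized with a trust-region solver (again assuming wmm2_pdf; the floor on the bin count is our own safeguard):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import iqr

def fit_wmm2_nls(x, theta0):
    """Fit W^2 to density-histogram bin heights by trust-region least squares."""
    w = 2.0 * iqr(x) * len(x) ** (-1.0 / 3.0)              # bin width
    nbins = max(int(np.ceil((x.max() - x.min()) / w)), 2)  # at least two bins
    heights, edges = np.histogram(x, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    residuals = lambda theta: wmm2_pdf(centers, theta) - heights
    res = least_squares(residuals, theta0, method='trf')   # trust-region reflective
    return res.x, res.cost  # cost is half the sum of squared residuals
```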
Figure 2 shows the WMM fit using the Nelder-Mead MLE method. In addition to the good fit of the
mixture model to the data, it also shows that the right-skewed data (EMD statistics) are remarkably
Weibull; this further validates that EMD statistics follow a Weibull distribution both in theory and in
experiments.
Figure 2: (a) original image, (b) after superpixel pre-processing [1] (977 initial segments), (c) final
proto-object partitioning result (150 segments). Each final segment is shown with its mean RGB
value to approximate proto-object perception. (d) $W^2(x_f; \Theta_f)$ optimized using the Nelder-Mead
algorithm for intensity, (e) color, and (f) orientation based on the image in (b). The red lines indicate
the individual Weibull components; the blue line is the density of the mixture $W^2(x_f; \Theta_f)$.
2.4 Visual clutter model with model selection
At times, the dissimilar population can be highly mixed in with the similar population, and the density
would then resemble more of a single Weibull in shape, as in Figure 2d. Therefore, we fit a single
Weibull as well as a two-component WMM over $x_f$, and apply the Akaike Information Criterion
(AIC) to prevent any possible over-fitting by the two-component WMM. AIC tends to place a
heavier penalty on the simpler model, which is suitable in our case to ensure that the preference
is placed on the two-population mixture models. For models optimized using MLE, the standard
AIC is used; for the NLS cases, the corrected AIC (AICc) for smaller sample sizes (generally when
$n/k \le 40$) with residual sum of squares (RSS) is used, and it is defined as $\mathrm{AICc} = n \ln(\mathrm{RSS}/n) +
2k + 2k(k+1)/(n-k-1)$, where $k$ is the number of model parameters and $n$ is the number of samples.
The optimal $\tau_f$ can then be determined as follows:

$$\tau_f = \begin{cases} \max(x, \epsilon) \ \text{ s.t. } \ \lambda_1\phi_{1;f}(x\,|\,\theta_{1;f}) = \lambda_2\phi_{2;f}(x\,|\,\theta_{2;f}), & \text{if } \mathrm{AIC}(W^2) \le \mathrm{AIC}(W^1) \\ \max\left(\alpha_1\left(-\ln(1-\gamma)\right)^{1/\beta_1},\ \epsilon\right), & \text{otherwise} \end{cases} \qquad (4)$$
The first case applies when the mixture model is preferred: the optimal $\tau_f$ is the crossing point
between the mixture components, and the equality can be solved in linear time by searching over the
values of the vector $x_f$. In the second case, when the single Weibull is preferred by model selection,
$\tau_f$ is calculated by the inverse CDF of $W^1$, which computes the location of a given percentile
parameter $\gamma$. Note that $\tau_f$ is lower bounded by a tolerance parameter $\epsilon$ in both cases to prevent
unusual behavior when an image is nearly blank ($\tau_f \in [\epsilon, 1]$), making $\epsilon$ and $\gamma$ the only model
parameters in our framework.
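Putting Eq. 4 together, threshold selection could look like the following sketch, in the notation above; theta2 and loglik2 are assumed to come from the two-component fit, the linear scan for the crossing point is simplified (in practice it would be restricted to the region between the two component modes), and including the fitted location offset in the inverse CDF is our own choice:

```python
import numpy as np
from scipy.stats import weibull_min

def select_threshold(x, theta2, loglik2, eps, gamma):
    """Adaptive threshold tau_f of Eq. 4 via AIC-based model selection."""
    # single-Weibull alternative W^1 (scipy returns shape, loc, scale)
    b1, c1, a1 = weibull_min.fit(x)
    loglik1 = np.sum(weibull_min.logpdf(x, b1, loc=c1, scale=a1))
    aic1 = 2 * 3 - 2 * loglik1   # 3 parameters
    aic2 = 2 * 7 - 2 * loglik2   # 7 parameters

    if aic2 <= aic1:
        # crossing point of the two weighted mixture components
        a1_, b1_, c1_, a2_, b2_, c2_, lam = theta2
        comp1 = lam * weibull_min.pdf(x, b1_, loc=c1_, scale=a1_)
        comp2 = (1.0 - lam) * weibull_min.pdf(x, b2_, loc=c2_, scale=a2_)
        return max(x[np.argmin(np.abs(comp1 - comp2))], eps)

    # otherwise: inverse CDF of W^1 at percentile gamma
    return max(c1 + a1 * (-np.log(1.0 - gamma)) ** (1.0 / b1), eps)
```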
We perform Principal Component Analysis (PCA) on the similarity distance values $x_f$ of intensity,
color, and orientation and obtain the combined distance feature value by projecting $x_f$ onto the first
principal component, such that the relative importance of each distance feature is captured by its
variance through PCA. This projected distance feature is used to construct a minimum spanning tree
over the superpixels to form the structure of graph $G$, which weakens the inter-cluster connectivity
by removing cycles and other excessive graph connections. Finally, each edge of $G$ is labeled
according to Section 2.2 given the computed $\tau_f$, such that an edge is labeled as 1 (similar) only if
the pair of superpixels is similar in all three features. Edges labeled as 0 (dissimilar) are removed
from $G$ to form isolated clusters (proto-objects), and our visual clutter model produces a normalized
clutter measure between 0 and 1 by dividing the number of proto-objects by the initial number
of superpixels, making it invariant to different scales of superpixel over-segmentation.
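The end of the pipeline could then be sketched as follows, assuming edges is an N×2 array of adjacent-superpixel index pairs, dists the corresponding N×3 matrix of intensity/color/orientation EMDs, and labels_similar the N×3 boolean edge labeling of Section 2.2 (all names are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def clutter_score(edges, dists, labels_similar, n_superpixels):
    """Normalized proto-object count after MST construction and edge removal."""
    # project the three distance features onto their first principal component
    centered = dists - dists.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    proj = proj - proj.min() + 1e-9  # MST edge weights must be positive

    # minimum spanning tree over superpixels, weighted by the projected distance
    graph = csr_matrix((proj, (edges[:, 0], edges[:, 1])),
                       shape=(n_superpixels, n_superpixels))
    mst = minimum_spanning_tree(graph).tocoo()

    # keep an MST edge only if the pair is similar in all three features
    similar = {tuple(e) for e, ok in zip(edges, labels_similar.all(axis=1)) if ok}
    kept = [(a, b) for a, b in zip(mst.row, mst.col)
            if (a, b) in similar or (b, a) in similar]
    rows = [a for a, _ in kept]
    cols = [b for _, b in kept]
    pruned = csr_matrix((np.ones(len(kept)), (rows, cols)),
                        shape=(n_superpixels, n_superpixels))

    n_proto, _ = connected_components(pruned, directed=False)
    return n_proto / float(n_superpixels)
```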
3 Dataset and ground truth
Various in-house image datasets have been used in previous work to evaluate models of visual
clutter. The feature congestion model was evaluated on 25 images of US city/road maps and weather
maps [25]; the models in [5] and [29] were evaluated on another 25 images consisting of 6, 12, or 24
synthetically generated objects arranged into a grid; and the model from [18] used 58 images of six
map or chart categories (airport terminal maps, flowcharts, road maps, subway maps, topographic
charts, and weather maps). In each of these datasets, each image must be rank ordered for visual
clutter with respect to every other image in the set by the same human subject, which is a tiring and
time-consuming process. This rank ordering is essential for a clutter perception experiment as it
establishes a stable clutter metric that is meaningful across participants; alas it limits the dataset size
to the number of images each individual observer can handle. Absolute clutter scales are undesirable
as different raters might use different ranges on this scale.
We created a comparatively large clutter perception dataset consisting of 90 800×600 real-world
images sampled from the SUN Dataset [33] for which there exist human segmentations of
objects and object counts. These object segmentations serve as one of the ground truths in our study.
The high resolution of these images is also important for the accurate perception and assessment
of clutter. The 90 images were selected to constitute six groups based on their ground truth object
counts, with 15 images in each group. Specifically, group 1 had images with object counts in the
1-10 range, group 2 had counts in the 11-20 range, up to group 6 with counts in the 51-60 range.
These 90 images were rated in the laboratory by 15 college-aged participants whose task was to
order the images in terms of least to most perceived visual clutter. This was done by displaying each
image one at a time and asking participants to insert it into an expanding set of previously rated
images. Participants were encouraged to take as much time as they needed, and were allowed to
freely scroll through the existing set of clutter rated images when deciding where to insert the new
image. A different random sequence of images was used for each participant (in order to control for
biases and order effects), and the entire task lasted approximately one hour. The average correlation
(Spearman?s rank-order correlation) over all pairs of participants was 0.6919 (p < 0.001), indicating
good agreement among raters. We used the median ranked position of each image as the ground truth
for clutter perception in our experiments.
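Schematically, both the agreement figure and the ground truth above reduce to a few lines, assuming ranks is a 15×90 array in which ranks[r, i] is rater r's rank for image i (an illustrative sketch):

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def agreement_and_ground_truth(ranks):
    """Average pairwise Spearman's rho among raters and the median-rank ground truth."""
    rhos = [spearmanr(ranks[a], ranks[b]).correlation
            for a, b in combinations(range(ranks.shape[0]), 2)]
    ground_truth = np.median(ranks, axis=0)  # median ranked position per image
    return np.mean(rhos), ground_truth
```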
4 Experiment and results
4.1 Image feature assumptions
In their demonstration that similarity distances adhere to a Weibull distribution, Burghouts et al. [7]
derived and related $L_p$-norm based distances from the statistics of sums [3][4], such that for
non-identical and correlated random variables $X_i$, the sum $\sum_{i=1}^{N} X_i$ is Weibull distributed if the $X_i$ are
upper-bounded with a finite $N$, where $X_i = |s_i - t_i|^p$, $N$ is the dimensionality of the
feature vector, $i$ is the index, and $s, t \in T$ are different sample vectors of the same feature.
The three image features used in this model are finite and upper bounded, and we follow the procedure from [7] with $L_2$ distance to determine whether they are correlated. We consider distances
from one reference superpixel feature vector $s$ to 100 other randomly selected superpixel feature
vectors $T$ (of the same feature), and compute the differences at index $i$ such that we obtain
the random variable $X_i = |s_i - t_i|^p$. Pearson's correlation is then used to determine the relationship between $X_i$ and $X_j$, $i \neq j$, at a confidence level of 0.05. This procedure is repeated 500 times
per image for all three feature types over all 90 images. As predicted, we found an almost perfect
correlation between feature value differences for each of the features tested (Intensity: 100%, Hue:
99.2%, Orientation: 98.97%). This confirms that the low-level image features used in this study
follow a Weibull distribution.
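A sketch of this correlation check for a single reference vector; the random pair sampling and the 0.05 significance level follow the description above, while the seeded generator is our own choice for reproducibility:

```python
import numpy as np
from scipy.stats import pearsonr

def fraction_correlated(s, T, p=2, alpha=0.05, n_pairs=500, seed=0):
    """Fraction of index pairs (i, j), i != j, with significant Pearson correlation.

    s: reference feature vector of dimension d; T: other feature vectors, shape (100, d).
    """
    X = np.abs(T - s) ** p  # X[:, i] realizes the random variable X_i
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_pairs):
        i, j = rng.choice(X.shape[1], size=2, replace=False)
        _, pval = pearsonr(X[:, i], X[:, j])
        hits += pval < alpha
    return hits / n_pairs
```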
Method       WMM-mle  WMM-nls  MS [9]   GB [11]  PL [6]   ED [19]  FC [25]  # Obj    C3 [18]
Correlation  0.8038   0.7966   0.7262   0.6612   0.6439   0.6231   0.5337   0.5255   0.4810

Table 1: Correlations between human clutter perception and all the evaluated methods. WMM is the
Weibull mixture model underlying our proto-object partitioning approach, with both optimization
methods.
4.2 Model evaluation
We ran our model with different parameter settings of $\epsilon \in \{0.01, 0.02, \ldots, 0.20\}$ and $\gamma \in
\{0.5, 0.6, \ldots, 0.9\}$ using SLIC superpixels [1] initialized at 1000 seeds. We then correlated the
number of proto-objects formed after superpixel merging with the ground truth behavioral clutter
perception estimates by computing Spearman's rank correlation (Spearman's $\rho$), following the
convention of [25][5][29][18].
A model using MLE as the optimization method achieved the highest correlation, $\rho = 0.8038$,
$p < 0.001$, with $\epsilon = 0.14$ and $\gamma = 0.8$. Because we did not have separate training/testing sets,
we performed 10-fold cross-validation and obtained an average testing correlation of $r = 0.7599$,
$p < 0.001$. When optimized using NLS, the model achieved a maximum correlation of $\rho = 0.7966$,
$p < 0.001$, with $\epsilon = 0.14$ and $\gamma = 0.4$, and the corresponding 10-fold cross-validation yielded an
average testing correlation of $r = 0.7375$, $p < 0.001$. The high cross-validation averages indicate
that our model is highly robust and generalizes to unseen data.
It is worth pointing out that the optimal value of the tolerance parameter $\epsilon$ showed a peak correlation
at 0.14. To the extent that this is meaningful and extends to people, it suggests that visual clutter
perception may ignore feature dissimilarity on the order of 14% when deciding whether two adjacent
regions are similar and should be merged.
We compared our model to four other state-of-the-art models of clutter perception: the feature congestion model [25], the edge density method [19], the power-law model [6], and the C3 model [18].
Table 1 shows that our model significantly outperformed all of these previously reported methods.
The relatively poor performance of the recent C3 model was surprising, and can probably be attributed to the previous evaluation of that model using charts and maps rather than arbitrary realistic
scenes (personal communication with authors). Collectively, these results suggest that a model that
merges superpixels into proto-objects best describes human clutter perception, and that the benefit of
using a proto-object model for clutter prediction is not small; our model resulted in an improvement
of at least 15% over existing models of clutter perception. Although we did not record run-time
statistics on the other models, our model, implemented in Matlab¹, had an end-to-end (excluding
superpixel pre-processing) run-time of 15-20 seconds on 800×600 images running on a Win7
Intel Core i7 computer with 8 GB RAM.
4.3 Comparison to image segmentation methods
We also attempted to compare our method to state-of-the-art image segmentation algorithms such
as gPb-ucm [2], but found that the method was unable to process our image dataset using either an
Intel Core i7 machine with 8 GB RAM or an Intel Xeon machine with 16 GB RAM, at the high
image resolutions required by our behavioral clutter estimates. A similar limitation was found for
image segmentation methods that utilize gPb contour detection as pre-processing, such as [8][14],
while [23][34] took 10 hours on a single image and did not converge.
Therefore, we limit our evaluation to mean-shift [9] and the graph-based method [11], as they are able
to produce variable numbers of segments based on the unsupervised partitioning of the 90 images
from our dataset. Despite using the best dataset parameter settings for these unsupervised methods,
our method remains the model most highly correlated with the clutter perception ground truth, as shown
in Table 1; moreover, the methods that allow quantification of proto-object set size (WMM, Mean-shift,
and Graph-based) outperformed all of the previous clutter models.
We also correlated the number of objects segmented by humans (as provided in the SUN Dataset)
with the clutter perception ground truth, denoted as # obj in Table 1. Interestingly, despite object
¹Code is available at mysbfiles.stonybrook.edu/~cheyu/projects/proto-objects.html
Figure 3: Top: Four images from our dataset, rank ordered for clutter perception by human raters,
median clutter rank order from left to right: 6, 47, 70, 87. Bottom: Corresponding images after
parametric proto-object partitioning, median clutter rank order from left to right: 7, 40, 81, 83.
count being a human-derived estimate, it produced one of the lowest correlations with clutter perception. This suggests that clutter perception is not determined simply by the number of objects in
a scene; it is the proto-object composition of these objects that is important.
5 Conclusion
We proposed a model of visual clutter perception based on a parametric image partitioning method
that is fast and able to work on large images. This method of segmenting proto-objects from an image using mixtures of Weibull distributions is also novel in that it models similarity distance statistics
rather than feature statistics obtained directly from pixels. Our work also contributes to the behavioral understanding of clutter perception. We showed that our model is an excellent predictor of
human clutter perception, outperforming all existing clutter models, and predicts clutter perception
better than even a behavioral segmentation of objects. This suggests that clutter perception is best
described at the proto-object level, a level intermediate to that of objects and features. Moreover,
our work suggests a means of objectively quantifying a behaviorally meaningful set size for scenes,
at least with respect to clutter perception. We also introduced a new and validated clutter perception
dataset consisting of a variety of scene types and object categories. This dataset, the largest and most
comprehensive to date, will likely be used widely in future model evaluation and method comparison studies. In future work we plan to extend our parametric partitioning method to general image
segmentation and data clustering problems, and to use our model to predict human visual search
behavior and other behaviors that might be affected by visual clutter.
6 Acknowledgment
We thank the authors of [18] for sharing and discussing their code, Dr. Burghouts for providing
detailed explanations of the feature assumptions in [7], and Dr. Matthew Asher for providing
the human search performance data from their work in Journal of Vision, 2013. This work was
supported by NIH Grant R01-MH063748 to G.J.Z., NSF Grant IIS-1111047 to G.J.Z. and D.S., and
the SUBSAMPLE Project of the DIGITEO Institute, France.
References
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE TPAMI, 2012.
[2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation.
IEEE TPAMI, 2010.
[3] E. Bertin. Global fluctuations and Gumbel statistics. Physical Review Letters, 2005.
8
[4] E. Bertin and M. Clusel. Generalised extreme value statistics and sum of correlated variables. Journal of
Physics A, 2006.
[5] M. J. Bravo and H. Farid. Search for a category target in clutter. Perception, 2004.
[6] M. J. Bravo and H. Farid. A scale invariant measure of clutter. Journal of Vision, 2008.
[7] G. J. Burghouts, A. W. M. Smeulders, and J.-M. Geusebroek. The distribution family of similarity distances. In NIPS, 2007.
[8] J. Carreira and C. Sminchisescu. Constrained parametric min-cuts for automatic object segmentation. In
CVPR, 2010.
[9] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE TPAMI,
2002.
[10] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[11] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. IJCV, 2004.
[12] J.-M. Geusebroek and A. W. Smeulders. A six-stimulus theory for stochastic texture. IJCV, 2005.
[13] J. M. Henderson, M. Chanceaux, and T. J. Smith. The influence of clutter on real-world scene search:
Evidence from search efficiency and eye movements. Journal of Vision, 2009.
[14] A. Ion, J. Carreira, and C. Sminchisescu. Image segmentation by figure-ground composition into maximal
cliques. In ICCV, 2011.
[15] A. J. Izenman. Recent developments in nonparametric density estimation. Journal of the American
Statistical Association, 1991.
[16] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright. Convergence properties of the Nelder-Mead
simplex method in low dimensions. SIAM Journal on Optimization, 1998.
[17] E. Levina and P. Bickel. The earth mover's distance is the Mallows distance: some insights from statistics.
In ICCV, 2001.
[18] M. C. Lohrenz, J. G. Trafton, R. M. Beck, and M. L. Gendron. A model of clutter for complex, multivariate geospatial displays. Human Factors, 2009.
[19] M. L. Mack and A. Oliva. Computational estimation of visual complexity. In the 12th Annual Object,
Perception, Attention, and Memory Conference, 2004.
[20] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its
application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001.
[21] M. B. Neider and G. J. Zelinsky. Cutting through the clutter: searching for targets in evolving complex
scenes. Journal of Vision, 2011.
[22] J. A. Nelder and R. Mead. A simplex method for function minimization. The computer journal, 1965.
[23] S. R. Rao, H. Mobahi, A. Y. Yang, S. Sastry, and Y. Ma. Natural image segmentation with adaptive texture
and boundary encoding. In ACCV, 2009.
[24] R. A. Rensink. Seeing, sensing, and scrutinizing. Vision Research, 2000.
[25] R. Rosenholtz, Y. Li, and L. Nakano. Measuring visual clutter. Journal of Vision, 2007.
[26] Y. Rubner, C. Tomasi, and L. J. Guibas. A metric for distributions with applications to image databases.
In ICCV, 1998.
[27] T. Steihaug. The conjugate gradient method and trust regions in large scale optimization. SIAM Journal
on Numerical Analysis, 1983.
[28] P. L. Toint. Towards an efficient sparsity exploiting Newton method for minimization. Sparse Matrices
and Their Uses, 1981.
[29] R. van den Berg, F. W. Cornelissen, and J. B. T. M. Roerdink. A crowding model of visual clutter. Journal
of Vision, 2009.
[30] O. Veksler, Y. Boykov, and P. Mehrani. Superpixels and supervoxels in an energy optimization framework.
In ECCV, 2010.
[31] M. Wischnewski, A. Belardinelli, W. X. Schneider, and J. J. Steil. Where to look next? combining static
and dynamic proto-objects in a tva-based model of visual attention. Cognitive Computation, 2010.
[32] J. M. Wolfe. Visual search. Attention, 1998.
[33] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition
from abbey to zoo. In CVPR, 2010.
[34] A. Y. Yang, J. Wright, Y. Ma, and S. Sastry. Unsupervised segmentation of natural images via lossy data
compression. CVIU, 2008.
[35] V. Yanulevskaya and J.-M. Geusebroek. Significance of the Weibull distribution and its sub-models in
natural image statistics. In Int. Conference on Computer Vision Theory and Applications, 2009.
4,644 | 5,202 | Mid-level Visual Element Discovery
as Discriminative Mode Seeking
Carl Doersch
Carnegie Mellon University
cdoersch@cs.cmu.edu
Abhinav Gupta
Carnegie Mellon University
abhinavg@cs.cmu.edu
Alexei A. Efros
UC Berkeley
efros@cs.berkeley.edu
Abstract
Recent work on mid-level visual representations aims to capture information at a
level of complexity higher than typical "visual words", but lower than full-blown
semantic objects. Several approaches [5, 6, 12, 23] have been proposed to discover
mid-level visual elements that are both 1) representative, i.e., frequently occurring
within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work,
we pose visual element discovery as discriminative mode seeking, drawing connections to the well-known and well-studied mean-shift algorithm [2, 1, 4, 8].
Given a weakly-labeled image collection, our method discovers visually-coherent
patch clusters that are maximally discriminative with respect to the labels. One
advantage of our formulation is that it requires only a single pass through the data.
We also propose the Purity-Coverage plot as a principled way of experimentally
analyzing and evaluating different visual discovery approaches, and compare our
method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art
performance on the MIT Scene-67 dataset.
1 Introduction
In terms of sheer size, visual data is, by most accounts, the biggest "Big Data" out there. But,
unfortunately, most machine learning algorithms (with some notable exceptions, e.g. [13]) are not
equipped to handle it directly, at the raw pixel level, making research on finding good visual representations particularly relevant and timely. Currently, the most popular visual representations in
machine learning are based on "visual words" [24], which are obtained by unsupervised clustering
(k-means) of local features (SIFT) over a large dataset. However, "visual words" is a very low-level
representation, mostly capturing local edges and corners ([21] notes that "visual letters" or "visual
phonemes" would have been a more accurate term). Part of the problem is that the local SIFT features are relatively low-dimensional (128D), and might not be powerful enough to capture anything
of higher complexity. However, switching to a more descriptive feature (e.g. 2,000-dimensional
HOG) causes k-means to produce visually poor clusters due to the curse of dimensionality [5].
Recently, several approaches [5, 6, 11, 12, 15, 23, 26, 27] have proposed mining visual data for discriminative mid-level visual elements, i.e., entities which are more informative than "visual words,"
and more frequently occurring and easier to detect than high-level objects. Most such approaches
require some form of weak per-image labels, e.g., scene categories [12] or GPS coordinates [5] (but
can also run unsupervised [23]), and have been recently used for tasks including image classification
[12, 23, 27], object detection [6], visual data mining [5, 15], action recognition [11], and geometry
estimation [7]. But how are informative visual elements to be identified in the weakly-labeled visual dataset? The idea is to search for clusters of image patches that are both 1) representative, i.e.
frequently occurring within the dataset, and 2) visually discriminative. Unfortunately, algorithms
for finding patches that fit these criteria remain rather ad hoc and poorly understood, and often do not even directly optimize these criteria. Hence, our goal in this work is to quantify the terms "representative" and "discriminative," and show that a formulation which draws inspiration from the well-known, well-understood mean-shift algorithm can produce visual elements that are more representative and discriminative than those of previous approaches.
[Figure 1: two query patches, each shown with its 5 nearest neighbors; distances 2.58-3.16 in the left example vs. 1.01-1.17 in the right example]
Figure 1: The distribution of patches in HOG feature space is very non-uniform and absolute distances cannot be trusted. We show two patches with their 5 nearest neighbors from the Paris Street View dataset [5]; beneath each nearest neighbor is its distance from the query. Although the nearest neighbors on the left are visually much better, their distances are more than twice those on the right, meaning that the actual densities of the two regions will differ by a factor of more than $2^d$, where d is the intrinsic dimensionality of patch feature space. Since this is a 2112-dimensional feature space, we estimate d to be on the order of hundreds.
Mining visual elements from a large dataset is difficult for a number of reasons. First, the search
space is huge: a typical dataset for visual data mining has tens of thousands of images, and finding
something in an image (e.g., finding matches for a visual template) involves searching across tens
of thousands of patches at different positions and scales. To make matters worse, patch descriptors
tend to be on the order of thousands of dimensions; not only is the curse of dimensionality a constant
problem, but we must sift through terabytes of data. And we are searching for a needle in a haystack:
the vast majority of patches are actually uninteresting, either because they are rare (e.g., they may
contain multiple random things in a configuration that never occurs again) or they are redundant due
to the overlapping nature of patches. This suggests the need for an online algorithm, because we
wish to discard much of the data while making as few passes through the dataset as possible.
The well-known mean-shift algorithm [2, 3, 8] has been proposed to address many of these problems.
The goal of mean-shift is to find the local maxima (modes) of a density using a sample from that
density. Intuitively, mean-shift initializes each cluster centroid to a single data point, then iteratively
1) finds data points that are sufficiently similar to each centroid, and, 2) averages these data points
to update the cluster centroid. In the end, each cluster generally depends on only a tiny fraction of
the data, thus eliminating the need to keep the entire dataset in memory.
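For concreteness, a minimal Python sketch of these two alternating steps is given below. It is not from the paper; the flat-kernel choice, Euclidean distance, and all variable names are illustrative assumptions.

```python
import numpy as np

def mean_shift(data, seed, bandwidth, n_iters=50, tol=1e-6):
    """Move a centroid to the mean of nearby points until convergence."""
    centroid = seed.astype(float).copy()
    for _ in range(n_iters):
        dists = np.linalg.norm(data - centroid, axis=1)
        neighbors = data[dists < bandwidth]    # 1) find sufficiently similar points
        if len(neighbors) == 0:
            break
        new_centroid = neighbors.mean(axis=0)  # 2) average them
        if np.linalg.norm(new_centroid - centroid) < tol:
            break
        centroid = new_centroid
    return centroid

# Two well-separated blobs: seeds from each blob converge to that blob's mode.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
print(mean_shift(data, data[0], bandwidth=1.0))
print(mean_shift(data, data[-1], bandwidth=1.0))
```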
However, there is one issue with using classical mean-shift to solve our problem directly: it only
finds local maxima of a single, unlabeled density, which may not be discriminative. But in our
case, we can use the weak labels to divide our data into two different subsets ("positive" (+) and "negative" (-)) and seek visual elements which appear only in the "positive" set and not in the "negative" set. That is, we want to find points in feature space where the density of the positive set is large, and the density of the negative set is small. This can be achieved by maximizing the well-studied density ratio $p_+(x)/p_-(x)$ instead of maximizing the density. While a number of
algorithms exist for estimating ratios of densities (see [25] for a review), we did not find any that
were particularly suitable for finding local maxima of density ratios. Hence, the first contribution of
our paper is to propose a discriminative variant of mean-shift for finding visual elements. Similar to
the way mean-shift performs gradient ascent on a density estimate, our algorithm performs gradient
ascent on the density ratio (section 2). When we perform gradient ascent separately for each element
as in standard mean-shift, however, we find that the most frequently-occuring elements tend to
be over-represented. Hence, section 3 describes a modification to our gradient ascent algorithm
which uses inter-element communication to approximate common adaptive bandwidth procedures.
Finally, in section 4 we demonstrate that our algorithms produce visual elements which are more
representative and discriminative than previous methods, and in section 5 we show they significantly
improve performance in scene classification.
2
Mode Seeking on Density Ratios
Our goal is to extract discriminative visual elements by finding the local maxima of the density ratio.
However, one issue with performing gradient ascent directly on standard density ratio estimates is
that common estimators tend to use a fixed kernel bandwidth, for example:
$$\hat r(x) \propto \sum_{i=1}^{n} \alpha_i K(\|x - x_i\| / h)$$
where $\hat r$ is the ratio estimate, the parameters $\alpha_i \in \mathbb{R}$ are weights associated with each datapoint, K is a kernel function (e.g., a Gaussian), and h is a globally-shared bandwidth parameter. The
bandwidth defines how much the density is smoothed before gradient ascent is performed, meaning
these estimators assume a roughly equal distribution of points in all regions of the space. Unfortunately, absolute distances in HOG feature space cannot be trusted, as shown in Figure 1: any kernel
bandwidth which is large enough to work well in the left example will be far too large to work well
in the right. One way to deal with the non-uniformity of the feature space is to use an adaptive
bandwidth [4]: that is, different bandwidths are used in different regions of the space. However,
previous algorithms are difficult to implement for large data in high-dimensional spaces; [4], for instance, requires a density estimate for every point used in computing the gradient of their objective,
because their formulation relies on a per-point bandwidth rather than a per-cluster bandwidth. In
our case, this is prohibitively expensive. While approximations exist [9], they rely on approximate
nearest neighbor algorithms, which work for low-dimensional spaces (approximately 48 dimensions in [9]), but
empirically we have found poor performance in HOG feature space (> 2000 dimensions). Hence,
we take a different approach which we have tailored for density ratios.
We begin by using a result from [2] that classical mean-shift (using a flat kernel) is equivalent to
finding the local maxima of the following density estimate:
$$\frac{\sum_{i=1}^{n} \max(b - d(x_i, w), 0)}{z(b)} \qquad (1)$$
In standard mean-shift, d is the Euclidean distance function, b is a constant that controls the kernel
bandwidth, and z(b) is a normalization constant. Here, the flat kernel has been replaced by its
shadow kernel, the triangular kernel, using Theorem 1 from [2]. We want to maximize the density
ratio, so we simply divide the two density estimates. We allow an adaptive bandwidth, but rather
than associating a bandwidth with each datapoint, we compute it as a function of w which depends
on the data.
$$\frac{\sum_{i=1}^{n_{pos}} \max(B(w) - d(x_i^+, w), 0)}{\sum_{i=1}^{n_{neg}} \max(B(w) - d(x_i^-, w), 0)} \qquad (2)$$
Where the normalization term z(b) is cancelled. This expression, however, produces poor estimates
of the ratio if the denominator is allowed to shrink to zero; in fact, it can produce arbitrarily large
but spurious local maxima. Hence, we define B(w) as the value of b which satisfies:
$$\sum_{i=1}^{n_{neg}} \max(b - d(x_i^-, w), 0) = \beta \qquad (3)$$
where $\beta$ is a constant analogous to the bandwidth parameter, except that it directly controls how many negative datapoints are in each cluster. Note the value of the sum is strictly increasing in b
when it is nonzero, so the b satisfying the constraint is unique. With this definition of B(w), we are
actually fixing the value of the denominator of (2) (We include the denominator here only to make
the ratio explicit, and we will drop it in later formula). This approach makes the implicit assumption
that the distribution of the negatives captures the overall density of the patch space. Note that if
we assume the denominator distribution is uniform, then B(w) becomes fixed and our objective is
identical to fixed-bandwidth mean-shift.
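As an aside, the constraint in (3) can be solved in closed form, since its left side is piecewise linear and strictly increasing in b. A small Python sketch of this computation follows; it assumes the distances from w to the negative points have already been computed, and all names are illustrative.

```python
import numpy as np

def adaptive_bandwidth(neg_dists, beta):
    """Return the unique b with sum_i max(b - d_i, 0) = beta (Eq. 3)."""
    d = np.sort(np.asarray(neg_dists, dtype=float))
    csum = np.cumsum(d)
    for k in range(1, len(d) + 1):
        b = (beta + csum[k - 1]) / k         # solve k*b - sum_{i<=k} d_i = beta
        upper = d[k] if k < len(d) else np.inf
        if d[k - 1] <= b <= upper:           # b must fall in the k-th linear segment
            return b
    raise ValueError("beta is not attainable with these distances")

# Sanity check: the constraint holds exactly at the returned b.
d = np.array([0.2, 0.5, 0.9, 1.4])
b = adaptive_bandwidth(d, beta=1.0)
assert np.isclose(np.maximum(b - d, 0.0).sum(), 1.0)
```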
Returning to our formulation, we must still choose the distance function d. In high-dimensional
feature space, [20] suggests that normalized correlation provides a better metric than the Euclidean
distance commonly used in mean-shift. Formulations of mean-shift exist for data constrained to
the unit sphere [1], but again we must adapt them to the ratio setting. Surprisingly, replacing the
Euclidean distance with normalized correlation leads to a simpler optimization problem. First, we
mean-subtract and normalize all datapoints xi and rewrite (2) as:
$$\sum_{i=1}^{n_{pos}} \max(w^\top x_i^+ - b, 0) \quad \text{s.t.} \quad \sum_{i=1}^{n_{neg}} \max(w^\top x_i^- - b, 0) = \beta, \;\; \|w\|^2 = 1 \qquad (4)$$
Where B(w) has been replaced by b as in equation (3), to emphasize that we can treat B(w) as a
constraint in an optimization problem. We can further rewrite the above equation as finding the local
maxima of:
$$\sum_{i=1}^{n_{pos}} \max(w^\top x_i^+ - b, 0) - \lambda \|w\|^2 \quad \text{s.t.} \quad \sum_{i=1}^{n_{neg}} \max(w^\top x_i^- - b, 0) = \beta \qquad (5)$$
[Figure 2 panels: each element's initial patch and its cluster at the first and final iterations]
Figure 2: Left: without competition, the algorithm from section 2 correctly learns a street lamp element. Middle: the same algorithm trained on a sidewalk barrier, which is too similar to the very common "window with railing" element, which takes over the cluster. Right: with the algorithm from section 3, the window gets down-weighted and the algorithm can learn the sidewalk barrier.
Note that (5) is equivalent to (4) for some appropriate rescaling of $\lambda$ and $\beta$. It can be easily shown that multiplying $\beta$ by a constant factor does not change the relative location of local maxima, as long as we divide $\lambda$ by that same factor. Such a re-scaling will in fact result in re-scaling w by the same value, so we can choose a $\lambda$ and $\beta$ which makes the norm of w equal to 1.¹
After this rewriting, we are left with an objective that looks curiously like a margin-based method. Indeed, the negative set is treated very much like the negative set in an SVM (we penalize the linear sum of the margin violations), which follows [23]. However, unlike [23], which makes the ad-hoc choice of 5 positive examples, our algorithm allows each cluster to select the optimal number of positives based on the decision boundary. This is somewhat reminiscent of unsupervised margin-based clustering [29, 16].
Mean-shift prescribes that we initialize the procedure outlined above at every datapoint. In our
setting, however, this is not practical, so we instead use a randomly-sampled subset. We run this
as an online algorithm by breaking the dataset into chunks and then mining, one chunk at a time, for patches where $w^\top x - b > -\epsilon$ for some small $\epsilon$, akin to "hard mining" for SVMs. We perform gradient ascent after each mining phase. An example result for this algorithm is shown in Figure 2,
and we include further results below. Gradient ascent on our objective is surprisingly efficient, as
described in Appendix A.
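A rough Python sketch of one such gradient-ascent step is given below. It is our illustrative reading of the procedure, not the authors' released code: we assume b is treated as fixed during each w-update and the constraint on the negatives is re-solved afterwards, and all names and constants are made up for the example.

```python
import numpy as np

def solve_b(neg_scores, beta):
    """Find b with sum_i max(s_i - b, 0) = beta; the sum is decreasing in b."""
    s = np.sort(np.asarray(neg_scores, dtype=float))[::-1]
    csum = np.cumsum(s)
    for k in range(1, len(s) + 1):
        b = (csum[k - 1] - beta) / k         # exactly the top-k negatives above b
        lower = s[k] if k < len(s) else -np.inf
        if lower <= b <= s[k - 1]:
            return b
    return s[-1]                             # beta larger than attainable

def ascend(w, X_pos, X_neg, beta, lam=1.0, lr=1e-2, n_steps=100):
    for _ in range(n_steps):
        b = solve_b(X_neg @ w, beta)         # re-enforce the negative constraint
        active = (X_pos @ w - b) > 0         # margin-violating positives
        grad = X_pos[active].sum(axis=0) - 2.0 * lam * w
        w = w + lr * grad
    return w, solve_b(X_neg @ w, beta)

rng = np.random.default_rng(1)
X_pos = rng.normal([2.0, 2.0], 0.5, (50, 2))
X_neg = rng.normal(0.0, 1.0, (200, 2))
w, b = ascend(rng.normal(size=2), X_pos, X_neg, beta=5.0)
```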
3
Better Adaptive Bandwidth via Inter-Element Communication
Implicit in our formulation thus far is the idea that we do not want a single mode, but instead many
distinct modes which each corresponds to a different element. In theory, mode-seeking will find
every mode that is supported by the data. In practice, clusters often drift from weak modes to
stronger modes, as demonstrated in Figure 2 (middle). One way to deal with this is to assign smaller
bandwidths to patches in dense regions of the space [4], e.g., the window railing on row 1 of Figure 2
(middle) would hopefully have a smaller bandwidth and hence not match to the sidewalk barrier.
However, estimating a bandwidth for every datapoint in our setting is not practical, so we seek an
approach which only requires one pass through the data. Since patches in regions of the feature space
with high density ratio will be members of many clusters, we want a mechanism that will reduce
their bandwidth. To accomplish this, we extend the standard local (per-element) optimization of
mean-shift into a joint optimization among the m different element clusters. Specifically, we control
how a single patch can contribute to multiple clusters by introducing a sharing weight $\alpha_{i,j}$ for each patch i that is contained in a cluster j, akin to soft-assignment in EM GMM fitting. Returning to our formulation, we maximize (again with respect to the w's and b's):
$$\sum_{i=1}^{n_{pos}} \sum_{j=1}^{m} \alpha_{i,j} \max(w_j^\top x_i^+ - b_j, 0) - \lambda \sum_{j=1}^{m} \|w_j\|^2 \quad \text{s.t.} \quad \forall j \;\; \sum_{i=1}^{n_{neg}} \max(w_j^\top x_i^- - b_j, 0) = \beta \qquad (6)$$
Where each $\alpha_{i,j}$ is chosen such that any patch which is a member of multiple clusters gets a lower weight. (6) also has a natural interpretation in terms of maximizing the "representativeness" of the set of clusters: clusters are rewarded for representing patches that are not represented by other clusters. But how can we set the $\alpha$'s? One way is to set $\alpha_{i,j} = \max(w_j^\top x_i^+ - b_j, 0) / \sum_{k=1}^{m} \max(w_k^\top x_i^+ - b_k, 0)$, and alternate between setting the $\alpha$'s and optimizing the w's and b's at each iteration.
¹ Admittedly this means that the norm of w has an indirect effect on the underlying bandwidth: specifically, if the norm of w is increased, it has a similar effect as a proportional decrease in $\beta$ in (4). However, since w is roughly proportional to the density of the positive data, the bandwidth is only reduced when the density of positive data is high.
[Figure 3 plots: purity (0.8-1.0) vs. coverage (fraction of positive dataset) for the top 25 elements (left) and top 200 elements (right); legend: This work; This work, no inter-element; SVM Retrained 5x (Doersch et al. 2012); LDA Retrained 5x; LDA Retrained; Exemplar LDA (Hariharan et al. 2012)]
Figure 3: Purity-coverage graph for our algorithm and baselines. In each plot, purity measures the accuracy
of the element detectors, whereas coverage captures how often they fire. Curves are computed over the top 25
(left) and 200 (right) elements. Higher is better.
Intuitively, this algorithm would be much like EM, alternating between softly assigning cluster memberships for each datapoint and then optimizing each cluster. However, this goes against our mean-shift intuition: if two patches are really instances of the same element, then clusters initialized from those two points should converge to the same mode and not "compete" with
one another. So, our heuristic is to first cluster the elements. Let $C_j$ be the assigned cluster for the j-th element. Then we set
$$\alpha_{i,j} = \frac{\max(w_j^\top x_i^+ - b_j, 0)}{\max(w_j^\top x_i^+ - b_j, 0) + \sum_{k=1}^{m} \mathbb{I}(C_k \neq C_j)\, \max(w_k^\top x_i^+ - b_k, 0)} \qquad (7)$$
In this way, any "competition" from elements that are too similar to each other is ignored. To obtain the clusters, we perform agglomerative (UPGMA) clustering on the set of element clusters, using the negative of the number of overlapping cluster members as a "distance" metric.
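A small Python sketch of the weights in Eq. (7), assuming a precomputed matrix of thresholded detection scores and an element-to-cluster assignment (both are illustrative data layouts, not the authors' implementation):

```python
import numpy as np

def sharing_weights(scores, C):
    """scores[i, k] = max(w_k^T x_i^+ - b_k, 0); C[k] = cluster of element k."""
    n, m = scores.shape
    C = np.asarray(C)
    alpha = np.zeros((n, m))
    for j in range(m):
        rivals = C != C[j]                   # competition only across clusters
        denom = scores[:, j] + scores[:, rivals].sum(axis=1)
        alpha[:, j] = np.divide(scores[:, j], denom,
                                out=np.zeros(n), where=denom > 0)
    return alpha
```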
In practice, however, it is extremely rare that the exact same patch is a member of two different clusters; instead, clusters will have member patches that merely overlap with each other. Our heuristic to deal with this is to compute a quantity $\alpha'_{i,j,p}$ which is analogous to the $\alpha_{i,j}$ defined above, but is defined for every pixel p. Then we compute $\alpha_{i,j}$ for a given patch by averaging $\alpha'_{i,j,p}$ over all pixels in the patch. Specifically, we compute $\alpha_{i,j}$ for patch i as the mean over all pixels p in that patch of the following quantity:
$$\alpha'_{i,j,p} = \frac{\max(w_j^\top x_i^+ - b_j, 0)}{\max(w_j^\top x_i^+ - b_j, 0) + \sum_{x \in Ov(p)} \sum_{k=1}^{m} \mathbb{I}(C_k \neq C_j)\, \max(w_k^\top x - b_k, 0)} \qquad (8)$$
Where Ov(p) denotes the set of features for positive patches that contain the pixel p.
It is admittedly difficult to analyze how well these heuristics approximate the adaptive bandwidth
approach of [4], and even there the setting of the bandwidth for each datapoint has heuristic aspects.
However, empirically our approach leads to improvements in performance as discussed below, and
suggests a potential area for future work.
4
Evaluation via Purity-Coverage Plot
Our aim is to discover visual elements that are maximally representative and discriminative. To
measure this, we define two quantities for a set of visual elements: coverage (which captures representativeness) and purity (which captures discriminativeness). Given a held-out test set, visual
elements will generate a set of patch detections. We define the coverage of this set of patches to be
the fraction of the pixels from the positive images claimed by at least one patch. We define the purity
of a set as the percentage of the patches that share the same label. For an individual visual element,
of course, there is an inherent trade-off between purity and coverage: if we lower the detection
threshold, we cover more pixels but also increase the likelihood of making mistakes. Hence, we can
construct a purity-coverage curve for a set of elements, analogous to a precision-recall curve. We
could perform this analysis on any dataset containing positive and negative images, but [5] presents
a dataset which is particularly suitable. The goal is to mine visual elements which define the look
and feel of a geographical locale, with a training set of 2,000 Paris Street View images and 8,000 non-Paris images, as well as 2,999 of both classes for testing. Purity-coverage curves for this dataset are shown in Figure 3.
[Figure 4 plots: coverage (fraction of positive dataset) vs. number of elements (0-500), at purity 100% (left) and purity 90% (right); same legend as Figure 3]
Figure 4: Coverage versus the number of elements used in the representation. On the left we keep only the detections with a score higher than the score of the detector's first error (i.e. purity 1). On the right, we lower the detection threshold until the elements are 90% pure. Note: this is the same purity and coverage measure for the same elements as Figure 3, just plotted differently.
To plot the curve for a given value of purity p, we rank all patches by $w^\top x - b$ independently for every
element, and select, for a given element, all patches up until the last point where the element has the
desired purity. We then compute the coverage as the union of patches selected for every element.
Because we are taking a union of patches, adding more elements can only increase coverage, but in
practice we prefer concise representations, both for interpretability and for computational reasons.
Hence, to compare two element discovery methods, we must select exactly the same number of
elements for both of them. Different works have proposed different heuristics for selecting elements,
which would make the resulting curves incomparable. Hence, we select elements in the same way
for all algorithms, which approximates an "ideal" selection for our measure. Specifically, we first fix a level of purity (95%) and greedily select elements to maximize coverage (on the testing data) for that level of purity. Hence, this ranking serves as an oracle to choose the "best" set of elements
for covering the dataset at that level of purity. While this ranking has a bias toward large elements
(which inherently cover more pixels per detection), we believe that it provides a valuable comparison
between algorithms. Our purity-coverage curves are shown in Figure 3, for the 25 and 200 top
elements, respectively. We can also slice the same data differently, fixing a level of purity for all
elements and varying the number of elements, as shown in Figure 4.
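A simplified Python sketch of this purity-coverage computation is given below; it works on abstract detection records rather than actual images, so the data layout (per-element detection lists with a label flag and a set of covered pixel ids) is an assumption of ours:

```python
import numpy as np

def coverage_at_purity(dets_per_element, total_pixels, purity=0.95):
    """Each element's detections are assumed sorted by score, descending;
    a detection is a dict with a 'positive' flag and a set of 'pixels'."""
    covered = set()
    for dets in dets_per_element:
        labels = np.array([d['positive'] for d in dets], dtype=float)
        running = np.cumsum(labels) / np.arange(1, len(dets) + 1)
        keep = np.nonzero(running >= purity)[0]
        if len(keep) == 0:
            continue
        for d in dets[:keep[-1] + 1]:        # up to the last point at this purity
            if d['positive']:
                covered |= d['pixels']       # union of claimed pixels
    return len(covered) / total_pixels
```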
Baselines: We included five baselines of increasing complexity. Our goal is not only to analyze our
own algorithm; we want to show the importance of the various components of previous algorithms
as well. We initially train 20,000 visual elements for all the baselines, and select the top elements using the method above. The simplest baseline is "Exemplar LDA," proposed by [10]. Each cluster is represented by a hyperplane which maximally separates a single seed patch from the negative dataset learned via LDA, i.e. the negative distribution is approximated using a single multivariate Gaussian. To show the effects of re-clustering, "LDA Retrained" takes the top 5 positive-set patches retrieved in Exemplar LDA (including the initial patch itself), and repeats LDA, separating those 5 from the negative Gaussian. This is much like the well-established method of "query expansion" for retrieval, and is similar to [12] (although they use multiple iterations of query expansion). Finally, "LDA Retrained 5 times" begins with elements initialized via the LDA retraining method, and retrains the LDA classifier, each time throwing out the previous top 5 used to train the previous LDA,
and selecting a new top 5 from held-out data. This is much like the iterative SVM training of [5],
except that it uses LDA instead of an SVM. Finally, we include the algorithm of [5], which is a
weakly supervised version of [23], except that knn is being used for initialization instead of kmeans.
The iterations of retraining clearly improve performance, and it seems that replacing LDA with an
SVM also gives improvement, especially for difficult elements.
Implementation details: We use the same patch descriptors described in [5] and whiten them following [10]. We mine elements using the online version of our algorithm, with a chunk size of 1000 (200 Paris, 800 non-Paris per batch). We set $\beta = t/500$ where t is the iteration number, such that the bandwidth increases in proportion to the number of samples. We train the elements for about 200 gradient steps after each chunk of mining.
Figure 5: For each correctly classified image (left), we show four elements (center) and a heatmap of the locations (right) that contributed most to the classification.
Table 1: Results on MIT 67 scenes (average classification accuracy, %)

ROI + Gist [19]       26.05    D-Patches [23]          38.10    D-Parts [26]                   51.40
MM-scene [30]         28.00    LPR [22]                44.84    IFV [12]                       60.77
DPM [17]              30.40    BoP [12]                46.10    BoP+IFV [12]                   63.10
CENTRIST [28]         36.90    miSVM [15]              46.40    Ours (no inter-element, §2)    63.36
Object Bank [14]      37.60    D-Patches (full) [23]   49.40    Ours (§3)                      64.03
RBoW [18]             37.93    MMDL [27]               50.15    Ours+IFV                       66.87
To compute $\alpha_{i,j}$ for patch i and detector j, we actually use scale-space voxels rather than pixels, since a large detection can completely cover a small detection but not vice versa. Hence, the set of scale-space voxels covered is a 3D box: the width of the bounding box by its height (both discretized by a factor of 8 for efficiency) by 5, covering exactly one "octave" of scale space (i.e. $\log_2(\sqrt{\text{width} \cdot \text{height}}) \cdot 5$ through $\log_2(\sqrt{\text{width} \cdot \text{height}}) \cdot 5 + 4$). For experiments without inter-element communication, we simply set $\alpha_{i,j}$ to .1. Finally, to reduce the impact of highly redundant textures, we divide $\alpha_{i,j}$ by the total number of detections for element j in the image containing i. Source code will be available online.
5
Scene Classification
Finally, we evaluate whether our visual element representation is useful for scene classification. We
use the MIT Scene-67 dataset [19], where machine performance remains substantially below human performance.
[Figure 6 panels: GT: deli / Guess: grocery store; GT: museum / Guess: garage; GT: laundromat / Guess: closet; GT: office / Guess: classroom; GT: corridor / Guess: staircase; GT: bakery / Guess: buffet]
Figure 6: Each of these images was misclassified by the algorithm, and the heatmaps explain why. For instance, it may not be obvious why a corridor would be classified as a staircase, but we can see (top right) that the algorithm has identified the railings as a key staircase element, and has found no other staircase elements in the image.
For indoor scenes, objects within the scene are often more useful features than global
scene statistics [12]: for instance, shoe shops are similar to other stores in global layout, but they
mostly contain shoes.
Implementation details: We used the original Indoor-67 train/test splits (80 training and 20 testing
images per class). We learned 1600 elements per class, for a total of 107,200 elements, following the procedure described above. We include right-left flipped images as extra positives. 5 batches
were sufficient, as this dataset is smaller. We also used smaller descriptors: 6-by-6 HOG cells,
corresponding to 64-by-64 patches and 1188-dimensional descriptors. We again select elements
by fixing purity and greedily selecting elements to maximize coverage, as above. However, rather
than defining coverage as the number of pixels (which is biased toward larger elements), we simply
count the detections, penalizing for overlap: we penalize each individual detection by a factor of
1/(1 + noverlap ), where noverlap is the number of detections from previously selected detectors
that a given detection overlaps with. We select 200 top elements per class. To construct our final
feature vector, we use a 2-level (1x1 and 2x2) spatial pyramid and take the max score per detector
per region, thresholded at .5 (since below this value we do not expect the detection scores to be
meaningful), resulting in a 67,000-dimensional vector. We average the feature vector for the right and left flips of the image, and classify using 67 one-vs-all linear SVMs. Note that this differs from
[23], which selects only the elements for a given class in each class-specific SVM.
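The pooling step can be sketched as follows in Python; the detection format and normalized coordinates are illustrative assumptions of ours, not the authors' implementation:

```python
import numpy as np

def pyramid_feature(dets, n_detectors, thresh=0.5):
    """dets maps detector id -> list of (score, x, y), with x, y in [0, 1)."""
    feat = np.zeros((5, n_detectors))        # cell 0: whole image; 1-4: 2x2 grid
    for j, hits in dets.items():
        for score, x, y in hits:
            if score <= thresh:              # scores below 0.5 are discarded
                continue
            for c in (0, 1 + int(x >= 0.5) + 2 * int(y >= 0.5)):
                feat[c, j] = max(feat[c, j], score)
    return feat.ravel()
```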
Figure 5 shows a few qualitative results of our algorithm. Quantitative results and comparisons
are shown in Table 1. We significantly outperform other methods based on discriminative patches,
suggesting that our training method is useful. We even outperform the Improved Fisher Vector
of [12], as well as IFV combined with discriminative patches (IFV+BoP). Finally, although the
optimally-performing representation is dense (about 58% of features are nonzero), it can be made
much sparser without sacrificing much performance. For instance, if we trivially zero-out lowvalued features until fewer than 6% are nonzero, we still achieve 60.45% accuracy.
6
Conclusion
We developed an extension of the classic mean-shift algorithm to density ratio estimation, showing
that the resulting algorithm could be used for element discovery, and demonstrating state-of-the-art
results for scene classification. However, there is still much room for improvement in weaklysupervised element discovery algorithms. For instance, our algorithm is limited to binary labels, but
image labels may be continuous (e.g., GPS coordinates or dates). Also, our elements are detected
based only on individual patches, but images often contain global structures beyond patches.
Acknowledgements: We thank Abhinav Shrivastava, Yong Jae Lee, Supreeth Achar, and Geoff Gordon for helpful insights
and discussions. This work was partially supported by NDSEG fellowship to CD, An Amazon Web Services grant, a Google
Research grant, ONR MURI N000141010934, and IARPA via Air Force Research Laboratory. The U.S. Government is
authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon.
Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL or the U.S. Government.
References
[1] H. E. Cetingul and R. Vidal. Intrinsic mean shift for clustering on Stiefel and Grassmann manifolds. In
CVPR, 2009.
[2] Y. Cheng. Mean shift, mode seeking, and clustering. PAMI, 17(8):790-799, 1995.
[3] D. Comaniciu, V. Ramesh, and P. Meer. Real-time tracking of non-rigid objects using mean shift. In
CVPR, 2000.
[4] D. Comaniciu, V. Ramesh, and P. Meer. The variable bandwidth mean shift and data-driven scale selection. In ICCV, 2001.
[5] C. Doersch, S. Singh, A. Gupta, J. Sivic, and A. A. Efros. What makes Paris look like Paris? SIGGRAPH,
2012.
[6] I. Endres, K. Shih, J. Jiaa, and D. Hoiem. Learning collections of part models for object recognition. In
CVPR, 2013.
[7] D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3D primitives for single image understanding. In
ICCV, 2013.
[8] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in
pattern recognition. Information Theory, 1975.
[9] B. Georgescu, I. Shimshoni, and P. Meer. Mean shift based clustering in high dimensions: A texture
classification example. In CVPR, 2003.
[10] B. Hariharan, J. Malik, and D. Ramanan. Discriminative decorrelation for clustering and classification.
In ECCV, 2012.
[11] A. Jain, A. Gupta, M. Rodriguez, and L. Davis. Representing videos using mid-level discriminative
patches. In CVPR, 2013.
[12] M. Juneja, A. Vedaldi, C. V. Jawahar, and A. Zisserman. Blocks that shout: Distinctive parts for scene
classification. In CVPR, 2013.
[13] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[14] L.-J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene
classification and semantic feature sparsification. NIPS, 2010.
[15] Q. Li, J. Wu, and Z. Tu. Harvesting mid-level visual concepts from large-scale internet images. In CVPR,
2013.
[16] T. Malisiewicz and A. A. Efros. Recognition by association via learning per-exemplar distances. In
CVPR, 2008.
[17] M. Pandey and S. Lazebnik. Scene recognition and weakly supervised object localization with deformable
part-based models. In ICCV, 2011.
[18] S. N. Parizi, J. G. Oberlin, and P. F. Felzenszwalb. Reconfigurable models for scene recognition. In
CVPR, 2012.
[19] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[20] M. Radovanović, A. Nanopoulos, and M. Ivanović. Nearest neighbors in high-dimensional data: The emergence and influence of hubs. In ICML, 2009.
[21] B. C. Russell, A. A. Efros, J. Sivic, W. T. Freeman, and A. Zisserman. Using multiple segmentations to
discover objects and their extent in image collections. In CVPR, 2006.
[22] F. Sadeghi and M. F. Tappen. Latent pyramidal regions for recognizing scenes. In ECCV. 2012.
[23] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In
ECCV, 2012.
[24] J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In ICCV,
2003.
[25] M. Sugiyama, T. Suzuki, and T. Kanamori. Density ratio estimation: A comprehensive review. RIMS
Kokyuroku, 2010.
[26] J. Sun and J. Ponce. Learning discriminative part detectors for image classification and cosegmentation.
In ICCV, 2013.
[27] X. Wang, B. Wang, X. Bai, W. Liu, and Z. Tu. Max-margin multiple-instance dictionary learning. In
ICML, 2013.
[28] J. Wu and J. M. Rehg. Centrist: A visual descriptor for scene categorization. PAMI, 2011.
[29] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In NIPS, 2004.
[30] J. Zhu, L.-J. Li, L. Fei-Fei, and E. P. Xing. Large margin learning of upstream scene understanding
models. NIPS, 2010.
4,645 | 5,203 | Optimal integration of visual speed across different
spatiotemporal frequency channels
Matja?z Jogan and Alan A. Stocker
Department of Psychology
University of Pennsylvania
Philadelphia, PA 19104
{mjogan,astocker}@sas.upenn.edu
Abstract
How do humans perceive the speed of a coherent motion stimulus that contains
motion energy in multiple spatiotemporal frequency bands? Here we tested the
idea that perceived speed is the result of an integration process that optimally combines speed information across independent spatiotemporal frequency channels.
We formalized this hypothesis with a Bayesian observer model that combines the
likelihood functions provided by the individual channel responses (cues). We experimentally validated the model with a 2AFC speed discrimination experiment
that measured subjects' perceived speed of drifting sinusoidal gratings with different contrasts and spatial frequencies, and of various combinations of these single
gratings. We found that the perceived speeds of the combined stimuli are independent of the relative phase of the underlying grating components. The results
also show that the discrimination thresholds are smaller for the combined stimuli
than for the individual grating components, supporting the cue combination hypothesis. The proposed Bayesian model fits the data well, accounting for the full
psychometric functions of both simple and combined stimuli. Fits are improved if
we assume that the channel responses are subject to divisive normalization. Our
results provide an important step toward a more complete model of visual motion perception that can predict perceived speeds for coherent motion stimuli of
arbitrary spatial structure.
1
Introduction
Low contrast stimuli are perceived to move slower than high contrast ones [17]. This effect can
be explained with a Bayesian observer model that assumes a prior distribution with a peak at slow
speeds [18, 8, 15]. This assumption has been verified by reconstructing subjects' individual prior distributions from psychophysical data [16]. Based on a noisy sensory measurement m of the true
stimulus speed s the Bayesian observer model computes the posterior probability
$$p(s|m) = \frac{p(m|s)\, p(s)}{p(m)} \qquad (1)$$
by multiplying the likelihood function p(m|s) with the probability p(s) representing the observer's prior expectation. If the measurement is unreliable (e.g. if stimulus contrast is low), the likelihood function is broad and the posterior probability distribution is shifted toward the peak of the prior,
resulting in a perceived speed that is biased toward slow speeds. While this model is able to account
for changes in perceived speed as a function of different internal noise levels (modulated by stimulus
contrast), it does not possess the power to predict the influence of other factors known to modulate
perceived speed such as for example the spatial frequency of the stimulus [14, 10, 2].
[Figure 1 panels: (a) example natural stimulus; (b) spatiotemporal frequency plane with axes $\omega_s$ (c/deg) and $\omega_t$ (Hz), channels responding with $r_1$ and $r_2$ for a stimulus at speed s = 2 deg/s, and the likelihood $p(\vec r|s)$]
Figure 1: a) A natural stimulus in motion exhibits a rich spatiotemporal frequency spectrum that determines how humans perceive its speed s. b) Spatiotemporal energy diagram for motion in a given direction (i.e. speed) showing individual spatiotemporal frequency channels (white circles). A stimulus that contains spatial frequencies of 0.5 c/deg and 1.5 c/deg and moves with a speed of 2 deg/s will trigger responses $\vec r = \{r_1, r_2\}$ in two corresponding channels (red circles). The uncertainty about s given the response vector $\vec r$ is expressed in the joint likelihood function $p(\vec r|s)$.
In this paper we make a step toward a more general observer model of visual speed perception that,
in the longterm, will allow us to predict perceived speed for arbitrary complex stimuli (Fig. 1a).
Inspired by physiological and psychophysical evidence we present an extension of the standard
Bayesian model (Eq. 1), which decomposes complex motion stimuli into simpler components processed in separate spatiotemporal frequency channels. Based on the motion energy model [1, 12],
we assume that each channel is sensitive to a narrow spatiotemporal frequency band. The observed
speed of a stimulus is then a result of combining the sensory evidence provided by these individual
channels with a prior expectation for slow speeds. Optimal integration of different sources of sensory evidence has been well documented in cue-combination experiments using cues of different
modalities (see e.g. [4, 7]). Here we employ an analogous approach by treating the responses of
individual spatiotemporal frequency channels as independent cues about a stimulus' motion.
We validated the model against the data of a series of psychophysical experiments in which we measured how humans' speed percept of coherent motion depends on the stimulus energy in different
spatial frequency bands. Stimuli consisted of drifting sinusoidal gratings at two different spatial
frequencies and contrasts, and various combinations of these single gratings. For a given stimulus
speed s, single gratings target only one channel while the combined stimuli target multiple channels. A joint fit to the psychometric functions of all conditions demonstrates that our new model
well captures human behavior both in terms of perceptual biases and discrimination thresholds.
2
Bayesian model
To define the new model, we start with the stimulus. We consider s to be the speed of locally coherent
and translational stimulus motion (Fig. 1a). This motion can be represented by its power spectrum in
spatiotemporal frequency space. For a given motion direction the energy lies in a two-dimensional
plane spanned by a temporal frequency axis ?t and a spatial frequency axis ?s and is constrained
to coordinates that satisfy s = ?t /?s (Fig. 1b; red dashed line). According to the motion energy
model, we assume that the visual system contains motion units that are tuned to specific locations in
this plane [1, 12]. A coherent motion stimulus with speed s and multiple spatial frequencies ?s will
therefore drive only those units whose tuning curves are centered at coordinates (?s , ?s s).
We formulate our Bayesian observer model in terms of k spatiotemporal frequency channels, each
tuned to a narrow spatiotemporal frequency band (Fig. 1b). A moving stimulus will elicit a total
response ~r = [r1 , r2 , ..., rk ] from these channels. The response of each channel provides a likelihood
2
[Figure 2 diagram: the stimulus feeds spatiotemporal channels whose likelihoods are combined with a low speed prior into a posterior and a speed estimate; optional normalization acts across channels]
Figure 2: Bayesian observer model of speed perception with multiple spatiotemporal channels. A moving stimulus with speed s is decomposed and processed in separate channels that are sensitive to energy in specific spatiotemporal frequency bands. Based on the channel response $r_i$ we formulate a likelihood function $p(r_i|s)$ for each channel. The posterior distribution $p(s|\vec r)$ is defined by the combination of the likelihoods with a prior distribution p(s). Here we assume perceived speed $\hat s$ to be the mode of the posterior. We consider a model with and without response normalization across channels (red dashed line).
Assuming independent channel noise, we can formulate the posterior probability of a Bayesian observer model that performs optimal integration as
$$p(s|\vec r) \propto p(s) \prod_i p(r_i|s) \qquad (2)$$
We rely on the results of Stocker and Simoncelli [16] for the characterization of the likelihood functions and the speed prior. Likelihoods are assumed to be Gaussians when considered in a transformed logarithmic speed space of the form $s = \log(1 + s_{linear}/s_0)$, where $s_0$ is a small constant [9]. If we assume that each channel represents a large number of similarly tuned neurons with Poisson firing statistics, then the average channel likelihood is centered on the value of s for which the activity in the channel peaks, and the width of the likelihood $\sigma_i$ is inversely proportional to the square root of the channel's response [11]. Also based on [16] we locally approximate the logarithm of the speed prior as linear, thus $\log(p(s)) = as + b$.
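For reference, a minimal Python sketch of the transform and its inverse; the value of the constant $s_0$ used below is an assumption of ours (it is not specified in this paper):

```python
import numpy as np

def to_log_speed(s_linear, s0=0.3):          # s0 is an assumed value
    return np.log(1.0 + np.asarray(s_linear) / s0)

def to_linear_speed(s_log, s0=0.3):          # inverse mapping
    return s0 * (np.exp(np.asarray(s_log)) - 1.0)
```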
For reasons of simplicity and without loss of generality, we focus on the case where the stimulus activates two channels with responses $\vec r = [r_i]$, $i \in \{1, 2\}$. Given our assumptions, the likelihoods are normal distributions with mean $\mu(r_i)$ and standard deviation $\sigma_i \propto 1/\sqrt{r_i}$. The posterior (2) can therefore be written as
$$p(s|\vec r) \propto \exp\left(-\frac{(s - \mu(r_1))^2}{2\sigma_1^2} - \frac{(s - \mu(r_2))^2}{2\sigma_2^2} + as + b\right) \qquad (3)$$
We assume that the model observer's speed percept $\hat s$ reflects the value of s that maximizes the posterior. Thus, maximizing the exponent in Eq. 3 leads to
$$\hat s = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,\mu(r_1) + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,\mu(r_2) + a\,\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \qquad (4)$$
A full probabilistic account over many trials (observations) requires the characterization of the full distribution of the estimates $p(\hat s|s)$. Assuming that $E\langle \mu(r_i)|s\rangle$ approximates the stimulus speed s, the expected value of $\hat s$ is
$$E\langle \hat s|s\rangle = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,E\langle \mu(r_1)|s\rangle + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,E\langle \mu(r_2)|s\rangle + a\,\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,s + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,s + a\,\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} = s + a\,\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \qquad (5)$$
Following the approximation in [16], the variance of the estimates $\hat s$ is
$$\mathrm{var}\langle \hat s|s\rangle \approx \left(\frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)^{\!2} \mathrm{var}\langle \mu(r_1)|s\rangle + \left(\frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\right)^{\!2} \mathrm{var}\langle \mu(r_2)|s\rangle = \left(\frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)^{\!2} \sigma_1^2 + \left(\frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\right)^{\!2} \sigma_2^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \qquad (6)$$
The noisy observer's percept is fully determined by Eqs. (5) and (6). By a similar derivation it is also easy to show that for a single active channel the distribution has mean $E\langle \hat s|s\rangle = s + a\sigma_1^2$ and variance $\mathrm{var}\langle \hat s|s\rangle = \sigma_1^2$.
The model makes the following predictions. First, the variance of the speed estimates (i.e., percepts) for stimuli that activate both channels is always smaller than the variances of estimates that are based on each of the channel responses alone ($\sigma_1^2$ and $\sigma_2^2$). This improved reliability is a hallmark of optimal cue combination as has been demonstrated for cross-modal integration [4, 7]. Second, because of the slow speed prior (a is negative), perceived speeds are more biased toward slower speeds the larger the sensory uncertainty. As a result, the perceived speed of combined stimuli that activate both channels is always faster than the percepts based on each of the individual channel responses alone. Finally, the model predicts that the perceived speed of a combined stimulus solely depends on the responses of the channels to its constituent components, and is therefore independent of the relative phase of the components we combined [5].
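These predictions follow directly from Eqs. (4)-(6); a minimal Python sketch of the two-channel combination, with illustrative numbers and a < 0 standing in for the slow-speed prior slope:

```python
import numpy as np

def combine(mu1, mu2, var1, var2, a):
    """Two-channel estimate in log-speed space (Eqs. 4-6); a < 0 for a slow prior."""
    var_comb = var1 * var2 / (var1 + var2)   # always below min(var1, var2)
    s_hat = (var2 * mu1 + var1 * mu2) / (var1 + var2) + a * var_comb
    return s_hat, var_comb

# Expected percept (Eq. 5) is s + a * var_comb: noisier stimuli appear slower.
s = np.log(1.0 + 2.0 / 0.3)
print(combine(s, s, 0.04, 0.09, a=-1.0))
```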
2.1
Response normalization
So far we assumed that the channels do not interact, i.e., their responses are independent of the number of active channels and the overall activity in the system. Here we extend our proposal with the
additional hypothesis that channels interact via divisive normalization. Divisive normalization [6]
has been considered one of the canonical neural computations responsible for e.g., contrast gain control, efficient coding, attention or surround suppression [13] (see [3] for a comprehensive review).
Here we assume that the response of an individual channel $r_i$ is normalized such that its normalized response $r_i^*$ is given by
$$r_i^* = r_i\, \frac{r_i^n}{\sum_j r_j^n} \qquad (7)$$
Normalization typically increases the contrast (i.e., the relative difference) between the individual channel responses for increasing values of the exponent n. For large n it typically acts like a winner-takes-all mechanism. Note that normalization affects only the responses $r_i$, thus modulating the width of the individual likelihood functions. The integration based on the normalized responses $r_i^*$ remains optimal (see Fig. 2). By explicitly modeling the encoding of visual motion in spatiotemporal frequency channels, we already extended the Bayesian model of speed perception toward a more physiological interpretation. Response normalization is one more step in this direction.
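A one-function Python sketch of Eq. (7), using a single shared exponent n for illustration (the full model allows a separate exponent per channel):

```python
import numpy as np

def normalize_responses(r, n):
    r = np.asarray(r, dtype=float)
    return r * r**n / np.sum(r**n)           # Eq. (7)

print(normalize_responses([10.0, 2.0], n=1))  # mild contrast increase
print(normalize_responses([10.0, 2.0], n=4))  # close to winner-take-all
```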
3
Results
In the second part of this paper we test the validity of our model with and without channel normalization against data from a psychophysical two alternative forced choice (2AFC) speed discrimination
experiment.
3.1
Speed discrimination experiment
Seven subjects performed a 2AFC visual speed discrimination task. In each trial, subjects were presented for 1250ms with a reference and a test stimulus on either side of a fixation mark (eccentricity
4
[Figure 3 panels: amplitude profiles of gratings with spatial frequencies $\omega_s = 0.5$ and $3\omega_s = 1.5$ c/deg, combined in peaks-add and peaks-subtract configurations]
Figure 3: Single frequency gratings were combined in either a "peaks-add" or a "peaks-subtract" phase configuration (0 deg and 60 deg phase, respectively) [5]. The red bar indicates that the two configurations have different overall contrast levels even though they are composed of the same frequencies. We used these two phase-combinations to test whether the channel hypothesis is valid or not.
6 deg, size 4 deg). Both stimuli were drifting gratings, both drifting either leftwards or rightwards
at different speeds. Motion directions and the order of the gratings were randomly selected for each
trial. After stimulus presentation, a brief flash appeared on the left or right side of the fixation mark
and subjects had to answer whether the grating that was presented on the indicated side was moving
faster or slower than the grating on the other side. This procedure was chosen in order to prevent
potential decision biases.
The stimulus test set comprised 10 stimuli. Four of these stimuli were simple sinewave gratings of a single spatial frequency, either $\omega_s = 0.5$ or $3\omega_s = 1.5$ c/deg. The low frequency test stimulus had a contrast of 22.5%, while the three higher frequency stimuli had contrasts 7.5, 22.5 and 67.5%, respectively. The other six stimuli were pair-wise combinations of the single frequency gratings (Fig. 3), combined in either a "peaks-add" or a "peaks-subtract" phase configuration [5] (i.e. 0 deg and 60 deg phase). All test stimuli were drifting at a speed of 2 deg/s. The reference stimulus was
a broadband stimulus whose speed was regulated by an adaptive staircase procedure. Each
of the 10 stimulus conditions were run for 190 trials. Data from all seven subjects were combined.
The simple stimuli were designed to target individual spatiotemporal frequency channels while the
combined stimuli were meant to target two channels simultaneously. The two phase configurations (peaks-add and peaks-subtract) were used to test the multiple channel hypothesis: if combined
stimuli are decomposed and processed in separate channels, their perceived speeds should be independent of the phase configuration. In particular, the difference in overall contrast of the two
configurations should not affect perceived speed (Fig. 3).
Matching speeds (PSEs) and relative discrimination thresholds (Weber-fraction) were extracted from
a maximum-likelihood fit of each of the 10 psychometric functions with a cumulative Gaussian.
Fig. 4a,b shows the extracted discrimination thresholds and the relative matching speed, respectively. The data faithfully reproduce the general prediction of the Bayesian model for speed perception [16] that perceived speed decreases with increasing uncertainty, which can be nicely seen
from the inverse relationship between matching speeds and discrimination thresholds for each of the
different test stimuli. We found no significant difference in perceived speeds and thresholds between
the combined grating stimuli in ?peaks-add? and ?peaks-subtract? configuration (Fig. 4a,b; right),
despite the fact that the effective contrast of both configurations differs significantly (by 30, 22 and
11% for the {22.5, 7.5}, {22.5, 22.5} and {22.5, 67.5}% contrast conditions, respectively). This
suggests that the perceived speed of combined stimuli is independent of the relative phase between
the individual stimulus components, and therefore is processed in independent channels.
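A minimal Python sketch of such a maximum-likelihood cumulative-Gaussian fit is given below; it uses SciPy, the data format is illustrative, and the relative threshold is approximated as the fitted sigma divided by the PSE:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(ref_speeds, n_faster, n_total):
    """ML fit of a cumulative Gaussian to 2AFC counts; returns PSE and threshold."""
    ref_speeds = np.asarray(ref_speeds, dtype=float)
    def nll(params):
        pse, sigma = params
        p = np.clip(norm.cdf(ref_speeds, loc=pse, scale=sigma), 1e-6, 1 - 1e-6)
        return -np.sum(n_faster * np.log(p) + (n_total - n_faster) * np.log(1 - p))
    res = minimize(nll, x0=[ref_speeds.mean(), 1.0],
                   bounds=[(None, None), (1e-3, None)])
    pse, sigma = res.x
    return pse, sigma / pse                  # matching speed, Weber fraction
```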
3.2
Model fits
In order to fit the model observer to the data, we assumed that on every trial of the 2AFC task, the observer first makes individual estimates of the test and the reference speeds $[\hat s_t, \hat s_r]$ according to the corresponding distributions $p(\hat s|s)$ (see Section 2), and then, based on these estimates, decides which stimulus is faster.
a
relative threshold
0.2
0.1
0.05
b
matching speed (deg/s)
3
data
channel model
channel model+norm.
95% CI
2
1.5
c=22.5
7.5
simple
0.5 c/deg
22.5
67.5
combined
peaks-add
simple
1.5 c/deg
combined
peaks-subtract
Figure 4: Data and model fits for speed discrimination task: a) relative discrimination thresholds
(Weber-fraction) and b) matching speeds (PSEs). Error bars represent the 95% confidence interval
from 100 bootstrapped samples of the data. For the single frequency gratings, the perceived speed
increases with contrast as predicted by the standard Bayesian model. For the combined stimuli, there
is no significant difference (based on 95% confidence intervals) in perceived speeds between the
combined grating stimuli in "peaks-add" and "peaks-subtract" configuration. The Bayesian model
with normalized responses (red line) better accounts for the data than the model without interaction
between the channels (blue line).
According to signal detection theory, the resulting psychometric function
is described by the cumulative probability distribution

    P(\hat{s}_r > \hat{s}_t) = \int_0^{\infty} p(\hat{s}_r | s_r) \int_0^{\hat{s}_r} p(\hat{s}_t | s_t) \, d\hat{s}_t \, d\hat{s}_r        (8)
where p(\hat{s}_r | s_r) and p(\hat{s}_t | s_t) are the distributions of speed estimates for the reference and the test
stimulus according to our Bayesian observer model. The model without normalization has six parameters: four channel responses r_i, one for each simple stimulus, reflecting the individual likelihood
widths; the reference response r_ref; and the local slope of the prior a.¹ The model with normalization
has two additional parameters n_1 and n_2, reflecting the exponents of the normalization in each of
the two channels (Eq. 7).
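For illustration, Eq. (8) can be evaluated numerically; the sketch below stands in Gaussians for the estimate distributions (an assumption; the observer model of Section 2 supplies the actual p(\hat{s}|s)), with placeholder parameter values:

    import numpy as np

    # Monte Carlo sketch of Eq. (8): probability that the reference estimate
    # exceeds the test estimate under the two estimate distributions.
    def p_ref_faster(mu_r, sig_r, mu_t, sig_t, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        s_hat_r = rng.normal(mu_r, sig_r, n)   # draws of the reference estimate
        s_hat_t = rng.normal(mu_t, sig_t, n)   # draws of the test estimate
        return np.mean(s_hat_r > s_hat_t)      # Monte Carlo value of Eq. (8)

    print(p_ref_faster(2.0, 0.4, 2.0, 0.2))    # matched means -> near 0.5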
The models with and without response normalization were simultaneously fit to the psychometric functions of all 10 test conditions using the cumulative probability distribution (Eq. 8) and a maximum-likelihood optimization procedure.
¹ Alternatively, channel responses as a function of contrast could be modeled according to a contrast response function r_i = M + R_{max} c^2 / (c^2 + c_{50}^2), where M is the baseline response, R_{max} the maximal response, and c_{50} is the semi-saturation contrast level.
Figure 5: Psychometric curves for the ten testing conditions in Figure 4 (upper left to lower right
corner): Gaussian fits (black curves) to the psychometric data (circles) are compared to the fits of the
Bayesian channel model (blue curves) and the Bayesian channel model with normalized responses
(red curves). Histograms reflect the distributions of trials for the average subject.
Figure 5 shows the fitted psychometric functions for
both models as well as a generic cumulative Gaussian fit to the data. From these fits we extracted the
matching speeds (PSEs) and relative discrimination thresholds (Weber-fractions) shown in Fig. 4.
In general, the Bayesian model is quite well supported by the data. In particular, the data reflect
the inverse relationship between relative matching speeds and discrimination thresholds predicted
by the slow-speed prior of the model. The model with response normalization, however, better captures subjects' percepts, in particular in conditions where very low contrast stimuli were combined.
This is evident from a visual comparison of the full psychometric functions (Fig. 5) as well as the
extracted discrimination thresholds and matching speeds (Fig. 4). This impression is supported by
a log-likelihood ratio in favor of the model with normalized responses. Computing the Akaike Information Criterion (AIC) furthermore reveals that this advantage is not due to the larger number
of free parameters of the normalization model, with an advantage of ΔAIC = 127 (significance
p = 10^{-28}) in favor of the latter. Further support of the normalized model comes from the fitted parameter values: for the model with no normalization, the response level of the highest contrast
stimulus r_4 was not well constrained² (r_1 = 6.18, r_2 = 5.50, r_3 = 8.69, r_4 = 6e+07, r_ref = 11.66, a = -1.83),
while the fit to the normalized model led to more reasonable parameter values (r_1 = 10.33, r_2 = 9.96,
r_3 = 11.99, r_4 = 37.73, r_ref = 13.44, n_1 = 2e-16, n_2 = 6.8, a = -3.39). In particular, the fitted prior slope parameter is in good agreement with values from a previous study [16]. Note that the exponent n_1 is
not well constrained because the stimulus set only included one contrast level for the low-frequency
channel.
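For reference, the AIC comparison has the following form; the log-likelihoods and parameter counts below are placeholders, not the fitted values:

    # A sketch of the AIC-based model comparison reported above.
    def aic(log_likelihood, k):
        return 2 * k - 2 * log_likelihood

    # 6-parameter channel model vs. 8-parameter normalized model:
    delta_aic = aic(-5000.0, 6) - aic(-4930.0, 8)
    print(delta_aic)   # positive -> normalized model favored despite extra parameters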
The results suggest that the perceived speed of a combined stimulus can be accurately described
as an optimal combination of sensory information provided by individual spatiotemporal frequency
channels that interact via response normalization.
4 Discussion
We have shown that human visual speed perception can be accurately described by a Bayesian
observer model that optimally combines sensory information from independent channels, each sensitive to motion energies in a specific spatiotemporal frequency band. Our model expands the previously proposed Bayesian model of speed perception [16]. It no longer assumes a single likelihood
function affected by stimulus contrast but rather considers the combination of likelihood functions
based on the motion energies in different spatiotemporal frequency channels. This allows the model
to account for stimuli with more complex spatial structures.
² The fit essentially assumed σ_4 = 0.
We tested our model against data from a 2AFC speed discrimination experiment. Stimuli consisted
of drifting sinewave gratings at different spatial frequencies and combinations thereof. Subjects'
perceived speeds of the combined stimuli were independent of the phase configuration of the constituent sinewave gratings even though different phases resulted in different overall contrast values.
This supports the hypothesis that perceived speed is processed across multiple spatiotemporal frequency channels (Graham and Nachmias used a similar approach to demonstrate the existence of
individual spatial frequency channels [5]). The proposed observer model provided a good fit to
the data, but the fit was improved when the channel responses were assumed to be subject to normalization by the overall channel response. Considering that divisive normalization is arguably a
ubiquitous process in neural representations, we see this result as a consequence of our attempt to
formulate Bayesian observer models at a level that is closer to a physiological description. Note
that we consider the integration of the sensory information still optimal, albeit based on the normalized responses r_i'. Future experiments that will test more stimulus combinations will help to further
improve the characterization of the channel responses and interactions.
Although we did not discuss alternative models, it is apparent that the presented data eliminates
some obvious candidates. For example, a winner-take-all model that only uses the sensory information from the most reliable channel and an averaging model that equally weighs each channel's
response independent of its reliability would both make predictions that significantly diverge from the
data. Both models would not predict a decrease in sensory uncertainty for the combined stimuli,
which is a key feature of optimal cue-combination. This decrease is nicely reflected in the measured
decrease in discrimination thresholds for the combined stimuli when the thresholds for both individual gratings were approximately the same (Fig. 4b). Note that because of the slow-speed prior,
a Bayesian model predicts that perceived speeds are inversely proportional to the discrimination
threshold, a prediction that is well supported by our data. The fitted model parameters are also in
agreement with previous accounts of the estimated shape of the speed prior: the slope of the linear
approximation of the log-prior probability density is negative and comparable to previously reported
values [16].
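A small sketch of the variance predictions that separate the three decoding schemes discussed above (the single-channel noise levels are placeholders):

    import numpy as np

    sigma1, sigma2 = 0.3, 0.4

    var_opt = (sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2)  # reliability-weighted
    var_wta = min(sigma1, sigma2) ** 2                           # winner-take-all
    var_avg = (sigma1**2 + sigma2**2) / 4.0                      # equal-weight average

    # The optimal combination yields the lowest variance, i.e. the threshold
    # decrease measured for the combined stimuli (Fig. 4b).
    print(np.sqrt([var_opt, var_wta, var_avg]))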
In this paper we focused on speed perception. However, there is substantial evidence that the visual
system in general decomposes complex stimuli into their simpler constituents. The problem of how
the scattered information is then integrated into a coherent percept poses many interesting questions
with regard to the optimality of this integration across modalities [4, 7]. Our study generalizes cue integration to the pooling of information within a single perceptual modality. Here we provide a
behavioral account for both discrimination thresholds and matching speeds by directly estimating
the parameters of the likelihoods and the speed prior from psychophysical data.
Finally, the fact that the Bayesian model can account for both the perception of simple and complex
stimuli speaks for its generality. In the long term, the goal is to be able to predict the perceived
motion for an arbitrarily complex natural stimulus, and we believe the proposed model is a step in
this direction.
Acknowledgments
This work was supported by the Office of Naval Research (grant N000141110744).
References
[1] E. H. Adelson and J. R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284–299, 1985.
[2] K. R. Brooks, T. Morris, and P. Thompson. Contrast and stimulus complexity moderate the relationship between spatial frequency and perceived speed: Implications for MT models of speed perception. Journal of Vision, 11(14), 2011.
[3] M. Carandini and D. J. Heeger. Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1):51–62, 2012.
[4] M. O. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429–433, 2002.
[5] N. Graham and J. Nachmias. Detection of grating patterns containing two spatial frequencies: a comparison of single-channel and multiple-channel models. Vision Research, pages 251–259, 1971.
[6] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9(2):181–197, 1992.
[7] J. M. Hillis, S. J. Watt, M. S. Landy, and M. S. Banks. Slant from texture and disparity cues: Optimal cue combination. Journal of Vision, 4(12):967–992, 2004.
[8] F. Hürlimann, D. C. Kiper, and M. Carandini. Testing the Bayesian model of perceived speed. Vision Research, 42:2253–2257, 2002.
[9] H. Nover, C. H. Anderson, and G. C. DeAngelis. A logarithmic, scale-invariant representation of speed in macaque middle temporal area accounts for speed discrimination performance. Journal of Neuroscience, 25:10049–10060, 2005.
[10] N. J. Priebe and S. G. Lisberger. Estimating target speed from the population response in visual area MT. Journal of Neuroscience, 24(8):1907–1916, 2004.
[11] T. D. Sanger. Probability density estimation for the interpretation of neural population codes. Journal of Neurophysiology, 76(4):2790–2793, 1996.
[12] E. P. Simoncelli and D. J. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743–761, 1998.
[13] E. P. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically-derived normalization model. Advances in Neural Information Processing Systems (NIPS), 11, 1999.
[14] A. T. Smith and G. K. Edgar. Perceived speed and direction of complex gratings and plaids. Journal of the Optical Society of America A, 8(7):1161–1171, 1991.
[15] A. A. Stocker. Analog integrated 2-D optical flow sensor. Analog Integrated Circuits and Signal Processing, 46(2):121–138, February 2006.
[16] A. A. Stocker and E. P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4):578–585, 2006.
[17] L. S. Stone and P. Thompson. Human speed perception is contrast dependent. Vision Research, 32(8):1535–1549, 1992.
[18] Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598–604, 2002.
DeViSE: A Deep Visual-Semantic Embedding Model
Andrea Frome*, Greg S. Corrado*, Jonathon Shlens*, Samy Bengio
Jeffrey Dean, Marc'Aurelio Ranzato, Tomas Mikolov
* These authors contributed equally.
{afrome, gcorrado, shlens, bengio, jeff, ranzato†, tmikolov}@google.com
Google, Inc.
Mountain View, CA, USA
Abstract
Modern visual recognition systems are often limited in their ability to scale to
large numbers of object categories. This limitation is in part due to the increasing
difficulty of acquiring sufficient training data in the form of labeled images as the
number of object categories grows. One remedy is to leverage data from other
sources ? such as text data ? both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model
trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model
matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show
that the semantic information can be exploited to make predictions about tens
of thousands of image labels not observed during training. Semantic knowledge
improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.
1 Introduction
The visual world is populated with a vast number of objects, the most appropriate labeling of which
is often ambiguous, task specific, or admits multiple equally correct answers. Yet state-of-the-art vision systems attempt to solve recognition tasks by artificially assigning images to a small
number of rigidly defined classes. This has led to building labeled image data sets according to
these artificial categories and in turn to building visual recognition systems based on N-way discrete
classifiers. While growing the number of labels and labeled images has improved the utility of
visual recognition systems [7], scaling such systems beyond a limited number of discrete categories
remains an unsolved problem. This problem is exacerbated by the fact that N-way discrete classifiers
treat all labels as disconnected and unrelated, resulting in visual recognition systems that cannot
transfer semantic information about learned labels to unseen words or phrases. One way of dealing
with this issue is to respect the natural continuity of visual space instead of artificially partitioning
it into disjoint categories [20].
We propose an approach that addresses these shortcomings by training a visual recognition model
with both labeled images and a comparatively large and independent dataset ? semantic information
from unannotated text data. This deep visual-semantic embedding model (DeViSE) leverages textual
data to learn semantic relationships between labels, and explicitly maps images into a rich semantic
embedding space. We show that this model performs comparably to state-of-the-art visual object
classifiers when trained and evaluated on flat 1-of-N metrics, while simultaneously making fewer
semantically unreasonable mistakes along the way. Furthermore, we show that the model leverages
† Current affiliation: Facebook, Inc.
visual and semantic similarity to correctly predict object category labels for unseen categories, i.e.
"zero-shot" classification, even when the number of unseen visual categories is 20,000 for a model
trained on just 1,000 categories.
2 Previous Work
The current state-of-the-art approach to image classification is a deep convolutional neural network
trained with a softmax output layer (i.e. multinomial logistic regression) that has as many units
as the number of classes (see, for instance [11]). However, as the number of classes grows, the
distinction between classes blurs, and it becomes increasingly difficult to obtain sufficient numbers
of training images for rare concepts.
One solution to this problem, termed WSABIE [20], is to train a joint embedding model of both images and labels, by employing an online learning-to-rank algorithm. The proposed model contained
two sets of parameters: (1) a linear mapping from image features to the joint embedding space, and
(2) an embedding vector for each possible label. Compared to the proposed approach, WSABIE
only explored linear mappings from image features to the embedding space, and the available labels
were only those provided in the image training set. It could thus not generalize to new classes.
More recently, Socher et al. [18] presented a model for zero-shot learning where a deep neural
network was first trained in an unsupervised manner from many images in order to obtain a rich
image representation [3]; in parallel, a neural network language model [2] was trained in order to
obtain embedding representations for thousands of common terms. The authors trained a linear
mapping between the image representations and the word embeddings representing 8 classes for
which they had labeled images, thus linking the image representation space to the embedding space.
This last step was performed using a mean-squared error criterion. They also trained a simple model
to determine if a given image was from any of the 8 original classes or not (i.e., an outlier detector).
When the model determined an image to be in the set of 8 classes, a separately trained softmax
model was used to perform the 8-way classification; otherwise the model predicted the nearest class
in the embedding space (in their setting, only 2 outlier classes were considered). Their model differs
from our proposed approach in several ways: first and foremost, the scale, as our model considers
1,000 known classes for the image model and up to 20,000 unknown classes, instead of respectively
8 and 2; second, in [18] there is an inherent trade-off between the quality of predictions for trained
and outlier classes; third, by using a different visual model, different language model, and different
training objective, we were able to train a single unified model that uses only embeddings.
There has been other recent work showing impressive zero-shot performance on visual recognition
tasks [12, 17, 16], however all of these rely on a curated source of semantic information for the
labels: the WordNet hierarchy is used in [12] and [17], and [16] uses a knowledge base containing
descriptive properties for each class. By contrast, our approach learns its semantic representation
directly from unannotated data.
3 Proposed Approach
Our objective is to leverage semantic knowledge learned in the text domain, and transfer it to a model
trained for visual object recognition. We begin by pre-training a simple neural language model well-suited for learning semantically-meaningful, dense vector representations of words [13]. In parallel,
we pre-train a state-of-the-art deep neural network for visual object recognition [11], complete with
a traditional softmax output layer. We then construct a deep visual-semantic model by taking the
lower layers of the pre-trained visual object recognition network and re-training them to predict the
vector representation of the image label text as learned by the language model. These three training
phases are detailed below.
3.1 Language Model Pre-training
The skip-gram text modeling architecture introduced by Mikolov et al. [13, 14] has been shown to
efficiently learn semantically-meaningful floating point representations of terms from unannotated
text. The model learns to represent each term as a fixed length embedding vector by predicting
adjacent terms in the document (Figure 1a, right). We call these vector representations embedding vectors.
Figure 1: (a) Left: a visual object categorization network with a softmax output layer; Right: a skip-gram
language model; Center: our joint model, which is initialized with parameters pre-trained at the lower layers
of the other two models. (b) t-SNE visualization [19] of a subset of the ILSVRC 2012 1K label embeddings
learned using skip-gram.
Because synonyms tend to appear in similar contexts, this simple objective function drives
the model to learn similar embedding vectors for semantically related words.
We trained a skip-gram text model on a corpus of 5.7 million documents (5.4 billion words) extracted
from wikipedia.org. The text of the web pages was tokenized into a lexicon of roughly 155,000
single- and multi-word terms consisting of common English words and phrases as well as terms from
commonly used visual object recognition datasets [7]. Our skip-gram model used a hierarchical
softmax layer for predicting adjacent terms and was trained using a 20-word window with a single
pass through the corpus. For more details and a pointer to open-source code, see [13].
We trained skip-gram models of varying hidden dimensions, ranging from 100-D to 2,000-D, and
found 500- and 1,000-D embeddings to be a good compromise between training speed, semantic
quality, and the ultimate performance of the DeViSE model described below. The semantic quality
of the embedding representations learned by these models is impressive.1 A visualization of the language embedding space over a subset of ImageNet labels indicates that the language model learned
a rich semantic structure that could be exploited in vision tasks (Figure 1b).
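A minimal sketch of such a skip-gram model using the gensim reimplementation (the paper used the open-source tool of [13]); the toy corpus is a placeholder, while the 500-D size, 20-word window, and hierarchical softmax follow the text:

    from gensim.models import Word2Vec

    # Hedged example, not the authors' training pipeline.
    sentences = [["tiger", "shark", "swims", "near", "the", "reef"]]
    model = Word2Vec(sentences, vector_size=500, window=20,
                     sg=1, hs=1, negative=0, min_count=1, epochs=1)

    vec = model.wv["shark"]                       # a 500-D embedding vector
    print(model.wv.most_similar("shark", topn=5)) # meaningless on a toy corpus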
3.2 Visual Model Pre-training
The visual model architecture we employ is based on the winning model for the 1,000-class ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 [11, 6]. The deep neural network
model consists of several convolutional filtering, local contrast normalization, and max-pooling layers, followed by several fully connected neural network layers trained using the dropout regularization technique [10]. We trained this model with a softmax output layer, as described in [11], to
predict one of 1,000 object categories from the ILSVRC 2012 1K dataset [7], and were able to reproduce their results. This trained model serves both as our benchmark for performance comparisons,
as well as the initialization for our joint model.
3.3 Deep Visual-Semantic Embedding Model
Our deep visual-semantic embedding model (DeViSE) is initialized from these two pre-trained neural network models (Figure 1a). The embedding vectors learned by the language model are unit
normed and used to map label terms into target vector representations².
The core visual model, with its softmax prediction layer now removed, is trained to predict these
vectors for each image, by means of a projection layer and a similarity metric. The projection layer
is a linear transformation that maps the 4,096-D representation at the top of our core visual model
into the 500- or 1,000-D representation native to our language model.
¹ For example, the 9 nearest terms to tiger shark using cosine distance are bull shark, blacktip shark, shark, oceanic whitetip shark, sandbar shark, dusky shark, blue shark, requiem shark, and great white shark. The 9 nearest terms to car are cars, muscle car, sports car, compact car, automobile, racing car, pickup truck, dealership, and sedans.
² In [13], which introduced the skip-gram model for text, cosine similarity between vectors is used for measuring semantic similarity. Unit-norming the vectors and using dot product similarity is an equivalent similarity measurement.
The choice of loss function proved to be important. We used a combination of dot-product similarity
and hinge rank loss (similar to [20]) such that the model was trained to produce a higher dot-product
similarity between the visual model output and the vector representation of the correct label than between the visual output and other randomly chosen text terms. We defined the per training example
hinge rank loss:
    loss(image, label) = \sum_{j \neq label} \max[ 0, margin - \vec{t}_{label} M \vec{v}(image) + \vec{t}_j M \vec{v}(image) ]        (1)
where \vec{v}(image) is a column vector denoting the output of the top layer of our core visual network
for the given image, M is the matrix of trainable parameters in the linear transformation layer,
\vec{t}_{label} is a row vector denoting the learned embedding vector for the provided text label, and \vec{t}_j are
the embeddings of other text terms. In practice, we found that it was expedient to randomize the
algorithm both by (1) restricting the set of false text terms to possible image labels, and (2) truncating
the sum after the first margin-violating false term was encountered. The \vec{t} vectors were constrained
to be unit norm, and a fixed margin of 0.1 was used in all experiments³. We also experimented
with an L2 loss between visual and label embeddings, as suggested by Socher et al. [18], but that
consistently yielded about half the accuracy of the rank loss model. We believe this is because the
nearest neighbor evaluation is fundamentally a ranking problem and is best solved with a ranking
loss, whereas the L2 loss only aims to make the vectors close to one another but remains agnostic to
incorrect labels that are closer to the target image.
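A minimal numpy sketch of the per-example loss in Eq. (1), including the truncation at the first margin-violating term; array names and shapes are assumptions:

    import numpy as np

    # t_all: (num_labels, d) unit-norm label embeddings; M: (d, 4096) projection;
    # v_image: (4096,) top-layer visual features.
    def devise_loss(v_image, M, t_all, label, margin=0.1, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        proj = M @ v_image                     # image mapped into embedding space
        score_true = t_all[label] @ proj       # similarity to the correct label
        for j in rng.permutation(len(t_all)):  # randomized order over false labels
            if j == label:
                continue
            viol = margin - score_true + t_all[j] @ proj
            if viol > 0:                       # truncate at first violating term
                return viol
        return 0.0                             # no violation: zero loss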
The DeViSE model was trained by asynchronous stochastic gradient descent on a distributed computing platform described in [4]. As above, the model was presented only with images drawn from
the ILSVRC 2012 1K training set, but now trained to predict the term strings as text⁴. The parameters of the projection layer M were first trained while holding both the core visual model and the
text representation fixed. In the later stages of training the derivative of the loss function was backpropagated into the core visual model to fine-tune its output⁵, which typically improved accuracy
well scaled at the different layers of the network [9].
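For reference, a minimal sketch of the Adagrad update [9] applied to any one parameter array; names are hypothetical and `grad` would come from back-propagating Eq. (1):

    import numpy as np

    def adagrad_step(param, grad, accum, lr=0.01, eps=1e-8):
        accum += grad ** 2                           # running sum of squared grads
        param -= lr * grad / (np.sqrt(accum) + eps)  # per-parameter step size
        return param, accum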
At test time, when a new image arrives, one first computes its vector representation using the visual
model and the transformation layer; then one needs to look for the nearest labels in the embedding
space. This last step can be done efficiently using either a tree or a hashing technique, in order to
be faster than the naive linear search approach (see for instance [1]). The nearest labels are then
mapped back to ImageNet synsets for scoring (see Supplementary Materials for details).
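A minimal brute-force sketch of this inference step (a tree or hashing index [1] would replace the linear scan at scale; names are assumptions):

    import numpy as np

    def predict_labels(v_image, M, t_all, k=5):
        proj = M @ v_image                     # image in the embedding space
        scores = t_all @ proj                  # dot products with all labels
        top = np.argpartition(-scores, k)[:k]  # unordered top-k candidates
        return top[np.argsort(-scores[top])]   # sorted by decreasing similarity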
4 Results
The goals of this work are to develop a vision model that makes semantically relevant predictions
even when it makes errors and that generalizes to classes outside of its labeled training set, i.e. zero-shot learning. We compare DeViSE to two models that employ the same high-quality core vision
model, but lack the semantic structure imparted by our language model: (1) a softmax baseline
model, a state-of-the-art vision model [11] which employs a 1000-way softmax classifier; (2) a
random embedding model, a version of our model that uses random unit-norm embedding vectors
in place of those learned by the language model. Both use the trained visual model described in
Section 3.2.
In order to demonstrate parity with the softmax baseline on the most commonly-reported metric, we
compute "flat" hit@k metrics: the percentage of test images for which the model returns the one
true label in its top k predictions. To measure the semantic quality of predictions beyond the true
label, we employ a hierarchical precision@k metric based on the label hierarchy provided with the
³ The margin was chosen to be a fraction of the norm of the vectors, which is 1.0. A wide range of values would likely work well.
⁴ ImageNet image labels are synsets, a set of synonymous terms, where each term is a word or phrase. We found training the model to predict the first term in each synset to be sufficient, but sampling from the synset terms might work equally well.
⁵ In principle the gradients can also be back-propagated into the vector representations of the text labels. In this case, the language model should continue to train simultaneously in order to maintain the global semantic structure over all terms in the vocabulary.
Model type          dim     Flat hit@k (%)                 Hierarchical precision@k
                            1      2      5      10        2       5       10      20
Softmax baseline    N/A     55.6   67.4   78.5   85.0      0.452   0.342   0.313   0.319
DeViSE              500     53.2   65.2   76.7   83.3      0.447   0.352   0.331   0.341
DeViSE              1000    54.9   66.9   78.4   85.0      0.454   0.351   0.325   0.331
Random embeddings   500     52.4   63.9   74.8   80.6      0.428   0.315   0.271   0.248
Random embeddings   1000    50.5   62.2   74.2   81.5      0.418   0.318   0.290   0.292
Chance              N/A     0.1    0.2    0.5    1.0       0.007   0.013   0.022   0.042
Table 1: Comparison of model performance on our test set, taken from the ImageNet ILSVRC 2012 1K
validation set. Note that hierarchical precision@1 is equivalent to flat hit@1. See text for details.
ImageNet image repository [7]. In particular, for each true label and value of k, we generate a ground
truth list from the semantic hierarchy, and compute a per-example precision equal to the fraction of
the model's k predictions that overlap with the ground truth list. We report mean precision across
the test set. Detailed descriptions of the generation of the ground truth lists, the hierarchical scoring
metric, and train/validation/test dataset splits are provided in the Supplementary Materials.
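A minimal sketch of both metrics; names are assumptions, and the hierarchy-derived ground-truth lists are taken as given (see the Supplementary Materials for how they are built):

    import numpy as np

    def flat_hit_at_k(preds, truth, k):
        # preds[i]: ranked label indices for image i; truth[i]: true label index.
        return np.mean([truth[i] in preds[i][:k] for i in range(len(truth))])

    def hierarchical_precision_at_k(preds, truth, k, ground_truth_lists):
        # ground_truth_lists[(label, k)]: set of hierarchy-valid labels.
        overlaps = [len(set(preds[i][:k]) & ground_truth_lists[(truth[i], k)]) / k
                    for i in range(len(truth))]
        return np.mean(overlaps)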
4.1 ImageNet (ILSVRC) 2012 1K Results
This section presents flat and hierarchical results on the ILSVRC 2012 1K dataset, where the classes
of the examples presented at test time are the same as those used for training. Table 1 shows results
for the DeViSE model for 500- and 1000-dimensional skip-gram models compared to the random
embedding and softmax baseline models, on both the flat and hierarchical metrics.6
On the flat metric, the softmax baseline shows higher accuracy for k = 1, 2. At k = 5, 10, the
1000-D DeViSE model has reached parity, and at k = 20 (not shown) it performs slightly better.
We expected the softmax model to be the best performing model on the flat metric, given that its
cross-entropy training objective is most well matched to the evaluation metric, and are surprised that
the performance of DeViSE is so close to softmax performance.
On the hierarchical metric, the DeViSE models show better semantic generalization than the softmax baseline, especially for larger k. At k = 5, the 500-D DeViSE model shows a 3% relative
improvement over the softmax baseline, and at k = 20 almost a 7% relative improvement. This is a
surprisingly large gain, considering that the softmax baseline is a reproduction of the best published
model on these data. The gap that exists between the DeViSE model and softmax baseline on the
hierarchical metric reflects the benefit of semantic information above and beyond visual similarity [8]. The gap between the DeViSE model and the random embeddings model establishes that the
source of the gain is the well-structured embeddings learned by the language model not some other
property of our architecture.
4.2 Generalization and Zero-Shot Learning
A distinct advantage of our model is its ability to make reasonable inferences about candidate labels
it has never visually observed. For example, a DeViSE model trained on images labeled tiger shark,
bull shark, and blue shark, but never with images labeled shark, would likely have the ability to
generalize to this more coarse-grained descriptor because the language model has learned a representation of the general concept of shark which is similar to all of the specific sharks. Similarly,
if tested on images of highly specific classes which the model has never seen before, for example
a photo of an oceanic whitetip shark, and asked whether the correct label is more likely oceanic
whitetip shark or some other unfamiliar label (say, nuclear submarine), our model stands a fighting chance of guessing correctly because the language model ensures that the representation of oceanic
whitetip shark is closer to the representation of sharks the model has seen, while the representation
of nuclear submarine is closer to those of other sea vessels.
⁶ Note that our softmax baseline results differ from the results in [11] due to a simplification in the evaluation procedure: [11] creates several distorted versions of each test image and aggregates the results for a final label, whereas in our experiments, we evaluate using only the original test image. Our softmax baseline is able to reproduce the performance of the model in [11] when evaluated with the same procedure.
[Figure 2: six example images (a-f), each showing the top-5 zero-shot predictions of our model (DeViSE+1K) alongside the top-5 predictions of the softmax baseline over ImageNet 1K; see the caption below.]
Figure 2: For each image, the top 5 zero-shot predictions of DeViSE+1K from the 2011 21K label set and the
softmax baseline model, both trained on ILSVRC 2012 1K. Predictions ordered by decreasing score, with correct predictions in bold. Ground truth: (a) telephoto lens, zoom lens; (b) English horn, cor anglais; (c) babbler,
cackler; (d) pineapple, pineapple plant, Ananas comosus; (e) salad bar; (f) spacecraft, ballistic capsule, space
vehicle.
                                 # Candidate     Flat hit@k (%)
Data Set             Model       Labels          1      2      5      10     20
2-hop                DeViSE-0    1,589           6.0    10.0   18.1   26.4   36.4
                     DeViSE+1K   2,589           0.8    2.7    7.9    14.2   22.7
3-hop                DeViSE-0    7,860           1.7    2.9    5.3    8.2    12.5
                     DeViSE+1K   8,860           0.5    1.4    3.4    5.9    9.7
ImageNet 2011 21K    DeViSE-0    20,841          0.8    1.4    2.5    3.9    6.0
                     DeViSE+1K   21,841          0.3    0.8    1.9    3.2    5.3
Table 2: Flat hit@k performance of DeViSE on ImageNet-based zero-shot datasets of increasing difficulty
from top to bottom. DeViSE-0 and DeViSE+1K are the same trained model, but DeViSE-0 is restricted to only
predict zero-shot classes, whereas DeViSE+1K predicts both the zero-shot and the 1K training labels. For all,
zero-shot classes did not occur in the image training set.
To test this hypothesis, we extracted images from the ImageNet 2011 21K dataset with labels that
were not included in the ILSVRC 2012 1K dataset on which DeViSE was trained. These are "zero-shot" data sets in the sense that our model has no visual knowledge of these labels, though embeddings for the labels were learned by the language model. The softmax baseline is only able to predict
labels from ILSVRC 2012 1K. The zero-shot experiments were performed with the same trained
500-D DeViSE model used for results in Section 4.1, but it is evaluated in two ways: DeViSE-0
only predicts the zero-shot labels, and DeViSE+1K predicts zero-shot labels and the ILSVRC 2012
1K training labels.
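The distinction between the two evaluation modes amounts to the candidate set handed to the ranking step; a minimal sketch with hypothetical names:

    import numpy as np

    # DeViSE-0 ranks only the zero-shot label embeddings; DeViSE+1K ranks the
    # union with the 1K training-label embeddings.
    def rank_candidates(proj, t_zero, t_train=None):
        cand = t_zero if t_train is None else np.vstack([t_zero, t_train])
        return np.argsort(-(cand @ proj))      # candidate indices by similarity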
Figure 2 shows label predictions for a handful of selected examples from this dataset to qualitatively
illustrate model behavior. Note that DeViSE successfully predicts a wide range of labels outside
its training set, and furthermore, the incorrect predictions are generally semantically "close" to the
desired label. Figure 2 (a), (b), (c), and (d) show cases where our model makes significantly better
top-5 predictions than the softmax-based model. For example, in Figure 2 (a), the DeViSE model
is able to predict a number of lens-related labels even though it was not trained on images in any
of the predicted categories. Figure 2 (d) illustrates a case where the top softmax prediction is quite
good, but where it is unable to generalize to new labels and its remaining predictions are off the
mark, while our model?s predictions are more plausible. Figure 2 (e) highlights a case where neither
model gets the exact true label, but both models are giving plausible labels. Figure 2 (f) shows a
case where the softmax model emits more nearly correct labels than the DeViSE model.
To quantify the performance of the model on zero-shot data, we constructed from our ImageNet
2011 21K zero-shot data three test data sets of increasing difficulty based on the image labels'
tree distance from the training ILSVRC 2012 1K labels in the ImageNet label hierarchy [7]. The
easiest dataset, "2-hop", comprises the 1,589 labels that are within two tree hops of the training
labels, making them visually and semantically similar to the training set. A more difficult "3-hop"
dataset was constructed in the same manner. Finally, we built a third, particularly challenging dataset
consisting of all the labels in ImageNet 2011 21K that are not in ILSVRC 2012 1K.
                                         Hierarchical precision@k
Data Set             Model               1       2       5       10      20
2-hop                DeViSE-0            0.06    0.152   0.192   0.217   0.233
                     DeViSE+1K           0.008   0.204   0.196   0.201   0.214
                     Softmax baseline    0       0.236   0.181   0.174   0.179
3-hop                DeViSE-0            0.017   0.037   0.191   0.214   0.236
                     DeViSE+1K           0.005   0.053   0.192   0.201   0.214
                     Softmax baseline    0       0.053   0.157   0.143   0.130
ImageNet 2011 21K    DeViSE-0            0.008   0.017   0.072   0.085   0.096
                     DeViSE+1K           0.003   0.025   0.083   0.092   0.101
                     Softmax baseline    0       0.023   0.071   0.069   0.065
Table 3: Hierarchical precision@k results on zero-shot classification. Performance of DeViSE compared to
the softmax baseline model across the same datasets as in Table 2. Note that the softmax model can never
directly predict the correct label so its precision@1 is 0.
Model                        200 labels    1000 labels
DeViSE                       31.8%         9.0%
Mensink et al. 2012 [12]     35.7%         1.9%
Rohrbach et al. 2011 [17]    34.8%         -
Table 4: Flat hit@5 accuracy on the zero-shot task from [12]. DeViSE experiments were performed with a
500-D model. The [12] model uses a curated hierarchy over labels for zero-shot classification, but without using
this information, our model is close in performance on the 200 zero-shot class label task. When the models can
predict any of the 1000 labels, we achieve better accuracy, indicating DeViSE has less of a bias toward training
classes than [12]. As in [12], we include a result on a similar task from [17], though their work used a different
set of 200 zero-shot classes.
We again calculated the flat hit@k measure to determine how frequently DeViSE-0 and DeViSE+1K
predicted the correct label for each of these data sets (Table 2). DeViSE-0's top prediction was the
correct label 6.0% of the time across 1,589 novel labels, and the rate increases with k to 36.4% within
the top 20 predictions. As the zero-shot data sets become more difficult, the accuracy decreases in
absolute terms, though it is better relative to chance (not shown). Since a traditional softmax visual
model can never produce the correct label on zero-shot data, its performance would be 0% for all
k. The DeViSE+1K model performed uniformly worse than the plain DeViSE-0 model by a margin
that indicates it has a bias toward training classes.
To provide a stronger baseline for comparison, we compared the performance of our model and
the softmax model on the hierarchical metric we employed above. Although the softmax baseline
model can never predict exactly the correct label, the hierarchical metric will give the model credit
for predicting labels that are in the neighborhood of the correct label in the ImageNet hierarchy
(for k > 1). Visual similarity is strongly correlated with semantic similarity for nearby object
categories [8], and the softmax model does leverage visual similarity between zero-shot and training
images to make predictions that will be scored favorably (e.g. Figure 2d).
The easiest dataset, "2-hop", contains object categories that are as visually and semantically similar
to the training set as possible. For this dataset the softmax model outperforms the DeViSE model for
hierarchical precision@2, demonstrating just how large a role visual similarity plays in predicting
semantically "nearby" labels (Table 3). However, for k = 5, 10, 20, our model produces superior
predictions relative to the ImageNet hierarchy, even on this easiest dataset. For the two more difficult datasets, where there are more novel categories and the novel categories are less closely related
to those in the training data set, DeViSE outperforms the softmax model at all measured hierarchical precisions. The quantitative gains can be quite large, as much as 82% relative improvement
over softmax performance, and qualitatively, the softmax model's predictions can be surprisingly
unreasonable in some cases (e.g. Figure 2c). The random embeddings model we described above
performed substantially worse than either of the real models. These results indicate that our architecture succeeds in leveraging the semantic knowledge captured by the language model to make
reasonable predictions, even as test images become increasingly dissimilar from those used in the
training set.
To provide a comparison with other work in zero-shot learning, we also directly compare to the
zero-shot results from [12]. These were performed on a particular 800/200 split of the 1000 classes
from ImageNet 2010: training and model tuning is performed using the 800 classes, and test images
are drawn from the remaining 200 classes. Results are shown in Table 4.
Taken together, these zero-shot experiments indicate that the DeViSE model can exploit both visual
and semantic information to predict novel classes never before observed. Furthermore, the presence
of semantic information in the model substantially improves the quality of its predictions.
5 Conclusion
In contrast to previous attempts in this area [18], we have shown that our joint visual-semantic embedding model can be trained to give performance comparable to a state-of-the-art softmax based
model on a flat object classification metric, while simultaneously making more semantically reasonable errors, as indicated by its improved performance on a hierarchical label metric. We have
also shown that this model is able to make correct predictions across thousands of previously unseen
classes by leveraging semantic knowledge elicited only from unannotated text.
The advantages of this architecture, however, extend beyond the experiments presented here.
First, we believe that our model's unusual compatibility with larger, less manicured data sets will
prove to be a major strength moving forward. In particular, the skip-gram language model we
constructed included only a modestly sized vocabulary, and was exposed only to the text of a single
online encyclopedia; we believe that the gains available to models with larger vocabularies and
trained on vastly larger text corpora will be significant, and easily outstrip methods which rely on
manually constructed semantic hierarchies (e.g. [17]). Perhaps more importantly, though here we
trained on a curated academic image dataset, our model's architecture naturally lends itself to being
trained on all available images that can be annotated with any text term contained in the (larger)
vocabulary. We believe that training massive "open" image datasets of this form will dramatically
improve the quality of visual object categorization systems.
Second, we believe that the 1-of-N (and nearly balanced) visual object classification problem is
soon to be outmoded by practical visual object categorization systems that can handle very large
numbers of labels [5] and the re-definition of valid label sets at test time. For example, our model
can be trained once on all available data, and simultaneously used in one application requiring
only coarse object categorization (e.g. house, car, pedestrian) and another application requiring
fine categorization in a very specialized subset (e.g. Honda Civic, Ferrari F355, Tesla Model-S).
Moreover, because test time computation can be sub-linear in the number of labels contained in the
training set, our model can be used in exactly such systems with much larger numbers of labels,
including overlapping or never-observed categories.
Moving forward, we are experimenting with techniques which more directly leverage the structure
inherent in the learned language embedding, greatly reducing training costs of the joint model and
allowing even greater scaling [15].
Acknowledgments
Special thanks to those who lent their insight and technical support for this work, including Matthieu
Devin, Alex Krizhevsky, Quoc Le, Rajat Monga, Ilya Sutskever, and Wojciech Zaremba.
References
[1] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems, NIPS, 2010.
[2] Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
[3] A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning (ICML), 2011.
[4] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, NIPS, 2012.
[5] Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, and Jay Yagnik. Fast, accurate detection of 100,000 object classes on a single machine. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
[6] Jia Deng, Alex Berg, Sanjeev Satheesh, Hao Su, Aditya Khosla, and Fei-Fei Li. ImageNet large scale visual recognition challenge 2012.
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[8] Thomas Deselaers and Vittorio Ferrari. Visual and semantic similarity in ImageNet. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[9] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[10] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[11] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, NIPS, 2012.
[12] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In European Conference on Computer Vision (ECCV), 2012.
[13] Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations (ICLR), Scottsdale, Arizona, USA, 2013.
[14] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, NIPS, 2013.
[15] Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Jonathon Shlens, Andrea Frome, Greg S. Corrado, and Jeffrey Dean. Zero-shot learning by convex combination of semantic embeddings. arXiv (to be submitted), 2013.
[16] Mark Palatucci, Dean Pomerleau, Geoffrey E. Hinton, and Tom M. Mitchell. Zero-shot learning with semantic output codes. In Advances in Neural Information Processing Systems, NIPS, 2009.
[17] Marcus Rohrbach, Michael Stark, and Bernt Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[18] R. Socher, M. Ganjoo, H. Sridhar, O. Bastani, C. D. Manning, and A. Y. Ng. Zero-shot learning through cross-modal transfer. In International Conference on Learning Representations (ICLR), Scottsdale, Arizona, USA, 2013.
[19] L. J. P. van der Maaten and G. E. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
[20] Jason Weston, Samy Bengio, and Nicolas Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1):21–35, 2010.
4,647 | 5,205 | Visual Concept Learning: Combining Machine Vision
and Bayesian Generalization on Concept Hierarchies
Yangqing Jia^1, Joshua Abbott^2, Joseph Austerweil^3, Thomas Griffiths^2, Trevor Darrell^1
^1 UC Berkeley EECS, ^2 Dept of Psychology, UC Berkeley,
^3 Dept of Cognitive, Linguistics, and Psychological Sciences, Brown University
{jiayq, joshua.abbott, tom_griffiths, trevor}@berkeley.edu
joseph_austerweil@brown.edu
Abstract
Learning a visual concept from a small number of positive examples is a significant challenge for machine learning algorithms. Current methods typically fail
to find the appropriate level of generalization in a concept hierarchy for a given
set of visual examples. Recent work in cognitive science on Bayesian models
of generalization addresses this challenge, but prior results assumed that objects
were perfectly recognized. We present an algorithm for learning visual concepts
directly from images, using probabilistic predictions generated by visual classifiers as the input to a Bayesian generalization model. As no existing challenge
data tests this paradigm, we collect and make available a new, large-scale dataset
for visual concept learning using the ImageNet hierarchy as the source of possible
concepts, with human annotators to provide ground truth labels as to whether a
new image is an instance of each concept using a paradigm similar to that used in
experiments studying word learning in children. We compare the performance of
our system to several baseline algorithms, and show a significant advantage results
from combining visual classifiers with the ability to identify an appropriate level
of abstraction using Bayesian generalization.
1 Introduction
Machine vision methods have achieved considerable success in recent years, as evidenced by performance on major challenge problems [4, 7], where strong performance has been obtained for
assigning one of a large number of labels to each of a large number of images. However, this research has largely focused on a fairly narrow task: assigning a label (or sometimes multiple labels)
to a single image at a time. This task is quite different from that faced by a human child trying to
learn a new word, where the child is provided with multiple positive examples and has to generalize
appropriately. Even young children are able to learn novel visual concepts from very few positive
examples [3], something that still poses a challenge for machine vision systems. In this paper, we
define a new challenge task for computer vision, visual concept learning, and provide a first
account of a system that can learn visual concepts from a small number of positive examples.
In our visual concept learning task, a few example images from a visual concept are given and
the system has to indicate whether a new image is or is not an instance of the target concept. A
key aspect of this task is determining the degree to which the concept should be generalized [21]
when multiple concepts are logically consistent with the given examples. For example, consider the
concepts represented by examples in Figure 1 (a-c) respectively, and the task of predicting whether
new images (d-e) belong to them or not. The ground truth from human annotators reveals that the
level of generalization varies according to the conceptual diversity, with greater diversity leading to
broader generalization. In the examples shown in Figure 1, people might identify the concepts as
(a) Dalmatians, (b) all dogs, and (c) all animals, but not generalize beyond these levels even though
no negative images forbid doing so.

Figure 1: Visual concept learning. (a-c): positive examples of three visual concepts. Even without
negative data, people are able to learn these concepts: (a) Dalmatians, (b) dogs and (c) animals.
Note that although (a) contains valid examples of dogs and both (a) and (b) contain valid examples
of animals, people restrict the scope of generalization to more specific concepts, and find it easy to
make judgments about whether novel images such as (d) and (e) are instances of the same concepts:
the task we refer to as visual concept learning.

Despite recent successes in large-scale category-level object recognition,
we will show state-of-the-art machine vision systems fail to exhibit such patterns of generalization,
and have great difficulty learning without negative examples.
Bayesian models of generalization [1, 18, 21] account for these phenomena, determining the scope
of a novel concept (e.g., does the concept refer to Dalmatians, all dogs, or all animals?) in a similar
manner to people. However, these models were developed by cognitive scientists interested in analyzing human cognition, and require examples to be manually labeled as belonging to a particular
leaf node in a conceptual hierarchy. This is reasonable if one is asking whether proposed psychological models explain human behavior, but prevents the models from being used to automatically
solve visual concept learning problems for a robot or intelligent agent.
We bring these two threads of research together, using machine vision systems to assign novel
images locations within a conceptual hierarchy and a Bayesian generalization model to determine
how to generalize from these examples. This results in a system that comes closer to human performance than state-of-the-art machine vision baselines. As an additional contribution, since no
existing dataset adequately tests human-like visual concept learning, we have collected and made
available to the community the first large-scale dataset for evaluating whether machine vision algorithms can learn concepts that agree with human perception and label new unseen images, with
ground-truth labeling obtained from human annotators from Amazon Mechanical Turk. We believe
that this new task provides challenges beyond the conventional object classification paradigms.
2 Background
In machine vision, scant attention has been given to the problem of learning a visual concept from
a few positive examples as we have defined it. When the problem has been addressed, it has largely
been considered from a hierarchical regularization [16] or transfer learning [14] perspective, assuming that a fixed set of labels are given and exploiting transfer or regularization within a hierarchy.
Mid-level representations based on attributes [8, 13] focus on extracting common attributes such
as "fluffy" and "aquatic" that could be used to semantically describe object categories better than
low-level features. Transfer learning approaches have been proposed to jointly learn classifiers with
structured regularization [14].
Of all these previous efforts, our paper is most closely related to work that uses object hierarchies to
support classification. Salakhutdinov et al. [16] proposed learning a set of object classifiers with regularization using hierarchical knowledge, which improves the classification of objects at the leaves
of the hierarchy. However, this work did not address the problem of determining the level of abstraction within the hierarchy at which to make generalizations, which is a key aspect of the visual
concept learning problem. Deng et al. [5] proposed predicting object labels only to a granularity that
the classifier is confident with, but their goal was minimizing structured loss rather than mimicking
human generalization.
Existing models from cognitive science mainly focus on understanding human generalization judgments within fairly restricted domains. Tenenbaum and colleagues [18, 20] proposed mathematical
abstractions for the concept learning problem, building on previous work on models of generalization by Shepard [17]. Xu and Tenenbaum [21] and Abbott et al. [1] conducted experiments
with human participants that provided support for this Bayesian generalization framework. Xu and
Tenenbaum [21] showed participants one or more positive examples of a novel word (e.g., "these
three objects are Feps"), while manipulating the taxonomic relationship between the examples. For
instance, participants could see three toy Dalmatians, three toy dogs, or three toy animals. Participants were then asked to identify the other "Feps" among a variety of both taxonomically related
and unrelated objects presented as queries. If the positive examples were three Dalmatians, people
might be asked whether other Dalmatians, dogs, and animals are Feps, along with other objects such
as vegetables and vehicles. Subsequent work has used the same basic methodology in experiments
using a manually collated set of images as stimuli [1].
All of these models assume that objects are already mapped onto locations in a perceptual space or
conceptual hierarchy. Thus, they are not able to make predictions about genuinely novel stimuli.
Linking such generalization models to direct perceptual input is necessary in order to be able to use
this approach to learn visual concepts directly from images.
3 A Large-scale Concept Learning Dataset
Existing datasets (PASCAL [7], ILSVRC [2], etc.) test supervised learning performance with relatively large amounts of positive and negative examples available, with ground truth as a set of
mutually-exclusive labels. To our knowledge, no existing dataset accurately captures the task we
refer to as visual concept learning: to learn a novel word from a small set of positive examples like
humans do. In this section, we describe in detail our effort to make available a dataset for such task.
3.1 Test Procedure
In our test procedure, an agent is shown n example images (n = 5 in our dataset) sampled from a
node (which may be a leaf or an intermediate node) of the ImageNet synset tree, and is then asked
whether other new images sampled from ImageNet belong to the concept or not. The scores that the
agent gives are then compared against human ground truth that we collect, and we use precision-recall curves to evaluate the performance.
From a machine vision perspective, one may ask whether this visual concept learning task differs
from the conventional ImageNet-defined classification problem: identifying the node from which
the examples are drawn, and then answering yes for images in the subtree corresponding to the node,
and no for images not from the node. In fact, we will show in Section 5.2 that using this approach
fails to explain how people learn visual concepts. Human performance in the above task exhibits
much more sophisticated concept learning behaviors than simply identifying the node itself, and
the latter differs significantly from what we observe from human participants. In addition, with no
negative images, a conventional classification model fails to distinguish between nodes that are both
valid candidates (e.g., "dogs" and "animals" when shown a bunch of dog images). These make our
visual concept learning essentially different and richer than a conventional classification problem.
3.2 Automatic Generation of Examples and Queries
Large-scale experimentation requires an efficient scheme to generate test data across varying levels
of a concept hierarchy. To this end, we developed a fully-automated procedure for constructing a
large-scale dataset suitable for a challenge problem focused on visual concept learning. We used
the ImageNet LSVRC [2] 2010 data as the basis for automatically constructing a hierarchically-organized set of concepts at four different levels of abstraction. We had two goals in constructing
the dataset: to cover concepts at various levels of abstraction (from subordinate concepts to superordinate concepts, such as from Dalmatian to living things), and to find query images that comprehensively test human generalization behavior. We address these two goals in turn.
To generate concepts at various levels of abstraction, we use all the nodes in the ImageNet hierarchy
as concept candidates, starting from the leaf node classes as the most specific level concept. We then
generate three more levels of increasingly broad concepts along the path from the leaf to the root for
each leaf node in the hierarchy. Examples from such concepts are then shown to human participants
to obtain human generalization judgements, which will serve as the ground truth. Specifically, we
use the leaf node class itself as the most basic trial type L0 , and select three levels of nested concepts
L1, L2, L3, which correspond to three intermediate nodes along the path from the leaf node to the
root. We choose the three nodes that maximize the combined information gain across these levels:

$$C(L_{1 \cdots 3}) = \sum_{i=0}^{3} \log\left(|L_{i+1}| - |L_i|\right) - \log |L_{i+1}|, \qquad (1)$$

where $|L_i|$ is the number of leaf nodes under the subtree rooted at $L_i$, and $L_4$ is the whole taxonomy
tree. As a result, we obtain levels that are "evenly" distributed over the taxonomy tree.

Figure 2: Concepts drawn from ImageNet. (a) example images sampled from the four levels for
blueberry, and (b) the histogram of subtree sizes for the different levels of concepts (x axis in log scale).

Such levels
coarsely correspond to the sub-category, basic, super-basic, and super-category levels in the taxonomy: for example, the four levels used in Figure 1 are dalmatian, domestic dog, animal,
organism for the leaf node dalmatian, and in Figure 2(a) are blueberry, berry, edible
fruit, and natural object for the leaf node blueberry. Figure 2(b) shows a histogram of
the subtree sizes for L1 to L3 respectively.
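To make Eq. (1) concrete, the level selection can be sketched in a few lines of Python. The path representation (subtree leaf counts for the ancestors of a leaf) and the example numbers below are illustrative assumptions, not values from the paper:

```python
import itertools
import math

def combined_information_gain(sizes):
    """Eq. (1) for nested subtree sizes [|L0|, ..., |L4|], with |L0| = 1
    (the leaf class itself) and |L4| = the total number of leaves."""
    total = 0.0
    for i in range(4):
        gap = sizes[i + 1] - sizes[i]
        if gap <= 0:                      # nested levels must strictly grow
            return float("-inf")
        total += math.log(gap) - math.log(sizes[i + 1])
    return total

def select_levels(ancestor_sizes, n_total_leaves):
    """Pick the three intermediate ancestors (L1, L2, L3) that maximize
    Eq. (1). ancestor_sizes: subtree leaf counts along the leaf-to-root
    path, ordered from most to least specific."""
    best_score, best_combo = float("-inf"), None
    for combo in itertools.combinations(range(len(ancestor_sizes)), 3):
        sizes = [1] + [ancestor_sizes[i] for i in combo] + [n_total_leaves]
        score = combined_information_gain(sizes)
        if score > best_score:
            best_score, best_combo = score, combo
    return best_combo

# Hypothetical leaf whose ancestors cover 4, 19, 116 and 410 leaves of a
# 1000-leaf taxonomy:
print(select_levels([4, 19, 116, 410], 1000))
```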
For each concept, the five images shown to participants as examples of that concept were randomly
sampled from five different leaf node categories from the corresponding subtree in the ILSVRC
2010 test images. Figure 1 and 2 show such examples.
To obtain the ground truth (the concepts people perceive when given the set of examples), we then
randomly sample twenty query images, and ask human participants whether each of these query
images belong to the concept given by the example images. A total of 20 images are randomly
sampled as follows: three each from the L0 , L1 , L2 and L3 subtrees, and eight images outside L3 .
This ensures a complete coverage over in-concept and out-of-concept queries. We explicitly made
sure that the leaf node classes of the query images were different from those of the examples if
possible, and no duplicates exist among the 20 queries. Note that we always sampled the example
and query images from the ILSVRC 2010 test images, allowing us to subsequently train our machine
vision models with the training and validation images from the ILSVRC dataset while keeping those
in the visual concept learning dataset as novel test images.
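The query-sampling scheme amounts to stratified sampling over the nested subtrees. In the sketch below, `leaves_by_ring` and `images_by_leaf` are hypothetical indexes one would build over the ImageNet tree and the held-out test images; they are not structures from the paper:

```python
import random

def sample_queries(leaves_by_ring, images_by_leaf, seed=0):
    """Draw the 20 query images for one concept: three each from the
    L0..L3 subtrees and eight from outside L3, with no duplicates.
    leaves_by_ring: maps 'L0'..'L3'/'outside' to candidate leaf classes.
    images_by_leaf: maps a leaf class to its held-out test images."""
    rng = random.Random(seed)
    plan = [("L0", 3), ("L1", 3), ("L2", 3), ("L3", 3), ("outside", 8)]
    queries = []
    for ring, count in plan:
        pool = [img for leaf in leaves_by_ring[ring]
                    for img in images_by_leaf[leaf]]
        queries.extend((ring, img) for img in rng.sample(pool, count))
    return queries
```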
3.3 Collecting Human Judgements
We created 4,000 identical concepts (four for each leaf node) using the protocol above, and recruited
participants online through Amazon Mechanical Turk (AMT, http://www.mturk.com) to obtain the human ground truth data. For each concept, an AMT HIT (a single task presented to the
human participants) is formed with five example images and twenty query images, and the participants were asked whether each query belongs to the concept represented by the examples. Each HIT
was completed by five unique participants, with a compensation of $0.05 USD per HIT. Participants
were allowed to complete as many unique trials as they wished. Thus, a total of 20,000 AMT HITs
were collected, and a total of 100,000 images were shown to the participants. On average, each
participant took approximately one minute to finish each HIT, spending about 3 seconds per query
image. The dataset is publicly available at http://www.eecs.berkeley.edu/~jiayq/.
4 Visually-Grounded Bayesian Concept Learning
In this section, we describe an end-to-end framework which combines Bayesian word learning models and visual classifiers, and is able to perform concept learning with perceptual inputs.
4.1 Bayesian Concept Learning
Prior work on concept learning [21] addressed the problem of generalization from examples using
a Bayesian framework: given a set of N examples (images in our case) $X = \{x_1, x_2, \ldots, x_N\}$ that
are members of an unknown concept C, the probability that a query instance $x_{\text{query}}$ also belongs to
the same concept is given by

$$P_{\text{new}}(x_{\text{query}} \in C \mid X) = \sum_{h \in H} P_{\text{new}}(x_{\text{new}} \mid h)\, P(h \mid X), \qquad (2)$$
where H is called the "hypothesis space": a set of possible hypotheses for what the concept might
be. Each hypothesis corresponds to an (often semantically related) subset of all the objects in the
world, such as "dogs" or "animals". Given a specific hypothesis h, the probability $P_{\text{new}}(x_{\text{new}} \mid h)$
that a new instance belongs to it is 1 if $x_{\text{new}}$ is in the set, and 0 otherwise, and $P(h \mid X)$ is the
posterior probability of a hypothesis h given the examples X .
The posterior distribution over hypotheses is computed using Bayes' rule: it is proportional to
the product of the likelihood, $P(X \mid h)$, which is the probability of drawing these examples from the
hypothesis h uniformly at random, times the prior probability $P(h)$ of the hypothesis:

$$P(h \mid X) \propto P(h) \prod_{i=1}^{N} P_{\text{example}}(x_i \mid h), \qquad (3)$$
where we also make the strong sampling assumption that each $x_i$ is drawn uniformly at random from
the set of instances picked out by h. Importantly, this ensures that the model acts in accordance
with the "size principle" [18, 20], meaning that the conditional probability of an instance given
a hypothesis is inversely proportional to the size of the hypothesis, i.e., the number of possible
instances that could be drawn from the hypothesis:

$$P_{\text{example}}(x_i \mid h) = |h|^{-1}\, I(x_i \in h), \qquad (4)$$
where $|h|$ is the size of the hypothesis and $I(\cdot)$ is an indicator function that has value 1 when the
statement is true. We note that the probability of an example and that of a query given a hypothesis
are different: the former depends on the size of the underlying hypothesis, representing the nature
of training with strong sampling. For example, as the number of examples that are all Dalmatians
increases, it becomes increasingly likely that the concept is just Dalmatians and not dogs in general
even though both are logically possible, because it would have been incredibly unlikely to only
sample Dalmatians given that the truth concept was dogs. In addition, the prior distribution P (h)
captures biases due to prior knowledge, which favor particular kinds of hypotheses over others
(which we will discuss in the next subsection). For example, it is known that people favor basic
level object categories such as dogs over subcategories (such as Dalmatians) or supercategories
(such as animals).
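Putting Eqs. (2)-(4) together over a discrete hypothesis space, the generalization computation is only a few lines. The sketch below is a schematic re-implementation on a toy hierarchy, not the authors' code; for concreteness it also uses the Erlang prior introduced in Section 4.2:

```python
import math

def posterior_over_hypotheses(examples, hypotheses, sigma=200.0):
    """Eq. (3) with the size-principle likelihood of Eq. (4) and an
    Erlang prior over hypothesis sizes. examples: leaf-class ids;
    hypotheses: list of (name, set_of_leaf_ids); assumes at least one
    hypothesis contains all the examples."""
    log_post = []
    for name, h in hypotheses:
        size = len(h)
        log_prior = math.log(size / sigma ** 2) - size / sigma
        if all(x in h for x in examples):
            log_lik = -len(examples) * math.log(size)   # Eq. (4)
        else:
            log_lik = float("-inf")                      # inconsistent
        log_post.append(log_prior + log_lik)
    m = max(log_post)
    post = [math.exp(lp - m) for lp in log_post]
    z = sum(post)
    return [p / z for p in post]

def p_in_concept(query, examples, hypotheses):
    """Eq. (2): posterior-weighted hypothesis membership of the query."""
    post = posterior_over_hypotheses(examples, hypotheses)
    return sum(p for p, (_, h) in zip(post, hypotheses) if query in h)

# Toy nested hierarchy: dalmatian < dog < animal (10 leaves in total).
H = [("dalmatian", {0}), ("dog", {0, 1, 2}), ("animal", set(range(10)))]
print(p_in_concept(1, [0, 0, 0], H))   # 3 Dalmatians: narrow generalization
print(p_in_concept(1, [0, 1, 2], H))   # 3 different dogs: broad generalization
```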
4.2 Concept Learning with Perceptual Uncertainty
Existing Bayesian word learning models assume that objects are perfectly recognized, thus representing them as discrete indices into a set of finite tokens. Hypotheses are then subsets of the complete set of tokens and are often hierarchically nested. Although perceptual spaces were adopted
in [18], only very simple hypotheses (rectangles over the position of dots) were used. Performing
Bayesian inference with a complex perceptual input such as images is thus still a challenge. To this
end, we utilize the state-of-the-art image classifiers and classify each image into the set of leaf node
classes given in the ImageNet hierarchy, and then build a hypothesis space on top of the classifier
outputs.
Specifically, we construct the hypothesis space over the image labels using the ImageNet hierarchy,
with each subtree rooted at a node serving as a possible hypothesis. The hypothesis sizes are then
computed as the number of leaf node classes under the corresponding node, e.g., the node "animal"
would have a larger size than the node "dogs". The large number of images collected by ImageNet
allows us to train classifiers from images to the leaf node labels, which we will describe shortly.
Assuming that there are a total of K leaf nodes, for an image $x_i$ that is classified as label $\hat{y}_i$, the
likelihood $P(x_i \mid h)$ is then defined as

$$P_{\text{example}}(x_i \mid h) = \frac{1}{|h|} \sum_{j=1}^{K} A_{j \hat{y}_i}\, I(j \in h), \qquad (5)$$
where A is the normalized confusion matrix, with $A_{j,i}$ being the probability that the true leaf node is
j given the classifier output being i. The motivation of using the confusion matrix is that classifiers
are not perfect and misclassification could happen. Thus, the use of the confusion matrix incorporates the visual ambiguity into the word learning framework by providing an unbiased estimation of
the true leaf node label for an image.
The prior probability of a hypothesis was defined to be an Erlang distribution, $P(h) \propto
(|h|/\sigma^2) \exp\{-|h|/\sigma\}$, which is a standard prior over sizes in Bayesian models of generalization
[17, 19]. The parameter $\sigma$ is set to 200 according to [1] in order to fit human cognition, which favors
basic level hypotheses [15]. Finally, the probability of a new instance belonging to a hypothesis is
similar to the likelihood, but without the size term: $P_{\text{new}}(x_{\text{new}} \mid h) = \sum_{j=1}^{K} A_{j \hat{y}_{\text{new}}}\, I(\hat{y}_{\text{new}} \in h)$,
where $\hat{y}_{\text{new}}$ is the classifier prediction.
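A sketch of the perceptual likelihood of Eq. (5) and the matching query probability; the array layout and variable names here are our own, with A indexed as in the text (A[j, i] is the probability that the true leaf is j given classifier output i):

```python
import numpy as np

def example_likelihood(y_hat, h_mask, A):
    """Eq. (5): P_example(x | h) for an image classified as leaf y_hat.
    h_mask: boolean vector of length K marking leaves inside h."""
    return A[h_mask, y_hat].sum() / h_mask.sum()

def query_probability(y_hat_new, h_mask, A):
    """Same sum as Eq. (5) but without the 1/|h| size term."""
    return A[h_mask, y_hat_new].sum()

# Tiny example: K = 3 leaves, a noisy classifier, hypothesis h = {0, 1}.
A = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.1],
              [0.0, 0.1, 0.9]])      # columns sum to 1
h = np.array([True, True, False])
print(example_likelihood(0, h, A))   # (0.8 + 0.2) / 2 = 0.5
print(query_probability(2, h, A))    # 0.0 + 0.1 = 0.1
```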
4.3 Learning the Perceptual Classifiers
To train the image classifiers for the perceptual component in our model, we used the ILSVRC
training images, which consisted of 1.2 million images categorized into the 1,000 leaf node classes,
and followed the pipeline in [11] to obtain feature vectors to represent the images. This pipeline
uses 160K-dimensional features, yielding a total of about 1.5 TB for the training data. We trained the
classifiers as linear multinomial logistic regressors with the minibatch Adagrad [6] algorithm, which
is a quasi-Newton stochastic gradient descent approach. The hyperparameters of the classifiers are
learned with the held-out validation data.
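For reference, one minibatch Adagrad update for a linear multinomial logistic regressor looks like the following. This is a generic sketch of the optimizer of [6], not the authors' training code; the dimensions, step size, and toy data are placeholders:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adagrad_step(W, G, X, y, lr=0.1, eps=1e-8):
    """W: (D, K) weights; G: running sum of squared gradients;
    X: (B, D) minibatch features; y: (B,) integer labels."""
    P = softmax(X @ W)
    P[np.arange(len(y)), y] -= 1.0         # gradient of the log loss
    grad = X.T @ P / len(y)
    G += grad ** 2
    W -= lr * grad / (np.sqrt(G) + eps)    # per-coordinate step sizes
    return W, G

# Toy run: D = 20 features, K = 5 classes, 100 minibatches of size 32.
rng = np.random.default_rng(0)
W, G = np.zeros((20, 5)), np.zeros((20, 5))
for _ in range(100):
    X = rng.normal(size=(32, 20))
    y = (X[:, 0] > 0).astype(int)          # a learnable toy labeling
    W, G = adagrad_step(W, G, X, y)
```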
Overall, we obtained a performance of 41.33% top-1 accuracy and a 61.91% top-5 accuracy on
the validation data, and 41.28% and 61.69% respectively on the testing data, and the training took
about 24 hours with 10 commodity computers. Although this is not the best ImageNet classifier
to date, we believe that the above pipeline is a fair representation of the state-of-the-art computer
vision approaches. Algorithms using similar approaches have reported competitive performance in
image classification on a large number of classes (on the scale of tens of thousands) [10, 9], which
provides reassurance about the possibility of using state-of-the-art classification models in visual
concept learning.
To obtain the confusion matrix A of the classifiers, we note that the validation data alone does not
suffice to provide a dense estimation of the full confusion matrix, because there is a large number of
entries (1 million) but very few validation images (50K). Thus, instead of using the validation data
for estimation of A, we approximated the classifier's leave-one-out (LOO) behavior on the training
data with a simple one-step gradient descent update to "unlearn" each image. Specifically, we started
from the trained classifier parameters, and for each training image x, we computed the gradient of the
loss function with x left out of the training set. We then took one update step in the direction of
the gradient to obtain the updated classifier, and use it to perform prediction on x. This allows us to
obtain a much denser estimation that worked better than existing methods. We refer the reader to the
supplementary material for the technical details about the classifier training and the LOO confusion
matrix estimation.
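Our reading of this one-step "unlearning" procedure is sketched below; the step size and the exact bookkeeping are assumptions (the supplementary material has the authors' details):

```python
import numpy as np

def loo_prediction(W, x, y, lr=0.1):
    """Approximate the leave-one-out prediction for a training image
    (x, y) by one gradient step *against* having seen it.
    W: (D, K) trained weights; x: (D,) features; y: true leaf label."""
    z = x @ W
    p = np.exp(z - z.max())
    p /= p.sum()
    grad = np.outer(x, p)          # d(loss on (x, y)) / dW ...
    grad[:, y] -= x                # ... = outer(x, p - onehot(y))
    W_loo = W + lr * grad          # ascend to undo the descent on x
    return int(np.argmax(x @ W_loo))

def estimate_confusion(W, X, y, K):
    """Dense estimate of A[j, i] ~ P(true leaf j | predicted leaf i)
    from the approximate LOO predictions over the training set."""
    C = np.zeros((K, K))
    for xi, yi in zip(X, y):
        C[yi, loo_prediction(W, xi, yi)] += 1
    return C / np.maximum(C.sum(axis=0, keepdims=True), 1)
```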
5 Experiments
In this section, we describe the experimental protocol adopted to compare our system with human
performance and compare our system against various baseline algorithms. Quantitatively, we use
the precision-recall curve, the average precision (AP) and the F1 score at the point where precision = recall to evaluate the performance and to compare against the human performance, which is
calculated by randomly sampling one human participant per distinctive HIT, and comparing his/her
prediction against the four others.
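These metrics can be computed with a short generic routine; `scores` are per-query model outputs and `labels` the binary human judgements:

```python
import numpy as np

def pr_curve(scores, labels):
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / labels.sum()
    return precision, recall

def average_precision(scores, labels):
    precision, recall = pr_curve(scores, labels)
    # Area under the (recall, precision) steps.
    return float(np.sum(np.diff(np.concatenate([[0.0], recall])) * precision))

def f1_at_break_even(scores, labels):
    precision, recall = pr_curve(scores, labels)
    i = int(np.argmin(np.abs(precision - recall)))   # precision = recall
    return 2 * precision[i] * recall[i] / (precision[i] + recall[i])
```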
To the best of our knowledge, there are no existing vision models that explicitly handle our concept
learning task. Thus, we compare our vision-based Bayesian generalization algorithm (denoted by VG)
described in the previous section against the following baselines, which are reasonable extensions
of existing vision or cognitive science models:
1. Naive vision approach (NV): this uses a nearest neighbor approach by computing the
score of a query as its distance to the closest example image, using GIST features [12].
2. Prototype model (PM): an extension of the image classifiers. We use the L1-normalized
classifier output from the multinomial logistic regressors as a vector for the query image,
and compute the score as its chi-squared distance to the closest example image.
3. Histogram of classifier outputs (HC): similar to the prototype model, but instead of computing the distance between the query and each example, we compute the score as the chi-squared
distance to the histogram of classifier outputs, aggregated over the examples (both the PM
and HC scoring rules are sketched in the code below).
4. Hedging the bets extension (HB): we extend the hedging idea [5] to handle sets of query
images. Specifically, we find the subtree in the hierarchy that maximizes the information
gain while maintaining an overall accuracy above a threshold over the set of example
images. The score of a query image is then computed as the probability that it belongs to
this subtree. The threshold is tuned on a randomly selected subset of the data.
5. Non-perceptual word learning (NP): the classical Bayesian word learning model in [21],
assuming a perfect classifier, i.e., taking the ground-truth leaf labels for the test images.
This is not practical in actual applications, but evaluating NP helps understand how a
perceptual component contributes to modeling human behavior.

Figure 3: The precision-recall curves of our method and the baseline algorithms. The human results
are shown as red crosses, and the non-perceptual Bayesian word learning model (NP) is shown as
magenta dashed lines. The table summarizes the average precision (AP) and F1 scores of the methods.

Method              AP      F1 Score
NV                  36.37   35.64
PM                  61.74   56.07
HC                  60.58   56.82
HB                  57.50   52.72
NP                  76.24   72.70
VG (ours)           72.82   66.97
Human Performance   -       75.47
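A sketch of the PM and HC scoring rules from items 2-3 above, assuming each image is represented by its L1-normalized vector of classifier outputs:

```python
import numpy as np

def chi2(p, q, eps=1e-10):
    """Chi-squared distance between two normalized histograms."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def pm_score(query_probs, example_probs):
    """Prototype model: negative distance to the closest example."""
    return -min(chi2(query_probs, e) for e in example_probs)

def hc_score(query_probs, example_probs):
    """Histogram of classifier outputs: distance to the aggregate."""
    hist = np.mean(example_probs, axis=0)
    hist = hist / hist.sum()
    return -chi2(query_probs, hist)
```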
5.1 Main Results
Figure 3 shows the precision-recall curves for our method and the baseline methods, and summarizes the average precision and F1 scores. Conventional vision approaches that build upon image
classifiers work better than simple image features (such as GIST), which is sensible given that object categories provide relatively more semantics than simple features. However, all the baselines
still perform far below humans, because they miss the key mechanism for inferring the
"width" of the latent concept represented by a set of images (instead of a single image, as conventional approaches assume). In contrast, adopting the size principle and the Bayesian generalization
framework allows us to perform much better, obtaining an increase of about 10% in average precision and F1 scores, closer to the human performance than other visual baselines.
The non-perceptual (NP) model exhibits better overall average precision than our method, which
suggests that image classifiers can still be improved. This is indeed the case, as state-of-the-art
recognition algorithms may still significantly underperform humans. However, note that for a system
to work in a real-world scenario such as aid-giving robots, it is crucial that the agent be able to
take direct perceptual inputs. It is also interesting to note that all visual models yield higher precision values in the low-recall region (top left of Figure 3) than the NP model, which does not use
perceptual input and has a lower starting precision. This suggests that perceptual signals do play
an important role in human generalization behaviors, and should not be left out of the pipeline as
previous Bayesian word learning methods do.
5.2 Analysis of Per-level Responses
In addition to the quantitative precision-recall curves, we perform a qualitative per-level analysis
similar to previous word learning work [1]. To this end, we binarize the predictions at the threshold
that yields the same precision and recall, and then plot the per-level responses, i.e., the proportion
of query images from level $L_i$ that are predicted positive, given examples from level $L_j$.
Figure 4: Per-level generalization predictions from various methods ((a) the NP model, (b) our
method, (c) the PM baseline, (d) the IC oracle), where the horizontal axis shows the four levels at
which examples were provided (L0 to L3). At each level, five bars show the proportion of queries
from levels L0 to L4 that are labeled as instances of the concept by each method. These results are
summarized in scatter plots showing model predictions (horizontal axis) vs. human judgments
(vertical axis), with the red line showing a linear regression fit.
Figure 5: Per-level generalization from human participants.

We show in Figures 4 and 5 the per-level generalization results from humans, the NP model, our
method, and the PM baseline, which best represents state-of-the-art vision baselines. People show
a monotonic decrease in generalization as the query level moves conceptually further from the
examples. In addition, for queries of the same level, the generalization score peaks when examples
from the same level are presented, and drops when lower or higher level examples are presented.
The NP model tends to give more extreme predictions (either very low or very high), possibly due
to the fact that it assumes perfect recognition, while visual inputs are actually difficult to precisely
classify even for a human being. The conventional vision baseline does not utilize the size principle to model human concept learning, and as a result shows very similar behavior across different
levels of examples. Our method exhibits a good correlation with the human results, although it has
a smaller generalization probability for L0 queries, possibly because current visual models are still
not completely accurate in identifying leaf node classes [5].
Last but not least, we examine how well a conventional image classification approach could explain our experimental results. To do so, Figure 4(d) plots the results of an image classification
(IC) oracle that predicts yes for an image within the ground-truth ImageNet node that the current
examples were sampled from and no otherwise. Note that the IC oracle never generalizes beyond
the level from which the examples are drawn, and thus, exhibits very different generalization results
compared to the human participants in our experiment. Thus, visual concept learning poses more
realistic and challenging problems for computer vision studies.
6 Conclusions
We proposed a new task for machine vision, visual concept learning, and presented the first system
capable of approaching human performance on this problem. By linking research on object classification in machine vision and Bayesian generalization in cognitive science, we were able to define
a system that could infer the appropriate scope of generalization for a novel concept directly from a
set of images. This system outperforms baselines that draw on previous approaches in both machine
vision and cognitive science, coming closer to human performance than any of these approaches.
However, there is still significant room to improve performance on this task, and we present our
visual concept learning dataset as the basis for a new challenge problem for machine vision, going
beyond assigning labels to individual objects.
References
[1] J. T. Abbott, J. L. Austerweil, and T. L. Griffiths. Constructing a hypothesis space from the
Web for large-scale Bayesian word learning. In Proceedings of the 34th Annual Conference of
the Cognitive Science Society, 2012.
[2] A. Berg, J. Deng, and L. Fei-Fei. ILSVRC 2010. http://www.image-net.org/challenges/LSVRC/2010/.
[3] S. Carey. The child as word learner. Linguistic Theory and Psychological Reality, 1978.
[4] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[5] J. Deng, J. Krause, A. Berg, and L. Fei-Fei. Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition. In CVPR, 2012.
[6] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and
stochastic optimization. JMLR, 12:2121-2159, 2010.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL
Visual Object Classes (VOC) challenge. IJCV, 88(2):303-338, 2010.
[8] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In
CVPR, 2009.
[9] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[10] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building
high-level features using large scale unsupervised learning. In ICML, 2012.
[11] Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, and T. Huang. Large-scale image
classification: fast feature extraction and svm training. In CVPR, 2011.
[12] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the
spatial envelope. International Journal of Computer Vision, 42(3):145-175, 2001.
[13] D. Parikh and K. Grauman. Relative attributes. In ICCV, 2011.
[14] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse
prototype representations. In CVPR, 2008.
[15] E. Rosch, C. B. Mervis, W. D. Gray, D. M. Johnson, and P. Boyes-Braem. Basic objects in
natural categories. Cognitive Psychology, 8(3):382-439, 1976.
[16] R. Salakhutdinov, A. Torralba, and J.B. Tenenbaum. Learning to share visual appearance for
multiclass object detection. In CVPR, 2011.
[17] R. N. Shepard. Towards a universal law of generalization for psychological science. Science,
237:1317-1323, 1987.
[18] J. B. Tenenbaum. Bayesian modeling of human concept learning. In NIPS, 1999.
[19] J. B. Tenenbaum. Rules and similarity in concept learning. In NIPS, 2000.
[20] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24(4):629-640, 2001.
[21] F. Xu and J.B. Tenenbaum. Word learning as Bayesian inference. Psychological Review,
114(2):245-272, 2007.
4,648 | 5,206 | Learning invariant representations and applications
to face verification
Qianli Liao, Joel Z Leibo, and Tomaso Poggio
Center for Brains, Minds and Machines
McGovern Institute for Brain Research
Massachusetts Institute of Technology
Cambridge MA 02139
lql@mit.edu, jzleibo@mit.edu, tp@ai.mit.edu
Abstract
One approach to computer object recognition and modeling the brain's ventral
stream involves unsupervised learning of representations that are invariant to common transformations. However, applications of these ideas have usually been limited to 2D affine transformations, e.g., translation and scaling, since they are easiest to solve via convolution. In accord with a recent theory of transformation-invariance [1], we propose a model that, while capturing other common convolutional networks as special cases, can also be used with arbitrary identity-preserving transformations. The model's wiring can be learned from videos of
transforming objects, or any other grouping of images into sets by their depicted
object. Through a series of successively more complex empirical tests, we study
the invariance/discriminability properties of this model with respect to different
transformations. First, we empirically confirm theoretical predictions (from [1])
for the case of 2D affine transformations. Next, we apply the model to non-affine
transformations; as expected, it performs well on face verification tasks requiring
invariance to the relatively smooth transformations of 3D rotation-in-depth and
changes in illumination direction. Surprisingly, it can also tolerate clutter ?transformations? which map an image of a face on one background to an image of the
same face on a different background. Motivated by these empirical findings, we
tested the same model on face verification benchmark tasks from the computer
vision literature: Labeled Faces in the Wild, PubFig [2, 3, 4] and a new dataset
we gathered, achieving strong performance in these highly unconstrained cases
as well.
1 Introduction
In the real world, two images of the same object may only be related by a very complicated and
highly nonlinear transformation. Far beyond the well-studied 2D affine transformations, objects
may rotate in depth, receive illumination from new directions, or become embedded on different
backgrounds; they might even break into pieces or deform?melting like Salvador Dali?s pocket
watch [5]?and still maintain their identity. Two images of the same face could be related by the
transformation from frowning to smiling or from youth to old age. This notion of an identitypreserving transformation is considerably more expansive than those normally considered in computer vision. We argue that there is much to be gained from pushing the theory (and practice) of
transformation-invariant recognition to accommodate this unconstrained notion of a transformation.
Throughout this paper we use the formalism for describing transformation-invariant hierarchical
architectures developed by Poggio et al. (2012). In [1], the authors propose a theory which, they
argue, is general enough to explain the strong performance of convolutional architectures across a
wide range of tasks (e.g. [6, 7, 8]) and possibly also the ventral stream. The theory is based on the
premise that invariance to identity-preserving transformations is the crux of object recognition.
The present paper has two primary points. First, we provide empirical support for Poggio et al.'s
theory of invariance (which we review in section 2) and show how various pooling methods for
convolutional networks can all be understood as building invariance since they are all equivalent to
special cases of the model we study here. We also measure the model's invariance/discriminability
with face-matching tasks. Our use of computer-generated image datasets lets us completely control
the transformations appearing in each test, thereby allowing us to measure properties of the representation for each transformation independently. We find that the representation performs well even
when it is applied to transformations for which there are no theoretical guarantees, e.g., the clutter
"transformation" which maps an image of a face on one background to the same face on a different
background.
Motivated by the empirical finding of strong performance with far less constrained transformations
than those captured by the theory, in the paper's second half we apply the same approach to face-verification benchmark tasks from the computer vision literature: Labeled Faces in the Wild, PubFig [2, 3, 4], and a new dataset we gathered. All of these datasets consist of photographs taken
under natural conditions (gathered from the internet). We find that, despite the use of a very simple
classifier (thresholding the angle between face representations), our approach still achieves results
that compare favorably with the current state of the art and even exceed it in some cases.
2 Template-based invariant encodings for objects unseen during training
We conjecture that achieving invariance to identity-preserving transformations without losing discriminability is the crux of object recognition. In the following we will consider a very expansive
notion of "transformation", but first, in this section we develop the theory for 2D affine transformations.^1
Our aim is to compute a unique signature for each image x that is invariant with respect to a group
of transformations G. We consider the orbit $\{gx \mid g \in G\}$ of x under the action of the group. In this
section, G is the 2D affine group so its elements correspond to translations, scalings, and in-plane
rotations of the image (notice that we use g to denote both elements of G and their representations,
acting on vectors). We regard two images as equivalent if they are part of the same orbit, that is, if
they are transformed versions of one another ($x' = gx$ for some $g \in G$).
The orbit of an image is itself invariant with respect to the group. For example, the set of images
obtained by rotating x is exactly the same as the set of images obtained by rotating gx. The orbit
is also unique for each object: the set of images obtained by rotating x only intersects with the
set of images obtained by rotating $x'$ when $x' = gx$. Thus, an intuitive method of obtaining an
invariant signature for an image, unique to each object, is just to check which orbit it belongs to. We
can assume access to a stored set of orbits of template images $t^k$; these template orbits could have
been acquired by unsupervised learning, possibly by observing objects transform and associating
temporally adjacent frames (e.g. [9, 10]).
The key fact enabling this approach to object recognition is this: It is not necessary to have all
the template orbits beforehand. Even with a small, sampled set of template orbits, not including
the actual orbit of x, we can still compute an invariant signature. Observe that when g is unitary,
$\langle gx, t^k \rangle = \langle x, g^{-1} t^k \rangle$. That is, the inner product of the transformed image with a template is the
same as the inner product of the image with a transformed template. This is true regardless of
whether x is in the orbit of $t^k$ or not. In fact, the test image need not resemble any of the templates
(see [11, 12, 13, 1]).
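This identity is easy to verify numerically; the sketch below uses a random orthogonal matrix as a stand-in for a unitary image transformation g:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
g, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal g
x = rng.normal(size=d)                         # "image"
t = rng.normal(size=d)                         # "template"

lhs = np.dot(g @ x, t)                         # <gx, t>
rhs = np.dot(x, np.linalg.inv(g) @ t)          # <x, g^{-1} t>
assert np.allclose(lhs, rhs)                   # holds for any unitary g
```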
Consider $g_t t^k$ to be a realization of a random variable. For a set $\{g_t t^k \mid t = 1, \ldots, T\}$ of images
sampled from the orbit of the template $t^k$, the distribution of $\langle x, g_t t^k \rangle$ is invariant and unique to each
object. See [1] for a proof of this fact in the case that G is the group of 2D affine transformations.
^1 See [1] for a more complete exposition of the theory.
Thus, the empirical distribution of the inner products $\langle x, g_t t^k \rangle$ is an estimate of an invariant. Following [1], we can use the empirical distribution function (CDF) as the signature:

$$\mu_n^k(x) = \frac{1}{T} \sum_{t=1}^{T} \sigma\big(\langle x, g_t t^k \rangle + n\Delta\big) \qquad (1)$$

where $\sigma$ is a smooth version of the step function ($\sigma(x) = 0$ for $x \le 0$, $\sigma(x) = 1$ for $x > 0$), $\Delta$ is
the resolution (bin-width) parameter and $n = 1, \ldots, N$. Figure 1 shows the results of an experiment
demonstrating that the ?kn (x) are invariant to translation and in-plane rotation. Since each face has its
own characteristic empirical distribution function, it also shows that these signatures could be used
to discriminate between them. Table 1 reports the average Kolmogorov-Smirnov (KS) statistics
comparing signatures for images of the same face, and for different faces: Mean(KSsame ) ? 0 =?
invariance and Mean(KSdifferent ) > 0 =? discriminability.
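A minimal sketch of Eq. (1) in Python (our illustration; the sigmoid steepness used to smooth the step η, and the defaults for N and Δ, are assumptions):

```python
import numpy as np

def signature(x, orbit, N=20, Delta=0.05, beta=100.0):
    """mu_k^n(x) for n = 1..N; orbit is a (T, d) array of the g_t tau_k."""
    x = x / np.linalg.norm(x)
    orbit = orbit / np.linalg.norm(orbit, axis=1, keepdims=True)
    dots = orbit @ x                        # <x, g_t tau_k>, shape (T,)
    n = np.arange(1, N + 1)
    # eta as a steep sigmoid approximating the step function
    eta = 1.0 / (1.0 + np.exp(-beta * (dots[None, :] + n[:, None] * Delta)))
    return eta.mean(axis=1)                 # average over the T orbit samples
```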
Figure 1: Example signatures (empirical distribution functions, CDFs) of images depicting two different faces under affine transformations. (A) shows in-plane rotations. Signatures for the upper and lower face are shown in red and purple respectively. (B) shows the analogous experiment with translated faces. Note: in order to highlight the difference between the two distributions, the axes do not start at 0.
Since the distribution of the ⟨x, g_t τ_k⟩ is invariant, we have many choices of possible signatures. Most notably, we can choose any of its statistical moments, and these may also be invariant, or nearly so; in order to be discriminative and "invariant for a task" it only need be the case that for each k, the distributions of the ⟨x, g_t τ_k⟩ have different moments. It turns out that many different convolutional networks can be understood in this framework². The differences between them correspond to different choices of 1. the set of template orbits (which group), 2. the inner product (more generally, we consider the template response function ψ_{gτ_k}(·) := f(⟨·, g_t τ_k⟩), for a possibly non-linear function f; see [1]), and 3. the moment used for the signature. For example, a simple neural-networks-style convolutional net with one convolutional layer and one subsampling layer (no bias term) is obtained by choosing G = translations and μ_k(x) = mean(·). The k-th filter is the template τ_k. The network's nonlinearity could be captured by choosing ψ_{gτ_k}(x) = tanh(x · gτ_k); note the similarity to Eq. (1). Similar descriptions could be given for modern convolutional nets, e.g. [6, 7, 11]. It is also possible to capture HMAX [14, 15] and related models (e.g. [16]) with this framework. The "simple cells" compute normalized dot products or Gaussian radial basis functions of their inputs with stored templates, and "complex cells" compute, for example, μ_k(x) = max(·). The templates are normally obtained by translation or scaling of a set of fixed patterns, often Gabor functions at the first layer and patches of natural images in subsequent layers.
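The different pooling statistics mentioned here, and compared later in Table 2, could be computed along these lines (a sketch; the function and argument names are ours):

```python
import numpy as np

def pool(dots, stat="mean"):
    """Pool the template responses <x, g_t tau_k> with a chosen statistic."""
    if stat == "mean":
        return np.mean(dots)
    if stat == "max":
        return np.max(dots)
    if stat.startswith("L"):                # L1, L2, L5, ... moments
        p = float(stat[1:])
        return np.mean(np.abs(dots) ** p) ** (1.0 / p)
    raise ValueError(f"unknown statistic: {stat}")
```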
3 Invariance to non-affine transformations
The theory of [1] only guarantees that this approach will achieve invariance (and discriminability)
in the case of affine transformations. However, many researchers have shown good performance of
related architectures on object recognition tasks that seem to require invariance to non-affine transformations (e.g. [17, 18, 19]). One possibility is that achieving invariance to affine transformations
² The computation can be made hierarchical by using the signature as the input to a subsequent layer.
is itself a larger-than-expected part of the full object recognition problem. While not dismissing that possibility, we emphasize here that approximate invariance to many non-affine transformations can be achieved as long as the system's operation is restricted to certain nice object classes [20, 21, 22]. A nice class with respect to a transformation G (not necessarily a group) is a set of objects that all transform similarly to one another under the action of G. For example, the 2D transformation mapping a profile view of one person's face to its frontal view is similar to the analogous transformation of another person's face in this sense. The two transformations will not be exactly the same, since any two faces differ in their exact 3D structure, but all faces do approximately share a gross 3D structure, so the transformations of two different faces will not be as different from one another as would, for example, the image transformations evoked by 3D rotation of a chair versus the analogous rotation of a clock. Faces are the prototypical example of a class of objects that is nice with respect to many transformations³.
Figure 2: Example signatures (empirical distribution functions) of images depicting two different
faces under non-affine transformations: (A) Rotation in depth. (B) Changing the illumination direction (lighting from above or below).
Figure 2 shows that, unlike in the affine case, the signature of a test face with respect to template faces at different orientations (3D rotation in depth) or illumination conditions is not perfectly invariant (KS_same > 0), though it still tolerates substantial transformations. These signatures are also useful for discriminating faces, since the empirical distribution functions are considerably more varied between faces than they are across images of the same face (Mean(KS_different) > Mean(KS_same), Table 1). Table 2 reports the ratios of within-class discriminability (negatively related to invariance) to between-class discriminability for moment-signatures. Lower values indicate both better transformation-tolerance and stronger discriminability.
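For reference, the KS comparisons of Table 1 can be approximated with scipy's two-sample test; this is our sketch of the procedure, not the authors' code:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_statistic(x1, x2, orbit):
    """KS distance between response distributions of two images for one orbit."""
    orbit = orbit / np.linalg.norm(orbit, axis=1, keepdims=True)
    d1 = orbit @ (x1 / np.linalg.norm(x1))
    d2 = orbit @ (x2 / np.linalg.norm(x2))
    return ks_2samp(d1, d2).statistic       # near 0 for the same face
```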
Transformation          Mean(KS_same)   Mean(KS_different)
Translation             0.0000          1.9420
In-plane rotation       0.2160          19.1897
Out-of-plane rotation   2.8698          5.2950
Illumination            1.9636          2.8809

Table 1: Average Kolmogorov-Smirnov statistics comparing the distributions of normalized inner products across transformations and across objects (faces).
Transformation          MEAN     L1       L2       L5       MAX
Translation             0.0000   0.0000   0.0000   0.0000   0.0000
In-plane rotation       0.0031   0.0031   0.0033   0.0042   0.0030
Out-of-plane rotation   0.3045   0.3045   0.3016   0.2923   0.1943
Illumination            0.7197   0.7197   0.6994   0.6405   0.2726

Table 2: Ratios of "within-class discriminability" to "between-class discriminability" for one template, ‖μ(x_i) − μ(x_j)‖²; within: x_i, x_j depict the same face; between: x_i, x_j depict different faces. Columns are different statistical moments used for pooling (computing μ(x)).
³ It is interesting to consider the possibility that faces co-evolved along with natural visual systems in order to be highly recognizable.
4 Towards the fully unconstrained task
The finding that this templates-and-signatures approach works well even in the difficult cases of 3D rotation and illumination motivates us to see how far we can push it. We would like to accommodate a totally unconstrained notion of invariance to identity-preserving transformations. In particular, we investigate the possibility of computing signatures that are invariant to all the task-irrelevant variability in the datasets used for serious computer vision benchmarks. In the present paper we focus on the problem of face verification (also called pair-matching). Given two images of new faces, never encountered during training, the task is to decide if they depict the same person or not.
We used the following procedure to test the templates-and-signatures approach on face verification problems using a variety of different datasets (see Fig. 4A). First, all images were preprocessed with low-level features (e.g., histograms of oriented gradients (HOG) [23]), followed by PCA using all the images in the training set and z-score normalization⁴. At test time, the k-th element of the signature of an image x is obtained by first computing all the ⟨x, g_t τ_k⟩, where g_t τ_k is the t-th image of the k-th template person (both encoded by their projection onto the training set's principal components), then pooling the results. We used ⟨·,·⟩ = normalized dot product and μ_k(x) = mean(·).
At test time, the classifier receives images of two faces and must classify them as either depicting the same person or not. We used a simple classifier that merely computes the angle between the signatures of the two faces (via a normalized dot product) and responds "same" if it is above a fixed threshold or "different" if below threshold. We chose such a weak classifier since the goal of these simulations was to assess the value of the signature as a feature representation. We expect that the overall performance levels could be improved for most of these tasks by using a more sophisticated classifier⁵. We also note that, after extracting low-level features, the entire system only employs two operations: normalized dot products and pooling.
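Putting the pieces together, a minimal sketch of the test-time pipeline might look as follows (names, shapes, and the 0.8 threshold are assumptions; HOG extraction and PCA fitting are omitted):

```python
import numpy as np

def face_signature(x, template_orbits):
    """template_orbits: list of (T_k, d) arrays, one per template person,
    already projected onto the training set's principal components."""
    x = x / np.linalg.norm(x)
    sig = []
    for orbit in template_orbits:
        orbit = orbit / np.linalg.norm(orbit, axis=1, keepdims=True)
        sig.append(np.mean(orbit @ x))      # normalized dot products, mean pooling
    return np.array(sig)

def same_person(x1, x2, template_orbits, threshold=0.8):
    s1 = face_signature(x1, template_orbits)
    s2 = face_signature(x2, template_orbits)
    cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return cos > threshold                  # threshold the angle between signatures
```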
The images in the Labeled Faces in the Wild (LFW) dataset vary along so many different dimensions that it is difficult to give an exhaustive list. It contains natural variability in, at least, pose, lighting, facial expression, and background [2] (example images in Fig. 3). We argue here that LFW and the controlled synthetic data problems we studied up to now are different in two primary ways. First, in unconstrained tasks like LFW, you cannot rely on having seen all the transformations of any template. Recall, the theory of [1] relies on previous experience with all the transformations of template images in order to recognize test images invariantly to the same transformations. Since LFW is totally unconstrained, any subset of it used for training will never contain all the transformations that will be encountered at test time. Continuing to abuse the notation from section 2, we can say that the LFW database only samples a small subset of G, which is now the set of all transformations that occur in LFW. That is, for any two images in LFW, x and x′, only a small (relative to |G|) subset of their orbits are in LFW. Moreover, {g | gx ∈ LFW} and {g′ | g′x′ ∈ LFW} almost surely do not overlap with one another⁶.
The second important way in which LFW differs from our synthetic image sets is the presence of clutter. Each LFW face appears on many different backgrounds. It is common to consider clutter to be a separate problem from that of achieving transformation-invariance; indeed, [1] conjectures that the brain employs separate mechanisms, quite different from templates and pooling, e.g.
⁴ PCA reduces the final algorithm's memory requirements. Additionally, it is much more plausible that the brain could store principal components than directly memorizing frames of past visual experience. A network of neurons with Hebbian synapses (modeled by Oja's rule), changing its weights online as images are presented, converges to the network that projects new inputs onto the eigenvectors of its past input's covariance [24]. See also [1] for discussion of this point in the context of the templates-and-signatures approach.
⁵ Our classifier is unsupervised in the sense that it doesn't have any free parameters to fit on training data. However, our complete system is built using labeled data for the templates, so from that point of view it may be considered supervised. On the other hand, we also believe that it could be wired up by an unsupervised process, probably involving the association of temporally-adjacent frames, so there is also a sense in which the entire system could be considered, at least in principle, to be unsupervised. We might say that, insofar as our system models the ventral stream, we intend it as a (strong) claim about what the brain could learn via unsupervised mechanisms.
⁶ The brain also has to cope with sampling, and its effects can be strikingly counterintuitive. For example, Afraz et al. showed that the perceived gender of a face is strongly biased toward male or female at different locations in the visual field, and that the spatial pattern of these biases was distinctive and stable over time for each individual [25]. These perceptual heterogeneity effects could be due to the templates supporting the task differing in the precise positions (transformations) at which they were encountered during development.
attention, toward achieving clutter-tolerance. We set aside those hypotheses for now, since the goal of the present work is to explore the limits of the totally unconstrained notion of identity-preserving transformation. Thus, for the purposes of this paper, we consider background-variation as just another transformation. That is, "clutter-transformations" map images of an object on one background to images of the same object on different backgrounds.
We explicitly tested the effects of non-uniform transformation-sampling and background-variation using two new fully-controlled synthetic image sets for face verification⁷. Figure 3B shows the results of the test of robustness to non-uniform transformation-sampling for 3D rotation-in-depth-invariant face verification. It shows that the method tolerates substantial differences between the transformations used to build the feature representation and the transformations on which the system is tested. We tested two different models of natural non-uniform transformation sampling: in one case (blue curve) we sampled the orbits at a fixed rate when preparing templates; in the other case, we removed connected subsets of each orbit. In both cases the test used the entire orbit and never contained any of the same faces as the training phase. It is arguable which case is a better model of the real situation, but we note that even in the worse case, performance is surprisingly high, even with large percentages of the orbit discarded. Figure 3C shows that signatures produced by pooling over clutter conditions give good performance on a face-verification task with faces embedded on backgrounds. Using templates with the appropriate background size for each test, we show that our models continue to perform well as we increase the size of the background, while the performance of standard HOG features declines.
[Figure 3 plots. (A) LFW IMAGES: sample photographs. (B) NON-UNIFORM SAMPLING: accuracy vs. percentage discarded, with curves labeled "Non-consecutive" and "Consecutive". (C) BACKGROUND VARIATION TASK: AUC vs. background size, with curves labeled "Our model" and "HOG".]
Figure 3: (A) Example images from Labeled Faces in the Wild. (B) Non-uniform sampling simulation. The abscissa is the percentage of frames discarded from each template's transformation sequence; the ordinate is the accuracy on the face verification task. (C) Pooling over variation in the background. The abscissa is the background size (10 scales), and the ordinate is the area under the ROC curve (AUC) for the face verification task.
5 Computer vision benchmarks: LFW, PubFig, and SUFR-W
An implication of the argument in sections 2 and 4 is that there needs to be a reasonable number of images sampled from each template's orbit. Despite the fact that we are now considering a totally unconstrained set of transformations, i.e. any number of samples is going to be small relative to |G|, we found that approximately 15 images g_t τ_k per face is enough for all the face verification tasks we considered. 15 is a surprisingly manageable number; however, it is still more images than LFW has for most individuals. We also used the PubFig83 dataset, which has the same problem as LFW, and a subset of the original PubFig dataset. In order to ensure we would have enough images from each template orbit, we gathered a new dataset, SUFR-W⁸, with ~12,500 images depicting 450 individuals. The new dataset contains similar variability to LFW and PubFig but tends to have more images per individual than LFW (there are at least 15 images of each individual). The new dataset does not contain any of the same individuals that appear in either LFW or PubFig/PubFig83.
⁷ We obtained 3D models of faces from FaceGen (Singular Inversions Inc.) and rendered them with Blender (www.blender.org).
⁸ See paper [26] for details. Data available at http://cbmm.mit.edu/
[Figure 4. (A) MODEL: template preparation (inputs → HOG → PCA → principal components (PCs)) and testing ((a) inputs → HOG, project onto PCs → (b) features → normalized dot products with the templates of each person → histogram and/or statistical moments (e.g. mean pooling) → (c) signatures → normalized dot product > threshold? → (d) verification). (B) PERFORMANCE: ROC curves (true positive rate vs. false positive rate) with legend: Our Model, AUC 0.817; HOG, AUC 0.707; Our Model w/ scrambled identities, AUC 0.681; Our Model w/ random noise templates, AUC 0.649.]
Figure 4: (A) Illustration of the model's processing pipeline. (B) ROC curves for the new dataset using templates from the training set. The second model (red) is a control model that uses HOG features directly. The third (control) model pools over random images in the dataset (as opposed to images depicting the same person). The fourth model pools over random noise images.
[Figure 5. (A) PIPELINE: 1. Detection → 2. Alignment → 3. Recognition → Signature. (B) PERFORMANCE: accuracy (%) bars for LBP, LPQ+LBP+LTP, LBP Signatures (Sig.), and LPQ+LBP+LTP Sig. on PubFig83, our data, PubFig, and LFW. (C) ROC CURVES (true positive rate vs. false positive rate): LFW, AUC 0.937, Acc. 87.1%; PubFig, AUC 0.897, Acc. 81.7%; Our data, AUC 0.856, Acc. 78.0%; PubFig83, AUC 0.847, Acc. 76.4%.]
Figure 5: (A) The complete pipeline used for all experiments. (B) The performance of four different models on PubFig83, our new dataset, PubFig and LFW. For these experiments, Local Binary Patterns (LBP), Local Phase Quantization (LPQ), and Local Ternary Patterns (LTP) were used [27, 28, 29]; they all perform very similarly to HOG, just slightly better (~1%). These experiments used non-detected and non-aligned face images as inputs; thus the errors include detection and alignment errors (about 1.5% of faces are not detected and 6-7% of the detected faces are significantly misaligned). In all cases, templates were obtained from our new dataset (excluding 30 images for a testing set). This sacrifices some performance (~1%) on each dataset but prevents overfitting: we ran the exact same model on all 4 datasets. (C) The ROC curves of the best model on each dataset.
Figure 4B shows ROC curves for face verification with the new dataset. The blue curve is our model. The purple and green curves are control experiments that pool over images depicting different individuals, and over random noise templates, respectively. Both control models performed worse than raw HOG features (red curve).
For all our PubFig, PubFig83 and LFW experiments (Fig. 5), we ignored the provided training data. Instead, we obtained templates from our new dataset. For consistency, we applied the same detection/alignment to all images. The alignment method we used ([30]) produced images that were somewhat more variable than the method used by the authors of the LFW dataset (LFW-a): the performance of our simple classifier using raw HOG features on LFW is 73.3%, while on LFW-a it is 75.6%.
Even with the very simple classifier, our system's performance still compares favorably with the current state of the art. In the case of LFW, our model's performance exceeds the current state-of-the-art for an unsupervised system (86.2% using LQP, Local Quantized Patterns [31]; note: these features are not publicly available, otherwise we would have tried using them for preprocessing), though the best supervised systems do better⁹. The strongest result in the literature for face verification with PubFig83¹⁰ is 70.2% [4], which is 6.2% lower than our best model.
6 Discussion
The templates-and-signatures approach to recognition permits many seemingly-different convolutional networks (e.g. ConvNets and HMAX) to be understood in a common framework. We have argued here that the recent strong performance of convolutional networks across a variety of tasks (e.g., [6, 7, 8]) is explained because all these problems share a common computational crux: the need to achieve representations that are invariant to identity-preserving transformations.
We argued that when studying invariance, the appropriate mathematical objects to consider are the orbits of images under the action of a transformation and their associated probability distributions. The probability distributions (and hence the orbits) can be characterized by one-dimensional projections, thus justifying the choice of the empirical distribution function of inner products with template images as a representation for recognition. In this paper, we systematically investigated the properties of this representation for two affine and two non-affine transformations (Tables 1 and 2). The same probability distribution could also be characterized by its statistical moments. Interestingly, we found, when we considered more difficult tasks in the second half of the paper, that representations based on statistical moments tended to outperform the empirical distribution function. There is a sense in which this result is surprising, since the empirical distribution function contains more invariant "information" than the moments; on the other hand, it could also be expected that the moments ought to be less noisy estimates of the underlying distribution. This is an interesting question for further theoretical and experimental work.
Unlike most convolutional networks, our model has essentially no free parameters. In fact, the pipeline we used for most experiments actually has no operations at all besides normalized dot products and pooling (also PCA when preparing templates). These operations are easily implemented by neurons [32]. We could interpret the former as the operation of "simple cells" and the latter as "complex cells", thus obtaining a similar view of the ventral stream to the one given by [33, 16, 14] (and many others).
Despite the classifier's simplicity, our model's strong performance on face verification benchmark tasks is quite encouraging (Fig. 5). Future work could extend this approach to other objects, and other tasks.
Acknowledgments This material is based upon work supported by the Center for Brains, Minds
and Machines (CBMM), funded by NSF STC award CCF-1231216.
⁹ Note: Our method of testing does not strictly conform to the protocol recommended by the creators of LFW [2]: we re-aligned (worse) the faces. We also use the identities of the individuals during training.
¹⁰ The original PubFig dataset was only provided as a list of URLs from which the images could be downloaded. Now only half the images remain available. On the original dataset, the strongest performance reported is 78.7% [3]. The authors of that study also made their features available, so we estimated the performance of their features on the available subset of images (using SVM). We found that an SVM classifier, using their features, and our cross-validation splits gets 78.4% correct, 3.3% lower than our best model.

References
[1] T. Poggio, J. Mutch, F. Anselmi, J. Z. Leibo, L. Rosasco, and A. Tacchetti, "The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work)," MIT-CSAIL-TR-2012-035, 2012.
[2] G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," in Workshop on faces in real-life images: Detection, alignment and recognition (ECCV), (Marseille, Fr), 2008.
[3] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, "Attribute and Simile Classifiers for Face Verification," in IEEE International Conference on Computer Vision (ICCV), (Kyoto, JP), pp. 365-372, Oct. 2009.
[4] N. Pinto, Z. Stone, T. Zickler, and D. D. Cox, "Scaling-up Biologically-Inspired Computer Vision: A Case-Study on Facebook," in IEEE Computer Vision and Pattern Recognition, Workshop on Biologically Consistent Vision, 2011.
[5] S. Dali, "The persistence of memory (1931)." Museum of Modern Art, New York, NY.
[6] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, vol. 25, (Lake Tahoe, CA), 2012.
[7] O. Abdel-Hamid, A. Mohamed, H. Jiang, and G. Penn, "Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4277-4280, 2012.
[8] C. F. Cadieu, H. Hong, D. Yamins, N. Pinto, N. J. Majaj, and J. J. DiCarlo, "The neural representation benchmark and its evaluation on brain and machine," arXiv preprint arXiv:1301.3530, 2013.
[9] P. Földiák, "Learning invariance from transformation sequences," Neural Computation, vol. 3, no. 2, pp. 194-200, 1991.
[10] L. Wiskott and T. Sejnowski, "Slow feature analysis: Unsupervised learning of invariances," Neural Computation, vol. 14, no. 4, pp. 715-770, 2002.
[11] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, "What is the best multi-stage architecture for object recognition?," IEEE International Conference on Computer Vision, pp. 2146-2153, 2009.
[12] J. Z. Leibo, J. Mutch, L. Rosasco, S. Ullman, and T. Poggio, "Learning Generic Invariances in Object Recognition: Translation and Scale," MIT-CSAIL-TR-2010-061, CBCL-294, 2010.
[13] A. Saxe, P. W. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng, "On random weights and unsupervised feature learning," Proceedings of the International Conference on Machine Learning (ICML), 2011.
[14] M. Riesenhuber and T. Poggio, "Hierarchical models of object recognition in cortex," Nature Neuroscience, vol. 2, pp. 1019-1025, Nov. 1999.
[15] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio, "Robust Object Recognition with Cortex-Like Mechanisms," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 3, pp. 411-426, 2007.
[16] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, pp. 193-202, Apr. 1980.
[17] Y. LeCun, F. J. Huang, and L. Bottou, "Learning methods for generic object recognition with invariance to pose and lighting," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 90-97, IEEE, 2004.
[18] E. Bart and S. Ullman, "Class-based feature matching across unrestricted transformations," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 30, no. 9, pp. 1618-1631, 2008.
[19] N. Pinto, Y. Barhomi, D. Cox, and J. J. DiCarlo, "Comparing state-of-the-art visual features on invariant object recognition tasks," in Applications of Computer Vision (WACV), 2011 IEEE Workshop on, 2011.
[20] T. Vetter, A. Hurlbert, and T. Poggio, "View-based models of 3D object recognition: invariance to imaging transformations," Cerebral Cortex, vol. 5, no. 3, p. 261, 1995.
[21] J. Z. Leibo, J. Mutch, and T. Poggio, "Why The Brain Separates Face Recognition From Object Recognition," in Advances in Neural Information Processing Systems (NIPS), (Granada, Spain), 2011.
[22] H. Kim, J. Wohlwend, J. Z. Leibo, and T. Poggio, "Body-form and body-pose recognition with a hierarchical model of the ventral stream," MIT-CSAIL-TR-2013-013, CBCL-312, 2013.
[23] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, no. 886-893, 2005.
[24] E. Oja, "Simplified neuron model as a principal component analyzer," Journal of Mathematical Biology, vol. 15, no. 3, pp. 267-273, 1982.
[25] A. Afraz, M. V. Pashkam, and P. Cavanagh, "Spatial heterogeneity in the perception of face and form attributes," Current Biology, vol. 20, no. 23, pp. 2112-2116, 2010.
[26] J. Z. Leibo, Q. Liao, and T. Poggio, "Subtasks of Unconstrained Face Recognition," in International Joint Conference on Computer Vision, Imaging and Computer Graphics, VISIGRAPP, (Lisbon), 2014.
[27] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 24, no. 7, pp. 971-987, 2002.
[28] X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," in Analysis and Modeling of Faces and Gestures, pp. 168-182, Springer, 2007.
[29] V. Ojansivu and J. Heikkilä, "Blur insensitive texture classification using local phase quantization," in Image and Signal Processing, pp. 236-243, Springer, 2008.
[30] X. Zhu and D. Ramanan, "Face detection, pose estimation, and landmark localization in the wild," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[31] S. u. Hussain, T. Napoleon, and F. Jurie, "Face recognition using local quantized patterns," in Proc. British Machine Vision Conference (BMVC), vol. 1, (Guildford, UK), pp. 52-61, 2012.
[32] M. Kouh and T. Poggio, "A canonical neural circuit for cortical nonlinear operations," Neural Computation, vol. 20, no. 6, pp. 1427-1451, 2008.
[33] D. Hubel and T. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex," The Journal of Physiology, vol. 160, no. 1, p. 106, 1962.
Deep Neural Networks for Object Detection
Christian Szegedy
Alexander Toshev Dumitru Erhan
Google, Inc.
{szegedy, toshev, dumitru}@google.com
Abstract
Deep Neural Networks (DNNs) have recently shown outstanding performance on
image classification tasks [14]. In this paper we go one step further and address
the problem of object detection using DNNs, that is not only classifying but also
precisely localizing objects of various classes. We present a simple and yet powerful formulation of object detection as a regression problem to object bounding
box masks. We define a multi-scale inference procedure which is able to produce high-resolution object detections at a low cost by a few network applications.
State-of-the-art performance of the approach is shown on Pascal VOC.
1 Introduction
As we move towards more complete image understanding, having more precise and detailed object recognition becomes crucial. In this context, one cares not only about classifying images, but also about precisely estimating the class and location of objects contained within the images, a problem known as object detection.
The main advances in object detection were achieved thanks to improvements in object representations and machine learning models. A prominent example of a state-of-the-art detection system is
the Deformable Part-based Model (DPM) [9]. It builds on carefully designed representations and
kinematically inspired part decompositions of objects, expressed as a graphical model. Using discriminative learning of graphical models allows for building high-precision part-based models for
a variety of object classes.
Manually engineered representations in conjunction with shallow discriminatively trained models
have been among the best performing paradigms for the related problem of object classification
as well [17]. In the last years, however, Deep Neural Networks (DNNs) [12] have emerged as a
powerful machine learning model.
DNNs exhibit major differences from traditional approaches for classification. First, they are deep
architectures which have the capacity to learn more complex models than shallow ones [2]. This
expressivity and robust training algorithms allow for learning powerful object representations without the need to hand-design features. This has been empirically demonstrated on the challenging
ImageNet classification task [5] across thousands of classes [14, 15].
In this paper, we exploit the power of DNNs for the problem of object detection, where we not only
classify but also try to precisely localize objects. The problem we address here is challenging, since we want to detect a potentially large number of object instances with varying sizes in the same
image using a limited amount of computing resources.
We present a formulation which is capable of predicting the bounding boxes of multiple objects in
a given image. More precisely, we formulate a DNN-based regression which outputs a binary mask
of the object bounding box (and portions of the box as well), as shown in Fig. 1. Additionally,
we employ a simple bounding box inference to extract detections from the masks. To increase
localization precision, we apply the DNN mask generation in a multi-scale fashion on the full image
as well as on a small number of large image crops, followed by a refinement step (see Fig. 2).
1
In this way, through only a few dozen DNN regressions, we can achieve state-of-the-art bounding box localization.
In this paper, we demonstrate that DNN-based regression is capable of learning features which
are not only good for classification, but also capture strong geometric information. We use the
general architecture introduced for classification by [14] and replace the last layer with a regression
layer. The somewhat surprising but powerful insight is that networks which to some extent encode translation invariance can capture object locations as well.
Second, we introduce a multi-scale box inference followed by a refinement step to produce precise detections. In this way, we are able to apply a DNN which predicts a low-resolution mask, limited by the output layer size, with pixel-wise precision at a low cost: the network is applied only a few dozen times per input image.
In addition, the presented method is quite simple. There is no need to hand-design a model which captures parts and their relations explicitly. This simplicity has the advantage of easy applicability to a wide range of classes, and it also shows better detection performance across a wider range of objects, rigid ones as well as deformable ones. This is presented together with state-of-the-art detection results on the Pascal VOC challenge [7] in Sec. 7.
2 Related Work
One of the most heavily studied paradigms for object detection is the deformable part-based model, with [9] being the most prominent example. This method combines a set of discriminatively trained parts in a star model called pictorial structure. It can be considered as a 2-layer model: parts being the first layer and the star model being the second layer. Contrary to DNNs, whose layers are generic, the work by [9] exploits domain knowledge: the parts are based on manually designed Histogram of Gradients (HOG) descriptors [4] and the structure of the parts is kinematically motivated.
Deep architectures for object detection and parsing have been motivated by part-based models and
traditionally are called compositional models, where the object is expressed as a layered composition
of image primitives. A notable example is the And/Or graph [20], where an object is modeled
by a tree with And-nodes representing different parts and Or-nodes representing different modes
of the same part. Similarly to DNNs, the And/Or graph consists of multiple layers, where lower
layers represent small generic image primitives, while higher layers represent object parts. Such
compositional models are easier to interpret than DNNs. On the other hand, they require inference
while the DNN models considered in this paper are purely feed-forward with no latent variables to
be inferred.
Further examples of compositional models for detection are based on segments as primitives [1],
focus on shape [13], use Gabor filters [10] or larger HOG filters [19]. These approaches are traditionally challenged by the difficulty of training and use specially designed learning procedures.
Moreover, at inference time they combine bottom-up and top-down processes.
Neural networks (NNs) can be considered as compositional models where the nodes are more
generic and less interpretable than the above models. Applications of NNs to vision problems are
decades old, with Convolutional NNs being the most prominent example [16]. It was not until
recently than these models emerged as highly successful on large-scale image classification tasks
[14, 15] in the form of DNNs. Their application to detection, however, is limited. Scene parsing,
as a more detailed form of detection, has been attempted using multi-layer Convolutional NNs [8].
Segmentation of medical imagery has been addressed using DNNs [3]. Both approaches, however,
use the NNs as local or semi-local classifiers either over superpixels or at each pixel location. Our
approach, however, uses the full image as an input and performs localization through regression. As
such, it is a more efficient application of NNs.
Perhaps the closest approach to ours is [18] which has similar high level objective but use much
smaller network with a different features, loss function and without a machinery to distinguish between multiple instances of the same class.
[Figure 1 diagram: input image → DNN → mask regression layer → full object mask, left object mask, top object mask.]
Figure 1: A schematic view of object detection as DNN-based regression.
[Figure 2 diagram: a small set of boxes covering the image → DNN at scale 1 and scale 2 → merged object masks → object box extraction → refine: crop to each box and repeat DNN mask regression and object box extraction.]
Figure 2: After regressing to object masks across several scales and large image boxes, we perform object box extraction. The obtained boxes are refined by repeating the same procedure on the sub-images, cropped via the current object boxes. For brevity, we display only the full object mask; however, we use all five object masks.
3 DNN-based Detection
The core of our approach is a DNN-based regression towards an object mask, as shown in Fig. 1.
Based on this regression model, we can generate masks for the full object as well as portions of
the object. A single DNN regression can give us masks of multiple objects in an image. To further
increase the precision of the localization, we apply the DNN localizer on a small set of large subwindows. The full flow is presented in Fig. 2 and explained below.
4 Detection as DNN Regression
Our network is based on the convolutional DNN defined by [14]. It consists of a total of 7 layers, the first 5 of which are convolutional and the last 2 fully connected. Each layer uses a rectified linear
unit as a non-linear transformation. Three of the convolutional layers have in addition max pooling.
For further details, we refer the reader to [14].
We adapt the above generic architecture for localization. Instead of using a softmax classifier as a last layer, we use a regression layer which generates an object binary mask DNN(x; Θ) ∈ ℝ^N, where Θ are the parameters of the network and N is the total number of pixels. Since the output of the network has a fixed dimension, we predict a mask of a fixed size N = d × d. After being resized to the image size, the resulting binary mask represents one or several objects: it should have value 1 at a particular pixel if this pixel lies within the bounding box of an object of a given class and 0 otherwise.
The network is trained by minimizing the L2 error for predicting a ground truth mask m ∈ [0, 1]^N for an image x:

$\min_{\Theta} \sum_{(x,m) \in D} \left\| (\mathrm{Diag}(m) + \lambda I)^{1/2} \left( DNN(x;\Theta) - m \right) \right\|_2^2 ,$
where the sum ranges over a training set D of images containing bounding boxed objects which are
represented as binary masks.
Since our base network is highly non-convex and optimality cannot be guaranteed, it is sometimes necessary to regularize the loss function by using varying weights for each output depending on the ground truth mask. The intuition is that most of the objects are small relative to the image size and the network can be easily trapped by the trivial solution of assigning a zero value to every output. To avoid this undesirable behavior, it is helpful to increase the weight of the outputs corresponding to non-zero values in the ground truth mask by a parameter λ ∈ ℝ+. If λ is chosen small, then the errors on the outputs with ground truth value 0 are penalized significantly less than those with 1, thereby encouraging the network to predict nonzero values even if the signals are weak.
In our implementation, we used networks with a receptive field of 225 × 225 and outputs predicting a mask of size d × d for d = 24.
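For concreteness, the per-example loss can be written in a few lines of NumPy (a sketch under the notation above; the network's output and λ are passed in):

```python
import numpy as np

def mask_loss(pred, m, lam):
    """Weighted L2 loss for one (image, mask) pair.
    pred, m: flat arrays of length N = d*d; m is the ground-truth mask in [0, 1];
    lam plays the role of lambda in the regularized loss above."""
    w = m + lam                  # diagonal of Diag(m) + lam * I
    return np.sum(w * (pred - m) ** 2)
```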
5 Precise Object Localization via DNN-generated Masks
Although the presented approach is capable of generating high-quality masks, there are several
additional challenges. First, a single object mask might not be sufficient to disambiguate objects
which are placed next to each other. Second, due to the limits in the output size, we generate masks
that are much smaller than the size of the original image. For example, for an image of size 400 × 400 and d = 24, each output would correspond to a cell of size 16 × 16, which would be insufficient to
precisely localize an object, especially if it is a small one. Finally, since we use as an input the full
image, small objects will affect very few input neurons and thus will be hard to recognize. In the
following, we explain how we address these issues.
5.1 Multiple Masks for Robust Localization
To deal with multiple touching objects, we generate not one but several masks, each representing
either the full object or part of it. Since our end goal is to produce a bounding box, we use one
network to predict the object box mask and four additional networks to predict four halves of the
box: bottom, top, left and right halves, all denoted by m_h, h ∈ {full, bottom, top, left, right}. These five predictions are over-complete but help reduce uncertainty and deal with mistakes in some of the masks. Further, if two objects of the same type are placed next to each other, then at least two of the five produced masks would not have the objects merged, which would allow us to disambiguate them.
This would enable the detection of multiple objects.
At training time, we need to convert the object box to these five masks. Since the masks can be much smaller than the original image, we need to downsize the ground truth mask to the size of the network output. Denote by T(i, j) the rectangle in the image for which the presence of an object is predicted by output (i, j) of the network. This rectangle has upper left corner at (d1/d (i − 1), d2/d (j − 1)) and size d1/d × d2/d, where d is the size of the output mask and d1, d2 the height and width of the image. During training, we assign as the value m(i, j) to be predicted the portion of T(i, j) covered by the box bb(h):
$m_h(i, j; bb) = \frac{\mathrm{area}(bb(h) \cap T(i, j))}{\mathrm{area}(T(i, j))}$    (1)

where bb(full) corresponds to the ground truth object box. For the remaining values of h, bb(h) corresponds to the four halves of the original box.
Note that we use the full object box as well as the top, bottom, left and right halves of the box to define a total of five different coverage types. The resulting masks m_h(bb) for a ground truth box bb are used at training time for the network of type h.
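A sketch of Eq. (1) as code, using the 1-indexed cell convention from the text (the box encoding (x0, y0, x1, y1) is our assumption):

```python
def cell_target(i, j, box, d, d1, d2):
    """Coverage of output cell T(i, j) (1-indexed) by box = (x0, y0, x1, y1),
    with d the mask side length and d1, d2 the image height and width."""
    cell_h, cell_w = d1 / d, d2 / d
    ty0, tx0 = cell_h * (i - 1), cell_w * (j - 1)   # upper-left corner of T(i, j)
    ty1, tx1 = ty0 + cell_h, tx0 + cell_w
    x0, y0, x1, y1 = box
    iw = max(0.0, min(x1, tx1) - max(x0, tx0))
    ih = max(0.0, min(y1, ty1) - max(y0, ty0))
    return (iw * ih) / (cell_h * cell_w)            # area(bb ∩ T) / area(T)
```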
At this point, it should be noted that one could train one network for all masks, where the output layer would generate all five of them. This would enable scalability. In this way, the five localizers would share most of the layers and thus would share features, which seems natural since they are dealing with the same object. An even more aggressive approach, using the same localizer for a lot of distinct classes, seems also workable.
5.2 Object Localization from DNN Output
In order to complete the detection process, we need to estimate a set of bounding boxes for each
image. Although the output resolution is smaller than the input image, we rescale the binary masks
to the same resolution as the input image. The goal is to estimate bounding boxes bb = (i, j, k, l)
parametrized by their upper-left corner (i, j) and lower-right corner (k, l) in output mask coordinates.
4
To do this, we use a score S expressing an agreement of each bounding box bb with the masks and
infer the boxes with highest scores. A natural agreement would be to measure what portion of the
bounding box is covered by the mask:
$S(bb, m) = \frac{1}{\mathrm{area}(bb)} \sum_{(i,j)} m(i, j)\, \mathrm{area}(bb \cap T(i, j))$    (2)
where we sum over all network outputs indexed by (i, j) and denote by m = DNN(x) the output of the network. If we expand the above score over all five mask types, then the final score reads:

$S(bb) = \sum_{h \in \mathrm{halves}} \left( S(bb(h), m_h) - S(bb(\bar{h}), m_h) \right)$    (3)
where halves = {full, bottom, top, left, right} index the full box and its four halves. For h denoting one of the halves, h̄ denotes the opposite half of h; e.g. a top mask should be well covered by a top mask and not at all by the bottom one. For h = full, we denote by h̄ a rectangular region around bb whose score will penalize if the full mask extends outside bb. In the above summation, the score for a box is large if it is consistent with all five masks.
We use the score from Eq. (3) to exhaustively search in the set of possible bounding boxes. We consider bounding boxes with mean dimension equal to [0.1, . . . , 0.9] of the mean image dimension and 10 different aspect ratios estimated by k-means clustering of the boxes of the objects in the training data. We slide each of the above 90 boxes using a stride of 5 pixels in the image. Note that the score from Eq. (3) can be efficiently computed using 4 operations after the integral image of the mask m has been computed. The exact number of operations is 5(2 × #pixels + 20 × #boxes), where the first term measures the complexity of the integral mask computation while the second accounts for box score computation.
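A sketch of the integral-image trick: after one pass to build the integral image, the mask sum inside any box takes four lookups. Here boxes are assumed aligned to output cells, which ignores the fractional cell overlaps of Eq. (2):

```python
import numpy as np

def integral_image(m):
    ii = np.zeros((m.shape[0] + 1, m.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(m, axis=0), axis=1)
    return ii

def box_score(ii, i, j, k, l):
    """Mean mask value inside the box with corners (i, j), (k, l), 0-indexed inclusive."""
    s = ii[k + 1, l + 1] - ii[i, l + 1] - ii[k + 1, j] + ii[i, j]
    return s / ((k - i + 1) * (l - j + 1))
```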
To produce the final set of detections, we perform two types of filtering. The first is keeping boxes with a strong score as defined by Eq. (2), e.g. larger than 0.5. We further prune them by applying the DNN classifier of [14] trained on the classes of interest and retaining the positively classified ones w.r.t. the class of the current detector. Finally, we apply non-maximum suppression as in [9].
5.3 Multi-scale Refinement of DNN Localizer
The issue of insufficient resolution of the network output is addressed in two ways: (i) applying the DNN localizer over several scales and a few large sub-windows; (ii) refinement of detections by applying the DNN localizer on the top inferred bounding boxes (see Fig. 2).
Using large windows at various scales, we produce several masks and merge them into higher resolution masks, one for each scale. The range of suitable scales depends on the resolution of the image and the size of the receptive field of the localizer: we want the image to be covered by network outputs which operate at a higher resolution, while at the same time we want each object to fall within at least one window and the number of these windows to be small.
To achieve the above goals, we use three scales: the full image and two other scales such that the size of the window at a given scale is half of the size of the window at the previous scale. We cover the image at each scale with windows such that these windows have a small overlap, 20% of their area. These windows are relatively small in number and cover the image at several scales. Most importantly, the windows at the smallest scale allow localization at a higher resolution.
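A sketch of the window generation (the exact tiling is not specified beyond the ~20% overlap, so the per-axis stride below is an assumption):

```python
def generate_windows(im_h, im_w, num_scales=3, overlap=0.2):
    """Tile the image at each scale with windows overlapping ~20% per axis."""
    out = []
    h, w = im_h, im_w
    for _ in range(num_scales):
        step_y = max(1, int(h * (1 - overlap)))
        step_x = max(1, int(w * (1 - overlap)))
        for y in range(0, im_h - h + 1, step_y):
            for x in range(0, im_w - w + 1, step_x):
                out.append((x, y, w, h))    # (left, top, width, height)
        h, w = h // 2, w // 2               # next scale: half-size windows
    return out
```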
At inference time, we apply the DNN on all windows. Note that this is quite different from sliding window approaches, because we need to evaluate only a small number of windows per image, usually less than 40. The generated object masks at each scale are merged by a maximum operation. This gives us three masks of the size of the image, each "looking" at objects of different sizes. For each scale, we apply the bounding box inference from Sec. 5.2 to arrive at a set of detections. In our implementation, we took the top 5 detections per scale, resulting in a total of 15 detections.
To further improve the localization, we go through a second stage of DNN regression called refinement. The DNN localizer is applied on the windows defined by the initial detection stage: each of the 15 bounding boxes is enlarged by a factor of 1.2 and passed to the network. Applying the localizer at higher resolution increases the precision of the detections significantly.
The complete algorithm is outlined in Algorithm 1.
Algorithm 1: Overall algorithm: multi-scale DNN-based localization and subsequent refinement. The algorithm is applied for each object class separately.
Input: input image x; networks DNN^h producing full and partial object box masks.
Output: set of detected object bounding boxes with confidence scores.
detections ← ∅
scales ← compute suitable scales for the image
for s ∈ scales do
    windows ← generate windows for the given scale s
    for w ∈ windows do
        for h ∈ {lower, upper, top, bottom, full} do
            m^h_w ← DNN^h(w)
        end
    end
    m^h ← merge masks m^h_w, w ∈ windows
    detections_s ← obtain a set of bounding boxes with scores from m^h as in Sec. 5.2
    detections ← detections ∪ detections_s
end
refined ← ∅
for d ∈ detections do
    c ← cropped image for the enlarged bounding box of d
    for h ∈ {lower, upper, top, bottom, full} do
        m^h_c ← DNN^h(c)
    end
    detection ← infer the highest-scoring bounding box from m^h as in Sec. 5.2
    refined ← refined ∪ {detection}
end
return refined
6 DNN Training
One of the compelling features of our network is its simplicity: the classifier is simply replaced by
a mask generation layer without any smoothness prior or convolutional structure. However, it needs
to be trained with a huge amount of training data: objects of different sizes need to occur at almost
every location.
For training the mask generator, we generate several thousand samples from each image divided
into 60% negative and 40% positive samples. A sample is considered to be negative if it does not
intersect the bounding box of any object of interest. Positive samples are those covering at least 80%
of the area of some of the object bounding boxes. The crops are sampled such that their width is
distributed uniformly between the prescribed minimum scale and the width of the whole image.
We use similar preparations steps to train the classifier used for the final pruning of our detections.
Again, we sample several thousand samples from each image: 60% negative and 40% positive
samples. The negative samples are those whose bounding boxes have less than 0.2 Jaccard-similarity with any of the groundtruth object boxes. The positive samples must have at least 0.6 similarity with some of the object bounding boxes and are labeled by the class of the object with the most similar bounding box to the crop. Adding the extra negative class acts as a regularizer and improves the
quality of the filters. In both cases, the total number of samples is chosen to be ten million for each
class.
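The sampling thresholds above translate into a simple labeling rule; the sketch below is illustrative (names are ours), and iou_fn stands for any Jaccard-similarity helper such as the one in the NMS sketch of Sec. 5.2.

```python
def label_crop(crop, gt_boxes, gt_classes, iou_fn):
    """Label a sampled crop for classifier training using the thresholds
    quoted in the text: max Jaccard similarity < 0.2 -> negative class;
    >= 0.6 -> class of the most similar ground-truth box; else discarded."""
    if not gt_boxes:
        return 'negative'
    sims = [iou_fn(crop, g) for g in gt_boxes]
    best = max(range(len(sims)), key=lambda i: sims[i])
    if sims[best] < 0.2:
        return 'negative'
    if sims[best] >= 0.6:
        return gt_classes[best]
    return None  # ambiguous crop, not used for training
```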
Since training for localization is harder than classification, it is important to start with the weights of a model with high-quality low-level filters. To achieve this, we first train the network for classification and reuse the weights of all layers but the classifier for localization. For localization, we have fine-tuned the whole network, including the convolutional layers.
The networks were trained by stochastic gradient descent, using ADAGRAD [6] to estimate the learning rate of the layers automatically.
class                | aero | bicycle | bird | boat | bottle | bus  | car  | cat  | chair | cow
DetectorNet1         | .292 | .352    | .194 | .167 | .037   | .532 | .502 | .272 | .102  | .348
Sliding windows1     | .213 | .190    | .068 | .120 | .058   | .294 | .237 | .101 | .059  | .131
3-layer model [19]   | .294 | .558    | .094 | .143 | .286   | .440 | .513 | .213 | .200  | .193
Felz. et al. [9]     | .328 | .568    | .025 | .168 | .285   | .397 | .516 | .213 | .179  | .185
Girshick et al. [11] | .324 | .577    | .107 | .157 | .253   | .513 | .542 | .179 | .210  | .240

class                | table | dog  | horse | m-bike | person | plant | sheep | sofa | train | tv
DetectorNet1         | .302  | .282 | .466  | .417   | .262   | .103  | .328  | .268 | .398  | .470
Sliding windows1     | .110  | .134 | .220  | .243   | .173   | .070  | .118  | .166 | .240  | .119
3-layer model [19]   | .252  | .125 | .504  | .384   | .366   | .151  | .197  | .251 | .368  | .393
Felz. et al. [9]     | .259  | .088 | .492  | .412   | .368   | .146  | .162  | .244 | .392  | .391
Girshick et al. [11] | .257  | .116 | .556  | .475   | .435   | .145  | .226  | .342 | .442  | .413
Table 1: Average precision on Pascal VOC2007 test set.
Figure 3: For each image, we show two heat maps on the right: the first one corresponds to the output of DNN^full, while the second one encodes the four partial masks in terms of the strength of the colors red, green, blue and yellow. In addition, we visualize the estimated object bounding box. All examples are correct detections with the exception of the examples in the last row.
7 Experiments
Dataset: We evaluate the performance of the proposed approach on the test set of the Pascal Visual
Object Challenge (VOC) 2007 [7]. The dataset contains approx. 5000 test images over 20 classes.
Since our approach has a large number of parameters, we train on the VOC2012 training and validation set, which has approx. 11K images. At test time an algorithm produces for an image a set of detections, defined by bounding boxes and their class labels. We use precision-recall curves and average precision (AP) per class to measure the performance of the algorithm.
Evaluation: The complete evaluation on the VOC2007 test set is given in Table 1. We compare our approach, named DetectorNet, to three related approaches. The first is a sliding window version of a DNN classifier by [14]. After training this network as a 21-way classifier (VOC classes and background), we generate bounding boxes with 8 different aspect ratios and at 10 different scales spaced 5 pixels apart. The smallest scale is 1/10-th of the image size, while the largest covers the whole image. This results in approximately 150,000 boxes per image. Each box is mapped affinely to the 225 × 225 receptive field. The detection score is computed by the softmax classifier. We reduce the number of boxes by non-maximum suppression, using a Jaccard similarity of at least 0.5 to discard boxes.
1 Trained on VOC2012 training and validation sets.
[Figure 4 plots: three precision-recall curves (bird, bus, table), each comparing DetectorNet with DetectorNet after stage 1 only; axes are recall (x) and precision (y).]
Figure 4: Precision recall curves of DetectorNet after the first stage and after the refinement.
After the initial training, we performed two rounds of hard negative mining on the training set. This added two million examples to our original training set and cut down the ratio of false positives.
The second approach is the 3-layer compositional model by [19] which can be considered a deep
architecture. As a co-winner of VOC2011 this approach has shown excellent performance. Finally,
we compare against the DPM by [9] and [11].
Although our comparison is somewhat unfair, as we trained on the larger VOC2012 training set, we show state-of-the-art performance against most of the models: we outperform on 8 classes and perform on par on 1 other. Note that it might be possible to tune the sliding window to perform on par with DetectorNet; however, the sheer number of network evaluations makes that approach infeasible, while DetectorNet requires only (#windows × #mask types) ≈ 120 crops per class to be evaluated. On a 12-core machine, our implementation took about 5-6 seconds per image for each class.
Contrary to the widely cited DPM approach by [9], DetectorNet excels at deformable objects such
as bird, cat, sheep, dog. This shows that it can handle less rigid objects in a better way while working
well at the same time on rigid objects such as car, bus, etc.
We show examples of the detections in Fig. 3, where both the detected box and all five generated masks are visualized. It can be seen that DetectorNet is capable of accurately finding not only large but also small objects. The generated masks are well localized and have almost no response outside the object. Such high-quality detector responses are hard to achieve, and in this case they are possible because of the expressive power of the DNN and its natural way of incorporating context.
The common misdetections are due to similarly looking objects (left object in the last row of Fig. 3) or imprecise localization (right object in the last row). The latter problem is due to the ambiguous definition of object extent in the training data: in some images only the head of the bird is visible, while in others the full body is. In many cases we might observe a detection of both the body and the face if they are both present in the same image.
Finally, the refinement step contributes drastically to the quality of the detection. This can be seen in
Fig. 4 where we show the precision vs recall of DetectorNet after the first stage of detection and after
refinement. A noticeable improvement can be observed, mainly due to the fact that better localized
true positives have their score boosted.
8 Conclusion
In this work we leverage the expressivity of DNNs for object detection. We show that the simple formulation of detection as DNN-based object mask regression can yield strong results when applied in a multi-scale coarse-to-fine procedure. These results come at some computational cost at training time: one needs to train a network per object type and mask type. As future work we aim to reduce this cost by using a single network to detect objects of different classes, and thus to expand to a larger number of classes.
References
[1] Narendra Ahuja and Sinisa Todorovic. Learning the taxonomy and models of categories present in arbitrary images. In International Conference on Computer Vision, 2007.
[2] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[3] Dan Ciresan, Alessandro Giusti, Juergen Schmidhuber, et al. Deep neural networks segment neuronal
membranes in electron microscopy images. In Advances in Neural Information Processing Systems 25,
2012.
[4] Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In Computer Vision
and Pattern Recognition, 2005.
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical
Image Database. In Computer Vision and Pattern Recognition, 2009.
[6] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and
stochastic optimization. In Conference on Learning Theory. ACL, 2010.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
[8] Clément Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
[9] Pedro F Felzenszwalb, Ross B Girshick, David McAllester, and Deva Ramanan. Object detection with
discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2010.
[10] Sanja Fidler and Aleš Leonardis. Towards scalable representations of object categories: Learning a hierarchy of parts. In Computer Vision and Pattern Recognition, 2007.
[11] R. B. Girshick, P. F. Felzenszwalb, and D. McAllester. Discriminatively trained deformable part models,
release 5. http://people.cs.uchicago.edu/~rbg/latent-release5/.
[12] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[13] Iasonas Kokkinos and Alan Yuille. Inference and learning with hierarchical shape models. International
Journal of Computer Vision, 93(2):201–225, 2011.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems 25, 2012.
[15] Quoc V Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S Corrado, Jeff Dean,
and Andrew Y Ng. Building high-level features using large scale unsupervised learning. In International
Conference on Machine Learning, 2012.
[16] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The
handbook of brain theory and neural networks, 1995.
[17] Jorge Sánchez and Florent Perronnin. High-dimensional signature compression for large-scale image
classification. In Computer Vision and Pattern Recognition, 2011.
[18] Hannes Schulz and Sven Behnke. Object-class segmentation using deep convolutional neural networks.
In Proceedings of the DAGM Workshop on New Challenges in Neural Computation, 2011.
[19] Long Zhu, Yuanhao Chen, Alan Yuille, and William Freeman. Latent hierarchical structural learning for
object detection. In Computer Vision and Pattern Recognition, 2010.
[20] Song Chun Zhu and David Mumford. A stochastic grammar of images. Foundations and Trends in Computer Graphics and Vision, 2(4):259–362, 2007.
Fast Template Evaluation with Vector Quantization
David Forsyth
Department of Computer Science
University of Illinois at Urbana-Champaign
daf@illinois.edu
Mohammad Amin Sadeghi
Department of Computer Science
University of Illinois at Urbana-Champaign
msadegh2@illinois.edu
Abstract
Applying linear templates is an integral part of many object detection systems and
accounts for a significant portion of computation time. We describe a method that
achieves a substantial end-to-end speedup over the best current methods, without
loss of accuracy. Our method is a combination of approximating scores by vector
quantizing feature windows and a number of speedup techniques including a cascade. Our procedure allows speed and accuracy to be traded off in two ways: by
choosing the number of Vector Quantization levels, and by choosing to rescore
windows or not. Our method can be directly plugged into any recognition system
that relies on linear templates. We demonstrate our method to speed up the original Exemplar SVM detector [1] by an order of magnitude and Deformable Part
models [2] by two orders of magnitude with no loss of accuracy.
1 Introduction
One core operation in computer vision involves evaluating a bank of templates at a set of sample
locations in an image. These sample locations are usually determined by sliding a window over the
image. This is by far the most computationally demanding task in current popular object detection
algorithms including canonical pedestrian [3] and face detection [4] methods (modern practice uses
a linear SVM); the deformable part models [2]; and exemplar SVMs [1]. The accuracy and flexibility of these algorithms have turned them into the building blocks of many modern computer vision
systems that would all benefit from a fast template evaluation algorithm. There is a vast literature
of models that are variants of these methods, but they mostly evaluate banks of templates at a set of
sample locations in images.
Because this operation is important, there is now a range of methods to speed up this process,
either by pruning locations to evaluate a template [7, 8] or by using fast convolution techniques.
The method we describe in this paper is significantly faster than any previous method, at little or
no loss of accuracy in comparison to the best performing reference implementations. Our method
does not require retraining (it can be applied to legacy models). Our method rests on the idea
that it is sufficient to compute an accurate, fixed-precision approximation to the value the original
template would produce. We use Vector Quantization speedups, together with a variety of evaluation
techniques and a cascade to exclude unpromising sample locations, to produce this approximation
quickly.
Our implementation is available online1 in the form of a MATLAB/C++ library. This library provides simple interfaces for evaluating templates in dense or sparse grids of locations. We used this
library to implement a deformable part model algorithm that runs nearly two orders of magnitude
faster than the original implementation [2]. This library is also used to obtain an order of magnitude
speed-up for the exemplar SVM detectors of [1]. Our library could also be used to speed up various
convolution-based techniques such as convolutional neural networks.
1 http://vision.cs.uiuc.edu/ftvq
As we discuss in section 4, speed comparisons in the existing literature are somewhat confusing.
Computation costs break into two major terms: per image terms, like computing HOG features;
and per (image × category) terms, where the cost scales with the number of categories as well as the
number of images. The existing literature, entirely properly, focuses on minimizing the per (image
? category) terms, and as a result, various practical overhead costs are sometimes omitted. We feel
that for practical systems, all costs should be accounted for, and we do so.
1.1 Prior Work
At heart, evaluating a deformable part model involves evaluating a bank of templates at a set of
locations in a scaled feature pyramid. There are a variety of strategies to speed up evaluation.
Cascades speed up evaluation by using cheap tests to identify sample points that do not require
further evaluation. Cascades have been very successful in face detection algorithms (e.g. [5, 6]). For
example, Felzenszwalb et al. [7] evaluate root models, and then evaluate the part scores iteratively
only in high-chance locations. At each iteration it evaluates the corresponding template only if
the current score of the object is higher than a certain threshold (trained in advance), resulting in an
order of magnitude speed-up without significant loss of accuracy. Pedersoli et al. [8] follow a similar
approach but estimate the score of a location using a lower resolution version of the templates.
Transform methods evaluate templates at all locations simultaneously by exploiting properties of
the Fast Fourier Transform. These methods, pioneered by Dubout et al. [9], result in a several fold
speed-up while being exact; however, there is the per image overhead of computing an FFT at the
start, and a per (image ? category) overhead of computing an inverse FFT at the end. Furthermore,
the approach computes the scores of all locations at once, and so is not random-access; it cannot be
efficiently combined with a cascade detection process. In contrast, our template evaluation algorithm
does not require batching template evaluations. As a result, we can combine our evaluation speedups
with the cascade framework of [7]. We show that using our method in a cascade framework leads to
two orders of magnitude speed-up comparing to the original deformable part model implementation.
Extreme category scaling methods exploit locality sensitive hashing to get a system that can detect
100,000 object categories in a matter of tens of seconds [10]. This strategy appears effective (one can't tell precisely, because there is no ground truth data for that number of categories, nor are there baselines) and achieves a good speedup with very large numbers of categories. However, the method cannot speed up detection of the 20 VOC challenge objects without significant loss of
accuracy. In contrast, because our method relies on evaluation speedups, it can speed up evaluation
of even a single template.
Kernel approximation methods: Maji and Berg showed how to evaluate a histogram intersection
kernel quickly [13]. Vedaldi et al. [12] propose a kernel approximation technique and use a new set
of sparse features that are naturally faster to evaluate. This method provides a few-fold speed-up
with manageable loss of accuracy.
Vector Quantization offers speedups in situations where arithmetic accuracy is not crucial
(e.g. [12, 14, 15, 16]). Jégou et al. [15] use Vector Quantization as a technique for approximate nearest neighbour search. They represent a vector by a short code composed of a number of subspace quantization indices. They efficiently estimate the Euclidean distance between two vectors
from their codes. This work has been very successful as it offers two orders of magnitude speedup
with a reasonable accuracy. Kokkinos [14] describes a similar approach to speed up dot-product.
This method can efficiently estimate the score of a template at a certain location by looking-up a
number of tables. Vector Quantization is our core speedup technique.
Feature quantization vs. Model quantization: Our method is similar to [12] as we both use Vector
Quantization to speed up template evaluation. However, there is a critical difference in the way we
quantize space. [12] quantizes the feature space and trains a new model using a high-dimensional
sparse feature representation. In contrast, our method uses legacy models (that were trained on a
low-dimensional dense feature space) and quantizes the space only at the level of evaluating the
scores. Our approach is simpler because it does not need to retrain a model; it also leads to higher
accuracy as shown in Table 2.
(a) Input Image
(b) Original HOG
(c) 256 clusters
(d) 16 clusters
Figure 1: Visualization of Vector Quantized HOG features. (a) is the original image, (b) is the HOG
visualization, (c) is the visualization of Vector Quantized HOG feature into c = 256 clusters, (d)
is the visualization of Vector Quantized HOG feature into c = 16 clusters. HOG visualizations are
produced using the inverse HOG algorithm from [19]. Vector Quantized HOG features into c = 256
clusters can often preserve most of the visual information.
2 Fast Approximate Scoring with Vector Quantization
The vast majority of modern object detectors work as follows:
• In a preprocessing stage, an image pyramid and a set of underlying features for each layer of the pyramid are computed.
• For each location in each layer of the pyramid, a fixed-size window of the image features spanning the location is extracted. A set of linear functions of each such window is computed. The linear functions are then assembled into a score for each category at that location.
• A post-processing stage rejects scores that are either not local extrema or under threshold.
Precisely how the score is computed from linear functions varies from detector to detector. For
example, exemplar SVMs directly use the score; deformable part models summarize a score from
several linear functions in nearby windows; and so on. The threshold for the post-processing stage
is chosen using application loss criteria. Typically, detectors are evaluated by marking true windows
in test data; establishing an overlap criterion to distinguish between false and true detects; plotting
precision as a function of recall; and then computing the average precision (AP; the integral of this
plot). A detector that gets a good AP does so by assigning high values of the score to windows that
strongly overlap the right answer. Notice that what matters here is the ranking of windows, rather
than the actual value of the score; some inaccuracy in score computation might not affect the AP.
In all cases, the underlying features are the HOG features, originally described by Dalal and Triggs [3]. HOG features for a window consist of a grid of cells, where each cell contains a d-dimensional vector (typically d = 32) that corresponds to a small region of the image (typically 8 × 8 pixels).
The linear template is usually thought of as an m × n table of vectors. Each entry of the table corresponds to a grid element, and contains a d-dimensional vector w. The score at location (x, y) is given by:
S(x, y) = \sum_{\Delta y=1}^{m} \sum_{\Delta x=1}^{n} w(\Delta x, \Delta y) \cdot h(x + \Delta x - 1, y + \Delta y - 1)
where w is a weight vector and h is the feature vector at a certain cell (both d-dimensional vectors).
We wish to compute an approximation to this score where (a) the accuracy of the approximation is relatively easily manipulated, so we can trade off speed and performance, and (b) the approximation is extremely fast.

[Figure 2 plots: computation time vs. estimation error for PCA and VQ, with the working points labelled by the number of PCA dimensions and VQ clusters; two scatter plots of estimated vs. true score for PCA with D = 2 and Vector Quantization with c = 4096.]

Figure 2: The plot on the left side illustrates the trade-off between computation time and estimation error |S(x, y) − S'(x, y)| using two approaches: Principal Component Analysis and Vector Quantization. The time reported here is the average time required for estimating the score of a 12 × 12 template. The number of PCA dimensions and the number of clusters are indicated on the working points. The two scatter plots illustrate template score estimates using 10^7 sample points. The working points D = 2 for PCA and c = 4096 for VQ are comparable in terms of running time.
To do so, we quantize the feature vectors in each cell h(x, y) into c clusters using a basic k-means procedure and encode each quantized cell q(x, y) using its cluster ID (which can range from 1 to c). Figure 1 visualizes original and quantized HOG features. We pre-compute the partial dot product of each template cell w(Δx, Δy) with all 1 ≤ i ≤ c possible centroids and store them in a lookup table T(Δx, Δy, i). We then approximate the dot product by looking up the table:
S'(x, y) = \sum_{\Delta y=1}^{m} \sum_{\Delta x=1}^{n} T(\Delta x, \Delta y, q(x + \Delta x - 1, y + \Delta y - 1)).
This reduces the per-template computational complexity of exhaustive search from Θ(mnd) to Θ(mn). In practice 32 multiplications and 32 additions are replaced by one lookup and one addition. This can potentially speed up the process by a factor of 32. Table lookup is often slower than multiplication; therefore gaining the full speed-up requires certain implementation techniques that we will explain in the next section.
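As a minimal end-to-end sketch of this idea (names are ours; the actual library is a MATLAB/C++ implementation), one can cluster HOG cells with any off-the-shelf k-means, quantize a feature map once per image, build T once per template, and then score any location with m·n table reads:

```python
import numpy as np
from sklearn.cluster import KMeans  # any k-means implementation will do

def build_codebook(cells, c=256):
    """Cluster d-dimensional HOG cells (N x d) into c centroids (c x d)."""
    return KMeans(n_clusters=c, n_init=4).fit(cells).cluster_centers_

def quantize(hog, centroids):
    """Map an H x W x d feature map to an H x W array of cluster IDs q(x, y)
    by exhaustive nearest-centroid search."""
    d2 = ((hog[:, :, None, :] - centroids[None, None]) ** 2).sum(axis=-1)
    return d2.argmin(axis=-1)

def build_lookup(w, centroids):
    """T[dy, dx, i] = w(dx, dy) . centroid_i, precomputed once per template.
    w has shape (m, n, d); the result has shape (m, n, c)."""
    return w @ centroids.T

def vq_score(T, q, y, x):
    """Approximate S'(x, y) with m*n table lookups and additions."""
    m, n, _ = T.shape
    rows = np.arange(m)[:, None]
    cols = np.arange(n)[None, :]
    return T[rows, cols, q[y:y + m, x:x + n]].sum()
```

Each evaluation now costs mn lookups and additions instead of mnd multiply-adds, and the codebook and q(x, y) are shared by every template evaluated on the image.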
The cost of this approximation is that S'(x, y) ≠ S(x, y), and tight bounds on the difference are unavailable. However, as c gets large, we expect the approximation to improve. As Figure 2 demonstrates, the approximation is good in practice, and improves quickly with larger c. A natural alternative, offered by Felzenszwalb et al. [7], is to use PCA to compress the cell vectors. This approximation should work well if high-scoring vectors lie close to a low-dimensional affine space; the approximation can be improved by taking more principal components. However, the approximation will work poorly if the cell vectors have a "blobby" distribution, which appears to be the case here.
Our experimental analysis shows Vector Quantization is generally more effective than principal
component analysis for speeding-up dot product estimation. Figure 2 compares the time-accuracy
trade-offs posed by both techniques.
It should be obvious that this VQ approximation technique is compatible with a cascade. As results
below show, this approximate estimate of S(x, y) is in practice extremely fast, particularly when
implemented with a cascade. The value of c determines the trade-off between speed and accuracy.
While the loss of accuracy is small, it can be mitigated. Most object detection algorithms only act on the small fraction of scores that are higher than a certain threshold. Very low scores contribute
little recall, and do not change AP significantly either (because the contribution to the integral is
tiny). A further speed-accuracy tradeoff involves re-scoring the top scoring windows using the
exact evaluation of S(x, y). Our experimental results show that the described Vector Quantized
convolution coupled with a re-estimation step significantly speeds up the detection process without
any loss of accuracy.
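The re-scoring step amounts to a few lines: rank locations by the approximate score and pay for exact evaluation only on a short list. The sketch below is ours; the 1% keep fraction mirrors the exemplar-SVM experiments in Sec. 5.2 and is only illustrative.

```python
import numpy as np

def rescore_top(locations, approx_scores, exact_score, keep_frac=0.01):
    """Re-evaluate only the best-ranked fraction of locations with the exact
    template evaluation; return (location, exact score) pairs."""
    k = max(1, int(len(locations) * keep_frac))
    top = np.argsort(approx_scores)[::-1][:k]
    return [(locations[i], exact_score(locations[i])) for i in top]
```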
[Figure 3 graphics: left, a template zero-padded into a grid of shifted copies (Spatial Padding); right, maps of Sapp, Sdef and the combined score S with the hill-climbing path.]
Figure 3: Left: A single template can be zero-padded spatially to generate multiple larger templates. We pack the spatially padded templates to evaluate several locations in one pass. Right: visualization of Sapp, Sdef and S. To estimate the maximum score we start from the center and move to the highest-scoring neighbour until we reach a local maximum. In this example, we take three iterations to reach the global maximum and compute the template at 17 locations in three steps (rightmost image).
3 Fast Score Estimation Techniques
Implementing a Vector Quantization score estimation is straightforward, and is the primary source of
our speedup. However, a straightforward implementation cannot leverage the full speed-up potential
available with Vector Quantization. In this section we describe a few important techniques we used
to obtain further speed.
Exploiting Cascades: It should be obvious that our VQ approximation technique is compatible with
a cascade. We incorporated our Vector Quantization technique into the cascade detection algorithm
of [7], resulting in a few folds speed-up with no loss of accuracy. The cascade algorithm estimates
the root score and the part scores iteratively (based on a pre-trained order). At each iteration it
prunes out the locations lower than a certain score threshold. This process is done in two passes;
the first pass uses a fast score estimation technique while the second pass uses the original template
evaluation. Felzenszwalb et al. [7] use PCA for the fast approximation stage. We instead use Vector
Quantization to estimate the scores. In the case of deformable part models this procedure limits the
process for both convolution and distance transform together. Furthermore, we use more aggressive
pruning thresholds because our estimation is more accurate.
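Schematically, the pruned evaluation looks like the sketch below (structure and names are ours): partial scores accumulate in the trained stage order, and a location is dropped as soon as the running total falls under the stage threshold.

```python
def cascade_score(stage_scores, thresholds, location):
    """Evaluate per-stage score functions in order, pruning early.
    stage_scores: callables returning the (VQ-approximated) score of a stage;
    thresholds: per-stage pruning thresholds trained in advance."""
    total = 0.0
    for score_fn, t in zip(stage_scores, thresholds):
        total += score_fn(location)
        if total < t:
            return None  # pruned: no further stages evaluated
    return total
```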
Fast deformation estimates: To find the best deformation for a part template, Felzenszwalb et al. [7] perform an exhaustive search over a 9 × 9 grid of locations and find the deformation (Δx, Δy) that maximizes:

\max_{\Delta x, \Delta y} S(\Delta x, \Delta y) = S_{\text{app}}(\Delta x, \Delta y) + S_{\text{def}}(\Delta x, \Delta y), \qquad -4 \le \Delta x, \Delta y \le 4
where Sapp is the appearance score and Sdef is the deformation score. We observed that since Sdef is concave and significantly influences the score, searching for a local maximum is a reasonable approximation. In a hill-climbing process we start from S(0, 0) and iteratively move to the neighbouring location that has the highest score among all neighbours. We stop when S(Δx, Δy) is larger than at all of its 8 neighbouring cells (Figure 3). This process considerably limits the number of locations to be processed and further speeds up the process without any loss in accuracy.
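A sketch of the hill-climbing search on the 9 × 9 grid (offsets −4..4 stored at indices 0..8). For clarity it takes a dense score array; in practice the benefit comes from evaluating S(Δx, Δy) lazily, only at visited cells.

```python
def best_deformation(S):
    """Hill-climb from the centre of a 9x9 score grid to a local maximum.
    Returns (dy, dx, score) with dy, dx in -4..4."""
    y, x = 4, 4  # start at deformation (0, 0)
    while True:
        by, bx = y, x
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < 9 and 0 <= nx < 9 and S[ny][nx] > S[by][bx]:
                    by, bx = ny, nx
        if (by, bx) == (y, x):          # no higher neighbour: local maximum
            return y - 4, x - 4, S[y][x]
        y, x = by, bx
```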
Packed Lookup Tables: Depending on the detailed structure of memory, a table lookup instruction can be a couple of times slower than a multiplication instruction. When there are multiple templates to be evaluated at a certain location, we pack their corresponding lookup tables and index them all in one memory access, thereby reducing the number of individual memory references. This allows using SIMD instructions to run multiple additions in one CPU instruction.
Padding Templates: Packing lookup tables appears unhelpful when there is only one template
to evaluate. However, we can obtain multiple templates in this case by zero-padding the original
template (to represent various translates of that template; Figure 3). This allows packing the lookup
tables to obtain the score of multiple locations in one pass.
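A sketch of the packing idea (ours): stacking p lookup tables so that a single cell index fetches p partial scores, which vectorized additions can accumulate at once. The padded-template trick is the same code with the p tables being shifted, zero-padded copies of one template.

```python
import numpy as np

def pack_tables(tables):
    """Stack p lookup tables of shape (m, n, c) into one (m, n, c, p) array."""
    return np.stack(tables, axis=-1)

def packed_scores(packed, q, y, x):
    """One lookup per cell now feeds p accumulators; returns p scores."""
    m, n, _, p = packed.shape
    rows = np.arange(m)[:, None]
    cols = np.arange(n)[None, :]
    return packed[rows, cols, q[y:y + m, x:x + n]].sum(axis=(0, 1))
```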
Method           | HOG features | per image | per (image×category) | per category
Original DPM [2] | 40ms         | 0ms       | 665ms                | 0ms
DPM Cascade [7]  | 40ms         | 6ms       | 84ms                 | 3ms
FFLD [9]         | 40ms         | 7ms       | 91ms                 | 43ms
Our+rescoring    | 40ms         | 76ms      | 21ms                 | 6ms
Our-rescoring    | 40ms         | 76ms      | 9ms                  | 6ms
Table 1: Average running time of the state-of-the-art detection algorithms on the PASCAL VOC 2007 dataset. The running time is broken into four major terms: feature computation, per image preprocessing, per (image × category) processing and per category preprocessing. The running times refer to a parallel implementation using 6 threads on a XEON E5-1650 processor.
Sparse lookup tables: Depending on the design of the features and the clustering approach, lookup tables can be sparse in some applications. Packing p dense lookup tables would require a dense c × p table. However, if the lookup tables are sparse, each row of the table can be stored in a
sparse data structure. Thus, when indexing the table with a certain index, we just need to update the
scores of a small fraction of templates. This would both limit the memory complexity and the time
complexity for evaluating the templates.
Fixed point arithmetic: The most popular data type for linear classification systems is 32-bit single precision floating point. In this architecture 24 bits are allotted to the mantissa and sign. Since the template evaluation process in this paper does not involve multiplication, the data stay in about the same range, so one could keep the data in fixed-point format, which requires simpler addition arithmetic. Our experiments have shown that using 16-bit fixed-point precision speeds up evaluation without sacrificing accuracy.
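A sketch of the conversion; the exact fixed-point split is not stated above, so the 8 fractional bits here are an assumption. Accumulating in a wider integer avoids overflow over the m·n additions.

```python
import numpy as np

FRAC_BITS = 8  # assumed split; the text only specifies 16-bit fixed point

def to_fixed16(T):
    """Quantize a float lookup table to int16 fixed point."""
    scaled = np.round(T * (1 << FRAC_BITS))
    return np.clip(scaled, -32768, 32767).astype(np.int16)

def fixed_score(T16, q, y, x):
    """Sum int16 entries in an int32 accumulator, then rescale to float."""
    m, n, _ = T16.shape
    rows, cols = np.arange(m)[:, None], np.arange(n)[None, :]
    acc = T16[rows, cols, q[y:y + m, x:x + n]].astype(np.int32).sum()
    return acc / float(1 << FRAC_BITS)
```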
4 Computation Cost Model
In order to assess detection speed we need to understand the underlying computation cost. The
current literature is confusing because there is no established speed evaluation measure. Dean et
al. [10] report a running time for all 20 PASCAL VOC categories that include all the preprocessing.
Dubout et al. [9] only report convolution time and distance transform time. Felzenszwalb et al. [7]
compare single-core running time while others report multi-core running times.
Computation costs break into two major terms: per image terms, where the cost scales with the number of images, and per (image × category) terms, where the cost scales with the number of categories
as well as the number of images. The total time taken is the sum of four costs:
• Computing HOG features is a mandatory, per image step, shared by all HOG-based detection algorithms.
• Per image preprocessing is any process on the image data structure except HOG feature extraction. Examples include applying an FFT, or vector quantizing the HOG features.
• Per category preprocessing establishes the required detector data structure. This is not usually a significant bottleneck as there are often more images than categories.
• Per (image × category) processes include convolution, distance transform and any postprocessing that depends both on the image and the category.
Table 1 compares the performance of our approach with four major state-of-the-art algorithms. The
algorithms described are evaluated on various scales of the image with various root templates. We
compared algorithms based on parallel implementations. Reference codes published by the authors
(except [7]) were all implemented to use multiple cores. We parallelized [7] and the HOG feature extraction function for fair comparison. We evaluate all running times on a XEON E5-1650 Processor
(6 Cores, 12MB Cache, 3.20 GHz).
Method               | mAP   | time
HSC [20]             | 0.343 | 180s*
WTA [10]             | 0.240 | 26s*
DPM V5 [22]          | 0.330 | 13.3s
DPM V4 [21]          | 0.301 | 13.2s
DPM V3 [2]           | 0.268 | 11.6s
Rigid templates [23] | 0.31  | 10s*
Vedaldi [12]         | 0.277 | 7s*
DPM V4 -parts        | 0.214 | 2.8s
FFLD [9]             | 0.323 | 1.8s
DPM Cascade [7]      | 0.331 | 1.7s
Our+rescoring        | 0.331 | 0.53s
Our-rescoring        | 0.298 | 0.29s
Table 2: Comparison of various object detection methods on the PASCAL VOC 2007 dataset. The reported time here is the time to complete the detection of 20 categories starting from the raw image. The reference implementations of the marked (*) algorithms were not accessible, so we used published time statistics. These four works were published after 2012 and their baseline computers are comparable to ours in terms of speed.
5 Experimental Results
We tested our template evaluation library on two well-known detection methods: (a) deformable part models and (b) exemplar SVM detectors. We used the PASCAL VOC 2007 dataset, which is an established benchmark for object detection algorithms. We also used legacy models from [1, 22] trained on this dataset, and we use the state-of-the-art baselines published in [1, 22].
We compare our algorithm using the 20 standard VOC objects. We report our average precision on
all categories and compare them to the baselines. We also report mean average precision (mAP) and
running time by averaging over categories (Table 3).
We run all of our experiments with c = 256 clusters. We perform an exhaustive search to find the nearest cluster for all HOG pyramid cells, which takes on average 76ms per image. The cost of our exhaustive nearest neighbour search depends linearly on the number of clusters. In our experiments c = 256 is shown to be enough to preserve detection accuracy. However, for more general applications one might need to consider a different c.
5.1 Deformable Part Models
The deformable part models algorithm is the standard object detection baseline. Although there is a significant difference between the latest version [22] and the earlier versions [2], various authors still compare to the old versions. Table 2 compares our implementation to ten prominent methods, including the original deformable part models versions 3, 4 and 5. In this paper we compare the average running time of the algorithms together with the mean average precision over 20 categories. Detailed per
category average precisions are published in the reference papers.
The original DPM package comes with a number of implementations for convolution (which is the dominant process). We compare to the fastest version, which uses both CPU SIMD instructions and multi-threading. All baseline algorithms are also multi-threaded. We present two versions of our cascade method. The first version (FTVQ+rescoring) selects a pool of candidate locations by quickly estimating scores. It then evaluates the original templates on the candidates to fine-tune the scores. The second version (FTVQ-rescoring) relies purely on Vector Quantization to estimate scores and does not rescore templates. The second algorithm runs twice as fast with about a 3% drop in mean
average precision.
5.2 Exemplar Detectors
Exemplar SVMs are important benchmarks as they deal with a large set of independent templates
that must be evaluated throughout the images. We first estimate template scores using our Vector
Quantization based library. For the convolution we get roughly a 25-fold speedup compared to the baseline implementation. Both our library and the baseline convolution make use of SIMD operations and multi-threading. We re-estimate the scores of the top 1% of locations for each category, and
we are virtually able to reproduce the original average precisions (Table 3). Including MATLAB
implementation overhead, our version of exemplar SVM is roughly 8-fold faster than the baseline
without any loss in accuracy.
Method         | aero | bicycle | bird | boat | bottle | bus | car | cat | chair | cow | dining table | dog | horse | motor bike | person | potted plant | sheep | sofa | train | tv  | mAP   | time
DPM V5 [22]    | .33  | .59     | .10  | .18  | .25    | .51 | .53 | .19 | .21   | .24 | .28          | .12 | .57   | .48        | .43    | .14          | .22   | .36  | .47   | .39 | 0.330 | 665ms
Ours+rescoring | .33  | .59     | .10  | .16  | .27    | .51 | .54 | .22 | .20   | .24 | .27          | .13 | .57   | .49        | .43    | .14          | .21   | .36  | .45   | .42 | 0.331 | 21ms
Ours-rescoring | .26  | .58     | .10  | .11  | .22    | .45 | .53 | .20 | .17   | .19 | .21          | .11 | .53   | .44        | .41    | .11          | .19   | .32  | .43   | .41 | 0.298 | 9ms
Exemplar [1]   | .19  | .47     | .03  | .11  | .09    | .39 | .40 | .02 | .06   | .15 | .07          | .02 | .44   | .38        | .13    | .05          | .20   | .12  | .36   | .28 | 0.198 | 13.7ms
Ours           | .18  | .47     | .03  | .11  | .09    | .39 | .40 | .02 | .06   | .15 | .07          | .02 | .44   | .38        | .13    | .05          | .20   | .12  | .36   | .28 | 0.197 | 1.7ms
Table 3: Comparison of our method with two baselines on PASCAL VOC 2007. The top three rows refer to the DPM implementation while the bottom two rows refer to exemplar SVMs. We test our algorithm both with and without accurate rescoring. The bottom two rows compare the performance of our exemplar SVM implementation with the baseline. For the top three rows, running time refers to per (image × category) time. For the bottom two rows, running time refers to per (image × exemplar) time, which includes MATLAB overhead.
6 Discussion
In this paper we present a method to speed up object detection by two orders of magnitude with little
or no loss of accuracy. The main contribution of this paper lies in the right selection of techniques
that are compatible and together lead to a major speedup in template evaluation. The implementation
of this work is available online to facilitate future research. This library is of special interest in large-scale and real-time object detection tasks.
While our method is focused on fast evaluation, it has implications for training. HOG features require 32 × 4 = 128 bytes to store the information in each cell (more than 60GB for the entire
PASCAL VOC 2007 training set). This is why current detector training algorithms need to reload
images and recompute their feature vectors every time they are being used. Batching is not compatible with the random-access nature of most training algorithms.
In contrast, Vector Quantized HOG features with 256 clusters need 1 byte per cell. This makes storing the feature vectors of all the PASCAL VOC 2007 training images in random access memory entirely feasible (it would require about 1GB of memory). Doing so allows an SVM solver to access points in the training set quickly. Our application-specific implementation of PEGASOS [24] solves an SVM classifier for a 12 × 12 template with 10^8 training examples (uniformly distributed in the training set) in a matter of one minute. Being able to access the whole training set, plus faster template evaluation, could make hard negative mining either faster or unnecessary.
There are more opportunities for speedup. Notice that we pay a per image penalty computing the
Vector Quantization of the HOG features, on top of the cost of computing those features. We expect
that this could be sped up considerably, because we believe that estimating the Vector Quantized
center to which an image patch goes should be much faster than evaluating the HOG features, then
matching.
Acknowledgement
This work was supported in part by NSF Expeditions award IIS-1029035 and in part by ONR MURI
award N000141010934.
References
[1] T. Malisiewicz and A. Gupta and A. Efros. Ensemble of Exemplar-SVMs for Object Detection
and Beyond. In International Conference on Computer Vision, 2011.
8
[2] P. F. Felzenszwalb and R. B. Girshick and D. McAllester and D. Ramanan. Object Detection
with Discriminatively Trained Part Based Models. In IEEE Transactions on Pattern Analysis
and Machine Intelligence, 2010.
[3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[4] H. Rowley and S. Baluja and T. Kanade. Neural Network-Based Face Detection. In IEEE
Transactions On Pattern Analysis and Machine intelligence, 1998.
[5] P. Viola, M. Jones. Rapid object detection using a boosted cascade of simple features in Conference on Computer Vision and Pattern Recognition, 2001
[6] R. Sznitman, C. Becker, F. Fleuret, and P. Fua. Fast Object Detection with Entropy-Driven
Evaluation. in Conference on Computer Vision and Pattern Recognition, 2013
[7] P. F. Felzenszwalb and R. B. Girshick and D. McAllester. Cascade Object Detection with Deformable Part Models. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[8] M. Pedersoli and J. Gonzalez and A. Bagdanov and and JJ. Villanueva. Recursive Coarse-toFine Localization for fast Object Detection. In European Conference on Computer Vision, 2010.
[9] C. Dubout and F. Fleuret. Exact Acceleration of Linear Object Detectors. In European Conference on Computer Vision, 2012.
[10] T. Dean and M. Ruzon and M. Segal and J. Shlens and S. Vijayanarasimhan and J. Yagnik.
Fast, Accurate Detection of 100,000 Object Classes on a Single Machine. In IEEE Conference
on Computer Vision and Pattern Recognition, 2013.
[11] P. Indyk and R. Motwani. Approximate nearest neighbours: Towards removing the curse of
dimensionality. In ACM Symposium on Theory of Computing, 1998.
[12] A. Vedaldi and A. Zisserman. Sparse Kernel Approximations for Efficient Classification and
Detection In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[13] S. Maji and A. Berg, J. Malik. Efficient Classification for Additive Kernel SVMs. In IEEE
Transactions on Pattern Analysis and Machine Intelligence, 2013.
[14] I. Kokkinos. Bounding Part Scores for Rapid Detection with Deformable Part Models In 2nd
Parts and Attributes Workshop, in conjunction with ECCV, 2012.
[15] Hervé Jégou and Matthijs Douze and Cordelia Schmid. Product quantization for nearest neighbour search. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[16] R. M. Gray and D. L. Neuhoff. Quantization. In IEEE Transactions on Information Theory,
1998.
[17] S. Singh, and A. Gupta and A. Efros. Unsupervised Discovery of Mid-level Discriminative
Patches. In European Conference on Computer Vision, 2012.
[18] I. Endres and K. Shih and J. Jiaa and D. Hoiem. Learning Collections of Part Models for
Object Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[19] C. Vondrick and A. Khosla and T. Malisiewicz and A. Torralba. Inverting and Visualizing
Features for Object Detection. In arXiv preprint arXiv:1212.2278, 2012.
[20] X. Ren and D. Ramanan. Histograms of Sparse Codes for Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[21] P. Felzenszwalb and R. Girshick and D. McAllester. Discriminatively Trained Deformable Part
Models, Release 4. http://people.cs.uchicago.edu/~pff/latent-release4/.
[22] R. Girshick and P. Felzenszwalb and D. McAllester. Discriminatively Trained Deformable Part
Models, Release 5. http://people.cs.uchicago.edu/~rbg/latent-release5/.
[23] S. Divvala and A. Efros and M. Hebert. How important are "Deformable Parts" in the Deformable Parts Model? In European Conference on Computer Vision, Parts and Attributes
Workshop, 2012
[24] S. Shalev-Shwartz and Y. Singer and N. Srebro. Pegasos: Primal Estimated sub-GrAdient
SOlver for SVM in Proceedings of the 24th international conference on Machine learning,
2007
4,651 | 5,209 | Transfer Learning in a Transductive Setting
Marcus Rohrbach
Sandra Ebert
Bernt Schiele
Max Planck Institute for Informatics, Saarbrücken, Germany
{rohrbach,ebert,schiele}@mpi-inf.mpg.de
Abstract
Category models for objects or activities typically rely on supervised learning
requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even
though it is a common scenario. In this work, we extend transfer learning with
semi-supervised learning to exploit unlabeled instances of (novel) categories with
no or only a few labeled instances. Our proposed approach Propagated Semantic
Transfer combines three techniques. First, we transfer information from known to
novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically we adapt a
we exploit the manifold structure of novel classes. More specifically we adapt a
graph-based learning algorithm ? so far only used for semi-supervised learning ?
to zero-shot and few-shot learning. Third, we improve the local neighborhood
in such graph structures by replacing the raw feature-based representation with a
mid-level object- or attribute-based representation. We evaluate our approach on
three challenging datasets in two different applications, namely on Animals with
Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer
and semi-supervised approaches on all datasets.
1 Introduction
While supervised training is an integral part of building visual, textual, or multi-modal category
models, more recently, knowledge transfer between categories has been recognized as an important
ingredient to scale to a large number of categories as well as to enable fine-grained categorization.
This development reflects the psychological point of view that humans are able to generalize to
novel¹ categories with only a few training samples [17, 1]. This has recently gained increased
interest in the computer vision and machine learning literature, which look at zero-shot recognition
(with no training instances for a class) [11, 19, 9, 22, 16], and one- or few-shot recognition [29, 1,
21]. Knowledge transfer is particularly beneficial when scaling to large numbers of classes [23, 16],
distinguishing fine-grained categories [6], or analyzing compositional activities in videos [9, 22].
Recognizing categories with no or only few labeled training instances is challenging. To improve existing transfer learning approaches, we exploit several sources of information. Our approach allows
using (1) trained category models, (2) external knowledge, (3) instance similarity, and (4) labeled
instances of the novel classes if available. More specifically we learn category or attribute models
based on labeled training data for known categories y (see also Figure 1) using supervised training.
These trained models are then associated with the novel categories z using, e.g. expert or automatically mined semantic relatedness (cyan lines in Figure 1). Similar to unsupervised learning [32, 28]
our approach exploits similarities in the data space via a graph structure to discover dense regions
that are associated with coherent categories or concepts (orange graph structure in Figure 1). However, rather than using the raw input space, we map our data into a semantic output space with the
¹We use "novel" throughout the paper to denote categories with no or few labeled training instances.
Figure 1: Conceptual visualisation of our approach Propagated Semantic Transfer. Known categories y, novel categories z, instances x (colors denote predicted category affiliation). [Panels: object/attribute classifier scores are used to estimate instance similarity; semantic knowledge transfer from external knowledge, few labeled instances, and instance similarity combine into an improved prediction.] Qualitative results can be found in supplemental material and on our website.
models trained on the known classes (pink arrow) to benefit from their discriminative knowledge.
Given the uncertain predictions and the graph structure we adapt semi-supervised label propagation [34, 33] to generate more reliable predictions. If labeled instances are available they can be
seamlessly added. Note, attribute or category models do not have to be retrained if novel classes are
added which is an important aspect e.g. in a robotic scenario.
The main contribution of this work is threefold. First, we propose a novel approach that extends
semantic knowledge transfer to the transductive setting, exploiting similarities in the unlabeled data
distribution. The approach allows to do zero-shot recognition but also smoothly integrate labels for
novel classes (Section 3). Second, we improve the local neighborhood structure in the raw feature
space by mapping the data into a low dimensional semantic output space using the trained attribute
and category models. Third, we validate our approach on three challenging datasets for two different applications, namely on Animals with Attributes and ImageNet for image classification and on
MPII Composites for activity recognition (Section 4). We also provide a discussion of related work
(Section 2) and conclusions for future work (Section 5). The implementation for our Propagated
Semantic Transfer and code to easily reproduce the results in this paper is available on our website.
2 Related work
Knowledge transfer or transfer learning has the goal to transfer information of learned models to
changing or unknown data distributions while reducing the need and effort to collect new training
labels. It refers to a variety of tasks, including domain adaptation [25] or sharing of knowledge and
representations [30, 3] (a recent categorization can be found in [20]).
In this work we focus on transferring knowledge from known categories with sufficient training
instances to novel categories with limited training data. In computer vision or machine learning
literature this setting is normally referred to as zero-shot learning [11, 19, 24, 9, 16] if there are no
instances for the test classes available and one- or few-shot learning [16, 9, 8] if there are one or few
instances available for the novel classes.
To recognize novel categories zero-shot recognition uses additional information, typically in the
form of an intermediate attribute representation [11, 9], direct similarity [24] between categories, or
hierarchical structures of categories [35]. The information can either be manually specified [11, 9]
or mined automatically from knowledge bases [24, 22]. Our approach builds on these works by
using a semantic knowledge transfer approach as the first step. If one or a few training examples are
available, these are typically used to select or adapt known models [1, 9, 26]. In contrast to related
work, our approach uses the above mentioned semantic knowledge transfer also when few training
examples are available to reduce the dependency on the quality of the samples. Also, we still use
the labeled examples to propagate information.
Additionally, we exploit the neighborhood structure of the unlabeled instances to improve zero- and few-shot recognition. This is in contrast to previous works, with the exception of
the zero-shot approach of [9] that learns a discriminative, latent attribute representation and applies
self-training on the unseen categories. While conceptually similar, the approach is different from ours,
as we explicitly use the local neighborhood structure of the unlabeled instances. A popular choice to
integrate local neighborhood structure of the data are graph-based methods. These have been used to
discover a grouping by spectral clustering [18, 14], and to enable semi-supervised learning [34, 33].
Our setting is similar to the semi-supervised setting. To transfer labels from labeled to unlabeled
data, label propagation is widely used [34, 33] and has been shown to work successfully in several applications [13, 7]. In this work, we extend transfer learning by considering the neighborhood structure of the novel classes. For this we adapt the well-known label propagation approach of [33]. We build a k-nearest neighbor graph to capture the underlying manifold structure, as it has been shown to provide the most robust structure [15]. Nevertheless, the quality of the graph structure is key to the success of graph-based methods and strongly depends on the feature representation [5]. We thus improve
the graph structure by replacing the noisy raw input space with the more compact semantic output
space, which has been shown to improve recognition [26, 22].
To improve image classification with reduced training data, [4, 27] use attributes as an intermediate
layer and incorporate unlabeled data, however, both works are in a classical semi-supervised learning setting similar to [5], while our setting is transfer learning. More specifically [27] propose to
bootstrap classifiers by adding unlabeled data. The bootstrapping is constrained by attributes shared
across classes. In contrast, we use attributes for transfer and exploit the similarity between instances
of the novel classes. [4] automatically discover a discriminative attribute representation, while incorporating unlabeled data. This notion of attributes is different to ours as we want to use semantic
attributes to enable transfer from other classes. Other directions to improve the quality of the intermediate representation include integrating metric learning [31, 16] or online methods [10] which we
defer to future work.
3 Propagated Semantic Transfer (PST)
Our main objective is to robustly recognize novel categories by transferring knowledge from known
classes and exploiting the similarity of the test instances. More specifically our novel approach called
Propagated Semantic Transfer consists of the following four components: we employ semantic
knowledge transfer from known classes to novel classes (Sec. 3.1); we combine the transferred
predictions with labels for the novel classes (Sec. 3.2); a similarity metric is defined to achieve a
robust graph structure (Sec. 3.3); we propagate this information within the novel classes (Sec. 3.4).
3.1 Semantic knowledge transfer
We first transfer knowledge using a semantic representation. This allows us to include external knowledge sources. We model the relation between a set of K known classes y_1, ..., y_K and the set of N novel classes z_1, ..., z_N. Both sets are disjoint, i.e. {y_1, ..., y_K} ∩ {z_1, ..., z_N} = ∅. We use
two strategies to achieve this transfer: i) an attribute representation that employs an intermediate
representation of a1 , . . . , aM attributes or ii) direct similarities calculated among the known object
classes. Both work without any training examples for zn , i.e. also for zero-shot recognition [11, 24].
i) Attribute representation. We use the Direct-Attribute-Prediction (DAP) model [11], using our formulation [24]. An intermediate level of M attribute classifiers p(a_m|x) is trained on the known classes y_k to estimate the presence of attribute a_m in the instance x. The subsequent knowledge transfer requires an external knowledge source that provides class-attribute associations a_m^{z_n} ∈ {0, 1} indicating if attribute a_m is associated with class z_n. Options for such association information are discussed in Section 4.2. Given this information the probability of the novel classes z_n to be present in the instance x can then be estimated [24]:

p(z_n|x) ∝ ∏_{m=1}^{M} (2 p(a_m|x))^{a_m^{z_n}} .    (1)
ii) Direct similarity. As an alternative to attributes, we can use the U most similar training classes y_1, ..., y_U as a predictor for novel class z_n given an instance x [24]:

p(z_n|x) ∝ ∏_{u=1}^{U} (2 p(y_u|x))^{y_u^{z_n}} ,    (2)

where y_u^{z_n} provides continuous normalized weights for the strength of the similarity between the
novel class z_n and the known class y_u [24]. To comply with [23, 22] we slightly diverge from these models for the ImageNet and MPII Composites datasets by using a sum formulation instead of the probabilistic expression, i.e. for attributes

p(z_n|x) ∝ (∑_{m=1}^{M} a_m^{z_n} p(a_m|x)) / (∑_{m=1}^{M} a_m^{z_n}),

and for direct similarity

p(z_n|x) ∝ (∑_{u=1}^{U} p(y_u|x)) / U.

Note that in this case we do not obtain probability estimates; however, for
label propagation the resulting scores are sufficient.
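As a concrete illustration, here is a minimal NumPy sketch of both transfer formulations; the array names and toy values are ours, not taken from the released implementation:

```python
import numpy as np

def zero_shot_product(p_a, assoc):
    """Product formulation of Eq. (1): p(z|x) ~ prod_m (2*p(a_m|x))^(a_m^z);
    only attributes associated with class z (assoc == 1) contribute."""
    return np.prod((2.0 * p_a) ** assoc)

def zero_shot_sum(p_a, assoc):
    """Sum formulation: mean attribute classifier score over associated attributes."""
    return assoc.dot(p_a) / assoc.sum()

p_a = np.array([0.9, 0.2, 0.7, 0.4])   # p(a_m|x) from the attribute classifiers
assoc = np.array([1, 0, 1, 0])         # a_m^{z_n}: class-attribute associations
print(zero_shot_product(p_a, assoc), zero_shot_sum(p_a, assoc))
```

Either score can be ranked per class; as noted above, the sum variant does not yield probabilities, which suffices for the propagation step.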
3.2 Combining transferred and ground truth labels
In the following we treat the multi-class problem as N binary problems, where N is the number of binary classes. For class z_n the semantic knowledge transfer provides p(z_n|x) ∈ [0, 1] for all instances x. We combine the best predictions per class, scaled to [-1, 1], with labels l̂(z_n|x) ∈ {-1, 1} provided for some instances x in the following way:

l(z_n|x) = { γ l̂(z_n|x),              if there is a label for x;
           { (1 - γ)(2 p(z_n|x) - 1),  if p(z_n|x) is among the top-ρ fraction of predictions for z_n;    (3)
           { 0,                        otherwise.

γ provides a weighting between the true labels and the predicted labels. In the zero-shot case we only use predictions, i.e. γ = 0. The parameters γ, ρ ∈ [0, 1] are chosen, similar to the remaining
parameters, using cross-validation on the training set.
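A sketch of this initialization for a single class, assuming the γ/ρ convention above; the helper name and toy labels are illustrative:

```python
import numpy as np

def init_labels(p_z, true_labels, gamma=0.98, rho=0.15):
    """Eq. (3) for one class z_n: combine transferred predictions with labels.
    p_z: p(z_n|x) for all instances; true_labels: {index: label in {-1, +1}}."""
    l = np.zeros_like(p_z)
    k = max(1, int(np.ceil(rho * len(p_z))))
    top = np.argsort(p_z)[-k:]                     # top-rho fraction of predictions
    l[top] = (1.0 - gamma) * (2.0 * p_z[top] - 1.0)
    for i, lab in true_labels.items():             # ground-truth labels override
        l[i] = gamma * lab
    return l

l0 = init_labels(np.random.rand(100), {3: +1, 17: -1})   # zero-shot: pass {}
```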
3.3 Similarity metric based on discriminative models for graph construction
We enhance transfer learning by exploiting also the neighborhood structure within novel classes,
i.e. we assume a transductive setting. Graph-based semi-supervised learning incorporates this information by employing a graph structure over all instances. In this section we describe how to improve
the graph structure as it has a strong influence on the final results [5]. The k-NN graph is usually
built on the raw feature descriptors of the data. Distances are computed for each pair (xi , xj ) by
d(xi , xj ) =
D
X
|xi,d ? xj,d |,
(4)
d=1
where D is the dimensionality of the raw feature space. We note that the visual representation used
for label propagation can be independent of the visual representation used for transfer. While the
visual representation for transfer is required to provide good generalization abilities in conjunction with the employed supervised learning strategy, the visual representation for label propagation
should induce a good neighborhood structure. Therefore we propose to use the more compact output
space trained on the known classes which we found to provide a much better structure, see Figure
5b. We thus compute the distances either on the M-dimensional vector of the attribute classifiers p(a_m|x) with M ≪ D, i.e.,

d(x_i, x_j) = ∑_{m=1}^{M} |p(a_m|x_i) - p(a_m|x_j)| ,    (5)
or on the K-dimensional vector of object classifiers p(y_k|x) with K ≪ D, i.e.

d(x_i, x_j) = ∑_{k=1}^{K} |p(y_k|x_i) - p(y_k|x_j)| .    (6)
These distances are transformed into similarities with an RBF kernel: s(x_i, x_j) = exp(-d(x_i, x_j) / (2σ²)). Finally, we construct a k-NN graph that is known for its good performance [15, 5], i.e.,

W_ij = { s(x_i, x_j),  if s(x_i, x_j) is among the k largest similarities of x_i;    (7)
       { 0,            otherwise.
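The graph construction of Eqs. (5) and (7) in a dense NumPy sketch (quadratic in the number of instances, so only meant to show the logic); symmetrizing W is our choice for the propagation step, not something spelled out in the text:

```python
import numpy as np

def knn_graph(feats, k=50, sigma=1.0):
    """feats: (n, M) matrix of classifier scores, e.g. p(a_m|x) per instance."""
    d = np.abs(feats[:, None, :] - feats[None, :, :]).sum(axis=2)  # L1, Eq. (5)
    s = np.exp(-d / (2.0 * sigma ** 2))                            # RBF kernel
    np.fill_diagonal(s, 0.0)                                       # no self-loops
    W = np.zeros_like(s)
    for i in range(len(feats)):
        nbrs = np.argsort(s[i])[-k:]             # k largest similarities, Eq. (7)
        W[i, nbrs] = s[i, nbrs]
    return np.maximum(W, W.T)                    # symmetrize
```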
Figure 2: AwA (left), ImageNet (middle), and MPII Composite Activities (right)
3.4 Label propagation with certain and uncertain labels
In this work, we build upon the label propagation by [33]. The k-NN graph with RBF kernel gives the weighted graph W (see Section 3.3). Based on this graph we compute a normalized graph Laplacian, i.e., S = D^{-1/2} W D^{-1/2} with the diagonal matrix D summing up the weights in each row of W. Traditional semi-supervised label propagation uses sparse ground truth labels. In contrast we have dense labels l(z_n|x), which are a combination of uncertain predictions and certain labels (see Eq. 3) for all instances {x_1, ..., x_i} of the novel classes z_n. Therefore, we modify the initialization by setting

L_n^{(0)} = [l(z_n|x_1), ..., l(z_n|x_i)]    (8)

for the N novel classes. For each class, labels are propagated through this graph structure, converging to the following closed-form solution

L_n^{*} = (I - αS)^{-1} L_n^{(0)}    for 1 ≤ n ≤ N,    (9)

with the regularization parameter α ∈ (0, 1]. The resulting framework makes use of the manifold structure underlying the novel classes to regulate the predictions from transfer learning. In general, the algorithm converges after a few iterations.
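The closed form of Eq. (9) in a few lines; for large n one would instead iterate L^{(t+1)} = αS L^{(t)} + (1-α) L^{(0)}, which converges to the same solution up to scaling and matches the remark that a few iterations suffice. A sketch under these assumptions:

```python
import numpy as np

def propagate(W, L0, alpha=0.8):
    """Eq. (9): L* = (I - alpha*S)^{-1} L0 with S = D^{-1/2} W D^{-1/2}.
    W: (n, n) affinity matrix; L0: (n, N) initial labels, one column per class."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))   # guard isolated nodes
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    return np.linalg.solve(np.eye(len(W)) - alpha * S, L0)

# preds = propagate(W, L0).argmax(axis=1)   # predicted class per instance
```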
4 Evaluation

4.1 Datasets
We briefly outline the most important properties of the examined datasets in the following paragraphs and show example images/frames in Figure 2.
AwA The Animals with Attributes dataset (AwA) [11] is one of the first and most widely used
datasets for semantic knowledge transfer and zero-shot recognition. It consists of 50 mammal
classes, 40 training (24,395 images) and 10 disjoint test classes (6,180 images). We use the six provided pre-computed image descriptors, which are concatenated.
ImageNet The ImageNet 2010 challenge [2] requires large scale and fine-grained recognition. It
consists of 1000 image categories which are split into 800 training and 200 test categories according
to [23]. We use the LLC and Fisher-Vector encoded SIFT descriptors provided by [23].
MPII Composite Activities The MPII Composite Cooking Activities dataset [22] distinguishes 41
basic cooking activities, such as prepare scrambled egg or prepare carrots with video recordings
of varying length from 1 to 41 minutes. It consists of a total of 256 videos, of which 44 are used for training the attribute representation and 170 as test data. We use the provided dense-trajectory
representation and train/test split.
4.2 External knowledge sources and similarity measures
Our approach incorporates external knowledge to enable semantic knowledge transfer from known
classes y to unseen classes z. We use the class-attribute associations a_m^{z_n} for attribute-based transfer (Equation 1) or inter-class similarity y_u^{z_n} for direct-similarity-based transfer (Equation 2) provided
with the datasets. In the following we shortly outline the knowledge sources and measures.
Manual (AwA) AwA is accompanied with a set of 85 attributes and associations to all 40 training
and all 10 test classes. The associations are provided by human judgments [11].
Hierarchy (ImageNet) For ImageNet the manually constructed WordNet/ImagNet hierarchy is used
to find the most similar of the 800 known classes (leaf nodes in the hierarchy). Furthermore, the 370
inner nodes can group several classes into attributes [23].
Figure 3: Results on AwA Dataset, see Sec. 4.3.1.
(a) Zero-Shot. Predictions with attributes and manually defined associations, in %:

Approach                         | AUC  | Acc
DAP [11]                         | 81.4 | 41.4
IAP [11]                         | 80.0 | 42.2
Zero-Shot Learning [9]           | n/a  | 41.3
PST (ours), on image descriptors | 81.2 | 40.5
PST (ours), on attributes        | 83.7 | 42.7

(b) Few-Shot. [Plot: mean Acc in % over 0-50 training samples per class, comparing PST (ours) with manually defined associations, LP + attribute classifiers with manual associations, PST (ours) with Yahoo Image attributes, LP + attribute classifiers with Yahoo Image attributes, and LP [5].]
Linguistic knowledge bases (AwA, ImageNet) An alternative to manual associations are automatically mined ones. We use the provided similarity matrices, which are extracted using different
linguistic similarity measures. They are either based on linguistic corpora, namely Wikipedia and
WordNet, or on hit-count statistics of web search. One can distinguish basic web search (Yahoo
Web), web search refined to part associations (Yahoo Holonyms), image search (Yahoo Image and
Flickr Image), or use the information of the summary snippets returned by web search (Yahoo Snippets). As ImageNet does not provide attributes, we mined 811 part-attributes from the associated
WordNet hierarchy [23].
Script data (MPII Composites) To associate composite cooking activities such as preparing carrots with attributes of fine-grained activities (e.g. wash, peel), ingredients (e.g. carrots), and tools
(e.g. knife, peeler), textual descriptions (Script data) of these activities were collected with AMT. The
provided associations are computed based on either the frequency statistics or, more discriminate,
by term frequency times inverse document frequency (tf*idf ). Words in the text can be matched to
labels either literally or by using WordNet expansion [22].
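A toy sketch of mining tf*idf class-attribute associations from such textual descriptions; the word lists and function are made up for illustration, and the literal vs. WordNet matching variants are omitted:

```python
import math
from collections import Counter

def tfidf_associations(scripts, attributes):
    """scripts: {class: list of words}; attributes: attribute words to match."""
    attr_set, n = set(attributes), len(scripts)
    # Document frequency of each attribute word across classes.
    df = Counter(a for words in scripts.values() for a in set(words) & attr_set)
    assoc = {}
    for cls, words in scripts.items():
        tf = Counter(w for w in words if w in attr_set)
        assoc[cls] = {a: tf[a] * math.log(n / df[a]) if df[a] else 0.0
                      for a in attributes}
    return assoc

scripts = {"preparing carrots": "wash peel cut carrot knife peeler".split(),
           "scrambled egg": "crack stir fry egg pan".split()}
print(tfidf_associations(scripts, ["peel", "egg", "knife"]))
```

An attribute that appears in every class gets zero idf and thus no association, which is the discriminative effect of tf*idf over plain frequency.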
4.3 Results
To enable a direct comparison, we closely follow the experimental setups of the respective datasets
[11, 23, 22]. On all datasets we train attribute or object classifiers (for direct similarity) with one-vs-all SVMs using Mean Stochastic Gradient Descent [23] and, for AwA and MPII Composites, with a χ² kernel approximation as in [22]. To get more distinctive representations for label propagation we
train sigmoid functions [12] to estimate probabilities (on the training set for AwA/MPII Composites
and on the validation set for ImageNet).
The hyper-parameters of our new Propagated Semantic Transfer algorithm are estimated using 5-fold cross-validation on the respective training set, splitting it into 80% known and 20% novel classes: We determine the parameters for our approach on the AwA training set and then set them for all datasets to α = 0.8, γ = 0.98, the number of neighbors k = 50, the number of iterations for propagation to 10, and use the L1 distance. Due to the different recognition precision of the datasets we determine ρ = 0.15/0.04 separately for AwA/ImageNet. For MPII Composites we only do zero-shot recognition and use all samples due to the limited number of samples of ≈ 7 per class.
For few-shot recognition we report the mean over 10 runs where we pick examples randomly. The
labeled examples are included in the evaluation to make it comparable to the zero-shot case.
We validate our claim that the classifier output space induces a better neighborhood structure than
the raw features by examining the k-Nearest-Neighbour (kNN) quality for both. In Figure 5b we
compare the kNN quality on two datasets (see Sec. 4.1) for both feature representations. We observe
that the attribute (Eq. 5) and object (Eq. 6) classifier-based representations (green and magenta
dashed line) achieve a significantly higher accuracy than the respective raw feature-based representation (Eq. 4, Fig. 5b solid lines). We note that a good kNN-quality is required but not sufficient for
good propagation, as it also depends on the distribution and quality of initial predictions. In the following, we compare the performance of the raw features with the attribute classifier representation.
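The kNN quality in Figure 5b can be measured with a simple leave-one-out majority vote; a sketch, with illustrative names:

```python
import numpy as np
from collections import Counter

def knn_accuracy(feats, labels, k=50):
    """Leave-one-out majority-vote kNN accuracy on ground-truth labels."""
    d = np.abs(feats[:, None, :] - feats[None, :, :]).sum(axis=2)  # L1 distances
    np.fill_diagonal(d, np.inf)                    # exclude the query itself
    correct = 0
    for i in range(len(labels)):
        nbrs = np.argsort(d[i])[:k]
        vote = Counter(labels[j] for j in nbrs).most_common(1)[0][0]
        correct += int(vote == labels[i])
    return correct / len(labels)

# e.g. compare knn_accuracy(attribute_scores, y) vs. knn_accuracy(raw_feats, y)
```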
Figure 4: Results on ImageNet, see Sec. 4.3.2.
(a) Zero-Shot. [Plot: top-5 accuracy (in %) of [23] vs. PST (ours) for transfer via the hierarchy (leaf nodes, inner nodes), attributes (Wikipedia, Yahoo Holonyms, Yahoo Image, Yahoo Snippets), and direct similarity (Wikipedia, Yahoo Web, Yahoo Image, Yahoo Snippets).]
(b) Few-Shot. [Plot: top-5 accuracy (in %) over 0-20 training samples per class for PST (ours) with the hierarchy (inner nodes), PST (ours) with Yahoo Image direct similarity, and LP + object classifiers.]
4.3.1 AwA - image classification
We start by comparing the performance of related work to our approach on AwA (see Sec. 4.1) in
Figure 3. We start by examining the zero-shot results in Figure 3a, where no training examples
are available for the novel, or in this case unseen, classes. The best results on this dataset known to us are reported by [11]. On this 10-class zero-shot task they achieve 81.4% area under the ROC curve (AUC) and 41.4% multi-class accuracy (Acc) with DAP, averaged over the 10 test
classes. Additionally we report results from Zero-Shot Learning [9] which achieves 41.3% Acc. Our
Propagated Semantic Transfer, using the raw image descriptors to build a neighborhood structure,
achieves 81.2% AUC and 40.5% Acc. However, when propagating on the 85-dimensional attribute
space, we improve over [11] and [9] to 83.7% AUC and 42.7% Acc. To understand the difference
in performance between the attribute and the image descriptor space we examine the neighborhood
quality used for propagating labels shown in Figure 5b. The k-NN accuracy, measured on the ground
truth labels, is significantly higher for the attribute space (green dashed curve) compared to the raw
features (solid green). The information is more likely propagated to neighbors of the correct class
for the attribute-space leading to a better final prediction. Another advantage is the significantly
reduced computation and storage costs for building the k-NN graph which scales linearly with the
dimensionality. We believe that such an intermediate space, in this case represented by attributes,
might provide a better neighborhood structure and could be used in other label-propagation tasks.
Next we compare our approach in the few-shot setting, i.e. we add labeled examples per class. In
Figure 3b we compare our approach (PST) to two label propagation (LP) baselines. We first note
that PST (red curves) seamlessly moves from zero-shot to few-shot, while traditional LP (blue and
black curves) needs at least one training example. We first examine the three solid lines. The black
curve is our best LP variant from [5] evaluated on the 10 test classes of AwA rather than all 50
as in [5]. We also compute LP in combination with the similarity metric based on the attribute
classifier scores (blue curves). This transfer of knowledge residing in the classifier trained on the
known classes already gives a significant improvement in performance. Our approach (red curve)
additionally transfers labels from the known classes and improves further. Especially for few labels
our approach benefits from the transfer, e.g. for 5 labeled samples per class PST achieves 43.9%
accuracy, compared to 38.1% for LP with attribute classifiers and 32.2% for [5]. For less samples
LP drops significantly while our approach has nearly stable performance. For large amounts of
training data, PST approaches - as expected - LP (red vs. blue in Figure 3b).
The dashed lines in Figure 3b provide results for automatically mined associations a_m^{z_n} between
attributes and classes. It is interesting to note that these automatically mined associations achieve
performance very close to the manual defined associations (dashed vs. solid). In this plot we use
Yahoo Image as base for the semantic relatedness, but we also provide the improvements of PST for
the other linguistic language sources in supplemental material.
4.3.2 ImageNet - large scale image classification
In this section we evaluate our Propagated Semantic Transfer approach on a large image classification task with 200 unseen image categories using the setup proposed by [23]. We report the top-5 accuracy² [2], which requires one of the best five predictions for an image to be correct.

²top-5 accuracy = 1 - top-5 error as defined in [2]
Figure 5: Results.
(a) MPII Composite Activities, see Sec. 4.3.3. [Plot: mean AP (in %) of [22] vs. PST (ours) for Script data with freq-literal, freq-WordNet, tf*idf-literal, and tf*idf-WordNet associations.]
(b) Accuracy of the majority vote from kNN (kNN-Classifier) on the test sets' ground truth. [Plot: accuracy in % over k = 0-100 nearest neighbors for AwA attribute classifiers vs. raw features and ImageNet object classifiers vs. raw features.]
Results are reported in Figure 4. For zero-shot recognition our PST (red bars) improves performance
over [23] (black bars) as shown in Figure 4a. The largest improvement in top-5 accuracy is achieved
for Yahoo Image with Attributes which increases by 6.7% to 25.3%. The absolute performance of
34.0% top-5 accuracy is achieved by using the inner nodes of the WordNet hierarchy for transfer,
closely followed by Yahoo Web with direct similarity, achieving 33.1% top-5 accuracy. Similar to
the AwA dataset we improve PST over the LP-baseline for few-shot recognition (Figure 4b).
4.3.3 MPII composite - activity recognition
In the last two subsections, we showed the benefit of Propagated Semantic Transfer on two image
classification challenges. We now evaluate our approach on the video-activity recognition dataset
MPII Composite Cooking Activities [22]. We compute mean AP using the provided features and
follow the setup of [22]. In Figure 5a we compare our performance (red bars) to the results of
zero-shot recognition without propagation [22] (black bars) for four variants of Script data based
transfer. Our approach achieves significant performance improvements in all four cases, increasing
mean AP by 11.1%, 10.7%, 12.0%, and 7.7% to 34.0%, 32.8%, 34.4%, and 29.2%, respectively.
This is especially impressive as it reaches the level of supervised training: for the same set of
attributes (and very few, ≈ 7 training samples per class) [22] achieve 32.2% for SVM, 34.6%
for NN-classification, and up to 36.2% for a combination of NN with script data.
We find these results encouraging as it is much more difficult to collect and label training examples for this domain than for image classification and the complexity and compositional nature of
activities frequently requires recognizing unseen categories [9].
5 Conclusion
In this work we address a frequently occurring setting where there is large amount of training data
for some classes, but other, e.g. novel classes, have no or only few labeled training samples. We
propose a novel approach named Propagated Semantic Transfer, which integrates semantic knowledge transfer with the visual similarities of unlabeled instances within the novel classes. We adapt a
semi-supervised label-propagation approach by building the neighborhood graph on an expressive, low-dimensional semantic output space and by initializing it with predictions from knowledge transfer.
We evaluated this approach on three diverse datasets for image and video-activity recognition,
consistently improving performance over the state-of-the-art for zero-shot and few-shot prediction.
Most notably we achieve 83.7% AUC / 42.7% multi-class accuracy on the Animals with Attributes
dataset for zero-shot recognition, scale to 200 unseen classes on ImageNet, and achieve up to 34.4%
(+12.0%) mean AP on MPII Composite Activities which is on the level of supervised training on this
dataset. We show that our approach consistently improves performance independent of factors such
as (1) the specific datasets and descriptors, (2) different transfer approaches: direct vs. attributes,
(3) types of transfer association: manually defined, linguistic knowledge bases, or script data, (4)
domain: image and video activity recognition, or (5) model: probabilistic vs. sum formulation.
Acknowledgements. This work was partially funded by the DFG project SCHI989/2-2.
References
[1] E. Bart & S. Ullman. Single-example learning of novel classes using representation by similarity. In
BMVC, 2005.
[2] A. Berg, J. Deng, & L. Fei-Fei. ILSVRC 2010. www.image-net.org/challenges/LSVRC/2010/, 2010.
[3] U. Blanke & B. Schiele. Remember and transfer what you have learned - recognizing composite activities
based on activity spotting. In ISWC, 2010.
[4] J. Choi, M. Rastegari, A. Farhadi, & L. S. Davis. Adding Unlabeled Samples to Categories by Learned
Attributes. In CVPR, 2013.
[5] S. Ebert, D. Larlus, & B. Schiele. Extracting Structures in Image Collections for Object Recognition. In
ECCV, 2010.
[6] R. Farrell, O. Oza, V. Morariu, T. Darrell, & L. S. Davis. Birdlets: Subordinate categorization using
volumetric primitives and pose-normalized appearance. In ICCV, 2011.
[7] R. Fergus, Y. Weiss, & A. Torralba. Semi-supervised learning in gigantic image collections. NIPS 2009.
[8] M. Fink. Object classification from a single example utilizing class relevance pseudo-metrics. In NIPS,
2004.
[9] Y. Fu, T. M. Hospedales, T. Xiang, & S. Gong. Learning multi-modal latent attributes. TPAMI, PP(99),
2013.
[10] P. Kankuekul, A. Kawewong, S. Tangruamsub, & O. Hasegawa. Online Incremental Attribute-based
Zero-shot Learning. In CVPR, 2012.
[11] C. Lampert, H. Nickisch, & S. Harmeling. Attribute-based classification for zero-shot learning of object
categories. TPAMI, PP(99), 2013.
[12] H.-T. Lin, C.-J. Lin, & R. C. Weng. A note on platt?s probabilistic outputs for support vector machines.
Machine Learning, 2007.
[13] J. Liu, B. Kuipers, & S. Savarese. Recognizing human actions by attributes. In CVPR, 2011.
[14] U. Luxburg. A tutorial on spectral clustering. Stat Comput, 17(4):395-416, 2007.
[15] M. Maier, U. V. Luxburg, & M. Hein. Influence of graph construction on graph-based clustering measures.
In NIPS, 2008.
[16] T. Mensink, J. Verbeek, F. Perronnin, & G. Csurka. Metric Learning for Large Scale Image Classification:
Generalizing to New Classes at Near-Zero Cost. In ECCV, 2012.
[17] Y. Moses, S. Ullman, & S. Edelman. Generalization to novel images in upright and inverted faces.
Perception, 25:443-461, 1996.
[18] A. Y. Ng, M. I. Jordan, & Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2002.
[19] M. Palatucci, D. Pomerleau, G. Hinton, & T. Mitchell. Zero-shot learning with semantic output codes. In
NIPS, 2009.
[20] S. J. Pan & Q. Yang. A survey on transfer learning. TKDE, 22:1345-1359, 2010.
[21] R. Raina, A. Battle, H. Lee, B. Packer, & A. Ng. Self-taught learning: Transfer learning from unlabeled
data. In ICML, 2007.
[22] M. Rohrbach, M. Regneri, M. Andriluka, S. Amin, M. Pinkal, & B. Schiele. Script data for attribute-based
recognition of composite activities. In ECCV, 2012.
[23] M. Rohrbach, M. Stark, & B. Schiele. Evaluating Knowledge Transfer and Zero-Shot Learning in a
Large-Scale Setting. In CVPR, 2011.
[24] M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, & B. Schiele. What Helps Where ? And Why? Semantic
Relatedness for Knowledge Transfer. In CVPR, 2010.
[25] K. Saenko, B. Kulis, M. Fritz, & T. Darrell. Adapting visual category models to new domains. In ECCV,
2010.
[26] V. Sharmanska, N. Quadrianto, & C. H. Lampert. Augmented Attribute Representations. In ECCV, 2012.
[27] A. Shrivastava, S. Singh, & A. Gupta. Constrained Semi-Supervised Learning Using Attributes and
Comparative Attributes. In ECCV, 2012.
[28] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, & W. T. Freeman. Discovering Object Categories in
Image Collections. In ICCV, 2005.
[29] S. Thrun. Is learning the n-th thing any easier than learning the first. In NIPS, 1996.
[30] A. Torralba, K. Murphy, & W. Freeman. Sharing visual features for multiclass and multiview object
detection. In CVPR, 2004.
[31] D. Tran & A. Sorokin. Human activity recognition with metric learning. In ECCV, 2008.
[32] M. Weber, M. Welling, & P. Perona. Towards automatic discovery of object categories. In CVPR, 2000.
[33] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, & B. Schölkopf. Learning with Local and Global
Consistency. In NIPS, 2004.
[34] X. Zhu, Z. Ghahramani, & J. Lafferty. Semi-supervised learning using gaussian fields and harmonic
functions. In ICML, 2003.
[35] A. Zweig & D. Weinshall. Exploiting object hierarchy: Combining models from different category levels.
In ICCV, 2007.
4,652 | 521 | Neural Network - Gaussian Mixture Hybrid for
Speech Recognition or Density Estimation
Yoshua Bengio
Dept. Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Giovanni Flammia
Speech Technology Center,
Aalborg University, Denmark
Renato De Mori
School of Computer Science
McGill University
Canada
Ralf Kompe
Erlangen University, Computer Science
Erlangen, Germany
Abstract
The subject of this paper is the integration of multi-layered Artificial Neural Networks (ANN) with probability density functions such as Gaussian
mixtures found in continuous density Hidden Markov Models (HMM). In
the first part of this paper we present an ANN/HMM hybrid in which
all the parameters of the system are simultaneously optimized
respect to a single criterion. In the second part of this paper, we study
the relationship between the density of the inputs of the network and the
density of the outputs of the networks. A few experiments are presented
to explore how to perform density estimation with ANNs.
1 INTRODUCTION
This paper studies the integration of Artificial Neural Networks (ANN) with probability density functions (pdf) such as the Gaussian mixtures often used in continuous density Hidden Markov Models. The ANNs considered here are multi-layered
or recurrent networks with hyperbolic tangent hidden units. Raw or preprocessed
data is fed to the ANN, and the outputs of the ANN are used as observations for
a parametric probability density function such as a Gaussian mixture. One may
view either the ANN as an adaptive preprocessor for the Gaussian mixture, or the
Gaussian mixture as a statistical postprocessor for the ANN. A useful role for the
ANN would be to transform the input data so that it can be more efficiently modeled by a Gaussian mixture. An interesting situation is one in which most of the
input data points can be described in a lower dimensional space. In this case, it
is desired that the ANN learns the possibly non-linear transformation to a more
compact representation.
In the first part of this paper, we briefly describe a hybrid of ANNs and Hidden Markov Models (HMM) for continuous speech recognition. More details on
this system can be found in (Bengio 91). In this hybrid, all the free parameters
are simultaneously optimized with respect to a single criterion. In recent years,
many related combinations have been studied (e.g., Levin 90, Bridle 90, Bourlard
& Wellekens 90). These approaches are often motivated by observed advantages and
disadvantages of ANNs and HMMs in speech recognition (Bourlard & Wellekens 89,
Bridle 90). Experiments of phoneme recognition on the TIMIT database with the
proposed ANN /HMM hybrid are reported. The task under study is the recognition (or spotting) of plosive sounds in continuous speech. Comparative results on
this task show that the hybrid performs better than the ANN alone, better than
the ANN followed by a dynamic programming based postprocessor using duration
constraints, and better than the HMM alone. Furthermore, a global optimization
of all the parameters of the system also yielded better performance than a separate
optimization.
In the second part of this paper, we attempt to extend some of the findings of the
first part, in order to use the same basic architecture (ANNs followed by Gaussian
mixtures) to perform density estimation. We establish the relationship between
the network input and output densities, and we then describe a few experiments
exploring how to perform density estimation with this system.
2 ANN/HMM HYBRID
In a HMM, the likelihood of the observations, given the model, depends in a simple continuous way on the observations. It is therefore possible to compute the
derivative of an optimization criterion C, with respect to the observations of the
HMM. For example, one may use the criterion of the Maximum Likelihood (ML)
of the observations, or of the Maximum Mutual Information (MMI) between the
observations and the correct sequence. If the observation at each instant is the vector output, Y_i, of an ANN, then one can use this gradient, ∂C/∂Y_i, to optimize the parameters of the ANN with back-propagation. See (Bridle 90, Bottou 91, Bengio 91, Bengio et al 92) on ways to compute this gradient.
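For a diagonal Gaussian mixture over the network outputs, this gradient has a simple closed form; a sketch (the symbols are ours) of ∂ log p(y)/∂y, which is the error signal back-propagation would receive at the output layer under the ML criterion:

```python
import numpy as np

def gmm_loglik_grad(y, pi, mu, var):
    """d log p(y) / dy for p(y) = sum_k pi_k N(y; mu_k, diag(var_k)).
    pi: (K,); mu, var: (K, D); y: (D,)."""
    # Component log-densities, up to a constant shared by all components.
    log_p = np.log(pi) - 0.5 * ((y - mu) ** 2 / var + np.log(var)).sum(axis=1)
    r = np.exp(log_p - log_p.max())
    r /= r.sum()                                   # responsibilities p(k|y)
    # d log p(y)/dy = sum_k p(k|y) * (mu_k - y) / var_k
    return (r[:, None] * (mu - y) / var).sum(axis=0)
```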
2.1 EXPERIMENTS
A preliminary experiment has been performed using a prototype system based on
the integration of ANNs with HMMs. The ANN was initially trained based on
a prior task decomposition. The task is the recognition of plosive phonemes pronounced by a large speaker population. The 1988 version of the TIMIT continuous
speech database has been used for this purpose. SI and SX sentences from regions
2, 3 and 6 were used, with 1080 training sentences and 224 test sentences, 135 training speakers and 28 test speakers. The following 8 classes have been considered:
/p/,/t/,/k/,/b/,/d/,/g/,/dx/,/all other phones/. Speaker-independent recognition
of plosive phonemes in continuous speech is a particularly difficult task because
these phonemes are made of short and non-stationary events that are often confused with other acoustically similar consonants or may be merged with other unit
segments by a recognition system.
[Figure 1: Architecture of the ANN/HMM Hybrid for the Experiments. Level 1: specialized networks initially trained on preprocessed speech for specific tasks (e.g. plosive discrimination) and for recognizing broad phonetic classes; Level 2: initially trained to compute principal components of the lower levels; Level 3: HMMs. Gradients flow back from the HMM criterion through all levels.]
The ANNs were trained with back-propagation and on-line weight update. As discussed in (Bengio 91), speech knowledge is used to design the input, output, and
architecture of the system and of each one of the networks. The experimental system is based on the scheme shown in Figure 1. The architecture is built on three
levels. The approach that we have taken is to select different input parameters and
different ANN architectures depending on the phonetic features to be recognized.
At levell, two ANNs are initially trained to perform respectively plosive recognition
(ANN3) and broad classification of phonemes (ANN2). ANN3 has delays and recurrent connections and is trained to recognize static articulatory features of plosives
in a way that depends on the place of articulation of the right context phoneme.
ANN2 has delays but no recurrent connections. The design of ANN2 and ANN3 is
described in more detail in (Bengio 91). At level 2, ANN1 acts as an integrator of parameters generated by the specialized ANNs of level 1. ANN1 is a linear network
that initially computes the 8 principal components of the concatenated output vectors of the lower level networks (ANN2 and ANN3). In the experiment described
below, the combined network (ANN1+ANN2+ANN3) has 23578 weights. Level 3
contains the HMMs, in which each distribution is modeled by a Gaussian mixture
with 5 densities. See (Bengio et al 92) for more details on the topology of the
HMM. The covariance matrix is assumed to be diagonal since the observations are
initially principal components and this assumption reduces significantly the number of parameters to be estimated. After one iteration of ML re-estimation of the
HMM parameters only, all the parameters of the hybrid system were simultaneously tuned to maximize the ML criterion for the next 2 iterations. Because of the
simplicity of the implementation of the hybrid trained with ML, this criterion was
used in these experiments. Although such an optimization may theoretically worsen
performance 1 , we observed an marked improvement in performance after the final
global tuning. This may be explained by the fact that a nearby local maximum of
the likelihood is attained from the initial starting point based on prior and separate training of the ANN and the HMM.

¹In section 3, we consider maximization of the likelihood of the inputs of the network, not the outputs of the network.
Table 1: Comparative Recognition Results. % recognized = 100 - % substitutions - % deletions. % accuracy = 100 - % substitutions - % deletions - % insertions.

                     | % rec | % ins | % del | % subs | % acc
ANNs alone           | 85    | 32    | 0.04  | 15     | 53
HMMs alone           | 76    | 6.3   | 2.2   | 22.3   | 69
ANNs+DP              | 88    | 16    | 0.01  | 11     | 72
ANNs+HMM             | 87    | 6.8   | 0.9   | 12     | 81
ANNs+HMM+global opt. | 90    | 3.8   | 1.4   | 9.0    | 86
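The two derived columns follow directly from the error counts; a trivial helper (ours) reproducing, e.g., the last row:

```python
def recognition_metrics(subs, dels, ins):
    """Metrics of Table 1, all in percent."""
    recognized = 100.0 - subs - dels   # % recognized
    accuracy = recognized - ins        # % accuracy additionally penalizes insertions
    return recognized, accuracy

print(recognition_metrics(subs=9.0, dels=1.4, ins=3.8))   # ~ (89.6, 85.8)
```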
In order to assess the value of the proposed approach as well as the improvements
brought by the HMM as a post-processor for time alignment, the performance
of the hybrid system was evaluated and compared with that of a simple postprocessor applied to the outputs of the ANNs and with that of a standard dynamic
programming postprocessor that models duration probabilities for each phoneme.
The simple post-processor assigns a symbol to each output frame of the ANNs by
comparing the target output vectors with actual output vectors. It then smoothes
the resulting string to remove very short segments and merges consecutive segments
that have the same symbol. The dynamic programming (DP) postprocessor finds
the sequence of phones that minimizes a cost that imposes durational constraints
for each phoneme. In the HMM alone system, the observations are the cepstrum
and the energy of the signal, as well as their derivatives. Comparative results for
the three systems are summarized in Table 1.
3 DENSITY ESTIMATION WITH AN ANN
In this section, we consider an extension of the system of the previous section.
The objective is to perform density estimation of the inputs of the ANN. Instead
of maximizing a criterion that depends on the density of the outputs of an ANN,
we maximize the likelihood of the inputs of the ANN. Hence the ANN is more than a
preprocessor for the gaussian mixtures, it is part of the probability density function
that is to be estimated. Instead of representing a pdf only with a set of spatially
local functions or kernels such as gaussians (Silverman 86), we explore how to use
a global transformation such as one performed by an ANN in order to represent a
pdf. Let us first define some notation: f_X(x) ≜ p(X = x), f_Y(y) ≜ p(Y = y), and f_{X|Y(x)}(x) ≜ p(X = x | Y = y(x)).

3.1 RELATION BETWEEN INPUT PDF AND OUTPUT PDF
Theorem. Suppose a random variable Y (e.g., the outputs of an ANN) is a
deterministic parametric function y(X) of a random variable X (here, the inputs
of the ANN), where y and x are vectors of dimension $n_y$ and $n_x$. Let
$J = \frac{\partial(y_1, \ldots, y_{n_y})}{\partial(x_1, \ldots, x_{n_x})}$
be the Jacobian of the transformation from X to Y, and let $J = U D V^t$ be a
singular value decomposition of J, with $s(x) = |\prod_{i=1}^{n_y} D_{ii}|$ the
product of the singular values. Suppose Y is modeled by a probability density
function $f_Y(y)$. Then, for $n_x \ge n_y$ and $s(x) > 0$,

$f_X(x) = f_Y(y(x)) \, f_{X|Y(x)}(x) \, s(x)$    (1)

Proof. In the case in which $n_x = n_y$, by the change of variable $y \to x$ in
the following integral,

$\int_Y f_Y(y) \, dy = 1$    (2)

we obtain the following result 2:

$f_X(x) = f_Y(y(x)) \, |\text{Determinant}(J)|$    (3)
Let us now consider the case $n_y < n_x$, i.e., the network has fewer outputs
than inputs. In order to do so we will introduce an intermediate transformation
to a space Z of dimension $n_x$ in which some dimensions directly correspond to
Y. Define Z such that $(z_1, z_2, \ldots, z_{n_x}) = V^t \cdot (x_1, x_2, \ldots, x_{n_x})$.
Decompose Z into Z' and Z'':

$Z' = (z_1, \ldots, z_{n_y}), \quad Z'' = (z_{n_y+1}, \ldots, z_{n_x})$    (4)
There is a one-to-one mapping $y_z(z')$ between Z' and Y, and its Jacobian is
$U D'$, where D' is the matrix composed of the first $n_y$ columns of D. Perform
a change of variables $y \to z'$ in the integral of equation 2:

$\int_{Z'} f_Y(y_z(z')) \, s \, dz' = 1$    (5)
In order to make a change of variable to the variable x, we have to specify the
conditional pdf $f_{X|Y(x)}(x)$ and the corresponding pdf
$p(z'' \mid z') = p(z'', z' \mid z') \stackrel{3}{=} p(z \mid y) \stackrel{4}{=} f_{X|Y(x)}(x)$.
Hence we can write

$\int_{Z''} p(z'' \mid z') \, dz'' = 1$    (6)
Multiplying the two integrals in equations 5 and 6, we obtain the following:

$1 = \int_{Z''} p(z'' \mid z') \, dz'' \int_{Z'} f_Y(y_z(z')) \, s \, dz' = \int_Z f_Y(y_z(z')) \, p(z'' \mid z') \, s \, dz$    (7)

and substituting $z \to V^t x$:

$\int_X f_Y(y(x)) \, f_{X|Y(x)}(x) \, s(x) \, dx = 1$    (8)

which yields the general result of equation 1.
Unfortunately, it is not clear how to efficiently evaluate $f_{X|Y(x)}(x)$ and
then compute its derivative with respect to the network weights. In the
experiments described in the next section we first study empirically the simpler
case in which $n_x = n_y$.

2 In that case, $|\text{Determinant}(J)| = s$ and $f_{X|Y(x)}(x) = 1$.
3 Knowing z' is equivalent to knowing y.
4 Because $z = V^t x$ and $\text{Determinant}(V) = 1$.
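Equation (3) is easy to check numerically for an invertible linear map. The
following Python sketch (illustrative only, with arbitrary toy values) compares
both sides when X is standard normal and y(x) = Ax, in which case f_Y is the
N(0, AA^t) density:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 2))          # invertible with probability 1
    x = rng.normal(size=2)               # an arbitrary test point
    y = A @ x

    f_x = multivariate_normal(np.zeros(2), np.eye(2)).pdf(x)
    f_y = multivariate_normal(np.zeros(2), A @ A.T).pdf(y)

    # right-hand side of equation (3); the Jacobian of y(x) = Ax is J = A
    rhs = f_y * abs(np.linalg.det(A))
    print(f_x, rhs)                      # the two values agree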
Figure 2: First Series of Experiments on Density Estimation with an ANN, for data
generated on a non-linear input curve. From left to right: Input samples, density
of the input, X, estimated with ANN+Gaussian, ANN that maps X to Y, density
of the output, Y, as estimated by a Gaussian.
3.2 ESTIMATION OF THE PARAMETERS
When estimating a pdf, one can approximate the functions $f_Y(y)$ and $y(x)$ by
parameterized functions. For example, we consider for the output pdf the class
of densities $f_Y(y; \theta)$ modeled by a Gaussian mixture of a certain number
of components, where $\theta$ is a set of means, variances and mixing
proportions. For the non-linear transformation $y(x; w)$ from X to Y, we choose
an ANN, defined by its architecture and the values of its weights w. In order to
choose values for the Gaussian and ANN parameters one can maximize the
a-posteriori (MAP) probability of these parameters given the data, or, if no
prior is known or assumed, maximize the likelihood (ML) of the input data given
the parameters. In the preliminary experiments described here, the logarithm of
the likelihood of the data was maximized, i.e., the optimal parameters are
defined as follows:

$(\hat\theta, \hat w) = \arg\max_{(\theta, w)} \sum_{x \in \Xi} \log(f_X(x))$    (9)

where $\Xi$ is the set of input samples.
In order to estimate a density with the above described system, one computes the
derivative of $p(X = x \mid \theta, w)$ with respect to w. If the output pdf is
a Gaussian mixture, we re-estimate its parameters $\theta$ with the EM algorithm
(only $f_Y(y)$ depends on $\theta$ in the expression for $f_X(x)$ in equations 3
or 1). Differentiating equation 3 with respect to w yields:

$\frac{\partial}{\partial w}(\log f_X(x)) = \frac{\partial}{\partial w}(\log f_Y(y(x; w); \theta)) + \sum_{i,j} \frac{\partial}{\partial J_{ij}}(\log(\text{Determinant}(J))) \, \frac{\partial J_{ij}}{\partial w}$    (10)
The derivative of the logarithm of the determinant can be computed simply as
follows (Bottou 91):

$\frac{\partial}{\partial J_{ij}}(\log(\text{Determinant}(J))) = (J^{-1})_{ji}$    (11)

since $\forall A$, $\text{Determinant}(A) = \sum_j A_{ij} \, \text{Cofactor}_{ij}(A)$,
and $(A^{-1})_{ij} = \frac{\text{Cofactor}_{ji}(A)}{\text{Determinant}(A)}$.
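Equation (11) can be verified with a finite-difference check; the short numpy
sketch below (toy values, not from the paper) perturbs one entry of J and
compares the numerical derivative with $(J^{-1})_{ji}$:

    import numpy as np

    rng = np.random.default_rng(1)
    J = rng.normal(size=(3, 3))
    i, j, eps = 0, 2, 1e-6

    def logdet(M):
        return np.linalg.slogdet(M)[1]   # log|Determinant(M)|, numerically stable

    Jp = J.copy()
    Jp[i, j] += eps
    numeric = (logdet(Jp) - logdet(J)) / eps
    analytic = np.linalg.inv(J)[j, i]    # (J^{-1})_{ji}
    print(numeric, analytic)             # agree to about 1e-6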
Figure 3: Second Series of Experiments on Density Estimation with an ANN. From
left to right: Input samples, density with non-linear net + Gaussian, output samples
after network transformation.
3.3 EXPERIMENTS
The first series of experiments verified that a transformation of the inputs with
an ANN could improve the likelihood of the inputs and that gradient ascent in
the ML criterion could find a good solution. In these experiments, we attempt
to model some two-dimensional data extracted from a speech database. The 1691
training data points are shown in the left of Figure 2. In the first experiment, a
diagonal Gaussian is used, with no ANN. In the second experiment a linear network
and a diagonal Gaussian are used. In the third experiment, a non-linear network
with 4 hidden units and a diagonal Gaussian are used. The average log likelihoods
obtained on a test set of 617 points were -3.00, -2.95 and -2.39 respectively for the
three experiments. The estimated input and output pdfs for the last experiment
are depicted in Figure 2, with white indicating high density and black low density.
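As an illustration of how such a model can be trained, the sketch below
reproduces the spirit of the second experiment (a linear network followed by a
diagonal Gaussian) on synthetic data: the Gaussian parameters are re-estimated
in closed form while W follows the gradient of equation 10, whose log-determinant
term comes from equation 11. The data, learning rate and iteration count are
placeholders, not the paper's settings:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.8], [0.0, 0.5]])  # toy data

    W = np.eye(2)                        # the linear "network" y = W x
    mu, var = np.zeros(2), np.ones(2)    # diagonal Gaussian on the outputs
    lr = 1e-2
    for _ in range(500):
        Y = X @ W.T
        mu, var = Y.mean(0), Y.var(0) + 1e-6      # closed-form re-estimation
        # gradient of the average log-likelihood with respect to W:
        # -((y - mu)/var) x^T from the Gaussian, plus W^{-T} from log|det W|
        R = (Y - mu) / var
        grad = -(R.T @ X) / len(X) + np.linalg.inv(W).T
        W += lr * grad

    avg_ll = np.mean(-0.5 * ((X @ W.T - mu) ** 2 / var
                             + np.log(2 * np.pi * var)).sum(1)) \
             + np.linalg.slogdet(W)[1]
    print(avg_ll)                        # average log-likelihood of the inputs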
The second series of experiments addresses the following question: if we use a
Gaussian mixture with diagonal covariance matrix and most of the data is on a
nonlinear hypersurface $\Phi$ of dimension less than $n_x$, can the ANN's
outputs separate the dimensions in which the data varies greatly (along $\Phi$)
from those in which it almost doesn't (orthogonal to $\Phi$)? Intuitively, it
appears that this will be the case, because the variance of outputs which don't
vary with the data will be close to zero, while the determinant of the Jacobian
is non-zero. The likelihood will correspondingly tend to infinity. The first
experiment in this series verified that this was the case for linear networks.
For data generated on a diagonal line in 2-dimensional space, the resulting
network separated the "variant" dimension from the "invariant" dimension, with
one of the output dimensions having near zero variance, and the transformed data
lying on a line parallel to the other output dimension.
Experiments with non-linear networks suggest that with such networks, a solution
that separates the variant dimensions from the invariant ones is not easily found
by gradient ascent. However, it was possible to show that such a solution was at
a maximum (possibly local) of the likelihood. A last experiment was designed to
demonstrate this. The input data, shown in Figure 3, was artificially generated to
make sure that a solution existed. The network had 2 inputs, 3 hidden units and 2
outputs. The input samples and the input density corresponding to the weights in
a maximum of the likelihood are displayed in Figure 3, along with the transformed
input data for those weights. The points are projected by the ANN to a line parallel
to the first output dimension. Any variation of the weights from that solution,
in the direction of the gradient, even with a learning rate as small as
$10^{-14}$, yielded either no perceptible improvement or a decrease in
likelihood.
4 CONCLUSION
This paper has studied an architecture in which an ANN performs a non-linear
transformation of the data to be analyzed, and the output of the ANN is modeled
by a Gaussian mixture. The design of the ANN can incorporate prior knowledge
about the problem, for example to modularize the task and perform an initial
training of the sub-networks. In phoneme recognition experiments, an ANN/HMM
hybrid based on this architecture performed better than the ANN alone or the HMM
alone. In the second part of the paper, we have shown how the pdf of the input of
the network relates to the pdf of the outputs of the network. The objective of this
work is to perform density estimation with a non-local non-linear transformation of
the data. Preliminary experiments showed that such estimation was possible and
that it did improve the likelihood of the resulting pdf with respect to using only a
Gaussian pdf. We also studied how this system could perform a non-linear analogue
to principal components analysis.
References
Bengio Y. 1991. Artificial Neural Networks and their Application to Sequence
Recognition. PhD Thesis, School of Computer Science, McGill University, Montreal,
Canada.
Bengio Y., De Mori R., Flammia G., and Kompe R. 1992. Phonetically motivated acoustic parameters for continuous speech recognition using artificial neural
networks. To appear in Speech Communication.
Bottou L. 1991. Une approche théorique de l'apprentissage connexionniste;
applications à la reconnaissance de la parole. Doctoral Thesis, Université de
Paris Sud, France.
Bourlard, H. and Wellekens, C.J. (1989). Speech pattern discrimination and multilayer perceptrons. Computer, Speech and Language, vol. 3, pp. 1-19.
Bridle J.S. 1990. Training stochastic model recognition algorithms as networks
can lead to maximum mutual information estimation of parameters. Advances in
Neural Information Processing Systems 2, (ed. D.S. Touretzky) Morgan Kauffman
Publ., pp. 211-217.
Levin E. 1990. Word recognition using hidden control neural architecture. Proceedings of the International Conference on Acoustics, Speech and Signal Processing,
Albuquerque, NM, April 90, pp. 433-436.
Silverman B.W. 1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, NY.
Reshaping Visual Datasets for Domain Adaptation
Boqing Gong
U. of Southern California
Los Angeles, CA 90089
boqinggo@usc.edu
Kristen Grauman
U. of Texas at Austin
Austin, TX 78701
grauman@cs.utexas.edu
Fei Sha
U. of Southern California
Los Angeles, CA 90089
feisha@usc.edu
Abstract
In visual recognition problems, the common data distribution mismatches between
training and testing make domain adaptation essential. However, image data is
difficult to manually divide into the discrete domains required by adaptation algorithms, and the standard practice of equating datasets with domains is a weak
proxy for all the real conditions that alter the statistics in complex ways (lighting,
pose, background, resolution, etc.). We propose an approach to automatically discover latent domains in image or video datasets. Our formulation imposes two key
properties on domains: maximum distinctiveness and maximum learnability. By
maximum distinctiveness, we require the underlying distributions of the identified
domains to be different from each other to the maximum extent; by maximum
learnability, we ensure that a strong discriminative model can be learned from the
domain. We devise a nonparametric formulation and efficient optimization procedure that can successfully discover domains among both training and test data.
We extensively evaluate our approach on object recognition and human activity
recognition tasks.
1 Introduction
A domain refers to an underlying data distribution. Generally, there are two: the one with which
classifiers are trained, and the other to which classifiers are applied. While many learning algorithms
assume the two are the same, in real-world applications, the distributions are often mismatched,
causing significant performance degradation when the classifiers are applied. Domain adaptation
techniques are crucial in building robust classifiers to address mismatched new and unexpected
target environments. As such, the subject has been intensively studied in computer vision [1, 2, 3, 4],
speech and language processing [5, 6], and statistics and learning [7, 8, 9, 10].
While domain adaptation research largely focuses on how adaptation should proceed, there are also
vital questions concerning the domains themselves: what exactly is a domain composed of? and
how are domains different from each other? For some applications, the answers come naturally.
For example, in speech recognition, we can organize data into speaker-specific domains where each
domain contains a single speaker?s utterances. In language processing, we can organize text data
into language-specific domains. For those types of data, we can neatly categorize each instance
with a discrete set of semantically meaningful properties; a domain is thus naturally composed of
instances of the same (subset of) properties.
For visual recognition, however, the same is not possible. In addition to large intra-category appearance variations, images and video of objects (or scenes, attributes, activities, etc.) are also
significantly affected by many extraneous factors such as pose, illumination, occlusion, camera resolution, and background. Many of these factors simply do not naturally lend themselves to deriving
discrete domains. Furthermore, the factors overlap and interact in images in complex ways. In fact,
even coming up with a comprehensive set of such properties is a daunting task in its own right, not
to mention automatically detecting them in images!
Partially due to these conceptual and practical constraints, datasets for visual recognition are not
deliberately collected with clearly identifiable domains [11, 12, 13, 14, 15]. Instead, standard image/video collection is a product of trying to ensure coverage of the target category labels on one
hand, and managing resource availability on the other. As a result, a troubling practice in visual domain adaptation research is to equate datasets with domains and study the problem of cross-dataset
generalization or correcting dataset bias [16, 17, 18, 19].
One pitfall of this ad hoc practice is that a dataset could be an agglomeration of several distinctive
domains. Thus, modeling the dataset as a single domain would necessarily blend the distinctions,
potentially damaging visual discrimination. Consider the following human action recognition task,
which is also studied empirically in this work. Suppose we have a training set containing videos of
multiple subjects taken at view angles of 30° and 90°, respectively. Unaware of the distinction of
these two views of videos, a model for the training set as a single training domain needs to account
for both inter-subject and inter-view variations. Presumably, applying the model to recognizing
videos taken at view angle of 45° (i.e., from the test domain) would be less effective than applying
models accounting for the two view angles separately, i.e., modeling inter-subject variations only.
How can we avoid such pitfalls? More specifically, how can we form characteristic domains, without resorting to the hopeless task of manually defining properties along which to organize them?
We propose novel learning methods to automatically reshape datasets into domains. This is a challenging unsupervised learning problem. At the surface, we are not given any information about
the domains that the datasets contain, such as the statistical properties of the domains, or even the
number of domains. Furthermore, the challenge cannot be construed as a traditional clustering problem; simply clustering images by their appearance is prone to reshaping datasets into per-category
domains, as observed in [20] and our own empirical studies. Moreover, there may be many complex factors behind the domains, making it difficult to model the domains with parametric mixture
models on which traditional clustering algorithms (e.g., Kmeans or Gaussian mixtures) are based.
Our key insights are two axiomatic properties that latent domains should possess: maximum distinctiveness and maximum learnability. By maximum distinctiveness, we identify domains that are
maximally different in distribution from each other. This ensures domains are characteristic in terms
of their large inter-domain variations. By maximum learnability, we identify domains from which
we can derive strong discriminative models to apply to new testing data.
In section 2, we describe our learning methods for extracting domains with these desirable properties. We derive nonparametric approaches to measure domain discrepancies and show how to
optimize them to arrive at maximum distinctiveness. We also show how to achieve maximum learnability by monitoring an extracted domain?s discriminative learning performance, and we use that
property to automatically choose the number of latent domains. To our best knowledge, [20] is
the first and only work addressing latent domain discovery. We postpone a detailed discussion and
comparison to their method to section 3, after we have described our own.
In section 4, we demonstrate the effectiveness of our approach on several domain adaptation tasks for
object recognition and human activity recognition. We show that we achieve far better classification
results using adapted classifiers learned on the discovered domains. We conclude in section 5.
2 Proposed approach
We assume that we have access to one or more annotated datasets with a total of M data instances.
The data instances are in the form of $(x_m, y_m)$ where $x_m \in \mathbb{R}^D$ is the feature vector and $y_m \in [C]$
the corresponding label out of C categories. Moreover, we assume that each data instance comes
from a latent domain $z_m \in [K]$ where K is the number of domains.
In what follows, we start by describing our algorithm for inferring $z_m$ assuming K is known. Then
we describe how to infer K from the data.
2.1 Maximally distinctive domains

Given K, we denote the distributions of unknown domains $D_k$ by $P_k(x, y)$ for $k \in [K]$. We do not
impose any parametric form on $P_k(\cdot, \cdot)$. Instead, the marginal distribution $P_k(x)$ is approximated
by the empirical distribution

$\hat P_k(x) = \frac{1}{M_k} \sum_m \delta_{x_m} z_{mk}$,

where $M_k$ is the number of data instances to be assigned to the domain k and $\delta_{x_m}$ is an atom at
$x_m$. $z_{mk} \in \{0, 1\}$ is a binary indicator variable and takes the value of 1 when $z_m = k$. Note that
$M_k = \sum_m z_{mk}$ and $\sum_k M_k = M$.
What kind of properties do we expect from $\hat P_k(x)$? Intuitively, we would like any two different
domains $\hat P_k(x)$ and $\hat P_{k'}(x)$ to be as distinctive as possible. In the context of modeling visual data,
this implies that intra-class variations between domains are often far more pronounced than inter-class
variations within the same domain. As a concrete example, consider the task of differentiating
commercial jetliners from fighter jets. While the two categories are easily distinguishable when
viewed from the same pose (frontal view, side view, etc.), there is a significant change in appearance
when either category undergoes a pose change. Clearly, defining domains by simply clustering the
images by appearance is insufficient; the inter-category and inter-pose variations will both contribute
to the clustering procedure and may lead to unreasonable clusters. Instead, to identify characteristic
domains, we need to look for divisions of the data that yield maximally distinctive distributions.
To quantify this intuition, we need a way to measure the difference in distributions. To this end, we
apply a kernel-based method to examine whether two samples are from the same distribution [21].
Concretely, let $k(\cdot, \cdot)$ denote a characteristic positive semidefinite kernel (such as the Gaussian kernel).
We compute the difference between the means of two empirical distributions in the reproducing
kernel Hilbert space (RKHS) H induced by the kernel function,
$d(k, k') = \left\| \frac{1}{M_k} \sum_m k(\cdot, x_m) \, z_{mk} - \frac{1}{M_{k'}} \sum_m k(\cdot, x_m) \, z_{mk'} \right\|_H^2$    (1)

where $k(\cdot, x_m)$ is the image (or kernel-induced feature) of $x_m$ under the kernel. The measure
approaches zero as the number of samples tends to infinity, if and only if the two domains are the
same, $P_k = P_{k'}$. We define the total domain distinctiveness (TDD) as the sum of this quantity over
all possible pairs of domains:

$\text{TDD}(K) = \sum_{k \neq k'} d(k, k')$,    (2)

and choose domain assignments for $z_m$ such that TDD is maximized. We first discuss this optimization
problem in its native formulation of integer programming, followed by a more computationally
convenient continuous optimization.
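Equations (1) and (2) translate directly into code. The sketch below (variable
names are illustrative) evaluates them for a hard assignment matrix z and a
Gaussian kernel; the bandwidth choice here is an arbitrary stand-in:

    import numpy as np

    def domain_distance(Kmat, z, k1, k2):
        a = z[:, k1] / z[:, k1].sum()    # entries z_mk / M_k
        b = z[:, k2] / z[:, k2].sum()
        d = a - b
        return float(d @ Kmat @ d)       # squared RKHS distance of the means

    def total_domain_distinctiveness(Kmat, z):
        K = z.shape[1]
        return sum(domain_distance(Kmat, z, i, j)
                   for i in range(K) for j in range(K) if i != j)

    # toy usage: 6 points, 2 candidate domains, RBF kernel
    X = np.random.default_rng(3).normal(size=(6, 2))
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    Kmat = np.exp(-sq / (2 * np.median(sq)))
    z = np.eye(2)[[0, 0, 0, 1, 1, 1]]    # one-hot domain assignments
    print(total_domain_distinctiveness(Kmat, z))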
Optimization. In addition to the binary constraints on $z_{mk}$, we also enforce

$\sum_{k=1}^{K} z_{mk} = 1 \;\; \forall m \in [M]$, and $\frac{1}{M_k} \sum_{m=1}^{M} z_{mk} y_{mc} = \frac{1}{M} \sum_{m=1}^{M} y_{mc} \;\; \forall c \in [C], \; k \in [K]$    (3)

where $y_{mc}$ is a binary indicator variable, taking the value of 1 if $y_m = c$.
The first constraint stipulates that every instance will be assigned to one domain and one domain
only. The second constraint, which we refer to as the label prior constraint (LPC), requires that
within each domain, the class labels are distributed according to the prior distribution (of the labels),
estimated empirically from the labeled data.
LPC does not restrict the absolute numbers of instances of different labels in each domain. It only
reflects the intuition that in the process of data collection, the relative percentages of different classes
are approximately in accordance with a prior distribution that is independent of domains. For
example, in action recognition, if the "walking" category occurs relatively frequently in a domain
corresponding to brightly lit video, we also expect it to be frequent in the darker videos. Thus, when
data instances are re-arranged into latent domains, the same percentages are likely to be preserved.
The optimization problem is NP-hard due to the integer constraints. In the following, we relax it
into a continuous optimization, which is more accessible with off-the-shelf optimization packages.
Relaxation. We introduce new variables $\alpha_{mk} = z_{mk}/M_k$, and relax them to live on the simplex

$\alpha_k = (\alpha_{1k}, \cdots, \alpha_{Mk})^T \in \Delta = \left\{ \alpha_k : \alpha_{mk} \ge 0, \sum_{m=1}^{M} \alpha_{mk} = 1 \right\}$

for $k = 1, \cdots, K$. With the new variables, our optimization problem becomes

$\max_{\alpha} \;\; \text{TDD}(K) = \sum_{k \neq k'} (\alpha_k - \alpha_{k'})^T K (\alpha_k - \alpha_{k'})$    (4)

s.t. $\;\; 1/M \le \sum_k \alpha_{mk} \le 1/C, \quad m = 1, 2, \cdots, M$,    (5)

$(1 - \epsilon)/M \sum_m y_{mc} \le \sum_m \alpha_{mk} y_{mc} \le (1 + \epsilon)/M \sum_m y_{mc}, \quad c = 1, \cdots, C, \; k = 1, \cdots, K$,
where K is the M × M kernel matrix. The first constraint stems from the (default) requirement that
every domain should have at least one instance per category, namely, $M_k \ge C$, and every domain
should have at most M instances ($M_k \le M$). The second constraint is a relaxed version of the LPC,
allowing a small deviation from the prior distribution by setting $\epsilon = 1\%$. We assign $x_m$ to the
domain k for which $\alpha_{mk}$ is the maximum of $\alpha_{m1}, \cdots, \alpha_{mK}$.
This relaxed optimization problem is a maximization of a convex quadratic function subject to linear
constraints. Though in general still NP-hard, this type of optimization problem has been studied
extensively and we have found existing solvers are adequate in yielding satisfactory solutions.
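For concreteness, the following sketch encodes the relaxed problem (4)-(5) with
scipy's SLSQP on the negated objective. Since a convex quadratic is being
maximized, this only reaches a local optimum from a random start; it is meant to
show the constraint encoding, not to stand in for the solvers used in the paper.
All sizes and data are toy placeholders:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    M, C, K, eps = 12, 2, 2, 0.01
    X = rng.normal(size=(M, 2))
    y = np.tile(np.arange(C), M // C)              # toy labels
    Y = np.eye(C)[y]                               # M x C indicators y_mc
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    Kmat = np.exp(-sq / (2 * np.median(sq)))

    def neg_tdd(a):                                # negated objective (4)
        A = a.reshape(M, K)
        return -sum((A[:, i] - A[:, j]) @ Kmat @ (A[:, i] - A[:, j])
                    for i in range(K) for j in range(K) if i != j)

    cons = [{'type': 'eq',                         # each alpha_k on the simplex
             'fun': lambda a, k=k: a.reshape(M, K)[:, k].sum() - 1}
            for k in range(K)]
    cons += [{'type': 'ineq',                      # constraint (5), lower side
              'fun': lambda a, m=m: a.reshape(M, K)[m].sum() - 1 / M}
             for m in range(M)]
    cons += [{'type': 'ineq',                      # constraint (5), upper side
              'fun': lambda a, m=m: 1 / C - a.reshape(M, K)[m].sum()}
             for m in range(M)]
    p = Y.mean(0)                                  # empirical label prior
    for k in range(K):                             # relaxed LPC band
        for c in range(C):
            cons += [{'type': 'ineq', 'fun': lambda a, k=k, c=c:
                      a.reshape(M, K)[:, k] @ Y[:, c] - (1 - eps) * p[c]},
                     {'type': 'ineq', 'fun': lambda a, k=k, c=c:
                      (1 + eps) * p[c] - a.reshape(M, K)[:, k] @ Y[:, c]}]

    a0 = rng.dirichlet(np.ones(M), size=K).T.ravel()   # columns sum to one
    res = minimize(neg_tdd, a0, method='SLSQP',
                   bounds=[(0, 1)] * (M * K), constraints=cons)
    labels = res.x.reshape(M, K).argmax(1)         # domain of each instance
    print(labels)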
2.2 Maximally learnable domains: determining the number of domains

Given M instances, how many domains hide inside? Note that the total domain distinctiveness
TDD(K) increases as K increases; presumably, in the extreme case, each domain has only a few
instances and their distributions would be maximally different from each other. However, such tiny
domains would offer insufficient data to separate the categories of interest reliably.
To infer the optimal K, we appeal to maximum learnability, another desirable property we impose
on the identified domains. Specifically, for any identified domain, we would like the data instances it
contains to be adequate to build a strong classifier for labeled data; failing to do so would cripple
the domain's adaptability to new test data.
Following this line of reasoning, we propose domain-wise cross-validation (DWCV) to identify the
optimal K. DWCV consists of the following steps. First, starting from K = 2, we use the method
described in the previous section to identify K domains. Second, for each identified domain, we
build discriminative classifiers using the label information and evaluate them with cross-validation.
Denote the cross-validation accuracy for the k-th domain by $A_k$. We then combine all the accuracies
with a weighted sum

$A(K) = \frac{1}{M} \sum_{k=1}^{K} M_k A_k$.

For very large K such that each domain contains only a few examples, A(K) approaches the
classification accuracy using the class prior probability to classify. Thus, starting at K = 2 (and assuming
A(2) is greater than the prior probability's classification accuracy), we choose $K^*$ as the value that
attains the highest cross-validation accuracy: $K^* = \arg\max_K A(K)$. For N-fold cross-validation,
a practical bound for the largest K we need to examine is $K_{max} \le \min\{M/(NC), C\}$. Beyond this
bound it does not quite make sense to do cross-validation.
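In code, the DWCV loop looks roughly as follows. The partitioning step here is a
k-means stand-in; in the actual procedure it would be the maximum-distinctiveness
optimization of section 2.1. The classifier and fold count are illustrative:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def dwcv_accuracy(X, y, K, folds=5):
        # stand-in partitioner; the paper uses the optimization of Sec. 2.1
        domains = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
        acc = 0.0
        for k in range(K):
            idx = domains == k
            classes, counts = np.unique(y[idx], return_counts=True)
            if len(classes) < 2 or counts.min() < folds:
                return 0.0                      # degenerate domain: A(K) set to 0
            a_k = cross_val_score(SVC(), X[idx], y[idx], cv=folds).mean()
            acc += idx.sum() / len(X) * a_k     # A(K) = sum_k (M_k / M) * A_k
        return acc

    def choose_num_domains(X, y, K_max):
        scores = {K: dwcv_accuracy(X, y, K) for K in range(2, K_max + 1)}
        return max(scores, key=scores.get)      # K* = argmax_K A(K)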
3 Related work
Domain adaptation is a fundamental research subject in statistical machine learning [9, 22, 23, 10],
and is also extensively studied in speech and language processing [5, 6, 8] and computer vision [1,
2, 3, 4, 24, 25]. Mostly these approaches are validated by adapting between datasets, which, as
discussed above, do not necessarily correspond to well-defined domains.
In our previous work, we proposed to identify some landmark data points in the source domain which
are distributed similarly to the target domain [26]. While that approach also slices the training set, it
differs in the objective. We discover the underlying domains of the training datasets, each of which
will be adaptable, whereas the landmarks in [26] are intentionally biased towards the single given
target domain. Hoffman et al.'s work [20] is the most relevant to ours. They also aim at discovering
the latent domains from datasets, by modeling the data with a hierarchical distribution consisting
of Gaussian mixtures. However, their explicit form of distribution may not be easily satisfiable
in real data. In contrast, we appeal to nonparametric methods, overcoming this limitation without
assuming any form of distribution. In addition, we examine the new scenario where the test set is
also composed of heterogeneous domains.
A generalized clustering approach by Jegelka et al. [27] shares the idea of maximum distinctiveness (or ?discriminability? used in [27]) criterion with our approach. However, their focus is the
setting of unsupervised clustering where ours is domain discovery. As such, they adopt a different
regularization term from ours, which exploits labels in the datasets.
Multi-domain adaptation methods suppose that multiple source domains are given as input, and the
learner must adapt from (some of) them to do well in testing on a novel target domain [28, 29, 10].
In contrast, in the problem we tackle, the division of data into domains is not given; our algorithm
must discover the latent domains. After our approach slices the training data into multiple domains,
it is natural to apply multi-domain techniques to achieve good performance on a test domain. We
will present some related experiments in the next section.
4 Experimental Results
We validate our approach on visual object recognition and human activity recognition tasks. We
first describe our experimental settings, and then report the results of identifying latent domains
and using the identified domains for adapting classifiers to a new mono-domain test set. After that,
we present and report experimental results of reshaping heterogeneous test datasets into domains
matching to the identified training domains. Finally, we give some qualitative analyses and details
on choosing the number of domains.
4.1 Experimental setting
Data For object recognition, we use images from Caltech-256 (C) [14] and the image datasets of
Amazon (A), DSLR (D), and Webcam (W) provided by Saenko et al. [2]. There are 10 common
categories in total among the 4 datasets. These images mainly differ in the data collection sources: Caltech-256 was collected from webpages on the Internet, Amazon images from amazon.com, and DSLR
and Webcam images from an office environment. We represent images with bag-of-visual-words
descriptors following previous work on domain adaptation [2, 4]. In particular, we extract SURF
[30] features from the images, use K-means to build a codebook of 800 clusters, and finally obtain
an 800-bin histogram for each image.
For action recognition from video sequences, we use the IXMAS multi-view action dataset [15].
There are five views (Camera 0, 1, ..., 4) of eleven actions in the dataset. Each action is performed
three times by twelve actors and is captured by the five cameras. We keep the first five actions
performed by alba, andreas, daniel, hedlena, julien, and nicolas such that the irregularly performed
actions [15] are excluded. In each view, 20 sequences are randomly selected per actor per action.
We use the shape-flow descriptors to characterize the motion of the actions [31].
Evaluation strategy The four image datasets are commonly used as distinctive domains in research
in visual domain adaptation [2, 3, 4, 32]. Likewise, each view in the IXMAS dataset is often taken
as a domain in action recognition [33, 34, 35, 24]. Similarly, in our experiments, we use a subset of
these datasets (views) as source domains for training classifiers and the rest of the datasets (views)
as target domains for testing. However, the key difference is that we do not compare performance of
different adaptation algorithms which assume domains are already given. Instead, we evaluate the
effectiveness of our approach by investigating whether its automatically identified domains improve
adaptation, that is, whether recognition accuracy on the target domains can be improved by reshaping
the datasets into their latent source domains.
Table 1: Oracle recognition accuracy on target domains by adapting original or identified domains

S              A, C   D, W   C, D, W   Cam 0, 1      Cam 2, 3, 4
T              D, W   A, C   A         Cam 2, 3, 4   Cam 0, 1
G_ORIG         41.0   32.6   41.8      44.6          47.1
G_OTHER [20]   39.5   33.7   34.6      43.9          45.1
G_OURS         42.6   35.5   44.6      47.3          50.3
Table 2: Adaptation recognition accuracies, using original and identified domains with different
multi-source adaptation methods

Latent     Multi-DA   A, C   D, W   C, D, W   Cam 0, 1      Cam 2, 3, 4
Domains    method     D, W   A, C   A         Cam 2, 3, 4   Cam 0, 1
ORIGINAL   UNION      41.7   35.8   41.0      45.1          47.8
[20]       ENSEMBLE   31.7   34.4   38.9      43.3          29.6
[20]       MATCHING   39.6   34.0   34.6      43.2          45.2
OURS       ENSEMBLE   38.7   35.8   42.8      45.0          40.5
OURS       MATCHING   42.6   35.5   44.6      47.3          50.3
We use the geodesic flow kernel for adapting classifiers [4]. To use the kernel-based method for
computing distribution difference, we use Gaussian kernels, cf. section 2. We set the kernel bandwidth to be twice the median distance of all pairwise data points. The number of latent domains K
is determined by the DWCV procedure (cf. section 2.2).
4.2 Identifying latent domains from training datasets
Notation Let S = {S1 , S2 , . . . , SJ } denote the J datasets we will be using as training source datasets
and let T = {T1 , T2 , . . . , TL } denote the L datasets we will be using as testing target datasets.
Furthermore, let K denote the number of optimal domains discovered by our DWCV procedure and
U = {U1, U2, . . . , UK} the K hidden domains identified by our approach. Let r(A → B) denote the
recognition accuracy on the target domain B with A as the source domain.
Goodness of the identified domains. We examine whether $\{U_k\}$ is a set of good domains by
computing the expected best possible accuracy of using the identified domains separately for adaptation:

$\text{GO}_{\text{OURS}} = \mathbb{E}_{B \sim P} \max_k r(U_k, B) \approx \frac{1}{L} \sum_l \max_k r(U_k \to T_l)$    (6)
where B is a target domain drawn from a distribution on domains P. Since this distribution is not
obtainable, we approximate the expectation with the empirical average over the observed testing
datasets {Tl }. Likewise, we can define GORIG where we compute the best possible accuracy for the
original domains {Sj }, and GOTHER where we compute the same quantity for a competing method
for identifying latent domains, proposed in [20]. Note that the max operation requires that the target
domains be annotated; thus the accuracies are the most optimistic estimate for all methods, and
upper bounds of practical algorithms.
Table 1 reports the three quantities on different pairs of sources and target domains. Clearly, our
method yields a better set of identified domains, which are always better than the original datasets.
We also experimented using Kmeans or random partition for clustering data instances into domains.
Neither yields competitive performance and the results are omitted here for brevity.
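For reference, equation (6) reduces to a one-liner once the accuracy table
r(U_k → T_l) is available (the numbers below are toy values, not results from
the paper):

    import numpy as np

    def goodness(r):                     # r: K x L matrix of r(U_k -> T_l)
        return np.asarray(r).max(axis=0).mean()

    print(goodness([[0.45, 0.40],
                    [0.38, 0.47]]))      # mean of the column-wise maxima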
Practical utility of identified domains In practical applications of domain adaptation algorithms,
however, the target domains are not annotated. The oracle accuracies reported in Table 1 are thus not
achievable in general. In the following, we examine how closely the performance of the identified
domains can approximate the oracle if we employ multi-source adaptation.
To this end, we consider several choices of multiple-source domain adaptation methods:
- UNION: The most naive way is to combine all the source domains into a single dataset and
adapt from this "mega" domain to the target domains. We use this as a baseline.
- ENSEMBLE: A more sophisticated strategy is to adapt each source domain to the target
domain and combine the adaptation results in the form of combining multiple classifiers [20].
Table 3: Results of reshaping the test set when it consists of data from multiple domains.

                                                 Cam 012  Cam 123  Cam 234  Cam 340  Cam 401
From identified               A' → F             36.4     40.4     46.5     50.7     43.6
(Reshaping training only)     B' → F             37.1     38.7     45.7     50.6     41.8
                              C' → F             37.7     39.6     46.1     50.5     43.9
No reshaping                  A ∪ B ∪ C → F      37.3     39.9     47.8     52.3     43.3
Conditional reshaping         X → F_X,
                              ∀X ∈ {A', B', C'}  38.5     41.1     49.2     54.9     44.8
- MATCHING: This strategy compares the empirical (marginal) distribution of the source
domains and the target domains and selects the single source domain that has the smallest
difference to the target domain to adapt. We use the kernel-based method to compare
distributions, as explained in section 2 and sketched below. Note that since we compare only
the marginal distributions, we do not require the target domains to be annotated.
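A minimal sketch of MATCHING (illustrative names, arbitrary kernel bandwidth):
each identified source domain is scored against the unlabeled target by the
squared kernel mean-map distance of section 2, and the closest one is selected:

    import numpy as np

    def rbf(A, B, gamma):
        return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

    def mmd2(S, T, gamma):               # squared mean-map distance (biased)
        return (rbf(S, S, gamma).mean() - 2 * rbf(S, T, gamma).mean()
                + rbf(T, T, gamma).mean())

    def match_source(sources, target, gamma=0.5):
        # sources: list of arrays, one per identified domain; target: array
        return int(np.argmin([mmd2(S, target, gamma) for S in sources]))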
Table 2 reports the averaged recognition accuracies on the target domains, using either the original
datasets/domains or the identified domains as the source domains. The latent domains identified
by our method generally perform well, especially using MATCHING to select the single best source
domain to match the target domain for adaptation. In fact, contrasting Table 2 to Table 1, the
MATCHING strategy for adaptation is able to match the oracle accuracies, even though the matching
process does not use label information from the target domains.
4.3 Reshaping the test datasets
So far we have been concentrating on reshaping multiple annotated datasets (for training classifiers)
into domains for adapting to test datasets. However, test datasets can also be made of multiple latent
domains. Hence, it is also instrumental to investigate whether we can reshape the test datasets into
multiple domains to achieve better adaptation results.
However, the reshaping process for test datasets has a critical difference from reshaping training
datasets. Specifically, we should reshape test datasets, conditioning on the identified domains from
the training datasets: the goal is to discover latent domains in the test datasets that match the
domains in the training datasets as much as possible. We term this conditional reshaping.
Computationally, conditional reshaping is more tractable than identifying latent domains from the
training datasets. Concretely, we minimize the distribution differences between the latent domains in
the test datasets and the domains in the training datasets, using the kernel-based measure explained in
section 2. The optimization problem, however, can be relaxed into a convex quadratic programming
problem. Details are in the Suppl. Material.
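The QP itself is deferred to the supplement; as a crude per-point stand-in (not
the paper's formulation), one can greedily assign each test point to the
identified training domain whose kernel mean embedding it is most similar to:

    import numpy as np

    def conditional_reshape(train_domains, X_test, gamma=0.5):
        def rbf(A, B):
            return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))
        # similarity of each test point to each domain's mean embedding
        sims = np.stack([rbf(X_test, S).mean(axis=1) for S in train_domains],
                        axis=1)
        return sims.argmax(axis=1)       # matched training domain per test point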
Table 3 demonstrates the benefit of conditionally reshaping the test datasets, on cross-view action
recognition. This problem inherently needs test set reshaping, since the person may be viewed from
any direction at test time. (In contrast, test sets for the object recognition datasets above are less
heterogeneous.) The first column shows five groups of training datasets, each being a different view,
denoted by A, B and C. In each group, the remaining views D and E are merged into a new test
dataset, denoted by F = D ∪ E.
Two baselines are included: (1) adapting from the identified domains A', B' and C' to the merged
dataset F; (2) adapting from the merged dataset A ∪ B ∪ C to F. These are contrasted to adapting
from the identified domains in the training datasets to the matched domains in F. In most groups,
there is a significant improvement in recognition accuracies by conditional reshaping over no
reshaping on either training or testing, and reshaping on training only.
4.4 Analysis of identified domains and the optimal number of domains
It is also interesting to see which factors are dominant in the identified domains. Object appearance,
illumination, or background? Do they coincide with the factors controlled by the dataset collectors?
Some exemplar images are shown in Figure 1, where each row corresponds to an original dataset,
and each column is an identified domain across two datasets. On the left of Figure 1 we reshape
Amazon and Caltech-256 into two domains. In Domain II all the "laptop" images 1) are taken from
[Figure 1 panels: rows Amazon/Caltech and DSLR/Webcam; columns Identified Domain I and Identified Domain II.]
Figure 1: Exemplar images from the original and identified domains after reshaping. Note that
identified domains contain images from both datasets.
[Figure 2 panels: (A, C), (C, D, W), (Cam 1, 2, 3), (Cam 2, 3, 4); x-axis: # of domains (2 to 5); y-axis: Accuracy (%); curves: DWCV and Domain adaptation.]
Figure 2: Domain-wise cross-validation (DWCV) for choosing the number of domains.
the front view and 2) have colorful screens, while Domain I images are less colorful and have more
diversified views. It looks like the domains in Amazon and Caltech-256 are mainly determined by
the factors of object pose and appearance (color).
The figures on the right are from reshaping DSLR and Webcam, of which the "keyboard" images
are taken in an office environment with various lighting, object poses, and background controlled
by the dataset creators [2]. We can see that the images in Domain II have gray background, while
in Domain I the background is either white or wooden. Besides, keyboards of the same model,
characterized by color and shape, are almost perfectly assigned to the same domain. In sum, the
main factors here are probably background and object appearance (color and shape).
Figure 2 plots some intermediate results of the domain-wise cross-validation (DWCV) for determining the number of domains K to identify from the multiple training datasets. In addition to the
DWCV accuracy A(K), the average classification accuracies on the target domain(s) are also included for reference. We set A(K) to 0 when some categories in a domain are assigned with only
one or no data point (as a result of optimization). Generally, A(K) goes up and then drops at some
point, before which is the optimal K* we use in the experiments. Interestingly, the number favored
by DWCV coincides with the number of datasets we mix, even though, as our experiments above
show, the ideal domain boundaries do not coincide with the dataset boundaries.
5 Conclusion
We introduced two domain properties, maximum distinctiveness and maximum learnability, to discover latent domains from datasets. Accordingly, we proposed nonparametric approaches encouraging the extracted domains to satisfy these properties. Since in each domain visual discrimination
is more consistent than that in the heterogeneous datasets, better prediction performance can be
achieved on the target domain. The proposed approach is extensively evaluated on visual object
recognition and human activity recognition tasks. Our identified domains outperform not only the
original datasets but also the domains discovered by [20], validating the effectiveness of our approach. It may also shed light on dataset construction in the future by examining the main factors of
the domains discovered from the existing datasets.
Acknowledgments K.G. is supported by ONR ATL N00014-11-1-0105. B.G. and F.S. are supported by ARO
Award# W911NF-12-1-0241 and DARPA Contract# D11AP00278 and the IARPA via DoD/ARL contract #
W911NF-12-C-0012. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein
are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.
References
[1] L. Duan, D. Xu, I.W. Tsang, and J. Luo. Visual event recognition in videos by learning from web data. In
CVPR, 2010.
[2] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV,
2010.
[3] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011.
[4] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In
CVPR, 2012.
[5] H. Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.
[6] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In
EMNLP, 2006.
[7] J. Huang, A.J. Smola, A. Gretton, K.M. Borgwardt, and B. Scholkopf. Correcting sample selection bias
by unlabeled data. In NIPS, 2007.
[8] S.J. Pan, I.W. Tsang, J.T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. IEEE
Trans. NN, (99):1-12, 2009.
[9] J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N.D. Lawrence. Dataset shift in machine
learning. The MIT Press, 2009.
[10] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In NIPS, 2009.
[11] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image
database. In CVPR, 2009.
[12] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object
classes (voc) challenge. International Journal of Computer Vision, 88(2):303-338, 2010.
[13] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool
for image annotation. IJCV, 77:157-173, 2008.
[14] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical report, California
Institute of Technology, 2007.
[15] D. Weinland, E. Boyer, and R. Ronfard. Action recognition from arbitrary views using 3d exemplars. In
ICCV, 2007.
[16] A. Torralba and A.A. Efros. Unbiased look at dataset bias. In CVPR, 2011.
[17] B. Gong, F. Sha, and K. Grauman. Overcoming dataset bias: An unsupervised domain adaptation approach. In NIPS Workshop on Large Scale Visual Recognition and Retrieval, 2012.
[18] L. Cao, Z. Liu, and T. S Huang. Cross-dataset action detection. In CVPR, 2010.
[19] T. Tommasi, N. Quadrianto, B. Caputo, and C. Lampert. Beyond dataset bias: multi-task unaligned shared
knowledge transfer. In ACCV, 2012.
[20] J. Hoffman, B. Kulis, T. Darrell, and K. Saenko. Discovering latent domains for multisource domain
adaptation. In ECCV. 2012.
[21] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In NIPS, 2007.
[22] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood
function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
[23] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation.
In NIPS, 2007.
[24] R. Li and T. Zickler. Discriminative virtual views for cross-view action recognition. In CVPR, 2012.
[25] K. Tang, V. Ramanathan, L. Fei-Fei, and D. Koller. Shifting weights: Adapting object detectors from
image to video. In NIPS, 2012.
[26] B. Gong, K. Grauman, and F. Sha. Connecting the dots with landmarks: Discriminatively learning
domain-invariant features for unsupervised domain adaptation. In ICML, 2013.
[27] S. Jegelka, A. Gretton, B. Schölkopf, B. K. Sriperumbudur, and U. Von Luxburg. Generalized clustering
via kernel embeddings. In Advances in Artificial Intelligence, 2009.
[28] Q. Sun, R. Chattopadhyay, S. Panchanathan, and J. Ye. A two-stage weighting framework for multi-source
domain adaptation. In NIPS, 2011.
[29] L. Duan, I. W Tsang, D. Xu, and T. Chua. Domain adaptation from multiple sources via auxiliary classifiers. In ICML, 2009.
[30] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded up robust features. In ECCV, 2006.
[31] D. Tran and A. Sorokin. Human activity recognition with metric learning. In ECCV. 2008.
[32] A. Bergamo and L. Torresani. Exploiting weakly-labeled web images to improve object classification: a
domain adaptation approach. In NIPS, 2010.
[33] A. Farhadi and M. Tabrizi. Learning to recognize activities from the wrong view point. In ECCV, 2008.
[34] C.-H. Huang, Y.-R. Yeh, and Y.-C. Wang. Recognizing actions across cameras by exploring the correlated
subspace. In ECCV, 2012.
[35] J. Liu, M. Shah, B. Kuipers, and S. Savarese. Cross-view action recognition via view knowledge transfer.
In CVPR, 2011.
9
| 5210 |@word kulis:2 version:1 achievable:1 instrumental:1 everingham:1 c0:1 accounting:1 mention:1 liu:2 contains:3 daniel:1 rkhs:1 ours:4 interestingly:1 existing:2 com:1 luo:1 must:2 partition:1 shape:3 eleven:1 plot:1 drop:1 discrimination:2 intelligence:1 discovering:2 selected:1 accordingly:1 chua:1 detecting:1 contribute:1 codebook:1 five:4 along:1 zickler:1 scholkopf:1 qualitative:1 consists:2 ijcv:1 combine:3 inside:1 introduce:1 pairwise:1 inter:6 expected:1 themselves:2 examine:5 frequently:1 multi:8 planning:1 freeman:1 voc:1 pitfall:2 automatically:5 duan:2 encouraging:1 kuiper:1 farhadi:1 solver:1 becomes:1 provided:1 discover:6 underlying:3 moreover:2 brightly:1 notation:1 matched:1 laptop:1 what:3 kind:1 interpreted:1 contrasting:1 every:3 tackle:1 shed:1 exactly:1 grauman:5 classifier:13 demonstrates:1 uk:4 wrong:1 colorful:2 organize:3 positive:1 t1:1 before:1 accordance:1 thereon:1 tends:1 ak:2 approximately:1 acl:1 discriminability:1 twice:1 equating:1 studied:4 eb:1 challenging:1 speeded:1 averaged:1 practical:5 camera:4 acknowledgment:1 testing:7 practice:3 union:2 postpone:1 differs:1 procedure:4 empirical:5 significantly:1 adapting:9 convenient:1 matching:7 word:1 refers:1 cannot:1 unlabeled:1 selection:1 context:1 applying:2 live:1 kmax:1 optimize:1 shi:1 pk0:1 go:1 williams:1 starting:2 convex:2 resolution:2 amazon:6 identifying:4 correcting:2 insight:1 deriving:1 atl:1 variation:7 zmk:8 fx:1 target:24 suppose:2 commercial:1 construction:1 programming:2 recognition:32 approximated:1 walking:1 native:1 labeled:3 database:2 observed:2 caltech256:1 wang:1 tsang:3 ensures:1 schoelkopf:1 sun:1 russell:1 highest:1 intuition:2 environment:3 ronfard:1 cam:15 geodesic:2 trained:1 weakly:1 predictive:1 distinctive:5 division:2 learner:1 isp:1 easily:2 darpa:1 k0:6 various:1 tx:1 effective:1 describe:3 chellappa:1 artificial:1 choosing:2 quite:1 cvpr:7 relax:2 statistic:2 tuytelaars:1 hoc:1 sequence:2 propose:3 tran:1 aro:1 unaligned:1 coming:1 adaptation:41 product:1 causing:1 zm:4 frequent:1 relevant:1 combining:1 cao:1 achieve:4 pronounced:1 validate:1 olkopf:1 los:2 webpage:1 exploiting:1 cluster:2 requirement:1 darrell:2 ben:1 object:16 derive:2 blitzer:2 pose:7 gong:4 exemplar:3 b0:1 strong:3 coverage:1 c:1 auxiliary:1 come:2 implies:1 quantify:1 differ:1 direction:1 rasch:1 arl:2 closely:1 annotated:5 attribute:1 merged:3 human:6 material:1 virtual:1 bin:1 require:2 government:2 assign:1 generalization:1 kristen:1 exploring:1 presumably:2 lawrence:1 efros:1 adopt:1 smallest:1 omitted:1 torralba:2 purpose:1 failing:1 axiomatic:1 label:9 bag:1 utexas:1 largest:1 successfully:1 tool:1 reflects:1 weighted:1 hoffman:2 bergamo:1 mit:1 feisha:1 clearly:3 gaussian:4 always:1 aim:1 avoid:1 shelf:1 office:2 validated:1 focus:2 improvement:1 likelihood:1 mainly:2 contrast:3 attains:1 baseline:2 sense:1 rostamizadeh:1 wooden:1 inference:2 nn:1 a0:2 hidden:1 perona:1 boyer:1 koller:1 reproduce:1 selects:1 arg:1 among:2 classification:5 pascal:1 denoted:2 extraneous:1 k6:3 multisource:1 favored:1 marginal:3 atom:1 manually:2 lit:1 look:3 unsupervised:6 icml:2 alter:1 discrepancy:1 simplex:1 t2:1 np:2 report:5 future:1 employ:1 few:2 torresani:1 randomly:1 composed:3 recognize:1 comprehensive:1 murphy:1 usc:2 occlusion:1 consisting:1 detection:1 interest:1 investigate:1 intra:2 evaluation:1 mixture:3 extreme:1 semidefinite:1 yielding:1 behind:1 light:1 copyright:1 tdd:5 divide:1 savarese:1 re:1 mk:18 instance:15 classify:1 modeling:4 column:2 w911nf:2 goodness:1 assignment:1 
maximization:1 addressing:1 subset:2 deviation:1 dod:2 recognizing:2 examining:1 front:1 learnability:7 characterize:1 reported:1 answer:1 person:1 fritz:1 fundamental:1 twelve:1 borgwardt:2 accessible:1 international:1 contract:2 off:1 dong:1 ym:3 connecting:1 concrete:1 von:1 containing:1 choose:3 huang:3 emnlp:1 tabrizi:1 li:4 account:1 distribute:1 availability:1 alba:1 satisfy:1 ad:1 performed:3 view:25 optimistic:1 candela:1 start:1 competitive:1 satisfiable:1 annotation:2 construed:1 minimize:1 accuracy:22 descriptor:2 largely:1 equate:1 characteristic:4 yield:3 identify:7 maximized:1 correspond:1 likewise:2 ensemble:3 weak:1 monitoring:1 lighting:2 detector:1 dslr:4 sriperumbudur:1 intentionally:1 naturally:3 chattopadhyay:1 dataset:22 concentrating:1 intensively:1 knowledge:3 color:3 hilbert:1 holub:1 obtainable:1 adaptability:1 sophisticated:1 adaptable:1 zisserman:1 maximally:5 daunting:1 improved:1 formulation:3 arranged:1 though:3 evaluated:1 furthermore:3 smola:2 stage:1 hand:1 web:3 undergoes:1 gray:1 building:1 ye:1 contain:2 unbiased:1 deliberately:1 regularization:1 assigned:4 hence:1 excluded:1 satisfactory:1 white:1 conditionally:1 speaker:2 coincides:1 criterion:1 generalized:2 trying:1 demonstrate:1 mcdonald:1 motion:1 reasoning:1 image:29 wise:3 novel:2 common:2 agglomeration:1 empirically:2 conditioning:1 discussed:1 m1:1 significant:3 refer:1 rd:1 resorting:1 similarly:2 neatly:1 sugiyama:1 language:4 dot:1 panchanathan:1 access:1 actor:2 surface:1 etc:3 dominant:1 own:3 hide:1 boqing:1 scenario:1 keyboard:2 n00014:1 binary:3 onr:1 devise:1 caltech:5 captured:1 greater:1 relaxed:3 impose:2 deng:1 managing:1 ii:4 multiple:12 mix:1 desirable:2 gretton:3 infer:2 stem:1 technical:1 jet:1 adapt:4 match:3 cross:13 offer:1 characterized:1 retrieval:1 fighter:1 concerning:1 reshaping:21 award:1 controlled:2 prediction:1 heterogeneous:4 vision:3 expectation:1 metric:1 histogram:1 kernel:17 represent:1 suppl:1 achieved:1 preserved:1 background:7 addition:4 separately:2 whereas:1 winn:1 median:1 source:20 crucial:1 sch:1 biased:1 rest:1 posse:1 probably:1 subject:6 induced:2 validating:1 flow:3 effectiveness:3 integer:2 extracting:1 structural:1 yang:1 ideal:1 intermediate:1 vital:1 iii:1 easy:1 embeddings:1 identified:31 restrict:1 bandwidth:1 andreas:1 idea:1 competing:1 perfectly:1 texas:1 angeles:2 shift:2 whether:5 tommasi:1 utility:1 speech:3 proceed:1 action:17 adequate:2 generally:3 detailed:1 gopalan:1 nonparametric:4 extensively:4 category:14 outperform:1 percentage:2 governmental:1 estimated:1 per:4 mega:1 stipulates:1 discrete:3 affected:1 group:3 key:3 four:1 drawn:1 mono:1 neither:1 relaxation:1 sum:3 luxburg:1 angle:3 package:1 arrive:1 almost:1 endorsement:1 griffin:1 bound:3 internet:1 followed:1 correspondence:1 fold:1 quadratic:2 identifiable:1 activity:7 oracle:4 adapted:1 sorokin:1 constraint:9 infinity:1 fei:5 scene:1 u1:1 min:1 relatively:1 according:1 sampleproblem:1 shimodaira:1 across:2 pan:1 making:1 s1:1 intuitively:1 explained:2 iccv:2 invariant:1 taken:5 computationally:2 resource:1 describing:1 discus:1 irregularly:1 tractable:1 end:2 operation:1 unreasonable:1 apply:3 kwok:1 hierarchical:2 enforce:1 reshape:4 shah:1 original:9 clustering:9 ensure:2 cf:2 remaining:1 creator:1 daum:1 exploit:1 build:3 especially:1 webcam:4 implied:1 objective:1 question:1 quantity:3 occurs:1 blend:1 parametric:2 sha:4 strategy:4 already:1 traditional:2 southern:2 subspace:1 distance:1 separate:1 landmark:3 extent:1 collected:2 assuming:3 besides:1 insufficient:2 
nc:1 difficult:2 troubling:1 mostly:1 potentially:1 reliably:1 policy:1 unknown:1 perform:1 allowing:1 upper:1 datasets:54 accv:1 defining:2 maxk:1 discovered:4 mansour:1 interclass:1 reproducing:1 arbitrary:1 overcoming:2 introduced:1 david:1 pair:2 required:1 namely:1 imagenet:1 california:3 learned:2 distinction:2 herein:1 nip:8 trans:1 address:1 beyond:2 able:1 mismatch:1 xm:9 cripple:1 lpc:3 challenge:2 max:4 video:11 lend:1 gool:2 shifting:1 overlap:1 critical:1 natural:1 event:1 indicator:2 representing:1 improve:2 technology:1 julien:1 reprint:1 extract:1 utterance:1 naive:1 text:1 prior:6 yeh:1 discovery:2 determining:2 relative:1 expect:2 discriminatively:1 interesting:1 limitation:1 validation:8 jegelka:2 proxy:1 imposes:1 consistent:1 tiny:1 share:1 austin:2 prone:1 hopeless:1 row:1 eccv:6 supported:2 mohri:1 bias:5 side:1 mismatched:2 institute:1 distinctiveness:9 taking:1 differentiating:1 absolute:1 distributed:2 slice:2 benefit:1 default:1 boundary:2 world:1 van:2 unaware:1 concretely:2 collection:3 commonly:1 made:1 coincide:2 author:1 far:3 sj:2 approximate:2 keep:1 investigating:1 conceptual:1 conclude:1 discriminative:5 continuous:2 latent:21 bay:1 frustratingly:1 table:9 transfer:3 robust:2 ca:2 nicolas:1 inherently:1 caputo:1 improving:1 interact:1 complex:3 necessarily:3 domain:231 da:1 official:1 surf:2 pk:4 main:2 s2:1 lampert:1 iarpa:2 quadrianto:1 collector:1 xu:2 tl:3 screen:1 darker:1 inferring:1 pereira:2 explicit:1 weighting:2 tang:1 specific:2 covariate:1 learnable:1 appeal:2 dk:1 experimented:1 essential:1 socher:1 workshop:1 ramanathan:1 quionero:1 notwithstanding:1 illumination:2 authorized:1 distinguishable:1 simply:3 appearance:7 likely:1 visual:16 unexpected:1 contained:1 diversified:1 expressed:1 partially:1 schwaighofer:1 u2:1 corresponds:1 extracted:2 conditional:4 viewed:2 goal:1 kmeans:2 towards:1 labelme:1 shared:1 change:2 hard:2 included:2 specifically:3 determined:2 contrasted:1 semantically:1 degradation:1 total:4 experimental:4 meaningful:1 saenko:3 select:1 damaging:1 crammer:1 categorize:1 brevity:1 frontal:1 evaluate:3 correlated:1 |
Heterogeneous-Neighborhood-based Multi-Task
Local Learning Algorithms
Yu Zhang
Department of Computer Science, Hong Kong Baptist University
yuzhang@comp.hkbu.edu.hk
Abstract
All the existing multi-task local learning methods are defined on homogeneous
neighborhood which consists of all data points from only one task. In this paper,
different from existing methods, we propose local learning methods for multi-task classification and regression problems based on a heterogeneous neighborhood which is defined on data points from all tasks. Specifically, we extend the k-nearest-neighbor classifier by formulating the decision function for each data point
as a weighted voting among the neighbors from all tasks where the weights are
task-specific. By defining a regularizer to enforce the task-specific weight matrix
to approach a symmetric one, a regularized objective function is proposed and
an efficient coordinate descent method is developed to solve it. For regression
problems, we extend the kernel regression to multi-task setting in a similar way
to the classification case. Experiments on some toy data and real-world datasets
demonstrate the effectiveness of our proposed methods.
1 Introduction
For single-task learning, besides global learning methods there are local learning methods [7], e.g.,
k-nearest-neighbor (KNN) classifier and kernel regression. Different from the global learning methods, the local learning methods make use of locality structure in different regions of the feature space
and are complementary to the global learning algorithms. In many applications, the single-task local learning methods have shown comparable performance with the global counterparts. Moreover,
besides classification and regression problems, the local learning methods are also applied to some
other learning problems, e.g., clustering [18] and dimensionality reduction [19]. When the number
of labeled data is not very large, the performance of the local learning methods is limited due to sparse local density [14]. In this case, we can leverage the useful information from other related tasks
to help improve the performance which matches the philosophy of multi-task learning [8, 4, 16].
Multi-task learning utilizes supervised information from some related tasks to improve the performance of one task at hand and during the past decades many advanced methods have been proposed
for multi-task learning, e.g., [17, 3, 9, 1, 2, 6, 12, 20, 14, 13]. Among those methods, [17, 14] are
two representative multi-task local learning methods. Even though both methods in [17, 14] use
KNN as the base learner for each task, Thrun and O'Sullivan [17] focus on learning cluster structure
among different tasks while Parameswaran and Weinberger [14] learn different distance metrics for
different tasks. The KNN classifiers use in both two methods are defined on the homogeneous neighborhood which is the set of nearest data points from the same task the query point belongs to. In
some situation, it is better to use a heterogeneous neighborhood which is defined as the set of nearest
data points from all tasks. For example, suppose we have two similar tasks marked with two colors
as shown in Figure 1. For a test data point marked with '?' from one task, we obtain an estimation with low confidence or even a wrong one based on the homogeneous neighborhood. However,
if we can use the data points from both two tasks to define the neighborhood (i.e., heterogeneous
neighborhood), we can obtain a more confident estimation.
In this paper, we propose novel local learning models for multi-task learning based on the heterogeneous neighborhood. For multi-task classification problems, we extend the KNN classifier by formulating the decision function on each data point as a weighted vote of its neighbors from all tasks, where the weights are task-specific. Since multi-task learning usually considers that the contribution of one task to another equals that in the reverse direction, we define a regularizer to enforce the task-specific weight matrix to approach a symmetric matrix, and based on this regularizer a regularized objective function is proposed. We develop an efficient coordinate descent method to solve it. Moreover, we also propose a local method for multi-task regression problems. Specifically, we extend the kernel regression method to the multi-task setting in a similar way to the classification case. Experiments on some toy data and real-world datasets demonstrate the effectiveness of our proposed methods.

Figure 1: Data points with one color (i.e., black or red) are from the same task and those with one type of marker (i.e., '+' or '-') are from the same class. A test data point is represented by '?'.
2 A Multi-Task Local Classifier based on Heterogeneous Neighborhood
In this section, we propose a local classifier for multi-task learning by generalizing the KNN algorithm, which is one of the most widely used local classifiers for single-task learning.
Suppose we are given $m$ learning tasks $\{T_i\}_{i=1}^m$. The training set consists of $n$ triples $(x_i, y_i, t_i)$ with the $i$th data point $x_i \in \mathbb{R}^D$, its label $y_i \in \{-1, 1\}$ and its task indicator $t_i \in \{1, \ldots, m\}$. So each task is a binary classification problem with $n_i = |\{j \mid t_j = i\}|$ data points belonging to the $i$th task $T_i$.
For the $i$th data point $x_i$, we use $N_k(i)$ to denote the set of the indices of its $k$ nearest neighbors. If $N_k(i)$ is a homogeneous neighborhood which only contains data points from the task that $x_i$ belongs to, we can use $d(x_i) = \mathrm{sgn}\big(\sum_{j \in N_k(i)} s(i,j) y_j\big)$ to make a decision for $x_i$, where $\mathrm{sgn}(\cdot)$ denotes the sign function and $s(i,j)$ denotes a similarity function between $x_i$ and $x_j$. Here, by defining $N_k(i)$ as a heterogeneous neighborhood which contains data points from all tasks, we cannot directly utilize this decision function, and instead we introduce a weighted decision function using task-specific weights:

$$d(x_i) = \mathrm{sgn}\Big(\sum_{j \in N_k(i)} w_{t_i, t_j}\, s(i,j)\, y_j\Big),$$
where $w_{qr}$ represents the contribution of the $r$th task $T_r$ to the $q$th one $T_q$ when $T_r$ has some data points among the neighbors of a data point from $T_q$. Of course, the contribution from one task to itself should be positive and also the largest, i.e., $w_{ii} \ge 0$ and $-w_{ii} \le w_{ij} \le w_{ii}$ for $j \neq i$. When $w_{qr}$ $(q \neq r)$ approaches $w_{qq}$, it means $T_r$ is very similar to $T_q$ in local regions. At the other extreme, where $w_{qr}$ $(q \neq r)$ approaches $-w_{qq}$, if we flip the labels of the data points in $T_r$, then $T_r$ has a positive contribution $-w_{qr}$ to $T_q$, which indicates that $T_r$ is negatively correlated with $T_q$. Moreover, when $w_{qr}/w_{qq}$ $(q \neq r)$ is close to 0, which implies there is no contribution from $T_r$ to $T_q$, $T_r$ is likely to be unrelated to $T_q$. So the utilization of $\{w_{qr}\}$ can model three task relationships: positive task correlation, negative task correlation and task unrelatedness, as in [6, 20].
We use $f(x_i)$ to define the estimation function as $f(x_i) = \sum_{j \in N_k(i)} w_{t_i, t_j} s(i,j) y_j$. Then, similar to the support vector machine (SVM), we use the hinge loss $l(y, y') = \max(0, 1 - y y')$ to measure the empirical performance on the training data. Moreover, recall that $w_{qr}$ represents the contribution of $T_r$ to $T_q$ and $w_{rq}$ the contribution of $T_q$ to $T_r$. Since multi-task learning usually considers that the contribution of $T_r$ to $T_q$ almost equals that of $T_q$ to $T_r$, we expect $w_{qr}$ to be close to $w_{rq}$. To encode this prior information into our model, we can either formulate it as $w_{qr} = w_{rq}$, a hard constraint, or as a soft regularizer, i.e., minimizing $(w_{qr} - w_{rq})^2$ to enforce $w_{qr} \approx w_{rq}$, which is preferable.
Combining all the above considerations, we construct the objective function for our proposed method MT-KNN as

$$\min_W \; \sum_{i=1}^n l(y_i, f(x_i)) + \frac{\lambda_1}{4}\|W - W^T\|_F^2 + \frac{\lambda_2}{2}\|W\|_F^2 \quad \text{s.t. } w_{qq} \ge 0,\; w_{qq} \ge w_{qr} \ge -w_{qq}\;\; (q \neq r), \qquad (1)$$

where $W$ is an $m \times m$ matrix with $w_{qr}$ as its $(q,r)$th element and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. The first term in the objective function of problem (1) measures the training loss, the second one enforces $W$ to be close to a symmetric matrix, which implies $w_{qr} \approx w_{rq}$, and the last one penalizes the complexity of $W$. The regularization parameters $\lambda_1$ and $\lambda_2$ balance the trade-off between these three terms.
2.1 Optimization Procedure
In this section, we discuss how to solve problem (1). We first rewrite $f(x_i)$ as $f(x_i) = \sum_{j=1}^m w_{t_i j} \sum_{l \in N^j_k(i)} s(i,l) y_l = w_{t_i} \tilde{x}_i$, where $N^j_k(i)$ denotes the set of the indices of $x_i$'s nearest neighbors from the $j$th task in $N_k(i)$, $w_{t_i} = (w_{t_i 1}, \ldots, w_{t_i m})$ is the $t_i$th row of $W$, and $\tilde{x}_i$ is an $m \times 1$ vector with $j$th element $\sum_{l \in N^j_k(i)} s(i,l) y_l$. Then we can reformulate problem (1) as
$$\min_W \; \sum_{i=1}^m \sum_{j \in T_i} l(y_j, w_i \tilde{x}_j) + \frac{\lambda_1}{4}\|W - W^T\|_F^2 + \frac{\lambda_2}{2}\|W\|_F^2 \quad \text{s.t. } w_{qq} \ge 0,\; w_{qq} \ge w_{qr} \ge -w_{qq}\;\; (q \neq r). \qquad (2)$$
To solve problem (2), we use a coordinate descent method, also known as alternating optimization in the literature.
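To make the reformulation concrete, the following sketch (illustrative code of our own, not from the paper) precomputes the features $\tilde{x}_i$ and evaluates the objective of problem (2):

```python
import numpy as np

def build_features(S, y, t, neighbors, m):
    """x_tilde[i, j] = sum of s(i, l) * y_l over the neighbors l of point i
    that belong to task j, so that f(x_i) = w_{t_i} x_tilde_i.

    S is a precomputed n x n similarity matrix and neighbors[i] lists the
    k nearest neighbors of point i drawn from all tasks."""
    n = len(y)
    x_tilde = np.zeros((n, m))
    for i in range(n):
        for l in neighbors[i]:
            x_tilde[i, t[l]] += S[i, l] * y[l]
    return x_tilde

def objective(W, x_tilde, y, t, lam1, lam2):
    """Objective of problem (2): hinge losses plus the asymmetry and
    Frobenius-norm penalties."""
    f = np.einsum('ij,ij->i', W[t], x_tilde)   # f(x_i) = w_{t_i} x_tilde_i
    hinge = np.maximum(0.0, 1.0 - y * f).sum()
    reg = lam1 / 4 * ((W - W.T) ** 2).sum() + lam2 / 2 * (W ** 2).sum()
    return hinge + reg
```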
By adopting the hinge loss in problem (2), the optimization problem for $w_{ik}$ $(k \neq i)$ is formulated as

$$\min_{w_{ik}} \; \frac{\eta}{2} w_{ik}^2 - \eta_{ik} w_{ik} + \sum_{j \in T_i} \max(0, a^j_{ik} w_{ik} + b^j_{ik}) \quad \text{s.t. } c_{ik} \le w_{ik} \le e_{ik}, \qquad (3)$$

where $\eta = \lambda_1 + \lambda_2$, $\eta_{ik} = \lambda_1 w_{ki}$, $\tilde{x}^j_k$ is the $k$th element of $\tilde{x}_j$, $a^j_{ik} = -y_j \tilde{x}^j_k$, $b^j_{ik} = 1 - y_j \sum_{t \neq k} w_{it} \tilde{x}^j_t$, $c_{ik} = -w_{ii}$, and $e_{ik} = w_{ii}$. If the objective function of problem (3) only has
the first two terms, this problem will become a univariate quadratic programming (QP) problem
with a linear inequality constraint, leading to an analytical solution. Moreover, similar to SVM we
can introduce some slack variables for the third term in the objective function of problem (3) and
then that problem becomes a QP problem with $n_i + 1$ variables and $2n_i + 1$ linear constraints. We can use off-the-shelf software to solve this problem in polynomial time. However, the whole optimization procedure may not be very efficient, since we need to solve problem (3) and call a QP solver multiple times. Here we utilize the piecewise linear structure of the last term in the
objective function of problem (3) and propose a more efficient solution.
We assume all $a^j_{ik}$ are non-zero; otherwise we can discard the corresponding terms without affecting the solution, since their losses are constants. We define six index sets as
$$C_1 = \Big\{j \,\Big|\, a^j_{ik} > 0,\; -\tfrac{b^j_{ik}}{a^j_{ik}} < c_{ik}\Big\}, \quad C_2 = \Big\{j \,\Big|\, a^j_{ik} > 0,\; c_{ik} \le -\tfrac{b^j_{ik}}{a^j_{ik}} \le e_{ik}\Big\}, \quad C_3 = \Big\{j \,\Big|\, a^j_{ik} > 0,\; -\tfrac{b^j_{ik}}{a^j_{ik}} > e_{ik}\Big\},$$
$$C_4 = \Big\{j \,\Big|\, a^j_{ik} < 0,\; -\tfrac{b^j_{ik}}{a^j_{ik}} < c_{ik}\Big\}, \quad C_5 = \Big\{j \,\Big|\, a^j_{ik} < 0,\; c_{ik} \le -\tfrac{b^j_{ik}}{a^j_{ik}} \le e_{ik}\Big\}, \quad C_6 = \Big\{j \,\Big|\, a^j_{ik} < 0,\; -\tfrac{b^j_{ik}}{a^j_{ik}} > e_{ik}\Big\}.$$
It is easy to show that when $j \in C_1 \cup C_6$, where $\cup$ denotes the union of sets, $a^j_{ik} w + b^j_{ik} > 0$ holds for $w \in [c_{ik}, e_{ik}]$, corresponding to the set of data points with non-zero loss. Oppositely, when $j \in C_3 \cup C_4$, the corresponding losses become zero, since $a^j_{ik} w + b^j_{ik} \le 0$ holds for $w \in [c_{ik}, e_{ik}]$. The variation lies in the data points with indices $j \in C_2 \cup C_5$. We sort the sequence $\{-b^j_{ik}/a^j_{ik} \mid j \in C_2\}$ and record it in a vector $\mathbf{u}$ of length $d_u$ with $u_1 \le \ldots \le u_{d_u}$. Moreover, we keep an index mapping $\mathbf{q}^u$ with its $r$th element $q^u_r$ defined as $q^u_r = j$ if $u_r = -b^j_{ik}/a^j_{ik}$. Similarly, for the sequence $\{-b^j_{ik}/a^j_{ik} \mid j \in C_5\}$, we define a sorted vector $\mathbf{v}$ of length $d_v$ and the corresponding index mapping $\mathbf{q}^v$. Using the merge-sort algorithm, we merge $\mathbf{u}$ and $\mathbf{v}$ into a sorted vector $\mathbf{s}$, and then we add $c_{ik}$ and $e_{ik}$ to $\mathbf{s}$ as the minimum and maximum elements if they are not already contained in $\mathbf{s}$. Clearly, in each range $[s_l, s_{l+1}]$, where $s_l$ is the $l$th element of $\mathbf{s}$ and $d_s$ is the length of $\mathbf{s}$, problem (3) becomes a univariate QP problem with an analytical solution. So we can compute the local minima in the successive regions $[s_l, s_{l+1}]$ $(l = 1, \ldots, d_s - 1)$ and get the global minimum over the region $[c_{ik}, e_{ik}]$ by comparing all the local optima. The key operation is to compute the coefficients of the quadratic function over each region $[s_l, s_{l+1}]$, and we devise an algorithm in Table 1 which only needs to scan $\mathbf{s}$ once, leading to an efficient solution for problem (3).
Table 1: Algorithm for problem (3)

01: Construct the six sets $C_1$, $C_2$, $C_3$, $C_4$, $C_5$ and $C_6$;
02: Construct $\mathbf{u}$, $\mathbf{q}^u$, $\mathbf{v}$, $\mathbf{q}^v$ and $\mathbf{s}$;
03: Insert $c_{ik}$ and $e_{ik}$ into $\mathbf{s}$ if needed;
04: $c_0 := \sum_{j \in C_1 \cup C_2 \cup C_6} b^j_{ik}$;
05: $c_1 := \sum_{j \in C_1 \cup C_2 \cup C_6} a^j_{ik} - \eta_{ik}$;
06: $w := s_{d_s}$;
07: $o := c_0 + c_1 w + \eta w^2 / 2$;
for $l = d_s - 1$ to $1$
08: if $s_{l+1} = u_r$ for some $r$: $c_0 := c_0 - b^{q^u_r}_{ik}$; $c_1 := c_1 - a^{q^u_r}_{ik}$; end if
09: if $s_{l+1} = v_r$ for some $r$: $c_0 := c_0 + b^{q^v_r}_{ik}$; $c_1 := c_1 + a^{q^v_r}_{ik}$; end if
10: $w' := \min(s_{l+1}, \max(s_l, -c_1/\eta))$;
11: $o' := c_0 + c_1 w' + \eta w'^2 / 2$;
12: if $o' < o$: $w := w'$; $o := o'$; end if
13: $l := l - 1$;
end for

The first step of the algorithm in Table 1 needs $O(n_i)$ time to construct the six sets $C_1$ to $C_6$. In step 2, we need to sort two sequences to obtain $\mathbf{u}$ and $\mathbf{v}$ in $O(d_u \ln d_u + d_v \ln d_v)$ time and merge the two sequences to get $\mathbf{s}$ in $O(d_u + d_v)$. It then costs $O(n_i)$ to calculate the coefficients $c_0$ and $c_1$ by scanning $C_1$, $C_2$ and $C_6$ in steps 4 and 5. From step 6 to step 13, we need to scan the vector $\mathbf{s}$ once, which costs $O(d_u + d_v)$ time. The overall complexity of the algorithm in Table 1 is therefore $O(d_u \ln d_u + d_v \ln d_v + n_i)$, which is at most $O(n_i \ln n_i)$ since $d_u + d_v \le n_i$.

For $w_{ii}$, the optimization problem is formulated as

$$\min_{w_{ii}} \; \frac{\lambda_2}{2} w_{ii}^2 + \sum_{j \in T_i} \max(0, a^j_i w_{ii} + b^j_i) \quad \text{s.t. } w_{ii} \ge c_i, \qquad (4)$$

where $a^j_i = -y_j \tilde{x}^j_i$, $b^j_i = 1 - y_j \sum_{t \neq i} w_{it} \tilde{x}^j_t$, $c_i = \max(0, \max_{j \neq i}(|w_{ij}|))$, and $|\cdot|$ denotes the absolute value of a scalar. The main difference between problems (3) and (4) is that there is a box constraint on $w_{ik}$ in problem (3), whereas in problem (4) $w_{ii}$ is only lower-bounded. We define $e_i$ as $e_i = \max_j\{-b^j_i / a^j_i\}$ over all $a^j_i \neq 0$. For $w_{ii} \in [e_i, +\infty)$, the objective function of problem (4) can be reformulated as $\frac{\lambda_2}{2} w_{ii}^2 + \sum_{j \in S}(a^j_i w_{ii} + b^j_i)$, where $S = \{j \mid a^j_i > 0\}$, and the minimum over $[e_i, +\infty)$ is attained at $w^{(1)}_{ii} = \max\{e_i, -\sum_{j \in S} a^j_i / \lambda_2\}$. Then we can use the algorithm in Table 1 to find the minimizer $w^{(2)}_{ii}$ in the interval $[c_i, e_i]$ for problem (4). Finally, we choose the optimal solution to problem (4) from $\{w^{(1)}_{ii}, w^{(2)}_{ii}\}$ by comparing the corresponding values of the objective function.

Since the complexity of solving both problems (3) and (4) is $O(n_i \ln n_i)$, the complexity of one update of the whole matrix $W$ is $O(m \sum_{i=1}^m n_i \ln n_i)$. Usually the coordinate descent algorithm converges very fast in a small number of iterations, and hence the whole algorithm to solve problem (2) or (1) is very efficient.
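For reference, here is a compact Python version of the single-coordinate solver. Unlike Table 1, which updates $c_0$ and $c_1$ incrementally during one backward scan, this sketch recomputes the active hinge set on every segment for clarity; it is illustrative code under our own naming.

```python
import numpy as np

def solve_coordinate(eta, eta_ik, a, b, c, e):
    """Minimise  eta/2 * w^2 - eta_ik * w + sum_j max(0, a[j]*w + b[j])
    over w in [c, e] (problem (3)).  a and b are numpy arrays."""
    if c >= e:                       # degenerate interval, e.g. w_ii = 0
        return c
    knots = [-bj / aj for aj, bj in zip(a, b)
             if aj != 0 and c < -bj / aj < e]
    s = np.unique(np.concatenate(([c, e], knots)))
    best_w, best_o = c, np.inf
    for sl, sr in zip(s[:-1], s[1:]):
        mid = 0.5 * (sl + sr)
        active = a * mid + b > 0     # hinge terms positive on this segment
        c1 = a[active].sum() - eta_ik
        c0 = b[active].sum()
        w = min(max(-c1 / eta, sl), sr)    # clipped quadratic minimiser
        o = c0 + c1 * w + 0.5 * eta * w ** 2
        if o < best_o:
            best_w, best_o = w, o
    return best_w
```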
We can use other loss functions in problem (2) instead of the hinge loss, e.g., the square loss $l(s,t) = (s-t)^2$ as in the least squares SVM [10]. It is easy to show that problem (3) then has the analytical solution

$$w_{ik} = \min\Bigg(\max\Bigg(c_{ik},\; \frac{\eta_{ik} - 2\sum_{j \in T_i} a^j_{ik} b^j_{ik}}{\eta + 2\sum_{j \in T_i} (a^j_{ik})^2}\Bigg),\; e_{ik}\Bigg),$$

and the solution to problem (4) can be computed as

$$w_{ii} = \max\Bigg(c_i,\; \frac{-2\sum_{j \in T_i} a^j_i b^j_i}{\lambda_2 + 2\sum_{j \in T_i} (a^j_i)^2}\Bigg).$$

Then the computational complexity of the whole algorithm to solve problem (2) by adopting the square loss is $O(mn)$.
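A sketch of these closed-form updates, assuming the coefficient vectors $a$ and $b$ of problems (3) and (4) have already been assembled:

```python
import numpy as np

def update_w_ik_square(eta, eta_ik, a, b, c_ik, e_ik):
    """Closed-form solution of problem (3) under the square loss."""
    w = (eta_ik - 2 * np.dot(a, b)) / (eta + 2 * np.dot(a, a))
    return min(max(c_ik, w), e_ik)          # project onto the box

def update_w_ii_square(lam2, a, b, c_i):
    """Closed-form solution of problem (4) under the square loss."""
    return max(c_i, -2 * np.dot(a, b) / (lam2 + 2 * np.dot(a, a)))
```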
3 A Multi-Task Local Regressor based on Heterogeneous Neighborhood

In this section, we consider the situation where each task is a regression problem with each label $y_i \in \mathbb{R}$.

Similar to the classification case in the previous section, one candidate for a multi-task local regressor is a generalization of kernel regression, the counterpart of the KNN classifier for regression problems, and the estimation function can be formulated as

$$f(x_i) = \frac{\sum_{j \in N_k(i)} w_{t_i, t_j}\, s(i,j)\, y_j}{\sum_{j \in N_k(i)} w_{t_i, t_j}\, s(i,j)}, \qquad (5)$$

where $w_{qr}$ also represents the contribution of $T_r$ to $T_q$. Since the denominator of $f(x_i)$ is a linear combination of elements in each row of $W$ with data-dependent combination coefficients, if we used a formulation similar to problem (1) with the square loss, we would need to solve a complex and nonconvex fractional programming problem. For computational reasons, we resort to another way to construct the multi-task local regressor.
Recall that the estimation function for the classification case is formulated as $f(x_i) = \sum_{j=1}^m w_{t_i j}\big(\sum_{l \in N^j_k(i)} s(i,l) y_l\big)$. The expression in the brackets on the right-hand side can be viewed as a prediction for $x_i$ based on its neighbors in the $j$th task. Inspired by this observation, we can construct a prediction $\hat{y}^j_i$ for $x_i$ based on its neighbors from the $j$th task by utilizing any regressor, e.g., kernel regression or support vector regression. Due to the local nature of our proposed method, we choose kernel regression, which is a local regression method, as a good candidate, and hence $\hat{y}^j_i$ is formulated as $\hat{y}^j_i = \frac{\sum_{l \in N^j_k(i)} s(i,l)\, y_l}{\sum_{l \in N^j_k(i)} s(i,l)}$.
When $j$ equals $t_i$, which means we use neighboring data points from the task that $x_i$ belongs to, we can use this prediction with confidence. However, if $j \neq t_i$, we cannot totally trust the prediction and need to attach some weight $w_{t_i,j}$ as a confidence. Then, using the square loss, we formulate an optimization problem to get the estimation function $f(x_i)$ based on $\{\hat{y}^j_i\}$ as

$$f(x_i) = \arg\min_y \sum_{j=1}^m w_{t_i,j}\,(y - \hat{y}^j_i)^2 = \frac{\sum_{j=1}^m w_{t_i,j}\, \hat{y}^j_i}{\sum_{j=1}^m w_{t_i,j}}. \qquad (6)$$
Compared with the regression function of the direct extension of kernel regression to multi-task learning in Eq. (5), the denominator of our proposed regressor in Eq. (6) only involves the row summation of $W$, making the optimization problem easier to solve as we will see later. Since the scale of $w_{ij}$ does not affect the value of the estimation function in Eq. (6), we constrain the row summation of $W$ to be 1, i.e., $\sum_{j=1}^m w_{ij} = 1$ for $i = 1, \ldots, m$. Moreover, the estimation $\hat{y}^{t_i}_i$ using data from the same task as $x_i$ is more trustworthy than the estimations based on other tasks, which suggests that $w_{ii}$ should be the largest among the elements in the $i$th row. This constraint then implies that $w_{ii} \ge \frac{1}{m}\sum_k w_{ik} = \frac{1}{m} > 0$. To capture negative task correlations, $w_{ij}$ $(i \neq j)$ is only required to be a real scalar with $w_{ij} \ge -w_{ii}$. Combining the above considerations, we formulate an optimization problem as

$$\min_W \; \sum_{i=1}^m \sum_{j \in T_i} (w_i \tilde{y}_j - y_j)^2 + \frac{\lambda_1}{4}\|W - W^T\|_F^2 + \frac{\lambda_2}{2}\|W\|_F^2 \quad \text{s.t. } W\mathbf{1} = \mathbf{1},\; w_{ii} \ge w_{ij} \ge -w_{ii}, \qquad (7)$$

where $\mathbf{1}$ denotes a vector of all ones of appropriate size and $\tilde{y}_j = (\hat{y}^1_j, \ldots, \hat{y}^m_j)^T$. In the following section, we discuss how to optimize problem (7).
3.1 Optimization Procedure
Due to the linear equality constraints in problem (7), we cannot apply a coordinate descent method
to update variables one by one in a similar way to problem (2). However, similar to the SMO
algorithm [15] for SVM, we can update two variables in one row of W at one time to keep the linear
equality constraints valid.
We update each row one by one, and the optimization problem with respect to $w_i$ is formulated as

$$\min_{w_i} \; \frac{1}{2} w_i A w_i^T + w_i b^T \quad \text{s.t. } \sum_{j=1}^m w_{ij} = 1,\; -w_{ii} \le w_{ij} \le w_{ii} \;\; \forall j \neq i, \qquad (8)$$

where $A = 2\sum_{j \in T_i} \tilde{y}_j \tilde{y}_j^T + \lambda_1 I^i_m + \lambda_2 I_m$, $I_m$ is an $m \times m$ identity matrix, $I^i_m$ is a copy of $I_m$ with the $(i,i)$th element set to 0, $b = -2\sum_{j \in T_i} y_j \tilde{y}_j^T - \lambda_1 c^T_i$, and $c_i$ is the $i$th column of $W$ with its $i$th element set to 0. We define the Lagrangian as
$$J = \frac{1}{2} w_i A w_i^T + w_i b^T - \beta\Big(\sum_{j=1}^m w_{ij} - 1\Big) - \sum_{j \neq i} (w_{ii} - w_{ij})\alpha_j - \sum_{j \neq i} (w_{ii} + w_{ij})\gamma_j,$$

where $\beta$, $\{\alpha_j\}$ and $\{\gamma_j\}$ are Lagrange multipliers. The Karush-Kuhn-Tucker (KKT) optimality conditions are formulated as

$$\frac{\partial J}{\partial w_{ij}} = w_i a_j + b_j - \beta + \alpha_j - \gamma_j = 0, \;\text{ for } j \neq i \qquad (9)$$
$$\frac{\partial J}{\partial w_{ii}} = w_i a_i + b_i - \beta - \sum_{k \neq i} (\alpha_k + \gamma_k) = 0 \qquad (10)$$
$$\alpha_j \ge 0,\; (w_{ii} - w_{ij})\alpha_j = 0 \;\; \forall j \neq i \qquad (11)$$
$$\gamma_j \ge 0,\; (w_{ii} + w_{ij})\gamma_j = 0 \;\; \forall j \neq i, \qquad (12)$$

where $a_j$ is the $j$th column of $A$ and $b_j$ is the $j$th element of $b$. It is easy to show that $\alpha_j \gamma_j = 0$ for all $j \neq i$. When $w_{ij}$ satisfies $w_{ij} = w_{ii}$, according to Eq. (12) we have $\gamma_j = 0$, and further $w_i a_j + b_j = \beta - \alpha_j \le \beta$ according to Eq. (9). When $w_{ij} = -w_{ii}$, based on Eq. (11) we get $\alpha_j = 0$, and then $w_i a_j + b_j = \beta + \gamma_j \ge \beta$. For $w_{ij}$ between those two extremes (i.e., $-w_{ii} < w_{ij} < w_{ii}$), $\alpha_j = \gamma_j = 0$ according to Eqs. (11) and (12), which implies that $w_i a_j + b_j = \beta$. Moreover, Eq. (10) implies that $w_i a_i + b_i = \beta + \sum_{k \neq i}(\alpha_k + \gamma_k) \ge \beta$. We define the sets $S_1 = \{j \mid w_{ij} = w_{ii}, j \neq i\}$, $S_2 = \{j \mid -w_{ii} < w_{ij} < w_{ii}\}$, $S_3 = \{j \mid w_{ij} = -w_{ii}\}$, and $S_4 = \{i\}$. Then a feasible $w_i$ is a stationary point of problem (8) if and only if $\max_{j \in S_1 \cup S_2}\{w_i a_j + b_j\} \le \min_{k \in S_2 \cup S_3 \cup S_4}\{w_i a_k + b_k\}$. If there exists a pair of indices $(j,k)$, where $j \in S_1 \cup S_2$ and $k \in S_2 \cup S_3 \cup S_4$, satisfying $w_i a_j + b_j > w_i a_k + b_k$, then $\{j,k\}$ is called a violating pair. If the current estimate $w_i$ is not an optimal solution, there must exist some violating pairs. Our SMO algorithm updates a violating pair at each step by choosing the most violating pair $\{j,k\}$, with $j$ and $k$ defined as $j = \arg\max_{l \in S_1 \cup S_2}\{w_i a_l + b_l\}$ and $k = \arg\min_{l \in S_2 \cup S_3 \cup S_4}\{w_i a_l + b_l\}$. We define the update rule for $w_{ij}$ and $w_{ik}$ as $\tilde{w}_{ij} = w_{ij} + t$ and $\tilde{w}_{ik} = w_{ik} - t$. Noting that $j$ cannot be $i$, $t$ should satisfy the following constraints to keep the updated solution feasible:

when $k = i$: $\; t - w_{ik} \le w_{ij} + t \le w_{ik} - t$, and $t - w_{ik} \le w_{il} \le w_{ik} - t \;\; \forall l \neq j,\, l \neq k$;
when $k \neq i$: $\; -w_{ii} \le w_{ij} + t \le w_{ii}$ and $-w_{ii} \le w_{ik} - t \le w_{ii}$.

When $k = i$, this gives a constraint on $t$ of the form $t \le e \equiv \min\big(\frac{w_{ik} - w_{ij}}{2},\, \min_{l \neq j, l \neq k}(w_{ik} - |w_{il}|)\big)$, and otherwise $t$ must satisfy $c \le t \le e$, where $c = \max(w_{ik} - w_{ii}, -w_{ij} - w_{ii})$ and $e = \min(w_{ii} - w_{ij}, w_{ii} + w_{ik})$. Then the optimization problem for $t$ can be unified as

$$\min_t \; \frac{a_{jj} + a_{ii} - 2a_{ji}}{2} t^2 + (w_i a_j + b_j - w_i a_i - b_i)\, t \quad \text{s.t. } c \le t \le e,$$

where for the case $k = i$, $c$ is set to $-\infty$. This problem has the analytical solution

$$t = \min\Big(e,\; \max\Big(c,\; \frac{w_i a_i + b_i - w_i a_j - b_j}{a_{jj} + a_{ii} - 2a_{ji}}\Big)\Big).$$

We update each row of $W$ one by one until convergence.
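A minimal sketch of one such SMO step is given below. We write the curvature and gradient of the step-size subproblem for the general pair $(j,k)$, which recovers the expression above in the case $k = i$; the tolerance and helper names are assumptions of our own.

```python
import numpy as np

def smo_step(w, A, b, i, tol=1e-10):
    """One pair update of the row w = W[i] for problem (8).

    g[l] = w a_l + b_l, where a_l is the l-th column of A and b_l the
    l-th entry of b; the index sets follow the KKT analysis above.
    Returns False when no violating pair remains."""
    g = w @ A + b
    m = len(w)
    S1 = [l for l in range(m) if l != i and np.isclose(w[l], w[i])]
    S3 = [l for l in range(m) if l != i and np.isclose(w[l], -w[i])]
    S2 = [l for l in range(m) if l != i and l not in S1 and l not in S3]
    dec = S1 + S2            # candidates for j: w_il may decrease
    inc = S2 + S3 + [i]      # candidates for k: w_il may increase
    if not dec:
        return False
    j = max(dec, key=lambda l: g[l])
    k = min(inc, key=lambda l: g[l])
    if g[j] <= g[k] + tol:   # KKT conditions hold: stationary point
        return False
    # Feasible range for the step t in w_ij += t, w_ik -= t (t < 0 here).
    if k == i:
        lo = -np.inf
        hi = min((w[k] - w[j]) / 2,
                 min((w[k] - abs(w[l]) for l in range(m)
                      if l != j and l != k), default=np.inf))
    else:
        lo = max(w[k] - w[i], -w[j] - w[i])
        hi = min(w[i] - w[j], w[i] + w[k])
    curv = A[j, j] + A[k, k] - 2 * A[j, k]   # > 0 since A is positive definite
    t = float(np.clip((g[k] - g[j]) / curv, lo, hi))
    w[j] += t
    w[k] -= t
    return True
```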
4 Experiments

In this section, we test the empirical performance of our proposed methods on some toy data and real-world problems.

4.1 Toy Problems
We first use one UCI dataset, the diabetes data, to analyze the learned matrix $W$. The diabetes data consist of 768 data points from two classes. We randomly select $p$ percent of the data points to form the training set of each of two learning tasks. The regularization parameters $\lambda_1$ and $\lambda_2$ are fixed to 1 and the number of nearest neighbors is set to 5. When $p = 20$ and $p = 40$, the means of the estimated $W$ over 10 trials are

$$\begin{pmatrix} 0.1025 & 0.1011 \\ 0.0980 & 0.1056 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0.1014 & 0.1004 \\ 0.1010 & 0.1010 \end{pmatrix}.$$

This result shows that $w_{ij}$ $(j \neq i)$ is very close to $w_{ii}$ for $i = 1, 2$. This observation implies our method can find that these two tasks are positively correlated, which matches our expectation since the two tasks are drawn from the same distribution.
For the second experiment, we again randomly select $p$ percent of the data points to form the training set of each of two learning tasks, but we flip the labels of one task so that the two tasks should be negatively correlated. The matrices $W$ learned for $p = 20$ and $p = 40$ are

$$\begin{pmatrix} 0.1019 & -0.1017 \\ -0.1007 & 0.1012 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0.1019 & -0.0999 \\ -0.0997 & 0.1038 \end{pmatrix}.$$

We can see that $w_{ij}$ $(j \neq i)$ is very close to $-w_{ii}$ for $i = 1, 2$, which is what we expect.
As the third problem, we construct two learning tasks as in the first one but flip 50% of the class labels in each class of the two tasks. Here the two tasks can be viewed as unrelated tasks since the label assignment is random. The estimated matrices $W$ for $p = 20$ and $p = 40$ are

$$\begin{pmatrix} 0.1575 & 0.0398 \\ 0.0144 & 0.1281 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0.1015 & 0.0081 \\ -0.0003 & 0.1077 \end{pmatrix},$$

where $w_{ij}$ $(i \neq j)$ is much smaller than $w_{ii}$. From the structure of the estimates, we can see that the two tasks are more likely to be unrelated, matching our expectation. In summary, our method can learn the positive correlations, negative correlations and task unrelatedness for these toy problems.
4.2 Experiments on Classification Problems
Two multi-task classification problems are used in our experiments.

The first problem we investigate is a handwritten letter classification application consisting of seven tasks, each of which is to distinguish two letters. The corresponding letters for each task to classify are: c/e, g/y, m/n, a/g, a/o, f/t and h/n. Each class in each task has about 1000 data points, which have 128 features corresponding to the pixel values of handwritten letter images. The second one is the USPS digit classification problem, which consists of nine binary classification tasks, each of which is to classify two digits. Each task contains about 1000 data points with 255 features for each class.

Table 2: Comparison of the classification errors of different methods on the two classification problems in the form of mean±std.

Method          Letter          USPS
KNN             0.0775±0.0053   0.0445±0.0131
mtLMNN          0.0511±0.0053   0.0141±0.0038
MTFL            0.0505±0.0038   0.0140±0.0025
MT-KNN(hinge)   0.0466±0.0023   0.0114±0.0013
MT-KNN(square)  0.0494±0.0028   0.0124±0.0014
Here the similarity function we use is a heat kernel $s(i,j) = \exp\{-\|x_i - x_j\|_2^2 / (2\sigma^2)\}$, where $\sigma$ is set to the mean pairwise Euclidean distance among the training data. We use 5-fold cross validation to determine the optimal $\lambda_1$ and $\lambda_2$, whose candidate values are chosen from $n \cdot \{0.01, 0.1, 0.5, 1, 5, 10, 100\}$, and the optimal number of nearest neighbors from $\{5, 10, 15, 20\}$. The classification error is used as the performance measure. We compare our method, denoted MT-KNN, with the KNN classifier, which is a single-task learning method; the multi-task large margin nearest neighbor (mtLMNN) method [14] (code: http://www.cse.wustl.edu/~kilian/code/files/mtLMNN.zip), which is a multi-task local learning method based on the homogeneous neighborhood; and the multi-task feature learning (MTFL) method [2], which is a global method for multi-task learning. By utilizing the hinge and square losses, we also consider two variants of our MT-KNN method. To mimic the real-world situation where the training data are usually limited, we randomly select 20% of the whole data as training data and use the rest to form the test set. The random selection is repeated 10 times and we record the results in Table 2. From the results, we can see that our method MT-KNN is better than the KNN, mtLMNN and MTFL methods, which demonstrates that the introduction of the heterogeneous neighborhood helps to improve the performance. Between the two loss functions utilized by our method, MT-KNN with the hinge loss is better than that with the square loss due to the robustness of the hinge loss.

Figure 2: Comparison of the average running time (in seconds) over 100 trials between our proposed coordinate descent methods and the CVX solver on the classification and regression problems (Letter, USPS, Robot).
For those two problems, we also compare our proposed coordinate descent method described in
Table 1 with some off-the-shelf solvers such as the CVX solver [11] with respect to the running
time. The platform to run the experiments is a desktop with Intel i7 CPU 2.7GHz and 8GB RAM
and we use Matlab 2009b for implementation and experiments. We record the average running time
over 100 trials in Figure 2 and from the results we can see that on the classification problems above,
our proposed coordinate descent method is much faster than the CVX solver which demonstrates
the efficiency of our proposed method.
4.3 Experiments on Regression Problems

Here we study a multi-task regression problem to learn the inverse dynamics of a seven degree-of-freedom SARCOS anthropomorphic robot arm (data: http://www.gaussianprocess.org/gpml/data/). The objective is to predict seven joint torques based
on 21 input features, corresponding to seven joint positions, seven joint velocities and seven joint
accelerations. So each task corresponds to the prediction of one torque and can be formulated as a
regression problem. Each task has 2000 data points. The similarity function used here is also the heat
kernel and 5-fold cross validation is used to determine the hyperparameters, i.e., ?1 , ?2 and k. The
performance measure used is normalized mean squared error (nMSE), which is mean squared error
on the test data divided by the variance of the ground truth. We compare our method, denoted MT-KR, with single-task kernel regression (KR) and the multi-task feature learning (MTFL) method under different configurations of the training set size. Compared with the KR and MTFL methods, our method
achieves better performance over different sizes of the training sets. Moreover, for our proposed
coordinate descent method introduced in section 3.1, we compare it with CVX solver and record
the results in the last two columns of Figure 2. We find the running time of our proposed method is
much smaller than that of the CVX solver which demonstrates that the proposed coordinate descent
method can speed up the computation of our MT-KR method.
Figure 3: Comparison of different methods (KR, MTFL and MT-KR; nMSE versus training set size) on the robot arm application when varying the size of the training set.
4.4 Sensitivity Analysis
Here we test the sensitivity of the performance
with respect to the number of nearest neighbors.
By changing the number of nearest neighbors
from 5 to 40 at an interval of 5, we record the
mean of the performance of our method over 10
trials in Figure 4. From the results, we can see
our method is not very sensitive to the number
of nearest neighbors, which makes the setting
of k not very difficult.
Figure 4: Sensitivity analysis of the performance of our method with respect to the number of nearest neighbors on the different data sets (Letter, USPS, Robot).

5 Conclusion
In this paper, we develop local learning methods for multi-task classification and regression problems. Based on an assumption that all task pairs contribute to each other almost equally,
we propose regularized objective functions and develop efficient coordinate descent methods to
solve them. Up to here, each task in our studies is a binary classification problem. In some applications, there may be more than two classes in each task. So we are interested in an extension of our
method to multi-task multi-class problems. Currently the task-specific weights are shared by all data
points from one task. One interesting research direction is to investigate a localized variant where
different data points have different task-specific weights based on their locality structure.
Acknowledgment
Yu Zhang is supported by the HKBU "Start Up Grant for New Academics".
References
[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, 2005.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 41–48, Vancouver, British Columbia, Canada, 2006.
[3] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research, 4:83–99, 2003.
[4] J. Baxter. A Bayesian/information theoretic model of learning to learn via multiple task sampling. Machine Learning, 28(1):7–39, 1997.
[5] J. C. Bezdek and R. J. Hathaway. Convergence of alternating optimization. Neural, Parallel & Scientific Computations, 11(4):351–368, 2003.
[6] E. Bonilla, K. M. A. Chai, and C. Williams. Multi-task Gaussian process prediction. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 153–160, Vancouver, British Columbia, Canada, 2007.
[7] L. Bottou and V. Vapnik. Local learning algorithms. Neural Computation, 4(6):888–900, 1992.
[8] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[9] T. Evgeniou and M. Pontil. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109–117, Seattle, Washington, USA, 2004.
[10] T. V. Gestel, J. A. K. Suykens, B. Baesens, S. Viaene, J. Vanthienen, G. Dedene, B. De Moor, and J. Vandewalle. Benchmarking least squares support vector machine classifiers. Machine Learning, 54(1):5–32, 2004.
[11] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, 2011.
[12] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: a convex formulation. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 745–752, Vancouver, British Columbia, Canada, 2008.
[13] A. Kumar and H. Daumé III. Learning task grouping and overlap in multi-task learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012.
[14] S. Parameswaran and K. Weinberger. Large margin multi-task metric learning. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1867–1875, 2010.
[15] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods: Support Vector Learning. MIT Press, 1998.
[16] S. Thrun. Is learning the n-th thing any easier than learning the first? In D. S. Touretzky, M. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 640–646, Denver, CO, 1995.
[17] S. Thrun and J. O'Sullivan. Discovering structure in multiple learning tasks: The TC algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning, pages 489–497, Bari, Italy, 1996.
[18] M. Wu and B. Schölkopf. A local learning approach for clustering. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1529–1536, Vancouver, British Columbia, Canada, 2006.
[19] M. Wu, K. Yu, S. Yu, and B. Schölkopf. Local learning projections. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, pages 1039–1046, Corvallis, Oregon, USA, 2007.
[20] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pages 733–742, Catalina Island, California, 2010.
Learning Feature Selection Dependencies in
Multi-task Learning
José Miguel Hernández-Lobato
Department of Engineering
University of Cambridge
jmh233@cam.ac.uk
Daniel Hernández-Lobato
Computer Science Department
Universidad Autónoma de Madrid
daniel.hernandez@uam.es
Abstract
A probabilistic model based on the horseshoe prior is proposed for learning dependencies in the process of identifying relevant features for prediction. Exact
inference is intractable in this model. However, expectation propagation offers
an approximate alternative. Because the process of estimating feature selection
dependencies may suffer from over-fitting in the model proposed, additional data
from a multi-task learning scenario are considered for induction. The same model
can be used in this setting with few modifications. Furthermore, the assumptions
made are less restrictive than in other multi-task methods: The different tasks
must share feature selection dependencies, but can have different relevant features
and model coefficients. Experiments with real and synthetic data show that this
model performs better than other multi-task alternatives from the literature. The
experiments also show that the model is able to induce suitable feature selection
dependencies for the problems considered, only from the training data.
1 Introduction
Many linear regression problems are characterized by a large number d of features or explaining
attributes and by a reduced number n of training instances. In this large d but small n scenario
there is an infinite number of potential model coefficients that explain the training data perfectly
well. To avoid over-fitting problems and to obtain estimates with good generalization properties, a
typical regularization is to assume that the model coefficients are sparse, i.e., most coefficients are
equal to zero [1]. This is equivalent to considering that only a subset of the features or attributes
are relevant for prediction. The sparsity assumption can be introduced by carrying out Bayesian
inference under a sparsity enforcing prior for the model coefficients [2, 3], or by minimizing a loss
function penalized by some sparse regularizer [4, 5]. Among the priors that enforce sparsity, the
horseshoe has some attractive properties that are very convenient for the scenario described [3]. In
particular, this prior has heavy tails, to model coefficients that significantly differ from zero, and an
infinitely tall spike at the origin, to favor coefficients that take negligible values.
The estimation of the coefficients under the sparsity assumption can be improved by introducing
dependencies in the process of determining which coefficients are zero [6, 7]. An extreme case of
these dependencies appears in group feature selection methods in which groups of coefficients are
considered to be jointly equal or different from zero [8, 9]. However, a practical limitation is that
the dependency structure (groups) is often assumed to be given. Here, we propose a model based on
the horseshoe prior that induces the dependencies in the feature selection process from the training
data. These dependencies are expressed by a correlation matrix that is specified by O(d) parameters.
Unfortunately, the estimation of these parameters from the training data is difficult since we consider
n < d instances only. Thus, over-fitting problems are likely to appear. To improve the estimation
process we assume a multi-task learning setting, where several learning tasks share feature selection
dependencies. The method proposed can be adapted to such a scenario with few modifications.
Traditionally, methods for multi-task learning under the sparsity assumption have considered common relevant and irrelevant features among tasks [8, 10, 11, 12, 13, 14]. Nevertheless, recent research cautions against this assumption when the supports and values of the coefficients for each
task can vary widely [15]. The model proposed here limits the impact of this problem because it has fewer restrictions. The tasks used for induction can have, besides different model coefficients,
different relevant features. They must share only the dependency structure for the selection process.
The model described here is most related to the method for sparse coding introduced in [16], where
spike-and-slab priors [2] are considered for multi-task linear regression under the sparsity assumption and dependencies in the feature selection process are specified by a Boltzmann machine. Fitting
exactly the parameters of a Boltzmann machine to the observed data has exponential cost in the number of dimensions of the learning problem. Thus, when compared to the proposed model, the model
considered in [16] is particularly difficult to train. For this, an approximate algorithm based on
block-coordinate optimization has been described in [17]. The algorithm alternates between greedy
MAP estimation of the sparsity patterns of each task and maximum pseudo-likelihood estimation of
the Boltzmann parameters. Nevertheless, this algorithm lacks a proof of convergence, and we have observed that it is prone to getting trapped in sub-optimal solutions.
Our experiments with real and synthetic data show the better performance of the model proposed
when compared to other methods that try to overcome the problem of different supports among
tasks. These methods include the model described in [16] and the model for dirty data proposed
in [15]. These experiments also illustrate the benefits of the proposed model for inducing dependencies in the feature selection process. Specifically, the dependencies obtained are suitable for the
multi-task learning problems considered. Finally, a difficulty of the model proposed is that exact
Bayesian inference is intractable. Therefore, expectation propagation (EP) is employed for efficient
approximate inference. In our model EP has a cost that is O(Kn2 d), where K is the number of
learning tasks, n is the number of samples of each task, and d is the dimensionality of the data.
The rest of the paper is organized as follows: Section 2 describes the proposed model for learning
feature selection dependencies. Section 3 shows how to use expectation propagation to approximate
the quantities required for induction. Section 4 compares this model with others from the literature
on synthetic and real data regression problems. Finally, Section 5 gives the conclusions of the paper
and some ideas for future work.
2 A Model for Learning Feature Selection Dependencies
We describe a linear regression model that can be used for learning dependencies in the process
of identifying relevant features or attributes for prediction. For simplicity, we first deal with the
case of a single learning task. Then, we show how this model can be extended to address multitask learning problems. In the single task scenario we consider some training data in the form of
$n$ $d$-dimensional vectors summarized in a design matrix $X = (x_1, \ldots, x_n)^T$ and associated targets $y = (y_1, \ldots, y_n)^T$, with $y_i \in \mathbb{R}$. A linear predictive rule is assumed for $y$ given $X$. Namely, $y = Xw + \epsilon$, where $w$ is a vector of latent coefficients and $\epsilon$ is a vector of independent Gaussian noise with variance $\sigma^2$, i.e., $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$. Given $X$ and $y$, the likelihood for $w$ is:
$$p(y|X, w) = \prod_{i=1}^n p(y_i|x_i, w) = \prod_{i=1}^n \mathcal{N}(y_i|w^T x_i, \sigma^2) = \mathcal{N}(y|Xw, \sigma^2 I). \qquad (1)$$
Consider the under-determined scenario n < d. In this case, the likelihood is not strictly concave
and infinitely many values of w fit the training data perfectly well. A strong regularization technique
that is often used in this context is to assume that only some features are relevant for prediction [1].
This is equivalent to assuming that w is sparse with many zeros. This inductive bias can be naturally
incorporated into the model using a horseshoe sparsity enforcing prior for w [3].
The horseshoe prior lacks a closed form but can be defined as a scale mixture of Gaussians:
$$p(w|\tau) = \prod_{j=1}^d p(w_j|\tau), \qquad p(w_j|\tau) = \int \mathcal{N}(w_j|0, \lambda_j^2 \tau^2)\, \mathcal{C}^+(\lambda_j|0,1)\, d\lambda_j, \qquad (2)$$
where $\lambda_j$ is a latent scale for coefficient $w_j$, $\mathcal{C}^+(\cdot|0,1)$ is a half-Cauchy distribution with zero location and unit scale, and $\tau > 0$ is a global shrinkage parameter that controls the level of sparsity. The
smaller the value of $\tau$, the sparser the prior, and vice-versa. Figure 1 (left) and (middle) show a comparison of the horseshoe with other priors from the literature. The horseshoe has an infinitely tall spike at the origin, which favors coefficients with small values, and heavy tails, which favor coefficients that take values significantly different from zero. Furthermore, assume that $\tau = \sigma^2 = 1$ and that $X = I$, and define $\rho_j = 1/(1 + \lambda_j^2)$. Then, the posterior mean for $w_j$ is $(1 - \rho_j) y_j$, where $\rho_j$ is a random shrinkage coefficient that can be interpreted as the amount of weight placed at the origin [3]. Figure 1 (right) shows the prior density for $\rho_j$ that results from the horseshoe. It is from the shape of this figure that the horseshoe takes its name. We note that one expects to see two things under this prior: relevant coefficients ($\rho_j \approx 0$, no shrinkage) and zeros ($\rho_j \approx 1$, total shrinkage). The horseshoe is therefore very convenient for the sparsity-inducing scenario described before.
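The shape of this shrinkage profile is easy to reproduce numerically. The sketch below (illustrative code of our own) draws horseshoe coefficients using the half-Cauchy scale, obtained as the absolute ratio of two standard normals, and computes the shrinkage coefficients $\rho_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_horseshoe(d, tau, n_samples):
    """Draw coefficients from the horseshoe prior of Eq. (2):
    w_j ~ N(0, lambda_j^2 tau^2) with lambda_j ~ C+(0, 1).  A half-Cauchy
    variable is the absolute ratio of two standard normal variables."""
    lam = np.abs(rng.standard_normal((n_samples, d)) /
                 rng.standard_normal((n_samples, d)))
    return rng.standard_normal((n_samples, d)) * lam * tau

# Shrinkage coefficients rho_j = 1 / (1 + lambda_j^2) pile up near 0
# (no shrinkage) and near 1 (total shrinkage) -- the horseshoe shape.
lam = np.abs(rng.standard_normal(100_000) / rng.standard_normal(100_000))
rho = 1.0 / (1.0 + lam ** 2)
print(np.mean(rho < 0.05), np.mean(rho > 0.95))  # both noticeably above zero
```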
Figure 1: (left) Density of different priors, horseshoe, Gaussian, Student-t and Laplace near the
origin. Note the infinitely tall spike of the horseshoe. (middle) Tails of the different priors considered
before. (right) Prior density of the shrinkage coefficient $\rho_j$ for the horseshoe prior.
A limitation of the horseshoe is that it does not consider dependencies in the feature selection process. Specifically, the fact that one feature is actually relevant for prediction has no impact at all
in the prior relevancy or irrelevancy of other features. We now describe how to introduce these
dependencies in the horseshoe. Consider the definition of a Cauchy distribution as the ratio of two
independent standard Gaussian random variables [18]. An equivalent representation of the prior is:
$$p(w|\upsilon^2, \omega^2) = \int \prod_{j=1}^d \mathcal{N}(w_j|0, u_j^2/v_j^2)\, \mathcal{N}(u_j|0, \upsilon^2)\, \mathcal{N}(v_j|0, \omega^2)\, du_j\, dv_j, \qquad (3)$$
where $u_j$ and $v_j$ are latent variables introduced for each dimension $j$. In particular, $\lambda_j = u_j \omega / (v_j \upsilon)$. Furthermore, $\tau$ has been incorporated into the priors for $u_j$ and $v_j$ using $\tau^2 = \upsilon^2 / \omega^2$. The latent variables $u_j$ and $v_j$ can be interpreted as indicators of the relevance or irrelevance of feature $j$. The larger $u_j^2$, the more relevant the feature. Conversely, the larger $v_j^2$, the more irrelevant.
?
?
Z Y
d
p(w|?2 , ? 2 , C) = ?
N (wj |0, u2j /vj2 )? N (u|0, ?2 C) N (v|0, ? 2 C) dudv ,
(4)
j=1
where u = (u1 , . . . , ud )T , v = (v1 , . . . , vd )T , C is a correlation matrix that specifies the dependencies in the feature selection process, and ?2 and ? 2 act as regularization parameters that control the
level of sparsity. When C = I, (4) factorizes and gives the same prior as the one in (2) and (3). In
practice, however, C has to be estimated from the data. This can be problematic since it will involve
the estimation of O(d2 ) free parameters which can lead to over-fitting. To alleviate this problem and
also to allow for efficient approximate inference we consider a special form for C:
$$C = \Phi M \Phi, \qquad M = D + PP^T, \qquad \Phi = \mathrm{diag}\big(1/\sqrt{M_{11}}, \ldots, 1/\sqrt{M_{dd}}\big), \qquad (5)$$

where $\mathrm{diag}(a_1, \ldots, a_d)$ denotes a diagonal matrix with entries $a_1, \ldots, a_d$; $D$ is a diagonal matrix whose entries are all equal to some small positive constant (this matrix guarantees that $C^{-1}$ exists); the products by $\Phi$ ensure that the entries of $C$ are in the range $(-1, 1)$; and $P$ is a $d \times m$ matrix of real entries which specifies the correlation structure of $C$. Thus, $C$ is fully determined by $P$ and has only $O(md)$ free parameters, with $m < d$. The value of $m$ is a regularization parameter that limits the complexity of $C$: the larger its value, the more expressive $C$ is. For computational reasons described later on, we set $m$ equal to $n$, the number of data instances, in our experiments.
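For concreteness, a small sketch (with names we chose, not the paper's) that builds $C$ from $P$ as in Eq. (5):

```python
import numpy as np

def build_correlation(P, d_const=1e-3):
    """Construct the correlation matrix of Eq. (5) from the d x m matrix P.
    D is a diagonal matrix with a small positive constant so that the
    inverse of C exists."""
    M = d_const * np.eye(P.shape[0]) + P @ P.T
    phi = 1.0 / np.sqrt(np.diag(M))
    return phi[:, None] * M * phi[None, :]

# Example: d = 5 features described by m = 2 latent factors.
P = np.random.default_rng(1).standard_normal((5, 2))
C = build_correlation(P)
assert np.allclose(np.diag(C), 1.0)   # unit diagonal; off-diagonals in (-1, 1)
```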
2.1 Inference, Prediction and Learning Feature Selection Dependencies
Denote by $z = (w^T, u^T, v^T)^T$ the vector of latent variables of the model described above. Based on the formulation of the previous section, the joint probability distribution of $y$ and $z$ is:

$$p(y, z|X, \sigma^2, \upsilon^2, \omega^2, C) = \mathcal{N}(y|Xw, \sigma^2 I)\, \mathcal{N}(u|0, \upsilon^2 C)\, \mathcal{N}(v|0, \omega^2 C) \prod_{j=1}^d \mathcal{N}(w_j|0, u_j^2/v_j^2). \qquad (6)$$
Figure 2 shows the factor graph corresponding to this joint probability distribution. This graph summarizes the interactions between the random variables in the model. All the factors in (6) are Gaussian, except the ones corresponding to the prior for $w_j$ given $u_j$ and $v_j$, $\mathcal{N}(w_j|0, u_j^2/v_j^2)$. Given the observed targets $y$, one is typically interested in inferring the latent variables $z$ of the model. For this, Bayes' theorem can be used:

$$p(z|X, y, \sigma^2, \upsilon^2, \omega^2, C) = \frac{p(y, z|X, \sigma^2, \upsilon^2, \omega^2, C)}{p(y|X, \sigma^2, \upsilon^2, \omega^2, C)}, \qquad (7)$$

where the numerator on the r.h.s. of (7) is the joint distribution (6) and the denominator is simply a normalization constant (the model evidence), which can be used for Bayesian model selection [19].
The posterior distribution in (7) is useful for computing a predictive distribution for the target $y_{\mathrm{new}}$ associated with a new unseen data instance $x_{\mathrm{new}}$:

$$p(y_{\mathrm{new}}|x_{\mathrm{new}}, X, y, \sigma^2, \upsilon^2, \omega^2, C) = \int p(y_{\mathrm{new}}|x_{\mathrm{new}}, w)\, p(z|X, y, \sigma^2, \upsilon^2, \omega^2, C)\, dz. \qquad (8)$$
Similarly, one can marginalize (7) with respect to w to obtain a posterior distribution for u and v
which can be useful to identify the most relevant or irrelevant features.
Ideally, however, one should also infer $C$, the correlation matrix that describes the dependencies in the feature selection process, and compute a posterior distribution for it. This can be complicated, even for approximate inference methods. Denote by $Z$ the model evidence, i.e., the denominator on the r.h.s. of (7). A simpler alternative is to use gradient ascent to maximize $\log Z$ (and therefore $Z$) with respect to $P$, the matrix that completely specifies $C$. This corresponds to type-II maximum likelihood (ML) estimation and allows us to determine $P$ from the training data alone, without resorting to cross-validation [19]. The gradient of $\log Z$ with respect to $P$, i.e., $\partial \log Z / \partial P$, can be used for this task. The other hyper-parameters of the model, $\sigma^2$, $\upsilon^2$ and $\omega^2$, can be found following a similar approach.

Unfortunately, neither (7), (8) nor the model evidence can be computed in closed form. Specifically, it is not possible to compute the required integrals analytically. Thus, one has to resort to approximate inference. For this, we use expectation propagation [20]. See Section 3 for details.

Figure 2: Factor graph of the probabilistic model. The factor $f(\cdot)$ corresponds to the likelihood $\mathcal{N}(y|Xw, \sigma^2 I)$, and each $g_j(\cdot)$ to the prior for $w_j$ given $u_j$ and $v_j$, $\mathcal{N}(w_j|0, u_j^2/v_j^2)$. Finally, $h_u(\cdot)$ and $h_v(\cdot)$ correspond to $\mathcal{N}(u|0, \upsilon^2 C)$ and $\mathcal{N}(v|0, \omega^2 C)$, respectively. Only the targets $y$ are observed; the other variables are latent.

2.2 Extension to the Multi-Task Learning Setting
In the single-task learning setting, maximizing the model evidence with respect to P is not expected to be effective at improving the prediction accuracy. The reason is the difficulty of obtaining an accurate estimate of P. This matrix has m × d free parameters, and these have to be induced from a small number of n < d training instances. The estimation process is hence likely to be affected by over-fitting. One way to mitigate over-fitting problems is to consider additional data in the estimation process. These additional data may come from a multi-task learning setting, where there are K related but different tasks available for induction. A simple assumption is that all these tasks share a common dependency structure C for the feature selection process, although the model coefficients and the actual relevant features may differ between tasks. This assumption is less restrictive than assuming jointly relevant and irrelevant features across tasks, and it can be incorporated into the described model with few modifications. By using the data from the K tasks for the estimation of P, we expect to obtain better estimates and to improve the prediction accuracy.
Assume there are K learning tasks available for induction and that each task k = 1, ..., K consists of a design matrix X_k with n_k d-dimensional data instances and target values y_k. As in (1), a linear predictive rule with additive Gaussian noise σ_k² is considered for each task. Let w_k be the model coefficients of task k. Assume for the model coefficients of each task a horseshoe prior as the one specified in (4), with a shared correlation matrix C but with task-specific hyper-parameters η_k² and τ_k². Denote by u_k and v_k the vectors of latent Gaussian variables of the prior for task k. Similarly, let z_k = (w_kᵀ, u_kᵀ, v_kᵀ)ᵀ be the vector of latent variables of task k. Then, the joint posterior distribution of the latent variables of the different tasks factorizes as follows:

p({z_k}_{k=1}^K | {X_k, y_k, σ_k², η_k², τ_k²}_{k=1}^K, C) = ∏_{k=1}^K p(y_k, z_k | X_k, σ_k², η_k², τ_k², C) / p(y_k | X_k, σ_k², η_k², τ_k², C),   (9)
where each factor in the r.h.s. of (9) is given by (7). This indicates that the K models can be learnt independently given C and σ_k², η_k², τ_k² ∀k. Denote by Z_MT the denominator in the r.h.s. of (9), i.e., Z_MT = ∏_{k=1}^K p(y_k | X_k, σ_k², η_k², τ_k², C) = ∏_{k=1}^K Z_k, with Z_k the evidence for task k. Then, Z_MT is the model evidence for the multi-task setting. As in single-task learning, specific values for the hyper-parameters of each task and for C can be found by a type-II maximum likelihood (ML) approach. For this, log Z_MT is maximized using gradient ascent. Specifically, the gradient of log Z_MT with respect to σ_k², η_k², τ_k² and P can be easily computed in terms of the gradient of each log Z_k. In summary, if there is a method to approximate the required quantities for learning a single task with the proposed model, implementing a multi-task learning method that assumes shared feature selection dependencies but task-dependent hyper-parameters is straightforward.
3 Approximate Inference
Expectation propagation (EP) [20] is used to approximate the posterior distribution and the evidence of the model described in Section 2. For clarity of presentation we focus on the model for a single learning task; the multi-task extension of Section 2.2 is straightforward. Consider the posterior distribution of z in (7). Up to a normalization constant, this distribution can be written as
p(z | X, y, σ², η², τ²) ∝ f(w) h_u(u) h_v(v) ∏_{j=1}^d g_j(z),   (10)
where the factors in the r.h.s. of (10) are displayed in Figure 2. Note that all factors except the g_j's are Gaussian. EP approximates (10) by a distribution q(z) ∝ f(w) h_u(u) h_v(v) ∏_{j=1}^d g̃_j(z), which is obtained by replacing each non-Gaussian factor g_j in (10) with an approximate factor g̃_j that is Gaussian but need not be normalized. Since the Gaussian distribution belongs to the exponential family of distributions, which is closed under the product and division operations [21], q is Gaussian with natural parameters equal to the sum of the natural parameters of each factor.
EP iteratively updates each g̃_j until convergence by first computing q^{\j} ∝ q/g̃_j and then minimizing the Kullback-Leibler (KL) divergence between g_j q^{\j} and q^new, KL(g_j q^{\j} || q^new), with respect to q^new. The new approximate factor is obtained as g̃_j^new = s_j q^new/q^{\j}, where s_j is the normalization constant of g_j q^{\j}. This update rule ensures that g̃_j looks similar to g_j in regions of high posterior probability in terms of q^{\j} [20]. Minimizing the KL divergence is a convex problem whose optimum is found by matching the means and the covariance matrices of g_j q^{\j} and q^new. These expectations can be readily obtained from the derivatives of log s_j with respect to the natural parameters of q^{\j} [21]. Unfortunately, the computation of s_j is intractable under the horseshoe. As a practical alternative, our EP implementation employs numerical quadrature to evaluate s_j and its derivatives. Importantly, g_j, and therefore g̃_j, depend only on w_j, u_j and v_j, so a three-dimensional quadrature will suffice. However, using similar arguments to those in [7], more efficient alternatives exist. Assume that q^{\j}(w_j, u_j, v_j) = N(w_j | m_j, ν_j) N(u_j | 0, κ_j) N(v_j | 0, ρ_j), i.e., q^{\j} factorizes with respect to w_j, u_j and v_j, and the means of u_j and v_j are zero. Since g_j is symmetric with respect to u_j and v_j, then E[u_j] = E[v_j] = E[u_j v_j] = E[u_j w_j] = E[v_j w_j] = 0 under g_j q^{\j}. Thus, if the initial approximate factors g̃_j factorize with respect to w_j, u_j and v_j, and have zero mean with respect to u_j and v_j, any updated factor will also satisfy these properties and q^{\j} will have the assumed form. The crucial point here is that the dependencies introduced by g_j do not lead to correlations that need to be tracked under a Gaussian approximation. In this situation, the integral of g_j q^{\j} with respect to w_j is given by the convolution of two Gaussians, and the integral of the result with respect to u_j and v_j can be simplified using arguments similar to those employed to obtain (3). Namely,
s_j = ∫ N(m_j | 0, ν_j + (κ_j/ρ_j) λ_j²) C⁺(λ_j | 0, 1) dλ_j,   (11)
where m_j, ν_j, κ_j and ρ_j are the parameters of q^{\j}, and C⁺(· | 0, 1) denotes the standard half-Cauchy density. The derivatives of log s_j with respect to the natural parameters of q^{\j} can also be evaluated using a one-dimensional quadrature. Therefore, each update of g̃_j requires five quadratures: one to evaluate s_j and four to evaluate its derivatives.
Instead of sequentially updating each g̃_j, we follow [7] and update these factors in parallel. For this, we compute all q^{\j} at the same time and then update each g̃_j. The marginals of q are strictly required for this task. These can be efficiently obtained using the low-rank representation of the covariance matrix of q that results from the fact that all the g̃_j's are factorizing univariate Gaussians and from the assumed form for C in (5). Specifically, if m (the number of columns of P) is equal to n, the cost of this operation (and hence the cost of EP) is O(n²d).
to n, the cost of this operation (and hence the cost of EP) is O(n2 d). Lastly, we damp the update of
each g?j as follows: g?j = (?
gjnew )? (?
gjold )1?? , where g?jnew and g?jold respectively denote the new and the
old g?j , and ? ? [0, 1] is a parameter that controls the amount of damping. Damping significantly
improves the convergence of EP and leaves the fixed points of the algorithm invariant [22].
After EP has converged, q can be used instead of the exact posterior in (8) to make predictions. Similarly, the model evidence in (7) can be approximated by Z̃, the normalization constant of q:
Z̃ = ∫ f(w) h_u(u) h_v(v) ∏_{j=1}^d g̃_j(z) dw du dv.   (12)
Since all the factors in (12) are Gaussian, log Z̃ can be readily computed and maximized with respect to σ², η², τ² and P to find good values for these hyper-parameters. Specifically, once EP has converged, the gradient of the natural parameters of the g̃_j's with respect to these hyper-parameters is zero [21]. Thus, the gradient of log Z̃ with respect to σ², η², τ² and P can be computed in terms of the gradient of the exact factors. The derivations are long and tedious and hence omitted here, but by careful consideration of the covariance structure of q it is possible to limit the complexity of these computations to O(n²d) if m is equal to n. Therefore, to fit a model that maximizes log Z̃, we alternate between running EP to obtain the estimate of log Z̃ and its gradient, and taking a gradient-ascent step to maximize this estimate with respect to σ², η², τ² and P. The derivation details of the EP algorithm and an R-code implementation of it can be found in the supplementary material.
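A sketch of the resulting outer loop, with run_ep and grad_hyper as hypothetical stand-ins for the EP fixed-point computation and the evidence gradient (the step size and iteration count are arbitrary):

```python
def fit(hyper, steps=50, lr=1e-2):
    """Alternate EP (inner loop) with gradient ascent on log Z~ (outer loop)."""
    for _ in range(steps):
        q, logZ = run_ep(hyper)            # EP fixed point for the current hypers
        g = grad_hyper(q, hyper)           # gradient of log Z~ w.r.t. the hypers
        hyper = {k: v + lr * g[k] for k, v in hyper.items()}
    return hyper
```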
4 Experiments
We carry out experiments to evaluate the performance of the model described in Section 2. We refer to this model as HSDep. Other methods from the literature are also evaluated. The first one, HSST, is a particular case of HSDep that is obtained when each task is learnt independently and correlations in the feature selection process are ignored (i.e., C = I). A multi-task learning model, HSMT, which assumes common relevant and irrelevant features among tasks, is also considered. The details of this model are omitted, but it follows [10] closely. It assumes a horseshoe prior in which the scale parameters λ_j in (2) are shared among tasks, i.e., each feature is either relevant or irrelevant in all tasks. A variant of HSMT, SSMT, is also evaluated. SSMT considers a spike-and-slab prior for joint feature selection across all tasks, instead of a horseshoe prior. The details about the prior of SSMT are given in [10]. EP is used for approximate inference in both HSMT and SSMT. The dirty model, DM, described in [15] is also considered. This model assumes shared relevant and irrelevant features among tasks. However, some tasks are allowed to have specific relevant features. For this, a loss function is minimized via combined ℓ₁ and ℓ₁/ℓ∞ block regularization. Particular cases of DM are the lasso [4] and the group lasso [8]. Finally, we evaluate the model introduced in [16]. This model, BM, uses spike-and-slab priors for feature selection and specifies dependencies in this process using a Boltzmann machine. BM is trained using the approximate block-coordinate algorithm described in [17]. All models considered assume Gaussian additive noise around the targets.
4.1 Experiments with Synthetic Data
A first batch of experiments is carried out using synthetic data. We generate K = 64 different tasks of n = 64 samples and d = 128 features. In each task, the entries of X_k are sampled from a standard Gaussian distribution and the model coefficients, w_k, are all set to zero except for the i-th group of 8 consecutive coefficients, with i chosen randomly for each task from the set {1, 2, ..., 16}. The values of these 8 non-zero coefficients are uniformly distributed in the interval [−1, 1]. Thus, in each task there are only 8 relevant features for prediction. Given each X_k and each w_k, the targets y_k are obtained using (1) with σ_k² = 0.5 ∀k. The hyper-parameters of each method are set as follows: in HSST, η_k² and τ_k² are found by type-II ML. In HSMT, η² and τ² are set to the average values found by HSST for η_k² and τ_k², respectively. In SSMT, the parameters of the spike-and-slab prior are found by type-II ML. In HSDep, m = n; furthermore, η_k² and τ_k² take the values found by HSST while P is obtained using type-II ML. In all models we set the variance of the noise for task k, σ_k², equal to 0.5. Finally, in DM we try different hyper-parameters and report the best results observed. After training each model on the data, we measure the average reconstruction error of w_k. Denote by ŵ_k the estimate of the model coefficients for task k (this is the posterior mean except in BM and DM). The reconstruction error is measured as ||ŵ_k − w_k||₂ / ||w_k||₂, where ||·||₂ is the ℓ₂-norm and w_k are the exact coefficients of task k.

Figure 3 (top) shows the average reconstruction error of each method over 50 repetitions of the experiments described. HSDep obtains the lowest error. The observed differences in performance are significant according to a Student's t-test (p-value < 5%). BM performs worse than HSDep because the greedy MAP estimation of the sparsity patterns of each task is sometimes trapped in sub-optimal solutions. The poor results of HSMT, SSMT and DM are due to the assumption made by these models of all tasks sharing relevant features, which is not satisfied. Figure 3 (bottom) shows the average entries in absolute value of the correlation matrix C estimated by HSDep. The matrix has a block diagonal form, with blocks of size 8 × 8 (8 is the number of relevant coefficients in each task). Thus, within each block the corresponding latent variables u_j and v_j are strongly correlated, indicating jointly relevant or irrelevant features. This is the expected estimation for the scenario considered.

Figure 3: (top) Average reconstruction error of each method:

Method | Error
HSST   | 0.29 ± 0.01
HSMT   | 0.38 ± 0.03
SSMT   | 0.77 ± 0.01
DM     | 0.37 ± 0.01
BM     | 0.24 ± 0.02
HSDep  | 0.21 ± 0.01

(bottom) Average absolute value of the entries of the matrix C estimated by HSDep in gray scale (white = 0 and black = 1). Black squares are groups of jointly relevant / irrelevant features.

4.2 Reconstruction of Images of Hand-written Digits from the MNIST
A second batch of experiments considers the reconstruction of images of hand-written digits extracted from the MNIST data set [23]. These images are in gray scale with pixel values between 0 and 255. Most pixels are inactive and equal to 0. Thus, the images are sparse and suitable to be reconstructed using the proposed model. The images are reduced to size 10 × 10 pixels and the pixel intensities are normalized to lie in the interval [0, 1]. Then, K = 100 tasks of n = 75 samples each are generated. For this, we randomly choose 50 images corresponding to the digit 3 and 50 images corresponding to the digit 5 (these digits are chosen because they differ significantly). Results similar to the ones reported here (not shown) are obtained for other pairs of digits. For each task, the entries of X_k are sampled from a standard Gaussian. The model coefficients, w_k, are simply the pixel values of each image (i.e., d = 100). Importantly, unlike in the previous experiments, the model coefficients are not synthetically generated but correspond to actual images. Furthermore, since the tasks contain images of different digits, they are expected to have different relevant features. Given X_k and w_k, the targets y_k are generated using (1) with σ_k² = 0.1 ∀k. The objective is to reconstruct w_k from X_k and y_k for each task k. The hyper-parameters are set as in Section 4.1 with σ_k² = 0.1 ∀k. The reconstruction error is also measured as in that section.
Figure 4 (top) shows the average reconstruction error of each method over 50 repetitions of the experiments described. Again, HSDep performs best, and the differences in performance are also statistically significant. The second best result corresponds to HSMT, probably due to background pixels which are irrelevant in all the tasks and to the heavy tails of the horseshoe prior. HSST, SSMT, BM and DM perform significantly worse. DM performs poorly probably because of the inferior shrinkage properties of the ℓ₁ norm compared to the horseshoe [3]. The poor results of SSMT are due to the lack of heavy tails in the spike-and-slab prior. In BM we have observed that the greedy MAP estimation of the task supports is more frequently trapped in sub-optimal solutions; furthermore, the algorithm described in [17] fails to converge most times in this scenario. Figure 4 (right, bottom) shows a representative subset of the images reconstructed by each method. The best reconstructions correspond to HSDep. Finally, Figure 4 (left, bottom) shows in gray scale the average correlations in absolute value induced by HSDep for the selection process of each pixel of the image with respect to the selection of a particular pixel, which is displayed in green. Correlations are high to avoid the selection of background pixels and to select pixels that actually correspond to the digits 3 and 5. The correlations induced are hence appropriate for the multi-task problem considered.
Figure 4: (top) Average reconstruction error of each method:

Method | Error
HSST   | 0.36 ± 0.02
HSMT   | 0.25 ± 0.02
SSMT   | 0.39 ± 0.01
DM     | 0.37 ± 0.01
BM     | 0.52 ± 0.03
HSDep  | 0.20 ± 0.01

(left, bottom) Average absolute value of the correlation, in gray scale (white = 0 and black = 1), between the latent variables u_j and v_j corresponding to the pixel displayed in green and the variables u_j and v_j corresponding to all the other pixels of the image. (right, bottom) Examples of actual and reconstructed images by each method. The best reconstruction results correspond to HSDep.
5 Conclusions and Future Work
We have described a linear sparse model for learning dependencies in the feature selection process. The model can be used in a multi-task learning setting with several tasks available for induction that need not share relevant features, but only dependencies in the feature selection process. Exact inference is intractable in such a model. However, expectation propagation provides an efficient approximate alternative with a cost of O(Kn²d), where K is the number of tasks, n is the number of samples per task, and d is the dimensionality of the data. Experiments with real and synthetic data illustrate the benefits of the proposed method. Specifically, this model performs better than other multi-task alternatives from the literature. Our experiments also show that the proposed model is able to induce relevant feature selection dependencies from the training data alone. Future research directions include the evaluation of this model in practical problems of sparse coding, i.e., when all tasks share a common design matrix X that has to be induced from the data alongside the model coefficients, with potential applications to image denoising and image inpainting [24].
Acknowledgment: Daniel Hernández-Lobato is supported by the Spanish MCyT (Ref. TIN2010-21575-C02-02). José Miguel Hernández-Lobato is supported by Infosys Labs, Infosys Limited.
References
[1] I. M. Johnstone and D. M. Titterington. Statistical challenges of high-dimensional data. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906):4237, 2009.
[2] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[3] C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. Journal of Machine Learning Research W&CP, 5:73–80, 2009.
[4] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[5] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[6] J. M. Hernández-Lobato, D. Hernández-Lobato, and A. Suárez. Network-based sparse Bayesian classification. Pattern Recognition, 44:886–900, 2011.
[7] M. Van Gerven, B. Cseke, R. Oostenveld, and T. Heskes. Bayesian source localization with the multivariate Laplace prior. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1901–1909, 2009.
[8] J. E. Vogt and V. Roth. The group-lasso: ℓ1,∞ regularization versus ℓ1,2 regularization. In Goesele et al., editors, 32nd Annual Symposium of the German Association for Pattern Recognition, volume 6376, pages 252–261. Springer, 2010.
[9] Y. Kim, J. Kim, and Y. Kim. Blockwise sparse regression. Statistica Sinica, 16(2):375, 2006.
[10] D. Hernández-Lobato, J. M. Hernández-Lobato, T. Helleputte, and P. Dupont. Expectation propagation for Bayesian multi-task feature selection. In J. L. Balcázar, F. Bonchi, A. Gionis, and M. Sebag, editors, Proceedings of the European Conference on Machine Learning, volume 6321, pages 522–537. Springer, 2010.
[11] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, pages 1–22, 2009.
[12] T. Xiong, J. Bi, B. Rao, and V. Cherkassky. Probabilistic joint feature selection for multi-task learning. In Proceedings of the Seventh SIAM International Conference on Data Mining, pages 332–342. SIAM, 2007.
[13] T. Jebara. Multi-task feature and kernel selection for SVMs. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 55–62. ACM, 2004.
[14] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 41–48. MIT Press, Cambridge, MA, 2007.
[15] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 964–972, 2010.
[16] P. Garrigues and B. Olshausen. Learning horizontal connections in a sparse coding model of natural images. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 505–512. MIT Press, Cambridge, MA, 2008.
[17] T. Peleg, Y. C. Eldar, and M. Elad. Exploiting statistical dependencies in sparse representations for signal recovery. IEEE Transactions on Signal Processing, 60(5):2286–2303, 2012.
[18] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, 1984.
[19] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, August 2006.
[20] T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[21] M. W. Seeger. Expectation propagation for exponential families. Technical report, Department of EECS, University of California, Berkeley, 2006.
[22] T. Minka. Power EP. Technical report, Carnegie Mellon University, Department of Statistics, 2004.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[24] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11:19–60, 2010.
Parametric Task Learning
Tatsuya Hongo
Nagoya Institute of Technology
Nagoya, 466-8555, Japan
hongo.mllab.nit@gmail.com
Ichiro Takeuchi
Nagoya Institute of Technology
Nagoya, 466-8555, Japan
takeuchi.ichiro@nitech.ac.jp
Masashi Sugiyama
Tokyo Institute of Technology
Tokyo, 152-8552, Japan
sugi@cs.titech.ac.jp
Shinichi Nakajima
Nikon Corporation
Tokyo, 140-8601, Japan
nakajima.s@nikon.co.jp
Abstract
We introduce an extended formulation of multi-task learning (MTL) called parametric task learning (PTL) that can systematically handle infinitely many tasks
parameterized by a continuous parameter. Our key finding is that, for a certain
class of PTL problems, the path of the optimal task-wise solutions can be represented as piecewise-linear functions of the continuous task parameter. Based on
this fact, we employ a parametric programming technique to obtain the common
shared representation across all the continuously parameterized tasks. We show
that our PTL formulation is useful in various scenarios such as learning under
non-stationarity, cost-sensitive learning, and quantile regression. We demonstrate
the advantage of our approach in these scenarios.
1 Introduction
Multi-task learning (MTL) has been studied for learning multiple related tasks simultaneously. A
key assumption behind MTL is that there exists a common shared representation across the tasks.
Many MTL algorithms attempt to find such a common representation and at the same time to learn
multiple tasks under that shared representation. For example, we can enforce all the tasks to share a
common feature subspace or a common set of variables by using an algorithm introduced in [1, 2]
that alternately optimizes the shared representation and the task-wise solutions.
Although the standard MTL formulation can handle only a finite number of tasks, it is sometimes more natural to consider infinitely many tasks parameterized by a continuous parameter, e.g., in learning under non-stationarity [3] where learning problems change over continuous time, cost-sensitive learning [4] where loss functions are asymmetric with continuous cost balance, and quantile regression [5] where the quantile is a continuous variable between zero and one. In order to handle these infinitely many parametrized tasks, we propose in this paper an extended formulation of MTL called parametric-task learning (PTL).
The key contribution of this paper is to show that, for a certain class of PTL problems, the optimal common representation shared across infinitely many parameterized tasks is obtainable. Specifically, we develop an alternating minimization algorithm à la [1, 2] for finding the entire continuum of solutions and the common feature subspace (or the common set of variables) among infinitely
many parameterized tasks. Our algorithm exploits the fact that, for those classes of PTL problems,
the path of task-wise solutions is piecewise-linear in the task parameter. We use the parametric
programming technique [6, 7, 8, 9] for computing those piecewise linear solutions.
Notations: Let us denote by R, R₊, and R₊₊ the sets of real, nonnegative, and positive numbers, respectively, while we define N_n := {1, ..., n} for every natural number n. We denote by S^d₊₊ the set of d × d positive definite matrices, and let I(·) be the indicator function.
2 Review of Multi-Task Learning (MTL)
In this section, we review an MTL method developed in [1, 2]. Let {(x_i, y_i)}_{i∈N_n} be the set of n training instances, where x_i ∈ X ⊆ R^d is the input and y_i ∈ Y is the output. We define w_i(t) ∈ [0, 1], t ∈ N_T as the weight of the ith instance for the tth task, where T is the number of tasks. We consider an affine model f_t(x) = β_{t,0} + β_tᵀx for each task, where β_{t,0} ∈ R and β_t ∈ R^d. For notational simplicity, we define the augmented vectors β̃ := (β_0, β_1, ..., β_d)ᵀ ∈ R^{d+1} and x̃ := (1, x_1, ..., x_d)ᵀ ∈ R^{d+1}, and write the affine model as f_t(x) = β̃_tᵀx̃.
The multi-task feature learning method discussed in [1] is formulated as

min_{{β̃_t}_{t∈N_T}, D∈S^d₊₊, tr(D)≤1}  Σ_{t∈N_T} Σ_{i∈N_n} w_i(t) ℓ_t(r(y_i, β̃_tᵀx̃_i)) + λ Σ_{t∈N_T} β_tᵀ D⁻¹ β_t,   (1)

where tr(D) is the trace of D, ℓ_t : R → R₊ is the loss function for the tth task incurred on the residual r(y_i, β̃_tᵀx̃_i)¹, and λ > 0 is the regularization parameter². It was shown in [1] that the problem (1) is equivalent to
(1) is equivalent to
? ?
?
? i )) + ||B||2tr ,
min
wi (t)?t (r(yi , ??t? x
?t }t?N
T
{?
T
t?NT i?NN
where B is the d × T matrix whose tth column is given by the vector β_t, and ||B||_tr := tr((BBᵀ)^{1/2}) is the trace norm of B. As shown in [10], the trace norm is the convex upper envelope of the rank of B, and (1) can be interpreted as the problem of finding a common feature subspace across T tasks. This problem is often referred to as multi-task feature learning. If the matrix D is restricted to be diagonal, the formulation (1) is reduced to multi-task variable selection [11, 12].

In order to solve the problem (1), the alternating minimization algorithm was suggested in [1] (see Algorithm 1). This algorithm alternately optimizes the task-wise solutions {β̃_t}_{t∈N_T} and the common representation matrix D. It is worth noting that, when D is fixed, each β̃_t can be independently optimized (Step 1). On the other hand, when {β̃_t}_{t∈N_T} are fixed, the optimization of the matrix D can be reduced to a minimization over the d eigenvalues μ_1, ..., μ_d of the matrix C := BBᵀ, and the optimal D can be analytically computed (Step 2).
Parametric-Task Learning (PTL)
We consider the case where we have infinitely many tasks parametrized by a single continuous
parameter. Let ? ? [?L , ?U ] be a continuous task parameter. Instead of the set of weights wi (t), t ?
NT , we consider a weight function wi : [?L , ?U ] ? [0, 1] for each instance i ? Nn . In PTL, we
learn a parameter vector ??? ? Rd+1 as a continuous function of the task parameter ?:
? ?U ?
? ?U
?
?
? i )) d? + ?
min
wi (?) ?? (r(yi , ?? x
??? D?1 ?? d?,
(2)
?? }??[? ,? ]
{?
L U
d
D?S++
,tr(D)?1
?L
?L
i?Nn
where, note that, the loss function ?? possibly depends on ?.
As we will explain in the next section, the above PTL formulation is useful in various important machine learning scenarios including learning under non-stationarity, cost-sensitive learning, and quantile regression.

¹ For example, r(y_i, β̃_tᵀx̃_i) = (y_i − β̃_tᵀx̃_i)² for regression problems with y_i ∈ R, while r(y_i, β̃_tᵀx̃_i) = 1 − y_i β̃_tᵀx̃_i for binary classification problems with y_i ∈ {−1, 1}.
² In [1], w_i(t) takes either 1 or 0. It takes 1 only if the ith instance is used in the tth task. We slightly generalize the setup so that each instance can be used in multiple tasks with different weights.
Algorithm 1 ALTERNATING MINIMIZATION ALGORITHM FOR MTL [1]
1: Input: Data {(x_i, y_i)}_{i∈N_n} and weights {w_i(t)}_{i∈N_n, t∈N_T};
2: Initialize: D ← I_d/d (I_d is the d × d identity matrix)
3: while convergence condition is not true do
4:   Step 1: For t = 1, ..., T do
        β̃_t ← argmin_{β̃}  Σ_{i∈N_n} w_i(t) ℓ_t(r(y_i, β̃ᵀx̃_i)) + λ βᵀD⁻¹β
5:   Step 2:
        D ← C^{1/2} / tr(C^{1/2}) = argmin_{D∈S^d₊₊, tr(D)≤1}  Σ_{t∈N_T} β_tᵀD⁻¹β_t,
     where C := BBᵀ, whose (j, k)th element is defined as C_{j,k} := Σ_{t∈N_T} β_{tj} β_{tk}.
6: end while
7: Output: {β̃_t}_{t∈N_T} and D;
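For reference, Step 2 has the closed form D ← C^{1/2}/tr(C^{1/2}). A minimal sketch via eigendecomposition, with a small eps guarding rank-deficient C:

```python
import numpy as np

def update_D(B, eps=1e-12):
    """Closed-form Step 2 of Algorithm 1: D = C^{1/2} / tr(C^{1/2}), C = B B^T."""
    C = B @ B.T
    evals, evecs = np.linalg.eigh(C)       # C is symmetric positive semidefinite
    root = evecs @ np.diag(np.sqrt(np.maximum(evals, eps))) @ evecs.T
    return root / np.trace(root)           # normalization enforces tr(D) = 1
```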
However, at first glance, the PTL optimization problem (2) seems computationally intractable, since we need to find infinitely many task-wise solutions as well as the common feature subspace (or the common set of variables, if D is restricted to be diagonal) shared by infinitely many tasks.

Our key finding is that, for a certain class of PTL problems, when D is fixed, the optimal path of the task-wise solutions β̃_θ is piecewise-linear in θ. By exploiting this piecewise-linearity, we can efficiently handle infinitely many parameterized tasks, and the optimal solutions of these classes of PTL problems can be computed exactly.
In the following theorem, we prove that the task-wise solution β̃_θ is piecewise-linear in θ if the weight functions and the loss function satisfy certain conditions.
Theorem 1  For any d × d positive-definite matrix D ∈ S^d₊₊, the optimal solution path of

β̃_θ ← argmin_{β̃}  Σ_{i∈N_n} w_i(θ) ℓ_θ(r(y_i, β̃ᵀx̃_i)) + λ βᵀD⁻¹β   (3)

for θ ∈ [θ_L, θ_U] is written as a piecewise-linear function of θ if the residual r(y, β̃ᵀx̃) can be written as an affine function of β̃, and the weight functions w_i : [θ_L, θ_U] → [0, 1], i ∈ N_n and the loss function ℓ : R → R₊ satisfy either of the following conditions (a) or (b):
(a) All the weight functions are piecewise-linear functions, and the loss function is a convex piecewise-linear function which does not depend on θ;

(b) All the weight functions are piecewise-constant functions, and the loss function is a convex piecewise-linear function which depends on θ in the following form:

ℓ_θ(r) = Σ_{h∈N_H} max{(a_h + b_h r)(c_h + d_h θ), 0},   (4)

where H is a positive integer, and a_h, b_h, c_h, d_h ∈ R are constants such that c_h + d_h θ ≥ 0 for all θ ∈ [θ_L, θ_U].
In the proof in Appendix A, we show that, if the weight functions and the loss function satisfy condition (a) or (b), the problem (3) can be reformulated as a parametric quadratic program (parametric QP), in which the parameter θ appears only in the linear term of the objective function. As shown, for example, in [9], the optimal solution path of this class of parametric QPs has a piecewise-linear form.

If β̃_θ is piecewise-linear in θ, we can exactly compute the entire solution path by using parametric programming. In machine learning literature, parametric programming is often used in the context of regularization path-following [13, 14, 15]³.
Algorithm 2 ALTERNATING MINIMIZATION ALGORITHM FOR PTL
1: Input: Data {(x_i, y_i)}_{i∈N_n} and weight functions w_i : [θ_L, θ_U] → [0, 1] for all i ∈ N_n;
2: Initialize: D ← I_d/d (I_d is the d × d identity matrix)
3: while convergence condition is not true do
4:   Step 1: For all the continuum of θ ∈ [θ_L, θ_U], do
        β̃_θ ← argmin_{β̃}  Σ_{i∈N_n} w_i(θ) ℓ_θ(r(y_i, β̃ᵀx̃_i)) + λ βᵀD⁻¹β
     by using parametric programming;
5:   Step 2:
        D ← C^{1/2} / tr(C^{1/2}) = argmin_{D∈S^d₊₊, tr(D)≤1}  ∫_{θ_L}^{θ_U} β_θᵀD⁻¹β_θ dθ,   (5)
     where the (j, k)th element of C ∈ R^{d×d} is defined as C_{j,k} := ∫_{θ_L}^{θ_U} β_{θ,j} β_{θ,k} dθ;
6: end while
7: Output: {β̃_θ} for θ ∈ [θ_L, θ_U] and D;
We start from the solution at θ = θ_L, and follow the path of the optimal solutions while θ is continuously increased. This is efficiently conducted by exploiting the piecewise-linearity.
Our proposed algorithm for solving the PTL problem (2) is described in Algorithm 2, which is essentially a continuous version of the MTL algorithm shown in Algorithm 1. Note that, by exploiting
the piecewise linearity of ?? , we can compute the integral at Step 2 (Eq. (5)) in Algorithm 2.
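Between breakpoints the solution is affine in θ, so the entire continuum can be stored as a finite list of breakpoints and interpolated on demand. A minimal sketch, assuming the breakpoints and the corresponding solutions have already been produced by the parametric-programming sweep:

```python
import numpy as np

def beta_at(theta, breakpoints, betas):
    """Evaluate the piecewise-linear solution path at a task parameter theta.

    breakpoints: sorted 1-D array of theta values where the path kinks;
    betas: array of shape (len(breakpoints), d+1) with the solutions there.
    """
    j = np.clip(np.searchsorted(breakpoints, theta), 1, len(breakpoints) - 1)
    t0, t1 = breakpoints[j - 1], breakpoints[j]
    w = (theta - t0) / (t1 - t0)
    return (1.0 - w) * betas[j - 1] + w * betas[j]   # linear interpolation
```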
Algorithm 2 can be changed to parametric-task variable selection if Step 2 is replaced with

D ← diag(λ_1, ..., λ_d),  where  λ_j = √(∫_{θ_L}^{θ_U} β_{θ,j}² dθ) / Σ_{j'∈N_d} √(∫_{θ_L}^{θ_U} β_{θ,j'}² dθ)  for all j ∈ N_d,

which can also be computed efficiently by exploiting the piecewise linearity of β_θ.
4 Examples of PTL Problems
In this section, we present three examples where our PTL formulation (2) is useful.
Binary Classification Under Non-Stationarity  Suppose that we observe n training instances sequentially, and denote them as {(x_i, y_i, τ_i)}_{i∈N_n}, where x_i ∈ R^d, y_i ∈ {−1, 1}, and τ_i is the time when the ith instance is observed. Without loss of generality, we assume that τ_1 < ... < τ_n. Under non-stationarity, if we are requested to learn a classifier to predict the output for a test input x observed at time τ, the training instances observed around time τ should have more influence on the classifier than others.
Let w_i(τ) denote the weight of the ith instance when training a classifier for a test point at time τ. We can, for example, use the following triangular weight function (see Figure 1):

w_i(τ) = 1 + s⁻¹(τ_i − τ)  if τ − s ≤ τ_i < τ;   1 − s⁻¹(τ_i − τ)  if τ ≤ τ_i < τ + s;   0  otherwise,   (6)

where s > 0 determines the width of the triangular time window. The problem of training a classifier for time τ is then formulated as

min_{β̃}  Σ_{i∈N_n} w_i(τ) max(0, 1 − y_i β̃ᵀx̃_i) + λ ||β||₂²,

where we used the hinge loss.
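A direct transcription of the triangular window in (6):

```python
def tri_weight(tau_i, tau, s):
    """Weight of an instance observed at tau_i for the classifier at time tau."""
    d = tau_i - tau
    if -s <= d < 0:
        return 1.0 + d / s   # ramping up towards tau
    if 0 <= d < s:
        return 1.0 - d / s   # ramping down after tau
    return 0.0               # outside the window of width 2s
```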
³ In regularization path-following, one computes the optimal solution path w.r.t. the regularization parameter, whereas here we compute the optimal solution path w.r.t. the task parameter θ.
Figure 1: Examples of weight functions {w_i(τ)}_{i∈N_n} in non-stationary time-series learning. Given training instances (x_i, y_i) observed at times τ_i for i = 1, ..., n under non-stationarity, it is reasonable to use the weights {w_i(τ)}_{i∈N_n} shown here when we learn a classifier to predict the output of a test input at time τ.
If we believe that a set of classifiers for different times should have some common structure, we can apply our PTL approach to this problem. If we consider a time interval τ ∈ [τ_L, τ_U], the parametric-task feature learning problem is formulated as

min_{{β̃_τ}_{τ∈[τ_L,τ_U]}, D∈S^d₊₊, tr(D)≤1}  ∫_{τ_L}^{τ_U} Σ_{i∈N_n} w_i(τ) max(0, 1 − y_i β̃_τᵀx̃_i) dτ + λ ∫_{τ_L}^{τ_U} β_τᵀD⁻¹β_τ dτ.   (7)
Note that the problem (7) satisfies the condition (a) in Theorem 1.
Joint Cost-Sensitive Learning  Next, let us consider cost-sensitive binary classification. When the costs of false positives and false negatives are unequal, or when the numbers of positive and negative training instances are highly imbalanced, it is effective to use the cost-sensitive learning approach [16]. Suppose that we are given a set of training instances {(x_i, y_i)}_{i∈N_n} with x_i ∈ R^d and y_i ∈ {−1, 1}. If we know that the ratio of the false positive and false negative costs is approximately θ : (1 − θ), it is reasonable to solve the following cost-sensitive SVM [17]:

min_{β̃}  Σ_{i∈N_n} w_i(θ) max(0, 1 − y_i β̃ᵀx̃_i) + λ ||β||₂²,

where the weight w_i(θ) is defined as

w_i(θ) = θ  if y_i = −1,   and   w_i(θ) = 1 − θ  if y_i = +1.
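For a fixed θ this is an ordinary weighted hinge-loss problem; the sketch below only evaluates the objective (the names are ours, and any weighted-SVM solver could be used for the minimization):

```python
import numpy as np

def cs_svm_objective(beta0, beta, X, y, theta, lam):
    """Cost-sensitive SVM objective for a fixed cost ratio theta : (1 - theta)."""
    w = np.where(y == -1, theta, 1.0 - theta)    # instance weights w_i(theta)
    margins = y * (beta0 + X @ beta)
    return np.sum(w * np.maximum(0.0, 1.0 - margins)) + lam * beta @ beta
```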
When the exact false positive and false negative costs in the test scenario are unknown [4], it is often desirable to train several cost-sensitive SVMs with different values of θ. If we believe that a set of classifiers for different cost ratios should have some common structure, we can apply our PTL approach to this problem. If we consider an interval θ ∈ [θ_L, θ_U] with 0 < θ_L < θ_U < 1, the parametric-task feature learning problem is formulated as
min_{{β̃_θ}_{θ∈[θ_L,θ_U]}, D∈S^d₊₊, tr(D)≤1}  ∫_{θ_L}^{θ_U} Σ_{i∈N_n} w_i(θ) max(0, 1 − y_i β̃_θᵀx̃_i) dθ + λ ∫_{θ_L}^{θ_U} β_θᵀD⁻¹β_θ dθ.   (8)
The problem (8) also satisfies the condition (a) in Theorem 1. Figure 2 shows an example of joint
cost-sensitive learning applied to a toy 2D binary classification problem.
Joint Quantile Regression  Given a set of training instances {(x_i, y_i)}_{i∈N_n} with x_i ∈ R^d and y_i ∈ R drawn from a joint distribution P(X, Y), quantile regression [19] is used to estimate the conditional τth quantile F⁻¹_{Y|X=x}(τ) as a function of x, where τ ∈ (0, 1) and F_{Y|X=x} is the cumulative distribution function of the conditional distribution P(Y | X = x). Jointly estimating multiple conditional quantile functions is often useful for exploring the stochastic relationship between X and Y (see Section 5 for an example of joint quantile regression problems). Linear quantile regression with L2 regularization [20] at order τ ∈ (0, 1) is formulated as

min_{β̃}  Σ_{i∈N_n} ℓ_τ(y_i − β̃ᵀx̃_i) + λ ||β||₂²,   where   ℓ_τ(r) := (1 − τ)|r| if r ≤ 0,  and  τ|r| if r > 0.
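The check loss above can be written compactly as max(τr, (τ − 1)r); a one-line sketch:

```python
import numpy as np

def pinball(r, tau):
    """Quantile (check) loss: (1 - tau)|r| for r <= 0 and tau|r| for r > 0."""
    return np.maximum(tau * r, (tau - 1.0) * r)
```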
Figure 2: An example of joint cost-sensitive learning on a 2D toy dataset (the 2D input x is expanded to n dimensions by radial basis functions centered on each x_i). In each plot, the decision boundaries of five cost-sensitive SVMs (θ = 0.1, 0.25, 0.5, 0.75, 0.9) are shown. (a) Left: results obtained by independently training each cost-sensitive SVM. (b) Right: results obtained by jointly training infinitely many cost-sensitive SVMs for all the continuum of θ ∈ [0.05, 0.95] using the methodology presented in this paper (both are trained with the same regularization parameter λ). When independently trained, the inter-relationship among different cost-sensitive SVMs looks inconsistent (cf. [18]).
If we believe that a family of quantile regressions at various τ ∈ (0, 1) has some common structure, we can apply our PTL framework to the joint estimation of the family of quantile regressions. This PTL problem satisfies condition (b) in Theorem 1, and is written as

min_{{β̃_τ}_{τ∈(0,1)}, D∈S^d₊₊, tr(D)≤1}  ∫_0^1 Σ_{i∈N_n} ℓ_τ(y_i − β̃_τᵀx̃_i) dτ + λ ∫_0^1 β_τᵀD⁻¹β_τ dτ,

where no weighting is needed and we set w_i(τ) = 1 for all i ∈ N_n and τ ∈ [0, 1].
5 Numerical Illustrations
In this section, we illustrate various aspects of PTL with the three examples discussed in the previous
section.
Artificial Example for Learning under Non-stationarity  We first consider a simple artificial problem with non-stationarity, where the data-generating mechanism gradually changes. We assume that our data-generating mechanism produces the training set {(x_i, y_i, τ_i)}_{i∈N_n} with n = 100 as follows. For each τ_i ∈ {0, 2π/n, 2·(2π/n), ..., (n−1)·(2π/n)}, the output y_i is first determined as y_i = 1 if i is odd, while y_i = −1 if i is even. Then, x_i ∈ R^d is generated as

x_{i1} ∼ N(y_i cos τ_i, 1²),   x_{i2} ∼ N(y_i sin τ_i, 1²),   x_{ij} ∼ N(0, 1²), ∀j ∈ {3, ..., d},   (9)

where N(μ, σ²) is the normal distribution with mean μ and variance σ². Namely, only the first two dimensions of x differ between the two classes, and the remaining d − 2 dimensions are considered as noise. In addition, according to the value of τ_i, the means of the class-wise distributions in the first two dimensions gradually change. The data distributions of the first two dimensions for τ = 0, 0.5π, π, 1.5π are illustrated in Figure 3. Here, we applied our PT feature learning approach with the triangular time windows in (6) with s = 0.25π. Figure 4 shows the mis-classification rates of PT feature learning (PTFL) and ordinary independent learning (IND) on a similarly generated test sample of size 1000. When the input dimension is d = 2, there is no advantage to learning common features since these two input dimensions are both important for classification. On the other hand, as d increases, PT feature learning becomes more and more advantageous. Especially when the regularization parameter λ is large, the independent learning approach completely deteriorates as d increases, while PTFL works reasonably well in all the setups.
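A sketch of the generating process in (9); unit-variance Gaussian noise is assumed for all coordinates and the seed is arbitrary:

```python
import numpy as np

def make_rotating_data(n=100, d=10, seed=0):
    """Class means rotate on the unit circle with tau; other dims are noise."""
    rng = np.random.default_rng(seed)
    tau = 2.0 * np.pi * np.arange(n) / n
    y = np.where(np.arange(n) % 2 == 0, 1, -1)   # y_i = 1 for odd i (1-indexed)
    X = rng.standard_normal((n, d))              # unit-variance noise everywhere
    X[:, 0] += y * np.cos(tau)                   # shift the first two coordinates
    X[:, 1] += y * np.sin(tau)                   # by the rotating class means
    return X, y, tau
```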
Figure 3: The first two input dimensions of the artificial example at τ = 0, 0.5π, π, 1.5π. The class-wise distributions in these two dimensions gradually change with τ ∈ [0, 2π].
Figure 4: Experimental results on the artificial example under non-stationarity. Mis-classification rates on a test sample of size 1000 for the setups d ∈ {2, 5, 10, 20, 50, 100} and λ ∈ {0.1, 1, 10} are shown. The red symbols indicate the results of our PT feature learning (PTFL), whereas the blue symbols indicate ordinary independent learning (IND). Plotted are the averages (and standard deviations) over 100 replications with different random seeds. All the differences except at d = 2 are statistically significant (p < 0.01).
Joint Cost-Sensitive SVM Learning on Benchmark Datasets  Here, we report the experimental results on joint cost-sensitive SVM learning discussed in Section 4. Although our main contribution is not just claiming favorable generalization properties of parametric task learning solutions, we compared, as an illustration, the generalization performances of PT feature learning (PTFL) and PT variable selection (PTVS) with the ordinary independent learning approach (IND). In PTFL and PTVS, we learned common feature subspaces and common sets of variables shared across the continuum of cost-sensitive SVMs for θ ∈ [0.05, 0.95] on 10 benchmark datasets (see Table 1). In each data set, we divided the entire sample into training, validation, and test sets of almost equal size. The average test errors (and standard deviations) over 10 different data splits are reported in Table 1. The total test errors for cost-sensitive SVMs with θ = 0.1, 0.2, ..., 0.9 are defined as

Σ_{θ∈{0.1,...,0.9}} ( θ Σ_{i: y_i=−1} I(f_θ(x_i) > 0) + (1 − θ) Σ_{i: y_i=1} I(f_θ(x_i) ≤ 0) ),

where f_θ is the trained SVM with the cost ratio θ. Model selection was conducted by using the same criterion on the validation sets. We see that, in most cases, PTFL or PTVS had better generalization performance than IND.
Joint Quantile Regression  Finally, we applied PT feature learning to joint quantile regression problems. Here, we took a slightly different approach from the one described in the previous section. Given a training set {(x_i, y_i)}_{i∈N_n}, we first estimated the conditional mean function E[Y | X = x] by least-squares regression, and computed the residuals r_i := y_i − Ê[Y | X = x_i], where Ê is the estimated conditional mean function. Then, we applied PT feature learning to {(x_i, r_i)}_{i∈N_n}, and estimated the conditional τth quantile function as F̂⁻¹_{Y|X=x}(τ) := Ê[Y | X = x] + f̂_res(x|τ), where f̂_res(·|τ) is the estimated τth quantile regression fitted to the residuals.
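A sketch of this two-stage construction, with fit_mean and fit_ptl_residual as hypothetical stand-ins for a least-squares solver and the PT feature learning routine:

```python
def fit_joint_qr(X, y, fit_mean, fit_ptl_residual):
    """Conditional-mean fit plus PT feature learning on the residuals."""
    mean_fn = fit_mean(X, y)            # estimate of E[Y | X = x]
    r = y - mean_fn(X)                  # residuals r_i
    res_fn = fit_ptl_residual(X, r)     # f_res(x | tau) for the whole continuum
    return lambda x, tau: mean_fn(x) + res_fn(x, tau)
```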
When multiple quantile regressions with different τs are independently learned, we often encounter a notorious problem known as quantile crossing (see Section 2.5 in [5]). For example, in Figure 5(a), some of the estimated conditional quantile functions cross each other (which never happens for the true conditional quantile functions). One possible approach to mitigating this problem is to assume a model on the heteroscedastic structure. In the simplest case, if we assume that the data is homoscedastic (i.e., the conditional distribution P(Y|x) does not depend on x except through its location), quantile regressions at different τs can be obtained by just vertically shifting a single quantile regression function (see Figure 5(f)).
Table 1: Average (and standard deviation) of test errors obtained by joint cost-sensitive SVMs on
benchmark datasets. n is the sample size, d is the input dimension, Ind indicates the results when
each cost-sensitive SVM was trained independently, while PTFL and PTVS indicate the results from
PT feature learning and PT feature selection, respectively. The bold numbers in the table indicate
the best performance among three methods.
Data Name                | n    | d  | Ind            | PTFL           | PTVS
Parkinson                | 195  | 20 | 32.30 (10.60)  | 30.21 (9.09)   | 30.25 (8.53)
Breast Cancer Diagnostic | 569  | 30 | 20.36 (7.77)   | 18.49 (6.15)   | 19.46 (5.89)
Breast Cancer Prognostic | 194  | 33 | 48.97 (12.92)  | 49.28 (9.83)   | 48.68 (5.89)
Australian               | 690  | 14 | 117.97 (22.97) | 106.25 (12.66) | 111.22 (15.95)
Diabetes                 | 768  | 8  | 185.90 (21.13) | 179.89 (16.31) | 175.95 (16.26)
Fourclass                | 862  | 2  | 181.69 (22.13) | 179.30 (14.25) | 178.67 (19.24)
German                   | 1000 | 24 | 242.21 (18.35) | 219.66 (16.22) | 237.20 (15.78)
Splice                   | 1000 | 60 | 179.80 (24.22) | 151.69 (18.02) | 183.54 (21.27)
SVM Guide                | 300  | 10 | 175.70 (15.55) | 170.16 (9.99)  | 179.76 (14.76)
DVowel                   | 528  | 10 | 175.16 (13.78) | 175.74 (9.37)  | 175.50 (7.38)
Our PT feature learning approach, when applied to the joint quantile regression problem, allows us to interpolate between these two extreme cases. Figure 5 shows a joint QR example on the bone mineral density (BMD) data [21]. We applied our approach after expanding the univariate input x to a d = 5 dimensional vector using evenly allocated RBFs. When (a) λ → 0, our approach is identical to independently estimating each quantile regression, while it coincides with the homoscedastic case when (f) λ → ∞. In our experience, the best solution is usually found somewhere between these two extremes: in this example, (d) λ = 5 was chosen as the best model by 10-fold cross-validation.
Figure 5: Joint quantile regression examples on the BMD data [21] for six values of λ: (a) λ → 0, (b) λ = 0.1, (c) λ = 1, (d) λ = 5, (e) λ = 10, (f) λ → ∞. Each panel shows the estimated conditional quantile functions at τ = 0.05, 0.10, ..., 0.95 of the (standardized) relative BMD change as a function of (standardized) age.
6 Conclusions
In this paper, we introduced the parametric-task learning (PTL) approach, which can systematically handle infinitely many tasks parameterized by a continuous parameter. We illustrated the usefulness of this approach by providing three examples that can be naturally formulated as PTL. We believe that many other practical problems fall into this PTL framework.
Acknowledgments
The authors thank the reviewers for fruitful comments. IT, MS, and SN acknowledge support from MEXT KAKENHI 23700165, the JST CREST Program, and MEXT KAKENHI 23120004, respectively.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In Advances in Neural Information Processing Systems, volume 19, pages 41–48, 2007.
[2] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In Advances in Neural Information Processing Systems, volume 20, pages 25–32, 2008.
[3] L. Cao and F. Tay. Support vector machine with adaptive parameters in financial time series forecasting. IEEE Transactions on Neural Networks, 14(6):1506–1518, 2003.
[4] F. R. Bach, D. Heckerman, and E. Horvitz. Considering cost asymmetry in learning classifiers. Journal of Machine Learning Research, 7:1713–1741, 2006.
[5] R. Koenker. Quantile Regression. Cambridge University Press, 2005.
[6] K. Ritter. On parametric linear and quadratic programming problems. Mathematical Programming: Proceedings of the International Congress on Mathematical Programming, pages 307–335, 1984.
[7] E. L. Allgower and K. Georg. Continuation and path following. Acta Numerica, 2:1–63, 1993.
[8] T. Gal. Postoptimal Analysis, Parametric Programming, and Related Topics. Walter de Gruyter, 1995.
[9] M. J. Best. An algorithm for the solution of the parametric quadratic programming problem. Applied Mathematics and Parallel Computing, pages 57–76, 1996.
[10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, volume 6, pages 4734–4739, 2001.
[11] B. A. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 47:349–363, 2005.
[12] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231–252, 2010.
[13] M. R. Osborne, B. Presnell, and B. A. Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20(3):389–404, 2000.
[14] B. Efron and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[15] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391–1415, 2004.
[16] Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in nonstandard situations. Machine Learning, 46:191–202, 2002.
[17] M. A. Davenport, R. G. Baraniuk, and C. D. Scott. Tuning support vector machines for minimax and Neyman-Pearson classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[18] G. Lee and C. Scott. Nested support vector machines. IEEE Transactions on Signal Processing, 58(3):1648–1660, 2010.
[19] R. Koenker. Quantile Regression. Cambridge University Press, 2005.
[20] I. Takeuchi, Q. V. Le, T. Sears, and A. J. Smola. Nonparametric quantile estimation. Journal of Machine Learning Research, 7:1231–1264, 2006.
[21] L. K. Bachrach, T. Hastie, M. C. Wang, B. Narasimhan, and R. Marcus. Bone mineral acquisition in healthy Asian, Hispanic, Black and Caucasian youth: a longitudinal study. The Journal of Clinical Endocrinology and Metabolism, 84:4702–4712, 1999.
[22] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
4,657 | 5,214 | Direct 0-1 Loss Minimization and Margin
Maximization with Boosting
Shaodan Zhai, Tian Xia, Ming Tan and Shaojun Wang
Kno.e.sis Center
Department of Computer Science and Engineering
Wright State University
{zhai.6,xia.7,tan.6,shaojun.wang}@wright.edu
Abstract
We propose a boosting method, DirectBoost, a greedy coordinate descent algorithm that builds an ensemble classifier of weak classifiers through directly minimizing empirical classification error over labeled training examples; once the
training classification error is reduced to a local coordinatewise minimum, DirectBoost runs a greedy coordinate ascent algorithm that continuously adds weak classifiers to maximize any targeted arbitrarily defined margins until reaching a local
coordinatewise maximum of the margins in a certain sense. Experimental results
on a collection of machine-learning benchmark datasets show that DirectBoost
gives better results than AdaBoost, LogitBoost, LPBoost with column generation
and BrownBoost, and is noise tolerant when it maximizes an n? th order bottom
sample margin.
1
Introduction
The classification problem in machine learning and data mining is to predict an unobserved discrete
output value y based on an observed input vector x. In the spirit of the model-free framework, it
is always assumed that the relationship between the input vector and the output value is stochastic
and described by a fixed but unknown probability distribution p(X, Y ) [7]. The goal is to learn a
classifier, i.e., a mapping function f (x) from x to y ? Y such that the probability of the classification
error is small. As it is well known, the optimal choice is the Bayes classifier [7]. However, since
p(X, Y) is unknown, we cannot learn the Bayes classifier directly. Instead, following Vapnik's general setting of empirical risk minimization [7, 24], we focus on a more realistic goal: given a set of training data D = {(x_1, y_1), …, (x_n, y_n)} independently drawn from p(X, Y), we consider finding f(x) in a function class H that minimizes the empirical classification error
$$\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}(\hat{y}_i \neq y_i) \qquad (1)$$
where $\hat{y}_i = \arg\max_{y \in \mathcal{Y}} y f(x_i)$, $\mathcal{Y} = \{-1, 1\}$, and $\mathbf{1}(\cdot)$ is an indicator function. Under certain
conditions, direct empirical classification error minimization is consistent [24], and under low-noise situations it has a fast convergence rate [15, 23]. However, due to the nonconvexity, nondifferentiability and discontinuity of the classification error function, the minimization of (1) is typically NP-hard for general linear models [13]. The common approach is to minimize a surrogate function which is usually a convex upper bound of the classification error function. The problem of minimizing the empirical surrogate loss turns out to be a convex programming problem with considerable computational advantages, and the learned classifiers remain consistent with the Bayes classifier [1, 20, 28, 29]; however, there is clearly a mismatch between the "desired" loss function used in inference and the "training" loss function used during the training process [16]. Moreover, it has been shown that all boosting algorithms based on convex functions are susceptible to random classification noise [14].
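To make the objective in (1) concrete, the following sketch (our own array conventions, not the paper's code) evaluates the empirical classification error of an ensemble f(x) = Σ_k α_k h_k(x):

```python
import numpy as np

def empirical_01_loss(H, alpha, y):
    """H: (n, t) matrix with H[i, k] = h_k(x_i) in {-1, +1};
    alpha: (t,) nonnegative weights; y: (n,) labels in {-1, +1}."""
    f = H @ alpha                      # ensemble scores f(x_i)
    y_hat = np.where(f >= 0, 1, -1)    # arg max over y of y * f(x_i)
    return np.mean(y_hat != y)         # the 0-1 loss of equation (1)
```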
Boosting is a machine-learning method based on the idea of creating a single, highly accurate classifier by combining many weak and inaccurate "rules of thumb." A remarkably rich theory and a record of empirical success [18] have evolved around boosting; nevertheless, it is still not clear how to best exploit what is known about how boosting operates, even for binary classification. In
this paper, we propose a boosting method for binary classification, DirectBoost: a greedy coordinate descent algorithm that directly minimizes classification error over labeled training examples to build an ensemble linear classifier of weak classifiers. Once the training error is reduced to a (local coordinatewise) minimum, DirectBoost runs a coordinate ascent algorithm that greedily adds weak classifiers by directly maximizing any targeted, arbitrarily defined margins; it might escape the region of minimum training error in order to achieve a larger margin. The algorithm stops once a (local coordinatewise) maximum of the margins is reached. In the next section, we first present a coordinate descent algorithm that directly minimizes 0-1 loss over labeled training examples. We then describe coordinate ascent algorithms that aim to directly maximize any targeted, arbitrarily defined margins right after we reach a (local coordinatewise) minimum of the 0-1 loss. In Section 3, we show experimental results on a collection of machine-learning benchmark data sets for DirectBoost, AdaBoost [9], LogitBoost [11], LPBoost with column generation [6] and BrownBoost [10], and discuss our findings. Due to space limitations, the proofs of theorems, related work, technical details, as well as conclusions and future work, are given in the full version of this paper [27].
2 DirectBoost: Minimizing 0-1 Loss and Maximizing Margins
Let H = {h_1, ..., h_l} denote the set of all possible weak classifiers that can be produced by the weak learning algorithm, where a weak classifier h_j ∈ H is a mapping from an instance space X to Y = {−1, 1}. The h_j's are not assumed to be linearly independent, and H is closed under negation, i.e., both h and −h belong to H. We assume that the training set consists of examples with labels {(x_i, y_i)}, i = 1, …, n, where (x_i, y_i) ∈ X × Y are generated independently from p(X, Y). We define C as the set of mappings that can be generated by taking a weighted average of classifiers from H:
$$\mathcal{C} = \left\{ f : x \mapsto \sum_{h \in \mathcal{H}} \alpha_h h(x) \;\middle|\; \alpha_h \geq 0 \right\} \qquad (2)$$
The goal here is to find f ∈ C that minimizes the empirical classification error (1) and has good generalization performance.
2.1 Minimizing 0-1 Loss
Similar to AdaBoost, DirectBoost works by sequentially running an iterative greedy coordinate descent algorithm, each time directly minimizing the true empirical classification error (1) instead of a weighted empirical classification error as in AdaBoost. That is, at each iteration, only the parameter of the weak classifier that leads to the most significant true classification error reduction is updated, while the weights of all other weak classifiers are kept unchanged. The rationale is that the inference used to predict the label of a sample can then be written as a linear function with a single parameter. Consider the tth iteration; the ensemble classifier is
$$f_t(x) = \sum_{k=1}^{t} \alpha_k h_k(x) \qquad (3)$$
where the previous t−1 weak classifiers h_k(x) and corresponding weights α_k, k = 1, …, t−1, have been selected and determined. The inference function for sample x_i is defined as
$$F_t(x_i, y) = y f_t(x_i) = y \left( \sum_{k=1}^{t-1} \alpha_k h_k(x_i) \right) + \alpha_t\, y\, h_t(x_i) \qquad (4)$$
Since $a(x_i) = \sum_{k=1}^{t-1} \alpha_k h_k(x_i)$ is constant and $h_t(x_i)$ is either +1 or −1 depending on sample x_i, we rewrite the equation above as
$$F_t(x_i, y) = y\, h_t(x_i)\, \alpha_t + y\, a(x_i) \qquad (5)$$
Note that, for each label y of sample x_i, this is a linear function of α_t with slope either +1 or −1 and intercept y a(x_i). For a given value of α_t, each example x_i has two linear scoring functions, F_t(x_i, +1) and F_t(x_i, −1), i = 1, …, n, one for the positive label y = +1 and one for the negative label y = −1. Of these two linear scoring functions, the one with the higher score determines the predicted label ŷ_i of the ensemble classifier f_t(x_i). The intersection point e_i of these two linear scoring functions is the critical point at which the predicted label ŷ_i switches its sign; the intersection point satisfies the condition F_t(x_i, +1) = F_t(x_i, −1) = 0, i.e., a(x_i) + α_t h_t(x_i) = 0, and can be computed as $e_i = -\frac{a(x_i)}{h_t(x_i)}$, i = 1, …, n. These points divide α_t into (at most) n + 1 intervals; each interval has its own value of the true classification error, thus the classification error is a stepwise
Algorithm 1 Greedy coordinate descent algorithm that minimizes the 0-1 loss.
1: D = {(x_i, y_i), i = 1, …, n}
2: Sort |a(x_i)|, i = 1, …, n, in increasing order.
3: for each weak classifier h_k ∈ H do
4:   Visit each sample in the order of increasing |a(x_i)|.
5:   Compute the slope and intercept of F(x_i, y_i) = y_i h_k(x_i) α + y_i a(x_i).
6:   Let ẽ_i = |a(x_i)|.
7:   If (slope > 0 and intercept < 0), the error update on the right-hand side of ẽ_i is −1.
8:   If (slope < 0 and intercept > 0), the error update on the right-hand side of ẽ_i is +1.
9:   Incrementally calculate the classification error on the intervals between the ẽ_i's.
10:  Get the interval that has minimum classification error.
11: end for
12: Pick the weak classifiers that lead to the largest classification error reduction.
13: Among these selected weak classifiers, update the weight of only the one that gives the smallest exponential loss.
14: Repeat 2-13 until the training error reaches a minimum.
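A minimal sketch of the weak-learning scan in lines 3-11 of Algorithm 1, under our own array conventions (an illustration, not the authors' implementation): for one candidate weak classifier it sweeps the critical points ẽ_i in increasing order, accumulates the ±1 error updates, and returns the interval with minimum 0-1 loss.

```python
import numpy as np

def best_interval_01(a, h, y):
    """For one candidate weak classifier: a[i] = a(x_i) from previous rounds,
    h[i] = h_t(x_i) in {-1,+1}, y[i] in {-1,+1}.
    Returns (min error count, interval of alpha_t) over alpha_t > 0."""
    slope = y * h                       # slope of F_t(x_i, y_i) in alpha_t
    intercept = y * a                   # intercept y_i * a(x_i)
    err = int(np.sum(intercept <= 0))   # errors at alpha_t = 0 (ties counted as errors)
    e = np.abs(a)                       # critical points e~_i = |a(x_i)|
    order = np.argsort(e)
    best_err = err
    best_int = (0.0, e[order[0]] if len(order) > 0 else np.inf)
    for j, i in enumerate(order):
        if slope[i] > 0 and intercept[i] < 0:
            err -= 1                    # sample becomes correct to the right of e~_i
        elif slope[i] < 0 and intercept[i] > 0:
            err += 1                    # sample becomes wrong to the right of e~_i
        if err < best_err:
            right = e[order[j + 1]] if j + 1 < len(order) else np.inf
            best_err, best_int = err, (e[i], right)
    return best_err, best_int
```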
function of α_t. The value of e_i, i = 1, …, n, can be negative or positive; however, since H is closed under negation, we only care about those that are positive.
The greedy coordinate descent algorithm that sequentially minimizes the 0-1 loss is described in Algorithm 1; lines 3-11 are the weak learning steps and the rest are boosting steps. Consider an example with 4 samples to illustrate this procedure. Suppose that for a weak classifier we have F_t(x_i, y_i), i = 1, 2, 3, 4, as shown in Figure 1. At α_t = 0, samples x_1 and x_2 have negative margins, thus they are misclassified and the error rate is 50%. We incrementally update the classification error on the intervals between the ẽ_i, i = 1, 2, 3, 4: For F_t(x_1, y_1), its slope is negative and its intercept is negative, so sample x_1 always has a negative margin for α_t > 0 and there is no error update on the right-hand side of ẽ_1. For F_t(x_2, y_2), its slope is positive and its intercept is negative; when α_t is to the right of ẽ_2, sample x_2 has a positive margin and becomes correctly classified, so we update the error by −1 and the error rate is reduced to 25%. For F_t(x_3, y_3), its slope is negative and its intercept is positive; when α_t is to the right of ẽ_3, sample x_3 has a negative margin and becomes misclassified, so the error rate changes to 50% again. For F_t(x_4, y_4), its slope is positive and its intercept is positive, so sample x_4 always has a positive margin for α_t > 0 and there is no error update on the right-hand side of ẽ_4. We finally obtain the minimum error rate of 25% on the interval [ẽ_2, ẽ_3].
[Figure 1: An example of computing the minimum 0-1 loss of a weak learner over 4 samples. Top: the four linear scoring functions F_t(x_i, y_i) as functions of α_t, with critical points ẽ_1, …, ẽ_4. Bottom: the resulting stepwise classification error, which drops from 50% to 25% on the interval [ẽ_2, ẽ_3].]
We repeat this procedure until the training error reaches its minimum, which may be 0 in the separable case. We then go to the next stage, explained below, which aims to maximize margins. A nice property of the above greedy coordinate descent algorithm is that the classification error is monotonically decreasing. Assuming M weak classifiers are considered, the computational complexity of Algorithm 1 in the training stage is O(Mn) per iteration.
For boosting, as long as the weak learner is strong enough to achieve reasonably high accuracy, the data will be linearly separable and the minimum 0-1 loss is usually 0. As shown in Theorem 1, the region of zero 0-1 loss is a (convex) cone.
Theorem 1 The region of zero training error, if it exists, is a cone, and it is not a set of isolated cones.
Algorithm 1 is a heuristic procedure that minimizes the 0-1 loss; it is not guaranteed to find the global minimum and may get trapped in a coordinatewise local minimum [22] of the 0-1 loss. In that case, we switch to the algorithms that directly maximize margins, presented below.
2.2 Maximizing Margins
The margins theory [17] provides an insightful analysis of the success of AdaBoost, where the authors proved that the generalization error of any ensemble classifier is bounded in terms of the entire distribution of margins of training examples, as well as the number of training examples and the complexity of the base classifiers, and that AdaBoost's dynamics has a strong tendency to increase the margins of training examples. Instead, we can prove that the generalization error of any ensemble classifier is bounded in terms of the average margin of the bottom n′ samples, or the n′th order margin, of training examples, as well as the number of training examples and the complexity of the base classifiers. This view motivates us to propose a coordinate ascent algorithm to directly maximize several types of margins right after the training error reaches a (local coordinatewise) minimum.
The margin of a labeled example (x_i, y_i) with respect to an ensemble classifier $f_t(x) = \sum_{k=1}^{t} \alpha_k h_k(x)$ is defined to be
$$m_i = \frac{y_i \sum_{k=1}^{t} \alpha_k h_k(x_i)}{\sum_{k=1}^{t} \alpha_k} \qquad (6)$$
This is a real number between -1 and +1 that intuitively measures the confidence of the classifier in
its prediction on the ith example. It is equal to the weighted fraction of base classifiers voting for
the correct label minus the weighted fraction voting for the incorrect label [17].
We denote the minimum margin, the average margin, and the median margin over the training examples as $g_{\min} = \min_{i \in \{1, \ldots, n\}} m_i$, $g_{\mathrm{average}} = \frac{1}{n} \sum_{i=1}^{n} m_i$, and $g_{\mathrm{median}} = \mathrm{median}\{m_i, i = 1, \ldots, n\}$. Furthermore, we can sort the margins over all training examples in increasing order, consider the n′ worst training examples (n′ ≤ n) that have the smallest margins, and compute the average margin over those n′ training examples. We call this the average margin of the bottom n′ samples, and denote it as $g_{\mathrm{average}\,n'} = \frac{1}{n'} \sum_{i \in B_{n'}} m_i$, where $B_{n'}$ denotes the set of n′ samples having the smallest margins.
The margin maximization method described below is a greedy coordinate ascent algorithm that adds the weak classifier achieving the maximum margin. It allows us to continuously maximize the margin while keeping the training error at the minimum attained by running the greedy coordinate descent algorithm presented in the previous section. The margin m_i is a linear fractional function of α; it is both quasiconvex and quasiconcave, i.e., quasilinear [2, 5]. Theorem 2 shows that the average margin of the bottom n′ examples is quasiconcave in the region of zero training error.
Theorem 2 Denote the average margin of the bottom n′ samples as
$$g_{\mathrm{average}\,n'}(\alpha) = \frac{1}{n'} \sum_{i \in \{B_{n'}|\alpha\}} \frac{y_i \sum_{k=1}^{t} \alpha_k h_k(x_i)}{\sum_{k=1}^{t} \alpha_k}$$
where $\{B_{n'}|\alpha\}$ denotes the set of n′ samples whose margins are at the bottom for fixed α. Then $g_{\mathrm{average}\,n'}(\alpha)$ is quasiconcave in the region of zero training error.
We denote $a_i = \sum_{k=1}^{t-1} y_i \alpha_k h_k(x_i)$, $b_{i,t} = y_i h_t(x_i) \in \{-1, +1\}$, and $c = \sum_{k=1}^{t-1} \alpha_k$; then the margin on the ith example (x_i, y_i) can be rewritten as
$$m_i = \frac{a_i + b_{i,t}\,\alpha_t}{c + \alpha_t} \qquad (7)$$
The derivative of the margin on the ith example with respect to α_t is
$$\frac{\partial m_i}{\partial \alpha_t} = \frac{b_{i,t}\,c - a_i}{(c + \alpha_t)^2} \qquad (8)$$
[Figure 2: Margin curves of six examples m_1, …, m_6 as functions of α_t, with breakpoints q_1, q_2, q_3, q_4 on the interval [0, d]. At points q_1, q_2, q_3 and q_4 the median example changes; at points q_2 and q_4 the set of bottom n′ = 3 examples changes.]
Since c ≥ a_i, depending on the sign of b_{i,t}, the derivative of the margin on the ith sample (x_i, y_i) is either positive or negative, irrespective of the value of α_t. This is also true for the second derivative of the margin. Therefore, the margin on the ith example (x_i, y_i), as a function of α_t, is either concave when it is monotonically increasing or convex when it is monotonically decreasing. See Figure 2 for a simple illustration.
Consider a greedy coordinate ascent algorithm that maximizes the average margin g_average over all training examples. The derivative of g_average can be written as
$$\frac{\partial g_{\mathrm{average}}}{\partial \alpha_t} = \frac{\sum_{i=1}^{n} b_{i,t}\,c - \sum_{i=1}^{n} a_i}{(c + \alpha_t)^2} \qquad (9)$$
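Since (7)-(9) reduce each margin to the three scalars a_i, b_{i,t} and c, the monotonicity checks take only a few lines; the sketch below (again our own conventions, not the authors' code) computes these quantities and the sign of each margin derivative (8):

```python
import numpy as np

def margin_derivative_signs(H_prev, alpha_prev, h_new, y):
    """H_prev: (n, t-1) outputs of already-chosen weak classifiers, alpha_prev: (t-1,),
    h_new: (n,) candidate outputs in {-1,+1}, y: (n,) labels in {-1,+1}."""
    a = y * (H_prev @ alpha_prev)   # a_i = sum_k y_i alpha_k h_k(x_i)
    b = y * h_new                   # b_{i,t} = y_i h_t(x_i)
    c = alpha_prev.sum()            # c = sum_k alpha_k
    # sign of d m_i / d alpha_t = sign(b_i * c - a_i), independent of alpha_t
    return np.sign(b * c - a), a, b, c
```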
Algorithm 2 Greedy coordinate ascent algorithm that maximizes the average margin of the bottom n′ examples.
1: Input: a_{i=1,…,n} and c from the previous round.
2: Sort a_{i=1,…,n} in increasing order. B_{n′} ← {the n′ samples having the smallest a_i at α_t = 0}.
3: for each weak classifier do
4:   Determine the lowest sample whose margin is decreasing and determine d.
5:   Compute D_{n′} ← Σ_{i∈B_{n′}} (b_{i,t} c − a_i).
6:   j ← 0, q_j ← 0.
7:   Compute the intersection q_{j+1} of the (j+1)th highest increasing margin in B_{n′} and the (j+1)th smallest decreasing margin in B^c_{n′} (the complement of the set B_{n′}).
8:   if q_{j+1} < d and D_{n′} > 0 then
9:     Incrementally update B_{n′}, B^c_{n′} and D_{n′} at α_t = q_{j+1}; j ← j + 1.
10:    Go back to Line 7.
11:  else
12:    if D_{n′} > 0 then q* ← d; otherwise q* ← q_j.
13:    Compute the average margin of the bottom n′ examples at q*.
14:  end if
15: end for
16: Pick the weak classifier with the largest increment of the average margin of the bottom n′ examples, with weight q*.
17: Repeat 2-16 until there is no increment in the average margin of the bottom n′ examples.
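A direct, non-incremental sketch of the evaluation inside Algorithm 2 (our own simplification): instead of maintaining B_{n'} incrementally as in lines 7-10, it evaluates the bottom-n′ average margin at every candidate breakpoint below d and keeps the best. This is easier to follow but asymptotically slower than the incremental version.

```python
import numpy as np

def best_bottom_avg_margin(a, b, c, d, n_prime):
    """a, b: numpy arrays as in (7); c = sum of previous weights; d = right end of
    the zero-training-error interval. Returns (best value, best alpha_t)."""
    def g(alpha):
        m = (a + b * alpha) / (c + alpha)        # all sample margins at this alpha_t
        return np.sort(m)[:n_prime].mean()       # average of the bottom n' margins
    # g is piecewise monotone between margin crossings, so its maximum over [0, d)
    # is attained at 0, at a crossing point, or arbitrarily close to d.
    cand = [0.0, np.nextafter(d, 0.0)]
    cross = np.abs(a[:, None] - a[None, :]) / 2.0
    cand += [t for t in np.unique(cross) if 0.0 < t < d]
    vals = [g(t) for t in cand]
    k = int(np.argmax(vals))
    return vals[k], cand[k]
```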
Therefore, the maximum average margin can only occur at the two ends of the interval. As shown in Figure 2, the maximum average margin is either at the origin or at point d, depending on the sign of the derivative in (9). If it is positive, the average margin is monotonically increasing and we set α_t = d − ε; otherwise we set α_t = 0. The greedy coordinate ascent algorithm thus proceeds as follows: looking at all weak classifiers in H, if the numerator in (9) is positive, we set the weight α close to the right end of the interval where the training error is minimum, and compute the value of the average margin. We then add the weak classifier that gives the largest average margin increment. We iterate this procedure until convergence, which is established by Theorem 3 below.
Theorem 3 When constrained to the region of zero training error, the greedy coordinate ascent
algorithm that maximizes the average margin over all examples converges to an optimal solution.
Now consider a greedy coordinate ascent algorithm maximizing the average margin of the bottom n′ training examples, g_average n′. Clearly, maximizing the minimum margin is the special case obtained by choosing n′ = 1. Figure 2 is a simple illustration with six training examples, where the aim is to maximize the average margin of the bottom 3 examples. The interval [0, d] of α_t is an interval where the training error is zero. At the point d, the sample margin m_3 changes from positive to negative, which causes the training error to jump from 0 to 1/6. As shown in Figure 2, the margin of each of the six training examples is either monotonically increasing or monotonically decreasing.
If the set of bottom n′ training examples having the smallest margins is fixed on an interval of α_t with minimum training error, it is straightforward to compute the derivative of the average margin of the bottom n′ training examples as
$$\frac{\partial g_{\mathrm{average}\,n'}}{\partial \alpha_t} = \frac{\sum_{i \in B_{n'}} b_{i,t}\,c - \sum_{i \in B_{n'}} a_i}{(c + \alpha_t)^2} \qquad (10)$$
Again, g_average n′ is a monotonic function of α_t; depending on the sign of the derivative in (10), it is maximized either at the left end or at the right end of the interval.
In general, however, the set of bottom n′ training examples on an interval of α_t with minimum training error varies over α_t, so it is necessary to track precisely which samples form the bottom n′ set at each value of α.
To address this, we first examine when the margins of two examples intersect. Consider the ith example (x_i, y_i) with margin $m_i = \frac{a_i + b_{i,t}\,\alpha_t}{c + \alpha_t}$ and the jth example (x_j, y_j) with margin $m_j = \frac{a_j + b_{j,t}\,\alpha_t}{c + \alpha_t}$. Notice that b_i and b_j are each either −1 or +1. If b_i = b_j, then because m_i ≠ m_j (since a_i ≠ a_j), the margins of examples i and j never intersect; if b_i ≠ b_j, then because m_i = m_j at $\alpha_t = \frac{|a_i - a_j|}{2}$, the margins of examples i and j might intersect with each other if $\frac{|a_i - a_j|}{2}$ belongs to the interval of α_t with the minimum training error. In summary, given any two samples, we can decide whether their margins intersect by checking whether the b terms have the same sign; if not, they do intersect, and we can determine the intersection point.
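This intersection test is two lines of code (a hypothetical helper of our own):

```python
def margin_intersection(a_i, b_i, a_j, b_j):
    """alpha_t > 0 where margins i and j cross, or None when b_i == b_j."""
    if b_i == b_j:
        return None                    # margins never intersect
    return abs(a_i - a_j) / 2.0        # crossing point alpha_t = |a_i - a_j| / 2
```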
The greedy coordinate ascent algorithm that sequentially maximizes the average margin of the bottom n′ examples is described in Algorithm 2; lines 3-15 are the weak learning steps and the rest are boosting steps. At line 5 we compute D_{n′}, which is used to check the sign of the derivative in (10). Since the average margin of the bottom n′ examples is quasiconcave, we can determine the optimal point q* from D_{n′}, and only need to compute the margin value at q*. We add the weak learner with the largest increment of the average margin over the bottom n′ examples to the ensemble classifier. This procedure terminates when there is no increment in the average margin of the bottom n′ examples over the considered weak classifiers. If M weak learners are considered, the computational complexity of Algorithm 2 in the training stage is O(max(n log n, M n′)) per iteration. The convergence analysis of Algorithm 2 is given by Theorem 4.
Theorem 4 When constrained to the region of zero training error, the greedy coordinate ascent algorithm that maximizes the average margin of the bottom n′ samples converges to a coordinatewise maximum solution, but it is not guaranteed to converge to an optimal solution due to the non-smoothness of the average margin of the bottom n′ samples.
ε-relaxation: Unfortunately, there is a fundamental difficulty in the greedy coordinate ascent algorithm that maximizes the average margin of the bottom n′ samples: it can get stuck at a corner, from which it is impossible to make progress along any coordinate direction. We propose an ε-relaxation method to overcome this difficulty. This method was first proposed by [3] for the assignment problem, and was extended to the linear cost network flow problem and to strictly convex costs with linear constraints [4, 21]. The main idea is to allow a single coordinate to change even if this worsens the margin function. When a coordinate is changed, it is set to ε plus or ε minus the value that maximizes the margin function along that coordinate, where ε is a positive number.
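In code, the ε-relaxation changes only where the selected coordinate is set; a sketch under our own naming:

```python
def epsilon_relaxed_update(alpha, k, alpha_star, eps):
    """Set coordinate k to the coordinatewise maximizer alpha_star, overshooting
    by +/- eps so the iterate can leave a corner even if the margin worsens."""
    alpha = list(alpha)
    alpha[k] = max(alpha_star + eps, 0.0)   # the '-' eps variant is analogous
    return alpha
```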
We can design a similar greedy coordinate ascent algorithm that directly maximizes the bottom n′th sample margin by making only a slight modification to Algorithm 2: for a weak classifier, we choose the intersection point that leads to the largest increase of the bottom n′th margin. When combined with ε-relaxation, this algorithm will eventually approach a small neighbourhood of a locally optimal solution that maximizes the bottom n′th sample margin. As shown in Figure 2, the bottom n′th margin is a multimodal function; this algorithm with ε-relaxation is very sensitive to the choice of n′, and it usually gets stuck at a bad coordinatewise point without ε-relaxation. However, an impressive advantage is that this method is tolerant to noise, as will be shown in Section 3.
3 Experimental Results
In the experiments below, we first evaluate the performance of DirectBoost on 10 UCI data sets and then evaluate its noise robustness. For all the algorithms in our comparison, we use decision trees with depth of either 1 or 3 as weak learners, since for the small datasets decision stumps (tree depth of 1) are already strong enough. DirectBoost with decision trees is implemented by a greedy top-down recursive partitioning algorithm to find the tree, but differently from AdaBoost and LPBoost, since DirectBoost does not maintain a distribution over training samples. Instead, for each splitting node, DirectBoost simply chooses the attribute to split on by minimizing the 0-1 loss or maximizing the predefined margin value. In all the experiments where ε-relaxation is used, the value of ε is 0.01. Note that our empirical study focuses on whether the proposed boosting algorithm is able to effectively improve the accuracy of state-of-the-art boosting algorithms with the same weak learner space H; thus we restrict our comparison to boosting algorithms with the same weak learners, rather than a wider range of classification algorithms such as SVMs and KNN.
3.1 Experiments on UCI data
We first compare DirectBoost with AdaBoost, LogitBoost, soft margin LPBoost and BrownBoost on 10 UCI data sets¹ from the UCI Machine Learning Repository [8]. We partition each UCI dataset into five parts with the same number of samples for five-fold cross validation. In each fold, we use three parts for training, one part for validation, and the remaining part for testing. The validation
¹ For the Adult data we use the subset a5a from the LIBSVM collection, http://www.csie.ntu.edu.tw/~cjlin/libsvm. We do not use the original Adult data, which has 48842 examples, since LPBoost runs very slowly on it.
Table 1: Percent test errors of AdaBoost, LogitBoost, soft margin LPBoost with column generation, BrownBoost, and three DirectBoost methods on 10 UCI datasets, each with N samples and D attributes.

Datasets      N     D   depth  AdaBoost      LogitBoost    LPBoost       BrownBoost    DirectBoostavg  DirectBoostεavg  DirectBoostorder
Tic-tac-toe   958   9   3      1.47 (0.7)    1.47 (1.0)    2.62 (0.8)    3.66 (1.3)    0.63 (0.4)      1.15 (0.8)       1.05 (0.4)
Diabetes      768   8   3      27.71 (1.7)   27.32 (1.3)   26.01 (3.3)   26.67 (2.6)   25.62 (2.5)     25.49 (3.0)      23.4 (3.7)
Australian    690   14  3      14.2 (1.8)    16.23 (2.6)   14.49 (4.4)   13.77 (4.6)   14.06 (3.6)     13.33 (3.0)      13.48 (2.9)
Fourclass     862   2   3      1.86 (1.3)    2.44 (1.6)    3.02 (2.3)    2.33 (1.7)    2.33 (1.0)      1.86 (1.3)       1.74 (1.5)
Ionosphere    351   34  3      9.71 (3.7)    9.71 (3.1)    8.57 (2.7)    10.86 (2.8)   7.71 (3.0)      8.29 (2.7)       7.71 (4.4)
Splice        1000  61  3      5.3 (1.4)     5.3 (2.6)     4.8 (1.4)     6.1 (1.1)     4.8 (0.7)       4.0 (0.5)        6.7 (1.6)
Cancer-wdbc   569   29  1      4.25 (2.5)    4.42 (1.4)    3.89 (1.5)    4.25 (2.2)    4.96 (3.0)      4.07 (2.0)       3.72 (2.9)
Cancer-wpbc   198   32  1      27.69 (7.6)   30.26 (7.3)   26.15 (10.5)  28.72 (8.4)   27.69 (8.1)     24.62 (7.6)      27.18 (10.0)
Heart         270   13  1      17.41 (7.7)   18.52 (5.1)   19.26 (8.1)   18.15 (7.2)   18.15 (5.1)     16.67 (7.5)      18.15 (7.6)
Adult         6414  14  3      15.6 (0.7)    16.2 (1.1)    15.56 (0.9)   16.25 (1.7)   15.28 (0.8)     15.8 (1.1)       15.39 (0.8)
set is used to choose the optimal model for each algorithm: for AdaBoost and LogitBoost, the validation data is used to perform early stopping, since there is no natural stopping criterion for these algorithms. We run the algorithms until convergence, where the stopping criterion is that the change of loss is less than 1e-6, and then choose the ensemble classifier from the round with minimum error on the validation data. For BrownBoost, we select the optimal cutoff parameter by the validation set, chosen from {0.0001, 0.001, 0.01, 0.03, 0.05, 0.08, 0.1, 0.14, 0.17, 0.2}. LPBoost maximizes the soft margin subject to linear constraints; its objective is equivalent to DirectBoost maximizing the average margin of the bottom n′ samples [19], so we use the same candidate parameters n′/n = {0.01, 0.05, 0.1, 0.2, 0.5, 0.8} for both. For LPBoost, the termination rule we use is the same as the one in [6], and we select the optimal regularization parameter by the validation set. For DirectBoost, the algorithm terminates when there is no increment in the targeted margin value, and we select the model with the optimal n′ by the validation set.
We use DirectBoostavg to denote our method that runs Algorithm 1 first and then maximizes the average of the bottom n′ margins without ε-relaxation, DirectBoostεavg to denote our method that runs Algorithm 1 first and then maximizes the average margin of the bottom n′ samples with ε-relaxation, and DirectBoostorder to denote our method that runs Algorithm 1 first and then maximizes the bottom n′th margin with ε-relaxation. The means and standard deviations of the test errors are given in Table 1. Clearly DirectBoostavg, DirectBoostεavg and DirectBoostorder outperform the other boosting algorithms in general; in particular, DirectBoostεavg is better than AdaBoost, LogitBoost, LPBoost and BrownBoost over all data sets except Cancer-wdbc. Among the family of DirectBoost algorithms, DirectBoostavg wins on two datasets, where it searches for the optimal margin solution in the region of zero training error; this means that keeping the training error at zero may lead to good performance in some cases. DirectBoostorder wins on three other datasets, but its results are unstable and sensitive to n′. With ε-relaxation, DirectBoostεavg searches for the optimal margin solution in the whole parameter space and gives the best performance on the remaining 5 data sets. It is well known that AdaBoost performs well on datasets with a small test error such as Tic-tac-toe and Fourclass, where it is extremely hard for other boosting algorithms to beat AdaBoost. Nevertheless, DirectBoost is still able to give even better results in this case: for example, on the Tic-tac-toe data set the test error becomes 0.63%, more than halving the error rate. Our method would be most valuable for those who value prediction accuracy, which might be the case in areas of medical and genetic research.
DirectBoostεavg and LPBoost are both designed to maximize the average margin over the bottom n′ samples [19], but as shown by the left plot in Figure 3, DirectBoostεavg generates a larger margin value than LPBoost when decision trees with depth greater than 1 are used as weak learners; this may explain why DirectBoostεavg outperforms LPBoost. When decision stumps are used as weak learners, LPBoost converges to a global optimal solution, and DirectBoostεavg nearly converges to the maximum margin, as shown by the right plot in Figure 3, even though no theoretical justification is known for this observed phenomenon.
[Figure 3: The value of the average margin of the bottom n′ samples vs. the number of iterations for LPBoost with column generation and DirectBoostεavg on the Australian dataset; left: decision trees, right: decision stumps.]
Table 2 shows the number of iterations and the total run times (in seconds) for AdaBoost, LPBoost and DirectBoostεavg at the training stage, where we use the Adult dataset with 10000 training samples. All three algorithms employ decision trees with a depth of 3 as weak learners. The experiments are conducted on a PC with a Core2 Duo 2.6GHz CPU and 2GB RAM.

Table 2: Number of iterations and total run times (in seconds) in the training stage on the Adult dataset with 10000 training samples; the depth of the decision trees is 3.

Method            # of iterations   Total running time
AdaBoost          117852            31168
LPBoost           286               167520
DirectBoostεavg   1737              606

Clearly DirectBoostεavg takes less time for the entire training stage since it converges much faster. LPBoost converges in less than three hundred rounds, but as a totally corrective algorithm it has a greater computational cost in each round. To handle large-scale data sets in practice, similarly to AdaBoost, we can use many tricks; for example, we can partition the data into many parts and use distributed algorithms to select the weak classifier.
3.2 Evaluate noise robustness
In the experiments conducted below, we evaluate the noise robustness of each boosting method. First, we run the above algorithms on a synthetic example created by [14]. This is a simple counterexample showing that, for a broad class of convex loss functions, no boosting algorithm is provably robust to random label noise; this class includes AdaBoost, LogitBoost, etc. LPBoost and its variations [25, 26] do not satisfy the preconditions of the theorem presented by [14], but Glocer [12] showed experimentally that these soft margin boosting methods have the same problem as AdaBoost and LogitBoost in handling random noise.
Table 3: Percent test errors of AdaBoost (AB), LogitBoost (LB), LPBoost (LPB), BrownBoost (BB), DirectBoostεavg, and DirectBoostorder on Long and Servedio's example with random noise.

l    η     AB    LB    LPB   BB    DBεavg  DBorder
5    0     0     0     0     0     0       0
5    0.05  17.6  0     0     1.2   0       0
5    0.2   24.2  23.4  14.5  2.2   24.7    0
20   0     0     0     0     0.6   0       0
20   0.05  30.0  29.6  27.0  15.0  25.4    0
20   0.2   29.9  30.0  29.8  19.6  29.6    3.2

Table 4: Percent test errors of AdaBoost (AB), LogitBoost (LB), LPBoost (LPB), BrownBoost (BB), DirectBoostεavg, and DirectBoostorder on two UCI datasets with random noise.

data   η     AB    LB    LPB   BB    DBεavg  DBorder
wdbc   0     4.3   4.4   4.0   4.5   4.1     3.7
wdbc   0.05  6.6   6.8   4.9   6.5   5.0     5.0
wdbc   0.2   8.8   8.8   7.6   8.3   8.4     6.6
Iono.  0     9.7   9.7   8.6   8.8   8.3     7.7
Iono.  0.05  10.3  12.3  9.3   11.5  9.3     8.6
Iono.  0.2   16.6  15.0  14.6  17.9  14.4    9.5
We repeat the synthetic learning problem with binary-valued weak classifiers that is described in [14]. We set the number of training examples to 1000, and the labels are corrupted with a noise rate η of 0%, 5%, and 20%, respectively. Examples in this setting are binary vectors of length 2l + 11. Table 3 reports the error rates on a clean test data set of size 5000, that is, with uncorrupted test labels; a clean validation set of the same size is also generated. AdaBoost performs very poorly on this problem. This result is not surprising, since [14] designed this example on purpose to expose the inadequacy of convex optimization methods. LogitBoost, LPBoost with column generation, and DirectBoostεavg perform better in the case l = 5 and η = 5%, but for the other cases they do as badly as AdaBoost. BrownBoost is designed for noise tolerance, and it does well in the case l = 5, but it also cannot handle the case l = 20 with η > 0%. On the other hand, DirectBoostorder performs very well in all cases, showing DirectBoostorder's impressive noise tolerance, since the most difficult examples are given up without any penalty.
These algorithms are also tested on two UCI datasets, randomly corrupted with additional label noise on the training data at rates of 5% and 20%, respectively. Again, we keep the validation and test data clean. The results are reported in Table 4 by five-fold cross validation, in the same way as the first experiment. LPBoost with column generation, DirectBoostεavg and DirectBoostorder do well in the case η = 5%, and their performance is better than that of AdaBoost, LogitBoost, and BrownBoost. For the case η = 20%, all the algorithms perform much worse than in the corresponding noise-free case, except DirectBoostorder, which still achieves performance close to the noise-free case.
4 Acknowledgements
This research is supported in part by AFOSR under grant FA9550-10-1-0335, NSF under grant
IIS:RI-small 1218863, DoD under grant FA2386-13-1-3023, and a Google research award.
References
[1] P. Bartlett and M. Traskin. AdaBoost is consistent. Journal of Machine Learning Research, 8:2347–2368, 2007.
[2] M. Bazaraa, H. Sherali and C. Shetty. Nonlinear Programming: Theory and Algorithms, 3rd Edition. Wiley-Interscience, 2006.
[3] D. P. Bertsekas. A distributed algorithm for the assignment problem. Technical Report, MIT, 1979.
[4] D. Bertsekas. Network Optimization: Continuous and Discrete Models. Athena Scientific, 1998.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] A. Demiriz, K. Bennett and J. Shawe-Taylor. Linear programming boosting via column generation. Machine Learning, 46:225–254, 2002.
[7] L. Devroye, L. Györfi and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[8] A. Frank and A. Asuncion. UCI Machine Learning Repository. School of Information and Computer Science, University of California at Irvine, 2006.
[9] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[10] Y. Freund. An adaptive version of the boost by majority algorithm. Machine Learning, 43(3):293–318, 2001.
[11] J. Friedman, T. Hastie and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–374, 2000.
[12] K. Glocer. Entropy regularization and soft margin maximization. Ph.D. Dissertation, UCSC, 2009.
[13] K. Hoffgen, H. Simon and K. van Horn. Robust trainability of single neurons. Journal of Computer and System Sciences, 50(1):114–125, 1995.
[14] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. Machine Learning, 78:287–304, 2010.
[15] E. Mammen and A. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27:1808–1829, 1999.
[16] D. McAllester, T. Hazan and J. Keshet. Direct loss minimization for structured prediction. Neural Information Processing Systems (NIPS), 1594–1602, 2010.
[17] R. Schapire, Y. Freund, P. Bartlett and W. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, 1998.
[18] R. Schapire and Y. Freund. Boosting: Foundations and Algorithms. MIT Press, 2012.
[19] S. Shalev-Shwartz and Y. Singer. On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms. Machine Learning, 80(2-3):141–163, 2010.
[20] I. Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 51(1):128–142, 2005.
[21] P. Tseng and D. Bertsekas. Relaxation methods for strictly convex costs and linear constraints. Mathematics of Operations Research, 16:462–481, 1991.
[22] P. Tseng. Convergence of block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475–494, 2001.
[23] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
[24] V. Vapnik. Statistical Learning Theory. John Wiley, 1998.
[25] M. Warmuth, K. Glocer and G. Rätsch. Boosting algorithms for maximizing the soft margin. Advances in Neural Information Processing Systems (NIPS), 21, 1585–1592, 2007.
[26] M. Warmuth, K. Glocer and S. Vishwanathan. Entropy regularized LPBoost. The 19th International Conference on Algorithmic Learning Theory (ALT), 256–271, 2008.
[27] S. Zhai, T. Xia, M. Tan and S. Wang. Direct 0-1 loss minimization and margin maximization with boosting. Technical Report, 2013.
[28] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56–85, 2004.
[29] T. Zhang and B. Yu. Boosting with early stopping: Convergence and consistency. The Annals of Statistics, 33:1538–1579, 2005.
4,658 | 5,215 | Reservoir Boosting : Between Online and Offline
Ensemble Learning
François Fleuret
Idiap Research Institute
Martigny, Switzerland
francois.fleuret@idiap.ch
Leonidas Lefakis
Idiap Research Institute
Martigny, Switzerland
leonidas.lefakis@idiap.ch
Abstract
We propose to train an ensemble with the help of a reservoir in which the learning
algorithm can store a limited number of samples.
This novel approach lies in the area between offline and online ensemble approaches
and can be seen either as a restriction of the former or an enhancement of the latter.
We identify some basic strategies that can be used to populate this reservoir and
present our main contribution, dubbed Greedy Edge Expectation Maximization
(GEEM), that maintains the reservoir content in the case of Boosting by viewing
the samples through their projections into the weak classifier response space.
We propose an efficient algorithmic implementation which makes it tractable in
practice, and demonstrate its efficiency experimentally on several compute-vision
data-sets, on which it outperforms both online and offline methods in a memory
constrained setting.
1 Introduction
Learning a boosted classifier from a set of samples S = {X, Y}^N ⊂ R^D × {−1, 1} is usually addressed in the context of two main frameworks. In offline Boosting settings [10] it is assumed that the learner has full access to the entire dataset S at any given time. At each iteration t, the learning algorithm calculates a weight w_i for each sample i (the derivative of the loss with respect to the classifier response on that sample) and feeds these weights together with the entire dataset to a weak learning algorithm, which learns a predictor h_t. The coefficient a_t of the chosen weak learner
ht is then calculated based on its weighted error. There are many variations of this basic model,
too many to mention here, but a common aspect of these is that they do not explicitly address the
issue of limited resources. It is assumed that the dataset can be efficiently processed in its entirety at
each iteration. In practice however, memory and computational limitations may make such learning
approaches prohibitive or at least inefficient.
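For reference, one round of the offline scheme just described can be sketched as follows (our own simplification, with AdaBoost's exponential-loss weights as an example; the weak-learner interface is a placeholder):

```python
import numpy as np

def offline_boost_round(X, y, f_scores, train_weak):
    """One offline round: weight every sample by the loss derivative at its current
    score, fit a weak learner on the full weighted set, and compute its coefficient."""
    w = np.exp(-y * f_scores)            # e.g. AdaBoost's exponential-loss weights
    w /= w.sum()
    h = train_weak(X, y, w)              # weak learner sees the entire dataset
    err = np.sum(w * (h(X) != y))        # weighted error of the chosen weak learner
    a = 0.5 * np.log((1.0 - err) / err)  # coefficient from the weighted error (err > 0)
    return h, a
```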
A common approach used in practice to deal with such limitations is that of sub-sampling the data-set
using strategies based on the sample weights W [9, 13]. Though these methods address the limits
of the weak learning algorithm's resources, they nonetheless assume a) access to the entire data-set at all times, and b) the ability to calculate the weights W of the N samples and to sub-sample K of these, all in an efficient manner. The issues with such an approach can be seen in tasks such as computer vision, where samples must not only be loaded sequentially into memory if they do not all fit, which in itself may be computationally prohibitive, but furthermore, once loaded, they must
be pre-processed, for example by extracting descriptors, making the calculation of the weights
themselves a computationally expensive process.
For large datasets, in order to address such issues, the framework of online learning is frequently
employed. Online Boosting algorithms [15] typically assume access solely to a Filter() function, by
which they mine samples from the data-set typically one at a time. Due to the their online nature
such approaches typically treat the weak learning algorithm as a black box, assuming that it can be
trained in an online manner, and concentrate on different approaches to calculating the weak learner
coefficients [15, 4]. A notable exception is the works of [11] and [14], where weak learner selectors
are introduced, one for each weak learner in the ensemble, which are capable of picking a weak
learner from a predetermined pool. All these approaches however are similar in the fact that they are
forced to predetermine the number of weak learners in the boosted strong classifier.
We propose here a middle ground between these two extremes in which the boosted classifier can
store some of the already processed samples in a reservoir, possibly keeping them through multiple
rounds of training. As in online learning we assume access only to a Filter() through which we can
sample Qt samples at each Boosting iteration. This setting is related to the framework proposed
in [2] for dealing with large data-sets, the method proposed there however uses the filter to obtain
a sample and stochastically accepts or rejects the sample based on its weight. The drawback of
this approach is a) that after each iteration all old samples are discarded, and b) the algorithm must
process an increasing number of samples at each iteration as the weights become increasingly smaller.
We propose to acquire a fixed number of samples at each iteration and to add these to a persistent
reservoir, discarding only a subset. The only other work we know which trains a Boosting classifier
in a similar manner is [12], where the authors are solely concerned with learning in the presence of
concept drift and do not propose a strategy for filling this reservoir. Rather they use a simple sliding
window approach and concentrate on the removal and adding of weak learners to tackle this drift.
A related concept to the work presented here is that of learning on a budget [6], where, as in the
online learning setting, samples are presented one at a time to the learner, a perceptron, which builds
a classification model by retaining an active subset of these samples. The main concern in this context
is the complexity of the model itself and its effect via the Gram matrix computation on both training
and test time. Subsequent works on budget perceptrons has led to tighter budgets [16] (at higher
computational costs), while [3] proved that such approaches are mistake-bound.
Similar work on Support Vector Machines [1] proposed LaSVM, an SVM solver which was shown
to converge to the SVM QP solution by adopting a scheme composed of two alternating steps,
which consider respectively the expansion and contraction of the support vector set via the SMO
algorithm. SVM budgeted learning was also considered in [8] via an L1 -SVM formulation which
allowed users to specifically set a budget parameter B, and subsequently minimized the loss on the B
worst-classified examples.
As noted, these approaches are concerned with the complexity of the classification model, that is the
budget refers to the number of samples which have none-zero coefficients in the dual representation
of the classifier. In this respect our work is only loosely related to what is often referred to as budget
learning, in that we solve a qualitatively different task, namely addressing the complexity of the
parsing and processing the data during training.
Table 1: Notation

R_t         the contents of the reservoir at iteration t
|R_t|       the size of the reservoir
Q_t         the fresh batch of samples at iteration t
Σ_AA        the covariance matrix of the edges h ⊙ y
μ_A         the expectation of the edges of samples in set A
y_A         the vector of labels {−1, 1}^|A| of samples in A
w_t         the vector of Boosting weights at iteration t
F_t(x)      the constructed strong classifier at iteration t
Filter()    a filter returning samples from S
h_t         the weak learner chosen at iteration t
H           the family of weak learners
⊙           component-wise (Hadamard) product
T           number of weak learners in the strong classifier
Table 2: Boosting with a Reservoir

Construct R_0 and Q_0 with r and q calls to Filter().
for t = 1, . . . , T do
    Discard q samples from R_{t−1} ∪ Q_{t−1} to obtain R_t
    Select h_t using the samples in R_t
    Compute a_t using R_t
    Construct Q_t with q calls to Filter()
end for
Return F_T = Σ_{t=1}^T a_t h_t
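To make the control flow of Table 2 concrete, the following is a minimal Python sketch of the loop. The names filter_fn, discard, and select_weak_learner are hypothetical stand-ins for Filter(), the reservoir strategy, and the weak-learner training step; they are not part of any released implementation.

import numpy as np

def boosting_weights(ensemble, samples):
    # magnitude of the logistic-loss gradient: 1 / (1 + exp(y_i * F(x_i)))
    F = lambda x: sum(a * h(x) for a, h in ensemble)
    return np.array([1.0 / (1.0 + np.exp(y * F(x))) for x, y in samples])

def reservoir_boosting(filter_fn, discard, select_weak_learner, r, q, T):
    # filter_fn(m): draw m fresh (x, y) samples from the stream
    # discard(samples, w): reservoir strategy, keeps r of the r + q samples
    # select_weak_learner(samples, w): fits h_t and its coefficient a_t
    ensemble = []                        # list of (a_t, h_t) pairs
    R, Q = filter_fn(r), filter_fn(q)    # R_0 and Q_0
    for t in range(T):
        pool = R + Q                     # R_{t-1} union Q_{t-1}
        w = boosting_weights(ensemble, pool)
        R = discard(pool, w)             # keep r samples -> R_t
        w_R = boosting_weights(ensemble, R)
        ensemble.append(select_weak_learner(R, w_R))
        Q = filter_fn(q)                 # fresh batch for the next round
    return ensemble                      # F_T(x) = sum_t a_t * h_t(x)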
2 Reservoir of samples
In this section we present in more detailed form the framework of learning a boosted classifier with
the help of a reservoir. As mentioned, the batch version of Boosting consists of iteratively selecting a
weak learner ht at each iteration t, based on the loss reduction they induce on the full training set
S. In the reservoir setting, weak learners are selected solely from the information provided by the
samples contained in the reservoir Rt .
Let N be the number of training samples, and S = {1, . . . , N } the set of their indexes. We
consider here one iteration of a Boosting procedure, where each sample is weighted according to its
contribution to the overall loss. Let y ∈ {−1, 1}^N be the sample labels, and H ⊂ {−1, 1}^N the set
of weak-learners, each identified with its vector of responses over the samples. Let ω ∈ R^N_+ be the
sample weights at that Boosting iteration.
For any subset of sample indexes B ⊂ {1, . . . , N} let y_B ∈ {−1, 1}^|B| be the 'extracted' vector.
We define ω_B similarly, and for any weak learner h ∈ H let h_B ∈ {−1, 1}^|B| stand for the vector
of the |B| responses over the samples in B.
At each iteration t, the learning algorithm is presented with a batch of fresh samples Q_t ⊂ S, |Q_t| = q,
and must choose r samples from the full set of samples R_t ∪ Q_t at its disposal, in order to build
R_{t+1} with |R_{t+1}| = r, which it subsequently uses for training.
Using the samples from R_t, the learner chooses a weak learner h_t ∈ H to maximize ⟨h_{R_t} ⊙ y_{R_t}, w^t_{R_t}⟩,
where ⊙ stands for the Hadamard component-wise vector product. Maximizing this latter quantity
corresponds to minimizing the weighted error estimated on the samples currently in R_t. The weight
a_t of the selected weak learner can also be estimated with R_t.
at of the selected weak learner can also be estimated with Rt .
The learner then receives a fresh batch of samples Qt+1 and the process continues iteratively. See
algorithm in Table 2. In the following we will address the issue of which strategy to employ to discard
the q samples at each time step t. To our knowledge, no previous work has been published in this or a
similar framework.
3 Reservoir Strategies
In the following we present a number of strategies for populating the reservoir, i.e. for choosing which
q samples from R_t ∪ Q_t to discard. We begin by identifying three basic and rather straightforward
approaches. Max Weights (Max): At each iteration t the weight vector w^t_{R_t ∪ Q_t} is computed for the
r + q samples and the r samples with the largest weights are kept. Weighted Sampling (WSam): As
above, w^t_{R_t ∪ Q_t} is computed, then normalized to 1, and used as a distribution to sample r samples
to keep without replacement. Random Sampling (Rand): The reservoir is constructed by sampling
uniformly r samples from the r + q available, without replacement.
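As a point of reference, each of these baselines takes only a few lines of NumPy; this sketch returns the indices of the r samples to keep, assuming the weights are non-negative and not all zero.

import numpy as np

def max_weights(w, r):
    # keep the r samples with the largest boosting weights
    return np.argsort(w)[-r:]

def weighted_sampling(w, r, rng):
    # keep r samples drawn without replacement, proportionally to their weights
    return rng.choice(len(w), size=r, replace=False, p=w / w.sum())

def random_sampling(w, r, rng):
    # keep r samples drawn uniformly at random, without replacement
    return rng.choice(len(w), size=r, replace=False)

rng = np.random.default_rng(0)
w = np.abs(rng.standard_normal(20))
print(max_weights(w, 5), weighted_sampling(w, 5, rng), random_sampling(w, 5, rng))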
These will serve mainly as benchmark baselines against which we will compare our proposed method,
presented below, which is more sophisticated and, as we show empirically, more efficient. These
baselines are presented to highlight that a more sophisticated reservoir strategy is needed to ensure
competitive performance, rather than to serve as examples of state-of-the-art baselines.
Our objective will be to populate the reservoir with samples that will allow for an optimal selection
of weak learners, as close as possible to the choice we would make if we could keep all samples.
The issue at hand is similar to that of feature selection: The selected samples should be jointly
informative for choosing good weak learners. This forces us to find a proper balance between the
individual importance of the kept samples (i.e. choosing those with large weights) and maximizing
the heterogeneity of the weak learners responses on them.
3.1 Greedy Edge Expectation Maximization
In that reservoir setting, it makes sense that given a set of samples A from which we must discard
samples and retain only a subset B, what we would like is to retain a training set that is as representative as possible of the entire set A. Ideally, we would like B to be such that if we pick the optimal
weak-learner according to the samples it contains
h* = argmax_{h ∈ H} ⟨h_B ⊙ y_B, w_B⟩   (1)

it maximizes the same quantity estimated on all the samples in A, i.e. we want ⟨h*_A ⊙ y_A, w_A⟩ to be
large.
There may be many weak-learners in H that have the exact same responses as h* on the samples
in B, and since we consider a situation where we will not have access to the samples from A \ B
anymore, we model the choice among these weak-learners as a random choice. In which case, a good
h* is one maximizing

E_{H ∼ U(H)} (⟨H_A ⊙ y_A, ω_A⟩ | H_B = h*_B),   (2)

that is the average of the scores on the full set A of the weak-learners which coincide with h* on the
retained set B.
We propose to model the distribution U(H) with a normal law. If H is picked uniformly in H, under
a reasonable assumption of symmetry, we propose
H ⊙ y ∼ N(μ, Σ)   (3)

where μ is the vector of dimension N of the expectations of weak learner edges, and Σ is a covariance
matrix of size N × N. Under this model, if B̄ = A \ B, and with Σ_{A,B} denoting an extracted
sub-matrix, we have

E_{H ∼ U(H)} (⟨H_A ⊙ y_A, ω_A⟩ | H_B = h*_B)   (4)
= E_{H⊙y ∼ N(μ,Σ)} (⟨H_A ⊙ y_A, ω_A⟩ | H_B = h*_B)   (5)
= ⟨h*_B ⊙ y_B, ω_B⟩ + E_{H⊙y ∼ N(μ,Σ)} (⟨H_B̄ ⊙ y_B̄, ω_B̄⟩ | H_B = h*_B)   (6)
= ⟨h*_B ⊙ y_B, w_B⟩ + ⟨μ_B̄ + Σ_B̄B Σ_BB^{−1} (h*_B ⊙ y_B − μ_B), w_B̄⟩   (7)
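A direct NumPy transcription of Eq. (7) may help clarify how the retained and the discarded samples enter the score. This is a sketch: the assumed edge vector h_edges = h*_B ⊙ y_B is supplied by the caller (its choice is discussed in Section 3.3), and Σ_BB is assumed invertible.

import numpy as np

def geem_score(mu, Sigma, w, keep, h_edges):
    # keep: boolean mask over A selecting the subset B
    # h_edges: assumed edges h*_B (element-wise) y_B on the kept samples
    B, Bc = keep, ~keep
    S_bb = Sigma[np.ix_(B, B)]      # Sigma_BB (assumed invertible)
    S_cb = Sigma[np.ix_(Bc, B)]     # Sigma_{Bbar,B}
    # conditional mean of the edges on the complement, given the edges on B
    cond = mu[Bc] + S_cb @ np.linalg.solve(S_bb, h_edges - mu[B])
    return h_edges @ w[B] + cond @ w[Bc]    # Eq. (7)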
Though the modeling of the discrete variables H ⊙ y by a continuous distribution may seem awkward,
we point out two important aspects. Firstly, the parametric modeling allows for an analytical expression
for the calculation of (2). Given that we seek to maximize this value over the possible subsets B of
A, an analytic approach is necessary for the algorithm to retain tractability. Secondly, for a given
vector of edges h*_B ⊙ y_B in B, the vector μ_B̄ + Σ_B̄B Σ_BB^{−1} (h*_B ⊙ y_B − μ_B) is not only the conditional
expectation of h_B̄ ⊙ y_B̄, but also its optimal linear predictor in a least squares error sense.
We note that choosing B based on (7) requires estimates of three quantities: the expected weak-learner
edges μ_A, the covariance matrix Σ_AA, and the weak learner h* trained on B. Given these quantities,
we must also develop a tractable optimization scheme to find the B maximizing it.
3.2 Computing μ and Σ
As mentioned, the proposed method requires in particular an estimate of the vector of expected edges
μ_A of the samples in A, as well as the corresponding covariance matrix Σ_AA.
In practice, the estimation of the above depends on the nature of the weak learner family H. In
the case of classification stumps, which we use in the experiments below, both these values can be
calculated with small computational cost.
A classification stump is a simple classifier h_{θ,σ,d} which, for a given threshold θ ∈ R, polarity
σ ∈ {−1, 1}, and feature index d ∈ {1, . . . , D}, has the following form:

∀x ∈ R^D,  h_{θ,σ,d}(x) = 1 if σ x_d ≥ σ θ, and −1 otherwise.   (8)
where x_d refers to the value of the dth component of x.
In practice, when choosing the optimal stump for a given set of samples A, a learner would sort all the
samples according to each of the D dimensions, and for each dimension d it would consider stumps
with thresholds θ between two consecutive samples in that sorted list.
For this family of stumps H, and given that we shall consider both polarities, E_h(h_A ⊙ y_A) = 0.
The covariance of the edge of two samples can also be calculated efficiently, with O(|A|² D) complexity. For two given samples i, j we have

∀h ∈ H,  y_i h_i y_j h_j ∈ {−1, 1}.   (9)

Having sorted the samples along a specific dimension d, we have that for σ = 1, y_i h_i y_j h_j ≠ y_i y_j
for those weak learners which disagree on those samples, i.e. with min(x_i^d, x_j^d) < θ < max(x_i^d, x_j^d).
If I_j^d, I_i^d are the indexes of the samples in the sorted list then there are |I_j^d − I_i^d| such disagreeing
weak learners for σ = 1 (plus the same quantity for σ = −1); given that for each dimension d there
correspond 2(|A| − 1) weak-learners in H, we reach the following update rule, ∀d, ∀{i, j}:

Σ_AA(i, j) += y_i y_j (2(|A| − 1) − 4 |I_j^d − I_i^d|)   (10)

where Σ_AA(i, j) refers to the (i, j) element of Σ. As can be seen, this leads to a cost of O(|A|² D).
Given that commonly D ≫ |A|, this cost should not be much higher than O(D|A| log |A|), the cost
of sorting along the D dimensions.
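The update rule (10) is easy to vectorize per dimension; the sketch below accumulates the unnormalized covariance of the edges over all stumps (both polarities), leaving out any normalization by the total number of weak learners.

import numpy as np

def stump_edge_covariance(X, y):
    # X: (n, D) feature matrix, y: (n,) labels in {-1, +1}
    n, D = X.shape
    Sigma = np.zeros((n, n))
    yy = np.outer(y, y)
    for d in range(D):
        rank = np.argsort(np.argsort(X[:, d]))       # I^d: position in sorted order
        gap = np.abs(rank[:, None] - rank[None, :])  # |I_i^d - I_j^d| for all pairs
        Sigma += yy * (2 * (n - 1) - 4 * gap)        # Eq. (10)
    return Sigma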
3.3 Choice of h*
As stated, the estimation of h* for a given B must be computationally efficient. We could further
commit to the Gaussian assumption by defining p(h* = h), ∀h ∈ H, i.e. the probability that a weak
learner h will be the chosen one given that it will be trained on B, and integrating over H; this,
however, though consistent with the Gaussian assumption, is computationally prohibitive. Rather, we
present here two cheap alternatives, both of which perform well in practice.
The first and simplest strategy is to use ∀B, h*_B ⊙ y_B = (1, . . . , 1), which is equivalent to making the
assumption that the training process will result in a weak learner which performs perfectly on the
training data B. This is exactly what the process will strive to achieve, however unlikely it may be.
The second is to generate a number |H_Lattice| of weak learner edges by sampling on the {−1, 1}^|B|
lattice using the Gaussian H ⊙ y ∼ N(μ_B, Σ_BB) restricted to this lattice and to keep the optimal
h* = argmax_{h ∈ H_Lattice} ⟨h_B ⊙ y_B, w_B⟩. We can further simplify this process by considering the
whole set A and the lattice {−1, 1}^|A| and simply extracting the values h*_B for the different subsets B.
Though much more complex, this approach can be implemented extremely efficiently; experiments
showed however that the simple rule ∀B, h*_B ⊙ y_B = (1, . . . , 1) works just as well in practice and
is considerably cheaper. In the following experiments we present results solely for this first rule.
3.4 Greedy Calculation of argmax_B
Despite the analytical formulation offered by our Gaussian assumption, an exact maximization over
all possible subsets remains computationally intractable. For this reason we propose a greedy
approach to building the reservoir population which is computationally bounded.
We initialize the set B = A, i.e. initially we assume we are keeping all the samples, and calculate
Σ_BB^{−1}. The greedy process then iteratively goes through the |B| samples in B and finds the sample j
such that for B' = B \ {j} the value

⟨Σ_B̄'B' Σ_B'B'^{−1} (h*_B' ⊙ y_B'), w_B̄'⟩ + ⟨h*_B' ⊙ y_B', w_B'⟩   (11)

is maximized, where, in this context, h* refers to the weak learner chosen by training on B'. This
process is repeated q times, i.e. until |B̄| = q, discarding one sample at each iteration.
In the experiments presented here, we stop the greedy subset selection after these q steps. However in
practice the subset selection can continue by choosing pairs k, j to swap between the two sets. In
our experiments however we did not notice any gain from further optimization of the subset B.
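Putting the pieces together, a naive version of the greedy discard loop looks as follows; it reuses geem_score from the sketch above together with the all-ones edge rule of Section 3.3, and it omits the incremental inverse updates of Section 3.5.1 that make the method tractable in practice.

import numpy as np

def greedy_discard(mu, Sigma, w, q):
    # start from B = A and greedily drop the q samples whose removal
    # maximizes Eq. (11)/(7); returns the boolean mask of kept samples
    keep = np.ones(len(w), dtype=bool)
    for _ in range(q):
        best_j, best_val = -1, -np.inf
        for j in np.flatnonzero(keep):
            keep[j] = False
            val = geem_score(mu, Sigma, w, keep, np.ones(keep.sum()))
            if val > best_val:
                best_j, best_val = j, val
            keep[j] = True
        keep[best_j] = False
    return keep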
3.5 Evaluation of E(⟨h*_A, w_A⟩ | B)
Each step in the above greedy process requires going through all the samples j in the current B and
calculating E(⟨h*_A, w_A⟩ | B') for B' = B \ {j}.
In order for our method to be computationally tractable we must be able to compute the above value
with a limited computational cost. The naive approach of calculating the value from scratch for each
j would cost O(|B'|³ + |B̄'||B|). The main computational cost here is the first factor, incurred
in calculating the inverse of the covariance matrix Σ_B'B', which results from the matrix Σ_BB by
removing a single row and column. It is thus important to be able to perform this calculation with a
low computational cost.
3.5.1 Updating Σ_B'B'^{−1}
For a given matrix M and its inverse M^{−1} we would like to efficiently calculate the inverse of M_{−j},
which results from M by the deletion of row and column j.
It can be shown that the inverse of the matrix M_{e_j}, which results from M by the substitution of row
and column j by the basis vector e_j, is given by the following formula:

M_{e_j}^{−1} = M^{−1} − (1 / (M^{−1})_{jj}) M^{−1}_{·j} M^{−1}_{j·} + e_j^T e_j   (12)

where M_{·j} stands for the vector of elements of the jth column of matrix M and M_{j·} stands for the
vector of elements of its jth row. We omit the proof (a relatively straightforward manipulation of the
Sherman-Morrison formulas) due to space constraints. The inverse M_{−j}^{−1} can be recovered by simply
removing the jth row and column of M_{e_j}^{−1}.
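In code, the downdate of Eq. (12) amounts to one rank-one correction of M^{−1} followed by dropping row and column j. The e_j^T e_j term only affects the (j, j) entry, which is deleted afterwards, so the sketch below skips it; a sanity check against direct inversion is included.

import numpy as np

def inverse_after_deletion(M_inv, j):
    # rank-one downdate, Eq. (12), in O(n^2) instead of O(n^3)
    Me = M_inv - np.outer(M_inv[:, j], M_inv[j, :]) / M_inv[j, j]
    return np.delete(np.delete(Me, j, axis=0), j, axis=1)

# sanity check against direct inversion of the reduced matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)          # well-conditioned SPD matrix
direct = np.linalg.inv(np.delete(np.delete(M, 2, 0), 2, 1))
assert np.allclose(inverse_after_deletion(np.linalg.inv(M), 2), direct)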
Based on this we can compute Σ_B'B'^{−1} in O(|B|²). We further exploit the fact that the matrices
Σ_B̄'B' and Σ_B'B'^{−1} enter into the calculations through the products Σ_B'B'^{−1} h*_B' and w_B̄'^T Σ_B̄'B'. Thus by
pre-calculating the products Σ_BB^{−1} h*_B and w_B̄^T Σ_B̄B once at the beginning of each greedy optimization
step, we can incur a cost of O(|B|) for each sample j and an O(|B|²) cost overall.
3.6 Weights ŵ_B
GEEM provides a method for selecting which samples to keep and which to discard. However in
doing so it creates a biased sample B of the set A, and consequently the weights w_B are not representative
of the weight distribution w_A. It is thus necessary to alter the weights w_B to obtain a new weight
vector ŵ_B which takes this bias into account. Based on assumption (3) and Eq. (7), and the fact
that μ_A = 0, we set

ŵ_B = w_B + w_B̄^T Σ_B̄B Σ_BB^{−1}   (13)

The resulting weight vector ŵ_B used to pick the weak-learner h* correctly reflects the entire set
A = R_t ∪ Q_t (under the Gaussian assumption).
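Under the same Gaussian model, the bias correction of Eq. (13) reduces to a single linear solve; the following sketch assumes Σ is symmetric, as a covariance matrix is, and returns the corrected weights on B as a column vector.

import numpy as np

def corrected_weights(w, Sigma, keep):
    # Eq. (13): w_B plus the correction w_Bbar^T Sigma_{Bbar,B} Sigma_BB^{-1}
    B, Bc = keep, ~keep
    S_bb = Sigma[np.ix_(B, B)]
    S_cb = Sigma[np.ix_(Bc, B)]
    return w[B] + np.linalg.solve(S_bb, S_cb.T @ w[Bc])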
3.7 Overall Complexity
The proposed method GEEM comprises, at each boosting iteration, three main steps: (1) the
calculation of Σ_AA, (2) the optimization of B, and (3) the training of the weak learner h_t.
The third step is common to all the reservoir strategies presented here. In the case of classification
stumps, by presorting the samples along each dimension and exploiting the structure of the hypothesis
space H, we can incur a cost of O(D|B| log |B|) where D is the dimensionality of the input space.
The first step, as mentioned, incurs a cost of O(|A|² D) if we go through all dimensions of the
data. However, one can meet the minimum objective of acquiring an invertible matrix Σ_AA by only
looking at |A| dimensions, incurring a cost of O(|A|³). Finally the second step, as analyzed in the
previous section, incurs a cost of O(q|A|²).
Thus the overall complexity of the proposed method is O(|A|³ + D|A| log |A|), which in practice
should not be significantly larger than O(D|B| log |B|), the cost of the remaining reservoir strategies.
We note that this analysis ignores the cost of processing incoming samples Q_t, which is also common
to all strategies; depending on the task, this cost may handily dominate all others.
4 Experiments
In order to experimentally validate both the framework of reservoir boosting as well as the proposed
method GEEM, we conducted experiments on four popular computer vision datasets.
In all our experiments we use logitboost for training. It attempts to minimize the logistic loss, which
is less aggressive than the exponential loss. Initial experiments with the exponential loss in a
reservoir setting showed it to be unstable and to lead to degraded performance for all the reservoir
strategies presented here. In [14] the authors performed extensive comparison in an online setting and
also found logitboost to yield the best results. We set the number of weak learners T in the boosted
classifier to be T = 250 common to all methods. In the case of the online boosting algorithms this
translates to fixing the number of weak learners.
Finally, for the methods that use a reservoir (that is, GEEM and the baselines outlined in Section 3) we set
r = q. Thus at every iteration, the reservoir is populated with |Rt | = r samples and the algorithm
receives a further |Qt | = r samples from the filter. The reservoir strategy is then used to discard r of
these samples to build Rt+1 .
4.1 Data-sets
We used four standard datasets: CIFAR-10 is a recognition dataset consisting of 32 × 32 images
of 10 distinct classes depicting vehicles and animals. The training data consists of 5000 images
of each class. We pre-process the data as in [5] using code provided by the authors. MNIST is
a well-known optical digit recognition dataset comprising 60000 images of size 28 × 28 of digits
from 0 to 9. We do not preprocess the data in any way, using the raw pixels as features. INRIA is
a pedestrian detection dataset. The training set consists of 12180 images of size 64 × 128 of both
pedestrians and background images, from which we extract HoG features [7]. STL-10 is an image
recognition dataset consisting of images of size 96 × 96 belonging to 10 classes, each represented by
500 images in the training set. We pre-process the data as for CIFAR.
4.2 Baselines
The baselines for the reservoir strategy have already been outlined in Section 3, and we also benchmarked
three online Boosting algorithms: Oza [15], Chen [4], and Bisc [11]. The first two algorithms treat
weak learners as a black-box but predefine their number. We initialize the weak learners of these
approaches by running Logitboost offline using a subset of the training set as we found that randomly
sampling the weak learners led to very poor performance; thus though they are online algorithms,
nonetheless in the experiments presented here they are afforded an offline initialization step. Note
that these approaches are not mutually exclusive with the proposed method, as the weak learners
picked by GEEM can be combined with an online boosting algorithm optimizing their coefficients.
For the final method [11], we set the number of selectors to K = 250, resulting in the same
number of weak learners as the other methods. We also conducted experiments with [14] which is
closely related to [11], however as it performed consistently worse than [11], we do not show those
results here.
Finally we compared our method against two sub-sampling methods that have access to the full
dataset and subsample r samples using a weighted sampling routine. At each iteration, these methods
compute the boosting weights of all the samples in the dataset and use weighted sampling to obtain
a subset Rt . The first method is a simple weighted sampling method (WSS) while the second is
Madaboost (Mada) which combines weighted sampling with weight adjustment for the sub-sampled
samples. We furthermore show a comparison with a fixed reservoir baseline (Fix); this baseline
subsamples the dataset once prior to learning and then trains the ensemble using offline Adaboost;
the contents of the reservoir in this case do not change from iteration to iteration.
5 Results and Discussion
Tables 3, 4, and 5 list, respectively, the performance of the reservoir baselines, the online Boosting
techniques, and the sub-sampling methods. Each table also presents the performance of our GEEM
approach in the same settings.
Dataset | Max r=100    | Max r=250    | Rand r=100   | Rand r=250   | WSam r=100   | WSam r=250   | GEEM r=100   | GEEM r=250
CIFAR   | 29.59 (0.59) | 29.16 (0.71) | 46.02 (0.35) | 45.88 (0.24) | 48.92 (0.34) | 50.09 (0.24) | 50.96 (0.36) | 54.87 (0.28)
STL     | 30.20 (0.75) | 30.72 (0.82) | 39.25 (0.32) | 39.40 (0.25) | 41.60 (0.39) | 42.93 (0.30) | 42.40 (0.65) | 45.70 (0.38)
INRIA   | 95.57 (0.49) | 96.31 (0.37) | 91.54 (0.49) | 91.72 (0.35) | 94.29 (0.23) | 94.63 (0.30) | 97.21 (0.21) | 97.52 (0.13)
MNIST   | 66.74 (1.45) | 68.25 (0.81) | 79.97 (0.24) | 79.59 (0.22) | 83.96 (0.29) | 84.07 (0.23) | 84.66 (0.30) | 84.33 (0.33)

Table 3: Test accuracy on the four datasets for the different reservoir strategies
Dataset | Chen         | Bisc         | Oza          | GEEM (r=250)
CIFAR   | 39.40 (1.91) | 45.03 (0.93) | 49.16 (0.40) | 54.87 (0.28)
STL     | 33.09 (1.49) | 36.35 (0.49) | 39.98 (0.56) | 45.70 (0.38)
INRIA   | 94.23 (0.97) | 95.65 (0.38) | 95.50 (0.49) | 97.53 (0.13)
MNIST   | 80.99 (1.11) | 85.25 (0.82) | 84.85 (0.54) | 84.33 (0.33)

Table 4: Comparison of GEEM with online boosting algorithms
Dataset | WSS r=100    | WSS r=250    | Mada r=100   | Mada r=250   | Fix r=1,000  | Fix r=2,500  | GEEM r=100   | GEEM r=250
CIFAR   | 50.38 (0.38) | 51.66 (0.30) | 48.87 (0.26) | 49.44 (0.33) | 48.41 (0.88) | 52.40 (0.77) | 50.96 (0.36) | 54.87 (0.28)
STL     | 42.54 (0.35) | 44.07 (0.31) | 41.36 (0.32) | 42.34 (0.24) | 42.04 (0.19) | 46.07 (0.41) | 42.40 (0.65) | 45.70 (0.38)
INRIA   | 94.24 (0.30) | 94.65 (0.16) | 94.26 (0.27) | 94.65 (0.10) | 92.46 (0.67) | 93.82 (0.74) | 97.21 (0.21) | 97.53 (0.13)
MNIST   | 84.21 (0.27) | 84.51 (0.16) | 79.00 (0.33) | 78.99 (0.31) | 85.37 (0.33) | 88.02 (0.15) | 84.66 (0.30) | 84.33 (0.33)

Table 5: Comparison of GEEM with subsampling algorithms
As can be seen, GEEM outperforms the other reservoir strategies on three of the four datasets and
performs on par with the best on the fourth (MNIST). It also outperforms the on-line Boosting
techniques on three data-sets and performs on par with the best baselines on MNIST. Finally, GEEM performs
better than all the sub-sampling algorithms. Note that the Fix baseline was provided with ten times
the number of samples to reach a similar level of performance.
These results demonstrate that both the reservoir framework we propose for Boosting, and the specific
GEEM algorithm, provide performance greater or on par with existing state-of-the-art methods. When
compared with other reservoir strategies, GEEM suffers from larger complexity which translates to
a longer training time. For the INRIA dataset and r = 100 GEEM requires circa 70 seconds for
training as opposed to 50 for the WSam strategy, while for r = 250 GEEM takes approximately 320
seconds to train compared to 70 for WSam. We note however that even when equating training time,
which translates to using r = 100 for GEEM and r = 250 for WSam, GEEM still outperforms the
simpler reservoir strategies. The timing results on the other 3 datasets were similar in this respect.
Many points can still be improved. In our ongoing research we are investigating different approaches
to modeling the process of evaluating h*; of particular importance is of course that it is both reasonable
and fast to compute. One approach is to consider the maximum a posteriori value of h* by drawing on
elements of extreme value theory.
We have further plans to adapt this framework, and the proposed method, to a series of other settings.
It could be applied in the context of parallel processing, where a dataset can be split among CPUs
each training a classifier on a different portion of the data.
Finally, we are also investigating the method's suitability for active learning tasks and dataset creation.
We note that the proposed method GEEM is not given information concerning the labels of the
samples, but simply the expectation and covariance matrix of the edges.
Acknowledgments
This work was supported by the European Community's Seventh Framework Programme FP7 Challenge 2 - Cognitive Systems, Interaction, Robotics - under grant agreement No 247022 - MASH.
References
[1] Antoine Bordes, Seyda Ertekin, Jason Weston, and Léon Bottou. Fast kernel classifiers with online and active learning. J. Mach. Learn. Res., 6:1579-1619, December 2005.
[2] Joseph K. Bradley and Robert E. Schapire. Filterboost: Regression and classification on large datasets. In NIPS, 2007.
[3] Nicolò Cesa-Bianchi and Claudio Gentile. Tracking the best hyperplane with a simple budget perceptron. In Proc. of Nineteenth Annual Conference on Computational Learning Theory, pages 483-498. Springer-Verlag, 2006.
[4] Shang-Tse Chen, Hsuan-Tien Lin, and Chi-Jen Lu. An online boosting algorithm with theoretical justifications. In John Langford and Joelle Pineau, editors, ICML '12, pages 1007-1014, New York, NY, USA, July 2012. Omnipress.
[5] Adam Coates and Andrew Ng. The importance of encoding versus training with sparse coding and vector quantization. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 921-928, New York, NY, USA, June 2011. ACM.
[6] Koby Crammer, Jaz S. Kandola, and Yoram Singer. Online classification on a budget. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf, editors, NIPS. MIT Press, 2003.
[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886-893, 2005.
[8] Ofer Dekel and Yoram Singer. Support vector machines on a budget. In NIPS, pages 345-352, 2006.
[9] Carlos Domingo and Osamu Watanabe. Madaboost: A modification of adaboost. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, COLT '00, pages 180-189, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119-139, August 1997.
[11] Helmut Grabner and Horst Bischof. On-line boosting and vision. In CVPR (1), pages 260-267, 2006.
[12] Mihajlo Grbovic and Slobodan Vucetic. Tracking concept change with incremental boosting by minimization of the evolving exponential loss. In ECML PKDD '11, pages 516-532, Berlin, Heidelberg, 2011. Springer-Verlag.
[13] Zdenek Kalal, Jiri Matas, and Krystian Mikolajczyk. Weighted sampling for large-scale boosting. In BMVC, 2008.
[14] C. Leistner, A. Saffari, P.M. Roth, and H. Bischof. On robustness of on-line boosting - a competitive study. In Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on, pages 1362-1369, 2009.
[15] Nikunj C. Oza and Stuart Russell. Online bagging and boosting. In Artificial Intelligence and Statistics 2001, pages 105-112. Morgan Kaufmann, 2001.
[16] Jason Weston, Antoine Bordes, and Léon Bottou. Online (and offline) on an even tighter budget. In Artificial Intelligence and Statistics 2005, 2005.
4,659 | 5,216 | Beyond Pairwise: Provably Fast Algorithms for
Approximate k-Way Similarity Search
Anshumali Shrivastava
Department of Computer Science
Computing and Information Science
Cornell University
Ithaca, NY 14853, USA
anshu@cs.cornell.edu
Ping Li
Department of Statistics & Biostatistics
Department of Computer Science
Rutgers University
Piscataway, NJ 08854, USA
pingli@stat.rutgers.edu
Abstract
We go beyond the notion of pairwise similarity and look into search problems
with k-way similarity functions. In this paper, we focus on problems related to
3-way Jaccard similarity: R3way = |S1 ∩ S2 ∩ S3| / |S1 ∪ S2 ∪ S3|, S1, S2, S3 ∈ C, where C is a
size n collection of sets (or binary vectors). We show that approximate R3way
similarity search problems admit fast algorithms with provable guarantees, analogous to the pairwise case. Our analysis and speedup guarantees naturally extend
to k-way resemblance. In the process, we extend the traditional framework of locality
sensitive hashing (LSH) to handle higher-order similarities, which could be of independent theoretical interest. The applicability of R3way search is shown on the
'Google Sets' application. In addition, we demonstrate the advantage of R3way
resemblance over the pairwise case in improving retrieval quality.
1 Introduction and Motivation
Similarity search (near neighbor search) is one of the fundamental problems in Computer Science.
The task is to identify a small set of data points which are 'most similar' to a given input query.
Similarity search algorithms have been one of the basic building blocks in numerous applications
including search, databases, learning, recommendation systems, computer vision, etc.
One widely used notion of similarity on sets is the Jaccard similarity or resemblance [5, 10, 18, 20].
Given two sets S1, S2 ⊆ Ω = {0, 1, 2, ..., D − 1}, the resemblance R2way between S1 and S2 is
defined as: R2way = |S1 ∩ S2| / |S1 ∪ S2|. Existing notions of similarity in search problems mainly work with
pairwise similarity functions. In this paper, we go beyond this notion and look at the problem of
k-way similarity search, where the similarity function of interest involves k sets (k ≥ 2). Our work
exploits the fact that resemblance can be naturally extended to k-way resemblance similarity [18,
21], defined over k sets {S1, S2, ..., Sk} as Rk-way = |S1 ∩ S2 ∩ ... ∩ Sk| / |S1 ∪ S2 ∪ ... ∪ Sk|.
Binary high-dimensional data
The current web datasets are typically binary, sparse, and extremely high-dimensional, largely due to the wide adoption of the 'Bag of Words' (BoW) representations for documents and images. It is often the case, in BoW representations, that just the presence
or absence (0/1) of specific feature words captures sufficient information [7, 16, 20], especially
with (e.g.,) 3-grams or higher-order models. And so, the web can be imagined as a giant storehouse
of ultra high-dimensional sparse binary vectors. Of course, binary vectors can also be equivalently
viewed as sets (containing locations of the nonzero features).
We list four practical scenarios where k-way resemblance search would be a natural choice.
(i) Google Sets:
(http://googlesystem.blogspot.com/2012/11/google-sets-still-available.html)
Google Sets is among the earliest Google projects; it allows users to generate a list of similar
words by typing only a few related keywords. For example, if the user types 'mazda' and 'honda',
the application will automatically generate related words like 'bmw', 'ford', 'toyota', etc. This
application is currently available in Google Spreadsheet. If we assume the term-document binary
representation of each word w in the database, then given query w1 and w2, we show that |w1 ∩ w2 ∩ w| / |w1 ∪ w2 ∪ w|
turns out to be a very good similarity measure for this application (see Section 7.1).
(ii) Joint recommendations: Users A and B would like to watch a movie together. The profile of
each person can be represented as a sparse vector over a giant universe of attributes. For example,
a user profile may be the set of actors, actresses, genres, directors, etc, which she/he likes. On the
other hand, we can represent a movie M in the database over the same universe based on attributes
associated with the movie. If we have to recommend movie M, jointly to users A and B, then a
natural measure to maximize is |A ∩ B ∩ M| / |A ∪ B ∪ M|. The problem of group recommendation [3] is applicable
in many more settings such as recommending people to join circles, etc.
(iii) Improving retrieval quality: We are interested in finding images of a particular type of object, and we have two or three (possibly noisy) representative images. In such a scenario, a natural
expectation is that retrieving images simultaneously similar to all the representative images should
be more refined than just retrieving images similar to any one of them. In Section 7.2, we demonstrate that in cases where we have more than one element to search for, we can refine our search
quality using k-way resemblance search. In a dynamic feedback environment [4], we can improve
subsequent search quality by using k-way similarity search on the pages already clicked by the user.
(iv) Beyond pairwise clustering: While machine learning algorithms often utilize the data
through pairwise similarities (e.g., inner product or resemblance), there are natural scenarios where
the affinity relations are not pairwise, but rather triadic, tetradic or higher [2, 30]. The computational
cost, of course, will increase exponentially if we go beyond pairwise similarity.
Efficiency is crucial With the data explosion in modern applications, the brute force way of scanning all the data for searching is prohibitively expensive, especially in user-facing applications like
This paper fulfills this requirement for k-way resemblance and its derived similarities. In particular,
we show fast algorithms with provable query time guarantees for approximate k-way resemblance
search. Our algorithms and analysis naturally provide a framework to extend classical LSH framework [14, 13] to handle higher-order similarities, which could be of independent theoretical interest.
Organization
In Section 2, we review approximate near neighbor search and classical Locality
Sensitive Hashing (LSH). In Section 3, we formulate the 3-way similarity search problems. Sections 4, 5, and 6 describe provably fast algorithms for several search problems. Section 7 demonstrates the applicability of 3-way resemblance search in real applications.
2 Classical c-NN and Locality Sensitive Hashing (LSH)
Initial attempts of finding efficient (sub-linear time) algorithms for exact near neighbor search, based
on space partitioning, turned out to be a disappointment with the massive dimensionality of current
datasets [11, 28]. Approximate versions of the problem were proposed [14, 13] to break the linear
query time bottleneck. One widely adopted formalism is the c-approximate near neighbor (c-NN).
Definition 1 (c-Approximate Near Neighbor or c-NN). Consider a set of points, denoted by P, in a
D-dimensional space R^D, and parameters R0 > 0, δ > 0. The task is to construct a data structure
which, given any query point q, if there exists an R0-near neighbor of q in P, reports some cR0-near
neighbor of q in P with probability 1 − δ.
The usual notion of c-NN is for distance. Since we deal with similarities, we define an R0-near neighbor
of point q as a point p with Sim(q, p) ≥ R0, where Sim is the similarity function of interest.
Locality sensitive hashing (LSH) [14, 13] is a popular framework for c-NN problems. LSH is a
family of functions, with the property that similar input objects in the domain of these functions
have a higher probability of colliding in the range space than non-similar ones. In formal terms,
consider H a family of hash functions mapping R^D to some set S.
Definition 2 (Locality Sensitive Hashing (LSH)). A family H is called (R0, cR0, p1, p2)-sensitive if,
for any two points x, y ∈ R^D and h chosen uniformly from H, the following holds:
- if Sim(x, y) ≥ R0 then Pr_H(h(x) = h(y)) ≥ p1
- if Sim(x, y) ≤ cR0 then Pr_H(h(x) = h(y)) ≤ p2
For approximate nearest neighbor search typically, p1 > p2 and c < 1 is needed. Note, c < 1 as
we are defining neighbors in terms of similarity. Basically, LSH trades off query time with extra
preprocessing time and space which can be accomplished off-line.
2
Fact 1 Given a family of (R0, cR0, p1, p2)-sensitive hash functions, one can construct a data structure
for c-NN with O(n^ρ log_{1/p2} n) query time and space O(n^{1+ρ}), where ρ = log(1/p1) / log(1/p2).
Minwise Hashing for Pairwise Resemblance One popular choice of LSH family of functions
associated with resemblance similarity is the Minwise Hashing family [5, 6, 13]. Minwise Hashing
applies an independent random permutation π : Ω → Ω on the given set S ⊆ Ω, and looks
at the minimum element under π, i.e. min(π(S)). Given two sets S1, S2 ⊆ Ω = {0, 1, 2, ..., D − 1},
it can be shown by an elementary probability argument that

Pr(min(π(S1)) = min(π(S2))) = |S1 ∩ S2| / |S1 ∪ S2| = R2way.   (1)
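Eq. (1) is easy to verify empirically with explicit permutations; the following self-contained Python sketch estimates the collision probability and compares it to the resemblance. It uses a small toy universe purely for illustration.

import random

def minhash(S, perm):
    # min-wise hash of set S under a permutation of the universe {0,...,D-1}
    return min(perm[x] for x in S)

D = 1000
S1, S2 = set(range(0, 60)), set(range(30, 90))     # resemblance = 30/90 = 1/3
rng = random.Random(0)
hits, trials = 0, 20000
for _ in range(trials):
    perm = list(range(D))
    rng.shuffle(perm)
    hits += minhash(S1, perm) == minhash(S2, perm)
print(hits / trials, len(S1 & S2) / len(S1 | S2))  # both close to 0.333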
The recent work on b-bit minwise hashing [20, 23] provides an improvement by storing only the
lowest b bits of the hashed values min(π(S1)) and min(π(S2)). [26] implemented the idea of building
hash tables for near neighbor search, by directly using the bits from b-bit minwise hashing.
3 3-way Similarity Search Formulation
Our focus will remain on binary vectors which can also be viewed as sets. We illustrate our method
using the 3-way resemblance similarity function Sim(S1, S2, S3) = |S1 ∩ S2 ∩ S3| / |S1 ∪ S2 ∪ S3|. The algorithm and
guarantees naturally extend to k-way resemblance. Given a size n collection C ⊆ 2^Ω of sets (or
binary vectors), we are particularly interested in the following three problems:
1. Given two query sets S1 and S2, find S3 ∈ C that maximizes Sim(S1, S2, S3).
2. Given a query set S1, find two sets S2, S3 ∈ C maximizing Sim(S1, S2, S3).
3. Find three sets S1, S2, S3 ∈ C maximizing Sim(S1, S2, S3).
The brute force way of enumerating all possibilities leads to the worst case query time of O(n),
O(n²) and O(n³) for problems 1, 2 and 3, respectively. In a hope to break this barrier, just like the
case of pairwise near neighbor search, we define the c-approximate (c < 1) versions of the above
three problems. As in the case of c-NN, we are given two parameters R0 > 0 and δ > 0. For each
of the following three problems, the guarantee is with probability at least 1 − δ:
1. (3-way c-Near Neighbor or 3-way c-NN) Given two query sets S1 and S2, if there
exists S3 ∈ C with Sim(S1, S2, S3) ≥ R0, then we report some S3* ∈ C so that
Sim(S1, S2, S3*) ≥ cR0.
2. (3-way c-Close Pair or 3-way c-CP) Given a query set S1, if there exists a pair of
sets S2, S3 ∈ C with Sim(S1, S2, S3) ≥ R0, then we report sets S2*, S3* ∈ C so that
Sim(S1, S2*, S3*) ≥ cR0.
3. (3-way c-Best Cluster or 3-way c-BC) If there exist sets S1, S2, S3 ∈ C with
Sim(S1, S2, S3) ≥ R0, then we report sets S1*, S2*, S3* ∈ C so that Sim(S1*, S2*, S3*) ≥ cR0.
4 Sub-linear Algorithm for 3-way c-NN
The basic philosophy behind sub-linear search is bucketing, which allows us to preprocess the dataset
in a fashion so that we can filter out many bad candidates without scanning all of them. LSH-based
techniques rely on randomized hash functions to create buckets that probabilistically filter bad candidates. This philosophy is not restricted for binary similarity functions and is much more general.
Here, we first focus on 3-way c-NN problem for binary data.
Theorem 1 For R3way c-NN one can construct a data structure with O(n^ρ log_{1/cR0} n) query time
and O(n^{1+ρ}) space, where ρ = 1 − log(1/c) / (log(1/c) + log(1/R0)).
The argument for 2-way resemblance can be naturally extended to k-way resemblance. Specifically,
given three sets S1, S2, S3 ⊆ Ω and an independent random permutation π : Ω → Ω, we have:

Pr(min(π(S1)) = min(π(S2)) = min(π(S3))) = R3way.   (2)

Eq. (2) shows that minwise hashing, although it operates on sets individually, preserves all 3-way
(in fact k-way) similarity structure of the data. The existence of such a hash function is the key
requirement behind the existence of efficient approximate search. For the pairwise case, the probability event was a simple hash collision, and the min-hash itself serves as the bucket index. In case
of 3-way (and higher) c-NN problem, we have to take care of a more complicated event to create an
indexing scheme. In particular, during preprocessing we need to create buckets for each individual
S3 , and while querying we need to associate the query sets S1 and S2 to the appropriate bucket. We
need extra mechanisms to manipulate these minwise hashes to obtain a bucketing scheme.
Proof of Theorem 1: We use two additional functions: f1 : Ω → N for manipulating min(π(S3))
and f2 : Ω × Ω → N for manipulating both min(π(S1)) and min(π(S2)). Let a ∈ N+ be such that
|Ω| = D < 10^a. We define f1(x) = (10^a + 1) × x and f2(x, y) = 10^a x + y. This choice ensures
that given query S1 and S2, for any S3 ∈ C, f1(min(π(S3))) = f2(min(π(S1)), min(π(S2))) holds
if and only if min(π(S1)) = min(π(S2)) = min(π(S3)), and thus we get a bucketing scheme.
To complete the proof, we introduce two integer parameters K and L. Define a new hash function
by concatenating K events. To be more precise, while preprocessing, for every element S3 ∈ C
create buckets g1(S3) = [f1(h1(S3)); ...; f1(hK(S3))] where hi is chosen uniformly from the minwise
hashing family. For given query points S1 and S2, retrieve only points in the bucket g2(S1, S2) =
[f2(h1(S1), h1(S2)); ...; f2(hK(S1), hK(S2))]. Repeat this process L times independently. Any
S3 ∈ C with Sim(S1, S2, S3) ≥ R0 is retrieved with probability at least 1 − (1 − R0^K)^L. Using
K = ⌈log n / log(1/cR0)⌉ and L = ⌈n^ρ log(1/δ)⌉, where ρ = 1 − log(1/c) / (log(1/c) + log(1/R0)),
the proof can be obtained using standard concentration arguments used to prove Fact 1, see [14, 13]. It is worth noting that
the probability guarantee parameter δ gets absorbed in the constants as log(1/δ). Note, the process is
stopped as soon as we find some element with R3way ≥ cR0.
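The bucketing construction of this proof can be written down directly; the sketch below builds a K-wise key on the preprocessing side (g1) and on the query side (g2), and checks that they collide exactly when all K min-hashes agree. Explicit permutation arrays are only feasible for a toy universe; a real implementation would use universal hashing instead.

import random

def minhashes(S, perms):
    return [min(p[x] for x in S) for p in perms]

def g1(S3, perms, a):
    # preprocessing-side key: f1(h_i(S3)) = (10^a + 1) * h_i(S3), concatenated
    return tuple((10**a + 1) * h for h in minhashes(S3, perms))

def g2(S1, S2, perms, a):
    # query-side key: f2(h_i(S1), h_i(S2)) = 10^a * h_i(S1) + h_i(S2), concatenated
    hs1, hs2 = minhashes(S1, perms), minhashes(S2, perms)
    return tuple(10**a * u + v for u, v in zip(hs1, hs2))

D, K, a = 100, 3, 3                 # universe size, K hashes, with D < 10^a
rng = random.Random(1)
perms = []
for _ in range(K):
    p = list(range(D))
    rng.shuffle(p)
    perms.append(p)

S = set(range(10, 40))
# keys collide iff min(pi(S1)) = min(pi(S2)) = min(pi(S3)) for every pi
assert g1(S, perms, a) == g2(S, S, perms, a)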
Theorem 1 can be easily extended to k-way resemblance with the same query time and space guarantees.
Note that k-way c-NN is at least as hard as k'-way c-NN for any k' ≤ k, because we can always
choose (k − k' + 1) identical query sets in k-way c-NN, and it reduces to the k'-way c-NN problem. So,
any improvement in R3way c-NN implies improvement in the classical min-hash LSH for Jaccard
similarity. The proposed analysis is thus tight in this sense.
The above observation makes it possible to also perform the traditional pairwise c-NN search using
the same hash tables deployed for 3-way c-NN. In the query phase we have an option, if we have
two different queries S1 , S2 , then we retrieve from bucket g2 (S1 , S2 ) and that is usual 3-way c-NN
search. If we are just interested in pairwise near neighbor search given one query S1 , then we will
look into bucket g2 (S1 , S1 ), and we know that the 3-way resemblance between S1 , S1 , S3 boils
down to the pairwise resemblance between S1 and S3 . So, the same hash tables can be used for
both purposes. This property generalizes, and hash tables created for k-way c-NN can be used
for any k'-way similarity search so long as k' ≤ k. The approximation guarantees still hold. This
flexibility makes the k-way c-NN bucketing scheme more advantageous than the pairwise scheme.
One of the peculiarities of LSH based techniques is that the query complexity exponent ρ < 1 is
dependent on the choice of the threshold R0 we are interested in and the value of c, which is the
approximation ratio that we will tolerate. Figure 1 plots ρ = 1 − log(1/c) / (log(1/c) + log(1/R0))
with respect to c, for selected R0 values from 0.01 to 0.99. For instance, if we are interested in
highly similar pairs, i.e. R0 ≈ 1, then we are looking at near O(log n) query complexity for the
c-NN problem as ρ → 0. On the other hand, for very low thresholds R0, there is not much hope of
time-saving because ρ is close to 1.

[Figure 1: ρ = 1 − log(1/c) / (log(1/c) + log(1/R0)), plotted as a function of c for R0 ranging from 0.01 to 0.99.]
5 Other Efficient k-way Similarities
We refer to the k-way similarities for which there exist sub-linear algorithms for c-NN search with
query and space complexity exactly as given in Theorem 1 as efficient. We have demonstrated the
existence of one such example of efficient similarities, the k-way resemblance. This leads
to a natural question: 'Are there more of them?'
[9] analyzed all the transformations on similarities that preserve existence of efficient LSH search. In
particular, they showed that if S is a similarity for which there exists an LSH family, then there also
exists an LSH family for any similarity which is a probability generating function (PGF) transformation
on S. A PGF transformation on S is defined as PGF(S) = Σ_{i=1}^∞ p_i S^i, where S ∈ [0, 1] and
p_i ≥ 0 satisfies Σ_{i=1}^∞ p_i = 1. A similar theorem can also be shown in the case of 3-way resemblance.
Theorem 2 Any PGF transformation on 3-way resemblance R3way is efficient.
Recall that in the proof of Theorem 1, we created hash assignments f1(min(π(S3))) and
f2(min(π(S1)), min(π(S2))), which lead to a bucketing scheme for the 3-way resemblance search,
where the collision event E = {f1(min(π(S3))) = f2(min(π(S1)), min(π(S2)))} happens with
probability Pr(E) = R3way. To prove the above Theorem 2, we will need to create hash events
having probability PGF(R3way) = Σ_{i=1}^∞ p_i (R3way)^i. Note that 0 ≤ PGF(R3way) ≤ 1. We will
make use of the following simple lemma.
Lemma 1 (R3way)^n is efficient for all n ∈ N.
Proof: Define new hash assignments g1^n(S3) = [f1(h1(S3)); ...; f1(hn(S3))] and g2^n(S1, S2) =
[f2(h1(S1), h1(S2)); ...; f2(hn(S1), hn(S2))]. The collision event g1^n(S3) = g2^n(S1, S2) has
probability (R3way)^n. We now use the pair <g1^n, g2^n> instead of <f1, f2> and obtain the same
guarantees, as in Theorem 1, for (R3way)^n as well.
Proof of Theorem 2: From Lemma 1, let <g1i, g2i> be the hash pair corresponding to (R3way)^i, as used in the lemma above. We sample one hash pair from the set {<g1i, g2i> : i ∈ N}, where the probability of sampling <g1i, g2i> is proportional to p_i. Note that p_i ≥ 0 and ∑_{i=1}^∞ p_i = 1, so the above sampling is valid. It is not difficult to see that the collision of the sampled hash pair has probability exactly ∑_{i=1}^∞ p_i (R3way)^i.
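The construction in Lemma 1 and Theorem 2 can be sanity-checked numerically. The sketch below is illustrative only (it simulates per-coordinate collisions rather than real minhashes): it samples the power n with probability p_n and checks that the overall collision probability matches ∑_n p_n R^n:

```python
import random

def sample_power(p, rng):
    """Draw n >= 1 with probability p[n-1]; p must sum to 1."""
    u, acc = rng.random(), 0.0
    for n, pn in enumerate(p, start=1):
        acc += pn
        if u <= acc:
            return n
    return len(p)

def collision_prob(r, p, trials=200_000, seed=0):
    """Estimate Pr[g1n(S3) == g2n(S1, S2)]: the powered hash <g1n, g2n>
    collides only if all n independent coordinates collide (prob r each)."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < r for _ in range(sample_power(p, rng)))
        for _ in range(trials)
    )
    return hits / trials

p, r = [0.5, 0.3, 0.2], 0.7
print(collision_prob(r, p))                         # Monte Carlo estimate
print(sum(pi * r**i for i, pi in enumerate(p, 1)))  # exact PGF value, ~0.566
```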
Theorem 2 can be naturally extended to k-way similarity for any k ≥ 2. Thus, we now have infinitely many k-way similarity functions admitting efficient sub-linear search. One that might be interesting, because of its radial-basis-kernel-like nature, is shown in the following corollary.
Corollary 1 e^(R^(k-way) − 1) is efficient.

Proof: Use the expansion of e^(R^(k-way)), normalized by e, to see that e^(R^(k-way) − 1) is a PGF of R^(k-way).
6 Fast Algorithms for 3-way c-CP and 3-way c-BC Problems

For the 3-way c-CP and 3-way c-BC problems, using a bucketing scheme with the minwise hashing family saves even more computation.
Theorem 3 For the R3way c-Close Pair Problem (or c-CP) one can construct a data structure with O(n^(2ρ) log_{1/cR0} n) query time and O(n^(1+2ρ)) space, where ρ = 1 − (log 1/c)/(log 1/c + log 1/R0).
Note that we can switch the roles of f1 and f2 in the proof of Theorem 1. We are thus left with a c-NN problem with search space O(n^2) (all pairs) instead of n. A bit of analysis, similar to Theorem 1, will show that this procedure achieves the required query time O(n^(2ρ) log_{1/cR0} n), but uses a lot more space, O(n^(2(1+ρ))), than stated in the above theorem. It turns out that there is a better way of doing c-CP that saves us space.
Proof of Theorem 3: We again start with constructing hash tables. For every element Sc ∈ C, we create a hash table and store Sc in bucket B(Sc) = [h1(Sc); h2(Sc); ...; hK(Sc)], where each hi is chosen uniformly from a minwise independent family of hash functions H. We create L such hash tables. For a query element Sq we look at all pairs in bucket B(Sq) = [h1(Sq); h2(Sq); ...; hK(Sq)] and repeat this for each of the L tables. Note that we do not form pairs of elements retrieved from different tables, as they do not satisfy Eq. (2). If there exists a pair S1, S2 ∈ C with Sim(Sq, S1, S2) ≥ R0, then using Eq. (2) we can see that we will find that pair in bucket B(Sq) with probability 1 − (1 − R0^K)^L.
Here, we cannot use the traditional choice of K and L, as we did in Theorem 1, because there are O(n^2) instead of O(n) possible pairs. We instead use K = ⌈(2 log n)/(log 1/cR0)⌉ and L = ⌈n^(2ρ) log(1/δ)⌉, with ρ = 1 − (log 1/c)/(log 1/c + log 1/R0). With this choice of K and L, the result follows. Note that the process is stopped as soon as we find pairs S1 and S2 with Sim(Sq, S1, S2) ≥ cR0. The key argument that saves space, from O(n^(2(1+ρ))) to O(n^(1+2ρ)), is that we hash the n points individually. Eq. (2) makes it clear that hashing all possible pairs is not needed when every point can be processed individually, and pairs formed within each bucket itself filter out most of the unnecessary combinations.
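A sketch of this space-saving indexing (helper names and the toy hash family are our assumptions): points are hashed individually into B(S), and candidate pairs are formed only within the query's bucket:

```python
from collections import defaultdict
from itertools import combinations

def resemblance3(s1, s2, s3):
    return len(s1 & s2 & s3) / len(s1 | s2 | s3)

def bucket(s, hashes):
    return tuple(min(h(x) for x in s) for h in hashes)   # B(S)

hashes = [lambda x, m=m: (37 * hash(x) + m) % (2**31 - 1) for m in (1, 2, 3)]
docs = {"A": {1, 2, 3}, "B": {1, 2, 4}, "C": {7, 8}}

table = defaultdict(list)                # one of the L tables
for name, s in docs.items():
    table[bucket(s, hashes)].append(name)

def close_pairs(query, threshold):
    for a, b in combinations(table.get(bucket(query, hashes), []), 2):
        if resemblance3(query, docs[a], docs[b]) >= threshold:
            yield a, b               # in practice: stop once a pair >= cR0

print(list(close_pairs({1, 2, 5}, 0.2)))   # -> [('A', 'B')]
```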
Theorem 4 For the R3way c-Best Cluster Problem (or c-BC) there exists an algorithm with running time O(n^(1+2ρ) log_{1/cR0} n), where ρ = 1 − (log 1/c)/(log 1/c + log 1/R0).
An argument similar to the one used in the proof of Theorem 3 leads to a running time of O(n^(1+3ρ) log_{1/cR0} n), since we would need L = O(n^(3ρ)) and we have to process all points at least once.
Proof of Theorem 4: Repeat the c-CP problem n times, once with every element of the collection C acting as the query. We use the same set of hash tables and hash functions every time. The preprocessing time is O(n^(1+2ρ) log_{1/cR0} n) evaluations of hash functions, and the total querying time is O(n × n^(2ρ) log_{1/cR0} n), which makes the total running time O(n^(1+2ρ) log_{1/cR0} n).
For the k-way c-BC problem, we can achieve O(n^(1+(k−1)ρ) log_{1/cR0} n) running time. If we are interested in very high-similarity clusters, with R0 → 1, then ρ → 0 and the running time is around O(n log n). This is a huge saving over the brute-force O(n^k). In most practical cases, especially in the big-data regime where we have enormous amounts of data, we can expect the k-way similarity of good clusters to be high, and finding them should be efficient. We can see that with increasing k, hashing techniques save more computation.
7 Experiments
In this section, we demonstrate the usability of 3-way and higher-order similarity search in two applications: (i) Google Sets, and (ii) improving the quality of similarity-search retrieval.
7.1 Google Sets: Generating Semantically Similar Words
Here, the task is to retrieve words which are "semantically" similar to a given set of query words. We collected 1.2 million random documents from Wikipedia and created a standard term-doc binary vector representation of each term present in the collected documents, after removing standard stop words and punctuation marks. More specifically, every word is represented as a 1.2-million-dimensional binary vector indicating its presence or absence in the corresponding document. The total number of terms (or words) was around 60,000 in this experiment.
Since there is no standard benchmark available for this task, we show qualitative evaluations. For querying, we used the following four pairs of semantically related words: (i) "jaguar" and "tiger"; (ii) "artificial" and "intelligence"; (iii) "milky" and "way"; (iv) "finger" and "lakes". Given the query words w1 and w2, we compare the results obtained by the following four methods (a sketch of the two resemblance-based rankings follows the list).
• Google Sets: We use Google's algorithm and report 5 words from Google spreadsheets [1]. This is Google's algorithm, which uses its own data.
• 3-way Resemblance (3-way): We use the 3-way resemblance |w1 ∩ w2 ∩ w| / |w1 ∪ w2 ∪ w| to rank every word w and report the top 5 words based on this ranking.
• Sum Resemblance (SR): Another intuitive method is to use the sum of pairwise resemblances |w1 ∩ w| / |w1 ∪ w| + |w2 ∩ w| / |w2 ∪ w| and report the top 5 words based on this ranking.
• Pairwise Intersection (PI): We first retrieve the top 100 words based on pairwise resemblance for each of w1 and w2 independently. We then report the words common to both. If there is no word in common, we do not report anything.
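A toy sketch of the two resemblance-based rankings (variable names are ours; each word is reduced to a small set of document ids instead of a 1.2M-dimensional vector):

```python
def res3(w1, w2, w):                 # 3-way: |w1 & w2 & w| / |w1 | w2 | w|
    return len(w1 & w2 & w) / len(w1 | w2 | w)

def sum_res(w1, w2, w):              # SR: sum of the two pairwise resemblances
    return len(w1 & w) / len(w1 | w) + len(w2 & w) / len(w2 | w)

vocab = {"leopard": {1, 2, 3}, "litre": {2, 9}, "dog": {3, 4}}
w1, w2 = {1, 2, 3, 4}, {1, 2, 5}     # doc sets for, say, "jaguar" and "tiger"
for score in (res3, sum_res):
    ranked = sorted(vocab, key=lambda w: score(w1, w2, vocab[w]), reverse=True)
    print(score.__name__, ranked)
```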
The results in Table 1 demonstrate that using 3-way resemblance retrieves reasonable candidates for these four queries. An interesting query is "finger" and "lakes". Finger Lakes is a region in upstate New York. Google could only relate it to New York, while 3-way resemblance could even retrieve the names of cities and lakes in the region. Also, for the query "milky" and "way", we can see some (perhaps) unrelated words like "dance" returned by Google. We do not see such random behavior with 3-way resemblance. Although we are not aware of the algorithm and the dataset used by Google, we can see that 3-way resemblance appears to be a right measure for this application.
The above results also illustrate the problem with using the sum-of-pairwise-similarity method. The similarity value with one of the words dominates the sum, and hence we see for the query "artificial" and "intelligence" that all the retrieved words are mostly related to the word "intelligence". The same is the case with the query "finger" and "lakes", as well as "jaguar" and "tiger". Note that "jaguar" is also a car brand. In addition, for all 4 queries, there was no common word in the top 100 words similar to each query word individually, so the PI method never returns anything.
Table 1: Top five words retrieved using various methods for different queries.

"JAGUAR" AND "TIGER"
GOOGLE       3-WAY     SR       PI
LION         LEOPARD   CAT      --
LEOPARD      CHEETAH   LEOPARD  --
CHEETAH      LION      LITRE    --
CAT          PANTHER   BMW      --
DOG          CAT       CHASIS   --

"MILKY" AND "WAY"
GOOGLE       3-WAY    SR       PI
DANCE        GALAXY   EVEN     --
STARS        STARS    ANOTHER  --
SPACE        EARTH    STILL    --
THE          LIGHT    BACK     --
UNIVERSE     SPACE    TIME     --

"ARTIFICIAL" AND "INTELLIGENCE"
GOOGLE       3-WAY       SR        PI
COMPUTER     COMPUTER    SECURITY  --
PROGRAMMING  SCIENCE     WEAPONS   --
INTELLIGENT  SECRET      SCIENCE   --
ROBOT        HUMAN       ATTACKS   --
ROBOTICS     TECHNOLOGY  HUMAN     --

"FINGER" AND "LAKES"
GOOGLE   3-WAY      SR          PI
NEW      SENECA     RIVERS      --
YORK     CAYUGA     FRESHWATER  --
NY       ERIE       FISH        --
PARK     ROCHESTER  STREAMS     --
CITY     IROQUOIS   FORESTED    --
We should note the importance of the denominator term in 3-way resemblance, without which frequent words would be blindly favored. The exciting contribution of this paper is that 3-way resemblance similarity search admits provable sub-linear guarantees, making it an ideal choice. On the other hand, no such provable guarantees are known for SR and other heuristic-based search methods.
7.2 Improving Retrieval Quality in Similarity Search
We also demonstrate how the retrieval quality of traditional similarity search can be boosted by utilizing more query candidates instead of just one. For the evaluations we choose two public datasets:
MNIST and WEBSPAM, which were used in a recent related paper [26] for near neighbor search
with binary data using b-bit minwise hashing [20, 23].
The two datasets reflect diversity both in terms of task and scale as encountered in practice. The MNIST dataset consists of handwritten digit samples. Each sample is an image of 28 × 28 pixels, yielding a 784-dimensional vector with an associated class label (digit 0-9). We binarize the data by setting all non-zeros to 1. We used the standard partition of MNIST, which consists of 10,000 samples in one set and 60,000 in the other. The WEBSPAM dataset, with 16,609,143 features, consists of sparse vector representations of emails labeled as spam or not. We randomly sampled 70,000 data points and partitioned them into two independent sets of size 35,000 each.
Table 2: Percentage of top candidates with the same label as the query, retrieved using various similarity criteria. Higher indicates better retrieval quality (best marked in bold).

TOP           MNIST                        WEBSPAM
              1      10     20     50     1      10     20     50
Pairwise      94.20  92.33  91.10  89.06  98.45  96.94  96.46  95.12
3-way NNbor   96.90  96.13  95.36  93.78  99.75  98.68  97.80  96.11
4-way NNbor   97.70  96.89  96.28  95.10  99.90  98.87  98.15  96.45
For evaluation, we need to generate potentially similar search query candidates for k-way search. It makes no sense to try to search for an object simultaneously similar to two very different objects. To generate such query candidates, we took one independent set of the data and partitioned it according to the class labels. We then ran a cheap k-means clustering on each class, and randomly sampled triplets <x1, x2, x3> from each cluster for evaluating 2-way, 3-way, and 4-way similarity search. For the MNIST dataset, the standard 10,000-sample test set was partitioned according to the labels into 10 sets, each partition was then clustered into 10 clusters, and we chose 10 triplets randomly from each cluster. In all we had 100 such triplets for each class, and thus 1000 query triplets overall. For WEBSPAM, which consists of only 2 classes, we chose one of the independent sets and performed the same procedure. We selected 100 triplets from each cluster. We thus have 1000 triplets from each class, making a total of 2000 query candidates.
The above procedures ensure that the elements in each triplet <x1, x2, x3> are not very far from each other and are of the same class label. For each triplet <x1, x2, x3>, we sort all the points x in the other independent set based on the following (a code sketch of these rankings follows the list):
• Pairwise: We only use the information in x1 and rank x based on the resemblance |x1 ∩ x| / |x1 ∪ x|.
• 3-way NN: We rank x based on the 3-way resemblance |x1 ∩ x2 ∩ x| / |x1 ∪ x2 ∪ x|.
• 4-way NN: We rank x based on the 4-way resemblance |x1 ∩ x2 ∩ x3 ∩ x| / |x1 ∪ x2 ∪ x3 ∪ x|.
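A sketch of these three rankings on binarized vectors (NumPy; random toy data stands in for the MNIST/WEBSPAM samples):

```python
import numpy as np

def kway_resemblance(queries, x):
    """|AND of all queries and x| / |OR of all queries and x| for 0/1 vectors."""
    stacked = np.vstack(queries + [x]).astype(bool)
    return stacked.all(axis=0).sum() / stacked.any(axis=0).sum()

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(100, 784))        # candidate pool
x1, x2, x3 = rng.integers(0, 2, size=(3, 784))    # one query triplet
for queries in ([x1], [x1, x2], [x1, x2, x3]):    # pairwise, 3-way NN, 4-way NN
    scores = [kway_resemblance(queries, x) for x in data]
    print(len(queries) + 1, "way, top 5:", np.argsort(scores)[::-1][:5])
```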
We look at the top 1, 10, 20, and 50 points based on the orderings described above. Since all the query triplets are of the same label, the percentage of top retrieved candidates having the same label as the query items is a natural metric to evaluate retrieval quality. These percentage values, accumulated over all the triplets, are summarized in Table 2.
We can see that the top candidates retrieved by 3-way resemblance similarity, using 2 query points, are of better quality than those from vanilla pairwise similarity search. Also, 4-way resemblance, with 3 query points, further improves the results compared to 3-way resemblance similarity search. This clearly demonstrates that multi-way resemblance similarity search is more desirable whenever we have more than one representative query in mind. Note that for MNIST, which contains 10 classes, the boost compared to pairwise retrieval is substantial. The results follow a consistent trend.
8 Future Work
While the work presented in this paper is promising for efficient 3-way and k-way similarity search
in binary high-dimensional data, there are numerous interesting and practical research problems we
can study as future work. In this section, we mention a few such examples.
One-permutation hashing. Traditionally, building hash tables for near neighbor search required many (e.g., 1000) independent hashes. This is both time- and energy-consuming, not only for building tables but also for processing previously unseen queries. One-permutation hashing [22] provides the hope of reducing many permutations to merely one. The version in [22], however, was not applicable to near neighbor search due to the existence of many empty bins (which offer no indexing capability). The most recent work [27] is able to fill the empty bins and works well for pairwise near neighbor search. It will be interesting to extend [27] to k-way search.
Non-binary sparse data. This paper focuses on minwise hashing for binary data. Various extensions to real-valued data are possible. For example, our results naturally apply to consistent weighted sampling [25, 15], which is one way to handle non-binary sparse data. The problem, however, is not solved if we are interested in similarities such as (normalized) k-way inner products, although the line of work on Conditional Random Sampling (CRS) [19, 18] may be promising. CRS works on non-binary sparse data by storing a bottom subset of nonzero entries after applying one permutation to the (real-valued) sparse data matrix. CRS performs very well for certain applications, but it does not work in our context because the bottom (nonzero) subsets are not properly aligned.
Building hash tables by directly using bits from minwise hashing. This would be a different approach from the way the hash tables are constructed in this paper. For example, [26] directly used the bits from b-bit minwise hashing [20, 23] to build hash tables and demonstrated significant advantages compared to sim-hash [8, 12] and spectral hashing [29]. It would be interesting to see the performance of this approach in k-way similarity search.
k-Way sign random projections. It would be very useful to develop theory for k-way sign random projections. For usual (real-valued) random projections, it is known that the volume (which is related to the determinant) is approximately preserved [24, 17]. We speculate that the collision probability of k-way sign random projections might also be a (monotonic) function of the determinant.
9 Conclusions
We formulate a new framework for k-way similarity search and obtain fast algorithms in the case of
k-way resemblance with provable worst-case approximation guarantees. We show some applications
of k-way resemblance search in practice and demonstrate the advantages over traditional search. Our
analysis involves the idea of probabilistic hashing and extends the well-known LSH family beyond
the pairwise case. We believe the idea of probabilistic hashing still has a long way to go.
Acknowledgement
The work is supported by NSF-III-1360971, NSF-Bigdata-1419210, ONR-N00014-13-1-0764, and
AFOSR-FA9550-13-1-0137. Ping Li thanks Kenneth Church for introducing Google Sets to him in
the summer of 2004 at Microsoft Research.
8
References
[1] http://www.howtogeek.com/howto/15799/how-to-use-autofill-on-a-google-docs-spreadsheet-quick-tips/.
[2] S. Agarwal, Jongwoo Lim, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie. Beyond pairwise clustering. In CVPR, 2005.
[3] Sihem Amer-Yahia, Senjuti Basu Roy, Ashish Chawlat, Gautam Das, and Cong Yu. Group recommendation: semantics and efficiency. Proc. VLDB Endow., 2(1):754-765, 2009.
[4] Christina Brandt, Thorsten Joachims, Yisong Yue, and Jacob Bank. Dynamic ranked retrieval. In WSDM, pages 247-256, 2011.
[5] Andrei Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21-29, Positano, Italy, 1997.
[6] Andrei Z. Broder, Moses Charikar, Alan M. Frieze, and Michael Mitzenmacher. Min-wise independent permutations (extended abstract). In STOC, pages 327-336, Dallas, TX, 1998.
[7] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055-1064, 1999.
[8] Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[9] Flavio Chierichetti and Ravi Kumar. LSH-preserving functions and their applications. In SODA, 2012.
[10] Dennis Fetterly, Mark Manasse, Marc Najork, and Janet L. Wiener. A large-scale study of the evolution of web pages. In WWW, pages 669-678, Budapest, Hungary, 2003.
[11] Jerome H. Friedman, F. Baskett, and L. Shustek. An algorithm for finding nearest neighbors. IEEE Transactions on Computers, 24:1000-1006, 1975.
[12] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of ACM, 42(6):1115-1145, 1995.
[13] Sariel Har-Peled, Piotr Indyk, and Rajeev Motwani. Approximate nearest neighbor: Towards removing the curse of dimensionality. Theory of Computing, 8(14):321-350, 2012.
[14] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604-613, Dallas, TX, 1998.
[15] Sergey Ioffe. Improved consistent sampling, weighted minhash and l1 sketching. In ICDM, 2010.
[16] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494-501, Amsterdam, Netherlands, 2007.
[17] Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Technical report, arXiv:1207.6083, 2013.
[18] Ping Li and Kenneth W. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics (preliminary results appeared in HLT/EMNLP 2005), 33(3):305-354, 2007.
[19] Ping Li, Kenneth W. Church, and Trevor J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873-880, Vancouver, Canada, 2006.
[20] Ping Li and Arnd Christian König. b-bit minwise hashing. In Proceedings of the 19th International Conference on World Wide Web, pages 671-680, Raleigh, NC, 2010.
[21] Ping Li, Arnd Christian König, and Wenhao Gui. b-bit minwise hashing for estimating three-way similarities. In NIPS, Vancouver, Canada, 2010.
[22] Ping Li, Art B. Owen, and Cun-Hui Zhang. One permutation hashing. In NIPS, Lake Tahoe, NV, 2012.
[23] Ping Li, Anshumali Shrivastava, and Arnd Christian König. b-bit minwise hashing in practice. In Internetware, Changsha, China, 2013.
[24] Avner Magen and Anastasios Zouzias. Near optimal dimensionality reductions that preserve volumes. In APPROX / RANDOM, pages 523-534, 2008.
[25] Mark Manasse, Frank McSherry, and Kunal Talwar. Consistent weighted sampling. Technical Report MSR-TR-2010-73, Microsoft Research, 2010.
[26] Anshumali Shrivastava and Ping Li. Fast near neighbor search in high-dimensional binary data. In ECML, Bristol, UK, 2012.
[27] Anshumali Shrivastava and Ping Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, Beijing, China, 2014.
[28] Roger Weber, Hans-Jörg Schek, and Stephen Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In VLDB, pages 194-205, 1998.
[29] Yair Weiss, Antonio Torralba, and Robert Fergus. Spectral hashing. In NIPS, Vancouver, Canada, 2008.
[30] D. Zhou, J. Huang, and B. Schölkopf. Beyond pairwise classification and clustering using hypergraphs. In NIPS, Vancouver, Canada, 2006.
4,660 | 522 | Induction of Multiscale Temporal Structure
Michael C. Mozer
Department of Computer Science &:
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Abstract
Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is
available at any instant. Although variants of back propagation can in
principle be used to find structure in sequences, in practice they are not
sufficiently powerful to discover arbitrary contingencies, especially those
spanning long temporal intervals or involving high order statistics. For
example, in designing a connectionist network for music composition, we
have encountered the problem that the net is able to learn musical structure that occurs locally in time (e.g., relations among notes within a musical phrase) but not structure that occurs over longer time periods (e.g., relations among phrases). To address this problem, we require a means of constructing a reduced description of the sequence that makes global
aspects more explicit or more readily detectable. I propose to achieve this
using hidden units that operate with different time constants. Simulation
experiments indicate that slower time-scale hidden units are able to pick
up global structure, structure that simply can not be learned by standard
back propagation.
Many patterns in the world are intrinsically temporal, e.g., speech, music, the unfolding of events. Recurrent neural net architectures have been devised to accommodate time-varying sequences. For example, the architecture shown in Figure 1
can map a sequence of inputs to a sequence of outputs. Learning structure in
temporally-extended sequences is a difficult computational problem because the input pattern may not contain all the task-relevant information at any instant. Thus,
Figure 1: A generic recurrent network architecture for processing input and output
sequences. Each box corresponds to a layer of units, each line to full connectivity
between layers.
the context layer must hold on to relevant aspects of the input history until a later
point in time at which they can be used.
In principle, variants of back propagation for recurrent networks (Rumelhart, Hinton, & Williams, 1986; Williams & Zipser, 1989) can discover an appropriate representation in the context layer for a particular task. In practice, however, back propagation is not sufficiently powerful to discover arbitrary contingencies, especially those that span long temporal intervals or that involve high order statistics (e.g., Mozer, 1989; Rohwer, 1990; Schmidhuber, 1991).
Let me present a simple situation where back propagation fails. It involves remembering an event over an interval of time. A variant of this task was first studied by Schmidhuber (1991). The input is a sequence of discrete symbols: A, B, C, D, ..., X, Y. The task is to predict the next symbol in the sequence. Each sequence begins with either an X or a Y (call this the trigger symbol) and is followed by a fixed sequence such as ABCDE, which in turn is followed by a second instance of the trigger symbol, i.e., XABCDEX or YABCDEY. To perform the prediction task, it is necessary to store the trigger symbol when it is first presented, and then to recall the same symbol five time steps later.

The number of symbols intervening between the two triggers (call this the gap) can be varied. By training different networks on different gaps, we can examine how difficult the learning task is as a function of the gap. To better control the experiments, all input sequences had the same length and consisted of either X or Y followed by ABCDEFGHIJK. The second instance of the trigger symbol was inserted at various points in the sequence. For example, XABCDXEFGHIJK represents a gap of 4, YABCDEFGHYIJK a gap of 8.
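A minimal sketch of the training-sequence generator (assuming the layout shown in the examples above; the function name is ours):

```python
def make_sequence(trigger, gap, filler="ABCDEFGHIJK"):
    """Trigger symbol, `gap` filler symbols, the trigger again, then the rest."""
    return trigger + filler[:gap] + trigger + filler[gap:]

for gap in (2, 4, 8):
    print(gap, make_sequence("X", gap), make_sequence("Y", gap))
# gap=4 -> XABCDXEFGHIJK; the prediction target at each step is the next symbol.
```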
Each training set consisted of two sequences, one with X and one with Y. Different networks were trained on different gaps. The network architecture consisted of one input and output unit per symbol, and ten context units. Twenty-five replications of each network were run with different random initial weights. If the training set was not learned within 10000 epochs, the replication was counted as a "failure." The primary result was that training sets with gaps of 4 or more could not be learned reliably, as shown in Table 1.
Table 1: Learning contingencies across gaps

gap   % failures   mean # epochs to learn
2     0            468
4     36           7406
6     92           9830
8     100          10000
10    100          10000
The results are surprisingly poor. My general impression is that back propagation is powerful enough to learn only structure that is fairly local in time. For instance, in earlier work on neural net music composition (Mozer & Soukup, 1991), we found that our network could master the rules of composition for notes within a musical phrase, but not rules operating at a more global level, i.e., rules for how phrases are interrelated.
The focus of the present work is on devising learning algorithms and architectures
for better handling temporal structure at more global scales, as well as multiscale
or hierarchical structure. This difficult problem has been identified and studied by
several other researchers, including Miyata and Burr (1990), Rohwer (1990), and
Schmidhuber (1991).
1 BUILDING A REDUCED DESCRIPTION
The basic idea behind my work involves building a reduced description (Hinton, 1988) of the sequence that makes global aspects more explicit or more readily detectable. The challenge of this approach is to devise an appropriate reduced description. I've experimented with a scheme that constructs a reduced description that is essentially a bird's eye view of the sequence, sacrificing a representation of individual elements for the overall contour of the sequence. Imagine a musical tape played at double the regular speed. Individual sounds are blended together and become indistinguishable. However, coarser time-scale events become more explicit, such as an ascending trend in pitch or a repeated progression of notes. Figure 2 illustrates the idea. The curve in the left graph, depicting a sequence of individual pitches, has been smoothed and compressed to produce the right graph. Mathematically, "smoothed and compressed" means that the waveform has been low-pass filtered and sampled at a lower rate. The result is a waveform in which the alternating upward and downward flow is unmistakable.
Multiple views of the sequence are realized using context units that operate with different time constants:

    c_i(t) = τ_i c_i(t−1) + (1 − τ_i) f(net_i(t))        (1)

where c_i(t) is the activity of context unit i at time t, net_i(t) is the net input to unit i at time t, including activity both from the input layer and the recurrent context connections, and τ_i is a time constant associated with each unit that has the range (0,1) and determines the responsiveness of the unit, i.e., the rate at which its activity changes.
Figure 2: (a) A sequence of musical notes. The vertical axis indicates pitch, the horizontal axis time. Each point corresponds to a particular note. (b) A smoothed, compact view of the sequence.
With τ_i = 0, the activation rule reduces to the standard one and the unit can sharply change its response based on a new input. With large τ_i, the unit is sluggish, holding on to much of its previous value and thereby averaging its response to the net input over time. At the extreme of τ_i = 1, the second term drops out and the unit's activity becomes fixed. Thus, a large τ_i smooths out the response of a context unit over time. Note, however, that what is smoothed is the activity of the context units, not the input itself as Figure 2 might suggest.
Smoothing is one property that distinguishes the waveform in Figure 2b from the original. The other property, compactness, is also achieved by a large τ_i, although somewhat indirectly. The key benefit of the compact waveform in Figure 2b is that it allows a longer period of time to be viewed in a single glance, thereby explicating contingencies occurring over this interval during learning. The context unit activation rule (Equation 1) permits this. To see why this is the case, consider the relation between the error derivative with respect to the context units at time t, ∂E/∂c(t), and the error back propagated to the previous step, t − 1. One contribution to ∂E/∂c_i(t − 1), from the first term in Equation 1, is

    ∂E/∂c_i(t − 1) = τ_i ∂E/∂c_i(t).        (2)

This means that when τ_i is large, most of the error signal in context unit i at time t is carried back to time t − 1. Intuitively, just as the activation of units with large τ_i changes slowly forward in time, the error propagated back through these units changes slowly too. Thus, the back propagated error signal can make contact with points further back in time, facilitating the learning of more global structure in the input sequence.
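The consequence of Equation 2 is easy to see numerically: the first-term contribution to the error signal shrinks by a factor τ_i per step, so after n steps it scales as τ_i^n (a sketch of that one contribution only; the full gradient also flows through the recurrent weights):

```python
def carried_error(grad, tau, n_steps):
    for _ in range(n_steps):
        grad = tau * grad          # dE/dc(t-1) <- tau * dE/dc(t), Eq. 2
    return grad

for tau in (0.5, 0.9, 0.99):
    print(tau, carried_error(1.0, tau, 10))
# larger tau preserves more of the error signal across a 10-step interval
```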
Time constants have been incorporated into the activation rules of other connectionist architectures (Jordan, 1987; McClelland, 1979; Mozer, 1989; Pearlmutter,
1989; Pineda, 1987). However, none of this work has exploited time constants to
control the temporal responsivity of individual units.
2 LEARNING AABA PHRASE PATTERNS
A simple simulation illustrates the benefits of temporal reduced descriptions. I generated pseudo-musical phrases consisting of five notes in ascending chromatic order, e.g., F#2 G2 G#2 A2 A#2 or C4 C#4 D4 D#4 E4, where the first pitch was selected at random.¹ Pairs of phrases, call them A and B, were concatenated to form an AABA pattern, terminated by a special END marker. The complete melody then consisted of 21 elements, four phrases of five notes followed by the END marker, an example of which is:
Two versions of CONCERT were tested, each with 35 context units. In the standard version, all 35 units had τ = 0; in the reduced description or RD version, 30 had τ = 0 and 5 had τ = 0.8. The training set consisted of 200 examples and the test set another 100 examples. Ten replications of each simulation were run for 300 passes through the training set. See Mozer and Soukup (1991) for details of the network architecture and note representations.
Because of the way the sequences are organized, certain pitches can be predicted based on local structure, whereas other pitches require a more global memory of the sequence. In particular, the second through fifth pitches within a phrase can be predicted based on knowledge of the immediately preceding pitch. To predict the first pitch in the repeated A phrases and to predict the END marker, more global information is necessary. Thus, the analysis was split to distinguish between pitches requiring only local structure and pitches requiring more global structure. As Table 2 shows, performance requiring global structure was significantly better for the RD version (F(1,9)=179.8, p < .001), but there was only a marginally reliable difference for performance involving local structure (F(1,9)=3.82, p=.08). The global structure can be further broken down into prediction of the END marker and prediction of the first pitch of the repeated A phrases. In both cases, the performance improvement for the RD version was significant: 88.0% versus 52.9% for the end of sequence (F(1,9)=220, p < .001); 69.4% versus 61.2% for the first pitch (F(1,9)=77.6, p < .001).
Experiments with different values of τ in the range .7-.95 yielded qualitatively similar results, as did experiments in which the A and B phrases were formed by random walks in the key of C major.
¹One need not understand the musical notation to make sense of this example. Simply consider each note to be a unique symbol in a set of symbols having a fixed ordering. The example is framed in terms of music because my original work involved music composition.
Table 2: Performance on AABA phrases

structure   standard version   RD version
local       96.7%              97.3%
global      58.4%              75.6%
3 DETECTING CONTINGENCIES ACROSS GAPS, REVISITED
I now return to the prediction task involving sequences containing two X's or Y's separated by a stream of intervening symbols. A reduced description network had no problem learning the contingency across wide gaps. Table 3 compares the results presented earlier for a standard net with ten context units against the results for an RD net having six standard context units (τ = 0) and four units having identical nonzero τ in the range of .75-.95. More on the choice of τ below, but first observe that the reduced description net had a 100% success rate. Indeed, it had no difficulty with much wider gaps: I tested gaps of up to 25 symbols. The number of epochs to learn scales roughly linearly with the gap.
When the task was modified slightly such that the intervening symbols were randomly selected from the set {A, B, C, D}, the RD net still had no difficulty with the prediction task.
The bad news here is that the choice of τ can be important. In the results reported above, τ was selected to optimize performance. In general, a larger τ was needed to span larger gaps. For small gaps, performance was insensitive to the particular τ chosen. However, the larger the temporal gap that had to be spanned, the smaller the range of τ values that gave acceptable results. This would appear to be a serious limitation of the approach. However, there are several potential solutions.
1. One might try using back propagation to train the time constants directly. This does not work particularly well on the problems I've examined, apparently because the path to an appropriate τ is fraught with local optima. Using gradient descent to fine-tune τ, once it's in the right neighborhood, is somewhat more successful.
2. One might include a complete range of τ values in the context layer. It is not difficult to determine a rough correspondence between the choice of τ and the temporal interval to which a unit is optimally tuned. If sufficient units are used to span a range of intervals, the network should perform well. The downside, of course, is that this gives the network an excess of weight parameters with which it could potentially overfit the training data. However, because the different τ correspond to different temporal scales, there is much less freedom to abuse the weights here than, say, in a situation where additional hidden units are added to a feedforward network.
Table 3: Learning contingencies across gaps (revisited)

      standard net                 reduced description net
gap   % failures   mean # epochs   % failures   mean # epochs
                   to learn                     to learn
2     0            468             0            328
4     36           7406            0            584
6     92           9830            0            992
8     100          10000           0            1312
10    100          10000           0            1630
Figure 3: A sketch of the Schmidhuber (1991) architecture: a lower net cascaded into an upper net.
3. One might dynamically adjust τ as a sequence is presented, based on external criteria. In Section 5, I discuss one such criterion.
4 MUSIC COMPOSITION
I have used music composition as a domain for testing and evaluating different approaches to learning multiscale temporal structure. In previous work (Mozer & Soukup, 1991), we designed a sequential prediction network, called CONCERT, that learns to reproduce a set of pieces in a particular musical style. CONCERT also learns structural regularities of the musical style, and can be used to compose new pieces in the same style. CONCERT was trained on a set of Bach pieces and a set of traditional European folk melodies. The compositions it produced were reasonably pleasant, but were lacking in global coherence. The compositions tended to wander randomly with little direction, modulating haphazardly from major to minor keys, flip-flopping from the style of a march to that of a minuet. I attribute these problems to the fact that CONCERT had learned only local temporal structure.
I have recently trained CONCERT on a third set of examples, waltzes, and have included context units that operate with a range of time constants. There is a consensus among listeners that the new compositions are more coherent. I am presently running more controlled simulations using the same musical training set and versions of CONCERT with and without reduced descriptions, and am attempting to quantify CONCERT's abilities at various temporal scales.
5 A HYBRID APPROACH
Schmidhuber (1991; this volume) has proposed an alternative approach to learning
multiscale temporal structure in sequences. His approach, the chunking architecture,
basically involves two (or more) sequential prediction networks cascaded together
(Figure 3). The lower net receives each input and attempts to predict the next
input. When it fails to predict reliably, the next input is passed to the upper net.
Thus, once the lower net has been trained to predict local temporal structure, such
structure is removed from the input to the upper net. This simplifies the task of
learning global structure in the upper net.
Schmidhuber's approach has some serious limitations, as does the approach I've described. We have thus merged the two in a scheme that incorporates the strengths of each approach (Schmidhuber, Prelinger, Mozer, Blumenthal, & Mathis, in preparation). The architecture is the same as depicted in Figure 3, except that all units in the upper net have associated with them a time constant τ_u, and the prediction error in the lower net determines τ_u. In effect, this allows the upper net to kick in only when the lower net fails to predict. This avoids the problem of selecting time constants, from which my approach suffers. It also avoids the drawback of Schmidhuber's approach that yes-or-no decisions must be made about whether the lower net was successful. Initial simulation experiments indicate robust performance of the hybrid algorithm.
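A speculative sketch of the coupling (the error-to-τ mapping and its gain are our assumptions; the cited work was still in preparation): when the lower net predicts well, τ_u stays near 1 and the upper net effectively ignores its input; when prediction fails, τ_u drops and the upper net kicks in:

```python
import numpy as np

def upper_tau(lower_error, gain=5.0):
    """Map lower-net prediction error in [0, inf) to tau_u in (0, 1]."""
    return float(np.exp(-gain * lower_error))

for err in (0.0, 0.1, 0.5, 1.0):
    print(err, round(upper_tau(err), 3))
# err=0 -> tau_u=1 (upper net holds state); large err -> tau_u ~ 0 (responsive)
```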
Acknowledgements
This research was supported by NSF Presidential Young Investigator award IRI-9058450, grant 90-21 from the James S. McDonnell Foundation, and DEC external research grant 1250. Thanks to Jürgen Schmidhuber and Paul Smolensky for helpful comments regarding this work, and to Darren Hardy for technical assistance.
References
Hinton, G. E. (1988). Representing part-whole hierarchies in connectionist networks. Proceedings of the Eighth Annual Conference of the Cognitive Science Society.

Jordan, M. I. (1987). Attractor dynamics and parallelism in a connectionist sequential machine. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society (pp. 531-546). Hillsdale, NJ: Erlbaum.

McClelland, J. L. (1979). On the time relations of mental processes: An examination of systems of processes in cascade. Psychological Review, 86, 287-330.

Miyata, Y., & Burr, D. (1990). Hierarchical recurrent networks for learning musical structure. Unpublished manuscript.

Mozer, M. C. (1989). A focused back-propagation algorithm for temporal pattern recognition. Complex Systems, 3, 349-381.

Mozer, M. C., & Soukup, T. (1991). CONCERT: A connectionist composer of erudite tunes. In R. P. Lippmann, J. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems 3 (pp. 789-796). San Mateo, CA: Morgan Kaufmann.

Pearlmutter, B. A. (1989). Learning state space trajectories in recurrent neural networks. Neural Computation, 1, 263-269.

Pineda, F. (1987). Generalization of back propagation to recurrent neural networks. Physical Review Letters, 59, 2229-2232.

Rohwer, R. (1990). The 'moving targets' training algorithm. In D. S. Touretzky (Ed.), Advances in neural information processing systems 2 (pp. 558-565). San Mateo, CA: Morgan Kaufmann.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume I: Foundations (pp. 318-362). Cambridge, MA: MIT Press/Bradford Books.

Schmidhuber, J. (1991). Neural sequence chunkers (Report FKI-148-91). Munich, Germany: Technische Universität München, Institut für Informatik.

Williams, R. J., & Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1, 270-280.
4,661 | 5,220 | Learning Generative Models with the
Up-Propagation Algorithm
Jong-Hoon Oh and H. Sebastian Seung
Bell Labs, Lucent Technologies
Murray Hill, NJ 07974
{jhoh|seung}@bell-labs.com
Abstract
Up-propagation is an algorithm for inverting and learning neural network
generative models. Sensory input is processed by inverting a model that
generates patterns from hidden variables using top-down connections.
The inversion process is iterative, utilizing a negative feedback loop that
depends on an error signal propagated by bottom-up connections. The
error signal is also used to learn the generative model from examples.
The algorithm is benchmarked against principal component analysis in
experiments on images of handwritten digits.
In his doctrine of unconscious inference, Helmholtz argued that perceptions are
formed by the interaction of bottom-up sensory data with top-down expectations.
According to one interpretation of this doctrine, perception is a procedure of sequential hypothesis testing. We propose a new algorithm, called up-propagation, that
realizes this interpretation in layered neural networks. It uses top-down connections
to generate hypotheses, and bottom-up connections to revise them.
It is important to understand the difference between up-propagation and its ancestor, the backpropagation algorithm [1]. Backpropagation is a learning algorithm
for recognition models. As shown in Figure 1a, bottom-up connections recognize
patterns, while top-down connections propagate an error signal that is used to learn
the recognition model.
In contrast, up-propagation is an algorithm for inverting and learning generative
models, as shown in Figure 1b. Top-down connections generate patterns from a
set of hidden variables. Sensory input is processed by inverting the generative
model, recovering hidden variables that could have generated the sensory data.
This operation is called either pattern recognition or pattern analysis, depending
on the meaning of the hidden variables. Inversion of the generative model is done
iteratively, through a negative feedback loop driven by an error signal from the
bottom-up connections. The error signal is also used for learning the connections
[Figure 1 diagrams omitted: (a) a recognition network with bottom-up recognition connections and top-down error propagation; (b) a generative network with top-down generation connections and bottom-up error propagation]
Figure 1: Bottom-up and top-down processing in neural networks. (a) Backprop network (b) Up-prop network
in the generative model.
Up-propagation can be regarded as a generalization of principal component analysis (PCA) and its variants like Conic [2] to nonlinear, multilayer generative models. Our experiments with images of handwritten digits demonstrate that up-propagation learns a global, nonlinear model of a pattern manifold. With its global parametrization, this model is distinct from locally linear models of pattern manifolds [3].
1 INVERTING THE GENERATIVE MODEL
The generative model is a network of L + 1 layers of neurons, with layer 0 at the bottom and layer L at the top. The vectors x_t, t = 0, ..., L, are the activations of the layers. The pattern x_0 is generated from the hidden variables x_L by a top-down pass through the network,

    x_{t-1} = f(W_t x_t),   t = L, ..., 1.    (1)

The nonlinear function f acts on vectors component by component. The matrix W_t contains the synaptic connections from the neurons in layer t to the neurons in layer t - 1. A bias term b_{t-1} can be added to the argument of f, but is omitted here. It is convenient to define auxiliary variables x̂_t by x_t = f(x̂_t). In terms of these auxiliary variables, the top-down pass is written as

    x̂_{t-1} = W_t f(x̂_t).    (2)
Given a sensory input d, the top-down generative model can be inverted by finding hidden variables x_L that generate a pattern x_0 matching d. If some of the hidden variables represent the identity of the pattern, the inversion operation is called recognition. Alternatively, the hidden variables may just be a more compact representation of the pattern, in which case the operation is called analysis or encoding. The inversion is done iteratively, as described below.

In the following, the operator ⊗ denotes elementwise multiplication of two vectors, so that z = x ⊗ y means z_i = x_i y_i for all i. The bottom-up pass starts with the mismatch between the sensory data d and the generated pattern x_0,

    δ_0 = f′(x̂_0) ⊗ (d − x_0),    (3)

which is propagated upwards by

    δ_t = f′(x̂_t) ⊗ (W_t^T δ_{t−1}).    (4)

When the error signal reaches the top of the network, it is used to update the hidden variables x̂_L,

    Δx̂_L ∝ W_L^T δ_{L−1}.    (5)

This update closes the negative feedback loop. Then a new pattern x_0 is generated by a top-down pass (1), and the process starts over again.
This iterative inversion process performs gradient descent on the cost function (1/2)|d − x_0|², subject to the constraints (1). This can be proved using the chain rule, as in the traditional derivation of the backprop algorithm. Another method of proof is to add the equations (1) as constraints, using Lagrange multipliers,

    (1/2)|d − f(x̂_0)|² + ∑_{t=1}^{L} δ_{t−1}^T [ x̂_{t−1} − W_t f(x̂_t) ].    (6)

This derivation has the advantage that the bottom-up activations δ_t have an interpretation as Lagrange multipliers.
Inverting the generative model by negative feedback can be interpreted as a process
of sequential hypothesis testing. The top-down connections generate a hypothesis
about the sensory data. The bottom-up connections propagate an error signal
that is the disagreement between the hypothesis and data. When the error signal
reaches the top, it is used to generate a revised hypothesis, and the generate-testrevise cycle starts all over again. Perception is the convergence of this feedback loop
to the hypothesis that is most consistent with the data.
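To make the generate-test-revise loop concrete, here is a minimal NumPy sketch of the inversion defined by Eqs. (1)-(5). It is our illustration, not code from the paper: the logistic nonlinearity matches the experiments below, but the step size eta and the iteration count are placeholder choices.

```python
import numpy as np

def f(x):
    # logistic nonlinearity, applied component by component
    return 1.0 / (1.0 + np.exp(-x))

def df(x):
    s = f(x)
    return s * (1.0 - s)

def invert(W, d, x_hat_L, n_iters=100, eta=0.02):
    """Invert a top-down generative model by negative feedback.

    W       : list [W_1, ..., W_L]; W[t-1] connects layer t to layer t-1
    d       : sensory input to be matched
    x_hat_L : initial pre-activations of the top layer
    """
    L = len(W)
    for _ in range(n_iters):
        # top-down pass, Eqs. (1)-(2)
        x_hat = [None] * (L + 1)
        x_hat[L] = x_hat_L
        for t in range(L, 0, -1):
            x_hat[t - 1] = W[t - 1] @ f(x_hat[t])
        # bottom-up pass, Eqs. (3)-(4)
        delta = df(x_hat[0]) * (d - f(x_hat[0]))
        for t in range(1, L):
            delta = df(x_hat[t]) * (W[t - 1].T @ delta)
        # revise the hypothesis at the top, Eq. (5)
        x_hat_L = x_hat_L + eta * (W[L - 1].T @ delta)
    return x_hat_L
```

For a one-step model (L = 1) the inner loop over t is empty and the update reduces to Δx̂_1 ∝ W_1^T δ_0.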
2 LEARNING THE GENERATIVE MODEL
The synaptic weights determine the types of patterns that the network is able to generate. To learn from examples, the weights are adjusted to improve the network's generation ability. A suitable cost function for learning is the reconstruction error (1/2)|d − x_0|², averaged over an ensemble of examples. Online gradient descent with respect to the synaptic weights is performed by a learning rule of the form

    ΔW_t ∝ δ_{t−1} x_t^T.    (7)

The same error signal that was used to invert the generative model is also used to learn it.

The batch form of the optimization is compactly written using matrix notation. To do this, we define the matrices D, X_0, ..., X_L whose columns are the vectors d, x_0, ..., x_L corresponding to examples in the training set. Then computation and learning are the minimization of

    min_{W, X_L} (1/2)|D − X_0|²    (8)

subject to the constraint that

    X_{t−1} = f(W_t X_t),   t = 1, ..., L.    (9)

In other words, up-prop is a dual minimization with respect to hidden variables and synaptic connections. Computation minimizes with respect to the hidden variables X_L, and learning minimizes with respect to the synaptic weight matrices W_t.

From the geometric viewpoint, up-propagation is an algorithm for learning pattern manifolds. The top-down pass (1) maps an n_L-dimensional vector x_L to an n_0-dimensional vector x_0. Thus the generative model parametrizes a continuous n_L-dimensional manifold embedded in n_0-dimensional space. Inverting the generative model is equivalent to finding the point on the manifold that is closest to the sensory data. Learning the generative model is equivalent to deforming the manifold to fit a database of examples.
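Continuing the sketch above, one online learning step inverts the model for a single example and then applies Eq. (7) to every layer; the learning rate and inner iteration counts are again illustrative, and invert, f, and df are the functions defined earlier.

```python
def learn_step(W, d, x_hat_L, lr=0.1, n_inner=20, eta=0.02):
    """One online up-propagation step: invert, then update each W_t by Eq. (7)."""
    L = len(W)
    x_hat_L = invert(W, d, x_hat_L, n_iters=n_inner, eta=eta)
    # recompute activations and error signals at the settled state
    x_hat = [None] * (L + 1)
    x_hat[L] = x_hat_L
    for t in range(L, 0, -1):
        x_hat[t - 1] = W[t - 1] @ f(x_hat[t])
    deltas = [df(x_hat[0]) * (d - f(x_hat[0]))]                      # Eq. (3)
    for t in range(1, L):
        deltas.append(df(x_hat[t]) * (W[t - 1].T @ deltas[-1]))      # Eq. (4)
    for t in range(1, L + 1):
        # Eq. (7): Delta W_t proportional to delta_{t-1} x_t^T
        W[t - 1] += lr * np.outer(deltas[t - 1], f(x_hat[t]))
    return W, x_hat_L
```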
Figure 2: One-step generation of handwritten digits. Weights of the 256–9 up-prop network (left) versus the top 9 principal components (right)
[Figure 3 panels omitted: a target image, and the time course of x_0 and x_1 at t = 0, 1, 10, 100, 1000]
Figure 3: Iterative inversion of a generative model as sequential hypothesis testing. A fully trained 256–9 network is inverted to generate an approximation to a target image that was not previously seen during training. The stepsize of the dynamics was fixed to 0.02 to show the time evolution of the system.
Pattern manifolds are relevant when patterns vary continuously. For example, the
variations in the image of a three-dimensional object produced by changes of viewpoint are clearly continuous, and can be described by the action of a transformation
group on a prototype pattern. Other types of variation, such as deformations in
the shape of the object, are also continuous, even though they may not be readily
describable in terms of transformation groups. Continuous variability is clearly not
confined to visual images, but is present in many other domains. Many existing techniques for modeling pattern manifolds, such as PCA or PCA mixtures [3], depend on linear or locally linear approximations to the manifold. Up-prop constructs
a globally parametrized, nonlinear manifold.
3 ONE-STEP GENERATION
The simplest generative model of the form (1) has just one step (L = 1). Up-propagation minimizes the cost function

    min_{X_1, W_1} (1/2)|D − f(W_1 X_1)|².    (10)

For a linear f this reduces to PCA, as the cost function is minimized when the vectors in the weight matrix W_1 span the same space as the top principal components of the data D.

Up-propagation with a one-step generative model was applied to the USPS database [4], which consists of example images of handwritten digits. Each of the 7291 training and 2007 testing images was normalized to a 16×16 grid with pixel intensities in the range [0, 1]. A separate model was trained for each digit class. The nonlinearity f was the logistic function.
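A batch version of this one-step optimization can be written as plain gradient descent on Eq. (10); the sketch below, reusing f and df from the inversion sketch above, substitutes a fixed step size for the adaptive Armijo rule described below, and all sizes and rates are placeholders.

```python
def one_step_upprop(D, n_hidden, n_iters=500, lr=0.5, seed=0):
    """Batch sketch of Eq. (10): min over X1, W1 of 0.5 * ||D - f(W1 @ X1)||^2.

    D : (n0, n_examples) matrix whose columns are training patterns in [0, 1].
    """
    rng = np.random.default_rng(seed)
    n0, n_ex = D.shape
    W1 = 0.01 * rng.standard_normal((n0, n_hidden))
    X1 = 0.01 * rng.standard_normal((n_hidden, n_ex))
    for _ in range(n_iters):
        pre = W1 @ X1
        R = (D - f(pre)) * df(pre)           # elementwise error signal
        W1 = W1 + (lr / n_ex) * (R @ X1.T)   # descend in the weights
        X1 = X1 + (lr / n_ex) * (W1.T @ R)   # descend in the hidden variables
    return W1, X1
```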
[Figure 4 plot omitted: reconstruction error (0–0.025) versus number of vectors (5–40), with curves for PCA and up-prop on training and test sets]
Figure 4: Reconstruction error for 256–n networks as a function of n. The error of PCA with n principal components is shown for comparison. The up-prop network performs better on both the training set and test set.
Batch optimization of (10) was done by gradient descent with adaptive stepsize control by the Armijo rule [5]. In most cases, the stepsize varied between 10^{-1} and 10^{-3}, and the optimization usually converged within 10^3 epochs. Figure 2 shows the weights of a 256–9 network that was trained on 731 different images of the digit "two." Each of the 9 subimages is the weight vector of a top-level neuron. The top 9 principal components are also shown for comparison.
Figure 3 shows the time evolution of a fully trained 256–9 network during iterative inversion. The error signal from the bottom layer x_0 quickly activates the top layer x_1. At early times, all the top layer neurons have similar activation levels. However, the neurons with weight vectors more relevant to the target image become dominant soon, and the other neurons are deactivated.

The reconstruction error (10) of the up-prop network was much better than that of PCA. We trained 10 different up-prop networks, one for each digit, and these were compared with 10 corresponding PCA models. Figure 4 shows the average squared error per pixel that resulted. A 256–12 up-prop network performed as well as PCA with 36 principal components.
4 TWO-STEP GENERATION
Two-step generation is a richer model, and is learned using the cost function

    min_{X_2, W_1, W_2} (1/2)|D − f(W_1 f(W_2 X_2))|².    (11)

Note that a nonlinear f is necessary for two-step generation to have more representational power than one-step generation. When this two-step generative model was trained on the USPS database, the weight vectors in W_1 learned features resembling principal components. The activities of the X_1 neurons tended to be close to their saturated values of one or zero.

The reconstruction error of the two-step generative network was compared to that of the one-step generative network with the same number of neurons in the top layer.
Our 256–25–9 network outperformed our 256–9 network on the test set, though both networks used nine hidden variables to encode the sensory data. However, the learning time was much longer, and iterative inversion was also slow. While up-prop for one-step generation converged within several hundred epochs, up-prop for two-step generation often needed several thousand epochs or more to converge. We often found long plateaus in the learning curves, which may be due to the permutation symmetry of the network architecture [6].
5 DISCUSSION
To summarize the experiments discussed above, we constructed separate generative
models, one for each digit class. Relative to PCA, each generative model was
superior at encoding digits from its corresponding class. This enhanced generative
ability was due to the use of nonlinearity.
We also tried to use these generative models for recognition. A test digit was
classified by inverting all the generative models, and then choosing the one best able to generate the digit. Our tests of this recognition method were not encouraging. The nonlinearity of up-propagation tended to improve the generation ability of models corresponding to all classes, not just the model corresponding to the correct classification of the digit. Therefore the improved encoding performance did not
immediately transfer to improved recognition.
We have not tried the experiment of training one generative model on all the digits,
with some of the hidden variables representing the digit class. In this case, pattern
recognition could be done by inverting a single generative model. It remains to be
seen whether this method will work.
Iterative inversion was surprisingly fast, as shown in Figure 3, and gave solutions
of surprisingly good quality in spite of potential problems with local minima, as
shown in Figure 4. In spite of these virtues, iterative inversion is still a problematic
method. We do not know whether it will perform well if a single generative model
is trained on multiple pattern classes. Furthermore, it seems a rather indirect way
of doing pattern recognition.
The up-prop generative model is deterministic, which handicaps its modeling of pattern variability. The model can be dressed up in probabilistic language by defining a prior distribution P(x_L) for the hidden variables, and adding Gaussian noise to x_0 to generate the sensory data. However, this probabilistic appearance is only skin deep, as the sequence of transformations from x_L to x_0 is still completely deterministic. In a truly probabilistic model, like a belief network, every layer of the generation process adds variability.

In conclusion, we briefly compare up-propagation to other algorithms and architectures.
1. In backpropagation [1], only the recognition model is explicit. Iterative gradient descent methods can be used to invert the recognition model, though this implicit generative model generally appears to be inaccurate [7, 8].
2. Up-propagation has an explicit generative model, and recognition is done by inverting the generative model. The accuracy of this implicit recognition model has not yet been tested empirically. Iterative inversion of generative models has also been proposed for linear networks [2, 9] and probabilistic belief networks [10].
3. In the autoencoder [11] and the Helmholtz machine [12], there are separate models of recognition and generation, both explicit. Recognition uses only bottom-up connections, and generation uses only top-down connections. Neither process is iterative. Both processes can operate completely independently; they only interact during learning.
4. In attractor neural networks [13, 14] and the Boltzmann machine [15], recognition and generation are performed by the same recurrent network. Each process is iterative, and each utilizes both bottom-up and top-down connections. Computation in these networks is chiefly based on positive, rather than negative, feedback.

Backprop and up-prop suffer from a lack of balance in their treatment of bottom-up and top-down processing. The autoencoder and the Helmholtz machine suffer from an inability to use iterative dynamics for computation. Attractor neural networks lack these deficiencies, so there is incentive to solve the problem of learning attractors [14].
This work was supported by Bell Laboratories. JHO was partly supported by the
Research Professorship of the LG-Yonam Foundation. We are grateful to Dan Lee
for helpful discussions.
References
[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by back-propagating errors. Nature, 323:533–536, 1986.
[2] D. D. Lee and H. S. Seung. Unsupervised learning by convex and conic coding. Adv. Neural Info. Proc. Syst., 9:515–521, 1997.
[3] G. E. Hinton, P. Dayan, and M. Revow. Modeling the manifolds of images of handwritten digits. IEEE Trans. Neural Networks, 8:65–74, 1997.
[4] Y. LeCun et al. Learning algorithms for classification: a comparison on handwritten digit recognition. In J.-H. Oh, C. Kwon, and S. Cho, editors, Neural networks: the statistical mechanics perspective, pages 261–276, Singapore, 1995. World Scientific.
[5] D. P. Bertsekas. Nonlinear programming. Athena Scientific, Belmont, MA, 1995.
[6] K. Kang, J.-H. Oh, C. Kwon, and Y. Park. Generalization in a two-layer neural network. Phys. Rev., E48:4805–4809, 1993.
[7] J. Kindermann and A. Linden. Inversion of neural networks by gradient descent. Parallel Computing, 14:277–286, 1990.
[8] Y. Lee. Handwritten digit recognition using K nearest-neighbor, radial-basis function, and backpropagation neural networks. Neural Comput., 3:441–449, 1991.
[9] R. P. N. Rao and D. H. Ballard. Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Comput., 9:721–63, 1997.
[10] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. J. Artif. Intell. Res., 4:61–76, 1996.
[11] G. W. Cottrell, P. Munro, and D. Zipser. Image compression by back propagation: an example of extensional programming. In N. E. Sharkey, editor, Models of cognition: a review of cognitive science. Ablex, Norwood, NJ, 1989.
[12] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268:1158–1161, 1995.
[13] H. S. Seung. Pattern analysis and synthesis in attractor neural networks. In K.-Y. M. Wong, I. King, and D.-Y. Yeung, editors, Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, Singapore, 1997. Springer-Verlag.
[14] H. S. Seung. Learning continuous attractors in recurrent networks. Adv. Neural Info. Proc. Syst., 11, 1998.
[15] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147–169, 1985.
A Neural Network Based
Head Tracking System
D. D. Lee and H. S. Seung
Bell Laboratories, Lucent Technologies
700 Mountain Ave.
Murray Hill, NJ 07974
fddlee|seungg@bell-labs.com
Abstract
We have constructed an inexpensive, video-based, motorized tracking system that learns to track a head. It uses real time graphical
user inputs or an auxiliary infrared detector as supervisory signals
to train a convolutional neural network. The inputs to the neural
network consist of normalized luminance and chrominance images
and motion information from frame dierences. Subsampled images are also used to provide scale invariance. During the online
training phase, the neural network rapidly adjusts the input weights
depending upon the reliability of the dierent channels in the surrounding environment. This quick adaptation allows the system to
robustly track a head even when other objects are moving within
a cluttered background.
1 Introduction
With the proliferation of inexpensive multimedia computers and peripheral equipment, video conferencing finally appears ready to enter the mainstream. But personal video conferencing systems typically use a stationary camera, tying the user to a fixed location much as a corded telephone tethers one to the telephone jack. A simple solution to this problem is to use a motorized video camera that can track a specific person as he or she moves about. However, this presents the difficulty of
having to continually control the movements of the camera while one is communicating. In this paper, we present a prototype, neural network based system that
learns the characteristics of a person's head in real time and automatically tracks
it around the room, thus alleviating the user of much of this burden.
The camera movements in this video conferencing system closely resemble the movements of human eyes. The task of the biological oculomotor system is to direct
[Figure 1 diagram omitted: color CCD camera (eye), directional microphones (ears), PC with frame grabber, sound card and serial port, servo motors (oculomotor muscles), IR detector, GUI mouse, and reinforcement signals]
Figure 1: Schematic hardware diagram of Marvin, our head tracking system.
"interesting" parts of the visual world onto the small, high resolution areas of the
retinas. For this task, complex neural circuits have evolved in order to control the
eye movements. Some examples include the saccadic and smooth pursuit systems
that allow the eyes to rapidly acquire and track moving objects [1, 2]. Similarly,
an active video conferencing system also needs to determine the appropriate face
or feature to follow in the video stream. Then the camera must track that person's
movements over time and transmit the image to the other party.
In the past few years, the problem of face detection in images and video has attracted
considerable attention [3, 4, 5]. Rule-based methods have concentrated on looking
for generic characteristics of faces such as oval shapes or skin hue. Since these types
of algorithms are fairly simple to implement, they are commonly found in real-time
systems [6, 7]. But because other objects have similar shapes and colors as faces,
these systems can also be easily fooled. A potentially more robust approach is to
use a convolutional neural network to learn the appropriate features of a face [8, 9].
Because most such implementations learn in batch mode, they are beset by the
diculty of constructing a large enough training set of labelled images with and
without faces. In this paper, we present a video based system that uses online
supervisory signals to train a convolutional neural network. Fast online adaptation
of the network's weights allows the neural network to learn how to discriminate an
individual head at the beginning of a session. This enables the system to robustly
track the head even in the presence of other moving objects.
2 Hardware Implementation
Figure 1 shows a schematic of the tracking system we have constructed and have
named \Marvin" because of an early version's similarity to a cartoon character.
Marvin's eye consists of a small CCD camera with a 65° field of view that is attached
to a motorized platform. Two RC servo motors give Marvin the ability to rapidly
pan and tilt over a wide range of viewing angles, with a typical maximum velocity of
300 deg/sec. The system also includes two microphones or ears that give Marvin the
ability to locate auditory cues. Integrating auditory information with visual inputs
allows the system to nd salient objects better than with either sound or video
alone. But these proceedings will focus exclusively on how a visual representation
is learned.
Figure 2: Preprocessing of the video stream. Luminance, chromatic and motion
information are separately represented in the Y, U, V, D channels at multiple resolutions.
Marvin is able to learn to track a visual target using two different sources of supervisory signals. One method of training uses a small 38 kHz modulated infrared light emitter (≈900 nm) attached to the object that needs to be tracked. A heat filter renders the infrared light invisible to Marvin's video camera so that the
system does not merely learn to follow this signal. But mounted next to the CCD
camera and moving with it is a small infrared detector with a collimating lens that
signals when the object is located within a narrow angular cone in the direction
that the camera is pointing. This reinforcement signal can then be used to train
the weights of the neural network. Another more natural way for the system to
learn occurs in an actual video conferencing scenario. In this situation, a user who
is actively watching the video stream has manual override control of the camera
using graphical user interface inputs. Whenever the user repositions the camera to
a new location, the neural network would then adjust its weights to track whatever
is in the center portion of the image.
Since Marvin was built from readily available commercial components, the cost of
the system not including the PC was under $500. The input devices and motors
are all controlled by the computer using custom-written Matlab drivers that are
available for both Microsoft Windows and the Linux operating system. The image
processing computations as well as the graphical user interface are then easily implemented as simple Matlab operations and function calls. The following section
describes the head tracking neural network in more detail.
3 Neural Network Architecture
Marvin uses a convolutional neural network architecture to detect a head within its field of view. The video stream from the CCD camera is first digitized with a video capture board into a series of raw 120×160 RGB images as shown in Figure 2. Each RGB color image is then converted into its YUV representation, and a difference (D)
Figure 3: Neural network uses a convolutional architecture to integrate the different
sources of information and determine the maximally salient object.
image is also computed as the absolute value of the difference from the preceding
frame. Of the four resulting images, the Y component represents the luminance or
grayscale information while the U and V channels contain the chromatic or color
information. Motion information in the video stream is captured by the D image
where moving objects appear highlighted.
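A minimal NumPy sketch of this preprocessing is given below; the RGB-to-YUV coefficients are the standard BT.601 values (the paper does not state which it used), and 2×2 block averaging stands in for whatever subsampling the original system performed.

```python
import numpy as np

def yuvd_pyramid(rgb, prev_y, n_scales=3):
    """Decompose an RGB frame into Y, U, V, D channels at several resolutions."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance
    chans = {'Y': y,
             'U': 0.492 * (b - y),                  # chrominance
             'V': 0.877 * (r - y),
             'D': np.abs(y - prev_y)}               # motion: frame difference
    pyramid = []
    for _ in range(n_scales):
        pyramid.append(chans)
        chans = {k: v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))
                 for k, v in chans.items()}         # 2x subsampling
    return pyramid, y                               # keep y for the next frame
```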
The four YUVD channels are then subsampled successively to yield representations
at lower and lower resolutions. The resulting "image pyramids" allow the network to achieve recognition invariance across many different scales without having to train separate neural networks for each resolution. Instead, a single neural network with the same set of weights is run with the different resolutions as inputs, and the maximally active resolution and position is selected.

Marvin uses the convolutional neural network architecture shown in Figure 3 to locate salient objects at the different resolutions. The YUVD input images are filtered with separate 16×16 kernels, denoted by W_Y, W_U, W_V, and W_D respectively. This results in the filtered images Ȳ_s, Ū_s, V̄_s, D̄_s:

    Ā_s(i, j) = (W_A ∗ A_s)(i, j) = ∑_{i′,j′} W_A(i′, j′) A_s(i + i′, j + j′)    (1)
where s denotes the scale resolution of the inputs, and A is any of the Y, U, V, or D channels. These filtered images represent a single layer of hidden units in the neural network. These hidden units are then combined to form the saliency map X^s in the following manner:

    X^s(i, j) = c_Y g[Ȳ_s(i, j)] + c_U g[Ū_s(i, j)] + c_V g[V̄_s(i, j)] + c_D g[D̄_s(i, j)] + c_0.    (2)

Since g(x) = tanh(x) is sigmoidal, the saliency X^s is computed as a nonlinear, pixel-by-pixel combination of the hidden units. The scalar variables c_Y, c_U, c_V, and c_D represent the relative importance of the different luminance, chromatic, and motion channels in the overall saliency of an object.
With the bias term c_0, the function g[X^s(i, j)] may then be thought of as the relative probability that a head exists at location (i, j) at input resolution s. The final output of the neural network is then determined in a competitive manner by finding the location (i_m, j_m) and scale s_m of the best possible match:

    g[X_m] = g[X^{s_m}(i_m, j_m)] = max_{i,j,s} g[X^s(i, j)].    (3)
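The sketch below evaluates Eqs. (1)-(3) over the channel dictionaries produced by the preprocessing sketch above; reading Eq. (1) as 2-D cross-correlation is our interpretation, and the kernels and weights would come from the training described in the next section.

```python
import numpy as np
from scipy.signal import correlate2d

def saliency_map(channels, kernels, c, c0):
    """Saliency map X^s of Eq. (2) for one scale of YUVD channels."""
    X = c0
    for name in ('Y', 'U', 'V', 'D'):
        filtered = correlate2d(channels[name], kernels[name], mode='valid')  # Eq. (1)
        X = X + c[name] * np.tanh(filtered)                                  # Eq. (2)
    return X

def most_salient(pyramid, kernels, c, c0):
    """Winner-take-all over positions and scales, Eq. (3)."""
    best_val, best_loc = -np.inf, None
    for s, channels in enumerate(pyramid):
        X = saliency_map(channels, kernels, c, c0)
        i, j = np.unravel_index(np.argmax(X), X.shape)
        if np.tanh(X[i, j]) > best_val:
            best_val, best_loc = np.tanh(X[i, j]), (s, i, j)
    return best_val, best_loc
```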
After processing the visual inputs in this manner, saccadic camera movements are
generated in order to keep the maximally salient object located near the center of
the field of view.
4 Training and Results
Either GUI user inputs or the infrared detector may be used as a supervisory signal to train the kernels W_A and scalar weights c_A of the neural network. The neural network is updated when the maximally salient location of the neural network (i_m, j_m) does not correspond to the desired object's true position (i_n, j_n) as identified by the external supervisory signal. A cost function proportional to the sum of squared error terms at the maximal location and new desired location is used for training:

    e_m² = |g_m − g[X^{s_m}(i_m, j_m)]|²,    (4)
    e_n² = min_s |g_n − g[X^s(i_n, j_n)]|².    (5)

In the following examples, the constants g_m = 0 and g_n = 1 are used. The gradients of Eqs. 4–5 are then backpropagated through the convolutional network [8, 10], resulting in the following update rules:

    Δc_A = e_m g′(X_m) g[A(i_m, j_m)] + e_n g′(X_n) g[A(i_n, j_n)],    (6)
    ΔW_A = e_m g′(X_m) g′(A_m) c_A A_m + e_n g′(X_n) g′(A_n) c_A A_n.    (7)
(7)
In typical batch learning applications of neural networks, the learning rate is set
to be some small positive number. However in this case, it is desirable for Marvin
to learn to track a head in a new environment as quickly as possible. Thus, rapid
adaptation of the weights during even a single training example is needed. A natural
way of doing this is to use a fairly large learning rate ( = 0:1), and to repeatedly
apply the update rules in Eqs. 6{7 until the calculated maximally salient location
is very close to the actual desired position.
An example of how quickly Marvin is able to learn to track one of the authors
as he moved around his oce is given by the learning curve in Figure 4. The
weights were rst initialized to small random values, and Marvin was corrected in
an online fashion using mouse inputs to look at the author's head. After only a few
seconds of training with a processing time loop of around 200 ms, the system was
able to locate the head to within four pixels of accuracy, as determined by hand
labelling the video data afterwards. As saccadic eye movements were initiated at
[Figure 4 plot omitted: pixel error (0–20) versus frame number (0–50)]
Figure 4: Fast online adaptation of the neural network. The head location error in pixels in a 120×160 image is plotted as a function of frame number (5 frames/sec).
the times indicated by the arrows in Fig. 4, new environments of the office were sampled and an occasional large error is seen. However, over time as these errors are corrected, the neural network learns to robustly discriminate the head from the office surroundings.
5 Discussion
Figure 5 shows the inputs and weights of the network after a minute of training as
the author walked around his office. The kernels necessarily appear a little smeared
because they are invariant to slight changes in head position, rotation, and scale.
But they clearly depict the dark hair, facial features, and skin color of the head. The
relative weighting (c_Y, c_U, c_V > c_D) of the different input channels shows that the
luminance and color information are the most reliable for tracking the head. This
is probably because it is relatively difficult to distinguish in the frame difference
images the head from other moving body parts.
We are currently considering more complicated neural network architectures for
combining the different input streams to give better tracking performance. However, this example shows how a simple convolutional architecture can be used to automatically integrate different visual cues to robustly track a head. Moreover, by
using fast online adaptation of the neural network weights, the system is able to
learn without needing large hand-labelled training sets and is also able to rapidly
accommodate changing environments. Future improvements in hardware and neural network architectures and algorithms are still necessary, however, in order to
approach human speeds and performance in this type of sensory processing and
recognition task.
We acknowledge the support of Bell Laboratories, Lucent Technologies. We also
thank M. Fee, A. Jacquin, S. Levinson, E. Petajan, G. Pingali, and E. Rietman for
helpful discussions.
[Figure 5 panels omitted: Y, U, V, D input channels and learned kernels, with channel weights c_Y = 0.15, c_U = 0.12, c_V = 0.11, c_D = 0.08]
Figure 5: Example showing the inputs and weights used in tracking a head. The head position as calculated by the neural network is marked with a box.
References
[1] Horiuchi, TK, Bishofberger, B & Koch, C (1994). An analog VLSI saccadic eye movement system. Advances in Neural Information Processing Systems 6, 582–589.
[2] Rao, RPN, Zelinsky, GJ, Hayhoe, MM & Ballard, DH (1996). Modeling saccadic targeting in visual search. Advances in Neural Information Processing Systems 8, 830–836.
[3] Sung, KK & Poggio, T (1994). Example-based learning for view-based human face detection. Proc. 23rd Image Understanding Workshop, 843–850.
[4] Eleftheriadis, A & Jacquin, A (1995). Automatic face location detection and tracking for model-assisted coding of video teleconferencing sequences at low bit-rates. Signal Processing: Image Communication 7, 231.
[5] Petajan, E & Graf, HP (1996). Robust face feature analysis for automatic speechreading and character animation. Proc. 2nd Int. Conf. Automatic Face and Gesture Recognition, 357–362.
[6] Darrell, T, Maes, P, Blumberg, B & Pentland, AP (1994). A novel environment for situated vision and behavior. Proc. IEEE Workshop for Visual Behaviors, 68–72.
[7] Yang, J & Waibel, A (1996). A real-time face tracker. Proc. 3rd IEEE Workshop on Application of Computer Vision, 142–147.
[8] Nowlan, SJ & Platt, JC (1995). A convolutional neural network hand tracker. Advances in Neural Information Processing Systems 7, 901–908.
[9] Rowley, HA, Baluja, S & Kanade, T (1996). Human face detection in visual scenes. Advances in Neural Information Processing Systems 8, 875–881.
[10] Le Cun, Y, et al. (1990). Handwritten digit recognition with a back propagation network. Advances in Neural Information Processing Systems 2, 396–404.
Top Rank Optimization in Linear Time
Nan Li¹, Rong Jin², Zhi-Hua Zhou¹
¹ National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
² Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824
{lin,zhouzh}@lamda.nju.edu.cn  rongjin@cse.msu.edu
Abstract
Bipartite ranking aims to learn a real-valued ranking function that orders positive
instances before negative instances. Recent efforts of bipartite ranking are focused on optimizing ranking accuracy at the top of the ranked list. Most existing
approaches are either to optimize task specific metrics or to extend the rank loss by
emphasizing more on the error associated with the top ranked instances, leading to
a high computational cost that is super-linear in the number of training instances.
We propose a highly efficient approach, titled TopPush, for optimizing accuracy
at the top that has computational complexity linear in the number of training instances. We present a novel analysis that bounds the generalization error for the
top ranked instances for the proposed approach. Empirical study shows that the
proposed approach is highly competitive to the state-of-the-art approaches and is
10-100 times faster.
1 Introduction
Bipartite ranking aims to learn a real-valued ranking function that places positive instances above
negative instances. It has attracted much attention because of its applications in several areas such
as information retrieval and recommender systems [32, 25]. Many ranking methods have been
developed for bipartite ranking, and most of them are essentially based on pairwise ranking. These
algorithms reduce the ranking problem into a binary classification problem by treating each positivenegative instance pair as a single object to be classified [16, 12, 5, 39, 38, 33, 1, 3]. Since the number
of instance pairs can grow quadratically in the number of training instances, one limitation of these
methods is their high computational costs, making them not scalable to large datasets.
Considering that for applications such as document retrieval and recommender systems, only the top
ranked instances will be examined by users, there has been a growing interest in learning ranking
functions that perform especially well at the top of the ranked list [7, 39, 38, 33, 1, 3, 27, 40]. Most
of these approaches can be categorized into two groups. The first group maximizes the ranking
accuracy at the top of the ranked list by optimizing task specific metrics [17, 21, 23, 40], such
as average precision (AP) [42], NDCG [39] and partial AUC [27, 28]. The main limitation of
these methods is that they often result in non-convex optimization problems that are difficult to
solve efficiently. Structural SVM [37] addresses this issue by translating the non-convexity into
an exponential number of constraints. It can still be computationally challenging because it usually
requires to search for the most violated constraint at each iteration of optimization. In addition, these
methods are statistically inconsistent [36, 21], leading to suboptimal solutions. The second group of
methods are based on pairwise ranking. They design special convex loss functions that place more
penalties on the ranking errors related to the top ranked instances [38, 33, 1]. Since these methods
are based on pairwise ranking, their computational costs are usually proportional to the number of
positive-negative instance pairs, making them unattractive for large datasets.
In this paper, we address the computational challenge of bipartite ranking by designing a ranking
algorithm, named TopPush, that can efficiently optimize the ranking accuracy at the top. The key
feature of the proposed TopPush algorithm is that its time complexity is only linear in the number
of training instances. This is in contrast to most existing methods for bipartite ranking whose computational costs depend on the number of instance pairs. Moreover, we develop novel analysis for
bipartite ranking. One deficiency of the existing theoretical studies [33, 1] on bipartite ranking is that
they try to bound the probability for a positive instance to be ranked before any negative instance,
leading to relatively pessimistic bounds. We overcome this limitation by bounding the probability
of ranking a positive instance before most negative instances, and show that TopPush is effective in
placing positive instances at the top of a ranked list. Extensive empirical study shows that TopPush
is computationally more efficient than most ranking algorithms, and yields comparable performance
as the state-of-the-art approaches that maximize the ranking accuracy at the top.
The rest of this paper is organized as follows. Section 2 introduces the preliminaries of bipartite
ranking, and addresses the difference between AUC optimization and maximizing accuracy at the
top. Section 3 presents the proposed TopPush algorithm and its key theoretical properties. Section 4
summarizes the empirical study, and Section 5 concludes this work with future directions.
2 Bipartite Ranking: AUC vs. Accuracy at the Top
Let X = {x ∈ R^d : ‖x‖ ≤ 1} be the instance space. Let S = S_+ ∪ S_− be a set of training instances, where S_+ = {x_i^+ ∈ X}_{i=1}^m and S_− = {x_i^− ∈ X}_{i=1}^n include m positive instances and n negative instances independently sampled from distributions P_+ and P_−, respectively. The goal of bipartite ranking is to learn a ranking function f : X → R that is likely to place a positive
instance before most negative ones. In the literature, bipartite ranking has found applications in many
domains [32, 25], and its theoretical properties have been examined by several studies [2, 6, 20, 26].
AUC is a commonly used evaluation metric for bipartite ranking [15, 9]. By exploring its equivalence to Wilcoxon-Mann-Whitney statistic [15], many ranking algorithms have been developed to
optimize AUC by minimizing the ranking loss defined as
    L_rank(f; S) = (1/(mn)) ∑_{i=1}^m ∑_{j=1}^n I( f(x_i^+) ≤ f(x_j^−) ),    (1)
where I(·) is the indicator function. Other than a few special loss functions (e.g., exponential and
logistic loss) [33, 20], most of these methods need to enumerate all the positive-negative instance
pairs, making them unattractive for large datasets. Various methods have been developed to address
this computational challenge [43, 13].
Recently, there is a growing interest on optimizing ranking accuracy at the top [7, 3]. Maximizing
AUC is not suitable for this goal as indicated by the analysis in [7]. To address this challenge,
we propose to maximize the number of positive instances that are ranked before the first negative
instance, which is known as positives at the top [33, 1, 3]. We can translate this objective into the
minimization of the following loss
    L(f; S) = (1/m) ∑_{i=1}^m I( f(x_i^+) ≤ max_{1≤j≤n} f(x_j^−) ),    (2)
which computes the fraction of positive instances ranked below the top-ranked negative instance. By
minimizing the loss in (2), we essentially push negative instances away from the top of the ranked
list, leading to more positive ones placed at the top. We note that (2) is fundamentally different from
AUC optimization as AUC does not focus on the ranking accuracy at the top. More discussion about
the relationship between (1) and (2) can be found in the longer version of the paper [22].
To design practical learning algorithms, we replace the indicator function in (2) with its convex
surrogate, leading to the following loss function
    L_ℓ(f; S) = (1/m) ∑_{i=1}^m ℓ( max_{1≤j≤n} f(x_j^−) − f(x_i^+) ),    (3)

where ℓ(·) is a convex loss function that is non-decreasing¹ and differentiable. Examples of such loss functions include the truncated quadratic loss ℓ(z) = [1 + z]_+², the exponential loss ℓ(z) = e^z, or the logistic loss ℓ(z) = log(1 + e^z). In the discussion below, we restrict ourselves to the truncated quadratic loss, though most of our analysis applies to others.

¹ In this paper, we let ℓ(z) be non-decreasing for the simplicity of formulating the dual problem.
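For a linear scorer and the truncated quadratic loss, Eq. (3) can be evaluated in a single O((m + n)d) pass; a small sketch:

```python
import numpy as np

def toppush_loss(w, X_pos, X_neg):
    """Eq. (3) with the truncated quadratic loss l(z) = [1 + z]_+^2
    and a linear ranking function f(x) = w^T x."""
    top_neg = np.max(X_neg @ w)          # max_j f(x_j^-): one number, O(nd)
    z = top_neg - X_pos @ w              # one margin per positive, O(md)
    return np.mean(np.maximum(0.0, 1.0 + z) ** 2)
```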
It is easy to verify that the loss L_ℓ(f; S) in (3) is equivalent to the loss used in InfinitePush [1] (a special case of P-norm Push [33]), i.e.,

    L′_ℓ(f; S) = max_{1≤j≤n} (1/m) ∑_{i=1}^m ℓ( f(x_j^−) − f(x_i^+) ).    (4)
The apparent advantage of employing L_ℓ(f; S) instead of L′_ℓ(f; S) is that it only needs to evaluate on m positive-negative instance pairs, whereas the latter needs to enumerate all the mn instance pairs. As a result, the number of dual variables induced by L_ℓ(f; S) is n + m, linear in the number of training instances, which is significantly smaller than mn, the number of dual variables induced by L′_ℓ(f; S) [1, 31]. It is this difference that allows the proposed algorithm to achieve a computational complexity linear in the number of training instances, and therefore to be more efficient than most state-of-the-art algorithms for bipartite ranking.
3 TopPush for Optimizing Top Accuracy
We first present a learning algorithm to minimize the loss function in (3), and then the computational
complexity and performance guarantee for the proposed algorithm.
3.1 Dual Formulation
We consider a linear ranking function², i.e., f(x) = w^T x, where w ∈ R^d is the weight vector to be learned. As a result, the learning problem is given by the following optimization problem

    min_w (λ/2)‖w‖² + (1/m) ∑_{i=1}^m ℓ( max_{1≤j≤n} w^T x_j^− − w^T x_i^+ ),    (5)
where λ > 0 is a regularization parameter. Directly minimizing the objective in (5) can be challenging because of the max operator in the loss function. We address this challenge by developing a dual formulation for (5). Specifically, given a convex and differentiable function ℓ(z), we can rewrite it in its convex conjugate form as ℓ(z) = max_{α∈Ω} ( αz − ℓ_*(α) ), where ℓ_*(α) is the convex conjugate of ℓ(z) and Ω is the domain of the dual variable [4]. For example, the convex conjugate of the truncated quadratic loss is ℓ_*(α) = −α + α²/4 with Ω = R_+. We note that the dual form has been widely used to improve computational efficiency [35] and connect different styles of learning algorithms [19]. Here we exploit it to overcome the difficulty caused by the max operator. The dual form of (5) is given in the following theorem, whose detailed proof can be found in the longer version [22].
Theorem 1. Define X^+ = (x_1^+, ..., x_m^+)^T and X^− = (x_1^−, ..., x_n^−)^T; the dual problem of (5) is

    min_{(α,β)∈Ξ} g(α, β) = (1/(2λm)) ‖α^T X^+ − β^T X^−‖² + ∑_{i=1}^m ℓ_*(α_i)    (6)

where α and β are dual variables, and the domain Ξ is defined as

    Ξ = { α ∈ R_+^m, β ∈ R_+^n : 1_m^T α = 1_n^T β }.

Let α* and β* be the optimal solution to the dual problem (6). Then, the optimal solution w* to the primal problem in (5) is given by

    w* = (1/(λm)) ( α*^T X^+ − β*^T X^− ).    (7)
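Recovering the primal solution from Eq. (7) is a single matrix-vector step; a sketch with dense NumPy arrays:

```python
import numpy as np

def primal_from_dual(alpha, beta, X_pos, X_neg, lam):
    """Eq. (7): w* = (alpha^T X^+ - beta^T X^-) / (lam * m)."""
    m = X_pos.shape[0]
    return (alpha @ X_pos - beta @ X_neg) / (lam * m)
```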
Remark. The key feature of the dual problem in (6) is that the number of dual variables is m + n, leading to a linear time ranking algorithm. This is in contrast to the InfinitePush algorithm in [1], which introduces mn dual variables and incurs a higher computational cost. In addition, the objective function in (6) is smooth if the convex conjugate ℓ_*(·) is smooth, which is true for many common loss functions (e.g., truncated quadratic loss and logistic loss). It is well known in the optimization literature [4] that an O(1/T²) convergence rate can be achieved if the objective function is smooth, where T is the number of iterations; this also helps in designing an efficient learning algorithm.
² Nonlinear functions can be trained by kernel methods, and the Nyström method and random Fourier features can transform the kernelized problem into a linear one. See [41] for more discussions.
3.2 Linear Time Bipartite Ranking
According to Theorem 1, to learn a ranking function f(w), it is sufficient to learn the dual variables α and β by solving the problem in (6). For this purpose, we adopt the accelerated gradient method due to its light computation per iteration, and refer to the obtained algorithm as TopPush. Specifically, we choose Nesterov's method [30, 29], which achieves an optimal convergence rate O(1/T²) for smooth objective functions. One of the key features of Nesterov's method is that it maintains two sequences of solutions: {(α_k, β_k)} and {(s_k^α, s_k^β)}, where the sequence of auxiliary solutions {(s_k^α, s_k^β)} is introduced to exploit the smoothness of the objective to achieve a faster convergence rate. Algorithm 1 shows the key steps³ of Nesterov's method for solving the problem in (6), where the gradients of the objective function g(α, β) can be efficiently computed as

    ∇_α g(α, β) = X^+ ν^T / (λm) + ℓ′_*(α),    ∇_β g(α, β) = −X^− ν^T / (λm),    (8)

where ν = α^T X^+ − β^T X^− and ℓ′_*(α) is the derivative of ℓ_*(α).
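Both gradients in Eq. (8) share the d-dimensional vector ν, so each iteration costs O((m + n)d); a sketch for the truncated quadratic loss, whose conjugate derivative is ℓ′_*(α) = α/2 − 1:

```python
def dual_gradients(alpha, beta, X_pos, X_neg, lam):
    """Gradients of g(alpha, beta) in Eq. (8), truncated quadratic loss."""
    m = X_pos.shape[0]
    nu = alpha @ X_pos - beta @ X_neg          # shared d-dimensional vector
    g_alpha = (X_pos @ nu) / (lam * m) + (alpha / 2.0 - 1.0)
    g_beta = -(X_neg @ nu) / (lam * m)
    return g_alpha, g_beta
```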
Algorithm 1 The TopPush Algorithm
Input: X^+ ∈ R^{m×d}, X^− ∈ R^{n×d}, λ, ε
Output: w
1: initialize α_1 = α_0 = 0_m, β_1 = β_0 = 0_n, and let t_{−1} = 0, t_0 = 1, L_0 = 1/(m + n)
2: repeat for k = 1, 2, ...
3:   compute s_k^α = α_k + θ_k(α_k − α_{k−1}) and s_k^β = β_k + θ_k(β_k − β_{k−1}), where θ_k = (t_{k−2} − 1)/t_{k−1}
4:   compute g_α = ∇_α g(s_k^α, s_k^β) and g_β = ∇_β g(s_k^α, s_k^β) based on (8)
5:   find L_k > L_{k−1} such that the sufficient-decrease condition g(α_{k+1}, β_{k+1}) ≤ g(s_k^α, s_k^β) − (‖g_α‖² + ‖g_β‖²)/(2L_k) holds, where [α_{k+1}; β_{k+1}] = π_Ξ([α′_{k+1}; β′_{k+1}]) with α′_{k+1} = s_k^α − (1/L_k) g_α and β′_{k+1} = s_k^β − (1/L_k) g_β
6:   update t_k = (1 + √(1 + 4 t_{k−1}²)) / 2
7: until convergence (i.e., |g(α_{k+1}, β_{k+1}) − g(α_k, β_k)| < ε)
8: return w = (1/(λm)) ( α_k^T X^+ − β_k^T X^− )
It should be noted that (6) is a constrained problem, and therefore, at each step of gradient mapping, we have to project the dual solution onto the domain Ξ (i.e., [α_{k+1}; β_{k+1}] = π_Ξ([α′_{k+1}; β′_{k+1}]) in step 5) to keep it feasible. Below, we discuss how to solve this projection step efficiently.
Projection Step. For clear notation, we expand the projection step into the problem

    min_{α≥0, β≥0} (1/2)‖α − α⁰‖² + (1/2)‖β − β⁰‖²   s.t.  1_m^T α = 1_n^T β,    (9)
where α⁰ and β⁰ are the solutions obtained in the last iteration. We note that similar projection problems have been studied in [34, 24], where they either have O((m + n) log(m + n)) time complexity [34] or only provide approximate solutions [24]. Instead, based on the following proposition, we provide a method that finds the exact solution to (9) in O(n + m) time. By using a proof technique similar to that for Theorem 2 in [24], we can prove the following proposition:
Proposition 1. The optimal solution to the projection problem in (9) is given by
?? = [?0 ? ? ? ]+ and ? ? = [? 0 + ? ? ]+ ,
Pm
Pn
where ? ? is the root of function ?(?) = i=1 [?i0 ? ?]+ ? j=1 [?j0 + ?]+ .
According to Proposition 1, the key to solving this problem is to find the root of φ(γ). Instead of
approximating the solution via bisection as in [24], we develop a divide-and-conquer method that
finds γ* exactly in O(m + n) time; a similar approach has been used in [10]. The basic idea is to
first identify the smallest interval that contains the root, based on a modification of the randomized
median finding algorithm [8], and then solve for the root exactly on that interval. The detailed
projection procedure can be found in the longer version [22].
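As a sketch of Proposition 1, the root γ* can be found with any one-dimensional root finder once a
bracketing interval is known; the snippet below uses SciPy's Brent method and checks feasibility.
Each evaluation of φ costs O(m + n), so this is not the paper's O(m + n) method, which locates the
root exactly via randomized median finding.

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
a0, b0 = rng.normal(size=5), rng.normal(size=7)        # toy (alpha', beta')

def phi(g):
    # phi from Proposition 1: continuous, piecewise linear, decreasing in g.
    return np.maximum(a0 - g, 0).sum() - np.maximum(b0 + g, 0).sum()

lo = min(a0.min(), -b0.max()) - 1.0                    # phi(lo) > 0
hi = max(a0.max(), -b0.min()) + 1.0                    # phi(hi) < 0
g_star = brentq(phi, lo, hi)                           # root gamma*

a_star = np.maximum(a0 - g_star, 0.0)
b_star = np.maximum(b0 + g_star, 0.0)
# Feasibility of (9): nonnegative and 1_m' alpha = 1_n' beta.
assert a_star.min() >= 0 and b_star.min() >= 0
assert abs(a_star.sum() - b_star.sum()) < 1e-8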
³ The step size of Nesterov's method depends on the smoothness of the objective function. In the
current work we adopt Nemirovski's line search scheme [29] to compute the smoothness parameter;
the detailed algorithm can be found in [22].
Table 1: Comparison of computational complexities for ranking algorithms, where d is the number of
dimensions, ε is the precision parameter, and m and n are the number of positive and negative
instances, respectively.

Algorithm     | Reference  | Computational Complexity
SVMRank       | [18]       | O((m+n)d + (m+n) log(m+n)/ε)
SVMMAP        | [42]       | O((m+n)d + (m+n) log(m+n)/ε)
OWPC          | [38]       | O((m+n)d + (m+n) log(m+n)/ε)
SVMpAUC       | [27, 28]   | O((n log n + m log m + (m+n)d)/ε)
InfinitePush  | [1]        | O(mnd + mn log(mn)/ε²)
L1SVIP        | [31]       | O(mnd + mn √(log(mn))/ε)
TopPush       | this paper | O((m+n)d/√ε)

3.3 Convergence and Computational Complexity
The theorem below states the convergence of the TopPush algorithm, which follows immediately
from the convergence result for Nesterov's method [29].

Theorem 2. Let α_T and β_T be the solution output by TopPush after T iterations. We have

g(α_T, β_T) ≤ min_{(α,β)∈Ξ} g(α, β) + ε,

provided T ≥ O(1/√ε).

Finally, since the computational cost of each iteration is dominated by the gradient evaluation and
the projection step, the time complexity of each iteration is O((m + n)d): the projection step costs
O(m + n) and computing the gradient costs O((m + n)d). Combining this result with Theorem 2, the
total computational complexity of finding an ε-suboptimal solution with TopPush is O((m + n)d/√ε),
which is linear in the number of training instances.
Table 1 compares the computational complexity of TopPush with that of state-of-the-art algorithms.
TopPush is asymptotically more efficient than the state-of-the-art ranking algorithms⁴. For instance,
it is much more efficient than InfinitePush and its sparse extension L1SVIP, whose complexities
depend on the number of positive-negative instance pairs; compared with SVMRank, SVMMAP and
SVMpAUC, which handle specific performance metrics via structural SVM, the linear dependence on
the number of training instances makes TopPush more appealing, especially for large datasets.
3.4 Theoretical Guarantee

We now develop a theoretical guarantee for the ranking performance of TopPush. In [33, 1], the
authors developed margin-based generalization bounds for the loss function L_ℓ. One limitation of
the analysis in [33, 1] is that it tries to bound the probability of a positive instance being ranked
before any negative instance, leading to relatively pessimistic bounds⁵. Our analysis avoids this
pitfall by considering the probability of ranking a positive instance before most negative instances.
To this end, we first define h_b(x, w), the probability for a negative instance to be ranked above x
by the ranking function f(x) = wᵀx, as

h_b(x, w) = E_{x⁻∼P⁻}[ I(wᵀx ≤ wᵀx⁻) ].

Since we are interested in whether positive instances are ranked above most negative instances, we
measure the quality of f(x) = wᵀx by the probability for a positive instance to be ranked below a
fraction δ of the negative instances, i.e.,

P_b(w, δ) = Pr_{x⁺∼P⁺}( h_b(x⁺, w) ≥ δ ).

Clearly, if a ranking function achieves high accuracy at the top, a large percentage of positive
instances should have ranking scores higher than most of the negative instances, leading to a small
value of P_b(w, δ) for small δ. The following theorem bounds P_b(w, δ) for TopPush; the detailed
proof can be found in the longer version [22].
⁴ In Table 1, we report the complexity of SVMpAUC-tight in [28], which is more efficient than
SVMpAUC in [27]. In addition, SVMpAUC-tight is used in the experiments, and we do not distinguish
between the two in this paper.
⁵ For instance, for the bounds in [33], the failure probability can be as large as 1 if the parameter
p is large.
Theorem 3. Given training data S consisting of m independent samples from P⁺ and n independent
samples from P⁻, let w* be the optimal solution to the problem in (5). Assume m ≥ 12 and n ≫ t.
Then, with probability at least 1 − 2e⁻ᵗ,

P_b(w*, δ) ≤ L_ℓ(w*, S) + O(√((t + log m)/m)),

where δ = O(√(log m / n)) and L_ℓ(w*, S) = (1/m) Σ_{i=1}^m ℓ(max_{1≤j≤n} w*ᵀx_j⁻ − w*ᵀx_i⁺).
Remark Theorem 3 implies that if the empirical loss L_ℓ(w*, S) ≤ O(log m/m), then for most positive
instances x⁺ (i.e., a fraction 1 − O(log m/m) of them), the percentage of negative instances ranked
above x⁺ is upper bounded by O(√(log m / n)). We observe that m and n play different roles in the
bound: because the empirical loss compares each positive instance to the negative instance with the
largest score, it usually grows much more slowly with increasing n. For instance, the largest absolute
value of n Gaussian random samples grows only logarithmically with n. Thus, we believe that the
main effect of increasing n in our bound is to reduce δ (which decreases at the rate 1/√n), especially
when n is large. Meanwhile, increasing the number of positive instances m reduces the bound on
P_b(w, δ) and consequently increases the chance of finding positive instances at the top.
4 Experiments

4.1 Settings

To evaluate the performance of the TopPush algorithm, we conduct a set of experiments on real-world
datasets. Table 2 (left column) summarizes the datasets used in our experiments. Some of them were
used in previous studies [1, 31, 3]; others are larger datasets from different domains. We compare
TopPush with state-of-the-art algorithms that focus on accuracy at the top, including SVMMAP [42],
SVMpAUC [28] with pAUC range [α, β] = [0, 1/n], AATP [3] and InfinitePush [1]. In addition, for
completeness, several state-of-the-art classification and ranking models are included in the comparison:
logistic regression (LR) for binary classification, cost-sensitive SVM (cs-SVM), which addresses
imbalanced class distributions by introducing a different misclassification cost for each class, and
SVMRank [18] for AUC optimization. We implement TopPush and InfinitePush in MATLAB,
implement AATP using CVX [14], use LIBLINEAR [11] for LR and cs-SVM, and use the code
shared by the authors of the original works for the remaining methods.

We measure accuracy at the top by commonly used metrics⁶: (i) positives at the top (Pos@Top)
[1, 31, 3], defined as the fraction of positive instances ranked above the top-ranked negative;
(ii) average precision (AP); and (iii) the normalized DCG score (NDCG). On each dataset, experiments
are run for thirty trials. In each trial, the dataset is randomly divided into two subsets: 2/3 for training
and 1/3 for testing. For all algorithms, we set the precision parameter ε to 10⁻⁴, choose the other
parameters by 5-fold cross-validation (based on the average value of Pos@Top) on the training set, and
perform the evaluation on the test set. Finally, results averaged over the thirty trials are reported. All
experiments are run on a machine with two Intel Xeon E7 CPUs and 16GB memory.
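As an illustration of these metrics, here is a small Python sketch (the helper names are our own; the
DCG discount below is one common convention and is not specified by the paper):

import numpy as np

def pos_at_top(scores_pos, scores_neg):
    # Pos@Top: fraction of positives scored above the top-ranked negative.
    return float(np.mean(scores_pos > scores_neg.max()))

def average_precision(scores, labels):
    # AP: mean of precision@k over the positions k of positive instances.
    order = np.argsort(-scores)
    rel = labels[order].astype(float)
    precision_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float(np.sum(rel * precision_at_k) / max(rel.sum(), 1.0))

def ndcg(scores, labels):
    # NDCG with the 1/log2(rank + 1) discount (an assumed convention).
    order = np.argsort(-scores)
    disc = 1.0 / np.log2(np.arange(2, len(labels) + 2))
    dcg = float(np.sum(labels[order] * disc))
    idcg = float(np.sum(np.sort(labels)[::-1] * disc))
    return dcg / max(idcg, 1e-12)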
4.2 Results

In Table 2, we report the performance of the algorithms in comparison; the statistics of the testbeds
are included in the first column of the table. For better comparison between TopPush and the
baselines, pairwise t-tests at significance level 0.9 are performed, and results are marked '•'/'◦' in
Table 2 when TopPush is statistically significantly better/worse.

When an evaluation task cannot be completed in two weeks, it is stopped automatically and no result
is reported. As a consequence, results for some algorithms are missing from Table 2 for certain
datasets, especially the large ones. We can see from Table 2 that TopPush, LR and cs-SVM complete
the evaluation on all datasets (even the largest dataset, url). In contrast, SVMRank, SVMMAP and
SVMpAUC fail to complete training in time on several large datasets. InfinitePush and AATP have
the worst scalability: they are only able to finish on the smallest dataset, diabetes. We thus conclude
that, overall, TopPush scales well to large datasets.
⁶ It is worth mentioning that we also measure ranking performance by AUC; those results can be
found in [22], along with more details of the experimental setting.
Table 2: Data statistics (left column) and experimental results. For each dataset, the number of
positive and negative instances is given below the data name as m/n, together with the dimensionality d.
For training time comparison, 'N' ('F') is marked if TopPush is at least 10 (100) times faster than the
compared algorithm. For performance (mean±std) comparison, '•' ('◦') is marked if TopPush performs
significantly better (worse) than the baseline based on a pairwise t-test at the 0.9 significance level.
On each dataset, if the evaluation of an algorithm could not be completed in two weeks, it was stopped
and its results are missing from the table.

diabetes (500/268, d: 34)
  Algorithm     Time (s)        Pos@Top        AP             NDCG
  TopPush       5.11 × 10⁻³     .123 ± .056    .872 ± .023    .976 ± .005
  LR            2.30 × 10⁻²     .064 ± .075 •  .881 ± .022    .973 ± .008
  cs-SVM        7.70 × 10⁻²     .077 ± .088 •  .758 ± .166 •  .920 ± .078 •
  SVMRank       6.11 × 10⁻²     .087 ± .082 •  .879 ± .022    .975 ± .006
  SVMMAP        4.71 × 10⁰      .077 ± .072 •  .879 ± .012    .969 ± .009
  SVMpAUC       2.09 × 10⁻¹ N   .053 ± .096 •  .668 ± .123 •  .884 ± .065 •
  InfinitePush  2.63 × 10¹ F    .119 ± .051    .877 ± .035    .978 ± .007
  AATP          2.72 × 10³ F    .127 ± .061    .881 ± .035    .979 ± .010

news20-forsale (999/18,929, d: 62,061)
  TopPush       2.16 × 10⁰      .191 ± .088    .843 ± .018    .970 ± .005
  LR            4.14 × 10⁰      .086 ± .067 •  .803 ± .020 •  .962 ± .005
  cs-SVM        1.89 × 10⁰      .114 ± .069 •  .766 ± .021 •  .955 ± .006 •
  SVMRank       2.96 × 10² F    .149 ± .056 •  .850 ± .016    .972 ± .003
  SVMMAP        8.42 × 10² F    .184 ± .092    .832 ± .022    .969 ± .007
  SVMpAUC       3.25 × 10² F    .196 ± .087    .812 ± .019 •  .963 ± .005 •

nslkdd (71,463/77,054, d: 121)
  TopPush       7.64 × 10¹      .633 ± .088    .978 ± .001    .997 ± .001
  LR            3.63 × 10¹      .220 ± .053 •  .981 ± .002    .998 ± .001
  cs-SVM        1.86 × 10⁰      .556 ± .037 •  .980 ± .001    .998 ± .001
  SVMpAUC       1.72 × 10²      .634 ± .059    .956 ± .002 •  .996 ± .001

real-sim (22,238/50,071, d: 20,958)
  TopPush       1.34 × 10¹      .186 ± .049    .986 ± .001    .998 ± .001
  LR            7.67 × 10⁰      .100 ± .043 •  .989 ± .001    .999 ± .001
  cs-SVM        4.84 × 10⁰      .146 ± .031 •  .979 ± .001    .998 ± .001
  SVMRank       1.83 × 10³ F    .090 ± .045 •  .986 ± .000    .999 ± .001

spambase (1,813/2,788, d: 57)
  TopPush       1.51 × 10⁻¹     .129 ± .077    .922 ± .006    .988 ± .001
  LR            3.11 × 10⁻²     .071 ± .053 •  .920 ± .010    .987 ± .003
  cs-SVM        8.31 × 10⁻²     .069 ± .059 •  .907 ± .010 •  .980 ± .004 •
  SVMRank       2.31 × 10¹ N    .069 ± .076 •  .931 ± .010    .990 ± .003
  SVMMAP        1.92 × 10² F    .097 ± .069 •  .935 ± .014    .984 ± .005
  SVMpAUC       1.73 × 10⁰ N    .073 ± .058 •  .854 ± .024 •  .975 ± .007 •
  InfinitePush  1.78 × 10³ F    .132 ± .087    .920 ± .005    .987 ± .002

url (792,145/1,603,985, d: 3,231,961)
  TopPush       5.11 × 10³      .474 ± .046    .986 ± .001    .999 ± .001
  LR            8.98 × 10³      .362 ± .113 •  .993 ± .001 ◦  .999 ± .001
  cs-SVM        3.78 × 10³      .432 ± .069 •  .991 ± .002    .998 ± .001

w8a (1,933/62,767, d: 300)
  TopPush       7.35 × 10⁰      .226 ± .053    .710 ± .019    .938 ± .005
  LR            2.46 × 10⁰      .107 ± .093 •  .450 ± .374 •  .775 ± .221 •
  cs-SVM        3.87 × 10⁰      .118 ± .105 •  .447 ± .372 •  .774 ± .220 •
  SVMpAUC       2.59 × 10³ F    .207 ± .046    .673 ± .021 •  .929 ± .006 •
Performance Comparison In terms of the evaluation metric Pos@Top, we find that TopPush yields
performance similar to InfinitePush and AATP, and performs significantly better than the other
baselines, including LR, cs-SVM, SVMRank, SVMMAP and SVMpAUC. This is consistent with the
design of TopPush, which aims to maximize accuracy at the top of the ranked list. Since the loss
functions optimized by InfinitePush and AATP are similar to that of TopPush, it is not surprising
that they yield similar performance. The key advantage of the proposed algorithm over InfinitePush
and AATP is that it is computationally more efficient and scales well to large datasets. In terms of
AP and NDCG, we observe that TopPush yields similar, if not better, performance than state-of-the-art
methods such as SVMMAP and SVMpAUC that are designed to optimize these metrics. Overall, we
conclude that the proposed algorithm is effective in optimizing ranking accuracy for the top ranked
instances.
Training Efficiency To evaluate computational efficiency, we set the parameters of the different
algorithms to the values selected by cross-validation and run the algorithms on the full datasets,
including both training and testing sets. Table 2 summarizes the training times. From the results, we
can see that TopPush is faster than state-of-the-art ranking methods on most datasets. In fact, the
training time of TopPush is similar to that of LR and cs-SVM as implemented in LIBLINEAR. Since
the time complexity of learning a binary classification model is usually linear in the number of
training instances, this result implicitly suggests a linear time complexity for the proposed algorithm.
Scalability We study how TopPush scales with the number of training examples using the largest
dataset, url. Figure 1 shows a log-log plot of the training time of TopPush vs. the size of the training
data, where different lines correspond to different values of λ. For comparison, we also include a
black dash-dot line that fits the training time with a function linear in the number of training instances
(i.e., Θ(m + n)). From the plot, we can see that for the different regularization parameters λ, the
training time of TopPush increases even more slowly than the number of training data points. This is
consistent with our theoretical analysis in Section 3.3.
Figure 1: Training time of TopPush versus training data size for different values of λ
(λ ∈ {0.01, 0.1, 1, 10, 100}), with a Θ(m + n) reference line.
5 Conclusion

In this paper, we focus on bipartite ranking algorithms that optimize accuracy at the top of the ranked
list. To this end, we maximize the number of positive instances that are ranked above any negative
instance, and develop an efficient algorithm, named TopPush, to solve the resulting optimization
problem. Compared with existing work on this topic, the proposed TopPush algorithm scales linearly
in the number of training instances, in contrast to most existing bipartite ranking algorithms, whose
time complexities depend on the number of positive-negative instance pairs. Moreover, our theoretical
analysis clearly shows that it leads to a ranking function that places many positive instances at the top
of the ranked list. Empirical studies verify the theoretical claims: the TopPush algorithm is effective
in maximizing accuracy at the top and is significantly more efficient than state-of-the-art algorithms
for bipartite ranking. In the future, we plan to develop appropriate univariate losses, instead of
pairwise ranking losses, for efficient bipartite ranking that maximizes accuracy at the top.
Acknowledgement This research was supported by the 973 Program (2014CB340501), NSFC
(61333014), NSF (IIS-1251031), and ONR Award (N000141210431).
References
[1] S. Agarwal. The infinite push: A new support vector ranking algorithm that directly optimizes accuracy at the absolute top of the list. In SDM, pages 839–850, 2011.
[2] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. JMLR, 6:393–425, 2005.
[3] S. Boyd, C. Cortes, M. Mohri, and A. Radovanovic. Accuracy at the top. In NIPS, pages 962–970, 2012.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML, pages 89–96, 2005.
[6] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. Annals of Statistics, 36(2):844–874, 2008.
[7] S. Clémençon and N. Vayatis. Ranking the best instances. JMLR, 8:2671–2699, 2007.
[8] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.
[9] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In NIPS, pages 313–320, 2004.
[10] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In ICML, pages 272–279, 2008.
[11] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008.
[12] Y. Freund, R. Iyer, R. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. JMLR, 4:933–969, 2003.
[13] W. Gao, R. Jin, S. Zhu, and Z.-H. Zhou. One-pass AUC optimization. In ICML, pages 906–914, 2013.
[14] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, March 2014.
[15] J. Hanley and B. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29–36, 1982.
[16] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132. MIT Press, Cambridge, MA, 2000.
[17] T. Joachims. A support vector method for multivariate performance measures. In ICML, pages 377–384, Bonn, Germany, 2005.
[18] T. Joachims. Training linear SVMs in linear time. In KDD, pages 217–226, 2006.
[19] T. Kanamori, A. Takeda, and T. Suzuki. Conjugate relation between loss functions and uncertainty sets in classification problems. JMLR, 14:1461–1504, 2013.
[20] W. Kotlowski, K. Dembczynski, and E. Hüllermeier. Bipartite ranking through minimization of univariate loss. In ICML, pages 1113–1120, 2011.
[21] Q. V. Le and A. Smola. Direct optimization of ranking measures. CoRR, abs/0704.3359, 2007.
[22] N. Li, R. Jin, and Z.-H. Zhou. Top rank optimization in linear time. CoRR, abs/1410.1462, 2014.
[23] N. Li, I. W. Tsang, and Z.-H. Zhou. Efficient optimization of performance measures by classifier adaptation. IEEE-PAMI, 35(6):1370–1382, 2013.
[24] J. Liu and J. Ye. Efficient Euclidean projections in linear time. In ICML, pages 657–664, 2009.
[25] T.-Y. Liu. Learning to Rank for Information Retrieval. Springer, 2011.
[26] H. Narasimhan and S. Agarwal. On the relationship between binary classification, bipartite ranking, and binary class probability estimation. In NIPS, pages 2913–2921, 2013.
[27] H. Narasimhan and S. Agarwal. A structural SVM based approach for optimizing partial AUC. In ICML, pages 516–524, 2013.
[28] H. Narasimhan and S. Agarwal. SVMpAUC-tight: A new support vector method for optimizing partial AUC based on a tight convex upper bound. In KDD, pages 167–175, 2013.
[29] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994.
[30] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, 2003.
[31] A. Rakotomamonjy. Sparse support vector infinite push. In ICML, 2012.
[32] S. Rendle, L. Balby Marinho, A. Nanopoulos, and L. Schmidt-Thieme. Learning optimal ranking with tensor factorization for tag recommendation. In KDD, pages 727–736, 2009.
[33] C. Rudin and R. Schapire. Margin-based ranking and an equivalence between AdaBoost and RankBoost. JMLR, 10:2193–2232, 2009.
[34] S. Shalev-Shwartz and Y. Singer. Efficient learning of label ranking by soft projections onto polyhedra. JMLR, 7:1567–1599, 2006.
[35] S. Sun and J. Shawe-Taylor. Sparse semi-supervised learning using conjugate functions. JMLR, 11:2423–2455, 2010.
[36] A. Tewari and P. Bartlett. On the consistency of multiclass classification methods. JMLR, 8:1007–1025, 2007.
[37] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, 2005.
[38] N. Usunier, D. Buffoni, and P. Gallinari. Ranking with ordered weighted pairwise classification. In ICML, pages 1057–1064, Montreal, Canada, 2009.
[39] H. Valizadegan, R. Jin, R. Zhang, and J. Mao. Learning to rank by optimizing NDCG measure. In NIPS, pages 1883–1891, 2009.
[40] M. Xu, Y.-F. Li, and Z.-H. Zhou. Multi-label learning with PRO loss. In AAAI, pages 998–1004, 2013.
[41] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS, pages 485–493. MIT Press, 2012.
[42] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In SIGIR, pages 271–278, 2007.
[43] P. Zhao, S. C. H. Hoi, R. Jin, and T. Yang. Online AUC maximization. In ICML, pages 233–240, Bellevue, WA, 2011.
4,664 | 5,223 | SerialRank: Spectral Ranking using Seriation
Fajwel Fogel
C.M.A.P., École Polytechnique,
Palaiseau, France
fogel@cmap.polytechnique.fr

Alexandre d'Aspremont
CNRS & D.I., École Normale Supérieure
Paris, France
aspremon@ens.fr

Milan Vojnovic
Microsoft Research,
Cambridge, UK
milanv@microsoft.com
Abstract
We describe a seriation algorithm for ranking a set of n items given pairwise
comparisons between these items. Intuitively, the algorithm assigns similar rankings to items that compare similarly with all others. It does so by constructing a
similarity matrix from pairwise comparisons, using seriation methods to reorder
this matrix and construct a ranking. We first show that this spectral seriation algorithm recovers the true ranking when all pairwise comparisons are observed
and consistent with a total order. We then show that ranking reconstruction is
still exact even when some pairwise comparisons are corrupted or missing, and
that seriation based spectral ranking is more robust to noise than other scoring
methods. An additional benefit of the seriation formulation is that it allows us to
solve semi-supervised ranking problems. Experiments on both synthetic and real
datasets demonstrate that seriation based spectral ranking achieves competitive
and in some cases superior performance compared to classical ranking methods.
1 Introduction
We study the problem of ranking a set of n items given pairwise comparisons between these items.
In practice, the information about pairwise comparisons is usually incomplete, especially in the case
of a large set of items, and the data may also be noisy, that is some pairwise comparisons could be
incorrectly measured and incompatible with the existence of a total ordering.
Ranking is a classic problem but its formulations vary widely. For example, website ranking methods
such as PageRank [Page et al., 1998] and HITS [Kleinberg, 1999] seek to rank web pages based on
the hyperlink structure of the web, where links do not necessarily express consistent preference
relationships (e.g. a can link to b and b can link c, and c can link to a). The setting we study here
goes back at least to [Kendall and Smith, 1940] and seeks to reconstruct a ranking between items
from pairwise comparisons reflecting a total ordering.
In this case, the directed graph of all pairwise comparisons, where every pair of vertices is connected
by exactly one of two possible directed edges, is usually called a tournament graph in the theoretical
computer science literature, or a 'round robin' in sports, where every player plays every other player
once and each preference marks victory or defeat. The motivation for this formulation often stems
from the fact that in many applications, e.g. music, images, and movies, preferences are easier to
express in relative terms (e.g. a is better than b) rather than absolute ones (e.g. a should be ranked
fourth, and b seventh).
1
Assumptions about how the pairwise preference information is obtained also vary widely. A subset
of preferences is measured adaptively in [Ailon, 2011; Jamieson and Nowak, 2011], while [Negahban et al., 2012], for example, assume that preferences are observed iteratively, and [Freund et al.,
2003] extract them at random. In other settings, the full preference matrix is observed, but is perturbed by noise: in e.g. [Bradley and Terry, 1952; Luce, 1959; Herbrich et al., 2006], a parametric
model is assumed over the set of permutations, which reformulates ranking as a maximum likelihood
problem.
Loss function and algorithmic approaches vary as well. Kenyon-Mathieu and Schudy [2007], for
example, derive a PTAS for the minimum feedback arc set problem on tournaments, i.e. the problem
of finding a ranking that minimizes the number of upsets (a pair of players where the player ranked
lower on the ranking beats the player ranked higher). In practice, the complexity of this method is
relatively high, and other authors [see e.g. Keener, 1993; Negahban et al., 2012] have been using
spectral methods to produce more efficient algorithms (each pairwise comparison is understood as a
link pointing to the preferred item). Simple scoring methods such as the point difference rule [Huber,
1963; Wauthier et al., 2013] produce efficient estimates at very low computational cost. Ranking
has also been approached as a prediction problem, i.e. learning to rank [Schapire and Singer, 1998],
with [Joachims, 2002] for example using support vector machines to learn a score function. Finally,
in the Bradley-Terry-Luce framework, the maximum likelihood problem is usually solved using
fixed point algorithms or EM-like majorization-minimization techniques [Hunter, 2004] for which
no precise computational complexity bounds are known.
Here, we show that the ranking problem is directly related to another classical ordering problem,
namely seriation: we are given a similarity matrix between a set of n items and assume that the items
can be ordered along a chain such that the similarity between items decreases with their distance
within this chain (i.e. a total order exists). The seriation problem then seeks to reconstruct the
underlying linear ordering based on unsorted, possibly noisy, pairwise similarity information. Atkins
et al. [1998] produced a spectral algorithm that exactly solves the seriation problem in the noiseless
case, by showing that for similarity matrices computed from serial variables, the ordering of the
second eigenvector of the Laplacian matrix (a.k.a. the Fiedler vector) matches that of the variables.
In practice, this means that spectral clustering exactly reconstructs the correct ordering provided
items are organized in a chain. Here, adapting these results to ranking produces a very efficient
polynomial-time ranking algorithm with provable recovery and robustness guarantees. Furthermore,
the seriation formulation allows us to handle semi-supervised ranking problems. Fogel et al. [2013]
show that seriation is equivalent to the 2-SUM problem and study convex relaxations to seriation
in a semi-supervised setting, where additional structural constraints are imposed on the solution.
Several authors [Blum et al., 2000; Feige and Lee, 2007] have also focused on the directly related
Minimum Linear Arrangement (MLA) problem, for which excellent approximation guarantees exist
in the noisy case, albeit with very high polynomial complexity.
The main contributions of this paper can be summarized as follows. We link seriation and ranking by
showing how to construct a consistent similarity matrix based on consistent pairwise comparisons.
We then recover the true ranking by applying the spectral seriation algorithm in [Atkins et al., 1998]
to this similarity matrix (we call this method SerialRank in what follows). In the noisy case, we
then show that spectral seriation can perfectly recover the true ranking even when some of the
pairwise comparisons are either corrupted or missing, provided that the pattern of errors is relatively
unstructured. We show in particular that, in a regime where a high proportion of comparisons are
observed, some incorrectly, the spectral solution is more robust to noise than classical scoring based
methods. Finally, we use the seriation results in [Fogel et al., 2013] to produce semi-supervised
ranking solutions.
The paper is organized as follows. In Section 2 we recall definitions related to seriation, and link
ranking and seriation by showing how to construct well ordered similarity matrices from well ranked
items. In Section 3 we apply the spectral algorithm of [Atkins et al., 1998] to reorder these similarity
matrices and reconstruct the true ranking in the noiseless case. In Section 4 we then show that this
spectral solution remains exact in a noisy regime where a random subset of comparisons is corrupted.
Finally, in Section 5 we illustrate our results on both synthetic and real datasets, and compare ranking
performance with classical maximum likelihood, spectral and scoring based approaches. Auxiliary
technical results are detailed in Appendix A.
2
2 Seriation, Similarities & Ranking
In this section we first introduce the seriation problem, i.e. reordering items based on pairwise
similarities. We then show how to write the problem of ranking given pairwise comparisons as a
seriation problem.
2.1 The Seriation Problem
The seriation problem seeks to reorder n items given a similarity matrix between these items, such
that the more similar two items are, the closer they should be. This is equivalent to supposing that
items can be placed on a chain where the similarity between two items decreases with the distance
between these items in the chain. We formalize this below, following [Atkins et al., 1998].
Definition 2.1 We say that the matrix A ∈ S_n is an R-matrix (or Robinson matrix) if and only if it
is symmetric and A_{i,j} ≤ A_{i,j+1} and A_{i+1,j} ≤ A_{i,j} in the lower triangle, where 1 ≤ j < i ≤ n.

Another way to formulate the R-matrix conditions is to impose A_{ij} ≥ A_{kl} if |i − j| ≤ |k − l|
off-diagonal, i.e. the coefficients of A decrease as we move away from the diagonal. We also introduce
a definition for strict R-matrices A, whose rows/columns cannot be permuted without breaking the
R-matrix monotonicity conditions. We call reverse identity permutation the permutation that puts
rows and columns {1, . . . , n} of a matrix A in reverse order {n, n−1, . . . , 1}.

Definition 2.2 An R-matrix A ∈ S_n is called strict-R if and only if the identity and reverse identity
permutations of A are the only permutations producing R-matrices.
Any R-matrix with only strict R-constraints is a strict R-matrix. Following [Atkins et al., 1998], we
say that A is pre-R if there is a permutation matrix Π such that ΠAΠᵀ is an R-matrix. Given a pre-R
matrix A, the seriation problem consists in finding a permutation Π such that ΠAΠᵀ is an R-matrix.
Note that there might be several solutions to this problem. In particular, if a permutation Π is a
solution, then the reverse permutation is also a solution. When only two permutations of A produce
R-matrices, A will be called pre-strict-R.
2.2 Constructing Similarity Matrices from Pairwise Comparisons

Given an ordered input pairwise comparison matrix, we now show how to construct a similarity
matrix which is strict-R when all comparisons are given and consistent with the identity ranking
(i.e. items are ranked in increasing order of their indices). This means that the similarity between
two items decreases with the distance between their ranks. We will then be able to use the spectral
seriation algorithm of [Atkins et al., 1998], described in Section 3, to recover the true ranking from a
disordered similarity matrix.

We first explain how to compute a pairwise similarity from binary comparisons between items by
counting the number of matching comparisons. A second formulation allows us to handle the
generalized linear model.
2.2.1 Similarities from Pairwise Comparisons

Suppose we are given a matrix of pairwise comparisons C ∈ {−1, 0, 1}^{n×n} such that C_{i,j} + C_{j,i} = 0
for every i ≠ j, and

C_{i,j} =  1 if i is ranked higher than j;
           0 if i and j are not compared or are in a draw;    (1)
          −1 if j is ranked higher than i,

and, by convention, we define C_{i,i} = 1 for all i ∈ {1, . . . , n} (the values C_{i,i} have no effect
on the ranking method presented in algorithm SerialRank). We also define the pairwise similarity
matrix S^match as

S^match_{i,j} = Σ_{k=1}^n (1 + C_{i,k} C_{j,k})/2.    (2)
Since C_{i,k} C_{j,k} = 1 if C_{i,k} and C_{j,k} have the same sign, and C_{i,k} C_{j,k} = −1 if
they have opposite signs, S^match_{i,j} counts the number of matching comparisons between i and j
with other reference items k. If i or j is not compared with k, then C_{i,k} C_{j,k} = 0 and the term
(1 + C_{i,k} C_{j,k})/2 has an average effect of 1/2 on the similarity. The intuition behind this
construction is easy to understand in a tournament setting: players that beat the same players and are
beaten by the same players should have a similar ranking. We can write S^match in the following
equivalent form:

S^match = ½ (n 11ᵀ + CCᵀ).    (3)
Without loss of generality, we assume in the following propositions that items are ranked in
increasing order of their indices (the identity ranking). In the general case, we simply replace the
strict-R property by the pre-strict-R property.

The next result shows that when all comparisons are given and consistent with the identity ranking,
the similarity matrix S^match is a strict R-matrix.

Proposition 2.3 Given all pairwise comparisons C_{i,j} ∈ {−1, 0, 1} between items ranked according
to the identity permutation (with no ties), the similarity matrix S^match constructed as in (2) is a
strict R-matrix and

S^match_{i,j} = n − (max{i, j} − min{i, j})    (4)

for all i, j = 1, . . . , n.
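A quick numerical check of (2)–(3) and Proposition 2.3 in Python (a sketch; similarity_match is our
own helper name):

import numpy as np

def similarity_match(C):
    # S^match from (3): S = (n 11^T + C C^T) / 2.
    n = C.shape[0]
    return 0.5 * (n * np.ones((n, n)) + C @ C.T)

# Toy check of Proposition 2.3: with all consistent comparisons and no ties,
# S^match[i, j] = n - |i - j|.
n = 5
idx = np.arange(n)
C = np.sign(np.subtract.outer(idx, idx)).astype(float)
np.fill_diagonal(C, 1.0)                  # convention C[i, i] = 1
S = similarity_match(C)
assert np.allclose(S, n - np.abs(np.subtract.outer(idx, idx)))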
2.2.2 Similarities in the Generalized Linear Model

Suppose that paired comparisons are generated according to a generalized linear model (GLM), i.e.
we assume that the outcomes of paired comparisons are independent and that, for any pair of distinct
items, item i is observed to be preferred over item j with probability

P_{i,j} = H(ν_i − ν_j),    (5)

where ν ∈ ℝⁿ is a vector of strengths or skills parameters and H : ℝ → [0, 1] is a function that is
increasing on ℝ and such that H(−x) = 1 − H(x) for all x ∈ ℝ, lim_{x→−∞} H(x) = 0 and
lim_{x→∞} H(x) = 1. A well known special instance of the generalized linear model is the
Bradley-Terry-Luce model, for which H(x) = 1/(1 + e^{−x}) for x ∈ ℝ.

Let m_{i,j} be the number of times items i and j were compared, C^s_{i,j} ∈ {−1, 1} be the outcome
of comparison s, and Q be the matrix of corresponding empirical probabilities, i.e. if m_{i,j} > 0 we
have

Q_{i,j} = (1/m_{i,j}) Σ_{s=1}^{m_{i,j}} (C^s_{i,j} + 1)/2,

and Q_{i,j} = 1/2 in case m_{i,j} = 0. We then define the similarity matrix S^glm from the
observations Q as

S^glm_{i,j} = Σ_{k=1}^n [ 1_{{m_{i,k} m_{j,k} > 0}} (1 − |Q_{i,k} − Q_{j,k}|) + 1_{{m_{i,k} m_{j,k} = 0}} · ½ ].    (6)
Since the comparisons are independent, Q_{i,j} converges to P_{i,j} as m_{i,j} goes to infinity, and

S^glm_{i,j} → Σ_{k=1}^n (1 − |P_{i,k} − P_{j,k}|).

The result below shows that this limit similarity matrix is a strict R-matrix when the variables are
properly ordered.

Proposition 2.4 If the items are ordered in decreasing order of the skill parameters, then, in the limit
of a large number of observations, the similarity matrix S^glm is a strict R-matrix.
Notice that we recover the original definition of S^match in the case of binary probabilities, though
that case does not fit in the generalized linear model. Note also that these definitions extend directly
to the setting where multiple comparisons are available for each pair and are aggregated into
comparisons taking fractional values (e.g. in a tournament setting where participants play several
times against each other).
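As an illustration, the following Python sketch simulates comparisons under the Bradley-Terry-Luce
model and builds S^glm as in (6); the toy sizes and the uniform sampling of pairs are assumptions of
this sketch.

import numpy as np

rng = np.random.default_rng(1)
n_items, n_games = 10, 200                         # assumed toy sizes
skill = np.sort(rng.normal(size=n_items))[::-1]    # item 0 strongest

# Empirical win probabilities Q under BTL: H(x) = 1 / (1 + exp(-x)).
wins = np.zeros((n_items, n_items))
count = np.zeros((n_items, n_items))
for _ in range(n_games):
    i, j = rng.choice(n_items, size=2, replace=False)
    p = 1.0 / (1.0 + np.exp(-(skill[i] - skill[j])))
    out = rng.random() < p                          # True: i preferred over j
    wins[i, j] += out
    wins[j, i] += 1 - out
    count[i, j] += 1
    count[j, i] += 1

Q = np.full((n_items, n_items), 0.5)
obs = count > 0
Q[obs] = wins[obs] / count[obs]

# S^glm from (6): 1 - |Q_ik - Q_jk| when both pairs were observed, 1/2 otherwise.
S = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(n_items):
        both = obs[i] & obs[j]
        S[i, j] = (1 - np.abs(Q[i, both] - Q[j, both])).sum() + 0.5 * (~both).sum()
# As observations grow, reordering items by S (e.g., via Algorithm 1 below)
# recovers the skill order.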
Algorithm 1 Using Seriation for Spectral Ranking (SerialRank)
Input: A set of pairwise comparisons C_{i,j} ∈ {−1, 0, 1} or [−1, 1].
1: Compute a similarity matrix S as in §2.2
2: Compute the Laplacian matrix L_S = diag(S1) − S
3: Compute the Fiedler vector of S.
Output: A ranking induced by sorting the Fiedler vector of S (choose either increasing or decreasing
order to minimize the number of upsets).
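The following Python sketch implements the three steps above with dense linear algebra (the helper
name and the upset-based direction choice are our own; it assumes the similarity matrix is irreducible
with a simple Fiedler value, as the spectral results of Section 3 require):

import numpy as np

def serial_rank(C):
    # SerialRank sketch: similarity (3), Laplacian, Fiedler vector, argsort.
    n = C.shape[0]
    S = 0.5 * (n * np.ones((n, n)) + C @ C.T)
    L = np.diag(S.sum(axis=1)) - S
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    order = np.argsort(vecs[:, 1])         # sort by the Fiedler vector

    def upsets(perm):
        # Comparisons disagreeing with the ordering `perm` (best first).
        r = np.empty(n, dtype=int)
        r[perm] = np.arange(n)
        return int(np.sum((np.subtract.outer(r, r) < 0) & (C < 0)))

    rev = order[::-1]
    return order if upsets(order) <= upsets(rev) else rev

On a fully consistent comparison matrix this returns the true ordering or its reverse (both produce
R-matrices), and the upset count picks the right direction.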
3 Spectral Algorithms

We first recall how the spectral clustering approach can be used to recover the true ordering in
seriation problems by computing an eigenvector, with computational complexity O(n² log n)
[Kuczynski and Wozniakowski, 1992]. We then apply this method to the ranking problem.

3.1 Spectral Seriation Algorithm

We use the spectral computation method originally introduced in [Atkins et al., 1998] to solve the
seriation problem based on the similarity matrices defined in the previous section. We first recall the
definition of the Fiedler vector.

Definition 3.1 The Fiedler value of a symmetric, nonnegative and irreducible matrix A is the smallest
non-zero eigenvalue of its Laplacian matrix L_A = diag(A1) − A. The corresponding eigenvector is
called the Fiedler vector and is the optimal solution to min{yᵀL_A y : y ∈ ℝⁿ, yᵀ1 = 0, ‖y‖₂ = 1}.

The main result from [Atkins et al., 1998], detailed below, shows how to reorder pre-R matrices in
the noise free case.

Proposition 3.2 [Atkins et al., 1998, Th. 3.3] Let A ∈ S_n be an irreducible pre-R-matrix with a
simple Fiedler value and a Fiedler vector v with no repeated values. Let Π₁ ∈ P (respectively, Π₂)
be the permutation such that the permuted Fiedler vector Π₁v is strictly increasing (decreasing).
Then Π₁AΠ₁ᵀ and Π₂AΠ₂ᵀ are R-matrices, and no other permutations of A produce R-matrices.
3.2 SerialRank: a Spectral Ranking Algorithm

In Section 2, we showed that the similarities S^match and S^glm are pre-strict-R when all comparisons
are available and consistent with an underlying ranking of the items. We now use the spectral
seriation method in [Atkins et al., 1998] to reorder these matrices and produce an output ranking. We
call this algorithm SerialRank and prove the following result.

Proposition 3.3 Given all pairwise comparisons for a set of totally ordered items, and assuming there
are no ties between items, algorithm SerialRank, i.e. sorting the Fiedler vector of the matrix S^match
defined in (3), recovers the true ranking of the items.

A similar result applies to S^glm when enough comparisons are given in the generalized linear model.
This guarantees recovery of the true ranking of items in the noiseless case. In the next section, we
study the impact of corrupted or missing comparisons on the inferred ranking of items.
3.3 Hierarchical Ranking

In a large dataset, the goal may be to rank only a subset of top items. In this case, we can first
perform spectral ranking (which is cheap) and then refine the ranking of the top set of items using
either the SerialRank algorithm on the top comparison submatrix, or another seriation algorithm such
as the convex relaxation in [Fogel et al., 2013]. This last method would also allow us to solve
semi-supervised ranking problems, given additional information on the structure of the solution.
4 Robustness to Corrupted and Missing Comparisons

In this section we study the robustness of SerialRank with S^match with respect to noisy and missing
pairwise comparisons. We will see that noisy comparisons cause ranking ambiguities for the standard
point score method, and that such ambiguities can be lifted by the spectral ranking algorithm. We
show in particular that the SerialRank algorithm recovers the exact ranking when the pattern of errors
is random and the errors are not too numerous.

We define here the point score w_i of an item i, also known as point-difference or row-sum, as
w_i = Σ_{k=1}^n C_{i,k}, which corresponds to the number of wins minus the number of losses in a
tournament setting.
Proposition 4.1 Given all pairwise comparisons C_{s,t} ∈ {−1, 1} between items ranked according to
their indices, suppose the signs of m comparisons indexed (i₁, j₁), . . . , (i_m, j_m) are switched.

1. For the case of one corrupted comparison, if j₁ − i₁ > 2 then the spectral ranking recovers the
true ranking, whereas the standard point score method induces ties between the pairs of items
(i₁, i₁+1) and (j₁−1, j₁).

2. For the general case of m ≥ 1 corrupted comparisons, suppose that the following condition holds:

|i − j| > 2, for all i, j ∈ {i₁, . . . , i_m, j₁, . . . , j_m} such that i ≠ j.    (7)

Then S^match is a strict R-matrix, and thus the spectral ranking recovers the true ranking, whereas
the standard point score method induces ties between 2m pairs of items.

For the case of one corrupted comparison, note that the separation condition on the pair of items
(i, j) is necessary. When the comparison C_{i,j} between two adjacent items in the true ranking is
corrupted, no ranking method can break the resulting tie. For the case of an arbitrary number of
corrupted comparisons, condition (7) is only a sufficient condition.
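A quick numerical illustration of the point-score ambiguity in Proposition 4.1 (a sketch; indices are
0-based, so the corrupted comparison (3, 8) used in Figure 1 below is (i, j) = (2, 7)):

import numpy as np

n = 10
idx = np.arange(n)
C = np.sign(np.subtract.outer(idx, idx)).astype(float)  # consistent comparisons
np.fill_diagonal(C, 1.0)
i, j = 2, 7                                             # corrupt one comparison
C[i, j], C[j, i] = -C[i, j], -C[j, i]

w = C.sum(axis=1)          # point scores (row sums)
print(w)                   # ties appear at the pairs (i, i+1) and (j-1, j)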
Using similar arguments, we can also study conditions for recovering the true ranking in the case of
missing comparisons. These scenarios are actually slightly less restrictive than the noisy cases and
are covered in the supplementary material. We now estimate the number of randomly corrupted
entries that can be tolerated for perfect recovery of the true ranking.

Proposition 4.2 Given a comparison matrix for a set of n items with m corrupted comparisons
selected uniformly at random from the set of all possible item pairs, algorithm SerialRank guarantees
that the probability of recovery p(n, m) satisfies p(n, m) ≥ 1 − δ, provided that m = O(√n). In
particular, this implies that p(n, m) = 1 − o(1) provided that m = o(√n).
Figure 1: The matrix of pairwise comparisons C (far left) when the rows are ordered according to the
true ranking. The corresponding similarity matrix S^match is a strict R-matrix (center left). The same
S^match similarity matrix with comparison (3,8) corrupted (center right). With one corrupted
comparison, S^match keeps enough strict R-constraints to recover the right permutation: in the
noiseless case the difference between all coefficients is at least one, so after introducing an error the
coefficients inside the green rectangles still enforce strict R-constraints (far right).
5 Numerical Experiments
We conducted numerical experiments using both synthetic and real datasets to compare the performance of SerialRank with several classical ranking methods.
Synthetic Datasets The first synthetic dataset consists of a binary matrix of pairwise comparisons
derived from a given ranking of n items, with uniform, randomly distributed corrupted or missing
entries. The second synthetic dataset consists of a full matrix of pairwise comparisons derived from a
given ranking of n items, with added uncertainty for items that are sufficiently close in the true
ranking. Specifically, given a positive integer m, we let C_{i,j} = 1 if i < j − m, C_{i,j} ∼ Unif[−1, 1]
if |i − j| ≤ m, and C_{i,j} = −1 if i > j + m. In Figure 2, we measure the Kendall τ correlation
coefficient between the true ranking and the retrieved ranking while varying either the percentage of
corrupted comparisons or the percentage of missing comparisons. Kendall's τ counts the number of
agreeing pairs minus the number of disagreeing pairs between two rankings, scaled by the total
number of pairs, so that it takes values between −1 and 1. Experiments were performed with n = 100,
and the reported Kendall τ values were averaged over 50 experiments, with standard deviation less
than 0.02 at the points of interest (i.e. where Kendall τ > 0.8).
Figure 2: Kendall τ (higher is better) for SerialRank (SR, full red line), row-sum (PS [Wauthier et al.,
2013], dashed blue line), rank centrality (RC [Negahban et al., 2012], dashed green line), and
maximum likelihood (BTL [Bradley and Terry, 1952], dashed magenta line). In the first synthetic
dataset, we vary the proportion of corrupted comparisons (top left), the proportion of observed
comparisons (top right), and the proportion of observed comparisons with 20% of comparisons being
corrupted (bottom left). We also vary the parameter m in the second synthetic dataset (bottom right).
Real Datasets The first real dataset consists of pairwise comparisons derived from outcomes in the
TopCoder algorithm competitions. We collected data from 103 competitions among 2742 coders over
a period of about one year. Pairwise comparisons are extracted from the ranking of each competition
and then averaged for each pair. TopCoder maintains ratings for each participant, updated in an
online scheme after each competition, which were also included in the benchmarks. To measure
performance in Figure 3, we compute the percentage of upsets (i.e. comparisons disagreeing with the
computed ranking), which is closely related to the Kendall τ (by an affine transformation if
comparisons come from a consistent ranking). We refine this metric by considering only the
participants appearing in the top k, for various values of k, i.e. computing

ℓ_k = (1/|C_k|) Σ_{(i,j)∈C_k} 1_{{r(i) > r(j)}} 1_{{C_{i,j} < 0}},    (8)

where C_k is the set of pairs (i, j) that are compared and such that i and j are both ranked in the top k,
and r(i) is the rank of i. Up to scaling, this is the loss considered in [Kenyon-Mathieu and Schudy, 2007].
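A direct Python transcription of (8) (our helper; we assume the convention that smaller r means
ranked higher, and pairs are counted as ordered pairs, which only rescales the metric):

import numpy as np

def upsets_top_k(C, r, k):
    # l_k from (8); r[i] is the position of item i in the computed ranking.
    topk = r < k
    pair = (topk[:, None] & topk[None, :]) & (C != 0)   # compared, both in top k
    disagree = (np.subtract.outer(r, r) > 0) & (C < 0)
    return (pair & disagree).sum() / max(pair.sum(), 1)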
Figure 3: Percentage of upsets (i.e. disagreeing comparisons, lower is better) as defined in (8), for
various values of k and various ranking methods, on TopCoder data (left) and football data (right).
Semi-Supervised Ranking We illustrate here how, in a semi-supervised setting, one can interactively
enforce constraints on the retrieved ranking, using e.g. the semi-supervised seriation algorithm in
[Fogel et al., 2013]. We compute rankings of England Football Premier League teams for the
2013-2014 season (cf. Figure 4 in the Appendix for previous seasons). Comparisons are defined as
the averaged outcome (win, loss, or tie) of the home and away games for each pair of teams. As
shown in Table 1, the top half of the SerialRank ranking is very close to the official ranking, which is
computed by sorting the sum of points for each team (3 points for a win, 1 point for a tie). However,
there are significant variations in the bottom half, though the number of upsets is roughly the same as
for the official ranking. To test semi-supervised ranking, suppose for example that we are not
satisfied with the ranking of Aston Villa (the last team when ranked by the spectral algorithm); we
can explicitly enforce that Aston Villa appears before Cardiff, as in the official ranking. In the
ranking based on the corresponding semi-supervised seriation problem, Aston Villa is no longer last,
though the number of disagreeing comparisons remains just as low (cf. Figure 3, right).
Table 1: Ranking of teams in the England Premier League, season 2013-2014.

Official (points)   | Row-sum        | RC             | BTL            | SerialRank     | Semi-Supervised
Man City (86)       | Man City       | Liverpool      | Man City       | Man City       | Man City
Liverpool (84)      | Liverpool      | Arsenal        | Liverpool      | Chelsea        | Chelsea
Chelsea (82)        | Chelsea        | Man City       | Chelsea        | Liverpool      | Liverpool
Arsenal (79)        | Arsenal        | Chelsea        | Arsenal        | Arsenal        | Everton
Everton (72)        | Everton        | Everton        | Everton        | Everton        | Arsenal
Tottenham (69)      | Tottenham      | Tottenham      | Tottenham      | Tottenham      | Tottenham
Man United (64)     | Man United     | Man United     | Man United     | Southampton    | Man United
Southampton (56)    | Southampton    | Southampton    | Southampton    | Man United     | Southampton
Stoke (50)          | Stoke          | Stoke          | Stoke          | Stoke          | Newcastle
Newcastle (49)      | Newcastle      | Newcastle      | Newcastle      | Swansea        | Stoke
Crystal Palace (45) | Crystal Palace | Swansea        | Crystal Palace | Newcastle      | West Brom
Swansea (42)        | Swansea        | Crystal Palace | Swansea        | West Brom      | Swansea
West Ham (40)       | West Brom      | West Ham       | West Brom      | Hull           | Crystal Palace
Aston Villa (38)    | West Ham       | Hull           | West Ham       | West Ham       | Hull
Sunderland (38)     | Aston Villa    | Aston Villa    | Aston Villa    | Cardiff        | West Ham
Hull (37)           | Sunderland     | West Brom      | Sunderland     | Crystal Palace | Fulham
West Brom (36)      | Hull           | Sunderland     | Hull           | Fulham         | Norwich
Norwich (33)        | Norwich        | Fulham         | Norwich        | Norwich        | Sunderland
Fulham (32)         | Fulham         | Norwich        | Fulham         | Sunderland     | Aston Villa
Cardiff (30)        | Cardiff        | Cardiff        | Cardiff        | Aston Villa    | Cardiff
Acknowledgments FF, AA and MV would like to acknowledge support from a European Research Council starting grant (project SIPA) and support from the MSR-INRIA joint centre.
8
References
Ailon, N. [2011], Active learning ranking from pairwise preferences with almost optimal query complexity, in 'NIPS', pp. 810–818.
Atkins, J., Boman, E., Hendrickson, B. et al. [1998], 'A spectral algorithm for seriation and the consecutive ones problem', SIAM J. Comput. 28(1), 297–310.
Blum, A., Konjevod, G., Ravi, R. and Vempala, S. [2000], 'Semidefinite relaxations for minimum bandwidth and other vertex ordering problems', Theoretical Computer Science 235(1), 25–42.
Bradley, R. A. and Terry, M. E. [1952], 'Rank analysis of incomplete block designs: I. The method of paired comparisons', Biometrika pp. 324–345.
Feige, U. and Lee, J. R. [2007], 'An improved approximation ratio for the minimum linear arrangement problem', Information Processing Letters 101(1), 26–29.
Fogel, F., Jenatton, R., Bach, F. and d'Aspremont, A. [2013], 'Convex relaxations for permutation problems', NIPS 2013, arXiv:1306.4805.
Freund, Y., Iyer, R., Schapire, R. E. and Singer, Y. [2003], 'An efficient boosting algorithm for combining preferences', The Journal of Machine Learning Research 4, 933–969.
Herbrich, R., Minka, T. and Graepel, T. [2006], TrueSkill™: A Bayesian skill rating system, in 'Advances in Neural Information Processing Systems', pp. 569–576.
Huber, P. J. [1963], 'Pairwise comparison and ranking: optimum properties of the row sum procedure', The Annals of Mathematical Statistics pp. 511–520.
Hunter, D. R. [2004], 'MM algorithms for generalized Bradley-Terry models', Annals of Statistics pp. 384–406.
Jamieson, K. G. and Nowak, R. D. [2011], Active ranking using pairwise comparisons, in 'NIPS', Vol. 24, pp. 2240–2248.
Joachims, T. [2002], Optimizing search engines using clickthrough data, in 'Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining', ACM, pp. 133–142.
Keener, J. P. [1993], 'The Perron-Frobenius theorem and the ranking of football teams', SIAM Review 35(1), 80–93.
Kendall, M. G. and Smith, B. B. [1940], 'On the method of paired comparisons', Biometrika 31(3-4), 324–345.
Kenyon-Mathieu, C. and Schudy, W. [2007], How to rank with few errors, in 'Proceedings of the thirty-ninth annual ACM symposium on Theory of computing', ACM, pp. 95–103.
Kleinberg, J. [1999], 'Authoritative sources in a hyperlinked environment', Journal of the ACM 46, 604–632.
Kuczynski, J. and Wozniakowski, H. [1992], 'Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start', SIAM J. Matrix Anal. Appl. 13(4), 1094–1122.
Luce, R. [1959], Individual Choice Behavior, Wiley.
Negahban, S., Oh, S. and Shah, D. [2012], Iterative ranking from pairwise comparisons, in 'NIPS', pp. 2483–2491.
Page, L., Brin, S., Motwani, R. and Winograd, T. [1998], 'The PageRank citation ranking: Bringing order to the web', Stanford CS Technical Report.
Cohen, W. W., Schapire, R. E. and Singer, Y. [1998], Learning to order things, in 'Advances in Neural Information Processing Systems 10: Proceedings of the 1997 Conference', Vol. 10, MIT Press, p. 451.
Wauthier, F. L., Jordan, M. I. and Jojic, N. [2013], Efficient ranking from pairwise comparisons, in 'Proceedings of the 30th International Conference on Machine Learning (ICML)'.
Magnitude-sensitive preference formation
Nisheeth Srivastava*
Department of Psychology
University of California, San Diego
La Jolla, CA 92093
nisheeths@gmail.com

Edward Vul
Department of Psychology
University of California, San Diego
La Jolla, CA 92093
edwardvul@gmail.com

Paul R Schrater
Department of Psychology
University of Minnesota
Minneapolis, MN 55455
schrater@umn.edu
Abstract
Our understanding of the neural computations that underlie the ability of animals to choose among options has advanced through a synthesis of computational modeling, brain imaging and behavioral choice experiments. Yet, there remains a gulf between theories of preference learning and accounts of the real, economic choices that humans face in daily life, choices that are usually between some amount of money and an item. In this paper, we develop a theory of magnitude-sensitive preference learning that permits an agent to rationally infer its preferences for items compared with money options of different magnitudes. We show how this theory yields classical and anomalous supply-demand curves and predicts choices for a large panel of risky lotteries. Accurate replications of such phenomena without recourse to utility functions suggest that the theory proposed is both psychologically realistic and econometrically viable.
1 Introduction
While value/utility is a useful abstraction for macroeconomic applications, it has little psychological
validity [1]. Valuations elicited in laboratory conditions are known to be extremely variable under
different elicitation conditions, liable to anchor on arbitrary observations, and extremely sensitive
to the set of options presented [2]. This last property constitutes the most straightforward refutation
of the existence of object-specific utilities. Consider, for example, an experiment conducted by [3],
where subjects were endowed with a fixed amount of money, which they could use across multiple
trials to buy out of receiving an electric shock of one of three different magnitudes (see left panel
in Figure 1). The large systematic differences found in the prices for different shock magnitudes
that subjects in this study were willing to pay demonstrate the absence of any fixed psychophysical
measurements of value. Thus, while utility maximization is a mathematically useful heuristic in
economic applications, it is unlikely that utility functions can represent value in any significant
psychological sense.
Neurological studies also demonstrate the existence of neuron populations sensitive not to absolute
reward values, but to one of the presented options being better relative to the others, a phenomenon
called comparative coding. Comparative coding was first reported in [4], who observed activity in
the orbito-frontal neurons of monkeys when offered varying juice rewards presented in pairs within
separate trial blocks in patterns that depended only on whether a particular juice is preferred within
its trial. Elliott et al. [5] found similar results using fMRI in the medial orbitofrontal cortex of human subjects, a brain region known to be involved in value coding. Even more strikingly, Plassmann et al. [6] found that falsely assigning a high price to a particular item (wine) caused both greater self-reported experienced pleasantness (EP) (see right panel of Figure 1) and greater mOFC activity indicative of pleasure. What is causing this pleasure? Where is the 'value' assigned to the pricier wine sample coming from?
* Corresponding author: nisheeths@gmail.com
[Figure 1 appears here. Left panel: prices offered to avoid pain options of three magnitudes under high, medium and low endowments; reconstructed from Figure 1(a) in (Vlaev, 2011). Right panel: liking ratings for wine samples with and without price labels ($5 to $90); reconstructed from Figure 1, panels B and D in (Plassmann, 2008).]
Figure 1: Valuations of options elicited in the lab can be notoriously labile. Left: An experiment
where subjects had to pay to buy out of receiving electric shock saw subjects losing or gaining
value for the price of pain of particular magnitudes both as a function of the amount of money the
experimenters initially gave them and the relative magnitude of the pair of shock options they were
given experience with. Right: Subjects asked to rate five (actually three) wines rated artificially
highly-priced samples of wine as more preferable. Not only this, imaging data from orbitofrontal
cortex showed that they actually experienced these samples as more pleasurable.
Viewed in light of these various difficulties, making choices for options that involve magnitudes appears to be a formidable challenge. However, humans, and even animals [7], are well known to
perform such operations easily. Therefore, one of two possibilities holds: one, that it is possible,
notwithstanding the evidence laid out above, for humans to directly assess value magnitudes (except
in corner cases like the ones we describe); two, that some alternative set of computations permits
them to behave as if they can estimate value magnitudes. This paper formalizes the set of computations that operationalizes this second view.
We build upon a framework of preference learning proposed in [8] that avoids the necessity for
assuming psychophysical access to value and develop a model that can form preferences for quantities of objects directly from history of past choices. Since the most common modality of choices
involving quantities in the modern world is determining the prices of objects, pricing forms the primary focus of our experiments. Specifically, we derive from our theory (i) classical and anomalous
supply-demand curves, and (ii) choice predictions for a large panel of risky lotteries. Hence, in this
paper we present a theory of magnitude-sensitive preference formation that, as an important special
case, provides an account of how humans learn to value money.
2 Learning to value magnitudes
2.1 Rational preference formation
Traditional treatments of preference learning (e.g. [9]) assume that there is some hidden state function U : X → R⁺ such that x ≻ x′ iff U(x) > U(x′) ∀x′ ∈ X, where X is the set of all possible options. Preference learning, in such settings, is reduced to a task of statistically estimating a monotone distortion of U, thereby making two implicit assumptions: (i) that there exists some psychophysical apparatus that can compute hedonic utilities, and (ii) that there exists some psychophysical apparatus capable of representing absolute magnitudes that can be compared in the mind. The data we describe above argue against either possibility being true. In order to develop a theory of preference
formation that avoids commitments to psychophysical value estimation, a novel approach is needed.
Srivastava & Schrater [8] provide us with the building blocks for such an approach. They propose that the process of learning preferences can be modeled as an ideal Bayesian observer directly
learning 'which option among the ones offered is best', retaining memory of which options were
presented to it at every choice instance. However, instead of directly remembering option sets, their
model allows for the possibility that option set observations map to latent contexts in memory. In
practice, this mapping is assumed to be identified in all their demonstrations. Formally, the computation corresponding to utility in this framework is p(r|x, o), which is obtained by marginalizing
over the set of latent contexts C,

D(x) = p(r|x, o) = \frac{\sum_{c \in C} p(r|x, c)\, p(x|c)\, p(c|o)}{\sum_{c \in C} p(x|c)\, p(c|o)}    (1)
where it is understood that the context probability p(c|o) = p(c|{o_1, o_2, ..., o_{t-1}}) is a distribution on the set of all possible contexts incrementally inferred from the agent's observation history. Here,
p(r|x, c) encodes the probability that the item x was preferred to all other items present in choice instances linked with the context c, p(x|c) encodes the probability that the item x was present in choice
sets indexed by the context c and p(c) encodes the frequency with which the observer encounters
these contexts.
The observer also continually updates p(c|o) via recursive Bayesian estimation,
p(c^{(t)}|o^{(1:t)}) = \frac{p(o^{(t)}|c)\, p(c|o^{(1:t-1)})}{\sum_{c \in C} p(o^{(t)}|c)\, p(c|o^{(1:t-1)})}    (2)
which, in conjunction with the desirability-based state preference update and a simple decision rule (e.g. MAP, softmax), yields a complete decision theory.
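As an illustration, the two computations above can be sketched in a few lines of Python; all probability tables below are toy values invented for the example, and the latent context set is assumed to be small and fixed.

```python
import numpy as np

def desirability(p_r_given_xc, p_x_given_c, p_c_given_o):
    """Equation 1: D(x) = p(r | x, o), marginalizing over latent contexts.

    All inputs are arrays indexed by context c, for one fixed option x.
    """
    num = np.sum(p_r_given_xc * p_x_given_c * p_c_given_o)
    den = np.sum(p_x_given_c * p_c_given_o)
    return num / den

def update_context_posterior(p_o_given_c, p_c_prior):
    """Equation 2: one recursive Bayesian update of p(c | o^(1:t))."""
    post = p_o_given_c * p_c_prior
    return post / post.sum()

# Toy example with three latent contexts.
p_c = np.array([0.5, 0.3, 0.2])            # p(c | o^(1:t-1))
p_o_given_c = np.array([0.1, 0.6, 0.3])    # likelihood of current option set
p_c = update_context_posterior(p_o_given_c, p_c)

p_r_given_xc = np.array([0.9, 0.2, 0.5])   # how often x won in each context
p_x_given_c = np.array([0.8, 0.4, 0.1])    # how often x appeared per context
print(desirability(p_r_given_xc, p_x_given_c, p_c))
```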
While this theory is complete in the formal sense that it can make testable predictions of options
chosen in the future given options chosen in the past, it is incomplete in its ability to represent
options: it will treat a gamble that pays $20 with probability 0.1 against safely receiving $1 and
one that pays $20000 with probability 0.1 against safely receiving $1 as equivalent, which is clearly
unsatisfactory. This is because it considers only simple cases where options have nominal labels.
We now augment it to take the information that magnitude labels1 provide into account.
2.2 Magnitude-sensitive preference formation
Typically, people will encounter monetary labels m ∈ M in a large number of contexts, often entirely outside the purview of the immediate choice to be made. In the theory of [8], incorporating desirability information related to m will involve marginalizing across all these contexts. Since the set of such contexts across a person's entire observation history is large, explicit marginalization across all contexts would imply explicit marginalization across every observation involving the monetary label m, which is unrealistic. Thus information about contexts must be compressed or summarized2.
We can resolve this by assuming instead that animals generate contexts as clusters of observations, thereby creating the possibility of learning higher-order abstract relationships between them. Such models of categorization via clustering are widely accepted in cognitive psychology [10]. Now, instead of recalling all possible observations containing m, an animal with a set of observation clusters (contexts) would simply sample a subset of these that would be representative of all contexts wherein observations containing m are statistically typical. In such a setting, p(m|c) would correspond to the observation likelihood of the label m being seen in the cluster c, p(c) would correspond to the relative frequency of context occurrences, and p(r|x, m, c) would correspond to the inferred value for item x when compared against monetary label m while context c is active. The remaining probability term p(x|m) encodes the probability of seeing transactions involving item x and the particular monetary label m. We define r to take the value 1 when x ≻ x′ ∀x′ ∈ X \ {x}.
Following a similar probabilistic calculus as in Equation 1, the inferred value of x becomes p(r|x)
and can be calculated as,
p(r|x) = \frac{\sum_{m \in M} \sum_{c \in C} p(r|x, m, c)\, p(x|m)\, p(m|c)\, p(c)}{\sum_{m \in M} \sum_{c \in C} p(x|m)\, p(m|c)\, p(c)}    (3)
1 Note that taking monetary labels into account is not the same as committing to a direct psychophysical evaluation of money. In our account, value judgments are linked not with magnitudes, but with labels, that just happen to correspond to numbers in common practice.
2 Mechanistic considerations of neurobiology also suggest sparse sampling of prior contexts. The memory and computational burden of recalculating preferences for an ever-increasing C would quickly prove insuperable.
[Figure 2 appears here: a forager deciding where to go among berry bushes on a hill (easy to get to), in a forest (too crowded), and in a valley, where m counts the red splotches seen on a bush; side panels sketch the probability terms p(m|c), p(x|m), p(r|x,m,c) and p(c) for these contexts, and the resulting inferred value p(r|x).]
Figure 2: Illustrating a choice problem an animal might face in the wild (left) and how the intermediate probability terms in our proposed model would operationalize different forms of information
needed to solve such a problem (right). Marginalizing across situation contexts and magnitude labels
tells us what the animal will do.
with the difference from the earlier expression arising from an additional summation over the set M
of monetary labels that the agent has experience with.
Figure 2 illustrates how these computations could be practically instantiated in a general situation
involving magnitude-sensitive value inference that animals could face. Our hunter-gatherer ancestor
has to choose which berry bush to forage in, and we must infer the choice he will make based on
recorded history of his past behavior. The right panel in this figure illustrates natural interpretations
for the intermediate conditional probabilities in Equation 3. The term p(m|c) encodes prior understanding of the fertility differential in the soils that characterize each of the three active contexts.
The p(r|x, m, c) term records the history of the forager's choices within each context via empirically observed relative frequencies. What drives the forager to prefer a sparsely-laden tree on the
hill instead of the densely laden tree in the forest in our example, though, is his calculation of the
underlying context probability p(c). In our story, because he lives near the hill, he encounters the
bushes on the hill more frequently, and so they dominate his preference judgment. A wide palette
of possible behaviors can be similarly interpreted and rationalized within the framework we have
outlined.
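The forager story can be turned into a direct numerical instance of Equation 3; all of the probability tables below (and the placeholder preference frequencies) are invented for illustration.

```python
import numpy as np

# Contexts: hill, forest, valley. Magnitude labels m: number of red
# splotches (1..5) seen on a bush; x is one particular bush.
p_c = np.array([0.6, 0.1, 0.3])                    # lives near the hill
p_m_given_c = np.array([[0.40, 0.30, 0.20, 0.07, 0.03],   # hill soil
                        [0.10, 0.20, 0.30, 0.25, 0.15],   # forest soil
                        [0.30, 0.30, 0.20, 0.15, 0.05]])  # valley soil
p_x_given_m = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # bush x seen with label m
rng = np.random.default_rng(0)
p_r_given_xmc = 0.5 + 0.1 * rng.random((3, 5))     # placeholder choice history

# Equation 3: marginalize over both magnitude labels and contexts.
weights = p_x_given_m[None, :] * p_m_given_c * p_c[:, None]
p_r_given_x = np.sum(p_r_given_xmc * weights) / np.sum(weights)
print(p_r_given_x)
```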
What exactly is this model telling us, though, that we aren't putting into it ourselves? The only strong
constraint it imposes on the form of preferences currently is that they will exhibit context-specific
consistency, viz. an animal that prefers one option over another in a particular context will continue
to do so in future trials. While this constraint itself is only valid if we have some way of pinning
down particular contexts, it is congruent with results from marketing research that describe the
general form of human preferences as being ? arbitrarily coherent? - consumer preferences are labile
and sensitive to changes in option sets, framing effects, loss aversion and a host of other treatments
but are longitudinally reliable within these treatments [2]. For our model to make more interesting
economic predictions, we must further constrain the form of the preferences it can emit to match
those seen in typical monetary transactions; we do this by making further assumptions about the
intermediate terms in Equation 3 in the next three sections that describe economic applications.
3 Living in a world of money
Equation 3 gives us predictions about how people will form preferences for various options that
co-occur with money labels. Here we specialize this model to make predictions about the value of
options that are money labels, viz. fiat currency. The institutional imperatives of legal tender impose a natural ordering on preferences involving monetary quantities. Ceteris paribus, subjects will
prefer a larger quantity of money to a smaller quantity. Thus, while the psychological desirability pointer could assign preferences to monetary labels capriciously (as an infant who prefers
the drawings on a $1 bill to those on a $100 bill might), to model desirability behavior corresponding
to knowledgeable use of currency, we constrain it to follow arithmetic ordering such that,
x_{m^\star} \succ x_m \iff m^\star > m, \quad \forall m \in M    (4)
where the notation x_m denotes an item (currency token) x associated with the money label m. Then,
Equation 3 reduces to,
p(r|x_{m^\star}) = \frac{\sum_{m \in M'} \sum_{c \in C} p(x|m)\, p(m|c)\, p(c)}{\sum_{m \in M} \sum_{c \in C} p(x|m)\, p(m|c)\, p(c)}    (5)
where max(M′) ≤ m⋆, since the contribution to p(r|x, m, c) for all larger m terms is set to zero by the arithmetic ordering condition; the p(x|m) term binds x to all the money labels it has been seen with before.
Assuming no uncertainty about which currency token goes with which label, p(x|m) becomes a
simple delta function pointing to m that the subject has experience with, and Equation 5 can be
rewritten as,
p(r|x) = \frac{\int_0^{m^\star} \sum_{c \in C} p(x|m, c)\, p(m|c)\, p(c)\, dm}{\int_0^{\infty} \sum_{c \in C} p(x|m, c)\, p(m|c)\, p(c)\, dm}.    (6)
If we further assume that the model gets to see all possible money labels, i.e. M = R+ , this can be
further simplified as,
p(r|x) = \frac{\int_0^{m^\star} \sum_{c \in C} p(m|c)\, p(c)\, dm}{\int_0^{\infty} \sum_{c \in C} p(m|c)\, p(c)\, dm},    (7)
reflecting strong dependence on the shape of p(m), the empirical distribution of monetary outcomes
in the world.
What can we say about the shape of the general frequency distribution of numbers in the world?
Numbers have historically arisen as ways to quantify, which helps plan resource foraging, consumption and conservation. Scarcity of essential resources naturally makes being able to differentiate
small magnitudes important for selection fitness. This motivates the development of number systems where objects counted frequently (essential resources) are counted with small numbers (for
better discriminability). Thus, it is reasonable to assume that, in general, larger numbers will be
encountered relatively less frequently than smaller ones in natural environments, and hence, that the
functions p(m) and p(c) will be monotone decreasing3 . For analytical tractability, we formalize this
assumption by setting p(m|c) to be gamma distributed on the domain of monetary labels, and p(c)
to be an exponential distribution on the domain of the typical ?wealth? rate of individual contexts.
The wealth rate is an empirically accessible index for the set of situation contexts, and represents
the typical (average) monetary label we expect to see in observations associated with this context.
Thus, for instance, the wealth rate for ?steakhouses? will be higher than that of ?fast food?. For
any particular value of the wealth rate, the ?price? distribution p(m|c) will reflect the relative frequencies of seeing various monetary labels in the world in observations typical to context c. The
gamma/log-normal shape of real-world prices in specific contexts is well-attested empirically. The
wealth rate distribution p(c) can be always made monotone decreasing simply by shuffling the order
of presentation of contexts in the measure of the distribution.
With these distributional assumptions, the marginalized product p(m) is assured to be a Pareto
distribution. Data from [12] as well as supporting indirect observations in [13], suggest that we are
on relatively safe ground by making such assumptions for the general distribution of monetary units
in the world [14]. This set of assumptions further reduces Equation 7 to,
p(r|x) = \Phi(x_{m^\star}),    (8)

where Φ(·) is the Pareto c.d.f.
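The construction can be checked numerically under the stated assumptions: a gamma-shaped p(m|c) whose mean equals the context's wealth rate, an exponential p(c) over wealth rates, and the value of a label read off the c.d.f. of the resulting marginal, as in Equations 7-8. The particular shape, scale, and grid values below are assumptions made for the sketch.

```python
import numpy as np
from scipy import stats

rates = np.linspace(0.5, 200.0, 400)       # wealth rates indexing contexts c
p_c = stats.expon(scale=20.0).pdf(rates)   # p(c): rich contexts are scarce
m = np.linspace(0.01, 500.0, 2000)         # grid of money labels
dm = m[1] - m[0]

# p(m | c): gamma with mean equal to the context's wealth rate (shape = 2).
p_m_given_c = stats.gamma(a=2.0, scale=rates[:, None] / 2.0).pdf(m)
p_m = (p_m_given_c * p_c[:, None]).sum(axis=0)
p_m /= p_m.sum() * dm                      # normalize the marginal p(m)

# Equation 8: the inferred value of a label m* is the c.d.f. up to m*.
cdf = np.cumsum(p_m) * dm
value = lambda m_star: float(np.interp(m_star, m, cdf))
print(value(10.0), value(100.0))           # concave: value grows sublinearly
```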
3 Convergent evidence may also be found in the Zipfian principle of communication efficiency [11]. While it might appear incongruous to speak of differential efficiency in communicating numbers, recall that the historical origins of numbers involved tally marks and other explicit token-based representations of numbers which imposed increasing resource costs in representing larger numbers.
Reduced experience with monetary options will be reflected in a reduced membership of M. Sampling at random from M corresponds to approximating Φ with a limited number of samples. So long
as the sampling procedure is not systematically biased away from particular x values, the resulting
curve will not be qualitatively different from the true one. Systematic differences will arise, though,
if the sampling is biased by, say, the range of values observers are known to encounter. For instance,
it is reasonable to assume that the wealth of a person is directly correlated with the upper limit of
money values they will see. Substituting this upper limit in Equation 7, we obtain a systematic difference in the curvature of the utility function that subjects with different wealth endowments will have
for the same monetary labels. The trend we obtain from a simulation (see gray inset in Figure 3) with
three different wealth levels ($1000, $10000 and $1 million) matches the empirically documented
increase in relative risk aversion (curvature of the utility function) with wealth [15]. Observe that
the log concavity of the Pareto c.d.f. has the practical effect of essentially converting our inferred
value for money into a classical utility function. Thus, using two assumptions (number ordering and
scarcity of essential resources), we have situated economic measurements of preference as a special,
fixed case of a more general dynamic process of desirability evaluation.
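A minimal sketch of this wealth-endowment simulation is shown below: each observer's experience of money labels is capped at their wealth, so the inferred value curve is a truncated, renormalized version of the c.d.f. in Equation 7. The Pareto parameters are assumed purely for illustration, and the size of the curvature differences depends on them.

```python
import numpy as np

alpha, m_min = 0.5, 1.0   # assumed Pareto tail index and scale

def utility_curve(wealth_cap, m_grid):
    # Equation 7 with the observer's wealth cap substituted for the
    # infinite upper limit, then renormalized.
    cdf = 1.0 - (m_min / np.clip(m_grid, m_min, None)) ** alpha
    cap = 1.0 - (m_min / wealth_cap) ** alpha
    return np.minimum(cdf, cap) / cap

m_grid = np.linspace(1.0, 1000.0, 1000)
for cap in [1e3, 1e4, 1e6]:               # the three simulated wealth levels
    u = utility_curve(cap, m_grid)
    print(cap, round(u[100], 4), round(u[500], 4))  # curvature varies by cap
```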
4 Modeling willingness-to-pay
[Figure 3 appears here, with columns illustrating: (a) a well-behaved classical demand curve; (b) a Veblen demand curve, where exclusive goods are seen at relatively few price points; (c) Giffen substitution, where item 2 becomes preferred after prices rise; and price anchoring over time t1-t8, where initial samples in the money distribution can skew early value estimates in novel contexts, with preference anchored to the initial numeric label. A gray inset shows the wealth effect on risk aversion for wealth levels of $1k, $10k and $1M.]
Figure 3: Illustrating derivations of pricing theory predictions for goods of various kinds from our
model.
Having studied how our model works for choices between items that all have money labels, the
logical next step is to study choices involving one item with a money label and one without, i.e.,
pricing. Note that asking how much someone values an option, as we did in the section above, is
different from asking if they would be willing to buy it at a particular price. The former corresponds
to the term p(r|x), as defined above. The latter will correspond to p(m|r, x), with m being the price
the subject is willing to pay to complete the transaction. Since the contribution of all terms where
r = 0, i.e. the transaction is not completed, is identically zero, this term can be computed as,
p(m|x) = \frac{\sum_{c \in C} p(x|m)\, p(m|c)\, p(c)}{\sum_{m \in M} \sum_{c \in C} p(x|m)\, p(m|c)\, p(c)},    (9)
further replacing the sum over M with an integral over the real line (as in the step from Equation 5 to Equation 6) for analytical tractability when necessary.
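A direct implementation of Equation 9 on toy inputs might look as follows; the price grid and all probability tables are invented for the example.

```python
import numpy as np

def willingness_to_pay(p_x_given_m, p_m_given_c, p_c):
    """Equation 9: posterior over prices m for a completed transaction on x.

    p_x_given_m : (M,)   how often x has been seen at each price point
    p_m_given_c : (C, M) money-label distribution within each context
    p_c         : (C,)   context probabilities
    """
    joint = p_x_given_m[None, :] * p_m_given_c * p_c[:, None]   # (C, M)
    num = joint.sum(axis=0)              # numerator: sum over contexts
    return num / num.sum()               # denominator: sum over m and c

# Toy example: 5 price points, 2 contexts.
p_x_given_m = np.array([0.05, 0.2, 0.5, 0.2, 0.05])  # x usually mid-priced
p_m_given_c = np.array([[0.4, 0.3, 0.2, 0.08, 0.02],
                        [0.1, 0.2, 0.3, 0.25, 0.15]])
p_c = np.array([0.7, 0.3])
print(willingness_to_pay(p_x_given_m, p_m_given_c, p_c))
```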
What aspects of pricing behavior in the real world can our model explain? Interesting variations
in pricing arise from assumptions about the money distribution p(m|c) and/or the price distribution p(x|m). Figure 3 illustrates our model's explanation for three prominent variations of classical demand curves documented in the microeconomics literature. Consumers typically reduce their preference for goods when prices rise, and increase it when prices drop. This fact about the structure of
preferences involved in money transactions is replicated in our model (see first column in Figure
3) via the reduction/increase of the contribution of the p(m|c) term to the numerator of Equation 9.
Marketing research reports anomalous pricing curves that violate this behavior in some cases. One
important case comprises Veblen goods, wherein the demand for high-priced exclusive goods
drops when prices are lowered. Our model explains this behavior (see second column in Figure 3)
via unfamiliarity with the price reflected in a lower contribution from the price distribution p(x|m)
for such low values. Such non-monotonic preference behavior is difficult for utility-based models,
but sits comfortably within ours, where familiarity with options at typical price points drives desirability. Another category of anomalous demand curves comes from Giffen goods, which rise in
demand upon price increases because another substitute item becomes too expensive. Our approach
accounts for this behavior (see third column in Figure 3) under the assumption that price changes
affect the Giffen good less because its price distribution has a larger variance, which is in line with
empirical reports showing greater price inelasticity of Giffen goods [16].
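The following single-context sketch shows how the shape of p(x|m) alone can flip a classical demand pattern into a Veblen-like one under Equation 9; every number here is invented for illustration.

```python
import numpy as np

p_m = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # money labels get rarer

def p_buy_at(price_idx, p_x_given_m):
    # Equation 9 restricted to a single context.
    joint = p_x_given_m * p_m
    return joint[price_idx] / joint.sum()

# Classical good: familiar across many price points.
classical = np.array([0.20, 0.30, 0.30, 0.15, 0.05])
# Veblen good: only ever seen at exclusive prices, so a price *cut* lands
# on an unfamiliar label and inferred desirability collapses.
veblen = np.array([0.00, 0.00, 0.05, 0.35, 0.60])

print([round(p_buy_at(i, classical), 3) for i in range(5)])
print([round(p_buy_at(i, veblen), 3) for i in range(5)])
```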
The last column in Figure 3 addresses an aspect of the temporal dynamics of our model that potentially explains both (i) why behavioral economists can continually find new anchoring results
(e.g. [6, 2]) and (ii) why classical economists often consider such results to be marginal and uninteresting [17]. Behavioral scientists running experiments in labs ask subjects to exhibit preferences
for which they may not have well-formed price and label distributions, which causes them to anchor and show other forms of preference instability. Economists fail to find similar results in their
field studies, because they collect data from subjects operating in contexts for which their price and
label distributions are well-formed. Both conclusions fall out of our model of sequential preference learning, where initial samples can bias the posterior, but the long-run distribution remains
stable. Parenthetically, this demonstration also renders transparent the mechanisms by which consumers process rapid inflationary episodes, stock price volatility, and transferring between multiple
currency bases. In all these cases, empirical observation suggests inertia followed by adaptation,
which is precisely what our model would predict.
5 Modeling risky monetary choices
Finally, we ask: how well can our model fit the choice behavior of real humans making economic
decisions? The simplest economic setup to perform such a test is in predicting choices between
risky lotteries, since the human prediction is always treated as a stochastic choice preference that
maps directly onto the output of our model. We use a basic expected utility calculation, where the
desirability for lottery options is computed as in Equation 8. For a choice between a risky lottery
x_1 = {m_h, m_l} and a safe choice x_2 = m_s, with a win probability q and where m_h > m_s > m_l,
the value calculation for the risky option will take the form,
p(r|x) = \frac{\int_0^{m_h} p(m|c)\, p(c)\, dm}{\int_0^{m_s} p(m|c)\, p(c)\, dm}, \quad \text{in wins}    (10)

p(r|x) = \frac{\int_0^{m_l} p(m|c)\, p(c)\, dm}{\int_0^{m_s} p(m|c)\, p(c)\, dm}, \quad \text{in losses}    (11)

\Rightarrow \quad EV(\text{risky}) = q\,\left(\Phi_x(m_h) - \Phi_x(m_s)\right) + (1 - q)\,\left(\Phi_x(m_l) - \Phi_x(m_s)\right).    (12)
where Φ(·) is the c.d.f. of the Pareto distribution on monetary labels m and q is the given lottery probability.
Using Equation 12, where Φ is the c.d.f. of a Pareto distribution (with parameters {2.9, 0.1, 1} fitted empirically), assuming that subjects distort perceived probabilities [18] via an inverse-S shaped weighting function4, and using an ε-random utility maximization decision rule5, we obtain choice predictions
4 We use Prelec's version of this function, with the slope parameter distributed N(0.65, 0.2) across our agent population. The quantitative values for this parameter are taken from (Zhang & Maloney, 2012).
5 ε-random decision utility maximization is a simple way of introducing stochasticity into the decision rule, and is a common econometric practice when modeling population-level data. It predicts that subjects pick the option with higher computed expected utility with a probability 1 − ε, and predict randomly with a probability ε. The value of ε is fitted to the data; we used ε = 0.25, the value that maximized our fit to the endpoints of the data. Since we are computing risk attitudes over a population, we should ideally also model stochasticity in the utility computation.
[Figure 4 appears here: two panels of choice probabilities, with the probability of the risky gamble (0.01 to 0.67) on the x-axis and the expected value of the gambles (20 to 10500, scaled to the smallest EV gamble) on the y-axis.]
Figure 4: Comparing proportion of subjects selecting risky options predicted by our theory with data
obtained in a panel of 35 different risky choice experiments. The x-axis plots the probability of the
risky gamble; the y-axis plots the expected value of gambles scaled to the smallest EV gamble. Left:
Choice probabilities for risky option plotted for 7 p values and 5 expected value levels. Each of the
35 choice experiments was conducted using between 70-100 subjects. Right: Choice probabilities
predicted by relative desirability computing agents in the same 35 choice experiments. Results are
compiled by averaging over 1000 artificial agents.
that match human performance (see Figure 4) on a large and comprehensive panel of risky choice
experiments obtained from [19] to within statistical confidence6 .
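A sketch of this prediction pipeline appears below. The Pareto parameters, the Prelec exponent, and ε are stand-in values rather than the fitted ones, and the choice rule is the ε-random maximization described in the footnotes.

```python
import numpy as np

alpha, m_min = 1.2, 0.5   # assumed Pareto c.d.f. parameters

def pareto_cdf(m):
    return np.where(m <= m_min, 0.0, 1.0 - (m_min / m) ** alpha)

def prelec(p, gamma=0.65):
    return np.exp(-(-np.log(p)) ** gamma)  # inverse-S probability weighting

def p_choose_risky(q, m_h, m_l, m_s, eps=0.25):
    w = prelec(q)                          # distorted win probability
    ev_risky = (w * (pareto_cdf(m_h) - pareto_cdf(m_s))
                + (1 - w) * (pareto_cdf(m_l) - pareto_cdf(m_s)))  # Eq. 12
    best = 1.0 if ev_risky > 0 else 0.0    # does the risky option win?
    return (1 - eps) * best + eps * 0.5    # epsilon-random decision rule

print(p_choose_risky(q=0.1, m_h=20.0, m_l=1.0, m_s=2.0))
```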
6 Conclusion
The idea that preferences about options can be directly determined psychophysically is strongly
embedded in traditional computational treatments of human preferences, e.g. reinforcement learning [20]. Considerable evidence, some of which we have discussed, suggests that the brain does not, in fact, compute value [3]. In search of a viable alternative, we have demonstrated a variety of
behaviors typical of value-based theories using a stochastic latent variable model that simply tracks
the frequency with which options are seen to be preferred in latent contexts and then compiles this
evidence in a rational Bayesian manner to emit preferences. This proposal, and its success in explaining fundamental economic concepts, situates the computation of value (as it is generally measured)
within the range of abilities of neural architectures that can only represent relative frequencies, not
absolute magnitudes.
While our demonstrations are computationally simple, they are substantially novel. In fact, computational models explaining any of these effects even in isolation are difficult to find [1]. While
the results we demonstrate are preliminary, and while some of the radical implications of our predictions about the effects of choice history on preferences ('you will hesitate in buying a MacBook for $100 because that is an unfamiliar price for it'7) remain to be verified, the plain ability to describe these economic concepts within an inductively rational framework without having to invoke a psychophysical value construct by itself constitutes a non-trivial success and forms the essential
contribution of this work.
Acknowledgments
NS and PRS acknowledge funding from the Institute for New Economic Thinking. EV acknowledges funding from NSF CPS Grant #1239323.
6 While [19] do not give standard deviations for their data, we assume that binary choice probabilities can be modeled by a binomial distribution, which gives us a theoretical estimate for the standard deviation expected in the data. Our optimal fits lie within 1 SD of the raw data for 34 of 35 payoff-probability combinations, yielding a fit in probability.
7 You will! You'll think there's something wrong with it.
References
[1] M. Rabin. Psychology and economics. Journal of Economic Literature, 36(1):11–46, 1998.
[2] Dan Ariely. Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper Collins, 2009.
[3] I. Vlaev, N. Chater, N. Stewart, and G. Brown. Does the brain calculate value? Trends in Cognitive Sciences, 15(11):546–554, 2011.
[4] L. Tremblay and W. Schultz. Relative reward preference in primate orbitofrontal cortex. Nature, 398:704–708, 1999.
[5] R. Elliott, Z. Agnew, and J. F. W. Deakin. Medial orbitofrontal cortex codes relative rather than absolute value of financial rewards in humans. European Journal of Neuroscience, 27(9):2213–2218, 2008.
[6] Hilke Plassmann, John O'Doherty, Baba Shiv, and Antonio Rangel. Marketing actions can modulate neural representations of experienced pleasantness. Proceedings of the National Academy of Sciences, 105(3):1050–1054, 2008.
[7] M. Keith Chen, Venkat Lakshminarayanan, and Laurie R. Santos. How basic are behavioral biases? Evidence from capuchin monkey trading behavior. Journal of Political Economy, 114(3):517–537, 2006.
[8] N. Srivastava and P. R. Schrater. Rational inference of relative preferences. In Proceedings of Advances in Neural Information Processing Systems 25, 2012.
[9] A. Jern, C. Lucas, and C. Kemp. Evaluating the inverse decision-making approach to preference learning. In NIPS, pages 2276–2284, 2011.
[10] J. Anderson. The Adaptive Character of Thought. Erlbaum Press, 1990.
[11] John Z. Sun, Grace I. Wang, Vivek K. Goyal, and Lav R. Varshney. A framework for Bayesian optimality of psychophysical laws. Journal of Mathematical Psychology, 56(6):495–501, 2012.
[12] Neil Stewart, Nick Chater, and Gordon D. A. Brown. Decision by sampling. Cognitive Psychology, 53(1):1–26, 2006.
[13] Christian Kleiber and Samuel Kotz. Statistical Size Distributions in Economics and Actuarial Sciences, volume 470. Wiley-Interscience, 2003.
[14] Adrian Dragulescu and Victor M. Yakovenko. Statistical mechanics of money. The European Physical Journal B - Condensed Matter and Complex Systems, 17(4):723–729, 2000.
[15] Daniel Paravisini, Veronica Rappoport, and Enrichetta Ravina. Risk aversion and wealth: Evidence from person-to-person lending portfolios. Technical report, National Bureau of Economic Research, 2010.
[16] Kris De Jaegher. Giffen behaviour and strong asymmetric gross substitutability. In New Insights into the Theory of Giffen Goods, pages 53–67. Springer, 2012.
[17] Faruk Gul and Wolfgang Pesendorfer. The case for mindless economics. The Foundations of Positive and Normative Economics, pages 3–39, 2008.
[18] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47:263–291, 1979.
[19] Pedro Bordalo, Nicola Gennaioli, and Andrei Shleifer. Salience theory of choice under risk. The Quarterly Journal of Economics, 127(3):1243–1285, 2012.
[20] Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
Learning Mixed Multinomial Logit Model from
Ordinal Data
Sewoong Oh
Dept. of Industrial and Enterprise Systems Engr.
University of Illinois at Urbana-Champaign
Urbana, IL 61801
swoh@illinois.edu
Devavrat Shah
Department of Electrical Engineering
Massachusetts Institute of Technology
Cambridge, MA 02139
devavrat@mit.edu
Abstract
Motivated by generating personalized recommendations using ordinal (or preference) data, we study the question of learning a mixture of MultiNomial Logit (MNL) models, a parameterized class of distributions over permutations, from partial ordinal or preference data (e.g. pair-wise comparisons). Despite its long-standing importance across disciplines including social choice, operations research and revenue management, little is known about this question. In the case of single MNL models (no mixture), computationally and statistically tractable learning from pair-wise comparisons is feasible. However, even learning a mixture with two MNL components is infeasible in general.

Given this state of affairs, we seek conditions under which it is feasible to learn the mixture model in both a computationally and statistically efficient manner. We present a sufficient condition as well as an efficient algorithm for learning mixed MNL models from partial preferences/comparisons data. In particular, a mixture of r MNL components over n objects can be learnt using samples whose size scales polynomially in n and r (concretely, r^{3.5} n^3 (log n)^4, with r ≪ n^{2/7} when the model parameters are sufficiently incoherent). The algorithm has two phases: first, learn the pair-wise marginals for each component using tensor decomposition; second, learn the model parameters for each component using RankCentrality, introduced by Negahban et al. In the process of proving these results, we obtain a generalization of existing analysis for tensor decomposition to a more realistic regime where only partial information about each sample is available.
1 Introduction
Background. Popular recommendation systems such as collaborative filtering are based on a partially observed ratings matrix. The underlying hypothesis is that the true/latent score matrix is low-rank and we observe its partial, noisy version. Therefore, matrix completion algorithms are used for
learning, cf. [8, 14, 15, 20]. In reality, however, observed preference data is not just scores. For
example, clicking one of the many choices while browsing provides a partial order between the clicked choice and the other choices. Further, scores do convey ordinal information as well, e.g. a score of 4
for paper A and score of 7 for paper B by a reviewer suggests ordering B > A. Similar motivations
led Samuelson to propose the Axiom of revealed preference [21] as the model for rational behavior.
In a nutshell, it states that consumers have a latent order over all objects, and the revealed preferences
through actions/choices are consistent with this order. If indeed all consumers had identical ordering, then learning preference from partial preferences is effectively the question of sorting.
In practice, individuals have different orderings of interest, and further, each individual is likely
to make noisy choices. This naturally suggests the following model: each individual has a latent distribution over orderings of objects of interest, and the revealed partial preferences are consistent
with it, i.e. samples from the distribution. Subsequently, the preference of the population as a whole
can be associated with a distribution over permutations. Recall that the low-rank structure for score
matrices, as a model, tries to capture the fact that there are only a few different types of choice
profiles. In the context of modeling consumer choices as a distribution over permutations, the MultiNomial Logit (MNL) model with a small number of mixture components provides such a model.
Mixed MNL. Given n objects or choices of interest, an MNL model is described as a parametric distribution over permutations of n with parameters w = [w_i] ∈ R^n: each object i, 1 ≤ i ≤ n, has a parameter w_i > 0 associated with it. Then the permutations are generated randomly as follows: choose one of the n objects to be ranked 1 at random, where object i is chosen to be ranked 1 with probability w_i / \sum_{j=1}^{n} w_j. Let i_1 be the object chosen for the first position. Now, to select the second ranked object, choose from the remaining objects with probability proportional to their weights. We repeat until objects for all ranked positions are chosen. It can be easily seen that, as per this model, an item i is ranked higher than j with probability w_i/(w_i + w_j).
In the mixed MNL model with r ≥ 2 mixture components, each component corresponds to a different MNL model: let w^{(1)}, ..., w^{(r)} be the corresponding parameters of the r components. Let q = [q_a] ∈ [0, 1]^r denote the mixture distribution, i.e. ∑_a q_a = 1. To generate a permutation at random, first choose a component a ∈ {1, ..., r} with probability q_a, and then draw a random permutation as per the MNL with parameters w^{(a)}.
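To make the generative model concrete, the following is a minimal Python sketch (ours, not from the paper) of sampling from a mixed MNL; all function names are our own and numpy is the only dependency.

import numpy as np

def sample_mnl_permutation(w, rng):
    """Sequentially fill ranks 1..n, choosing with probability proportional to weight."""
    remaining = list(range(len(w)))
    order = []
    while remaining:
        probs = np.array([w[i] for i in remaining], dtype=float)
        pick = rng.choice(len(remaining), p=probs / probs.sum())
        order.append(remaining.pop(pick))
    return order  # note: item i beats item j with probability w[i] / (w[i] + w[j])

def sample_mixed_mnl(q, W, rng):
    """q: mixture weights (length r); W: r-by-n array of component weights."""
    a = rng.choice(len(q), p=q)  # latent component, drawn with probability q_a
    return a, sample_mnl_permutation(W[a], rng)

rng = np.random.default_rng(0)
q = np.array([0.5, 0.5])
W = np.array([[1.0, 2.0, 0.5, 1.5], [2.0, 0.7, 1.2, 1.0]])
print(sample_mixed_mnl(q, W, rng))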
Brief history. The MNL model is an instance of a class of models introduced by Thurstone [23]. The description of the MNL provided here was formally established by McFadden [17]. The same model (in the form of pair-wise marginals) was introduced by Zermelo [25] as well as by Bradley and Terry [7] independently. In [16], Luce established that the MNL is the only distribution over permutations that satisfies the Axiom of Independence from Irrelevant Alternatives.
On learning distributions over permutations, the question of learning a single MNL model, and more generally instances of Thurstone's model, has been of interest for quite a while now. The maximum likelihood estimator, which is logistic regression for the MNL, has been known to be consistent in the large-sample limit, cf. [13]. Recently, RankCentrality [19] was established to be statistically efficient. For learning a sparse mixture model, i.e. a distribution over permutations with each mixture component being a delta distribution, [11] provided sufficient conditions under which mixtures can be learnt exactly using pair-wise marginals: effectively, as long as the number of components scaled as o(log n) and the components satisfied an appropriate incoherence condition, a simple iterative algorithm could recover the mixture. However, it is not robust with respect to noise in the data or finite-sample error in marginal estimation. Other approaches have been proposed to recover the model using convex-optimization-based techniques, cf. [10, 18]. The MNL model is a special case of a larger family of discrete choice models known as the Random Utility Model (RUM), and an efficient algorithm to learn RUMs is introduced in [22]. Efficient algorithms for learning RUMs from partial rankings have been introduced in [3, 4]. We note that the above list of references is very limited, including only closely related literature. Given the nature of the topic, there are many exciting lines of research done over the past century, and we are not able to provide comprehensive coverage due to space limitations.
Problem. Given observations from the mixed MNL, we wish to learn the model parameters: the mixing distribution q and the parameters of each component w^{(1)}, ..., w^{(r)}. The observations are in the form of pair-wise comparisons. Formally, to generate an observation, first one of the r mixture components is chosen; then, for ℓ of all possible (n choose 2) pairs, a comparison outcome is observed as per this MNL component¹. These ℓ pairs are chosen, uniformly at random, from a pre-determined set of N ≤ (n choose 2) pairs: {(i_k, j_k), 1 ≤ k ≤ N}. We shall assume that the selection of the N pairs is such that the undirected graph G = ([n], E), where E = {(i_k, j_k) : 1 ≤ k ≤ N}, is connected.
We ask the following questions of interest: Is it always feasible to learn the mixed MNL? If not, under what conditions and how many samples are needed? How computationally expensive are the algorithms?
¹ We shall assume that the outcomes of these ℓ pairs are independent of each other, but come from the same MNL mixture component. This is effectively true even if they were generated by first sampling a permutation from the chosen MNL mixture component and then observing the implications of this permutation for the ℓ specific pairs, as long as they are distinct, due to the Independence from Irrelevant Alternatives hypothesis of Luce that is satisfied by the MNL.
We briefly recall a recent result [1] suggesting that it is impossible to learn mixed MNL models in general. One such example is described in Figure 1. It depicts an example with n = 4, r = 2, and a uniform mixture distribution. For the first case, in mixture component 1, with probability 1 the ordering is a > b > c > d (we denote the n = 4 objects by a, b, c and d); and in mixture component 2, with probability 1 the ordering is b > a > d > c. Similarly, for the second case, the two mixture components are the permutations b > a > c > d and a > b > d > c. It is easy to see that the distribution over any 3-wise comparisons generated from these two mixture models is identical. Therefore, it is impossible to differentiate the two using 3-wise or pair-wise comparisons. In general, [1] established that there exist mixture distributions with r ≥ n/2 over n objects that are impossible to distinguish using (log n)-wise comparison data. That is, learning mixed MNL is not always possible.
[Figure 1. Latent: Mixture Model 1 has type 1: a > b > c > d and type 2: b > a > d > c; Mixture Model 2 has type 1: b > a > c > d and type 2: a > b > d > c (each type with probability 0.5). Observed: under both models, P(a > b > c) = P(b > a > c) = P(a > b > d) = P(b > a > d) = P(a > c > d) = P(a > d > c) = P(b > c > d) = P(b > d > c) = 0.5.]
Figure 1: Two mixture models that cannot be differentiated even with 3-wise preference data.
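As a sanity check, the following small script (ours, not from the paper) enumerates the 3-wise marginals of the two mixtures in Figure 1 and confirms that they coincide.

from itertools import combinations

def triple_marginals(mixture):
    """mixture: list of (prob, ranking) pairs, rankings best-first.
    Returns {(triple, induced order): probability}."""
    marg = {}
    for prob, ranking in mixture:
        pos = {x: i for i, x in enumerate(ranking)}
        for triple in combinations(sorted(ranking), 3):
            order = tuple(sorted(triple, key=lambda x: pos[x]))
            marg[(triple, order)] = marg.get((triple, order), 0.0) + prob
    return marg

m1 = [(0.5, ['a', 'b', 'c', 'd']), (0.5, ['b', 'a', 'd', 'c'])]
m2 = [(0.5, ['b', 'a', 'c', 'd']), (0.5, ['a', 'b', 'd', 'c'])]
print(triple_marginals(m1) == triple_marginals(m2))  # True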
Contributions. The main contribution of this work is the identification of sufficient conditions under which the mixed MNL model can be learnt efficiently, both statistically and computationally. Concretely, we propose a two-phase learning algorithm: in the first phase, using a tensor decomposition method for learning mixtures of discrete product distributions, we identify the pair-wise marginals associated with each mixture component; in the second phase, we use these pair-wise marginals to learn the parameters associated with each MNL mixture component.
The algorithm in the first phase builds upon the recent work of Jain and Oh [12]. In particular, Theorem 3 generalizes their work to the setting where, for each sample, we have only limited information: [12] would require that each individual provides the entire permutation, whereas we extend the result to cope with the current setting where we only have information about ℓ, potentially finite, pair-wise comparisons. The algorithm in the second phase utilizes RankCentrality [19]. Its analysis in Theorem 4 works in a setting where observations are no longer independent, as required in [19].
We find that as long as certain rank and incoherence conditions are satisfied by the parameters of each mixture component, the two-phase algorithm described above is able to learn the mixture distribution q and the parameters associated with each mixture, w^{(1)}, ..., w^{(r)}, faithfully using samples that scale polynomially in n and r: concretely, the number of samples required scales as r^{3.5} n^3 (log n)^4 with constants dependent on the incoherence between mixture components, as long as r = o(n^{2/7}) and G, the graph of potential comparisons, is a spectral expander with the total number of edges scaling as N = O(n log n). For the precise statement, we refer to Theorem 1.
The proposed algorithms are iterative, and primarily based on spectral properties of underlying tensors/matrices with provable, fast convergence guarantees. That is, the algorithms are not only polynomial time; they are practical enough to be scalable to high-dimensional data sets.
Notations. We use [N] = {1, ..., N} for the first N positive integers. We use ⊗ to denote the outer product, so that (x ⊗ y ⊗ z)_{ijk} = x_i y_j z_k. Given a third-order tensor T ∈ R^{n_1 × n_2 × n_3} and matrices U ∈ R^{n_1 × r_1}, V ∈ R^{n_2 × r_2}, W ∈ R^{n_3 × r_3}, we define a linear mapping T[U, V, W] ∈ R^{r_1 × r_2 × r_3} as T[U, V, W]_{abc} = ∑_{i,j,k} T_{ijk} U_{ia} V_{jb} W_{kc}. We let ‖x‖ = (∑_i x_i^2)^{1/2} be the Euclidean norm of a vector, ‖M‖_2 = max_{‖x‖≤1, ‖y‖≤1} x^T M y the operator norm of a matrix, and ‖M‖_F = (∑_{i,j} M_{ij}^2)^{1/2} the Frobenius norm. We say an event happens with high probability (w.h.p.) if its probability is lower bounded by 1 − f(n) with f(n) = o(1) as n scales to ∞.
2 Main result
In this section, we describe the main result: sufficient conditions under which mixed MNL models can be learnt using tractable algorithms. We provide a useful illustration of the result as well as discuss its implications.
Definitions. Let S denote the collection of observations, each of which is an N-dimensional, {−1, 0, +1}-valued vector. Recall that each observation is obtained by first selecting one of the r MNL mixture components, and then viewing the outcomes, as per the chosen MNL mixture component, of ℓ randomly chosen pair-wise comparisons from the N pre-determined comparisons {(i_k, j_k) : 1 ≤ i_k ≠ j_k ≤ n, 1 ≤ k ≤ N}. Let x_t ∈ {−1, 0, +1}^N denote the t-th observation, with x_{t,k} = 0 if the k-th pair (i_k, j_k) is not among the ℓ randomly chosen pairs, and x_{t,k} = +1 (respectively −1) if i_k < j_k (respectively i_k > j_k) as per the chosen MNL mixture component. By definition, it
is easy to see that for any t ∈ S and 1 ≤ k ≤ N,

E[x_{t,k}] = (ℓ/N) ∑_{a=1}^{r} q_a P_{ka}, where P_{ka} = (w^{(a)}_{j_k} − w^{(a)}_{i_k}) / (w^{(a)}_{j_k} + w^{(a)}_{i_k}).   (1)

We shall denote P_a = [P_{ka}] ∈ [−1, 1]^N for 1 ≤ a ≤ r. Therefore, in vector form,

E[x_t] = (ℓ/N) P q, where P = [P_1 ... P_r] ∈ [−1, 1]^{N×r}.   (2)
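The following sketch (ours; all names hypothetical) instantiates this observation model: it precomputes the matrix P of eq. (1) and draws one observation x_t by revealing ℓ of the N pairs. By construction, Pr[x_{t,k} = +1 | component a, pair k revealed] = (1 + P_{ka})/2, so the expectation matches eq. (2).

import numpy as np

def pairwise_matrix(W, pairs):
    """P[k, a] = (w^{(a)}_{j_k} - w^{(a)}_{i_k}) / (w^{(a)}_{j_k} + w^{(a)}_{i_k}), as in eq. (1)."""
    P = np.zeros((len(pairs), W.shape[0]))
    for k, (i, j) in enumerate(pairs):
        P[k] = (W[:, j] - W[:, i]) / (W[:, j] + W[:, i])
    return P

def sample_observation(q, P, ell, rng):
    """One x_t in {-1, 0, +1}^N: ell uniformly chosen pairs, one latent component."""
    a = rng.choice(len(q), p=q)                    # latent mixture component
    x = np.zeros(P.shape[0])
    ks = rng.choice(P.shape[0], size=ell, replace=False)
    wins = rng.random(ell) < (1 + P[ks, a]) / 2    # +1 with probability (1 + P_{ka})/2
    x[ks] = np.where(wins, 1.0, -1.0)
    return x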
That is, P is a matrix with r columns, each representing one of the r mixture components, and q is the mixture probability vector. By independence, for any t ∈ S and any two different pairs 1 ≤ k ≠ m ≤ N,

E[x_{t,k} x_{t,m}] = (ℓ^2/N^2) ∑_{a=1}^{r} q_a P_{ka} P_{ma}.   (3)
Therefore, the N × N matrix E[x_t x_t^T], or equivalently the tensor E[x_t ⊗ x_t], is proportional to M_2 except in its diagonal entries, where

M_2 = P Q P^T = ∑_{a=1}^{r} q_a (P_a ⊗ P_a),   (4)

with Q = diag(q) the diagonal matrix whose entries are the mixture probabilities q. In a similar manner, the tensor E[x_t ⊗ x_t ⊗ x_t] is proportional to M_3 (except in O(N^2) entries), where

M_3 = ∑_{a=1}^{r} q_a (P_a ⊗ P_a ⊗ P_a).   (5)
Indeed, the empirical estimates M̂_2 and M̂_3, defined as

M̂_2 = (1/|S|) ∑_{t∈S} x_t ⊗ x_t, and M̂_3 = (1/|S|) ∑_{t∈S} x_t ⊗ x_t ⊗ x_t,   (6)

provide a good proxy for M_2 and M_3 for a large enough number of samples, and shall be utilized crucially for learning the model parameters from observations.
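A sketch (ours) of the empirical moments in eq. (6), with X stacking the observations as an |S| × N array; the off-diagonal projections P_{Ω_2}, P_{Ω_3} used later by Algorithms 3 and 4 are included. The dense einsum is for illustration only (it is O(N^3) memory).

import numpy as np

def empirical_moments(X):
    S = X.shape[0]
    M2 = X.T @ X / S                               # (1/|S|) sum_t x_t x_t^T
    M3 = np.einsum('ti,tj,tk->ijk', X, X, X) / S   # (1/|S|) sum_t x_t (outer) x_t (outer) x_t
    return M2, M3

def project_off_diagonal(M2, M3):
    M2 = M2 - np.diag(np.diag(M2))                 # P_{Omega_2}: zero the diagonal
    i, j, k = np.indices(M3.shape)
    M3 = np.where((i != j) & (j != k) & (k != i), M3, 0.0)   # P_{Omega_3}
    return M2, M3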
Sufficient conditions for learning. With the above discussion, we state sufficient conditions for learning the mixed MNL in terms of properties of M_2:

C1. M_2 has rank r; let σ_1(M_2), σ_r(M_2) > 0 be the largest and smallest singular values of M_2.

C2. For a large enough universal constant C′ > 0,

N ≥ C′ r^{3.5} μ^6(M_2) (σ_1(M_2)/σ_r(M_2))^{4.5}.   (7)

In the above, μ(M_2) represents the incoherence of the symmetric matrix M_2. We recall that for a symmetric matrix M ∈ R^{N×N} of rank r with singular value decomposition M = U S U^T, the incoherence is defined as

μ(M) = √(N/r) max_{i∈[N]} ‖U_i‖.   (8)

C3. The undirected graph G = ([n], E) with E = {(i_k, j_k) : 1 ≤ k ≤ N} is connected. Let A ∈ {0, 1}^{n×n} be the adjacency matrix with A_{ij} = 1 if (i, j) ∈ E and 0 otherwise; let D = diag(d_i) with d_i the degree of vertex i ∈ [n], and let L_G = D^{−1} A be the normalized Laplacian of G. Let d_max = max_i d_i and d_min = min_i d_i. Let the n eigenvalues of the stochastic matrix L_G be 1 = λ_1(L_G) ≥ ... ≥ λ_n(L_G) ≥ −1. Define the spectral gap of G:

ξ(G) = 1 − max{λ_2(L_G), −λ_n(L_G)}.   (9)
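The quantities in (C2) and (C3) are straightforward to compute numerically; the following sketch (ours) does so for a symmetric matrix M and an adjacency matrix A.

import numpy as np

def incoherence(M, r):
    """mu(M) = sqrt(N/r) * max_i ||U_i||, eq. (8), with U the top-r singular subspace."""
    U = np.linalg.svd(M)[0][:, :r]
    return np.sqrt(M.shape[0] / r) * np.linalg.norm(U, axis=1).max()

def spectral_gap(A):
    """xi(G) = 1 - max{lambda_2(L), -lambda_n(L)} for L = D^{-1} A, eq. (9)."""
    L = A / A.sum(axis=1, keepdims=True)
    lam = np.sort(np.linalg.eigvals(L).real)[::-1]  # real: L is similar to a symmetric matrix
    return 1.0 - max(lam[1], -lam[-1])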
Note that we choose a graph G = ([n], E) to collect pairwise data on, and we want to use a graph that is connected, has a large spectral gap, and has a small number of edges. In condition (C3), we need connectivity since we cannot estimate the relative strength between disconnected components (e.g. see [13]). Further, it is easy to generate a graph with spectral gap ξ(G) bounded below by a universal constant (e.g. 1/100) and the number of edges N = O(n log n), for example using the configuration model for Erdős-Rényi graphs. In condition (C2), we require the matrix M_2 to be sufficiently incoherent with bounded σ_1(M_2)/σ_r(M_2). For example, if q_max/q_min = O(1) and the profiles of the types in the mixture distribution are sufficiently different, i.e. ⟨P_a, P_b⟩/(‖P_a‖ ‖P_b‖) < 1/(2r), then we have μ(M_2) = O(1) and σ_1(M_2)/σ_r(M_2) = O(1). We define b = max_{a∈[r]} max_{i,j∈[n]} w^{(a)}_i / w^{(a)}_j, q_max = max_a q_a, and q_min = min_a q_a. The following theorem provides a bound on the error; we refer to the appendix for a proof.
Theorem 1. Consider a mixed MNL model satisfying conditions (C1)-(C3). Then for any δ ∈ (0, 1), there exist positive numerical constants C, C′ such that for any positive ε satisfying

0 < ε < ( q_min ξ^2(G) d_min^2 / (16 q_max r^5 σ_1(M_2) b^2 d_max) )^{1/2},   (10)

Algorithm 1 produces estimates q̂ = [q̂_a] and ŵ = [ŵ^{(a)}] so that, with probability at least 1 − δ, |q̂_a − q_a| ≤ ε, and

‖ŵ^{(a)} − w^{(a)}‖ / ‖w^{(a)}‖ ≤ C ( r q_max σ_1(M_2) b^5 d_max^2 / (q_min ξ^2(G) d_min^2) )^{1/2} ε,   (11)

for all a ∈ [r], as long as

|S| ≥ C′ ( r N^4 log(N/δ) / (q_min σ_1(M_2)^2 ε^2 ℓ^2) ) ( 1 + σ_1(M_2)/(ℓN) + r^4 σ_1(M_2)^4 / σ_r(M_2)^5 ).   (12)
An illustration of Theorem 1. To understand the applicability of Theorem 1, consider a concrete example with r = 2; let the corresponding weights w^{(1)} and w^{(2)} be generated by choosing each weight uniformly from [1, 2]. In particular, the rank order for each component is a uniformly random permutation. Let the mixture distribution be uniform as well, i.e. q = [0.5, 0.5]. Finally, let the graph G = ([n], E) be chosen as per the Erdős-Rényi model with each edge chosen to be part of the graph with probability d̄/n, where d̄ ≥ log n. For this example, it can be checked that Theorem 1 guarantees that for ε ≤ C/√(n d̄), |S| ≥ C′ n^2 d̄^2 log(n d̄/δ)/(ℓ ε^2), and n d̄ ≥ C′, we have, for all a ∈ {1, 2}, |q̂_a − q_a| ≤ ε and ‖ŵ^{(a)} − w^{(a)}‖/‖w^{(a)}‖ ≤ C″ √(n d̄) ε. That is, for ℓ = Θ(1) and choosing ε = ε′/√(n d̄), we need a sample size of |S| = O(n^3 d̄^3 log n) to guarantee error in both q̂ and ŵ smaller than ε′. Instead, if we choose ℓ = Θ(n d̄), we only need |S| = O((n d̄)^2 log n). Limited samples per observation lead to a penalty factor of (n d̄/ℓ) in the sample complexity. To provide bounds on the problem parameters for this example, we use standard concentration arguments. It is well known for Erdős-Rényi random graphs (see [6]) that, with high probability, the number of edges concentrates in [(1/2) d̄ n, (3/2) d̄ n], implying N = Θ(d̄ n), and the degrees also concentrate in [(1/2) d̄, (3/2) d̄], implying d_max = Θ(d̄) and d_min = Θ(d̄). Also, using standard concentration arguments for the spectra of random matrices, it follows that the spectral gap of G is bounded as ξ ≥ 1 − C/√d̄ = Θ(1) w.h.p. Since we assume the weights to be in [1, 2], the dynamic range is bounded by b ≤ 2. The following Proposition shows that σ_1(M_2) = Θ(N) = Θ(d̄ n), σ_2(M_2) = Θ(d̄ n), and μ(M_2) = O(1).

Proposition 2.1. For the above example, when d̄ ≥ log n, σ_1(M_2) ≥ 0.02 N, σ_2(M_2) ≥ 0.017 N, and μ(M_2) ≤ 15 with high probability.
Suppose now that for general r we are interested in the well-behaved scenario where q_max = Θ(1/r) and q_min = Θ(1/r). To achieve an arbitrarily small error rate for ‖ŵ^{(a)} − w^{(a)}‖/‖w^{(a)}‖, we need ε = O(1/√(rN)), which is achieved by a sample size of |S| = O(r^{3.5} n^3 (log n)^4) with d̄ = log n.
3 Algorithm
We describe the algorithm achieving the bound in Theorem 1. Our approach is two-phased. First, learn the moments of the mixture using a tensor decomposition, cf. Algorithm 2: for each type a ∈ [r], produce an estimate q̂_a ∈ R of the mixture weight q_a and an estimate P̂_a = [P̂_{1a} ... P̂_{Na}]^T ∈ R^N of the expected outcomes P_a = [P_{1a} ... P_{Na}]^T defined as in (1). Second, for each a, using the estimate P̂_a, apply RankCentrality, cf. Section 3.2, to estimate the MNL weights ŵ^{(a)}.
Algorithm 1
1: Input: Samples {x_t}_{t∈S}, number of types r, numbers of iterations T_1, T_2, graph G([n], E)
2: {(q̂_a, P̂_a)}_{a∈[r]} ← SpectralDist({x_t}_{t∈S}, r, T_1)   (see Algorithm 2)
3: for a = 1, ..., r do
4:   set P̂_a ← P_{[−1,1]}(P̂_a), where P_{[−1,1]}(·) is the projection onto [−1, 1]^N
5:   ŵ^{(a)} ← RankCentrality(G, P̂_a, T_2)   (see Section 3.2)
6: end for
7: Output: {(q̂_a, ŵ^{(a)})}_{a∈[r]}

To achieve Theorem 1, T_1 = Ω(log(N|S|)) and T_2 = Ω(b^2 d_max (log n + log(1/ε))/(ξ d_min)) is sufficient. Next, we describe the two phases of the algorithm and the associated technical results.
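A sketch (ours) of how the two phases compose, with the phase subroutines passed in as callables; spectral_dist and rank_centrality stand in for Algorithm 2 and the Section 3.2 procedure (sketches of both appear below).

import numpy as np

def learn_mixed_mnl(X, r, T1, T2, G, spectral_dist, rank_centrality):
    estimates = spectral_dist(X, r, T1)        # phase 1: [(q_hat_a, P_hat_a)] for a in [r]
    result = []
    for q_hat, P_hat in estimates:
        P_hat = np.clip(P_hat, -1.0, 1.0)      # step 4: projection onto [-1, 1]^N
        w_hat = rank_centrality(G, P_hat, T2)  # phase 2
        result.append((q_hat, w_hat))
    return result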
3.1 Phase 1: Spectral decomposition.

To estimate P and q from the samples, we shall use tensor decompositions of M̂_2 and M̂_3, the empirical estimates of M_2 and M_3 respectively; recall (4)-(6). Let M_2 = U_{M_2} Σ_{M_2} U_{M_2}^T be the eigenvalue decomposition and let

H = M_3[U_{M_2} Σ_{M_2}^{−1/2}, U_{M_2} Σ_{M_2}^{−1/2}, U_{M_2} Σ_{M_2}^{−1/2}].

The next theorem shows that M_2 and M_3 are sufficient to learn P and q exactly when M_2 has rank r (throughout, we assume that r ≤ n ≤ N).

Theorem 2 (Theorem 3.1 [12]). Let M_2 ∈ R^{N×N} have rank r. Then there exist an orthogonal matrix V^H = [v^H_1 v^H_2 ... v^H_r] ∈ R^{r×r} and eigenvalues λ^H_a, 1 ≤ a ≤ r, such that the orthogonal tensor decomposition of H is

H = ∑_{a=1}^{r} λ^H_a (v^H_a ⊗ v^H_a ⊗ v^H_a).

Let Λ^H = diag(λ^H_1, ..., λ^H_r). Then the parameters of the mixture distribution are

P = U_{M_2} Σ_{M_2}^{1/2} V^H Λ^H and Q = (Λ^H)^{−2}.
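The following sketch (ours) exercises Theorem 2 on exact moments: it whitens M_3 with the eigendecomposition of M_2 and extracts the orthogonal decomposition of H with a basic power-iteration-plus-deflation loop, a simplified stand-in for the robust tensor power method (RTPM) of [2].

import numpy as np

def whiten(M2, M3, r):
    lam, U = np.linalg.eigh(M2)
    lam, U = lam[-r:], U[:, -r:]                        # top-r eigenpairs of M2
    W = U / np.sqrt(lam)                                # U_{M2} Sigma_{M2}^{-1/2}
    return np.einsum('ijk,ia,jb,kc->abc', M3, W, W, W), U, lam

def tensor_power(H, iters=200, seed=0):
    rng, r = np.random.default_rng(seed), H.shape[0]
    lams, V = [], []
    for _ in range(r):
        v = rng.standard_normal(r)
        v /= np.linalg.norm(v)
        for _ in range(iters):                          # fixed point of v -> H[I, v, v]
            v = np.einsum('ijk,j,k->i', H, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', H, v, v, v)
        lams.append(lam); V.append(v)
        H = H - lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate the found component
    return np.array(lams), np.column_stack(V)

# Per Theorem 2: P = U @ np.diag(np.sqrt(lam)) @ V @ np.diag(lams), Q = np.diag(lams ** -2.0).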
The main challenge in estimating M_2 (resp. M_3) from empirical data is the diagonal entries. In [12], an alternating minimization approach is used for matrix completion to find the missing diagonal entries of M_2, and a least squares method is used for estimating the tensor H directly from the samples. Let Ω_2 denote the set of off-diagonal indices of an N × N matrix and Ω_3 denote the off-diagonal indices of an N × N × N tensor, such that the corresponding projections are defined as

P_{Ω_2}(M)_{ij} = M_{ij} if i ≠ j, and 0 otherwise; P_{Ω_3}(T)_{ijk} = T_{ijk} if i ≠ j, j ≠ k, k ≠ i, and 0 otherwise,

for M ∈ R^{N×N} and T ∈ R^{N×N×N}.
In view of the above discussion, we shall use P_{Ω_2}(M̂_2) and P_{Ω_3}(M̂_3) to obtain estimates of the diagonal entries of M_2 and M_3 respectively. To keep the technical arguments simple, we shall base M̂_2 on the first |S|/2 samples, denoted M̂_2(1, |S|/2), and M̂_3 on the second |S|/2 samples, denoted M̂_3(|S|/2 + 1, |S|), in Algorithm 2.
Next, we state the correctness of Algorithm 2 when μ(M_2) is small; the proof is in the Appendix.

Theorem 3. There exist universal, strictly positive constants C, C′ > 0 such that for all ε ∈ (0, C) and δ ∈ (0, 1), if

|S| ≥ C′ ( r N^4 log(N/δ) / (q_min σ_1(M_2)^2 ε^2 ℓ^2) ) ( 1 + σ_1(M_2)/(ℓN) + r^4 σ_1(M_2)^4 / σ_r(M_2)^5 ), and

N ≥ C′ r^{3.5} μ^6 (σ_1(M_2)/σ_r(M_2))^{4.5},

then there exists a permutation π over [r] such that Algorithm 2 achieves the following bounds, with a choice of T = C′ log(N|S|), for all i ∈ [r], with probability at least 1 − δ:

|q̂_{π(i)} − q_i| ≤ ε, and ‖P̂_{π(i)} − P_i‖ ≤ ε √( r q_max σ_1(M_2) / q_min ),

where μ = μ(M_2) is defined in (8), with run-time poly(N, r, 1/q_min, 1/ε, log(1/δ), σ_1(M_2)/σ_r(M_2)).

Algorithm 2 SpectralDist: Moment method for Mixture of Discrete Distributions [12]
1: Input: Samples {x_t}_{t∈S}, number of types r, number of iterations T
2: M̂_2 ← MatrixAltMin(M̂_2(1, |S|/2), r, T)   (see Algorithm 3)
3: Compute the eigenvalue decomposition of M̂_2 = Û_{M_2} Σ̂_{M_2} Û_{M_2}^T
4: Ĥ ← TensorLS(M̂_3(|S|/2 + 1, |S|), Û_{M_2}, Σ̂_{M_2})   (see Algorithm 4)
5: Compute the rank-r decomposition ∑_{a∈[r]} λ̂^H_a (v̂^H_a ⊗ v̂^H_a ⊗ v̂^H_a) of Ĥ, using the RTPM of [2]
6: Output: P̂ = Û_{M_2} Σ̂_{M_2}^{1/2} V̂^H Λ̂^H, Q̂ = (Λ̂^H)^{−2}, where V̂^H = [v̂^H_1 ... v̂^H_r] and Λ̂^H = diag(λ̂^H_1, ..., λ̂^H_r)
Algorithm 3 MatrixAltMin: Alternating Minimization for Matrix Completion [12]
1: Input: M̂_2(1, |S|/2), r, T
2: Initialize the N × r dimensional matrix U_0 ← top-r eigenvectors of P_{Ω_2}(M̂_2(1, |S|/2))
3: for all τ = 1 to T − 1 do
4:   Û_{τ+1} = arg min_U ‖P_{Ω_2}(M̂_2(1, |S|/2)) − P_{Ω_2}(U U_τ^T)‖_F^2
5:   [U_{τ+1}, R_{τ+1}] = QR(Û_{τ+1})   (standard QR decomposition)
6: end for
7: Output: M̂_2 = (Û_T)(U_{T−1})^T
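A sketch (ours) of one alternating-minimization step of Algorithm 3: with U_τ fixed, the objective in step 4 decouples over the rows of U into small least-squares problems restricted to the off-diagonal entries, followed by the QR re-orthonormalization of step 5.

import numpy as np

def alt_min_step(M_hat, U_tau):
    """One pass of steps 4-5: argmin_U ||P_{Omega_2}(M_hat) - P_{Omega_2}(U U_tau^T)||_F^2."""
    N, r = U_tau.shape
    U_next = np.zeros((N, r))
    for i in range(N):
        mask = np.arange(N) != i              # only off-diagonal entries constrain row i
        U_next[i] = np.linalg.lstsq(U_tau[mask], M_hat[i, mask], rcond=None)[0]
    return np.linalg.qr(U_next)[0]            # step 5: re-orthonormalize via QR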
Algorithm 4 TensorLS: Least Squares method for Tensor Estimation [12]
1: Input: M̂_3(|S|/2 + 1, |S|), Û_{M_2}, Σ̂_{M_2}
2: Define the operator Ω̂ : R^{r×r×r} → R^{N×N×N} as follows:

Ω̂_{ijk}(Z) = ∑_{abc} Z_{abc} (Û_{M_2} Σ̂_{M_2}^{1/2})_{ia} (Û_{M_2} Σ̂_{M_2}^{1/2})_{jb} (Û_{M_2} Σ̂_{M_2}^{1/2})_{kc} if i ≠ j ≠ k ≠ i, and 0 otherwise.   (13)

3: Define Â : R^{r×r×r} → R^{r×r×r} s.t. Â(Z) = Ω̂(Z)[Û_{M_2} Σ̂_{M_2}^{−1/2}, Û_{M_2} Σ̂_{M_2}^{−1/2}, Û_{M_2} Σ̂_{M_2}^{−1/2}]
4: Output: arg min_Z ‖Â(Z) − P_{Ω_3}(M̂_3(|S|/2 + 1, |S|))[Û_{M_2} Σ̂_{M_2}^{−1/2}, Û_{M_2} Σ̂_{M_2}^{−1/2}, Û_{M_2} Σ̂_{M_2}^{−1/2}]‖_F^2

3.2 Phase 2: RankCentrality.
Recall that E = {(i_k, j_k) : i_k ≠ j_k ∈ [n], 1 ≤ k ≤ N} is the collection of N = |E| pairs and G = ([n], E) is the corresponding graph. Let P̂_a denote the estimate of P_a = [P_{ka}] ∈ [−1, 1]^N for mixture component a, 1 ≤ a ≤ r, where P_{ka} is defined as per (1). For each a, using G and P̂_a, we shall use RankCentrality [19] to obtain an estimate of w^{(a)}. Next, we describe the algorithm and the guarantees associated with it.

Without loss of generality, we can assume that w^{(a)} is normalized so that ∑_i w^{(a)}_i = 1 for all a, 1 ≤ a ≤ r. Given this normalization, RankCentrality estimates w^{(a)} as the stationary distribution of an appropriate Markov chain on G. The transition probabilities are 0 for all (i, j) ∉ E. For (i, j) ∈ E, they are a function of P̂_a. Specifically, the transition matrix p̂^{(a)} = [p̂^{(a)}_{i,j}] ∈ [0, 1]^{n×n} has p̂^{(a)}_{i,j} = 0 if (i, j) ∉ E, and for (i_k, j_k) ∈ E, 1 ≤ k ≤ N,

p̂^{(a)}_{i_k, j_k} = (1/d_max) (1 + P̂_{ka})/2 and p̂^{(a)}_{j_k, i_k} = (1/d_max) (1 − P̂_{ka})/2.   (14)

Finally, p̂^{(a)}_{i,i} = 1 − ∑_{j≠i} p̂^{(a)}_{i,j} for all i ∈ [n]. Let π̂^{(a)} = [π̂^{(a)}_i] be a stationary distribution of the Markov chain defined by p̂^{(a)}. That is,

π̂^{(a)}_i = ∑_j p̂^{(a)}_{ji} π̂^{(a)}_j for all i ∈ [n].   (15)

Computationally, we suggest obtaining an estimate of π̂ by using power iteration for T iterations. As argued before, cf. [19], T = Ω(b^2 d_max (log n + log(1/ε))/(ξ d_min)) is sufficient to obtain a reasonably good estimate of π̂.
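A sketch (ours) of this Markov chain construction and the suggested power iteration; pairs[k] = (i_k, j_k) lists the edges of G and P_hat_a is the Phase-1 estimate for component a.

import numpy as np

def rank_centrality(n, pairs, P_hat_a, T):
    degrees = np.bincount(np.asarray(pairs).ravel(), minlength=n)
    d_max = degrees.max()
    p = np.zeros((n, n))
    for k, (i, j) in enumerate(pairs):            # eq. (14)
        p[i, j] = (1 + P_hat_a[k]) / (2 * d_max)
        p[j, i] = (1 - P_hat_a[k]) / (2 * d_max)
    p[np.arange(n), np.arange(n)] = 1 - p.sum(axis=1)   # lazy self-loops
    pi = np.full(n, 1.0 / n)
    for _ in range(T):                            # power iteration toward eq. (15)
        pi = pi @ p                               # pi_i <- sum_j pi_j p_{ji}
    return pi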
The underlying assumption here is that there is a unique stationary distribution, which is established by our result under the conditions of Theorem 1. Now p̂ is an approximation of the ideal transition probabilities, where p^{(a)} = [p^{(a)}_{i,j}] with p^{(a)}_{i,j} = 0 if (i, j) ∉ E and p^{(a)}_{i,j} ∝ w^{(a)}_j/(w^{(a)}_i + w^{(a)}_j) for all (i, j) ∈ E. Such an ideal Markov chain is reversible, and as long as G is connected (which it is, in our case, by choice), the stationary distribution of this ideal chain is π^{(a)} = w^{(a)} (recall that we have assumed w^{(a)} to be normalized so that its components sum up to 1).

Now p̂^{(a)} is an approximation of this ideal transition matrix p^{(a)}. In what follows, we state a result about how this approximation error translates into the error between π̂^{(a)} and w^{(a)}. Recall that b ≥ max_{i,j∈[n]} w_i/w_j, d_max and d_min are the maximum and minimum vertex degrees of G, and ξ is as defined in (9).

Theorem 4. Let G = ([n], E) be non-bipartite and connected. Let ‖p̂^{(a)} − p^{(a)}‖_2 ≤ Δ for some positive Δ ≤ (1/4) ξ b^{−5/2} (d_min/d_max). Then, for some positive universal constant C,

‖π̂^{(a)} − w^{(a)}‖ / ‖w^{(a)}‖ ≤ (C b^{5/2} d_max / (ξ d_min)) Δ.   (16)

And, starting from any initial condition, the power iteration produces an estimate of π̂^{(a)} within twice the above stated error bound in T = Ω(b^2 d_max (log n + log(1/Δ))/(ξ d_min)) iterations.

The proof of the above result can be found in the Appendix. For a spectral expander (e.g. a connected Erdős-Rényi graph, with high probability), ξ = Θ(1), and therefore the bound is effectively O(Δ) for a bounded dynamic range, i.e. b = O(1).
4 Discussion
Learning a distribution over permutations of n objects from partial observations is fundamental to many domains. In this work, we have advanced the understanding of this question by characterizing sufficient conditions, and an associated algorithm, under which it is feasible to learn the mixed MNL model in a computationally and statistically efficient (polynomial in the problem size) manner from partial/pairwise comparisons. The conditions are natural: the mixture components should be 'identifiable' given partial preference/comparison data, stated in terms of full-rank and incoherence conditions on the second moment matrix. The algorithm allows learning of the mixture components as long as the number of mixture components scales as o(n^{2/7}) for distributions over permutations of n objects.
To the best of our knowledge, this work provides the first such sufficient condition for learning the mixed MNL model, a problem that has remained open in econometrics and statistics for a while, and more recently in machine learning. Our work nicely complements the impossibility results of [1].
Analytically, our work advances the recently popularized spectral/tensor approach for learning mixture models from lower-order moments. Concretely, we provide means to learn the components even when only partial information about each sample is available, unlike prior works. To learn the model parameters once we identify the moments associated with each mixture, we advance the result of [19] in its applicability. Spectral methods have also been applied to ranking in the context of assortment optimization in [5].
References
[1] A. Ammar, S. Oh, D. Shah, and L. Voloch. What's your choice? Learning the mixed multi-nomial logit model. In Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, 2014.
[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. CoRR, abs/1210.7559, 2012.
[3] H. Azari Soufiani, W. Chen, D. C. Parkes, and L. Xia. Generalized method-of-moments for rank aggregation. In Advances in Neural Information Processing Systems 26, pages 2706-2714, 2013.
[4] H. Azari Soufiani, D. Parkes, and L. Xia. Computing parametric ranking models via rank-breaking. In Proceedings of the 31st International Conference on Machine Learning, pages 360-368, 2014.
[5] J. Blanchet, G. Gallego, and V. Goyal. A Markov chain approximation to choice modeling. In EC, pages 103-104, 2013.
[6] B. Bollobás. Random Graphs. Cambridge University Press, January 2001.
[7] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.
[8] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[9] C. Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. III. SIAM Journal on Numerical Analysis, 7(1):1-46, 1970.
[10] J. C. Duchi, L. Mackey, and M. I. Jordan. On the consistency of ranking algorithms. In Proceedings of the ICML Conference, Haifa, Israel, June 2010.
[11] V. F. Farias, S. Jagabathula, and D. Shah. A data-driven approach to modeling choice. In NIPS, pages 504-512, 2009.
[12] P. Jain and S. Oh. Learning mixtures of discrete product distributions using spectral decompositions. arXiv preprint arXiv:1311.2972, 2014.
[13] L. R. Ford Jr. Solution of a ranking problem from binary comparisons. The American Mathematical Monthly, 64(8):28-33, 1957.
[14] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. Information Theory, IEEE Transactions on, 56(6):2980-2998, 2010.
[15] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. The Journal of Machine Learning Research, 99:2057-2078, 2010.
[16] D. R. Luce. Individual Choice Behavior. Wiley, New York, 1959.
[17] D. McFadden. Conditional logit analysis of qualitative choice behavior. Frontiers in Econometrics, pages 105-142, 1973.
[18] I. Mitliagkas, A. Gopalan, C. Caramanis, and S. Vishwanath. User rankings from comparisons: Learning permutations in high dimensions. In Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on, pages 1143-1150. IEEE, 2011.
[19] S. Negahban, S. Oh, and D. Shah. Iterative ranking from pair-wise comparisons. In NIPS, pages 2483-2491, 2012.
[20] S. Negahban and M. J. Wainwright. Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 2012.
[21] P. Samuelson. A note on the pure theory of consumers' behaviour. Economica, 5(17):61-71, 1938.
[22] H. A. Soufiani, D. C. Parkes, and L. Xia. Random utility theory for social choice. In NIPS, pages 126-134, 2012.
[23] L. L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273, 1927.
[24] J. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 2011.
[25] E. Zermelo. Die Berechnung der Turnier-Ergebnisse als ein Maximumproblem der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 29(1):436-460, 1929.
4,667 | 5,226 | Near?Optimal Density Estimation in Near?Linear
Time Using Variable?Width Histograms
Siu-On Chan
Microsoft Research
sochan@gmail.com
Ilias Diakonikolas
University of Edinburgh
ilias.d@ed.ac.uk
Rocco A. Servedio
Columbia University
rocco@cs.columbia.edu
Xiaorui Sun
Columbia University
xiaoruisun@cs.columbia.edu
Abstract
Let p be an unknown and arbitrary probability distribution over [0, 1). We consider the problem of density estimation, in which a learning algorithm is given i.i.d. draws from p and must (with high probability) output a hypothesis distribution that is close to p. The main contribution of this paper is a highly efficient density estimation algorithm for learning using a variable-width histogram, i.e., a hypothesis distribution with a piecewise constant probability density function.
In more detail, for any k and ε, we give an algorithm that makes Õ(k/ε^2) draws from p, runs in Õ(k/ε^2) time, and outputs a hypothesis distribution h that is piecewise constant with O(k log^2(1/ε)) pieces. With high probability the hypothesis h satisfies d_TV(p, h) ≤ C · opt_k(p) + ε, where d_TV denotes the total variation distance (statistical distance), C is a universal constant, and opt_k(p) is the smallest total variation distance between p and any k-piecewise constant distribution. The sample size and running time of our algorithm are optimal up to logarithmic factors. The 'approximation factor' C in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of k and ε can achieve C < 2, regardless of what kind of hypothesis distribution it uses.
1 Introduction
Consider the following fundamental statistical task: Given independent draws from an unknown probability distribution, what is the minimum sample size needed to obtain an accurate estimate of the distribution? This is the question of density estimation, a classical problem in statistics with a rich history and an extensive literature (see e.g., [BBBB72, DG85, Sil86, Sco92, DL01]). While this broad question has mostly been studied from an information-theoretic perspective, it is an inherently algorithmic question as well, since the ultimate goal is to describe and understand algorithms that are both computationally and information-theoretically efficient. The need for computationally efficient learning algorithms is only becoming more acute with the recent flood of data across the sciences; the 'gold standard' in this 'big data' context is an algorithm with information-theoretically (near-)optimal sample size and running time (near-)linear in its sample size.
In this paper we consider learning scenarios in which an algorithm is given an input data set which is a sample of i.i.d. draws from an unknown probability distribution. It is natural to expect (and can be easily formalized) that, if the underlying distribution of the data is inherently 'complex', it may be hard to even approximately reconstruct the distribution. But what if the underlying distribution is 'simple' or 'succinct'? Can we then reconstruct the distribution to high accuracy in a computationally and sample-efficient way? In this paper we answer this question in the affirmative for the
is ?simple? or ?succinct? ? can we then reconstruct the distribution to high accuracy in a computationally and sample-efficient way? In this paper we answer this question in the affirmative for the
1
problem of learning ?noisy? histograms, arguably one of the most basic density estimation problems
in the literature.
To motivate our results, we begin by briefly recalling the role of histograms in density estimation.
Histograms constitute ?the oldest and most widely used method for density estimation? [Sil86], first
introduced by Karl Pearson in [Pea95]. Given a sample from a probability density function (pdf)
p, the method partitions the domain into a number of intervals (bins) B1 , . . . , Bk , and outputs the
?empirical? pdf which is constant within each bin. A k-histogram is a piecewise constant distribution
over bins B1 , . . . , Bk , where the probability mass of each interval Bj , j 2 [k], equals the fraction of
observations in the interval. Thus, the goal of the ?histogram method? is to approximate an unknown
pdf p by an appropriate k-histogram. It should be emphasized that the number k of bins to be used
and the ?width? and location of each bin are unspecified; they are parameters of the estimation
problem and are typically selected in an ad hoc manner.
We study the following distribution learning question:
Suppose that there exists a k-histogram that provides an accurate approximation
to the unknown target distribution. Can we efficiently find such an approximation?
In this paper, we provide a fairly complete affirmative answer to this basic question. Given a bound
k on the number of intervals, we give an algorithm that uses a near-optimal sample size, runs in
near-linear time (in its sample size), and approximates the target distribution nearly as accurately as
the best k-histogram.
To formally state our main result, we will need a few definitions. We work in a standard model of learning an unknown probability distribution from samples, essentially that of [KMR+94], which is a natural analogue of Valiant's well-known PAC model for learning Boolean functions [Val84] to the unsupervised setting of learning an unknown probability distribution.¹ A distribution learning problem is defined by a class C of distributions over a domain Ω. The algorithm has access to independent draws from an unknown pdf p, and its goal is to output a hypothesis distribution h that is 'close' to the target distribution p. We measure the closeness between distributions using the statistical distance or total variation distance. In the 'noiseless' setting, we are promised that p ∈ C and the goal is to construct a hypothesis h such that (with high probability) the total variation distance d_TV(h, p) between h and p is at most ε, where ε > 0 is the accuracy parameter.
The more challenging 'noisy' or agnostic model captures the situation of having arbitrary (or even adversarial) noise in the data. In this setting, we do not make any assumptions about the target density p, and the goal is to find a hypothesis h that is almost as accurate as the 'best' approximation of p by any distribution in C. Formally, given sample access to a (potentially arbitrary) target distribution p and ε > 0, the goal of an agnostic learning algorithm for C is to compute a hypothesis distribution h such that d_TV(h, p) ≤ α · opt_C(p) + ε, where opt_C(p) := inf_{q∈C} d_TV(q, p) (i.e., opt_C(p) is the statistical distance between p and the closest distribution to it in C) and α ≥ 1 is a constant (that may depend on the class C). We will call such a learning algorithm an α-agnostic learning algorithm for C; when α > 1 we sometimes refer to this as a semi-agnostic learning algorithm.
A distribution f over a finite interval I ⊆ R is called k-flat if there exists a partition of I into k intervals I_1, ..., I_k such that the pdf f is constant within each such interval. We henceforth (without loss of generality for densities with bounded support) restrict ourselves to the case I = [0, 1). Let C_k be the class of all k-flat distributions over [0, 1). For a (potentially arbitrary) distribution p over [0, 1) we will denote opt_k(p) := inf_{f∈C_k} d_TV(f, p).
In this terminology, our learning problem is exactly the problem of agnostically learning the class of k-flat distributions. Our main positive result is a near-optimal algorithm for this problem, i.e., a semi-agnostic learning algorithm that has near-optimal sample size and near-linear running time. More precisely, we prove the following:
Theorem 1 (Main). There is an algorithm A with the following property: Given k ≥ 1, ε > 0, and sample access to a target distribution p, algorithm A uses Õ(k/ε^2) independent draws from p, runs in time Õ(k/ε^2), and outputs an O(k log^2(1/ε))-flat hypothesis distribution h that satisfies d_TV(h, p) ≤ O(opt_k(p)) + ε with probability at least 9/10.
¹ We remark that our model is essentially equivalent to the 'minimax rate of convergence under the L_1 distance' in statistics [DL01], and our results carry over to this setting as well.
Using standard techniques, the confidence probability can be boosted to 1 − δ, for any δ > 0, with a (necessary) overhead of O(log(1/δ)) in the sample size and the running time.
We emphasize that the difficulty of our result lies in the fact that the 'optimal' piecewise constant decomposition of the domain is both unknown and approximate (in the sense that opt_k(p) > 0), and that our algorithm is both sample-optimal and runs in (near-)linear time. Even in the (significantly easier) case that the target p ∈ C_k (i.e., opt_k(p) = 0) and the optimal partition is explicitly given to the algorithm, it is known that a sample of size Ω(k/ε^2) is information-theoretically necessary. (This lower bound can, e.g., be deduced from the standard fact that learning an unknown discrete distribution over a k-element set to statistical distance ε requires an Ω(k/ε^2) size sample.) Hence, our algorithm has provably optimal sample complexity (up to a logarithmic factor), runs in essentially sample-linear time, and is α-agnostic for a universal constant α > 1.
It should be noted that the sample size required for our problem is well understood; it follows from the VC theorem (Theorem 3) that O(k/ε^2) draws from p are information-theoretically sufficient. However, the theorem is non-constructive, and the 'obvious' algorithm following from it has running time exponential in k and 1/ε. In recent work, Chan et al. [CDSS14] presented an approach employing an intricate combination of dynamic programming and linear programming which yields a poly(k/ε) time algorithm for the above problem. However, the running time of the [CDSS14] algorithm is Ω(k^3) even for constant values of ε, making it impractical for applications. As discussed below, our algorithmic approach is significantly different from that of [CDSS14], using neither dynamic nor linear programming.
Applications. Nonparametric density estimation for shape-restricted classes has been a subject of study in statistics since the 1950s (see [BBBB72] for an early book on the topic and [Gre56, Bru58, Rao69, Weg70, HP76, Gro85, Bir87] for some of the early literature), and has applications to a range of areas including reliability theory (see [Reb05] and references therein). By using the structural approximation results of Chan et al. [CDSS13], as an immediate corollary of Theorem 1 we obtain sample-optimal and near-linear time estimators for various well-studied classes of shape-restricted densities, including monotone, unimodal, and multimodal densities (with unknown mode locations), monotone hazard rate (MHR) distributions, and others (because of space constraints we do not enumerate the exact descriptions of these classes or statements of these results here, but instead refer the interested reader to [CDSS13]). Birgé [Bir87] obtained a sample-optimal and linear time estimator for monotone densities, but prior to our work, no linear time and sample-optimal estimator was known for any of the other classes.
Our algorithm from Theorem 1 is α-agnostic for a constant α > 1. It is natural to ask whether a significantly stronger accuracy guarantee is efficiently achievable; in particular, is there an agnostic algorithm with similar running time and sample complexity and α = 1? Perhaps surprisingly, we provide a negative answer to this question. Even in the simplest nontrivial case that k = 2, and the target distribution is defined over a discrete domain [N] = {1, ..., N}, any α-agnostic algorithm with α < 2 requires a large sample size:

Theorem 2 (Lower bound, informal statement). Any 1.99-agnostic learning algorithm for 2-flat distributions over [N] requires a sample of size Ω(√N).
See Theorem 7 in Section 4 for a precise statement. Note that there is an exact correspondence between distributions over the discrete domain [N] and pdfs over [0, 1) which are piecewise constant on each interval of the form [k/N, (k+1)/N) for k ∈ {0, 1, ..., N−1}. Thus, Theorem 2 implies that no finite-sample algorithm can 1.99-agnostically learn even 2-flat distributions over [0, 1). (See Corollary 4.1 in Section 4 for a detailed statement.)
Related work. A number of techniques for density estimation have been developed in the mathematical statistics literature, including kernels and variants thereof, nearest neighbor estimators, orthogonal series estimators, maximum likelihood estimators (MLE), and others (see Chapter 2 of [Sil86] for a survey of existing methods). The main focus of these methods has been on the statistical rate of convergence, as opposed to the running time of the corresponding estimators. We remark that the MLE does not exist for very simple classes of distributions (e.g., unimodal distributions with an unknown mode; see e.g., [Bir97]). We note that the notion of agnostic learning is related to the literature on model selection and oracle inequalities [MP007]; however, this work is of a different flavor and is not technically related to our results.
Histograms have also been studied extensively in various areas of computer science, including databases and streaming [JKM+98, GKS06, CMN98, GGI+02], under various assumptions about the input data and the precise objective. Recently, Indyk et al. [ILR12] studied the problem of learning a k-flat distribution over [N] under the L_2 norm and gave an efficient algorithm with sample complexity O(k^2 log(N)/ε^4). Since the L_1 distance is a stronger metric, Theorem 1 implies an improved sample and time bound of Õ(k/ε^2) for their setting.
2 Preliminaries
Throughout the paper we assume that the underlying distributions have Lebesgue measurable densities. For a pdf p : [0, 1) → R_+ and a Lebesgue measurable subset A ⊆ [0, 1), i.e., A ∈ L([0, 1)), we use p(A) to denote ∫_{z∈A} p(z) dz. The statistical distance or total variation distance between two densities p, q : [0, 1) → R_+ is d_TV(p, q) := sup_{A∈L([0,1))} |p(A) − q(A)|. The statistical distance satisfies the identity d_TV(p, q) = (1/2) ‖p − q‖_1, where ‖p − q‖_1, the L_1 distance between p and q, is ∫_{[0,1)} |p(x) − q(x)| dx; for convenience, in the rest of the paper we work with the L_1 distance. We refer to a nonnegative function p over an interval (which need not necessarily integrate to one over the interval) as a 'sub-distribution.' Given a value γ > 0, we say that a (sub-)distribution p over [0, 1) is γ-well-behaved if sup_{x∈[0,1)} Pr_{z∼p}[z = x] ≤ γ, i.e., no individual real value is assigned more than γ probability under p. Any probability distribution with no atoms is γ-well-behaved for all γ > 0. Our results apply for general distributions over [0, 1), which may have an atomic part as well as a non-atomic part. Given m independent draws s_1, ..., s_m from a distribution p over [0, 1), the empirical distribution p̂_m over [0, 1) is the discrete distribution supported on {s_1, ..., s_m} defined as follows: for all z ∈ [0, 1), Pr_{x∼p̂_m}[x = z] = |{j ∈ [m] | s_j = z}| / m.
The VC inequality. Let p : [0, 1) → R be a Lebesgue measurable function. Given a family of subsets A ⊆ L([0, 1)) over [0, 1), define ‖p‖_A = sup_{A∈A} |p(A)|. The VC dimension of A is the maximum size of a subset X ⊆ [0, 1) that is shattered by A (a set X is shattered by A if for every Y ⊆ X, some A ∈ A satisfies A ∩ X = Y). If there is a shattered subset of size s for all s ∈ Z_+, then we say that the VC dimension of A is ∞. The well-known Vapnik-Chervonenkis (VC) inequality states the following:

Theorem 3 (VC inequality, [DL01, p. 31]). Let p : I → R_+ be a probability density function over I ⊆ R and p̂_m be the empirical distribution obtained after drawing m points from p. Let A ⊆ 2^I be a family of subsets with VC dimension d. Then E[‖p − p̂_m‖_A] ≤ O(√(d/m)).
Partitioning into intervals of approximately equal mass. As a basic primitive, given access to a sample drawn from a γ-well-behaved target distribution p over [0, 1), we will need to partition [0, 1) into Θ(1/γ) intervals, each of which has probability Θ(γ) under p. There is a simple algorithm, based on order statistics, which does this and has the following performance guarantee (see Appendix A.2 of [CDSS14]):

Lemma 2.1. Given γ ∈ (0, 1) and access to points drawn from a γ/64-well-behaved distribution p over [0, 1), the procedure Approximately-Equal-Partition draws O((1/γ) log(1/γ)) points from p, runs in time Õ(1/γ), and with probability at least 99/100 outputs a partition of [0, 1) into ℓ = Θ(1/γ) intervals such that p(I_j) ∈ [γ/2, 3γ] for all 1 ≤ j ≤ ℓ.
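A minimal sketch (ours) of the order-statistics idea behind Approximately-Equal-Partition: sort a sample and cut [0, 1) at every (γ · sample size)-th order statistic, so each resulting interval carries empirical mass about γ.

import numpy as np

def approx_equal_partition(sample, gamma):
    s = np.sort(sample)
    step = max(1, int(gamma * len(s)))
    cuts = [0.0] + [float(s[i]) for i in range(step, len(s), step)] + [1.0]
    return list(zip(cuts[:-1], cuts[1:]))   # half-open intervals [c_t, c_{t+1})

rng = np.random.default_rng(1)
intervals = approx_equal_partition(rng.beta(2.0, 5.0, size=4000), 0.05)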
3 The algorithm and its analysis
In this section we prove our main algorithmic result, Theorem 1. Our approach has the following high-level structure: In Section 3.1 we give an algorithm for agnostically learning a target distribution p that is 'nice' in two senses: (i) p is well-behaved (i.e., it does not have any heavy atomic elements), and (ii) opt_k(p) is bounded from above by the error parameter ε. In Section 3.2 we give a general efficient reduction showing how the second assumption can be removed, and in Section 3.3 we briefly explain how the first assumption can be removed, thus yielding Theorem 1.
3.1 The main algorithm
In this section we give our main algorithmic result, which handles well-behaved distributions p for which opt_k(p) is not too large:

Theorem 4. There is an algorithm Learn-WB-small-opt-k-histogram that, given as input Õ(k/ε^2) i.i.d. draws from a target distribution p and a parameter ε > 0, runs in time Õ(k/ε^2) and has the following performance guarantee: If (i) p is (ε/log(1/ε))/(384k)-well-behaved, and (ii) opt_k(p) ≤ ε, then with probability at least 19/20, it outputs an O(k · log^2(1/ε))-flat distribution h such that d_TV(p, h) ≤ 2 · opt_k(p) + 3ε.
We require some notation and terminology. Let r be a distribution over [0, 1), and let P be a set of disjoint intervals that are contained in [0, 1). We say that the P-flattening of r, denoted (r)^P, is the sub-distribution defined as

(r)^P(v) = r(I)/|I| if v ∈ I for some I ∈ P, and 0 if v does not belong to any I ∈ P.

Observe that if P is a partition of [0, 1), then (since r is a distribution) (r)^P is a distribution.
We say that two intervals I, I′ are consecutive if I = [a, b) and I′ = [b, c). Given two consecutive intervals I, I′ contained in [0, 1) and a sub-distribution r, we use e_r(I, I′) to denote the L_1 distance between (r)^{{I,I′}} and (r)^{{I∪I′}}, i.e., e_r(I, I′) = ∫_{I∪I′} |(r)^{{I,I′}}(x) − (r)^{{I∪I′}}(x)| dx. Note here that {I ∪ I′} is a set that contains one element, the interval [a, c).
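Since the flattenings involved are piecewise constant, e_r(I, I′) has a simple closed form in terms of the interval masses and lengths; the following helper (ours) computes it and is reused in the sketch after Section 3.1.2.

def flattening_distance(m1, len1, m2, len2):
    """e_r(I, I') for consecutive intervals with masses m1, m2 and lengths len1, len2."""
    avg = (m1 + m2) / (len1 + len2)   # density of the merged flattening
    return abs(m1 / len1 - avg) * len1 + abs(m2 / len2 - avg) * len2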
3.1.1 Intuition for the algorithm
We begin with a high-level intuitive explanation of the Learn-WB-small-opt-k-histogram algorithm. It starts in Step 1 by constructing a partition of [0, 1) into z = Θ(k/ε′) intervals I_1, ..., I_z (where ε′ = Θ̃(ε)) such that p has weight Θ(ε′/k) on each subinterval. In Step 2 the algorithm draws a sample of Õ(k/ε^2) points from p and uses them to define an empirical distribution p̂_m. This is the only step in which points are drawn from p. For the rest of this intuitive explanation we pretend that the weight p̂_m(I) that the empirical distribution p̂_m assigns to each interval I is actually the same as the true weight p(I) (Lemma 3.1 below shows that this is not too far from the truth).
Before continuing with our explanation of the algorithm, we digress briefly by imagining for a moment that the target distribution p actually is a k-flat distribution (i.e., that opt_k(p) = 0). In this case there are at most k 'breakpoints', and hence at most k intervals I_j for which e_{p̂_m}(I_j, I_{j+1}) > 0, so computing the e_{p̂_m}(I_j, I_{j+1}) values would be an easy way to identify the true breakpoints (and given these it is not difficult to construct a high-accuracy hypothesis).
In reality, we may of course have opt_k(p) > 0; this means that if we try to use the e_{p̂_m}(I_j, I_{j+1}) criterion to identify 'breakpoints' of the optimal k-flat distribution that is closest to p (call this k-flat distribution q), we may sometimes be 'fooled' into thinking that q has a breakpoint in an interval I_j where it does not (but rather the value e_{p̂_m}(I_j, I_{j+1}) is large because of the difference between q and p). However, recall that by assumption we have opt_k(p) ≤ ε; this bound can be used to show that there cannot be too many intervals I_j for which a large value of e_{p̂_m}(I_j, I_{j+1}) suggests a 'spurious breakpoint' (see the proof of Lemma 3.3). This is helpful, but in and of itself not enough; since our partition I_1, ..., I_z divides [0, 1) into k/ε′ intervals, a naive approach based on this would result in a (k/ε′)-flat hypothesis distribution, which in turn would necessitate a sample complexity of Õ(k/ε′^3), which is unacceptably high. Instead, our algorithm performs a careful process of iteratively merging consecutive intervals for which the e_{p̂_m}(I_j, I_{j+1}) criterion indicates that a merge will not adversely affect the final accuracy by too much. As a result of this process we end up with k · polylog(1/ε) intervals for the final hypothesis, which enables us to output a (k · polylog(1/ε′))-flat final hypothesis using Õ(k/ε′^2) draws from p.
In more detail, this iterative merging is carried out by the main loop of the algorithm in Step 4.
Going into the t-th iteration of the loop, the algorithm has a partition P_{t−1} of [0, 1) into disjoint
sub-intervals, and a set F_{t−1} ⊆ P_{t−1} (i.e., every interval belonging to F_{t−1} also belongs to P_{t−1}).
Initially P_0 contains all the intervals I_1, ..., I_z and F_0 is empty. Intuitively, the intervals in P_{t−1} \
F_{t−1} are still being "processed"; such an interval may possibly be merged with a consecutive interval
from P_{t−1} \ F_{t−1} if doing so would only incur a small "cost" (see condition (ii) of Step 4(b) of the
algorithm). The intervals in F_{t−1} have been "frozen" and will not be altered or used subsequently in
the algorithm.
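Before stating the algorithm itself, here is a rough stand-in for the partitioning primitive used in its Step 1. The paper's Approximately-Equal-Partition routine is analyzed in Lemma 2.1, which is not reproduced here, so the quantile-based sketch below and its interface are our own assumption rather than the paper's procedure:

import numpy as np

def approx_equal_partition(sample, z):
    """Split [0, 1) into z intervals of roughly equal empirical mass by
    taking empirical quantiles of a sample drawn from p. A stand-in only:
    the actual routine carries the guarantee of Lemma 2.1."""
    cuts = np.quantile(sample, np.linspace(0.0, 1.0, z + 1))
    cuts[0], cuts[-1] = 0.0, 1.0           # enforce i_0 = 0 and i_z = 1
    return [(cuts[j], cuts[j + 1]) for j in range(z)]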
3.1.2 The algorithm
Algorithm Learn-WB-small-opt-k-histogram:
Input: parameters k ≥ 1, ε > 0; access to i.i.d. draws from target distribution p over [0, 1)
Output: If (i) p is ε/(384k·log(1/ε))-well-behaved and (ii) opt_k(p) ≤ ε, then with probability at least
99/100 the output is a distribution q such that d_TV(p, q) ≤ 2·opt_k(p) + 3ε.
1. Let ε′ = ε/log(1/ε). Run Algorithm Approximately-Equal-Partition on
   input parameter ε′/(6k) to partition [0, 1) into z = Θ(k/ε′) intervals I_1 = [i_0, i_1), ...,
   I_z = [i_{z−1}, i_z), where i_0 = 0 and i_z = 1, such that with probability at least
   99/100, for each j ∈ {1, ..., z} we have p([i_{j−1}, i_j)) ∈ [ε′/(12k), ε′/(2k)] (assuming p
   is ε′/(384k)-well-behaved).
2. Draw m = Õ(k/ε′²) points from p and let p̂_m be the resulting empirical distribution.
3. Set P_0 = {I_1, I_2, ..., I_z}, and F_0 = ∅.
4. Let s = log₂(1/ε′). Repeat for t = 1, ... until t = s:
   (a) Initialize P_t to ∅ and F_t to F_{t−1}.
   (b) Without loss of generality, assume P_{t−1} = {I_{t−1,1}, ..., I_{t−1,z_{t−1}}} where interval I_{t−1,i} is to the left of I_{t−1,i+1} for all i. Scan left to right across the intervals
       in P_{t−1} (i.e., iterate over i = 1, ..., z_{t−1} − 1). If intervals I_{t−1,i}, I_{t−1,i+1} are (i)
       both not in F_{t−1}, and (ii) Δ_{p̂_m}(I_{t−1,i}, I_{t−1,i+1}) > ε′/(2k), then add both I_{t−1,i}
       and I_{t−1,i+1} into F_t.
   (c) Initialize i to 1, and repeatedly execute one of the following four (mutually exclusive and exhaustive) cases until i > z_{t−1}:
       [Case 1] i ≤ z_{t−1} − 1 and I_{t−1,i} = [a, b), I_{t−1,i+1} = [b, c) are consecutive
       intervals both not in F_t. Add the merged interval I_{t−1,i} ∪ I_{t−1,i+1} = [a, c) into
       P_t. Set i ← i + 2.
       [Case 2] i ≤ z_{t−1} − 1 and I_{t−1,i} ∈ F_t. Set i ← i + 1.
       [Case 3] i ≤ z_{t−1} − 1, I_{t−1,i} ∉ F_t and I_{t−1,i+1} ∈ F_t. Add I_{t−1,i} into F_t and
       set i ← i + 2.
       [Case 4] i = z_{t−1}. Add I_{t−1,z_{t−1}} into F_t if I_{t−1,z_{t−1}} is not in F_t and set i ← i + 1.
   (d) Set P_t ← P_t ∪ F_t.
5. Output the |P_s|-flat hypothesis distribution (p̂_m)^{P_s}.
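The freeze-and-merge loop of Steps 3-5 can be transcribed almost directly. The Python sketch below is our own transcription (reusing delta() from the earlier sketch); as a simplification of the P_t ← P_t ∪ F_t bookkeeping, it carries frozen intervals along in place rather than maintaining P_t and F_t as separate sets, which preserves the left-to-right ordering:

import math

def merge_intervals(masses, intervals, k, eps):
    """Sketch of Steps 3-5: masses[j] is the empirical mass of intervals[j]
    under hat-p_m. Returns the final list of (mass, interval) pairs whose
    flattening (via flatten_pdf) is the output hypothesis."""
    eps1 = eps / math.log(1.0 / eps)       # eps' = eps / log(1/eps)
    s = int(math.ceil(math.log2(1.0 / eps1)))
    P = list(zip(masses, intervals))
    frozen = [False] * len(P)
    for _ in range(s):
        # Step 4(b): freeze consecutive pairs that are too costly to merge.
        for i in range(len(P) - 1):
            if not frozen[i] and not frozen[i + 1]:
                (mI, I), (mJ, J) = P[i], P[i + 1]
                if delta(mI, mJ, I, J) > eps1 / (2 * k):
                    frozen[i] = frozen[i + 1] = True
        # Step 4(c): merge surviving unfrozen pairs (Cases 1-4).
        newP, newF, i = [], [], 0
        while i < len(P):
            if i + 1 < len(P) and not frozen[i] and not frozen[i + 1]:
                (mI, (a, _)), (mJ, (_, c)) = P[i], P[i + 1]
                newP.append((mI + mJ, (a, c)))
                newF.append(False)         # merged intervals stay active
                i += 2
            else:                          # frozen, or an unpaired leftover
                newP.append(P[i])
                newF.append(True)
                i += 1
        P, frozen = newP, newF
    return P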
3.1.3 Analysis of the algorithm and proof of Theorem 4
It is straightforward to verify the claimed running time given Lemma 2.1, which bounds the running
time of Approximately-Equal-Partition. Indeed, we note that Step 2, which simply
draws Õ(k/ε′²) points and constructs the resulting empirical distribution, dominates the overall
running time. In the rest of this subsubsection we prove correctness.
We first observe that with high probability the empirical distribution p̂_m defined in Step 2 gives a
high-accuracy estimate of the true probability of any union of consecutive intervals from I_1, ..., I_z.
The following lemma from [CDSS14] follows from the standard multiplicative Chernoff bound:
Lemma 3.1 (Lemma 12, [CDSS14]). With probability 99/100 over the sample drawn in Step 2, for
every 0 ≤ a < b ≤ z we have that |p̂_m([i_a, i_b)) − p([i_a, i_b))| ≤ √(ε′(b − a)) · ε′/(10k).
We henceforth assume that this 99/100-likely event indeed takes place, so the above inequality holds
for all 0 ≤ a < b ≤ z. We use this to show that the Δ_{p̂_m}(I_{t−1,i}, I_{t−1,i+1}) value that the algorithm
uses in Step 4(b) is a good proxy for the actual value Δ_p(I_{t−1,i}, I_{t−1,i+1}) (which of course is not
accessible to the algorithm):
Lemma 3.2. Fix 1 ≤ t ≤ s. Then we have |Δ_{p̂_m}(I_{t−1,i}, I_{t−1,i+1}) − Δ_p(I_{t−1,i}, I_{t−1,i+1})| ≤
2ε′/(5k).
Due to space constraints the proofs of all lemmas in this section are deferred to Appendix A.
For the rest of the analysis, let q denote a fixed k-flat distribution that is closest to p, so ‖p − q‖₁ =
opt_k(p). (We note that while opt_k(p) is defined as inf_{q∈C} ‖p − q‖₁, standard closure arguments
can be used to show that the infimum is actually achieved by some k-flat distribution q.) Let Q be
the partition of [0, 1) corresponding to the intervals on which q is piecewise constant. We say that a
breakpoint of Q is a value in [0, 1] that is an endpoint of one of the (at most) k intervals in Q.
The following important lemma bounds the number of intervals in the final partition P_s:
Lemma 3.3. P_s contains at most O(k·log²(1/ε)) intervals.
The following definition will be useful:
Definition 5. Let P denote any partition of [0, 1). We say that partition P is ε′-good for (p, q) if for
every breakpoint v of Q, the interval I in P containing v satisfies p(I) ≤ ε′/(2k).
The above definition is justified by the following lemma:
Lemma 3.4. If P is ε′-good for (p, q), then ‖p − (p)^P‖₁ ≤ 2·opt_k(p) + ε′.
We are now in a position to prove the following:
Lemma 3.5. There exists a partition R of [0, 1) that is ε′-good for (p, q) and satisfies
‖(p)^{P_s} − (p)^R‖₁ ≤ ε.
We construct the claimed R based on P_s, P_{s−1}, ..., P_0 as follows: (i) If I is an interval in P_s not
containing a breakpoint of Q, then I is also in R; (ii) If I is an interval in P_s that does contain a
breakpoint of Q, then we further partition I into a set of intervals S in a recursive manner using
P_{s−1}, ..., P_0 (see Appendix A.4). Finally, by putting everything together we can prove Theorem 4:
Proof of Theorem 4. By Lemma 3.4 applied to R, we have that ‖p − (p)^R‖₁ ≤ 2·opt_k(p) + ε′. By
Lemma 3.5, we have that ‖(p)^{P_s} − (p)^R‖₁ ≤ ε; thus the triangle inequality gives that ‖p − (p)^{P_s}‖₁ ≤
2·opt_k(p) + 2ε. By Lemma 3.3 the partition P_s contains at most O(k·log²(1/ε)) intervals, so both
(p)^{P_s} and (p̂_m)^{P_s} are O(k·log²(1/ε))-flat distributions. Thus, ‖(p)^{P_s} − (p̂_m)^{P_s}‖₁ =
‖(p)^{P_s} − (p̂_m)^{P_s}‖_{A_ℓ}, where ℓ = O(k·log²(1/ε)) and A_ℓ is the family of all subsets of [0, 1) that consist
of unions of up to ℓ intervals (which has VC dimension 2ℓ). Consequently, by the VC inequality
(Theorem 3), for a suitable choice of m = Õ(k/ε′²), we have that E[‖(p)^{P_s} − (p̂_m)^{P_s}‖₁] ≤ 4ε′/100.
Markov's inequality now gives that with probability at least 96/100, we have ‖(p)^{P_s} − (p̂_m)^{P_s}‖₁ ≤
ε′. Hence, with overall probability at least 19/20 (recall the 1/100 error probability incurred in
Lemma 3.1), we have that ‖p − (p̂_m)^{P_s}‖₁ ≤ 2·opt_k(p) + 3ε, and the theorem is proved.
3.2 A general reduction to the case of small opt for semi-agnostic learning
In this section we show that under mild conditions, the general problem of agnostic distribution
learning for a class C can be efficiently reduced to the special case when opt_C is not too large
compared with ε. While the reduction is simple and generic, we have not previously encountered it
in the literature on density estimation, so we provide a proof in Appendix A.5. A precise statement
of the reduction follows:
Theorem 6. Let A be an algorithm with the following behavior: A is given as input i.i.d. points
drawn from p and a parameter ε > 0. A uses m(ε) = Ω(1/ε) draws from p, runs in time t(ε) =
Ω(1/ε), and satisfies the following: if opt_C(p) ≤ 10ε, then with probability at least 19/20 it outputs
a hypothesis distribution q such that (i) ‖p − q‖₁ ≤ α · opt_C(p) + ε, where α is an absolute constant,
and (ii) given any r ∈ [0, 1), the value q(r) of the pdf of q at r can be efficiently computed in T time
steps.
Then there is an algorithm A′ with the following performance guarantee: A′ is given as input i.i.d.
draws from p and a parameter ε > 0.² Algorithm A′ uses O(m(ε/10) + log log(1/ε)/ε²) draws
from p, runs in time O(t(ε/10)) + T · Õ(1/ε²), and outputs a hypothesis distribution q′ such that
with probability at least 39/40 we have ‖p − q′‖₁ ≤ 10(α + 2) · opt_C(p) + ε.
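The proof of Theorem 6 is deferred to Appendix A.5 and is not reproduced here; the following Python sketch is our own reconstruction of the standard argument suggested by the stated sample bound: run A under a doubling sequence of guesses for opt_C(p), then pick among the O(log(1/ε)) resulting candidates with a hypothesis-selection routine (e.g. a Scheffé-style tournament), whose O(log log(1/ε)/ε²) cost matches the extra term in the theorem. The callable interfaces are assumptions for illustration only:

import math

def reduce_to_small_opt(draw, A, select, eps):
    """Our reconstruction of the Theorem-6 reduction. draw() returns one
    i.i.d. sample from p, A(draw, e) is the small-opt learner, and
    select(candidates, samples) is a hypothesis-selection routine that
    returns a candidate whose L1 error is within a constant factor of the
    best candidate's, plus O(eps)."""
    candidates, e = [], eps / 10.0
    while e <= 2.0:              # opt_C(p) <= 2 for any p, so some guess works
        candidates.append(A(draw, e))   # valid once opt_C(p) <= 10 * e
        e *= 2.0
    m = int(math.ceil(math.log(len(candidates) + 1) / eps ** 2))
    return select(candidates, [draw() for _ in range(m)])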
3.3 Dealing with distributions that are not well behaved
The assumption that the target distribution p is Ω̃(ε/k)-well-behaved can be straightforwardly removed by following the approach in Section 3.6 of [CDSS14]. That paper presents a simple linear-time sampling-based procedure, using Õ(k/ε) samples, that with high probability identifies all the
"heavy" elements (atoms which cause p to not be well-behaved, if any such points exist).
Our overall algorithm first runs this procedure to find the set S of "heavy" elements, and then runs
the algorithm presented above (which succeeds for well-behaved distributions, i.e., distributions
that have no "heavy" elements) using as its target distribution the conditional distribution of p over
[0, 1) \ S (let us denote this conditional distribution by p′). A straightforward analysis given in
[CDSS14] shows that (i) opt_k(p) ≥ opt_k(p′), and moreover (ii) d_TV(p, p′) ≤ opt_k(p). Thus, by
the triangle inequality, any hypothesis h satisfying d_TV(h, p′) ≤ C·opt_k(p′) + ε will also satisfy
d_TV(h, p) ≤ (C + 1)·opt_k(p) + ε as desired.
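The heavy-element preprocessing can be sketched as a simple frequency count over the sample; the threshold below is our own illustrative choice and not the constant from Section 3.6 of [CDSS14]:

from collections import Counter

def find_heavy_elements(sample, k, eps):
    """Flag values whose empirical frequency in an O~(k/eps)-size sample is
    large enough that they are plausibly atoms of mass Omega(eps/k)."""
    thresh = eps * len(sample) / (2.0 * k)   # expected count of such an atom
    return {v for v, c in Counter(sample).items() if c >= thresh}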
4 Lower bounds on agnostic learning
In this section we establish that α-agnostic learning with α < 2 is information-theoretically impossible, thus establishing Theorem 2.
Fix any 0 < t < 1/2. We define a probability distribution D_t over a finite set of discrete distributions
over the domain [2N] = {1, ..., 2N} as follows. (We assume without loss of generality below that
t is rational and that tN is an integer.) A draw of p_{S1,S2,t} from D_t is obtained as follows (a direct
sampler for this two-step construction is sketched right after it).
1. A set S1 ⊆ [N] is chosen uniformly at random from all subsets of [N] that contain precisely
   tN elements. For i ∈ [N], the distribution p_{S1,S2,t} assigns probability weight as follows:

       p_{S1,S2,t}(i) = 1/(4N)                        if i ∈ S1,
       p_{S1,S2,t}(i) = (1/(2N)) (1 + t/(2(1 − t)))   if i ∈ [N] \ S1.

2. A set S2 ⊆ [N + 1, ..., 2N] is chosen uniformly at random from all subsets of [N +
   1, ..., 2N] that contain precisely tN elements. For i ∈ [N + 1, ..., 2N], the distribution
   p_{S1,S2,t} assigns probability weight as follows:

       p_{S1,S2,t}(i) = 3/(4N)                        if i ∈ S2,
       p_{S1,S2,t}(i) = (1/(2N)) (1 − t/(2(1 − t)))   if i ∈ [N + 1, ..., 2N] \ S2.
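For concreteness, the construction above can be transcribed directly as a sampler that returns one draw from D_t as a probability vector (our own transcription; the normalization check mirrors the calculation that each half of the domain carries total mass 1/2):

import numpy as np

def draw_from_D_t(N, t, rng=None):
    """Return one p_{S1,S2,t} ~ D_t as a length-2N probability vector over
    {1, ..., 2N}, following the two-step construction above."""
    rng = np.random.default_rng() if rng is None else rng
    tN = int(round(t * N))                       # assumed integer, as in the text
    p = np.empty(2 * N)
    p[:N] = (1.0 + t / (2.0 * (1.0 - t))) / (2.0 * N)
    p[rng.choice(N, size=tN, replace=False)] = 1.0 / (4.0 * N)        # S1
    p[N:] = (1.0 - t / (2.0 * (1.0 - t))) / (2.0 * N)
    p[N + rng.choice(N, size=tN, replace=False)] = 3.0 / (4.0 * N)    # S2
    assert abs(p.sum() - 1.0) < 1e-9             # each half carries mass 1/2
    return p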
Using a birthday paradox type argument, we show that no o(√N)-sample algorithm can successfully
distinguish between a distribution p_{S1,S2,t} ~ D_t and the uniform distribution over [2N]. We then
leverage this indistinguishability to show that any (2 − δ)-semi-agnostic learning algorithm, even
for 2-flat distributions, must use a sample of size Ω(√N) (see Appendix B for these proofs):
Theorem 7. Fix any δ > 0 and any function f(·). There is no algorithm A with the following
property: given ε > 0 and access to independent points drawn from an unknown distribution p over
[2N], algorithm A makes o(√N) · f(ε) draws from p and with probability at least 51/100 outputs
a hypothesis distribution h over [2N] satisfying ‖h − p‖₁ ≤ (2 − δ)·opt₂(p) + ε.
As described in the Introduction, via the obvious correspondence that maps distributions over [N]
to distributions over [0, 1), we get the following:
Corollary 4.1. Fix any δ > 0 and any function f(·). There is no algorithm A with the following
property: given ε > 0 and access to independent draws from an unknown distribution p over [0, 1),
algorithm A makes f(ε) draws from p and with probability at least 51/100 outputs a hypothesis
distribution h over [0, 1) satisfying ‖h − p‖₁ ≤ (2 − δ)·opt₂(p) + ε.

² Note that now there is no guarantee that opt_C(p) ≤ ε; indeed, the point here is that opt_C(p) may be
arbitrary.
References
[AJOS14] J. Acharya, A. Jafarpour, A. Orlitsky, and A.T. Suresh. Near-optimal-sample estimators for spherical Gaussian mixtures. Technical Report http://arxiv.org/abs/1402.4746, 19 Feb 2014.
[BBBB72] R.E. Barlow, D.J. Bartholomew, J.M. Bremner, and H.D. Brunk. Statistical Inference under Order Restrictions. Wiley, New York, 1972.
[Bir87] L. Birgé. Estimating a density under order restrictions: Nonasymptotic minimax risk. Annals of Statistics, 15(3):995-1012, 1987.
[Bir97] L. Birgé. Estimation of unimodal densities without smoothness assumptions. Annals of Statistics, 25(3):970-981, 1997.
[Bru58] H. D. Brunk. On the estimation of parameters restricted by inequalities. Ann. Math. Statist., 29(2):437-454, 1958.
[CDSS13] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Learning mixtures of structured distributions over discrete domains. In SODA, pages 1380-1394, 2013.
[CDSS14] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Efficient density estimation via piecewise polynomial approximation. Technical Report http://arxiv.org/abs/1305.3207; conference version in STOC, pages 604-613, 2014.
[CMN98] S. Chaudhuri, R. Motwani, and V. Narasayya. Random sampling for histogram construction: How much is enough? In SIGMOD Conference, pages 436-447, 1998.
[DDS12] A. De, I. Diakonikolas, and R. Servedio. Inverse problems in approximate uniform generation. Available at http://arxiv.org/pdf/1211.1722v1.pdf, 2012.
[DG85] L. Devroye and L. Györfi. Nonparametric Density Estimation: The L1 View. John Wiley & Sons, 1985.
[DK14] C. Daskalakis and G. Kamath. Faster and sample near-optimal algorithms for proper learning mixtures of Gaussians. In COLT, pages 1183-1213, 2014.
[DL01] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer Series in Statistics, Springer, 2001.
[GGI+02] A. Gilbert, S. Guha, P. Indyk, Y. Kotidis, S. Muthukrishnan, and M. Strauss. Fast, small-space algorithms for approximate histogram maintenance. In STOC, pages 389-398, 2002.
[GKS06] S. Guha, N. Koudas, and K. Shim. Approximation and streaming algorithms for histogram construction problems. ACM Trans. Database Syst., 31(1):396-438, 2006.
[Gre56] U. Grenander. On the theory of mortality measurement. Skand. Aktuarietidskr., 39:125-153, 1956.
[Gro85] P. Groeneboom. Estimating a monotone density. In Proc. of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, pages 539-555, 1985.
[HP76] D. L. Hanson and G. Pledger. Consistency in concave regression. The Annals of Statistics, 4(6):1038-1050, 1976.
[ILR12] P. Indyk, R. Levi, and R. Rubinfeld. Approximating and Testing k-Histogram Distributions in Sub-linear Time. In PODS, pages 15-22, 2012.
[JKM+98] H. V. Jagadish, N. Koudas, S. Muthukrishnan, V. Poosala, K. Sevcik, and T. Suel. Optimal histograms with quality guarantees. In VLDB, pages 275-286, 1998.
[KMR+94] M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. Schapire, and L. Sellie. On the learnability of discrete distributions. In Proc. 26th STOC, pages 273-282, 1994.
[MP007] P. Massart. Concentration Inequalities and Model Selection (Saint-Flour lectures, 2003), ed. J. Picard. Lecture Notes in Mathematics, Springer, 2007.
[Pea95] K. Pearson. Contributions to the mathematical theory of evolution. II. Skew variation in homogeneous material. Philosophical Trans. of the Royal Society of London, 186:343-414, 1895.
[Rao69] B.L.S. Prakasa Rao. Estimation of a unimodal density. Sankhya Ser. A, 31:23-36, 1969.
[Reb05] L. Reboul. Estimation of a function under shape restrictions. Applications to reliability. Ann. Statist., 33(3):1330-1356, 2005.
[Sco92] D.W. Scott. Multivariate Density Estimation: Theory, Practice and Visualization. Wiley, New York, 1992.
[Sil86] B. W. Silverman. Density Estimation. Chapman and Hall, London, 1986.
[Val84] L. G. Valiant. A theory of the learnable. In Proc. 16th Annual ACM Symposium on Theory of Computing (STOC), pages 436-445. ACM Press, 1984.
[Weg70] E.J. Wegman. Maximum likelihood estimation of a unimodal density, I and II. Ann. Math. Statist., 41:457-471 and 2169-2174, 1970.
Factoring Variations in Natural Images with
Deep Gaussian Mixture Models
Aäron van den Oord, Benjamin Schrauwen
Electronics and Information Systems department (ELIS), Ghent University
{aaron.vandenoord, benjamin.schrauwen}@ugent.be
Abstract
Generative models can be seen as the swiss army knives of machine learning, as
many problems can be written probabilistically in terms of the distribution of the
data, including prediction, reconstruction, imputation and simulation. One of the
most promising directions for unsupervised learning may lie in Deep Learning
methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods, is that they often are
harder to scale. As a result there are some easier, more scalable shallow methods, such as the Gaussian Mixture Model and the Student-t Mixture Model, that
remain surprisingly competitive. In this paper we propose a new scalable deep
generative model for images, called the Deep Gaussian Mixture Model, that is
a straightforward but powerful generalization of GMMs to multiple layers. The
parametrization of a Deep GMM allows it to efficiently capture products of variations in natural images. We propose a new EM-based algorithm that scales well
to large datasets, and we show that both the Expectation and the Maximization
steps can easily be distributed over multiple machines. In our density estimation
experiments we show that deeper GMM architectures generalize better than more
shallow ones, with results in the same ballpark as the state of the art.
1 Introduction
There has been an increasing interest in generative models for unsupervised learning, with many
applications in Image processing [1, 2], natural language processing [3, 4], vision [5] and audio [6].
Generative models can be seen as the swiss army knives of machine learning, as many problems can
be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning
may lie in Deep Learning methods, given their recent results in supervised learning [7]. Although
not a universal recipe for success, the merits of deep learning are well-established [8]. Because of
their multilayered nature, these methods provide ways to efficiently represent increasingly complex
relationships as the number of layers increases. ?Shallow? methods will often require a very large
number of units to represent the same functions, and may therefore overfit more.
Looking at real-valued data, one of the current problems with deep unsupervised learning methods,
is that they are often hard to scale to large datasets. This is especially a problem for unsupervised
learning, because there is usually a lot of data available, as it does not have to be labeled (e.g. images,
videos, text). As a result there are some easier, more scalable shallow methods, such as the Gaussian
Mixture Model (GMM) and the Student-t Mixture Model (STM), that remain surprisingly competitive [2]. Of course, the disadvantage of these mixture models is that they have less representational
power than deep models.
In this paper we propose a new scalable deep generative model for images, called the Deep Gaussian
Mixture Model (Deep GMM). The Deep GMM is a straightforward but powerful generalization of
Gaussian Mixture Models to multiple layers. It is constructed by stacking multiple GMM-layers on
[Figure 1: network diagrams for (a) a Gaussian, (b) a GMM, and (c) a Deep GMM, each showing
transformations A_{i,j} applied to a standard normal input N(0, I_n) to produce x; a sampled path
A_{1,3} → A_{2,1} → A_{3,2} is highlighted.]
Figure 1: Visualizations of a Gaussian, GMM and Deep GMM distribution. Note that these are not
graphical models. This visualization describes the connectivity of the linear transformations that
make up the multimodal structure of a deep GMM. The sampling process for the deep GMM is
shown in red. Every time a sample is drawn, it is first drawn from a standard normal distribution
and then transformed with all the transformations on a randomly sampled path. In the example it is
first transformed with A_{1,3}, then with A_{2,1} and finally with A_{3,2}. Every path results in differently
correlated normal random variables. The deep GMM shown has 3 × 2 × 3 = 18 possible paths. For
each square transformation matrix A_{i,j} there is a corresponding bias term b_{i,j} (not shown here).
top of each other, which is similar to many other Deep Learning techniques. Although for every
deep GMM, one could construct a shallow GMM with the same density function, it would require
an exponential number of mixture components to do so.
The multilayer architecture of the Deep GMM gives rise to a specific kind of parameter tying. The
parameterization is most interpretable in the case of images: the layers in the architecture are able to
efficiently factorize the different variations that are present in natural images: changes in brightness,
contrast, color and even translations or rotations of the objects in the image. Because each of these
variations will affect the image separately, a traditional mixture model would need an exponential
number of components to model each combination of variations, whereas a Deep GMM can factor
these variations and model them individually.
The proposed training algorithm for the Deep GMM is based on the most popular principle for training GMMs: Expectation Maximization (EM). Although stochastic gradient (SGD) is also a possible
option, we suggest the use of EM, as it is inherently more parallelizable. As we will show later, both
the Expectation and the Maximization steps can easily be distributed on multiple computation units
or machines, with only limited communication between compute nodes. Although there has been a
lot of effort in scaling up SGD for deep networks [9], the Deep GMM is parallelizable by design.
The remainder of this paper is organized as follows. We start by introducing the design of deep
GMMs before explaining the EM algorithm for training them. Next, we discuss the experiments
where we examine the density estimation performance of the deep GMM, as a function of the number of layers, and in comparison with other methods. We conclude in Section 5, where also discuss
some unsolved problems for future work.
2 Stacking Gaussian Mixture layers
Deep GMMs are best introduced by looking at some special cases: the multivariate normal distribution and the Gaussian Mixture model.
One way to define a multivariate normal variable x is as a standard normal variable z ~ N(0, I_n)
that has been transformed with a certain linear transformation: x = Az + b, so that

    p(x) = N(x | b, AAᵀ).
This is visualized in Figure 1(a). The same interpretation can be applied to Gaussian Mixture Models, see Figure 1(b). A transformation is chosen from a set of (square) transformations A_i, i = 1...N
(each having a bias term b_i) with probabilities π_i, i = 1...N, such that the resulting distribution
becomes:

    p(x) = Σ_{i=1}^{N} π_i N(x | b_i, A_i A_iᵀ).
With this in mind, it is easy to generalize GMMs in a multi-layered fashion. Instead of sampling
one transformation from a set, we can sample a path of transformations in a network of k layers, see
Figure 1(c). Let Γ be the set of all possible paths through the network. Each
path p = (p_1, p_2, ..., p_k) ∈ Γ has a probability π_p of being sampled, with

    Σ_{p∈Γ} π_p = Σ_{p_1,p_2,...,p_k} π_{(p_1,p_2,...,p_k)} = 1.
Here N_j is the number of components in layer j. The density function of x is:

    p(x) = Σ_{p∈Γ} π_p N(x | μ_p, Λ_p Λ_pᵀ),                                  (1)

with

    μ_p = b_{k,p_k} + A_{k,p_k}( ... (b_{2,p_2} + A_{2,p_2} b_{1,p_1}) ... )   (2)

    Λ_p = Π_{j=k}^{1} A_{j,p_j}.                                               (3)
Here A_{m,n} and b_{m,n} are the n-th transformation matrix and bias of the m-th layer. Notice that one
can also factorize π_p as follows: π_{(p_1,p_2,...,p_k)} = π_{p_1} π_{p_2} ... π_{p_k}, so that each layer has its own set
of parameters associated with it. In our experiments, however, this made very little difference in the
log likelihood. This would mainly be useful for very large networks.
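Sampling from the model is the clearest way to see this parametrization at work. The following Python sketch (the container format is our own, not the paper's code) draws z ~ N(0, I_n), samples one component per layer using the factored weights π_p = π_{p_1}···π_{p_k}, and applies x ← Ax + b layer by layer, exactly as along the red path of Figure 1:

import numpy as np

def sample_deep_gmm(layers, pis, n, rng=None):
    """Ancestral sampling from a deep GMM. layers[j] is a list of (A, b)
    pairs for layer j+1 (ordered so that layer 1 acts first, as in Figure 1);
    pis[j] are that layer's component probabilities (each summing to 1)."""
    rng = np.random.default_rng() if rng is None else rng
    d = layers[0][0][0].shape[0]
    x = rng.standard_normal((n, d))              # z ~ N(0, I_d)
    for comps, pi in zip(layers, pis):
        idx = rng.choice(len(comps), size=n, p=np.asarray(pi))
        for c, (A, b) in enumerate(comps):
            sel = idx == c
            x[sel] = x[sel] @ A.T + b            # x <- A x + b on this sub-path
    return x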
The GMM is a special case of the deep GMM having only one layer. Moreover, each deep GMM
can be constructed by a GMM with Π_{j=1}^{k} N_j components, where every path in the network represents
one component in the GMM. The parameters of these components are tied to each other in the way
the deep GMM is defined. Because of this tying, the number of parameters to train is proportional to
Σ_{j=1}^{k} N_j. Still, the density estimator is quite expressive as it can represent a large number of Gaussian
mixture components. This is often the case with deep learning methods: shallow architectures can
often theoretically learn the same functions, but will require a much larger number of parameters [8].
When the kind of compound functions that a deep learning method is able to model are appropriate
for the type of data, their performance will often be better than their shallow equivalents, because of
the smaller risk of overfitting.
In the case of images, but also for other types of data, we can imagine why this network structure
might be useful. A lot of images share the same variations such as rotations, translations, brightness
changes, etc.. These deformations can be represented by a linear transformation in the pixel space.
When learning a deep GMM, the model may pick up on these variations in the data that are shared
amongst images by factoring and describing them with the transformations in the network.
The hypothesis of this paper is that Deep GMMs overfit less than normal GMMs as the complexity
of their density functions increase because the parameter tying of the Deep GMM will force it to
learn more useful functions. Note that this is one of the reasons why other deep learning methods
are so successful. The only difference is that the parameter tying in deep GMMs is more explicit
and interpretable.
A closely related method is the deep mixture of factor analyzers (DMFA) model [10], which is an
extension of the Mixture of Factor Analyzers (MFA) model [11]. The DMFA model has a tree
structure in which every node is a factor analyzer that inherits the low-dimensional latent factors
from its parent. Training is performed layer by layer, where the dataset is hierarchically clustered
and the children of each node are trained as a MFA on a different subset of the data using the MFA
EM algorithm. The parents nodes are kept constant when training its children. The main difference
with the proposed method is that in the Deep GMM the nodes of each layer are connected to all
nodes of the layer above. The layers are trained jointly and the higher level nodes will adapt to the
lower level nodes.
3 Training deep GMMs with EM
The algorithm we propose for training Deep GMMs is based on Expectation Maximization (EM).
The optimization is similar to that of a GMM: in the E-step we will compute the posterior probabilities np that a path p was responsible for generating xn , also called the responsibilities. In the
maximization step, the parameters of the model will be optimized given those responsibilities.
3.1 Expectation
From Equation 1 we get the log-likelihood given the data:

    Σ_n log p(x_n) = Σ_n log [ Σ_{p∈Γ} π_p N(x_n | μ_p, Λ_p Λ_pᵀ) ].
This is the global objective for the Deep GMM to optimize. When taking the derivative with respect
to a parameter θ we get:

    ∇_θ Σ_n log p(x_n) = Σ_{n,p} [ π_p N(x_n | μ_p, Λ_p Λ_pᵀ) / Σ_{q∈Γ} π_q N(x_n | μ_q, Λ_q Λ_qᵀ) ] ∇_θ log N(x_n | μ_p, Λ_p Λ_pᵀ)
                       = Σ_{n,p} γ_np ∇_θ log N(x_n | μ_p, Λ_p Λ_pᵀ),

with

    γ_np = π_p N(x_n | μ_p, Λ_p Λ_pᵀ) / Σ_{q∈Γ} π_q N(x_n | μ_q, Λ_q Λ_qᵀ),
the equation for the responsibilities. Although the γ_np generally depend on the parameter θ, in the EM
algorithm the responsibilities are assumed to remain constant when optimizing the model parameters
in the M-step.
The E-step is very similar to that of a standard GMM, but instead of computing the responsibilities
γ_nk for every component k, one needs to compute them for every path p = (p_1, p_2, ..., p_k) ∈ Γ.
This is because every path represents a Gaussian mixture component in the equivalent shallow
GMM. Because γ_np needs to be computed for each datapoint independently, the E-step is very easy
to parallelize. Often a simple way to increase the speed of convergence and to reduce computation
time is to use an EM-variant with "hard" assignments. Here only one of the responsibilities of each
datapoint is set to 1:
    γ_np = 1  if p = argmax_{q∈Γ} π_q N(x_n | μ_q, Λ_q Λ_qᵀ),  and  γ_np = 0 otherwise.      (4)
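For small networks, equation (4) can be evaluated by brute force: score log(π_p N(x | μ_p, Λ_p Λ_pᵀ)) for every path by composing equations (2)-(3) from the top layer down, then take the argmax per point. The Python sketch below is our own (the data layout mirrors the sampling sketch above):

import itertools
import numpy as np
from scipy.stats import multivariate_normal

def path_logpdf(X, layers, log_pis, path):
    """log( pi_p * N(x | mu_p, Lam_p Lam_p^T) ) for one path p, composing
    mu <- A mu + b and Lam <- A Lam from the top layer down (eqs. 2-3)."""
    d = X.shape[1]
    mu, Lam, lp = np.zeros(d), np.eye(d), 0.0
    for j, pj in enumerate(path):
        A, b = layers[j][pj]
        mu, Lam, lp = A @ mu + b, A @ Lam, lp + log_pis[j][pj]
    return lp + multivariate_normal.logpdf(X, mean=mu, cov=Lam @ Lam.T)

def hard_e_step(X, layers, log_pis):
    """Equation (4) by exhaustive search over all paths; only tractable
    when the product of the layer sizes is small."""
    paths = list(itertools.product(*[range(len(l)) for l in layers]))
    scores = np.stack([path_logpdf(X, layers, log_pis, p) for p in paths])
    return [paths[i] for i in np.argmax(scores, axis=0)]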
Heuristic
Because the number of paths is the product of the number of components per layer (Π_{j=1}^{k} N_j), computing the responsibilities can become intractable for big Deep GMM networks. However, when
using the hard-EM variant (eq. 4), this problem reduces to finding the best path for each datapoint,
for which we can use efficient heuristics. Here we introduce such a heuristic that does not hurt the
performance significantly, while allowing us to train much larger networks.
We optimize the path p = (p1 , p2 , . . . , pk ), which is a multivariate discrete variable, with a coordinate ascent algorithm. This means we change the parameters pi layer per layer, while keeping the
[Figure 2: three panels, (a) Iterations, (b) Reinitializations, and (c) Switch rate during training.]
Figure 2: Visualizations for the introduced E-step heuristic. (a): The average log-likelihood of the
best-path search with the heuristic as a function of the number of iterations (passes) and (b): as a
function of the number of repeats with a different initialization. Plot (c) shows the percentage of data
points that switch to a better path found with a different initialization as a function of the number of
the EM-iterations during training.
parameter values of the other layers constant. After we have changed all the variables one time (one
pass), we can repeat.
The heuristic described above only requires Σ_{j=1}^{k} N_j path evaluations per pass. In Figure 2 we compare the heuristic with the full search. On the left we see that after 3 passes the heuristic converges
to a local optimum. In the middle we see that when repeating the heuristic algorithm a couple of
times with different random initializations, and keeping the best path after each iteration, the loglikelihood converges to the optimum.
In our experiments we initialized the heuristic with the optimal path from the previous E-step (warm
start) and performed the heuristic algorithm for 1 pass. Subsequently we ran the algorithm a
second time with a random initialization for two passes, for the possibility of finding a better optimum
for each datapoint. Each E-step thus required 3(Σ_{j=1}^{k} N_j) path evaluations. In Figure 2(c) we
show an example of the percentage of data points (called the switch-rate) that had a better optimum
with this second initialization for each EM-iteration. We can see from this figure that the switch-rate
quickly becomes very small, which means that using the responsibilities from the previous
E-step is an efficient initialization for the current one. Although the number of path evaluations with
the heuristic is substantially smaller than with the full search, we saw in our experiments that the
performance of the resulting trained Deep GMMs was ultimately similar.
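The coordinate-ascent heuristic then replaces the exhaustive enumeration: it sweeps the layers and re-optimizes one coordinate of the path at a time, with the others held fixed. The sketch below is our own transcription of the described procedure, reusing path_logpdf from the earlier sketch:

import numpy as np

def heuristic_path(x, layers, log_pis, init_path, passes=1):
    """One coordinate-ascent run for a single point x (shape (d,)): in each
    pass, every layer's coordinate is set to its best value given the rest.
    Warm-start with the previous E-step's path; optionally rerun from a
    random init and keep the better of the two results."""
    path = list(init_path)
    for _ in range(passes):
        for j in range(len(layers)):
            scores = []
            for c in range(len(layers[j])):
                cand = path[:j] + [c] + path[j + 1:]
                scores.append(path_logpdf(x[None, :], layers, log_pis, cand)[0])
            path[j] = int(np.argmax(scores))
    return tuple(path)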
3.2 Maximization
In the maximization step, the parameters are updated to maximize the log likelihood of the data,
given the responsibilities. Although standard optimization techniques for training deep networks
can be used (such as SGD), Deep GMMs have some interesting properties that allow us to train
them more efficiently. Because these properties are not obvious at first sight, we will derive the
objective and gradient for the transformation matrices Ai,j in a Deep GMM. After that we will
discuss various ways for optimizing them. For convenience, the derivations in this section are based
on the hard-EM variant and with omission of the bias-terms parameters. Equations without these
simplifications can be obtained in a similar manner.
In the hard-EM variant, it is assumed that each datapoint in the dataset was generated by a path p,
for which γ_np = 1. The likelihood of x given the parameters of the transformations on this path is
    p(x) = |A_{1,p_1}^{-1} ··· A_{k,p_k}^{-1}| N(A_{1,p_1}^{-1} ··· A_{k,p_k}^{-1} x | 0, I_n),      (5)

where we use |·| to denote the absolute value of the determinant. Now let's rewrite:

    z = A_{i+1,p_{i+1}}^{-1} ··· A_{k,p_k}^{-1} x,       (6)
    Q = A_{i,p_i}^{-1},                                  (7)
    R_p = A_{1,p_1}^{-1} ··· A_{i−1,p_{i−1}}^{-1},       (8)
[Figure 3: diagram with N(0, I_n) at the top, a "folded" layer containing R_1, R_2, ..., R_m (all the
layers above the current layer), the current layer Q below it, and z at the bottom.]
Figure 3: Optimization of a transformation Q in a Deep GMM. We can rewrite all the possible paths
in the above layers by "folding" them into one layer, which is convenient for deriving the objective
and gradient equations of Q.
so that we get (omitting the constant term w.r.t. Q):

    log p(x) ∝ log |Q| + log N(R_p Q z | 0, I_n).      (9)
Figure 3 gives a visual overview. We have "folded" the layers above the current layer into one. This
means that each path p through the network above the current layer is equivalent to a transformation
R_p in the folded version. The transformation matrix for which we will derive the objective and
gradient is called Q. The average log-likelihood of all the data points that are generated by paths that
pass through Q is:

    (1/N) Σ_i log p(x_i) ∝ log |Q| + (1/N) Σ_p Σ_{i∈Γ_p} log N(R_p Q z_i | 0, I)      (10)
                         = log |Q| − (1/2) Σ_p π̂_p Tr(Φ_p Qᵀ Σ_p Q),                 (11)

where π̂_p = N_p/N, Φ_p = (1/N_p) Σ_{i∈Γ_p} z_i z_iᵀ and Σ_p = R_pᵀ R_p. For the gradient we get:

    ∇_Q (1/N) Σ_i log p(x_i) = Q^{−ᵀ} − Σ_p π̂_p Σ_p Q Φ_p.                            (12)
Optimization
Notice how in Equation 11 the summation over the data points has been converted to a summation
over covariance matrices: one for each path.¹ If the number of paths is small enough, this means we
can use full gradient updates instead of mini-batched updates (e.g. SGD). The computation of the
covariance matrices is fairly efficient and can be done in parallel. This formulation also allows us to
use more advanced optimization methods, such as LBFGS-B [12].
In the setup described above, we need to keep the transformation Rp constant while optimizing Q.
This is why in each M-step the Deep GMM is optimized layer-wise from top to bottom, updating
one layer at a time. It is possible to go over this process multiple times for each M-step. Important
to note is that this way the optimization of Q does not depend on any other parameters in the same
layer. So for each layer, the optimization of the different nodes can be done in parallel on multiple
cores or machines. Moreover, nodes in the same layer do not share data points when using the EMvariant with hard-assignments. Another advantage is that this method is easy to control, as there
are no learning rates or other optimization parameters to be tuned, when using L-BFGS-B ?out of
the box?. A disadvantage is that one needs to sum over all possible paths above the current node in
the gradient computation. For deeper networks, this may become problematic when optimizing the
lower-level nodes.
Alternatively, one can also evaluate (11) using Kronecker products as

    (1/N) Σ_i log p(x_i) ∝ log |Q| − (1/2) vec(Q)ᵀ { Σ_p π̂_p (Σ_p ⊗ Φ_p) } vec(Q)      (13)

and Equation 12 as

    ∇_Q (1/N) Σ_i log p(x_i) = Q^{−ᵀ} − mat( { Σ_p π̂_p (Σ_p ⊗ Φ_p) } vec(Q) ).          (14)

¹ Actually we only need to sum over the number of possible transformations R_p above the node Q.
Here vec is the vectorization operator and mat its inverse. With these formulations we don't have to
loop over the number of paths anymore during the optimization. This makes the inner optimization
with LBFGS-B even faster. We only have to construct Σ_p π̂_p (Σ_p ⊗ Φ_p) once, which is also easy to
parallelize. These equations thus allow us to train even bigger Deep GMM architectures. A disadvantage, however, is that it requires the dimensionality of the data to be small enough to efficiently
construct the Kronecker products.
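In code, equations (13)-(14) reduce to one Kronecker accumulation per node. The sketch below is ours; it returns the objective and gradient in the form expected by an L-BFGS-B driver (e.g. scipy.optimize.minimize on the negated values). We take vec() row-major, numpy's default reshape order, which is the convention under which the factor appears as Σ_p ⊗ Φ_p:

import numpy as np

def q_objective_and_grad(Q, Sigmas, Phis, pis):
    """Objective (13) and gradient (14) for one node Q, given per-path
    Sigma_p = R_p^T R_p, Phi_p = (1/N_p) sum_i z_i z_i^T and pi_p = N_p/N.
    M is built once; the inner optimizer never loops over paths again."""
    M = sum(pi * np.kron(S, P) for pi, S, P in zip(pis, Sigmas, Phis))
    v = Q.reshape(-1)                          # row-major vec(Q)
    _, logabsdet = np.linalg.slogdet(Q)        # log |det Q|
    obj = logabsdet - 0.5 * v @ M @ v
    grad = np.linalg.inv(Q).T - (M @ v).reshape(Q.shape)
    return obj, grad                           # negate both for minimization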
When the aforementioned formulations are intractable because there are too many layers in the
Deep GMM and the data dimensionality is too high, we can also optimize the parameters using backpropagation with a minibatch algorithm, such as Stochastic Gradient Descent (SGD). This approach
works for much deeper networks, because we don't need to sum over the number of paths. From
Equation 9 we see that this is basically the same as minimizing the L2 norm of Rp Qz, with log |Q|
as regularization term. Disadvantages include the use of learning rates and other parameters such as
momentum, which requires more engineering and fine-tuning.
The most naive way is to optimize the deep GMM with SGD is by simultaneously optimizing all
parameters, as is common in neural networks. When doing this it is important that the parameters of
all nodes are converged enough in each M-step, otherwise nodes that are not optimized enough may
have very low responsibilities in the following E-step(s). This results in whole parts of the network
becoming unused, which is the equivalent of empty clusters during GMM or k-means training. An
alternative way of using SGD is again by optimizing the Deep GMM layer by layer. This has
the advantage that we have more control over the optimization, which prevents the aforementioned
problem of unused paths. But more importantly, we can now again parallelize over the number of
nodes per layer.
4 Experiments and Results
For our experiments we used the Berkeley Segmentation Dataset (BSDS300) [13], which is a commonly used benchmark for density modeling of image patches and the tiny images dataset [14]. For
BSDS300 we follow the same setup of Uria et al. [15], which is best practice for this dataset. 8 by 8
grayscale patches are drawn from images of the dataset. The train and test sets consists of 200 and
100 images respectively. Because each pixel is quantized, it can only contain integer values between
0 and 255. To make the integer pixel values continuous, uniform noise (between 0 and 1) is added.
Afterwards, the images are divided by 256 so that the pixel values lie in the range [0, 1]. Next,
the patches are preprocessed by removing the mean pixel value of every image patch. Because this
reduces the implicit dimensionality of the data, the last pixel value is removed. This results in the
data points having 63 dimensions. For the tiny images dataset we rescale the images to 8 by 8 and
then follow the same setup. This way we also have low resolution image data to evaluate on.
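The preprocessing pipeline just described is easy to reproduce exactly; the following sketch is our transcription of it (dequantize, rescale, center each patch, and drop the last dimension):

import numpy as np

def preprocess_patches(patches, rng=None):
    """patches: integer array of shape (n, 8, 8) with values in {0, ..., 255}.
    Returns the (n, 63) real-valued data points used in the experiments."""
    rng = np.random.default_rng() if rng is None else rng
    x = patches.reshape(len(patches), -1).astype(float)
    x = (x + rng.uniform(size=x.shape)) / 256.0   # dequantize to [0, 1]
    x -= x.mean(axis=1, keepdims=True)            # remove per-patch mean
    return x[:, :-1]                              # drop the now-redundant pixel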
In all the experiments described in this section, we used the following setup for training Deep
GMMs. We used the hard-EM variant, with the aforementioned heuristic in the E-step. For each
M-step we used LBFGS-B for 1000 iterations by using equations (13) and (14) for the objective and
gradient. The total number of iterations we used for EM was fixed to 100, although fewer iterations
were usually sufficient. The only hyperparameters were the number of components for each layer,
which were optimized on a validation set.
Because GMMs are in theory able to represent the same probability density functions as a Deep
GMM, we first need to assess wether using multiple layers with a deep GMM improves performance.
The results of a GMM (one layer) and Deep GMMs with two or three layers are given in 4(a). As
we increase the complexity and number of parameters of the model by changing the number of
components in the top layer, a plateau is reached and the models ultimately start overfitting. For the
deep GMMs, the number of components in the other layers was kept constant (5 components). The
Deep GMMs seem to generalize better. Although they have a similar number of parameters, they
are able to model more complex relationships, without overfitting. We also tried this experiment on
a more difficult dataset by using highly downscaled images from the tiny images dataset, see Figure
[Figure 4: two panels, (a) BSDS300 and (b) Tiny Images.]
Figure 4: Performance of the Deep GMM for different number of layers, and the GMM (one layer).
All models were trained on the same dataset of 500 Thousand examples. For comparison we varied
the number of components in the top layer.
4(b). Because there are fewer correlations between the pixels of a downscaled image than between
those of an image patch, the average log likelihood values are lower. Overall we can see that the
Deep GMM performs well on both low and high resolution natural images.
Next we will compare the deep GMM with other published methods on this task. Results are shown
in Table 1. The first method is the RNADE model, a new deep density estimation technique which
is an extension of the NADE model for real valued data [16, 15]. EoRNADE, which stands for
ensemble of RNADE models, is currently the state of the art. We also report the log-likelihood
results of two mixture models: the GMM and the Student-T Mixture model, from [2]. Overall
we see that the Deep GMM has a strong performance. It scores better than other single models
(RNADE, STM), but not as well as the ensemble of RNADE models.
Model                                 | Average log likelihood
RNADE: 1hl, 2hl, 3hl, 4hl, 5hl, 6hl   | 143.2, 149.2, 152.0, 153.6, 154.7, 155.2
EoRNADE (6hl)                         | 157.0
GMM                                   | 153.7
STM                                   | 155.3
Deep GMM - 3 layers                   | 156.2
Table 1: Density estimation results on image patch modeling using the BSDS300 dataset. Higher
log-likelihood values are better. "hl" stands for the number of hidden layers in the RNADE models.
5 Conclusion
In this work we introduced the deep Gaussian Mixture Model: a novel density estimation technique
for modeling real-valued data. We show that the Deep GMM is on par with the current state of the
art in image patch modeling, and surpasses other mixture models. We conclude that the Deep GMM
is a viable and scalable alternative for unsupervised learning. The deep GMM tackles unsupervised
learning from a different angle than other recent deep unsupervised learning techniques [17, 18, 19],
which makes it very interesting for future research.
In follow-up work, we would like to make Deep GMMs suitable for larger images and other highdimensional data. Locally connected filters, such as convolutions would be useful for this. We
would also like to extend our method to modeling discrete data. Deep GMMs are currently only
designed for continuous real-valued data, but our approach of reparametrizing the model into layers
of successive transformations can also be applied to other types of mixture distributions. We would
also like to compare this extension to other discrete density estimators such as Restricted Boltzmann
Machines, Deep Belief Networks and the NADE model [15].
References
[1] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image
restoration. In International Conference on Computer Vision, 2011.
[2] Aäron van den Oord and Benjamin Schrauwen. The student-t mixture model as a natural image
patch prior with application to image compression. Journal of Machine Learning Research,
2014.
[3] Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Frédéric Morin, and Jean-Luc Gauvain.
Neural probabilistic language models. In Innovations in Machine Learning. Springer, 2006.
[4] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word
representations in vector space. In proceedings of Workshop at ICLR, 2013.
[5] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. One-shot learning by
inverting a compositional causal process. In Advances in Neural Information Processing Systems, 2013.
[6] Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. How to construct
deep recurrent neural networks. In Proceedings of the International Conference on Learning
Representations, 2013.
[7] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[8] Yoshua Bengio. Learning deep architectures for ai. Foundations and Trends R in Machine
Learning, 2(1), 2009.
[9] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. In Proceedings of the International Conference on Learning Representations, 2014.
[10] Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Deep mixtures of factor analysers.
In International Conference on Machine Learning, 2012.
[11] Zoubin Ghahramani and Geoffrey E Hinton. The em algorithm for mixtures of factor analyzers.
Technical report, University of Toronto, 1996.
[12] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm
for bound constrained optimization. SIAM Journal on Scientific Computing, 1995.
[13] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the International Conference on Computer Vision.
IEEE, 2001.
[14] Antonio Torralba, Robert Fergus, and William T Freeman. 80 million tiny images: A large data
set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 2008.
[15] Benigno Uria, Iain Murray, and Hugo Larochelle. A deep and tractable density estimator. In
Proceedings of the International Conference on Machine Learning, 2013.
[16] Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, 2013.
[17] Karol Gregor, Andriy Mnih, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning, 2013.
[18] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic back-propagation
and variational inference in deep latent gaussian models. In International Conference on Machine Learning, 2014.
[19] Yoshua Bengio, Eric Thibodeau-Laufer, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. In International Conference on Machine Learning, 2013.
Robust Kernel Density Estimation by Scaling and
Projection in Hilbert Space
Clayton D. Scott
Deparment of EECS
Univeristy of Michigan
Ann Arbor, MI 48109
clayscot@umich.edu
Robert A. Vandermeulen
Department of EECS
University of Michigan
Ann Arbor, MI 48109
rvdm@umich.edu
Abstract
While robust parameter estimation has been well studied in parametric density estimation, there has been little investigation into robust density estimation in the
nonparametric setting. We present a robust version of the popular kernel density
estimator (KDE). As with other estimators, a robust version of the KDE is useful
since sample contamination is a common issue with datasets. What "robustness"
means for a nonparametric density estimate is not straightforward and is a topic
we explore in this paper. To construct a robust KDE we scale the traditional KDE
and project it to its nearest weighted KDE in the L2 norm. This yields a scaled
and projected KDE (SPKDE). Because the squared L2 norm penalizes point-wise
errors superlinearly this causes the weighted KDE to allocate more weight to high
density regions. We demonstrate the robustness of the SPKDE with numerical
experiments and a consistency result which shows that asymptotically the SPKDE
recovers the uncontaminated density under sufficient conditions on the contamination.
1 Introduction
The estimation of a probability density function (pdf) from a random sample is a ubiquitous problem
in statistics. Methods for density estimation can be divided into parametric and nonparametric,
depending on whether parametric models are appropriate. Nonparametric density estimators (NDEs)
offer the advantage of working under more general assumptions, but they also have disadvantages
with respect to their parametric counterparts. One of these disadvantages is the apparent difficulty in
making NDEs robust, which is desirable when the data follow not the density of interest, but rather
a contaminated version thereof. In this work we propose a robust version of the KDE, which serves
as the workhorse among NDEs [11, 10].
We consider the situation where most observations come from a target density ftar but some observations are drawn from a contaminating density fcon, so our observed samples come from the density fobs = (1 − ε)ftar + εfcon. It is not known which component a given observation comes from. When considering this scenario in the infinite sample setting we would like to construct some transform that, when applied to fobs, yields ftar. We introduce a new formalism to describe transformations that "decontaminate" fobs under sufficient conditions on ftar and fcon. We focus on a
specific nonparametric condition on ftar and fcon that reflects the intuition that the contamination
manifests in low density regions of ftar . In the finite sample setting, we seek a NDE that converges
to ftar asymptotically. Thus, we construct a weighted KDE where the kernel weights are lower in
low density regions and higher in high density regions. To do this we multiply the standard KDE
by a real value greater than one (scale) and then find the closest pdf to the scaled KDE in the L2
norm (project), resulting in a scaled and projected kernel density estimator (SPKDE). Because the
squared L2 norm penalizes point-wise differences between functions quadratically, this causes the SPKDE to draw weight from the low density areas of the KDE and move it to high density areas to
get a more uniform difference to the scaled KDE. The asymptotic limit of the SPKDE is a scaled
and shifted version of fobs . Given our proposed sufficient conditions on ftar and fcon , the SPKDE
can asymptotically recover ftar .
A different construction for a robust kernel density estimator, the aptly named "robust kernel density estimator" (RKDE), was developed by Kim & Scott [6]. In that paper the RKDE was analytically
and experimentally shown to be robust, but no consistency result was presented. Vandermeulen
& Scott [15] proved that a certain version of the RKDE converges to fobs . To our knowledge the
convergence of the SPKDE to a transformed version of fobs , which is equal to ftar under sufficient
conditions on ftar and fcon , is the first result of its type.
In this paper we present a new formalism for nonparametric density estimation, necessary and sufficient conditions for decontamination, the construction of the SPKDE, and a proof of consistency.
We also include experimental results applying the algorithm to benchmark datasets with comparisons to the RKDE, traditional KDE, and an alternative robust KDE implementation. Many of our
results and proof techniques are novel in KDE literature. Proofs are contained in the supplemental
material.
2 Nonparametric Contamination Models and Decontamination Procedures for Density Estimation
What assumptions are necessary and sufficient on a target and contaminating density in order to
theoretically recover the target density is a question that, to the best of our knowledge, is completely
unexplored in a nonparametric setting. We will approach this problem in the infinite sample setting,
where we know fobs = (1 − ε)ftar + εfcon and ε, but do not know ftar or fcon. To this end we introduce a new formalism. Let D be the set of all pdfs on R^d. We use the term contamination model to refer to any subset V ⊂ D × D, i.e. a set of pairs (ftar, fcon). Let R_ε : D → D be a set of transformations on D indexed by ε ∈ [0, 1). We say that R_ε decontaminates V if for all (ftar, fcon) ∈ V and ε ∈ [0, 1) we have R_ε((1 − ε)ftar + εfcon) = ftar.
One may wonder whether there exists some set of contaminating densities, Dcon, and a transformation, R_ε, such that R_ε decontaminates D × Dcon. In other words, does there exist some set of contaminating densities for which we can recover any target density? It turns out this is impossible if Dcon contains at least two elements.
Proposition 1. Let Dcon ⊂ D contain at least two elements. There does not exist any transformation R_ε which decontaminates D × Dcon.
Proof. Let f ∈ D and g, g′ ∈ Dcon such that g ≠ g′. Let ε ∈ (0, 1/2). Clearly
    ftar := (f(1 − 2ε) + εg)/(1 − ε)   and   f′tar := (f(1 − 2ε) + εg′)/(1 − ε)
are both elements of D. Note that
    (1 − ε)ftar + εg′ = (1 − ε)f′tar + εg.
In order for R_ε to decontaminate D with respect to Dcon, we need R_ε((1 − ε)ftar + εg′) = ftar and R_ε((1 − ε)f′tar + εg) = f′tar, which is impossible since ftar ≠ f′tar.
This proposition imposes significant limitations on what contamination models can be decontaminated. For example, suppose we know that fcon is Gaussian with known covariance matrix and unknown mean. Proposition 1 says we cannot design R_ε so that it can decontaminate (1 − ε)ftar + εfcon for all ftar ∈ D. In other words, it is impossible to design an algorithm capable of removing Gaussian contamination (for example) from arbitrary target densities. Furthermore, if R_ε decontaminates V and V is fully nonparametric (i.e. for all f ∈ D there exists some f′ ∈ D such that (f, f′) ∈ V) then for each (ftar, fcon) pair, fcon must satisfy some properties which depend on ftar.
2.1 Proposed Contamination Model
For a function f : R^d → R let supp(f) denote the support of f. We introduce the following contamination assumption:
Assumption A. For the pair (ftar, fcon), there exists u such that fcon(x) = u for almost all (in the Lebesgue sense) x ∈ supp(ftar) and fcon(x′) ≤ u for almost all x′ ∉ supp(ftar).
See Figure 1 for an example of a density satisfying this assumption. Because fcon must be uniform over the support of ftar, a consequence of Assumption A is that supp(ftar) has finite Lebesgue measure. Let VA be the contamination model containing all pairs of densities which satisfy Assumption A. Note that ∪_{(ftar, fcon) ∈ VA} {ftar} is exactly the set of all densities whose support has finite Lebesgue measure, which includes all densities with compact support.
The uniformity assumption on fcon is a common "noninformative" assumption on the contamination. Furthermore, this assumption is supported by connections to one-class classification. In that
problem, only one class (corresponding to our ftar ) is observed for training, but the testing data is
drawn from fobs and must be classified. The dominant paradigm for nonparametric one-class classification is to estimate a level set of ftar from the one observed training class [14, 7, 13, 16, 12, 9],
and classify test data according to that level set. Yet level sets only yield optimal classifiers (i.e.
likelihood ratio tests) under the uniformity assumption on fcon , so that these methods are implicitly
adopting this assumption. Furthermore, a uniform contamination prior has been shown to optimize
the worst-case detection rate among all choices for the unknown contamination density [5]. Finally,
our experiments demonstrate that the SPKDE works well in practice, even when Assumption A is
significantly violated.
2.2 Decontamination Procedure
Under Assumption A, ftar is present in fobs and its shape is left unmodified (up to a multiplicative factor) by fcon. To recover ftar it is necessary to first scale fobs by β = 1/(1 − ε), yielding
    (1/(1 − ε))((1 − ε)ftar + εfcon) = ftar + (ε/(1 − ε))fcon.    (1)
After scaling we would like to slice off (ε/(1 − ε))fcon from the bottom of ftar + (ε/(1 − ε))fcon. This transform is achieved by
    max{0, ftar + (ε/(1 − ε))fcon − α},    (2)
where α is set such that (2) is a pdf (which in this case is achieved with α = uε/(1 − ε)). We will now show that this transform is well defined in a general sense. Let f be a pdf and let
    g_{β,α} = max{0, βf(·) − α},
where the max is defined pointwise. The following lemma shows that it is possible to slice off the bottom of any scaled pdf to get a transformed pdf and that the transformed pdf is unique.
Lemma 1. For fixed β > 1 there exists a unique α′ > 0 such that ‖g_{β,α′}‖_{L1} = 1.
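Since α ↦ ‖g_{β,α}‖_{L1} is continuous and strictly decreasing from β down to 0, the level α′ of Lemma 1 can be located numerically by bisection. The following is a minimal sketch on a densely discretized univariate pdf; the function name, grid representation, and tolerance are our own illustration, not from the paper:

    import numpy as np

    def slice_to_pdf(f_vals, dx, beta, tol=1e-10):
        # Find alpha' such that max(beta * f - alpha', 0) integrates to 1 (Lemma 1).
        # f_vals: values of a pdf f on a uniform grid with spacing dx; beta > 1.
        mass = lambda alpha: np.maximum(beta * f_vals - alpha, 0.0).sum() * dx
        lo, hi = 0.0, beta * f_vals.max()   # mass(lo) = beta > 1 and mass(hi) = 0
        while hi - lo > tol:                # mass is monotone decreasing in alpha
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if mass(mid) > 1.0 else (lo, mid)
        return np.maximum(beta * f_vals - 0.5 * (lo + hi), 0.0)

Running this with β = 1/(1 − ε) reproduces the scale-then-shift picture of Figure 2.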
Figure 2 demonstrates this transformation applied to a pdf. We define the following transform R_ε^A : D → D, where R_ε^A(f) = max{(1/(1 − ε))f(·) − α, 0} and α is such that R_ε^A(f) is a pdf.
Proposition 2. R_ε^A decontaminates VA.
The proof of this proposition is an intermediate step in the proof of Theorem 2. For any two subsets V, V′ ⊂ D × D, R_ε decontaminates V and V′ iff R_ε decontaminates V ∪ V′. Because of this, every decontaminating transform has a maximal set which it can decontaminate. Assumption A is both sufficient and necessary for decontamination by R_ε^A, i.e. the set VA is maximal.
Proposition 3. Let {(q, q′)} ⊂ D × D with (q, q′) ∉ VA. Then R_ε^A cannot decontaminate {(q, q′)}.
The proof of this proposition is in the supplementary material.
2.3 Other Possible Contamination Models
[Figure 1: Density with contamination satisfying Assumption A; the plot overlays (1 − ε)ftar and the uniform component εfcon.]
The model described previously is just one of many possible models. An obvious approach to robust kernel density estimation is to use an anomaly detection algorithm and construct the KDE using only non-anomalous samples. We will investigate this model under a couple of anomaly detection schemes and describe their properties.
[Figure 2: Infinite sample SPKDE transform (panels: Original Density, Scaled Density, Shifted to pdf; the quantities β − 1 and 1 − 1/β annotate the sliced mass). Arrows indicate the area under the line.]
One of the most common methods for anomaly detection is the level set method. For a probability measure μ this method attempts to find the set S with smallest Lebesgue measure such that μ(S) is above some threshold, t, and declares samples outside of that set as being anomalous. For a density f this is equivalent to finding λ such that ∫_{{x | f(x) ≥ λ}} f(y)dy = t and declaring samples where f(X) < λ as being anomalous. Let X1, ..., Xn be iid samples from fobs. Using the level set method for a robust KDE, we would construct a density f̂obs which is an estimate of fobs. Next we would select some threshold λ > 0 and declare a sample, Xi, as being anomalous if f̂obs(Xi) < λ. Finally we would construct a KDE using the non-anomalous samples. Let 1{·} be the indicator function. Applying this method in the infinite sample situation, i.e. f̂obs = fobs, would cause our non-anomalous samples to come from the density p(x) = fobs(x)1{fobs(x) ≥ λ}/γ where γ = ∫ 1{fobs(y) > λ} fobs(y)dy. See Figure 3. Perfect recovery of ftar using this method requires εfcon(x) ≤ (1 − ε)ftar(x′) for all x and all x′ ∈ supp(ftar), and that fcon and ftar have disjoint supports. The first assumption means that this density estimator can only recover ftar if it has a drop off on the boundary of its support, whereas Assumption A only requires that ftar have finite support. See the last diagram in Figure 3. Although these assumptions may be reasonable in certain situations, we find them less palatable than Assumption A. We also evaluate this approach experimentally later and find that it performs poorly.
Another approach based on anomaly detection would be to find the connected components of fobs and declare those that are, in some sense, small as being anomalous. A "small" connected component may be one that integrates to a small value, or which has a small mode. Unfortunately this approach also assumes that ftar and fcon have disjoint supports. There are also computational issues with this anomaly detection scheme; finding connected components, finding modes, and numerical integration are computationally difficult.
[Figure 3: Infinite sample version of the level set rejection KDE (panels: Original Density; Threshold at λ; Set density under threshold to 0; Normalize to integrate to 1).]
To some degree, R_ε^A actually achieves the objectives of the previous two robust KDEs. For the first model, R_ε^A does indeed set those regions of the pdf that are below some threshold to zero. For the second, if the magnitude of the level at which we choose to slice off the bottom of the contaminated density is larger than the mode of the anomalous component then the anomalous component will be eliminated.
3 Scaled Projection Kernel Density Estimator
Here we consider approximating R_ε^A in a finite sample situation. Let f ∈ L2(R^d) be a pdf and X1, ..., Xn be iid samples from f. Let k_σ(x, x′) be a radial smoothing kernel with bandwidth σ such that k_σ(x, x′) = σ^{−d} q(‖x − x′‖_2/σ), where q(‖·‖_2) ∈ L2(R^d) and is a pdf. The classic kernel density estimator is
    f̂_σ^n := (1/n) Σ_{i=1}^n k_σ(·, Xi).
In practice ε is usually not known and Assumption A is violated. Because of this we will scale our density by β > 1 rather than 1/(1 − ε). For a density f define
    Q_β(f) := max{βf(·) − α, 0},
where α = α(β) is set such that the RHS is a pdf. β can be used to tune robustness, with larger β corresponding to more robustness (setting β to 1 in all the following transforms simply yields the KDE). Given a KDE we would ideally like to apply Q_β directly and search over α until max{βf̂_σ^n(·) − α, 0} integrates to 1. Such an estimate requires multidimensional numerical integration and is not computationally tractable. The SPKDE is an alternative approach that always yields a density and manifests the transformed density in its asymptotic limit.
We now introduce the construction of the SPKDE. Let D_σ^n be the convex hull of k_σ(·, X1), ..., k_σ(·, Xn) (the space of weighted kernel density estimators). The SPKDE is defined as
    f_{σ,β}^n := argmin_{g ∈ D_σ^n} ‖βf̂_σ^n − g‖_{L2},
which is guaranteed to have a unique minimizer since D_σ^n is closed and convex and we are projecting in a Hilbert space ([1] Theorem 3.14). If we represent f_{σ,β}^n in the form
    f_{σ,β}^n = Σ_{i=1}^n a_i k_σ(·, Xi),
then the minimization problem is a quadratic program over the vector a = [a_1, ..., a_n]ᵀ, with a restricted to the probabilistic simplex, Δ^n. Let G be the Gram matrix of k_σ(·, X1), ..., k_σ(·, Xn), that is,
    G_ij = ⟨k_σ(·, Xi), k_σ(·, Xj)⟩_{L2} = ∫ k_σ(x, Xi) k_σ(x, Xj) dx.
Let 1 be the ones vector and b = βG1/n; then the quadratic program is
    min_{a ∈ Δ^n} aᵀGa − 2bᵀa.
Since G is a Gram matrix, and therefore positive semidefinite, this quadratic program is convex. Furthermore, the integral defining G_ij can be computed in closed form for many kernels of interest.
For example, for the Gaussian kernel
    k_σ(x, x′) = (2πσ²)^{−d/2} exp(−‖x − x′‖²/(2σ²))   ⟹   G_ij = k_{√2σ}(Xi, Xj),
and for the Cauchy kernel [2]
    k_σ(x, x′) = (Γ((1 + d)/2)/(π^{(d+1)/2} σ^d)) (1 + ‖x − x′‖²/σ²)^{−(1+d)/2}   ⟹   G_ij = k_{2σ}(Xi, Xj).
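Putting the pieces together, the following sketch assembles the SPKDE for the Gaussian kernel using the closed form G_ij = k_{√2σ}(Xi, Xj) and b = βG1/n, then minimizes aᵀGa − 2bᵀa over the simplex. For brevity we use multiplicative (exponentiated-gradient) updates, which keep the iterates on Δ^n; the paper itself uses projected gradient descent (see Section 4). Function names, iteration count, and step size are illustrative assumptions:

    import numpy as np

    def gaussian_kernel(sq_dists, sigma, d):
        return np.exp(-sq_dists / (2 * sigma**2)) / (2 * np.pi * sigma**2)**(d / 2)

    def spkde_weights(X, sigma, beta, iters=2000, step=0.5):
        n, d = X.shape
        sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
        G = gaussian_kernel(sq, np.sqrt(2) * sigma, d)   # G_ij = k_{sqrt(2) sigma}(X_i, X_j)
        b = beta * G.sum(1) / n                          # b = beta * G * 1 / n
        a = np.full(n, 1.0 / n)                          # start at the uniform KDE weights
        for _ in range(iters):
            grad = 2 * (G @ a - b)                       # gradient of a'Ga - 2b'a
            a = a * np.exp(-step * grad)                 # multiplicative step stays nonnegative
            a /= a.sum()                                 # renormalize onto the simplex
        return a                                         # SPKDE is sum_i a_i k_sigma(., X_i)

The step size may need tuning for small bandwidths, where kernel values (and hence gradients) are large.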
We now present some results on the asymptotic behavior of the SPKDE. Let D be the set of all pdfs in L2(R^d). The infinite sample version of the SPKDE is
    f′_β = argmin_{h ∈ D} ‖βf − h‖²_{L2}.
It is worth noting that projection operators in Hilbert space, like the one above, are known to be well defined if the convex set we are projecting onto is closed and convex. D is not closed in L2(R^d), but this turns out not to be an issue because of the form of βf. For details see the proof of Lemma 2 in the supplemental material.
Lemma 2. f′_β = max{βf(·) − α, 0} where α is set such that max{βf(·) − α, 0} is a pdf.
Given the same rate on bandwidth necessary for consistency of the traditional KDE, the SPKDE converges to its infinite sample version in its asymptotic limit.
Theorem 1. Let f ∈ L2(R^d). If n → ∞ and σ → 0 with nσ^d → ∞, then ‖f_{σ,β}^n − f′_β‖_{L2} → 0 in probability.
Because f_{σ,β}^n is a sequence of pdfs and f′_β ∈ L2(R^d), it is possible to show L2 convergence implies L1 convergence.
Corollary 1. Given the conditions in the previous theorem statement, ‖f_{σ,β}^n − f′_β‖_{L1} → 0 in probability.
To summarize, the SPKDE converges to a transformed version of f. In the next section we will show that under Assumption A and with β = 1/(1 − ε), the SPKDE converges to ftar.
3.1 SPKDE Decontamination
Let ftar ∈ L2(R^d) be a pdf having support with finite Lebesgue measure and let ftar and fcon satisfy Assumption A. Let X1, X2, ..., Xn be iid samples from fobs = (1 − ε)ftar + εfcon with ε ∈ [0, 1). Finally let f_{σ,β}^n be the SPKDE constructed from X1, ..., Xn, having bandwidth σ and robustness parameter β. We have
Theorem 2. Let β = 1/(1 − ε). If n → ∞ and σ → 0 with nσ^d → ∞, then ‖f_{σ,β}^n − ftar‖_{L1} → 0 in probability.
To our knowledge this result is the first of its kind, wherein a nonparametric density estimator is able to asymptotically recover the underlying density in the presence of contaminated data.
4 Experiments
For all of the experiments optimization was performed using projected gradient descent. The projection onto the probabilistic simplex was done using the algorithm developed in [4] (which was
actually originally discovered a few decades ago [3, 8]).
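For completeness, a compact transcription of that sort-based simplex projection (variable names are ours) and the resulting projected gradient step are sketched below:

    import numpy as np

    def project_simplex(v):
        # Euclidean projection of v onto {a : a >= 0, sum(a) = 1}, following [4].
        u = np.sort(v)[::-1]
        cumsum = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > cumsum - 1.0)[0][-1]
        tau = (cumsum[rho] - 1.0) / (rho + 1.0)
        return np.maximum(v - tau, 0.0)

    def pgd_step(a, G, b, step):
        # One projected gradient step for min_a a'Ga - 2b'a over the simplex.
        return project_simplex(a - step * 2 * (G @ a - b))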
4.1 Synthetic Data
To show that the SPKDE's theoretical properties are manifested in practice we conducted an idealized experiment where the contamination is uniform and the contamination proportion is known. Figure 4 exhibits the ability of the SPKDE to compensate for uniform noise. Samples for the density estimator came from a mixture of the "Target" density with a uniform contamination on [−2, 2], sampling from the contamination with probability ε = 0.2. This experiment used 500 samples and the robustness parameter β was set to 1/(1 − ε) = 5/4 (the value for perfect asymptotic decontamination). The SPKDE performs well in this situation and yields a scaled and shifted version of the standard KDE. This scale and shift is especially evident in the preservation of the bump on the right hand side of Figure 4.
4.2 Datasets
In our remaining experiments we investigate two performance metrics for different amounts of contamination. We perform our experiments on 12 classification datasets (names given in the supplemental material) where the 0 label is used as the target density and the 1 label is the anomalous contamination. This experimental setup does not satisfy Assumption A. The training datasets are constructed with n0 samples from label 0 and (ε/(1 − ε))n0 samples from label 1, thus making an ε proportion of our samples come from the contaminating density. For our experiments we use the values ε = 0, 0.05, 0.1, 0.15, 0.20, 0.25, 0.30. Given some dataset we are interested in how well our density estimators f̂ estimate the density of the 0 class of our dataset, ftar. Each test is performed on 15 permutations of the dataset. The experimental setup here is similar to the setup in Kim & Scott [6], the most significant difference being that ε is set differently.
4.3 Performance Criteria
First we investigate the Kullback-Leibler (KL) divergence
    D_KL(f̂ ‖ f0) = ∫ f̂(x) log(f̂(x)/f0(x)) dx.
This KL divergence is large when f̂ estimates f0 to have mass where it does not. For example, in our context, f̂ makes mistakes because of outlying contamination. We estimate this KL divergence as follows. Since we do not have access to f0, it is estimated from the testing sample using a KDE, f̃0. The bandwidth for f̃0 is set using the testing data with a LOOCV line search minimizing D_KL(f0 ‖ f̃0), which is described in more detail below. We then approximate the integral using a sample mean by generating samples {x′_i}_{i=1}^{n′} from f̂ and using the estimate
    D_KL(f̂ ‖ f0) ≈ (1/n′) Σ_{i=1}^{n′} log(f̂(x′_i)/f̃0(x′_i)).
The number of generated samples n′ is set to double the number of training samples.
[Figure 4: KDE and SPKDE in the presence of uniform noise (curves: KDE, SPKDE, Target).]
Since KL divergence isn't symmetric we also investigate
    D_KL(f0 ‖ f̂) = ∫ f0(x) log(f0(x)/f̂(x)) dx = C − ∫ f0(y) log f̂(y) dy,
where C is a constant not depending on f̂. This KL divergence is large when f0 has mass where f̂ does not. The final term is easy to estimate using expectation. Let {x″_i}_{i=1}^{n″} be testing samples from f0 (not used for training). The following is a reasonable approximation:
    ∫ f0(y) log f̂(y) dy ≈ (1/n″) Σ_{i=1}^{n″} log f̂(x″_i).
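In code, both criteria reduce to sample averages of log-density ratios. A minimal sketch, assuming f_hat and f0_tilde are vectorized density evaluators and sample_f_hat draws from f̂ (all three are hypothetical helpers, not from the paper):

    import numpy as np

    def kl_fhat_to_f0(f_hat, f0_tilde, sample_f_hat, n_prime):
        # D_KL(f_hat || f0) ~ mean of log(f_hat(x') / f0_tilde(x')) with x' ~ f_hat
        xs = sample_f_hat(n_prime)
        return np.mean(np.log(f_hat(xs)) - np.log(f0_tilde(xs)))

    def kl_f0_to_fhat_term(f_hat, test_samples):
        # D_KL(f0 || f_hat) = C - E_{f0}[log f_hat]; report the estimable term only.
        return -np.mean(np.log(f_hat(test_samples)))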
For a given performance metric and contamination amount, we compare the mean performance of two density estimators across datasets using the Wilcoxon signed rank test [17]. Given N datasets we first rank the datasets according to the absolute difference between performance criteria, with h_i being the rank of the i-th dataset. For example, if the j-th dataset has the largest absolute difference we set h_j = N, and if the k-th dataset has the smallest absolute difference we set h_k = 1. We let R1 be the sum of the h_i where method one's metric is greater than method two's, and R2 be the sum of the h_i where method two's metric is larger. The test statistic is min(R1, R2), which we do not report. Instead we report R1 and R2 and the p-value that the two methods do not perform the same on average. R_i < R_j is indicative of method i performing better than method j.
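The rank sums R1 and R2 can be computed directly, with the p-value taken from a standard implementation of the test; a sketch (ties across datasets are ignored for simplicity, an assumption on our part):

    import numpy as np
    from scipy.stats import wilcoxon

    def signed_rank_summary(perf1, perf2):
        diff = np.asarray(perf1) - np.asarray(perf2)
        ranks = np.argsort(np.argsort(np.abs(diff))) + 1   # ranks 1..N by |difference|
        r1 = ranks[diff > 0].sum()                         # method one's metric larger
        r2 = ranks[diff < 0].sum()                         # method two's metric larger
        return r1, r2, wilcoxon(perf1, perf2).pvalue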
4.4 Methods
The data were preprocessed by scaling to fit in the unit cube. This scaling technique was chosen over whitening because of issues with singular covariance matrices. The Gaussian kernel was used for all density estimates. For each permutation of each dataset, the bandwidth parameter is set using the training data with a LOOCV line search minimizing D_KL(fobs ‖ f̂), where f̂ is the KDE based on the contaminated data and fobs is the observed density. This metric was used in order to maximize the performance of the traditional KDE in KL divergence metrics. For the SPKDE the parameter β was chosen to be 2 for all experiments. This choice of β is based on a few preliminary experiments for which it yielded good results over various sample contamination amounts.
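Minimizing D_KL(fobs ‖ f̂) over σ is equivalent to maximizing the leave-one-out log-likelihood of the KDE, which has a simple vectorized form for the Gaussian kernel. A sketch of the line-search criterion (grid and names are ours):

    import numpy as np

    def loo_loglik(X, sigma):
        # Leave-one-out log-likelihood of the Gaussian KDE with bandwidth sigma.
        n, d = X.shape
        sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
        K = np.exp(-sq / (2 * sigma**2)) / (2 * np.pi * sigma**2)**(d / 2)
        np.fill_diagonal(K, 0.0)               # exclude each point from its own KDE
        return np.log(K.sum(1) / (n - 1)).sum()

    # line search: sigma_hat = max(sigma_grid, key=lambda s: loo_loglik(X, s))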
Table 1: Wilcoxon signed rank test results

Wilcoxon Test Applied to D_KL(f̂ ‖ f0)
ε         0      0.05   0.1    0.15   0.2    0.25   0.3
SPKDE     5      0      1      2      0      0      0
KDE       73     78     77     76     78     78     78
p-value   .0049  5e-4   1e-3   .0015  5e-4   5e-4   5e-4
SPKDE     53     59     58     67     63     61     63
RKDE      25     19     20     11     15     17     15
p-value   0.31   0.13   0.15   .027   .064   .092   .064
SPKDE     0      0      1      1      0      2      0
rejKDE    78     78     77     77     78     76     78
p-value   5e-4   5e-4   1e-3   1e-3   5e-4   .0015  5e-4

Wilcoxon Test Applied to D_KL(f0 ‖ f̂)
ε         0      0.05   0.1    0.15   0.2    0.25   0.3
SPKDE     37     30     27     21     17     16     17
KDE       41     48     51     57     61     62     61
p-value   .91    .52    .38    .18    .092   .078   .092
SPKDE     14     14     14     10     10     12     12
RKDE      64     64     64     68     68     66     66
p-value   .052   .052   .052   .021   .021   .034   .034
SPKDE     29     21     19     15     13     9      11
rejKDE    49     57     59     63     65     69     67
p-value   .47    .18    .13    .064   .043   .016   .027
The construction of the RKDE follows exactly the methods outlined in the "Experiments" section of Kim & Scott [6]. It is worth noting that the RKDE depends on the loss function used and that the Hampel loss used in these experiments very aggressively suppresses the kernel weights on the tails. Because of this we expect that RKDE performs well on the D_KL(f̂ ‖ f0) metric. We also compare the SPKDE to a kernel density estimator constructed from samples declared non-anomalous by a level set anomaly detection as described in Section 2.3. To do this we first construct the classic KDE, f̂_σ^n, and then reject those samples in the lower 10th percentile of f̂_σ^n(Xi). Those samples not rejected are used in a new KDE, the "rejKDE", using the same σ parameter.
4.5 Results
We present the results of the Wilcoxon signed rank tests in Table 1. Experimental results for each dataset can be found in the supplemental material. From the results it is clear that the SPKDE is effective at compensating for contamination in the D_KL(f̂ ‖ f0) metric, albeit not quite as well as the RKDE. The main advantage of the SPKDE over the RKDE is that it significantly outperforms the RKDE in the D_KL(f0 ‖ f̂) metric. The rejKDE performs significantly worse than the SPKDE on almost every experiment. Remarkably, the SPKDE outperforms the KDE in the situation with no contamination (ε = 0) for both performance metrics.
5 Conclusion
Robustness in the setting of nonparametric density estimation is a topic that has received little attention despite extensive study of robustness in the parametric setting. In this paper we introduced a robust version of the KDE, the SPKDE, and developed a new formalism for the analysis of robust density estimation. With this new formalism we proposed a contamination model and a decontaminating transform to recover a target density in the presence of noise. The contamination model allows that the target and contaminating densities have overlapping support and that the basic shape of the target density is not modified by the contaminating density. The proposed transform is computationally prohibitive to apply directly to the finite sample KDE and the SPKDE is used to approximate the transform. The SPKDE was shown to asymptotically converge to the desired transform. Experiments have shown that the SPKDE is more effective than the RKDE at minimizing D_KL(f0 ‖ f̂). Furthermore, the p-values for these experiments were smaller than the p-values for the D_KL(f̂ ‖ f0) experiments where the RKDE outperforms the SPKDE.
Acknowledgements
This work was supported in part by NSF Awards 0953135, 1047871, 1217880, 1422157. We would also like to thank Samuel Brodkey for his assistance with the simulation code.
References
[1] H.H. Bauschke and P.L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics, Ouvrages de mathématiques de la SMC. Springer New York, 2011.
[2] D.A. Berry, K.M. Chaloner, J.K. Geweke, and A. Zellner. Bayesian Analysis in Statistics and Econometrics: Essays in Honor of Arnold Zellner. A Wiley Interscience publication. Wiley, 1996.
[3] Peter Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163-166, 1984.
[4] John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, pages 272-279, 2008.
[5] R. El-Yaniv and M. Nisenson. Optimal single-class classification strategies. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[6] J. Kim and C. Scott. Robust kernel density estimation. Journal of Machine Learning Research, 13:2529-2565, 2012.
[7] G. Lanckriet, L. El Ghaoui, and M. I. Jordan. Robust novelty detection with single-class MPM. In S. Thrun, S. Becker, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 905-912. MIT Press, Cambridge, MA, 2003.
[8] P.M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Mathematical Programming, 46(1-3):321-328, 1990.
[9] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. Smola, and R. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443-1472, 2001.
[10] D. W. Scott. Multivariate Density Estimation. Wiley, New York, 1992.
[11] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London, 1986.
[12] K. Sricharan and A. Hero. Efficient anomaly detection using bipartite k-nn graphs. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 478-486, 2011.
[13] I. Steinwart, D. Hush, and C. Scovel. A classification framework for anomaly detection. JMLR, 6:211-232, 2005.
[14] J. Theiler and D. M. Cai. Resampling approach for anomaly detection in multispectral images. In Proc. SPIE, volume 5093, pages 230-240, 2003.
[15] R. Vandermeulen and C. Scott. Consistency of robust kernel density estimators. COLT, 30, 2013.
[16] R. Vert and J.-P. Vert. Consistency and convergence rates of one-class SVM and related algorithms. JMLR, pages 817-854, 2006.
[17] F. Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80-83, 1945.
4,670 | 5,229 | Distributed Estimation, Information Loss and
Exponential Families
Qiang Liu
Alexander Ihler
Department of Computer Science, University of California, Irvine
qliu1@uci.edu
ihler@ics.uci.edu
Abstract
Distributed learning of probabilistic models from multiple data repositories
with minimum communication is increasingly important. We study a simple
communication-efficient learning framework that first calculates the local maximum likelihood estimates (MLE) based on the data subsets, and then combines
the local MLEs to achieve the best possible approximation to the global MLE
given the whole dataset. We study this framework's statistical properties, showing that the efficiency loss compared to the global setting relates to how much the underlying distribution families deviate from full exponential families, drawing connection to the theory of information loss by Fisher, Rao and Efron. We show that the "full-exponential-family-ness" represents the lower bound of the error rate of
arbitrary combinations of local MLEs, and is achieved by a KL-divergence-based
combination method but not by a more common linear combination method. We
also study the empirical properties of both methods, showing that the KL method
significantly outperforms linear combination in practical settings with issues such
as model misspecification, non-convexity, and heterogeneous data partitions.
1 Introduction
Modern data-science applications increasingly require distributed learning algorithms to extract information from many data repositories stored at different locations with minimal interaction. Such
distributed settings are created due to high communication costs (for example in sensor networks),
or privacy and ownership issues (such as sensitive medical or financial data). Traditional algorithms
often require access to the entire dataset simultaneously, and are not suitable for distributed settings.
We consider a straightforward two-step procedure for distributed learning that follows a ?divide and
conquer? strategy: (i) local learning, which involves learning probabilistic models based on the local
data repositories separately, and (ii) model combination, where the local models are transmitted
to a central node (the ?fusion center?), and combined to form a global model that integrates the
information in the local repositories. This framework only requires transmitting the local model
parameters to the fusion center once, yielding significant advantages in terms of both communication
and privacy constraints. However, the two-step procedure may not fully extract all the information in
the data, and may be less (statistically) efficient than a corresponding centralized learning algorithm
that operates globally on the whole dataset. This raises important challenges in understanding the
fundamental statistical limits of the local learning framework, and proposing optimal combination
methods to best approximate the global learning algorithm.
In this work, we study these problems in the setting of estimating generative model parameters
from a distribution family via the maximum likelihood estimator (MLE). We show that the loss of
statistical efficiency caused by using the local learning framework is related to how much the underlying distribution families deviate from full exponential families: local learning can be as efficient
as (in fact exactly equivalent to) global learning on full exponential families, but is less efficient
on non-exponential families, depending on how nearly "full exponential family" they are. The "full-exponential-family-ness" is formally captured by the statistical curvature originally defined
by Efron (1975), and is a measure of the minimum loss of Fisher information when summarizing
the data using first order efficient estimators (e.g., Fisher, 1925, Rao, 1963). Specifically, we show
that arbitrary combinations of the local MLEs on the local datasets can approximate the global MLE
on the whole dataset at most up to an asymptotic error rate proportional to the square of the statistical curvature. In addition, a KL-divergence-based combination of the local MLEs achieves this
minimum error rate in general, and exactly recovers the global MLE on full exponential families.
In contrast, a more widely-used linear combination method does not achieve the optimal error rate,
and makes mistakes even on full exponential families. We also study the two methods empirically,
examining their robustness against practical issues such as model mis-specification, heterogeneous
data partitions, and the existence of hidden variables (e.g., in the Gaussian mixture model). These
issues often cause the likelihood to have multiple local optima, and can easily degrade the linear
combination method. On the other hand, the KL method remains robust in these practical settings.
Related Work. Our work is related to Zhang et al. (2013a), which includes a theoretical analysis
for linear combination. Merugu and Ghosh (2003, 2006) proposed the KL combination method in
the setting of Gaussian mixtures, but without theoretical analysis. There are many recent theoretical
works on distributed learning (e.g., Predd et al., 2007, Balcan et al., 2012, Zhang et al., 2013b,
Shamir, 2013), but most focus on discrimination tasks like classification and regression. There are
also many works on distributed clustering (e.g., Merugu and Ghosh, 2003, Forero et al., 2011, Liang
et al., 2013) and distributed MCMC (e.g., Scott et al., 2013, Wang and Dunson, 2013, Neiswanger
et al., 2013). An orthogonal setting of distributed learning is when the data is split across the variable
dimensions, instead of the data instances; see e.g., Liu and Ihler (2012), Meng et al. (2013).
2 Problem Setting
Assume we have an i.i.d. sample X = {x^i : i = 1, ..., n}, partitioned into d sub-samples X^k = {x^i : i ∈ α_k} that are stored in different locations, where ∪_{k=1}^d α_k = [n]. For simplicity, we assume the data are equally partitioned, so that each group has n/d instances; the extension to the more general case is straightforward. Assume X is drawn i.i.d. from a distribution with an unknown density from a distribution family {p(x|θ) : θ ∈ Θ}. Let θ* be the true unknown parameter. We are interested in estimating θ* via the maximum likelihood estimator (MLE) based on the whole sample,
    θ̂_mle = argmax_{θ ∈ Θ} Σ_{i ∈ [n]} log p(x^i|θ).
However, directly calculating the global MLE often requires distributed optimization algorithms
(such as ADMM (Boyd et al., 2011)) that need iterative communication between the local repositories and the fusion center, which can significantly slow down the algorithm regardless of the amount
of information communicated at each iteration. We instead approximate the global MLE by a two-stage procedure that calculates the local MLEs separately for each sub-sample, then sends the local MLEs to the fusion center and combines them. Specifically, the k-th sub-sample's local MLE is
    θ̂_k = argmax_{θ ∈ Θ} Σ_{i ∈ α_k} log p(x^i|θ),
and we want to construct a combination function f(θ̂_1, ..., θ̂_d) ≡ θ̂_f to form the best approximation to the global MLE θ̂_mle. Perhaps the most straightforward combination is the linear average,
    Linear-Averaging:  θ̂_linear = (1/d) Σ_k θ̂_k.
However, this method is obviously limited to continuous and additive parameters; in the sequel, we illustrate that it also tends to degenerate in the presence of practical issues such as non-convexity and non-i.i.d. data partitions. A better combination method is to average the models w.r.t. some distance metric, instead of the parameters. In particular, we consider a KL-divergence based averaging,
    KL-Averaging:  θ̂_KL = argmin_{θ ∈ Θ} Σ_k KL(p(x|θ̂_k) ‖ p(x|θ)).    (1)
k
The estimate ??KL can also be motivated by a parametric bootstrap procedure that first draws sample
?
X k from each local model p(x???k ), and then estimates a global MLE based on all the combined
2
?
bootstrap samples X ? = {X k ? k ? [d]}. We can readily show that this reduces to ??KL as the size
?
of the bootstrapped samples X k grows to infinity. Other combination methods based on different
distance metrics are also possible, but may not have a similarly natural interpretation.
3
Exactness on Full Exponential Families
In this section, we analyze the KL and linear combination methods on full exponential families.
We show that the KL combination of the local MLEs exactly equals the global MLE, while the
linear average does not in general, but can be made exact by using a special parameterization. This
suggests that distributed learning is in some sense ?easy? on full exponential families.
Definition 3.1. (1). A family of distributions is said to be a full exponential family if its density can
be represented in a canonical form (up to one-to-one transforms of the parameters),
p(x??) = exp(?T ?(x) ? log Z(?)),
? ? ? ? {? ? Rm ? ? exp(?T ?(x))dH(x) < ?}.
x
where ? = [?1 , . . . ?m ]T and ?(x) = [?1 (x), . . . ?m (x)]T are called the natural parameters and the
natural sufficient statistics, respectively. The quantity Z(?) is the normalization constant, and H(x)
is the reference measure. An exponential family is said to be minimal if [1, ?1 (x), . . . ?m (x)]T is
linearly independent, that is, there is no non-zero constant vector ?, such that ?T ?(x) = 0 for all x.
Theorem 3.2. If P = {p(x??)? ? ? ?} is a full exponential family, then the KL-average ??KL always
exactly recovers the global MLE, that is, ??KL = ??mle . Further, if P is minimal, we have
?(??1 ) + ? + ?(??d )
??KL = ??1 (
),
d
(2)
where ? ? ? ? E? [?(x)] is the one-to-one map from the natural parameters to the moment parameters, and ??1 is the inverse map of ?. Note that we have ?(?) = ?log Z(?)/??.
Proof. Directly verify that the KL objective in (1) equals the global negative log-likelihood.
The nonlinear average in (2) gives an intuitive interpretation of why ??KL equals ??mle on full exponential families: it first calculates the local empirical moment parameters ?(??k ) = d/n ?i??k ?(xk );
averaging them gives the empirical moment parameter on the whole data ?
?n = 1/n ?i?[n] ?(xk ),
which then exactly maps to the global MLE.
Eq (2) also suggests that θ̂_linear would be exact only if μ(·) is an identity map. Therefore, one may make θ̂_linear exact by using the special parameterization θ ↦ μ(θ). In contrast, KL-averaging will make this reparameterization automatically (μ is different on different exponential families). Note that both KL-averaging and the global MLE are invariant w.r.t. one-to-one transforms of the parameter θ, but linear averaging is not.
Example 3.3 (Variance Estimation). Consider estimating the variance σ² of a zero-mean Gaussian distribution. Let ŝ_k = (d/n) Σ_{i ∈ α_k} (x^i)² be the empirical variance on the k-th sub-sample and ŝ = Σ_k ŝ_k/d the overall empirical variance. Then, θ̂_linear would correspond to different power means on the ŝ_k, depending on the choice of parameterization, e.g.,

    θ = σ² (variance):             θ̂_linear = (1/d) Σ_k ŝ_k
    θ = σ (standard deviation):    θ̂_linear = (1/d) Σ_k (ŝ_k)^{1/2}
    θ = σ^{−2} (precision):        θ̂_linear = (1/d) Σ_k (ŝ_k)^{−1}

where only the linear average of the ŝ_k (when θ = σ²) matches the overall empirical variance ŝ and equals the global MLE. In contrast, θ̂_KL always corresponds to a linear average of the ŝ_k, equaling the global MLE, regardless of the parameterization.
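The discrepancy between the parameterizations, and the invariance of the KL (moment) average, is easy to check numerically. A minimal sketch of Example 3.3 (sample sizes and seed are arbitrary choices of ours):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 2.0, size=10000)                          # true sigma^2 = 4
    s = np.array([np.mean(g**2) for g in np.array_split(x, 10)])  # local MLEs of sigma^2

    lin_variance  = s.mean()                 # theta = sigma^2
    lin_stddev    = np.mean(np.sqrt(s))**2   # theta = sigma, mapped back to a variance
    lin_precision = 1.0 / np.mean(1.0 / s)   # theta = sigma^{-2}, mapped back
    kl_average    = s.mean()                 # moment average, any parameterization
    global_mle    = np.mean(x**2)
    # lin_variance and kl_average equal global_mle; the other two are biased power means.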
4 Information Loss in Distributed Learning
The exactness of θ̂_KL in Theorem 3.2 is due to the beauty (or simplicity) of exponential families. Following Efron's intuition, full exponential families can be viewed as "straight lines" or "linear subspaces" in the space of distributions, while other distribution families correspond to "curved" sets of distributions, whose deviation from full exponential families can be measured by their statistical curvatures as defined by Efron (1975). That work shows that statistical curvature is closely related to Fisher and Rao's theory of second order efficiency (Fisher, 1925, Rao, 1963), and represents the minimum information loss when summarizing the data using first order efficient estimators. In this section, we connect this classical theory with the local learning framework, and show that the statistical curvature also represents the minimum asymptotic deviation of arbitrary combinations of the local MLEs to the global MLE, and that this is achieved by the KL combination method, but not in general by the simpler linear combination method.
4.1 Curved Exponential Families and Statistical Curvature
We follow the convention in Efron (1975), and illustrate the idea of statistical curvature using curved exponential families, which are smooth sub-families of full exponential families. The theory can be naturally extended to more general families (see e.g., Efron, 1975, Kass and Vos, 2011).
Definition 4.1. A family of distributions {p(x|θ) : θ ∈ Θ} is said to be a curved exponential family if its density can be represented as
    p(x|θ) = exp(η(θ)ᵀφ(x) − log Z(η(θ))),    (3)
where the dimension of θ = [θ_1, ..., θ_q] is assumed to be smaller than that of η = [η_1, ..., η_m] and φ = [φ_1, ..., φ_m], that is, q < m.
Following Kass and Vos (2011), we assume some regularity conditions for our asymptotic analysis. Assume Θ is an open set in R^q, and the mapping θ ↦ η(θ) is one-to-one and infinitely differentiable, and of rank q, meaning that the q × m matrix ∂η(θ)/∂θ has rank q everywhere. In addition, if a sequence {η(θ_i)} converges to a point η(θ_0), then {θ_i} must converge to θ_0. In geometric terminology, such a map θ ↦ η(θ) is called a q-dimensional embedding in R^m.
Obviously, a curved exponential family can be treated as a smooth subset of a full exponential family p(x|η) = exp(ηᵀφ(x) − log Z(η)), with η constrained in η(Θ). If η(θ) is a linear function, then the curved exponential family can be rewritten into a full exponential family in lower dimensions; otherwise, η(Θ) is a curved subset in the η-space, whose curvature, that is, its deviation from planes or straight lines, represents its deviation from full exponential families.
Consider the case when θ is a scalar, and hence η(θ) is a curve; the geometric curvature γ_θ of η(θ) at point θ is defined to be the reciprocal of the radius of the circle that fits best to η(θ) locally at θ. Therefore, the curvature of a circle of radius r is the constant 1/r. In general, elementary calculus shows that γ_θ² = (η̇ᵀη̇)^{−3} [(η̈ᵀη̈)(η̇ᵀη̇) − (η̈ᵀη̇)²], where η̇ and η̈ denote the first and second derivatives of η(θ) w.r.t. θ. The statistical curvature of a curved exponential family is defined similarly, except equipped with an inner product defined via its Fisher information metric.
Definition 4.2 (Statistical Curvature). Consider a curved exponential family P = {p(x|θ) : θ ∈ Θ} whose parameter θ is a scalar (q = 1). Let Σ_θ = cov_θ[φ(x)] be the m × m Fisher information on the corresponding full exponential family p(x|η). The statistical curvature of P at θ is defined as
    γ_θ² = (η̇ᵀ Σ_θ η̇)^{−3} [(η̈ᵀ Σ_θ η̈)(η̇ᵀ Σ_θ η̇) − (η̈ᵀ Σ_θ η̇)²].
The definition can be extended to general multi-dimensional parameters, but requires involved notation. We give the full definition and our general results in the appendix.
Example 4.3 (Bivariate Normal on Ellipse). Consider a bivariate normal distribution with diagonal covariance matrix and mean vector restricted to an ellipse η(θ) = [a cos(θ), b sin(θ)], that is,
    p(x|θ) ∝ exp[−(1/2)(x_1² + x_2²) + a cos θ · x_1 + b sin θ · x_2],   θ ∈ (−π, π),  x ∈ R².
We have that Σ_θ equals the identity matrix in this case, and the statistical curvature equals the geometric curvature of the ellipse in the Euclidean space, γ_θ = ab(a² sin²(θ) + b² cos²(θ))^{−3/2}.
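Because Σ_θ = I here, the statistical curvature can be computed directly from the derivatives of η(θ) and checked against the closed form. A small numerical sketch (function name and test values are ours):

    import numpy as np

    def ellipse_curvature(theta, a, b):
        d1 = np.array([-a * np.sin(theta),  b * np.cos(theta)])   # d eta / d theta
        d2 = np.array([-a * np.cos(theta), -b * np.sin(theta)])   # d^2 eta / d theta^2
        num = (d2 @ d2) * (d1 @ d1) - (d1 @ d2)**2
        return np.sqrt(num / (d1 @ d1)**3)                        # Definition 4.2 with Sigma = I

    theta, a, b = 0.7, 1.0, 5.0
    closed_form = a * b * (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2)**(-1.5)
    assert np.isclose(ellipse_curvature(theta, a, b), closed_form)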
The statistical curvature was originally defined by Efron (1975) as the minimum amount of information loss when summarizing the sample using first order efficient estimators. Efron (1975) showed that, extending the result of Fisher (1925) and Rao (1963),
    lim_{n→∞} [I_{θ*}^X − I_{θ*}^{θ̂_mle}] = γ_{θ*}² I_{θ*},    (4)
where I_{θ*} is the Fisher information (per data instance) of the distribution p(x|θ) at the true parameter θ*, I_{θ*}^X = nI_{θ*} is the total information included in a sample X of size n, and I_{θ*}^{θ̂_mle} is the Fisher information included in θ̂_mle based on X. Intuitively speaking, we lose about γ_{θ*}² units of Fisher information when summarizing the data using the ML estimator. Fisher (1925) also interpreted γ_{θ*}² as the effective number of data instances lost in the MLE, easily seen from rewriting I_{θ*}^{θ̂_mle} ≈ (n − γ_{θ*}²)I_{θ*}, as compared to I_{θ*}^X = nI_{θ*}. Moreover, this is the minimum possible information loss in the class of "first order efficient" estimators T(X), those which satisfy the weaker condition lim_{n→∞} I_{θ*}^T/I_{θ*}^X = 1. Rao coined the term "second order efficiency" for this property of the MLE.
The intuition here has direct implications for our distributed setting, since θ̂_f depends on the data only through {θ̂_k}, each of which summarizes the data with a loss of γ_{θ*}² units of information. The total information loss is d · γ_{θ*}², in contrast with the global MLE, which only loses γ_{θ*}² overall. Therefore, the additional loss due to the distributed setting is (d − 1) · γ_{θ*}². We will see that our results in the sequel closely match this intuition.
4.2 Lower Bound
The extra information loss (d − 1)γ_{θ*}² turns out to be the asymptotic lower bound of the mean square error rate n² E_{θ*}[I_{θ*}(θ̂_f − θ̂_mle)²] for any arbitrary combination function f(θ̂_1, ..., θ̂_d).
Theorem 4.4 (Lower Bound). For an arbitrary measurable function θ̂_f = f(θ̂_1, ..., θ̂_d), we have
    liminf_{n→+∞} n² E_{θ*}[‖f(θ̂_1, ..., θ̂_d) − θ̂_mle‖²] ≥ (d − 1)γ_{θ*}² I_{θ*}^{−1}.
Sketch of Proof. Note that
    E_{θ*}[‖θ̂_f − θ̂_mle‖²] = E_{θ*}[‖θ̂_f − E_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d)‖²] + E_{θ*}[‖θ̂_mle − E_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d)‖²]
        ≥ E_{θ*}[‖θ̂_mle − E_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d)‖²]
        = E_{θ*}[var_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d)],
where the lower bound is achieved when θ̂_f = E_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d). The conclusion follows by showing that lim_{n→+∞} n² E_{θ*}[var_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d)] = (d − 1)γ_{θ*}² I_{θ*}^{−1}; this requires involved asymptotic analysis, and is presented in the Appendix.
The proof above highlights a geometric interpretation via the projection of random variables (e.g., Van der Vaart, 2000). Let F be the set of all random variables of the form f(θ̂_1, ..., θ̂_d). The optimal consensus function should be the projection of θ̂_mle onto F, and the minimum mean square error is the distance between θ̂_mle and F. The conditional expectation θ̂_f = E_{θ*}(θ̂_mle|θ̂_1, ..., θ̂_d) is the exact projection and ideally the best combination function; however, this is intractable to calculate due to the dependence on the unknown true parameter θ*. We show in the sequel that θ̂_KL gives an efficient approximation and achieves the same asymptotic lower bound.
[Inline figure: θ̂_mle projected onto the set F of combination rules f(θ̂_1, ..., θ̂_d); the minimum squared distance is (d − 1)γ_{θ*}² I_{θ*}^{−1}/n² (Theorem 4.4).]
4.3 General Consistent Combination
We now analyze the performance of a general class of θ̂_f, which includes both the KL average θ̂_KL and the linear average θ̂_linear; we show that θ̂_KL matches the lower bound in Theorem 4.4, while θ̂_linear is not optimal even on full exponential families. We start by defining conditions which any "reasonable" f(θ̂_1, ..., θ̂_d) should satisfy.
Definition 4.5. (1). We say f(·) is consistent if, for any θ ∈ Θ, θ_k → θ for all k ∈ [d] implies f(θ_1, ..., θ_d) → θ.
(2). f(·) is symmetric if f(θ̂_1, ..., θ̂_d) = f(θ̂_{π(1)}, ..., θ̂_{π(d)}) for any permutation π on [d].
The consistency condition guarantees that if all the θ̂_k are consistent estimators, then θ̂_f should also be consistent. The symmetry is also straightforward due to the symmetry of the data partition {X^k}. In fact, if f(·) is not symmetric, one can always construct a symmetric version that performs better or at least the same (see Appendix for details). We are now ready to present the main result.
Theorem 4.6. (1). Consider a consistent and symmetric θ̂_f = f(θ̂_1, ..., θ̂_d) as in Definition 4.5, whose first three orders of derivatives exist. Then, for curved exponential families in Definition 4.1,
    E_{θ*}[θ̂_f − θ̂_mle] = ((d − 1)/n) β_{θ*}^f + o(n^{−1}),
    E_{θ*}[‖θ̂_f − θ̂_mle‖²] = ((d − 1)/n²) [γ_{θ*}² I_{θ*}^{−1} + (d + 1)(β_{θ*}^f)²] + o(n^{−2}),
where β_{θ*}^f is a term that depends on the choice of the combination function f(·). Note that the mean square error is consistent with the lower bound in Theorem 4.4, and is tight if β_{θ*}^f = 0.
(2). The KL average θ̂_KL has β_{θ*}^f = 0, and hence achieves the minimum bias and mean square error,
    E_{θ*}[θ̂_KL − θ̂_mle] = o(n^{−1}),
    E_{θ*}[‖θ̂_KL − θ̂_mle‖²] = ((d − 1)/n²) γ_{θ*}² I_{θ*}^{−1} + o(n^{−2}).
(3). In particular, note that the bias of θ̂_KL is smaller in magnitude than that of a general θ̂_f with β_{θ*}^f ≠ 0.
(4). The linear average θ̂_linear, however, does not achieve the lower bound in general. We have
    β_{θ*}^{linear} = (1/2) I_{θ*}^{−2} (η̈_{θ*}ᵀ Σ_{θ*} η̇_{θ*} + E_{θ*}[∂³ log p(x|θ*)/∂θ³]),
which is in general non-zero even for full exponential families.
(5). The MSE w.r.t. the global MLE θ̂_mle can be related to the MSE w.r.t. the true parameter θ* by
    E_{θ*}[‖θ̂_KL − θ*‖²] = E_{θ*}[‖θ̂_mle − θ*‖²] + ((d − 1)/n²) γ_{θ*}² I_{θ*}^{−1} + o(n^{−2}),
    E_{θ*}[‖θ̂_linear − θ*‖²] = E_{θ*}[‖θ̂_mle − θ*‖²] + ((d − 1)/n²) [γ_{θ*}² I_{θ*}^{−1} + 2(β_{θ*}^{linear})²] + o(n^{−2}).
Proof. See Appendix for the proof and the general results for multi-dimensional parameters.
Theorem 4.6 suggests that θ̂_f − θ̂_mle = O_p(1/n) for any consistent f(·), which is smaller in magnitude than θ̂_mle − θ* = O_p(1/√n). Therefore, any consistent θ̂_f is first order efficient, in that its difference from the global MLE θ̂_mle is negligible compared to θ̂_mle − θ* asymptotically. This also suggests that the KL and the linear methods perform roughly the same asymptotically in terms of recovering the true parameter θ*. However, we need to treat this claim with caution because, as we demonstrate empirically, the linear method may significantly degenerate in the non-asymptotic region or when the conditions in Theorem 4.6 do not hold.
5 Experiments and Practical Issues
We present numerical experiments to demonstrate the correctness of our theoretical analysis. More importantly, we also study empirical properties of the linear and KL combination methods that are not enlightened by the asymptotic analysis. We find that the linear average tends to degrade significantly when its local models (θ̂^k) are not already close, for example due to small sample sizes, heterogeneous data partitions, or non-convex likelihoods (so that different local models find different local optima). In contrast, the KL combination is much more robust in practice.
[Figure 1 graphic: four panels, each plotting error (log₁₀ scale) against total sample size n ∈ {150, 250, 500, 1000}, with curves Linear-Avg, KL-Avg, and a Theoretical prediction (panels (a)-(b)) or the Global MLE (panels (c)-(d)). Panels: (a) E(‖θ̂_f − θ̂_mle‖²); (b) |E(θ̂_f − θ̂_mle)|; (c) E(‖θ̂_f − θ*‖²); (d) |E(θ̂_f − θ*)|.]

Figure 1: Result on the toy model in Example 4.3. (a)-(d): The mean square errors and biases of the linear average θ̂_linear and the KL average θ̂_KL w.r.t. the global MLE θ̂_mle and the true parameter θ*, respectively. The y-axes are shown on logarithmic (base 10) scales.
5.1 Bivariate Normal on Ellipse
We start with the toy model in Example 4.3 to verify our theoretical results. We draw samples from the true model (assuming θ* = π/4, a = 1, b = 5), and partition the samples randomly into 10 sub-groups (d = 10). Fig. 1 shows that the empirical biases and MSEs match closely with the theoretical predictions when the sample size is large (e.g., n ≥ 250), and θ̂_KL is consistently better than θ̂_linear in terms of recovering both the global MLE and the true parameters. Fig. 1(b) shows that the bias of θ̂_KL decreases faster than that of θ̂_linear, as predicted in Theorem 4.6 (2). Fig. 1(c) shows that all algorithms perform similarly in terms of the asymptotic MSE w.r.t. the true parameters θ*, but the linear average degrades significantly in the non-asymptotic region (e.g., n < 250).
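The comparison can be reproduced with a minimal simulation sketch, under our reading of Example 4.3 as a bivariate normal with unit covariance and mean constrained to the ellipse; the function names and the grid-search optimizer below are our own choices, not the paper's.

    import numpy as np

    a, b, theta_star, d, n = 1.0, 5.0, np.pi / 4, 10, 1000
    grid = np.linspace(-np.pi, np.pi, 20001)                # dense 1-D parameter grid
    mu = lambda th: np.stack([a * np.cos(th), b * np.sin(th)], axis=-1)

    def mle(x):
        # With identity covariance, the MLE minimizes ||x_bar - mu(theta)||^2.
        xbar = x.mean(axis=0)
        return grid[np.argmin(((mu(grid) - xbar) ** 2).sum(axis=-1))]

    def kl_average(local_thetas):
        # KL(N(mu(th_k), I) || N(mu(th), I)) = 0.5 ||mu(th_k) - mu(th)||^2, so the
        # KL average projects the averaged local means back onto the ellipse.
        cost = sum(((mu(grid) - mu(tk)) ** 2).sum(axis=-1) for tk in local_thetas)
        return grid[np.argmin(cost)]

    rng = np.random.default_rng(0)
    x = mu(theta_star) + rng.standard_normal((n, 2))
    th_k = [mle(part) for part in np.array_split(x, d)]     # local MLEs
    print("global MLE:", mle(x))
    print("KL average:", kl_average(th_k))
    print("linear average:", np.mean(th_k))

For small per-machine sample sizes, rerunning this with smaller n makes the gap between the linear and KL averages visible directly.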
[Figure 2 graphic: scatter panels of the estimated parameter (y-axis, from −π/2 to π/2) against total sample size n (x-axis, from 10 to 1000), with inset parameter densities at n = 10; panels (a) Global MLE θ̂_mle, (b) KL Average θ̂_KL, (c) Linear Average θ̂_linear.]
Model Misspecification. Model misspecification is unavoidable in practice, and may create multiple local modes in the likelihood objective, leading to poor behavior from the linear average. We illustrate this phenomenon using the toy model in Example 4.3, assuming the true model is N([0, 1/2], ½I₂), outside of the assumed parametric family. This is illustrated in the figure at right [inline graphic omitted], where the ellipse represents the parametric family, and the black square denotes the true model. The MLE will concentrate on the projection of the true model to the ellipse, in one of two locations (θ = ±π/2) indicated by the two red circles. Depending on the random data sample, the global MLE will concentrate on one or the other of these two values; see Fig. 2(a). Given a sufficient number of samples (n > 250), the probability that the MLE is at θ ≈ −π/2 (the less favorable mode) goes to zero. Fig. 2(b) shows KL averaging mimics the bi-modal distribution of the global MLE across data samples; the less likely mode vanishes slightly slower. In contrast, the linear average takes the arithmetic average of local models from both of these two local modes, giving unreasonable parameter estimates that are close to neither (Fig. 2(c)).
Figure 2: Result on the toy model in Example 4.3 with model misspecification: scatter plots of the estimated parameters vs. the total sample size n (with 10,000 random trials for each fixed n). The inside figures are the densities of the estimated parameters with fixed n = 10. Both global MLE and KL average concentrate on two locations (±π/2), and the less favorable (−π/2) vanishes when the sample sizes are large (e.g., n > 250). In contrast, the linear approach averages local MLEs from the two modes, giving unreasonable estimates spread across the full interval.
[Figure 3 graphic: four panels of log-likelihood (y-axis, roughly −650 to −615) against total sample size n ∈ {500, 5000, 50000}, with curves Local MLEs, Global MLE, Linear-Avg-Matched, Linear-Avg, KL-Avg. Panels: (a) Training LL (random partition); (b) Test LL (random partition); (c) Training LL (label-wise partition); (d) Test LL (label-wise partition).]
Figure 3: Learning Gaussian mixture models on MNIST: training and test log-likelihoods of different methods with varying training size n. In (a)-(b), the data are partitioned into 10 sub-groups
uniformly at random (ensuring sub-samples are i.i.d.); in (c)-(d) the data are partitioned according
to their digit labels. The number of mixture components is fixed to be 10.
[Figure 4 graphic: two panels of log-likelihood (y-axis, roughly −140 to −100) against training sample size n ∈ {1000, 10000, 100000}, with curves Local MLEs, Global MLE, Linear-Avg-Matched, Linear-Avg, KL-Avg. Panels: (a) Training log-likelihood; (b) Test log-likelihood.]

Figure 4: Learning Gaussian mixture models on the YearPredictionMSD data set. The data are randomly partitioned into 10 sub-groups, and we use 10 mixture components.

5.2 Gaussian Mixture Models on Real Datasets
We next consider learning Gaussian mixture models. Because component indexes may be arbitrarily switched, naïve linear averaging is problematic; we consider a matched linear average that first matches indices by minimizing the sum of the symmetric KL divergences of the different mixture components. The KL average is also difficult to calculate exactly, since the KL divergence between Gaussian mixtures is intractable. We approximate the KL average using Monte Carlo sampling (with 500 samples per local model), corresponding to the parametric bootstrap discussed in Section 2; a sketch of this approximation is given below.
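A minimal sketch of the bootstrap approximation, assuming scikit-learn's GaussianMixture purely for illustration (the text does not say which implementation was used). Refitting on pooled bootstrap samples maximizes Σ_k E_{p_k}[log q], i.e., minimizes Σ_k KL(p_k ‖ q) up to a constant.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def kl_average_gmm(local_models, n_boot=500, n_components=10, seed=0):
        # Draw a parametric bootstrap of n_boot points from each local GMM, pool
        # them, and refit one GMM; the fit maximizes the pooled log-likelihood,
        # which approximates argmin_q sum_k KL(p_k || q).
        pooled = np.vstack([m.sample(n_boot)[0] for m in local_models])
        return GaussianMixture(n_components=n_components, n_init=3,
                               random_state=seed).fit(pooled)

    # usage: kl_avg = kl_average_gmm(models_from_all_machines)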
We experiment on the MNIST dataset and the YearPredictionMSD dataset in the UCI repository, where the training data is partitioned into 10 sub-groups randomly and evenly. In both cases, we use the original training/test split; we use the full testing set, and vary the number of training examples n by randomly sub-sampling from the full training set (averaging over 100 trials). We take the first 100 principal components when using MNIST. Fig. 3(a)-(b) and 4(a)-(b) show the training and test likelihoods. As a baseline, we also show the average of the log-likelihoods of the local models (marked as local MLEs in the figures); this corresponds to randomly selecting a local model as the combined model. We see that the KL average tends to perform as well as the global MLE, and remains stable even with small sample sizes. The naïve linear average performs badly even with large sample sizes. The matched linear average performs as badly as the naïve linear average when the sample size is small, but improves toward the global MLE as the sample size increases.
For MNIST, we also consider a severely heterogeneous data partition by splitting the images into 10 groups according to their digit labels. In this setup, each partition learns a local model only over its own digit, with no information about the other digits. Fig. 3(c)-(d) shows the KL average still performs as well as the global MLE, but both the naïve and matched linear averages are much worse even with large sample sizes, due to the dissimilarity in the local models.
6 Conclusion and Future Directions
We study communication-efficient algorithms for learning generative models with distributed data.
Analyzing both a common linear averaging technique and a less common KL-averaging technique
provides both theoretical and empirical insights. Our analysis opens many important future directions, including extensions to high dimensional inference and efficient approximations for complex
machine learning models, such as LDA and neural networks.
Acknowledgements. This work was sponsored in part by NSF grants IIS-1065618 and IIS-1254071, and the US Air Force under Contract No. FA8750-14-C-0011 under DARPA's PPAML program.
References
Bradley Efron. Defining the curvature of a statistical problem (with applications to second order efficiency). The Annals of Statistics, pages 1189–1242, 1975.
Ronald Aylmer Fisher. Theory of statistical estimation. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 22, pages 700–725. Cambridge Univ Press, 1925.
C Radhakrishna Rao. Criteria of estimation in large samples. Sankhyā: The Indian Journal of Statistics, Series A, pages 189–206, 1963.
Yuchen Zhang, John C Duchi, and Martin J Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14:3321–3363, 2013a.
Srujana Merugu and Joydeep Ghosh. Privacy-preserving distributed clustering using generative models. In IEEE Int'l Conf. on Data Mining (ICDM), pages 211–218. IEEE, 2003.
Srujana Merugu and Joydeep Ghosh. Distributed learning using generative models. PhD thesis, University of Texas at Austin, 2006.
Joel B Predd, Sanjeev R Kulkarni, and H Vincent Poor. Distributed learning in wireless sensor networks. John Wiley & Sons: Chichester, UK, 2007.
Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. arXiv preprint arXiv:1204.3514, 2012.
Yuchen Zhang, John Duchi, Michael Jordan, and Martin J Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Advances in Neural Information Processing Systems (NIPS), pages 2328–2336, 2013b.
Ohad Shamir. Fundamental limits of online and distributed algorithms for statistical learning and estimation. arXiv preprint arXiv:1311.3494, 2013.
Pedro A Forero, Alfonso Cano, and Georgios B Giannakis. Distributed clustering using wireless sensor networks. IEEE Journal of Selected Topics in Signal Processing, 5(4):707–724, 2011.
Yingyu Liang, Maria-Florina Balcan, and Vandana Kanchanapally. Distributed PCA and k-means clustering. In Big Learning Workshop, NIPS, 2013.
Steven L Scott, Alexander W Blocker, Fernando V Bonassi, Hugh A Chipman, Edward I George, and Robert E McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. In EFaB@Bayes 250 conference, volume 16, 2013.
Xiangyu Wang and David B Dunson. Parallel MCMC via Weierstrass sampler. arXiv preprint arXiv:1312.4605, 2013.
Willie Neiswanger, Chong Wang, and Eric Xing. Asymptotically exact, embarrassingly parallel MCMC. arXiv preprint arXiv:1311.4780, 2013.
Qiang Liu and Alexander Ihler. Distributed parameter estimation via pseudo-likelihood. In International Conference on Machine Learning (ICML), pages 1487–1494, July 2012.
Z. Meng, D. Wei, A. Wiesel, and A.O. Hero III. Distributed learning of Gaussian graphical models via marginal likelihoods. In Int'l Conf. on Artificial Intelligence and Statistics (AISTATS), 2013.
Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
Robert E Kass and Paul W Vos. Geometrical foundations of asymptotic inference, volume 908. John Wiley & Sons, 2011.
Aad W Van der Vaart. Asymptotic statistics, volume 3. Cambridge University Press, 2000.
| 5229 |@word trial:2 repository:6 version:1 wiesel:1 open:2 calculus:1 cos2:1 covariance:1 euclidian:1 moment:3 liu:3 series:1 selecting:1 mag:1 bootstrapped:1 fa8750:1 outperforms:1 bradley:1 ka:3 scatter:1 chu:1 must:1 readily:1 john:4 ronald:1 additive:1 partition:12 numerical:1 plot:1 sponsored:1 n0:1 discrimination:1 v:1 generative:4 selected:1 intelligence:1 parameterization:4 plane:1 xk:2 reciprocal:1 weierstrass:1 provides:1 node:1 location:4 simpler:1 zhang:4 mathematical:1 direct:1 combine:2 inside:1 yingyu:1 privacy:4 behavior:1 roughly:1 multi:2 globally:1 automatically:1 equipped:1 estimating:3 underlying:2 notation:1 moreover:1 matched:5 mcculloch:1 interpreted:1 proposing:1 caution:1 ghosh:4 guarantee:1 pseudo:1 exactly:6 rm:2 uk:1 unit:2 medical:1 grant:1 negligible:1 local:39 treat:1 tends:3 mistake:1 limit:2 severely:1 analyzing:1 meng:2 black:1 suggests:4 co:2 limited:1 ms:1 statistically:1 bi:1 practical:5 testing:1 lost:1 practice:2 communicated:1 bootstrap:3 digit:4 procedure:4 empirical:9 significantly:5 boyd:2 projection:3 onto:1 close:2 equivalent:1 map:5 measurable:1 center:4 straightforward:4 regardless:2 go:1 convex:1 simplicity:2 splitting:1 estimator:8 insight:1 importantly:1 financial:1 reparameterization:1 embedding:1 annals:1 shamir:2 yishay:1 exact:5 trend:1 steven:1 preprint:4 wang:3 calculate:2 equaling:1 region:2 decrease:1 rq:1 intuition:3 vanishes:2 convexity:2 complexity:1 ideally:1 raise:1 tight:1 efficiency:5 eric:2 easily:2 darpa:1 represented:2 ppaml:1 univ:1 effective:1 monte:2 artificial:1 outside:1 whose:4 widely:1 say:1 drawing:1 otherwise:1 statistic:5 cov:1 vaart:2 online:1 obviously:2 advantage:1 differentiable:1 sequence:1 srujana:2 interaction:1 product:1 uci:3 degenerate:2 achieve:3 intuitive:1 regularity:1 optimum:2 extending:1 converges:1 depending:3 illustrate:3 measured:1 op:3 eq:1 edward:1 recovering:2 predicted:1 involves:1 implies:1 convention:1 concentrate:3 direction:3 radius:2 closely:3 require:2 elementary:1 extension:2 hold:1 ic:1 normal:3 exp:5 mapping:1 claim:1 achieves:3 vary:1 a2:1 estimation:7 favorable:2 integrates:1 lose:1 label:4 sensitive:1 correctness:1 create:1 exactness:2 sensor:3 gaussian:9 always:3 beauty:1 varying:1 ax:1 focus:1 maria:2 consistently:1 rank:2 likelihood:14 contrast:7 baseline:1 summarizing:4 sense:1 sin2:1 inference:2 aylmer:1 entire:1 hidden:1 interested:1 arg:3 issue:6 classification:1 overall:3 constrained:1 ness:2 special:2 marginal:1 equal:6 once:1 construct:2 sampling:2 qiang:2 represents:5 icml:1 nearly:1 sankhy:1 mimic:1 future:2 modern:1 randomly:5 simultaneously:1 divergence:5 ve:4 ab:1 centralized:1 mining:1 joel:1 chichester:1 chong:1 mixture:10 yielding:1 x22:1 implication:1 ohad:1 orthogonal:1 divide:1 yuchen:2 circle:3 theoretical:8 minimal:3 joydeep:2 instance:4 rao:7 cost:1 deviation:5 subset:3 examining:1 stored:2 connect:1 combined:3 mles:12 density:4 fundamental:2 international:1 hugh:1 sequel:3 probabilistic:2 contract:1 michael:1 transmitting:1 na:4 sanjeev:1 thesis:1 central:1 unavoidable:1 worse:1 conf:2 derivative:1 leading:1 toy:4 jection:1 b2:1 includes:2 int:2 satisfy:2 caused:1 depends:2 kanchanapally:1 analyze:2 red:1 start:2 bayes:1 xing:1 parallel:2 shai:1 square:7 ni:2 air:1 merugu:4 variance:6 correspond:2 vincent:1 carlo:2 straight:2 yearpredictionmsd:2 definition:8 against:1 involved:2 naturally:1 proof:5 ihler:4 recovers:2 mi:1 irvine:1 dataset:6 lim:2 efron:9 improves:1 embarrassingly:1 originally:2 follow:1 modal:1 wei:1 hand:1 sketch:1 nonlinear:1 
bonassi:1 mode:5 lda:1 perhaps:1 indicated:1 grows:1 verify:2 true:12 multiplier:1 willie:1 hence:2 alternating:1 symmetric:5 neal:1 illustrated:1 sin:2 ll:4 criterion:1 forero:2 theoretic:1 demonstrate:2 performs:4 duchi:2 pro:1 balcan:3 geometrical:1 cano:1 meaning:1 wise:2 image:1 parikh:1 common:3 empirically:2 volume:4 discussed:1 interpretation:3 significant:1 cambridge:3 consistency:1 similarly:3 vos:3 access:1 specification:1 stable:1 base:1 curvature:17 own:1 recent:1 showed:1 inf:1 arbitrarily:1 der:2 transmitted:1 minimum:9 captured:1 seen:1 additional:1 preserving:1 george:1 converge:1 fernando:1 xiangyu:1 signal:1 arithmetic:1 ii:3 relates:1 multiple:3 full:27 reduces:1 july:1 stephen:1 smooth:2 borja:1 match:5 enlightened:1 faster:1 icdm:1 mle:75 equally:1 calculates:3 prediction:1 ensuring:1 regression:1 florina:2 heterogeneous:2 metric:3 expectation:1 arxiv:8 iteration:1 normalization:1 achieved:3 addition:2 want:1 separately:2 fine:1 interval:1 sends:1 limn:2 extra:1 alfonso:1 jordan:1 chipman:1 presence:1 split:2 easy:1 iii:1 fit:1 inner:1 idea:1 texas:1 motivated:1 pca:1 speaking:1 cause:1 amount:2 transforms:2 locally:1 exist:1 canonical:1 problematic:1 nsf:1 estimated:2 per:2 group:5 terminology:1 blum:1 drawn:1 neither:1 rewriting:1 asymptotically:3 blocker:1 sum:1 inverse:1 everywhere:1 family:47 reasonable:1 draw:2 appendix:4 summarizes:1 bound:10 badly:2 constraint:2 infinity:1 x2:1 min:1 martin:2 department:1 according:2 combination:27 poor:2 across:3 smaller:3 increasingly:2 slightly:1 son:2 partitioned:6 giannakis:1 intuitively:1 invariant:1 restricted:1 remains:2 turn:1 neiswanger:2 hero:1 rewritten:1 unreasonable:2 robustness:1 slower:1 existence:1 original:1 denotes:1 clustering:4 x21:1 graphical:1 calculating:1 coined:1 giving:2 conquer:1 ellipse:6 classical:1 society:1 objective:2 already:1 quantity:1 strategy:1 parametric:4 dependence:1 degrades:1 traditional:1 diagonal:1 said:3 subspace:1 distance:3 nitude:1 degrade:2 evenly:1 topic:1 consensus:2 assuming:2 index:2 minimizing:1 liang:2 difficult:1 dunson:2 setup:1 robert:2 negative:1 unknown:3 perform:3 datasets:2 curved:10 defining:2 extended:2 communication:9 misspecification:4 mansour:1 arbitrary:5 peleato:1 david:1 timal:1 eckstein:1 kl:56 connection:1 philosophical:1 vandana:1 california:1 subgroup:1 heterogenous:2 nip:2 scott:2 challenge:1 program:1 max:2 including:1 wainwright:2 power:1 suitable:1 natural:4 treated:1 force:1 created:1 ready:1 extract:2 deviate:2 understanding:1 geometric:4 acknowledgement:1 asymptotic:12 georgios:1 loss:13 fully:1 highlight:1 permutation:1 proportional:1 var:2 qliu1:1 foundation:2 switched:1 sufficient:2 consistent:8 efabbayes:1 austin:1 wireless:2 bias:5 weaker:1 aad:1 distributed:27 van:2 curve:1 dimension:3 made:1 avg:10 twostage:1 approximate:4 ml:1 global:34 assumed:2 xi:5 continuous:1 iterative:1 sk:2 why:1 robust:2 symmetry:2 mse:3 complex:1 aistats:1 main:1 spread:1 linearly:1 whole:5 predd:2 big:2 paul:1 n2:6 x1:1 fig:8 slow:1 wiley:2 precision:1 sub:10 exponential:36 learns:1 down:1 theorem:9 showing:3 r2:1 dk:1 fusion:4 bivariate:3 intractable:2 mnist:4 avrim:1 workshop:1 phd:1 magnitude:1 dissimilarity:1 logarithmic:1 likely:1 infinitely:1 scalar:2 pedro:1 corresponds:2 loses:1 dh:1 conditional:1 identity:2 viewed:1 marked:1 towards:1 ownership:1 fisher:13 admm:1 included:2 specifically:2 except:1 operates:1 uniformly:1 averaging:13 sampler:1 principal:1 called:2 total:14 formally:1 jonathan:1 alexander:3 indian:1 kulkarni:1 mcmc:3 |
4,671 | 523 | LEARNING UNAMBIGUOUS REDUCED
SEQUENCE DESCRIPTIONS
Jürgen Schmidhuber
Dept. of Computer Science
University of Colorado
Campus Box 430
Boulder, CO 80309, USA
yirgan@cs.colorado.edu
Abstract
Do you want your neural net algorithm to learn sequences? Do not limit yourself to conventional gradient descent (or approximations thereof).
Instead, use your sequence learning algorithm (any will do) to implement
the following method for history compression. No matter what your final goals are, train a network to predict its next input from the previous
ones. Since only unpredictable inputs convey new information, ignore all
predictable inputs but let all unexpected inputs (plus information about
the time step at which they occurred) become inputs to a higher-level
network of the same kind (working on a slower, self-adjusting time scale).
Go on building a hierarchy of such networks. This principle reduces the
descriptions of event sequences without loss of information, thus easing
supervised or reinforcement learning tasks. Alternatively, you may use
two recurrent networks to collapse a multi-level predictor hierarchy into a
single recurrent net. Experiments show that systems based on these principles can require less computation per time step and many fewer training
sequences than conventional training algorithms for recurrent nets. Finally you can modify the above method such that predictability is not defined
in a yes-or-no fashion but in a continuous fashion.
1 INTRODUCTION
The following methods for supervised sequence learning have been proposed: Simple
recurrent nets [7][3], time-delay nets (e.g. [2]), sequential recursive auto-associative
memories [16], back-propagation through time or BPTT [21] [30] [33], Mozer's 'focused back-prop' algorithm [10], the IID- or RTRL-algorithm [19][1][34], its accelerated versions [32][35][25], the recent fast-weight algorithm [27], higher-order
networks [5], as well as continuous time methods equivalent to some of the above
[14][15][4]. The following methods for sequence learning by reinforcement learning
have been proposed: Extended REINFORCE algorithms [31], the neural bucket
brigade algorithm [22], recurrent networks adjusted by adaptive critics [23](see also
[8]), buffer-based systems [13], and networks of hierarchically organized neuron-like
"bions" [18].
With the exception of [18] and [13], these approaches waste resources and limit
efficiency by focusing on every input instead of focusing only on relevant inputs.
Many of these methods have a second drawback as well: The longer the time lag
between an event and the occurrence of a related error the less information is carried
by the corresponding error information wandering 'back into time' (see [6] for a more
detailed analysis). [11], [12] and [20] have addressed the latter problem but not the
former. The system described by [18] on the other hand addresses both problems,
but in a manner much different from that presented here.
2 HISTORY COMPRESSION
A major contribution of this work is an adaptive method for removing redundant
information from sequences. This principle can be implemented with the help of
any of the methods mentioned in the introduction.
Consider a deterministic discrete time predictor (not necessarily a neural network) whose state at time t of sequence p is described by an environmental input vector x^p(t), an internal state vector h^p(t), and an output vector z^p(t). The environment may be non-deterministic. At time 0, the predictor starts with x^p(0) and an internal start state h^p(0). At time t ≥ 0, the predictor computes

    z^p(t) = f(x^p(t), h^p(t)).

At time t > 0, the predictor furthermore computes

    h^p(t) = g(x^p(t − 1), h^p(t − 1)).

All information about the input at a given time t_x can be reconstructed from t_x, f, g, x^p(0), h^p(0), and the pairs (t_s, x^p(t_s)) for which 0 < t_s ≤ t_x and z^p(t_s − 1) ≠ x^p(t_s). This is because if z^p(t) = x^p(t + 1) at a given time t, then the predictor is able to predict the next input from the previous ones. The new input is derivable by means of f and g.
Information about the observed input sequence can be even further compressed beyond just the unpredicted input vectors x^p(t_s). It suffices to know only those elements of the vectors x^p(t_s) that were not correctly predicted.
This observation implies that we can discriminate one sequence from another by
knowing just the unpredicted inputs and the corresponding time steps at which they occurred. No information is lost if we ignore the expected inputs. We do not even
have to know f and g. I call this the principle of history compression.
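In code, the principle is a one-liner over any next-step predictor; the toy sketch below is our own formulation and keeps exactly the unexpected inputs together with their time steps.

    def compress(inputs, predict_next):
        # predict_next(history) returns the predicted next input given the past
        kept, history = [], []
        for t, x in enumerate(inputs):
            if t == 0 or predict_next(history) != x:   # unexpected: keep (t, x)
                kept.append((t, x))
            history.append(x)
        return kept

    # e.g. with a repeat-the-last-input predictor:
    # compress("aaabbb", lambda h: h[-1])  ->  [(0, 'a'), (3, 'b')]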
From a theoretical point of view it is important to know at what time an unexpected
input occurs; otherwise there will be a potential for ambiguities: Two different input
sequences may lead to the same shorter sequence of unpredicted inputs. With many
practical tasks, however, there is no need for knowing the critical time steps (see
section 5).
3 SELF-ORGANIZING PREDICTOR HIERARCHY
Using the principle of history compression we can build a self-organizing hierarchical neural 'chunking' system¹. The basic task can be formulated as a prediction task. At a given time step the goal is to predict the next input from previous inputs. If there are external target vectors at certain time steps then they are simply treated as another part of the input to be predicted.

The architecture is a hierarchy of predictors, the input to each level of the hierarchy is coming from the previous level. P_i denotes the ith level network which is trained to predict its own next input from its previous inputs². We take P_i to be one of the conventional dynamic recurrent neural networks mentioned in the introduction; however, it might be some other adaptive sequence processing device as well³.

At each time step the input of the lowest-level recurrent predictor P_0 is the current external input. We create a new higher-level adaptive predictor P_{s+1} whenever the adaptive predictor at the previous level, P_s, stops improving its predictions. When this happens the weight-changing mechanism of P_s is switched off (to exclude potential instabilities caused by ongoing modifications of the lower-level predictors). If at a given time step P_s (s > 0) fails to predict its next input (or if we are at the beginning of a training sequence which usually is not predictable either) then P_{s+1} will receive as input the concatenation of this next input of P_s plus a unique representation of the corresponding time step⁴; the activations of P_{s+1}'s hidden and output units will be updated. Otherwise P_{s+1} will not perform an activation update. This procedure ensures that P_{s+1} is fed with an unambiguous reduced description⁵ of the input sequence observed by P_s. This is theoretically justified by the principle of history compression.
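A toy rendering of this construction, with all interface names (fit, predict) being our own assumption rather than anything specified in the text:

    def build_hierarchy(sequences, make_predictor, max_levels=5):
        # make_predictor() returns an object with .fit(seqs) and
        # .predict(prefix) -> next element
        levels, data = [], sequences
        for _ in range(max_levels):
            P = make_predictor()
            P.fit(data)                                 # train until it stops improving
            levels.append(P)                            # then freeze its weights
            # each level passes up only its unexpected events, time-stamped:
            data = [[(t, x) for t, x in enumerate(seq)
                     if t == 0 or P.predict(seq[:t]) != x] for seq in data]
            if all(len(seq) <= 1 for seq in data):      # everything now predictable
                break
        return levels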
In general, P_{s+1} will receive fewer inputs over time than P_s.
¹ See also [18] for a different hierarchical connectionist chunking system based on similar principles.
² Recently I became aware that Don Mathis had some related ideas (personal communication). A hierarchical approach to sequence generation was pursued by [9].
³ For instance, we might employ the more limited feed-forward networks and a 'time window' approach. In this case, the number of previous inputs to be considered as a basis for the next prediction will remain fixed.
⁴ A unique time representation is theoretically necessary to provide P_{s+1} with unambiguous information about when the failure occurred (see also the last paragraph of section 2). A unique representation of the time that went by since the last unpredicted input occurred will do as well.
⁵ In contrast, the reduced descriptions referred to by [11] are not unambiguous.
With existing learning algorithms, the higher-level predictor should have less difficulty in learning to predict the critical inputs than the lower-level predictor. This is because P_{s+1}'s 'credit assignment paths' will often be short compared to those of P_s. This will happen if the incoming inputs carry global temporal structure which has not yet been discovered by P_s. (See also [18] for a related approach to the problem of credit assignment in reinforcement learning.)
This method is a simplification and an improvement of the recent chunking method
described by [24].
A multi-level predictor hierarchy is a rather safe way of learning to deal with sequences with multi-level temporal structure (e.g. speech). Experiments have shown that multi-level predictors can quickly learn tasks which are practically unlearnable by conventional recurrent networks, e.g. [6].
4 COLLAPSING THE HIERARCHY
One disadvantage of a predictor hierarchy as above is that it is not known in advance
how many levels will be needed. Another disadvantage is that levels are explicitly
separated from each other. It may be possible, however, to collapse the hierarchy
into a single network as outlined in this section. See details in [26].
We need two conventional recurrent networks: the automatizer A and the chunker C, which correspond to a distinction between automatic and attended events. (See also [13] and [17] which describe a similar distinction in the context of reinforcement learning). At each time step A receives the current external input. A's error function is threefold: One term forces it to emit certain desired target outputs at certain times. If there is a target, then it becomes part of the next input. The second term forces A at every time step to predict its own next non-target input. The third (crucial) term will be explained below.

If and only if A makes an error concerning the first and second term of its error function, the unpredicted input (including a potentially available teaching vector) along with a unique representation of the current time step will become the new input to C. Before this new input can be processed, C (whose last input may have occurred many time steps earlier) is trained to predict this higher-level input from its current internal state and its last input (employing a conventional recurrent net algorithm). After this, C performs an activation update which contributes to a
higher level internal representation of the input history. Note that according to the
principle of history compression C is fed with an unambiguous reduced description
of the input history. The information deducible by means of A's predictions can be
considered as redundant. (The beginning of an episode usually is not predictable,
therefore it has to be fed to the chunking level, too.)
Since C's 'credit assignment paths' will often be short compared to those of A, C will
often be able to develop useful internal representations of previous unexpected input
events. Due to the final term of its error function, A will be forced to reproduce
these internal representations, by predicting C's state. Therefore A will be able
to create useful internal representations by itself in an early stage of processing a given sequence; it will often receive meaningful error signals long before errors of the first or second kind occur. These internal representations in turn must carry
the discriminating information for enabling A to improve its low-level predictions.
Therefore the chunker will receive fewer and fewer inputs, since more and more
inputs become predictable by the automatizer. This is the collapsing operation.
Ideally, the chunker will become obsolete after some time.
It must be emphasized that unlike with the incremental creation of a multi-level
predictor hierarchy described in section 3, there is no formal proof that the 2-net
on-line version is free of instabilities. One can imagine situations where A unlearns
previously learned predictions because of the third term of its error function. Relative weighting of the different terms in A's error function represents an ad-hoc
remedy for this potential problem. In the experiments below, relative weighting
was not necessary.
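Schematically, one on-line step of the 2-net system might look as follows; A and C stand for any trainable sequence predictors, and every method name here is our own scaffolding for the three error terms described above, not an interface from the text.

    def two_net_step(A, C, x, next_x, t, unexpected):
        # unexpected: A's previous prediction missed x (always True at t == 0)
        if unexpected:
            C.learn_to_predict((t, x))      # train C to predict its new input...
            C.update_state((t, x))          # ...then let the chunker absorb it
        A.learn_state_target(x, C.state())  # third error term: reproduce C's state
        if next_x is None:
            return False
        A.learn_next_input(x, next_x)       # first/second terms: targets + next input
        return A.predict_next(x) != next_x  # report failure for the next time step

As the automatizer improves, `unexpected` becomes rare and C falls silent, which is the collapsing operation in code form.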
5 EXPERIMENTS
One experiment with a multi-level chunking architecture involved a grammar which
produced strings of many a's and b's such that there was local temporal structure
within the training strings (see [6] for details). The task was to differentiate between
strings with long overlapping suffixes. The conventional algorithm completely failed
to solve the task; it became confused by the great numbers of input sequences with
similar endings. Not so the chunking system: It soon discovered certain hierarchical
temporal structures in the input sequences and decomposed the problem such that
it was able to solve it within a few hundred-thousand training sequences.
The 2-net chunking system (the one with the potential for collapsing levels) was
also tested against the conventional recurrent net algorithms. (See details in [26].)
With the conventional algorithms, with various learning rates, and with more than
1,000,000 training sequences, performance did not improve in prediction tasks involving even as few as 20 time steps between relevant events.
But, the 2-net chunking system was able to solve the task rather quickly. An
efficient approximation of the BPTT-method was applied to both the chunker and
the automatizer: Only 3 iterations of error propagation 'back into the past' were
performed at each time step. Most of the test runs required less than 5000 training
sequences. Still the final weight matrix of the automatizer often resembled what
one would hope to get from the conventional algorithm. There were hidden units
which learned to bridge the 20-step time lags by means of strong self-connections.
The chunking system needed less computation per time step than the conventional
method and required many fewer training sequences.
6 CONTINUOUS HISTORY COMPRESSION
The history compression technique formulated above defines expectation-mismatches in a yes-or-no fashion: Each input unit whose activation is not predictable at a certain time gives rise to an unexpected event. Each unexpected event
provokes an update of the internal state of a higher-level predictor. The updates
always take place according to the conventional activation spreading rules for recurrent neural nets. There is no concept of a partial mismatch or of a 'near-miss'.
There is no possibility of updating the higher-level net 'just a little bit' in response
to a 'nearly expected input'. In practical applications, some 'epsilon' has to be used
to define an acceptable mismatch.
In reply to the above criticism, continuous history compression is based on the
following ideas. In what follows, v_i(t) denotes the i-th component of vector v(t). We use a local input representation. The components of z^p(t) are forced to sum up to 1 and are interpreted as a prediction of the probability distribution of the possible x^p(t + 1). z^p_j(t) is interpreted as the prediction of the probability that x^p_j(t + 1) is 1.

The output entropy

    − Σ_j z^p_j(t) log z^p_j(t)

can be interpreted as a measure of the predictor's confidence. In the worst case, the predictor will expect every possible event with equal probability.

How much information (relative to the current predictor) is conveyed by the event x^p_j(t + 1) = 1, once it is observed? According to [29] it is

    − log z^p_j(t).
[28] defines update procedures based on Mozer's recent update function [12] that
let highly informative events have a stronger influence on the history representation
than less informative (more likely) events. The 'strength' of an update in response
to a more or less unexpected event is a monotonically increasing function of the
information the event conveys. One of the update procedures uses Pollack's recursive auto-associative memories [16] for storing unexpected events, thus yielding an
entirely local learning algorithm for learning extended sequences.
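A minimal sketch of one such continuous update; the particular squashing of surprise into an update strength is our own illustrative choice, not one prescribed by [28].

    import numpy as np

    def continuous_update(state, z, j, event_vec, rate=0.1):
        # z: predicted distribution over the next symbol; j: symbol observed
        surprise = -np.log(np.clip(z[j], 1e-12, 1.0))  # information conveyed [29]
        strength = 1.0 - np.exp(-surprise)             # monotone; 0 when z[j] = 1
        return (1 - rate * strength) * state + rate * strength * event_vec

Fully expected events (z[j] = 1) leave the higher-level state untouched, recovering the yes-or-no scheme as a limiting case.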
7 ACKNOWLEDGEMENTS
Thanks to Josef Hochreiter for conducting the experiments. Thanks to Mike Mozer
and Mark Ring for useful comments on an earlier draft of this paper. This research
was supported in part by NSF PYI award IRI-9058450, grant 90-21 from the James
S. McDonnell Foundation, and DEC external research grant 1250 to Michael C.
Mozer.
References
[1] J. Bachrach. Learning to represent state, 1988. Unpublished master's thesis,
University of Massachusetts, Amherst.
[2] U. Bodenhausen and A. Waibel. The tempo 2 algorithm: Adjusting time-delays
by supervised learning. In D. S. Lippman, J. E. Moody, and D. S. Touretzky,
editors, Advances in Neural Information Processing Systems 3, pages 155-161.
San Mateo, CA: Morgan Kaufmann, 1991.
[3] J. L. Elman. Finding structure in time. Technical Report CRL Technical
Report 8801, Center for Research in Language, University of California, San
Diego, 1988.
[4] M. Gherrity. A learning algorithm for analog fully recurrent neural networks. In
IEEE/INNS International Joint Conference on Neural Networks, San Diego,
volume 1, pages 643-644, 1989.
[5] C. L. Giles and C. B. Miller. Learning and extracting finite state automata.
Accepted for publication in Neural Computation, 1992.
[6] Josef Hochreiter. Diploma thesis, 1991. Institut für Informatik, Technische Universität München.
[7] M. I. Jordan. Serial order: A parallel distributed processing approach. Technical Report ICS Report 8604, Institute for Cognitive Science, University of
California, San Diego, 1986.
[8] G. Lukes. Review of Schmidhuber's paper 'Recurrent networks adjusted by
adaptive critics'. Neural Network Reviews, 4(1):41-42, 1990.
[9] Y. Miyata. An unsupervised PDP learning model for action planning. In Proc.
of the Tenth Annual Conference of the Cognitive Science Society, Hillsdale,
NJ, pages 223-229. Erlbaum, 1988.
[10] M. C. Mozer. A focused back-propagation algorithm for temporal sequence
recognition. Complex Systems, 3:349-381, 1989.
[11] M. C. Mozer. Connectionist music composition based on melodic, stylistic,
and psychophysical constraints. Technical Report CU-CS-495-90, University
of Colorado at Boulder, 1990.
[12] M. C. Mozer. Induction of multiscale temporal structure. In D. S. Lippman,
J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information
Processing Systems 4, to appear. San Mateo, CA: Morgan Kaufmann, 1992.
[13] C. Myers. Learning with delayed reinforcement through attention-driven buffering. TR, Imperial College of Science, Technology and Medicine, 1990.
[14] B. A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation, 1:263-269, 1989.
[15] F. J. Pineda. Time dependent adaptive neural networks. In D. S. Touretzky,
editor, Advances in Neural Information Processing Systems 2, pages 710-718.
San Mateo, CA: Morgan Kaufmann, 1990.
[16] J. B. Pollack. Recursive distributed representation. Artificial Intelligence,
46:77-105, 1990.
[17] M. A. Ring. PhD Proposal: Autonomous construction of sensorimotor hierarchies in neural networks. Technical report, Univ. of Texas at Austin, 1990.
[18] M. A. Ring. Incremental development of complex behaviors through automatic
construction of sensory-motor hierarchies. In L. Birnbaum and G. Collins,
editors, Machine Learning: Proceedings of the Eighth International Workshop,
pages 343-347. Morgan Kaufmann, 1991.
[19] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation
network. Technical Report CUED/F-INFENG/TR.l, Cambridge University
Engineering Department, 1987.
[20] R. Rohwer. The 'moving targets' training method. In J. Kindermann and
A. Linden, editors, Proceedings of 'Distributed Adaptive Neural Information
Processing', St. Augustin, 24.-25.5. Oldenbourg, 1989.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland,
editors, Parallel Distributed Processing, volume I, pages 318-362. MIT Press,
1986.
[22] J. H. Schmidhuber. A local learning algorithm for dynamic feedforward and
recurrent networks. Connection Science, 1(4):403-412, 1989.
[23] J. H. Schmidhuber. Recurrent networks adjusted by adaptive critics. In Proc.
IEEE/INNS International Joint Conference on Neural Networks, Washington,
D. C., volume I, pages 719-722, 1990.
[24] J. H. Schmidhuber. Adaptive decomposition of time. In T. Kohonen,
K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 909-914. Elsevier Science Publishers B.V., North-Holland, 1991.
[25] J. H. Schmidhuber. A fixed size storage O(n³) time complexity learning algorithm for fully recurrent continually running networks. Accepted for publication
in Neural Computation, 1992.
[26] J. H. Schmidhuber. Learning complex, extended sequences using the principle
of history compression. Accepted for publication in Neural Computation, 1992.
[27] J. H. Schmidhuber. Learning to control fast-weight memories: An alternative
to recurrent nets. Accepted for publication in Neural Computation, 1992.
[28] J. H. Schmidhuber, M. C. Mozer, and D. Prelinger. Continuous history compression. Technical report, Dept. of Comp. Sci., University of Colorado at
Boulder, 1992.
[29] C. E. Shannon. A mathematical theory of communication (parts I and II). Bell
System Technical Journal, XXVII:379-423, 1948.
[30] P. J. Werbos. Generalization of back propagation with application to a recurrent
gas market model. Neural Networks, 1, 1988.
[31] R. J. Williams. Toward a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, College of Comp. Sci., Northeastern
University, Boston, MA, 1988.
[32] R. J. Williams. Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report NU-CCS-89-27,
Boston: Northeastern University, College of Computer Science, 1989.
[33] R. J. Williams and J. Peng. An efficient gradient-based algorithm for on-line
training of recurrent network trajectories. Neural Computation, 4:491-501,
1990.
[34] R. J. Williams and D. Zipser. Experimental analysis of the real-time recurrent
learning algorithm. Connection Science, 1(1):87-111, 1989.
[35] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent
networks and their computational complexity. In Back-propagation: Theory,
Architectures and Applications. Hillsdale, NJ: Erlbaum, 1992, in press.
PART VI: RECURRENT NETWORKS
| 523 |@word cu:1 version:2 compression:11 stronger:1 bptt:2 decomposition:1 attended:1 tr:2 past:1 existing:1 current:5 activation:5 yet:1 must:2 oldenbourg:1 happen:1 informative:2 motor:1 update:8 pursued:1 fewer:5 device:1 obsolete:1 intelligence:1 beginning:2 ith:1 short:2 compo:2 draft:1 miinchen:1 mathematical:1 along:1 become:4 paragraph:1 manner:1 theoretically:2 market:1 expected:2 behavior:1 elman:1 planning:1 multi:6 decomposed:1 little:1 unpredictable:1 window:1 increasing:1 becomes:1 confused:1 campus:1 lowest:1 what:4 kind:2 interpreted:3 string:3 enor:4 finding:1 nj:2 temporal:6 every:3 gherrity:1 control:1 unit:3 grant:2 appear:1 continually:1 before:2 engineering:1 local:4 modify:1 limit:2 path:2 might:2 plus:2 easing:1 mateo:3 luke:1 co:1 ease:1 collapse:2 limited:1 deducible:1 practical:2 unique:4 recursive:3 lost:1 implement:1 lippman:2 procedure:3 bell:1 confidence:1 melodic:1 get:1 storage:1 context:1 influence:1 instability:2 conventional:10 equivalent:1 deterministic:2 center:1 go:1 iri:1 attention:1 williams:6 automaton:1 focused:2 pyi:1 bachrach:1 rule:1 autonomous:1 updated:1 hierarchy:12 target:5 colorado:4 imagine:1 diego:3 construction:2 exact:1 us:1 element:1 rumelhart:2 recognition:1 simula:1 updating:1 werbos:1 observed:3 mike:1 worst:1 thousand:1 ensures:1 ormation:1 episode:1 went:1 mentioned:2 mozer:8 predictable:5 environment:1 complexity:3 ideally:1 dynamic:3 personal:1 trained:2 creation:1 efficiency:1 basis:1 completely:1 po:1 joint:2 various:1 train:1 separated:1 forced:2 fast:2 describe:1 univ:1 artificial:2 whose:3 lag:2 solve:3 otherwise:2 compressed:1 jud:1 grammar:1 itself:1 final:3 associative:2 hoc:1 sequence:31 differentiate:1 myers:1 net:13 inn:2 pineda:1 coming:1 kohonen:1 relevant:2 organizing:2 description:8 zp:16 incremental:2 ring:3 help:1 cued:1 recurrent:21 develop:1 strong:1 implemented:1 c:2 predicted:4 implies:1 safe:1 drawback:1 hillsdale:2 require:1 suffices:1 generalization:1 adjusted:3 ofreinforcement:1 practically:1 considered:2 credit:3 ic:1 great:1 predict:8 major:1 early:1 proc:2 spreading:1 augustin:1 kindermann:1 bridge:1 create:2 hope:1 mit:1 always:1 rather:2 publication:4 improvement:1 fur:1 contrast:1 criticism:1 elsevier:1 dependent:1 suffix:1 hidden:2 reproduce:1 josef:2 development:1 equal:1 aware:1 once:1 washington:1 represents:1 buffering:1 unsupervised:1 nearly:1 connectionist:3 report:11 employ:1 few:2 delayed:1 possibility:1 highly:1 provokes:1 yielding:1 emit:1 partial:1 necessary:2 shorter:1 institut:1 desired:1 re:1 theoretical:1 pollack:2 instance:1 earlier:2 giles:1 disadvantage:2 assignment:3 technische:1 predictor:22 hundred:1 delay:2 erlbaum:2 too:1 thanks:2 st:1 international:3 amherst:1 discriminating:1 off:1 michael:1 quickly:2 moody:2 thesis:2 ambiguity:1 collapsing:3 tz:2 external:4 cognitive:2 potential:4 exclude:1 waste:1 north:1 matter:1 caused:1 explicitly:1 ad:1 vi:1 performed:1 view:1 start:2 parallel:2 contribution:1 became:2 kaufmann:4 conducting:1 miller:1 yes:2 produced:1 iid:1 informatik:1 trajectory:2 cc:2 history:14 touretzky:3 whenever:1 rohwer:1 failure:1 against:1 sensorimotor:1 involved:1 james:1 jiirgen:1 thereof:1 conveys:1 proof:1 stop:1 adjusting:2 massachusetts:1 organized:1 back:7 focusing:2 feed:1 higher:8 supervised:3 response:2 box:1 furthermore:1 just:2 stage:1 reply:1 working:1 hand:1 receives:1 multiscale:1 overlapping:1 propagation:7 defines:2 usa:1 building:1 concept:1 remedy:1 former:1 deal:1 self:4 unambiguous:8 pearlmutter:1 performs:1 recently:1 brigade:1 
volume:3 analog:1 occurred:4 lad:1 composition:1 cambridge:1 automatic:2 outlined:1 hp:6 teaching:1 language:1 had:1 moving:1 longer:1 own:2 recent:3 driven:2 schmidhuber:13 buffer:1 certain:5 morgan:4 redundant:2 monotonically:1 signal:1 ii:1 reduces:1 technical:11 long:2 concerning:1 bodenhausen:1 serial:1 award:1 prediction:9 involving:1 basic:1 infeng:1 iteration:1 represent:1 hochreiter:2 dec:1 receive:4 justified:1 want:1 proposal:1 addressed:1 crucial:1 publisher:1 unlike:1 comment:1 jordan:1 call:1 extracting:1 zipser:2 near:1 feedforward:1 architecture:3 idea:2 knowing:2 texas:1 utility:1 wandering:1 speech:1 action:1 useful:3 detailed:1 processed:1 mcclelland:1 reduced:7 nsf:1 per:2 correctly:1 discrete:1 threefold:1 imperial:1 changing:1 birnbaum:1 tenth:1 cone:1 sum:1 run:1 you:3 master:1 place:1 stylistic:1 acceptable:1 bit:1 entirely:1 simplification:1 annual:1 strength:1 occur:1 constraint:1 your:3 department:1 according:3 waibel:1 unpredicted:3 mcdonnell:1 chunker:4 remain:1 rtrl:1 modification:1 happens:1 explained:1 boulder:3 bucket:1 chunking:8 resource:1 previously:1 turn:1 mechanism:1 needed:2 know:3 fed:3 well3:1 available:1 operation:1 hierarchical:4 occurrence:1 tempo:1 alternative:1 slower:1 denotes:2 running:1 music:1 medicine:1 epsilon:1 build:1 society:1 psychophysical:1 pond:1 occurs:1 gradient:4 fallside:1 reinforce:1 sci:2 concatenation:1 toward:1 induction:1 potentially:1 rise:1 perform:1 neuron:1 observation:1 enabling:1 finite:1 descent:1 gas:1 situation:1 extended:3 communication:2 hinton:1 pdp:1 discovered:2 kangas:1 pair:1 required:2 unpublished:1 connection:3 california:2 distinction:2 learned:2 nu:2 robinson:1 address:1 able:5 beyond:1 usually:2 below:2 mismatch:2 eighth:1 including:1 memory:3 event:14 critical:2 treated:1 difficulty:1 force:2 predicting:1 zr:2 improve:2 technology:1 carried:1 auto:2 review:2 acknowledgement:1 relative:3 fully:2 expect:1 diploma:1 generation:1 foundation:1 switched:1 conveyed:1 principle:9 editor:7 storing:1 critic:3 pi:2 austin:1 supported:1 last:3 free:1 soon:1 formal:1 viet:1 institute:1 distributed:4 ending:1 computes:2 sensory:1 forward:1 reinforcement:5 adaptive:10 san:6 employing:1 reconstructed:1 ignore:2 derivable:1 global:1 incoming:1 alternatively:1 don:1 continuous:5 un:2 vet:1 learn:2 ca:3 miyata:1 contributes:1 improving:1 necessarily:1 mathis:1 complex:2 did:1 hierarchically:1 convey:1 referred:1 en:1 fashion:3 predictability:1 fails:1 third:2 weighting:2 northeastern:2 removing:1 resembled:1 emphasized:1 linden:1 workshop:1 sequential:1 phd:1 boston:2 entropy:1 simply:1 likely:1 failed:1 unexpected:7 holland:1 environmental:1 recunent:1 ma:1 prop:1 goal:2 formulated:2 crl:1 universitiit:1 yourself:1 miss:1 discriminate:1 accepted:4 experimental:1 shannon:1 meaningful:1 exception:1 college:3 internal:10 mark:1 latter:1 collins:1 accelerated:1 ongoing:1 dept:2 tested:1 unlearnable:1 |
4,672 | 5,230 | Sensory Integration and Density Estimation
Joseph G. Makin and Philip N. Sabes
Center for Integrative Neuroscience/Department of Physiology
University of California, San Francisco
San Francisco, CA 94143-0444 USA
{makin, sabes}@phy.ucsf.edu
Abstract
The integration of partially redundant information from multiple sensors is a standard computational problem for agents interacting with the world. In man and
other primates, integration has been shown psychophysically to be nearly optimal
in the sense of error minimization. An influential generalization of this notion
of optimality is that populations of multisensory neurons should retain all the information from their unisensory afferents about the underlying, common stimulus [1]. More recently, it was shown empirically that a neural network trained
to perform latent-variable density estimation, with the activities of the unisensory
neurons as observed data, satisfies the information-preservation criterion, even
though the model architecture was not designed to match the true generative process for the data [2]. We prove here an analytical connection between these seemingly different tasks, density estimation and sensory integration; that the former
implies the latter for the model used in [2]; but that this does not appear to be true
for all models.
1 Introduction
A sensible criterion for integration of partially redundant information from multiple senses is that
no information about the underlying cause be lost. That is, the multisensory representation should
contain all of the information about the stimulus as the unisensory representations together did. A
variant on this criterion was first proposed in [1]. When satisfied, and given sensory cues that have
been corrupted with Gaussian noise, the most probable multisensory estimate of the underlying
stimulus property (height, location, etc.) will be a convex combination of the estimates derived independently from the unisensory cues, with the weights determined by the variances of the corrupting
noise, as observed psychophysically in monkey and man, e.g., [3, 4].
The task of plastic organisms placed in novel environments is to learn, from scratch, how to perform
this task. One recent proposal [2] is that primates treat the activities of the unisensory populations
of neurons as observed data for a latent-variable density-estimation problem. Thus the activities
of a population of multisensory neurons play the role of latent variables, and the model is trained
to generate the same distribution of unisensory activities when they are driven by the multisensory
neurons as when they are driven by their true causes in the world. The idea is that the latent variables
in the model will therefore come to correspond (in some way) to the latent variables that truly
underlie the observed distribution of unisensory activities, including the structure of correlations
across populations. Then it is plausible to suppose that, for any particular value of the stimulus,
inference to the latent variables of the model is ?as good as? inference to the true latent cause,
and that therefore the information criterion will be satisfied. Makin et alia showed precisely this,
empirically, using an exponential-family harmonium (a generalization of the restricted Boltzmann
machine [5]) as the density estimator [2].
Here we prove analytically that successful density estimation in certain models, including that of [2],
will necessarily satisfy the information-retention criterion. In variant architectures, the guarantee
does not hold.
2 Theoretical background

2.1 Multisensory integration and information retention
Psychophysical studies have shown that, when presented with cues of varying reliability in two
different sense modalities but about a common stimulus property (e.g., location or height), primates (including humans) estimate the property as a convex combination of the estimates derived
independently from the unisensory cues, where the weight on each estimate is proportional to its
reliability [3, 4]. Cue reliability is measured as the inverse variance in performance across repeated
instances of the unisensory cue, and will itself vary with the amount of corrupting noise (e.g., visual blur) added to the cue. This integration rule is optimal in that it minimizes error variance across
trials, at least for Gaussian corrupting noise.
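For concreteness, here is a minimal sketch of that rule (our own code, with made-up cue values in the comment): each estimate is weighted by its reliability, i.e., its inverse variance.

    import numpy as np

    def integrate(estimates, variances):
        w = 1.0 / np.asarray(variances, dtype=float)  # reliabilities
        x_hat = np.dot(w, estimates) / w.sum()        # convex combination
        var_hat = 1.0 / w.sum()                       # variance of fused estimate
        return x_hat, var_hat

    # e.g. integrate([10.0, 12.0], [1.0, 4.0])  ->  (10.4, 0.8)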
Alternatively, it can be seen as a special case of a more general scheme [6]. Assuming a uniform
prior distribution of stimuli, the optimal combination just described is equal to the peak of the
posterior distribution over the stimulus, conditioned on the noisy cues (y^1, y^2):

    x̂ = argmax_x Pr[X = x | y^1, y^2].
For Gaussian corrupting noise, this posterior distribution will itself be Gaussian; but even for integration problems that yield non-Gaussian posteriors, humans have been shown to estimate the
stimulus with the peak of that posterior [7].
This can be seen as a consequence of a scheme more general still, namely, encoding not merely
the peak of the posterior, but the entire distribution [1, 8]. Suppose again, for simplicity, that
Pr[X|Y 1 , Y 2 ] is Gaussian. Then if x ? is itself to be combined with some third cue (y 3 ), optimality requires keeping the variance of this posterior as well, since it (along with the reliability of
y 3 ) determines the weight given to x ? in this new combination. This scheme is especially relevant
when y 1 and y 2 are not ?cues? but the activities of populations of neurons, e.g. visual and auditory,
respectively. Since sensory information is more likely to be integrated in the brain in a staged, hierarchical fashion than in a single common pool [9], optimality requires encoding at least the first
two cumulants of the posterior distribution. For more general, non-Gaussian posteriors, the entire
posterior should be encoded [1, 6]. This amounts [1] to requiring, for downstream, ?multisensory?
neurons with activities Z, that:
Pr[X|Z] = Pr[X|Y 1 , Y 2 ].
When information about X reaches Z only via Y = [Y 1 , Y 2 ] (i.e., X ? Y ? Z forms a Markov
chain), this is equivalent (see Appendix) to requiring that no information about the stimulus be lost
in transforming the unisensory representations into a multisensory representation; that is,
I(X; Z) = I(X; Y),
where I(A; B) is the mutual information between A and B.
Of course, if there is any noise in the transition from unisensory to multisensory neurons, this equation cannot be satisfied exactly. A sensible modification is to require that this noise be the only
source of information loss. This amounts to requiring that the information equality hold, not for Z,
but for any set of sufficient statistics for Z as a function of Y, T_z(Y); that is,
    I(X; T_z(Y)) = I(X; Y).     (1)

2.2 Information retention and density estimation
A rather general statement of the role of neural sensory processing, sometimes credited to
Helmholtz, is to make inferences about states of affairs in the world, given only the data supplied
by the sense organs. Inference is hard because the mapping from the world's states to sense data is
[Figure 1 graphic: (A) X → Y, with p(x) on X and p(y|x) on the edge; (B) Z → Y, with q(z) on Z and q(y|z) on the edge; Y shaded in both.]
Figure 1: Probabilistic graphical models. (A) The world's generative process. (B) The model's generative process. Observed nodes are shaded. After training the model (q), the marginals match: p(y) = q(y).
not invertible, due both to noise and to the non-injectivity of physical processes (as in occlusion). A
powerful approach to this problem used in machine learning, and arguably by the brain [10, 11], is
to build a generative model for the data (Y), including the influence of unobserved (latent) variables
(Z). The latent variables at the top of a hierarchy of such models would presumably be proxies for
the true causes, states of affairs in the world (X).
In density estimation, however, the objective function for learning the parameters of the model is
that:
    ∫_x p(y|x) p(x) dx = ∫_z q(y|z) q(z) dz     (2)
(Fig. 1), i.e., that the "data distribution" of Y match the "model distribution" of Y; and this is consistent with models that throw away information about the world in the transformation from observed to latent variables, or even to their sufficient statistics. For example, suppose that the world's generative process looked like this:
Example 2.1. The prior p(x) is the flip of an unbiased coin; and the emission p(y|x) draws from
a standard normal distribution, takes the absolute value of the result, and then multiplies by −1 for tails and +1 for heads. Information about the state of X is therefore perfectly represented in Y. But
a trained density-estimation model with, say, a Gaussian emission model, q(y|z), would not bother
to encode any information in Z, since the emission model alone can represent all the data (which
just look like samples from a standard normal distribution). Thus Y and Z would be independent,
and Eq. 1 would not be satisfied, even though Eq. 2 would.
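A minimal simulation of Example 2.1 makes this concrete (a sketch in numpy, not from the original papers): the marginal of Y is exactly standard normal, so a single Gaussian emission matches p(y) and the model's latent variable can ignore X, even though sign(Y) determines X exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# World of Example 2.1: X is a fair coin (+1 heads, -1 tails); Y = X * |N(0, 1)|.
x = rng.choice([-1.0, 1.0], size=n)
y = x * np.abs(rng.standard_normal(n))

# The marginal of Y is standard normal, so a single Gaussian emission
# q(y|z) fits it and the model's Z carries no information about X.
print(y.mean(), y.var())         # ~0.0, ~1.0

# Yet Y determines X perfectly: I(X; Y) = H[X] = 1 bit.
print(np.mean(np.sign(y) == x))  # ~1.0 (y = 0 has probability zero)
```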
This case is arguably pathological, but similar considerations apply for more subtle variants. In
addition to Eq. 2, then, we shall assume something more: namely, that the "noise models" for the world and model match; i.e., that q(y|z) has the same functional form as p(y|x). More precisely, we assume:

∃ functions f(y; θ), φ(x), ψ(z) such that
    p(y|x) = f(y; φ(x)),
    q(y|z) = f(y; ψ(z)).    (3)
In [2], for example, f(y; θ) was assumed to be a product of Poisson distributions, so the "proximate causes" θ were a vector of means. Note that the functions φ(x) and ψ(z) induce distributions over θ, which we shall call p(θ) and q(θ), respectively; and that:

E_{p(θ)}[f(y; θ)] = E_{p(x)}[f(y; φ(x))] = E_{q(z)}[f(y; ψ(z))] = E_{q(θ)}[f(y; θ)],    (4)

where the first and last equalities follow from the "law of the unconscious statistician," and the second follows from Eqs. 2 and 3.
3  Latent-variable density estimation for multisensory integration
In its most general form, the aim is to show that Eq. 4 implies, perhaps with some other constraints,
Eq. 1. More concretely, suppose the random variables Y^1, Y^2, provided by sense modalities 1 and
2, correspond to noisy observations of an underlying stimulus. These could be noisy cues, but they
could also be the activities of populations of neurons (visual and proprioceptive, say, for concreteness). Then suppose a latent-variable density estimator is trained on these data, until it assigns the
same probability, q(y^1, y^2), to realizations of the observations, [y^1, y^2], as that with which they appear, p(y^1, y^2). Then we should like to know that inference to the latent variables in the model,
i.e., computation of the sufficient statistics T_z(Y^1, Y^2), throws away no information about the
stimulus. In [2], where this was shown empirically, the density estimator was a neural network, and
its latent variables were interpreted as the activities of downstream, multisensory neurons. Thus the
transformation from unisensory to multisensory representation was shown, after training, to obey
this information-retention criterion.
It might seem that we have already assembled sufficient conditions. In particular, knowing that the "noise models match," Eq. 3, might seem to guarantee that the data distribution and model distribution have the same sufficient statistics, since sufficient statistics depend only on the form of the conditional distribution. Then T_z(Y) would be sufficient for X as well as for Z, and the proof complete. But this sense of "form of the conditional distribution" is stronger than Eq. 4. If, for example, the image of z under ψ(·) is lower-dimensional than the image of x under φ(·), then the conditionals in Eq. 3 will have different forms as far as their sufficient statistics go. An example will clarify the point.
Example 3.1. Let p(y) be a two-component mixture of a (univariate) Bernoulli distribution. In particular, let φ(x) and ψ(z) be the identity maps, θ provide the mean of the Bernoulli, and p(X = 0.4) = 1/2, p(X = 0.6) = 1/2. The mixture marginal is therefore another Bernoulli random variable, with equal probability of being 1 or 0. Now consider the "mixture" model q that has the same noise model, i.e., a univariate Bernoulli distribution, but a prior with all its mass at a single mixing weight. If q(Z = 0.5) = 1, this model will satisfy Eq. 4. But a (minimal) sufficient statistic for the latent variables under p is simply the single sample, y, whereas the minimal sufficient statistic for the latent variable under q is the null set: the observation tells us nothing about Z because it is always the same value.
To rule out such cases, we propose (below) further constraints.
3.1  Proof strategy
We start by noting that any sufficient statistics T_z(Y) for Z are also sufficient statistics for any function of Z, since all the information about the output of that function must pass through Z first (Fig. 2A). In particular, then, T_z(Y) are sufficient statistics for the proximate causes, θ = ψ(Z). That is, for any θ generated by the model, Fig. 1B, t_z(y) for the corresponding y drawn from f(y; θ) are sufficient statistics. What about the θ generated by the world, Fig. 1A? We should like to show that t_z(y) are sufficient for them as well. This will be the case if, for every θ produced by the world, there exists a vector z such that ψ(z) = θ.
This minimal condition is hard to prove. Instead we might show a slightly stronger condition, that (q(θ) = 0) ⟹ (p(θ) = 0); i.e., to any θ that can be generated by the world, the model assigns nonzero probability. This is sufficient (although unnecessary) for the existence of a vector z for every θ produced by the world. Or we might pursue a stronger condition still, that to any θ that can be generated by the world, the model and data assign the same probability: q(θ) = p(θ). If one considers the marginals p(y) = q(y) to be mixture models, then this last condition is called the "identifiability" of the mixture [12], and for many conditional distributions f(y; θ), identifiability conditions have been proven. Note that mixture identifiability is taken to be a property of the conditional distribution, f(y; θ), not the marginal, p(y); so, e.g., without further restriction, a mixture model is not identifiable even if there exist just two prior distributions, p_1(θ), p_2(θ), that produce identical marginal distributions.
To see that identifiability, although sufficient (see below), is unnecessary, consider again the two-component mixture of a (univariate) Bernoulli distribution:
Example 3.2. Let p(X = 0.4) = 1/2, p(X = 0.6) = 1/2. If the model, q(y|z)q(z), has the same form, but mixing weights q(Z = 0.3) = 1/2, q(Z = 0.7) = 1/2, its mixture marginal will match the data distribution; yet p(θ) ≠ q(θ), so the model is clearly unidentifiable. Nevertheless, the sample itself, y, is a (minimal) sufficient statistic for both the model and the data distribution, so the information-retention criterion will be satisfied.
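This non-identifiability is easy to check numerically; a two-line sketch with the weights of the example:

```python
import numpy as np

# Data: equal mixture of Bernoulli(0.4) and Bernoulli(0.6).
p_y1 = 0.5 * 0.4 + 0.5 * 0.6   # P_data(Y = 1) = 0.5

# Model: equal mixture of Bernoulli(0.3) and Bernoulli(0.7).
q_y1 = 0.5 * 0.3 + 0.5 * 0.7   # P_model(Y = 1) = 0.5

assert np.isclose(p_y1, q_y1)  # marginals match although p(theta) != q(theta);
# still, y itself is minimally sufficient under both, so Eq. 1 holds.
```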
Figure 2: Venn diagrams for information. (A) ψ(Z) being a deterministic function of Z, its entropy (dark green) is a subset of the latter's (green). The same is true for the entropies of T_z(Y) (dark orange) and Y (orange), but additionally their intersections with H[Z] are identical because T_z is a sufficient statistic for Z. The mutual information values I(ψ(Z); Y) and I(ψ(Z); T_z(Y)) (i.e., the intersections of the entropies) are clearly identical (outlined patch). This corresponds to the derivation of Eq. 6. (B) When ψ(Z) is a sufficient statistic for Y, as guaranteed by Eq. 3, the intersection of its entropy with H[Y] is the same as the intersection of H[Z] with H[Y]; likewise for H[φ(X)] and H[X] with H[Y]. Since all information about X reaches Z via Y, the entropies of X and Z overlap only on H[Y]. Finally, if p(φ(x)) = q(ψ(z)), and Pr[Y|φ(X)] = Pr[Y|ψ(Z)] (Eq. 3), then the entropies of φ(X) and ψ(Z) have the same-sized overlaps (but not the same overlaps) with H[Y] and H[T_z(Y)]. This guarantees that I(X; T_z(Y)) = I(X; Y) (see Eq. 7).
In what follows we shall assume that the mixtures are finite. This is the case when the model is an exponential-family harmonium (EFH)¹, as in [2]: there are at most K := 2^{|hiddens|} mixture components. It is not true for real-valued stimuli X. For simplicity, we shall nevertheless assume that there are at most 2^{|hiddens|} configurations of X since: (1) the stimulus must be discretized immediately upon transduction by the nervous system, the brain (presumably) having only finite representational capacity; and (2) if there were an infinite number of configurations, Eq. 2 could not be satisfied exactly anyway. Eq. 4 can therefore be expressed as:
Σ_{i=1}^{I} f(y; θ_i) p(θ_i) = Σ_{j=1}^{J} f(y; θ_j) q(θ_j),    (5)

where I ≤ K, J ≤ K.
3.2  Formal description of the model, assumptions, and result
• The general probabilistic model. This is given by the graphical models in Fig. 1. "The world" generates data according to Fig. 1A ("data distribution"), and "the brain" uses Fig. 1B. None of the distributions labeled in the diagram need be equal to each other, and in fact X and Z may have different support.
• The assumptions.
  1. The noise models "match": Eq. 3.
  2. The number of hidden-variable states is finite, but otherwise arbitrarily large.
  3. Density estimation has been successful; i.e., the data and model marginals over Y match: Eq. 2.
  4. The noise model/conditional distribution f(y; θ) is identifiable: if p(y) = q(y), then p(θ) = q(θ). This condition holds for a very broad class of distributions.
• The main result. Information about the stimulus is retained in inferring the latent variables of the model, i.e. in the "feedforward" (Y → Z) pass of the model. More precisely,
¹An EFH is a two-layer Markov random field, with full interlayer connectivity and no intralayer connectivity, and in which the conditional distributions of the visible layer given the hiddens and vice versa belong to exponential families of probability distributions [5]. The restricted Boltzmann machine is therefore the special case of Bernoulli hiddens and Bernoulli visibles.
information loss is due only to noise in the hidden layer (which is unavoidable), not to a
failure of the inference procedure: Eq. 1.
More succinctly: for identifiable mixture models, Eq. 5 and Eq. 3 together imply Eq. 1.
3.3  Proof
First, for any set of sufficient statistics T_z(Y) for Z:

    I(Y; ψ(Z) | T_z(Y)) ≤ I(Y; Z | T_z(Y))                    (data-processing inequality [13])
                        = 0                                    (T_z(Y) are sufficient for Z)
    ⟹ 0 = I(Y; ψ(Z) | T_z(Y))                                 (Gibbs's inequality)
         = H[ψ(Z) | T_z(Y)] − H[ψ(Z) | Y, T_z(Y)]             (def'n cond. mutual info.)
         = H[ψ(Z) | T_z(Y)] − H[ψ(Z) | Y]                     (T_z(Y) deterministic)
         = H[ψ(Z) | T_z(Y)] − H[ψ(Z)] + H[ψ(Z)] − H[ψ(Z) | Y]
         = −I(ψ(Z); T_z(Y)) + I(ψ(Z); Y)                      (def'n mutual info.)
    ⟹ I(ψ(Z); T_z(Y)) = I(ψ(Z); Y).    (6)

So T_z are sufficient statistics for ψ(Z).
Now if finite mixtures of f(y; θ) are identifiable, then Eq. 5 implies that p(θ) = q(θ). Therefore:
    I(X; T_z(Y)) ≤ I(X; Y)               (data-processing inequality)
                 ≤ I(φ(X); Y)            (X → φ(X) → Y, D.P.I.)
                 = I(ψ(Z); Y)            (p(θ) = q(θ), Eq. 3)
                 = I(ψ(Z); T_z(Y))       (Eq. 6)
                 = I(φ(X); T_z(Y))       (p(θ) = q(θ), Eq. 3)
                 ≤ I(X; T_z(Y))          (data-processing inequality)
    ⟹ I(X; T_z(Y)) = I(X; Y),    (7)
which is what we set out to prove. (This last progression is illustrated in Fig. 2B.)
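The workhorse of the proof just given is the data-processing inequality. As a numerical sanity check (a sketch with arbitrary alphabet sizes, not tied to any particular model), one can verify it on a random finite Markov chain X → Y → Z:

```python
import numpy as np

def mutual_info(joint):
    """I(A; B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

rng = np.random.default_rng(1)
# Random Markov chain X -> Y -> Z over finite alphabets.
px = rng.dirichlet(np.ones(4))
py_x = rng.dirichlet(np.ones(5), size=4)   # p(y|x), one row per x
pz_y = rng.dirichlet(np.ones(3), size=5)   # p(z|y), one row per y

pxy = px[:, None] * py_x                   # p(x, y)
pxz = pxy @ pz_y                           # p(x, z) = sum_y p(x, y) p(z|y)

assert mutual_info(pxz) <= mutual_info(pxy) + 1e-9   # I(X; Z) <= I(X; Y)
```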
4  Relationship to empirical findings
The use of density-estimation algorithms for multisensory integration appears in [2, 15, 16], and in
[2], the connection between generic latent-variable density estimation and multisensory integration
was made, although the result was shown only empirically. We therefore relate those results to the
foregoing proof.
4.1  A density estimator for multisensory integration
In [2], an exponential-family harmonium (model distribution, q, Fig. 3B) with Poisson visible units (Y) and Bernoulli hidden units (Z) was trained on simulated populations of neurons encoding arm configuration in two-dimensional space (Fig. 3). An EFH is parameterized by the matrix of connection strengths between units (weights, W) and the unit biases, b_i. The nonlinearities for Bernoulli and Poisson units are logistic and exponential, respectively, corresponding to their inverse "canonical links" [17].
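As a sketch of these two nonlinearities (random placeholder weights and sizes, not the trained network of [2]), the mean "up-pass" and "down-pass" of such an EFH are:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
n_vis, n_hid = 80, 30                  # placeholder sizes
W = 0.05 * rng.standard_normal((n_hid, n_vis))
b_z = np.zeros(n_hid)                  # hidden (Bernoulli) biases
b_y = np.zeros(n_vis)                  # visible (Poisson) biases

y = rng.poisson(3.0, size=n_vis)       # a fake vector of spike counts

# Up-pass: mean hidden activities (logistic link for Bernoulli units).
z_mean = sigmoid(W @ y + b_z)

# Down-pass: Poisson rates (exponential link), given a hidden sample.
z = rng.binomial(1, z_mean)
rates = np.exp(W.T @ z + b_y)
```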
The data for these populations were created by (data distribution, p, Fig. 3A):
1. drawing a pair of joint angles (θ¹ = shoulder, θ² = elbow) from a uniform distribution between the joint limits; drawing two population gains (g^p, g^v, the "reliabilities" of the two populations) from uniform distributions over spike counts; hence x = [θ¹, θ², g^p, g^v];
2. encoding the joint angles in a set of 2D, Gaussian tuning curves (with maximum height g^p) that smoothly tile joint space ("proprioceptive neurons," θ^p), and encoding the corresponding end-effector position in a set of 2D, Gaussian tuning curves (with maximum height g^v) that smoothly tile the reachable workspace ("visual neurons," θ^v);
3. drawing spike counts, [y^p, y^v], from independent Poisson distributions whose means were given by [θ^p, θ^v].
Figure 3: Two probabilistic graphical models for the same data (a specific instance of Fig. 1). Colors are as in Fig. 2. (A) Hand position (θ) elicits a response from populations of visual (Y^v) and proprioceptive (Y^p) neurons. The reliability of each population's encoding of hand position varies with their respective gains, G^v, G^p. (B) The exponential-family harmonium (EFH; see text). After training, an up-pass through the model yields a vector of multisensory (mean) activities (z, hidden units) that contains all the information about θ, g^v, and g^p that was in the unisensory populations, Y^v and Y^p.
Thus the distribution of the unisensory spike counts, Y = [Y^p, Y^v], conditioned on hand position, p(y|x) = ∏_i p(y_i|x), was a "probabilistic population code," a biologically plausible proposal for how the cortex encodes probability distributions over stimuli [1]. The model was trained using one-step contrastive divergence, a learning procedure that changes weights and biases by descending the approximate gradient of a function that has q(y) = p(y) as its minimum [18, 19].
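A sketch of this data-generating process for a single population (1D tuning curves for brevity, whereas [2] used 2D grids; all sizes and ranges are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 40
centers = np.linspace(-1.0, 1.0, n_neurons)    # preferred stimuli tiling the space
width = 0.15

def population_response(theta, gain):
    """Poisson spike counts from Gaussian tuning curves with peak height `gain`."""
    tuning = gain * np.exp(-0.5 * ((theta - centers) / width) ** 2)
    return rng.poisson(tuning)

theta = rng.uniform(-1.0, 1.0)                  # stimulus (e.g., a joint angle)
g = rng.uniform(5.0, 20.0)                      # population gain ("reliability")
y = population_response(theta, g)               # one unisensory observation
```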
It was then shown empirically that the criterion for "optimal multisensory integration" proposed in [1],

Pr[X | Z̄] = Pr[X | y^p, y^v],    (8)

held approximately for an average, Z̄, of vectors sampled from q(z|y), and that the match improves as the number of samples grows, i.e., as the sample average Z̄ approaches the expected value E_{q(z|y)}[Z|y]. Since the weight matrix W was "fat," the randomly initialized network was highly unlikely to satisfy Eq. 8 by chance.
4.2  Formulating the empirical result in terms of the proof of Section 3
To show that Eq. 8 must hold, we first demonstrate its equivalence to Eq. 1. It then suffices, under our proof, to show that the model obeys Eqs. 3 and 5 and that the "mixture model" defined by the true generative process is identifiable.

For sufficiently many samples, Z̄ ≈ E_{q(z|y)}[Z|Y], which is a sufficient statistic for a vector of Bernoulli random variables: E_{q(z|y)}[Z|Y] = T_z(Y). This also corresponds to a noiseless "up-pass" through the model, T_z(Y) = σ{W Y + b_z}². And the information about the stimulus reaches the multisensory population, Z, only via the two unisensory populations, Y. Together these imply that Eq. 8 is equivalent to Eq. 1 (see Appendix for proof).
For both the "world" and the model, the function f(y; θ) is a product of independent Poissons, whose means θ are given, respectively, by the embedding of hand position into the tuning curves of the two populations, φ(X), and by the noiseless "down-pass" through the model, exp{Wᵀ Z + b_y} =: ψ(Z). So Eq. 3 is satisfied. Eq. 5 holds because the EFH was trained as a density estimator, and because the mixture may be treated as finite. (Although hand positions were drawn from a continuous uniform distribution, the number of mixing components in the data distribution is limited to the number of training samples. For the model in [2], this was less than a million. For comparison, the harmonium had 2^900 mixture weights at its disposal.) Finally, the noise model is factorial:
²That the vector of means alone, and not higher-order cumulants, suffices reflects the fact that the sufficient statistics can be written as linear functions of Y (in this case, W Y, with W the weight matrix), which is arguably a generically desirable property for neurons [20].
f(y; θ) = ∏_i f(y_i; θ_i). The class of mixtures of factorial distributions, f(y; θ), is identifiable just in case the class of mixtures of f(y_i; θ_i) is identifiable [14]; and mixtures of (univariate) Poisson conditionals are themselves identifiable [12]. Thus the mixture used in [2] is indeed identifiable.
5  Conclusions
We have traced an analytical connection from psychophysical results in monkey and man to a broad
class of machine-learning algorithms, namely, density estimation in latent-variable models. In particular, behavioral studies of multisensory integration have shown that primates estimate stimulus
properties with the peak of the posterior distribution over the stimulus, conditioned on the two
unisensory cues [3, 4]. This can be seen as a special case of a more general "optimal" computation, viz., computing and representing the entire posterior distribution [1, 6]; or, put differently,
finding transformations of multiple unisensory representations into a multisensory representation
that retains all the original information about the underlying stimulus. It has been shown that this
computation can be learned with algorithms that implement forms of latent-variable density estimation [15, 16]; and, indeed, argued that generic latent-variable density estimators will satisfy the
information-retention criterion [2]. We have provided an analytical proof that this is the case, at least
for certain classes of models (including the ones in [2]).
What about distributions f(y; θ) other than products of Poissons? Identifiability results, which we have relied on here, appear to be the norm for finite mixtures; [12] summarizes the "overall picture" thus: "[A]part from special cases with finite sample spaces [like binomials] or very special simple density functions [like the continuous uniform distribution], identifiability of classes of finite mixtures is generally assured." Thus the results apply to a broad set of density-estimation models and their equivalent neural networks.
Interestingly, this excludes Bernoulli random variables, and therefore the mixture model defined by
restricted Boltzmann machines (RBMs). Such mixtures are not strictly identifiable [12], meaning
there is more than one set of mixture weights that will produce the observed marginal distribution.
Hence the guarantee proved in Section 3 does not hold. On the other hand, the proof provides only
sufficient, not necessary conditions, so some guarantee of information retention is not ruled out.
And indeed, a relaxation of the identifiability criterion to exclude sets of measure zero has recently
been shown to apply to certain classes of mixtures of Bernoullis [21].
The information-retention criterion applies more broadly than multisensory integration; it is generally desirable. It is not, presumably, sufficient: the task of the cortex is not merely to pass information on unmolested from one point to another. On the other hand, the task of integrating data
from multiple sources without losing information about the underlying cause of those data has broad
application: it applies, for example, to the data provided by spatially distant photoreceptors that are
reporting the edge of a single underlying object. Whether the criterion can be satisfied in this and
other cases depends both on the brain's generative model and on the true generative process by
which the stimulus is encoded in neurons.
The proof was derived for sufficient statistics rather than the neural responses themselves, but this
limitation can be overcome at the cost of time (by collecting or averaging repeated samples of neural
responses) or of space (by having a hidden vector long enough to contain most of the information
even in the presence of noise).
Finally, the result was derived for "completed" density estimation, q(y) = p(y). This is a strong limitation; one would prefer to know how approximate completion of learning, q(y) ≈ p(y), affects the guarantee, i.e., how robust it is. In [2], for example, Eq. 2 was never directly verified, and in fact one-step contrastive divergence (the training rule used) has suboptimal properties for building a good generative model [22]. And although the sufficient conditions supplied by the proof apply to a
broad class of models, it would also be useful to know necessary conditions.
Acknowledgments
JGM thanks Matthew Fellows, Maria Dadarlat, Clay Campaigne, and Ben Dichter for useful conversations.
References
[1] Wei Ji Ma, Jeffrey M. Beck, Peter E. Latham, and Alexandre Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9:1423–1438, 2006.
[2] Joseph G. Makin, Matthew R. Fellows, and Philip N. Sabes. Learning Multisensory Integration and Coordinate Transformation via Density Estimation. PLoS Computational Biology, 9(4):1–17, 2013.
[3] Marc O. Ernst and Martin S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(January):429–433, 2002.
[4] David Alais and David Burr. The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3):257–262, February 2004.
[5] Max Welling, Michal Rosen-Zvi, and Geoffrey E. Hinton. Exponential Family Harmoniums with an Application to Information Retrieval. In Advances in Neural Information Processing Systems 17: Proceedings of the 2004 Conference, pages 1481–1488, 2005.
[6] David C. Knill and Alexandre Pouget. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 2004.
[7] J.A. Saunders and David C. Knill. Perception of 3D surface orientation from skew symmetry. Vision Research, 41(24):3163–3183, November 2001.
[8] Robert J. van Beers, A.C. Sittig, and Jan J. Denier van der Gon. Integration of proprioceptive and visual position-information: An experimentally supported model. Journal of Neurophysiology, 81:1355–1364, 1999.
[9] Philip N. Sabes. Sensory integration for reaching: Models of optimality in the context of behavior and the underlying neural circuits. Progress in Brain Research, 191:195–209, January 2011.
[10] Bruno A. Olshausen. Sparse codes and spikes. In R.P.N. Rao, Bruno A. Olshausen, and Michael S. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function, chapter 13. MIT Press, 2002.
[11] Anthony J. Bell. Towards a Cross-Level Theory of Neural Learning. AIP Conference Proceedings, 954:56–73, 2007.
[12] D.M. Titterington, A.F.M. Smith, and U.E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985.
[13] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 2006.
[14] Henry Teicher. Identifiability of Mixtures of Product Measures. The Annals of Mathematical Statistics, 38(4):1300–1302, 1967.
[15] Ilker Yildirim and Robert A. Jacobs. A rational analysis of the acquisition of multisensory representations. Cognitive Science, 36(2):305–332, March 2012.
[16] Jeffrey M. Beck, Katherine Heller, and Alexandre Pouget. Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models. Advances in Neural Information Processing Systems 25: Proceedings of the 2012 Conference, pages 1–9, 2013.
[17] Peter McCullagh and John A. Nelder. Generalized Linear Models. Chapman and Hall/CRC, second edition, 1989.
[18] Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.
[19] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 14:1771–1800, 2002.
[20] Jeffrey M. Beck, Vikranth R. Bejjanki, and Alexandre Pouget. Insights from a Simple Expression for Linear Fisher Information in a Recurrently Connected Population of Spiking Neurons. Neural Computation, 23(6):1484–1502, June 2011.
[21] Elizabeth S. Allman, Catherine Matias, and John A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, 37(6A):3099–3132, December 2009.
[22] Geoffrey E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. Technical report, University of Toronto, Toronto, 2010.
General Table Completion using a Bayesian
Nonparametric Model
Zoubin Ghahramani
Department of Engineering
University of Cambridge
zoubin@eng.cam.ac.uk
Isabel Valera
Department of Signal Processing
and Communications
University Carlos III in Madrid
ivalera@tsc.uc3m.es
Abstract
Even though heterogeneous databases can be found in a broad variety of applications, there exists a lack of tools for estimating missing data in such databases. In
this paper, we provide an efficient and robust table completion tool, based on a
Bayesian nonparametric latent feature model. In particular, we propose a general
observation model for the Indian buffet process (IBP) adapted to mixed continuous
(real-valued and positive real-valued) and discrete (categorical, ordinal and count)
observations. Then, we propose an inference algorithm that scales linearly with
the number of observations. Finally, our experiments over five real databases show
that the proposed approach provides more robust and accurate estimates than the
standard IBP and the Bayesian probabilistic matrix factorization with Gaussian
observations.
1  Introduction
A full 90% of all the data in the world has been generated over the last two years and this expansion
rate will not diminish in the years to come [17]. This extreme availability of data explains the great
investment that both the industry and the research community are expending in data science. Data is
usually organized and stored in databases, which are often large, noisy, and contain missing values.
Missing data may occur in diverse applications due to different reasons. For example, a sensor in
a remote sensor network may be damaged and transmit corrupted data or even cease to transmit;
participants in a clinical study may drop out during the course of the study; or users of a recommendation system rate only a small fraction of the available books, movies, or songs. The presence
of missing values can be challenging when the data is used for reporting, information sharing and
decision support, and as a consequence, missing data treatment has captured the attention in diverse
areas of data science such as machine learning, data mining, and data warehousing and management.
Several studies have shown that probabilistic modeling can help to estimate missing values, detect
errors in databases, or provide probabilistic responses to queries [19]. In this paper, we exclusively
focus on the use of probabilistic modeling for missing data estimation, and assume that the data
are missing completely at random (MCAR). There is extensive literature in probabilistic missing
data estimation and imputation in homogeneous databases, where all the attributes that describe
each object in the database present the same (continuous or discrete) nature. Most of the work
assumes that databases contain only either continuous data, usually modeled as Gaussian variables
[21], or discrete data, which can be either modeled by discrete likelihoods [9] or simply treated as Gaussian
variables [15, 21]. However, there still exists a lack of work dealing with heterogeneous databases,
which in fact are common in real applications and where the standard approach is to treat all the
attributes, either continuous or discrete, as Gaussian variables. As a motivating example, consider a
database that contains the answers to a survey, including diverse information about the participants
such as age (count data), gender (categorical data), salary (continuous non negative data), etc.
In this paper, we provide a general Bayesian approach for estimating and replacing the missing data
in heterogeneous databases (being the data MCAR), where the attributes describing each object can
be either discrete, continuous or mixed variables. Specifically, we account for real-valued, positive
real-valued, categorical, ordinal and count data. To this end, we assume that the information in
the database can be stored in a matrix (or table), where each row corresponds to an object and
the columns are the attributes that describe the different objects. We propose a novel Bayesian
nonparametric approach for general table completion based on feature modeling, in which each
object is represented by a set of latent variables and the observations are generated from a distribution
determined by those latent features. Since the number of latent variables needed to explain the data
depends on the specific database, we use the Indian buffet process (IBP) [8], which places a prior
distribution over binary matrices where the number of columns (latent variables) is unbounded.
The standard IBP assumes real-valued observations combined with conjugate likelihood models
that allow for fast inference algorithms [4]. Here, we aim at dealing with heterogeneous databases,
which may contain mixed continuous and discrete observations.
We propose a general observation model for the IBP that accounts for mixed continuous and discrete data, while keeping the properties of conjugate models. This allows us to propose an inference
algorithm that scales linearly with the number of observations. The proposed algorithm does not
only infer the latent variables for each object in the table, but it also provides accurate estimates for
its missing values. Our experiments over five real databases show that our approach for table completion outperforms, in terms of accuracy, the Bayesian probabilistic matrix factorization (BPMF)
[15] and the standard IBP which assume Gaussian observations. We also observe that the approach
based on treating mixed continuous and discrete data as Gaussian fails in estimating some attributes,
while the proposed approach provides robust estimates for all the missing values regardless of their
discrete or continuous nature.
The main contributions in this paper are: i) A general observation model (for mixed continuous and
discrete data) for the IBP that allows us to derive an inference algorithm that scales linearly with
the number of objects, and its application to build ii) a general and scalable tool to estimate missing
values in heterogeneous databases. An efficient C-code implementation for Matlab of the proposed
table completion tool is also released on the authors' website.
2  Related Work
In recent years, probabilistic modeling has become an attractive option for building database management systems since it allows estimating missing values, detecting errors, visualizing the data, and
providing probabilistic answers to queries [19]. BayesDB,1 for instance, is a database management
system that resorts to Crosscat [18], which originally appeared as a Bayesian approach to model human categorization of objects. BayesDB provides missing data estimates and probabilistic answer
to queries, but it only considers Gaussian and multinomial likelihood functions.
In the literature, probabilistic low-rank matrix factorization approaches have been broadly applied to
table completion (see, e.g., [14, 15, 21]). In these approaches, the table database X is approximated
by a low-rank matrix representation X ? ZB, where Z and B are usually assumed to be Gaussian
distributed. Most of the works in this area have focused on building automatic recommendation
systems, which appears as the most popular application of missing data estimation [14, 15, 21].
More specific models to build recommendation systems can be found in [7, 22], where the authors
assume that the rates each user assign to items are generated by a probabilistic generative model
which, based on the available data, accounts for similarities among users and among items to provide
good estimates of the missing rates.
Probabilistic matrix factorization can also be viewed as latent feature modeling, where each object
is represented by a vector of continuous latent variables. In contrast, the IBP and other latent feature
models (see, e.g., [16]) assume binary latent features to represent each object. Latent feature models
usually assume homogeneous databases with either real [14, 15, 21] or categorical data [9, 12, 13],
and only a few works consider heterogeneous data, such as mixed real and categorical data [16].
However, up to our knowledge, there are no general latent feature models (nor table completion
tools) to directly deal with heterogeneous databases. To fill this gap, in this paper we provide a
general table completion approach for heterogeneous databases, based on a generalized IBP, that
allows for efficient inference.
¹http://probcomp.csail.mit.edu/bayesdb/
3  Model Description
Let us assume a table with N objects, where each object is defined by D attributes. We can store the data in an N × D observation matrix X, in which each D-dimensional row vector is denoted by x_n = [x_n^1, . . . , x_n^D] and each entry is denoted by x_n^d. We consider that column vectors x^d (i.e., each dimension in the observation matrix X) may contain the following types of data:
• Continuous variables:
  1. Real-valued, i.e., x_n^d ∈ ℝ.
  2. Positive real-valued, i.e., x_n^d ∈ ℝ_+.
• Discrete variables:
  1. Categorical data, i.e., x_n^d takes values in a finite unordered set, e.g., x_n^d ∈ {"blue", "red", "black"}.
  2. Ordinal data, i.e., x_n^d takes values in a finite ordered set, e.g., x_n^d ∈ {"never", "sometimes", "often", "usually", "always"}.
  3. Count data, i.e., x_n^d ∈ {0, . . . , ∞}.
We assume that each observation x_n^d can be explained by a K-length vector of latent variables associated to the n-th data point, z_n = [z_n1, . . . , z_nK], and a weighting vector² B^d = [b_1^d, . . . , b_K^d] (K being the number of latent variables), whose elements b_k^d weight the contribution of the k-th latent feature to the d-th dimension of X. We gather the latent binary feature vectors z_n in an N × K matrix Z, which follows an IBP with concentration parameter α, i.e., Z ∼ IBP(α) [8]. We place a Gaussian distribution with zero mean and covariance matrix σ_B^2 I_K over the weighting vectors B^d. For convenience, z_n is a K-length row vector, while B^d is a K-length column vector.
To accommodate all kinds of observed random variables described above, we introduce an auxiliary Gaussian variable y_n^d, such that, when conditioned on the auxiliary variables, the latent variable model behaves as a standard IBP with Gaussian observations. In particular, we assume y_n^d is Gaussian distributed with mean z_n B^d and variance σ_y^2, i.e.,

p(y_n^d | z_n, B^d) = N(y_n^d | z_n B^d, σ_y^2),

and assume that there exists a transformation function over the variables y_n^d to obtain the observations x_n^d, mapping the real line ℝ into the observation space. The resulting generative model is shown in Figure 1, where Z is the IBP latent matrix, and Y^d and B^d contain, respectively, the auxiliary Gaussian variables y_n^d and the weighting factors b_k^d for the d-th dimension of the data. Additionally, ψ^d denotes the set of auxiliary random variables needed to obtain the observation vector x^d given Y^d, and H^d contains the hyper-parameters associated to the random variables in ψ^d. This model assumes that the observations x_n^d are independent given the latent matrix Z, the weighting matrices B^d and the auxiliary variables ψ^d. Therefore, the likelihood can be factorized as

p(X | Z, {B^d, ψ^d}_{d=1}^D) = ∏_{d=1}^D p(x^d | Z, B^d, ψ^d) = ∏_{d=1}^D ∏_{n=1}^N p(x_n^d | z_n, B^d, ψ^d).
Note that, if we assume Gaussian observations and set Y^d = x^d, this model resembles the standard IBP with Gaussian observations [8]. In addition, conditioned on the variables Y^d, we can infer the latent matrix Z as in the standard IBP. We also remark that auxiliary Gaussian variables linking a latent model with the observations have been previously used in Gaussian processes for multi-class classification [6] and for ordinal regression [2]. However, to our knowledge, this simple approach has not been used to account for mixed continuous and discrete data, and the existing approaches for the IBP with discrete observations propose non-conjugate likelihood models and approximate inference algorithms [12, 13].
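To make the generative side concrete, here is a sketch for a single real-valued attribute; Z is drawn from a truncated stick-breaking approximation to the IBP rather than the exact process, and all sizes and hyper-parameter values below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, alpha, s2_B, s2_y = 100, 10, 2.0, 1.0, 0.25

# Truncated stick-breaking construction of IBP feature probabilities.
nu = rng.beta(alpha, 1.0, size=K)
pi = np.cumprod(nu)                       # P(z_nk = 1) for feature k
Z = (rng.random((N, K)) < pi).astype(float)

B = np.sqrt(s2_B) * rng.standard_normal(K)            # weighting vector B^d
y = Z @ B + np.sqrt(s2_y) * rng.standard_normal(N)    # auxiliary Gaussians y_n^d
x = y                                                 # real-valued data: x^d = Y^d
```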
3.1  Likelihood Functions
Now, we define the set of transformations that map from the Gaussian variables y_n^d to the corresponding observations x_n^d. We consider that each dimension in the table X may contain any of the discrete or continuous variables detailed above, provide a likelihood function for each kind of data and, in turn, also a likelihood function for mixed data.
²For convenience, we capitalize here the notation for the weighting vectors B^d.
Real-valued Data. In this case, we assume that x^d = Y^d in the model in Figure 1 and consider the standard approach when dealing with real-valued observations, which consists of assuming a Gaussian likelihood function. In particular, as in the standard linear-Gaussian IBP [8], we assume that each observation x_n^d is distributed as

p(x_n^d | z_n, B^d) = N(x_n^d | z_n B^d, σ_y^2).
Positive Real-valued Data. In order to obtain positive real-valued observations, i.e., x_n^d ∈ ℝ_+, we apply a transformation over y_n^d that maps from the real numbers to the positive real numbers, i.e.,

x_n^d = f(y_n^d + u_n^d),

where u_n^d is a Gaussian noise variable with variance σ_u^2, and f : ℝ → ℝ_+ is a monotonic differentiable function. By a change of variables, we obtain the likelihood function for positive real-valued observations as

p(x_n^d | z_n, B^d) = (1 / √(2π(σ_y^2 + σ_u^2))) exp( −(f^{−1}(x_n^d) − z_n B^d)^2 / (2(σ_y^2 + σ_u^2)) ) · | d f^{−1}(x_n^d) / dx_n^d |,    (1)

where f^{−1} : ℝ_+ → ℝ is the inverse function of the transformation f(·), i.e., f^{−1}(f(v)) = v. Note that in this case we resort to the Gaussian variable u_n^d in order to obtain x_n^d from y_n^d and, therefore, ψ^d = {u_n^d} and H^d = σ_u^2.
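As an illustration, the following sketch evaluates Eq. 1 for the softplus choice f(v) = log(exp(wv) + 1) used later in the experiments (Section 5); the names and values are placeholders:

```python
import numpy as np

w = 1.0                                   # user hyper-parameter of f
s2 = 1.0 + 0.5                            # sigma_y^2 + sigma_u^2

def f_inv(x):
    return np.log(np.expm1(x)) / w        # inverse of f(v) = log(exp(w*v) + 1)

def df_inv(x):
    return np.exp(x) / (np.expm1(x) * w)  # derivative d f^{-1}(x) / dx

def lik_positive(x, zB):
    """Eq. 1: density of a positive observation x given the mean z_n B^d."""
    return (np.exp(-0.5 * (f_inv(x) - zB) ** 2 / s2)
            / np.sqrt(2 * np.pi * s2)) * df_inv(x)

print(lik_positive(x=2.3, zB=1.0))
```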
Categorical Data. Now we account for categorical observations, i.e., each observation x_n^d can take values in the unordered index set {1, . . . , R_d}. Hence, assuming a multinomial probit model, we can write

x_n^d = argmax_{r ∈ {1,...,R_d}} y_{nr}^d,    (2)

where y_{nr}^d ∼ N(y_{nr}^d | z_n b_r^d, σ_y^2), b_r^d being the K-length weighting vector in which each b_{kr}^d weights the influence of the k-th feature for the observation x_n^d taking value r. Note that, under this likelihood model, since we have a Gaussian auxiliary variable y_{nr}^d and a weighting factor b_{kr}^d for each possible value of the observation r ∈ {1, . . . , R_d}, we need to gather all the weighting factors b_{kr}^d in a K × R_d matrix B^d, and all the Gaussian auxiliary variables y_{nr}^d in an N × R_d matrix Y^d.

Under this observation model, we can write y_{nr}^d = z_n b_r^d + u_{nr}^d, where u_{nr}^d is a Gaussian noise variable with variance σ_y^2, and therefore we can obtain the probability of each element x_n^d taking value r ∈ {1, . . . , R_d} as [6]

p(x_n^d = r | z_n, B^d) = E_{p(u)}[ ∏_{j=1, j≠r}^{R_d} Φ( u + z_n (b_r^d − b_j^d) ) ],    (3)

where the subscript r in b_r^d denotes the r-th column of B^d (r ∈ {1, . . . , R_d}), Φ(·) denotes the cumulative density function of the standard normal distribution and E_{p(u)}[·] denotes the expectation with respect to the distribution p(u) = N(0, σ_y^2).
Ordinal Data. Consider ordinal data, in which each element x_n^d takes values in the ordered index set {1, . . . , R_d}. Then, assuming an ordered probit model, we can write

x_n^d = 1 if y_n^d ≤ θ_1^d;  x_n^d = 2 if θ_1^d < y_n^d ≤ θ_2^d;  . . . ;  x_n^d = R_d if θ_{R_d−1}^d < y_n^d,    (4)

where again y_n^d is Gaussian distributed with mean z_n B^d and variance σ_y^2, and θ_r^d for r ∈ {1, . . . , R_d − 1} are the thresholds that divide the real line into R_d regions. We assume the thresholds θ_r^d are sequentially generated from the truncated Gaussian distribution θ_r^d ∼ N(θ_r^d | 0, σ_θ^2) I(θ_r^d > θ_{r−1}^d), where θ_0^d = −∞ and θ_{R_d}^d = +∞. As opposed to the categorical case, now we have a unique weighting vector B^d and a unique Gaussian variable y_n^d for each observation x_n^d. Hence, the value of x_n^d is determined by the region in which y_n^d falls.

Under the ordered probit model [2], the probability of each element x_n^d taking value r ∈ {1, . . . , R_d} can be written as

p(x_n^d = r | z_n, B^d) = Φ( (θ_r^d − z_n B^d) / σ_y ) − Φ( (θ_{r−1}^d − z_n B^d) / σ_y ).    (5)

Let us remark that, if the d-th dimension of the observation matrix contains ordinal data, the set of auxiliary variables reduces to the Gaussian thresholds ψ^d = {θ_1^d, . . . , θ_{R_d−1}^d} and H^d = σ_θ^2.
Count Data. In count data, each observation x_n^d takes non-negative integer values, i.e., x_n^d ∈ {0, . . . , ∞}. Then, we assume

x_n^d = ⌊f(y_n^d)⌋,    (6)

where ⌊v⌋ returns the floor of v, that is, the largest integer that does not exceed v, and f : ℝ → ℝ_+ is a monotonic differentiable function that maps from the real numbers to the positive real numbers. We can therefore write the likelihood function as

p(x_n^d | z_n, B^d) = Φ( (f^{−1}(x_n^d + 1) − z_n B^d) / σ_y ) − Φ( (f^{−1}(x_n^d) − z_n B^d) / σ_y ),    (7)

where f^{−1} : ℝ_+ → ℝ is the inverse function of the transformation f(·).
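A sketch of the ordinal (Eq. 5) and count (Eq. 7) likelihoods, using scipy's standard normal CDF for Φ and the same softplus transformation as above; all thresholds and parameters are placeholders:

```python
import numpy as np
from scipy.stats import norm

s_y = 0.5                                        # sigma_y
theta = np.array([-np.inf, -0.5, 0.7, np.inf])   # thresholds theta_0^d .. theta_{R_d}^d

def lik_ordinal(r, zB):
    """Eq. 5: P(x_n^d = r) for r in {1, ..., R_d}."""
    return norm.cdf((theta[r] - zB) / s_y) - norm.cdf((theta[r - 1] - zB) / s_y)

w = 1.0
def f_inv(x):                                    # inverse of f(v) = log(exp(w*v) + 1)
    return np.log(np.expm1(x)) / w

def lik_count(x, zB):
    """Eq. 7: P(x_n^d = x) for a non-negative integer x; Phi(f_inv(0)) = Phi(-inf) = 0."""
    hi = norm.cdf((f_inv(x + 1) - zB) / s_y)
    lo = norm.cdf((f_inv(x) - zB) / s_y) if x > 0 else 0.0
    return hi - lo
```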
Figure 1: Generalized IBP for mixed continuous and discrete observations.
4  Inference Algorithm
In this section we describe our algorithm for inferring the latent variables given the observation matrix. Under our model, detailed in Section 3, the probability distribution over the observation matrix is fully characterized by the latent matrices Z and {B^d}_{d=1}^D (as well as the auxiliary variables ψ^d). Hence, if we assume the latent vector z_n for the n-th data point and the weighting factors B^d (and the auxiliary variables ψ^d) to be known, we have a probability distribution over missing observations x_n^d from which we can obtain estimates for x_n^d by sampling from this distribution,³ or by simply taking either its mean, mode or median value. However, this procedure requires the latent matrix Z and the latent weighting factors B^d (and ψ^d) to be known.
We use Markov chain Monte Carlo (MCMC) methods, which have been broadly applied to infer the IBP matrix (see, e.g., [8, 23, 20]). The proposed inference algorithm is summarized in Algorithm 1. This algorithm exploits the information in the available data to learn the similarities among the objects (captured in our model by the latent feature matrix Z), and how these latent features show up in the attributes that describe the objects (captured in our model by B^d). In Algorithm 1, we first need to update the latent matrix Z. Note that, conditioned on {Y^d}_{d=1}^D, both the latent matrix Z and the weighting matrices {B^d}_{d=1}^D are independent of the observation matrix X. Additionally, since {B^d}_{d=1}^D and {Y^d}_{d=1}^D are Gaussian distributed, we can analytically marginalize out the weighting matrices {B^d}_{d=1}^D to obtain p({Y^d}_{d=1}^D | Z). Therefore, to infer the matrix Z, we can apply the collapsed Gibbs sampler, which presents better mixing properties than the uncollapsed Gibbs sampler.
³Note that sampling from this distribution might be computationally expensive. In this case, we can easily obtain samples of x_n^d by exploiting the structure of our model. In particular, we can simply sample the auxiliary Gaussian variables y_n^d given z_n and B^d, and then obtain an estimate for x_n^d by applying the corresponding transformation, detailed in Section 3.1.
Algorithm 1 Inference Algorithm.
Input: X
Initialize: Z and {Y^d}_{d=1}^D
1: for each iteration do
2:    Update Z given {Y^d}_{d=1}^D.
3:    for d = 1, . . . , D do
4:       Sample B^d given Z and Y^d according to (8).
5:       Sample Y^d given X, Z and B^d (as shown in the Supplementary Material).
6:       Sample ψ^d if needed (as shown in the Supplementary Material).
7:    end for
8: end for
Output: Z, {B^d}_{d=1}^D and {ψ^d}_{d=1}^D
In consequence, it is the standard method of choice in the context of the standard linear-Gaussian IBP [8]. However, this algorithm suffers from a high computational cost (its complexity per iteration being cubic in the number of data points N), which is prohibitive when dealing with large databases. In order to overcome this limitation, we resort to the accelerated Gibbs sampler [4] instead. This algorithm presents linear complexity in the number of data points and is detailed in the Supplementary Material.
Second, we need to sample the weighting factors in B^d, which is a K × R_d matrix in the case of categorical attributes, and a K-length column vector otherwise. We denote each column vector in B^d by b_r^d. The posterior over the weighting vectors is given by

p(b_r^d | y_r^d, Z) = N(b_r^d | P^{−1} λ_r^d, P^{−1}),    (8)

where P = Zᵀ Z + (1/σ_B^2) I_K and λ_r^d = Zᵀ y_r^d. Note that the covariance matrix P^{−1} depends neither on the dimension d nor on r, so we only need to invert the K × K matrix P once at each iteration. We describe in the Supplementary Material how to efficiently compute P after changes in the Z matrix by rank-one updates, without the need of computing the matrix product Zᵀ Z.
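A sketch of step 4 of Algorithm 1 (Eq. 8): since the precision matrix P is shared, its Cholesky factor can be computed once per iteration and reused for every dimension d and category r. The routine below is an illustration, not the authors' released implementation:

```python
import numpy as np

def sample_B_columns(Z, Y_d, s2_B, rng):
    """Draw each column b_r^d ~ N(P^{-1} lambda_r^d, P^{-1}), Eq. 8.
    Z: (N, K) binary features; Y_d: (N, R_d) auxiliary Gaussians."""
    K = Z.shape[1]
    P = Z.T @ Z + np.eye(K) / s2_B          # shared precision matrix
    L = np.linalg.cholesky(P)               # P = L L^T, reused for all r
    lam = Z.T @ Y_d                         # (K, R_d), one lambda_r^d per column
    mean = np.linalg.solve(L.T, np.linalg.solve(L, lam))       # P^{-1} lambda
    noise = np.linalg.solve(L.T, rng.standard_normal(lam.shape))
    return mean + noise                      # columns are samples of b_r^d
```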
Once we have updated Z and B^d, we sample each element in Y^d from the distribution N(y_{nr}^d | z_n b^d, σ_y^2) if the observation x_n^d is missing, and from the posterior p(y_{nr}^d | x_n^d, z_n, b^d) specified in the Supplementary Material otherwise. Finally, we sample the auxiliary variables in ψ^d from their posterior distribution (detailed in the Supplementary Material) if necessary. These two latter steps involve, in the worst case, sampling from a doubly truncated univariate normal distribution (see the Supplementary Material for further details), for which we make use of the algorithm in [11].
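For reference, a doubly truncated univariate normal can be sampled by inverse-CDF as below; this simple sketch is numerically fragile in extreme tails, which is why a more robust algorithm such as [11] is preferable:

```python
import numpy as np
from scipy.stats import norm

def trunc_normal(mu, sigma, lo, hi, rng):
    """Draw from N(mu, sigma^2) restricted to (lo, hi) by inverse-CDF."""
    a, b = norm.cdf((lo - mu) / sigma), norm.cdf((hi - mu) / sigma)
    u = rng.uniform(a, b)
    return mu + sigma * norm.ppf(u)

rng = np.random.default_rng(0)
print(trunc_normal(0.0, 1.0, -0.5, 0.7, rng))
```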
5  Experimental evaluation
We now validate the proposed algorithm for table completion on five real databases, which are
summarized in Table 1. The datasets contain different numbers of instances and attributes, which
cover all the discrete and continuous variables described in Section 3. We compare, in terms of
predictive log-likelihood, the following methods for table completion:
? The proposed general table completion approach denoted by GIBP (detailed in Section 3).
? The standard linear-Gaussian IBP [8] denoted by SIBP, treating all the attributes as Gaussian.
? The Bayesian probabilistic matrix factorization approach [15] denoted by BPMF, that also
treats all the attributes in X as Gaussian distributed.
For the GIBP, we consider for the real positive and the count data the following transformation,
that maps from the real numbers to the real positive numbers, f (x) = log(exp(wx) + 1), where
w is a user hyper-parameter. Before running the SIBP and the BPMF methods we normalize each
column in matrix X to have zero-mean and unit-variance. Then, in order to provide estimates for
the missing data, we denormalize the inferred Gaussian variable. Additionally, since both the SIBP
and the BPMF assume continuous observations, when dealing with discrete data, we estimate each
missing value as the closest integer value to the (denormalized) Gaussian variable.
Table 1: Description of datasets. 'R' stands for real-valued variables, 'P' for positive real-valued variables, 'C' for categorical variables, 'O' for ordinal variables and 'N' for count variables.

| Dataset | N | D | Description |
|---|---|---|---|
| Statlog German credit dataset [5] | 1,000 | 20 (10 C + 4 O + 6 N) | Collects information about the credit risks of the applicants. |
| QSAR biodegradation dataset [10] | 1,055 | 41 (2 R + 17 P + 4 C + 18 N) | Contains molecular descriptors of biodegradable and non-biodegradable chemicals. |
| Internet usage survey dataset [1] | 1,006 | 32 (23 C + 8 O + 1 N) | Contains the responses of the participants to a survey related to the usage of internet. |
| Wine quality dataset [3] | 6,497 | 12 (11 P + 1 N) | Contains the results of physicochemical tests realized to different wines. |
| NESARC dataset [13] | 43,000 | 55 C | Contains the responses of the participants to a survey related to personality disorders. |
[Figure 2 plots omitted in this text version: panels (a) Statlog, (b) QSAR biodegradation, (c) Internet usage survey, (d) Wine quality, (e) NESARC database; each panel plots the average test log-likelihood against the percentage of missing data for GIBP, SIBP and BPMF.]

Figure 2: Average test log-likelihood per missing datum. The "whiskers" show one standard deviation from the average test log-likelihood.
In Figure 2, we plot the average predictive log-likelihood per missing value as a function of the
percentage of missing data. Each value in Figure 2 has been obtained by averaging the results in
20 independent sets where the missing values have been randomly chosen. In Figures 2a and 2b,
we cut the plot at 50% because, in these two databases, the discrete attributes present a mode value
that is present for more than 80% of the instances. As a consequence, the SIBP and the BPMF
algorithms assign probability close to one to the mode, which results in an artificial increase in the
average test log-likelihood for larger percentages of missing data. For the BPMF model, we have
used different numbers of latent features (in particular, 10, 20 and 50), although we only show the
best results for each database, specifically, K = 10 for the NESARC and the wine databases, and
K = 50 for the remainder. Both the GIBP and the SIBP have not inferred a number of (binary)
latent features above 25 in any case. Note that in Figure 2e, we only plot the test log-likelihood for
the GIBP and the SIBP because the BPMF provides much lower values. As expected, we observe
in Figure 2 that the average test log-likelihood decreases for the three models when the number of
missing values increases (flat shape of the curves are due to the y-axis scale). In this figure, we also
observe that the proposed general IBP model outperforms the SIBP and the BPMF for four of the
the databases, being the SIBP slightly better for the Internet database. The BPMF model presents
the lowest test-log-likelihood in all the databases.
Now, we analyze the performance of the three models for each kind of discrete and continuous variable. Figure 3 shows the average predictive log-likelihood per missing value for each attribute in the table, i.e., for each dimension of X. In this figure we have grouped the dimensions according to the kind of data that they contain, showing on the x-axis the number of categories for categorical and ordinal data. We observe that the GIBP presents similar performance across all the attributes in the five databases, while for the SIBP and the BPMF models the test log-likelihood falls drastically for some of the attributes, with this effect being worse in the case of the BPMF (which explains its low log-likelihood in Figure 2). This effect is even more evident in Figures 2b and 2d. We also observe, in Figures 2 and 3, that both IBP-based approaches (the GIBP and the SIBP) outperform the BPMF, with the proposed GIBP being the one that performs best across all the databases. We can conclude that, unlike the SIBP and the BPMF, the GIBP provides accurate estimates for the missing data regardless of their discrete or continuous nature.
6 Conclusions
In this paper, we have proposed a table completion approach for heterogeneous databases, based on an IBP with a generalized likelihood that allows for mixed discrete and continuous data. We have then derived an inference algorithm that scales linearly with the number of observations. Finally, our experimental results over five real databases have shown that the proposed approach outperforms, in terms of robustness and accuracy, approaches that treat all the attributes as Gaussian variables.
[Figure 3: plots omitted. Panels: (a) Statlog, (b) QSAR biodegradation, (c) Internet usage survey, (d) Wine quality, (e) NESARC database. Each panel plots the test log-likelihood (y-axis) for GIBP, SIBP and BPMF against the attributes of the database (x-axis).]

Figure 3: Average test log-likelihood per missing datum in each dimension of the data with 50% of missing data. On the x-axis, 'R' stands for real-valued variables, 'P' for positive real-valued variables, 'C' for categorical variables, 'O' for ordinal variables and 'N' for count variables. The number that accompanies the 'C' or 'O' corresponds to the number of categories.
Acknowledgments

Isabel Valera acknowledges the support of Plan Regional-Programas I+D of Comunidad de Madrid (AGES-CM S2010/BMD-2422), and Ministerio de Ciencia e Innovación of Spain (project DEIPRO TEC2009-14504-C02-00 and program Consolider-Ingenio 2010 CSD2008-00010 COMONSENS). Zoubin Ghahramani is supported by EPSRC grant EP/I036575/1 and a Google Focused Research Award.
References

[1] Pew Research Centre. 25th anniversary of the web. Available at: http://www.pewinternet.org/datasets/january-2014-25th-anniversary-of-the-web-omnibus/.
[2] W. Chu and Z. Ghahramani. Gaussian processes for ordinal regression. J. Mach. Learn. Res., 6:1019-1041, December 2005.
[3] P. Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Modeling wine preferences by data mining from physicochemical properties. Decision Support Systems, 47(4):547-553, 2009. Dataset available at: http://archive.ics.uci.edu/ml/datasets.html.
[4] F. Doshi-Velez and Z. Ghahramani. Accelerated sampling for the Indian buffet process. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 273-280, New York, NY, USA, 2009. ACM.
[5] J. Eggermont, J. N. Kok, and W. A. Kosters. Genetic programming for data classification: Partitioning the search space. In Proceedings of the 2004 Symposium on Applied Computing (ACM SAC04), pages 1001-1005. ACM, 2004. Dataset available at: http://archive.ics.uci.edu/ml/datasets.html.
[6] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18, 2006.
[7] P. Gopalan, F. J. R. Ruiz, R. Ranganath, and D. M. Blei. Bayesian nonparametric Poisson factorization for recommendation systems. International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[8] T. L. Griffiths and Z. Ghahramani. The Indian buffet process: an introduction and review. Journal of Machine Learning Research, 12:1185-1224, 2011.
[9] X.-B. Li. A Bayesian approach for estimating and replacing missing categorical data. J. Data and Information Quality, 1(1):3:1-3:11, June 2009.
[10] K. Mansouri, T. Ringsted, D. Ballabio, R. Todeschini, and V. Consonni. Quantitative structure-activity relationship models for ready biodegradability of chemicals. Journal of Chemical Information and Modeling. Dataset available at: http://archive.ics.uci.edu/ml/datasets.html.
[11] C. P. Robert. Simulation of truncated normal variables. Statistics and Computing, 5(2):121-125, 1995.
[12] F. J. R. Ruiz, I. Valera, C. Blanco, and F. Perez-Cruz. Bayesian nonparametric modeling of suicide attempts. Advances in Neural Information Processing Systems, 25:1862-1870, 2012.
[13] F. J. R. Ruiz, I. Valera, C. Blanco, and F. Perez-Cruz. Bayesian nonparametric comorbidity analysis of psychiatric disorders. Journal of Machine Learning Research (to appear). Available at http://arxiv.org/pdf/1401.7620v1.pdf, 2013.
[14] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, 2007.
[15] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 880-887, New York, NY, USA, 2008. ACM.
[16] E. Salazar, M. Cain, E. Darling, S. Mitroff, and L. Carin. Inferring latent structure from mixed real and categorical relational data. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1039-1046, New York, NY, USA, July 2012. Omnipress.
[17] ScienceDaily. Big data, for better or worse: 90% of world's data generated over last two years.
[18] P. Shafto, C. Kemp, V. Mansinghka, and J. B. Tenenbaum. A probabilistic model of cross-categorization. Cognition, 120(1):1-25, 2011.
[19] S. Singh and T. Graepel. Automated probabilistic modelling for relational data. In Proceedings of the ACM Conference on Information and Knowledge Management, CIKM '13, New York, NY, USA, 2013. ACM.
[20] M. Titsias. The infinite gamma-Poisson feature model. Advances in Neural Information Processing Systems, 19, 2007.
[21] A. Todeschini, F. Caron, and M. Chavent. Probabilistic low-rank matrix completion with adaptive spectral regularization algorithms. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 845-853. Curran Associates, Inc., Dec. 2013.
[22] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11, pages 448-456, New York, NY, USA, 2011. ACM.
[23] S. Williamson, C. Wang, K. Heller, and D. Blei. The IBP compound Dirichlet process and its application to focused topic modeling. Proceedings of the 27th Annual International Conference on Machine Learning, 2010.
4,674 | 5,232 |

Dependent nonparametric trees for dynamic hierarchical clustering
Avinava Dubey†, Qirong Ho‡, Sinead Williamson§, Eric P. Xing†
† Machine Learning Department, Carnegie Mellon University
‡ Institute for Infocomm Research, A*STAR
§ McCombs School of Business, University of Texas at Austin
akdubey@cs.cmu.edu, hoqirong@gmail.com
sinead.williamson@mccombs.utexas.edu, epxing@cs.cmu.edu
Abstract
Hierarchical clustering methods offer an intuitive and powerful way to model a
wide variety of data sets. However, the assumption of a fixed hierarchy is often overly restrictive when working with data generated over a period of time:
We expect both the structure of our hierarchy, and the parameters of the clusters, to evolve with time. In this paper, we present a distribution over collections
of time-dependent, infinite-dimensional trees that can be used to model evolving
hierarchies, and present an efficient and scalable algorithm for performing approximate inference in such a model. We demonstrate the efficacy of our model and
inference algorithm on both synthetic data and real-world document corpora.
1 Introduction
Hierarchically structured clustering models offer a natural representation for many forms of data.
For example, we may wish to hierarchically cluster animals, where 'dog' and 'cat' are subcategories
of 'mammal', and 'poodle' and 'dachshund' are subcategories of 'dog'. When modeling scientific
articles, articles about machine learning and programming languages may be subcategories under
computer science. Representing clusters in a tree structure allows us to explicitly capture these
relationships, and allows clusters that are closer in tree-distance to have more similar parameters.
Since hierarchical structures occur commonly, there exists a rich literature on statistical models for
trees. We are interested in nonparametric distributions over trees; that is, distributions over trees
with infinitely many leaves and infinitely many internal nodes. We can model any finite data set
using a finite subset of such a tree, marginalizing over the infinitely many unoccupied branches. The
advantage of such an approach is that we do not have to specify the tree dimensionality in advance,
and can grow our representation in a consistent manner if we observe more data.
In many settings, our data points are associated with a point in time ? for example the date when
a photograph was taken or an article was written. A stationary clustering model is inappropriate in
such a context: The number of clusters may change over time; the relative popularities of clusters
may vary; and the location of each cluster in parameter space may change. As an example, consider
a topic model for scientific articles over the twentieth century. The field of computer science (and therefore topics related to it) did not exist in the first half of the century. The proportion of scientific
articles devoted to genetics has likely increased over the century, and the terminology used in such
articles has changed with the development of new sequencing technology.
Despite this, to the best of our knowledge, there are no nonparametric distributions over time-evolving trees in the literature. There exist a variety of distributions over stationary trees [1, 14, 5, 13, 10], and time-evolving non-hierarchical clustering models [16, 7, 11, 2, 4, 12], but no models that combine time evolution and hierarchical structure. The reason for this is likely to
be practical: Inference in trees is typically very computationally intensive, and adding temporal
variation will, in general, increase the computational requirements. Designing such a model must,
therefore, proceed hand in hand with developing efficient and scalable inference schemes.
[Figure 1: diagrams omitted. Panels: (a) Infinite tree, (b) Changing popularity, (c) Cluster/topic drift.]

Figure 1: Our dependent tree-structured stick-breaking process can model trees of arbitrary size and shape, and captures popularity and parameter changes through time. (a) Model any number of nodes (clusters, topics), of any branching factor, and up to any depth. (b) Nodes can change in probability mass, or new nodes can be created. (c) Node parameters can evolve over time.
In this paper, we define a distribution over temporally varying trees with infinitely many nodes that
captures this form of variation, and describe how this model can cluster both real-valued observations and text data. Further, we propose a scalable approximate inference scheme that can be run in
parallel, and demonstrate its efficacy on synthetic data where ground-truth clustering is available, as
well as demonstrate qualitative and quantitative performance on three text corpora.
2 Background
The model proposed in this paper is a dependent nonparametric process with tree-structured
marginals. A dependent nonparametric process [12] is a distribution over collections of random
measures indexed by values in some covariate space, such that at each covariate value, the marginal
distribution is given by some known nonparametric distribution. For example, a dependent Dirichlet
process [12, 7, 11] is a distribution over collections of probability measures with Dirichlet processdistributed marginals; a dependent Pitman-Yor process [15] is a distribution over collections of
probability measures with Pitman-Yor process-distributed marginals; a dependent Indian buffet
process [17] is a distribution over collections of matrices with Indian buffet process-distributed
marginals; etc. If our covariate space is time, such distributions can be used to construct nonparametric, time-varying models.
There are two main methods of inducing dependency: Allowing the sizes of the atoms composing
the measure to vary across covariate space, and allowing the parameter values associated with the
atoms to vary across covariate space. In the context of a time-dependent topic model, these methods
correspond to allowing the popularity of a topic to change over time, and allowing the words used
to express a topic to change over time (topic drift). Our proposed model incorporates both forms
of dependency. In the supplement, we discuss some specific dependent nonparametric models that
share properties with our model.
The key difference between our proposed model and existing dependent nonparametric models is
that ours has tree-distributed marginals. There are a number of options for the marginal distribution
over trees, as we discuss in the supplement. We choose a distribution over infinite-dimensional trees
known as the tree-structured stick breaking process [TSSBP, 1], described in Section 2.1.
2.1 The tree-structured stick-breaking process

The tree-structured stick-breaking process (TSSBP) is a distribution over trees with infinitely many leaves and infinitely many internal nodes. Each node $\epsilon$ within the tree is associated with a mass $\pi_\epsilon$ such that $\sum_\epsilon \pi_\epsilon = 1$, and each data point is assigned to a node in the tree according to $p(z_n = \epsilon) = \pi_\epsilon$, where $z_n$ is the node assignment of the $n$th data point. The TSSBP is unique among the current toolbox of random infinite-dimensional trees in that data can be assigned to an internal node, rather than a leaf, of the tree. This property is often desirable; for example in a topic modeling context, a document could be assigned to a general topic such as 'science' that lives toward the root of the tree, or to a more specific topic such as 'genetics' that is a descendant of the science topic.

The TSSBP can be represented using two interleaving stick-breaking processes: one (parametrized by $\nu$) that determines the size of a node, and another (parametrized by $\psi$) that determines the branching probabilities. Index the root node as node $\emptyset$ and let $\pi_\emptyset$ be the mass assigned to it. Index its (countably infinite) child nodes as node 1, node 2, $\ldots$ and let $\pi_1, \pi_2, \ldots$ be the masses assigned to them; index the child nodes of node 1 as nodes $1 \cdot 1, 1 \cdot 2, \ldots$ and let $\pi_{1 \cdot 1}, \pi_{1 \cdot 2}, \ldots$ be the masses assigned to them; etc. Then we can sample the infinite-dimensional tree as:

$$\nu_\epsilon \sim \mathrm{Beta}(1, \alpha(|\epsilon|)), \quad \psi_\epsilon \sim \mathrm{Beta}(1, \gamma), \quad \pi_\emptyset = \nu_\emptyset, \quad \phi_\emptyset = 1,$$
$$\phi_{\epsilon i} = \psi_{\epsilon i} \prod_{j=1}^{i-1} (1 - \psi_{\epsilon j}), \quad \pi_\epsilon = \nu_\epsilon \phi_\epsilon \prod_{\epsilon' \prec \epsilon} (1 - \nu_{\epsilon'}) \phi_{\epsilon'}, \qquad (1)$$

where $|\epsilon|$ indicates the depth of node $\epsilon$, and $\epsilon' \prec \epsilon$ indicates that $\epsilon'$ is an ancestor node of $\epsilon$. We refer to the resulting infinite-dimensional weighted tree as $\Pi = ((\pi_\epsilon), (\psi_{\epsilon i}))$.
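For intuition, here is a sketch that samples the node masses of Equation 1 for a tree truncated to a finite depth and branching factor (the real process is infinite, so the truncated masses sum to less than one); the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tssbp(depth, width, alpha0=1.0, lam=0.5, gamma=0.5):
    # Sample pi_eps for all nodes down to `depth`, with `width` children
    # per node. Nodes are keyed by tuples; () is the root.
    pi = {}

    def recurse(eps, phi, outer):
        # nu_eps ~ Beta(1, alpha(|eps|)), with alpha(d) = lam^d * alpha0.
        nu = rng.beta(1.0, alpha0 * lam ** len(eps))
        pi[eps] = outer * phi * nu           # mass retained at this node
        passed = outer * phi * (1.0 - nu)    # mass passed to descendants
        if len(eps) < depth:
            remaining = 1.0
            for i in range(width):
                psi = rng.beta(1.0, gamma)   # branch stick-breaking
                recurse(eps + (i,), psi * remaining, passed)
                remaining *= 1.0 - psi

    recurse((), 1.0, 1.0)
    return pi

masses = sample_tssbp(depth=3, width=4)
print(sum(masses.values()))  # < 1: the rest lies in the truncated tail
```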
3 Dependent tree-structured stick-breaking processes
We now describe a dependent tree-structured stick-breaking process where both atom sizes and their
locations vary with time. We first describe a distribution over atom sizes, and then use this distribution over collections of trees as the basis for time-varying clustering models and topic models.
3.1 A distribution over time-varying trees

We start with the basic TSSBP model [1] (described in Section 2.1 and the left of Figure 1), and modify it so that the latent variables $\nu_\epsilon$, $\psi_\epsilon$ and $\pi_\epsilon$ are replaced with sequences $\nu_\epsilon^{(t)}$, $\psi_\epsilon^{(t)}$ and $\pi_\epsilon^{(t)}$ indexed by discrete time $t \in \mathcal{T}$ (the middle of Figure 1). The forms of $\nu_\epsilon^{(t)}$ and $\psi_\epsilon^{(t)}$ are chosen so that the marginal distribution over the $\pi_\epsilon^{(t)}$ is as described in Equation 1.

Let $N^{(t)}$ be the number of observations at time $t$, and let $z_n^{(t)}$ be the node allocation of the $n$th observation at time $t$. For each node $\epsilon$ at time $t$, let $X_\epsilon^{(t)} = \sum_{n=1}^{N^{(t)}} I(z_n^{(t)} = \epsilon)$ be the number of observations assigned to node $\epsilon$ at time $t$, and $Y_\epsilon^{(t)} = \sum_{n=1}^{N^{(t)}} I(\epsilon \prec z_n^{(t)})$ be the number of observations assigned to descendants of node $\epsilon$. Introduce a 'window' parameter $h \in \mathbb{N}$. We can then define a prior predictive distribution over the tree at time $t$ as

$$\nu_\epsilon^{(t)} \sim \mathrm{Beta}\Big(1 + \textstyle\sum_{t'=t-h}^{t-1} X_\epsilon^{(t')},\; \alpha(|\epsilon|) + \textstyle\sum_{t'=t-h}^{t-1} Y_\epsilon^{(t')}\Big) \qquad (2)$$
$$\psi_{\epsilon i}^{(t)} \sim \mathrm{Beta}\Big(1 + \textstyle\sum_{t'=t-h}^{t-1} \big(X_{\epsilon i}^{(t')} + Y_{\epsilon i}^{(t')}\big),\; \gamma + \textstyle\sum_{j>i} \sum_{t'=t-h}^{t-1} \big(X_{\epsilon j}^{(t')} + Y_{\epsilon j}^{(t')}\big)\Big).$$

Following [1], we let $\alpha(d) = \lambda^d \alpha_0$, for $\alpha_0 > 0$ and $\lambda \in (0, 1)$. This defines a sequence of trees $\big(\Pi^{(t)} = ((\pi_\epsilon^{(t)}), (\psi_{\epsilon i}^{(t)})),\ t \in \mathcal{T}\big)$.

Intuitively, the prior distribution over a tree at time $t$ is given by the posterior distribution of the (stationary) TSSBP, conditioned on the observations in some window $t-h, \ldots, t-1$. The following theorem gives the equivalence of the dynamic TSSBP (dTSSBP) and the TSSBP.

Theorem 1. The marginal posterior distribution of the dTSSBP, at time $t$, follows a TSSBP.

The proof is a straightforward extension of that for the generalized Pólya urn dependent Dirichlet process [7] and is given in the supplementary material. The above theorem implies that Equation 2 defines a dependent tree-structured stick-breaking process.
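To make Equation 2 concrete, the following sketch turns windowed counts into Beta parameters for one node and its siblings (pure Python; the count containers are our own assumed layout, not the paper's):

```python
def nu_params(X, Y, t, h, depth, alpha0=1.0, lam=0.5):
    # Beta parameters for nu_eps^(t). X[t'] and Y[t'] hold the counts for
    # a single node eps at epoch t'; the window covers t-h, ..., t-1.
    lo = max(0, t - h)
    a = 1.0 + sum(X[tp] for tp in range(lo, t))
    b = alpha0 * lam ** depth + sum(Y[tp] for tp in range(lo, t))
    return a, b

def psi_params(Xs, Ys, i, t, h, gamma=0.5):
    # Beta parameters for psi_{eps i}^(t). Xs[j][t'] and Ys[j][t'] are the
    # counts for the j-th sibling (0-indexed) under the same parent.
    lo = max(0, t - h)
    a = 1.0 + sum(Xs[i][tp] + Ys[i][tp] for tp in range(lo, t))
    b = gamma + sum(Xs[j][tp] + Ys[j][tp]
                    for j in range(i + 1, len(Xs)) for tp in range(lo, t))
    return a, b
```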
We note that an alternative choice for inducing dependency would be to down-weight the contribution of observations for previous time-steps. For example, we could exponentially decay the
contributions of observations from previous time-steps, inducing a similar form of dependency as
that found in the recurrent Chinese restaurant process [2]. However, unlike the method described in
Equation 2, such an approach would not yield stationary TSSBP-distributed marginals.
3.2 Dependent hierarchical clustering
The construction above gives a distribution over infinite-dimensional trees, which in turn have a probability distribution over their nodes. In order to use this distribution in a hierarchical Bayesian model for data, we must associate each node with a parameter value $\theta_\epsilon^{(t)}$. We let $\theta^{(t)}$ denote the set of all parameters $\theta_\epsilon^{(t)}$ associated with a tree $\Pi^{(t)}$. We wish to capture two properties: 1) within a tree $\Pi^{(t)}$, nodes have similar values to their parents; and 2) between trees $\Pi^{(t)}$ and $\Pi^{(t+1)}$, corresponding parameters $\theta_\epsilon^{(t)}$ and $\theta_\epsilon^{(t+1)}$ have similar values. This form of variation is shown in the right of Figure 1. In this subsection, we present two models that exhibit these properties: one appropriate for real-valued data, and one appropriate for multinomial data.
3.2.1 A time-varying, tree-structured mixture of Gaussians
An infinite mixture of Gaussians is a flexible choice for density estimation and clustering real-valued
observations. Here, we suggest a time-varying hierarchical clustering model that is similar to the
generalized Gaussian model of [1]. The model assumes Gaussian-distributed data at each node, and
allows the means of clusters to evolve in an auto-regressive model, as below:
$$\mu_\emptyset^{(t)} \mid \mu_\emptyset^{(t-1)} \sim \mathcal{N}\big(\mu_\emptyset^{(t-1)},\, \sigma_0 \sigma_1^{a} I\big), \qquad \mu_{\epsilon i}^{(t)} \mid \mu_\epsilon^{(t)}, \mu_{\epsilon i}^{(t-1)} \sim \mathcal{N}(m,\, s^2 I), \qquad (3)$$

where

$$s^2 = \Big(\frac{1}{\sigma_0 \sigma_1^{|\epsilon i|}} + \frac{1}{\sigma_0 \sigma_1^{|\epsilon i|+a}}\Big)^{-1}, \qquad m = s^2 \Big(\frac{\mu_\epsilon^{(t)}}{\sigma_0 \sigma_1^{|\epsilon i|}} + \frac{\eta\, \mu_{\epsilon i}^{(t-1)}}{\sigma_0 \sigma_1^{|\epsilon i|+a}}\Big),$$

with $\sigma_0 > 0$, $\sigma_1 \in (0, 1)$, $\eta \in [0, 1)$, and $a \geq 1$. Due to the self-conjugacy of the Gaussian distribution, this corresponds to a Markov network with factor potentials given by unnormalized Gaussian distributions: up to a normalizing constant, the factor potential associated with the link between $\mu_\epsilon^{(t-1)}$ and $\mu_\epsilon^{(t)}$ is Gaussian with variance $\sigma_0 \sigma_1^{|\epsilon|+a}$, and the factor potential associated with the link between $\mu_\epsilon^{(t)}$ and $\mu_{\epsilon i}^{(t)}$ is Gaussian with variance $\sigma_0 \sigma_1^{|\epsilon i|}$.

For a single time point, this allows for fractal-like behavior, where the distance between child and parent decreases down the tree. This behavior, which is not used in the generalized Gaussian model of [1], makes it easier to identify the root node, and guarantees that the marginal distribution over the location of the leaf nodes has finite variance. The $a$ parameter enforces the idea that the amount of variation between $\mu_\epsilon^{(t)}$ and $\mu_\epsilon^{(t+1)}$ is smaller than that between $\mu_\epsilon^{(t)}$ and $\mu_{\epsilon i}^{(t)}$, while $\eta$ ensures the variance of node parameters remains finite across time. We chose spherical Gaussian distributions to ensure that structural variation is captured by the tree rather than by node parameters.
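A sketch of the conditional update for a child node, assuming Eq. 3 as reconstructed above (the precisions of the two Gaussian factor potentials simply add); parameter values are placeholders:

```python
import numpy as np

def child_gaussian_params(mu_parent_t, mu_self_prev, child_depth,
                          sigma0=1.0, sigma1=0.5, eta=0.9, a=1):
    # Conditional mean m and variance s^2 of mu_{eps i}^(t), given its
    # parent at time t and its own value at time t-1.
    prec_tree = 1.0 / (sigma0 * sigma1 ** child_depth)        # parent-child link
    prec_time = 1.0 / (sigma0 * sigma1 ** (child_depth + a))  # temporal link
    s2 = 1.0 / (prec_tree + prec_time)
    m = s2 * (prec_tree * mu_parent_t + prec_time * eta * mu_self_prev)
    return m, s2
```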
3.3 A time-varying model for hierarchically clustering documents
Given a dictionary of V words, a document can be represented using a V-dimensional term frequency vector, which corresponds to a location on the surface of the (V − 1)-dimensional unit sphere. The von Mises-Fisher distribution, with mean direction $\mu$ and concentration parameter $\tau$, provides a distribution on this space. A mixture of von Mises-Fisher distributions can, therefore, be used to cluster documents [3, 8]. Following the terminology of topic modeling [6], the mean direction $\mu_k$ associated with the $k$th cluster can be interpreted as the topic associated with that cluster.

We construct a time-dependent hierarchical clustering model appropriate for documents by associating nodes of our dependent nonparametric tree with topics. Let $x_n^{(t)}$ be the vector associated with the $n$th document at time $t$. We assign a mean parameter $\mu_\epsilon^{(t)}$ to each node in each tree $\Pi^{(t)}$ as

$$\mu_\emptyset^{(t)} \mid \mu_\emptyset^{(t-1)} \sim \mathrm{vMF}(\rho_\emptyset^{(t)}, \tau_\emptyset^{(t)}), \qquad \mu_{\epsilon i}^{(t)} \mid \mu_\epsilon^{(t)}, \mu_{\epsilon i}^{(t-1)} \sim \mathrm{vMF}(\rho_{\epsilon i}^{(t)}, \tau_{\epsilon i}^{(t)}), \qquad (4)$$

where

$$\rho_\emptyset^{(t)} = \frac{\kappa_0 \mu_{-1}^{(t)} + \kappa_0 \kappa_1^{a} \mu_\emptyset^{(t-1)}}{\tau_\emptyset^{(t)}}, \qquad \tau_\emptyset^{(t)} = \kappa_0 \sqrt{1 + \kappa_1^{2a} + 2\kappa_1^{a} \big(\mu_{-1}^{(t)} \cdot \mu_\emptyset^{(t-1)}\big)},$$
$$\rho_{\epsilon i}^{(t)} = \frac{\kappa_0 \kappa_1^{|\epsilon i|} \mu_\epsilon^{(t)} + \kappa_0 \kappa_1^{|\epsilon i|+a} \mu_{\epsilon i}^{(t-1)}}{\tau_{\epsilon i}^{(t)}}, \qquad \tau_{\epsilon i}^{(t)} = \kappa_0 \kappa_1^{|\epsilon i|} \sqrt{1 + \kappa_1^{2a} + 2\kappa_1^{a} \big(\mu_\epsilon^{(t)} \cdot \mu_{\epsilon i}^{(t-1)}\big)},$$

with $\kappa_0 > 0$, $\kappa_1 > 1$, and $\mu_{-1}^{(t)}$ a probability vector of the same dimension as the $\mu_\epsilon^{(t)}$ that can be interpreted as the parent of the root node at time $t$.¹ This yields similar dependency behavior to that described in Section 3.2.1.

Conditioned on $\Pi^{(t)}$ and $\mu^{(t)} = (\mu_\epsilon^{(t)})$, we sample each document $x_n^{(t)}$ according to $z_n^{(t)} \sim \mathrm{Discrete}(\pi^{(t)})$ and $x_n^{(t)} \sim \mathrm{vMF}(\mu_{z_n^{(t)}}^{(t)}, \beta)$. This is a hierarchical extension of the temporal vMF mixture proposed by [8].
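Under the reconstruction of Eq. 4 above, the vMF combination amounts to summing the two scaled mean directions and reading off the length and direction of the result; a minimal sketch, with arbitrary parameter values:

```python
import numpy as np

def child_vmf_params(mu_parent_t, mu_self_prev, child_depth,
                     kappa0=100.0, kappa1=1.5, a=1):
    # rho (unit mean direction) and tau (concentration) for mu_{eps i}^(t).
    # Both inputs are unit vectors; summing the scaled directions gives the
    # vMF natural parameter, whose norm recovers tau exactly.
    natural = (kappa0 * kappa1 ** child_depth * mu_parent_t
               + kappa0 * kappa1 ** (child_depth + a) * mu_self_prev)
    tau = np.linalg.norm(natural)
    return natural / tau, tau
```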
4 Online Learning
In many time-evolving applications, we observe data points in an online setting. We are typically
interested in obtaining predictions for future data points, or characterizing the clustering structure of
current data, rather than improving predictive performance on historic data. We therefore propose
a sequential online learning algorithm, where at each time t we infer the parameter settings for the tree $\Pi^{(t)}$ conditioned on the previous trees, which we do not re-learn. This allows us to focus our
computational efforts on the most recent (and likely relevant) data. This has the added advantage of
reducing the computational demands of the algorithm, as we do not incorporate a backwards pass
through the data, and are only ever considering a fraction of the data at a time.
In developing an inference scheme, there is always a trade-off between estimate quality and computational requirements. MCMC samplers are often the 'gold standard' of inference techniques, because they have the true posterior distribution as the stationary distribution of their Markov chain. However, they can be very slow, particularly in complex models. Estimating the parameter setting that maximizes the data likelihood is much cheaper, but cannot capture the full posterior.
¹ In our experiments, we set $\mu_{-1}^{(t)}$ to be the average over all data points at time $t$. This ensures that the root node is close to the centroid of the data, rather than the periphery.
In order to develop an inference algorithm that is parallelizable, runs in reasonable time, but still obtains good predictive performance, we combine Gibbs sampling steps for learning the tree parameters ($\Pi^{(t)}$) and the topic indicators ($z_n^{(t)}$) with a MAP method for estimating the location parameters ($\mu_\epsilon^{(t)}$). The resulting algorithm has the following desirable properties:

1. The priors for $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ only depend on $\{z_n^{(0)}\}, \ldots, \{z_n^{(t-1)}\}$, whose sufficient statistics $\{X_\epsilon^{(0)}, Y_\epsilon^{(0)}\}, \ldots, \{X_\epsilon^{(t-1)}, Y_\epsilon^{(t-1)}\}$ can be updated in amortized constant time.

2. The posteriors for $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ are conditionally independent given $\{z_n^{(1)}\}, \ldots, \{z_n^{(t)}\}$. Hence we can Gibbs sample $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ in parallel given the cluster assignments $\{z_n^{(1)}\}, \ldots, \{z_n^{(t)}\}$ (or more precisely, their sufficient statistics $\{X_\epsilon, Y_\epsilon\}$). Similarly, we can Gibbs sample the cluster/topic assignments $\{z_n^{(t)}\}$ in parallel given the parameters $\{\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}, \mu_\epsilon^{(t)}\}$ and the data, as well as infer the MAP estimate of $\{\mu_\epsilon^{(t)}\}$ in parallel given the data and the cluster/topic assignments. Because of the online assumption, we do not consider evidence from times $u > t$.
Sampling $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$. Due to the conjugacy between the beta and binomial distributions, we can easily Gibbs sample the stick-breaking parameters:

$$\nu_\epsilon^{(t)} \mid X_\epsilon, Y_\epsilon \sim \mathrm{Beta}\Big(1 + \textstyle\sum_{t'=t-h}^{t} X_\epsilon^{(t')},\; \alpha(|\epsilon|) + \textstyle\sum_{t'=t-h}^{t} Y_\epsilon^{(t')}\Big)$$
$$\psi_{\epsilon i}^{(t)} \mid X_{\epsilon i}, Y_{\epsilon i} \sim \mathrm{Beta}\Big(1 + \textstyle\sum_{t'=t-h}^{t} \big(X_{\epsilon i}^{(t')} + Y_{\epsilon i}^{(t')}\big),\; \gamma + \textstyle\sum_{j>i} \sum_{t'=t-h}^{t} \big(X_{\epsilon j}^{(t')} + Y_{\epsilon j}^{(t')}\big)\Big).$$

The $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ distributions for each node are conditionally independent given the counts $X, Y$, and so the sampler can be parallelized. We only explicitly store $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}, \mu_\epsilon^{(t)}$ for nodes with nonzero counts, i.e. $\sum_{t'=t-h}^{t} X_\epsilon^{(t')} + Y_\epsilon^{(t')} > 0$.
Sampling $z_n^{(t)}$. Conditioned on the $\nu^{(t)}$ and $\psi^{(t)}$, the distribution over the cluster assignments $z_n^{(t)}$ is just given by the TSSBP. We therefore use the slice sampling method described in [1] to Gibbs sample $z_n^{(t)} \mid \{\nu_\epsilon^{(t)}\}, \{\psi_\epsilon^{(t)}\}, x_n^{(t)}, \beta$. Since the cluster assignments are conditionally independent given the tree, this step can be performed in parallel.
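For intuition only, here is an assignment-sampling sketch that enumerates a finite set of active nodes rather than slice sampling as in [1] (the vMF normalizer is shared across nodes because β is shared, so it drops out of the ratio):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_assignments(log_pi, node_means, docs, beta):
    # log_pi: (K,) log masses of active nodes; node_means: (K, V) vMF means;
    # docs: (N, V) unit-norm documents. Draws z_n for all n at once.
    logp = log_pi[None, :] + beta * docs @ node_means.T   # (N, K)
    logp -= logp.max(axis=1, keepdims=True)               # stabilize
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    u = rng.random((docs.shape[0], 1))
    return (u > np.cumsum(p, axis=1)).sum(axis=1)         # categorical draws
```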
Learning $\mu$. It is possible to Gibbs sample the cluster parameters $\mu$; however, in the document clustering case described in Section 3.3, this requires far more time than sampling all other parameters. To improve the speed of our algorithm, we instead use maximum a posteriori (MAP) estimates for $\mu$, obtained using a parallel coordinate ascent algorithm. Notably, conditioned on the trees at time $t-1$ and $t+1$, the $\mu_\epsilon^{(t)}$ for odd-numbered tree depths $|\epsilon|$ are conditionally independent given the $\mu_{\epsilon'}^{(t)}$ at even-numbered tree depths $|\epsilon'|$, and vice versa. Hence, our algorithm alternates between parallel optimization of the odd-depth $\mu_\epsilon^{(t)}$ and parallel optimization of the even-depth $\mu_\epsilon^{(t)}$.

In general, the conditional distribution of a cluster parameter $\mu_\epsilon^{(t)}$ depends on the values of its predecessor $\mu_\epsilon^{(t-1)}$, its successor $\mu_\epsilon^{(t+1)}$, its parent at time $t$, and its children at time $t$. In some cases, not all of these values will be available, for example if a node was unoccupied at previous time steps. In this case, the distribution depends on the full history of the parent node. For computational reasons, and because we do not wish to store the full history, we approximate the distribution as being dependent only on observed members of the node's Markov blanket.
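The alternating schedule can be written down directly; a sketch in which `update_node` stands in for the (model-specific) conditional MAP update of one node given its observed Markov blanket:

```python
def map_update_tree(nodes, update_node, n_sweeps=5):
    # nodes: iterable of tuple-keyed tree nodes at the current epoch.
    # Odd-depth nodes are conditionally independent given the even-depth
    # ones (and vice versa), so each half-sweep could be farmed out to
    # parallel workers; here we simply loop for clarity.
    for _ in range(n_sweeps):
        for parity in (1, 0):            # odd depths first, then even
            for eps in nodes:
                if len(eps) % 2 == parity:
                    update_node(eps)
```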
5 Experimental evaluation
We evaluate the performance of our model on both synthetic and real-world data sets. Evaluation on synthetic data sets allows us to verify that our inference algorithm can recover the 'true' evolving hierarchical structure underlying our data. Evaluation on real-world data allows us to evaluate whether our modeling assumptions are useful in practice.
5.1 Synthetic data
We manually created a time-evolving tree, as shown in Figure 2, with Gaussian-distributed data
at each node. This synthetic time-evolving tree features temporal variation in node probabilities,
temporal variation in node parameters, and addition and deletion of nodes. Using the Gaussian
model described in Equation 3, we inferred the structure of the tree at each time period as described
in Section 4. Figure 3 shows the recovered tree structure, demonstrating the ability of our inference
algorithm to recover the expected evolving hierarchical structure. Note that it accurately captures
evolution in node probabilities and location, and the addition and deletion of new nodes.
Figure 2: Ground truth tree, evolving over three time steps
Figure 3: Recovered tree structure, over three consecutive time periods. Each color indicates a node in the
tree and each arrow indicates a branch connecting parent to child; nodes are consistently colored across time.
Table 1: Test set average log-likelihood on three datasets.

              dTSSBP                     o-TSSBP                    T-TSSBP
Depth limit   4            3             4            3             4           3
TWITTER       522 ± 4.35   249 ± 0.98    414 ± 3.31   199 ± 2.19    335 ± 54.8  182 ± 24.1
SOU           2708 ± 32.0  1320 ± 33.6   1455 ± 44.5  583 ± 16.4    1687 ± 329  1089 ± 143
PNAS          4562 ± 116   3217 ± 195    2672 ± 357   1163 ± 196    4333 ± 647  2962 ± 685

              dDP           o-DP          T-DP
TWITTER       204 ± 8.82    136 ± 0.42    112 ± 10.9
SOU           834 ± 51.2    633 ± 18.8    890 ± 70.5
PNAS          2374 ± 51.7   1061 ± 10.5   2174 ± 134
5.2 Real-world data

In Section 3.3, we described how the dependent TSSBP can be combined with a von Mises-Fisher likelihood to cluster documents. To evaluate this model, we looked at three corpora:

- TWITTER: 673,102 tweets containing hashtags relevant to the NFL, collected over 18 weeks in 2011 and containing 2,636 unique words (after stopwording). We grouped the tweets into 9 two-week epochs.
- PNAS: 79,800 paper titles from the Proceedings of the National Academy of Sciences between 1915 and 2005, containing 36,901 unique words (after stopwording). We grouped the titles into 10 ten-year epochs.
- STATE OF THE UNION (SOU): Presidential SoU addresses from 1790 through 2002, containing 56,352 sentences and 21,505 unique words (after stopwording). We grouped the sentences into 21 ten-year epochs.

In each case, documents were represented using their vectors of term frequencies.
Our hypothesis is that the topical structure of language is hierarchically structured and time-evolving, and that a model that captures these properties will achieve better performance than models that ignore hierarchical structure and/or temporal evolution. To test these hypotheses, we compare our dependent tree-structured stick-breaking process (dTSSBP) against several online nonparametric models for document clustering:

1. Multiple tree-structured stick-breaking processes (T-TSSBP): We modeled the entire corpus using the stationary TSSBP model, with each node modeled using an independent von Mises-Fisher distribution. Each time period is modeled with a separate tree, using a similar implementation to our time-dependent TSSBP.

2. 'Online' tree-structured stick-breaking processes (o-TSSBP): This simulates online learning of a single, stationary tree over the entire corpus. We used our dTSSBP implementation with an infinite window h = ∞, and once a node is created at time t, we prevent its vMF mean $\mu_\epsilon^{(t)}$ from changing in future time points.

3. Dependent Dirichlet process (dDP): We modeled the entire corpus using an h-order Markov generalized Pólya urn DDP [7]. This model was implemented by modifying our dTSSBP code to have a single level. Node parameters were evolved as $\mu_k^{(t)} \sim \mathrm{vMF}(\mu_k^{(t-1)}, \tau)$.

4. Multiple Dirichlet processes (T-DP): We modeled the entire corpus using DP mixtures of von Mises-Fisher distributions, one DP per time period. Each node was modeled using an independent von Mises-Fisher distribution. We used our own implementation.
[Figure 4: tree diagram omitted. It traces Chemistry topics (1915-1974) and Immunology topics (1965-1994), with each node annotated with its document count and top words.]

Figure 4: PNAS dataset: Birth, growth, and death of tree-structured topics in our dTSSBP model. This illustration captures some trends in American scientific research throughout the 20th century, by focusing on the evolution of parent and child topics in two major scientific areas: Chemistry and Immunology (the rest of the tree has been omitted for clarity). At each epoch, we show the number of documents assigned to each topic, as well as its most popular words (according to the vMF mean $\mu$).
5. 'Online' Dirichlet process (o-DP): This simulates online learning of a single DP over the entire corpus. We used our dDP implementation with an infinite window h = ∞, and once a cluster is instantiated at time t, we prevent its vMF mean $\mu^{(t)}$ from changing in future time points.
Evaluation scheme: We divide each dataset into two parts: the first 50% and the last 50% of time points. We use the first 50% to tune model parameters and select a good random restart (by training on 90% and testing on 10% of the data at each time point), and then use the last 50% to evaluate the performance of the best parameters/restart (again, by training on 90% and testing on 10% of the data). When training the 3 TSSBP-based models, we grid-searched κ0 ∈ {1, 10, 100, 1000, 10000}, and fixed κ1 = 1, a = 0 for simplicity. Each value of κ0 was run 5 times to get different random restarts, and we took the best κ0-restart pair for evaluation on the last 50% of time points. For the 3 DP-based models, there is no κ0 parameter, so we simply took 5 random restarts and used the best one for evaluation. For all TSSBP- and DP-based models, we repeated the evaluation phase 5 times to get error bars. Every dTSSBP trial completed in under 20 minutes on a single processor core, while we observed moderate (though not perfectly linear) speedups with 2-4 processors.
Parameter settings: For all models, we estimated the vMF concentration parameter of each node/cluster from the data. For the TSSBP-based models, we used stick-breaking parameters γ = 0.5 and α(d) = 0.5^d, and set $\mu_{-1}^{(t)}$ to the average document term frequency vector at time t. In order to keep running times reasonable, we limit the TSSBP-based models to a maximum depth of either 3 or 4 (we report results for both).² For the DP-based models, we used a Dirichlet process concentration parameter of 1. The dDP's inter-epoch vMF concentration parameter was set to τ = 0.001.

² One justification is that shallow hierarchies are easier to interpret than deep ones; see [5, 9].

Results: Table 1 shows the average log (unnormalized) likelihoods on the test sets (from the last 50% of time points). The tree-based models uniformly out-perform the non-hierarchical models, while the max-depth-4 tree models outperform the max-depth-3 ones. On all 3 datasets, the max-depth-4 dTSSBP uniformly outperforms all models, confirming our initial hypothesis.
5.3 Qualitative results
In addition to high-quality quantitative results, we find that the time-dependent tree model gives
good qualitative performance. Figure 4 shows two time-evolving sub-trees obtained from the PNAS
data set. The top level shows a sub-tree concerned with Chemistry; the bottom level shows a sub-tree concerned with Immunology.

[Figure 5: tree diagram omitted. It traces State of the Union topics, including the Indian Wars (1790-1840), the Mexican War (1840-1850), the Civil War (1860-1870) and the Cold War (1960-2000), with each node annotated with its document count and top words.]

Figure 5: State of the Union dataset: Birth, growth, and death of tree-structured topics in our dTSSBP model. This illustration captures some key events in American history. At each epoch, we show the number of documents assigned to each topic, as well as its most popular words (according to the vMF mean $\mu$).

Our dynamic tree model discovers closely-related topics and groups
them under a sub-tree, and creates, grows and destroys individual sub-topics as needed to fit the data.
For instance, our model captures the sudden surge in Immunology-related research from 1975-1984, which happened right after the structure of the antibody molecule was identified a few years prior. In the Chemistry topic, the study of mechanical properties of materials (pressure, acoustic properties, specific heat, etc.) is a constant presence throughout the century. The study of electrical properties of materials starts off with a topic (in purple) that seems devoted to Physical Chemistry. However, following the development of Quantum Mechanics in the 1930s, this line of research became more closely aligned with Physics than Chemistry, and it disappears from the sub-tree. In its wake, we see the growth of a topic more concerned with electrolytes, solutions and salts, which remained within the sphere of Chemistry.
Figure 5 shows time-evolving sub-trees obtained from the State of the Union dataset. We see a sub-tree tracking the development of the Cold War. The parent node contains general terms relevant to the Cold War; starting from the 1970s, a child node (shown in purple) contains terms relevant to nuclear arms control, in light of the Strategic Arms Limitation Talks of that decade. The same decade also sees the birth of a child node focused on Asia (shown in cyan), contemporaneous with President Richard Nixon's historic visit to China in 1972. In addition to the Cold War, we also see topics corresponding to events such as the Mexican War, the Civil War and the Indian Wars, demonstrating our model's ability to detect events in a timeline.
6 Discussion
In this paper, we have proposed a flexible nonparametric model for dynamically-evolving, hierarchically structured data. This model can be applied to multiple types of data using appropriate
choices of likelihood; we present an application in document clustering that combines high-quality
quantitative performance with intuitively interpretable results. One of the significant challenges in
constructing nonparametric dependent tree models is the need for efficient inference algorithms. We
make judicious use of approximations and combine MCMC and MAP approximation techniques to
develop an inference algorithm that can be applied in an online setting, while being parallelizable.
Acknowledgements: This research was supported by NSF Big data IIS1447676, DARPA XDATA
FA87501220324 and NIH GWAS R01GM087694.
References

[1] R. Adams, Z. Ghahramani, and M. Jordan. Tree-structured stick breaking for hierarchical data. In Advances in Neural Information Processing Systems, 2010.
[2] A. Ahmed and E. Xing. Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, 2008.
[3] A. Banerjee, I. Dhillon, J. Ghosh, and S. Sra. Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Machine Learning Research, 6:1345-1382, 2005.
[4] D. Blei and P. Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12:2461-2488, 2011.
[5] D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems, 2004.
[6] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[7] F. Caron, M. Davy, and A. Doucet. Generalized Pólya urn for time-varying Dirichlet processes. In UAI, 2007.
[8] S. Gopal and Y. Yang. Von Mises-Fisher clustering models. In International Conference on Machine Learning, 2014.
[9] Q. Ho, J. Eisenstein, and E. Xing. Document hierarchies from text and links. In Proceedings of the 21st International Conference on World Wide Web, pages 739-748. ACM, 2012.
[10] J. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27-43, 1982.
[11] D. Lin, E. Grimson, and J. Fisher. Construction of dependent Dirichlet processes based on Poisson processes. In Advances in Neural Information Processing Systems, 2010.
[12] S. N. MacEachern. Dependent nonparametric processes. In Bayesian Statistical Science, 1999.
[13] R. M. Neal. Density modeling and clustering using Dirichlet diffusion trees. Bayesian Statistics, 7:619-629, 2003.
[14] A. Rodriguez, D. Dunson, and A. Gelfand. The nested Dirichlet process. Journal of the American Statistical Association, 103(483), 2008.
[15] E. Sudderth and M. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In Advances in Neural Information Processing Systems, 2008.
[16] X. Wang and A. McCallum. Topics over time: a non-Markov continuous-time model of topical trends. In Knowledge Discovery and Data Mining, 2006.
[17] S. Williamson, P. Orbanz, and Z. Ghahramani. Dependent Indian buffet processes. In Artificial Intelligence and Statistics, 2010.
4,675 | 5,233 | Sparse Bayesian structure learning with dependent
relevance determination prior
Anqi Wu1
Mijung Park2
Oluwasanmi Koyejo3
Jonathan W. Pillow4
1,4
Princeton Neuroscience Institute, Princeton University,
{anqiw, pillow}@princeton.edu
2
The Gatsby Unit, University College London, mijung@gatsby.ucl.ac.uk
3
Department of Psychology, Stanford University, sanmi@stanford.edu
Abstract
In many problem settings, parameter vectors are not merely sparse, but dependent in such a way that non-zero coefficients tend to cluster together. We refer to this form of dependency as "region sparsity". Classical sparse regression
methods, such as the lasso and automatic relevance determination (ARD), model
parameters as independent a priori, and therefore do not exploit such dependencies. Here we introduce a hierarchical model for smooth, region-sparse weight
vectors and tensors in a linear regression setting. Our approach represents a hierarchical extension of the relevance determination framework, where we add a
transformed Gaussian process to model the dependencies between the prior variances of regression weights. We combine this with a structured model of the prior
variances of Fourier coefficients, which eliminates unnecessary high frequencies.
The resulting prior encourages weights to be region-sparse in two different bases
simultaneously. We develop efficient approximate inference methods and show
substantial improvements over comparable methods (e.g., group lasso and smooth
RVM) for both simulated and real datasets from brain imaging.
1 Introduction
Recent work in statistics has focused on high-dimensional inference problems where the number of
parameters p equals or exceeds the number of samples n. Although ill-posed in general, such problems are made tractable when the parameters have special structure, such as sparsity in a particular
basis. A large literature has provided theoretical guarantees about the solutions to sparse regression
problems and introduced a suite of practical methods for solving them efficiently [1-7].
The Bayesian interpretation of standard "shrinkage" based methods for sparse regression problems involves maximum a posteriori (MAP) inference under a sparse, independent prior on the regression coefficients [8-15]. Under such priors, the posterior has high concentration near the axes, so
the posterior maximum is at zero for many weights unless it is pulled strongly away by the likelihood. However, these independent priors neglect a statistical feature of many real-world regression
problems, which is that non-zero weights tend to arise in clusters, and are therefore not independent
a priori. In many settings, regression weights have an explicit topographic relationship, as when
they index regressors in time or space (e.g., time series regression, or spatio-temporal neural receptive field regression). In such settings, nearby weights exhibit dependencies that are not captured by
independent priors, which results in sub-optimal performance.
Recent literature has explored a variety of techniques for improving sparse inference methods by
incorporating different types of prior dependencies, which we will review here briefly. The smooth
relevance vector machine (s-RVM) extends the RVM to incorporate a smoothness prior defined
1
in a kernel space, so that weights are smooth as well as sparse in a particular basis [16]. The
group lasso captures the tendency for groups of coefficients to remain in or drop out of a model
in a coordinated manner by using an l1 penalty on the l2 norms of pre-defined groups of coefficients
[17]. A method described in [18] uses a multivariate Laplace distribution to impose spatio-temporal
coupling between prior variances of regression coefficients, which imposes group sparsity while
leaving coefficients marginally uncorrelated. The literature includes many related methods [19-24],
although most require a priori knowledge of the dependency structure, which may be unavailable in
many applications of interest.
Here we introduce a novel, flexible method for capturing dependencies in sparse regression problems, which we call dependent relevance determination (DRD). Our approach uses a Gaussian
process to model dependencies between latent variables governing the prior variance of regression weights. (See [25], which independently proposed a similar idea.) We simultaneously impose
smoothness by using a structured model of the prior variance of the weights? Fourier coefficients.
The resulting model captures sparse, local structure in two different bases simultaneously, yielding
estimates that are sparse as well as smooth. Our method extends previous work on automatic locality determination (ALD) [26] and Bayesian structure learning (BSL) [27], both of which described
hierarchical models for capturing sparsity, locality, and smoothness. Unlike these methods, DRD
can tractably recover region-sparse estimates with multiple regions of non-zero coefficients, without
pre-defining the number of regions. We argue that DRD can substantially improve structure recovery
and predictive performance in real-world applications.
This paper is organized as follows: Sec. 2 describes the basic sparse regression problem; Sec. 3 introduces the DRD model; Sec. 4 and Sec. 5 describe the approximate methods we use for inference;
In Sec. 6, we show applications to simulated data and neuroimaging data.
2 Problem setup
2.1 Observation model
We consider a scalar response $y_i \in \mathbb{R}$ linked to an input vector $x_i \in \mathbb{R}^p$ via the linear model:
$$y_i = x_i^\top w + \epsilon_i, \quad \text{for } i = 1, 2, \ldots, n, \qquad (1)$$
with observation noise $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$. The regression (linear weight) vector $w \in \mathbb{R}^p$ is the quantity of interest. We denote the design matrix by $X \in \mathbb{R}^{n \times p}$, where each row of $X$ is the $i$-th input vector $x_i^\top$, and the observation vector by $y = [y_1, \ldots, y_n]^\top \in \mathbb{R}^n$. The likelihood can be written:
$$y \mid X, w, \sigma^2 \sim \mathcal{N}(y \mid Xw, \sigma^2 I). \qquad (2)$$
2.2 Prior on regression vector
We impose the zero-mean multivariate normal prior on w:
$$w \mid \theta \sim \mathcal{N}(0, C(\theta)), \qquad (3)$$
where the prior covariance matrix $C(\theta)$ is a function of hyperparameters $\theta$. One can specify $C(\theta)$ based on prior knowledge on the regression vector, e.g. sparsity [28-30], smoothness [16, 31], or both [26]. Ridge regression assumes $C(\theta) = \lambda^{-1} I$, where $\lambda$ is a scalar precision. Automatic relevance determination (ARD) uses a diagonal prior covariance matrix with a distinct hyperparameter $\lambda_i$ for each element of the diagonal, thus $C_{ii} = \lambda_i^{-1}$. Automatic smoothness determination (ASD) assumes a non-diagonal prior covariance, given by a Gaussian kernel, $C_{ij} = \exp(-\rho - \Delta_{ij}/(2\delta^2))$, where $\Delta_{ij}$ is the squared distance between the filter coefficients $w_i$ and $w_j$ in pixel space and $\theta = \{\rho, \delta^2\}$. Automatic locality determination (ALD) parametrizes the local region with a Gaussian form, so that the prior variance of each filter coefficient is determined by its Mahalanobis distance (in coordinate space) from some mean location $\nu$ under a symmetric positive semi-definite matrix $\Psi$. The diagonal prior covariance matrix is given by $C_{ii} = \exp(-\tfrac{1}{2}(\chi_i - \nu)^\top \Psi^{-1} (\chi_i - \nu))$, where $\chi_i$ is the space-time location (i.e., filter coordinates) of the $i$-th filter coefficient $w_i$ and $\theta = \{\nu, \Psi\}$.
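For concreteness, the following is a minimal sketch of the covariance constructions reviewed above (not code from any of the cited papers; the 1-D coordinate grid chi and all function names are our own illustrative assumptions):

import numpy as np

def ridge_cov(p, alpha):
    # Ridge: C = alpha^{-1} I for a scalar precision alpha
    return np.eye(p) / alpha

def ard_cov(alphas):
    # ARD: diagonal C with a separate precision alpha_i per coefficient
    return np.diag(1.0 / np.asarray(alphas))

def asd_cov(chi, rho, delta2):
    # ASD: C_ij = exp(-rho - (chi_i - chi_j)^2 / (2 delta2)) on a 1-D grid chi
    d2 = (chi[:, None] - chi[None, :]) ** 2
    return np.exp(-rho - d2 / (2.0 * delta2))

def ald_cov(chi, nu, psi):
    # ALD (1-D case): diagonal C with C_ii = exp(-0.5 (chi_i - nu)^2 / psi)
    return np.diag(np.exp(-0.5 * (chi - nu) ** 2 / psi))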
3 Dependent relevance determination (DRD) priors
We formulate the prior covariances to capture region-dependent sparsity in the regression vector as follows.

Sparsity-inducing covariance
We first parameterize the prior covariance to capture region sparsity in w:
$$C_s = \mathrm{diag}[\exp(u)], \qquad (4)$$
where the parameters are $u \in \mathbb{R}^p$. We impose a Gaussian process (GP) hyperprior on u:
$$u \sim \mathcal{N}(b\mathbf{1}, K). \qquad (5)$$
The GP hyperprior is controlled by the mean parameter $b \in \mathbb{R}$ and the squared exponential kernel parameters, an overall scale $\rho \in \mathbb{R}$ and a length scale $l \in \mathbb{R}$. We denote the hyperparameters by $\theta_s = \{b, \rho, l\}$. We refer to the prior distribution associated with the covariance $C_s$ as the dependent relevance determination (DRD) prior.
Note that this hyperprior induces dependencies between the ARD precisions; that is, the prior variance changes slowly between neighboring coefficients. If the $i$-th coefficient of u is large, then the $(i+1)$-th and $(i-1)$-th coefficients are probably large as well.

Smoothness-inducing covariance
We then formulate the smoothness-inducing covariance in the frequency domain. Smoothness is captured by a low-pass filter that passes only the lower frequencies. Therefore, we define a zero-mean Gaussian prior over the Fourier-transformed weights w using a diagonal covariance matrix $C_f$ with diagonal
$$C_{f,ii} = \exp\Big(-\frac{\omega_i^2}{2\gamma^2}\Big), \qquad (6)$$
where $\omega_i$ is the $i$-th location of the regression weights w in the frequency domain and $\gamma^2$ is the Gaussian variance. We denote the hyperparameters by $\theta_f = \gamma^2$. This formulation encourages neighboring weights to have similar levels of Fourier power.
Similar to automatic locality determination in frequency coordinates (ALDf) [26], this way of formulating the covariance requires taking the discrete Fourier transform of the input vectors to construct the prior in the frequency domain. This incurs significant computation and memory costs, especially when p is large. To avoid this expense, we abandon the frequency-only version $C_f$ and instead combine it with $C_s$ to form $C_{sf}$, inducing both sparsity and smoothness, as follows.

Smoothness- and region-sparsity-inducing covariance
Finally, to capture both region sparsity and smoothness in w, we combine $C_s$ and $C_f$ as
$$C_{sf} = C_s^{1/2} B^\top C_f B\, C_s^{1/2}, \qquad (7)$$
where B is the Fourier transformation matrix, which can be huge when p is large; the implementation exploits the speed of the FFT to apply B implicitly. This formulation implies that the sparse regions captured by $C_s$ are pruned out, while the variances of the remaining entries in the weights are correlated by $C_f$. We refer to the prior distribution associated with the covariance $C_{sf}$ as the smooth dependent relevance determination (sDRD) prior.
Unlike $C_s$, the covariance $C_{sf}$ is no longer diagonal. To reduce computational complexity and storage requirements, we only store those values of the full $C_{sf}$ that correspond to the non-zero portions of the diagonals of $C_s$ and $C_f$.
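The following is a minimal sketch (our own notation and names, under the symbol assumptions above) of how $C_s$ and the smoothed $C_{sf}$ of eq. (7) can be formed; at this small scale we build the DFT matrix B explicitly, whereas a real implementation would apply the FFT implicitly as noted above:

import numpy as np

def sample_u(p, b, rho, length, seed=0):
    # u ~ N(b 1, K) with a squared-exponential kernel K on the weight locations
    rng = np.random.default_rng(seed)
    t = np.arange(p)
    K = rho * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / length ** 2)
    return rng.multivariate_normal(b * np.ones(p), K + 1e-8 * np.eye(p))

def drd_cov(u):
    return np.diag(np.exp(u))  # sparsity-inducing C_s, eq. (4)

def sdrd_cov(u, gamma2):
    p = len(u)
    B = np.fft.fft(np.eye(p)) / np.sqrt(p)              # unitary DFT matrix
    omega = np.fft.fftfreq(p) * p                       # frequency locations
    Cf = np.diag(np.exp(-omega ** 2 / (2.0 * gamma2)))  # low-pass C_f, eq. (6)
    Cs_half = np.diag(np.exp(0.5 * u))
    # eq. (7); we use the conjugate transpose so the product is Hermitian,
    # and keep the real part (imaginary entries vanish up to rounding)
    return (Cs_half @ B.conj().T @ Cf @ B @ Cs_half).real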
Figure 1: Generative model for locally smooth and globally sparse Bayesian structure learning. The $i$-th response $y_i$ is linked to an input vector $x_i$ and a weight vector $w$ for each $i$. The weight vector $w$ is governed by $u$ and $\theta_f$. The hyper-prior $p(u|\theta_s)$ imposes correlated sparsity in $w$, and the hyperparameters $\theta_f$ impose smoothness in $w$.
4 Posterior inference for w
First, we denote the overall hyperparameter set by $\theta = \{\sigma^2, \theta_s, \theta_f\} = \{\sigma^2, b, \rho, l, \gamma^2\}$. We compute the maximum likelihood estimate for $\theta$ (denoted by $\hat{\theta}$) and compute the conditional MAP estimate for the weights w given $\hat{\theta}$ (available in closed form), which is the empirical Bayes procedure equipped with a hyper-prior. Our goal is to infer w. The posterior distribution over w is obtained by
$$p(w|X, y) = \iint p(w, u, \theta|X, y)\, du\, d\theta, \qquad (8)$$
which is analytically intractable. Instead, we approximate the marginal posterior distribution with the conditional distribution given the MAP estimate of u, denoted by $\mu_u$, and the maximum likelihood estimates of $\sigma^2, \theta_s, \theta_f$, denoted by $\hat{\sigma}^2, \hat{\theta}_s, \hat{\theta}_f$:
$$p(w|X, y) \approx p(w|X, y, \mu_u, \hat{\sigma}^2, \hat{\theta}_s, \hat{\theta}_f). \qquad (9)$$
The approximate posterior over w is multivariate normal with mean and covariance given by
$$p(w|X, y, \mu_u, \hat{\sigma}^2, \hat{\theta}_s, \hat{\theta}_f) = \mathcal{N}(\mu_w, \Sigma_w), \qquad (10)$$
$$\Sigma_w = \Big(\tfrac{1}{\hat{\sigma}^2} X^\top X + C^{-1}_{\mu_u, \hat{\theta}_s, \hat{\theta}_f}\Big)^{-1}, \qquad (11)$$
$$\mu_w = \tfrac{1}{\hat{\sigma}^2} \Sigma_w X^\top y. \qquad (12)$$
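As a concrete illustration, the conditional posterior of eqs. (10)-(12) is a standard Gaussian linear-model computation; below is a minimal sketch (our own names, not the authors' code) given a prior covariance C (e.g. from the sDRD sketch above) and a noise variance sigma2:

import numpy as np

def posterior_w(X, y, C, sigma2):
    # Sigma_w = (X^T X / sigma2 + C^{-1})^{-1}   (eq. 11)
    # mu_w    = Sigma_w X^T y / sigma2           (eq. 12)
    p = X.shape[1]
    C_inv = np.linalg.inv(C + 1e-8 * np.eye(p))  # small jitter for numerical stability
    Sigma_w = np.linalg.inv(X.T @ X / sigma2 + C_inv)
    mu_w = Sigma_w @ (X.T @ y) / sigma2
    return mu_w, Sigma_w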
5 Inference for hyperparameters
The MAP inference of w derived in the previous section depends on the values of $\hat{\theta} = \{\hat{\sigma}^2, \hat{\theta}_s, \hat{\theta}_f\}$. To estimate $\hat{\theta}$, we maximize the marginal likelihood of the evidence:
$$\hat{\theta} = \arg\max_{\theta} \log p(y|X, \theta), \qquad (13)$$
where
$$p(y|X, \theta) = \iint p(y|X, w, \sigma^2)\, p(w|u, \theta_f)\, p(u|\theta_s)\, dw\, du. \qquad (14)$$
Unfortunately, computing the double integral is intractable. In the following, we use an approximation method based on the Laplace approximation to compute the integral approximately.

Laplace approximation to posterior over u
To approximate the marginal likelihood, we can rewrite Bayes' rule to express the marginal likelihood as the likelihood times the prior divided by the posterior,
$$p(y|X, \theta) = \frac{p(y|X, u)\, p(u|\theta)}{p(u|y, X, \theta)}. \qquad (15)$$
Laplace's method allows us to approximate $p(u|y, X, \theta)$, the posterior over the latent u given the data $\{X, y\}$ and hyperparameters $\theta$, using a Gaussian centered at the mode of the distribution, with inverse covariance given by the Hessian of the negative log-likelihood. Let $\mu_u = \arg\max_u p(u|y, X, \theta)$ and $\Sigma_u = -\big(\tfrac{\partial^2}{\partial u\, \partial u^\top} \log p(u|y, X, \theta)\big)^{-1}$ denote the mean and covariance of this Gaussian, respectively.
Figure 2: Comparison of estimators for the 1D simulated example. First column: true filter weight; maximum likelihood (linear regression) estimate; empirical Bayesian ridge regression (L2-penalized) estimate. Second column: ARD estimate with distinct, independent prior covariance hyperparameters; lasso regression with L1 regularization; group lasso with group size 5. Third column: ALD methods in the space-time domain, frequency domain, and the combination of both, respectively. Fourth column: the DRD method in the space-time domain only; its smooth version sDRD, imposing both sparsity (space-time) and smoothness (frequency); and smooth RVM initialized with the elastic net estimate.
Although the right-hand side can be evaluated at any value of u, a common approach is to use the mode $u = \mu_u$, since this is where the Laplace approximation is most accurate. This leads to the following expression for the log marginal likelihood:
$$\log p(y|X, \theta) \approx \log p(y|X, \mu_u) + \log p(\mu_u|\theta) + \tfrac{1}{2} \log |2\pi \Sigma_u|. \qquad (16)$$
Then, by optimizing $\log p(y|X, \theta)$ with respect to $\theta$, we obtain $\hat{\theta}$ given a fixed $\mu_u$, denoted $\hat{\theta}_{\mu_u}$. Following an iterative optimization procedure, we obtain the updating rule $\mu_u^t = \arg\max_u p(u|y, X, \hat{\theta}_{\mu_u^{t-1}})$ at the $t$-th iteration. The algorithm stops when u and $\theta$ converge. Further details of the formulation and derivation are given in the appendix.
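To make the evidence update concrete, here is a minimal sketch (our own names, not the authors' code) that evaluates the Laplace approximation of eq. (16), given the log likelihood and log prior at the mode and the Hessian H of the negative log posterior there, so that $\Sigma_u = H^{-1}$:

import numpy as np

def laplace_evidence(log_lik_at_mode, log_prior_at_mode, H):
    # log p(y|X,theta) ~= log p(y|X,mu_u) + log p(mu_u|theta) + 0.5 log|2 pi Sigma_u|
    # with Sigma_u = H^{-1}, so log|2 pi Sigma_u| = d log(2 pi) - log|H|.
    sign, logdet_H = np.linalg.slogdet(H)
    d = H.shape[0]
    return log_lik_at_mode + log_prior_at_mode + 0.5 * (d * np.log(2.0 * np.pi) - logdet_H)

In the iterative scheme above, one would alternate maximizing this quantity over $\theta$ (e.g. by gradient or grid search) with recomputing the mode $\mu_u$ under the new $\theta$, stopping when both converge.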
6 Experiments and Results
6.1 One-Dimensional Simulated Data
Beginning with simulated data, we compare the performance of various regression estimators. One-dimensional data are generated from the generative model with d = 200 dimensions. First, to construct the Gaussian process, a covariance kernel matrix K is built from a squared exponential kernel with the spatial locations of the regression weights as inputs, and a scalar b is set as the mean function to determine the scale of the prior covariance. Given the Gaussian process, we generate a multivariate vector u and take its exponential to obtain the diagonal of the prior covariance $C_s$ in the space-time domain. To induce smoothness, eq. (7) is used to obtain the covariance $C_{sf}$. A weight vector w is then sampled from a Gaussian distribution with zero mean and covariance $C_{sf}$. Finally, we obtain the response y given stimulus x and w, plus Gaussian noise. In our case, the noise should be large enough that the data and response do not impose a strong likelihood over the prior knowledge; thus the prior largely dominates the estimate. Three local regions are constructed (one positive, one negative, and one half-positive/half-negative), with sufficient zeros to keep the separate bumps clearly apart. The top-left subfigure of Figure 2 shows the underlying weight vector w.
Traditional methods like maximum likelihood, without any prior, are significantly overwhelmed by the large noise in the data. Weak priors such as ridge, ARD, and lasso fit the true weight better, with
Figure 3: Estimated filter weights and prior covariances. The upper row shows the true filter (dotted black) and estimated ones (red); the bottom row shows the underlying prior covariance matrix.
different levels of sparsity imposed, but are still not sparse enough and not smooth at all. Group lasso enforces stronger sparsity than lasso by assuming block sparsity, making the result locally smoother. ALD-based methods perform better than the traditional ones at explicitly identifying one big bump. ALDs is restricted by its assumption of a single unimodal Gaussian and therefore finds one dominating local region. ALDf focuses on localities in the frequency domain, which makes the estimate smoother, but no spatial local regions are discovered. ALDsf combines the effects of ALDs and ALDf, and thus possesses smoothness, but again only one region is found. The smooth relevance vector machine (sRVM) smooths the curve by incorporating a flexible noise-dependent smoothness prior into the RVM, but cannot exploit the data likelihood effectively. Our DRD can impose distinct local sparsity via the Gaussian process prior, and sDRD can additionally induce smoothness by bounding the frequencies. For all baseline models, we perform model selection via cross-validation over a wide range of the parameter space, guaranteeing fair comparisons.
To further illustrate the benefits and principles of DRD, we show the covariances estimated by ARD, ALDsf and sDRD in Figure 3. ARD can detect multiple localities, since its priors are purely independent scalars that are easily influenced by data with a strong likelihood, but at the cost of losing dependency and smoothness. ALDsf can only detect one locality, due to its deterministic Gaussian form, when the likelihood is not sufficiently strong; with Fourier components over the prior, however, it exhibits smoothness. sDRD captures multiple local sparse regions while also imposing smoothness: the underlying Gaussian process allows multiple non-zero regions in the prior covariance, yielding multiple local sparsities for the weight tensor, and smoothness is introduced by a Gaussian-type function controlling the frequency bandwidth and direction.
In addition, we examine the convergence properties of various estimators as a function of the amount
of collected data and give the average relative errors of each method in Figure 4. Responses are
simulated from the same filter as above with large Gaussian white noise, which weakens the data likelihood and thus guarantees a significant effect of the prior over the likelihood. The results show that the sDRD estimate achieves the smallest MSE (mean squared error), regardless of the number of training
samples. The MSE, mentioned here and in the following paragraphs, refers to the error compared
with the underlying w. The test error, which will be mentioned in later context, refers to the error
compared with true y. The left plot in Figure 4 shows that other methods require at least 1-2 times
more data than sDRD to achieve the same error rate. The right figure shows the ratio of the MSE for
each estimate to the MSE of the sDRD estimate, showing that the next best method (ALDsf) exhibits an error nearly twice that of sDRD.
6.2 Two-Dimensional Simulated Data
To better illustrate the performance of DRD and lay the groundwork for the real-data experiment, we present a 2-dimensional synthetic experiment. The data are generated to match characteristics of real fMRI data, as outlined in the next section. Following a generation procedure similar to the 1-dimensional experiment, a 2-dimensional w is generated with properties analogous to the regression weights in fMRI data; the analogy is based on reasonable speculation and knowledge accumulated from repeated trials and experiments. Two comparative studies are conducted to investigate the influence of sample size on the recovery accuracy of w and on predictive ability, both with dimension 1600 (the same as the fMRI data). To demonstrate structural sparsity recovery, we compare our DRD method with ARD, lasso, elastic net (elnet), and group lasso (glasso).
Figure 4: Convergence of error rates on simulated data with varying training size (Left) and the
relative error (MSE ratio) for sDRD (Right)
Figure 5: Test error for each method when n = 215 (Left) and n = 800 (Right) for 2D simulated
data.
The sample size n varies in {215, 800}. The results are shown in Fig. 5 and Fig. 6. When n = 215, only DRD is able to recover an approximate estimate of the true w with a small level of noise, while also giving the lowest predictive error. Group lasso performs slightly better than ARD, lasso and elnet, but presents only a weakly distinct block-wise estimate compared with lasso and elnet. Lasso and elnet perform similarly and impose stronger sparsity than ARD, indicating that ARD fails to impose strong sparsity in this synthetic case, due to its complete independence across dimensions, when data are insufficient and noisy. When n = 800, DRD still gives the best prediction. Group lasso falls behind, since its block-wise penalty captures group information but misses the subtleties when finer details matter. ARD moves into second place because, when the data likelihood is strong enough, the posterior of w is not greatly influenced by the noise but follows the likelihood and the prior; additionally, since ARD's prior is more flexible and independent than those of lasso and elnet, its posterior approximates the underlying w more finely.
6.3 fMRI Data
We analyzed functional MRI data from the Human Connectome Project1 collected from 215 healthy adult participants on a relational reasoning task. We used contrast images for the comparison of relational reasoning and matching tasks. Data were processed using the HCP minimal preprocessing pipelines [32], down-sampled to 63 x 76 x 63 voxels using the flirt applyXfm tool [33], then tailored further down to 40 x 76 x 40 by deleting zero-signal regions outside the brain. We analyzed 215 samples, each an average over Z-slices 37 to 39 of the 3D structure, based on recommendations by domain experts. As the dependent variable in the regression, we selected the number of correct responses on the Penn Matrix Test, a measure of fluid intelligence that should be related to relational reasoning performance.
In each run, we randomly split the fMRI data into five sets for five-fold cross-validation and averaged the test errors across the five folds. Hyperparameters were chosen by five-fold cross-validation within the training set, and the optimal hyperparameter set was used for computing test performance. Fig. 7 shows the regions of positive (red) and negative (blue) support for the regression weights obtained using the different sparse regression methods. The rightmost panel quantifies performance using mean squared error on held-out test data. Both the predictive performance and the estimated patterns are similar to the n = 215 results in the 2D synthetic experiment. ARD returns a rather noisy estimate due to complete independence and a weak likelihood. The elastic net estimate improves slightly over lasso and is significantly better than ARD, indicating that lasso-type regularizers impose stronger sparsity than ARD in this case. Group lasso is slightly better
1 http://www.humanconnectomeproject.org/.
Figure 6: Surface plot of estimated w from each method using 2-dimensional simulated data when
n = 215.
Figure 7: Positive (red) and negative (blue) supports of the estimated weights from each method
using real fMRI data and the corresponding test errors.
because of its block-wise regularization, but more noisy blocks appear, hurting predictive ability. DRD reveals strong sparsity as well as clustered local regions, and it also has the smallest test error, indicating the best predictive ability. Given that local group information most likely gathers around a few pixels in fMRI data, inducing smoothness is less valuable here; this is why sDRD does not distinctly outperform DRD, and why we omit the smoothness-imposing comparative experiment for the fMRI data. In addition, we also tested the StructOMP [24] method on both the 2D simulated data and the fMRI data, but it did not show satisfactory estimation and predictive ability on 2D data with our data's intrinsic properties, so we omit it from the comparisons in this study.
7 Conclusion
We proposed DRD, a hierarchical model for smooth and region-sparse weight tensors, which uses a
Gaussian process to model spatial dependencies in prior variances, an extension of the relevance
determination framework. To impose smoothness, we also employed a structured model of the
prior variances of Fourier coefficients, which allows for pruning of high frequencies. Due to the
intractability of marginal likelihood integration, we developed an efficient approximate inference
method based on Laplace approximation, and showed substantial improvements over comparable
methods on both simulated and fMRI real datasets. Our method yielded more interpretable weights
and indeed discovered multiple sparse regions that were not detected by other methods. We have
shown that DRD can gracefully incorporate structured dependencies to recover smooth, regionsparse weights without any specification of groups or regions, and believe it will be useful for other
kinds of high-dimensional datasets from biology and neuroscience.
Acknowledgments
This work was supported by the McKnight Foundation (JP), NSF CAREER Award IIS-1150186
(JP), NIMH grant MH099611 (JP) and the Gatsby Charitable Foundation (MP).
References
[1] R. Tibshirani. Journal of the Royal Statistical Society, Series B, pages 267-288, 1996.
[2] H. Lee, A. Battle, R. Raina, and A. Ng. In NIPS, pages 801-808, 2006.
[3] H. Zou and T. Hastie. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320, 2005.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics, 32(2):407-499, 2004.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[6] G. Yuan, K. Chang, C. Hsieh, and C. Lin. JMLR, 11:3183-3234, 2010.
[7] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. Optimization for Machine Learning, pages 19-53, 2011.
[8] R. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[9] M. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211-244, 2001.
[10] D. MacKay. Bayesian non-linear modeling for the prediction competition. In Maximum Entropy and Bayesian Methods, pages 221-234. Springer, 1996.
[11] T. Mitchell and J. Beauchamp. Bayesian variable selection in linear regression. JASA, 83(404):1023-1032, 1988.
[12] E. George and R. McCulloch. Variable selection via Gibbs sampling. JASA, 88(423):881-889, 1993.
[13] C. Carvalho, N. Polson, and J. Scott. Handling sparsity via the horseshoe. In International Conference on Artificial Intelligence and Statistics, pages 73-80, 2009.
[14] C. Hans. Bayesian lasso regression. Biometrika, 96(4):835-845, 2009.
[15] B. Anirban, P. Debdeep, P. Natesh, and D. David. Bayesian shrinkage. December 2012.
[16] A. Schmolck. Smooth Relevance Vector Machines. PhD thesis, University of Exeter, 2008.
[17] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67, 2006.
[18] M. Van Gerven, B. Cseke, F. De Lange, and T. Heskes. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage, 50(1):150-161, 2010.
[19] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. arXiv preprint arXiv:1001.0736, 2010.
[20] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433-440. ACM, 2009.
[21] H. Liu, L. Wasserman, and J. Lafferty. Nonparametric regression and classification with joint sparsity constraints. In NIPS, pages 969-976, 2009.
[22] R. Jenatton, J. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. JMLR, 12:2777-2824, 2011.
[23] S. Kim and E. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009.
[24] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. JMLR, 12:3371-3412, 2011.
[25] B. Engelhardt and R. Adams. Bayesian structured sparsity from Gaussian fields. arXiv preprint arXiv:1407.2235, 2014.
[26] M. Park and J. Pillow. Receptive field inference with localized priors. PLoS Computational Biology, 7(10):e1002219, 2011.
[27] M. Park, O. Koyejo, J. Ghosh, R. Poldrack, and J. Pillow. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pages 489-497, 2013.
[28] M. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211-244, 2001.
[29] A. Tipping and A. Faul. Analysis of sparse Bayesian learning. NIPS, 14:383-389, 2002.
[30] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In NIPS, 2007.
[31] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, pages 317-324, 2003.
[32] M. Glasser, S. Sotiropoulos, A. Wilson, T. Coalson, B. Fischl, J. Andersson, J. Xu, S. Jbabdi, M. Webster, J. Polimeni, et al. NeuroImage, 2013.
[33] N.M. Alpert, D. Berdichevsky, Z. Levin, E.D. Morris, and A.J. Fischman. Improved methods for image registration. NeuroImage, 3(1):10-18, 1996.
4,676 | 5,234 | Mondrian Forests: Efficient Online Random Forests
Balaji Lakshminarayanan
Gatsby Unit
University College London
Daniel M. Roy
Department of Engineering
University of Cambridge
Yee Whye Teh
Department of Statistics
University of Oxford
Abstract
Ensembles of randomized decision trees, usually referred to as random forests,
are widely used for classification and regression tasks in machine learning and
statistics. Random forests achieve competitive predictive performance and are
computationally efficient to train and test, making them excellent candidates for
real-world prediction tasks. The most popular random forest variants (such as
Breiman's random forest and extremely randomized trees) operate on batches
of training data. Online methods are now in greater demand. Existing online
random forests, however, require more training data than their batch counterpart
to achieve comparable predictive performance. In this work, we use Mondrian
processes (Roy and Teh, 2009) to construct ensembles of random decision trees
we call Mondrian forests. Mondrian forests can be grown in an incremental/online
fashion and remarkably, the distribution of online Mondrian forests is the same as
that of batch Mondrian forests. Mondrian forests achieve competitive predictive
performance comparable with existing online random forests and periodically retrained batch random forests, while being more than an order of magnitude faster,
thus representing a better computation vs accuracy tradeoff.
1 Introduction
Despite being introduced over a decade ago, random forests remain one of the most popular machine
learning tools due in part to their accuracy, scalability, and robustness in real-world classification
tasks [3]. (We refer to [6] for an excellent survey of random forests.) In this paper, we introduce a
novel class of random forests, called Mondrian forests (MF), due to the fact that the underlying tree
structure of each classifier in the ensemble is a so-called Mondrian process. Using the properties of
Mondrian processes, we present an efficient online algorithm that agrees with its batch counterpart at
each iteration. Not only are online Mondrian forests faster and more accurate than recent proposals
for online random forest methods, but they nearly match the accuracy of state-of-the-art batch random
forest methods trained on the same dataset.
The paper is organized as follows: In Section 2, we describe our approach at a high-level, and in
Sections 3, 4, and 5, we describe the tree structures, label model, and incremental updates/predictions
in more detail. We discuss related work in Section 6, demonstrate the excellent empirical performance
of MF in Section 7, and conclude in Section 8 with a discussion about future work.
2 Approach
Given N labeled examples $(x_1, y_1), \ldots, (x_N, y_N) \in \mathbb{R}^D \times \mathcal{Y}$ as training data, our task is to predict labels $y \in \mathcal{Y}$ for unlabeled test points $x \in \mathbb{R}^D$. We will focus on multi-class classification where $\mathcal{Y} := \{1, \ldots, K\}$; however, it is possible to extend the methodology to other supervised learning tasks such as regression. Let $X_{1:n} := (x_1, \ldots, x_n)$, $Y_{1:n} := (y_1, \ldots, y_n)$, and $\mathcal{D}_{1:n} := (X_{1:n}, Y_{1:n})$.
A Mondrian forest classifier is constructed much like a random forest: Given training data $\mathcal{D}_{1:N}$, we sample an independent collection $T_1, \ldots, T_M$ of so-called Mondrian trees, which we will describe in the next section. The prediction made by each Mondrian tree $T_m$ is a distribution $p_{T_m}(y|x, \mathcal{D}_{1:N})$ over the class label $y$ for a test point $x$. The prediction made by the Mondrian forest is the average $\frac{1}{M}\sum_{m=1}^{M} p_{T_m}(y|x, \mathcal{D}_{1:N})$ of the individual tree predictions. As $M \to \infty$, the average converges at the standard rate to the expectation $\mathbb{E}_{T \sim \mathrm{MT}(\lambda, \mathcal{D}_{1:N})}[p_T(y|x, \mathcal{D}_{1:N})]$, where $\mathrm{MT}(\lambda, \mathcal{D}_{1:N})$ is the distribution of a Mondrian tree with lifetime $\lambda$. As the limiting expectation does not depend on M, we would not expect to see overfitting behavior as M increases. A similar observation was made by Breiman in his seminal article [2] introducing random forests. Note that the averaging procedure above is ensemble model combination and not Bayesian model averaging.
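A minimal sketch of this averaging step (names are ours; each per-tree predictor returns a length-K probability vector):

import numpy as np

def forest_predict(tree_predictors, x):
    # (1/M) * sum_m p_{T_m}(y | x, D_{1:N})
    probs = np.stack([predict(x) for predict in tree_predictors])
    return probs.mean(axis=0)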
In the online learning setting, the training examples are presented one after another in a sequence of trials. Mondrian forests excel in this setting: at iteration N+1, each Mondrian tree $T \sim \mathrm{MT}(\lambda, \mathcal{D}_{1:N})$ is updated to incorporate the next labeled example $(x_{N+1}, y_{N+1})$ by sampling an extended tree $T'$ from a distribution $\mathrm{MTx}(\lambda, T, \mathcal{D}_{N+1})$. Using properties of the Mondrian process, we can choose a probability distribution MTx such that $T' = T$ on $\mathcal{D}_{1:N}$ and $T'$ is distributed according to $\mathrm{MT}(\lambda, \mathcal{D}_{1:N+1})$, i.e.,
$$T \sim \mathrm{MT}(\lambda, \mathcal{D}_{1:N}) \ \text{ and } \ T' \mid T, \mathcal{D}_{1:N+1} \sim \mathrm{MTx}(\lambda, T, \mathcal{D}_{N+1}) \ \implies \ T' \sim \mathrm{MT}(\lambda, \mathcal{D}_{1:N+1}). \qquad (1)$$
Therefore, the distribution of Mondrian trees trained on a dataset in an incremental fashion is the same as that of Mondrian trees trained on the same dataset in a batch fashion, irrespective of the order in which the data points are observed. To the best of our knowledge, none of the existing online random forests have this property. Moreover, we can sample from $\mathrm{MTx}(\lambda, T, \mathcal{D}_{N+1})$ efficiently: the complexity scales with the depth of the tree, which is typically logarithmic in N.
While treating the online setting as a sequence of larger and larger batch problems is normally computationally prohibitive, this approach can be achieved efficiently with Mondrian forests. In the following sections, we define the Mondrian tree distribution $\mathrm{MT}(\lambda, \mathcal{D}_{1:N})$, the label distribution $p_T(y|x, \mathcal{D}_{1:N})$, and the update distribution $\mathrm{MTx}(\lambda, T, \mathcal{D}_{N+1})$.
3 Mondrian trees
For our purposes, a decision tree on $\mathbb{R}^D$ will be a hierarchical, binary partitioning of $\mathbb{R}^D$ and a rule for predicting the label of test points given training data. More carefully, a rooted, strictly-binary tree is a finite tree $\mathsf{T}$ such that every node in $\mathsf{T}$ is either a leaf or internal node, and every node is the child of exactly one parent node, except for a distinguished root node, represented by $\epsilon$, which has no parent. Let $\mathrm{leaves}(\mathsf{T})$ denote the set of leaf nodes in $\mathsf{T}$. For every internal node $j \in \mathsf{T} \setminus \mathrm{leaves}(\mathsf{T})$, there are exactly two children nodes, represented by $\mathrm{left}(j)$ and $\mathrm{right}(j)$. To each node $j \in \mathsf{T}$, we associate a block $B_j \subseteq \mathbb{R}^D$ of the input space as follows: We let $B_\epsilon := \mathbb{R}^D$. Each internal node $j \in \mathsf{T} \setminus \mathrm{leaves}(\mathsf{T})$ is associated with a split $(\delta_j, \xi_j)$, where $\delta_j \in \{1, 2, \ldots, D\}$ denotes the dimension of the split and $\xi_j$ denotes the location of the split along dimension $\delta_j$. We then define
$$B_{\mathrm{left}(j)} := \{x \in B_j : x_{\delta_j} \le \xi_j\} \quad \text{and} \quad B_{\mathrm{right}(j)} := \{x \in B_j : x_{\delta_j} > \xi_j\}. \qquad (2)$$
We may write $B_j = (\ell_{j1}, u_{j1}] \times \ldots \times (\ell_{jD}, u_{jD}]$, where $\ell_{jd}$ and $u_{jd}$ denote the lower and upper bounds, respectively, of the rectangular block $B_j$ along dimension d. Put $\ell_j = \{\ell_{j1}, \ell_{j2}, \ldots, \ell_{jD}\}$ and $u_j = \{u_{j1}, u_{j2}, \ldots, u_{jD}\}$. The decision tree structure is represented by the tuple $\mathcal{T} = (\mathsf{T}, \delta, \xi)$. We refer to Figure 1(a) for a simple illustration of a decision tree.
It will be useful to introduce some additional notation. Let $\mathrm{parent}(j)$ denote the parent of node j. Let $N(j)$ denote the indices of training data points at node j, i.e., $N(j) = \{n \in \{1, \ldots, N\} : x_n \in B_j\}$. Let $\mathcal{D}_{N(j)} = \{X_{N(j)}, Y_{N(j)}\}$ denote the features and labels of training data points at node j. Let $\ell^x_{jd}$ and $u^x_{jd}$ denote the lower and upper bounds of training data points (hence the superscript x) respectively in node j along dimension d. Let $B^x_j = (\ell^x_{j1}, u^x_{j1}] \times \ldots \times (\ell^x_{jD}, u^x_{jD}] \subseteq B_j$ denote the smallest rectangle that encloses the training data points in node j.
3.1 Mondrian process distribution over decision trees
Mondrian processes, introduced by Roy and Teh [19], are families $\{\mathcal{M}_t : t \in [0, \infty)\}$ of random, hierarchical binary partitions of $\mathbb{R}^D$ such that $\mathcal{M}_t$ is a refinement of $\mathcal{M}_s$ whenever $t > s$.1 Mondrian processes are natural candidates for the partition structure of random decision trees, but Mondrian
1 Roy and Teh [19] studied the distribution of $\{\mathcal{M}_t : t \le \lambda\}$ and referred to $\lambda$ as the budget. See [18, Chp. 5] for more details. We will refer to t as time, not to be confused with discrete time in the online learning setting.
[Figure 1 graphic: panel (a) shows a decision tree partition of [0,1]^2 with splits x1 > 0.37 and x2 > 0.5; panel (b) shows the corresponding Mondrian tree with the same splits placed at times 0.42 and 0.7 on a vertical time axis.]
Figure 1: Example of a decision tree in $[0, 1]^2$, where x1 and x2 denote the horizontal and vertical axes respectively: Figure 1(a) shows the tree structure and partition of a decision tree, while Figure 1(b) shows a Mondrian tree. Note that the Mondrian tree is embedded on a vertical time axis, with each node associated with a time of split, and the splits are committed only within the range of the training data in each block (denoted by gray rectangles). Let j denote the left child of the root: $B_j = (0, 0.37] \times (0, 1]$ denotes the block associated with the red circles, and $B^x_j \subset B_j$ is the smallest rectangle enclosing the two data points.
processes on $\mathbb{R}^D$ are, in general, infinite structures that we cannot represent all at once. Because we only care about the partition on a finite set of observed data, we introduce Mondrian trees, which are restrictions of Mondrian processes to a finite set of points. A Mondrian tree T can be represented by a tuple $(\mathsf{T}, \delta, \xi, \tau)$, where $(\mathsf{T}, \delta, \xi)$ is a decision tree and $\tau = \{\tau_j\}_{j \in \mathsf{T}}$ associates a time of split $\tau_j \ge 0$ with each node j. Split times increase with depth, i.e., $\tau_j > \tau_{\mathrm{parent}(j)}$. We abuse notation and define $\tau_{\mathrm{parent}(\epsilon)} = 0$.
Given a non-negative lifetime parameter $\lambda$ and training data $\mathcal{D}_{1:n}$, the generative process for sampling Mondrian trees from $\mathrm{MT}(\lambda, \mathcal{D}_{1:n})$ is described in the following two algorithms:
Algorithm 1 SampleMondrianTree(λ, D_{1:n})
1: Initialize: T = ∅, leaves(T) = ∅, δ = ∅, ξ = ∅, τ = ∅, N(ε) = {1, 2, ..., n}
2: SampleMondrianBlock(ε, D_{N(ε)}, λ)   ▷ Algorithm 2

Algorithm 2 SampleMondrianBlock(j, D_{N(j)}, λ)
1: Add j to T
2: For all d, set ℓ^x_{jd} = min(X_{N(j),d}), u^x_{jd} = max(X_{N(j),d})   ▷ dimension-wise min and max
3: Sample E from an exponential distribution with rate Σ_d (u^x_{jd} − ℓ^x_{jd})
4: if τ_{parent(j)} + E < λ then   ▷ j is an internal node
5:   Set τ_j = τ_{parent(j)} + E
6:   Sample split dimension δ_j, choosing d with probability proportional to u^x_{jd} − ℓ^x_{jd}
7:   Sample split location ξ_j uniformly from the interval [ℓ^x_{j,δ_j}, u^x_{j,δ_j}]
8:   Set N(left(j)) = {n ∈ N(j) : X_{n,δ_j} ≤ ξ_j} and N(right(j)) = {n ∈ N(j) : X_{n,δ_j} > ξ_j}
9:   SampleMondrianBlock(left(j), D_{N(left(j))}, λ)
10:  SampleMondrianBlock(right(j), D_{N(right(j))}, λ)
11: else   ▷ j is a leaf node
12:  Set τ_j = λ and add j to leaves(T)
The procedure starts with the root node $\epsilon$ and recurses down the tree. In Algorithm 2, we first compute $\ell^x_\epsilon$ and $u^x_\epsilon$, i.e., the lower and upper bounds of $B^x_\epsilon$, the smallest rectangle enclosing $X_{N(\epsilon)}$. We sample E from an exponential distribution whose rate is the so-called linear dimension of $B^x_\epsilon$, given by $\sum_d (u^x_{\epsilon d} - \ell^x_{\epsilon d})$. Since $\tau_{\mathrm{parent}(\epsilon)} = 0$, $E + \tau_{\mathrm{parent}(\epsilon)} = E$. If $E \ge \lambda$, the time of split is not within the lifetime $\lambda$; hence, we assign $\epsilon$ to be a leaf node and the procedure halts. (Since $\mathbb{E}[E] = 1/\sum_d (u^x_{jd} - \ell^x_{jd})$, bigger rectangles are less likely to be leaf nodes.) Else, $\epsilon$ is an internal node and we sample a split $(\delta_\epsilon, \xi_\epsilon)$ from the uniform split distribution on $B^x_\epsilon$. More precisely, we first sample the dimension $\delta_\epsilon$, taking the value d with probability proportional to $u^x_{\epsilon d} - \ell^x_{\epsilon d}$, and then sample the split location $\xi_\epsilon$ uniformly from the interval $[\ell^x_{\epsilon \delta_\epsilon}, u^x_{\epsilon \delta_\epsilon}]$. The procedure then recurses along the left and right children.
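As a concrete companion to Algorithms 1-2, here is a minimal Python sketch (our own rendering, not the authors' released code). Labels are ignored, since splits do not depend on them, and a finite lifetime lam is assumed so the recursion terminates:

import numpy as np

rng = np.random.default_rng(0)

class Node:
    def __init__(self, idx, tau_parent):
        self.idx, self.tau_parent = idx, tau_parent   # data indices, parent split time
        self.left = self.right = None
        self.delta = self.xi = self.tau = None        # split dimension, location, time

def sample_mondrian_block(X, node, lam):
    Xj = X[node.idx]
    node.l, node.u = Xj.min(axis=0), Xj.max(axis=0)   # bounds of B^x_j
    rate = (node.u - node.l).sum()                    # linear dimension of B^x_j
    E = rng.exponential(1.0 / rate) if rate > 0 else np.inf
    if node.tau_parent + E < lam:                     # internal node
        node.tau = node.tau_parent + E
        node.delta = rng.choice(len(node.l), p=(node.u - node.l) / rate)
        node.xi = rng.uniform(node.l[node.delta], node.u[node.delta])
        go_left = Xj[:, node.delta] <= node.xi
        node.left = Node(node.idx[go_left], node.tau)
        node.right = Node(node.idx[~go_left], node.tau)
        sample_mondrian_block(X, node.left, lam)
        sample_mondrian_block(X, node.right, lam)
    else:                                             # leaf node
        node.tau = lam

def sample_mondrian_tree(X, lam):
    root = Node(np.arange(len(X)), 0.0)
    sample_mondrian_block(X, root, lam)
    return root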
Mondrian trees differ from standard decision trees (e.g. CART, C4.5) in the following ways: (i) the splits are sampled independently of the labels $Y_{N(j)}$; (ii) every node j is associated with a split time denoted by $\tau_j$; (iii) the lifetime parameter $\lambda$ controls the total number of splits (similar to the maximum depth parameter for standard decision trees); (iv) the split represented by an internal node j holds only within $B^x_j$ and not the whole of $B_j$. No commitment is made in $B_j \setminus B^x_j$. Figure 1 illustrates the difference between decision trees and Mondrian trees.
Consider the family of distributions $\mathrm{MT}(\lambda, F)$, where F ranges over all possible finite sets of data points. Due to the fact that these distributions are derived from that of a Mondrian process on $\mathbb{R}^D$ restricted to a set F of points, the family $\mathrm{MT}(\lambda, \cdot)$ will be projective. Intuitively, projectivity implies that the tree distributions possess a type of self-consistency. In words, if we sample a Mondrian tree T from $\mathrm{MT}(\lambda, F)$ and then restrict the tree T to a subset $F' \subseteq F$ of points, then the restricted tree $T'$ has distribution $\mathrm{MT}(\lambda, F')$. Most importantly, projectivity gives us a consistent way to extend a Mondrian tree on a data set $\mathcal{D}_{1:N}$ to a larger data set $\mathcal{D}_{1:N+1}$. We exploit this property to incrementally grow a Mondrian tree: we instantiate the Mondrian tree on the observed training data points; upon observing a new data point $\mathcal{D}_{N+1}$, we extend the Mondrian tree by sampling from the conditional distribution of a Mondrian tree on $\mathcal{D}_{1:N+1}$ given its restriction to $\mathcal{D}_{1:N}$, denoted by $\mathrm{MTx}(\lambda, T, \mathcal{D}_{N+1})$ in (1). Thus, a Mondrian process on $\mathbb{R}^D$ is represented only where we have observed training data.
4 Label distribution: model, hierarchical prior, and predictive posterior
So far, our discussion has been focused on the tree structure. In this section, we focus on the predictive label distribution, $p_T(y|x, \mathcal{D}_{1:N})$, for a tree $T = (\mathsf{T}, \delta, \xi, \tau)$, dataset $\mathcal{D}_{1:N}$, and test point x. Let $\mathrm{leaf}(x)$ denote the unique leaf node $j \in \mathrm{leaves}(\mathsf{T})$ such that $x \in B_j$. Intuitively, we want the predictive label distribution at x to be a smoothed version of the empirical distribution of labels for points in $B_{\mathrm{leaf}(x)}$ and in $B_{j'}$ for nearby nodes $j'$. We achieve this smoothing via a hierarchical Bayesian approach: every node is associated with a label distribution, and a prior is chosen under which the label distribution of a node is similar to that of its parent's. The predictive $p_T(y|x, \mathcal{D}_{1:N})$ is then obtained via marginalization.
As is common in the decision tree literature, we assume the labels within each block are independent of X given the tree structure. For every $j \in \mathsf{T}$, let $G_j$ denote the distribution of labels at node j, and let $G = \{G_j : j \in \mathsf{T}\}$ be the set of label distributions at all the nodes in the tree. Given T and G, the predictive label distribution at x is $p(y|x, T, G) = G_{\mathrm{leaf}(x)}$, i.e., the label distribution at the node $\mathrm{leaf}(x)$. In this paper, we focus on the case of categorical labels taking values in the set $\{1, \ldots, K\}$, and so we abuse notation and write $G_{j,k}$ for the probability that a point in $B_j$ is labeled k.
We model the collection $G_j$, for $j \in \mathsf{T}$, as a hierarchy of normalized stable processes (NSP) [24]. An NSP prior is a distribution over distributions and is a special case of the Pitman-Yor process (PYP) prior where the concentration parameter is taken to zero [17].2 The discount parameter $d \in (0, 1)$ controls the variation around the base distribution; if $G_j \sim \mathrm{NSP}(d, H)$, then $\mathbb{E}[G_{jk}] = H_k$ and $\mathrm{Var}[G_{jk}] = (1 - d) H_k (1 - H_k)$. We use a hierarchical NSP (HNSP) prior over $G_j$ as follows:
$$G_\epsilon \mid H \sim \mathrm{NSP}(d_\epsilon, H), \quad \text{and} \quad G_j \mid G_{\mathrm{parent}(j)} \sim \mathrm{NSP}(d_j, G_{\mathrm{parent}(j)}). \qquad (3)$$
This hierarchical prior was first proposed by Wood et al. [24]. Here we take the base distribution H to be the uniform distribution over the K labels, and set $d_j = \exp(-\gamma(\tau_j - \tau_{\mathrm{parent}(j)}))$.
Given training data $\mathcal{D}_{1:N}$, the predictive distribution $p_T(y|x, \mathcal{D}_{1:N})$ is obtained by integrating over G, i.e., $p_T(y|x, \mathcal{D}_{1:N}) = \mathbb{E}_{G \sim p_T(G|\mathcal{D}_{1:N})}[G_{\mathrm{leaf}(x),y}] = \overline{G}_{\mathrm{leaf}(x),y}$, where the posterior $p_T(G|\mathcal{D}_{1:N}) \propto p_T(G) \prod_{n=1}^{N} G_{\mathrm{leaf}(x_n),y_n}$. Posterior inference in the HNSP, i.e., computation of the posterior means $\overline{G}_{\mathrm{leaf}(x)}$, is a special case of posterior inference in the hierarchical PYP (HPYP). In particular, Teh [22] considers the HPYP with multinomial likelihood (in the context of language modeling); the model considered here is a special case of [22]. Exact inference is intractable and hence we resort to approximations. In particular, we use a fast approximation known as interpolated Kneser-Ney (IKN) smoothing [22], a popular technique for smoothing probabilities in language modeling [13]. The IKN approximation in [22] can be extended in a straightforward fashion to the online setting, and the computational complexity of adding a new training instance is linear in the depth of the tree. We refer the reader to Appendix A for further details.
2 Taking the discount parameter to zero leads to a Dirichlet process. Hierarchies of NSPs admit more tractable approximations than hierarchies of Dirichlet processes [24], hence our choice here.
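To convey the flavor of the IKN smoothing mentioned above, here is a minimal sketch of one smoothing step (an illustrative simplification of the recursion in [22], not the exact implementation): counts at a node are discounted and the subtracted mass is redistributed according to the parent's already-smoothed distribution.

import numpy as np

def ikn_smooth(counts, d, parent_probs):
    # counts: length-K label counts at a node; d: discount in (0,1);
    # parent_probs: smoothed distribution at the parent (base distribution H at the root)
    total = counts.sum()
    if total == 0:
        return parent_probs
    tables = np.minimum(counts, 1)                             # IKN approximation: one "table" per seen label
    probs = (counts - d * tables) / total                      # discounted counts
    probs = probs + (d * tables.sum() / total) * parent_probs  # redistributed mass
    return probs

Applying this from the root down to leaf(x) yields posterior means that interpolate between a node's empirical label distribution and its parent's.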
5 Online training and prediction
In this section, we describe the family of distributions $\mathrm{MTx}(\lambda, T, \mathcal{D}_{N+1})$, which are used to incrementally add a data point $\mathcal{D}_{N+1}$ to a tree T. These updates are based on the conditional Mondrian algorithm [19], specialized to a finite set of points. In general, one or more of the following three operations may be executed while introducing a new data point: (i) introduction of a new split "above" an existing split, (ii) extension of an existing split to the updated extent of the block, and (iii) splitting an existing leaf node into two children. To the best of our knowledge, existing online decision trees use just the third operation; the first two operations are unique to Mondrian trees. The complete pseudo-code for incrementally updating a Mondrian tree T with a new data point D according to $\mathrm{MTx}(\lambda, T, D)$ is described in the following two algorithms. Figure 2 walks through the algorithms on a toy dataset.
Algorithm 3 ExtendMondrianTree(T, λ, D)
1: Input: Tree T = (T, δ, ξ, τ), new training instance D = (x, y)
2: ExtendMondrianBlock(T, λ, ε, D)   ▷ Algorithm 4

Algorithm 4 ExtendMondrianBlock(T, λ, j, D)
1: Set e^ℓ = max(ℓ^x_j − x, 0) and e^u = max(x − u^x_j, 0)   ▷ e^ℓ = e^u = 0_D if x ∈ B^x_j
2: Sample E from an exponential distribution with rate Σ_d (e^ℓ_d + e^u_d)
3: if τ_{parent(j)} + E < τ_j then   ▷ introduce new parent for node j
4:   Sample split dimension δ, choosing d with probability proportional to e^ℓ_d + e^u_d
5:   Sample split location ξ uniformly from the interval [u^x_{j,δ}, x_δ] if x_δ > u^x_{j,δ}, else [x_δ, ℓ^x_{j,δ}]
6:   Insert a new node j̃ just above node j in the tree, and a new leaf j″, sibling to j, where
7:     δ_{j̃} = δ, ξ_{j̃} = ξ, τ_{j̃} = τ_{parent(j)} + E, ℓ^x_{j̃} = min(ℓ^x_j, x), u^x_{j̃} = max(u^x_j, x)
8:     j″ = left(j̃) iff x_{δ_{j̃}} ≤ ξ_{j̃}
9:   SampleMondrianBlock(j″, D, λ)
10: else
11:  Update ℓ^x_j ← min(ℓ^x_j, x), u^x_j ← max(u^x_j, x)   ▷ update extent of node j
12:  if j ∉ leaves(T) then   ▷ return if j is a leaf node, else recurse down the tree
13:    if x_{δ_j} ≤ ξ_j then child(j) = left(j) else child(j) = right(j)
14:    ExtendMondrianBlock(T, λ, child(j), D)   ▷ recurse on child containing D
In practice, random forest implementations stop splitting a node when all the labels are identical and assign it to be a leaf node. To make our MF implementation comparable, we "pause" a Mondrian block when all the labels are identical; if a new training instance lies within B_j of a paused leaf node j and has the same label as the rest of the data points in B_j, we continue pausing the Mondrian block. We "un-pause" the Mondrian block when there is more than one unique label in that block. Algorithms 9 and 10 in the supplementary material discuss versions of SampleMondrianBlock and ExtendMondrianBlock for paused Mondrians.
Prediction using Mondrian tree   Let x denote a test data point. If x is already "contained" in the tree T, i.e., if x ∈ B^x_j for some leaf j ∈ leaves(T), then the prediction is taken to be Ḡ_leaf(x). Otherwise, we somehow need to incorporate x. One choice is to extend T by sampling T′ from MTx(λ, T, x) as described in Algorithm 3, and set the prediction to Ḡ_j, where j ∈ leaves(T′) is the leaf node containing x. A particular extension T′ might lead to an overly confident prediction; hence, we average over every possible extension T′. This integration can be carried out analytically and the computational complexity is linear in the depth of the tree. We refer to Appendix B for further details.
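One ingredient of that analytic average is the probability that x "branches off" inside a given block, which, following the construction above, depends only on the block's time interval Δ_j = τ_j − τ_parent(j) and the extra extent that x adds. A hedged sketch of this quantity (our naming; the full recursion over the root-to-leaf path is in Appendix B):

```python
import numpy as np

def branch_prob(x, lower, upper, time, parent_time):
    """Probability that test point x is separated from block j by a new
    split somewhere in (parent_time, time): 1 - exp(-delta * eta)."""
    eta = np.maximum(x - upper, 0).sum() + np.maximum(lower - x, 0).sum()
    delta = time - parent_time
    return 1.0 - np.exp(-delta * eta)   # exactly 0 when x lies inside the block
```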
6 Related work
The literature on random forests is vast and we do not attempt to cover it comprehensively; we provide
a brief review here and refer to [6] and [8] for a recent review of random forests in batch and online
settings respectively. Classic decision tree induction procedures choose the best split dimension and
location from all candidate splits at each node by optimizing some suitable quality criterion (e.g.
information gain) in a greedy manner. In a random forest, the individual trees are randomized to
de-correlate their predictions. The most common strategies for injecting randomness are (i) bagging
[1] and (ii) randomly subsampling the set of candidate splits within each node.
Figure 2: Online learning with Mondrian trees on a toy dataset: We assume that λ = ∞, D = 2 and add one data point at each iteration. For simplicity, we ignore class labels and denote the location of training data with red circles. Figures 2(a), 2(c) and 2(f) show the partitions after the first, second and third iterations, respectively, with the intermediate figures denoting intermediate steps. Figures 2(g), 2(h) and 2(i) show the trees after the first, second and third iterations, along with a shared vertical time axis.
At iteration 1, we have two training data points, labeled as a, b. Figures 2(a) and 2(g) show the partition and tree structure of the Mondrian tree. Note that even though there is a split x2 > 0.23 at time t = 2.42, we commit this split only within B^x_j (shown by the gray rectangle).
At iteration 2, a new data point c is added. Algorithm 3 starts with the root node and recurses down the tree. Algorithm 4 checks if the new data point lies within B^x_ε by computing the additional extent e^ℓ and e^u. In this case, c does not lie within B^x_ε. Let R_ab and R_abc respectively denote the small gray rectangle (enclosing a, b) and the big gray rectangle (enclosing a, b, c) in Figure 2(b). While extending the Mondrian from R_ab to R_abc, we could either introduce a new split in R_abc outside R_ab or extend the split in R_ab to the new range. To choose between these two options, we sample the time of this new split: we first sample E from an exponential distribution whose rate is the sum of the additional extent, i.e., Σ_d (e^ℓ_d + e^u_d), and set the time of the new split to E + τ_parent(ε). If E + τ_parent(ε) ≤ τ_ε, this new split in R_abc can precede the old split in R_ab and a split is sampled in R_abc outside R_ab. In Figures 2(c) and 2(h), E + τ_parent(ε) = 1.01 + 0 ≤ 2.42, hence a new split x1 > 0.75 is introduced. The farther a new data point x is from B^x_j, the higher the rate Σ_d (e^ℓ_d + e^u_d), and subsequently the higher the probability of a new split being introduced, since E[E] = 1/Σ_d (e^ℓ_d + e^u_d). A new split in R_abc is sampled such that it is consistent with the existing partition structure in R_ab (i.e., the new split cannot slice through R_ab).
In the final iteration, we add data point d. In Figure 2(d), the data point d lies within the extent of the root node, hence we traverse to the left side of the root and update B^x_j of the internal node containing {a, b} to include d. We could either introduce a new split or extend the split x2 > 0.23. In Figure 2(e), we extend the split x2 > 0.23 to the new extent, and traverse to the leaf node in Figure 2(h) containing b. In Figures 2(f) and 2(i), we sample E = 1.55 and since τ_parent(j) + E = 2.42 + 1.55 = 3.97 ≤ λ = ∞, we introduce a new split x1 > 0.47.
Two popular random forest variants in the batch setting are Breiman-RF [2] and Extremely randomized
trees (ERT) [12]. Breiman-RF uses bagging and furthermore, at each node, a random k-dimensional
subset of the original D features is sampled. ERT chooses a k-dimensional subset of the features and
then chooses one split location each for the k features randomly (unlike Breiman-RF which considers
all possible split locations along a dimension). ERT does not use bagging. When k = 1, the ERT
trees are totally randomized and the splits are chosen independent of the labels; hence the ERT-1
method is very similar to MF in the batch setting in terms of tree induction. (Note that unlike ERT,
MF uses HNSP to smooth predictive estimates and allows a test point to branch off into its own node.)
Perfect random trees (PERT), proposed by Cutler and Zhao [7] for classification problems, produce
totally randomized trees similar to ERT-1, although there are some slight differences [12].
Existing online random forests (ORF-Saffari [20] and ORF-Denil [8]) start with an empty tree and
grow the tree incrementally. Every leaf of every tree maintains a list of k candidate splits and
associated quality scores. When a new data point is added, the scores of the candidate splits at the
corresponding leaf node are updated. To reduce the risk of choosing a sub-optimal split based on
noisy quality scores, additional hyper parameters such as the minimum number of data points at a
leaf node before a decision is made and the minimum threshold for the quality criterion of the best
split, are used to assess "confidence" associated with a split. Once these criteria are satisfied at a leaf
node, the best split is chosen (making this node an internal node) and its two children are the new
leaf nodes (with their own candidate splits), and the process is repeated. These methods could be
memory inefficient for deep trees due to the high cost associated with maintaining candidate quality
scores for the fringe of potential children [8].
There has been some work on incremental induction of decision trees, e.g. incremental CART [5],
ITI [23], VFDT [11] and dynamic trees [21], but to the best of our knowledge, these are focused on
learning decision trees and have not been generalized to online random forests. We do not compare
MF to incremental decision trees, since random forests are known to outperform single decision trees.
Bayesian models of decision trees [4, 9] typically specify a distribution over decision trees; such
distributions usually depend on X and lack the projectivity property of the Mondrian process. More
importantly, MF performs ensemble model combination and not Bayesian model averaging over
decision trees. (See [10] for a discussion on the advantages of ensembles over single models, and
[15] for a comparison of Bayesian model averaging and model combination.)
7 Empirical evaluation
The purpose of these experiments is to evaluate the predictive performance (test accuracy) of MF
as a function of (i) fraction of training data and (ii) training time. We divide the training data into
100 mini-batches and we compare the performance of online random forests (MF, ORF-Saffari [20])
to batch random forests (Breiman-RF, ERT-k, ERT-1) which are trained on the same fraction of the
training data. (We compare MF to dynamic trees as well; see Appendix F for more details.) Our
scripts are implemented in Python. We implemented the ORF-Saffari algorithm as well as ERT in
Python for timing comparisons. The scripts can be downloaded from the authors' webpages. We
did not implement the ORF-Denil [8] algorithm since the predictive performance reported in [8] is
very similar to that of ORF-Saffari and the computational complexity of the ORF-Denil algorithm is
worse than that of ORF-Saffari. We used the Breiman-RF implementation in scikit-learn [16].3
We evaluate on four of the five datasets used in [20]; we excluded the mushroom dataset, as even very simple logical rules achieve > 99% accuracy on this dataset.4 We re-scaled the datasets such that each feature takes on values in the range [0, 1] (by subtracting the min value along that dimension and dividing by the range along that dimension, where range = max − min).
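A sketch of this preprocessing (names are ours), with a small guard for constant features that the text does not discuss:

```python
import numpy as np

def min_max_rescale(train, test):
    lo, hi = train.min(axis=0), train.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    return (train - lo) / rng, (test - lo) / rng
```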
As is common in the random forest literature [2], we set the number of trees M = 100. For Mondrian forests, we set the lifetime λ = ∞ and the HNSP discount parameter γ = 10D. For ORF-Saffari, we set num epochs = 20 (number of passes through the training data) and set the other hyper parameters to the values used in [20]. For Breiman-RF and ERT, the hyper parameters are set to default values.
We repeat each algorithm with five random initializations and report the mean performance. The
results are shown in Figure 3. (The * in Breiman-RF* indicates scikit-learn implementation.)
Comparing test accuracy vs fraction of training data on usps, satimages and letter datasets, we
observe that MF achieves accuracy very close to the batch RF versions (Breiman-RF, ERT-k,
ERT-1) trained on the same fraction of the data. MF significantly outperforms ORF-Saffari
trained on the same fraction of training data. In batch RF versions, the same training data can
be used to evaluate candidate splits at a node and its children. However, in the online RF versions
(ORF-Saffari and ORF-Denil), incoming training examples are used to evaluate candidate splits just
at a current leaf node and new training data are required to evaluate candidate splits every time a
new leaf node is created. Saffari et al. [20] recommend multiple passes through the training data to
increase the effective number of training samples. In a realistic streaming data setup, where training
examples cannot be stored for multiple passes, MF would require significantly fewer examples than
ORF-Saffari to achieve the same accuracy.
Comparing test accuracy vs training time on usps, satimages and letter datasets, we observe that MF
is at least an order of magnitude faster than re-trained batch versions and ORF-Saffari. For
ORF-Saffari, we plot test accuracy at the end of every additional pass; hence it contains additional
markers compared to the top row which plots results after a single pass. Re-training batch RF using
100 mini-batches is unfair to MF; in a streaming data setup where the model is updated when a
new training instance arrives, MF would be significantly faster than the re-trained batch versions.
3. The scikit-learn implementation uses highly optimized C code, hence we do not compare our runtimes with the scikit-learn implementation. The ERT implementation in scikit-learn achieves very similar test accuracy as our ERT implementation, hence we do not report those results here.
4. https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.names
Assuming trees are balanced after adding each data point, it can be shown that computational cost of
MF scales as O(N log N ) whereas that of re-trained batch RF scales as O(N 2 log N ) (Appendix C).
Appendix E shows that the average depth of the forests trained on above datasets scales as O(log N ).
It is remarkable that choosing splits independent of labels achieves competitive classification performance. This phenomenon has been observed by others as well; for example, Cutler and Zhao [7] demonstrate that their PERT classifier (which is similar to a batch version of MF) achieves test accuracy comparable to Breiman-RF on many real world datasets. However, in the presence of irrelevant features, methods which choose splits independent of labels (MF, ERT-1) perform worse than Breiman-RF and ERT-k (but still better than ORF-Saffari), as indicated by the results on the dna dataset. We trained MF and ERT-1 using just the most relevant 60 attributes amongst the 180 attributes5; these results are indicated as MF† and ERT-1† in Figure 3. We observe that, as expected,
filtering out irrelevant features significantly improves performance of MF and ERT-1.
Figure 3: Results on various datasets: y-axis is test accuracy in both rows. x-axis is fraction of training data for the top row and training time (in seconds) for the bottom row. We used the pre-defined train/test split. For the usps dataset D = 256, K = 10, Ntrain = 7291, Ntest = 2007; for the satimages dataset D = 36, K = 6, Ntrain = 3104, Ntest = 2000; for the letter dataset D = 16, K = 26, Ntrain = 15000, Ntest = 5000; for the dna dataset D = 180, K = 3, Ntrain = 1400, Ntest = 1186.
8 Discussion
We have introduced Mondrian forests, a novel class of random forests, which can be trained incrementally in an efficient manner. MF significantly outperforms existing online random forests in
terms of training time as well as number of training instances required to achieve a particular test
accuracy. Remarkably, MF achieves competitive test accuracy to batch random forests trained on the
same fraction of the data. MF is unable to handle lots of irrelevant features (since splits are chosen independent of the labels); one way to use labels to guide splits is via the recently proposed Sequential Monte Carlo algorithm for decision trees [14]. The computational complexity of MF is linear in the
number of dimensions (since rectangles are represented explicitly) which could be expensive for
high dimensional data; we will address this limitation in future work. Random forests have been
tremendously influential in machine learning for a variety of tasks; hence lots of other interesting
extensions of this work are possible, e.g. MF for regression, theoretical bias-variance analysis of MF,
extensions of MF that use hyperplane splits instead of axis-aligned splits.
Acknowledgments
We would like to thank Charles Blundell, Gintare Dziugaite, Creighton Heaukulani, José Miguel Hernández-Lobato, Maria Lomeli, Alex Smola, Heiko Strathmann and Srini Turaga for helpful
discussions and feedback on drafts. BL gratefully acknowledges generous funding from the Gatsby
Charitable Foundation. This research was carried out in part while DMR held a Research Fellowship
at Emmanuel College, Cambridge, with funding also from a Newton International Fellowship through
the Royal Society. YWT?s research leading to these results was funded in part by the European
Research Council under the European Union?s Seventh Framework Programme (FP7/2007-2013)
ERC grant agreement no. 617411.
5. https://www.sgi.com/tech/mlc/db/DNA.names
References
[1] L. Breiman. Bagging predictors. Mach. Learn., 24(2):123–140, 1996.
[2] L. Breiman. Random forests. Mach. Learn., 45(1):5–32, 2001.
[3] R. Caruana and A. Niculescu-Mizil. An empirical comparison of supervised learning algorithms. In Proc. Int. Conf. Mach. Learn. (ICML), 2006.
[4] H. A. Chipman, E. I. George, and R. E. McCulloch. Bayesian CART model search. J. Am. Stat. Assoc., pages 935–948, 1998.
[5] S. L. Crawford. Extensions to the CART algorithm. Int. J. Man-Machine Stud., 31(2):197–217, 1989.
[6] A. Criminisi, J. Shotton, and E. Konukoglu. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends Comput. Graphics and Vision, 7(2–3):81–227, 2012.
[7] A. Cutler and G. Zhao. PERT – Perfect Random Tree Ensembles. Comput. Sci. and Stat., 33:490–497, 2001.
[8] M. Denil, D. Matheson, and N. de Freitas. Consistency of online random forests. In Proc. Int. Conf. Mach. Learn. (ICML), 2013.
[9] D. G. T. Denison, B. K. Mallick, and A. F. M. Smith. A Bayesian CART algorithm. Biometrika, 85(2):363–377, 1998.
[10] T. G. Dietterich. Ensemble methods in machine learning. In Multiple classifier systems, pages 1–15. Springer, 2000.
[11] P. Domingos and G. Hulten. Mining high-speed data streams. In Proc. 6th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (KDD), pages 71–80. ACM, 2000.
[12] P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Mach. Learn., 63(1):3–42, 2006.
[13] J. T. Goodman. A bit of progress in language modeling. Comput. Speech Lang., 15(4):403–434, 2001.
[14] B. Lakshminarayanan, D. M. Roy, and Y. W. Teh. Top-down particle filtering for Bayesian decision trees. In Proc. Int. Conf. Mach. Learn. (ICML), 2013.
[15] T. P. Minka. Bayesian model averaging is not model combination. MIT Media Lab note. http://research.microsoft.com/en-us/um/people/minka/papers/bma.html, 2000.
[16] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, 2011.
[17] J. Pitman. Combinatorial stochastic processes, volume 32. Springer, 2006.
[18] D. M. Roy. Computability, inference and modeling in probabilistic programming. PhD thesis, Massachusetts Institute of Technology, 2011. http://danroy.org/papers/Roy-PHD-2011.pdf.
[19] D. M. Roy and Y. W. Teh. The Mondrian process. In Adv. Neural Inform. Proc. Syst. (NIPS), volume 21, pages 27–36, 2009.
[20] A. Saffari, C. Leistner, J. Santner, M. Godec, and H. Bischof. On-line random forests. In Computer Vision Workshops (ICCV Workshops). IEEE, 2009.
[21] M. A. Taddy, R. B. Gramacy, and N. G. Polson. Dynamic trees for learning and design. J. Am. Stat. Assoc., 106(493):109–123, 2011.
[22] Y. W. Teh. A hierarchical Bayesian language model based on Pitman–Yor processes. In Proc. 21st Int. Conf. on Comp. Ling. and 44th Ann. Meeting Assoc. Comp. Ling., pages 985–992. Assoc. for Comp. Ling., 2006.
[23] P. E. Utgoff. Incremental induction of decision trees. Mach. Learn., 4(2):161–186, 1989.
[24] F. Wood, C. Archambeau, J. Gasthaus, L. James, and Y. W. Teh. A stochastic memoizer for sequence data. In Proc. Int. Conf. Mach. Learn. (ICML), 2009.
Parallel Sampling of HDPs using Sub-Cluster Splits
John W. Fisher III
CSAIL, MIT
fisher@csail.mit.edu
Jason Chang
CSAIL, MIT
jchang7@csail.mit.edu
Abstract
We develop a sampling technique for Hierarchical Dirichlet process models. The
parallel algorithm builds upon [1] by proposing large split and merge moves based
on learned sub-clusters. The additional global split and merge moves drastically
improve convergence in the experimental results. Furthermore, we discover that
cross-validation techniques do not adequately determine convergence, and that
previous sampling methods converge slower than were previously expected.
1 Introduction
Hierarchical Dirichlet Process (HDP) mixture models were first introduced by Teh et al. [2]. HDPs
extend the Dirichlet Process (DP) to model groups of data with shared cluster statistics. Since
their inception, HDPs and related models have been used in many statistical problems, including
document analysis [2], object categorization [3], and as a prior for hidden Markov models [4].
The success of HDPs has garnered much interest in inference algorithms. Variational techniques
[5, 6] are often used for their parallelization and speed, but lack the limiting guarantees of Markov
chain Monte Carlo (MCMC) methods. Unfortunately, MCMC algorithms tend to converge slowly.
In this work, we extend the recent DP Sub-Cluster algorithm [1] to HDPs to accelerate convergence by inferring "sub-clusters" in parallel and using them to propose large split moves.
Extensions to the HDP are complicated by the additional DP, which violates conjugacy assumptions
used in [1]. Furthermore, split/merge moves require computing the joint model likelihood, which,
prior to this work, was unknown in the common Direct Assignment HDP representation [2]. We
discover that significant overlap in cluster distributions necessitates new global split/merge moves
that change all clusters simultaneously. Our experiments on synthetic and real-world data validate
the improved convergence of the proposed method. Additionally, our analysis of joint summary
statistics suggests that other MCMC methods may converge prematurely in finite time.
2 Related Work
The seminal work of [2] introduced the Chinese Restaurant Franchise (CRF) and the Direct Assignment (DA) sampling algorithms for the HDP. Since then, many alternatives have been developed.
Because HDP inference often extends methods from DPs, we briefly discuss relevant work on both
models that focus on convergence and scalability. Current methods are summarized in Table 1.
Simple Gibbs sampling methods, such as CRF or DA, may converge slowly in complex models.
Works such as [11, 12, 13, 14] address this issue in DPs with split/merge moves. Wang and Blei [7]
developed the only split/merge MCMC method for HDPs by extending the Sequentially Allocated
Merge-Split (SAMS) algorithm of DPs developed in [13]. Unfortunately, reported results in [7]
only show a marginal improvement over Gibbs sampling. Our experiments suggest that this is likely
due to properties of the specific sampler, and that a different formulation significantly improves
convergence. Additionally, SAMS cannot be parallelized, and is therefore only tested on a corpus
with 263K words. By designing a parallel algorithm, we test on a corpus of 100M words.
Table 1: Capabilities of MCMC Sampling Algorithms for HDPs

                       CRF [2]  DA [2]  SAMS [7]  FSD [4]  Hog-Wild [8]  Super-Cluster [9]  Proposed
Infinite Model            ✓       ✓        ✓        ✗          ✓              ✓                ✓
MCMC Guarantees           ✓       ✓        ✓        ✓          ✗              ✓                ✓
Non-Conjugate Priors      ✗       ✗        ✗        ✓          ✗              ✗                ✓
Parallelizable            ✗       ✗        ✗        ✓          ✓              ✓                ✓
Local Splits/Merges       ✗       ✗        ✓        ✗          ✗              ✗                ✓
Global Splits/Merges      ✗       ✗        ✗        ✗          ✗              ✗                ✓

✗ potentially possible with some adaptation of the DP Metropolis-Hastings framework of [10].
There has also been work on parallel sampling algorithms for HDPs. Fox et al. [4] generalizes the
work of Ishwaran and Zarepour [15] by approximating the highest-level DP with a finite symmetric
Dirichlet (FSD). Iterations of this approximation can be parallelized, but fixing the model order is
undesirable since it no longer grows with the data. Furthermore, our experiments suggest that this algorithm exhibits poor convergence. Newman et al. [8] present an alternative parallel approximation
related to Hog-Wild Gibbs sampling [16, 17]. Each processor independently runs a Gibbs sampler
on its assigned data followed by a resynchronization step across all processors. This approximation
has shown to perform well on cross-validation metrics, but loses the limiting guarantees of MCMC.
Additionally, we will show that cross-validation metrics are not suitable to analyze convergence.
An exact parallel algorithm for DPs and HDPs was recently developed by Williamson et al. [9]
by grouping clusters into independent super-clusters. Unfortunately, the parallelization does not
scale well [18], and convergence is often impeded [1]. Regardless of exactness, all current parallel
sampling algorithms exhibit poor convergence due to their local nature, while split/merge proposals
are essentially ineffective and cannot be parallelized.
2.1 DP Sub-Clusters Algorithm
The recent DP Sub-Cluster algorithm [1] addresses these issues by combining non-ergodic Markov
chains into an ergodic chain and proposing splits from learned sub-clusters. We briefly review
relevant aspects of the DP Sub-Cluster algorithm here. MCMC algorithms typically satisfy two
conditions: detailed balance and ergodicity. Detailed balance ensures that the target distribution
is a stationary distribution of the chain, while ergodicity guarantees uniqueness of the stationary
distribution. The method of [1] combines a Gibbs sampler that is restricted to non-empty clusters
with a Metropolis-Hastings (MH) algorithm that proposes splits and merges. Since any Gibbs or
MH sampler satisfies detailed balance, the true posterior distribution is guaranteed to be a stationary
distribution of the chain. Furthermore, the combination of the two samplers enforces ergodicity and
guarantees the convergence to the stationary distribution.
The DP Sub-Cluster algorithm also augments the model with auxiliary variables that learn a two-component mixture model for each cluster. These "sub-clusters" are subsequently used to propose
splits that are learned over time instead of built in a single iteration like previous methods. In this
paper, we extend these techniques to HDPs. As we will show, considerable work is needed to address
the higher-level DP and the overlapping distributions that exist in topic modeling.
3 Hierarchical Dirichlet Processes
We begin with a brief review of the equivalent CRF and DA representations of the HDP [2] depicted in Figures 1a–1b. Due to the prolific use of HDPs in topic modeling, we refer to the variables with their topic modeling names. β is the corpus-level, global topic proportions, θ_k is the parameter for topic k, and x_ji is the i-th word in document j. Here, the CRF and DA representations depart. In the CRF, π̃_j is drawn from a stick-breaking process [19], and each "customer" (i.e., word) is assigned to a "table" through t_ji ∼ Categorical(π̃_j). The higher-level DP then assigns "dishes" (i.e., topics) to tables via k_jt ∼ Categorical(β). The association of customers to dishes through the tables is equivalent to assigning a word to a topic. In the CRF, multiple tables can be assigned the same dish. The DA formulation combines these multiple instances and directly assigns a word to a topic with z_ji. The resulting document-specific topic proportions, π_j, aggregate multiple π̃_j values.

(a) HDP CRF Model   (b) HDP DA Model   (c) HDP Augmented DA Model
Figure 1: Graphical models. (c) Hyper-parameters are omitted and auxiliary variables are dotted.
Figure 2: Visualization of augmented sample space.

For reasons which will be discussed, inference in the DA formulation still relies on some aspects of the CRF. We adopt the notation of [2], where the number of tables in restaurant j serving dish k is denoted m_jk, and the number of customers in restaurant j at table t eating dish k is n_jtk. Marginal counts are represented with dots, e.g., n_j·· ≜ Σ_{t,k} n_jtk and m_j· ≜ Σ_k m_jk represent the number of customers and dishes in restaurant j, respectively. We refer the reader to [2] for additional details.
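For concreteness, a truncated stick-breaking draw of global proportions can be sketched as follows (a minimal illustration with our naming; the final entry aggregates the remaining mass, matching the (K+1)-length convention used below):

```python
import numpy as np

def truncated_gem(gamma, K, rng):
    """First K stick-breaking weights plus the aggregated remainder."""
    sticks = rng.beta(1.0, gamma, size=K)                 # v_k ~ Beta(1, gamma)
    remain = np.concatenate(([1.0], np.cumprod(1.0 - sticks)))
    beta = sticks * remain[:-1]                           # beta_k = v_k * prod_{l<k}(1 - v_l)
    return np.concatenate((beta, [remain[-1]]))           # last entry: beta_{K+1}
```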
4 Restricted Parallel Sampling

We draw on the DP Sub-Cluster algorithm to combine a restricted, parallel Gibbs sampler with split/merge moves (as described in Section 2.1). The former is detailed here, and the latter is developed in Section 5. Because the restricted Gibbs sampler cannot create new topics, dimensions of the infinite vectors β, π, and θ associated with empty clusters need not be instantiated. Extending the DA sampling algorithm of [2] results in the following restricted posterior distributions:

p(β|m) = Dir(m_·1, . . . , m_·K, γ),   (1)
p(π_j|β, z) = Dir(αβ_1 + n_j·1, . . . , αβ_K + n_j·K, αβ_{K+1}),   (2)
p(θ_k|x, z) ∝ f_x(x_{I_k}; θ_k) f_θ(θ_k; λ),   (3)
p(z_ji|x, π_j, θ) ∝ Σ_{k=1}^{K} π_jk f_x(x_ji; θ_k) 1I[z_ji = k],   (4)
p(m_jk|β, z) = f_m(m_jk; αβ_k, n_j·k) ≜ [Γ(αβ_k) / Γ(αβ_k + n_j·k)] s(n_j·k, m_jk) (αβ_k)^{m_jk}.   (5)
Since p(β|z) is not known analytically, we use the auxiliary variable, m_jk, as derived by [2, 20]. Here, s(n, m) denotes unsigned Stirling numbers of the first kind. We note that β and π are now (K + 1)-length vectors partitioning the space, where the last components, β_{K+1} and π_{j(K+1)}, aggregate the weight of all empty topics. Additionally, I_k ≜ {j, i : z_ji = k} denotes the set of indices in topic k, and f_x and f_θ denote the observation and prior distributions. We note that if f_θ is conjugate to f_x, Equation (3) stays in the same family of parametric distributions as f_θ(θ; λ).

Equations (1–5), each of which can be sampled in parallel, fully specify the restricted Gibbs sampler. The astute reader may notice similarities with the FSD approximation used in [4]. The main differences are that the β distribution in Equation (1) is exact, and that sampling z in Equation (4) is explicitly restricted to non-empty clusters. Unlike [4], however, this sampler is guaranteed to converge to the true HDP model when combined with any split move (cf. Section 2.1).
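Equation (5) is the distribution of the number of tables in a Chinese restaurant process (an Antoniak distribution), which admits a standard constructive sampler: seat the n_j·k customers one at a time and count how many start a new table. A minimal sketch (our naming):

```python
import numpy as np

def sample_m_jk(alpha_beta_k: float, n_jk: int, rng: np.random.Generator) -> int:
    """Draw m_jk: customer i (0-indexed) opens a new table w.p.
    alpha*beta_k / (alpha*beta_k + i)."""
    i = np.arange(n_jk)
    return int((rng.random(n_jk) < alpha_beta_k / (alpha_beta_k + i)).sum())
```

This runs in O(n_j·k) time and parallelizes trivially over (j, k) pairs, which is one way the restricted sweep can exploit multiple processors.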
5 Augmented Sub-Cluster Space for Splits and Merges
In this section we develop the augmented, sub-cluster model, which is aimed at finding a two-component mixture model containing a likely split of the data. As demonstrated in [1], these splits perform well in DPs because they improve at every iteration of the algorithm. Unfortunately, because these splits perform poorly in HDPs, we modify the formulation to propose more flexible moves.
For each topic, k, we fit two sub-topics, kℓ and kr, referred to as the "left" and "right" sub-topics. Each topic is augmented with auxiliary global sub-topic proportions, β̄_k = {β̄_kℓ, β̄_kr}, document-level sub-topic proportions, π̄_jk = {π̄_jkℓ, π̄_jkr}, and sub-topic parameters, θ̄_k = {θ̄_kℓ, θ̄_kr}. Furthermore, a sub-topic assignment, z̄_ji ∈ {ℓ, r}, is associated with each word, x_ji. The augmented space is summarized in Figure 1c and visualized in Figure 2. These auxiliary variables are denoted with the same symbol as their "regular-topic" counterparts to allude to their similarities. Extending the work of [1], we adopt the following auxiliary generative and marginal posterior distributions:
Generative Distributions:
p(β̄_k) = Dir(γ, γ),
p(π̄_jk | β̄_k) = Dir(αβ̄_kℓ, αβ̄_kr),
p(θ̄_k | π̄, z, x) = ∏_{h∈{ℓ,r}} f_θ(θ̄_kh; λ) · ∏_{j,i∈I_k} Z̄_ji(π̄, θ̄, z, x),
p(z̄ | π̄, θ̄, z, x) = ∏_{k=1}^{K} ∏_{j,i∈I_k} π̄_{jk z̄_ji} f_x(x_ji; θ̄_{k z̄_ji}) / Z̄_ji(π̄, θ̄, z, x),
where Z̄_ji(π̄, θ̄, z, x) ≜ Σ_{h∈{ℓ,r}} π̄_{j z_ji h} f_x(x_ji; θ̄_{z_ji h}).

Marginal Posterior Distributions:
p(β̄_k | ⋆) = Dir(γ + m̄_·kℓ, γ + m̄_·kr),   (6)
p(π̄_jk | ⋆) = Dir(αβ̄_kℓ + n̄_j·kℓ, αβ̄_kr + n̄_j·kr),   (7)
p(θ̄_kh | ⋆) ∝ f_x(x_{Ī_kh}; θ̄_kh) f_θ(θ̄_kh; λ),   (8)
p(z̄_ji | ⋆) ∝ π̄_{j z_ji z̄_ji} f_x(x_ji; θ̄_{z_ji z̄_ji}),   (9)
p(m̄_jkh | ⋆) = f_m(m̄_jkh; αβ̄_kh, n̄_j·kh),   (10)

where ⋆ denotes all other variables. Full derivations are given in the supplement. Notice the similarity between these posterior distributions and Equations (1–5). Inference is performed by interleaving the sampling of Equations (1–5) with Equations (6–10). Furthermore, each step can be parallelized.
5.1 Sub-Topic Split/Merge Proposals
We adopt a Metropolis-Hastings (MH) [21] framework that proposes a split/merge from the sub-topics and either accepts or rejects it. Denoting v ≜ {β, π, z, θ} and v̄ ≜ {β̄, π̄, z̄, θ̄} as the sets of regular and auxiliary variables, a sampled proposal, {v̂, v̄̂} ∼ q(v̂, v̄̂|v), is accepted with probability

Pr[{v, v̄} = {v̂, v̄̂}] = min[1, H],   H ≜ [p(x, v̂) p(v̄̂|x, v̂) q(v|x, v̂) q(v̄|x, v, v̂)] / [p(x, v) p(v̄|x, v) q(v̂|x, v) q(v̄̂|x, v, v̂)].   (11)
H is known as the Hastings ratio. Algorithm 1 outlines a general split/merge MH framework, where steps 1–2 propose a sample from q(v̂|x, v) q(v̄̂|x, v, v̂). Sampling the variables other than ẑ is detailed here, after which we discuss three versions of Algorithm 1 with variants on sampling ẑ.
Algorithm 1 Split-Merge Framework
1. Propose assignments, ẑ, global proportions, β̂, document proportions, π̂, and parameters, θ̂.
2. Defer the proposal of auxiliary variables to the restricted sampling of Equations (1–10).
3. Accept/reject the proposal with the Hastings ratio.
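Step 3 is a standard MH accept/reject; in practice the ratio is accumulated in log space for numerical stability. A minimal sketch (our naming):

```python
import numpy as np

def mh_accept(log_hastings: float, rng: np.random.Generator) -> bool:
    """Accept a proposal with probability min(1, H) given log H (Equation (11))."""
    return np.log(rng.random()) < min(0.0, log_hastings)
```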
(Step 1: β̂): In Metropolis-Hastings, convergence typically improves as the proposal distribution is closer to the target distribution. Thus, it would be ideal to propose β̂ from p(β|ẑ). Unfortunately, p(β|z) cannot be expressed analytically without conditioning on the dish counts, m_·k, as in Equation (1). Since the distribution of dish counts depends on β itself, we approximate its value with

m̃_jk(z) ≜ argmax_m p(m|β = 1/K, z) = argmax_m [Γ(1/K) / Γ(1/K + n_j·k)] s(n_j·k, m) (1/K)^m,   (12)

where the global topic proportions have essentially been substituted with 1/K. We note that the dependence on z is implied through the counts, n. We then propose global topic proportions from

β̂ ∼ q(β̂|ẑ) = p(β̂ | m̃(ẑ)) = Dir(m̃_·1(ẑ), · · · , m̃_·K(ẑ), γ).   (13)

We will denote m̃_jk ≜ m̃_jk(z) and m̂_jk ≜ m̃_jk(ẑ). We emphasize that the approximate m̂_jk is only used for a proposal distribution, and the resulting chain will still satisfy detailed balance.
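Since the Gamma terms in Equation (12) do not depend on m, m̃_jk maximizes s(n_j·k, m) a^m with a = 1/K. The unsigned Stirling numbers obey s(i, m) = (i − 1) s(i − 1, m) + s(i − 1, m − 1), which can be evaluated stably in log space; a sketch of the maximization (our naming; the O(n²) recursion would be cached or approximated for large counts):

```python
import numpy as np

def mtilde(n: int, a: float) -> int:
    """argmax_m of s(n, m) * a**m, the m-dependent part of Equation (12)."""
    if n == 0:
        return 0
    log_s = np.full(n + 1, -np.inf)
    log_s[0] = 0.0                                   # row i = 0: s(0, 0) = 1
    for i in range(1, n + 1):                        # build row i from row i - 1
        new = np.full(n + 1, -np.inf)
        for m in range(1, i + 1):
            grow = np.log(i - 1) + log_s[m] if i > 1 else -np.inf
            new[m] = np.logaddexp(grow, log_s[m - 1])
        log_s = new
    return int(np.argmax(log_s + np.arange(n + 1) * np.log(a)))
```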
(Step 1: π̂): Conditioned on β and z, the distribution of π is known to be Dirichlet. Thus, we propose π̂ ∼ p(π̂|β̂, ẑ) by sampling directly from the true posterior distribution of Equation (2).
(Step 1: θ̂): If f_θ is conjugate to f_x, we sample θ̂ directly from the posterior of Equation (3). In non-conjugate models, any proposal can be used while adjusting for it in the Hastings ratio.
(Step 2): We use the Deferred MH sampler developed in [1], which sets q(v̄̂|x, v̂) = p(v̄̂|x, v̂) by deferring the sampling of auxiliary variables to the restricted sampler of Section 5. Splits and merges are then only proposed for topics where auxiliary variables have already burned in. In practice burn-in is quite fast, and is determined by monitoring the sub-topic data likelihoods.
(Step 3): Finally, the above proposals result in the following Hastings ratio:

H = [p(β̂, ẑ) p(x|ẑ) / (p(β, z) p(x|z))] · [q(z|v̂, v̄̂) q(β|z) / (q(ẑ|v, v̄) q(β̂|ẑ))].   (14)

The data likelihood, p(x|z), is known analytically, and q(β|z) can be calculated according to Equation (13). The prior distribution, p(β, z), is expressed in the following proposition:
Proposition 5.1. Let z be a set of topic assignments with integer values in {1, . . . , K}. Let β be a (K + 1)-length vector representing global topic weights, and β_{K+1} be the sum of weights associated with empty topics. The prior distribution, p(β, z), marginalizing over π, can be expressed as

p(β, z) = [γ^K β_{K+1}^{γ−1} ∏_{k=1}^{K} β_k^{−1}] · ∏_{j=1}^{D} [Γ(α) / Γ(α + n_j··) ∏_{k=1}^{K} Γ(αβ_k + n_j·k) / Γ(αβ_k)].   (15)

Proof. See supplemental material.
The remaining term in Equation (14), q(ẑ|v, v̄), is the probability of proposing a particular split. In the following sections, we describe three possible split constructions using the sub-clusters. Since the other steps remain the same, we only discuss the proposal distributions for ẑ and β̂.
5.1.1 Deterministic Split/Merge Proposals
The method of [1] constructs a split deterministically by copying the sub-cluster labels for a single cluster. We refer to this proposal as a local split, which only changes assignments within one
topic, as opposed to a global split (discussed shortly), which changes all topic assignments. A local
deterministic split will essentially be accepted if the joint likelihood increases. Unfortunately, as
we show in the supplement, samples from the typical set of an HDP do not have high likelihood.
Deterministic split and merge proposals are, consequently, very rarely accepted. We now suggest
two alternative pairs of split and merge proposals, each with their own benefits and drawbacks.
5.1.2 Local Split/Merge Proposals
Here, we depart from the approach of [1] by sampling a local split of topic a into topics b and c. Temporary parameters, {β̂_b, β̂_c, θ̂_b, θ̂_c}, and topic assignments, ẑ, are sampled according to

(β̂_b, β̂_c) = β_a · (β̄_aℓ, β̄_ar),  (θ̂_b, θ̂_c) = (θ̄_aℓ, θ̄_ar)
⟹ q(ẑ|v, v̄) ∝ ∏_{j,i∈I_a} Σ_{k∈{b,c}} π̂_jk f_x(x_ji; θ̂_k) 1I[ẑ_ji = k].   (16)

We note that a sample from q(ẑ|v, v̄) is already drawn from the restricted Gibbs sampler described in Equation (9). Therefore, no additional computation is needed to propose the split. If the split is rejected, the ẑ is simply used as the next sample of the auxiliary z̄ for cluster a.
A β̂ is then drawn by splitting β_a into β̂_b and β̂_c according to a local version of Equation (13):

q(β̂_b, β̂_c | ẑ, β_a) = Dir(β̂_b/β_a, β̂_c/β_a; m̂_·b, m̂_·c).   (17)
The corresponding merge move combines topics b and c into topic a by deterministically performing

q(ẑ_ji|v) = 1I[ẑ_ji = a], ∀ j, i ∈ I_b ∪ I_c,   q(β̂_a|v) = δ(β̂_a − (β_b + β_c)).   (18)
This results in the following Hastings ratio for a local split (derivation in supplement):

H = γ · (Q^M_{K+1} / Q^S_K) · [Γ(m̂_·b) Γ(m̂_·c) / Γ(m̂_·b + m̂_·c)] · [β_a^{m̂_·b + m̂_·c} / (β̂_b^{m̂_·b} β̂_c^{m̂_·c})] · ∏_j [Γ(αβ_a) / Γ(αβ_a + n_j·a) · ∏_{k∈{b,c}} Γ(αβ̂_k + n̂_j·k) / Γ(αβ̂_k)] · [p(x|ẑ) / (p(x|z) q(ẑ|v, v̄))],   (19)
where Q^S_K and Q^M_K are the probabilities of selecting a specific split or merge with K topics. We record q(ẑ|v, v̄) when sampling from Equation (9), and all other terms are computed via sufficient statistics. We set Q^S_K = 1 by proposing all splits at each iteration. Q^M_K will be discussed shortly.
The Hastings ratio for a merge is essentially the reciprocal of Equation (19). However, the reverse split move, q(z|v̂, v̄̂), relies on the sub-topic parameters of the proposed merged topic, which are not readily available due to the Deferred MH algorithm. Instead, we approximate the Hastings ratio by substituting the two original topic parameters, θ_b and θ_c, for the proposed sub-topics. The quality of this approximation rests on the similarity between the regular-topics and the sub-topics. Generating the reverse move that splits topic a into b and c can then be approximated as

q(z|v̂, v̄̂) ≈ ∏_{j,i∈I_b∪I_c} [β_{z_ji} f_x(x_ji; θ_{z_ji})] / [β_b f_x(x_ji; θ_b) + β_c f_x(x_ji; θ_c)] = (L_bb L_cc) / (L_bc L_cb),   (20)

L_kk ≜ ∏_{j,i∈I_k} β_k f_x(x_ji; θ_k),   L_kl ≜ ∏_{j,i∈I_k} [β_k f_x(x_ji; θ_k) + β_l f_x(x_ji; θ_l)].   (21)

All of the terms in Equation (20) are already calculated in the restricted Gibbs steps. When aggregated correctly in the K × K matrix, L, the Hastings ratio for any proposed merge is evaluated in constant time. However, if topics b and c are merged into a, further merging a with another cluster cannot be efficiently computed without looping through the data. We therefore only propose ⌊K/2⌋ merges by generating a random permutation of the integers [1, K], and proposing to merge disjoint neighbors. For example, if the random permutation for K = 7 is {3 1 7 4 2 6 5}, we propose to merge topics 3 and 1, topics 7 and 4, and topics 2 and 6. This results in Q^M_K = 2⌊K/2⌋/(K(K − 1)).
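A sketch of this pairing scheme (our naming):

```python
import numpy as np

def propose_merge_pairs(K: int, rng: np.random.Generator):
    """Random permutation of the K topics, paired into disjoint
    neighbors: floor(K/2) merge candidates per iteration."""
    perm = rng.permutation(K)
    return [(int(perm[2 * i]), int(perm[2 * i + 1])) for i in range(K // 2)]
```

Because each topic appears in at most one proposed pair, the cached entries of the L matrix remain valid for every proposal in the batch.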
5.1.3 Global Split/Merge Proposals
In many applications where clusters have significant overlap (e.g., topic modeling), local splits may be too constrained since only points within a single topic change. We now develop a global split and merge move, which reassigns the data in all topics. A global split first constructs temporary topic proportions, β̂, and parameters, θ̂, followed by proposing topic assignments for all words with:

(β̂_b, β̂_c) = β_a · (β̄_aℓ, β̄_ar),  β̂_k = β_k ∀k ≠ a,
(θ̂_b, θ̂_c) = (θ̄_aℓ, θ̄_ar),  θ̂_k = θ_k ∀k ≠ a
⟹ q(ẑ|v, v̄) = ∏_{j,i} [β̂_{ẑ_ji} f_x(x_ji; θ̂_{ẑ_ji})] / [Σ_k β̂_k f_x(x_ji; θ̂_k)].   (22)
Similarly, the corresponding merge move is constructed according to

β̂_a = β_b + β_c,  β̂_k = β_k ∀k ≠ b, c,
θ̂_a ∼ q(θ̂_a|z, x),  θ̂_k = θ_k ∀k ≠ b, c
⟹ q(ẑ|v, v̄) = ∏_{j,i} [β̂_{ẑ_ji} f_x(x_ji; θ̂_{ẑ_ji})] / [Σ_k β̂_k f_x(x_ji; θ̂_k)].   (23)
The proposal for θ̂_a is written in a general form; if priors are conjugate, one should propose directly from the posterior. After Equations (22)–(23), β̂ is sampled via Equation (13). All remaining steps follow Algorithm 1. The resulting Hastings ratio for a global split (see supplement) is expressed as

H = γ · (Q^M_{K+1} / Q^S_K) · [Γ(γ + m̃_··) / Γ(γ + m̂_··)] · ∏_{k=1}^{K+1} [Γ(m̂_·k) / β̂_k^{m̂_·k}] · ∏_{k=1}^{K} [β_k^{m̃_·k} / Γ(m̃_·k)] · ∏_{j=1}^{D} ∏_{k=1}^{K+1} [Γ(αβ̂_k + n̂_j·k) / Γ(αβ̂_k)] · ∏_{j=1}^{D} ∏_{k=1}^{K} [Γ(αβ_k) / Γ(αβ_k + n_j·k)] · [p(x|ẑ) q(z|v̂, v̄̂) / (p(x|z) q(ẑ|v, v̄))].   (24)
Similar to local merges, the Hastings ratio for a global merge depends on the proposed sub-topic parameters. We approximate these with the main-topic parameters prior to the merge.
Unlike the local split/merge proposals, proposing ẑ requires significant computation by looping through all data points. As such, we only propose a single global split and merge each iteration. Thus, Q^S_K = 1/K and Q^M_K = 2/(K(K − 1)). We emphasize that the developed global moves are very different from previous local split/merge moves in DPs and HDPs (e.g., [1, 7, 11, 13, 14]). We conjecture that this is the reason the split/merge moves in [7] only made negligible improvement.
6 Experiments
We now test the proposed HDP Sub-Clusters method on topic modeling. The algorithm is summarized in the following steps: (1) initialize β and z randomly; (2) sample π, θ, π̄, and θ̄ via Equations (2, 3, 7, 8); (3) sample z and z̄ via Equations (4, 9); (4) propose ⌊K/2⌋ local merges followed by K local splits; (5) propose a global merge followed by a global split; (6) sample m and m̄ via Equations (5, 10); (7) sample β and β̄ via Equations (1, 6); (8) repeat from Step 2 until convergence. We fix the hyper-parameters, but resampling techniques [2] can easily be incorporated. All results are averaged over 10 sample paths. Source code can be downloaded from http://people.csail.mit.edu/jchang7.
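Step (3) dominates the per-iteration cost and is embarrassingly parallel across words. A vectorized sketch of the restricted draw of Equation (4) (our naming, assuming the per-word log-likelihoods and log proportions are precomputed):

```python
import numpy as np

def sample_assignments(log_fx: np.ndarray, log_pi: np.ndarray,
                       rng: np.random.Generator) -> np.ndarray:
    """Restricted Gibbs draw of z. log_fx[n, k] = log f_x(x_n; theta_k);
    log_pi[n, k] = log pi_{jk} for the document containing word n."""
    logp = log_fx + log_pi
    logp -= logp.max(axis=1, keepdims=True)      # stabilize before exponentiating
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    cdf = np.cumsum(p, axis=1)
    cdf[:, -1] = 1.0                             # guard against round-off
    u = rng.random((logp.shape[0], 1))
    return (u > cdf).sum(axis=1)                 # inverse-CDF sample per row
```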
Figure 3: Synthetic "bars" example. (a) Visualizing topic word distributions without splits/merges for K = 5. (b)–(c) Number of inferred topics for different split/merge proposals and parallelizations. (d) Comparing sampling algorithms with a single processor and initialized to a single topic.
Figure 4: Results on AP. (a) 1, 25, 50, and 75 initial topics. (b) Switching algorithms at 1000 secs.
6.1 Synthetic Bars Dataset
We synthesized 200 documents from the "bars" example of [22] with a dictionary of 25 words that can be arranged in a 5x5 grid. Each of the 10 true topics forms a horizontal or vertical bar. To visualize the sub-topics, we initialize to 5 topics and do not propose splits or merges. The resulting regular- and sub-topics are shown in Figure 3a. Notice how the sub-topics capture likely splits.
Next, we consider different split/merge proposals in Figure 3b. The "Combined" algorithm uses local and global moves. The deterministic moves are often rejected, resulting in slow convergence. While global moves are not needed in such a well-separated dataset, we have observed that they make a significant impact in real-world datasets. Furthermore, since every step of the sampling algorithm can be parallelized, we achieve a linear speedup in the number of processors, as shown in Figure 3c.
Figure 3d compares convergence without parallelization to the Direct Assignment (DA) sampler and
the Finite Symmetric Dirichlet (FSD) of order 20. Since all algorithms should sample from the same
model, the goal here is to analyze convergence speed. We plot two summary statistics: the likelihood
of a single held-out word (HOW) from each document, and the number of inferred topics. While
the HOW likelihood for FSD converges at 1 second, the number of topics converges at 100 seconds.
This suggests that cross-validation techniques, which evaluate model fit, cannot solely determine
MCMC convergence. We note that FSD tends to first create all L topics and slowly remove them.
6.2 Real-World Corpora Datasets
Next, we consider the Associated Press (AP) dataset [23] with 436K words in 2K documents. We
manually set the FSD order to 100. Results using 16 cores (except DA, which cannot be parallelized)
with 1, 25, 50, and 75 initial topics are shown in Figure 4a. All samplers should converge to the
same statistics regardless of the initialization. While HOW likelihood converges for 3/4 FSD initializations, the number of topics indicates that no DA or FSD sample paths have converged. Unlike
the well-separated, synthetic dataset, the Sub-Clusters method that only uses local splits and merges
does not converge to a good solution here. In contrast, all initializations of the Sub-Clusters method
have converged to a high HOW likelihood with only approximately 20 topics. The path taken by
each sampler in the joint HOW likelihood / number of topics space is shown in the right panel of
Figure 4a. This visualization helps to illustrate the different approaches taken by each algorithm.
Figure 5: (a) Confusion matrices on AP for Sub-Clusters, DA, and FSD (left to right). Outlines are overlaid to compare size. (b) Four inferred topics from the NYTimes articles.
Figure 6: Results on (a) Enron emails and (b) NYTimes articles for 1 and 50 initial topics.

Figure 5a shows confusion matrices, C, of the inferred topics. Each element of C is defined as C_{r,c} = Σ_x f_x(x; θ_r) log f_x(x; θ_c), and captures the likelihood of a random word from topic r evaluated under topic c. DA and FSD both converge to many topics that are easily confused, whereas
the Sub-Clusters method converges to a smaller set of more distinguishable topics.
Rigorous proofs about convergence are quite difficult. Furthermore, even though the approximations
made in calculating the Hastings ratios for local and global splits (e.g., Equation (20)) are backed by
intuition, they complicate the analysis. Instead, we run each sample path for 2,000 seconds. After
1,000 seconds, we switch the Sub-Clusters sample paths to FSD and all other sample paths to SubClusters. Markov chains that have converged should not change when switching the sampler. Figure
4b shows that switching from DA, FSD, or the local version of Sub-Clusters immediately changes
the number of topics, but switching Sub-Clusters to FSD has no effect. We believe that the number
of topics is slightly higher in the former because the Sub-Cluster method struggles to create small
topics. By construction, the splits make large moves, in contrast to DA and FSD, which often create
single word topics. This suggests that alternating between FSD and Sub-Clusters may work well.
Finally, we consider two large datasets from [24]: Enron Emails with 6M words in 40K documents
and NYTimes Articles with 100M words in 300K documents. We note that the NYTimes dataset is
3 orders of magnitude larger than those considered in the HDP split/merge work of [7]. Again, we
manually set the FSD order to 200. Results are shown in Figure 6 initialized to 1 and 50 topics. In
such large datasets, it is difficult to predict convergence times; after 28 hours, it seems as though no
algorithms have converged. However, the Sub-Clusters method seems to be approaching a solution,
whereas FSD has yet to prune topics and DA has yet to achieve a good cross-validation score.
Four inferred topics using the Sub-Clusters method on the NYTimes dataset are visualized in Figure
5b. These words seem to describe plausible topics (e.g., music, terrorism, basketball, and wine).
7 Conclusion
We have developed a new parallel sampling algorithm for the HDP that proposes split and merge
moves. Unlike previous attempts, the proposed global splits and merges exhibit significantly improved convergence in a variety of datasets. We have also shown that cross-validation metrics in
isolation can lead to the erroneous conclusion that an MCMC sampling algorithm has converged.
By considering the number of topics and held-out likelihood jointly, we show that previous sampling
algorithms converge very slowly.
Acknowledgments
This research was partially supported by the Office of Naval Research Multidisciplinary Research
Initiative program, award N000141110688 and by VITALITE, which receives support from Army
Research Office Multidisciplinary Research Initiative program, award W911NF-11-1-0391.
References
[1] J. Chang and J. W. Fisher, III. Parallel sampling of DP mixture models using sub-clusters splits. In Advances in Neural Information Processing Systems, Dec 2013.
[2] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[3] E. B. Sudderth. Graphical Models for Visual Object Recognition and Tracking. PhD thesis, Massachusetts Institute of Technology, 2006.
[4] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. An HDP-HMM for systems with state persistence. In International Conference on Machine Learning, July 2008.
[5] Y. W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. In Advances in Neural Information Processing Systems, volume 20, 2008.
[6] M. Bryant and E. Sudderth. Truly nonparametric online variational inference for Hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, 2012.
[7] C. Wang and D. Blei. A split-merge MCMC algorithm for the Hierarchical Dirichlet process. arXiv:1207.1657 [stat.ML], 2012.
[8] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed algorithms for topic models. Journal of Machine Learning Research, 10:1801-1828, December 2009.
[9] S. Williamson, A. Dubey, and E. P. Xing. Parallel Markov chain Monte Carlo for nonparametric mixture models. In International Conference on Machine Learning, 2013.
[10] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, June 2000.
[11] S. Jain and R. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13:158-182, 2004.
[12] P. J. Green and S. Richardson. Modelling heterogeneity with and without the Dirichlet process. Scandinavian Journal of Statistics, pages 355-375, 2001.
[13] D. B. Dahl. An improved merge-split sampler for conjugate Dirichlet process mixture models. Technical report, University of Wisconsin - Madison Dept. of Statistics, 2003.
[14] S. Jain and R. Neal. Splitting and merging components of a nonconjugate Dirichlet process mixture model. Bayesian Analysis, 2(3):445-472, 2007.
[15] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the Dirichlet process. Canadian Journal of Statistics, 30:269-283, 2002.
[16] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, 2011.
[17] M. J. Johnson, J. Saunderson, and A. S. Willsky. Analyzing Hogwild parallel Gaussian Gibbs sampling. In Advances in Neural Information Processing Systems, 2013.
[18] Y. Gal and Z. Ghahramani. Pitfalls in the use of parallel inference for the Dirichlet process. In Workshop on Big Learning, NIPS, 2013.
[19] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, pages 639-650, 1994.
[20] C. E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics, 2(6):1152-1174, 1974.
[21] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97-109, 1970.
[22] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235, April 2004.
[23] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, March 2003.
[24] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
4,678 | 5,236 | Localized Data Fusion for Kernel k-Means Clustering
with Application to Cancer Biology
Adam A. Margolin
margolin@ohsu.edu
Department of Biomedical Engineering
Oregon Health & Science University
Portland, OR 97239, USA
Mehmet Gönen
gonen@ohsu.edu
Department of Biomedical Engineering
Oregon Health & Science University
Portland, OR 97239, USA
Abstract
In many modern applications from, for example, bioinformatics and computer vision, samples have multiple feature representations coming from different data
sources. Multiview learning algorithms try to exploit all of this available information to obtain a better learner in such scenarios. In this paper, we propose a novel
multiple kernel learning algorithm that extends kernel k-means clustering to the
multiview setting, which combines kernels calculated on the views in a localized
way to better capture sample-specific characteristics of the data. We demonstrate
the better performance of our localized data fusion approach on a human colon
and rectal cancer data set by clustering patients. Our method finds more relevant
prognostic patient groups than global data fusion methods when we evaluate the
results with respect to three commonly used clinical biomarkers.
1 Introduction
Clustering algorithms aim to find a meaningful grouping of the samples at hand in an unsupervised
manner for exploratory data analysis. k-means clustering is one of the classical algorithms (Hartigan, 1975), which uses k prototype vectors (i.e., centers or centroids of k clusters) to characterize
the data and minimizes a sum-of-squares cost function to find these prototypes with a coordinate
descent optimization method. However, the final cluster structure heavily depends on the initialization because the optimization scheme of k-means clustering is prone to local minima. Fortunately,
the sum-of-squares minimization can be formulated as a trace maximization problem, which can
not be solved easily due to binary decision variables used to denote cluster memberships, but this
hard optimization problem can be reduced to an eigenvalue decomposition problem by relaxing the
constraints (Zha et al., 2001; Ding and He, 2004). In such a case, the overall clustering algorithm can be
formulated in two steps: (i) performing principal component analysis (PCA) (Pearson, 1901) on the
covariance matrix and (ii) recovering the cluster membership matrix using the k eigenvectors that correspond to the k largest eigenvalues. Similar to many other learning algorithms, k-means clustering
is also extended towards a nonlinear version with the help of kernel functions, which is called kernel
k-means clustering (Girolami, 2002). The kernelized variant can also be optimized with a spectral
relaxation approach using kernel PCA (KPCA) (Schölkopf et al., 1998) instead of canonical PCA.
In many modern applications, samples have multiple feature representations (i.e., views) coming
from different data sources. Instead of using only one of the views, it is better to use all available information and let the learning algorithm decide how to combine these data sources, which is known
as multiview learning. There are three main categories for the combination strategy (Noble, 2004):
(i) combination at the feature level by concatenating the views (i.e., early integration), (ii) combination at the decision level by concatenating the outputs of learners trained on each view separately
(i.e., late integration), and (iii) combination at the learning level by trying to find a unified distance,
kernel, or latent matrix using all views simultaneously (i.e., intermediate integration).
1.1 Related work
When we have multiple views for clustering, we can simply concatenate the views and train a standard clustering algorithm on the concatenated view, which is known as early integration. However,
this approach does not assign weights to the views, and the view with the highest number of features
might dominate the clustering step due to the unsupervised nature of the problem.
Late integration algorithms obtain a clustering on each view separately and combine these clustering
results using an ensemble learning scheme. Such clustering algorithms are also known as cluster
ensembles (Strehl and Ghosh, 2002). However, they do not exploit the dependencies between the
views during clustering, and these dependencies might already be lost if we combine only clustering
results in the second step.
Intermediate integration algorithms combine the views in a single learning scheme to collectively
find a unified clustering. Chaudhuri et al. (2009) propose to extract a unifying feature representation
from the views by performing canonical correlation analysis (CCA) (Hotelling, 1936) and to train
a clustering algorithm on this common representation. Similarly, Blaschko and Lampert (2008) extract a common feature representation but with a nonlinear projection step using kernel CCA (Lai
and Fyfe, 2000) and then perform clustering. Such CCA-based algorithms assume that all views are
informative, and if there are some noisy views, this can degrade the clustering performance drastically. Lange and Buhmann (2006) propose to optimize the weights of a convex combination of
view-specific similarity measures within a nonnegative matrix factorization framework and to assign samples to clusters using the latent matrices obtained in the factorization step. Valizadegan and
Jin (2007) extend the maximum margin clustering formulation of Xu et al. (2004) to perform kernel combination and clustering jointly by formulating a semidefinite programming (SDP) problem.
Chen et al. (2007) further improve this idea by formulating a quadratically constrained quadratic
programming problem instead of an SDP problem. Tang et al. (2009) convert the views into graphs
by placing samples into vertices and creating edges using the similarity values between samples
in each view, and then factorize these graphs jointly with a shared factor common to all graphs,
which is used for clustering at the end. Kumar et al. (2011) propose a co-regularization strategy
for multiview spectral clustering by enforcing agreement between the similarity matrices calculated
on the latent representations obtained from the spectral decomposition of each view. Huang et al.
(2012) formulate another multiview spectral clustering method that finds a weighted combination
of the affinity matrices calculated on the views. Yu et al. (2012) develop a multiple kernel k-means
clustering algorithm that optimizes the weights in a conic sum of kernels calculated on the views.
However, their formulation uses the same kernel weights for all of the samples.
Multiview clustering algorithms have attracted great interest in cancer biology due to the availability
of multiple genomic characterizations of cancer patients. Yuan et al. (2011) formulate a patient-specific data fusion algorithm that uses a nonparametric Bayesian model coupled with a Markov
chain Monte Carlo inference scheme, which can combine only two views and is computationally
very demanding due to the high dimensionality of genomic data. Shen et al. (2012) and Mo et al.
(2013) find a shared latent subspace across genomic views and cluster cancer patients using their
representations in this subspace. Wang et al. (2014) construct patient networks from patient-patient
similarity matrices calculated on the views, combine these into a single unified network using a
network fusion approach, and then perform clustering on the final patient network.
1.2 Our contributions
Intermediate integration using kernel matrices is also known as multiple kernel learning (MKL)
(Gönen and Alpaydın, 2011). Most of the existing MKL algorithms use the same kernel weights
for all samples, which may not be a good idea due to sample-specific characteristics of the data or
measurement noise present in some of the views. In this work, we study kernel k-means clustering under the multiview setting and propose a novel MKL algorithm that combines kernels with
sample-specific weights to obtain a better clustering. We demonstrate the better performance of our
algorithm on the human colon and rectal cancer data set provided by TCGA consortium (The Cancer
Genome Atlas Network, 2012), where we use three genomic characterizations of the patients (i.e.,
DNA copy number, mRNA gene expression, and DNA methylation) for clustering. Our localized
data fusion approach obtains more relevant prognostic patient groups than global fusion approaches
when we evaluate the results with respect to three commonly used clinical biomarkers (i.e., microsatellite instability, hypermutation, and mutation in BRAF gene) of colon and rectal cancer.
2 Kernel k-means clustering
We first review kernel k-means clustering (Girolami, 2002) before extending it to the multiview
setting. Given $n$ independent and identically distributed samples $\{x_i \in \mathcal{X}\}_{i=1}^{n}$, we assume that there is a function $\phi(\cdot)$ that maps the samples into a feature space, in which we try to minimize a sum-of-squares cost function over the cluster assignment variables $\{z_{ic}\}_{i=1,c=1}^{n,k}$. The optimization problem (OPT1) defines kernel k-means clustering as a binary integer programming problem, where $n_c$ is the number of samples assigned to cluster $c$, and $\mu_c$ is the centroid of cluster $c$.
minimize   $\sum_{i=1}^{n} \sum_{c=1}^{k} z_{ic} \, \|\phi(x_i) - \mu_c\|_2^2$
with respect to   $z_{ic} \in \{0, 1\} \quad \forall (i, c)$
subject to   $\sum_{c=1}^{k} z_{ic} = 1 \quad \forall i$
where   $n_c = \sum_{i=1}^{n} z_{ic} \;\; \forall c$, $\quad \mu_c = \frac{1}{n_c} \sum_{i=1}^{n} z_{ic} \, \phi(x_i) \;\; \forall c$
(OPT1)
We can convert this optimization problem into an equivalent matrix-vector form problem as follows:
minimize   $\mathrm{tr}\big((\Phi - M)^{\top} (\Phi - M)\big)$
with respect to   $Z \in \{0, 1\}^{n \times k}$
subject to   $Z \mathbf{1}_k = \mathbf{1}_n$
where   $\Phi = [\phi(x_1) \;\; \phi(x_2) \;\; \ldots \;\; \phi(x_n)]$, $\;\; M = \Phi Z L Z^{\top}$, $\;\; L = \mathrm{diag}(n_1^{-1}, n_2^{-1}, \ldots, n_k^{-1})$
(OPT2)
Using that $\Phi^{\top} \Phi = K$, $\mathrm{tr}(AB) = \mathrm{tr}(BA)$, and $Z^{\top} Z = L^{-1}$, the objective function of the optimization problem (OPT2) can be rewritten as
$\mathrm{tr}((\Phi - M)^{\top}(\Phi - M)) = \mathrm{tr}((\Phi - \Phi Z L Z^{\top})^{\top}(\Phi - \Phi Z L Z^{\top}))$
$= \mathrm{tr}(\Phi^{\top}\Phi - 2 \Phi^{\top}\Phi Z L Z^{\top} + Z L Z^{\top} \Phi^{\top}\Phi Z L Z^{\top})$
$= \mathrm{tr}(K - 2 K Z L Z^{\top} + K Z L Z^{\top} Z L Z^{\top}) = \mathrm{tr}(K - L^{1/2} Z^{\top} K Z L^{1/2}),$
where $K$ is the kernel matrix that holds the similarity values between the samples, and $L^{1/2}$ is defined as taking the square root of the diagonal elements. The resulting optimization problem (OPT3) is a trace maximization problem, but it is still very difficult to solve due to the binary decision variables.
maximize   $\mathrm{tr}(L^{1/2} Z^{\top} K Z L^{1/2} - K)$
with respect to   $Z \in \{0, 1\}^{n \times k}$
subject to   $Z \mathbf{1}_k = \mathbf{1}_n$
(OPT3)
However, we can formulate a relaxed version of this optimization problem by renaming $Z L^{1/2}$ as $H$ and letting $H$ take arbitrary real values subject to orthogonality constraints.
maximize   $\mathrm{tr}(H^{\top} K H - K)$
with respect to   $H \in \mathbb{R}^{n \times k}$
subject to   $H^{\top} H = I_k$
(OPT4)
The final optimization problem (OPT4) can be solved by performing KPCA on the kernel matrix
K and setting H to the k eigenvectors that correspond to the k largest eigenvalues (Schölkopf et al.,
1998). We can finally extract a clustering solution by first normalizing all rows of H to be on the
unit sphere and then performing k-means clustering on this normalized matrix. Note that, after the
normalization step, H contains k-dimensional representations of the samples on the unit sphere, and
k-means is not very sensitive to initialization in such a case.
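To make this concrete, here is a minimal sketch of kernel k-means via the spectral relaxation above, written in Python (the paper's implementations are in Matlab, so this is an illustration rather than the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans

def kernel_kmeans_spectral(K, k, random_state=0):
    """Solve the relaxed problem (OPT4), then recover cluster labels.

    K : (n, n) symmetric positive semi-definite kernel matrix.
    k : number of clusters.
    """
    # H = the k eigenvectors of K with largest eigenvalues (solution of OPT4).
    eigvals, eigvecs = np.linalg.eigh(K)     # eigenvalues in ascending order
    H = eigvecs[:, -k:]
    # Normalize rows of H onto the unit sphere, then run k-means on them.
    H = H / np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state)
    return km.fit_predict(H)
```

The choice of 10 k-means restarts mirrors the replication scheme described in the experiments section.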
3 Multiple kernel k-means clustering
In a multiview learning scenario, we have multiple feature representations, where we assume that
each representation has its own mapping function, i.e., $\{\phi_m(\cdot)\}_{m=1}^{p}$. Instead of an unweighted combination of these views (i.e., simple concatenation), we can obtain a weighted mapping function by concatenating views using a convex sum (i.e., nonnegative weights that sum up to 1). This corresponds to replacing $\phi(x_i)$ with $\phi_{\theta}(x_i) = [\theta_1 \phi_1(x_i)^{\top} \;\; \theta_2 \phi_2(x_i)^{\top} \;\; \ldots \;\; \theta_p \phi_p(x_i)^{\top}]^{\top}$, where $\theta \in \mathbb{R}_+^{p}$ is the vector of kernel weights that we need to optimize during training. The kernel function defined over the weighted mapping function becomes
$k_{\theta}(x_i, x_j) = \langle \phi_{\theta}(x_i), \phi_{\theta}(x_j) \rangle = \sum_{m=1}^{p} \langle \theta_m \phi_m(x_i), \theta_m \phi_m(x_j) \rangle = \sum_{m=1}^{p} \theta_m^2 \, k_m(x_i, x_j),$
where we combine kernel functions using a conic sum (i.e., nonnegative weights), which guarantees to have a positive semi-definite kernel function at the end. The optimization problem (OPT5) gives the trace maximization problem we need to solve.
maximize   $\mathrm{tr}(H^{\top} K_{\theta} H - K_{\theta})$
with respect to   $H \in \mathbb{R}^{n \times k}$, $\theta \in \mathbb{R}_+^{p}$
subject to   $H^{\top} H = I_k$, $\theta^{\top} \mathbf{1}_p = 1$
where   $K_{\theta} = \sum_{m=1}^{p} \theta_m^2 K_m$
(OPT5)
We solve this problem using a two-step alternating optimization strategy: (i) Optimize $H$ given $\theta$. If we know the kernel weights (or initialize randomly in the first iteration), solving (OPT5) reduces to solving (OPT4) with the combined kernel matrix $K_{\theta}$, which requires performing KPCA on $K_{\theta}$. (ii) Optimize $\theta$ given $H$. If we know the eigenvectors from the first step, solving (OPT5) reduces to solving (OPT6), which is a convex quadratic programming (QP) problem with $p$ decision variables and one equality constraint, and is solvable with any standard QP solver up to a moderate number of kernels.
minimize   $\sum_{m=1}^{p} \theta_m^2 \, \mathrm{tr}(K_m - H^{\top} K_m H)$
with respect to   $\theta \in \mathbb{R}_+^{p}$
subject to   $\theta^{\top} \mathbf{1}_p = 1$
(OPT6)
Note that using a convex combination of kernels in (OPT5) is not a viable option because if we set $K_{\theta}$ to $\sum_{m=1}^{p} \theta_m K_m$, there would be a trivial solution to the trace maximization problem with a single active kernel and others with zero weights, which is also observed by Yu et al. (2012).
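The following is a minimal sketch (not from the paper; names are hypothetical) of this alternating scheme. Writing $a_m = \mathrm{tr}(K_m - H^{\top} K_m H) \ge 0$, one can check from the KKT conditions of (OPT6) that, whenever all $a_m > 0$, the optimum has the closed form $\theta_m \propto 1/a_m$, so the sketch avoids an external QP solver:

```python
import numpy as np

def mkkm(kernels, k, n_iter=20):
    """Multiple kernel k-means: alternate between H (via KPCA) and theta.

    kernels : list of p (n, n) PSD kernel matrices.
    Returns (theta, H).
    """
    p = len(kernels)
    theta = np.full(p, 1.0 / p)                      # uniform initialization
    for _ in range(n_iter):
        # Step (i): H = top-k eigenvectors of the combined kernel.
        K = sum(t**2 * Km for t, Km in zip(theta, kernels))
        _, eigvecs = np.linalg.eigh(K)
        H = eigvecs[:, -k:]
        # Step (ii): closed form from the KKT conditions of (OPT6),
        # theta_m proportional to 1 / tr(K_m - H' K_m H)  (assumes a_m > 0).
        a = np.array([np.trace(Km) - np.trace(H.T @ Km @ H) for Km in kernels])
        theta = 1.0 / np.maximum(a, 1e-12)
        theta /= theta.sum()
    return theta, H
```

The final cluster assignment is then obtained exactly as in the single-kernel case: normalize the rows of $H$ and run k-means on them.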
4 Localized multiple kernel k-means clustering
Instead of using the same kernel weights for all samples, we propose to use a localized data fusion approach by assigning sample-specific weights to kernels, which enables us to capture sample-specific characteristics of the data and to get rid of sample-specific noise that may be present in some of the views. In our localized combination approach, the mapping function is represented as $\phi_{\Theta}(x_i) = [\theta_{i1} \phi_1(x_i)^{\top} \;\; \theta_{i2} \phi_2(x_i)^{\top} \;\; \ldots \;\; \theta_{ip} \phi_p(x_i)^{\top}]^{\top}$, where $\Theta \in \mathbb{R}_+^{n \times p}$ is the matrix of sample-specific kernel weights, which are nonnegative and sum up to 1 for each sample (Gönen and Alpaydın, 2013). The locally combined kernel function can be written as
$k_{\Theta}(x_i, x_j) = \langle \phi_{\Theta}(x_i), \phi_{\Theta}(x_j) \rangle = \sum_{m=1}^{p} \langle \theta_{im} \phi_m(x_i), \theta_{jm} \phi_m(x_j) \rangle = \sum_{m=1}^{p} \theta_{im} \theta_{jm} \, k_m(x_i, x_j),$
where we are guaranteed to have a positive semi-definite kernel function. The optimization problem (OPT7) gives the trace maximization problem with the locally combined kernel matrix, where $\theta_m \in \mathbb{R}_+^{n}$ is the vector of kernel weights assigned to kernel $m$, and $\circ$ denotes the Hadamard product.
maximize   $\mathrm{tr}(H^{\top} K_{\Theta} H - K_{\Theta})$
with respect to   $H \in \mathbb{R}^{n \times k}$, $\Theta \in \mathbb{R}_+^{n \times p}$
subject to   $H^{\top} H = I_k$, $\Theta \mathbf{1}_p = \mathbf{1}_n$
where   $K_{\Theta} = \sum_{m=1}^{p} (\theta_m \theta_m^{\top}) \circ K_m$
(OPT7)
We solve this problem using a two-step alternating optimization strategy: (i) Optimize $H$ given $\Theta$. If we know the sample-specific kernel weights (or initialize randomly in the first iteration), solving (OPT7) reduces to solving (OPT4) with the combined kernel matrix $K_{\Theta}$, which requires performing KPCA on $K_{\Theta}$. (ii) Optimize $\Theta$ given $H$. If we know the eigenvectors from the first step, using that $\mathrm{tr}(A^{\top}((c c^{\top}) \circ B) A) = c^{\top}((A A^{\top}) \circ B) c$, solving (OPT7) reduces to solving (OPT8), which is a convex QP problem with $n \times p$ decision variables and $n$ equality constraints.
minimize   $\sum_{m=1}^{p} \theta_m^{\top} \big((I_n - H H^{\top}) \circ K_m\big) \theta_m$
with respect to   $\Theta \in \mathbb{R}_+^{n \times p}$
subject to   $\Theta \mathbf{1}_p = \mathbf{1}_n$
(OPT8)
Training the localized combination approach requires more computational effort than training the global approach due to the increased size of the QP problem in the second step. However, the block-diagonal structure of the Hessian matrix in (OPT8) can be exploited to solve this problem much more efficiently. Note that the objective function of (OPT8) can be written as
$$\begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_p \end{bmatrix}^{\top} \begin{bmatrix} (I_n - H H^{\top}) \circ K_1 & 0_{n \times n} & \cdots & 0_{n \times n} \\ 0_{n \times n} & (I_n - H H^{\top}) \circ K_2 & \cdots & 0_{n \times n} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{n \times n} & 0_{n \times n} & \cdots & (I_n - H H^{\top}) \circ K_p \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_p \end{bmatrix},$$
where we have an $n \times n$ matrix for each kernel on the diagonal of the Hessian matrix.
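Because each diagonal block $(I_n - H H^{\top}) \circ K_m$ is PSD and the simplex constraints decouple across samples (each row of $\Theta$ lies on the simplex), the $\Theta$-step can also be approximated with projected gradient descent instead of an off-the-shelf QP solver. The sketch below is not from the paper; it is a minimal illustration using the standard sort-based Euclidean projection onto the simplex, with arbitrary step-size settings:

```python
import numpy as np

def project_rows_to_simplex(T):
    """Euclidean projection of each row of T onto the probability simplex."""
    n, p = T.shape
    U = np.sort(T, axis=1)[:, ::-1]                  # each row sorted descending
    css = np.cumsum(U, axis=1) - 1.0
    cond = U - css / np.arange(1, p + 1) > 0
    rho = p - 1 - np.argmax(cond[:, ::-1], axis=1)   # last index where cond holds
    tau = css[np.arange(n), rho] / (rho + 1.0)
    return np.maximum(T - tau[:, None], 0.0)

def theta_step(kernels, H, Theta, lr=0.01, n_steps=200):
    """Approximate the Theta-update (OPT8) by projected gradient descent.

    lr and n_steps are illustrative values; a proper solver exploiting the
    block structure (as the paper suggests) would be used in practice.
    """
    n = H.shape[0]
    Q = [(np.eye(n) - H @ H.T) * Km for Km in kernels]  # '*' = Hadamard product
    for _ in range(n_steps):
        grad = np.column_stack([2.0 * Qm @ Theta[:, m] for m, Qm in enumerate(Q)])
        Theta = project_rows_to_simplex(Theta - lr * grad)
    return Theta
```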
5 Experiments
Clustering patients is one of the clinically important applications in cancer biology because it helps
to identify prognostic cancer subtypes and to develop personalized strategies to guide therapy. Making use of multiple genomic characterizations in clustering is critical because different patients may
manifest their disease in different genomic platforms due to cancer heterogeneity and measurement
noise. We use the human colon and rectal cancer data set provided by TCGA consortium (The Cancer Genome Atlas Network, 2012), which contains several genomic characterizations of the patients,
to test our new clustering algorithm in a challenging real-world application.
We use DNA copy number, mRNA gene expression, and DNA methylation data of the patients
for clustering. In order to evaluate the clustering results, we use three commonly used clinical
biomarkers of colon and rectal cancer (The Cancer Genome Atlas Network, 2012): (i) micro-satellite
instability (i.e., a hypermutable phenotype caused by the loss of DNA mismatch repair activity)
(ii) hypermutation (defined as having mutations in more than or equal to 300 genes), and (iii) mutation in BRAF gene. Note that these three biomarkers are not directly identifiable from the input
data sources used. The preprocessed genomic characterizations of the patients can be downloaded
from a public repository at https://www.synapse.org/#!Synapse:syn300013, where
DNA copy number, mRNA gene expression, DNA methylation, and mutation data consist of 20313,
20530, 24980, and 14581 features, respectively. The micro-satellite instability data can be downloaded from https://tcga-data.nci.nih.gov/tcga/dataAccessMatrix.htm. In
the resulting data set, there are 204 patients with available genomic and clinical biomarker data.
We implement kernel k-means clustering and its multiview variants in Matlab. Our implementations
are publicly available at https://github.com/mehmetgonen/lmkkmeans. We solve the
QP problems of the multiview variants using the Mosek optimization software (Mosek, 2014). For
all methods, we perform 10 replications of k-means with different initializations as the last step and
use the solution with the lowest sum-of-squares cost to decide cluster memberships.
We calculate four different kernels to use in our experiments: (i) KC: the Gaussian kernel on DNA copy number data, (ii) KG: the Gaussian kernel on mRNA gene expression data, (iii) KM: the Gaussian kernel on DNA methylation data, and (iv) KCGM: the Gaussian kernel on concatenated
data (i.e., early combination). Before calculating each kernel, the input data is normalized to have
zero mean and unit standard deviation (i.e., z-normalization for each feature). For each kernel, we
set the kernel width parameter to the square root of the number of features in its corresponding view.
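A minimal sketch of this preprocessing and kernel construction (not the authors' Matlab code; the exact parameterization of the Gaussian width is an assumption here, with the width entering as 2*sigma^2 in the exponent):

```python
import numpy as np

def gaussian_kernel(X, sigma=None):
    """Gaussian kernel on z-normalized features.

    X : (n, d) data matrix for one view; sigma defaults to sqrt(d),
        following the width convention described above.
    """
    X = (X - X.mean(axis=0)) / np.maximum(X.std(axis=0), 1e-12)  # z-normalize
    if sigma is None:
        sigma = np.sqrt(X.shape[1])
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T               # squared distances
    return np.exp(-D2 / (2.0 * sigma**2))
```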
We compare seven clustering algorithms on this colon and rectal cancer data set: (i) kernel k-means
clustering with KC , (ii) kernel k-means clustering with KG , (iii) kernel k-means clustering with KM ,
(iv) kernel k-means clustering with KCGM , (v) kernel k-means clustering with (KC + KG + KM ) / 3,
(vi) multiple kernel k-means clustering with (KC, KG, KM), and (vii) localized multiple kernel k-means clustering with (KC, KG, KM). The first three algorithms are single-view clustering methods
that work on a single genomic characterization. The fourth algorithm is the early integration approach that combines the views at the feature level. The fifth and sixth algorithms are intermediate
integration approaches that combine the kernels using unweighted and weighted sums, respectively,
where the latter is very similar to the formulations of Huang et al. (2012) and Yu et al. (2012). The
last algorithm is our localized MKL approach that combines the kernels in a sample-specific way.
We assign three different binary labels to each sample as the ground truth using the three clinical
biomarkers mentioned and evaluate the clustering results using three different performance metrics:
(i) normalized mutual information (NMI), (ii) purity, and (iii) the Rand index (RI). We set the number
of clusters to 2 for all of the algorithms because each ground truth label has only two categories.
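For reference, purity and the Rand index can be computed as follows (a sketch, not the authors' code; NMI is available in scikit-learn as `normalized_mutual_info_score`):

```python
import numpy as np
from itertools import combinations

def purity(labels_true, labels_pred):
    """Fraction of samples assigned to the majority true label of their cluster.

    Assumes labels_true are nonnegative integer codes (required by bincount).
    """
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        total += np.bincount(members).max()
    return total / len(labels_true)

def rand_index(labels_true, labels_pred):
    """Fraction of sample pairs on which the two labelings agree."""
    agree, pairs = 0, 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        agree += (same_true == same_pred)
        pairs += 1
    return agree / pairs
```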
We first show the kernel weights assigned to 204
colon and rectal cancer patients by our localized
data fusion approach. As we can see from Figure 1, some of the patients are very well characterized by their DNA copy number data. Our localized algorithm assigns weights larger than 0.5
to DNA copy number data for most of the patients
in the second cluster, whereas all three views are
used with comparable weights for the remaining
patients. Note that the kernel weights of each patient are strictly nonnegative and sum up to 1 (i.e.,
defined on the unit simplex). Our proposed clustering algorithm can identify the most informative genomic platforms in an unsupervised and
patient-specific manner. Together with the better clustering performance and biological interpretation presented next, this particular application from cancer biology shows the potential for
localized combination strategy.
Figure 1: Kernel weights assigned to patients by our localized data fusion approach, shown as a ternary plot whose axes correspond to the copy number, gene expression, and methylation views. Each dot denotes a single cancer patient, and patients in the same cluster are drawn with the same color.
Figure 2 summarizes the results obtained by seven clustering algorithms on the colon and rectal cancer data set. For each algorithm, the cluster assignment and the values of three clinical biomarkers
are aligned to each other, and we report the performance values of nine biomarker-metric pairs. We
see that DNA copy number (i.e., KC ) is the most informative genomic characterization when we
compare the performance of single-view clustering algorithms, where it obtains better results than
mRNA gene expression (i.e., KG ) and DNA methylation (i.e., KM ) in terms of NMI and RI on all
biomarkers. We also see that the early integration strategy (i.e., KCGM ) does not improve the results because mRNA gene expression and DNA methylation dominate the clustering step due to the
unsupervised nature of the problem. However, when we combine the kernels using an unweighted
combination strategy, i.e., (KC + KG + KM ) / 3, the performance values are significantly improved
compared to single-view clustering methods and early integration in terms of NMI and RI on all
biomarkers. Instead of using an unweighted sum, we can optimize the combination weights using
the multiple kernel k-means clustering of Section 3. In this case, the performance values are slightly
improved compared to the unweighted sum in terms of NMI and RI on all biomarkers. Our localized data fusion approach significantly outperforms the other algorithms in terms of NMI and RI on
the "micro-satellite instability" and "hypermutation" biomarkers, and it is the only algorithm that can obtain purity values higher than the ratio of the majority class samples on the "mutation in BRAF gene"
biomarker. These results validate the benefit of our localized approach for the multiview setting.
[Figure 2, reconstructed as a table from the extracted values; cluster sizes and metric blocks are matched by reading order.]
Kernel k-means clustering with KC (clusters: 102/102 patients)
  MSI high:       NMI 0.1466, Purity 0.8676, RI 0.5376
  Hypermutation:  NMI 0.1418, Purity 0.8480, RI 0.5426
  BRAF mutation:  NMI 0.0459, Purity 0.8971, RI 0.5156
Kernel k-means clustering with KG (clusters: 117/87 patients)
  MSI high:       NMI 0.0504, Purity 0.8676, RI 0.5082
  Hypermutation:  NMI 0.0514, Purity 0.8480, RI 0.5091
  BRAF mutation:  NMI 0.0174, Purity 0.8971, RI 0.5082
Kernel k-means clustering with KM (clusters: 83/121 patients)
  MSI high:       NMI 0.0008, Purity 0.8676, RI 0.5143
  Hypermutation:  NMI 0.0049, Purity 0.8480, RI 0.5105
  BRAF mutation:  NMI 0.0026, Purity 0.8971, RI 0.5143
Kernel k-means clustering with KCGM (clusters: 87/117 patients)
  MSI high:       NMI 0.0019, Purity 0.8676, RI 0.5105
  Hypermutation:  NMI 0.0127, Purity 0.8480, RI 0.5076
  BRAF mutation:  NMI 0.0041, Purity 0.8971, RI 0.5105
Kernel k-means clustering with (KC + KG + KM) / 3 (clusters: 119/85 patients)
  MSI high:       NMI 0.2437, Purity 0.8676, RI 0.6009
  Hypermutation:  NMI 0.2303, Purity 0.8480, RI 0.6096
  BRAF mutation:  NMI 0.0945, Purity 0.8971, RI 0.5568
Multiple kernel k-means clustering with (KC, KG, KM) (clusters: 122/82 patients)
  MSI high:       NMI 0.2557, Purity 0.8676, RI 0.6141
  Hypermutation:  NMI 0.2431, Purity 0.8480, RI 0.6233
  BRAF mutation:  NMI 0.1013, Purity 0.8971, RI 0.5666
Localized multiple kernel k-means clustering with (KC, KG, KM) (clusters: 158/46 patients)
  MSI high:       NMI 0.3954, Purity 0.8873, RI 0.8088
  Hypermutation:  NMI 0.3788, Purity 0.8873, RI 0.8088
  BRAF mutation:  NMI 0.1481, Purity 0.8971, RI 0.7114
Figure 2: Results obtained by seven clustering algorithms on the colon and rectal cancer data set provided by TCGA consortium (The Cancer Genome Atlas Network, 2012). For each algorithm, we first display the cluster assignment and report the number of patients in each cluster. We then display the values of three clinical biomarkers aligned with the cluster assignment, where "MSI high" shows the patients with high micro-satellite instability status in darker color, "Hypermutation" shows the patients with mutations in more than or equal to 300 genes in darker color, and "BRAF mutation" shows the patients with a mutation in their BRAF gene in darker color. We compare the algorithms in terms of their clustering performance on three clinical biomarkers under three metrics: normalized mutual information (NMI), purity, and the Rand index (RI). For all performance metrics, a higher value means better performance, and for each biomarker-metric pair, the best result is reported in bold face. We see that our localized clustering algorithm obtains the best result for eight out of nine biomarker-metric pairs, whereas all algorithms have the same purity value for BRAF mutation.
Figure 3: Important features in the genomic views (copy number, gene expression, methylation) determined using the solution of multiple kernel k-means clustering, together with the cluster assignment and mutations in frequently mutated genes. For each genomic view, we calculate the Pearson correlation values between features and the cluster assignment, and display the 100 most positively correlated and 100 most negatively correlated features (red: high, blue: low). We also display the mutation status (black: mutated, white: wildtype) of patients for the 102 most frequently mutated genes, which are mutated in at least 16 patients.
Figure 4: Important features in the genomic views (copy number, gene expression, methylation) determined using the solution of localized multiple kernel k-means clustering, together with the cluster assignment and mutations in frequently mutated genes. See Figure 3 for details.
We perform an additional biological interpretation step by looking at the features that can be used
to differentiate the clusters found. Figures 3 and 4 show features in genomic views that are highly
(positively or negatively) correlated with the cluster assignments of the two best performing algorithms in terms of clustering performance, namely, multiple kernel k-means clustering and localized
multiple kernel k-means clustering. We clearly see that the genomic signatures of the hyper-mutated
cluster (especially the one for DNA copy number) obtained using our localized data fusion approach
are much less noisy than those of global data fusion. Identifying clear genomic signatures is clinically important because they can be used for diagnostic and prognostic purposes on new patients.
6 Discussion
We introduce a localized data fusion approach for kernel k-means clustering to better capture
sample-specific characteristics of the data in the multiview setting, which cannot be captured using
global data fusion strategies such as Huang et al. (2012) and Yu et al. (2012). The proposed method
is from the family of MKL algorithms and combines the kernels defined on the views with samplespecific weights to determine the relative importance of the views for each sample. We illustrate the
practical importance of the method on a human colon and rectal cancer data set by clustering patients
using their three different genomic characterizations. The results show that our localized data fusion
strategy can identify more relevant prognostic patient groups than global data fusion strategies.
The interesting topics for future research are: (i) exploiting the special structure of the Hessian
matrix in our formulation by developing a customized solver instead of using off-the-shelf optimization software to improve the time complexity, and (ii) integrating prior knowledge about the
samples that we may have into our formulation to be able to find more relevant clusters.
Acknowledgments. This study was financially supported by the Integrative Cancer Biology Program (grant no 1U54CA149237) and the Cancer Target Discovery and Development (CTDD) Network (grant no 1U01CA176303) of the National Cancer Institute.
References
M. B. Blaschko and C. H. Lampert. Correlational spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
K. Chaudhuri, S. M. Kakade, K. Livescu, and K. Sridharan. Multi-view clustering via canonical correlation analysis. In Proceedings of the 26th International Conference on Machine Learning, 2009.
J. Chen, Z. Zhao, J. Ye, and H. Liu. Nonlinear adaptive distance metric learning for clustering. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2007.
C. Ding and X. He. K-means clustering via principal component analysis. In Proceedings of the 21st International Conference on Machine Learning, 2004.
M. Girolami. Mercer kernel-based clustering in feature space. IEEE Transactions on Neural Networks, 13(3):780-784, 2002.
M. Gönen and E. Alpaydın. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12(Jul):2211-2268, 2011.
M. Gönen and E. Alpaydın. Localized algorithms for multiple kernel learning. Pattern Recognition, 46(3):795-807, 2013.
J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 1975.
H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321-327, 1936.
H.-C. Huang, Y.-Y. Chuang, and C.-S. Chen. Affinity aggregation for spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012.
A. Kumar, P. Rai, and H. Daumé III. Co-regularized multi-view spectral clustering. In Advances in Neural Information Processing Systems 24, 2011.
P. L. Lai and C. Fyfe. Kernel and nonlinear canonical correlation analysis. International Journal of Neural Systems, 10(5):365-377, 2000.
T. Lange and J. M. Buhmann. Fusion of similarity data in clustering. In Advances in Neural Information Processing Systems 18, 2006.
Q. Mo, S. Wang, V. E. Seshan, A. B. Olshen, N. Schultz, C. Sander, R. S. Powers, M. Ladanyi, and R. Shen. Pattern discovery and cancer gene identification in integrated cancer genomic data. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4245-4250, 2013.
Mosek. The MOSEK Optimization Tools Manual Version 7.0 (Revision 134). MOSEK ApS, Denmark, 2014.
W. S. Noble. Support vector machine applications in computational biology. In B. Schölkopf, K. Tsuda, and J.-P. Vert, editors, Kernel Methods in Computational Biology, chapter 3. The MIT Press, 2004.
K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2(11):559-572, 1901.
B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299-1319, 1998.
R. Shen, Q. Mo, N. Schultz, V. E. Seshan, A. B. Olshen, J. Huse, M. Ladanyi, and C. Sander. Integrative subtype discovery in glioblastoma using iCluster. PLoS ONE, 7(4):e35236, 2012.
A. Strehl and J. Ghosh. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3(Dec):583-617, 2002.
W. Tang, Z. Lu, and I. S. Dhillon. Clustering with multiple graphs. In Proceedings of the 9th IEEE International Conference on Data Mining, 2009.
The Cancer Genome Atlas Network. Comprehensive molecular characterization of human colon and rectal cancer. Nature, 487(7407):330-337, 2012.
H. Valizadegan and R. Jin. Generalized maximum margin clustering and unsupervised kernel learning. In Advances in Neural Information Processing Systems 19, 2007.
B. Wang, A. M. Mezlini, F. Demir, M. Fiume, Z. Tu, M. Brudno, B. Haibe-Kains, and A. Goldenberg. Similarity network fusion for aggregating data types on a genomic scale. Nature Methods, 11(3):333-337, 2014.
L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In Advances in Neural Information Processing Systems 17, 2004.
S. Yu, L.-C. Tranchevent, X. Liu, W. Glänzel, J. A. K. Suykens, B. De Moor, and Y. Moreau. Optimized data fusion for kernel k-means clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5):1031-1039, 2012.
Y. Yuan, R. S. Savage, and F. Markowetz. Patient-specific data fusion defines prognostic cancer subtypes. PLoS Computational Biology, 7(10):e1002227, 2011.
H. Zha, X. He, C. Ding, H. Simon, and M. Gu. Spectral relaxation for K-means clustering. In Advances in Neural Information Processing Systems 14, 2001.
4,679 | 5,237 | Learning with Fredholm Kernels
Qichao Que Mikhail Belkin Yusu Wang
Department of Computer Science and Engineering
The Ohio State University
Columbus, OH 43210
{que,mbelkin,yusu}@cse.ohio-state.edu
Abstract
In this paper we propose a framework for supervised and semi-supervised learning
based on reformulating the learning problem as a regularized Fredholm integral
equation. Our approach fits naturally into the kernel framework and can be interpreted as constructing new data-dependent kernels, which we call Fredholm
kernels. We proceed to discuss the "noise assumption" for semi-supervised learning and provide both theoretical and experimental evidence that Fredholm kernels
can effectively utilize unlabeled data under the noise assumption. We demonstrate
that methods based on Fredholm learning show very competitive performance in
the standard semi-supervised learning setting.
1 Introduction
Kernel methods and methods based on integral operators have become one of the central areas of
machine learning and learning theory. These methods combine rich mathematical foundations with
strong empirical performance. In this paper we propose a framework for supervised and unsupervised learning as an inverse problem based on solving the integral equation known as the Fredholm
problem of the first kind. We develop regularization based algorithms for solving these systems
leading to what we call Fredholm kernels.
In the basic setting of supervised learning we are given the data set $(x_i, y_i)$, where $x_i \in X$, $y_i \in \mathbb{R}$. We would like to construct a function $f : X \to \mathbb{R}$, such that $f(x_i) \approx y_i$ and $f$ is "nice enough" to generalize to new data points. This is typically done by choosing $f$ from a class of functions (a
Reproducing Kernel Hilbert Space (RKHS) corresponding to a positive definite kernel for the kernel
methods) and optimizing a certain loss function, such as the square loss or hinge loss.
In this paper we formulate a new framework for learning based on interpreting the learning problem
as a Fredholm integral equation. This formulation shares some similarities with the usual kernel
learning framework but unlike the standard methods also allows for easy incorporation of unlabeled
data. We also show how to interpret the resulting algorithm as a standard kernel method with a
non-standard data-dependent kernel (somewhat resembling the approach taken in [13]).
We discuss reasons why incorporation of unlabeled data may be desirable, concentrating in particular on what may be termed "the noise assumption" for semi-supervised learning, which is related to but distinct from the manifold and cluster assumptions popular in the semi-supervised learning literature.
We provide both theoretical and empirical results showing that the Fredholm formulation allows for
efficient denoising of classifiers.
To summarize, the main contributions of the paper are as follows:
(1) We formulate a new framework based on solving a regularized Fredholm equation. The framework naturally combines labeled and unlabeled data. We show how this framework can be expressed
as a kernel method with a non-standard data-dependent kernel.
(2) We discuss "the noise assumption" in semi-supervised learning and provide some theoretical evidence that Fredholm kernels are able to improve performance of classifiers under this assumption.
More specifically, we analyze the behavior of several versions of Fredholm kernels, based on combining linear and Gaussian kernels. We demonstrate that for some models of the noise assumption,
the Fredholm kernel provides better estimators than the traditional data-independent kernel and thus
unlabeled data provably improves inference.
(3) We show that Fredholm kernels perform well on synthetic examples designed to illustrate the
noise assumption as well as on a number of real-world datasets.
Related work. Kernel and integral methods in machine learning have a large and diverse literature
(e.g., [12, 11]). The work most directly related to our approach is [10], where Fredholm integral
equations were introduced to address the problem of density ratio estimation and covariate shift. In
that work the problem of density ratio estimation was expressed as a Fredholm integral equation and
solved using regularization in RKHS. This setting also relates to a line of work on on kernel mean
embedding where data points are embedded in Reproducing Kernel Hilbert Spaces using integral
operators with applications to density ratio estimation and other tasks [5, 6, 7]. A very interesting
recent work [9] explores a shrinkage estimator for estimating means in RKHS, following the SteinJames estimator originally used for estimating the mean in an Euclidean space. The results obtained
in [9] show how such estimators can reduce variance. There is some similarity between that work
and our theoretical results presented in Section 4 which also show variance reduction for certain
estimators of the kernel although in a different setting. Another line of related work is the class
of semi-supervised learning techniques (see [15, 2] for a comprehensive overview) related to manifold regularization [1], where an additional graph Laplacian regularizer is added to take advantage
of the geometric/manifold structure of the data. Our reformulation of Fredholm learning as a kernel method, addressing what we called "noise assumptions", parallels the data-dependent kernels for manifold
regularization proposed in [13].
2 Fredholm Kernels
We start by formulating the learning framework proposed in this paper. Suppose we are given l labeled
pairs $(x_1, y_1), \ldots, (x_l, y_l)$ from the data distribution $p(x, y)$ defined on $X \times Y$ and $u$ unlabeled
points $x_{l+1}, \ldots, x_{l+u}$ from the marginal distribution $p_X(x)$ on $X$. For simplicity we will assume
that the feature space $X$ is a Euclidean space $R^D$, and the label set $Y$ is either $\{-1, 1\}$ for binary
classification or the real line $R$ for regression. Semi-supervised learning algorithms aim to construct
a (predictor) function $f : X \to Y$ by incorporating the information of the unlabeled data distribution.
To this end, we introduce the integral operator $K_{p_X}$ associated with a kernel function $k(x, z)$. In our
setting $k(x, z)$ does not have to be a positive semi-definite (or even symmetric) kernel:
$$K_{p_X} : L_2 \to L_2 \quad\text{and}\quad K_{p_X} f(x) = \int k(x, z) f(z)\, p_X(z)\, dz, \qquad (1)$$
where $L_2$ is the space of square-integrable functions. By the law of large numbers, the above operator can be approximated using unlabeled data from $p_X$ as
$$\hat{K}_{p_X} f(x) = \frac{1}{l+u} \sum_{i=1}^{l+u} k(x, x_i) f(x_i).$$
This approximation provides a natural way of incorporating unlabeled data into algorithms. In our
Fredholm learning framework, we will use functions in $K_{p_X}\mathcal{H} = \{K_{p_X} f : f \in \mathcal{H}\}$, where $\mathcal{H}$ is
an appropriate Reproducing Kernel Hilbert Space (RKHS), as classification or regression functions.
Note that, unlike an RKHS, this space of functions, $K_{p_X}\mathcal{H}$, is density dependent.
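As a concrete illustration, the empirical operator $\hat{K}_{p_X}$ is just a kernel-weighted average over the labeled and unlabeled points. Below is a minimal sketch; the Gaussian choice of $k$ and the toy data are our own illustrative assumptions, not the paper's setup:

    import numpy as np

    def k_gauss(X, Z, r=1.0):
        """Gaussian 'outer' kernel matrix k(x, z) = exp(-||x - z||^2 / (2r))."""
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * r))

    def apply_K_hat(x_eval, X_all, f_vals, r=1.0):
        """Empirical integral operator: (K_hat f)(x) = mean_i k(x, x_i) f(x_i)."""
        K = k_gauss(x_eval, X_all, r)          # (m, l+u) kernel evaluations
        return K @ f_vals / X_all.shape[0]     # average over all l+u points

    # toy usage: the operator smooths f according to the data density
    X_all = np.random.randn(200, 2)            # labeled + unlabeled points
    f_vals = np.sin(X_all[:, 0])               # some function evaluated on the data
    print(apply_K_hat(np.zeros((1, 2)), X_all, f_vals))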
In particular, this now allows us to formulate the following optimization problem for semi-supervised
classification/regression in a way similar to many supervised learning algorithms. The Fredholm learning framework solves the following optimization problem(1):
$$f^* = \arg\min_{f \in \mathcal{H}} \frac{1}{l} \sum_{i=1}^{l} \left((\hat{K}_{p_X} f)(x_i) - y_i\right)^2 + \lambda \|f\|_\mathcal{H}^2. \qquad (2)$$
(1) We will be using the square loss to simplify the exposition. Other loss functions can also be used in Eqn 2.
The final classifier is $c(x) = (\hat{K}_{p_X} f^*)(x)$, where $\hat{K}_{p_X}$ is the operator defined above. Eqn 2 is a
discretized and regularized version of the Fredholm integral equation $K_{p_X} f = y$, which gives the
Fredholm learning framework its name.
Even though at first glance this setting looks similar to conventional kernel methods, the extra
layer introduced by $\hat{K}_{p_X}$ makes a significant difference, in particular by allowing the integration
of information from the unlabeled data distribution. In contrast, solutions to standard kernel methods
for most kernels, e.g., linear, polynomial or Gaussian kernels, are completely independent of the
unlabeled data. We note that our approach is closely related to [10], where a Fredholm equation is
used to estimate the density ratio of two probability distributions.
The Fredholm learning framework is a generalization of the standard kernel framework. In fact, if
the kernel $k$ is the $\delta$-function, then our formulation above is equivalent to the Regularized Kernel
Least Squares equation $f^* = \arg\min_{f \in \mathcal{H}} \frac{1}{l}\sum_{i=1}^{l} (f(x_i) - y_i)^2 + \lambda \|f\|_\mathcal{H}^2$. We could also replace
the $L_2$ loss in Eqn 2 by other loss functions, such as the hinge loss, resulting in an SVM-like classifier.
Finally, even though Eqn 2 is an optimization problem in a potentially infinite-dimensional function
space $\mathcal{H}$, a standard derivation using the Representer Theorem (see the full version for details) yields
a computationally accessible solution as follows:
$$f^*(x) = \frac{1}{l+u} \sum_{j=1}^{l+u} k_H(x, x_j)\, v_j, \qquad v = \left(K_{l+u}^\top K_{l+u} K_H + \lambda I\right)^{-1} K_{l+u}^\top y, \qquad (3)$$
where $(K_{l+u})_{ij} = k(x_i, x_j)$ for $1 \le i \le l$, $1 \le j \le l+u$, and $(K_H)_{ij} = k_H(x_i, x_j)$ for
$1 \le i, j \le l+u$. Note that $K_{l+u}$ is an $l \times (l+u)$ matrix.
Fredholm kernels: a convenient reformulation. In fact, we will see that the Fredholm learning problem induces a new data-dependent kernel, which we will refer to as the Fredholm kernel(2). To show this
connection, we use the following identity, which can be easily verified:
$$\left(K_{l+u}^\top K_{l+u} K_H + \lambda I\right)^{-1} K_{l+u}^\top = K_{l+u}^\top \left(K_{l+u} K_H K_{l+u}^\top + \lambda I\right)^{-1}.$$
Define $K_F = K_{l+u} K_H K_{l+u}^\top$ to be the $l \times l$ kernel matrix associated with a new kernel defined by
$$\hat{k}_F(x, z) = \frac{1}{(l+u)^2} \sum_{i,j=1}^{l+u} k(x, x_i)\, k_H(x_i, x_j)\, k(z, x_j), \qquad (4)$$
and we consider the unlabeled data as fixed for computing this new kernel. Using this new kernel
$\hat{k}_F$, the final classifying function from Eqn 3 can be rewritten as:
$$c^*(x) = \frac{1}{l+u} \sum_{i=1}^{l+u} k(x, x_i) f^*(x_i) = \sum_{s=1}^{l} \hat{k}_F(x, x_s)\, \alpha_s, \qquad \alpha = (K_F + \lambda I)^{-1} y.$$
Because of Eqn 4 we will sometimes refer to the kernels $k_H$ and $k$ as the "inner" and "outer" kernels
respectively. It can be observed that this solution is equivalent to a standard kernel method, but with
a new data-dependent kernel $\hat{k}_F$, which we call the Fredholm kernel, since it is induced from
the Fredholm problem formulated in Eqn 2.
Proposition 1. The Fredholm kernel defined in Eqn 4 is positive semi-definite, as long as $K_H$ is
positive semi-definite, for any set of data $x_1, \ldots, x_{l+u}$.
The proof is given in the full version. The "outer" kernel $k$ does not have to be either positive definite
or even symmetric. When using a Gaussian kernel for $k$, the discrete approximation in Eqn 4 might be
unstable when the kernel width is small, so we also introduce the normalized Fredholm kernel,
$$\hat{k}_{FN}(x, z) = \sum_{i,j=1}^{l+u} \frac{k(x, x_i)}{\sum_n k(x, x_n)}\; k_H(x_i, x_j)\; \frac{k(z, x_j)}{\sum_n k(z, x_n)}. \qquad (5)$$
It is easy to check that the resulting Fredholm kernel $\hat{k}_{FN}$ is still symmetric positive semi-definite.
Even though the Fredholm kernel was derived using the $L_2$ loss here, it can also be derived when the hinge
loss is used, as explained in the full version.
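To make the construction concrete, the following sketch computes the (optionally normalized) Fredholm Gram matrix of Eqn 4/5 and the classifier $\alpha = (K_F + \lambda I)^{-1} y$. The Gaussian kernel choice and the function names here are our own illustrative assumptions, not the paper's experimental code:

    import numpy as np

    def gauss(X, Z, r):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * r))

    def outer_weights(X, X_all, r, normalize):
        """Rows of k(x, x_i), either averaged (Eqn 4) or row-normalized (Eqn 5)."""
        K = gauss(X, X_all, r)
        return K / K.sum(1, keepdims=True) if normalize else K / X_all.shape[0]

    def train_predict(X_lab, y, X_all, X_test, r=1.0, lam=1e-3, normalize=True):
        K_H = gauss(X_all, X_all, r)                     # inner kernel on all points
        W_lab = outer_weights(X_lab, X_all, r, normalize)
        W_test = outer_weights(X_test, X_all, r, normalize)
        K_F = W_lab @ K_H @ W_lab.T                      # l x l Fredholm Gram matrix
        alpha = np.linalg.solve(K_F + lam * np.eye(len(y)), y)
        return np.sign(W_test @ K_H @ W_lab.T @ alpha)   # classify test points

Note that the unlabeled points enter only through the outer-kernel weights, which is exactly how density information flows into the final classifier.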
(2) We note that the term Fredholm kernel has been used in mathematics ([8], page 103) and also in a different
learning context [14]. Our usage represents a different object.
3 The Noise Assumption and Semi-supervised Learning
In order for unlabeled data to be useful in classification tasks, it is necessary for the marginal distribution of the unlabeled data to contain information about the conditional distribution of the labels.
Several ways in which such information can be encoded have been proposed, including the "cluster
assumption" [3] and the "manifold assumption" [1]. The cluster assumption states that a cluster (or
a high density area) contains only (or mostly) points belonging to the same class; that is, if $x_1$ and
$x_2$ belong to the same cluster, the corresponding labels $y_1, y_2$ should be the same. The manifold
assumption states that the regression function is smooth with respect to the underlying manifold
structure of the data, which can be interpreted as saying that the geodesic distance should be used
instead of the ambient distance for optimal classification. The success of algorithms based on these
ideas indicates that these assumptions do capture certain characteristics of real data. Still, a better
understanding of unlabeled data may lead to further progress in data analysis.
The noise assumption. We propose a new assumption, the "noise assumption": in the neighborhood of every point, the directions with low variance (for the unlabeled data) are uninformative with respect to the class labels and can be regarded as noise. While intuitive, as far as we know it has
not been explicitly formulated in the context of semi-supervised learning algorithms, nor applied to theoretical analysis.
[Figure 1: Left: only labelled points; Right: with unlabelled points.]
Note that even if the noise variance is small along a single direction, it can still significantly decrease the performance of a supervised learning algorithm if the noise is high-dimensional. These
accumulated non-informative variations in particular increase the difficulty of learning a good classifier when the amount of labeled data is small. The left panel of Figure 1 illustrates the issue of noise
with two labeled points: the seemingly optimal classification boundary (the red line) differs from
the correct one (in black) due to the noisy variation along the y axis for the two labeled points.
Intuitively, the unlabeled data shown in the right panel of Figure 1 can be helpful in this setting, as low
variance directions can be estimated locally, allowing algorithms to suppress the influence of
the noisy variation when learning a classifier.
Connection to cluster and manifold assumptions. The noise assumption is compatible with the
manifold assumption within a manifold+noise model. Specifically, we can assume that the functions of interest vary along the manifold and are constant in the orthogonal directions. Alternatively,
we can think of directions with high variance as "signal/manifold" and directions with low variance as "noise". We note that the noise assumption does not require the data to conform to a
low-dimensional manifold in the strict mathematical sense of the word. The noise assumption is
orthogonal to the cluster assumption: for example, Figure 1 illustrates a situation where the data has no
clusters but the noise assumption applies.
4 Theoretical Results for Fredholm Kernels
Non-informative variation in the data can degrade traditional supervised learning algorithms. We
now show that Fredholm kernels can replace traditional kernels to inject them with
"noise-suppression" power with the help of unlabeled data. In this section we present two views
to illustrate how such noise suppression can be achieved. Specifically, in Section 4.1 we show that,
under a certain setup, the linear Fredholm kernel suppresses principal components with small variance.
In Section 4.2 we prove that under certain conditions we are able to provide good approximations
to the "true" kernel on the hidden underlying space.
To make our arguments clearer, we assume that there is an infinite amount of unlabelled data; that
is, we know the marginal distribution of the data exactly. We then consider the following continuous
versions of the un-normalized and normalized Fredholm kernels from Eqn 4 and 5:
$$k_{FU}(x, z) = \int\!\!\int k(x, u)\, k_H(u, v)\, k(z, v)\, p(u)\, p(v)\, du\, dv \qquad (6)$$
$$k_{FN}(x, z) = \int\!\!\int \frac{k(x, u)}{\int k(x, w)\, p(w)\, dw}\; k_H(u, v)\; \frac{k(z, v)}{\int k(z, w)\, p(w)\, dw}\; p(u)\, p(v)\, du\, dv. \qquad (7)$$
Note that in the above equations, and in what follows, we sometimes write $p$ instead of $p_X$ for the
marginal distribution when the choice is clear from context. We will typically use $k_F$ to denote
the appropriate normalized or unnormalized kernel depending on the context.
4.1 Linear Fredholm kernels and inner products
For this section, we consider the unnormalized Fredholm kernel, that is, $k_F = k_{FU}$. If the "outer"
kernel $k(u, v)$ is linear, i.e. $k(u, v) = \langle u, v\rangle$, the resulting Fredholm kernel can be viewed as an
inner product. Specifically, the un-normalized Fredholm kernel from Eqn 6 can be rewritten as:
$$k_F(x, z) = x^\top \Sigma_F\, z, \quad\text{where}\quad \Sigma_F = \int\!\!\int u\, k_H(u, v)\, v^\top\, p(u)\, p(v)\, du\, dv.$$
Thus $k_F(x, z)$ is simply an inner product which depends on both the unlabeled data distribution $p(x)$
and the "inner" kernel $k_H$. This inner product re-weights the standard norm in feature space based
on the variances along the principal directions of the matrix $\Sigma_F$. We show that when the unlabeled data is sampled from a normal distribution, this kernel can be viewed as a "soft-thresholding"
PCA, suppressing the directions with low variance. Specifically, we have the following(3):
Theorem 2. Let $k_H(x, z) = \exp\left(-\frac{\|x - z\|^2}{2t}\right)$ and assume the distribution $p_X$ for the unlabeled data is
a single multivariate normal distribution, $N(\mu, \mathrm{diag}(\sigma_1^2, \ldots, \sigma_D^2))$. We have
$$\Sigma_F = \prod_{d=1}^{D} \sqrt{\frac{t}{2\sigma_d^2 + t}}\; \left(\mu\mu^\top + \mathrm{diag}\left(\frac{\sigma_1^4}{2\sigma_1^2 + t}, \ldots, \frac{\sigma_D^4}{2\sigma_D^2 + t}\right)\right).$$
Assuming that the data is mean-subtracted, i.e. $\mu = 0$, we see that $x^\top \Sigma_F z$ re-scales the projections
along the principal components when computing the inner product; that is, the rescaling factor for
the $i$-th principal direction is $\frac{\sigma_i^4}{2\sigma_i^2 + t}$.
Note that this rescaling factor $\frac{\sigma_i^4}{2\sigma_i^2 + t} \to 0$ when $\sigma_i^2 \ll t$. On the other hand, when $\sigma_i^2 \gg t$, we
have $\frac{\sigma_i^4}{2\sigma_i^2 + t} \approx \frac{\sigma_i^2}{2}$. Hence $t$ can be considered a soft threshold that eliminates the effects of
principal components with small variances. When $t$ is small, the rescaling factors are approximately
proportional to $\mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_D^2)$, in which case $\Sigma_F$ is proportional to the covariance matrix
$XX^\top$ of the data.
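As a quick numerical sanity check of this soft-thresholding behavior (our own illustration, with hypothetical variances), one can estimate $\Sigma_F$ by Monte Carlo and compare its diagonal to the predicted factors $\sigma_i^4/(2\sigma_i^2 + t)$, up to the common prefactor:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmas = np.array([2.0, 1.0, 0.1])   # hypothetical per-coordinate std devs
    t = 0.5                               # inner Gaussian kernel width
    n = 200_000

    U = rng.normal(0, sigmas, size=(n, 3))   # u ~ N(0, diag(sigma^2)), mu = 0
    V = rng.normal(0, sigmas, size=(n, 3))   # independent copy for v
    kH = np.exp(-((U - V) ** 2).sum(1) / (2 * t))

    # Sigma_F = E[ u k_H(u, v) v^T ], estimated by the empirical average
    Sigma_F = (U * kH[:, None]).T @ V / n
    prefac = np.prod(np.sqrt(t / (2 * sigmas**2 + t)))
    print(np.diag(Sigma_F))                              # Monte Carlo estimate
    print(prefac * sigmas**4 / (2 * sigmas**2 + t))      # Theorem 2 prediction

The direction with $\sigma = 0.1 \ll \sqrt{t}$ is suppressed by several orders of magnitude, matching the soft-threshold interpretation.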
4.2 Kernel Approximation With Noise
We have seen that one special case of the Fredholm kernel achieves the effect of principal component re-scaling by using a linear kernel as the "outer" kernel $k$. In this section we give a more general
interpretation of noise suppression by the Fredholm kernel.
First, we give a simple scenario to provide some intuition behind the definition of the Fredholm kernel. Consider a standard supervised learning setting which uses the solution $f^* = \arg\min_{f \in \mathcal{H}} \frac{1}{l}\sum_{i=1}^{l}(f(x_i) - y_i)^2 + \lambda\|f\|_\mathcal{H}^2$ as the classifier. Let
$k_H^{target}$ denote the ideal kernel that we intend to use on the clean
data, which we call the target kernel from now on. Now suppose what we have are two noisy labelled points $x_e$ and $z_e$ for
"true" data $\bar{x}$ and $\bar{z}$, i.e. $x_e = \bar{x} + \varepsilon_x$, $z_e = \bar{z} + \varepsilon_z$. The
evaluation of $k_H^{target}(x_e, z_e)$ can be quite different from the true
signal $k_H^{target}(\bar{x}, \bar{z})$, leading to a suboptimal final classifier (the
red line in Figure 1(a)). On the other hand, now consider the
Fredholm kernel from Eqn 6 (or similarly from Eqn 7): $k_F(x_e, z_e) = \int\!\!\int k(x_e, u)\, p(u)\; k_H(u, v)\; k(z_e, v)\, p(v)\, du\, dv$, and set the outer kernel $k$ to be the Gaussian kernel and the inner kernel $k_H$ to be
the same as the target kernel $k_H^{target}$. We can think of $k_F(x_e, z_e)$ as an averaging of $k_H(u, v)$ over all possible pairs of data $u, v$, weighted by $k(x_e, u)p(u)$ and $k(z_e, v)p(v)$ respectively. Specifically, points
that are close to $x_e$ (resp. $z_e$) and have high density will receive larger weights. Hence the weighted
averages will be biased towards $\bar{x}$ and $\bar{z}$ respectively (which presumably lie in high density regions
around $x_e$ and $z_e$), and the value of $k_F(x_e, z_e)$ tends to provide a more accurate estimate of $k_H(\bar{x}, \bar{z})$.
See the accompanying figure for an illustration, where the arrows indicate points with stronger influence on the
computation of $k_F(x_e, z_e)$ than on $k_H(x_e, z_e)$. As a result, the classifier obtained using the Fredholm
kernel will be more resilient to noise and closer to the optimum.
(3) The proofs of this and other results can be found in the full version.
The Fredholm learning framework is rather flexible in terms of the choices of kernels k and kH .
In the remainder of this section, we will consider a few specific scenarios and provide quantitative
analysis to show the noise robustness of the Fredholm kernel.
Problem setup. Assume that we have a ground-truth distribution over the subspace spanned by
the first $d$ dimensions of the Euclidean space $R^D$. We will assume that this distribution is a single Gaussian $N(0, \lambda^2 I_d)$. Suppose this distribution is corrupted with Gaussian noise along the orthogonal subspace of dimension $D - d$. That is, for any "true" point $\bar{x}$ drawn from $N(0, \lambda^2 I_d)$,
its observation $x_e$ is drawn from $N(\bar{x}, \sigma^2 I_{D-d})$. Since the noise lies in a space orthogonal
to the data distribution, this means that any observed point, labelled or unlabeled, is sampled from
$p_X = N(0, \mathrm{diag}(\lambda^2 I_d, \sigma^2 I_{D-d}))$. We will show that the Fredholm kernel provides a better approximation to the "original" kernel given unlabeled data than simply computing the kernel on the noisy points.
We choose this basic setting in order to state the theoretical results in a clean manner. Even though
this is a Gaussian distribution over a linear subspace with noise, the framework has more general
implications, since local neighborhoods of manifolds are (almost) linear spaces.
Note: In this section we use the normalized Fredholm kernel given in Eqn 7, that is, $k_F = k_{FN}$ from now
on. The un-normalized Fredholm kernel displays similar behavior, but the bounds are trickier.
Linear kernel. First we consider the case where the target kernel $k_H^{target}(u, v)$ is the linear kernel,
$k_H^{target}(u, v) = u^\top v$. We set $k_H$ in the Fredholm kernel to also be linear, and $k$ to be the Gaussian
kernel $k(u, v) = e^{-\frac{\|u-v\|^2}{2t}}$. We will compare $k_F(x_e, z_e)$ with the target kernel on the two observed
points, that is, with $k_H^{target}(x_e, z_e)$. The goal is to estimate $k_H^{target}(\bar{x}, \bar{z})$. We will see that (1) both
$k_F(x_e, z_e)$ and (appropriately scaled) $k_H^{target}(x_e, z_e)$ are unbiased estimators of $k_H^{target}(\bar{x}, \bar{z})$; however, (2)
the variance of $k_F(x_e, z_e)$ is smaller than that of $k_H^{target}(x_e, z_e)$, making it a more precise estimator.
Theorem 3. Suppose the probability distribution for the unlabeled data is $p_X = N(0, \mathrm{diag}(\lambda^2 I_d, \sigma^2 I_{D-d}))$. For the Fredholm kernel defined in Eqn 7, we have
$$E_{x_e, z_e}\left(k_H^{target}(x_e, z_e)\right) = E_{x_e, z_e}\left(\left(\frac{t + \lambda^2}{\lambda^2}\right)^2 k_F(x_e, z_e)\right) = \bar{x}^\top \bar{z}.$$
Moreover, when $\lambda > \sigma$, $\mathrm{Var}_{x_e, z_e}\left(\left(\frac{t + \lambda^2}{\lambda^2}\right)^2 k_F(x_e, z_e)\right) < \mathrm{Var}_{x_e, z_e}\left(k_H^{target}(x_e, z_e)\right)$.
Remark: Note that we use a normalization constant for the Fredholm kernel to make it an unbiased
estimator of $\bar{x}^\top \bar{z}$. In practice, choosing the normalization is subsumed in selecting the regularization
parameter for kernel methods.
Thus the Fredholm kernel provides an approximation of the "true" linear kernel, but with
smaller variance than the actual linear kernel evaluated on the noisy data.
Gaussian kernel. We now consider the case where the target kernel is the Gaussian kernel:
$k_H^{target}(u, v) = \exp\left(-\frac{\|u-v\|^2}{2r}\right)$. To approximate this kernel, we set both $k$ and $k_H$ to be Gaussian kernels. To simplify the presentation of the results, we assume that $k$ and $k_H$ have the same kernel
width $t$. The resulting Fredholm kernel turns out to also be a Gaussian kernel, whose kernel width
depends on the choice of $t$.
Our main result is the following. Again, similar to the case of the linear kernel, the Fredholm estimate
$k_F(x_e, z_e)$ and $k_H^{target}(x_e, z_e)$ are both unbiased estimators of the target $k_H^{target}(\bar{x}, \bar{z})$ up to a constant,
but $k_F(x_e, z_e)$ has smaller variance.
Theorem 4. Suppose the probability distribution for the unlabeled data is $p_X = N(0, \mathrm{diag}(\lambda^2 I_d, \sigma^2 I_{D-d}))$. Given the target kernel $k_H^{target}(u, v) = \exp\left(-\frac{\|u-v\|^2}{2r}\right)$ with
kernel width $r > 0$, we can choose $t$, given by the equation $\frac{t(t + \lambda^2)(t + 3\lambda^2)}{\lambda^4} = r$, and two scaling
constants $c_1, c_2$, such that
$$E_{x_e, z_e}\left(c_1^{-1} k_H^{target}(x_e, z_e)\right) = E_{x_e, z_e}\left(c_2^{-1} k_F(x_e, z_e)\right) = k_H^{target}(\bar{x}, \bar{z}),$$
and when $\lambda > \sigma$, we have $\mathrm{Var}_{x_e, z_e}\left(c_1^{-1} k_H^{target}(x_e, z_e)\right) > \mathrm{Var}_{x_e, z_e}\left(c_2^{-1} k_F(x_e, z_e)\right)$.
Remark. In practice, when applying kernel methods to real-world applications, the optimal kernel
width $r$ is usually unknown and is chosen by cross-validation or other methods. Similarly, for our
Fredholm kernel, one can use cross-validation to choose the optimal $t$ for $k_F$.
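Before turning to the experiments, here is a rough Monte Carlo sketch (our own illustration, with hypothetical parameter values) of the variance-reduction claim in Theorem 3: both estimators of $\bar{x}^\top \bar{z}$ are compared over resampled noise, with $k_{FN}$ approximated from a finite unlabeled sample (for a linear inner kernel, $k_{FN}$ reduces to an inner product of kernel-smoothed points):

    import numpy as np

    rng = np.random.default_rng(1)
    d, D, lam, sig, t = 2, 10, 1.0, 0.3, 1.0
    n_unlab, n_rep = 5000, 500

    def smooth(x, X, t):
        """Kernel-weighted mean of unlabeled points around x (Eqn 5, linear k_H)."""
        w = np.exp(-((X - x) ** 2).sum(1) / (2 * t))
        return (w[:, None] * X).sum(0) / w.sum()

    # unlabeled sample from p_X = N(0, diag(lam^2 I_d, sig^2 I_{D-d}))
    scales = np.r_[lam * np.ones(d), sig * np.ones(D - d)]
    X_unlab = rng.normal(0, scales, size=(n_unlab, D))

    x_bar = np.r_[rng.normal(0, lam, d), np.zeros(D - d)]   # fixed "true" points
    z_bar = np.r_[rng.normal(0, lam, d), np.zeros(D - d)]

    plain, fred = [], []
    for _ in range(n_rep):
        xe = x_bar + np.r_[np.zeros(d), rng.normal(0, sig, D - d)]
        ze = z_bar + np.r_[np.zeros(d), rng.normal(0, sig, D - d)]
        plain.append(xe @ ze)                               # target kernel on noisy points
        fred.append(smooth(xe, X_unlab, t) @ smooth(ze, X_unlab, t))

    scale = ((t + lam**2) / lam**2) ** 2                    # unbiasing constant, Thm 3
    print("truth:", x_bar @ z_bar)
    print("means:", np.mean(plain), scale * np.mean(fred))
    print("vars :", np.var(plain), np.var(scale * np.array(fred)))

With $\lambda > \sigma$ as here, the scaled Fredholm estimate should show visibly smaller variance than the plain inner product, in line with the theorem.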
5 Experiments
Using linear and Gaussian kernels for $k$ and $k_H$ respectively, we define three instances of the
Fredholm kernel as follows:
(1) FredLin1: $k(x, z) = x^\top z$ and $k_H(x, z) = \exp\left(-\frac{\|x-z\|^2}{2r}\right)$.
(2) FredLin2: $k(x, z) = \exp\left(-\frac{\|x-z\|^2}{2r}\right)$ and $k_H(x, z) = x^\top z$.
(3) FredGauss: $k(x, z) = k_H(x, z) = \exp\left(-\frac{\|x-z\|^2}{2r}\right)$.
For the kernels in (2) and (3), which use the Gaussian kernel as the outer
kernel $k$, we can also define their normalized versions, which we
denote by FredLin2(N) and FredGauss(N) respectively. A compact sketch of these kernels is given below.
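In code, the three instances differ only in which of $k$ and $k_H$ is linear. This minimal sketch covers the unnormalized versions of Eqn 4 (the normalized variants would row-normalize the outer kernel as in Eqn 5); function names are our own:

    import numpy as np

    def lin(X, Z, r=None):
        return X @ Z.T

    def gauss(X, Z, r):
        return np.exp(-((X[:, None] - Z[None]) ** 2).sum(-1) / (2 * r))

    def fred_gram(X_lab, X_all, outer, inner, r):
        """Unnormalized Fredholm Gram matrix of Eqn 4 for a given kernel pair."""
        W = outer(X_lab, X_all, r) / X_all.shape[0]
        return W @ inner(X_all, X_all, r) @ W.T

    # the three instances used in the experiments
    fredlin1  = lambda Xl, Xa, r: fred_gram(Xl, Xa, lin,   gauss, r)
    fredlin2  = lambda Xl, Xa, r: fred_gram(Xl, Xa, gauss, lin,   r)
    fredgauss = lambda Xl, Xa, r: fred_gram(Xl, Xa, gauss, gauss, r)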
Synthetic examples: noise and cluster assumptions. To isolate the ability of Fredholm kernels to deal with noise from
the cluster assumption, we construct two synthetic examples that
violate the cluster assumption, shown in Figure 2. The figures show the
first two dimensions; multivariate Gaussian noise with variance $\sigma^2 = 0.01$ in $R^{100}$ is added. The classification boundaries are
indicated by the color. For each class, we provide several labeled
points and a large amount of unlabeled data. Note that the classification boundary in the "circle" example is non-linear.
[Figure 2: Noise but not cluster assumption. Gaussian noise in $R^{100}$ is added. Linear (above) and non-linear (below) class boundaries.]
We compare the Fredholm kernel based classifier with RLSC (Regularized Least Squares Classifier) and two widely used semi-supervised methods, the transductive support vector machine (TSVM) and
LapRLSC. Since the examples violate the cluster assumption, the
two existing semi-supervised learning algorithms, TSVM and LapRLSC, should not gain much from the unlabeled data.
For TSVM, we use the primal TSVM proposed in [4], and we use the implementation of LapRLSC given in [1]. Different numbers of labeled points are given for each class, together with another
2000 unlabeled points. To choose the optimal parameters for each method, we pick the parameters
based on their performance on the validation set, while the final classification error is computed on
the held-out testing data set. Results are reported in Tables 1 and 2, in which Fredholm kernels show
clear improvement over the other methods on the synthetic examples in terms of classification error.
Real-world data sets. Unlike artificial examples, it is usually difficult to verify whether certain
assumptions are satisfied in real-world problems. In this section, we examine the performance of
Fredholm kernels on several real-world data sets and compare it with the baseline algorithms mentioned above.
Linear kernels. Here we consider text categorization and sentiment analysis, where linear methods
are known to perform well. We use the following data (represented by TF-IDF features):
(1) 20 newsgroups: 11269 documents with 20 classes, of which we select the first 10 categories
for our experiment. (2) WebKB: the original data set contains 7746 documents with 7 unbalanced
classes, and we pick the two largest classes, with 1511 and 1079 instances respectively. (3) IMDB
movie reviews: 1000 positive and 1000 negative movie reviews from IMDB.com. (4)
Twitter sentiment data from SemEval 2013: 5173 tweets with positive, neutral and negative sentiment; we combine the neutral and negative classes to set up a binary classification problem.
Results are reported in Table 3. In Table 4, we use WebKB as an example to illustrate how the
performance changes with the number of labeled points.
Number of Labeled | RLSC       | TSVM      | LapRLSC    | FredLin1  | FredLin2(N)
8                 | 10.0(±3.9) | 5.2(±2.2) | 10.0(±3.5) | 3.7(±2.6) | 4.5(±2.1)
16                | 9.1(±1.9)  | 5.1(±1.1) | 9.1(±2.2)  | 2.9(±2.0) | 3.6(±1.9)
32                | 5.8(±3.2)  | 4.5(±0.8) | 6.0(±3.2)  | 2.3(±2.3) | 2.6(±2.2)

Table 1: Prediction error of different classifiers (linear methods) for the "two lines" example.
Number of Labeled | K-RLSC     | TSVM       | LapRLSC    | FredGauss(N)
16                | 17.4(±5.0) | 32.2(±5.2) | 17.0(±4.6) | 7.1(±2.4)
32                | 16.5(±7.1) | 29.9(±9.3) | 18.0(±6.8) | 6.0(±1.6)
64                | 8.7(±1.7)  | 20.3(±4.2) | 9.7(±2.0)  | 5.5(±0.7)

Table 2: Prediction error of different classifiers (Gaussian methods) for the "circle" example.
Gaussian kernels. We test our methods on handwritten digit recognition. The experiments use
subsets of the two handwritten digit data sets MNIST and USPS: the MNIST subset contains
10k digits in total with balanced examples for each class, and the USPS subset is the original test
set containing about 2k images. The pixel values are normalized to [0, 1] as features. Results are
reported in Table 5. In Table 6, we show that as we add additional Gaussian noise to the MNIST data,
Fredholm kernels start to show significant improvement.
Data Set | RLSC       | TSVM       | FredLin1   | FredLin2   | FredLin2(N)
Webkb    | 16.9(±1.4) | 12.7(±0.8) | 13.0(±1.3) | 12.0(±1.6) | 12.0(±1.6)
20news   | 22.2(±1.0) | 21.0(±0.9) | 20.5(±0.7) | 20.5(±0.7) | 20.5(±0.7)
IMDB     | 30.0(±2.0) | 20.2(±2.6) | 19.9(±2.3) | 21.7(±2.9) | 21.7(±2.7)
Twitter  | 38.7(±1.1) | 37.6(±1.4) | 37.4(±1.2) | 37.4(±1.2) | 37.5(±1.2)

Table 3: Error of various methods (linear) on the text data sets. 20 labeled data per class are given, with the
rest of the data set as unlabeled points. Optimal parameters for each method are used.
Number of Labeled | RLSC       | TSVM       | FredLin1   | FredLin2   | FredLin2(N)
10                | 20.7(±2.4) | 13.5(±0.5) | 14.8(±2.4) | 14.6(±2.4) | 14.6(±2.3)
20                | 16.9(±1.4) | 12.7(±0.8) | 13.0(±1.3) | 12.0(±1.6) | 12.0(±1.6)
80                | 10.9(±1.4) | 9.7(±1.0)  | 8.1(±1.0)  | 7.9(±0.9)  | 7.9(±0.9)

Table 4: Prediction error on WebKB with different numbers of labeled points.
Data Set | K-RLSC     | LapRLSC    | FredGauss  | FredGauss(N)
USPST    | 11.8(±1.4) | 10.2(±0.5) | 12.4(±1.8) | 10.8(±1.1)
MNIST    | 14.3(±1.2) | 8.6(±1.2)  | 12.2(±1.0) | 13.0(±0.9)

Table 5: Prediction error of nonlinear classifiers on MNIST and USPS. 20 labeled data per class
are given, with the rest of the data set as unlabeled points. Optimal parameters for each method are used.
Number of Labeled | K-RLSC     | LapRLSC    | FredGauss  | FredGauss(N)
10                | 34.1(±2.1) | 35.6(±3.5) | 27.9(±1.6) | 29.0(±1.5)
20                | 27.2(±1.1) | 27.3(±1.8) | 21.9(±1.2) | 22.9(±1.2)
40                | 20.0(±0.7) | 20.3(±0.8) | 17.3(±0.5) | 18.4(±0.4)
80                | 15.6(±0.4) | 15.6(±0.5) | 14.8(±0.6) | 15.4(±0.5)

Table 6: Prediction error of nonlinear classifiers on MNIST corrupted with Gaussian noise with
standard deviation 0.3, for different numbers of labeled points, from 10 to 80. Optimal parameters
for each method are used.
Acknowledgments. This work was partially supported by NSF Grants CCF-1319406 and RI-1117707. We thank the anonymous NIPS reviewers for insightful comments.
References
[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[3] Olivier Chapelle, Jason Weston, and Bernhard Schölkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 17, pages 585–592, 2003.
[4] Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In Robert G. Cowell and Zoubin Ghahramani, editors, AISTATS, pages 57–64, 2005.
[5] Arthur Gretton, Alex Smola, Jiayuan Huang, Marcel Schmittfull, Karsten Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. Dataset Shift in Machine Learning, pages 131–160, 2009.
[6] S. Grünewälder, G. Lever, L. Baldassarre, S. Patterson, A. Gretton, and M. Pontil. Conditional mean embeddings as regressors. In Proceedings of the 29th International Conference on Machine Learning, volume 2, pages 1823–1830, 2012.
[7] Steffen Grünewälder, Arthur Gretton, and John Shawe-Taylor. Smooth operators. In Proceedings of the 30th International Conference on Machine Learning, pages 1184–1192, 2013.
[8] Michiel Hazewinkel. Encyclopaedia of Mathematics, volume 4. Springer, 1989.
[9] Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, and Bernhard Schölkopf. Kernel mean shrinkage estimators. arXiv preprint arXiv:1405.5505, 2014.
[10] Qichao Que and Mikhail Belkin. Inverse density as an inverse problem: the Fredholm equation approach. In Advances in Neural Information Processing Systems 26, pages 1484–1492, 2013.
[11] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[12] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[13] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 824–831, New York, NY, USA, 2005. ACM Press.
[14] S. V. N. Vishwanathan, Alexander J. Smola, and René Vidal. Binet-Cauchy kernels on dynamical systems and its application to the analysis of dynamic scenes. International Journal of Computer Vision, 73(1):95–119, 2007.
[15] Xiaojin Zhu. Semi-supervised learning literature survey. Technical report, Computer Science, University of Wisconsin-Madison, 2005.
Scalable Kernel Methods via Doubly Stochastic Gradients
Bo Dai¹, Bo Xie¹, Niao He¹, Yingyu Liang², Anant Raj¹, Maria-Florina Balcan³, Le Song¹
¹Georgia Institute of Technology: {bodai, bxie33, nhe6, araj34}@gatech.edu, lsong@cc.gatech.edu
²Princeton University: yingyul@cs.princeton.edu
³Carnegie Mellon University: ninamf@cs.cmu.edu
Abstract
The general perception is that kernel methods are not scalable, so neural nets become the choice for large-scale nonlinear learning problems. Have we tried hard
enough for kernel methods? In this paper, we propose an approach that scales up
kernel methods using a novel concept called "doubly stochastic functional gradients". Based on the fact that many kernel methods can be expressed as convex
optimization problems, our approach solves these optimization problems by making two unbiased stochastic approximations to the functional gradient, one using
random training points and another using random features associated with the
kernel, and performing descent steps with this noisy functional gradient. Our
algorithm is simple, requires no commitment to a preset number of random features, and
allows the flexibility of the function class to grow as we see more incoming data in
the streaming setting. We demonstrate that a function learned by this procedure after $t$ iterations converges to the optimal function in the reproducing kernel Hilbert
space at rate $O(1/t)$, and achieves a generalization bound of $O(1/\sqrt{t})$. Our approach can readily scale kernel methods up to the regimes which are dominated by
neural nets. We show competitive performance of our approach as compared to
neural nets on datasets such as 2.3 million energy materials from MolecularSpace,
8 million handwritten digits from MNIST, and 1 million photos from ImageNet,
using convolution features.
1 Introduction
The general perception is that kernel methods are not scalable. When it comes to large-scale nonlinear learning problems, the methods of choice so far are neural nets, although theoretical understanding remains incomplete. Are kernel methods really not scalable? Or is it simply because we
have not tried hard enough, while neural nets have exploited sophisticated designs of feature architectures, virtual example generation for dealing with invariance, stochastic gradient descent for efficient
training, and GPUs for further speedup?
A bottleneck in scaling up kernel methods comes from the storage and computation cost of the
dense kernel matrix, $K$. Storing the matrix requires $O(n^2)$ space, and computing it takes $O(n^2 d)$
operations, where $n$ is the number of data points and $d$ is the dimension. There have been many great
attempts to scale up kernel methods, including efforts from the perspectives of numerical linear algebra,
functional analysis, and numerical optimization.
A common numerical linear algebra approach is to approximate the kernel matrix using low-rank
factorizations, $K \approx A^\top A$, with $A \in R^{r \times n}$ and rank $r \le n$. This low-rank approximation allows
subsequent kernel algorithms to directly operate on $A$, but computing the approximation requires
$O(nr^2 + nrd)$ operations. Much work has followed this strategy, including greedy basis selection
techniques [1], Nyström approximation [2] and incomplete Cholesky decomposition [3]. In practice, one observes that kernel methods with approximated kernel matrices often lose a few
percentage points in performance. In fact, without further assumptions on the regularity of the
kernel matrix, the generalization ability after using a low-rank approximation is typically of order
$O(1/\sqrt{r} + 1/\sqrt{n})$ [4, 5], which implies that the rank needs to be nearly linear in the number of
data points! Thus, in order for kernel methods to achieve the best generalization ability, low-rank
approximation based approaches immediately become impractical for big datasets because of their
$O(n^3 + n^2 d)$ preprocessing time and $O(n^2)$ storage.
Random feature approximation is another popular approach for scaling up kernel methods [6, 7].
The method directly approximates the kernel function instead of the kernel matrix using explicit
feature maps. The advantage of this approach is that the random feature matrix for $n$ data points
can be computed in time $O(nrd)$ using $O(nr)$ storage, where $r$ is the number of random features.
Subsequent algorithms then only need to operate on an $O(nr)$ matrix. Similar to the low-rank kernel
matrix approximation approach, the generalization ability of this approach is of order
$O(1/\sqrt{r} + 1/\sqrt{n})$ [8, 9], which implies that the number of random features also needs to be $O(n)$. Another
common drawback of these two approaches is that adapting the solution from a small $r$ to a large
$r'$ is not easy if one wants to increase the rank of the approximated kernel matrix or the number of
random features for better generalization ability. Special procedures need to be designed to reuse
the solution obtained from a small $r$, which is not straightforward.
Another approach that addresses the scalability issue arises from the optimization perspective. One
general strategy is to solve the dual forms of kernel methods using block-coordinate descent (e.g., [10, 11, 12]). Each iteration of this algorithm only incurs $O(nrd)$ computation and $O(nr)$
storage, where $r$ is the block size. A second strategy is to perform functional gradient descent
based on a batch of data points at each epoch (e.g., [13, 14]). Thus, the computation and storage
required in each iteration are also $O(nrd)$ and $O(nr)$, respectively, where $r$ is the batch size. These
approaches can straightforwardly adapt to a different $r$ without restarting the optimization procedure, and exhibit no generalization loss since they do not approximate the kernel matrix or function.
However, a serious drawback of these approaches is that, without further approximation, all support
vectors need to be stored for testing, which can be as big as the entire training set! (e.g., kernel ridge
regression and non-separable nonlinear classification problems.)
In summary, there exists a delicate trade-off between computation, storage and statistics when
scaling up kernel methods. Inspired by various previous efforts, we propose a simple yet general
strategy that scales up many kernel methods using a novel concept called "doubly stochastic
functional gradients". Our method relies on the fact that most kernel methods can be expressed
as convex optimization problems over functions in reproducing kernel Hilbert spaces (RKHS)
and solved via functional gradient descent. Our algorithm proceeds by making two unbiased
stochastic approximations to the functional gradient, one using random training points and another
using random functions associated with the kernel, and then descending using this noisy functional
gradient. The key intuitions behind our algorithm originate from (i) the property of the stochastic
gradient descent algorithm that as long as the stochastic gradient is unbiased, the convergence of
the algorithm is guaranteed [15]; and (ii) the property of pseudo-random number generators that the
random samples can in fact be completely determined by an initial value (a seed). We exploit these
properties and enable kernel methods to achieve better balances between computation, storage,
and statistics. Our method interestingly integrates kernel methods, functional analysis, stochastic
optimization, and algorithmic tricks, and it possesses a number of desiderata:
Generality and simplicity. Our approach applies to many kernel methods, such as kernel versions of
ridge regression, support vector machines, logistic regression and the two-sample test, as well as many
different types of kernels, such as shift-invariant, polynomial, and general inner product kernels.
The algorithm can be summarized in just a few lines of code (Algorithms 1 and 2). For a different problem and kernel, we just need to replace the loss function and the random feature generator.
Flexibility. While previous approaches based on random features typically require a prefixed number
of features, our approach allows the number of random features, and hence the flexibility of
the function class, to grow with the number of data points. Therefore, unlike previous random
feature approaches, our approach applies to the data streaming setting and achieves the full potential of
nonparametric methods.
Efficient computation. The key computation of our method comes from evaluating the doubly
stochastic functional gradient, which involves the generation of the random features given specific
seeds and the evaluation of these features on a small batch of data points. At iteration $t$, the
computational complexity is $O(td)$.
Small memory. While most approaches require saving all the support vectors, our algorithm
avoids keeping the support vectors, since it only requires a small program to regenerate
the random features and sample historical features according to specific random seeds. At
iteration $t$, the memory needed is $O(t)$, independent of the dimension of the data.
Theoretical guarantees. We provide a novel and nontrivial analysis involving Hilbert space
martingales and a newly proved recurrence relation, and demonstrate that the estimator produced
by our algorithm, which might be outside of the RKHS, converges to the optimal RKHS function.
More specifically, both in expectation and with high probability, our algorithm estimates the optimal
function in the RKHS at rate $O(1/t)$ and achieves a generalization bound of $O(1/\sqrt{t})$,
which are indeed optimal [15]. The variance of the random features, introduced in our second
approximation to the functional gradient, only contributes additively to the constant in the convergence rate. These results are the first of their kind in the literature and could be of independent interest.
Strong empirical performance. Our algorithm can readily scale kernel methods up to regimes
which were previously dominated by neural nets. We show that our method compares favorably to
other scalable kernel methods on medium scale datasets, and to neural nets on big datasets with
millions of data points.
In the remainder, we first introduce preliminaries on kernel methods and functional gradients.
We then describe our algorithm and provide both theoretical and empirical support.
2 Duality between Kernels and Random Processes
Kernel methods owe their name to the use of kernel functions, $k(x, x') : \mathcal{X} \times \mathcal{X} \mapsto R$, which are
symmetric positive definite (PD), meaning that for all $n \ge 1$, $x_1, \ldots, x_n \in \mathcal{X}$, and $c_1, \ldots, c_n \in R$,
we have $\sum_{i,j=1}^{n} c_i c_j k(x_i, x_j) \ge 0$. There is an intriguing duality between kernels and stochastic
processes which will play a crucial role in our algorithm design later. More specifically,
Theorem 1 (e.g., Devinatz [16]; Hein & Bousquet [17]) If $k(x, x')$ is a PD kernel, then there
exists a set $\Omega$, a measure $P$ on $\Omega$, and a random function $\phi_\omega(x) : \mathcal{X} \mapsto R$ from $L_2(\Omega, P)$, such that
$k(x, x') = \int_\Omega \phi_\omega(x)\, \phi_\omega(x')\, dP(\omega)$.
Essentially, the above integral representation relates the kernel function to a random process $\omega$ with
measure $P(\omega)$. Note that the integral representation may not be unique. For instance, the random
process can be a Gaussian process on $\mathcal{X}$ with sample function $\phi_\omega(x)$, and $k(x, x')$ is simply
the covariance function between two points $x$ and $x'$. If the kernel is also continuous and shift
invariant, i.e., $k(x, x') = k(x - x')$ for $x \in R^d$, then the integral representation specializes into a
form characterized by the inverse Fourier transform (e.g., [18, Theorem 6.6]):
Theorem 2 (Bochner) A continuous, real-valued, symmetric and shift-invariant function $k(x - x')$
on $R^d$ is a PD kernel if and only if there is a finite non-negative measure $P(\omega)$ on $R^d$, such that
$$k(x - x') = \int_{R^d} e^{i\omega^\top (x - x')}\, dP(\omega) = \int_{R^d \times [0, 2\pi]} 2 \cos(\omega^\top x + b)\, \cos(\omega^\top x' + b)\, d\left(P(\omega) \times P(b)\right),$$
where $P(b)$ is a uniform distribution on $[0, 2\pi]$, and $\phi_\omega(x) = \sqrt{2} \cos(\omega^\top x + b)$.
For the Gaussian RBF kernel, $k(x - x') = \exp(-\|x - x'\|^2 / 2\sigma^2)$, this yields a Gaussian distribution
$P(\omega)$ with density proportional to $\exp(-\sigma^2 \|\omega\|^2 / 2)$; for the Laplace kernel, this yields a Cauchy
distribution; and for the Matérn kernel, this yields convolutions of the unit ball [19]. Similar
representations, where the explicit forms of $\phi_\omega(x)$ and $P(\omega)$ are known, can also be derived for rotation
invariant kernels, $k(x, x') = k(\langle x, x'\rangle)$, using Fourier transformation on the sphere [19]. For polynomial
kernels, $k(x, x') = (\langle x, x'\rangle + c)^p$, a random tensor sketching approach can also be used [20].
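As a concrete instance of Theorem 2, the Gaussian RBF kernel can be approximated by Monte Carlo averaging of the random features $\phi_\omega(x) = \sqrt{2}\cos(\omega^\top x + b)$. A minimal sketch (toy dimensions and the variable names are our own):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, sigma = 5, 2000, 1.0

    x, xp = rng.normal(size=d), rng.normal(size=d)

    # omega ~ N(0, I/sigma^2) matches density proportional to exp(-sigma^2 ||omega||^2 / 2)
    omega = rng.normal(0, 1.0 / sigma, size=(m, d))
    b = rng.uniform(0, 2 * np.pi, size=m)

    phi = lambda v: np.sqrt(2) * np.cos(omega @ v + b)     # random features
    approx = phi(x) @ phi(xp) / m                          # Monte Carlo average
    exact = np.exp(-np.linalg.norm(x - xp) ** 2 / (2 * sigma ** 2))
    print(approx, exact)                                   # should be close

Increasing the number of features m tightens the approximation, which is exactly the trade-off the doubly stochastic algorithm below sidesteps by growing the feature set with the iterations.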
Instead of finding the random process $P(\omega)$ and functions $\phi_\omega(x)$ given a kernel, one can go in the
reverse direction and construct kernels from random processes and functions (e.g., Wendland [18]).
Theorem 3 If $k(x, x') = \int_\Omega \phi_\omega(x)\, \phi_\omega(x')\, dP(\omega)$ for a nonnegative measure $P(\omega)$ on $\Omega$ and
$\phi_\omega(x) : \mathcal{X} \mapsto R$ from $L_2(\Omega, P)$, then $k(x, x')$ is a PD kernel.
For instance, $\phi_\omega(x) := \cos(\omega^\top \psi_\theta(x) + b)$, where $\psi_\theta(x)$ can be a random convolution of the input $x$
parametrized by $\theta$. Another important concept is the reproducing kernel Hilbert space (RKHS). An
RKHS $\mathcal{H}$ on $\mathcal{X}$ is a Hilbert space of functions from $\mathcal{X}$ to $R$. $\mathcal{H}$ is an RKHS if and only if there exists
a $k(x, x') : \mathcal{X} \times \mathcal{X} \mapsto R$ such that $\forall x \in \mathcal{X},\ k(x, \cdot) \in \mathcal{H}$, and $\forall f \in \mathcal{H},\ \langle f(\cdot), k(x, \cdot)\rangle_\mathcal{H} = f(x)$.
If such a $k(x, x')$ exists, it is unique and it is a PD kernel. A function $f \in \mathcal{H}$ if and only if
$\|f\|_\mathcal{H}^2 := \langle f, f\rangle_\mathcal{H} < \infty$, and its $L_2$ norm is dominated by the RKHS norm, $\|f\|_{L_2} \le \|f\|_\mathcal{H}$.
3 Doubly Stochastic Functional Gradients
Many kernel methods can be written as convex optimization problems over functions in the RKHS
and solved using functional gradient methods [13, 14]. Inspired by these previous works, we
introduce a novel concept called "doubly stochastic functional gradients" to address the scalability
issue. Let $l(u, y)$ be a scalar loss function that is convex in $u \in R$, and let the subgradient of $l(u, y)$ with
respect to $u$ be $l'(u, y)$. Given a PD kernel $k(x, x')$ and the associated RKHS $\mathcal{H}$, many kernel
methods try to find a function $f_* \in \mathcal{H}$ which solves the optimization problem
$$\arg\min_{f \in \mathcal{H}} R(f) := E_{(x,y)}[l(f(x), y)] + \frac{\nu}{2} \|f\|_\mathcal{H}^2 \quad\Longleftrightarrow\quad \arg\min_{\|f\|_\mathcal{H} \le B(\nu)} E_{(x,y)}[l(f(x), y)], \qquad (1)$$
where $\nu > 0$ is a regularization parameter, $B(\nu)$ is a non-increasing function of $\nu$, and the data
$(x, y)$ follow a distribution $P(x, y)$. The functional gradient $\nabla R(f)$ is defined as the linear term in
the change of the objective after we perturb $f$ by $\epsilon$ in the direction of $g$, i.e.,
$$R(f + \epsilon g) = R(f) + \epsilon\, \langle \nabla R(f), g\rangle_\mathcal{H} + O(\epsilon^2). \qquad (2)$$
For instance, applying the above definition, we have $\nabla f(x) = \nabla\, \langle f, k(x, \cdot)\rangle_\mathcal{H} = k(x, \cdot)$, and
$\nabla \|f\|_\mathcal{H}^2 = \nabla\, \langle f, f\rangle_\mathcal{H} = 2f$.
Stochastic functional gradient. Given a data point $(x, y) \sim P(x, y)$ and $f \in \mathcal{H}$, the stochastic
functional gradient of $E_{(x,y)}[l(f(x), y)]$ with respect to $f \in \mathcal{H}$ is
$$\xi(\cdot) := l'(f(x), y)\, k(x, \cdot), \qquad (3)$$
which is essentially a single data point approximation to the true functional gradient. Furthermore,
for any $g \in \mathcal{H}$, we have $\langle \xi(\cdot), g\rangle_\mathcal{H} = l'(f(x), y)\, g(x)$. Inspired by the duality between kernel functions and random processes, we can make an additional approximation to the stochastic functional
gradient using a random function $\phi_\omega(x)$ sampled according to $P(\omega)$. More specifically,
Doubly stochastic functional gradient. Let $\omega \sim P(\omega)$; then the doubly stochastic gradient of
$E_{(x,y)}[l(f(x), y)]$ with respect to $f \in \mathcal{H}$ is
$$\zeta(\cdot) := l'(f(x), y)\, \phi_\omega(x)\, \phi_\omega(\cdot). \qquad (4)$$
Note that the stochastic functional gradient $\xi(\cdot)$ is in the RKHS $\mathcal{H}$, but $\zeta(\cdot)$ may be outside $\mathcal{H}$, since
$\phi_\omega(\cdot)$ may be outside the RKHS. For instance, for the Gaussian RBF kernel, the random function
$\phi_\omega(x) = \sqrt{2} \cos(\omega^\top x + b)$ is outside the RKHS associated with the kernel function.
However, these functional gradients are related by $\xi(\cdot) = E_\omega[\zeta(\cdot)]$, which leads to unbiased estimators of the original functional gradient, i.e.,
$$\nabla R(f) = E_{(x,y)}[\xi(\cdot)] + \nu f(\cdot), \quad\text{and}\quad \nabla R(f) = E_{(x,y)} E_\omega[\zeta(\cdot)] + \nu f(\cdot). \qquad (5)$$
We emphasize that the source of randomness associated with the random function is not present
in the data, but is artificially introduced by us. This is crucial for the development of our scalable
algorithm in the next section. Meanwhile, it also creates additional challenges in the analysis of the
algorithm, which we will deal with carefully.
4 Doubly Stochastic Kernel Machines

Algorithm 1: $\{\alpha_i\}_{i=1}^t$ = Train($P(x, y)$)
Require: $P(\omega)$, $\phi_\omega(x)$, $l(f(x), y)$, $\nu$.
1: for $i = 1, \ldots, t$ do
2:   Sample $(x_i, y_i) \sim P(x, y)$.
3:   Sample $\omega_i \sim P(\omega)$ with seed $i$.
4:   $f(x_i)$ = Predict($x_i$, $\{\alpha_j\}_{j=1}^{i-1}$).
5:   $\alpha_i = -\gamma_i\, l'(f(x_i), y_i)\, \phi_{\omega_i}(x_i)$.
6:   $\alpha_j = (1 - \gamma_i \nu)\, \alpha_j$ for $j = 1, \ldots, i - 1$.
7: end for

Algorithm 2: $f(x)$ = Predict($x$, $\{\alpha_i\}_{i=1}^t$)
Require: $P(\omega)$, $\phi_\omega(x)$.
1: Set $f(x) = 0$.
2: for $i = 1, \ldots, t$ do
3:   Sample $\omega_i \sim P(\omega)$ with seed $i$.
4:   $f(x) = f(x) + \alpha_i\, \phi_{\omega_i}(x)$.
5: end for
The first key intuition behind our algorithm originates from the property of the stochastic gradient descent algorithm that as long as the stochastic gradient is bounded and unbiased, the convergence of
the algorithm is guaranteed [15]. In our algorithm, we exploit this property and introduce two
sources of randomness, one from data and another artificial, to scale up kernel methods.
The second key intuition behind our algorithm is that the random functions used in the doubly
stochastic functional gradients are sampled according to pseudo-random number generators,
where the sequences of apparently random samples can in fact be completely determined by an
initial value (a seed). Although these random samples are not "true" random samples in the
purest sense of the word, they suffice for our task in practice.
To be more specific, our algorithm proceeds by making two stochastic approximations to the functional gradient in each iteration, and then descending using this noisy functional gradient. The overall
algorithms for training and prediction are summarized in Algorithms 1 and 2. The training algorithm essentially just performs sampling of random functions and evaluation of doubly stochastic
gradients, and maintains a collection of real numbers $\{\alpha_i\}$, which is computationally efficient and
memory friendly. A crucial step in the algorithm is to sample the random functions with "seed $i$".
The seeds have to be aligned between training and prediction, and with the corresponding $\alpha_i$ obtained in each iteration. The learning rate $\gamma_t$ in the algorithm needs to be chosen as $O(1/t)$, as
shown by our later analysis, to achieve the best rate of convergence. For now, we assume that we
have access to the data generating distribution $P(x, y)$. This can be modified to sampling uniformly
at random from a fixed dataset, without affecting the algorithm or the later convergence analysis. A minimal Python sketch of these two routines is given below.
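The following is a minimal sketch of Algorithms 1 and 2 for the Gaussian RBF kernel with the square loss; the instantiation (loss, kernel, hyperparameter values) is our own illustrative choice. The point of the construction is that each $\omega_i$ is regenerated from its seed rather than stored:

    import numpy as np

    def feature(x, i, sigma=1.0):
        """Regenerate the random feature for iteration i from its seed."""
        rng = np.random.default_rng(i)                 # seed i fixes omega_i, b_i
        omega = rng.normal(0, 1.0 / sigma, size=x.shape[0])
        b = rng.uniform(0, 2 * np.pi)
        return np.sqrt(2) * np.cos(x @ omega + b)

    def predict(x, alphas):
        return sum(a * feature(x, i + 1) for i, a in enumerate(alphas))

    def train(X, y, t, nu=1e-4, theta=1.0):
        n, _ = X.shape
        alphas = []
        for i in range(1, t + 1):
            j = np.random.randint(n)                   # sample a data point
            f_xi = predict(X[j], alphas)               # uses regenerated features
            gamma = theta / i                          # step size O(1/t)
            grad = 2 * (f_xi - y[j])                   # l'(u, y) for the square loss
            alphas = [(1 - gamma * nu) * a for a in alphas]
            alphas.append(-gamma * grad * feature(X[j], i))
        return alphas

This sketch favors clarity over speed: it re-derives features one at a time, so prediction inside the loop costs $O(t)$; in practice features are generated and evaluated in mini-batch blocks.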
Let the sampled data and random function parameters after $t$ iterations be $D^t := \{(x_i, y_i)\}_{i=1}^t$ and $\omega^t := \{\omega_i\}_{i=1}^t$,
respectively. The function obtained by Algorithm 1 is a simple additive form of the
doubly stochastic functional gradients
$$f_{t+1}(\cdot) = f_t(\cdot) - \gamma_t \left(\zeta_t(\cdot) + \nu f_t(\cdot)\right) = \sum_{i=1}^{t} a_t^i\, \zeta_i(\cdot), \quad \forall t > 1, \quad\text{and}\quad f_1(\cdot) = 0, \qquad (6)$$
where $a_t^i = -\gamma_i \prod_{j=i+1}^{t} (1 - \gamma_j \nu)$ are deterministic values depending on the step sizes $\gamma_j$ ($i \le j \le t$) and the regularization parameter $\nu$. This simple form makes it easy for us to analyze its convergence.
We note that our algorithm can also take a mini-batch of points and random functions at each step,
and estimate an empirical covariance for preconditioning to achieve potentially better performance.
5 Theoretical Guarantees
In this section, we show that, both in expectation and with high probability, our algorithm
can estimate the optimal function in the RKHS at rate $O(1/t)$ and achieve a generalization
bound of $O(1/\sqrt{t})$. The analysis of our algorithm has a new twist compared to previous analyses
of stochastic gradient descent algorithms, since the random function approximation results in
an estimator which is outside the RKHS. Besides the analysis for stochastic functional gradient
descent, we need to use martingales and the corresponding concentration inequalities to prove that
the sequence of estimators, $f_{t+1}$, outside the RKHS converges to the optimal function, $f_*$, in the
RKHS. We make the following standard assumptions for later reference:
A. There exists an optimal solution, denoted $f_*$, to the problem of our interest (1).
B. The loss function $l(u, y) : R \times R \to R$ and its first-order derivative are $L$-Lipschitz continuous
in terms of the first argument.
C. For any data $\{(x_i, y_i)\}_{i=1}^t$ and any trajectory $\{f_i(\cdot)\}_{i=1}^t$, there exists $M > 0$ such that
$|l'(f_i(x_i), y_i)| \le M$. Note that in our situation $M$ exists and $M < \infty$, since we assume a
bounded domain and the functions $f_t$ we generate are always bounded as well.
D. There exist $\kappa > 0$ and $\phi > 0$ such that $k(x, x') \le \kappa$ and $|\phi_\omega(x)\phi_\omega(x')| \le \phi$, $\forall x, x' \in \mathcal{X}$, $\omega \in \Omega$. For example, when $k(\cdot, \cdot)$ is the Gaussian RBF kernel, we have $\kappa = 1$, $\phi = 2$.
We now present our main theorems below. Due to space restrictions, we only provide a
short sketch of the proofs here; the full proofs are given in the appendix.
Theorem 4 (Convergence in expectation) When $\gamma_t = \frac{\theta}{t}$ with $\theta > 0$ such that $\theta\nu \in (1, 2) \cup \mathbb{Z}_+$,
$$E_{D^t, \omega^t}\left[|f_{t+1}(x) - f_*(x)|^2\right] \le \frac{2C^2 + 2\kappa Q_1^2}{t}, \quad \text{for any } x \in \mathcal{X},$$
where $Q_1 = \max\left\{\|f_*\|_\mathcal{H},\ \left(Q_0 + \sqrt{Q_0^2 + (2\theta\nu - 1)(1 + \theta\nu)^2 \theta^2 \kappa M^2}\right) / (2\theta\nu - 1)\right\}$, with $Q_0 = 2\sqrt{2}\, \kappa^{1/2} (\kappa + \phi) L M \theta^2$, and $C^2 = 4(\kappa + \phi)^2 M^2 \theta^2$.
Theorem 5 (Convergence with high probability) When $\gamma_t = \frac{\theta}{t}$ with $\theta > 0$ such that $\theta\nu \in \mathbb{Z}_+$,
for any $x \in \mathcal{X}$, we have with probability at least $1 - 3\delta$ over $(D^t, \omega^t)$,
$$|f_{t+1}(x) - f_*(x)|^2 \le \frac{C^2 \ln(2/\delta)}{t} + \frac{2\kappa Q_2^2 \ln(2t/\delta) \ln^2(t)}{t},$$
where $C$ is as above and $Q_2 = \max\left\{\|f_*\|_\mathcal{H},\ Q_0 + \sqrt{Q_0^2 + \kappa M^2 (1 + \theta\nu)^2 (\theta^2 + 16\theta/\nu)}\right\}$, with
$Q_0 = 4\sqrt{2}\, \kappa^{1/2} M \theta \left(8 + (\kappa + \phi)\theta L\right)$.
Proof sketch: We focus on the convergence in expectation; the high-probability bound can be established in a similar fashion. The main technical difficulty is that f_{t+1} may not be in the RKHS H. The key of the proof is then to construct an intermediate function h_{t+1} such that both the difference between f_{t+1} and h_{t+1} and the difference between h_{t+1} and f_* can be bounded. More specifically,

    h_{t+1}(·) = h_t(·) − γ_t (ξ_t(·) + ν h_t(·)) = Σ_{i=1}^t a_i^t ξ_i(·),  ∀t > 1, and h_1(·) = 0,    (7)

where ξ_t(·) = E_{ω_t}[ζ_t(·)]. Then for any x, the error can be decomposed into two terms:

    |f_{t+1}(x) − f_*(x)|² ≤ 2 |f_{t+1}(x) − h_{t+1}(x)|² + 2κ ‖h_{t+1} − f_*‖_H²,

where the first term is the error due to random functions and the second is the error due to random data.
For the error term due to random functions, h_{t+1} is constructed such that f_{t+1} − h_{t+1} is a martingale, and the step sizes are chosen such that |a_i^t| ≤ θ/t, which allows us to bound the martingale. In other words, the choices of the step sizes keep f_{t+1} close to the RKHS. For the error term due to random data, since h_{t+1} ∈ H, we can now apply the standard arguments for stochastic approximation in the RKHS. Due to the additional randomness, the recursion is slightly more complicated:

    e_{t+1} ≤ (1 − 2θν/t) e_t + β₁ √(e_t)/t² + β₂/t²,

where e_{t+1} = E_{D^t,ω^t}[‖h_{t+1} − f_*‖_H²], and β₁ and β₂ depend on the related parameters. Solving this recursion then leads to a bound for the second error term.
Theorem 6 (Generalization bound). Let the true risk be R_true(f) = E_{(x,y)}[ℓ(f(x), y)]. Then with probability at least 1 − 3δ over (D^t, ω^t), and with C and Q₂ defined as previously,

    R_true(f_{t+1}) − R_true(f_*) ≤ ( C √(ln(8√(et)/δ)) + √(2κ) Q₂ √(ln(2t/δ)) ln(t) ) L / √t.

Proof. By the Lipschitz continuity of ℓ(·, y) and Jensen's inequality, we have

    R_true(f_{t+1}) − R_true(f_*) ≤ L E_x |f_{t+1}(x) − f_*(x)| ≤ L √(E_x |f_{t+1}(x) − f_*(x)|²) = L ‖f_{t+1} − f_*‖₂.

Again, ‖f_{t+1} − f_*‖₂² can be decomposed into two terms, O(‖f_{t+1} − h_{t+1}‖₂²) and O(κ‖h_{t+1} − f_*‖_H²), which can be bounded similarly as in Theorem 5 (see Corollary 12 in the appendix).
Remarks. The overall rate of convergence in expectation, which is O(1/t), is indeed optimal. Classical complexity theory (see, e.g., references in [15]) shows that to obtain an ε-accuracy solution, the number of iterations needed by stochastic approximation is Ω(1/ε) for the strongly convex case and Ω(1/ε²) for the general convex case. Different from the classical setting of stochastic approximation, our case imposes not one but two sources of randomness/stochasticity in the gradient, which, intuitively speaking, might require a higher-order number of iterations for the general convex case. However, our method is still able to achieve the same rate as in the classical setting. The rate of the generalization bound is also nearly optimal up to log factors. However, these bounds may be further refined with more sophisticated techniques and analysis. For example, mini-batching and preconditioning can be used to reduce the constant factors in the bounds significantly; this analysis is left for future study. Theorem 4 also yields bounds in the L∞ and L2 sense, as shown in Section A.2 of the appendix. The choices of step sizes γ_t and the tuning parameters given in these bounds are only sufficient conditions chosen for simple analysis; other choices can also lead to bounds of the same order.
6 Computation, Storage and Statistics Trade-off
To investigate the computation, storage and statistics trade-off, we will fix the desired L2 error in the function estimation to ε, i.e., ‖f − f_*‖₂² ≤ ε, and work out the dependency of the other quantities on ε. These other quantities include the preprocessing time, the number of samples and random features (or rank), the number of iterations of each algorithm, and the computational cost and storage requirement for learning and prediction. We assume that the number of samples, n, needed to achieve the prescribed error ε is of the order O(1/ε), the same for all methods. Furthermore, we make no other regularity assumption about margin properties or the kernel matrix, such as fast spectral decay. Thus the required number of random features (or rank) r will be of the order O(n) = O(1/ε) [4, 5, 8, 9].
We will pick a few representative algorithms for comparison, namely: (i) NORMA [13]: kernel methods trained with stochastic functional gradients; (ii) k-SDCA [12]: a kernelized version of stochastic dual coordinate ascent; (iii) r-SDCA: first approximate the kernel function with random features, then run stochastic dual coordinate ascent; (iv) n-SDCA: first approximate the kernel matrix using Nyström's method, then run stochastic dual coordinate ascent; similarly, we combine the Pegasos algorithm [21] with random features and Nyström's method to obtain (v) r-Pegasos and (vi) n-Pegasos, respectively. The comparisons are summarized in the table below.

From the table, one can see that our method, r-SDCA and r-Pegasos achieve the best dependency on the dimension d of the data. However, one is often interested in increasing the number of random features as more data points are observed, so as to obtain better generalization ability. Special procedures would then need to be designed for updating the r-SDCA and r-Pegasos solutions, and it is not clear to us how to implement these easily and efficiently.
| Algorithms | Preprocessing Computation | Total Computation Cost (Training) | Total Computation Cost (Prediction) | Total Storage Cost (Training) | Total Storage Cost (Prediction) |
| Doubly SGD | O(1) | O(d/ε²) | O(d/ε) | O(1/ε) | O(1/ε) |
| NORMA/k-SDCA | O(1) | O(d/ε²) | O(d/ε) | O(d/ε) | O(d/ε) |
| r-Pegasos/r-SDCA | O(1) | O(d/ε²) | O(d/ε) | O(1/ε) | O(1/ε) |
| n-Pegasos/n-SDCA | O(1/ε³) | O(d/ε²) | O(d/ε) | O(1/ε) | O(1/ε) |

7 Experiments
We show that our method compares favorably to other kernel methods on medium-scale datasets and to neural nets on large-scale datasets. We examined both regression and classification problems with smooth and almost-smooth loss functions. Below is a summary of the datasets used¹; a more detailed description of these datasets and the experimental settings can be found in the appendix.
| Name | Model | # of samples | Input dim | Output range | Virtual |
| (1) Adult | K-SVM | 32K | 123 | {−1, 1} | no |
| (2) MNIST 8M 8 vs. 6 [25] | K-SVM | 1.6M | 784 | {−1, 1} | yes |
| (3) Forest | K-SVM | 0.5M | 54 | {−1, 1} | no |
| (4) MNIST 8M [25] | K-logistic | 8M | 1568 | {0, …, 9} | yes |
| (5) CIFAR 10 [26] | K-logistic | 60K | 2304 | {0, …, 9} | yes |
| (6) ImageNet [27] | K-logistic | 1.3M | 9216 | {0, …, 999} | yes |
| (7) QuantumMachine [28] | K-ridge | 6K | 276 | [−800, −2000] | yes |
| (8) MolecularSpace [28] | K-ridge | 2.3M | 2850 | [0, 13] | no |
Experiment settings. For datasets (1)–(3), we compare the algorithms discussed in Section 6. For the algorithms based on low-rank kernel matrix approximation and random features, i.e., Pegasos and SDCA, we set the rank and the number of random features to 2^8. We use the same batch size for both our algorithm and the competitors. We stop the algorithms when they have passed through the entire dataset once. This stopping criterion (SC1) is designed to test our conjecture that the bottleneck in the performance of the vanilla methods with explicit features comes from the accuracy of the kernel approximation. To this end, we investigate the performance of these algorithms under different levels of random feature approximation but with the same number of training samples. To further investigate the computational efficiency of the proposed algorithm, we also conduct experiments where we stop all algorithms within the same time budget (SC2). Due to space limitations, the comparisons on a synthetic regression dataset under SC1 and on (1)–(3) under SC2 are illustrated in Appendix B.2. We do not count the preprocessing time of Nyström's method for n-Pegasos and n-SDCA. The algorithms are executed on a machine with 16 AMD 2.4GHz Opteron CPUs and 200GB of memory. Note that this allows NORMA and k-SDCA to keep all the data in memory.
We report our numerical results in Figure 1(1)–(8), with explanations stated below. For full details of our experimental setup, please refer to Section B.1 in the appendix.
Adult. The result is illustrated in Figure 1(1). NORMA and k-SDCA achieve the best error rate,
15%, while our algorithm achieves a comparable rate, 15.3%.
¹ A "yes" in the last column means that virtual examples are generated for training. K-ridge stands for kernel ridge regression; K-SVM stands for kernel SVM; K-logistic stands for kernel logistic regression.
[Figure 1: eight panels. (1) Adult and (2) MNIST 8M 8 vs. 6 plot test error (%) against training time (sec); (3) Forest, (4) MNIST 8M, (5) CIFAR 10 and (6) ImageNet plot test error (%) against the number of training samples; (7) QuantumMachine plots MAE (kcal/mole) and (8) MolecularSpace plots PCE (%) against the number of training samples. Methods shown include NORMA, k-SDCA, 2^8 r-Pegasos, 2^8 r-SDCA, 2^8 n-Pegasos, 2^8 n-SDCA, fixed and jointly-trained neural nets, and doubly SGD.]
Figure 1: Experimental results for datasets (1)–(8).
MNIST 8M 8 vs. 6. The result is shown in Figure 1(2). Our algorithm achieves the best test error, 0.26%. Compared to the methods using the full kernel, the methods using random/Nyström features achieve better test errors, probably because of the underlying low-rank structure of the dataset.
Forest. The result is shown in Figure 1(3). Our algorithm achieves a test error of about 15%, much better than n/r-Pegasos and n/r-SDCA. Considering the trade-off between cost and accuracy, our method is preferable in this scenario, i.e., huge datasets with sophisticated decision boundaries.
MNIST 8M. The result is shown in Figure 1(4). Better than the 0.6% error provided by the fixed and jointly-trained neural nets, our method reaches an error of 0.5% very quickly.
CIFAR 10. The result is shown in Figure 1(5). We compare our algorithm to a neural net with two convolution layers (after contrast normalization and max-pooling layers) and two local layers that achieves 11% test error. The specification is at https://code.google.com/p/cuda-convnet/. Our method achieves comparable performance but is much faster.
ImageNet. The result is shown in Figure 1(6). Our method achieves a test error of 44.5% by further max-voting over 10 transformations of the test set, while the jointly-trained neural net arrives at 42% (without variations in color and illumination), and the fixed neural net only achieves 46% with max-voting.
QuantumMachine/MolecularSpace. The results are shown in Figures 1(7) and 1(8). On dataset (7), our method achieves a mean absolute error of 2.97 kcal/mole, outperforming neural nets (3.51 kcal/mole); this is close to the 1 kcal/mole required for chemical accuracy. Moreover, the comparison on dataset (8) is the first in the literature, and our method is still comparable with the neural net.
Acknowledgement
M.B. is supported in part by NSF CCF-0953192, CCF-1451177, CCF-1101283, and CCF-1422910, ONR N00014-09-1-0751, and AFOSR FA9550-09-1-0538. L.S. is supported in part by NSF IIS-1116886, NSF/NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, and a Raytheon Faculty Fellowship.
References
[1] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In ICML, 2000.
[2] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, NIPS, 2000.
[3] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, 2001.
[4] P. Drineas and M. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR, 6:2153–2175, 2005.
[5] C. Cortes, M. Mohri, and A. Talwalkar. On the impact of kernel approximation on learning accuracy. In AISTATS, 2010.
[6] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2008.
[7] Q. V. Le, T. Sarlos, and A. J. Smola. Fastfood – computing Hilbert space expansions in loglinear time. In ICML, 2013.
[8] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In NIPS, 2009.
[9] D. Lopez-Paz, S. Sra, A. Smola, Z. Ghahramani, and B. Schölkopf. Randomized nonlinear component analysis. In ICML, 2014.
[10] J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research, 1998.
[11] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods – Support Vector Learning, pages 169–184, Cambridge, MA, 1999. MIT Press.
[12] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 14(1):567–599, 2013.
[13] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8), Aug 2004.
[14] N. Ratliff and J. Bagnell. Kernel conjugate gradient for fast kernel machines. In IJCAI, 2007.
[15] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19(4):1574–1609, January 2009.
[16] A. Devinatz. Integral representation of pd functions. Trans. AMS, 74(1):56–77, 1953.
[17] M. Hein and O. Bousquet. Kernels, associated structures, and generalizations. Technical Report 127, Max Planck Institute for Biological Cybernetics, 2004.
[18] H. Wendland. Scattered Data Approximation. Cambridge University Press, Cambridge, UK, 2005.
[19] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[20] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In KDD, 2013.
[21] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, 2007.
[22] C. D. Dang and G. Lan. Stochastic block mirror descent methods for nonsmooth and stochastic optimization. Technical report, University of Florida, 2013.
[23] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[24] A. Cotter, S. Shalev-Shwartz, and N. Srebro. Learning optimally sparse support vector machines. In ICML, 2013.
[25] G. Loosli, S. Canu, and L. Bottou. Training invariant support vector machines with selective sampling. In Large Scale Kernel Machines, pages 301–320. MIT Press, 2007.
[26] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[27] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[28] G. Montavon, K. Hansen, S. Fazli, M. Rupp, F. Biegler, A. Ziehe, A. Tkatchenko, A. Lilienfeld, and K. Müller. Learning invariant representations of molecules for atomization energy prediction. In NIPS, 2012.
[29] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, pages 449–456, 2012.
[30] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
Krikamol Muandet
MPI-IS, T?ubingen
krikamol@tue.mpg.de
Bharath Sriperumbudur
Dept. of Statistics, PSU
bks18@psu.edu
Bernhard Sch?olkopf
MPI-IS, T?ubingen
bs@tue.mpg.de
Abstract
The problem of estimating the kernel mean in a reproducing kernel Hilbert space
(RKHS) is central to kernel methods in that it is used by classical approaches (e.g.,
when centering a kernel PCA matrix), and it also forms the core inference step of
modern kernel methods (e.g., kernel-based non-parametric tests) that rely on embedding probability distributions in RKHSs. Previous work [1] has shown that
shrinkage can help in constructing "better" estimators of the kernel mean than the
empirical estimator. The present paper studies the consistency and admissibility
of the estimators in [1], and proposes a wider class of shrinkage estimators that
improve upon the empirical estimator by considering appropriate basis functions.
Using the kernel PCA basis, we show that some of these estimators can be constructed using spectral filtering algorithms which are shown to be consistent under
some technical assumptions. Our theoretical analysis also reveals a fundamental
connection to the kernel-based supervised learning framework. The proposed estimators are simple to implement and perform well in practice.
1 Introduction
The kernel mean or the mean element, which corresponds to the mean of the kernel function in a reproducing kernel Hilbert space (RKHS) computed w.r.t. some distribution P, has played a fundamental role as a basic building block of many kernel-based learning algorithms [2–4], and has recently gained increasing attention through the notion of embedding distributions in an RKHS [5–13]. Estimating the kernel mean remains an important problem, as the underlying distribution P is usually unknown and we must rely entirely on the sample drawn according to P.
Given a random sample drawn independently and identically (i.i.d.) from P, the most common way to estimate the kernel mean is by replacing P by the empirical measure P_n := (1/n) Σ_{i=1}^n δ_{X_i}, where δ_x is a Dirac measure at x [5, 6]. Without any prior knowledge about P, the empirical estimator is possibly the best one can do. However, [1] showed that this estimator can be "improved" by constructing a shrinkage estimator which is a combination of a model with low bias and high variance, and a model with high bias but low variance. Interestingly, significant improvement is in fact possible if the trade-off between these two models is chosen appropriately. The shrinkage estimator proposed in [1], which is motivated by the classical James-Stein shrinkage estimator [14] for the estimation of the mean of a normal distribution, is shown to have a smaller mean-squared error than that of the empirical estimator. These findings provide some support for the conceptual premise that we might be somewhat pessimistic in using the empirical estimator of the kernel mean and that there is abundant room for further progress.
In this work, we adopt a spectral filtering approach to obtain shrinkage estimators of the kernel mean that improve on the empirical estimator. The motivation behind our approach stems from the idea presented in [1], where kernel mean estimation is reformulated as an empirical risk minimization (ERM) problem, with the shrinkage estimator then being obtained through penalized ERM. It is important to note that this motivation differs fundamentally from the typical supervised learning setup, as the goal of regularization here is to get James-Stein-like shrinkage estimators [14] rather than to prevent overfitting. By looking at regularization from a filter function perspective, in this paper,
we show that a wide class of shrinkage estimators for kernel mean can be obtained and that these
estimators are consistent for an appropriate choice of the regularization/shrinkage parameter.
Unlike in earlier works [15–18], where the spectral filtering approach has been used in supervised learning problems, we here deal with an unsupervised setting and only leverage spectral filtering as a way to construct a shrinkage estimator of the kernel mean. One of the advantages of this approach
is that it allows us to incorporate meaningful prior knowledge. The resultant estimators are characterized by the filter function, which can be chosen according to the relevant prior knowledge.
Moreover, the spectral filtering gives rise to a broader interpretation of shrinkage through, for example, the notion of early stopping and dimension reduction. Our estimators not only outperform the
empirical estimator, but are also simple to implement and computationally efficient.
The paper is organized as follows. In Section 2, we introduce the problem of shrinkage estimation and present a new result that theoretically justifies the shrinkage estimator over the empirical estimator for the kernel mean, which improves on the work of [1] while removing some of its drawbacks. Motivated by this result, we consider a general class of shrinkage estimators obtained via spectral filtering in Section 3, whose theoretical properties are presented in Section 4. The empirical performance of the proposed estimators is presented in Section 5. The missing proofs of the results are given in the supplementary material.
2 Kernel mean shrinkage estimator
In this section, we present preliminaries on the problem of shrinkage estimation in the context of estimating the kernel mean [1] and then present a theoretical justification (see Theorem 1) for shrinkage
estimators that improves our understanding of the kernel mean estimation problem, while alleviating
some of the issues inherent in the estimator proposed in [1].
Preliminaries: Let H be an RKHS of functions on a separable topological space X. The space H is endowed with inner product ⟨·,·⟩, associated norm ‖·‖, and reproducing kernel k : X × X → R, which we assume to be continuous and bounded, i.e., κ := sup_{x∈X} √k(x,x) < ∞. The kernel mean of some unknown distribution P on X and its empirical estimate (we refer to this as the kernel mean estimator, KME) from an i.i.d. sample x_1, …, x_n are given by

    μ_P := ∫_X k(x,·) dP(x)   and   μ̂_P := (1/n) Σ_{i=1}^n k(x_i,·),    (1)

respectively. As mentioned before, μ̂_P is the "best" possible estimator of μ_P if nothing is known about P. However, depending on the information that is available about P, one can construct various estimators of μ_P that perform "better" than μ̂_P. Usually, the performance measure used for comparison is the mean-squared error, though alternate measures can be used. Therefore, our main objective is to improve upon the KME in terms of the mean-squared error, i.e., to construct μ̃_P such that E_P‖μ̃_P − μ_P‖² ≤ E_P‖μ̂_P − μ_P‖² for all P ∈ P, with strict inequality holding for at least one element in P, where P is a suitably large class of Borel probability measures on X. Such an estimator μ̃_P is said to be admissible w.r.t. P. If P = M¹₊(X) is the set of all Borel probability measures on X, then a μ̃_P satisfying the above conditions may not exist, and in that sense μ̂_P is possibly the best estimator of μ_P that one can have.
Admissibility of shrinkage estimator: To improve upon the KME, motivated by the James-Stein estimator [14], [1] proposed a shrinkage estimator μ̂_α := αf* + (1 − α)μ̂_P, where α ∈ R is the shrinkage parameter that balances the low-bias, high-variance model (μ̂_P) with the high-bias, low-variance model (f* ∈ H). Assuming for simplicity f* = 0, [1] showed that E_P‖μ̂_α − μ_P‖² < E_P‖μ̂_P − μ_P‖² if and only if α ∈ (0, 2Δ/(Δ + ‖μ_P‖²)), where Δ := E_P‖μ̂_P − μ_P‖². While this is an interesting result, the resultant estimator μ̂_α is strictly not a "statistical estimator", as it depends on quantities that need to be estimated: it depends on α, whose choice requires the knowledge of μ_P, which is the quantity to be estimated. We would like to mention that [1] handles the general case with f* not necessarily zero, wherein the range for α then depends on f* as well. But for the purposes of simplicity and ease of understanding, for the rest of this paper we assume f* = 0. Since μ̂_α is not practically interesting, [1] resorted to the following representation of μ_P and μ̂_P as solutions to the minimization problems [1, 19]:

    μ_P = arg inf_{g∈H} ∫_X ‖k(x,·) − g‖² dP(x),   μ̂_P = arg inf_{g∈H} (1/n) Σ_{i=1}^n ‖k(x_i,·) − g‖²,    (2)

using which μ̂_α is shown to be the solution to the regularized empirical risk minimization problem:

    μ̌_λ = arg inf_{g∈H} (1/n) Σ_{i=1}^n ‖k(x_i,·) − g‖² + λ‖g‖²,    (3)

where λ > 0 and α := λ/(λ+1), i.e., μ̌_λ = μ̂_{λ/(λ+1)} (a one-line verification is given below). It is interesting to note that, unlike in supervised learning (e.g., least squares regression), the empirical minimization problem in (2) is not ill-posed and therefore does not require a regularization term, although one is used in (3) to obtain a shrinkage estimator of μ_P. [1] then obtained a value for λ through cross-validation and used it to construct μ̂_{λ/(λ+1)} as an estimator of μ_P, which was then shown to perform empirically better than μ̂_P. However, no theoretical guarantees are provided, not even the basic requirement that μ̂_{λ/(λ+1)} be consistent. In fact, because λ is data-dependent, the above-mentioned result about the improved performance of μ̂_α over a range of α does not hold, as that result is proved assuming α is a constant that does not depend on the data. While it is clear that the regularizer in (3) is not needed to make (2) well-posed, the role of λ is not clear from the point of view of μ̂_{λ/(λ+1)} being consistent and better than μ̂_P. The following result provides a theoretical understanding of μ̂_{λ/(λ+1)} from these viewpoints.
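As a quick verification of the equivalence μ̌_λ = μ̂_{λ/(λ+1)} stated after (3) (this derivation is ours, though it follows immediately from (3)): setting the Fréchet derivative of the objective in (3) to zero gives

```latex
\frac{2}{n}\sum_{i=1}^{n}\bigl(g - k(x_i,\cdot)\bigr) + 2\lambda g = 0
\;\Longrightarrow\; (1+\lambda)\,g = \hat{\mu}_{\mathbb{P}}
\;\Longrightarrow\; \check{\mu}_{\lambda} = \frac{1}{1+\lambda}\,\hat{\mu}_{\mathbb{P}}
 = \Bigl(1-\frac{\lambda}{\lambda+1}\Bigr)\hat{\mu}_{\mathbb{P}},
```

which is exactly the shrinkage estimator μ̂_α with target f* = 0 and α = λ/(λ+1).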
Theorem 1. Let μ̌_λ be constructed as in (3). Then the following hold.
(i) ‖μ̌_λ − μ_P‖ → 0 as λ → 0 and n → ∞. In addition, if λ = n^{−β} for some β > 0, then ‖μ̌_λ − μ_P‖ = O_P(n^{−min{β,1/2}}).
(ii) For λ = cn^{−β} with c > 0 and β > 1, define P_{c,β} := {P ∈ M¹₊(X) : ‖μ_P‖² < A ∫ k(x,x) dP(x)}, where A := c^{1/β}(β−1)^{(β−1)/β} / (2^{1/β}κ² + c^{1/β}(β−1)^{(β−1)/β}). Then for all n and all P ∈ P_{c,β}, we have E_P‖μ̌_λ − μ_P‖² < E_P‖μ̂_P − μ_P‖².
Remark. (i) Theorem 1(i) shows that μ̌_λ is a consistent estimator of μ_P as long as λ → 0, and the convergence rate in probability of ‖μ̌_λ − μ_P‖ is determined by the rate of convergence of λ to zero, with the best possible convergence rate being n^{−1/2}. Therefore, to attain a fast rate of convergence, it is instructive to choose λ such that λ√n → 0 as λ → 0 and n → ∞.
(ii) Suppose for some c > 0 and β > 1 we choose λ = cn^{−β}, so that the resultant estimator μ̌_λ is a proper estimator, as it does not depend on any unknown quantities. Theorem 1(ii) shows that for any n and any P ∈ P_{c,β}, μ̌_λ is a "better" estimator than μ̂_P. Note that for any P ∈ M¹₊(X), ‖μ_P‖² = ∫∫ k(x,y) dP(x) dP(y) ≤ (∫ √k(x,x) dP(x))² ≤ ∫ k(x,x) dP(x). This means μ̌_λ is admissible if we restrict M¹₊(X) to P_{c,β}, which considers only those distributions for which ‖μ_P‖² / ∫ k(x,x) dP(x) is strictly less than a constant A < 1. It is obvious to note that if c is very small or β is very large, then A gets closer to one and μ̌_λ behaves almost like μ̂_P, thereby matching our intuition.
(iii) A nice interpretation of P_{c,β} can be obtained as in Theorem 1(ii) when k is a translation invariant kernel on R^d. It can be shown that P_{c,β} contains the class of all probability measures whose characteristic function has an L² norm (and is therefore the set of square integrable probability densities if P has a density w.r.t. the Lebesgue measure) bounded by a constant that depends on c, β and k (see §2 of the supplementary material).
3 Spectral kernel mean shrinkage estimator

Let us return to the shrinkage estimator μ̂_α considered in [1], i.e., μ̂_α = αf* + (1 − α)μ̂_P = Σᵢ α⟨f*, eᵢ⟩eᵢ + (1 − α) Σᵢ ⟨μ̂_P, eᵢ⟩eᵢ, where (eᵢ)_{i∈N} is a countable orthonormal basis (ONB) of H; a countable ONB exists since H is separable, which follows from X being separable and k being continuous [20, Lemma 4.33]. This estimator can be generalized by considering the shrinkage estimator μ̂_α := Σᵢ αᵢ⟨f*, eᵢ⟩eᵢ + Σᵢ (1 − αᵢ)⟨μ̂_P, eᵢ⟩eᵢ, where α := (α₁, α₂, …) ∈ R^∞ is a sequence of shrinkage parameters. If Δ_α := E_P‖μ̂_α − μ_P‖² is the risk of this estimator, the following theorem gives an optimality condition on α for which Δ_α < Δ.

Theorem 2. For some ONB (eᵢ)ᵢ, Δ_α − Δ = Σᵢ (Δ_{α,i} − Δᵢ), where Δ_{α,i} and Δᵢ denote the risk of the i-th component of μ̂_α and μ̂_P, respectively. Then, Δ_{α,i} − Δᵢ < 0 if

    0 < αᵢ < 2Δᵢ / (Δᵢ + (fᵢ* − μᵢ)²),    (4)
[Figure 1: two panels. Left: uncorrelated isotropic Gaussian, θ̂_ML = X̄ with X ∼ N(θ, I). Right: correlated anisotropic Gaussian, θ̂_ML = X̄ with X ∼ N(θ, Σ); the target θ is marked in each panel.]
Figure 1: Geometric explanation of a shrinkage estimator when estimating the mean of a Gaussian distribution. For an isotropic Gaussian, the level sets of the joint density of θ̂_ML = X̄ are hyperspheres. In this case, shrinkage has the same effect regardless of direction. The shaded area represents those estimates that get closer to θ after shrinkage. For an anisotropic Gaussian, the level sets are concentric ellipsoids, which makes the effect dependent on the direction of shrinkage.
where fᵢ* and μᵢ denote the Fourier coefficients of f* and μ_P, respectively.
The condition in (4) is a component-wise version of the condition given in [1, Theorem 1] for the class of estimators μ̂_α := αf* + (1 − α)μ̂_P, which may be recovered here by assuming a constant shrinkage parameter αᵢ = α for all i. Clearly, as the optimal range of αᵢ may vary across coordinates, the class of estimators in [1] does not allow us to adjust αᵢ accordingly. To understand why this property is important, let us consider the problem of estimating the mean of a Gaussian distribution, illustrated in Figure 1. For a correlated random variable X ∼ N(θ, Σ), a natural choice of basis is the set of orthonormal eigenvectors that diagonalize the covariance matrix Σ of X. Clearly, the optimal range of αᵢ depends on the corresponding eigenvalues. Allowing for a different basis (eᵢ)ᵢ and shrinkage parameters αᵢ opens up a wide range of strategies that can be used to construct "better" estimators.
A natural strategy under this representation is as follows: i) we specify the ONB (eᵢ)ᵢ and project μ̂_P onto this basis; ii) we shrink each μ̂ᵢ independently according to a pre-defined shrinkage rule; iii) the shrinkage estimate is reconstructed as a superposition of the resulting components. In other words, an ideal shrinkage estimator can be defined formally as a non-linear mapping:

    μ̂_P ↦ Σᵢ h(λᵢ)⟨f*, eᵢ⟩eᵢ + Σᵢ (1 − h(λᵢ))⟨μ̂_P, eᵢ⟩eᵢ,    (5)

where h : R → R is a shrinkage rule. Since we make no reference to any particular basis (eᵢ)ᵢ, nor to any particular shrinkage rule h, a wide range of strategies can be adopted here. For example, we can view whitening as a special case in which f* is the data average (1/n) Σ_{i=1}^n xᵢ and 1 − h(λᵢ) = 1/√λᵢ, where λᵢ and eᵢ are the i-th eigenvalue and eigenvector of the covariance matrix, respectively.
Inspired by Theorem 2, we adopt the spectral filtering approach as one of the strategies to construct estimators of the form (5). To this end, owing to the regularization interpretation in (3), we consider estimators of the form Σ_{i=1}^n βᵢ k(xᵢ, ·) for some β ∈ Rⁿ; looking for such an estimator is equivalent to learning a signed measure that is supported on (xᵢ)_{i=1}^n. Since Σ_{i=1}^n βᵢ k(xᵢ, ·) is a minimizer of (3), β should satisfy Kβ = K1_n, where K is the n × n Gram matrix and 1_n = [1/n, …, 1/n]^⊤. Here the solution is trivially β = 1_n, i.e., the coefficients of the standard estimator μ̂_P, if K is invertible. Since K⁻¹ may not exist, and even if it exists its computation can be numerically unstable, the idea of spectral filtering (which is quite popular in the theory of inverse problems [15] and has been used in kernel least squares [17]) is to replace K⁻¹ by a regularized matrix g_λ(K) that approximates K⁻¹ as λ goes to zero. Note that, unlike in (3), the regularization is quite important here (i.e., for estimators of the form Σ_{i=1}^n βᵢ k(xᵢ, ·)), without which the linear system is underdetermined. Therefore, we propose the following class of estimators:

    μ̂_λ := Σ_{i=1}^n βᵢ k(xᵢ, ·)   with β(λ) := g_λ(K)K1_n,    (6)

where g_λ(·) is a filter function and λ is referred to as a shrinkage parameter. The matrix-valued function g_λ(K) can be described by a scalar function g_λ : [0, κ²] → R on the spectrum of K. That is, if K = UDU^⊤ is the eigendecomposition of K, where D = diag(γ̂₁, …, γ̂ₙ), we have g_λ(D) = diag(g_λ(γ̂₁), …, g_λ(γ̂ₙ)) and g_λ(K) = Ug_λ(D)U^⊤. For example, the scalar filter function of Tikhonov regularization is g_λ(γ) = 1/(γ + λ). In the sequel, we call this class of estimators a spectral kernel mean shrinkage estimator (Spectral-KMSE).
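In code, (6) reduces to an eigendecomposition of the Gram matrix plus an elementwise filter on the eigenvalues. The sketch below (ours, with illustrative names; it assumes a precomputed Gram matrix and uses the Tikhonov filter as the example) makes this explicit:

```python
import numpy as np

def spectral_kmse_weights(K, g):
    """Compute beta(lambda) = g_lambda(K) K 1_n via K = U diag(gamma) U^T.
    K is the n x n Gram matrix; g maps eigenvalues to g_lambda(eigenvalues)."""
    n = K.shape[0]
    gam, U = np.linalg.eigh(K)
    ones = np.full(n, 1.0 / n)                     # 1_n = [1/n, ..., 1/n]^T
    return U @ (g(np.clip(gam, 0.0, None)) * (U.T @ (K @ ones)))

# Example with the Tikhonov filter g_lambda(gamma) = 1/(gamma + lambda):
rng = np.random.RandomState(0)
X = rng.randn(50, 20)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2.0 * np.median(sq)))            # median heuristic (over all pairs)
beta = spectral_kmse_weights(K, lambda gam: 1.0 / (gam + 1e-3))
```

Any filter that is a function of the spectrum of K, such as those in Table 1 below, can be plugged in for g.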
| Algorithm | Update Equation (a := K1_n − Kβ^{t−1}) | Filter Function |
| L2 Boosting | β^t ← β^{t−1} + ηa | g(γ) = η Σ_{i=0}^{t−1} (1 − ηγ)^i |
| Acc. L2 Boosting | β^t ← β^{t−1} + ω_t(β^{t−1} − β^{t−2}) + (κ_t/n)a | g(γ) = p_t(γ) |
| Iterated Tikhonov | (K + nλI)β^i = 1_n + nλβ^{i−1} | g(γ) = ((γ+λ)^t − λ^t) / (γ(γ+λ)^t) |
| Truncated SVD | None | g(γ) = γ^{−1} 1{γ ≥ λ} |

Table 1: Update equations for β and corresponding filter functions.

[Figure 2: curves of g(γ)γ as a function of γ ∈ [0, 1] for Tikhonov, L2 Boosting, TSVD, the ν-method, and iterated Tikhonov.]
Figure 2: Plot of g(γ)γ.
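Each filter in Table 1 can also be written directly as a scalar function of the eigenvalues and plugged into the eigendecomposition-based sketch above. The following (ours, not a library API) spells out three of them; the accelerated variant is omitted since its polynomial p_t(γ) has no simple closed form:

```python
import numpy as np

def g_l2boost(gam, t, eta):
    # g(gamma) = eta * sum_{i=0}^{t-1} (1 - eta * gamma)^i
    return eta * sum((1.0 - eta * gam) ** i for i in range(t))

def g_iter_tikhonov(gam, lam, t):
    # g(gamma) = ((gamma + lam)^t - lam^t) / (gamma * (gamma + lam)^t)
    return ((gam + lam) ** t - lam ** t) / (gam * (gam + lam) ** t)

def g_tsvd(gam, lam):
    # g(gamma) = 1/gamma on {gamma >= lam}, zero otherwise
    return np.where(gam >= lam, 1.0 / np.maximum(gam, lam), 0.0)
```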
Proposition 3. The Spectral-KMSE satisfies μ̂_λ = Σ_{i=1}^n g_λ(γ̂ᵢ) γ̂ᵢ ⟨μ̂_P, v̂ᵢ⟩ v̂ᵢ, where (γ̂ᵢ, v̂ᵢ) are eigenvalue and eigenfunction pairs of the empirical covariance operator Ĉ_k : H → H defined as Ĉ_k = (1/n) Σ_{i=1}^n k(·, xᵢ) ⊗ k(·, xᵢ).
By virtue of Proposition 3, if we choose 1 − h(γ̂) := g_λ(γ̂)γ̂, the Spectral-KMSE is indeed of the form (5) when f* = 0 and (eᵢ)ᵢ is the kernel PCA (KPCA) basis, with the filter function g_λ determining the shrinkage rule. Since by definition g_λ(γ̂ᵢ) approaches the function 1/γ̂ᵢ as λ goes to 0, the function g_λ(γ̂ᵢ)γ̂ᵢ approaches 1 (no shrinkage). As the value of λ increases, we have more shrinkage because the value of g_λ(γ̂ᵢ)γ̂ᵢ deviates from 1, and the behavior of this deviation depends on the filter function g_λ. For example, we can see that Proposition 3 generalizes Theorem 2 in [1], where the filter function is g_λ(K) = (K + nλI)^{−1}, i.e., g(γ) = 1/(γ + λ). That is, we have g_λ(γ̂ᵢ)γ̂ᵢ = γ̂ᵢ/(γ̂ᵢ + λ), implying that the effect of shrinkage is relatively larger in the low-variance directions. In the following, we discuss well-known examples of spectral filtering algorithms obtained by various choices of g_λ. The update equations for β(λ) and the corresponding filter functions are summarized in Table 1, and Figure 2 illustrates the behavior of these filter functions.
L2 Boosting. This algorithm, also known as gradient descent or the Landweber iteration, finds a weight β by performing gradient descent iteratively. Thus, we can interpret early stopping as shrinkage and the reciprocal of the iteration number as the shrinkage parameter, i.e., λ ≈ 1/t. The step size η does not play any role in the shrinkage [16], so we use the fixed step size η = 1/κ² throughout. A minimal sketch of this update appears below.
Accelerated L2 Boosting. This algorithm, also known as the ν-method, uses an accelerated gradient descent step, which is faster than L2 Boosting because we only need √t iterations to get the same solution as L2 Boosting would get after t iterations. Consequently, we have λ ≈ 1/t².
Iterated Tikhonov. This algorithm can be viewed as a combination of Tikhonov regularization and gradient descent. Both parameters λ and t play the role of the shrinkage parameter.
Truncated Singular Value Decomposition. This algorithm can be interpreted as a projection onto the first principal components of the KPCA basis. Hence, we may interpret dimensionality reduction as shrinkage and the size of the reduced dimension as the shrinkage parameter. This approach has been used in [21] to improve kernel mean estimation under a low-rank assumption.
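To make the iterative view concrete, here is a minimal sketch (ours) of the L2 Boosting update, recording the whole regularization path so that early stopping can be selected afterwards, e.g., by a leave-one-out score as done in Section 5; with η = 1/κ² it implements the first row of Table 1:

```python
import numpy as np

def l2_boosting_path(K, t_max, eta):
    """L2 Boosting / Landweber update from Table 1:
    beta^t = beta^{t-1} + eta * (K 1_n - K beta^{t-1}), with beta^0 = 0
    (the shrinkage target).  Returns the whole regularization path;
    the iteration count t plays the role of 1/lambda."""
    n = K.shape[0]
    target = K @ np.full(n, 1.0 / n)          # K 1_n
    beta, path = np.zeros(n), []
    for _ in range(t_max):
        beta = beta + eta * (target - K @ beta)
        path.append(beta.copy())
    return path
```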
Most of the above spectral filtering algorithms allow us to compute the coefficients β without explicitly computing the eigendecomposition of K, as we can see in Table 1, and some of them may have no natural interpretation in terms of regularized risk minimization. Lastly, the initialization of β corresponds to the target of shrinkage. In this work, we assume that β⁰ = 0 throughout.
4 Theoretical properties of Spectral-KMSE

This section presents some theoretical properties of the proposed Spectral-KMSE in (6). To this end, we first present a regularization interpretation that is different from the one in (3), which involves learning a smooth operator from H to H [22]. This will be helpful in investigating the consistency of the Spectral-KMSE. Let us consider the following regularized risk minimization problem:

    arg min_{F∈H⊗H} E_X ‖k(X,·) − F[k(X,·)]‖_H² + λ‖F‖²_HS,    (7)

where F is a Hilbert-Schmidt operator from H to H. Essentially, we are seeking a smooth operator F that maps k(x,·) to itself, where (7) is an instance of the regression framework in [22]. The formulation of shrinkage as the solution of a smooth operator regression, and the empirical solution (8) and the lines below it, were given in a personal communication by Arthur Gretton.
It can be shown that the solution to (7) is given by F = C_k(C_k + λI)^{−1}, where C_k : H → H is the covariance operator in H defined as C_k = ∫ k(·,x) ⊗ k(·,x) dP(x) (see §5 of the supplement for a proof). Define μ_λ := Fμ_P = C_k(C_k + λI)^{−1}μ_P. Since k is bounded, it is easy to verify that C_k is Hilbert-Schmidt and therefore compact. Hence, by the Hilbert-Schmidt theorem, C_k = Σᵢ γᵢ ⟨·, ψᵢ⟩ψᵢ, where (γᵢ)_{i∈N} are the positive eigenvalues and (ψᵢ)_{i∈N} are the corresponding eigenvectors, which form an ONB for the range space of C_k, denoted as R(C_k). This implies that μ_λ can be decomposed as μ_λ = Σ_{i=1}^∞ (γᵢ/(γᵢ + λ)) ⟨μ_P, ψᵢ⟩ψᵢ. We can observe that the filter function corresponding to the problem (7) is g_λ(γ) = 1/(γ + λ). By extending this approach to other filter functions, we obtain μ_λ = Σ_{i=1}^∞ γᵢ g_λ(γᵢ) ⟨μ_P, ψᵢ⟩ψᵢ, which is equivalent to μ_λ = C_k g_λ(C_k) μ_P.
Since C_k is a compact operator, the role of the filter function g_λ is to regularize the inverse of C_k. In the standard supervised setting, the explicit form of the solution is f_λ = g_λ(L_k)L_k f_ρ, where L_k is the integral operator of the kernel k acting in L²(X, ρ_X) and f_ρ is the expected solution given by f_ρ(x) = ∫_Y y dρ(y|x) [16]. It is interesting to see that μ_λ admits a similar form to that of f_λ, but written in terms of the covariance operator C_k instead of the integral operator L_k. Moreover, the solution to (7) is also of a similar form to the regularized conditional embedding U_{Y|X} = C_{YX}(C_k + λI)^{−1} [9]. This connection implies that spectral filtering may be applied more broadly to improve the estimation of conditional mean embeddings, i.e., U_{Y|X} = C_{YX} g_λ(C_k).
The empirical counterpart of (7) is given by

    arg min_F (1/n) Σ_{i=1}^n ‖k(xᵢ,·) − F[k(xᵢ,·)]‖_H² + λ‖F‖²_HS,    (8)

resulting in μ̂_λ = Fμ̂_P = Φ^⊤(K + λI)^{−1}K 1_n, where Φ = [k(x₁,·), …, k(xₙ,·)]^⊤, which matches the one in (6) with g_λ(K) = (K + λI)^{−1}. Note that this is exactly the F-KMSE proposed in [1]. Based on μ_λ, which depends on P, an empirical version of it can be obtained by replacing C_k and μ_P with their empirical estimators, leading to μ̌_λ = Ĉ_k g_λ(Ĉ_k)μ̂_P. The following result shows that μ̌_λ = μ̂_λ, which means the Spectral-KMSE proposed in (6) is equivalent to solving (8).
Proposition 4. Let Ĉ_k and μ̂_P be the sample counterparts of C_k and μ_P, given by Ĉ_k := (1/n) Σ_{i=1}^n k(xᵢ,·) ⊗ k(xᵢ,·) and μ̂_P := (1/n) Σ_{i=1}^n k(xᵢ,·), respectively. Then we have that μ̌_λ := Ĉ_k g_λ(Ĉ_k)μ̂_P = μ̂_λ, where μ̂_λ is defined in (6).
Having established a regularization interpretation for μ̂_λ, it is of interest to study the consistency and convergence rate of μ̂_λ, similar to the KMSE in Theorem 1. Our main goal here is to derive convergence rates for a broad class of algorithms given a set of sufficient conditions on the filter function g_λ. We believe that for some algorithms it is possible to derive the best achievable bounds, which requires ad hoc proofs for each algorithm. To this end, we provide a set of conditions that any admissible filter function g_λ must satisfy.

Definition 1. A family of filter functions g_λ : [0, κ²] → R, 0 < λ ≤ κ², is said to be admissible if there exist finite positive constants B, C, D, and η₀ (all independent of λ) such that (C1) sup_{γ∈[0,κ²]} |γ g_λ(γ)| ≤ B, (C2) sup_{γ∈[0,κ²]} |r_λ(γ)| ≤ C, and (C3) sup_{γ∈[0,κ²]} |r_λ(γ)| γ^η ≤ D λ^η for all η ∈ (0, η₀] hold, where r_λ(γ) := 1 − γ g_λ(γ).
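As a worked check of Definition 1 (ours, not from the text): the Tikhonov filter g_λ(γ) = 1/(γ + λ) is admissible with B = C = D = 1 and qualification η₀ = 1, since r_λ(γ) = λ/(γ + λ) and

```latex
\sup_{\gamma\in[0,\kappa^2]} \frac{\gamma}{\gamma+\lambda} \le 1, \qquad
\sup_{\gamma\in[0,\kappa^2]} \frac{\lambda}{\gamma+\lambda} \le 1, \qquad
\sup_{\gamma\ge 0} \frac{\lambda\,\gamma^{\eta}}{\gamma+\lambda}
  = \eta^{\eta}(1-\eta)^{1-\eta}\,\lambda^{\eta} \le \lambda^{\eta},
  \quad \eta\in(0,1],
```

where the last supremum is attained at γ = ηλ/(1 − η) for η < 1 and in the limit γ → ∞ for η = 1. This is consistent with the qualification η₀ = 1 of Tikhonov regularization discussed after Theorem 5 below.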
These conditions are quite standard in the theory of inverse problems [15, 23]. The constant η₀ is called the qualification of g_λ and is a crucial factor determining the rate of convergence in inverse problems. As we will see below, the rate of convergence of μ̂_λ depends on two factors: (a) the smoothness of μ_P, which is usually unknown as it depends on the unknown P, and (b) the qualification of g_λ, which determines how well the smoothness of μ_P is captured by the spectral filter g_λ.
Theorem 5. Suppose g_λ is admissible in the sense of Definition 1. Let κ = sup_{x∈X} √k(x,x). If μ_P ∈ R(C_k^β) for some β > 0, then for any δ > 0, with probability at least 1 − 3e^{−δ},

    ‖μ̂_λ − μ_P‖ ≤ (2κB + κB√(2δ))/√n + D λ^{min{β,η₀}} ‖C_k^{−β}μ_P‖ + C τ (2√2 κ²δ)^{min{1,β}} n^{−min{1/2,β/2}} ‖C_k^{−β}μ_P‖,

where R(A) denotes the range space of A and τ is some universal constant that does not depend on λ and n. Therefore, ‖μ̂_λ − μ_P‖ = O_P(n^{−min{1/2,β/2}}) with λ = o(n^{−min{1/2,β/2}/min{β,η₀}}).
Theorem 5 shows that the convergence rate depends on the smoothness of μ_P, which is imposed through the range space condition μ_P ∈ R(C_k^β) for some β > 0. Note that this is in contrast to the estimator in Theorem 1, which does not require any smoothness assumptions on μ_P. It can be shown that the smoothness of μ_P increases with increasing β. This means that, irrespective of the smoothness of μ_P for β > 1, the best possible convergence rate is n^{−1/2}, which matches that of the KMSE in Theorem 1. While the qualification η₀ does not seem to directly affect the rates, it controls the rate at which λ converges to zero. For example, if g_λ(γ) = 1/(γ + λ), which corresponds to Tikhonov regularization, it can be shown that η₀ = 1, which means that for β > 1 we need λ = o(n^{−1/2}), implying that λ cannot decay to zero slower than n^{−1/2}. Ideally, one would require a larger η₀ (preferably infinity, which is the case with truncated SVD) so that the convergence of λ to zero can be made arbitrarily slow if β is large. In this way, both β and η₀ control the behavior of the estimator. In fact, Theorem 5 provides a choice for λ (which is what we used in Theorem 1 to study the admissibility of μ̌_λ w.r.t. P_{c,β}) to construct the Spectral-KMSE. However, this choice of λ depends on β, which is not known in practice (although η₀ is known, as it is determined by the choice of g_λ). Therefore, λ is usually learnt from the data through cross-validation or through Lepski's method [24], for which guarantees similar to the one presented in Theorem 5 can be provided. However, irrespective of the data-dependent/independent choice of λ, checking the admissibility of the Spectral-KMSE (similar to the one in Theorem 1) is very difficult, and we intend to consider it in future work.
5 Empirical studies

Synthetic data. Given an i.i.d. sample X = {x₁, x₂, …, xₙ} from P, where xᵢ ∈ R^d, we evaluate different estimators using the loss function L(β, X, P) := ‖Σ_{i=1}^n βᵢ k(xᵢ,·) − E_{x∼P}[k(x,·)]‖_H². The risk of an estimator is subsequently approximated by averaging over m independent copies of X. In this experiment, we set n = 50, d = 20, and m = 1000. Throughout, we use the Gaussian RBF kernel k(x, x′) = exp(−‖x − x′‖²/2σ²), whose bandwidth parameter is calculated using the median heuristic, i.e., σ² = median{‖xᵢ − xⱼ‖²}. To allow for an analytic calculation of the loss L(β, X, P), we assume that the distribution P is a d-dimensional mixture of Gaussians [1, 8]. Specifically, the data are generated as follows: x ∼ Σ_{i=1}^4 πᵢ N(θᵢ, Σᵢ) + ε, with θᵢⱼ ∼ U(−10, 10), Σᵢ ∼ W(3 × I_d, 7), and ε ∼ N(0, 0.2 × I_d), where U(a, b) and W(Σ₀, df) are the uniform distribution and the Wishart distribution, respectively. As in [1], we set π = [0.05, 0.3, 0.4, 0.25].
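For reproducibility, here is a sketch of this data-generating process (ours, not the authors' script). Since the Wishart draw W(3·I_d, 7) is singular for d = 20, it is formed explicitly as Z^⊤Z with 7 rows Z_j ∼ N(0, 3·I_d):

```python
import numpy as np

def sample_mixture(n, d=20, seed=0):
    """Draw x ~ sum_i pi_i N(theta_i, Sigma_i) + eps as described above."""
    rng = np.random.RandomState(seed)
    pi = np.array([0.05, 0.30, 0.40, 0.25])
    thetas = rng.uniform(-10.0, 10.0, size=(4, d))
    Sigmas = []
    for _ in range(4):
        Z = rng.multivariate_normal(np.zeros(d), 3.0 * np.eye(d), size=7)
        Sigmas.append(Z.T @ Z)                       # Sigma_i ~ W(3 I_d, 7)
    comp = rng.choice(4, size=n, p=pi)               # mixture assignments
    X = np.stack([rng.multivariate_normal(thetas[c], Sigmas[c]) for c in comp])
    return X + rng.multivariate_normal(np.zeros(d), 0.2 * np.eye(d), size=n)
```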
A natural approach for choosing λ is the cross-validation procedure, which can be performed efficiently for iterative methods such as Landweber and accelerated Landweber. For these two algorithms, we evaluate the leave-one-out score and select β^t at the iteration t that minimizes this score (see, e.g., Figure 3(a)). Note that these methods have the built-in property of computing the whole regularization path efficiently. Since each iteration of iterated Tikhonov is in fact equivalent to the F-KMSE, we assume t = 3 for simplicity and use the efficient LOOCV procedure proposed in [1] to find λ at each iteration. Lastly, the truncation limit of the TSVD can be identified efficiently by means of the generalized cross-validation (GCV) procedure [25]. To allow for an efficient calculation of the GCV score, we resort to the alternative loss function L(β) := ‖Kβ − K1_n‖₂².
Figure 3 reveals interesting aspects of the Spectral-KMSE. Firstly, as we can see in Figure 3(a), the number of iterations acts as a shrinkage parameter whose optimal value can be attained within just a few iterations. Moreover, these methods do not suffer from "over-shrinking" because λ → 0 as t → ∞. In other words, if the chosen t happens to be too large, the worst we can get is the standard empirical estimator. Secondly, Figure 3(b) demonstrates that both Landweber and accelerated Landweber are more computationally efficient than the F-KMSE. Lastly, Figure 3(c) suggests that the improvement of shrinkage estimators becomes increasingly remarkable in the high-dimensional setting. Interestingly, we can observe that most Spectral-KMSE algorithms outperform the S-KMSE, which supports our hypothesis on the importance of the geometric information of the RKHS mentioned in Section 3. In addition, although the TSVD still gains from shrinkage, the improvement is smaller than for the other algorithms. This highlights the importance of the filter functions and their associated parameters.
Real data. We apply the Spectral-KMSE to the density estimation problem via kernel mean matching [1, 26]. The datasets were taken from the UCI repository¹ and pre-processed by standardizing each feature. Then, we fit a mixture model Q = Σ_{j=1}^r πⱼ N(θⱼ, σⱼ² I) to the pre-processed dataset

¹ http://archive.ics.uci.edu/ml/
[Figure 3: three panels. (a) Risk (1000 iterations) vs. iteration for KME, S-KMSE, F-KMSE, Landweber, accelerated Landweber, and iterated Tikhonov (λ = 0.01); (b) elapsed time (sec, 1000 iterations) vs. sample size; (c) percentage of improvement (1000 iterations) vs. dimensionality for S-KMSE, F-KMSE, Landweber, accelerated Landweber, iterated Tikhonov, and truncated SVD.]
Figure 3: (a) For iterative algorithms, the number of iterations acts as a shrinkage parameter. (b) Iterative algorithms such as Landweber and accelerated Landweber are more efficient than the F-KMSE. (c) Percentage of improvement w.r.t. the KME, i.e., 100 × (R − R_λ)/R, where R and R_λ denote the approximated risks of the KME and the KMSE, respectively. Most Spectral-KMSE algorithms outperform the S-KMSE, which does not take into account the geometric information of the RKHS.
X := {xᵢ}_{i=1}^n by minimizing ‖μ_Q − μ̂_X‖² subject to the constraint Σ_{j=1}^r πⱼ = 1. Here μ_Q is the mean embedding of the mixture model Q and μ̂_X is the empirical mean embedding obtained from X. Based on different estimators of μ_X, we evaluate the resultant model Q by the negative log-likelihood score on the test data. The parameters (πⱼ, θⱼ, σⱼ²) are initialized by the best ones obtained from the K-means algorithm with 50 initializations. Throughout, we set r = 5 and use 25% of each dataset as a test set.
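For the Gaussian RBF kernel, μ_Q and its inner products have closed forms under Gaussian components, so fitting the mixture weights for fixed (θⱼ, σⱼ²) reduces to a small quadratic program over the simplex. The sketch below is ours (names and the solver choice are illustrative; the full procedure also updates the means and variances):

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixture_weights(X, thetas, vars_, sigma2):
    """Fit pi in ||mu_Q - hat{mu}_X||^2 for k(x,x') = exp(-||x-x'||^2/(2*sigma2))
    with Gaussian components N(thetas[j], vars_[j] * I)."""
    n, d = X.shape
    r = len(thetas)
    def inner(th_a, v_a, th_b, v_b):
        # <mu_a, mu_b>_H = E_{x~N_a, x'~N_b} k(x, x'), in closed form
        c = sigma2 + v_a + v_b
        return (sigma2 / c) ** (d / 2.0) * np.exp(-np.sum((th_a - th_b) ** 2) / (2 * c))
    A = np.array([[inner(thetas[a], vars_[a], thetas[b], vars_[b])
                   for b in range(r)] for a in range(r)])
    b = np.empty(r)
    for j in range(r):
        # b_j = <mu_j, hat{mu}_X>_H = (1/n) sum_i E_{x~N_j}[k(x, x_i)]
        c = sigma2 + vars_[j]
        b[j] = np.mean((sigma2 / c) ** (d / 2.0) *
                       np.exp(-((X - thetas[j]) ** 2).sum(axis=1) / (2 * c)))
    obj = lambda p: p @ A @ p - 2.0 * p @ b      # objective up to a constant
    res = minimize(obj, np.full(r, 1.0 / r), bounds=[(0.0, 1.0)] * r,
                   constraints=[{'type': 'eq', 'fun': lambda p: p.sum() - 1.0}])
    return res.x
```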
Table 2: The average negative log-likelihood evaluated on the test set. The results are obtained from
30 repetitions of the experiment. The boldface represents the statistically significant results.
| Dataset | KME | S-KMSE | F-KMSE | Landweber | Acc Land | Iter Tik | TSVD |
| ionosphere | 36.1769 | 36.1402 | 36.1622 | 36.1204 | 36.1554 | 36.1334 | 36.1442 |
| glass | 10.7855 | 10.7403 | 10.7448 | 10.7099 | 10.7541 | 10.9078 | 10.7791 |
| bodyfat | 18.1964 | 18.1158 | 18.1810 | 18.1607 | 18.1941 | 18.1267 | 18.1061 |
| housing | 14.3016 | 14.2195 | 14.0409 | 14.2499 | 14.1983 | 14.2868 | 14.3129 |
| vowel | 13.9253 | 13.8426 | 13.8817 | 13.8337 | 14.1368 | 13.8633 | 13.8375 |
| svmguide2 | 28.1091 | 28.0546 | 27.9640 | 28.1052 | 27.9693 | 28.0417 | 28.1128 |
| vehicle | 18.5295 | 18.3693 | 18.2547 | 18.4873 | 18.3124 | 18.4128 | 18.3910 |
| wine | 16.7668 | 16.7548 | 16.7457 | 16.7596 | 16.6790 | 16.6954 | 16.5719 |
| wdbc | 35.1916 | 35.1814 | 35.0023 | 35.1402 | 35.1366 | 35.1881 | 35.1850 |
Table 2 reports the results on real data. In general, the mixture model Q obtained from the proposed shrinkage estimators tends to achieve a lower negative log-likelihood score than that obtained from the standard empirical estimator. Moreover, we can observe that the relative performance of the different filter functions varies across datasets, suggesting that, in addition to the potential gain from shrinkage, incorporating prior knowledge through the choice of filter function could lead to further improvement.
6 Conclusion

We showed that several shrinkage strategies can be adopted to improve kernel mean estimation. This paper considers the spectral filtering approach as one such strategy. Compared to previous work [1], our estimators take into account the specifics of kernel methods and meaningful prior knowledge through the choice of filter functions, resulting in a wider class of shrinkage estimators. The theoretical analysis also reveals a fundamental similarity to the standard supervised learning setting. Our estimators are simple to implement and work well in practice, as evidenced by the empirical results.
Acknowledgments
The first author thanks Ingo Steinwart for pointing out existing work along the lines of spectral filtering, and Arthur Gretton for suggesting the connection of shrinkage to the smooth operator framework.
This work was carried out when the second author was a Research Fellow in the Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge.
References
[1] K. Muandet, K. Fukumizu, B. Sriperumbudur, A. Gretton, and B. Schölkopf. "Kernel Mean Estimation and Stein Effect". In: ICML. 2014, pp. 10–18.
[2] B. Schölkopf, A. Smola, and K.-R. Müller. "Nonlinear Component Analysis as a Kernel Eigenvalue Problem". In: Neural Computation 10.5 (July 1998), pp. 1299–1319.
[3] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge, UK: Cambridge University Press, 2004.
[4] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA, USA: MIT Press, 2001.
[5] A. Berlinet and T. C. Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[6] A. Smola, A. Gretton, L. Song, and B. Schölkopf. "A Hilbert Space Embedding for Distributions". In: ALT. Springer-Verlag, 2007, pp. 13–31.
[7] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. "A kernel method for the two-sample-problem". In: NIPS. 2007.
[8] K. Muandet, K. Fukumizu, F. Dinuzzo, and B. Schölkopf. "Learning from Distributions via Support Measure Machines". In: NIPS. 2012, pp. 10–18.
[9] L. Song, J. Huang, A. Smola, and K. Fukumizu. "Hilbert Space Embeddings of Conditional Distributions with Applications to Dynamical Systems". In: ICML. 2009.
[10] K. Muandet, D. Balduzzi, and B. Schölkopf. "Domain Generalization via Invariant Feature Representation". In: ICML. 2013, pp. 10–18.
[11] K. Muandet and B. Schölkopf. "One-Class Support Measure Machines for Group Anomaly Detection". In: UAI. AUAI Press, 2013, pp. 449–458.
[12] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. "Hilbert Space Embeddings and Metrics on Probability Measures". In: JMLR 99 (2010), pp. 1517–1561.
[13] K. Fukumizu, L. Song, and A. Gretton. "Kernel Bayes' Rule: Bayesian Inference with Positive Definite Kernels". In: JMLR 14 (2013), pp. 3753–3783.
[14] C. M. Stein. "Estimation of the Mean of a Multivariate Normal Distribution". In: The Annals of Statistics 9.6 (1981), pp. 1135–1151.
[15] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Vol. 375. Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1996.
[16] E. D. Vito, L. Rosasco, and R. Verri. Spectral Methods for Regularization in Learning Theory. 2006.
[17] E. D. Vito, L. Rosasco, A. Caponnetto, U. D. Giovannini, and F. Odone. "Learning from Examples as an Inverse Problem". In: JMLR 6 (2005), pp. 883–904.
[18] L. Baldassarre, L. Rosasco, A. Barla, and A. Verri. "Vector Field Learning via Spectral Filtering". In: ECML/PKDD (1). Vol. 6321. Lecture Notes in Computer Science. Springer, 2010, pp. 56–71.
[19] J. Kim and C. D. Scott. "Robust Kernel Density Estimation". In: JMLR 13 (2012), pp. 2529–2565.
[20] I. Steinwart and A. Christmann. Support Vector Machines. New York: Springer, 2008.
[21] L. Song and B. Dai. "Robust Low Rank Kernel Embeddings of Multivariate Distributions". In: NIPS. 2013, pp. 3228–3236.
[22] S. Grünewälder, G. Arthur, and J. Shawe-Taylor. "Smooth Operators". In: ICML. Vol. 28. 2013, pp. 1184–1192.
[23] L. L. Gerfo, L. Rosasco, F. Odone, E. D. Vito, and A. Verri. "Spectral Algorithms for Supervised Learning". In: Neural Computation 20.7 (2008), pp. 1873–1897.
[24] O. V. Lepski, E. Mammen, and V. G. Spokoiny. "Optimal Spatial Adaptation to Inhomogeneous Smoothness: An Approach based on Kernel Estimates with Variable Bandwidth Selectors". In: Annals of Statistics 25 (1997), pp. 929–947.
[25] G. Golub, M. Heath, and G. Wahba. "Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter". In: Technometrics 21 (1979), pp. 215–223.
[26] L. Song, X. Zhang, A. Smola, A. Gretton, and B. Schölkopf. "Tailoring Density Estimation via Reproducing Kernel Moment Matching". In: ICML. 2008, pp. 992–999.
4,682 | 524 | NETWORK MODEL OF STATE-DEPENDENT SEQUENCING
Jeffrey P. Sutton,* Adam N. Mamelak† and J. Allan Hobson
Laboratory of Neurophysiology and Department of Psychiatry
Harvard Medical School
74 Fenwood Road, Boston, MA 02115
Abstract
A network model with temporal sequencing and state-dependent modulatory features is described. The model is motivated by neurocognitive data
characterizing different states of waking and sleeping. Computer studies
demonstrate how unique states of sequencing can exist within the same
network under different aminergic and cholinergic modulatory influences.
Relationships between state-dependent modulation, memory, sequencing
and learning are discussed.
1
INTRODUCTION
Models of biological information processing often assume only one mode or state
of operation. In general, this state depends upon a high degree of fidelity or modulation among the neural elements. In contrast, real neural networks often have
a. repertoire of processing states that is greatly affected by the relative balances of
various neuromodulators (Selverston, 1988; Harris-Warrick and Marder, 1991). One
area where changes in neuromodulation and network behavior are tightly and dramatically coupled is in the sleep-wake cycle (Hobson and Steriade, 1986; Mamelak
and Hobson, 1989). This cycle consists of three main states: wake, non-rapid eye
* Also in the Center for Biological Information Processing, Whitaker College, E25-201, Massachusetts Institute of Technology, Cambridge, MA 02139
† Currently in the Department of Neurosurgery, University of California, San Francisco, CA 94143
movement (NREM) sleep and rapid eye movement (REM) sleep. Each state is characterized by a unique balance of monoaminergic and cholinergic neuromodulation
(Hobson and Steriade, 1986; figure 1). In humans, each state also has characteristic cognitive sequencing properties (Foulkes, 1985; Hobson, 1988; figure 1). An
integration and better understanding of the complex relationships between neuromodulation and information sequencing are desirable from both a computational
and a neurophysiological perspective. In this paper, we present an initial approach
to this difficult neurocognitive problem using a network model.
[Figure 1 table: for each STATE (WAKE, NREM SLEEP, REM SLEEP), the tonic aminergic and phasic cholinergic MODULATION levels (high, intermediate, or low) and the characteristic SEQUENCING (progressive, perseverative, bizarre), illustrated with example memory sequences such as A1 → A2 → A3, input-driven switches into loop B, and PGO-driven mixed states such as A2/B1.]
Figure 1: Overview of the three state model which attempts to integrate aspects of neuromodulation and cognitive sequencing. The aminergic and cholinergic systems are important neuromodulators that filter and amplify, as opposed to initiating or carrying, distributed information embedded as memories (eg. A1, A2, A3) in neural networks. In the wake state, a relative aminergic dominance exists and the associated network sequencing is logical and progressive. For example, the sequence A1 → A2 transitions to B1 → B2 when an appropriate input (eg. B1) is present at a certain time. The NREM state is characterized by an intermediate aminergic-to-cholinergic ratio correlated with ruminative and perseverative sequences. Unexpected or "bizarre" sequences are found in the REM state, wherein phasic cholinergic inputs dominate and are prominent in the ponto-geniculo-occipital (PGO) brain areas. Bizarreness is manifest by incongruous or mixed memories, such as A2/B1, and sequence discontinuities, such as A2 → A2/B1 → B2, which may be associated with PGO bursting in the absence of other external input.
2
AMINERGIC AND CHOLINERGIC
NEUROMODULATION
As outlined in figure 1, there are unique correlations among the aminergic and
cholinergic systems and the forms of information sequencing that exist in the states
of waking and NREM and REM sleep. The following brief discussion, which undoubtedly oversimplifies the complicated and widespread actions of these systems,
highlights some basic and relevant principles. Interested readers are referred to the
review by Hobson and Steriade (1986) and the article by Hobson et al. in this
volume for a more detailed presentation.
The biogenic amines, including norepinephrine, serotonin and dopamine, have
been implicated as tonic regulators of the signal-to-noise ratio in neural networks
(eg. Mamelak and Hobson, 1989). Increasing (decreasing) the amount of aminergic
modulation improves (worsens) network fidelity (figure 2a). A standard means of
modeling this property is by a stochastic or gain factor, analogous to the well-known
Boltzmann factor β = 1/kT, which is present in the network updating rule.
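Concretely, anticipating the updating rule (equation (4) below), one standard form of such a gain factor is a logistic dependence of firing probability on the membrane potential relative to threshold:

$$P\{S_i(t+1) = \pm 1\} = \left\{ 1 + e^{\mp \beta\,[h_i(t) - \theta_i(t)]} \right\}^{-1},$$

so that large β gives nearly deterministic threshold behavior while small β flattens the curves, as in figure 2a.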
Complex neuromodulatory effects of acetylcholine depend upon the location and
types of receptors and channels present in different neurons. One main effect is
facilitatory excitation (figure 2b). Mamelak and Hobson (1989) have suggested
how the phasic release of acetylcholine, involving the bursting of PGO cells in the
brainstem, coupled with tonic aminergic demodulation, could induce bifurcations
in information sequencing at the network level. The model described in the next
section sets out to test this notion.
[Figure 2 appears here: (a) sigmoidal curves of firing probability vs. membrane potential relative to threshold, one curve per value of β; (b) schematic of cholinergic facilitation, in which an EPSP of magnitude δ converts subthreshold activity into persistent firing only when the initial membrane potential lies within δ of threshold.]
Figure 2: (a) Plot of neural firing probability as a function of the membrane potential, h, relative to threshold, θ, for values of aminergic modulation β of 0.5, 1.0, 1.5 and 3.0. (b) Schematic diagram of cholinergic facilitation, where EPSPs of magnitude δ only induce a change in firing activity if h is initially in the range (θ − δ, θ). Modified from Mamelak and Hobson (1989).
3
ASSOCIATIVE SEQUENCING NETWORK
There are several ways to approach the problem of modeling modulatory effects on temporal sequencing. We have chosen to commence with an associative network that is an extension of the work on models resembling elementary motor pattern generators (Kleinfeld, 1986; Sompolinsky and Kanter, 1986; Gutfreund and Mezard, 1988). We consider it to be significant that recent data on brainstem control systems show an overlap between sleep-wake regulators and locomotor pattern generators (Garcia-Rill et al., 1990).

The network consists of N neural elements with binary values $S_i = \pm 1$, $i = 1, \ldots, N$, corresponding to whether they are firing or not firing. The elements are linked together by two kinds of a priori learned synaptic connections. One kind,

$$J_{ij}^{(1)} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu}, \qquad i \neq j, \qquad (1)$$

encodes a set of p uncorrelated patterns $\{\xi_i^{\mu}\}_{i=1}^{N}$, $\mu = 1, \ldots, p$, where each $\xi_i^{\mu}$ takes the value $\pm 1$ with equal probabilities. These patterns correspond to memories that are stable until a transition to another memory is made. Transitions in a sequence of memories $\mu = 1 \rightarrow 2 \rightarrow \cdots \rightarrow q < p$ are induced by a second type of connection

$$J_{ij}^{(2)} = \frac{\lambda}{N} \sum_{\mu=1}^{q-1} \xi_i^{\mu+1} \xi_j^{\mu}. \qquad (2)$$
Here, λ is a relative weight of the two connection types. The average time spent in a memory pattern before transitioning to the next one in a sequence is τ. At time t, the membrane potential is given by

$$h_i(t) = \sum_{j=1}^{N} \left[ J_{ij}^{(1)} S_j(t) + J_{ij}^{(2)} S_j(t - \tau) \right] + \delta_i(t) + I_i(t). \qquad (3)$$

The two terms contained in the brackets reflect intrinsic network interactions, while phasic PGO effects are represented by the $\delta_i(t)$. External inputs, other than PGO inputs, to $h_i(t)$ are denoted by $I_i(t)$. Dynamic evolution of the network follows the updating rule

$$S_i(t+1) = \pm 1 \quad \text{with probability} \quad \left\{ 1 + e^{\mp \beta\,[h_i(t) - \theta_i(t)]} \right\}^{-1}. \qquad (4)$$
In this equation, the amount of aminergic-like modulation is parameterized by β. While updating could be done serially, a parallel dynamic process is chosen here for convenience. In the absence of external and PGO-like inputs, and with β > 1.0, the dynamics have the effect of generating trajectories on an adiabatically varying hypersurface that molds in time to produce a path from one basin of attraction to another. For β < 1.0, the network begins to lose this property. Lowering β mostly affects neural elements close to threshold, since the decision to change firing activity centers around the threshold value. However, as β decreases, fluctuations in the membrane potentials increase and a larger fraction of the neural elements remain, on average, near threshold.
4
SIMULATION RESULTS
A network consisting of N = 50 neural elements was examined wherein p = 6 memory patterns (A1, A2, A3, B1, B2 and B3) were chosen at random (p/N = 0.12). These memories were arranged into two loops, A and B, according to equation (2) such that the cyclic sequences A1 → A2 → A3 → A1 → ··· and B1 → B2 → B3 → B1 → ··· were stored in loops A and B, respectively. For simplicity, $\delta_i(t) = \delta(t)$ and $\theta_i(t) = 0$, $\forall i$. The transition parameters were set to λ = 2.5 and τ = 8 for all the simulations to ensure reliable pattern generation under fully modulated conditions (large β, δ = 0; Sompolinsky and Kanter, 1986). Variations in β, δ(t) and $I_i(t)$ delineated the individual states that were examined.

In the model wake state, where there was a high degree of aminergic-like modulation (eg. β = 2.0), the network generated loops of sequential memories. Once cued into one of the two loops, the network would remain in that loop until an external input caused a transition into the other loop (figure 3).
[Figure 3 appears here: six time-series panels of overlap vs. time, one per memory pattern (A1, A2, A3, B1, B2, B3), for the simulated wake state.]
Figure 3: Plot of overlap as a function of time for each of the six memories A1, A2, A3, B1, B2, B3 in the simulated wake state. The overlap is a measure of the normalized Hamming distance between the instantaneous pattern of the network and a given memory. β = 2.0, δ = 0, λ = 2.5, τ = 8. The network is cued in pattern A1 and then sequences through loop A. At t = 75, pattern B1 is inputted to the network and loop B ensues. The dotted lines highlight the transitions between different memory patterns.
[Figure 4 appears here: six time-series panels of overlap vs. time for the simulated NREM sleep state.]
Figure 4: Graph of overlap vs. time for each of the six memories in the simulated NREM sleep state. β = 1.1, δ = 0, λ = 2.5, τ = 8. Initially, the network is cued in pattern A1 and remains in loop A. Considerable fluctuations in the overlaps are present and external inputs are absent.
As β was decreased (eg. β = 1.1), partially characterizing conditions of a model NREM state, sequencing within a loop was observed to persist (figure 4). However, decreased stability relative to the wake state was observed and small perturbations could cause disruptions within a loop and occasional bifurcations between loops. Nevertheless, in the absence of an effective mechanism to induce inter-loop transitions, the sequences were basically repetitive in this state.

For small β (eg. 0.8 < β < 1.0) and various PGO-like activities within the simulated REM state, a diverse and rich set of dynamic behaviors was observed, only some of which are reported here. The network was remarkably sensitive to the timing of the PGO type bursts. With β = 1.0, inputs of δ = 2.5 units in clusters of 20 time steps occurring with a frequency of approximately one cluster per 50 time steps could induce the following: (a) no or little effect on identifiable intra-loop sequencing; (b) bifurcations between loops; (c) a change from orderly intra-loop sequencing to apparent disorder;¹ (d) a change from apparent disorder to orderly progression within a single loop ("defibrillation" effect); (e) a change from a disorderly pattern to another disorderly pattern. An example of transition types (c) and (d), with the overall effect of inducing a bifurcation between the loops, is shown in figure 5.

¹On detailed inspection, the apparent disorder actually revealed several sequences in loops A and/or B running out of phase with relative delays generally less than τ.
In general, lower intensity (eg. 2.0 to 2.5 units), longer duration (eg. >20 time steps) PGO-like bursting was more effective in inducing bifurcations than higher intensity (eg. 4.0 units), shorter duration (eg. 2 time steps) bursts. PGO induced bifurcations were possible in all states and were associated with significant populations of neural elements that were below, but within δ units of, threshold.
[Figure 5 appears here: six time-series panels of overlap vs. time for the simulated REM sleep state, with two clusters of simulated PGO bursts marked along the time axis.]
Figure 5: REM sleep state plot of overlap vs. time for each of the six memories. β = 1.0, δ = 2.5, λ = 2.5, τ = 8. The network sequences progressively in loop A until a cluster of simulated PGO bursts (asterisks) occurs lasting 40 < t < 60. A complex output involving alternating sequences from loop A and loop B results (note dotted lines). A second PGO burst cluster during the interval 90 < t < 110 yields an output consisting of a single loop B sequence. Over the time span of the simulation, a bifurcation from loop A to loop B has been induced.
5
STATE-DEPENDENT LEARNING
The connections set up by equations (1) and (2) are determined a priori using
a standard Hebbian learning algorithm and are not altered during the network
simulations. Since neuromodulators, including the monoamines norepinephrine and
serotonin, have been implicated as essential factors in synaptic plasticity (Kandel
et al., 1987), it seems reasonable that state changes in modulation may also effect
changes in plasticity. This property, when superimposed on the various sequencing
features of a network, may yield possibly novel memory and sequence formations,
associations and perhaps other unexamined global processes.
6
CONCLUSIONS
The main finding of this paper is that unique states of information sequencing
can exist within the same network under different modulatory conditions. This
result holds even though the model makes significant simplifying assumptions about
the neurophysiological and cognitive processes motivating its construction. Several
observations from the model also suggest mechanisms whereby interactions between
the aminergic and cholinergic systems can give rise to sequencing properties, such as
discontinuities, in different states, especially REM sleep. Finally, the model provides
a means of investigating some of the complex and interesting relationships between
modulation, memory, sequencing and learning within and between different states.
Acknowledgements
Supported by NIH grant MH 13,923, the HMS/MMHC Research & Education Fund,
the Livingston, Dupont-Warren and McDonnell-Pew Foundations, DARPA under
ONR contract N00014-85-K-0124, the Sloan Foundation and Whitaker College.
References
Foulkes D (1985) Dreaming: A Cognitive-Psychological Analysis. Hillsdale: Erlbaum.
Garcia-Rill E, Atsuta Y, Iwahara T, Skinner RD (1990) Development of brainstem modulation of locomotion. Somatosensory Motor Research 7 238-239.
Gutfreund H, Mezard M (1988) Processing of temporal sequences in neural networks. Phys Rev Lett 61 235-238.
Harris-Warrick RM, Marder E (1991) Modulation of neural networks for behavior. Annu Rev Neurosci 14 39-57.
Hobson JA (1988) The Dreaming Brain. New York: Basic.
Hobson JA, Steriade M (1986) Neuronal basis of behavioral state control. In: Mountcastle VB (ed) Handbook of Physiology - The Nervous System, Vol IV. Bethesda: Am Physiol Soc, 701-823.
Kandel ER, Klein M, Hochner B, Shuster M, Siegelbaum S, Hawkins R, et al. (1987) Synaptic modulation and learning: New insights into synaptic transmission from the study of behavior. In: Edelman GM, Gall WE, Cowan WM (eds) Synaptic Function. New York: Wiley, 471-518.
Kleinfeld D (1986) Sequential state generation by model neural networks. Proc Natl Acad Sci USA 83 9469-9473.
Mamelak AN, Hobson JA (1989) Dream bizarreness as the cognitive correlate of altered neuronal behavior in REM sleep. J Cog Neurosci 1(3) 201-222.
Selverston AI (1988) A consideration of invertebrate central pattern generators as computational data bases. Neural Networks 1 109-117.
Sompolinsky H, Kanter I (1986) Temporal association in asymmetric neural networks. Phys Rev Lett 57 2861-2864.
4,683 | 5,240 | Subspace Embeddings for the Polynomial Kernel
Huy L. Nguyễn
Simons Institute, UC Berkeley
Berkeley, CA 94720
hlnguyen@cs.princeton.edu
Haim Avron
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
haimav@us.ibm.com
David P. Woodruff
IBM Almaden Research Center
San Jose, CA 95120
dpwoodru@us.ibm.com
Abstract
Sketching is a powerful dimensionality reduction tool for accelerating statistical
learning algorithms. However, its applicability has been limited to a certain extent
since the crucial ingredient, the so-called oblivious subspace embedding, can only
be applied to data spaces with an explicit representation as the column span or row
span of a matrix, while in many settings learning is done in a high-dimensional
space implicitly defined by the data matrix via a kernel transformation. We propose the first fast oblivious subspace embeddings that are able to embed a space
induced by a non-linear kernel without explicitly mapping the data to the high-dimensional space. In particular, we propose an embedding for mappings induced by the polynomial kernel. Using the subspace embeddings, we obtain the
fastest known algorithms for computing an implicit low rank approximation of the
higher-dimension mapping of the data matrix, and for computing an approximate
kernel PCA of the data, as well as doing approximate kernel principal component
regression.
1
Introduction
Sketching has emerged as a powerful dimensionality reduction technique for accelerating statistical learning techniques such as $\ell_p$-regression, low rank approximation, and principal component
analysis (PCA) [12, 5, 14]. For natural settings of parameters, this technique has led to the first
asymptotically optimal algorithms for a number of these problems, often providing considerable
speedups over exact algorithms. Behind many of these remarkable algorithms is a mathematical apparatus known as an oblivious subspace embedding (OSE). An OSE is a data-independent random
transform which is, with high probability, an approximate isometry over the embedded subspace,
i.e. $\|Sx\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$, where S is the OSE, V is the embedded subspace and $\|\cdot\|$ is some norm of interest. For the OSE to be useful in applications, it is crucial
that applying it to a vector or a collection of vectors (a matrix) can be done faster than the intended
downstream use.
So far, all OSEs proposed in the literature are for embedding subspaces that have a representation
as the column space or row space of an explicitly provided matrix, or close variants of it that admit
a fast multiplication given an explicit representation (e.g. [1]). This is quite unsatisfactory in many
statistical learning settings. In many cases the input may be described by a moderately sized n-by-d sample-by-feature matrix A, but the actual learning is done in a much higher (possibly infinite) dimensional space, by mapping each row of A to a high-dimensional feature space. Using the
kernel trick one can access the high dimensional mapped data points through an inner product space,
1
and thus avoid computing the mapping explicitly. This enables learning in the high-dimensional
space even if explicitly computing the mapping (if at all possible) is prohibitive. In such a setting,
computing the explicit mapping just to compute an OSE is usually unreasonable, if not impossible
(e.g., if the feature space is infinite-dimensional).
The main motivation for this paper is the following question: is it possible to design OSEs that
operate on the high-dimensional space without explicitly mapping the data to that space?
We propose the first fast oblivious subspace embeddings for spaces induced by a non-linear kernel
without explicitly mapping the data to the high-dimensional space. In particular, we propose an OSE
for mappings induced by the polynomial kernel. We then show that the OSE can be used to obtain
faster algorithms for the polynomial kernel. Namely, we obtain faster algorithms for approximate
kernel PCA and principal component regression.
We now elaborate on these contributions.
Subspace Embedding for Polynomial Kernel Maps. Let $k(x, y) = (\langle x, y \rangle + c)^q$ for some constant $c \geq 0$ and positive integer q. This is the degree q polynomial kernel function. Without loss of generality we assume that c = 0, since a non-zero c can be handled by adding a coordinate of value $\sqrt{c}$ to all of the data points. Let $\phi(x)$ denote the function that maps a d-dimensional vector x to the $d^q$-dimensional vector formed by taking the product of all subsets of q coordinates of x, i.e. $\phi(v) = v \otimes \cdots \otimes v$ (taking $\otimes$ q times), and let $\phi(A)$ denote the application of $\phi$ to the rows of A. $\phi$ is the map that corresponds to the polynomial kernel, that is $k(x, y) = \langle \phi(x), \phi(y) \rangle$, so learning with the data matrix A and the polynomial kernel corresponds to using $\phi(A)$ instead of A in a method that uses linear modeling.
We describe a distribution over $d^q \times O(3^q n^2/\epsilon^2)$ sketching matrices S so that the mapping $\phi(A) \cdot S$ can be computed in $O(\mathrm{nnz}(A)\,q) + \mathrm{poly}(3^q n/\epsilon)$ time, where $\mathrm{nnz}(A)$ denotes the number of non-zero entries of A. We show that with constant probability arbitrarily close to 1, simultaneously for all n-dimensional vectors z, $\|z \cdot \phi(A) \cdot S\|_2 = (1 \pm \epsilon)\|z \cdot \phi(A)\|_2$, that is, the entire row-space of $\phi(A)$ is approximately preserved. Additionally, the distribution does not depend on A, so it defines an OSE.
It is important to note that while the literature has proposed transformations for non-linear kernels
that generate an approximate isometry (e.g. Kernel PCA), or methods that are data independent (like
the Random Fourier Features [17]), no previous method satisfied both conditions, and thus none constitutes an OSE. These conditions are crucial for the algorithmic applications we propose (which
we discuss next).
Applications: Approximate Kernel PCA, PCR. We say an $n \times k$ matrix V with orthonormal columns spans a rank-k $(1 + \epsilon)$-approximation of an $n \times d$ matrix A if $\|A - VV^T A\|_F \leq (1 + \epsilon)\|A - A_k\|_F$, where $\|A\|_F$ is the Frobenius norm of A and $A_k = \arg\min_{X \text{ of rank } k} \|A - X\|_F$. We state our results for constant q.
In $O(\mathrm{nnz}(A)) + n \cdot \mathrm{poly}(k/\epsilon)$ time an $n \times k$ matrix V with orthonormal columns can be computed, for which $\|\phi(A) - VV^T \phi(A)\|_F \leq (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F$, where $[\phi(A)]_k$ denotes the best rank-k approximation to $\phi(A)$. The k-dimensional subspace V of $\mathbb{R}^n$ can be thought of as an approximation to the top k left singular vectors of $\phi(A)$. The only alternative algorithm we are aware of, which doesn't take time at least $d^q$, would be to first compute the Gram matrix $\phi(A) \cdot \phi(A)^T$ in $O(n^2 d)$ time, and then compute a low rank approximation, which, while this computation can also exploit sparsity in A, is much slower since the Gram matrix is often dense and requires $\Omega(n^2)$ time just to write down.
Given V, we show how to obtain a low rank approximation to $\phi(A)$. Our algorithm computes three matrices V, U, and R, for which $\|\phi(A) - V \cdot U \cdot \phi(R)\|_F \leq (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F$. This representation is useful, since given a point $y \in \mathbb{R}^d$, we can compute $\phi(R) \cdot \phi(y)$ quickly using the kernel trick. The total time to compute the low rank approximation is $O(\mathrm{nnz}(A)) + (n + d) \cdot \mathrm{poly}(k/\epsilon)$. This is considerably faster than standard kernel PCA which first computes the Gram matrix of $\phi(A)$.
We also show how the subspace V can be used to regularize and speed up various learning algorithms
with the polynomial kernel. For example, we can use the subspace V to solve regression problems
of the form $\min_x \|Vx - b\|_2$, an approximate form of principal component regression [8]. This can serve as a form of regularization, which is required as the problem $\min_x \|\phi(A)x - b\|_2$ is usually underdetermined. A popular alternative form of regularization is to use kernel ridge regression, which requires $O(n^2 d)$ operations. As $\mathrm{nnz}(A) \leq nd$, our method is again faster.
Our Techniques and Related Work. Pagh recently introduced the TensorSketch algorithm [14], which combines the earlier CountSketch of Charikar et al. [3] with the Fast Fourier Transform (FFT) in a clever way. Pagh originally applied TensorSketch for compressing matrix multiplication. Pham and Pagh then showed that TensorSketch can also be used for statistical learning with the polynomial kernel [16].

However, it was unclear whether TensorSketch can be used to approximately preserve entire subspaces of points (and thus can be used as an OSE). Indeed, Pham and Pagh show that a fixed point $v \in \mathbb{R}^d$ has the property that for the TensorSketch sketching matrix S, $\|\phi(v) \cdot S\|_2 = (1 \pm \epsilon)\|\phi(v)\|_2$ with constant probability. To obtain a high probability bound using their results, the authors take a median of several independent sketches. Given a high probability bound, one can use a net argument to show that the sketch is correct for all vectors v in an n-dimensional subspace of $\mathbb{R}^d$. The median operation results in a non-convex embedding, and it is not clear how to efficiently solve optimization problems in the sketch space with such an embedding. Moreover, since n independent sketches are needed for probability $1 - \exp(-n)$, the running time will be at least $n \cdot \mathrm{nnz}(A)$, whereas we seek only $\mathrm{nnz}(A)$ time.
Recently, Clarkson and Woodruff [5] showed that CountSketch can be used to provide a subspace embedding, that is, simultaneously for all $v \in V$, $\|\phi(v) \cdot S\|_2 = (1 \pm \epsilon)\|\phi(v)\|_2$. TensorSketch can be seen as a very restricted form of CountSketch, where the additional restrictions enable its fast running time on inputs which are tensor products. In particular, the hash functions in TensorSketch are only 3-wise independent. Nelson and Nguyen [13] showed that CountSketch still provides a subspace embedding if the entries are chosen from a 4-wise independent distribution. We significantly extend their analysis, and in particular show that 3-wise independence suffices for CountSketch to provide an OSE, and that TensorSketch indeed provides an OSE.
We stress that all previous work on sketching the polynomial kernel suffers from the drawback described above, that is, it provides no provable guarantees for preserving an entire subspace, which is needed, e.g., for low rank approximation. This is true even of the sketching methods for polynomial kernels that do not use TensorSketch [10, 7], as such work only provides tail bounds for preserving the norm of a fixed vector, and has the aforementioned problems of extending to a subspace, i.e., boosting the probability of error to be small enough to union bound over net vectors in a subspace would require increasing the running time by a factor equal to the dimension of the subspace.
After we show that TensorSketch is an OSE, we need to show how to use it in applications. An unusual aspect is that for a TensorSketch matrix S, we can compute $\phi(A) \cdot S$ very efficiently, as shown by Pagh [14], but computing $S \cdot \phi(A)$ is not known to be efficiently computable, and indeed, for degree-2 polynomial kernels this can be shown to be as hard as general rectangular matrix multiplication. In general, even writing down $S \cdot \phi(A)$ would take a prohibitive $d^q$ amount of time. We thus need to design algorithms which only sketch on one side of $\phi(A)$.
Another line of research related to ours is that on random feature maps, pioneered in the seminal paper of Rahimi and Recht [17] and extended by several papers, including a recent fast variant [11]. The goal in this line of research is to construct randomized feature maps $\Psi(\cdot)$ so that the Euclidean inner product $\langle \Psi(u), \Psi(v) \rangle$ closely approximates the value of $k(u, v)$, where k is the kernel; the mapping $\Psi(\cdot)$ is dependent on the kernel. Theoretical analysis has focused so far on showing that $\langle \Psi(u), \Psi(v) \rangle$ is indeed close to $k(u, v)$. This is also the kind of approach that Pham and Pagh [16] use to analyze TensorSketch. The problem with this kind of analysis is that it is hard to relate it to downstream metrics like generalization error and thus, in a sense, the algorithm remains a heuristic. In contrast, our approach based on OSEs provides a mathematical framework for analyzing the mappings, to reason about their downstream use, and to utilize various tools from numerical linear algebra in conjunction with them, as we show in this paper. We also note that, in contrast to random feature maps, TensorSketch is attuned to taking advantage of possible input sparsity; e.g., the method of Le et al. [11] requires computing the Walsh-Hadamard transform, whose running time is independent of the sparsity.
2
Background: CountSketch and TensorSketch
We start by describing the CountSketch transform [3]. Let m be the target dimension. When applied to d-dimensional vectors, the transform is specified by a 2-wise independent hash function $h : [d] \rightarrow [m]$ and a 2-wise independent sign function $s : [d] \rightarrow \{-1, +1\}$. When applied to v, the value at coordinate i of the output, $i = 1, 2, \ldots, m$, is $\sum_{j \mid h(j) = i} s(j) v_j$. Note that CountSketch can be represented as an $m \times d$ matrix in which the j-th column contains a single non-zero entry $s(j)$ in the $h(j)$-th row.
We now describe the TensorSketch transform [14]. Suppose we are given a point $v \in \mathbb{R}^d$ and so $\phi(v) \in \mathbb{R}^{d^q}$, and the target dimension is again m. The transform is specified using q 3-wise independent hash functions $h_1, \ldots, h_q : [d] \rightarrow [m]$, and q 4-wise independent sign functions $s_1, \ldots, s_q : [d] \rightarrow \{+1, -1\}$. TensorSketch applied to v is then CountSketch applied to $\phi(v)$ with hash function $H : [d^q] \rightarrow [m]$ and sign function $S : [d^q] \rightarrow \{+1, -1\}$ defined as follows:

$$H(i_1, \ldots, i_q) = h_1(i_1) + h_2(i_2) + \cdots + h_q(i_q) \bmod m,$$

and

$$S(i_1, \ldots, i_q) = s_1(i_1) \cdot s_2(i_2) \cdots s_q(i_q).$$
It is well-known that if H is constructed this way, then it is 3-wise independent [2, 15]. Unlike the
work of Pham and Pagh [16], which only used that H was 2-wise independent, our analysis needs
this stronger property of H.
The TensorSketch transform can be applied to v without computing $\phi(v)$ as follows. First, compute the polynomials

$$p_\ell(x) = \sum_{i=0}^{m-1} x^i \sum_{j \mid h_\ell(j) = i} v_j \cdot s_\ell(j),$$

for $\ell = 1, 2, \ldots, q$. A calculation [14] shows

$$\prod_{\ell=1}^{q} p_\ell(x) \bmod (x^m - 1) = \sum_{i=0}^{m-1} x^i \sum_{(j_1, \ldots, j_q) \mid H(j_1, \ldots, j_q) = i} v_{j_1} \cdots v_{j_q}\, S(j_1, \ldots, j_q),$$

that is, the coefficients of the product of the q polynomials mod $(x^m - 1)$ form the value of TensorSketch(v). Pagh observed that this product of polynomials can be computed in $O(qm \log m)$ time using the Fast Fourier Transform. As it takes $O(q\,\mathrm{nnz}(v))$ time to form the q polynomials, the overall time to compute TensorSketch(v) is $O(q(\mathrm{nnz}(v) + m \log m))$.
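A minimal sketch of this FFT-based computation, reusing the random-table convention of the countsketch snippet above (the tables stand in for the 3-wise independent hash and 4-wise independent sign families):

import numpy as np

def tensorsketch(v, m, hs, ss):
    # hs, ss: q hash tables (values in [0, m)) and q sign tables (+/-1), each of
    # length d. Computes CountSketch(phi(v)) in O(q (nnz(v) + m log m)) time by
    # multiplying the q polynomials p_ell(x) modulo x^m - 1 via the FFT.
    acc = np.ones(m, dtype=complex)
    for h, s in zip(hs, ss):
        p = np.zeros(m)
        np.add.at(p, h, s * v)   # coefficients of p_ell(x)
        acc *= np.fft.fft(p)     # pointwise product of FFTs = polynomial product
    return np.real(np.fft.ifft(acc))

As a sanity check, for vectors u and v and q = 2, the dot product of tensorsketch(u, ...) and tensorsketch(v, ...) computed with shared tables should concentrate around (u · v)² as m grows.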
3
TensorSketch is an Oblivious Subspace Embedding
Let S be the $d^q \times m$ matrix such that TensorSketch(v) is $\phi(v) \cdot S$ for a randomly selected TensorSketch. Notice that S is a random matrix. In the rest of the paper, we refer to such a matrix as a TensorSketch matrix with an appropriate number of columns, i.e., the number of hash buckets. We will show that S is an oblivious subspace embedding for subspaces in $\mathbb{R}^{d^q}$ for appropriate values of m. Notice that S has exactly one non-zero entry per row. The index of the non-zero in the row $(i_1, \ldots, i_q)$ is $H(i_1, \ldots, i_q) = \sum_{j=1}^{q} h_j(i_j) \bmod m$. Let $\delta_{a,b}$ be the indicator random variable of whether $S_{a,b}$ is non-zero. The sign of the non-zero entry in row $(i_1, \ldots, i_q)$ is $S(i_1, \ldots, i_q) = \prod_{j=1}^{q} s_j(i_j)$. Our main result is that the embedding matrix S of TensorSketch can be used to approximate matrix product and is a subspace embedding (OSE).
Theorem 1 (Main Theorem). Let S be the $d^q \times m$ matrix such that TensorSketch(v) is $\phi(v)S$ for a randomly selected TensorSketch. The matrix S satisfies the following two properties.

1. (Approximate Matrix Product:) Let A and B be matrices with $d^q$ rows. For $m \geq (2 + 3^q)/(\epsilon^2 \delta)$, we have

$$\Pr[\|A^T S S^T B - A^T B\|_F^2 \leq \epsilon^2 \|A\|_F^2 \|B\|_F^2] \geq 1 - \delta$$

2. (Subspace Embedding:) Consider a fixed k-dimensional subspace V. If $m \geq k^2(2 + 3^q)/(\epsilon^2 \delta)$, then with probability at least $1 - \delta$, $\|xS\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$.
Algorithm 1 k-Space
1: Input: $A \in \mathbb{R}^{n \times d}$, $\epsilon \in (0, 1]$, integer k.
2: Output: $V \in \mathbb{R}^{n \times k}$ with orthonormal columns which spans a rank-k $(1 + \epsilon)$-approximation to $\phi(A)$.
3: Set the parameters $m = \Theta(3^q k^2 + k/\epsilon)$ and $r = \Theta(3^q m^2/\epsilon^2)$.
4: Let S be a $d^q \times m$ TensorSketch and T be an independent $d^q \times r$ TensorSketch.
5: Compute $\phi(A) \cdot S$ and $\phi(A) \cdot T$.
6: Let U be an orthonormal basis for the column space of $\phi(A) \cdot S$.
7: Let W be the $m \times k$ matrix containing the top k left singular vectors of $U^T \phi(A) T$.
8: Output $V = UW$.
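A compact sketch of Algorithm 1, assuming a hypothetical helper sketch_rows(A, num_buckets) that applies a fresh degree-q TensorSketch to each row of A (e.g., a wrapper around the tensorsketch routine above), and that n is at least m so the QR step yields a full orthonormal basis:

import numpy as np

def k_space(A, k, m, r, sketch_rows):
    AS = sketch_rows(A, m)      # phi(A) * S, shape n x m   (step 5)
    AT = sketch_rows(A, r)      # phi(A) * T, shape n x r   (step 5)
    U, _ = np.linalg.qr(AS)     # orthonormal basis of col(phi(A) S)   (step 6)
    W = np.linalg.svd(U.T @ AT, full_matrices=False)[0][:, :k]   # (step 7)
    return U @ W                # V = U W   (step 8)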
We establish the theorem via two lemmas. The first lemma proves the approximate matrix product property via a careful second moment analysis. Due to space constraints, a proof is included only in the supplementary material version of the paper.

Lemma 2. Let A and B be matrices with $d^q$ rows. For $m \geq (2 + 3^q)/(\epsilon^2 \delta)$, we have

$$\Pr[\|A^T S S^T B - A^T B\|_F^2 \leq \epsilon^2 \|A\|_F^2 \|B\|_F^2] \geq 1 - \delta$$

The second lemma proves that the subspace embedding property follows from the approximate matrix product property.

Lemma 3. Consider a fixed k-dimensional subspace V. If $m \geq k^2(2 + 3^q)/(\epsilon^2 \delta)$, then with probability at least $1 - \delta$, $\|xS\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$.
Proof. Let B be a $d^q \times k$ matrix whose columns form an orthonormal basis of V. Thus, we have $B^T B = I_k$ and $\|B\|_F^2 = k$. The condition that $\|xS\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$ is equivalent to the condition that the singular values of $B^T S$ are bounded by $1 \pm \epsilon$. By Lemma 2, for $m \geq (2 + 3^q)/((\epsilon/k)^2 \delta)$, with probability at least $1 - \delta$, we have

$$\|B^T S S^T B - B^T B\|_F^2 \leq (\epsilon/k)^2 \|B\|_F^4 = \epsilon^2$$

Thus, we have $\|B^T S S^T B - I_k\|_2 \leq \|B^T S S^T B - I_k\|_F \leq \epsilon$. In other words, the squared singular values of $B^T S$ are bounded by $1 \pm \epsilon$, implying that the singular values of $B^T S$ are also bounded by $1 \pm \epsilon$. Note that $\|A\|_2$ for a matrix A denotes its operator norm.
4
Applications
4.1
Approximate Kernel PCA and Low Rank Approximation
We say an $n \times k$ matrix V with orthonormal columns spans a rank-k $(1 + \epsilon)$-approximation of an $n \times d$ matrix A if $\|A - VV^T A\|_F \leq (1 + \epsilon)\|A - A_k\|_F$. Algorithm k-Space (Algorithm 1) finds an $n \times k$ matrix V which spans a rank-k $(1 + \epsilon)$-approximation of $\phi(A)$.
Before proving the correctness of the algorithm, we start with two key lemmas. Proofs are included only in the supplementary material version of the paper.

Lemma 4. Let $S \in \mathbb{R}^{d^q \times m}$ be a randomly chosen TensorSketch matrix with $m = \Theta(3^q k^2 + k/\epsilon)$. Let $UU^T$ be the $n \times n$ projection matrix onto the column space of $\phi(A) \cdot S$. Then if $[U^T \phi(A)]_k$ is the best rank-k approximation to the matrix $U^T \phi(A)$, we have

$$\|U[U^T \phi(A)]_k - \phi(A)\|_F \leq (1 + O(\epsilon))\|\phi(A) - [\phi(A)]_k\|_F.$$

Lemma 5. Let $UU^T$ be as in Lemma 4. Let $T \in \mathbb{R}^{d^q \times r}$ be a randomly chosen TensorSketch matrix with $r = O(3^q m^2/\epsilon^2)$, where $m = \Theta(3^q k^2 + k/\epsilon)$. Suppose W is the $m \times k$ matrix whose columns are the top k left singular vectors of $U^T \phi(A) T$. Then,

$$\|UWW^T U^T \phi(A) - \phi(A)\|_F \leq (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F.$$
Theorem 6. (Polynomial Kernel Rank-k Space.) For the polynomial kernel of degree q, in $O(\mathrm{nnz}(A)q) + n \cdot \mathrm{poly}(3^q k/\epsilon)$ time, Algorithm k-Space finds an $n \times k$ matrix V which spans a rank-k $(1 + \epsilon)$-approximation of $\phi(A)$.
Proof. By Lemma 4 and Lemma 5, the output $V = UW$ spans a rank-k $(1 + \epsilon)$-approximation to $\phi(A)$. It only remains to argue the time complexity. The sketches $\phi(A) \cdot S$ and $\phi(A) \cdot T$ can be computed in $O(\mathrm{nnz}(A)q) + n \cdot \mathrm{poly}(3^q k/\epsilon)$ time. In $n \cdot \mathrm{poly}(3^q k/\epsilon)$ time, the matrix U can be obtained from $\phi(A) \cdot S$ and the product $U^T \phi(A) T$ can be computed. Given $U^T \phi(A) T$, the matrix W of top k left singular vectors can be computed in $\mathrm{poly}(3^q k/\epsilon)$ time, and in $n \cdot \mathrm{poly}(3^q k/\epsilon)$ time the product $V = UW$ can be computed. Hence the overall time is $O(\mathrm{nnz}(A)q) + n \cdot \mathrm{poly}(3^q k/\epsilon)$, and the theorem follows.
We now show how to find a low rank approximation to $\phi(A)$. A proof is included in the supplementary material version of the paper.

Theorem 7. (Polynomial Kernel PCA and Low Rank Factorization) For the polynomial kernel of degree q, in $O(\mathrm{nnz}(A)q) + (n + d) \cdot \mathrm{poly}(3^q k/\epsilon)$ time, we can find an $n \times k$ matrix V, a $k \times \mathrm{poly}(k/\epsilon)$ matrix U, and a $\mathrm{poly}(k/\epsilon) \times d$ matrix R for which

$$\|V \cdot U \cdot \phi(R) - \phi(A)\|_F \leq (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F.$$

The success probability of the algorithm is at least .6, which can be amplified with independent repetition.

Note that Theorem 7 implies the rowspace of $\phi(R)$ contains a k-dimensional subspace L with $d^q \times d^q$ projection matrix $LL^T$ for which $\|\phi(A)LL^T - \phi(A)\|_F \leq (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F$, that is, L provides an approximation to the space spanned by the top k principal components of $\phi(A)$.
4.2
Regularizing Learning With the Polynomial Kernel
Consider learning with the polynomial kernel. Even if $d \ll n$, it might be that even for low values of q we have $d^q \gg n$. This makes a number of learning algorithms underdetermined, and increases the chance of overfitting. The problem is even more severe if the input matrix A has a lot of redundancy in it (noisy features).
To address this, many learning algorithms add a regularizer, e.g., ridge terms. Here we propose to
regularize by using rank-k approximations to the matrix (where k is the regularization parameter
that is controlled by the user). With the tools developed in the previous subsection, this not only
serves as a regularization but also as a means of accelerating the learning.
We now show two different methods that can be regularized using this approach.
4.2.1
Approximate Kernel Principal Component Regression
If $d^q > n$ then linear regression with $\phi(A)$ becomes underdetermined and exact fitting to the right hand side is possible, in more than one way. One form of regularization is Principal Component Regression (PCR), which first uses PCA to project the data onto the principal components, and then continues with linear regression in this space.
We now introduce the following approximate version of PCR.
Definition 8. In the Approximate Principal Component Regression Problem (Approximate PCR),
we are given an n ? d matrix A and an n ? 1 vector b, and the goal is to find a vector x ? Rk and
an n ? k matrix V with orthonormal columns spanning a rank-k (1 + )-approximation to A for
which x = argminx kV x ? bk2 .
Notice that if A is a rank-k matrix, then Approximate PCR coincides with ordinary least squares
regression with respect to the column space of A. While PCR would require solving the regression
problem with respect to the top k singular vectors of A, in general finding these k vectors exactly
results in unstable computation, and cannot be found by an efficient linear sketch. This would
occur, e.g., if the k-th singular value ?k of A is very close (or equal) to ?k+1 . We therefore relax
the definition to only require that the regression problem be solved with respect to some k vectors
which span a rank-k (1 + )-approximation to A.
The following is our main theorem for Approximate PCR.
Theorem 9. (Polynomial Kernel Approximate PCR.) For the polynomial kernel of degree q, in
O(nnz(A)q) + n · poly(3^q k/ε) time one can solve the approximate PCR problem, namely, one
can output a vector x ∈ ℝ^k and an n × k matrix V with orthonormal columns spanning a rank-k
(1 + ε)-approximation to φ(A), for which x = argmin_x ‖Vx − b‖₂.
Proof. Applying Theorem 6, we can find an n × k matrix V with orthonormal columns spanning a
rank-k (1 + ε)-approximation to φ(A) in O(nnz(A)q) + n · poly(3^q k/ε) time. At this point, one can
solve the regression problem argmin_x ‖Vx − b‖₂ exactly in O(nk) time since the minimizer is
x = VᵀB? No: x = Vᵀb.
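To see why the final regression step costs only O(nk): with orthonormal V the least-squares problem
collapses to a matrix-vector product. A tiny numpy illustration with hypothetical V and b:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 20
V = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal columns: V^T V = I
b = rng.standard_normal(n)

x = V.T @ b     # argmin_x ||V x - b||_2, computed in O(nk)
fitted = V @ x  # projection of b onto the column space of V
```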
4.2.2
Approximate Kernel Canonical Correlation Analysis
In Canonical Correlation Analysis (CCA) we are given two matrices A, B and we wish to find
directions in which the spaces spanned by their columns are correlated. Due to space constraints,
details appear only in the supplementary material version of the paper.
5
Experiments
We report two sets of experiments whose goal is to demonstrate that the k-Space algorithm (Algorithm 1)
is useful as a feature extraction algorithm. We use standard classification and regression datasets.
In the first set of experiments, we compare ordinary ℓ₂ regression to approximate principal component
ℓ₂ regression, where the approximate principal components are extracted using k-Space (we
use RLSC for classification). Specifically, as explained in Section 4.2.1, we use k-Space to compute
V and then use regression on V (in one dataset we also add an additional ridge regularization). To
predict, we notice that V = φ(A) · S · R⁻¹ · W, where R is the R factor of φ(A) · S, so S · R⁻¹ · W
defines a mapping to the approximate principal components. So, to predict on a matrix A_t we first
compute φ(A_t) · S · R⁻¹ · W (using TensorSketch to compute φ(A_t) · S fast) and then multiply
by the coefficients found by the regression. In all the experiments, φ(·) is defined using the kernel
k(u, v) = (uᵀv + 1)³.
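Concretely, once R (the R factor of φ(A)·S) and W are in hand, the prediction-time features are
obtained by sketching the new rows and right-multiplying by R⁻¹W. A schematic numpy version of
ours, with all inputs assumed precomputed; the triangular solve avoids forming an explicit inverse:

```python
import numpy as np

def prediction_features(sketch_test, R, W):
    # sketch_test = phi(A_t) @ S, computed quickly with TensorSketch.
    # Returns phi(A_t) S R^{-1} W; right-multiplication by R^{-1} uses
    # the identity X R^{-1} = solve(R^T, X^T)^T.
    return np.linalg.solve(R.T, sketch_test.T).T @ W
```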
While k-Space is efficient and gives an embedding in time that is faster than explicitly expanding the
feature map, or using kernel PCA, there is still some non-negligible overhead in using it. Therefore,
we also experimented with feature extraction using only a subset of the training set. Specifically, we
first sample the dataset, and then use k-Space to compute the mapping S · R⁻¹ · W. We apply this
mapping to the entire dataset before doing regression.
The results are reported in Table 1. Since k-Space is randomized, we report the mean and standard
deviation of 5 runs. For all datasets, learning with the extracted features yields better generalization
errors than learning with the original features. Extracting the features using only a sample of the
training set results in only slightly worse generalization errors. With regards to the MNIST dataset,
we caution the reader not to compare the generalization results to the ones obtained using the polynomial
kernel (as reported in the literature). In our experiments we do not use the polynomial kernel
on the entire dataset, but rather use it to extract features (i.e., do principal component regularization)
using only a subset of the examples (only 5,000 examples out of 60,000). One can expect worse
results, but this is a more realistic strategy for very large datasets. On very large datasets it is typically
unrealistic to use the polynomial kernel on the entire dataset, and approximation techniques, like the
ones we suggest, are necessary.
We use a similar setup in the second set of experiments, now using linear SVM instead of regression
(we run only on the classification datasets). The results are reported in Table 2. Although the gap is
smaller, we see again that generally the extracted features lead to better generalization errors.
We remark that it is not our goal to show that k-Space is the best feature extraction algorithm of
the classification algorithms we considered (RLSC and SVM), or that it is the fastest, but rather
that it can be used to extract features of higher quality than the original ones. In fact, in our experiments,
while for a fixed number of extracted features, k-Space produces better features than simply
using TensorSketch, it is also more expensive in terms of time. If that additional time is used
to do learning or prediction with TensorSketch with more features, we overall get better generalization
error (we do not report the results of these experiments). However, feature extraction is
widely applicable, and there can be cases where having fewer high quality features is beneficial, e.g.
performing multiple learning tasks on the same data, or a very expensive learning task.
Table 1: Comparison of testing error using regression with the original features and with features extracted using k-Space. In the table, n is the number of training instances, d the number of features per instance, and nt the number of instances in the test set. "Regression" stands for ordinary ℓ₂ regression. "PCA Regression" stands for approximate principal component ℓ₂ regression. "Sampled PCA Regression" stands for approximate principal component ℓ₂ regression where only ns samples from the training set are used for computing the feature extraction. In "PCA Regression" and "Sampled PCA Regression" k features are extracted. In k-Space we use m = O(k) and r = O(k), with the ratios between m and k and between r and k as detailed in the table. For classification tasks, the percent of testing points incorrectly predicted is reported. For regression tasks, we report ‖yp − y‖₂/‖y‖ where yp is the predicted values and y is the ground truth.

| Dataset | Regression | PCA Regression | Sampled PCA Regression |
| MNIST (classification, n = 60,000, d = 784, nt = 10,000) | 14% | Out of Memory | 7.9% ± 0.06% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| CPU (regression, n = 6,554, d = 21, nt = 819) | 12% | 4.3% ± 1.0% (k = 200, m/k = 4, r/k = 8) | 3.6% ± 0.1% (k = 200, ns = 2000, m/k = 4, r/k = 8) |
| ADULT (classification, n = 32,561, d = 123, nt = 16,281) | 15.3% | 15.2% ± 0.1% (k = 500, m/k = 2, r/k = 4) | 15.2% ± 0.03% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| CENSUS (regression, n = 18,186, d = 119, nt = 2,273) | 7.1% | 6.5% ± 0.2% (k = 500, m/k = 4, r/k = 8, λ = 0.001) | 6.8% ± 0.4% (k = 500, ns = 5000, m/k = 4, r/k = 8, λ = 0.001) |
| USPS (classification, n = 7,291, d = 256, nt = 2,007) | 13.1% | 7.0% ± 0.2% (k = 200, m/k = 4, r/k = 8) | 7.5% ± 0.3% (k = 200, ns = 2000, m/k = 4, r/k = 8) |

Table 2: Comparison of testing error using SVM with the original features and with features extracted using k-Space. In the table, n is the number of training instances, d the number of features per instance, and nt the number of instances in the test set. "SVM" stands for linear SVM. "PCA SVM" stands for using k-Space to extract features and then using linear SVM. "Sampled PCA SVM" stands for using only ns samples from the training set for computing the feature extraction. In "PCA SVM" and "Sampled PCA SVM" k features are extracted. In k-Space we use m = O(k) and r = O(k), with the ratios between m and k and between r and k as detailed in the table. For classification tasks, the percent of testing points incorrectly predicted is reported.

| Dataset | SVM | PCA SVM | Sampled PCA SVM |
| MNIST (classification, n = 60,000, d = 784, nt = 10,000) | 8.4% | Out of Memory | 6.1% ± 0.1% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| ADULT (classification, n = 32,561, d = 123, nt = 16,281) | 15.0% | 15.1% ± 0.1% (k = 500, m/k = 2, r/k = 4) | 15.2% ± 0.1% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| USPS (classification, n = 7,291, d = 256, nt = 2,007) | 8.3% | 7.2% ± 0.2% (k = 200, m/k = 4, r/k = 8) | 7.5% ± 0.3% (k = 200, ns = 2000, m/k = 4, r/k = 8) |

6
Conclusions and Future Work
Sketching based dimensionality reduction has so far been limited to linear models. In this paper,
we describe the first oblivious subspace embeddings for a non-linear kernel expansion (the polynomial
kernel), opening the door for sketching based algorithms for a multitude of problems involving
kernel transformations. We believe this represents a significant expansion of the capabilities of
sketching based algorithms. However, the polynomial kernel has a finite expansion, and this finiteness
is quite useful in the design of the embedding, while many popular kernels induce an infinite-dimensional
mapping. We propose that the next step in expanding the reach of sketching based
methods for statistical learning is to design oblivious subspace embeddings for non-finite kernel
expansions, e.g., the expansions induced by the Gaussian kernel.
References
[1] H. Avron, V. Sindhwani, and D. P. Woodruff. Sketching structured matrices for faster nonlinear regression. In Advances in Neural Information Processing Systems (NIPS), 2013.
[2] L. Carter and M. N. Wegman. Universal classes of hash functions. J. Comput. Syst. Sci., 18(2):143-154, 1979.
[3] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3-15, 2004.
[4] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 2009.
[5] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013.
[6] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM J. Matrix Analysis Applications, 30(2):844-881, 2008.
[7] R. Hamid, Y. Xiao, A. Gittens, and D. DeCoste. Compact random feature maps. In Proc. of the 31st International Conference on Machine Learning (ICML), 2014.
[8] I. T. Jolliffe. A note on the use of principal components in regression. Journal of the Royal Statistical Society, Series C, 31(3):300-303, 1982.
[9] R. Kannan, S. Vempala, and D. P. Woodruff. Principal component analysis and higher correlations for distributed data. In Proceedings of the 27th Conference on Learning Theory (COLT), 2014.
[10] P. Kar and H. Karnick. Random feature maps for dot product kernels. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[11] Q. Le, T. Sarlós, and A. Smola. Fastfood: Approximating kernel expansions in loglinear time. In Proc. of the 30th International Conference on Machine Learning (ICML), 2013.
[12] M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123-224, 2011.
[13] J. Nelson and H. Nguyen. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. In 54th IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2013.
[14] R. Pagh. Compressed matrix multiplication. ACM Trans. Comput. Theory, 5(3):9:1-9:17, 2013.
[15] M. Patrascu and M. Thorup. The power of simple tabulation hashing. J. ACM, 59(3):14, 2012.
[16] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, pages 239-247, New York, NY, USA, 2013. ACM.
[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), 2007.
4,684 | 5,241 | Learning the Learning Rate for
Prediction with Expert Advice
Wouter M. Koolen
Queensland University of Technology and UC Berkeley
wouter.koolen@qut.edu.au
Tim van Erven
Leiden University, the Netherlands
tim@timvanerven.nl
Peter D. Grünwald
Leiden University and Centrum Wiskunde & Informatica, the Netherlands
pdg@cwi.nl
Abstract
Most standard algorithms for prediction with expert advice depend on a parameter
called the learning rate. This learning rate needs to be large enough to fit the data
well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher
and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher
learning rate. To close the gap between theory and practice we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic
in the number of experts and the inverse of the learning rate, our method performs
as well as if we would know the empirically best learning rate from a large range
that includes both conservative small values and values that are much higher than
those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the
grid.
1
Introduction
Consider a learner who in each round t = 1, 2, . . . specifies a probability distribution w_t on K
experts, before being told a vector ℓ_t ∈ [0, 1]^K with their losses and consequently incurring loss
h_t := w_t · ℓ_t. Losses are summed up over trials and after T rounds the learner's cumulative loss
H_T = Σ_{t=1}^T h_t is compared to the cumulative losses L_T^k = Σ_{t=1}^T ℓ_t^k of the experts k = 1, . . . , K.
This is essentially the framework of prediction with expert advice [1, 2], in particular the standard
Hedge setting [3]. Ideally, the learner's predictions would not be much worse than those of the best
expert, who has cumulative loss L*_T = min_k L_T^k, so that the regret R_T = H_T − L*_T is small.
Follow-the-Leader (FTL) is a natural strategy for the learner. In any round t, it predicts with a
point mass on the expert k with minimum loss L_{t−1}^k, i.e. the expert that was best on the previous
t − 1 rounds. However, in the standard game-theoretic analysis, the experts' losses are assumed
to be generated by an adversary, and then the regret for FTL can grow linearly in T [4], which
means that it is not learning. To do better, the predictions need to be less outspoken, which can
be accomplished by replacing FTL's choice of the expert with minimal cumulative loss by the soft
minimum w_t^k ∝ e^{−η L_{t−1}^k}, which is known as the exponential weights or Hedge algorithm [3]. Here
η > 0 is a regularisation parameter that is called the learning rate. As η → ∞ the soft minimum
approaches the exact minimum and exponential weights converges to FTL. In contrast, the lower η,
the more the soft minimum resembles a uniform distribution and the more conservative the learner.
Let R_T^η denote the regret for exponential weights with learning rate η. To obtain guarantees against
adversarial losses, several tunings of η have been proposed in the literature. Most of these may be
understood by starting with the bound

    R_T^η ≤ (ln K)/η + Σ_{t=1}^T δ_t^η,    (1)

which holds for any sequence of losses. Here δ_t^η ≥ 0 is the approximation error (called mixability
gap by [5]) when the loss of the learner in round t is approximated by the so-called mix loss, which
is a certain η-exp-concave lower bound (see Section 2.1). The analysis then proceeds by giving
an upper bound b_t(η) ≥ δ_t^η and choosing η to balance the two terms (ln K)/η and Σ_t b_t(η). In
particular, the bound δ_t^η ≤ η/8 results in the most conservative tuning η = √(8 ln(K)/T), for
which the regret is always bounded by O(√(T ln K)); the same guarantee can still be achieved
even if the horizon T is unknown in advance by using, for instance, the so-called doubling trick
[4]. It is possible though to learn more aggressively by using a bound on δ_t^η that depends on the
data. The first such improvement can be obtained by using δ_t^η ≤ e^η w_t · ℓ_t and choosing η =
ln(1 + √(2 ln(K)/L*_T)) ≈ √(2 ln(K)/L*_T), where again the doubling trick can be used if L*_T is
unknown in advance, which leads to a bound of O(√(L*_T ln K) + ln K) [6, 4]. Since L*_T ≤ T
this is never worse than the conservative tuning, and it can be better if the best expert has very
small losses (a case sometimes called the "low noise condition"). A further improvement has been
proposed by Cesa-Bianchi et al. [7], who bound δ_t^η by a constant times the variance v_t^η of ℓ_t^k when k
is distributed according to w_t, such that v_t^η = w_t · (ℓ_t − h_t)². Rather than using a constant learning
rate, at time t they play the Hedge weights w_t based on a time-varying learning rate η_t that is
approximately tuned as √(ln(K)/V_{t−1}) with V_t = Σ_{s≤t} v_s^{η_s}. This leads to a so-called second-order
bound on the regret of the form

    R_T = O(√(V_T ln K) + ln K),    (2)

which, as Cesa-Bianchi et al. show, implies

    R_T = O(√((L*_T(T − L*_T)/T) ln K) + ln K)    (3)

and is therefore always better than the tuning in terms of L*_T (note though that (2) can be much
stronger than (3) on data for which the exponential weights rapidly concentrate on a single expert,
see also [8]). The general pattern that emerges is that the better the bound on δ_t^η, the higher η
can be chosen and the more aggressive the learning. De Rooij et al. [5] take this approach to its
extreme and do not bound δ_t^η at all. In their AdaHedge algorithm they tune η_t = ln(K)/Δ_{t−1}
where Δ_t = Σ_{s≤t} δ_s^{η_s}, which is very similar to the second-order tuning of Cesa-Bianchi et al. and
indeed also satisfies (2) and (3). Thus, this sequence of prior works appears to have reached the
limit of what is possible based on improving the bound on δ_t^η. Unfortunately, however, if the data
are not adversarial, then even second-order bounds do not guarantee the best possible tuning of η
for the data at hand. (See the experiments that study the influence of η in [5].) In practice, selecting
η_t to be the best-performing learning rate so far (that is, running FTL at the meta-level) appears to
work well [9], but this approach requires a computationally intensive grid search over learning rates
[9] and formal guarantees can only be given for independent and identically distributed (IID) data
[10]. A new technique based on speculatively trying out different η was therefore introduced in the
FlipFlop algorithm [5]. By alternating learning rates η_t = ∞ and η_t that are very similar to those
of AdaHedge, FlipFlop is both able to satisfy the second-order bounds (2) and (3), and to guarantee
that its regret is never much worse than the regret R_T^∞ for Follow-the-Leader:

    R_T = O(R_T^∞).    (4)

Thus FlipFlop covers two extremes: on the one hand it is able to compete with η that are small
enough to deal with the worst case, and on the other hand it can compete with η = ∞ (FTL).
Main Contribution We generalise the FlipFlop approach to cover a large range of η in between.
As before, let R_T^η denote the regret of exponential weights with fixed learning rate η. We introduce
the learning the learning rate (LLR) algorithm, which satisfies (2), (3) and (4) and in addition
guarantees a regret satisfying

    R_T = O(ln(K) ln^{1+α}(1/η) · R_T^η)    for all η ∈ [η_{t*}^{ah}, 1]    (5)

for any α > 0. Thus, LLR performs almost as well as the learning rate η*_T ∈ [η_{t*}^{ah}, 1] that is
optimal with hindsight. Here the lower end-point η_{t*}^{ah} ≥ (1 − o(1))√(ln(K)/T) (as follows from
(28) below) is a data-dependent value that is sufficiently conservative (i.e. small) to provide second-order
guarantees and consequently worst-case optimality. The upper end-point 1 is an artefact of the
analysis, which we introduce because, for general losses in [0, 1]^K, we do not have a guarantee in
terms of R_T^η for 1 < η < ∞. For the special case of binary losses ℓ_t ∈ {0, 1}^K, however, we can
say a bit more: as shown in Appendix B of the supplementary material, in this special case the LLR
algorithm guarantees regret bounded by R_T = O(K R_T^η) for all η ∈ [1, ∞].

The additional factor ln(K) ln^{1+α}(1/η) in (5) comes from a prior on an exponentially spaced grid
of η. It is logarithmic in the number of experts K, and its dependence on 1/η grows slower than
ln^{1+α}(1/η) ≤ ln^{1+α}(1/η_{t*}^{ah}) = O(ln^{1+α}(T)) for any α > 0. For the optimally tuned η*_T, we have
in mind regret that grows like R_T^{η*_T} = O(T^γ) for some γ ∈ [0, 1/2], so an additional polylog factor
seems a small price to pay to adapt to the right exponent γ.

Although η ≥ η_{t*}^{ah} appear to be most important, the regret for LLR can also be related to R_T^η for
lower η:

    R_T = O((ln K)/η)    for all η < η_{t*}^{ah},    (6)

which is not in terms of R_T^η, but still improves on the standard bound (1) because δ_t^η ≥ 0 for all η.

The LLR algorithm takes two parameters, which determine the trade-off between constants in the
bounds (2)-(6) above. Normally we would propose to set these parameters to moderate values, but if
we do let them approach various limits, LLR becomes essentially the same as FlipFlop, AdaHedge
or FTL (see Section 2).
Computational Efficiency Although LLR employs a grid of η, it does not have to search over this
grid. Instead, in each time step it only has to do computations for the single η that is active, and, as
a consequence, it runs as fast as using exponential weights with a single fixed η, which is linear in
K and T. LLR, as presented here, does store information about all the grid points, which requires
O(ln(K) ln(T)) storage, but we describe a simple approximation that runs equally fast and only
requires a constant amount of storage.

We emphasise that we do not just have a bound on LLR that is unavailable for earlier methods;
there also exist actual losses for which the optimal learning rate with hindsight η*_T is fundamentally
in between the robust learning rates chosen by AdaHedge and the aggressive choice η = ∞ of FTL.
On such data, Hedge with fixed learning rate η*_T performs significantly better than both these
extremes; see Figure 1. In Appendix A we describe the data used to generate Figure 1 and explain
why the regret obtained by LLR is significantly smaller than the regret of AdaHedge, FTL and all
other tunings described above.

Figure 1: Example data (details in Appendix A) on which Hedge/exponential weights with an intermediate
learning rate (global minimum) performs much better than both the worst-case optimal
learning rate (local minimum on the left) and large learning rates (plateau on the right). We also show
the performance of the algorithms mentioned in the introduction. [Plot of regret against learning rate
η, with curves for Hedge(η), the worst-case bound and worst-case η, AdaHedge, FlipFlop, and LLR
with η_{t*}.]
Outline The paper is organized as follows. In Section 2 we define the LLR algorithm and in
Section 3 we make precise how it satisfies (2), (3), (4), (5) and (6). Section 4 provides a discussion.
Finally, the appendix contains a description of the data in Figure 1 and most of the proofs.
2
The Learning the Learning Rate Algorithm
In this section we describe the LLR algorithm, which is a particular strategy for choosing a time-varying learning rate in exponential weights. We start by formally describing the setting and then
explain how LLR chooses its learning rates.
2.1
The Hedge Setting
At the start of each round t = 1, 2, . . . the learner produces a probability distribution w_t =
(w_t^1, . . . , w_t^K) on K ≥ 2 experts. Then the experts incur losses ℓ_t = (ℓ_t^1, . . . , ℓ_t^K) ∈ [0, 1]^K and the
learner's loss h_t = w_t · ℓ_t = Σ_k w_t^k ℓ_t^k is the expected loss under w_t. After T rounds, the learner's
cumulative loss is H_T = Σ_{t=1}^T h_t and the cumulative losses for the experts are L_T^k = Σ_{t=1}^T ℓ_t^k. The
goal is to minimize the regret R_T = H_T − L*_T with respect to the cumulative loss L*_T = min_k L_T^k of
the best expert. We consider strategies for the learner that play the exponential weights distribution

    w_t^k = e^{−η_t L_{t−1}^k} / Σ_{j=1}^K e^{−η_t L_{t−1}^j}

for a choice of learning rate η_t that may depend on all losses before time t. To analyse such methods,
it is common to approximate the learner's loss h_t by the mix loss m_t = −(1/η_t) ln Σ_k w_t^k e^{−η_t ℓ_t^k}, which
appears under a variety of names in e.g. [7, 4, 11, 5]. The resulting approximation error or mixability
gap δ_t = h_t − m_t is always non-negative and cannot exceed 1. This, and some other basic properties
of the mix loss, are listed in Lemma 1 of De Rooij et al. [5], which we reproduce as Lemma C.1 in
the additional material.

As will be explained in the next section, LLR does not monitor the regrets of all learning rates
directly. Instead, it tracks their cumulative mixability gaps, which provide a convenient lower bound
on the regret that is monotonically increasing with the number of rounds T, in contrast to the regret
itself. To show this, let R_T^η denote the regret of the exponential weights strategy with fixed learning
rate η_t = η, and similarly let M_T^η = Σ_{t=1}^T m_t^η and Δ_T^η = Σ_{t=1}^T δ_t^η denote its cumulative mix loss
and mixability gap.

Lemma 2.1. For any fixed learning rate η ∈ (0, ∞], the regret of exponential weights satisfies

    R_T^η ≥ Δ_T^η.    (7)

Proof. Apply property 3 in Lemma C.1 to the regret decomposition R_T^η = M_T^η − L*_T + Δ_T^η.
We will use the following notational conventions. Lower-case letters indicate instantaneous quantities
like m_t, δ_t and w_t, whereas upper-case letters denote cumulative quantities like M_T, Δ_T and
R_T. In the absence of a superscript, the learning rates present in any such quantity are those chosen
by LLR. In contrast, the superscript η refers to using the same fixed learning rate η throughout.
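In code, the quantities of this section for one round and a finite learning rate η look as follows (a
minimal numpy sketch of ours; the log-sum-exp form keeps the mix loss numerically stable):

```python
import numpy as np

def logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def hedge_round(L_prev, losses, eta):
    # L_prev: cumulative expert losses L_{t-1}; losses: this round's vector l_t.
    logw = -eta * L_prev - logsumexp(-eta * L_prev)  # exponential weights w_t, in log space
    w = np.exp(logw)
    h = w @ losses                                   # learner's loss h_t
    m = -logsumexp(logw - eta * losses) / eta        # mix loss m_t
    return w, h, m, h - m                            # h - m is the mixability gap delta_t
```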
2.2
LLR?s Choice of Learning Rate
The LLR algorithm is a member of the exponential weights family of algorithms. Its defining property
is its adaptive and non-monotonic selection of the learning rate η_t, which is specified in Algorithm 1
and explained next. The LLR algorithm works in regimes in which it speculatively tries
out different strategies for η_t. Almost all of these strategies consist of choosing a fixed η from the
following grid:

    η^1 = ∞,    η^i = φ^{2−i} for i = 2, 3, . . . ,    (8)

where the exponential base

    φ = 1 + 1/log₂ K    (9)
Algorithm 1 LLR(π^{ah}, π^∞). The grid η^1, η^2, . . . and weights π^1, π^2, . . . are defined in (8) and (12).
  Initialise b₀ := 0; Δ₀^{ah} := 0; Δ₀^i := 0 for all i ≥ 1.
  for t = 1, 2, . . . do
    if all active indices and ah are b_{t−1}-full then
      Increase b_t := φ Δ_{t−1}^{ah}/π^{ah} (with φ as defined in (14))
    else
      Keep b_t := b_{t−1}
    end if
    Let i be the least non-b_t-full index.
    if i is active then
      Play η^i.
      Update Δ_t^i := Δ_{t−1}^i + δ_t^i. Keep Δ_t^j := Δ_{t−1}^j for j ≠ i and Δ_t^{ah} := Δ_{t−1}^{ah}.
    else
      Play η_t^{ah} as defined in (10).
      Update Δ_t^{ah} := Δ_{t−1}^{ah} + δ_t^{ah}. Keep Δ_t^j := Δ_{t−1}^j for all j ≥ 1.
    end if
  end for
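To make the control flow concrete, here is a small Python rendering of Algorithm 1's bookkeeping
(our own sketch, not the authors' reference code). The controller decides which strategy to play
next; the caller plays it, computes the realized mixability gap, and reports it back.

```python
class LLRController:
    # Bookkeeping of Algorithm 1 for a finite grid of learning rates with prior
    # weights pi_grid, an AdaHedge strategy with weight pi_ah, and budget
    # multiplier phi from (14).
    def __init__(self, pi_ah, pi_grid, phi):
        self.pi_ah, self.pi_grid, self.phi = pi_ah, list(pi_grid), phi
        self.D_ah, self.D = 0.0, [0.0] * len(pi_grid)
        self.b = 0.0  # current budget

    def _full(self, D, pi):
        return D / pi > self.b

    def choose(self, active):
        # active[i] is True when the i-th grid point exceeds eta_t^ah.
        if self._full(self.D_ah, self.pi_ah) and all(
                self._full(self.D[i], self.pi_grid[i])
                for i, a in enumerate(active) if a):
            self.b = self.phi * self.D_ah / self.pi_ah  # grow the budget
        for i, a in enumerate(active):
            if a and not self._full(self.D[i], self.pi_grid[i]):
                return i        # play this grid point
        return 'ah'             # otherwise play the AdaHedge rate (10)

    def record(self, who, delta):
        # Absorb the mixability gap realized by the strategy that was played.
        if who == 'ah':
            self.D_ah += delta
        else:
            self.D[who] += delta
```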
is chosen to ensure that the grid is dense enough so that η^i for i ≥ 2 is representative for all
η ∈ [η^{i+1}, η^i] (this is made precise in Lemma 3.3). We also include the special value η^1 = ∞,
because it corresponds to FTL, which works well for IID data and data with a small number of
leader changes, as discussed by De Rooij et al. [5].

For each index i = 1, 2, . . . in the grid, let A_t^i ⊆ {1, . . . , t} denote the set of rounds up to trial t in
which the LLR algorithm plays η^i. Then LLR keeps track of the performance of η^i by storing the
sum of mixability gaps δ_t^i ≜ δ_t^{η^i} for which η^i is responsible:

    Δ_t^i = Σ_{s ∈ A_t^i} δ_s^i.
In addition to the grid in (8), LLR considers one more strategy, which we will call the AdaHedge
strategy, because it is very similar to the learning rate chosen by the AdaHedge algorithm [5]. In the
AdaHedge strategy, LLR plays η_t equal to

    η_t^{ah} = (ln K)/Δ_{t−1}^{ah},    (10)

where Δ_t^{ah} = Σ_{s ∈ A_t^{ah}} δ_s^{ah} is the sum of mixability gaps δ_t^{ah} ≜ δ_t^{η_t^{ah}} during the rounds A_t^{ah} ⊆
{1, . . . , t} in which LLR plays the AdaHedge strategy. The only difference to the original AdaHedge
is that the latter sums the mixability gaps over all s ∈ {1, . . . , t}, not just those in A_t^{ah}. Note
that, in our variation, η_t^{ah} does not change during rounds outside A_t^{ah}.
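In code, the AdaHedge rate (10) is a single line; the convention below (our choice for this sketch)
is that the rate is infinite while no mixability gap has accumulated yet:

```python
import numpy as np

def eta_ah(K, Delta_ah):
    # Learning rate (10): ln(K) / Delta^{ah}_{t-1}, infinite while Delta is zero.
    return np.inf if Delta_ah == 0 else np.log(K) / Delta_ah
```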
The AdaHedge learning rate η_t^{ah} is non-increasing with t, and (as we will show in Theorem 3.6
below) it is small enough to guarantee the worst-case bound (3), which is optimal for adversarial
data. We therefore focus on η > η_t^{ah} and call an index i in the grid active in round t if η^i > η_t^{ah}.
Let i_max ≜ i_max(t) be the number of grid indices that are active at time t, such that η^{i_max(t)} ≥ η_t^{ah}.
Then LLR cyclically alternates grid learning rates and the AdaHedge learning rate, in a way that
approximately maintains

    Δ_t^1/π^1 ≥ Δ_t^2/π^2 ≥ · · · ≥ Δ_t^{i_max}/π^{i_max} ≥ Δ_t^{ah}/π^{ah}    for all t,    (11)

where π^{ah} > 0 and π^1, π^2, . . . > 0 are fixed weights that control the relative importance of AdaHedge
and the grid points (higher weight = more important). The LLR algorithm takes as parameters
π^{ah} and π^∞, where π^{ah} only has to be positive, but π^∞ is restricted to (0, 1). We then choose

    π^1 = π^∞,    π^i = (1 − π^∞) ρ(i − 1)    for i ≥ 2,    (12)

where ρ is a prior probability distribution on {1, 2, . . .}. It follows that Σ_{i=1}^∞ π^i = 1, so that π^i may
be interpreted as a prior probability mass on grid index i. For ρ, we require a distribution with very
heavy tails (meaning ρ(i) not much smaller than 1/i), and we fix the convenient choice

    ρ(i) = ∫_{(i−1)/ln K}^{i/ln K} dx / ((x + e) ln²(x + e)) = 1/ln((i − 1)/ln K + e) − 1/ln(i/ln K + e).    (13)
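The prior (13) and the weights (12) are cheap to compute; a small numpy sketch (the grid is
truncated to n_grid points here purely for illustration):

```python
import numpy as np

def rho(i, K):
    # Prior mass (13) on index i >= 1; the terms telescope, so rho sums to 1.
    lnK = np.log(K)
    return 1.0 / np.log((i - 1) / lnK + np.e) - 1.0 / np.log(i / lnK + np.e)

def pi_weights(n_grid, pi_inf, K):
    # Weights (12): pi^1 = pi_inf for the FTL point eta^1 = infinity, and
    # pi^i = (1 - pi_inf) * rho(i - 1) for the finite grid points i >= 2.
    return [pi_inf] + [(1 - pi_inf) * rho(i - 1, K) for i in range(2, n_grid + 1)]
```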
We cannot guarantee that the invariant (11) holds exactly, and our algorithm incurs overhead for
changing learning rates, so we do not want to change learning rates too often. LLR therefore uses
an exponentially increasing budget b and tries grid indices and the AdaHedge strategy in sequence
until they exhaust the budget. To make this precise, we say that an index i is b-full in round t if
Δ_{t−1}^i/π^i > b and similarly that AdaHedge is b-full in round t if Δ_{t−1}^{ah}/π^{ah} > b. Let b_t be the
budget at time t, which LLR chooses as follows: first it initialises b₀ = 0 and then, for t ≥ 1, it
tests whether all active indices and AdaHedge are b_{t−1}-full. If this is the case, LLR approximately
increases the budget by a factor φ > 1 by setting b_t = φΔ_{t−1}^{ah}/π^{ah} ≥ φ b_{t−1}, otherwise it just keeps
the budget the same: b_t = b_{t−1}. In particular, we will fix budget multiplier

    φ = 1 + √(π^{ah}),    (14)

which minimises the constants in our bounds. Now if, at time t, there exists an active index that is
not b_t-full, then LLR plays the first such index. And if all active indices are b_t-full, LLR plays the
AdaHedge strategy, which cannot be b_t-full in this case by definition of b_t. This guarantees that all
ratios Δ_T^i/π^i are approximately within a factor φ of each other for all i that are active at time t*,
which we define to be the last time t ≤ T that LLR plays AdaHedge:

    t* = max A_T^{ah}.    (15)

Whenever LLR plays AdaHedge it is possible, however, that a new index i becomes active and it
then takes a while for this index's cumulative mixability gap Δ_T^i to also grow up to the budget.
Since AdaHedge is not played while the new index is catching up, the ratio guarantee always still
holds for all indices that were active at time t*.
2.3
Choosing the LLR Parameters
LLR has several existing strategies as sub-cases. For π^{ah} → ∞ it essentially becomes AdaHedge.
For π^∞ → 1 it becomes FlipFlop. For π^∞ → 1 and π^{ah} → 0 it becomes FTL. Intermediate values
for π^{ah} and π^∞ retain the benefits of these algorithms, but in addition allow LLR to compete with
essentially all learning rates ranging from worst-case safe to extremely aggressive.
2.4
Run time and storage
LLR, as presented here, runs in constant time per round. This is because, in each round, it only
needs to compute the weights and update the corresponding cumulative mixability gap for a single
learning rate strategy. If the current strategy exceeds its budget (becomes b_t-full), LLR proceeds
to the next¹. The memory requirement is dominated by the storage of Δ_t^1, . . . , Δ_t^{i_max(t)}, which,
following the discussion below (5), is at most

    i_max(T) = 2 + ln(1/η^{i_max(T)})/ln φ ≤ 2 + log_φ(1/η_T^{ah}) = O(ln(K) ln(T)).

However, a minor approximation reduces the memory requirement down to a constant: At any point
in time the grid strategies considered by LLR split in three. Let us say that η^i is played at time t.
Then all preceding η^j for j ≤ i are already at (or slightly past) the budget. And all succeeding η^j
for i < j ≤ i_max are still at (or slightly past) the previous budget. So we can approximate their
cumulative mixability gaps by simply ignoring these slight overshoots. It then suffices to store only
the cumulative mixability gap for the currently advancing η^i, and the current and previous budget.

¹In the early stages it may happen that the next strategy is already over the budget and needs to be skipped,
but this start-up effect quickly disappears when the budget exceeds 1, as the weighted increment δ_t^i/π^i ≤
(η^i/8) log^{1+α}(1/η^i) is bounded for all 0 ≤ η ≤ 1.
3
Analysis of the LLR algorithm
In this section we analyse the regret of LLR. We first show that for each loss sequence the regret is
bounded in terms of the cumulative mixability gaps Δ_T^i and Δ_T^{ah} incurred by the active learning rates
(Lemma 3.1). As LLR keeps the cumulative mixability gaps approximately balanced according to
(11), we can then further bound the regret in terms of each of the individual learning rates in the grid
(Lemma 3.2). The next step is to deal with learning rates between grid points, by showing that their
cumulative mixability gap Δ_T^η relates to Δ_T^i for the nearest higher grid point η^i ≥ η (Lemma 3.3).
In Lemma 3.4 we put all these steps together. As the cumulative mixability gap Δ_T^η does not exceed
the regret R_T^η for fixed learning rates (Lemma 2.1), we can then derive the bounds (2) through (6)
from the introduction in Theorems 3.5 and 3.6.
We start by showing that the regret of LLR is bounded by the cumulative mixability gaps of the
learning rates that it plays. The proof, which appears in Section C.4, is a generalisation of Lemma 12
in [5]. It crucially uses the fact that the lowest learning rate played by LLR is the AdaHedge rate η_t^{ah},
which relates to Δ_t^{ah}.

Lemma 3.1. On any sequence of losses, the regret of the LLR algorithm with parameters π^{ah} > 0
and π^∞ ∈ (0, 1) is bounded by

    R_T ≤ (φ/(φ − 1) + 2) Δ_T^{ah} + Σ_{i=1}^{i_max} Δ_T^i,

where i_max is the largest i such that η^i is active in round T and φ is defined in (14).
The LLR budgeting scheme keeps the cumulative mixability gaps from Lemma 3.1 approximately
balanced according to (11). The next result, proved in Section C.5, makes this precise.

Lemma 3.2. Fix t* as in (15). Then for each index i that was active at time t* and arbitrary j ≠ i:

    Δ_T^j ≤ φ (π^j/π^i) Δ_T^i + (π^j/π^{ah}) Δ_T^{ah} + min{1, η^j/8},    (16a)
    Δ_T^j ≤ φ (π^j/π^{ah}) Δ_T^{ah} + min{1, η^j/8},    (16b)
    Δ_T^{ah} ≤ φ (π^{ah}/π^i) Δ_T^i + 1.    (16c)
LLR employs an exponentially spaced grid of learning rates that are evaluated using, and played
proportionally to, their cumulative mixability gaps. In the next step (which is restated and proved
as Lemma C.7 in the additional material) we show that the mixability gap of a learning rate between
grid points cannot be much smaller than that of its next higher grid neighbour. This establishes in
particular that an exponential grid is sufficiently fine.

Lemma 3.3. For φ ≥ 1 and for any sequence of losses with values in [0, 1]:

    δ_t^{η/φ} ≥ φ^{−1} e^{−(φ−1)(ln K + η)} δ_t^η.
The preceding results now allow us to bound the regret of LLR in terms of the cumulative mixability
gap of any fixed learning rate (which does not exceed its regret by Lemma 2.1) and in terms of the
cumulative mixability gap of AdaHedge (which we will use to establish worst-case optimality).

Lemma 3.4. Suppose the losses take values in [0, 1], let π^{ah} > 0 and π^∞ ∈ (0, 1) be the parameters
of the LLR algorithm, and abbreviate B = φ/(φ − 1) + 2√(π^{ah}) + φ. Then the regret of the LLR algorithm
is bounded by

    R_T ≤ B ( φ e^{(φ−1)(ln K + 1)} Δ_T^η/π^{i(η)} + η/(8(φ − 1)) + φ/π^{ah} + φ/(φ − 1) + 3 )

for all η ∈ [η_{t*}^{ah}, 1], where i(η) = 2 + ⌊log_φ(1/η)⌋ is the index of the nearest grid point above η, and
by

    R_T ≤ B ( Δ_T^∞/π^∞ + φ/(8(φ − 1)) + φ/π^{ah} + φ/(φ − 1) + 3 )

for η = ∞. In addition

    R_T ≤ B ( Δ_T^{ah}/π^{ah} + φ/(8(φ − 1)) ) + 1,

and for any η < η_{t*}^{ah}

    Δ_T^{ah} ≤ (ln K)/η + 1.
The proof appears in additional material Section C.6.
We are now ready for our main result, which is proved in Section C.7. It shows that LLR competes
with the regret of any learning rate above the worst-case safe rate and below 1, modulo a mild factor.
In addition, LLR also performs well on all data favoured by Follow-the-Leader.

Theorem 3.5. Suppose the losses take values in [0, 1], let π^{ah} > 0 and π^∞ ∈ (0, 1) be the
parameters of the LLR algorithm, and introduce the constants B = 1 + 2√(π^{ah}) + 3π^{ah} and
C_K = (log₂ K + 1)/8 + B/π^{ah} + 1. Then the regret of LLR is simultaneously bounded by

    R_T ≤ (4Be/(1 − π^∞)) (log₂ K + 1) ln(7/η) ln²(2 log₂(5/η)) R_T^η + C_K    for all η ∈ [η_{t*}^{ah}, 1],

where the factor in front of R_T^η is O(ln^{1+α}(1/η)) for any α > 0, and by

    R_T ≤ (B/π^∞) R_T^∞ + C_K    for η = ∞.

In addition

    R_T ≤ (B ln K)/(π^{ah} η) + C_K    for any η < η_{t*}^{ah}.
To interpret the theorem, we recall from the introduction that ln(1/η) is better than O(ln T) for all
η ≥ η_{t*}^{ah}.
We finally show that LLR is robust to the worst case. We do this by showing something much
stronger, namely that LLR guarantees a so-called second-order bound (a concept introduced in [7]).
The bound is phrased in terms of the cumulative variance V_T = Σ_{t=1}^T v_t, where v_t = Var_{k∼w_t}[ℓ_t^k]
is the variance of ℓ_t^k for k distributed according to w_t. See Section C.8 for the proof.

Theorem 3.6. Suppose the losses take values in [0, 1], let π^{ah} > 0 and π^∞ ∈ (0, 1) be the
parameters of the LLR algorithm, and introduce the constants B = φ/(φ − 1) + 2√(π^{ah}) + φ and
C_K = (log₂ K + 1)/8 + B/π^{ah} + 1. Then the regret of LLR is bounded by

    R_T ≤ (B/π^{ah}) √(V_T ln K) + C_K + (2B ln K)/(3π^{ah})

and consequently by

    R_T ≤ (B/π^{ah}) √((L*_T(T − L*_T)/T) ln K) + 2C_K + (2B ln K)/(3π^{ah}) + (B² ln K)/(π^{ah})².
4
Discussion
We have shown that our new LLR algorithm is able to recover the same second-order bounds as
previous methods, which guard against worst-case data by picking a small learning rate if necessary.
What LLR adds is that, at the cost of a (poly)logarithmic overhead factor, it is also able to learn a
range of higher learning rates η, which can potentially achieve much smaller regret (see Figure 1).
This is accomplished by covering this range with a grid of sufficient granularity. The overhead
factor depends on a prior on the grid, for which we have fixed a particular choice with a heavy tail.
However, the algorithm would also work with any other prior, so if it were known a priori that certain
values in the grid were of special importance, they could be given larger prior mass. Consequently,
a more advanced analysis demonstrating that only a subset of learning rates could potentially be
optimal (in the sense of minimizing the regret R_T^η) would directly lead to factors of improvement in
the algorithm. Thus we raise the open question: what is the smallest subset E of learning rates such
that, for any data, the minimum of the regret over this subset min_{η∈E} R_T^η is approximately the same
as the minimum min_η R_T^η over all or a large range of learning rates?
References
[1] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[2] V. Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 56(2):153-173, 1998.
[3] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119-139, 1997.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[5] S. de Rooij, T. van Erven, P. D. Grünwald, and W. M. Koolen. Follow the leader if you can, Hedge if you must. Journal of Machine Learning Research, 15:1281-1316, 2014.
[6] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48-75, 2002.
[7] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2/3):321-352, 2007.
[8] T. van Erven, P. Grünwald, W. M. Koolen, and S. de Rooij. Adaptive hedge. In Advances in Neural Information Processing Systems 24 (NIPS), 2011.
[9] M. Devaine, P. Gaillard, Y. Goude, and G. Stoltz. Forecasting electricity consumption by aggregating specialized experts; a review of the sequential aggregation of specialized experts, with an application to Slovakian and French country-wide one-day-ahead (half-)hourly predictions. Machine Learning, 90(2):231-260, 2013.
[10] P. Grünwald. The safe Bayesian: learning the learning rate via the mixability gap. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory (ALT). Springer, 2012.
[11] V. Vovk. Competitive on-line statistics. International Statistical Review, 69:213-248, 2001.
[12] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
4,685 | 5,242 | Delay-Tolerant Algorithms for
Asynchronous Distributed Online Learning
Matthew Streeter
Duolingo, Inc.*
Pittsburgh, PA
matt@duolingo.com
H. Brendan McMahan
Google, Inc.
Seattle, WA
mcmahan@google.com
Abstract
We analyze new online gradient descent algorithms for distributed systems with
large delays between gradient computations and the corresponding updates. Using insights from adaptive gradient methods, we develop algorithms that adapt not
only to the sequence of gradients, but also to the precise update delays that occur.
We first give an impractical algorithm that achieves a regret bound that precisely
quantifies the impact of the delays. We then analyze AdaptiveRevision, an
algorithm that is efficiently implementable and achieves comparable guarantees.
The key algorithmic technique is appropriately and efficiently revising the learning rate used for previous gradient steps. Experimental results show when the
delays grow large (1000 updates or more), our new algorithms perform significantly better than standard adaptive gradient methods.
1
Introduction
Stochastic and online gradient descent methods have proved to be extremely useful for solving large-scale
machine learning problems [1, 2, 3, 4]. Recently, there has been much work on extending these
algorithms to parallel and distributed systems [5, 6, 7, 8, 9]. In particular, Recht et al. [10] and Duchi
et al. [11] have shown that standard stochastic algorithms essentially "work" even when updates are
applied asynchronously by many threads. Our experiments confirm this for moderate amounts of
parallelism (say 100 threads), but show that for large amounts of parallelism (as in a distributed
system, with say 1000 threads spread over many machines), performance can degrade significantly.
To address this, we develop new algorithms that adapt to both the data and the amount of parallelism.
Adaptive gradient (AdaGrad) methods [12, 13] have proved remarkably effective for real-world
problems, particularly on sparse data (for example, text classification with bag-of-words features).
The key idea behind these algorithms is to prove a general regret bound in terms of an arbitrary sequence of non-increasing learning rates and the full sequence of gradients, and then to
define an adaptive method for choosing the learning rates as a function of the gradients seen so
far, so as to minimize the final bound when the learning rates are plugged in. We extend this
idea to the parallel setting, by developing a general regret bound that depends on both the gradients and the exact update delays that occur (rather than say an upper bound on delays). We then
present AdaptiveRevision, an algorithm for choosing learning rates and efficiently revising
past learning-rate choices that strives to minimize this bound. In addition to providing an adaptive
regret bound (which recovers the standard AdaGrad bound in the case of no delays), we demonstrate
excellent empirical performance.
Problem Setting and Notation We consider a computation model where one or more computation
units (a thread in a parallel implementation or a full machine in a distributed system) store and
*Work performed while at Google, Inc.
update the model x ∈ R^n, and another larger set of computation units perform feature extraction
and prediction. We call the first type the Updaters (since they apply the gradient updates) and
the second type the Readers (since they read coefficients stored by the Updaters). Because
the Readers and Updaters may reside on different machines, perhaps located in different parts
of the world, communication between them is not instantaneous. Thus, when making a prediction,
a Reader will generally be using a coefficient vector that is somewhat stale relative to the most
recent version being served by the Updaters.
As one application of this model, consider the problem of predicting click-through rates for sponsored search ads using a generalized linear model [14, 15]. While the coefficient vector may be
stored and updated centrally, predictions must be available in milliseconds in any part of the world.
This leads naturally to an architecture in which a large number of Readers maintain local copies
of the coefficient vector, sending updates to the Updaters and periodically requesting fresh coefficients from them. As another application, this model encompasses the Parameter Server/ Model
Replica split of Downpour SGD [16].
Our bounds apply to general online convex optimization [4], which encompasses the problem of
predicting with a generalized linear model (models where the prediction is a function of a_t · x_t, where a_t is a feature vector and x_t are model coefficients). We analyze the algorithm on a sequence of τ = 1, ..., T rounds; for the moment, we index rounds based on when each prediction is made. On each round, a convex loss function f_τ arrives at a Reader, the Reader predicts with x_τ ∈ R^n and incurs loss f_τ(x_τ). The Reader then computes a subgradient g_τ ∈ ∂f_τ(x_τ). For each coordinate i where g_{τ,i} is nonzero, the Reader sends an update to the Updater(s) for those coefficients. We are particularly concerned with sparse data, where n is very large, say 10^6 to 10^9, but any particular training example has only a small fraction of the features a_{t,i} that take non-zero values.
The regret against a comparator x* ∈ R^n is

    \mathrm{Regret}(x^*) \equiv \sum_{\tau=1}^T f_\tau(x_\tau) - f_\tau(x^*).    (1)
Our primary theoretical contributions are upper bounds on the regret of our algorithms.
We assume a fully asynchronous model, where the delays in the read requests and update requests can be different for different coefficients even for the same training event. This leads to a combinatorial explosion in potential interleavings of these operations, making fine-grained adaptive analysis quite difficult. Our primary technique for addressing this will be the linearization of loss functions, a standard tool in online convex optimization which takes on increased importance in the parallel setting. An immediate consequence of convexity is that given a general convex loss function f_τ, with g_τ ∈ ∂f_τ(x_τ), for any x*, we have f_τ(x_τ) − f_τ(x*) ≤ g_τ · (x_τ − x*). One of the key observations of Zinkevich [1] is that by plugging this inequality into (1), we see that if we can guarantee low regret against linear functions, we can provide the same guarantees against arbitrary convex functions. Further, expanding the dot products and re-arranging the sum, we can write
    \mathrm{Regret}(x^*) \le \sum_{i=1}^n \mathrm{Regret}_i(x_i^*)
    \quad\text{where}\quad
    \mathrm{Regret}_i(x_i^*) = \sum_{\tau=1}^T g_{\tau,i}\,(x_{\tau,i} - x_i^*).    (2)
If we consider algorithms where the updates are also coordinate decomposable (that is, the update
to coordinate i can be applied independently of the update of coordinate j), then we can bound
Regret(x*) by proving a per-coordinate bound for linear functions and then summing across coordinates. In fact, our computation architecture already assumes a coordinate decomposable algorithm
since this lets us avoid synchronizing the Updates, and so in addition to leading to more efficient
algorithms, this approach will greatly simplify the analysis. The proofs of Duchi et al. [11] take a
similar approach.
Bounding per-coordinate regret   Given the above, we will design and analyze asynchronous one-dimensional algorithms which can be run independently on each coordinate of the true learning problem. For each coordinate, each Read and Update is assumed to be an atomic operation.

It will be critical to adopt an indexing scheme different than the prediction-based indexing τ used above. The net result will be bounding the sum of (2), but we will actually re-order the sum to make the analysis easier. Critically, this ordering could be different for different coordinates, and so considering one coordinate at a time simplifies the analysis considerably.¹ We index time by the order of the Updates, so the index t is such that g_t is the gradient associated with the t-th update applied and x_t is the value of the coefficient immediately before the update for g_t is applied. Then,
the Online Gradient Descent (OGD) update consists of exactly the assumed-atomic operation

    x_{t+1} = x_t - \eta_t g_t,    (3)

where η_t is a learning rate. Let r(t) ∈ {1, . . . , t} be the index such that x_{r(t)} was the value of the coefficient used by the Reader to compute g_t (and to predict on the corresponding example). That is, update r(t) − 1 completed before the Read for g_t, but update r(t) completed after. Thus, our loss (for coordinate i) is g_t x_{r(t)}, and we desire a bound on

    \mathrm{Regret}_i(x^*) = \sum_{t=1}^T g_t\,(x_{r(t)} - x^*).
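To make this indexing concrete, the following is a minimal one-dimensional simulation of the delayed-update model (our own sketch, not from the paper). It assumes linear losses, so each g_t is a fixed number as in the linearization above; the read map r and the learning rates are hypothetical inputs, and indices are 0-based.

```python
import numpy as np

def delayed_ogd_loss(g, r, eta):
    """Linear loss when the t-th applied update was computed at the stale point x[r[t]].

    g[t]: gradient of the t-th applied update; r[t] <= t: index read before
    update t was computed; eta[t]: learning rate. x[t] is the coefficient
    just before update t is applied (so x[0] = 0).
    """
    T = len(g)
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = x[t] - eta[t] * g[t]           # the OGD update (3)
    return sum(g[t] * x[r[t]] for t in range(T))  # loss incurred at the stale iterates
```

Regret against a comparator x* is then `delayed_ogd_loss(g, r, eta) - sum(g) * x_star`, matching the per-coordinate definition above.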
Main result and related work   We say an update s is outstanding at time t if the Read for Update s occurs before update t, but the Update occurs after: precisely, s is outstanding at t if r(s) ≤ t < s. We let F_t ≡ {s | r(s) ≤ t < s} be the set of updates outstanding at time t. We call the sum of these gradients the forward gradient sum, g_t^{fwd} ≡ \sum_{s \in F_t} g_s. Then, ignoring constant factors and terms independent of T, we show that AdaptiveRevision has a per-coordinate bound of the form

    \mathrm{Regret} \le \sqrt{\sum_{t=1}^T g_t^2 + g_t\, g_t^{fwd}}.    (4)
Theorem 3 gives the precise result as well as the n-dimensional version. Observe that without any delays, g_t^{fwd} = 0, and we arrive at the standard AdaGrad-style bound. To prove the bound for AdaptiveRevision, we require an additional InOrder assumption on the delays, namely that for any indices s_1 and s_2, if r(s_1) < r(s_2) then s_1 < s_2. This assumption should be approximately satisfied most of the time for realistic delay distributions, and even under a more pathological delay distribution (delays uniform on {0, . . . , m} rather than more tightly grouped around a mean delay), our experiments show excellent performance for AdaptiveRevision.
The key challenge is that unlike in the AdaGrad case, conceptually we need to know gradients that
have not yet been computed in order to calculate the optimal learning rate. We surmount this by
using an algorithm that not only chooses learning rates adaptively, but also revises previous gradient
steps. Critically, these revisions require only moderate additional storage and network cost: we store
a sum of gradients along with each coefficient, and for each Read, we remember the value of this
gradient sum at the time of the Read until the corresponding Update occurs. This latter storage
can essentially be implemented on the network, if the gradient sum is sent from the Updater to the
Reader and back again, ensuring it is available exactly when needed. This is the approach taken
in the pseudocode of Algorithm 1.
Against a true adversary and a maximum delay of m, in general we cannot do better than just training synchronously on a single machine using a 1/m fraction of the data. Our results surmount this issue by producing strongly data-dependent bounds: we do not expect fully adversarial gradients and delays in practice, and so on real data the bound we prove still gives interesting results. In fact, we can essentially recover the guarantees for AsyncAdaGrad from Duchi et al. [11], which rely on stochastic assumptions on the sparsity of the data, by applying the same assumptions to our bound. To simplify the comparison, WLOG we consider a 1-dimensional problem where ‖x*‖₂ = 1, ‖g_t‖₂ ≤ 1, and we have the stochastic assumption that each g_t is exactly 0 independently with probability p (implying M_j = 1, M = 1, and M_2 = p in their notation). Then, simple calculations (given in Appendix B) show our bound for AdaptiveRevision implies a bound on expected regret of O(√((1 + mp)pT)) without knowledge of p or m, ignoring terms independent of T.² AsyncAdaGrad achieves the same bound, but critically this requires knowledge of both p and m in advance in order to tune the learning rate appropriately (in the general n-dimensional case, this would mean knowing not just one parameter p, but a separate sparsity parameter p_j for each coordinate, and then using an appropriate per-coordinate scaling of the learning rate depending on this); without such knowledge, AsyncAdaGrad only obtains the much worse bound O((1 + mp)√(pT)).

¹Our analysis could be extended to non-coordinate-decomposable algorithms, but then the full gradient update across all coordinates would need to be atomic. This case is less interesting due to the computational overhead.

²In the analysis, we choose the parameter G_0 based on an upper bound m on the delay, but this only impacts an additive term independent of T.
AdaptiveRevision will also provide significantly better guarantees if most of the delays are much less than the maximum, or if the data is only approximately sparse (e.g., many g_t = 10^{-6} rather than exactly 0). The above analysis also makes a worst-case assumption on the g_t g_t^{fwd} terms, but in practice many gradients in g_t^{fwd} are likely to have opposite signs and cancel out, a fact our algorithm and bounds can exploit.
2 Algorithms and Analysis
We first introduce some additional definitions. Let o(t) ≡ max F_t ∪ {t}, the index of the highest update outstanding at time t, or t itself if nothing is outstanding. The sets F_t fully specify the delay pattern. In light of (4), we further define G_t^{fwd} ≡ g_t^2 + 2 g_t g_t^{fwd}. We also define B_t, the set of updates applied while update t was outstanding. Under our notation, this set is easily defined as B_t = {r(t), . . . , t − 1} (or the empty set if r(t) = t, so in particular B_1 = ∅). We will also frequently use the backward gradient sum, g_t^{bck} ≡ \sum_{s=r(t)}^{t-1} g_s. These vectors most often appear in the products G_t^{bck} ≡ g_t^2 + 2 g_t g_t^{bck}. Figure 3 in Appendix A shows a variety of delay patterns and gives a visual representation of the sums G^{fwd} and G^{bck}. We say the delay is (upper) bounded by m if t − r(t) ≤ m for all t, which implies |F_t| ≤ m and |B_t| ≤ m. Note that if m = 0 then r(t) = t. We use the compressed summation notation c_{1:t} ≡ \sum_{s=1}^t c_s for vectors, scalars, and functions.
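As a concrete sketch of this bookkeeping (our own illustration, 0-based indices, hypothetical read map r), the snippet below builds F_t, B_t and the two gradient sums, and checks the re-ordering identity s ∈ B_t ⟺ t ∈ F_s that the proof of Lemma 1 relies on.

```python
import numpy as np

def delay_sums(g, r):
    """F[t], B[t] and the forward/backward gradient sums for a read map r[t] <= t."""
    T = len(g)
    F = [[s for s in range(T) if r[s] <= t < s] for t in range(T)]
    B = [list(range(r[t], t)) for t in range(T)]
    g_fwd = np.array([sum(g[s] for s in F[t]) for t in range(T)])
    g_bck = np.array([sum(g[s] for s in B[t]) for t in range(T)])
    return F, B, g_fwd, g_bck

rng = np.random.default_rng(0)
T = 50
g = rng.normal(size=T)
r = np.array([max(0, t - int(rng.integers(0, 5))) for t in range(T)])
eta = 1.0 / np.sqrt(np.arange(1, T + 1))
F, B, g_fwd, g_bck = delay_sums(g, r)

# s is in B_t exactly when t is in F_s, so the double sum can be re-ordered:
lhs = sum(g[t] * sum(eta[s] * g[s] for s in B[t]) for t in range(T))
rhs = sum(eta[s] * g[s] * g_fwd[s] for s in range(T))
assert np.isclose(lhs, rhs)
```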
Our analysis builds on the following simple but fundamental result (Appendix C contains all proofs
and lemmas omitted here).
Lemma 1. Given any non-increasing learning-rate schedule η_t, define σ_t where σ_1 = 1/η_1 and σ_t = 1/η_t − 1/η_{t−1} for t > 1, so η_t = 1/σ_{1:t}. Then, for any delay schedule, unprojected online gradient descent achieves, for any x* ∈ R,

    \mathrm{Regret}(x^*) \le \frac{(2R_T)^2}{2\eta_T} + \frac{1}{2}\sum_{t=1}^T \eta_t G_t^{fwd}
    \quad\text{where}\quad
    (2R_T)^2 \equiv \sum_{t=1}^T \frac{\sigma_t}{\sigma_{1:T}}\,|x^* - x_t|^2.
Proof. Given how we have indexed time, we can consider the regret of a hypothetical online gradient descent algorithm that plays x_t and then observes g_t, since this corresponds exactly to the update (3). We can then bound regret for this hypothetical setting using a simple modification to the standard bound for OGD [1],

    \sum_{t=1}^T g_t \cdot x_t - g_{1:T} \cdot x^* \le \sum_{t=1}^T \frac{\sigma_t}{2}\,|x^* - x_t|^2 + \frac{1}{2}\sum_{t=1}^T \eta_t g_t^2.
The actual algorithm used x_{r(t)} to predict on g_t, not x_t, so we can bound its Regret by

    \mathrm{Regret} \le \frac{(2R_T)^2}{2\eta_T} + \frac{1}{2}\sum_{t=1}^T \eta_t g_t^2 + \sum_{t=1}^T g_t\,(x_{r(t)} - x_t).    (5)
Recalling x_{t+1} = x_t − η_t g_t, observe that x_{r(t)} − x_t = \sum_{s=r(t)}^{t-1} \eta_s g_s = \sum_{s \in B_t} \eta_s g_s, and so

    \sum_{t=1}^T g_t\,(x_{r(t)} - x_t)
    = \sum_{t=1}^T g_t \sum_{s \in B_t} \eta_s g_s
    = \sum_{s=1}^T \eta_s g_s \sum_{t \in F_s} g_t
    = \sum_{s=1}^T \eta_s g_s\, g_s^{fwd},

using Lemma 4(E) from the Appendix to re-order the sum. Plugging into (5) completes the proof.
For projected online gradient descent, by projecting onto a feasible set of radius R and assuming x* is in this set, we immediately get |x* − x_t| ≤ 2R. Without projecting, we get a more adaptive bound which depends on the weighted quadratic mean 2R_T. Though less standard, we choose to analyze the unprojected variant of the algorithm for two reasons. First, our analysis rests heavily on the ability to represent points played by our algorithms exactly as weighted sums of past gradients, a property not preserved when projection is invoked. More importantly, we know of no experiments on real-world prediction problems (where any x ∈ R^n is a valid model) where the projected algorithm actually performs better. In our experience, once the learning-rate schedule is tuned appropriately, the resulting R_T values will not be more than a constant factor of ‖x*‖. This makes intuitive sense in the stochastic case, where it is known that averages of the x_t should in fact converge to x*.³

For learning-rate tuning we assume we know in advance a constant R̄ such that R_T ≤ R̄; again, in practice this is roughly equivalent to assuming we know ‖x*‖ in advance in order to choose the feasible set.
Our first algorithm, HypFwd (for Hypothetical-Forward), assumes it has knowledge of all the gradients, so it can optimize its learning rates to minimize the above bound. If there are no delays, that is, g_t^{fwd} = 0 for all t, then this immediately gives rise to a standard AdaGrad-style online gradient descent method. If there are delays, the G_t^{fwd} terms could be large, implying the optimal learning rates should be smaller. Unfortunately, it is impossible for a real algorithm to know g_t^{fwd} when η_t is chosen. To work toward a practical algorithm, we introduce HypBack, which achieves similar guarantees (but is still impractical). Finally, we introduce AdaptiveRevision, which plays points very similar to HypBack, but can be implemented efficiently. Since we will need non-increasing learning rates, it will be useful to define Ḡ_{1:t}^{bck} ≡ max_{s≤t} G_{1:s}^{bck} and Ḡ_{1:t}^{fwd} ≡ max_{s≤t} G_{1:s}^{fwd}. In practice, we expect Ḡ_{1:T}^{bck} to be close to G_{1:T}^{bck}. We assume WLOG that G_1^{fwd} > 0, which at worst adds a negligible additive constant to our regret.
Algorithm HypFwd   This algorithm "cheats" by using the forward sum g_t^{fwd} to choose η_t,

    \eta_t = \frac{\alpha}{\sqrt{\bar G_{1:t}^{fwd}}}    (6)

for an appropriate scaling parameter α > 0. Then, Lemma 1 combined with the technical inequality of Corollary 10 (given in Appendix D) gives

    \mathrm{Regret} \le 2\sqrt{2}\,\bar R\,\sqrt{\bar G_{1:T}^{fwd}}    (7)

when we take α = √2 R̄ (recalling R̄ ≥ R_T). If there are no delays, this bound reduces to the standard bound 2√2 R̄ √(\sum_{t=1}^T g_t²). With delays, however, this is a hypothetical algorithm, because it is generally not possible to know g_t^{fwd} when update t is applied. However, we can implement this algorithm efficiently in a single-machine simulation, and it performs very well (see Section 3). Thus, our goal is to find an efficiently implementable algorithm that achieves comparable results in practice and also matches this regret bound.
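In a single-machine simulation the HypFwd rates (6) are indeed easy to compute offline, since the whole gradient sequence and delay pattern are then known. A sketch (our own, assuming G_1^{fwd} > 0 as above so the square roots are well defined):

```python
import numpy as np

def hypfwd_rates(g, r, alpha):
    """Offline HypFwd learning rates eta_t = alpha / sqrt(G-bar^fwd_{1:t})."""
    T = len(g)
    g_fwd = np.array([sum(g[s] for s in range(T) if r[s] <= t < s)
                      for t in range(T)])
    G_fwd = g ** 2 + 2 * g * g_fwd                    # per-step terms G_t^fwd
    G_bar = np.maximum.accumulate(np.cumsum(G_fwd))   # running max keeps eta non-increasing
    return alpha / np.sqrt(G_bar)
```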
Algorithm HypBack   The next step in the analysis is to show that a second hypothetical algorithm, HypBack, approximates the regret bound of (7). This algorithm plays

    \hat x_{t+1} = -\sum_{s=1}^t \hat\eta_s g_s
    \quad\text{where}\quad
    \hat\eta_t = \frac{\alpha}{\sqrt{\bar G_{1:o(t)}^{bck} + G_0}}    (8)

is a learning rate with parameters α and G_0. This is a hypothetical algorithm, since we also can't (efficiently) know Ḡ_{1:o(t)}^{bck} on round t. We prove the following guarantee:
Lemma 2. Suppose delays are bounded by m and |g_t| ≤ L. Then when the InOrder property holds, HypBack with α = √2 R̄ and G_0 = m²L² has

    \mathrm{Regret} \le 2\sqrt{2}\,\bar R\,\sqrt{\bar G_{1:T}^{fwd}} + 2\bar R m L.

³For example, the arguments of Nemirovski et al. [17, Sec 2.2] hold for unprojected gradient descent.
Algorithm 1  Algorithm AdaptiveRevision

Procedure Read(loss function f):
    Read (x_i, ḡ_i) from the Updaters for all necessary coordinates
    Calculate a subgradient g ∈ ∂f(x)
    for each coordinate i with a non-zero gradient do
        Send an update tuple (g ← g_i, ḡ_old ← ḡ_i) to the Updater for coordinate i

Procedure Update(g, ḡ_old):   The Updater initializes state (ḡ ← 0, z ← 1, z′ ← 1, x ← 0) per coordinate.
    Do the following atomically:
        g^bck ← ḡ − ḡ_old                          (for analysis, assign index t to the current update)
        η_old ← α/√z′                              (invariant: effective η for all of g^bck)
        z ← z + g² + 2g·g^bck;  z′ ← max(z, z′)    (maintain z = G^bck_{1:t} and z′ = Ḡ^bck_{1:t}, to enforce non-increasing η)
        η ← α/√z′                                  (new learning rate)
        x ← x − ηg                                 (the main gradient-descent update)
        x ← x + (η_old − η)g^bck                   (apply adaptive revision of some previous steps)
        ḡ ← ḡ + g                                  (maintain ḡ = g_{1:t})
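A minimal per-coordinate transcription of Algorithm 1 in Python (our own sketch; the class layout is ours, and the initialization z = z′ = 1 follows the text above):

```python
import math

class AdaptiveRevisionCoord:
    """State kept by the Updater for a single coordinate (Algorithm 1)."""

    def __init__(self, alpha):
        self.alpha = alpha
        self.x = 0.0        # coefficient
        self.g_sum = 0.0    # g-bar = g_{1:t}
        self.z = 1.0        # z  = G^bck_{1:t} (initialized to 1 as in the text)
        self.z_max = 1.0    # z' = running max of z; enforces non-increasing eta

    def read(self):
        # a Reader fetches the coefficient together with the current gradient sum
        return self.x, self.g_sum

    def update(self, g, g_sum_old):
        # g_sum_old is the gradient sum the Reader saw at Read time
        g_bck = self.g_sum - g_sum_old
        eta_old = self.alpha / math.sqrt(self.z_max)
        self.z += g * g + 2.0 * g * g_bck
        self.z_max = max(self.z, self.z_max)
        eta = self.alpha / math.sqrt(self.z_max)
        self.x -= eta * g                      # main gradient step
        self.x += (eta_old - eta) * g_bck      # revise the overlapping past steps
        self.g_sum += g
```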
Algorithm AdaptiveRevision   Now that we have shown that HypBack is effective, we can describe AdaptiveRevision, which efficiently approximates HypBack. We then analyze this new algorithm by showing its loss is close to the loss of HypBack. Pseudo-code for the algorithm as implemented for the experiments is given in Algorithm 1; we now give an equivalent expression for the algorithm under the InOrder assumption. Let η_t be the learning rate based on Ḡ_{1:t}^{bck}, that is, η_t = α/√(Ḡ_{1:t}^{bck} + G_0). Then, AdaptiveRevision plays the points

    x_{t+1} = -\sum_{s=1}^t \eta_s^t g_s
    \quad\text{where}\quad
    \eta_s^t = \eta_{\min(t,\, o(s))}.    (9)

When s ≪ t then we will usually have min(t, o(s)) = o(s), and so we see that η_s^t = η_{o(s)} = η̂_s, and so the effective learning rate applied to gradient g_s is the same one HypBack would have used (namely η̂_s); thus, the only difference between AdaptiveRevision and HypBack is on the leading edge, where o(s) > t. See Figure 4 in Appendix A for an example. When InOrder holds, Lemma 6 (in Appendix C) shows Algorithm 1 plays the points specified by (9).
Given Lemma 2, it is sufficient to show that the difference between the loss of HypBack and the loss of AdaptiveRevision is small. Lemma 8 (in the appendix) accomplishes this, showing that under the InOrder assumption and with G_0 = m²L² the difference in loss is at most 2αLm (a quantity independent of T). Our main theorem is then a direct consequence of Lemma 2 and Lemma 8:

Theorem 3. Under an InOrder delay pattern with a maximum delay of at most m, the AdaptiveRevision algorithm guarantees Regret ≤ 2√2 R̄ √(Ḡ_{1:T}^{fwd}) + (2√2 + 2)R̄mL when we take G_0 = m²L² and α = √2 R̄. Applied on a per-coordinate basis to an n-dimensional problem, we have

    \mathrm{Regret} \le 2\sqrt{2}\,\bar R \sum_{i=1}^n \sqrt{\sum_{t=1}^T \Big( g_{t,i}^2 + 2\, g_{t,i} \sum_{s \in F_{t,i}} g_{s,i} \Big)} + n(2\sqrt{2} + 2)\,\bar R m L.
We note the n-dimensional guarantee is at most O(n R̄ L √(Tm)), which matches the lower bound for the feasible set [−R, R]^n and g_t ∈ [−L, L]^n up to the difference between R̄ and R (see, for example, Langford et al. [18]).⁴ Our point, of course, is that for real data our bound will often be much, much better.
⁴To compare to regret bounds stated in terms of L₂ bounds on the feasible set and the gradients, note for g_t ∈ [−L, L]^n we have ‖g_t‖₂ ≤ √n L, and similarly for x ∈ [−R, R]^n we have ‖x‖₂ ≤ √n R, so the dependence on n is a necessary consequence of using these norms, which are quite natural for sparse problems.
Figure 1: Accuracy as a function of update delays, with learning rate scale factors optimized for each
algorithm and dataset for the zero delay case. The x-axis is non-linear. The results are qualitatively
similar across the plots, but note the differences in the y-axis ranges. In particular, the random delay
pattern appears to hurt performance significantly less than either the minibatch or constant delay
patterns.
Figure 2: Accuracy as a function of update delays, with learning rate scale factors optimized as
a function of the delay. The lower plot in each group shows the best learning rate scale ? on a
log-scale.
3 Experiments
We study the performance of both hypothetical algorithms and AdaptiveRevision on two real-world medium-sized datasets. We simulate the update delays using an update queue, which allows us to implement the hypothetical algorithms and also lets us precisely control both the exact delays as well as the delay pattern. We compare to the dual-averaging AsyncAdaGrad algorithm of Duchi et al. [11] (AsyncAda-DA in the figures), as well as asynchronous AdaGrad gradient descent (AsyncAda-GD), which can be thought of as AdaptiveRevision with all g^bck set to zero and no revision step. As analyzed, AdaptiveRevision stores an extra variable (z′) in order to enforce a non-increasing learning rate. In practice, we found this had a negligible impact; in the plots above, AdaptiveRevision* denotes the algorithm without this check. With this improvement AdaptiveRevision stores three numbers per coefficient, versus the two stored by AsyncAdaGrad DA or GD.
We consider three different delay patterns, which we parameterize by D, the average delay; this yields a fairer comparison across the delay patterns than using the maximum delay m. We consider: 1) constant delays, where all updates (except at the beginning and the end of the dataset) have a delay of exactly D (e.g., rows (B) and (C) in Figure 3 in the Appendix); 2) a minibatch delay pattern⁵, where 2D + 1 Reads occur, followed by 2D + 1 Updates; and 3) a random delay pattern, where the delays are chosen uniformly from the set {0, . . . , 2D}, so again the mean delay is D. The first two patterns satisfy InOrder, but the third does not; simple generators for all three are sketched after the footnote below.
⁵It is straightforward to show that under this delay pattern, when we do not enforce non-increasing learning rates, AdaptiveRevision and HypBack are in fact equivalent to standard AdaGrad run on the minibatches (that is, with one update per minibatch using the combined minibatch gradient sum).
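The three patterns as read maps r[t] (our own sketch, 0-based, with boundary handling chosen for simplicity):

```python
import numpy as np

def constant_delays(T, D):
    # every update reads the state from D updates ago (clipped at the start)
    return np.maximum(0, np.arange(T) - D)

def minibatch_delays(T, D):
    # 2D+1 Reads happen back-to-back, then their 2D+1 Updates are applied
    m = 2 * D + 1
    return (np.arange(T) // m) * m

def random_delays(T, D, seed=0):
    # uniform delays on {0, ..., 2D}: mean delay D, but InOrder can fail
    d = np.random.default_rng(seed).integers(0, 2 * D + 1, size=T)
    return np.maximum(0, np.arange(T) - d)
```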
We evaluate on two datasets. The first is a web search advertising dataset from a large search engine. The dataset consists of about 3.1 × 10⁶ training examples with a large number of sparse anonymized features based on the ad and query text. Each example is labeled {−1, 1} based on whether or not the person doing the query clicked on the ad. The second is a shuffled version of the malicious URL dataset as described by Ma et al. [19] (2.4 × 10⁶ examples, 3.2 × 10⁶ features).⁶ For each of these datasets we trained a logistic regression model, and evaluated using the logistic loss (LogLoss). That is, for an example with feature vector a ∈ R^n and label y ∈ {−1, 1}, the loss is given by ℓ(x, (a, y)) = log(1 + exp(−y a · x)). Following the spirit of our regret bounds, we evaluate the models online, making a single pass over the data and computing accuracy metrics on the predictions made by the model immediately before it trained on each example (i.e., progressive validation). To avoid possible transient behavior, we only report metrics for the predictions on the second half of each dataset, though this choice does not change the results significantly.
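Progressive validation is a simple one-pass loop; a minimal sketch (ours), assuming a hypothetical model object exposing coefficients() and train() methods:

```python
import numpy as np

def logloss(x, a, y):
    # logistic loss for label y in {-1, +1} and linear prediction a . x
    return float(np.log1p(np.exp(-y * np.dot(a, x))))

def progressive_validation(model, examples):
    losses = []
    for a, y in examples:
        losses.append(logloss(model.coefficients(), a, y))  # score first, ...
        model.train(a, y)                                   # ... then train
    half = len(losses) // 2
    return float(np.mean(losses[half:]))  # report on the second half only
```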
The exact parametrization of the learning rate schedule is particularly important with delayed updates. We follow the common practice of taking learning rates of the form η_t = α/√(S_t + 1), where S_t is the appropriate learning rate statistic for the given algorithm, e.g., Ḡ_{1:o(t)}^{bck} for HypBack or \sum_{s=1}^t g_s^2 for vanilla AdaGrad. In the analysis, we use G_0 = m²L² rather than G_0 = 1; we believe G_0 = 1 will generally be a better choice in practice, though we did not optimize this choice.⁷ When we optimize α, we choose the best setting from a grid {α_0 (1.25)^i | i ∈ N}, where α_0 is an initial guess for each dataset.
All figures give the average delay D on the x-axis. For Figure 1, for each dataset and algorithm, we optimized α in the zero delay (D = m = 0) case, and fixed this parameter as the average delay D increases. This leads to very bad performance for standard AdaGrad DA and GD as D gets large. In Figure 2, we optimized α individually for each delay level; we plot the accuracy as before, with the lower plot showing the optimal learning rate scaling α on a log-scale. The optimal learning rate scaling for GD and DA decreases by two orders of magnitude as the delays increase. However, even with this tuning they do not obtain the performance of AdaptiveRevision. The performance of AdaptiveRevision (and HypBack and HypFwd) is slightly improved by lowering the learning rate as delays increase, but the effect is comparatively very minor. As anticipated, the performance curves for AdaptiveRevision, HypBack, and HypFwd are closely grouped.
AdaptiveRevision's delay tolerance can lead to enormous speedups in practice. For example,
the leftmost plot of Figure 2 shows that AdaptiveRevision achieves better accuracy with an
update delay of 10,000 than AsyncAda-DA achieves with a delay of 1000. Because update delays
are proportional to the number of Readers, this means that AdaptiveRevision can be used to
train a model an order of magnitude faster than AsyncAda-DA, with no reduction in accuracy. This
allows for much faster iteration when data sets are large and parallelism is cheap, which is the case
in important real-world problems such as ad click-through rate prediction [14].
4 Conclusions and Future Work
We have demonstrated that adaptive tuning and revision of per-coordinate learning rates for distributed gradient descent can significantly improve accuracy as the update delays become large.
The key algorithmic technique is maintaining a sum of gradients, which allows the adjustment of
all learning rates for gradient updates that occurred between the current Update and its Read.
The analysis method is novel, but is also somewhat indirect; an interesting open question is finding a general analysis framework for algorithms of this style. Ideally such an analysis would
also remove the technical need for the InOrder assumption, and also allow for the analysis of
AdaptiveRevision variants of OGD with Projection and Dual Averaging.
⁶We also ran experiments on the rcv1.binary training dataset (0.6 × 10⁶ examples, 0.05 × 10⁶ features) from Chang and Lin [20]; results were qualitatively very similar to those for the URL dataset.

⁷The main purpose of choosing a larger G_0 in the theorems was to make the performance of HypBack and AdaptiveRevision provably close to that of HypFwd, even in the worst case. On real data, the performance of the algorithms will typically be close even with G_0 = 1.
References
[1] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
[2] Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML, 2004.
[3] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, 2008.
[4] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 2012.
[5] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. J. Mach. Learn. Res., 13(1), January 2012.
[6] Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. arXiv:1212.0873 [math.OC], 2012. URL http://arxiv.org/abs/1212.0873.
[7] Martin Takáč, Avleen Bijral, Peter Richtárik, and Nati Srebro. Mini-batch primal and dual methods for SVMs. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[8] Daniel Hsu, Nikos Karampatziakis, John Langford, and Alexander J. Smola. Scaling Up Machine Learning, chapter Parallel Online Learning. Cambridge University Press, 2011.
[9] John C. Duchi, Alekh Agarwal, and Martin J. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Trans. Automat. Contr., 57(3):592–606, 2012.
[10] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[11] John C. Duchi, Michael I. Jordan, and H. Brendan McMahan. Estimation, optimization, and parallelism when data is sparse. In NIPS, 2013.
[12] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. In COLT, 2010.
[13] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In COLT, 2010.
[14] H. Brendan McMahan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, and Jeremy Kubica. Ad click prediction: a view from the trenches. In KDD, 2013.
[15] Thore Graepel, Joaquin Quiñonero Candela, Thomas Borchert, and Ralf Herbrich. Web-scale Bayesian click-through rate prediction for sponsored search advertising in Microsoft's Bing search engine. In ICML, 2010.
[16] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012.
[17] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19(4):1574–1609, January 2009. ISSN 1052-6234. doi: 10.1137/070704277.
[18] John Langford, Alex Smola, and Martin Zinkevich. Slow learners are fast. In Advances in Neural Information Processing Systems 22, 2009.
[19] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Identifying suspicious URLs: an application of large-scale online learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, 2009.
[20] Chih-Chung Chang and Chih-Jen Lin. LIBSVM data sets. http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, 2010.
[21] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 2002.
Efficient Minimax Strategies for Square Loss Games
Wouter M. Koolen
Queensland University of Technology and UC Berkeley
wouter.koolen@qut.edu.au
Alan Malek
University of California, Berkeley
malek@eecs.berkeley.edu
Peter L. Bartlett
University of California, Berkeley and Queensland University of Technology
peter@berkeley.edu
Abstract
We consider online prediction problems where the loss between the prediction and
the outcome is measured by the squared Euclidean distance and its generalization,
the squared Mahalanobis distance. We derive the minimax solutions for the case
where the prediction and action spaces are the simplex (this setup is sometimes
called the Brier game) and the ℓ₂ ball (this setup is related to Gaussian density
estimation). We show that in both cases the value of each sub-game is a quadratic
function of a simple statistic of the state, with coefficients that can be efficiently
computed using an explicit recurrence relation. The resulting deterministic minimax strategy and randomized maximin strategy are linear functions of the statistic.
1 Introduction
We are interested in general strategies for sequential prediction and decision making (a.k.a. online
learning) that improve their performance with experience. Since the early days of online learning,
people have formalized such learning tasks as regret games. The learner interacts with an adversarial environment with the goal of performing almost as well as the best strategy from some fixed
reference set. In many cases, we have efficient algorithms with an upper bound on the regret that
meets the game-theoretic lower bound (up to a small constant factor). In a few special cases, we
have the exact minimax strategy, meaning that we understand the learning problem at all levels of
detail. In even fewer cases we can also efficiently execute the minimax strategy. These cases serve
as exemplars to guide our thinking about learning algorithms.
In this paper we add two interesting examples to the canon of efficiently computable minimax strategies. Our setup, as described in Figure 1, is as follows. The Learner and the Adversary play vectors a ∈ A and x ∈ X, upon which the Learner is penalized using the squared Euclidean distance ‖a − x‖² or its generalization, the squared Mahalanobis distance,

    \|a - x\|_W^2 = (a - x)^\top W^{-1} (a - x),

parametrized by a symmetric matrix W ≻ 0. After a sequence of T such interactions, we compare the loss of the Learner to the loss of the best fixed prediction a* ∈ A. In all our examples, this best fixed action in hindsight is the mean outcome a* = (1/T) \sum_{t=1}^T x_t, regardless of W. We use regret, the difference between the loss of the learner and the loss of a*, to evaluate performance. The minimax regret for the T-round game, also known as the value of the game, is given by

    V := \inf_{a_1} \sup_{x_1} \cdots \inf_{a_T} \sup_{x_T}\; \sum_{t=1}^T \tfrac{1}{2}\|a_t - x_t\|_W^2 - \inf_a \sum_{t=1}^T \tfrac{1}{2}\|a - x_t\|_W^2    (1)

where the a_t range over actions A and the x_t range over outcomes X. The minimax strategy chooses the a_t, given all past outcomes x_1, . . . , x_{t−1}, to achieve this regret. Intuitively, the minimax regret is the regret if both players play optimally while assuming the other player is doing the same.
Our first example is the Brier game, where the action and outcome spaces are the probability simplex with K outcomes. The Brier game is traditionally popular in meteorology [Bri50].

Our second example is the ball game, where the action and outcome spaces are the Euclidean norm ball, i.e. A = X = {x ∈ R^K | ‖x‖₂ ≤ 1}. (Even though we measure loss by the squared Mahalanobis distance, we play on the standard Euclidean norm ball.) The ball game is related to Gaussian density estimation [TW00].

In each case we exhibit a strategy that can play a T-round game in O(TK²) time. (The algorithm spends O(TK + K³) time pre-processing the game, and then plays in O(K²) time per round.)

Figure 1: Protocol.
    Given: T, W, A, X.
    For t = 1, 2, . . . , T
        • Learner chooses prediction a_t ∈ A
        • Adversary chooses outcome x_t ∈ X
        • Learner incurs loss ½‖a_t − x_t‖²_W.
2 Outline
We define our loss using the squared Mahalanobis distance, parametrized by a symmetric matrix W ≻ 0. We recover the squared Euclidean distance by choosing W = I. Our games will always last T rounds. For some observed data x_1, . . . , x_n, the value-to-go for the remaining T − n rounds is given by

    V(x_1, \ldots, x_n) := \inf_{a_{n+1}} \sup_{x_{n+1}} \cdots \inf_{a_T} \sup_{x_T}\; \sum_{t=n+1}^T \tfrac{1}{2}\|a_t - x_t\|_W^2 - \inf_a \sum_{t=1}^T \tfrac{1}{2}\|a - x_t\|_W^2.
By definition, the minimax regret (1) is V = V(ε) where ε is the empty sequence, and the value-to-go satisfies the recurrence

    V(x_1, \ldots, x_n) =
    \begin{cases}
      -\inf_a \sum_{t=1}^T \tfrac{1}{2}\|a - x_t\|_W^2 & \text{if } n = T, \\
      \inf_{a_{n+1}} \sup_{x_{n+1}} \tfrac{1}{2}\|a_{n+1} - x_{n+1}\|_W^2 + V(x_1, \ldots, x_{n+1}) & \text{if } n < T.
    \end{cases}    (2)
Our analysis for the two games proceeds in a similar manner. For some past history of plays (x_1, . . . , x_n) of length n, we summarize the state by s = \sum_{t=1}^n x_t and σ² = \sum_{t=1}^n x_t^⊤ W^{-1} x_t. As we will see, the value-to-go after n of T rounds can be written as V(s, σ², n); i.e. it only depends on the past plays through s and σ². More surprisingly, for each n, the value-to-go V(s, σ², n) is a quadratic function of s and a linear function of σ² (under certain conditions on W). While it is straightforward to see that the terminal value V(s, σ², T) is quadratic in the state (this is easily checked by computing the loss of the best expert and using the first case of Equation (2)), it is not at all obvious that propagating from V(s + x, σ² + x^⊤W^{-1}x, n + 1) to V(s, σ², n), using the second case of (2), preserves this structure.
This compact representation of the value-function is an essential ingredient for a computationally
feasible algorithm. Many minimax approaches, such as normalized maximum likelihood [Sht87],
have computational complexities that scale exponentially with the time horizon. We derive a strategy
that can play in constant amortized time.
Why is this interesting? We go beyond previous work in a few directions. First, we exhibit two new
games that belong to the tiny class admitting computationally feasible minimax algorithms. Second,
we consider the setting with squared Mahalanobis loss which allows the user intricate control over
the penalization of different prediction errors. Our results clearly show how the learner should
exploit this prioritization.
2.1 Related work
Repeated games with minimax strategies are frequently studied ([CBL06]) and, in online learning,
minimax analysis has been applied to a variety of losses and repeated games; however, computationally feasible algorithms are the exception, not the rule. For example, consider log loss, first discussed in [Sht87]. While the minimax algorithm, Normalized Maximum Likelihood, is well known [CBL06], it generally requires computation that is exponential in the time horizon as one needs to aggregate over all data sequences. To our knowledge, there are two exceptions where efficient NML forecasters are possible: the multinomial case where fast Fourier transforms may be exploited [KM05], and very particular exponential families that cause NML to be a Bayesian strategy [HB12], [BGH+13]. The minimax optimal strategy is known also for: (i) the ball game with W = I [TW00] (our generalization to Mahalanobis W ≠ I results in fundamentally different strategies), (ii) the ball game with W = I and a constraint on the player's deviation from the current empirical minimizer [ABRT08] (for which the optimal strategy is Follow-the-Leader), (iii) Lipschitz-bounded convex loss functions [ABRT08], (iv) experts with an L_∞ bound [AWY08], and (v) static experts with absolute loss [CBS11]. While not guaranteed to be an exhaustive list, the previous paragraph demonstrates the rarity of tractable minimax algorithms.
3 The Offline Problem
The regret is defined as the difference between the loss of the algorithm and the loss of the best
action in hindsight. Here we calculate that action and its loss.
Lemma 3.1. Suppose A ⊇ conv(X) (this will always hold in the settings we study). For data x_1, . . . , x_T ∈ X, the loss of the best action in hindsight equals

    \inf_{a \in A} \sum_{t=1}^T \tfrac{1}{2}\|a - x_t\|_W^2
    = \frac{1}{2}\left( \sum_{t=1}^T x_t^\top W^{-1} x_t - \frac{1}{T}\Big(\sum_{t=1}^T x_t\Big)^{\!\top} W^{-1} \Big(\sum_{t=1}^T x_t\Big) \right),    (3)

and the minimizer is the mean outcome a* = (1/T) \sum_{t=1}^T x_t.

Proof. The unconstrained minimizer and value are obtained by equating the derivative to zero and plugging in the solution. The assumption A ⊇ conv(X) ensures that the constraint a ∈ A is inactive.
The best action in hindsight is curiously independent of W, A and X. This also shows that the follow-the-leader strategy that plays a_t = \frac{1}{t-1}\sum_{s=1}^{t-1} x_s is independent of W and A as well. As we shall see, the minimax strategy does not have this property.
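A quick numeric check of Lemma 3.1 (our own sketch, with random data and a random positive-definite W):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 20, 4
X = rng.normal(size=(T, K))
M = rng.normal(size=(K, K))
W = M @ M.T + np.eye(K)          # symmetric positive definite
Winv = np.linalg.inv(W)

a_star = X.mean(axis=0)          # best action in hindsight
direct = 0.5 * sum((x - a_star) @ Winv @ (x - a_star) for x in X)
s = X.sum(axis=0)
closed = 0.5 * (sum(x @ Winv @ x for x in X) - (s @ Winv @ s) / T)  # eq. (3)
assert np.isclose(direct, closed)
```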
4 Simplex (Brier) Game
In this section we analyze the Brier game. The action and outcome spaces are the probability simplex on K outcomes; A = X = △ := {x ∈ R^K_+ | 𝟙^⊤x = 1}. The loss is given by half the squared Mahalanobis distance, ½‖a − x‖²_W. We present a full minimax analysis of the T-round game: we calculate the game value, derive the maximin and minimax strategies, and discuss their efficient implementation.
The structure of this section is as follows. In Lemmas 4.1 and 4.2, the conclusions (value and
optimizers) are obtained under the proviso that the given optimizer lies in the simplex. In our main
result, Theorem 4.3, we apply these auxiliary results to our minimax analysis and argue that the
maximizer indeed lies in the simplex. We immediately work from a general symmetric W ≻ 0 with
the following lemma.
Lemma 4.1. Fix a symmetric matrix C ≻ 0 and vector d. The optimization problem

    \max_{p \in \triangle}\; -\tfrac{1}{2} p^\top C^{-1} p + d^\top p

has value

    \tfrac{1}{2} d^\top C d - \frac{(\mathbf{1}^\top C d - 1)^2}{2\,\mathbf{1}^\top C \mathbf{1}}
    = \tfrac{1}{2} d^\top \Big(C - \frac{C \mathbf{1}\mathbf{1}^\top C}{\mathbf{1}^\top C \mathbf{1}}\Big) d + \frac{2\,\mathbf{1}^\top C d - 1}{2\,\mathbf{1}^\top C \mathbf{1}},

attained at optimizer

    p^* = C\Big(d - \frac{\mathbf{1}^\top C d - 1}{\mathbf{1}^\top C \mathbf{1}}\,\mathbf{1}\Big)
        = \Big(C - \frac{C \mathbf{1}\mathbf{1}^\top C}{\mathbf{1}^\top C \mathbf{1}}\Big) d + \frac{C \mathbf{1}}{\mathbf{1}^\top C \mathbf{1}},

provided that p^* is in the simplex.

Proof. We solve for the optimal p^*. Introducing Lagrange multiplier λ for the constraint \sum_k p_k = 1, we need to have p = C(d − λ𝟙), which results in λ = (𝟙^⊤Cd − 1)/(𝟙^⊤C𝟙). Thus, the maximizer equals

    p^* = C\Big(d - \frac{\mathbf{1}^\top C d - 1}{\mathbf{1}^\top C \mathbf{1}}\,\mathbf{1}\Big),

which produces objective value

    \tfrac{1}{2}\Big(d - \frac{\mathbf{1}^\top C d - 1}{\mathbf{1}^\top C \mathbf{1}}\,\mathbf{1}\Big)^{\!\top} C \Big(d + \frac{\mathbf{1}^\top C d - 1}{\mathbf{1}^\top C \mathbf{1}}\,\mathbf{1}\Big).

The statement follows from simplification.
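A numeric sanity check of this closed form (our own sketch; when p* leaves the simplex the lemma makes no claim, so the optimality test is guarded):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
M = rng.normal(size=(K, K))
C = M @ M.T + np.eye(K)                  # symmetric positive definite
d = rng.normal(size=K)
one = np.ones(K)
Cinv = np.linalg.inv(C)

f = lambda p: -0.5 * p @ Cinv @ p + d @ p
lam = (one @ C @ d - 1) / (one @ C @ one)
p_star = C @ (d - lam * one)
value = 0.5 * d @ C @ d - (one @ C @ d - 1) ** 2 / (2 * one @ C @ one)

assert np.isclose(p_star.sum(), 1.0)     # p* always sums to one
assert np.isclose(f(p_star), value)
if np.all(p_star >= 0):                  # optimal over the simplex only then
    samples = rng.dirichlet(np.ones(K), size=2000)
    assert all(f(p) <= value + 1e-9 for p in samples)
```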
This lemma allows us to compute the value and saddle point whenever the future payoff is quadratic.

Lemma 4.2. Fix symmetric matrices W ≻ 0 and A such that W^{-1} + A ≻ 0, and a vector b. The optimization problem

    \min_{a \in \triangle} \max_{x \in \triangle}\; \tfrac{1}{2}\|a - x\|_W^2 + \tfrac{1}{2} x^\top A x + b^\top x

achieves its value

    \tfrac{1}{2} c^\top W c - \tfrac{1}{2}\,\frac{(\mathbf{1}^\top W c - 1)^2}{\mathbf{1}^\top W \mathbf{1}}
    \quad\text{where}\quad
    c = \tfrac{1}{2}\operatorname{diag}\!\big(W^{-1} + A\big) + b

at saddle point (the maximin strategy randomizes, playing x = e_i with probability p_i^*)

    a^* = p^* = \Big(W - \frac{W \mathbf{1}\mathbf{1}^\top W}{\mathbf{1}^\top W \mathbf{1}}\Big) c + \frac{W \mathbf{1}}{\mathbf{1}^\top W \mathbf{1}},

provided p^* ⪰ 0.
Proof. The objective is convex in x for each a as W^{-1} + A ≻ 0, so it is maximized at a corner x = e_k. We apply min-max swap (see e.g. [Sio58]), properness of the loss (which implies that a^* = p^*) and expand:

    \min_{a \in \triangle} \max_{x \in \triangle}\; \tfrac{1}{2}\|a - x\|_W^2 + \tfrac{1}{2} x^\top A x + b^\top x
    = \min_{a \in \triangle} \max_k\; \tfrac{1}{2}\|a - e_k\|_W^2 + \tfrac{1}{2} e_k^\top A e_k + b^\top e_k
    = \max_{p \in \triangle} \min_{a \in \triangle}\; \mathbb{E}_{k \sim p}\Big[ \tfrac{1}{2}\|a - e_k\|_W^2 + \tfrac{1}{2} e_k^\top A e_k + b^\top e_k \Big]
    = \max_{p \in \triangle}\; \mathbb{E}_{k \sim p}\Big[ \tfrac{1}{2}\|p - e_k\|_W^2 + \tfrac{1}{2} e_k^\top A e_k + b^\top e_k \Big]
    = \max_{p \in \triangle}\; -\tfrac{1}{2} p^\top W^{-1} p + \tfrac{1}{2}\operatorname{diag}\!\big(W^{-1} + A\big)^{\!\top} p + b^\top p.

The proof is completed by applying Lemma 4.1.
4.1 Minimax Analysis of the Brier Game
Next, we turn to computing V(s, σ², n) as a recursion and specifying the minimax and maximin strategies. However, for the value-to-go function to retain its quadratic form, we need an alignment condition on W. We say that W is aligned with the simplex if

    \Big(W - \frac{W \mathbf{1}\mathbf{1}^\top W}{\mathbf{1}^\top W \mathbf{1}}\Big) \operatorname{diag}(W^{-1}) \;\succeq\; -\,2\,\frac{W \mathbf{1}}{\mathbf{1}^\top W \mathbf{1}},    (4)
where ⪰ denotes an entry-wise inequality between vectors. Note that many matrices besides I satisfy this condition: for example, all symmetric 2 × 2 matrices. A direct entry-wise check of (4) is sketched below; we can now fully specify the value and strategies for the Brier game.
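A small helper for checking condition (4) (our own sketch, assuming the sign reconstruction of (4) above):

```python
import numpy as np

def aligned_with_simplex(W):
    """Entry-wise check of the alignment condition (4)."""
    K = W.shape[0]
    one = np.ones(K)
    Wone = W @ one
    q = one @ Wone                       # 1' W 1
    d = np.diag(np.linalg.inv(W))        # the vector diag(W^{-1})
    lhs = (W - np.outer(Wone, Wone) / q) @ d
    return bool(np.all(lhs >= -2.0 * Wone / q))

assert aligned_with_simplex(np.eye(3))   # W = I satisfies (4)
```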
Theorem 4.3. Consider the T-round Brier game with Mahalanobis loss ½‖a − x‖²_W with W satisfying the alignment condition (4). After n outcomes (x_1, . . . , x_n) with statistics s = \sum_{t=1}^n x_t and σ² = \sum_{t=1}^n x_t^⊤W^{-1}x_t, the value-to-go is

    V(s, \sigma^2, n) = \tfrac{1}{2}\beta_n\, s^\top W^{-1} s - \tfrac{1}{2}\sigma^2 + \tfrac{1}{2}(1 - n\beta_n)\operatorname{diag}(W^{-1})^\top s + \gamma_n,

and the minimax and maximin strategies are given by

    a^*(s, \sigma^2, n) = p^*(s, \sigma^2, n)
    = \frac{W\mathbf{1}}{\mathbf{1}^\top W \mathbf{1}}
    + \beta_{n+1}\Big(s - \frac{n\,W\mathbf{1}}{\mathbf{1}^\top W \mathbf{1}}\Big)
    + \tfrac{1}{2}(1 - n\beta_{n+1})\Big(W - \frac{W\mathbf{1}\mathbf{1}^\top W}{\mathbf{1}^\top W \mathbf{1}}\Big)\operatorname{diag}(W^{-1}),

where the coefficients are defined recursively by

    \beta_T = \tfrac{1}{T}, \qquad \gamma_T = 0, \qquad \beta_n = \beta_{n+1}^2 + \beta_{n+1},

    \gamma_n = \frac{(1 - n\beta_{n+1})^2}{2}\left( \tfrac{1}{4}\operatorname{diag}(W^{-1})^\top W \operatorname{diag}(W^{-1}) - \frac{\big(\tfrac{1}{2}\mathbf{1}^\top W \operatorname{diag}(W^{-1}) - 1\big)^2}{\mathbf{1}^\top W \mathbf{1}} \right) + \gamma_{n+1}.
Proof. We prove this by induction, beginning at the end of the game and working backwards in time. Assume that V(s, σ², T) has the given form. Recall that the value at the end of the game is V(s, σ², T) = −inf_a \sum_{t=1}^T ½‖a − x_t‖²_W and is given by Lemma 3.1. Matching coefficients, we find V(s, σ², T) corresponds to β_T = 1/T and γ_T = 0.

Now assume that V has the assumed form after n rounds. Using s and σ² to denote the state after n − 1 rounds, we can write

    V(s, \sigma^2, n-1) = \min_{a \in \triangle} \max_{x \in \triangle}\; \tfrac{1}{2}\|a - x\|_W^2 + \tfrac{1}{2}\beta_n (s + x)^\top W^{-1} (s + x) - \tfrac{1}{2}\big(\sigma^2 + x^\top W^{-1} x\big) + \tfrac{1}{2}(1 - n\beta_n)\operatorname{diag}(W^{-1})^\top (s + x) + \gamma_n.

Using Lemma 4.2 to evaluate the right hand side produces a quadratic function in the state, and we can then match terms to find β_{n−1} and γ_{n−1} and the minimax and maximin strategy. The final step is checking the p^* ⪰ 0 condition necessary to apply Lemma 4.2, which is equivalent to W being aligned with the simplex. See the appendix for a complete proof.
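Putting Theorem 4.3 to work, here is a sketch of the resulting minimax player (our own code, not from the paper): the β_n are precomputed in O(T), and after an O(K³) setup each prediction costs O(K²), matching the complexity claimed in the introduction.

```python
import numpy as np

def brier_betas(T):
    beta = np.zeros(T + 1)             # beta[n] for n = 1..T, with beta[T] = 1/T
    beta[T] = 1.0 / T
    for n in range(T - 1, 0, -1):
        beta[n] = beta[n + 1] ** 2 + beta[n + 1]
    return beta

def brier_minimax_prediction(W, beta, s, n):
    """Minimax play a*(s, n) after n outcomes with statistic s = sum of outcomes."""
    K = W.shape[0]
    one = np.ones(K)
    Wone = W @ one
    q = one @ Wone
    base = Wone / q                    # the uniform-prior play W1 / (1' W 1)
    M = W - np.outer(Wone, Wone) / q
    d = np.diag(np.linalg.inv(W))      # could be cached once per game
    b = beta[n + 1]
    return base + b * (s - n * base) + 0.5 * (1 - n * b) * (M @ d)
```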
This full characterization of the game allows us to derive the following minimax regret bound.

Theorem 4.4. Let W satisfy the alignment condition (4). The minimax regret of the T-round simplex game satisfies

    V \le \frac{1 + \ln(T)}{2}\left( \tfrac{1}{4}\operatorname{diag}(W^{-1})^\top W \operatorname{diag}(W^{-1}) - \frac{\big(\tfrac{1}{2}\mathbf{1}^\top W \operatorname{diag}(W^{-1}) - 1\big)^2}{\mathbf{1}^\top W \mathbf{1}} \right).
Proof. The regret is equal to the value of the game, V = V(0, 0, 0) = γ_0. First observe that

    (1 - n\beta_{n+1})^2 = 1 - 2n\beta_{n+1} + n^2\beta_{n+1}^2
    = 1 - 2n\beta_{n+1} + n^2(\beta_n - \beta_{n+1})
    = \beta_{n+1} + 1 - (n+1)^2\beta_{n+1} + n^2\beta_n.

After summing over n the last two terms telescope, and we find

    \gamma_0 \propto \sum_{n=0}^{T-1} (1 - n\beta_{n+1})^2 = -T^2\beta_T + \sum_{n=0}^{T-1} (1 + \beta_{n+1}) = \sum_{n=1}^{T} \beta_n.

Each β_n can be bounded by 1/n, as observed in [TW00, proof of Lemma 2]. In the base case n = T this holds with equality, and for n < T we have

    \beta_n = \beta_{n+1}^2 + \beta_{n+1} \le \frac{1}{(n+1)^2} + \frac{1}{n+1} = \frac{n(n+2)}{n(n+1)^2} \le \frac{1}{n}.

It follows that γ_0 ∝ \sum_{n=1}^T \beta_n \le \sum_{n=1}^T \frac{1}{n} \le 1 + \ln(T) as desired.
5 Norm Ball Game
This section parallels the previous. Here, we consider the online game with Mahalanobis loss and A = X = 𝔹 := {x ∈ R^K | ‖x‖₂ ≤ 1}, the 2-norm Euclidean ball (not the Mahalanobis ball). We show that the value-to-go function is always quadratic in s and linear in σ², and derive the minimax and maximin strategies.
Lemma 5.1. Fix a symmetric matrix A and vector b and assume A + W^{-1} ≻ 0. Let λ_max be the largest eigenvalue of W^{-1} + A and v_max the corresponding eigenvector. If b^⊤(λ_max I − A)^{-2} b ≤ 1, then the optimization problem

    \inf_{a \in \mathbb{B}} \sup_{x \in \mathbb{B}}\; \tfrac{1}{2}\|a - x\|_W^2 + \tfrac{1}{2} x^\top A x + x^\top b

has value ½b^⊤(λ_max I − A)^{-1}b + ½λ_max, minimax strategy a* = (λ_max I − A)^{-1}b, and a randomized maximin strategy that plays two unit length vectors, with

    \Pr\Big(x = a_\perp \pm \sqrt{1 - a_\perp^\top a_\perp}\; v_{\max}\Big) = \frac{1}{2} \pm \frac{1}{2}\sqrt{\frac{a_\parallel^\top a_\parallel}{1 - a_\perp^\top a_\perp}},

where a_⊥ and a_∥ are the components of a* perpendicular and parallel to v_max.
Proof. As the objective is convex, the inner optimum must be on the boundary and hence will be at a unit vector x. Introduce a Lagrange multiplier λ for x^⊤x ≤ 1 to get the Lagrangian

    \inf_{a \in \mathbb{B}} \inf_{\lambda \ge 0} \sup_{x}\; \tfrac{1}{2}\|a - x\|_W^2 + \tfrac{1}{2} x^\top A x + x^\top b + \tfrac{1}{2}\lambda\,(1 - x^\top x).

This is concave in x if W^{-1} + A − λI ⪯ 0, that is, λ_max ≤ λ. Differentiating yields the optimizer x* = (W^{-1} + A − λI)^{-1}(W^{-1}a − b), which leaves us with an optimization in only a and λ:

    \inf_{a \in \mathbb{B}} \inf_{\lambda \ge \lambda_{\max}}\; \tfrac{1}{2} a^\top W^{-1} a - \tfrac{1}{2}(W^{-1}a - b)^\top (W^{-1} + A - \lambda I)^{-1} (W^{-1}a - b) + \tfrac{1}{2}\lambda.

Since the infimums are over closed sets, we can exchange their order. Unconstrained optimization of a results in a* = (λI − A)^{-1}b. Evaluating the objective at a* and using W^{-1}a* − b = W^{-1}(λI − A)^{-1}b − b = (W^{-1} + A − λI)(λI − A)^{-1}b results in

    \inf_{\lambda \ge \lambda_{\max}} \tfrac{1}{2}\Big( b^\top (\lambda I - A)^{-1} b + \lambda \Big)
    = \inf_{\lambda \ge \lambda_{\max}} \tfrac{1}{2}\Big( \sum_i \frac{(u_i^\top b)^2}{\lambda - \lambda_i} + \lambda \Big),

using the spectral decomposition A = \sum_i \lambda_i u_i u_i^\top. For λ ≥ λ_max, we have λ ≥ λ_i. Taking derivatives, provided b^⊤(λ_max I − A)^{-2} b ≤ 1, this function is increasing in λ ≥ λ_max, and so obtains its infimum at λ_max. Thus, when the assumed inequality is satisfied, the a* is minimax for the given x*.
To obtain the maximin strategy, we can take the usual convexification where the Adversary plays distributions P over the unit sphere. This allows us to swap the infimum and supremum (see e.g. Sion's minimax theorem [Sio58]) and obtain an equivalent optimization problem. We then see that the objective only depends on the mean μ = E x and second moment D = E xx^⊤ of the distribution P. The characterization in [KNW13, Theorem 2.1] tells us that μ, D are the first two moments of a distribution on units iff tr(D) = 1 and D ⪰ μμ^⊤. Then, our usual min-max swap yields

    V = \sup_P \inf_{a \in \mathbb{B}}\; \mathbb{E}_{x \sim P}\Big[ \tfrac{1}{2} a^\top W^{-1} a - a^\top W^{-1} x + \tfrac{1}{2} x^\top W^{-1} x + \tfrac{1}{2} x^\top A x + b^\top x \Big]
    = \sup_{\mu, D} \inf_{a \in \mathbb{B}}\; \tfrac{1}{2} a^\top W^{-1} a - a^\top W^{-1} \mu + \tfrac{1}{2}\operatorname{tr}\big((W^{-1} + A) D\big) + b^\top \mu
    = \sup_{\mu, D}\; -\tfrac{1}{2} \mu^\top W^{-1} \mu + \tfrac{1}{2}\operatorname{tr}\big((W^{-1} + A) D\big) + b^\top \mu
    = -\tfrac{1}{2} a^{*\top} W^{-1} a^* + b^\top a^* + \sup_{D \succeq a^* a^{*\top},\; \operatorname{tr}(D)=1}\; \tfrac{1}{2}\operatorname{tr}\big((W^{-1} + A) D\big)
Figure 2: Illustration of the maximin distribution from Lemma 5.1. The mixture of red unit vectors with mean μ has second moment D = μμ^⊤ + (1 − μ^⊤μ) v_max v_max^⊤.
where the second equality uses a = μ and the third used the saddle point condition μ* = a*. The matrix D with constraint tr(D) = 1 now seeks to align with the largest eigenvector of W^{-1} + A, but it also has to respect the constraint D ⪰ a*a*^⊤. We now re-parameterise by C = D − a*a*^⊤. We then need to find

    \sup_{C \succeq 0,\; \operatorname{tr}(C) = 1 - a^{*\top} a^*}\; \tfrac{1}{2}\operatorname{tr}\big((W^{-1} + A)\, C\big).

By linearity of the objective the maximizer is of rank 1, and hence this is a (scaled) maximum eigenvalue problem, with solution given by C* = (1 − a*^⊤a*) v_max v_max^⊤, so that D* = a*a*^⊤ + (1 − a*^⊤a*) v_max v_max^⊤. This essentially reduces finding P to a 2-dimensional problem, which can be solved in closed form [KNW13, Lemma 4.1]. It is easy to verify that the mixture in the theorem has the desired mean a* and second moment D*. See Figure 2 for the geometrical intuition.

Notice that both the minimax and maximin strategies only depend on W through λ_max and v_max.
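The verification at the end of the proof is mechanical; a numeric sketch (ours, with arbitrary stand-ins for a* and v_max, and the signed version of the mixture weight):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 3
v = rng.normal(size=K); v /= np.linalg.norm(v)        # stand-in for v_max
a = rng.normal(size=K); a *= 0.5 / np.linalg.norm(a)  # stand-in for a*, |a*| < 1

a_par = (a @ v) * v                    # component parallel to v
a_perp = a - a_par                     # component perpendicular to v
scale = np.sqrt(1.0 - a_perp @ a_perp)
p_plus = 0.5 + 0.5 * (a @ v) / scale   # signed form of the mixture weight
x_plus, x_minus = a_perp + scale * v, a_perp - scale * v

mean = p_plus * x_plus + (1 - p_plus) * x_minus
D = p_plus * np.outer(x_plus, x_plus) + (1 - p_plus) * np.outer(x_minus, x_minus)
assert np.isclose(x_plus @ x_plus, 1.0) and np.isclose(x_minus @ x_minus, 1.0)
assert np.allclose(mean, a)                                   # mean is a*
assert np.allclose(D, np.outer(a, a) + (1 - a @ a) * np.outer(v, v))  # D is D*
```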
5.1 Minimax Analysis of the Ball Game
With the above lemma, we can compute the value and strategies for the ball game in an analogous way to Theorem 4.3. Again, we find that the value function at the end of the game is quadratic in the state, and, surprisingly, remains quadratic under the backwards induction.

Theorem 5.2. Consider the T-round ball game with loss ½‖a − x‖²_W. After n rounds, the value-to-go for a state with statistics s = \sum_{t=1}^n x_t and σ² = \sum_{t=1}^n x_t^⊤W^{-1}x_t is

    V(s, \sigma^2, n) = \tfrac{1}{2} s^\top A_n s - \tfrac{1}{2}\sigma^2 + \gamma_n.

The minimax strategy plays

    a^*(s, \sigma^2, n) = \big(\lambda_{\max} I - (A_{n+1} - W^{-1})\big)^{-1} A_{n+1}\, s

and the maximin strategy plays two unit length vectors with

    \Pr\Big(x = a_\perp \pm \sqrt{1 - a_\perp^\top a_\perp}\; v_{\max}\Big) = \frac{1}{2} \pm \frac{1}{2}\sqrt{\frac{a_\parallel^\top a_\parallel}{1 - a_\perp^\top a_\perp}},

where λ_max and v_max correspond to the largest eigenvalue of A_{n+1}, and a_⊥ and a_∥ are the components of a* perpendicular and parallel to v_max. The coefficients A_n and γ_n are determined recursively by base case A_T = (1/T) W^{-1} and γ_T = 0 and recursion

    A_n = A_{n+1}\big(W^{-1} + \lambda_{\max} I - A_{n+1}\big)^{-1} A_{n+1} + A_{n+1},
    \qquad
    \gamma_n = \tfrac{1}{2}\lambda_{\max} + \gamma_{n+1}.
Proof outline. The proof is by induction on the number n of rounds played. In the base case n = T
we find (see (3)) AT = T1 W ?1 and ?T = 0. For the the induction step, we need to calculate
1
2
V (s, ? 2 , n) = inf sup ka ? xkW + V (s + x, ? 2 + x| W ?1 x, n + 1).
a?
x?
2
Using the induction hypothesis, we expand the right-hand-side to
1
1
1
2
inf sup ka ? xkW + (s + x)| An+1 (s + x) ? (? 2 + x| W ?1 x) + ?n+1 .
2
2
a?
x?
2
which we can evaluate by applying Lemma 5.1 with A = An+1 ? W ?1 and b = s| An+1 .
Collecting terms and matching with V (s, ? 2 , n) = 21 s| An s ? 12 ? 2 + ?n yields the recursion for
An and ?n as well as the given minimax and maximin strategies. As before, much of the algebra
has been moved to the appendix.
Understanding the eigenvalues of An As we have seen from the An recursion, the eigensystem
is always the same as that of W ?1 . Thus, we can characterize the minimax strategy completely by
its effect on the eigenvalues of W ?1 . Denote the eigenvalues of An and W ?1 to be ?in and ?i ,
respectively, with ?1n?1 corresponding to the largest eigenvalue. The eigenvalues follow:
?i (?i + ?1 )
(?in )2
+ ?i = n 1 n i ,
?in?1 =
1
i
?i + ? n ? ? n
?i + ?n ? ?n
i
which leaves the order of ?n unchanged. The largest eigenvalue ?1n satisfies the recurrence ?1T /?1 =
2
1/T and ?1n /?1 = ?1n+1 /?1 + ?1n+1 /?1 , which, remarkably, is the same recurrence for the ?n
= ?n ?max .
parameter in the Brier game, i.e. ?max
n
This observation is the key to analyzing the minimax regret.
Theorem 5.3. The minimax regret of the T -round ball game satisfies
1 + ln(T )
V ?
?max (W ?1 ).
2
PT
PT
Proof. We have V = V (0, 0, 0) = ?0 = n=1 ?max
= ?max (W ?1 ) n=1 ?n , the last equality
n
PT
following from the discussion above. The proof of Theorem 4.4 gives the bound on n=1 ?n .
Taking stock, we find that the minimax regrets of the Brier game (Theorems 4.3) and ball game
(Theorems 5.2) have identical dependence on the horizon T but differ in a complexity factor arising
from the interaction of the action space and the loss matrix W .
6
Conclusion
In this paper, we have presented two games that, unexpectedly, have computationally efficient minimax strategies. While the structure of the square Mahalanobis distance is important, it is the interplay between the loss and the constraint set that allows efficient calculation of the backwards
induction, value-to-go, and achieving strategies. For example, the square Mahalanobis game with
`1 ball action spaces does not admit a quadratic value-to-go unless W = I.
We emphasize the low computational cost of this method despite the exponential blow-up in state
space size. In the Brier game, the ?n coefficients need to be precomputed, which can be done in
O(T ) time. Similarly, computation of the eigenvalues of the An coefficients for the ball game can be
done in O(T K + K 3 ) time. Then, at each iteration of the algorithm, only matrix-vector multiplications between the current state and the precomputed parameters are required. Hence, playing either
T round game requires O(T K 2 ) time. Unfortunately, as is the case with most minimax algorithms,
the time horizon must be known in advance.
There are many different future directions. We are currently pursuing a characterization of action
spaces that permit quadratic value functions under squared Mahalanobis loss, and investigating connections between losses and families of value functions closed under backwards induction. There
is some notion of conjugacy between losses, value-to-go functions, and action spaces, but a generalization seems difficult: the Brier game and ball game worked out for seemingly very different
reasons.
8
References
[ABRT08] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal
strategies and minimax lower bounds for online convex games. In Servedio and Zhang
[SZ08], pages 415?423.
[AWY08] Jacob Abernethy, Manfred K. Warmuth, and Joel Yellin. When random play is optimal
against an adversary. In Servedio and Zhang [SZ08], pages 437?446.
[BGH+ 13] Peter L. Bartlett, Peter Grunwald, Peter Harremo?es, Fares Hedayati, and Wojciech
Kot?owski. Horizon-independent optimal prediction with log-loss in exponential families. CoRR, abs/1305.4324, 2013.
[Bri50]
Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly
weather review, 78(1):1?3, 1950.
[CBL06]
Nicol`o Cesa-Bianchi and G?abor Lugosi. Prediction, learning, and games. Cambridge
University Press, 2006.
[CBS11]
Nicol`o Cesa-Bianchi and Ohad Shamir. Efficient online learning via randomized rounding. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger,
editors, Advances in Neural Information Processing Systems 24, pages 343?351, 2011.
Fares Hedayati and Peter L. Bartlett. Exchangeability characterizes optimality of sequential normalized maximum likelihood and bayesian prediction with jeffreys prior.
In International Conference on Artificial Intelligence and Statistics, pages 504?510,
2012.
[HB12]
[KM05]
Petri Kontkanen and Petri Myllym?aki. A fast normalized maximum likelihood algorithm for multinomial data. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), pages 1613?1616, 2005.
[KNW13] Wouter M. Koolen, Jiazhong Nie, and Manfred K. Warmuth. Learning a set of directions. In Shai Shalev-Shwartz and Ingo Steinwart, editors, Proceedings of the 26th
Annual Conference on Learning Theory (COLT), June 2013.
[Sht87]
Yurii Mikhailovich Shtar?kov. Universal sequential coding of single messages. Problemy Peredachi Informatsii, 23(3):3?17, 1987.
[Sio58]
Maurice Sion. On general minimax theorems. Pacific J. Math., 8(1):171?176, 1958.
[SZ08]
Rocco A. Servedio and Tong Zhang, editors. 21st Annual Conference on Learning
Theory - COLT 2008, Helsinki, Finland, July 9-12, 2008. Omnipress, 2008.
[TW00]
Eiji Takimoto and Manfred K. Warmuth. The minimax strategy for Gaussian density
estimation. In 13th COLT, pages 100?106, 2000.
9
| 5243 |@word norm:4 seems:1 seek:1 forecaster:1 queensland:2 decomposition:1 jacob:2 incurs:1 euclidian:1 tr:8 recursively:2 moment:4 past:3 ka:19 current:2 written:1 must:2 half:1 fewer:1 leaf:2 intelligence:2 warmuth:3 xk:1 beginning:1 manfred:3 characterization:3 math:1 zhang:3 prove:1 kov:1 paragraph:1 introduce:1 manner:1 indeed:1 intricate:1 brier:13 frequently:1 owski:1 terminal:1 increasing:1 conv:2 provided:3 xx:1 bounded:2 linearity:1 spends:1 eigenvector:2 hindsight:4 finding:1 berkeley:5 collecting:1 concave:1 demonstrates:1 scaled:1 control:1 unit:6 t1:5 before:1 randomizes:1 despite:1 ak:4 analyzing:1 meet:1 lugosi:1 au:1 studied:1 equating:1 specifying:1 range:2 perpendicular:2 regret:15 kat:3 optimizers:1 universal:1 empirical:1 weather:1 matching:2 pre:1 get:1 applying:2 equivalent:2 deterministic:1 lagrangian:1 go:11 regardless:1 straightforward:1 convex:4 formalized:1 immediately:1 rule:1 notion:1 traditionally:1 analogous:1 pt:10 play:14 suppose:1 user:1 exact:1 shamir:1 prioritization:1 us:1 hypothesis:1 amortized:1 satisfying:1 convexification:1 observed:2 solved:1 unexpectedly:1 calculate:3 ensures:1 intuition:1 environment:1 complexity:2 ui:2 nie:1 depend:1 algebra:1 serve:1 upon:1 learner:8 swap:3 completely:1 easily:1 joint:1 stock:1 fast:2 kp:1 artificial:2 zemel:1 tell:1 aggregate:1 outcome:12 choosing:1 exhaustive:1 abernethy:2 shalev:1 solve:1 nineteenth:1 say:1 statistic:5 final:1 online:7 seemingly:1 interplay:1 sequence:3 eigenvalue:10 interaction:2 aligned:2 iff:1 achieve:1 moved:1 ijcai:1 empty:1 optimum:1 produce:2 derive:5 propagating:1 measured:1 exemplar:1 auxiliary:1 implies:1 nml:2 direction:3 differ:1 bgh:2 proviso:1 exchange:1 fix:3 generalization:4 hold:2 nw:1 optimizer:3 early:1 achieves:1 finland:1 estimation:3 currently:1 largest:5 clearly:1 gaussian:3 always:4 pn:3 sion:2 exchangeability:1 ax:5 june:1 rank:1 likelihood:4 adversarial:1 problemy:1 abor:1 relation:1 expand:2 interested:1 colt:3 special:1 uc:1 equal:3 identical:1 kw:12 thinking:1 future:2 simplex:10 petri:2 fundamentally:1 few:2 preserve:1 ab:1 message:1 wouter:3 joel:1 alignment:3 mixture:2 admitting:1 necessary:1 experience:1 ohad:1 unless:1 iv:1 euclidean:5 taylor:1 desired:2 re:1 cost:1 introducing:1 deviation:1 entry:1 rounding:1 optimally:1 characterize:1 eec:1 chooses:3 st:1 density:3 international:2 randomized:3 retain:1 w1:7 squared:10 again:1 satisfied:1 cesa:2 corner:1 admit:1 expert:3 derivative:2 ek:7 wojciech:1 maurice:1 blow:1 coding:1 coefficient:6 satisfy:2 depends:2 closed:3 doing:1 sup:13 analyze:1 red:1 recover:1 characterizes:1 parallel:3 cbl06:3 shai:1 square:3 efficiently:3 maximized:1 yield:3 correspond:1 bayesian:2 history:1 whenever:1 checked:1 definition:1 against:1 servedio:3 obvious:1 proof:13 static:1 popular:1 recall:1 knowledge:1 attained:1 day:1 follow:3 specify:1 execute:1 though:1 done:2 working:1 hand:2 steinwart:1 ei:1 maximizer:3 infimum:2 effect:1 normalized:4 multiplier:2 verify:1 equality:3 hence:3 symmetric:7 mahalanobis:13 round:16 game:51 recurrence:4 aki:1 eigensystem:1 outline:2 theoretic:1 complete:1 xkw:11 omnipress:1 geometrical:1 meaning:1 wise:1 koolen:3 multinomial:2 exponentially:1 belong:1 discussed:1 fare:2 monthly:1 cambridge:1 unconstrained:2 similarly:1 tionally:1 shawe:1 add:1 base:3 align:1 inf:21 certain:1 inequality:2 harremo:1 exploited:1 seen:1 canon:1 c11:2 july:1 ii:1 full:2 aek:3 reduces:1 kontkanen:1 alan:1 match:1 calculation:1 sphere:1 a1:1 plugging:1 prediction:10 essentially:1 iteration:1 sometimes:1 c1:11 
remarkably:1 properness:1 backwards:4 iii:1 easy:1 variety:1 inner:1 computable:1 inactive:1 curiously:1 bartlett:5 peter:7 cause:1 action:14 generally:1 tewari:1 transforms:1 meteorology:1 eiji:1 telescope:1 notice:1 arising:1 per:1 write:1 shall:1 key:1 achieving:1 takimoto:1 yellin:1 almost:1 family:3 pursuing:1 decision:1 appendix:2 bound:6 guaranteed:1 simplification:1 played:1 quadratic:11 annual:2 constraint:6 worked:1 informatsii:1 helsinki:1 wc:1 fourier:1 min:7 optimality:1 performing:1 pacific:1 ball:18 making:1 jeffreys:1 intuitively:1 pr:2 computationally:3 equation:1 ln:3 remains:1 conjugacy:1 discus:1 turn:1 precomputed:2 tractable:1 end:3 yurii:1 permit:1 apply:3 observe:1 spectral:1 weinberger:1 denotes:1 remaining:1 completed:1 exploit:1 unchanged:1 objective:6 strategy:38 rocco:1 dependence:1 usual:2 interacts:1 exhibit:2 distance:9 parametrized:2 argue:1 reason:1 induction:7 assuming:1 length:3 besides:1 illustration:1 setup:3 unfortunately:1 difficult:1 statement:1 implementation:1 bianchi:2 upper:1 observation:1 ingo:1 payoff:1 required:1 connection:1 maximin:13 california:2 beyond:1 adversary:4 proceeds:1 kot:1 summarize:1 ambuj:1 max:33 recursion:4 minimax:44 improve:1 technology:2 review:1 understanding:1 prior:1 checking:1 multiplication:1 nicol:2 loss:30 fully:1 interesting:2 parameterise:1 ingredient:1 penalization:1 verification:1 editor:3 tiny:1 playing:2 cd:8 penalized:1 surprisingly:2 last:3 offline:1 guide:1 side:2 understand:1 taking:2 differentiating:1 absolute:1 peredachi:1 boundary:1 xn:7 evaluating:1 vmax:14 compact:1 obtains:1 emphasize:1 supremum:1 investigating:1 summing:1 assumed:2 leader:2 shwartz:1 why:1 glenn:1 protocol:1 diag:12 pk:1 main:1 n2:3 myllym:1 repeated:2 x1:8 qut:1 grunwald:1 rarity:1 tong:1 sub:1 pereira:1 explicit:1 exponential:4 lie:2 kxk2:1 third:1 rk:3 theorem:13 xt:26 list:1 x:1 rakhlin:1 essential:1 sequential:3 corr:1 horizon:5 forecast:1 saddle:3 lagrange:2 kxk:1 expressed:1 corresponds:1 malek:2 satisfies:4 kan:1 minimizer:3 goal:1 lipschitz:1 feasible:3 determined:1 lemma:16 called:1 e:1 player:3 exception:2 people:1 alexander:1 evaluate:3 mikhailovich:1 |
4,687 | 5,244 | Online Decision-Making in
General Combinatorial Spaces
Arun Rajkumar
Shivani Agarwal
Department of Computer Science and Automation
Indian Institute of Science, Bangalore 560012, India
{arun r,shivani}@csa.iisc.ernet.in
Abstract
We study online combinatorial decision problems, where one must make sequential decisions in some combinatorial space without knowing in advance the cost of
decisions on each trial; the goal is to minimize the total regret over some sequence
of trials relative to the best fixed decision in hindsight. Such problems have been
studied mostly in settings where decisions are represented by Boolean vectors and
costs are linear in this representation. Here we study a general setting where costs
may be linear in any suitable low-dimensional vector representation of elements
of the decision space. We give a general algorithm for such problems that we
call low-dimensional online mirror descent (LDOMD); the algorithm generalizes
both the Component Hedge algorithm of Koolen et al. (2010), and a recent algorithm of Suehiro et al. (2012). Our study offers a unification and generalization of
previous work, and emphasizes the role of the convex polytope arising from the
vector representation of the decision space; while Boolean representations lead to
0-1 polytopes, more general vector representations lead to more general polytopes.
We study several examples of both types of polytopes. Finally, we demonstrate the
benefit of having a general framework for such problems via an application to an
online transportation problem; the associated transportation polytopes generalize
the Birkhoff polytope of doubly stochastic matrices, and the resulting algorithm
generalizes the PermELearn algorithm of Helmbold and Warmuth (2009).
1
Introduction
In an online combinatorial decision problem, the decision space is a set of combinatorial structures,
such as subsets, trees, paths, permutations, etc. On each trial, one selects a combinatorial structure
from the decision space, and incurs a loss; the goal is to minimize the regret over some sequence of
trials relative to the best fixed structure in hindsight. Such problems have been studied extensively
in the last several years, primarily in the setting where the combinatorial structures are represented
by Boolean vectors, and costs are linear in this representation; this includes online learning of paths,
permutations, and various other specific combinatorial structures [16, 17, 12], as well as the Component Hedge algorithm of Koolen et al. [14] which generalizes many of these previous studies. More
recently, Suehiro et al. [15] considered a setting where the combinatorial structures of interest are
represented by the vertices of the base polytope of a submodular function, and costs are linear in this
representation; this includes as special cases several of the Boolean examples considered earlier, as
well as new settings such as learning permutations with certain position-based losses (see also [2]).
In this work, we consider a general form of the online combinatorial decision problem, where costs
can be linear in any suitable low-dimensional vector representation of the combinatorial structures
of interest. This encompasses representations as Boolean vectors and vertices of submodular base
polytopes as special cases, but also includes many other settings. We give a general algorithm for
1
such problems that we call low-dimensional online mirror descent (LDOMD); the algorithm generalizes both the Component Hedge algorithm of Koolen et al. for Boolean representations [14], and
the algorithm of Suehiro et al. for submodular polytope vertex representations [15].1 As we show, in
many settings of interest, the regret bounds for LDOMD are better than what can be obtained with
other algorithms for online decision problems, such as the Hedge algorithm of Freund and Schapire
[10] and the Follow the Perturbed Leader algorithm of Kalai and Vempala [13].
We start with some preliminaries and background in Section 2, and describe the LDOMD algorithm
and its analysis in Section 3. Our study emphasizes the role of the convex polytope arising from the
vector representation of the decision space; we study several examples of such polytopes, including
matroid polytopes, polytopes associated with submodular functions, and permutation polytopes in
Sections 4?6, respectively. Section 7 applies our framework to an online transportation problem.
2
Preliminaries and Background
Notation. For n ? Z+ , we will denote [n] = {1, . . . , n}. For a vector z ? Rd , we will denote by
kzk1 , kzk2 , and kzk? the standard L1 , L2 , and L? norms of z, respectively. For a set Z ? Rd , we
will denote by conv(Z) the convex hull of Z, and by int(Z) the interior of Z. For a closed convex
set K ? Rd and Legendre function F : K?R,2 we will denote by BF : K ? int(K)?R+ the
Bregman divergence associated with F , defined as BF (x, x0 ) = F (x) ? F (x0 ) ? ?F (x0 ) ? (x ? x0 ),
and by F ? : ?F (int(K))?R the Fenchel conjugate of F , defined as F ? (u) = supx?K (x?u?F (x)).
Problem Setup. Let C be a (finite but large) set of
Online Combinatorial Decision-Making
combinatorial structures. Let ? : C?Rd be some injective mapping that maps each c ? C to a unique
Inputs:
vector ?(c) ? Rd (so that |?(C)| = |C|). We will
Finite set of combinatorial structures C
generally assume d |C| (e.g. d = poly log(|C|)).
Mapping ? : C?Rd
The online combinatorial decision-making problem
For t = 1 . . . T :
we consider can be described as follows: On each
? Predict ct ? C
trial t, one makes a decision in C by selecting a structure ct ? C, and receives a loss vector `t ? [0, 1]d ;
? Receive loss vector `t ? [0, 1]d
the loss incurred is given by ?(ct ) ? `t (see Figure 1).
? Incur loss ?(ct ) ? `t
The goal is to minimize the regret relative to the single best structure in C in hindsight; specifically, the Figure 1: Online decision-making in a genregret of an algorithm A that selects ct ? C on trial t eral combinatorial space.
over T trials is defined as
PT
PT
t
t
t
RT [A] =
t=1 ?(c ) ? ` ? minc?C
t=1 ?(c) ? ` .
In particular, we would like to design algorithms whose worst-case regret (over all possible loss sequences) is sublinear in T (and also has as good a dependence as possible on other relevant problem
parameters). From standard results, it follows that for any deterministic algorithm, there is always a
loss sequence that forces the regret to be linear in T ; as is common in the online learning literature,
we will therefore consider randomized algorithms that maintain a probability distribution pt over C
from which ct is randomly drawn, and consider bounding the expected regret of such algorithms.
Online Mirror Descent (OMD). Recall that online mirror descent (OMD) is a general algorithmic
framework for online convex optimization problems, where on each trial t, one selects a point xt in
some convex set ? ? Rn , receives a convex cost function ft : ??R, and incurs a loss ft (xt ); the
goal is to minimize the regret relative to the best single point in ? in hindsight. The OMD algorithm
makes use of a Legendre function F : K?R defined on a closed convex set K ? ?, and effectively
performs a form of projected gradient descent in the dual space of int(K) under F , the projections
being in terms of the Bregman divergence BF associated with F . See Appendix A.1 for an outline
of OMD and its regret bound for the special case of online linear optimization, where costs ft are
linear (so that ft (x) = `t ? x for some `t ? Rn ), which will be relevant to our study.
1
We note that the recent online stochastic mirror descent (OSMD) algorithm of Audibert et al. [3] also
generalizes the Component Hedge algorithm, but in a different direction: OSMD (as described in [3]) applies
to only Boolean representations, but allows also for partial information (bandit) settings; here we consider only
full information settings, but allow for more general vector representations.
2
Recall that for a closed convex set K ? Rd , a function F : K?R is Legendre if it is strictly convex,
differentiable on int(K), and (for any norm k ? k on Rd ) k?F (xn )k? + ? whenever {xn } converges to a
point in the boundary of K.
2
Hedge/Na??ve OMD. The Hedge algorithm proposed by Freund and Schapire [10] is widely used
for online decision problems in general. The algorithm maintains a probability distribution over the
decision space, and can be viewed as an instantiation of the OMD framework, with ? (and K) the
probability simplex over the decision space, linear costs ft (since one works with expected losses),
and F the negative entropy. When applied to online combinatorial decision problems in a na??ve
manner, the Hedge algorithm requires maintaining a probability distribution over the combinatorial
decision space C, which in many cases can be computationally prohibitive (see Appendix A.2 for
an outline of the algorithm, which we also refer to as Na??ve OMD). The following bound on the
expected regret of the Hedge/Na??ve OMD algorithm is well known:
Theorem 1 (Regretqbound for Hedge/Na??ve OMD). Let ?(c) ? `t ? [a, b] ?c ? C, t ? [T ]. Then
setting ? ? =
2
(b?a)
2 ln |C|
T
gives
r
h
i
T ln |C|
?
E RT Hedge(? ) ? (b ? a)
.
2
Follow the Perturbed Leader (FPL). Another widely used algorithm for online decision problems
is the Follow the Perturbed Leader (FPL) algorithm proposed by Kalai and Vempala [13] (see Appendix A.3 for an outline of the algorithm). Note that in the combinatorial setting, FPL requires the
solution to a combinatorial optimization problem on each trial, which may or may not be efficiently
solvable depending on the form of the mapping ?. The following bound on the expected regret of
the FPL algorithm is well known:
0
t
t
Theorem 2 (Regret bound for FPL). Let
q k?(c) ? ?(c )k1 ? D1 , k` k1 ? G1 , and |?(c) ? ` | ? B
D1
?c, c0 ? C, t ? [T ]. Then setting ? ? = BG
gives
1T
h
p
i
?
E RT FPL(? ) ? 2 D1 BG1 T .
Polytopes. Recall that a set S ? Rd is a polytope if there exist a finite number of points x1 , . . . , xn ?
Rd such that S = conv({x1 , . . . , xn }). Any polytope S ? Rd has a unique minimal set of points
x01 , . . . , x0m ? Rd such that S = conv({x01 , . . . , x0m }); these points are called the vertices of S. A
polytope S ? Rd is said to be a 0-1 polytope if all its vertices lie in the Boolean hypercube {0, 1}d .
As we shall see, in our study of online combinatorial decision problems as above, the polytope
conv(?(C)) ? Rd will play a central role. Clearly, if ?(C) ? {0, 1}d , then conv(?(C)) is a 0-1
polytope; in general, however, conv(?(C)) can be any polytope in Rd .
3
Low-Dimensional Online Mirror Descent (LDOMD)
We describe the Low-Dimensional OMD (LDOMD) algorithm in Figure 2. The algorithm maintains
a point xt in the polytope conv(?(C)). It makes use of a Legendre function F : K?R defined on
a closed convex set K ? conv(?(C)), and effectively performs OMD in a d-dimensional space
rather than in a |C|-dimensional space as in the case of Hedge/Na??ve OMD. Note that an efficient
implementation of LDOMD requires two operations to be performed efficiently: (a) given a point
xt ? conv(?(C)), one needs to be able to efficiently find a ?decomposition? of xt into a convex
combination of a small number of points in ?(C) (this yields a distribution pt ? ?C that satisfies
Ec?pt [?(c)] = xt and also has small support, allowing efficient sampling); and (b) given a point
x
et+1 ? K, one needs to be able to efficiently find a ?projection? of x
et+1 onto conv(?(C)) in terms
of the Bregman divergence BF . The following regret bound for LDOMD follows directly from the
standard OMD regret bound (see Theorem 4 in Appendix A.1):
Theorem 3 (Regret bound for LDOMD). Let BF (?(c), x1 ) ? D2 ?c ? C. Let k ? k be any norm
in Rd such that k`t k ? G ?t ? [T ], and such that the restriction
qof F to conv(?(C)) is ?-strongly
2?
convex w.r.t. k ? k? , the dual norm of k ? k. Then setting ? ? = D
G
T gives
r
h
i
2T
.
E RT LDOMD(? ? ) ? DG
?
As we shall see below, the LDOMD algorithm generalizes both the Component Hedge algorithm
of Koolen et al. [14], which applies to settings where ?(C) ? {0, 1}d (Section 3.1), and the recent
algorithm of Suehiro et al. [15], which applies to settings where conv(?(C)) is the base polytope
associated with a submodular function (Section 5).
3
Algorithm Low-Dimensional OMD (LDOMD) for Online Combinatorial Decision-Making
Inputs:
Finite set of combinatorial structures C
Mapping ? : C?Rd
Parameters:
?>0
Closed convex set K ? conv(?(C)), Legendre function F : K?R
Initialize:
x1 = argminx?conv(?(C)) F (x) (or x1 = any other point in conv(?(C)))
For t = 1 . . . T :
? Let pt be any distribution over C such that Ec?pt [?(c)] = xt [Decomposition step]
? Randomly draw ct ? pt
? Receive loss vector `t ? [0, 1]d
? Incur loss ?(ct ) ? `t
? Update:
x
et+1 ? ?F ? (?F (xt ) ? ?`t )
xt+1 ? argminx?conv(?(C)) BF (x, x
et+1 ) [Bregman projection step]
Figure 2: The LDOMD algorithm.
3.1
LDOMD with 0-1 Polytopes
Consider first a setting where each c ? C is represented as a Boolean vector, so that ?(C) ? {0, 1}d .
In this case conv(?(C)) is a 0-1 polytope. This is the setting commonly studied under the term
?online combinatorial learning? [14, 8, 3]. In analyzing this setting, one generally introduces an
additional problem parameter, namely an upper bound m on the ?size? of each Boolean vector ?(c).
Specifically, let us assume k?(c)k1 ? m ?c ? C for some m ? [d].
Under the above assumption, it is easy to verify that applying Theorems 1 and 2 gives
h
h
q
?
i
i
d
E RT Hedge(? ? ) = O m T m ln( m
) ;
E RT FPL(? ? ) = O(m T d) .
For the LDOMD algorithm, since conv(?(C)) ? [0, 1]d ? Rd+ , it is common to take K = Rd+ and to
Pd
Pd
let F : K?R be the unnormalized negative entropy, defined as F (x) = i=1 xi ln xi ? i=1 xi ,
which leads to a multiplicative update algorithm; the resulting algorithm was termed Component
d
) ?c ? C;
Hedge in [14]. For the above choice of F , it is easy to see that BF (?(c), x1 ) ? m ln( m
1
t
moreover, k` k? ? 1 ?t, and the restriction of F on conv(?(C)) is ( m )-strongly convex w.r.t. k ? k1 .
Therefore, applying Theorem 3 with appropriate ? ? , one gets
h
q
i
d
E RT LDOMD(? ? ) = O m T ln( m
) .
Thus, when ?(C) ? {0, 1}d , the LDOMD algorithm with the above choice of F gives a better regret
bound than both Hedge/Na??ve OMD and FPL; in fact the performance of LDOMD in this setting is
essentially optimal, as one can easily show a matching lower bound [3].
Below we will see how several online combinatorial decision problems studied in the literature can
be recovered under the above framework (e.g. see [16, 17, 12, 14, 8]); in many of these cases, both
decomposition and unnormalized relative entropy projection steps in LDOMD can be performed
efficiently (in poly(d) time) (e.g. see [14]). As a warm-up, consider the following simple example:
Example 1 (m-sets with element-based losses). Here C contains all size-m subsets of a ground set
of d elements: C = {S ? [d] | |S| = m}. On each trial t, one selects a subset S t ? C and receives
d
t
a loss vector `t ? [0, 1]
P , witht`i specifying the loss for including element i ? [d]; the dloss for the
t
subset S is given by i?S t `i . Here it is natural to define a mapping ? : C?{0, 1} that maps
each S ? C to its characteristic vector, defined as ?i (S) = 1(i ? S) ?i ? [d]; the loss incurred
on predicting S t ? C is then simply ?(S t ) ? `t . Thus ?(C) = {x ? {0, 1}d | kxk1 = m}, and
d
conv(?(C)) = {x ? [0, 1]q
| kxk1 = m}. LDOMD with unnormalized negative entropy as above
d
) . It can be shown that both decomposition and unnormalized
has a regret bound of O m T ln( m
relative entropy projection steps take O(d2 ) time [17, 14].
4
3.2
LDOMD with General Polytopes
Now consider a general setting where ? : C?Rd , and conv(?(C)) ? Rd is an arbitrary polytope.
Let us assume again k?(c)k1 ? m ?c ? C for some m > 0.
Again, it is easy to verify that applying Theorems 1 and 2 gives
h
h
p
?
i
i
E RT Hedge(? ? ) = O(m T ln |C|) ;
E RT FPL(? ? ) = O(m T d) .
For the LDOMD algorithm, we consider two cases:
Case 1: ?(C) ? Rd+ . Here one can again take K = Rd+ and let F : K?R be the unnormalized
negative entropy. In this case, one gets BF (?(c), x1 ) ? m ln(d) + m ?c ? C if m < d, and
BF (?(c), x1 ) ? m ln(m) + d ?c ? C if m ? d. As before, k`t k? ? 1 ?t, and the restriction of F
1
on conv(?(C)) is ( m
)-strongly convex w.r.t. k ? k1 , so applying Theorem 3 for appropriate ? ? gives
(
p
h
i
O m T ln(d)
if m < d
?
p
E RT LDOMD(? ) =
if m ? d.
O m T ln(m)
Thus, when ?(C) ? Rd+ , if ln |C| = ?(max(ln(m), ln(d)))) and d = ?(ln(m)), then the
LDOMD algorithm with unnormalized negative entropy again gives a better regret bound than both
Hedge/Na??ve OMD and FPL.
Case 2: ?(C) 6? Rd+ . Here one can no longer use the unnormalized negative entropy in LDOMD.
One possibility is to take K = Rd and let F : K?R be defined as F (x) = 21 kxk22 , which leads to
an additive update algorithm. In this case, one gets BF (?(c), x1 ) = 21 k?(c) ? x1 k22 ? 2m2 ?c ? C;
?
moreover, k`t k2 ? d ?t, and F is 1-strongly convex w.r.t. k ? k2 . Applying Theorem 3 for
h
appropriate ? ? then gives
?
i
E RT LDOMD(? ? ) = O(m T d) .
Thus in general, when ?(C) 6? Rd+ , LDOMD with squared L2 -norm has a similar regret bound as
that of Hedge/Na??ve OMD and FPL. Note however that in some cases, Hedge/Na??ve OMD and FPL
may be infeasible to implement efficiently, while LDOMD with squared L2 -norm may be efficiently
implementable; moreover, in certain cases it may be possible to implement LDOMD with other
choices of K and F that lead to better regret bounds.
In the following sections we will consider several examples of applications of LDOMD to online
combinatorial decision problems involving both 0-1 polytopes and general polytopes in Rd .
4
Matroid Polytopes
Consider an online decision problem in which the decision space C contains (not necessarily all)
independent sets in a matroid M = (E, I). Specifically, on each trial t, one selects an independent
specifying the loss for including element
set I t ? C, and receives a loss vector `t ? [0, 1]|E| , with `teP
e ? E; the loss for the independent set I t is given by e?I t `te . Here it is natural to define a
mapping ? : C?{0, 1}|E| that maps each independent set I ? C to its characteristic vector, defined
as ?e (I) = 1(e ? I); the loss on selecting I t ? C is then ?(I t ) ? `t . Thus here d = |E|, and
?(C) ? {0, 1}|E| . A particularly interesting case is obtained by taking C to contain all the maximal
independent sets (bases) in I; in this case, the polytope conv(?(C)) is known as the matroid base
polytope of M. This polytope, often denoted as B(M), is also given by
P
n
o
P
B(M) = x ? R|E|
e?S xe ? rankM (S) ?S ? E, and
e?E xe = rankM (E) ,
where rankM : 2E ?R is the matroid rank function of M defined as
rankM (S) = max |I| | I ? I, I ? S
?S ? E .
We will see below (Section 5) that both decomposition and unnormalized relative entropy projection
steps in this case can be performed efficiently assuming an appropriate oracle.
We note that Example 1 (m-subsets of a ground set of d elements) can be viewed as a special case of
the above setting for the matroid Msub = (E, I) defined by E = [d] and I = {S ? E | |S| ? m};
the set C of m-subsets of [d] is then simply the set of bases in I, and conv(?(C)) = B(Msub ). The
following is another well-studied example:
5
Example 2 (Spanning trees with edge-based losses). Here one is given a connected, undirected
graph G = ([n], E), and the decision space C is the set of all spanning trees in G. On each trial t,
t
|E|
t
one selects a spanning tree T t ? C and receives a loss vector
P ` ?t [0, 1] , with `e specifying the
t
loss for using edge e; the loss for the tree T is given by e?T t `e . It is well known that the set of
all spanning trees in G is the set of bases in the graphic matroid MG = (E, I), where I contains
edge sets of all acyclic subgraphs of G. Therefore here d = |E|, ?(C) is the set of incidence vectors
of all spanning trees in G, and conv(?(C)) = B(MG ), also known as the spanning
tree polytope.
q
|E|
Here LDOMD with unnormalized negative entropy has a regret bound of O n T ln( n?1
) .
5
Polytopes Associated with Submodular Functions
Next we consider settings where the decision space C is in one-to-one correspondence with the set
of vertices of the base polytope associated with a submodular function, and losses are linear in the
corresponding vertex representations of elements in C. This setting was considered recently in [15],
and as we shall see, encompasses both of the examples we saw earlier, as well as many others. Let
f : 2[n] ?R be a submodular function with f (?) = 0. The base polytope of f is defined as
P
n
o
Pn
B(f ) = x ? Rn i?S xi ? f (S) ?S ? [n], and i=1 xi = f ([n]) .
Let ? : C?Rn be a bijective mapping from C to the vertices of B(f ); thus conv(?(C)) = B(f ).
5.1
Monotone Submodular Functions
It is known that when f is a monotone submodular function (which means U ? V =? f (U ) ?
f (V )), then B(f ) ? Rn+ [4]. Therefore in this case one can take K = Rn+ and F : K?R to be the
unnormalized negative entropy. Both decomposition and unnormalized relative entropy projection
steps can be performed in time O(n6 + n5 Q), where Q is the time taken by an oracle that given
S returns f (S); for cardinality-based submodular functions, for which f (S) = g(|S|) for some
g : [n]?R, these steps can be performed in just O(n2 ) time [15].
Remark on matroid base polytopes and spanning trees. We note that the matroid rank function
of any matroid M is a monotone submodular function, and that the matroid base polytope B(M)
is the same as B(rankM ). Therefore Examples 1 and 2 can also be viewed as special cases of the
above setting. For the spanning trees of Example 2, the decomposition step of [14] makes use of a
linear programming formulation whose exact time complexity is unclear. Instead, one could use the
decomposition step associated with the submodular function rankMG , which takes O(|E|6 ) time.
Matroid polytopes are 0-1 polytopes; the example below illustrates a more general polytope:
Example 3 (Permutations with a certain position-based loss). Let C = Sn , the set of all permutations
of n objects: C = {? : [n]?[n] | ? is bijective}. On each trial t, one selectsPa permutation ? t ? C
n
and receives a loss vector `t ? [0, 1]n ; the loss of the permutation is given by i=1 `ti (n?? t (i)+1).
t
This type of loss arises in scheduling applications, where `i denotes the time taken to complete the
i-th job, and the loss of a job schedule (permutation of jobs) is the total waiting time of all jobs
(the waiting time of a job is its own completion time plus the sum of completion times of all jobs
scheduled before it) [15]. Here it is natural to define a mapping ? : C?Rn+ that maps ? ? C to
?(?) = (n ? ?(1) + 1, . . . , n ? ?(n) + 1); the loss on selecting ? t ? C is then ?(? t ) ? `t . Thus
here we have d = n, and ?(C) = {(?(1), . . . , ?(n)) | ? ? Sn }. It is known that the n! vectors in
?(C) are exactly the vertices of the base polytope corresponding to the monotone (cardinality-based)
P|S|
submodular function fperm : 2[n] ?R defined as fperm (S) = i=1 (n ? i + 1). Thus conv(?(C)) =
B(fperm ); this is a well-known polytope called the permutahedron [21], and has recently been studied
in the context of online learning applications in [18, 15, 1]. Here k?(?)k1 = n(n+1)
2 p ?? ? C, and
therefore LDOMD with unnormalized negative entropy has a regret bound of O n2 T ln(n) . As
noted above, decomposition and unnormalized relative entropy projection steps take O(n2 ) time.
5.2
General Submodular Functions
In general, when f is non-monotone, B(f ) ? Rn can contain vectors with non-negative entries.
Here one can use LOMD with the squared L2 -norm. The Euclidean projection step can again be
performed in time O(n6 + n5 Q) in general, where Q is the time taken by an oracle that given S
returns f (S), and in O(n2 ) time for cardinality-based submodular functions [15].
6
6
Permutation Polytopes
There has been increasing interest in recent years in online decision problems involving rankings or
permutations, largely due to their role in applications such as information retrieval, recommender
systems, rank aggregation, etc [12, 18, 19, 15, 1, 2]. Here the decision space is C = Sn , the set of
all permutations of n objects:
C = {? : [n]?[n] | ? is bijective} .
On each trial t, one predicts a permutation ? t ? C and receives some type of loss. We saw one special
type of loss in Example 3; we now consider any loss that can be represented as a linear function of
some vector representation of the permutations in C. Specifically, let d ? Z+ , and let ? : C?Rd be
any injective mapping such that on predicting ? t , one receives a loss vector `t ? [0, 1]d and incurs
loss ?(? t ) ? `t . For any such mapping ?, the polytope conv(?(C)) is called a permutation polytope
[5].3 The permutahedron we saw in Example 3 is one example of a permutation polytope; here
we consider various other examples. For any such polytope, if one can perform the decomposition
and suitable Bregman projection steps efficiently, then one can use the LDOMD algorithm to obtain
good regret guarantees with respect to the associated loss.
Example 4 (Permutations with assignment-based losses). Here on each trial t, one selects a permutation ? t ? C and receives a loss matrix `t ? [0, 1]n?n , with `tij specifying the loss for assigning
Pn
element i to position j; the loss for the permutation ? t is given by i=1 `ti,?t (i) . Here it is natural
to define a mapping ? : C?{0, 1}n?n that maps each ? ? C to its associated permutation matrix
P ? ? {0, 1}n?n , defined as Pij? = 1(?(i) = j) ?i, j ? [n]; the loss incurred on predicting ? t ? C is
Pn Pn
then i=1 j=1 ?ij (? t )`tij . Thus we have here that d = n2 , ?(C) = {P ? ? {0, 1}n?n | ? ? Sn },
and conv(?(C)) is the well-known Birkhoff polytope containing all doubly stochastic matrices in
[0, 1]n?n (also known as the assignment polytope or the perfect matching polytope of the complete
bipartite
p graphKn,n ). Here LDOMD with unnormalized negative entropy has a regret bound of
O n T ln(n) . This recovers exactly the PermELearn algorithm used in [12]; see [12] for efficient implementations of the decomposition and unnormalized relative entropy projection steps.
Example 5 (Permutations with general position-based losses). Here on each trial t, one selects
a permutation ? t ? C and receives a loss vector `t ? [0, 1]n . There is a weight function ? :
[n]?R+ that weights the loss incurred at each position, such thatPthe loss contributed by element
n
i is `ti ?(? t (i)); the total loss of the permutation ? t is given by i=1 `ti ?(? t (i)). Note that the
particular loss considered in Example 3 (and in [15]) is a special case of such a position-based loss,
with weight function ?(i) = (n?i+1). Several other position-dependent losses are used in practice;
for example, the discounted cumulative gain (DCG) based loss, which is widely used in information
1
[9]. For a general position-based loss
retrieval applications, effectively uses ?(i) = 1 ? log (i)+1
2
n
as
?(?)
=
(?(?(1)), . . . , ?(?(n))).
This yields a
with weight function ?, one can define ? : C?R
+
permutation polytope conv(?(C)) = conv (?(?(1)), . . . , ?(?(n))) | ? ? Sn ? Rn+ . Provided
one can implement the decomposition and suitable Bregman projection steps efficiently, one can use
the LDOMD algorithm to get a sublinear regret.
7
Application to an Online Transportation Problem
Consider now the following transportation problem: there are m supply locations for a particular
n
commodity and n demand locations, with a supply vector a ? Zm
+ and demand vector b ? Z+
specifying the (integer) quantities of the commodity supplied/demanded by the various locations.
Pm
Pn
4
Assume i=1 ai = j=1 bj = q. In the offline setting, there is a cost matrix ` ? [0, 1]m?n , with
`ij specifying the cost of transporting one unit of the commodity from supply location i to demand
location j, and the goal is to decide on a transportation matrix Q ? Zm?n
that specifies suitable
+
(integer) quantities of the commodity to be transportedPbetween
the
various
supply and demand
m Pn
locations so as to minimize the total transportation cost, i=1 j=1 Qij `ij .
Here we consider an online variant of this problem where the supply vector a and demand vector b
are viewed as remaining constant over some period of time, while the costs of transporting the com3
The term ?permutation polytope? is sometimes used to refer to various polytopes obtained through specific
mappings ? : Sn ?Rd ; here we use the term in a broad sense for any such polytope, following terminology of
Bowman [5]. (Note that the description Bowman [5] gives of a particular 0-1 permutation polytope in Rn(n?1) ,
known as the binary choice polytope or the linear ordering polytope [20], is actually incorrect; e.g. see [11].)
7
Algorithm Decomposition Step for Transportation Polytopes
n
Input: X ? T (a, b) (where a ? Zm
+ , b ? Z+ )
Initialize: A1 ? X; k ? 0
Repeat:
?k ?k+1
? Find an extreme point Qk ? T (a, b) such that Akij = 0 =? Qkij = 0 (see Appendix B)
k
A
? ?k ? min(i,j):Qkij >0 Qij
k
ij
? Ak+1 ? Ak ? ?k Qk
Until all entries of Ak+1 are zero
Ouput: Decomposition of X as convex combination of extreme points Q1 , . . . , Qk :
Pk
Pk
X = r=1 ?r Qr (it can be verified that ?r ? (0, 1] ?r and r=1 ?r = 1)
Figure 3: Decomposition step in applying LDOMD to transportation polytopes.
modity between various supply and demand locations change over time. Specifically, the decision
space here is the set of all valid (integer) transportation matrices satisfying constraints given by a, b:
Pn
Pm
| j=1 Qij = ai ?i ? [m] ,
C = Q ? Zm?n
+
i=1 Qij = bj ?j ? [n] .
On each trial t, one selects aP
transportation
matrix Qt ? C, and receives a cost matrix `t ?
m Pn
m?n
t t
[0, 1]
; the loss incurred is i=1 j=1 Qij `ij . A natural mapping here is simply the identity:
? : C?Zm?n
with ?(Q) = Q ?Q ? C. Thus we have here d = mn, ?(C) = C, and conv(?(C)) is
+
the well-known transportation polytope T (a, b) (e.g. see [6]):
Pn
Pm
conv(?(C)) = T (a, b) = X ? Rm?n
| j=1 Xij = ai ?i ? [m] ,
+
i=1 Xij = bj ?j ? [n] .
Transportation polytopes generalize the Birkhoff polytope of doubly stochastic matrices, which can
be seen to arise as a special case when m = n and ai = bi = 1 ?i ? [n] (see Example 4). While the
Birkhoff polytope is a 0-1 polytope, a general transportation polytope clearly includes non-Boolean
vertices. Nevertheless, we do have T (a, b) ? Rm?n
, which suggests we can use the LDOMD
+
algorithm with unnormalized negative entropy.
For the decomposition step in LDOMD, one can use an algorithm broadly similar to that used for the
Birkhoff polytope in [12]. Specifically, given a matrix X ? conv(?(C)) = T (a, b), one successively
subtracts off multiples of extreme points Qk ? C from X until one is left with a zero matrix (see
Figure 3). However, a key step of this algorithm is to find a suitable extreme point to subtract off
on each iteration. In the case of the Birkhoff polytope, this involved finding a suitable permutation
matrix, and was achieved by finding a perfect matching in a suitable bipartite graph. For general
transportation polytopes, we make use of a characterization of extreme points in terms of spanning
forests in a suitable bipartite graph (see Appendix B for details). The overall decomposition results
in a convex combination of at most mn extreme points in C, and takes O(m3 n3 ) time.
The unnormalized relative entropy projection step can be performed efficiently by using a procedure
similar to the Sinkhorn balancing used for the Birkhoff polytope in [12]. Specifically, given a none ? Rm?n , one alternately scales the rows and columns to match the desired row
negative matrix X
+
and column sums until some convergence criterion is met. As with Sinkhorn balancing, this results
in an approximate projection step, but does notp
hurt the overall regret analysis (other than a constant
additive term), yielding a regret bound of O q T ln(max(mn, q)) .
8
Conclusion
We have considered a general form of online combinatorial decision problems, where costs can be
linear in any suitable low-dimensional vector representation of elements of the decision space, and
have given a general algorithm termed low-dimensional online mirror descent (LDOMD) for such
problems. Our study emphasizes the role of the convex polytope arising from the vector representation of the decision space; this both yields a unification and generalization of previous algorithms,
and gives a general framework that can be used to design new algorithms for specific applications.
Acknowledgments. Thanks to the anonymous reviewers for helpful comments and Chandrashekar
Lakshminarayanan for helpful discussions. AR is supported by a Microsoft Research India PhD
Fellowship. SA thanks DST and the Indo-US Science & Technology Forum for their support.
8
References
[1] Nir Ailon. Bandit online optimization over the permutahedron. CoRR, abs/1312.1530, 2013.
[2] Nir Ailon. Online ranking: Discrete choice, spearman correlation and other feedback. CoRR,
abs/1308.6797, 2013.
[3] Jean-Yves Audibert, S?ebastien Bubeck, and G?abor Lugosi. Regret in online combinatorial
optimization. Mathematics of Operations Research, 39(1):31?45, 2014.
[4] Francis Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145?373, 2013.
[5] V. J. Bowman. Permutation polyhedra. SIAM Journal on Applied Mathematics, 22(4):580?
589, 1972.
[6] Richard A Brualdi. Combinatorial Matrix Classes. Cambridge University Press, 2006.
[7] S?ebastion Bubeck. Introduction to online optimization. Lecture Notes, Princeton University,
2011.
[8] Nicol`o Cesa-Bianchi and G?abor Lugosi. Combinatorial bandits. Journal of Computer and
System Sciences, 78(5):1404?1422, 2012.
[9] David Cossock and Tong Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE
Transactions on Information Theory, 54(11):5140?5154, 2008.
[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning
and an application to boosting. Journal of Computer and System Sciences, 55(1):119?139,
1997.
[11] M. Gr?otschel, M. J?unger, and G. Reinelt. Facets of the linear ordering polytope. Mathematical
Programming, 33:43?60, 1985.
[12] David P. Helmbold and Manfred K. Warmuth. Learning permutations with exponential
weights. Journal of Machine Learning Research, 10:1705?1736, 2009.
[13] Adam Tauman Kalai and Santosh Vempala. Efficient algorithms for online decision problems.
Journal of Computer and System Sciences, 71(3):291?307, 2005.
[14] Wouter M. Koolen, Manfred K. Warmuth, and Jyrki Kivinen. Hedging structured concepts. In
COLT, 2010.
[15] Daiki Suehiro, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Kiyohito Nagano. Online
prediction under submodular constraints. In ALT, 2012.
[16] Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. Journal of
Machine Learning Research, 4:773?818, 2003.
[17] Manfred K. Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret
bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9:2287?
2320, 2008.
[18] Shota Yasutake, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Masayuki Takeda. Online
linear optimization over permutations. In ISAAC, pages 534?543, 2011.
[19] Shota Yasutake, Kohei Hatano, Eiji Takimoto, and Masayuki Takeda. Online rank aggregation.
In ACML, 2012.
[20] Jun Zhang. Binary choice, subset choice, random utility, and ranking: A unified perspective
using the permutahedron. Journal of Mathematical Psychology, 48:107?134, 2004.
[21] G?unter M. Ziegler. Lectures on Polytopes. Springer, 1995.
9
| 5244 |@word trial:17 norm:7 c0:1 bf:10 d2:2 decomposition:17 q1:1 incurs:3 kijima:2 contains:3 selecting:3 recovered:1 incidence:1 assigning:1 must:1 additive:2 update:4 prohibitive:1 warmuth:5 manfred:4 characterization:1 boosting:1 location:7 zhang:2 mathematical:2 bowman:3 supply:6 ouput:1 qij:5 incorrect:1 doubly:3 manner:1 x0:4 expected:4 discounted:1 cardinality:3 increasing:1 conv:34 iisc:1 provided:1 notation:1 moreover:3 what:1 unified:1 hindsight:4 finding:2 guarantee:1 commodity:4 ti:4 exactly:2 k2:2 rm:3 x0m:2 dima:1 unit:1 before:2 ak:3 analyzing:1 path:3 ap:1 lugosi:2 plus:1 studied:6 specifying:6 suggests:1 bi:1 unique:2 acknowledgment:1 transporting:2 practice:1 regret:29 implement:3 procedure:1 kohei:3 projection:14 matching:3 akij:1 get:4 onto:1 interior:1 scheduling:1 context:1 applying:6 restriction:3 map:5 deterministic:1 transportation:15 reviewer:1 omd:18 convex:21 helmbold:2 m2:1 subgraphs:1 hurt:1 pt:8 play:1 exact:1 programming:2 us:1 element:10 rajkumar:1 satisfying:1 osmd:2 particularly:1 trend:1 predicts:1 kxk1:2 role:5 ft:5 rankm:5 worst:1 connected:1 ordering:2 pd:2 complexity:1 incur:2 bipartite:3 easily:1 represented:5 various:6 describe:2 whose:2 jean:1 widely:3 g1:1 online:44 sequence:4 differentiable:1 mg:2 maximal:1 zm:5 relevant:2 nagano:1 description:1 qr:1 takeda:2 convergence:1 perfect:2 converges:1 adam:1 object:2 depending:1 completion:2 ij:5 qt:1 sa:1 job:6 met:1 direction:1 stochastic:4 hull:1 generalization:3 preliminary:2 anonymous:1 strictly:1 considered:5 ground:2 mapping:13 predict:1 algorithmic:1 bj:3 combinatorial:30 ziegler:1 saw:3 arun:2 shota:2 suehiro:5 clearly:2 always:1 rather:1 kalai:3 pn:9 minc:1 rank:4 polyhedron:1 sense:1 helpful:2 dependent:1 dcg:1 abor:2 bandit:3 selects:9 overall:2 dual:2 colt:1 denoted:1 special:8 ernet:1 initialize:2 santosh:1 having:1 sampling:1 brualdi:1 broad:1 simplex:1 others:1 bangalore:1 primarily:1 bg1:1 richard:1 randomly:2 dg:1 divergence:3 ve:10 argminx:2 maintain:1 microsoft:1 ab:2 interest:4 possibility:1 wouter:1 introduces:1 extreme:6 birkhoff:7 yielding:1 bregman:6 edge:3 partial:1 unification:2 injective:2 unter:1 tree:10 euclidean:1 masayuki:2 desired:1 minimal:1 fenchel:1 column:2 earlier:2 boolean:11 facet:1 ar:1 shuji:2 yoav:1 assignment:2 cost:15 vertex:10 subset:8 entry:2 gr:1 graphic:1 kn:1 perturbed:3 supx:1 thanks:2 randomized:2 siam:1 off:2 na:10 again:5 central:1 squared:3 successively:1 containing:1 cesa:1 yasutake:2 return:2 automation:1 includes:4 int:5 lakshminarayanan:1 kzk2:1 audibert:2 bg:1 ranking:4 performed:7 multiplicative:2 hedging:1 closed:5 francis:1 start:1 aggregation:2 maintains:2 qof:1 bayes:1 minimize:5 yves:1 qk:4 characteristic:2 efficiently:11 largely:1 yield:3 generalize:2 emphasizes:3 none:1 whenever:1 involved:1 isaac:1 associated:10 recovers:1 gain:1 recall:3 schedule:1 actually:1 follow:3 formulation:1 strongly:4 just:1 until:3 correlation:1 receives:11 scheduled:1 k22:1 verify:2 contain:2 concept:1 noted:1 unnormalized:17 criterion:1 tep:1 bijective:3 outline:3 complete:2 demonstrate:1 theoretic:1 performs:2 l1:1 recently:3 common:2 koolen:5 cossock:1 refer:2 cambridge:1 ai:4 rd:30 pm:3 mathematics:2 submodular:18 permutahedron:4 hatano:3 longer:1 sinkhorn:2 etc:2 base:12 own:1 recent:4 perspective:2 termed:2 certain:3 binary:2 xe:2 seen:1 additional:1 period:1 full:1 multiple:1 match:1 offer:1 bach:1 retrieval:2 a1:1 prediction:1 involving:2 variant:1 n5:2 essentially:1 iteration:1 sometimes:1 kernel:1 agarwal:1 achieved:1 receive:2 background:2 
fellowship:1 comment:1 undirected:1 call:2 integer:3 easy:3 matroid:12 psychology:1 knowing:1 pca:1 utility:1 remark:1 generally:2 tij:2 extensively:1 shivani:2 eiji:4 schapire:3 specifies:1 supplied:1 exist:1 xij:2 arising:3 broadly:1 discrete:1 shall:3 waiting:2 key:1 terminology:1 nevertheless:1 drawn:1 takimoto:4 verified:1 graph:4 monotone:5 year:2 sum:2 dst:1 decide:1 draw:1 decision:40 appendix:6 eral:1 bound:20 ct:8 correspondence:1 oracle:3 constraint:2 n3:1 min:1 vempala:3 department:1 ailon:2 structured:1 combination:3 legendre:5 conjugate:1 spearman:1 making:5 taken:3 computationally:1 ln:20 generalizes:6 operation:2 appropriate:4 denotes:1 remaining:1 maintaining:1 k1:7 hypercube:1 forum:1 quantity:2 rt:11 dependence:1 said:1 unclear:1 gradient:1 otschel:1 polytope:49 reinelt:1 spanning:9 assuming:1 setup:1 mostly:1 robert:1 kzk1:1 negative:13 design:2 implementation:2 ebastien:1 perform:1 allowing:1 upper:1 recommender:1 contributed:1 bianchi:1 finite:4 implementable:1 descent:8 acml:1 rn:10 arbitrary:1 david:2 namely:1 polytopes:26 alternately:1 able:2 below:4 encompasses:2 including:3 max:3 suitable:10 natural:5 force:1 warm:1 predicting:3 solvable:1 kivinen:1 mn:3 kxk22:1 technology:1 jun:1 n6:2 fpl:12 sn:6 nir:2 literature:2 l2:4 nicol:1 relative:11 freund:3 loss:53 lecture:2 permutation:30 sublinear:2 interesting:1 acyclic:1 foundation:1 incurred:5 x01:2 pij:1 balancing:2 row:2 repeat:1 last:1 supported:1 infeasible:1 offline:1 allow:1 india:2 institute:1 taking:1 tauman:1 benefit:1 kzk:1 boundary:1 xn:4 valid:1 cumulative:1 feedback:1 dimension:1 commonly:1 projected:1 subtracts:1 ec:2 transaction:1 approximate:1 instantiation:1 leader:3 xi:5 demanded:1 kiyohito:1 forest:1 csa:1 poly:2 necessarily:1 pk:2 bounding:1 arise:1 n2:5 x1:10 kuzmin:1 tong:1 position:8 indo:1 exponential:1 lie:1 theorem:9 specific:3 xt:9 alt:1 sequential:1 effectively:3 corr:2 mirror:7 phd:1 te:1 illustrates:1 demand:6 subtract:1 entropy:18 logarithmic:1 simply:3 bubeck:2 applies:4 springer:1 satisfies:1 hedge:20 goal:5 viewed:4 identity:1 jyrki:1 change:1 specifically:7 total:4 called:3 m3:1 support:2 arises:1 indian:1 princeton:1 d1:3 |
4,688 | 5,245 | Model-based Reinforcement Learning
and the Eluder Dimension
Ian Osband
Stanford University
iosband@stanford.edu
Benjamin Van Roy
Stanford University
bvr@stanford.edu
Abstract
We consider the problem of learning to optimize an unknown Markov decision process (MDP). We show that, if the MDP can be parameterized within
some known function class, we can obtain regret bounds that scale with the
dimensionality, rather than cardinality,
of the system. We characterize this
?
? dK dE T ) where T is time elapsed, dK is the
dependence explicitly as O(
Kolmogorov dimension and dE is the eluder dimension. These represent
the first unified regret bounds for model-based reinforcement learning and
provide state of the art guarantees in several important settings. Moreover, we present a simple and computationally efficient algorithm posterior
sampling for reinforcement learning (PSRL) that satisfies these bounds.
1
Introduction
We consider the reinforcement learning (RL) problem of optimizing rewards in an unknown
Markov decision process (MDP) [1]. In this setting an agent makes sequential decisions
within its enironment to maximize its cumulative rewards through time. We model the
environment as an MDP, however, unlike the standard MDP planning problem the agent
is unsure of the underlying reward and transition functions. Through exploring poorlyunderstood policies, an agent may improve its understanding of its environment but it may
improve its short term rewards by exploiting its existing knowledge [2, 3].
The focus of the literature in this area has been to develop algorithms whose performance
will be close to optimal in some sense. There are numerous criteria for statistical and
computational efficiency that might be considered. Some of the most common include PAC
(Probably Approximately Correct) [4], MB (Mistake Bound) [5], KWIK (Knows What It
Knows) [6] and regret [7]. We will focus our attention upon regret, or the shortfall in the
agent?s expected rewards compared to that of the optimal policy. We believe this is a natural
criteria for performance during learning, although these concepts are closely linked. A good
overview of various efficiency guarantees is given in section 3 of Li et al. [6].
Broadly, algorithms for RL can be separated as either model-based, which build a generative
model of the environment, or model-free which do not. Algorithms of both type have been
developed to provide PAC-MDP bounds polynomial in the number of states S and actions
A [8, 9, 10]. However, model-free approaches can struggle to plan efficient exploration. The only near-optimal regret bounds to time $T$, of $\tilde{O}(S\sqrt{AT})$, have been attained by model-based algorithms [7, 11]. But even these bounds grow with the cardinality of the state and action spaces, which may be extremely large or even infinite. Worse still, there is a lower bound $\Omega(\sqrt{SAT})$ for the expected regret in an arbitrary MDP [7].
In special cases, where the reward or transition function is known to belong to a certain
functional family, existing algorithms can exploit the structure to move beyond this "tabula rasa" (where nothing is assumed beyond S and A) lower bound. The most widely-studied
1
parameterization is the degenerate MDP with no transitions, the multi-armed bandit [12, 13, 14]. Another common assumption is that the transition function is linear in states and actions. Papers here establish regret bounds $\tilde{O}(\sqrt{T})$ for linear quadratic control [16], but with constants that grow exponentially with dimension. Later works remove this exponential dependence, but only under significant sparsity assumptions [17]. The most general previous analysis considers rewards and transitions that are $\alpha$-H\"older in a $d$-dimensional space to establish regret bounds $\tilde{O}(T^{(2d+\alpha)/(2d+2\alpha)})$ [18]. However, the proposed algorithm UCCRL
is not computationally tractable and the bounds approach linearity in many settings.
In this paper we analyse the simple and intuitive algorithm posterior sampling for reinforcement learning (PSRL) [20, 21, 11]. PSRL was initially introduced as a heuristic method [21],
but has since been shown to satisfy state of the art regret bounds in finite MDPs [11] and
also exploit the structure of factored MDPs [15]. We show that this same algorithm satisfies
general regret bounds that depend upon the dimensionality, rather than the cardinality, of
the underlying reward and transition function classes. To characterize the complexity of this
learning problem we extend the definition of the eluder dimension, previously introduced for
bandits [19], to capture the complexity of the reinforcement learning problem. Our results
provide a unified analysis of model-based reinforcement learning in general and provide new
state of the art bounds in several important problem settings.
2
Problem formulation
We consider the problem of learning to optimize a random finite-horizon MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$ in repeated finite episodes of interaction. $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $R^M(s,a)$ is the reward distribution over $\mathbb{R}$ and $P^M(\cdot|s,a)$ is the transition distribution over $\mathcal{S}$ when selecting action $a$ in state $s$, $\tau$ is the time horizon, and $\rho$ the initial state distribution. All random variables we will consider are on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$.
A policy $\mu$ is a function mapping each state $s \in \mathcal{S}$ and $i = 1, \ldots, \tau$ to an action $a \in \mathcal{A}$. For each MDP $M$ and policy $\mu$, we define a value function
$$V^M_{\mu,i}(s) := \mathbb{E}_{M,\mu}\Big[\sum_{j=i}^{\tau} \bar{r}^M(s_j, a_j) \,\Big|\, s_i = s\Big] \qquad (1)$$
where $\bar{r}^M(s,a) := \mathbb{E}[r \,|\, r \sim R^M(s,a)]$ and the subscripts of the expectation operator indicate that $a_j = \mu(s_j, j)$ and $s_{j+1} \sim P^M(\cdot|s_j, a_j)$ for $j = i, \ldots, \tau$. A policy $\mu$ is said to be optimal for MDP $M$ if $V^M_{\mu,i}(s) = \max_{\mu'} V^M_{\mu',i}(s)$ for all $s \in \mathcal{S}$ and $i = 1, \ldots, \tau$. We will associate with each MDP $M$ a policy $\mu^M$ that is optimal for $M$.
We require that the state space $\mathcal{S}$ is a subset of $\mathbb{R}^d$ for some finite $d$ with a $\|\cdot\|_2$-norm induced by an inner product. These results actually extend to general Hilbert spaces, but we will not deal with that in this paper. This allows us to decompose the transition function as a mean value in $\mathcal{S}$ plus additive noise: $s' \sim P^M(\cdot|s,a) \implies s' = p^M(s,a) + \epsilon_P$. At first this may seem to exclude discrete MDPs with $S$ states from our analysis. However, we can represent the discrete state as a probability vector $s_t \in \mathcal{S} = [0,1]^S \subset \mathbb{R}^S$ with a single active component equal to 1 and 0 otherwise. In fact, the notational convention that $\mathcal{S} \subseteq \mathbb{R}^d$ should not impose a great restriction for most practical settings.
For any distribution $\Phi$ over $\mathcal{S}$, we define the one-step future value function $U$ to be the expected value of the optimal policy with the next state distributed according to $\Phi$:
$$U^M_i(\Phi) := \mathbb{E}_{M,\mu^M}\big[V^M_{\mu^M,i+1}(s) \,\big|\, s \sim \Phi\big]. \qquad (2)$$
One natural regularity condition for learning is that the future values of similar distributions should be similar. We examine this idea through the Lipschitz constant on the means of these state distributions. We write $E(\Phi) := \mathbb{E}[s \,|\, s \sim \Phi] \in \mathcal{S}$ for the mean of a distribution $\Phi$ and express the Lipschitz continuity for $U^M_i$ with respect to the $\|\cdot\|_2$-norm of the mean:
$$|U^M_i(\Phi) - U^M_i(\tilde{\Phi})| \le K^M_i(\mathcal{D})\,\|E(\Phi) - E(\tilde{\Phi})\|_2 \quad \text{for all } \Phi, \tilde{\Phi} \in \mathcal{D}. \qquad (3)$$
We define $K^M(\mathcal{D}) := \max_i K^M_i(\mathcal{D})$ to be a global Lipschitz constant for the future value function with state distributions from $\mathcal{D}$. Where appropriate, we will condense our notation to write $K^M := K^M(\mathcal{D}(M))$, where $\mathcal{D}(M) := \{P^M(\cdot|s,a) \,|\, s \in \mathcal{S}, a \in \mathcal{A}\}$ is the set of all possible one-step state distributions under the MDP $M$.
The reinforcement learning agent interacts with the MDP over episodes that begin at times $t_k = (k-1)\tau + 1$, $k = 1, 2, \ldots$. Let $H_t = (s_1, a_1, r_1, \ldots, s_{t-1}, a_{t-1}, r_{t-1})$ denote the history of observations made prior to time $t$. A reinforcement learning algorithm is a deterministic sequence $\{\pi_k \,|\, k = 1, 2, \ldots\}$ of functions, each mapping $H_{t_k}$ to a probability distribution $\pi_k(H_{t_k})$ over policies which the agent will employ during the $k$th episode. We define the regret incurred by a reinforcement learning algorithm $\pi$ up to time $T$ to be
$$\text{Regret}(T, \pi, M^*) := \sum_{k=1}^{\lceil T/\tau \rceil} \Delta_k,$$
where $\Delta_k$ denotes regret over the $k$th episode, defined with respect to the MDP $M^*$ by
$$\Delta_k := \sum_{s \in \mathcal{S}} \rho(s)\big(V^{M^*}_{\mu^*,1} - V^{M^*}_{\mu_k,1}\big)(s)$$
with $\mu^* = \mu^{M^*}$ and $\mu_k \sim \pi_k(H_{t_k})$. Note that regret is not deterministic since it can depend on the random MDP $M^*$, the algorithm's internal random sampling and, through the history $H_{t_k}$, on previous random transitions and random rewards. We will assess and compare algorithm performance in terms of regret and its expectation.

3
Main results
We now review the algorithm PSRL, an adaptation of Thompson sampling [20] to reinforcement learning. PSRL was first proposed by Strens [21] and later was shown to satisfy
efficient regret bounds in finite MDPs [11]. The algorithm begins with a prior distribution
over MDPs. At the start of episode k, PSRL samples an MDP Mk from the posterior. PSRL
then follows the policy ?k = ?Mk which is optimal for this sampled MDP during episode k.
Algorithm 1 Posterior Sampling for Reinforcement Learning (PSRL)
1: Input: Prior distribution $f$ for $M^*$, $t = 1$
2: for episodes $k = 1, 2, \ldots$ do
3:   sample $M_k \sim f(\cdot|H_t)$
4:   compute $\mu_k = \mu^{M_k}$
5:   for timesteps $j = 1, \ldots, \tau$ do
6:     apply $a_t \sim \mu_k(s_t, j)$
7:     observe $r_t$ and $s_{t+1}$
8:     advance $t = t + 1$
9:   end for
10: end for
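To make this loop concrete, the following is a minimal Python sketch of PSRL for a small finite-horizon MDP, assuming known mean rewards and a per-(s,a) Dirichlet posterior over transitions. All names (S, A, tau, plan, true_step) and the placeholder dynamics are illustrative assumptions, not part of the paper.

import numpy as np

S, A, tau = 5, 2, 10                      # states, actions, horizon (assumptions)
rng = np.random.default_rng(0)
R = rng.uniform(size=(S, A))              # known mean rewards (assumption)
counts = np.ones((S, A, S))               # Dirichlet(1,...,1) prior pseudo-counts

def plan(P, R, tau):
    """Finite-horizon value iteration; returns a greedy policy per timestep."""
    V = np.zeros(S)
    pi = np.zeros((tau, S), dtype=int)
    for i in reversed(range(tau)):
        Q = R + P @ V                     # Q[s,a] = R[s,a] + sum_s' P[s,a,s'] V[s']
        pi[i] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi

def true_step(s, a):                      # unknown environment: placeholder dynamics
    return int(rng.integers(S))

for k in range(100):                                       # episodes
    P_k = np.apply_along_axis(rng.dirichlet, 2, counts)    # step 3: sample M_k ~ f(.|H_t)
    pi_k = plan(P_k, R, tau)                               # step 4: mu_k = mu^{M_k}
    s = 0                                                  # fixed initial state
    for j in range(tau):                                   # steps 5-9: act and observe
        a = pi_k[j, s]
        s_next = true_step(s, a)
        counts[s, a, s_next] += 1.0                        # conjugate posterior update
        s = s_next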
To state our results we first introduce some notation. For any set $\mathcal{X}$ and $\mathcal{Y} \subseteq \mathbb{R}^d$ for $d$ finite, let $\mathcal{P}^{C,\sigma}_{\mathcal{X},\mathcal{Y}}$ be the family of distributions from $\mathcal{X}$ to $\mathcal{Y}$ with mean $\|\cdot\|_2$-bounded in $[0, C]$ and additive $\sigma$-sub-Gaussian noise. We let $N(\mathcal{F}, \alpha, \|\cdot\|_2)$ be the $\alpha$-covering number of $\mathcal{F}$ with respect to the $\|\cdot\|_2$-norm and write $n_{\mathcal{F}} = \log(8N(\mathcal{F}, 1/T^2, \|\cdot\|_2)T)$ for brevity. Finally we write $d_E(\mathcal{F}) = \dim_E(\mathcal{F}, T^{-1})$ for the eluder dimension of $\mathcal{F}$ at precision $T^{-1}$, a notion of dimension specialized to sequential measurements described in Section 4.
Our main result, Theorem 1, bounds the expected regret of PSRL at any time T .
Theorem 1 (Expected regret for PSRL in parameterized MDPs).
Fix a state space $\mathcal{S}$, action space $\mathcal{A}$, and function families $\mathcal{R} \subseteq \mathcal{P}^{C_R,\sigma_R}_{\mathcal{S}\times\mathcal{A},\mathbb{R}}$ and $\mathcal{P} \subseteq \mathcal{P}^{C_P,\sigma_P}_{\mathcal{S}\times\mathcal{A},\mathcal{S}}$ for any $C_R, C_P, \sigma_R, \sigma_P > 0$. Let $M^*$ be an MDP with state space $\mathcal{S}$, action space $\mathcal{A}$, rewards $R^* \in \mathcal{R}$ and transitions $P^* \in \mathcal{P}$. If $f$ is the distribution of $M^*$ and $K^* = K^{M^*}$ is a global Lipschitz constant for the future value function as per (3), then:
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] \le C_R + C_P + \tilde{D}(\mathcal{R}) + \mathbb{E}[K^*]\Big(1 + \frac{1}{T-1}\Big)\tilde{D}(\mathcal{P}) \qquad (4)$$
where for $\mathcal{F}$ equal to either $\mathcal{R}$ or $\mathcal{P}$ we will use the shorthand:
$$\tilde{D}(\mathcal{F}) := 1 + \tau C_F d_E(\mathcal{F}) + 8\sqrt{d_E(\mathcal{F})\big(4C_F + 2\sigma_F^2 \log(32T^3)\big)} + 8\sqrt{2\sigma_F^2\, n_{\mathcal{F}}\, d_E(\mathcal{F})\, T}.$$
Theorem 1 is a general result that applies to almost all RL settings of interest. In particular,
we note that any bounded function is sub-Gaussian. To clarify the asymptotics of this bound we use another classical measure of dimensionality.
Definition 1. The Kolmogorov dimension of a function class $\mathcal{F}$ is given by:
$$\dim_K(\mathcal{F}) := \limsup_{\alpha \downarrow 0} \frac{\log N(\mathcal{F}, \alpha, \|\cdot\|_2)}{\log(1/\alpha)}.$$
Using Definition 1 in Theorem 1 we can obtain our Corollary.
Corollary 1 (Asymptotic regret bounds for PSRL in parameterized MDPs).
Under the assumptions of Theorem 1, and writing $d_K(\mathcal{F}) := \dim_K(\mathcal{F})$:
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] = \tilde{O}\Big(\sigma_R \sqrt{d_K(\mathcal{R})\, d_E(\mathcal{R})\, T} + \mathbb{E}[K^*]\,\sigma_P \sqrt{d_K(\mathcal{P})\, d_E(\mathcal{P})\, T}\Big) \qquad (5)$$
where $\tilde{O}(\cdot)$ ignores terms logarithmic in $T$.
In Section 4 we provide bounds on the eluder dimension of several function classes. These
lead to explicit regret bounds in a number of important domains such as discrete MDPs,
linear-quadratic control and even generalized linear systems. In all of these cases the eluder
dimension scales comparably with more traditional notions of dimensionality. For clarity,
we present bounds in the case of linear-quadratic control.
Corollary 2 (Asymptotic regret bounds for PSRL in bounded linear quadratic systems).
Let $M^*$ be an $n$-dimensional linear-quadratic system with $\sigma$-sub-Gaussian noise. If the state is $\|\cdot\|_2$-bounded by $C$ and $f$ is the distribution of $M^*$, then:
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] = \tilde{O}\big(\sigma C \lambda_1 n^2 \sqrt{T}\big). \qquad (6)$$
Here $\lambda_1$ is the largest eigenvalue of the matrix $Q$ given as the solution of the Riccati equations for the unconstrained optimal value function $V(s) = -s^T Q s$ [22].
Proof. We simply apply the results for the eluder dimension in Section 4 to Corollary 1 and upper bound the Lipschitz constant of the constrained LQR by $2C\lambda_1$; see Appendix D.
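As a worked illustration of the quantity $\lambda_1$ in (6), the sketch below solves the discrete algebraic Riccati equation for a small LQ system using scipy and reads off the largest eigenvalue of $Q$; the system matrices here are illustrative assumptions.

import numpy as np
from scipy.linalg import solve_discrete_are

n, m = 3, 2
A = 0.9 * np.eye(n)                    # state dynamics (assumption)
B = np.ones((n, m)) / m                # control matrix (assumption)
Qc, Rc = np.eye(n), np.eye(m)          # quadratic state / control costs

Q = solve_discrete_are(A, B, Qc, Rc)   # Riccati solution, so V(s) = -s^T Q s
lam1 = np.linalg.eigvalsh(Q).max()     # the lambda_1 appearing in (6)
print(f"lambda_1 = {lam1:.3f}")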
Algorithms based upon posterior sampling are intimately linked to those based upon optimism [14]. In Appendix E we outline an optimistic variant that would attain similar regret
bounds, but with high probability in a frequentist sense. Unfortunately this algorithm remains
computationally intractable even when presented with an approximate MDP planner. Further, we believe that PSRL will generally be more statistically efficient than an optimistic
variant with similar regret bounds since the algorithm is not affected by loose analysis [11].
4
Eluder dimension
To quantify the complexity of learning in a potentially infinite MDP, we extend the existing
notion of eluder dimension for real-valued functions [19] to vector-valued functions. For any $\mathcal{G} \subseteq \mathcal{P}^{C,\sigma}_{\mathcal{X},\mathcal{Y}}$ we define the set of mean functions $\mathcal{F} = E[\mathcal{G}] := \{f \,|\, f = E[G] \text{ for } G \in \mathcal{G}\}$. If we consider sequential observations $y_i \sim G^*(x_i)$ we can equivalently write them as $y_i = f^*(x_i) + \epsilon_i$ for some $f^*(x_i) = \mathbb{E}[y \,|\, y \sim G^*(x_i)]$ and $\epsilon_i$ zero-mean noise. Intuitively, the eluder dimension of $\mathcal{F}$ is the length $d$ of the longest possible sequence $x_1, \ldots, x_d$ such that for all $i$, knowing the function values of $f(x_1), \ldots, f(x_i)$ will not reveal $f(x_{i+1})$.
Definition 2 ($(\mathcal{F}, \epsilon)$-dependence).
We will say that $x \in \mathcal{X}$ is $(\mathcal{F}, \epsilon)$-dependent on $\{x_1, \ldots, x_n\} \subseteq \mathcal{X}$ if
$$\forall f, \tilde{f} \in \mathcal{F}, \quad \sum_{i=1}^{n} \|f(x_i) - \tilde{f}(x_i)\|_2^2 \le \epsilon^2 \implies \|f(x) - \tilde{f}(x)\|_2 \le \epsilon.$$
$x \in \mathcal{X}$ is $(\epsilon, \mathcal{F})$-independent of $\{x_1, \ldots, x_n\}$ iff it does not satisfy the definition for dependence.
Definition 3 (Eluder Dimension).
The eluder dimension $\dim_E(\mathcal{F}, \epsilon)$ is the length of the longest possible sequence of elements in $\mathcal{X}$ such that for some $\epsilon' \ge \epsilon$ every element is $(\mathcal{F}, \epsilon')$-independent of its predecessors.
Traditional notions from supervised learning, such as the VC dimension, are not sufficient to
characterize the complexity of reinforcement learning. In fact, a family learnable in constant
time for supervised learning may require arbitrarily long to learn to control well [19]. The
eluder dimension mirrors the linear dimension for vector spaces, which is the length of the
longest sequence such that each element is linearly independent of its predecessors, but
allows for nonlinear and approximate dependencies. We overload our notation for $\mathcal{G} \subseteq \mathcal{P}^{C,\sigma}_{\mathcal{X},\mathcal{Y}}$ and write $\dim_E(\mathcal{G}, \epsilon) := \dim_E(E[\mathcal{G}], \epsilon)$, which should be clear from the context.
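The definitions above can be checked directly on toy problems. The following Python sketch grows a sequence greedily under Definition 2 for a finite, scalar-valued class; greedy growth only lower-bounds the eluder dimension, and the input set and function class are illustrative assumptions.

import itertools
import numpy as np

X = np.linspace(-1.0, 1.0, 5)                              # finite input set (assumption)
F = [lambda x, a=a: a * x for a in (-1.0, 0.0, 1.0)]       # toy linear class (assumption)

def independent(x, seq, eps):
    # x is (F, eps)-independent of seq if some pair f, g agrees on seq
    # (cumulative squared gap <= eps^2) yet differs by more than eps at x.
    for f, g in itertools.combinations(F, 2):
        gap_seq = sum((f(xi) - g(xi)) ** 2 for xi in seq)
        if gap_seq <= eps ** 2 and abs(f(x) - g(x)) > eps:
            return True
    return False

def greedy_eluder(eps):
    # Grows one sequence greedily; this only lower-bounds dim_E(F, eps),
    # since the definition asks for the longest such sequence.
    seq = []
    while True:
        nxt = next((x for x in X if independent(x, seq, eps)), None)
        if nxt is None:
            return len(seq)
        seq.append(nxt)

print(greedy_eluder(eps=0.1))   # one observation pins down this 1-d class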
4.1
Eluder dimension for specific function classes
Theorem 1 gives regret bounds in terms of the eluder dimension, which is well-defined for any $\mathcal{F}, \epsilon$. However, for any given $\mathcal{F}, \epsilon$, actually calculating the eluder dimension may take
some additional analysis. We now provide bounds on the eluder dimension for some common
function classes in a similar approach to earlier work for real-valued functions [14]. These
proofs are available in Appendix C.
Proposition 1 (Eluder dimension for finite X ).
A counting argument shows that for $|\mathcal{X}| = X$ finite, any $\epsilon > 0$ and any function class $\mathcal{F}$:
$$\dim_E(\mathcal{F}, \epsilon) \le X.$$
This bound is tight in the case of independent measurements.
Proposition 2 (Eluder dimension for linear functions).
Let $\mathcal{F} = \{f \,|\, f(x) = \theta\phi(x) \text{ for } \theta \in \mathbb{R}^{n\times p}, \phi \in \mathbb{R}^p, \|\theta\|_2 \le C_\theta, \|\phi\|_2 \le C_\phi\}$; then for all $\mathcal{X}$:
$$\dim_E(\mathcal{F}, \epsilon) \le p(4n-1)\,\frac{e}{e-1}\,\log\bigg[\Big(1 + \Big(\frac{2C_\theta C_\phi}{\epsilon}\Big)^2\Big)(4n-1)\bigg] + 1 = \tilde{O}(np).$$
Proposition 3 (Eluder dimension for quadratic functions).
Let $\mathcal{F} = \{f \,|\, f(x) = \phi(x)^T \theta \phi(x) \text{ for } \theta \in \mathbb{R}^{p\times p}, \phi \in \mathbb{R}^p, \|\theta\|_2 \le C_\theta, \|\phi\|_2 \le C_\phi\}$; then for all $\mathcal{X}$:
$$\dim_E(\mathcal{F}, \epsilon) \le p(4p-1)\,\frac{e}{e-1}\,\log\bigg[\Big(1 + \Big(\frac{2pC_\phi^2 C_\theta}{\epsilon}\Big)^2\Big)(4p-1)\bigg] + 1 = \tilde{O}(p^2).$$
Proposition 4 (Eluder dimension for generalized linear functions).
Let $g(\cdot)$ be a component-wise independent function on $\mathbb{R}^n$ with derivative in each component bounded in $[\underline{h}, \bar{h}]$ with $\underline{h} > 0$. Define $r = \bar{h}/\underline{h} > 1$ to be the condition number. If $\mathcal{F} = \{f \,|\, f(x) = g(\theta\phi(x)) \text{ for } \theta \in \mathbb{R}^{n\times p}, \phi \in \mathbb{R}^p, \|\theta\|_2 \le C_\theta, \|\phi\|_2 \le C_\phi\}$, then for any $\mathcal{X}$:
$$\dim_E(\mathcal{F}, \epsilon) \le p\big(r^2(4n-2)+1\big)\,\frac{e}{e-1}\,\log\bigg[\Big(1 + \Big(\frac{2C_\theta C_\phi}{\epsilon}\Big)^2\Big)\big(r^2(4n-2)+1\big)\bigg] + 1 = \tilde{O}(r^2 np).$$

5
Confidence sets
We now follow the standard argument that relates the regret of an optimistic or posterior sampling algorithm to the construction of confidence sets [7, 11]. We will use the eluder dimension to build confidence sets for the reward and transition which contain the true functions with high probability, and then bound the regret of our algorithm by the maximum deviation within the confidence sets. For observations from $f^* \in \mathcal{F}$ we will center the sets around the least squares estimate $\hat{f}^{LS}_t \in \arg\min_{f \in \mathcal{F}} L_{2,t}(f)$, where $L_{2,t}(f) := \sum_{i=1}^{t-1} \|f(x_i) - y_i\|_2^2$ is the cumulative squared prediction error. The confidence sets are defined $\mathcal{F}_t = \mathcal{F}_t(\beta_t) := \{f \in \mathcal{F} \,|\, \|f - \hat{f}^{LS}_t\|_{2,E_t} \le \sqrt{\beta_t}\}$, where $\beta_t$ controls the growth of the confidence set and the empirical 2-norm is defined $\|g\|^2_{2,E_t} := \sum_{i=1}^{t-1} \|g(x_i)\|_2^2$.
For $\mathcal{F} \subseteq \mathcal{P}^{C,\sigma}_{\mathcal{X},\mathcal{Y}}$, we define the distinguished control parameter:
$$\beta^*_t(\mathcal{F}, \delta, \alpha) := 8\sigma^2 \log\big(N(\mathcal{F}, \alpha, \|\cdot\|_2)/\delta\big) + 2\alpha t\Big(8C + \sqrt{8\sigma^2 \log(4t^2/\delta)}\Big). \qquad (7)$$
This leads to confidence sets which contain the true function with high probability.
Proposition 5 (Confidence sets with high probability).
For all $\delta > 0$ and $\alpha > 0$, if $\mathcal{F}_t = \mathcal{F}_t(\beta^*_t(\mathcal{F}, \delta, \alpha))$ for all $t \in \mathbb{N}$, then:
$$\mathbb{P}\Big(f^* \in \bigcap_{t=1}^{\infty} \mathcal{F}_t\Big) \ge 1 - 2\delta.$$
Proof. We combine standard martingale concentrations with a discretization scheme. The
argument is essentially the same as Proposition 6 in [14], but extends statements about $\mathbb{R}$ to vector-valued functions. A full derivation is available in Appendix A.
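A minimal numerical sketch of this construction for a finite scalar class: compute the least-squares estimate, then keep every function within empirical 2-norm $\sqrt{\beta_t}$ of it. Here beta_t is treated as a free input rather than the exact $\beta^*_t$ of (7), and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(-2.0, 2.0, 41)              # finite class f_theta(x) = theta * x
f = lambda theta, x: theta * x

theta_star, sigma = 0.7, 0.1
xs = rng.uniform(-1.0, 1.0, size=50)             # inputs x_1, ..., x_{t-1}
ys = f(theta_star, xs) + sigma * rng.normal(size=xs.shape)

L2 = np.array([np.sum((f(th, xs) - ys) ** 2) for th in thetas])
theta_hat = thetas[L2.argmin()]                  # least-squares estimate f_hat_t^LS

def in_Ft(theta, beta_t):
    # empirical 2-norm ||f_theta - f_hat||_{2,E_t}^2 against the threshold beta_t
    return np.sum((f(theta, xs) - f(theta_hat, xs)) ** 2) <= beta_t

Ft = [th for th in thetas if in_Ft(th, beta_t=2.0)]
print(theta_hat, (min(Ft), max(Ft)))             # theta* should lie inside w.h.p.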
5.1
Bounding the sum of set widths
We now bound the deviation from $f^*$ by the maximum deviation within the confidence set.
Definition 4 (Set widths).
For any set of functions $\mathcal{F}$ we define the width of the set at $x$ to be the maximum $L_2$ deviation between any two members of $\mathcal{F}$ evaluated at $x$:
$$w_{\mathcal{F}}(x) := \sup_{\bar{f}, \underline{f} \in \mathcal{F}} \|\bar{f}(x) - \underline{f}(x)\|_2.$$
We can bound the number of large widths in terms of the eluder dimension.
Lemma 1 (Bounding the number of large widths).
If $\{\beta_t > 0 \,|\, t \in \mathbb{N}\}$ is a nondecreasing sequence with $\mathcal{F}_t = \mathcal{F}_t(\beta_t)$, then
$$\sum_{k=1}^{m} \sum_{i=1}^{\tau} \mathbb{1}\{w_{\mathcal{F}_{t_k}}(x_{t_k+i}) > \epsilon\} \le \Big(\frac{4\beta_T}{\epsilon^2} + \tau\Big)\dim_E(\mathcal{F}, \epsilon).$$
Proof. This result follows from proposition 8 in [14] but with a small adjustment to account
for episodes. A full proof is given in Appendix B.
We now use Lemma 1 to control the cumulative deviation through time.
Proposition 6 (Bounding the sum of widths).
If $\{\beta_t > 0 \,|\, t \in \mathbb{N}\}$ is nondecreasing with $\mathcal{F}_t = \mathcal{F}_t(\beta_t)$ and $\|f\|_2 \le C$ for all $f \in \mathcal{F}$, then:
$$\sum_{k=1}^{m} \sum_{i=1}^{\tau} w_{\mathcal{F}_{t_k}}(x_{t_k+i}) \le 1 + \tau C \dim_E(\mathcal{F}, T^{-1}) + 4\sqrt{\beta_T \dim_E(\mathcal{F}, T^{-1})\, T}. \qquad (8)$$
Proof. Once again we follow the analysis of Russo [14] and streamline notation by letting $w_t = w_{\mathcal{F}_{t_k}}(x_{t_k+i})$ and $d = \dim_E(\mathcal{F}, T^{-1})$. Reordering the sequence $(w_1, \ldots, w_T) \to (w_{i_1}, \ldots, w_{i_T})$ such that $w_{i_1} \ge \cdots \ge w_{i_T}$, we have that:
$$\sum_{k=1}^{m} \sum_{i=1}^{\tau} w_{\mathcal{F}_{t_k}}(x_{t_k+i}) = \sum_{t=1}^{T} w_{i_t} \le 1 + \sum_{t=1}^{T} w_{i_t}\mathbb{1}\{w_{i_t} \ge T^{-1}\}.$$
By the reordering we know that $w_{i_t} > \epsilon$ means that $\sum_{k=1}^{m}\sum_{i=1}^{\tau} \mathbb{1}\{w_{\mathcal{F}_{t_k}}(x_{t_k+i}) > \epsilon\} \ge t$. From Lemma 1, $\epsilon \le \sqrt{\frac{4\beta_T d}{t-\tau d}}$, so that if $w_{i_t} > T^{-1}$ then $w_{i_t} \le \min\Big\{C, \sqrt{\frac{4\beta_T d}{t-\tau d}}\Big\}$. Therefore,
$$\sum_{t=1}^{T} w_{i_t}\mathbb{1}\{w_{i_t} \ge T^{-1}\} \le \tau C d + \sum_{t=\tau d+1}^{T}\sqrt{\frac{4\beta_T d}{t-\tau d}} \le \tau C d + 2\sqrt{\beta_T d}\int_{0}^{T}\frac{dt}{\sqrt{t}} \le \tau C d + 4\sqrt{\beta_T d\, T}.$$

6
Analysis
We will now reproduce the decomposition of expected regret in terms of the Bellman error [11]. From here, we will apply the confidence set results from Section 5 to obtain our regret bounds. We streamline our discussion of $P^M$, $R^M$, $V^M_{\mu,i}$, $U^M_i$ and $T^M_\mu$ by simply writing $*$ in place of $M^*$ or $\mu^*$ and $k$ in place of $M_k$ or $\mu_k$ where appropriate; for example $V^*_{k,i} := V^{M^*}_{\mu_k,i}$.
The first step in our analysis breaks down the regret by adding and subtracting the imagined optimal reward of $\mu_k$ under the MDP $M_k$:
$$\Delta_k = \big(V^*_{*,1} - V^*_{k,1}\big)(s_0) = \big(V^*_{*,1} - V^k_{k,1}\big)(s_0) + \big(V^k_{k,1} - V^*_{k,1}\big)(s_0). \qquad (9)$$
Here $s_0$ is a distinguished initial state, but moving to general $\rho(s)$ poses no real challenge. Algorithms based upon optimism bound $(V^*_{*,1} - V^k_{k,1}) \le 0$ with high probability. For PSRL we use Lemma 2 and the tower property to see that this is zero in expectation.
Lemma 2 (Posterior sampling).
If $f$ is the distribution of $M^*$ then, for any $\sigma(H_{t_k})$-measurable function $g$,
$$\mathbb{E}[g(M^*) \,|\, H_{t_k}] = \mathbb{E}[g(M_k) \,|\, H_{t_k}]. \qquad (10)$$
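Lemma 2 holds because, conditioned on $H_{t_k}$, the true model $M^*$ and the sample $M_k$ share the same posterior law. A tiny Beta-Bernoulli Monte Carlo check, with illustrative names:

import numpy as np

rng = np.random.default_rng(2)
history = np.array([1, 0, 1, 1])                        # fixed observed coin flips H_t
a, b = 1.0 + history.sum(), 1.0 + (1 - history).sum()   # Beta posterior parameters

g = lambda p: p ** 2                             # any measurable function of the model

samples = rng.beta(a, b, size=200_000)           # fresh posterior draws, like M_k
print(g(samples).mean())                         # Monte Carlo E[g(M_k) | H_t]
print(a * (a + 1) / ((a + b) * (a + b + 1)))     # closed-form E[g(p*) | H_t] under Beta(a, b)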
We introduce the Bellman operator $T^M_\mu$, which for any MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$, stationary policy $\mu: \mathcal{S} \to \mathcal{A}$ and value function $V: \mathcal{S} \to \mathbb{R}$ is defined by
$$T^M_\mu V(s) := \bar{r}^M(s, \mu(s)) + \sum_{s' \in \mathcal{S}} P^M(s'|s, \mu(s))\,V(s').$$
This returns the expected value of state $s$ where we follow the policy $\mu$ under the laws of $M$ for one time step. The following lemma gives a concise form for the dynamic programming paradigm in terms of the Bellman operator.
Lemma 3 (Dynamic programming equation).
For any MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$ and policy $\mu: \mathcal{S} \times \{1, \ldots, \tau\} \to \mathcal{A}$, the value functions $V^M_\mu$ satisfy
$$V^M_{\mu,i} = T^M_{\mu(\cdot,i)} V^M_{\mu,i+1} \qquad (11)$$
for $i = 1, \ldots, \tau$, with $V^M_{\mu,\tau+1} := 0$.
Through repeated application of the dynamic programming operator and taking expectation
of martingale differences we can mirror earlier analysis [11] to equate expected regret with
the cumulative Bellman error:
$$\mathbb{E}[\Delta_k] = \mathbb{E}\Big[\sum_{i=1}^{\tau} \big(T^k_{k,i} - T^*_{k,i}\big)V^k_{k,i+1}(s_{t_k+i})\Big]. \qquad (12)$$
6.1
Lipschitz continuity
Efficient regret bounds for MDPs with an infinite number of states and actions require some
regularity assumption. One natural notion is that nearby states might have similar optimal
values, or that the optimal value function might be Lipschitz. Unfortunately, any discontinuous reward function will usually lead to discontinuous value functions, so that this assumption is violated in many settings of interest. However, we only require that the future value is Lipschitz in the sense of equation (3). This will be satisfied whenever the underlying value function is Lipschitz, but is a strictly weaker requirement since the system noise helps to smooth future values.
Since $P$ has $\sigma_P$-sub-Gaussian noise, we write $s_{t+1} = p^M(s_t, a_t) + \epsilon^P_t$ in the natural way. We
now use equation (12) to reduce regret to a sum of set widths. To reduce clutter and more
closely follow the notation of Section 4, we will write $x_{k,i} = (s_{t_k+i}, a_{t_k+i})$.
$$\mathbb{E}[\Delta_k] \le \mathbb{E}\Big[\sum_{i=1}^{\tau} \big(\bar{r}^k(x_{k,i}) - \bar{r}^*(x_{k,i})\big) + U^k_i\big(P^k(x_{k,i})\big) - U^k_i\big(P^*(x_{k,i})\big)\Big]$$
$$\le \mathbb{E}\Big[\sum_{i=1}^{\tau} \big|\bar{r}^k(x_{k,i}) - \bar{r}^*(x_{k,i})\big| + K^k\big\|p^k(x_{k,i}) - p^*(x_{k,i})\big\|_2\Big] \qquad (13)$$
where $K^k$ is a global Lipschitz constant for the future value function of $M_k$ as per (3).
We now use the results from Sections 4 and 5 to form the corresponding confidence sets $\mathcal{R}_k := \mathcal{R}_{t_k}(\beta^*(\mathcal{R}, \delta, \alpha))$ and $\mathcal{P}_k := \mathcal{P}_{t_k}(\beta^*(\mathcal{P}, \delta, \alpha))$ for the reward and transition functions respectively. Let $A = \{R^*, R^k \in \mathcal{R}_k\ \forall k\}$ and $B = \{P^*, P^k \in \mathcal{P}_k\ \forall k\}$ and condition upon these events to give:
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] \le \mathbb{E}\Big[\sum_{k=1}^{m}\sum_{i=1}^{\tau} |\bar{r}^k(x_{k,i}) - \bar{r}^*(x_{k,i})| + K^k\|p^k(x_{k,i}) - p^*(x_{k,i})\|_2\Big]$$
$$\le \sum_{k=1}^{m}\sum_{i=1}^{\tau} \Big(w_{\mathcal{R}_k}(x_{k,i}) + \mathbb{E}[K^k \,|\, A, B]\, w_{\mathcal{P}_k}(x_{k,i})\Big) + 8\delta T(C_R + C_P).$$
The posterior sampling lemma ensures that $\mathbb{E}[K^k] = \mathbb{E}[K^*]$, so that $\mathbb{E}[K^k \,|\, A, B] \le \frac{\mathbb{E}[K^*]}{1 - 8\delta}$ by a union bound on $\{A^c \cup B^c\}$. We fix $\delta = 1/8T$ to see that:
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] \le (C_R + C_P) + \sum_{k=1}^{m}\sum_{i=1}^{\tau} w_{\mathcal{R}_k}(x_{k,i}) + \mathbb{E}[K^*]\Big(1 + \frac{1}{T-1}\Big)\sum_{k=1}^{m}\sum_{i=1}^{\tau} w_{\mathcal{P}_k}(x_{k,i}). \qquad (14)$$
We now use equation (7) together with Proposition 6 to obtain our regret bounds. For ease
of notation we will write dE (R) = dimE (R, T ?1 ) and dE (P) = dimE (P, T ?1 ).
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] \le 2 + (C_R + C_P) + \tau\big(C_R d_E(\mathcal{R}) + C_P d_E(\mathcal{P})\big) + 4\sqrt{\beta^*_T(\mathcal{R}, 1/8T, \alpha)\, d_E(\mathcal{R})\, T} + 4\sqrt{\beta^*_T(\mathcal{P}, 1/8T, \alpha)\, d_E(\mathcal{P})\, T}. \qquad (15)$$
We let $\alpha = 1/T^2$ and write $n_{\mathcal{F}} = \log(8N(\mathcal{F}, 1/T^2, \|\cdot\|_2)T)$ for $\mathcal{R}$ and $\mathcal{P}$ to complete our proof of Theorem 1:
$$\mathbb{E}[\text{Regret}(T, \pi^{PS}, M^*)] \le C_R + C_P + \tilde{D}(\mathcal{R}) + \mathbb{E}[K^*]\Big(1 + \frac{1}{T-1}\Big)\tilde{D}(\mathcal{P}) \qquad (16)$$
where $\tilde{D}(\mathcal{F})$ is shorthand for $1 + \tau C_F d_E(\mathcal{F}) + 8\sqrt{d_E(\mathcal{F})(4C_F + 2\sigma_F^2\log(32T^3))} + 8\sqrt{2\sigma_F^2 n_{\mathcal{F}} d_E(\mathcal{F}) T}$. The first term $[C_R + C_P]$ bounds the contribution from missed confidence sets. The cost of learning the reward function $R^*$ is bounded by $\tilde{D}(\mathcal{R})$. In most problems the remaining contribution, bounding transitions and lost future value, will be dominant. Corollary 1 follows from Definition 1 together with $n_{\mathcal{R}}$ and $n_{\mathcal{P}}$.
7
Conclusion
We present a new analysis of posterior sampling for reinforcement learning that leads to
a general regret bound in terms of the dimensionality, rather than the cardinality, of the
underlying MDP. These are the first regret bounds for reinforcement learning in such a
general setting and provide new state of the art guarantees when specialized to several
important problem settings. That said, there are a few clear shortcomings which we do not
address in the paper. First, we assume that it is possible to draw samples from the posterior
distribution exactly and in some cases this may require extensive computational effort.
Second, we wonder whether it is possible to extend our analysis to learning in MDPs without
episodic resets. Finally, there is a fundamental hurdle to model-based reinforcement learning
that planning for the optimal policy even in a known MDP may be intractable. We assume
access to an approximate MDP planner, but this will generally require lengthy computations.
We would like to examine whether similar bounds are attainable in model-free learning
[23], which may obviate complicated MDP planning, and examine the computational and
statistical efficiency tradeoffs between these methods.
Acknowledgments
Osband is supported by Stanford Graduate Fellowships courtesy of PACCAR inc. This work
was supported in part by Award CMMI-0968707 from the National Science Foundation.
References
[1] Apostolos Burnetas and Michael Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222-255, 1997.
[2] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[3] Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. arXiv preprint cs/9605103, 1996.
[4] Leslie G Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[5] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285-318, 1988.
[6] Lihong Li, Michael L Littman, Thomas J Walsh, and Alexander L Strehl. Knows what it knows: a framework for self-aware learning. Machine Learning, 82(3):399-443, 2011.
[7] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563-1600, 2010.
[8] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
[9] Ronen Brafman and Moshe Tennenholtz. R-max: A general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213-231, 2003.
[10] Alexander Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 881-888. ACM, 2006.
[11] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. Advances in Neural Information Processing Systems, 2013.
[12] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397-422, 2003.
[13] Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1587-1627, 2011.
[14] Daniel Russo and Benjamin Van Roy. Learning to optimize via posterior sampling. CoRR, abs/1301.2609, 2013.
[15] Ian Osband and Benjamin Van Roy. Near-optimal regret bounds for reinforcement learning in factored MDPs. arXiv preprint arXiv:1403.3741, 2014.
[16] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.
[17] Morteza Ibrahimi, Adel Javanmard, and Benjamin Van Roy. Efficient reinforcement learning for high dimensional linear quadratic systems. In NIPS, pages 2645-2653, 2012.
[18] Ronald Ortner, Daniil Ryabko, et al. Online regret bounds for undiscounted continuous reinforcement learning. In NIPS, pages 1772-1780, 2012.
[19] Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. In Advances in Neural Information Processing Systems, pages 2256-2264, 2013.
[20] William Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
[21] Malcom Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943-950, 2000.
[22] Dimitri Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, Belmont, MA, 1995.
[23] Benjamin Van Roy and Zheng Wen. Generalization and exploration via randomized value functions. arXiv preprint arXiv:1402.0635, 2014.
Algorithms for CVaR Optimization in MDPs
Yinlam Chow*
Institute of Computational & Mathematical Engineering, Stanford University
Mohammad Ghavamzadeh†
Adobe Research & INRIA Lille - Team SequeL
Abstract
In many sequential decision-making problems we may want to manage risk by
minimizing some measure of variability in costs in addition to minimizing a standard criterion. Conditional value-at-risk (CVaR) is a relatively new risk measure
that addresses some of the shortcomings of the well-known variance-related risk
measures, and because of its computational efficiencies has gained popularity in
finance and operations research. In this paper, we consider the mean-CVaR optimization problem in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then devise policy gradient and
actor-critic algorithms that each uses a specific method to estimate this gradient
and updates the policy parameters in the descent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we
demonstrate the usefulness of our algorithms in an optimal stopping problem.
1
Introduction
A standard optimization criterion for an infinite horizon Markov decision process (MDP) is the
expected sum of (discounted) costs (i.e., finding a policy that minimizes the value function of the
initial state of the system). However in many applications, we may prefer to minimize some measure
of risk in addition to this standard optimization criterion. In such cases, we would like to use a
criterion that incorporates a penalty for the variability (due to the stochastic nature of the system)
induced by a given policy. In risk-sensitive MDPs [16], the objective is to minimize a risk-sensitive
criterion such as the expected exponential utility [16], a variance-related measure [24, 14], or the
percentile performance [15]. The issue of how to construct such criteria in a manner that will be
both conceptually meaningful and mathematically tractable is still an open question.
Although most losses (returns) are not normally distributed, the typical Markowitz mean-variance
optimization [18], that relies on the first two moments of the loss (return) distribution, has dominated the risk management for over 50 years. Numerous alternatives to mean-variance optimization
have emerged in the literature, but there is no clear leader amongst these alternative risk-sensitive
objective functions. Value-at-risk (VaR) and conditional value-at-risk (CVaR) are two promising
such alternatives that quantify the losses that might be encountered in the tail of the loss distribution, and thus, have received high status in risk management. For (continuous) loss distributions,
while VaR? measures risk as the maximum loss that might be incurred w.r.t. a given confidence
level ?, CVaR? measures it as the expected loss given that the loss is greater or equal to VaR? .
Although VaR is a popular risk measure, CVaR's computational advantages over VaR have boosted the development of CVaR optimization techniques. We provide the exact definitions of these two risk measures and briefly discuss some of VaR's shortcomings in Section 2. CVaR minimization
objective functions consist of different combinations of the expected loss and the CVaR, such as the
minimization of the expected loss subject to a constraint on CVaR. This is the objective function
* Part of this work was completed during Yinlam Chow's internship at Adobe Research.
† Mohammad Ghavamzadeh is at Adobe Research, on leave of absence from INRIA Lille - Team SequeL.
that we study in this paper, although we believe that our proposed algorithms can be easily extended
to several other CVaR-related objective functions. Boda and Filar [9] and Bäuerle and Ott [20, 3]
extended the results of [23] to MDPs (sequential decision-making). While the former proposed to
use dynamic programming (DP) to optimize CVaR, an approach that is limited to small problems,
the latter showed that in both finite and infinite horizon MDPs, there exists a deterministic historydependent optimal policy for CVaR optimization (see Section 3 for more details).
Most of the work in risk-sensitive sequential decision-making has been in the context of MDPs
(when the model is known) and much less work has been done within the reinforcement learning
(RL) framework. In risk-sensitive RL, we can mention the work by Borkar [10, 11] who considered
the expected exponential utility and those by Tamar et al. [26] and Prashanth and Ghavamzadeh [17]
on several variance-related risk measures. CVaR optimization in RL is a rather novel subject.
Morimura et al. [19] estimate the return distribution while exploring using a CVaR-based risk-sensitive policy. Their algorithm does not scale to large problems. Petrik and Subramanian [22]
propose a method based on stochastic dual DP to optimize CVaR in large-scale MDPs. However,
their method is limited to linearly controllable problems. Borkar and Jain [12] consider a finitehorizon MDP with CVaR constraint and sketch a stochastic approximation algorithm to solve it.
Finally, Tamar et al. [27] have recently proposed a policy gradient algorithm for CVaR optimization.
In this paper, we develop policy gradient (PG) and actor-critic (AC) algorithms for mean-CVaR
optimization in MDPs. We first derive a formula for computing the gradient of this risk-sensitive
objective function. We then propose several methods to estimate this gradient both incrementally
and using system trajectories (update at each time-step vs. update after observing one or more trajectories). We then use these gradient estimations to devise PG and AC algorithms that update the
policy parameters in the descent direction. Using the ordinary differential equations (ODE) approach, we establish the asymptotic convergence of our algorithms to locally risk-sensitive optimal
policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem. In comparison to [27], while they develop a PG algorithm for CVaR optimization in stochastic
shortest path problems that only considers continuous loss distributions, uses a biased estimator for
VaR, is not incremental, and has no comprehensive convergence proof, here we study mean-CVaR
optimization, consider both discrete and continuous loss distributions, devise both PG and (several)
AC algorithms (trajectory-based and incremental ? plus AC helps in reducing the variance of PG
algorithms), and establish convergence proof for our algorithms.
2
Preliminaries
We consider problems in which the agent's interaction with the environment is modeled as an MDP.
An MDP is a tuple $\mathcal{M} = (\mathcal{X}, \mathcal{A}, C, P, P_0)$, where $\mathcal{X} = \{1, \ldots, n\}$ and $\mathcal{A} = \{1, \ldots, m\}$ are the state and action spaces; $C(x,a) \in [-C_{\max}, C_{\max}]$ is the bounded cost random variable whose expectation is denoted by $c(x,a) = \mathbb{E}[C(x,a)]$; $P(\cdot|x,a)$ is the transition probability distribution; and $P_0(\cdot)$ is the initial state distribution. For simplicity, we assume that the system has a single initial state $x^0$, i.e., $P_0(x) = \mathbb{1}\{x = x^0\}$. All the results of the paper can be easily extended to the case that the system has more than one initial state. We also need to specify the rule according to which the agent selects actions at each state. A stationary policy $\mu(\cdot|x)$ is a probability distribution over actions, conditioned on the current state. In policy gradient and actor-critic methods, we define a class of parameterized stochastic policies $\{\mu(\cdot|x;\theta), x \in \mathcal{X}, \theta \in \Theta \subseteq \mathbb{R}^{\kappa_1}\}$, estimate the gradient of a performance measure w.r.t. the policy parameters $\theta$ from the observed system trajectories, and then improve the policy by adjusting its parameters in the direction of the gradient. Since in this setting a policy $\mu$ is represented by its $\kappa_1$-dimensional parameter vector $\theta$, policy dependent functions can be written as a function of $\theta$ in place of $\mu$. So, we use $\mu$ and $\theta$ interchangeably in the paper. We denote by $d^\mu_\gamma(x|x^0) = (1-\gamma)\sum_{k=0}^{\infty} \gamma^k \mathbb{P}(x_k = x \,|\, x_0 = x^0; \mu)$ and $\pi^\mu_\gamma(x, a|x^0) = d^\mu_\gamma(x|x^0)\mu(a|x)$ the $\gamma$-discounted visiting distribution of state $x$ and state-action pair $(x,a)$ under policy $\mu$, respectively.
Let $Z$ be a bounded-mean random variable, i.e., $\mathbb{E}[|Z|] < \infty$, with the cumulative distribution function $F(z) = \mathbb{P}(Z \le z)$ (e.g., one may think of $Z$ as the loss of an investment strategy). We define the value-at-risk at the confidence level $\alpha \in (0,1)$ as $\text{VaR}_\alpha(Z) = \min\{z \,|\, F(z) \ge \alpha\}$.
Here the minimum is attained because F is non-decreasing and right-continuous in z. When F
is continuous and strictly increasing, $\text{VaR}_\alpha(Z)$ is the unique $z$ satisfying $F(z) = \alpha$; otherwise, the VaR equation can have no solution or a whole range of solutions. Although VaR is a popular
risk measure, it suffers from being unstable and difficult to work with numerically when Z is not
normally distributed, which is often the case as loss distributions tend to exhibit fat tails or empirical
discreteness. Moreover, VaR is not a coherent risk measure [1] and more importantly does not
quantify the losses that might be suffered beyond its value at the $\alpha$-tail of the distribution [23].
An alternative measure that addresses most of VaR's shortcomings is conditional value-at-risk, $\text{CVaR}_\alpha(Z)$, which is the mean of the $\alpha$-tail distribution of $Z$. If there is no probability atom at $\text{VaR}_\alpha(Z)$, $\text{CVaR}_\alpha(Z)$ has a unique value that is defined as $\text{CVaR}_\alpha(Z) = \mathbb{E}[Z \,|\, Z \ge \text{VaR}_\alpha(Z)]$. Rockafellar and Uryasev [23] showed that
$$\text{CVaR}_\alpha(Z) = \min_{\nu \in \mathbb{R}} H_\alpha(Z, \nu) = \min_{\nu \in \mathbb{R}}\Big\{\nu + \frac{1}{1-\alpha}\mathbb{E}\big[(Z - \nu)^+\big]\Big\}, \qquad (1)$$
where $(x)^+ = \max(x, 0)$ represents the positive part of $x$. Note that as a function of $\nu$, $H_\alpha(\cdot, \nu)$ is finite and convex (hence continuous).
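Equation (1) is easy to verify numerically: for empirical samples the minimizer $\nu^*$ is the empirical $\text{VaR}_\alpha$, and the minimal value matches the $\alpha$-tail mean. A short Python sketch, with an illustrative heavy-tailed loss distribution:

import numpy as np

rng = np.random.default_rng(3)
Z = rng.standard_t(df=3, size=100_000)           # heavy-tailed losses (assumption)
alpha = 0.95

H = lambda nu: nu + np.maximum(Z - nu, 0.0).mean() / (1.0 - alpha)

var_emp = np.quantile(Z, alpha)                  # empirical VaR_alpha
cvar_tail = Z[Z >= var_emp].mean()               # mean of the alpha-tail of Z
cvar_ru = min(H(nu) for nu in np.linspace(var_emp - 1, var_emp + 1, 2001))
print(var_emp, cvar_tail, cvar_ru)               # the two CVaR estimates agree closely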
3
CVaR Optimization in MDPs
For a policy $\mu$, we define the loss of a state $x$ (state-action pair $(x,a)$) as the sum of (discounted) costs encountered by the agent when it starts at state $x$ (state-action pair $(x,a)$) and then follows policy $\mu$, i.e., $D^\theta(x) = \sum_{k=0}^{\infty} \gamma^k C(x_k, a_k) \,|\, x_0 = x,\, \mu$ and $D^\theta(x,a) = \sum_{k=0}^{\infty} \gamma^k C(x_k, a_k) \,|\, x_0 = x,\, a_0 = a,\, \mu$. The expected values of these two random variables are the value and action-value functions of policy $\mu$, i.e., $V^\theta(x) = \mathbb{E}[D^\theta(x)]$ and $Q^\theta(x,a) = \mathbb{E}[D^\theta(x,a)]$. The goal in the standard discounted formulation is to find an optimal policy $\theta^* = \arg\min_\theta V^\theta(x^0)$.
For CVaR optimization in MDPs, we consider the following optimization problem: for a given confidence level $\alpha \in (0,1)$ and loss tolerance $\beta \in \mathbb{R}$,
$$\min_\theta V^\theta(x^0) \quad \text{subject to} \quad \text{CVaR}_\alpha\big(D^\theta(x^0)\big) \le \beta. \qquad (2)$$
By Theorem 16 in [23], the optimization problem (2) is equivalent to ($H_\alpha$ is defined by (1))
$$\min_{\theta,\nu} V^\theta(x^0) \quad \text{subject to} \quad H_\alpha\big(D^\theta(x^0), \nu\big) \le \beta. \qquad (3)$$
To solve (3), we employ the Lagrangian relaxation procedure [4] to convert it to the following unconstrained problem:
$$\max_{\lambda \ge 0} \min_{\theta,\nu}\ L(\theta, \nu, \lambda) \overset{\triangle}{=} V^\theta(x^0) + \lambda\Big(H_\alpha\big(D^\theta(x^0), \nu\big) - \beta\Big), \qquad (4)$$
where $\lambda$ is the Lagrange multiplier. The goal here is to find the saddle point of $L(\theta, \nu, \lambda)$, i.e., a point $(\theta^*, \nu^*, \lambda^*)$ that satisfies $L(\theta, \nu, \lambda^*) \ge L(\theta^*, \nu^*, \lambda^*) \ge L(\theta^*, \nu^*, \lambda)$, $\forall \theta, \nu$, $\forall \lambda \ge 0$. This is achieved by descending in $(\theta, \nu)$ and ascending in $\lambda$ using the gradients of $L(\theta, \nu, \lambda)$ w.r.t. $\theta$, $\nu$, and $\lambda$, i.e.,¹
$$\nabla_\theta L(\theta, \nu, \lambda) = \nabla_\theta V^\theta(x^0) + \frac{\lambda}{1-\alpha}\nabla_\theta\mathbb{E}\Big[\big(D^\theta(x^0) - \nu\big)^+\Big], \qquad (5)$$
$$\partial_\nu L(\theta, \nu, \lambda) = \lambda\Big(1 + \frac{1}{1-\alpha}\partial_\nu\mathbb{E}\Big[\big(D^\theta(x^0) - \nu\big)^+\Big]\Big) \ni \lambda\Big(1 - \frac{1}{1-\alpha}\mathbb{P}\big(D^\theta(x^0) \ge \nu\big)\Big), \qquad (6)$$
$$\nabla_\lambda L(\theta, \nu, \lambda) = \nu + \frac{1}{1-\alpha}\mathbb{E}\Big[\big(D^\theta(x^0) - \nu\big)^+\Big] - \beta. \qquad (7)$$
We assume that there exists a policy $\mu(\cdot|\cdot;\theta)$ such that $\text{CVaR}_\alpha(D^\theta(x^0)) \le \beta$ (feasibility assumption). As discussed in Section 1, Bäuerle and Ott [20, 3] showed that there exists a deterministic history-dependent optimal policy for CVaR optimization. The important point is that this policy does not depend on the complete history, but only on the current time step $k$, current state of the system $x_k$, and accumulated discounted cost $\sum_{i=0}^{k} \gamma^i C(x_i, a_i)$.
In the following, we present a policy gradient (PG) algorithm (Sec. 4) and several actor-critic (AC)
algorithms (Sec. 5) to optimize (4). While the PG algorithm updates its parameters after observing
several trajectories, the AC algorithms are incremental and update their parameters at each time-step.
¹ The notation $\ni$ in (6) means that the right-most term is a member of the sub-gradient set $\partial_\nu L(\theta, \nu, \lambda)$.
4
A Trajectory-based Policy Gradient Algorithm
In this section, we present a policy gradient algorithm to solve the optimization problem (4). The
unit of observation in this algorithm is a system trajectory generated by following the current policy.
At each iteration, the algorithm generates $N$ trajectories by following the current policy, uses them to estimate the gradients in Eqs. 5-7, and then uses these estimates to update the parameters $\theta, \nu, \lambda$.
Let $\xi = \{x_0, a_0, x_1, a_1, \ldots, x_{T-1}, a_{T-1}, x_T\}$ be a trajectory generated by following the policy $\theta$, where $x_0 = x^0$ and $x_T$ is usually a terminal state of the system. After $x_k$ visits the terminal state, it enters a recurring sink state $x_S$ at the next time step, incurring zero cost, i.e., $C(x_S, a) = 0$, $\forall a \in \mathcal{A}$. Time index $T$ is referred to as the stopping time of the MDP. Since the transition is stochastic, $T$ is a non-deterministic quantity. Here we assume that the policy $\mu$ is proper, i.e., $\sum_{k=0}^{\infty} \mathbb{P}(x_k = x \,|\, x_0 = x^0, \mu) < \infty$ for every $x \notin \{x_S\}$. This further means that with probability 1, the MDP exits the transient states and hits $x_S$ (and stays in $x_S$) in finite time $T$. For simplicity, we assume that the agent incurs zero cost at the terminal state. Analogous results for the general case with a non-zero terminal cost can be derived using identical arguments. The loss and probability of $\xi$ are defined as $D(\xi) = \sum_{k=0}^{T-1} \gamma^k c(x_k, a_k)$ and $\mathbb{P}_\theta(\xi) = P_0(x_0)\prod_{k=0}^{T-1} \mu(a_k|x_k;\theta) P(x_{k+1}|x_k, a_k)$, respectively. It can be easily shown that $\nabla_\theta \log \mathbb{P}_\theta(\xi) = \sum_{k=0}^{T-1} \nabla_\theta \log \mu(a_k|x_k;\theta)$.
Algorithm 1 contains the pseudo-code of our proposed policy gradient algorithm. What appears
inside the parentheses on the right-hand-side of the update equations are the estimates of the gradients of $L(\theta, \nu, \lambda)$ w.r.t. $\theta, \nu, \lambda$ (estimates of Eqs. 5-7; see Appendix A.2 of [13]). $\Gamma_\Theta$ is an operator that projects a vector $\theta \in \mathbb{R}^{\kappa_1}$ to the closest point in a compact and convex set $\Theta \subset \mathbb{R}^{\kappa_1}$, and $\Gamma_N$ and $\Gamma_\Lambda$ are projection operators to $[-\frac{C_{\max}}{1-\gamma}, \frac{C_{\max}}{1-\gamma}]$ and $[0, \lambda_{\max}]$, respectively. These projection operators are necessary to ensure the convergence of the algorithm. The step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the VaR parameter $\nu$ update is on the fastest time-scale $\{\zeta_3(i)\}$, the policy parameter $\theta$ update is on the intermediate time-scale $\{\zeta_2(i)\}$, and the Lagrange multiplier $\lambda$ update is on the slowest time-scale $\{\zeta_1(i)\}$ (see Appendix A.1 of [13] for the conditions on the step-size schedules). This results in a three time-scale stochastic approximation algorithm. We prove that our policy gradient algorithm converges to a (local) saddle point of the risk-sensitive objective function $L(\theta, \nu, \lambda)$ (see Appendix A.3 of [13]).
Algorithm 1 Trajectory-based Policy Gradient Algorithm for CVaR Optimization
Input: parameterized policy $\mu(\cdot|\cdot;\theta)$, confidence level $\alpha$, and loss tolerance $\beta$
Initialization: policy parameter $\theta = \theta_0$, VaR parameter $\nu = \nu_0$, and the Lagrangian parameter $\lambda = \lambda_0$
for $i = 0, 1, 2, \ldots$ do
  Generate $N$ trajectories $\{\xi_{j,i}\}_{j=1}^{N}$ by starting at $x_0 = x^0$ and following the current policy $\theta_i$.
  $\nu$ Update: $\nu_{i+1} = \Gamma_N\Big[\nu_i - \zeta_3(i)\Big(\lambda_i - \frac{\lambda_i}{(1-\alpha)N}\sum_{j=1}^{N} \mathbb{1}\{D(\xi_{j,i}) \ge \nu_i\}\Big)\Big]$
  $\theta$ Update: $\theta_{i+1} = \Gamma_\Theta\Big[\theta_i - \zeta_2(i)\Big(\frac{1}{N}\sum_{j=1}^{N} \nabla_\theta\log\mathbb{P}_\theta(\xi_{j,i})|_{\theta=\theta_i} D(\xi_{j,i}) + \frac{\lambda_i}{(1-\alpha)N}\sum_{j=1}^{N} \nabla_\theta\log\mathbb{P}_\theta(\xi_{j,i})|_{\theta=\theta_i}\big(D(\xi_{j,i}) - \nu_i\big)\mathbb{1}\{D(\xi_{j,i}) \ge \nu_i\}\Big)\Big]$
  $\lambda$ Update: $\lambda_{i+1} = \Gamma_\Lambda\Big[\lambda_i + \zeta_1(i)\Big(\nu_i - \beta + \frac{1}{(1-\alpha)N}\sum_{j=1}^{N}\big(D(\xi_{j,i}) - \nu_i\big)\mathbb{1}\{D(\xi_{j,i}) \ge \nu_i\}\Big)\Big]$
end for
return parameters $\theta, \nu, \lambda$
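A compact Python skeleton of this update loop is sketched below. The function sample_trajectory is a stub that must return the trajectory loss $D(\xi)$ and the score $\sum_k \nabla_\theta \log\mu(a_k|x_k;\theta)$; the step-size exponents, projection boxes, and all names are illustrative assumptions rather than tuned choices from the paper.

import numpy as np

alpha, beta_tol = 0.95, 1.0          # confidence level and loss tolerance
nu_box, lam_max = 50.0, 100.0        # projection boxes (assumptions)

def pg_cvar(sample_trajectory, theta, nu, lam, N=100, iters=1000):
    for i in range(1, iters + 1):
        z3, z2, z1 = i ** -0.55, i ** -0.7, 1.0 / i   # zeta_3 > zeta_2 > zeta_1
        D, score = zip(*(sample_trajectory(theta) for _ in range(N)))
        D, score = np.asarray(D), np.asarray(score)   # score_j = grad log P_theta(xi_j)
        tail = (D >= nu).astype(float)
        # gradient estimates of (6), (5), (7) at the current (theta, nu, lam)
        g_nu = lam * (1.0 - tail.mean() / (1.0 - alpha))
        g_th = (score * (D + lam / (1.0 - alpha) * (D - nu) * tail)[:, None]).mean(0)
        g_lam = nu - beta_tol + ((D - nu) * tail).mean() / (1.0 - alpha)
        nu = np.clip(nu - z3 * g_nu, -nu_box, nu_box)   # fastest time-scale
        theta = theta - z2 * g_th                       # Gamma_Theta projection omitted
        lam = np.clip(lam + z1 * g_lam, 0.0, lam_max)   # slowest time-scale, ascent
    return theta, nu, lam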
5
Incremental Actor-Critic Algorithms
As mentioned in Section 4, the unit of observation in our policy gradient algorithm (Algorithm 1) is
a system trajectory. This may result in high variance for the gradient estimates, especially when the
length of the trajectories is long. To address this issue, in this section, we propose two actor-critic
algorithms that use linear approximation for some quantities in the gradient estimates and update the
parameters incrementally (after each state-action transition). We present two actor-critic algorithms
for optimizing the risk-sensitive measure (4). These algorithms are based on the gradient estimates
of Sections 5.1-5.3. While the first algorithm (SPSA-based) is fully incremental and updates all the parameters $\theta, \nu, \lambda$ at each time-step, the second one updates $\theta$ at each time-step and updates $\nu$ and $\lambda$ only at the end of each trajectory, thus given the name semi trajectory-based. Algorithm 2 contains the pseudo-code of these algorithms. The projection operators $\Gamma_\Theta$, $\Gamma_N$, and $\Gamma_\Lambda$ are defined as in Section 4 and are necessary to ensure the convergence of the algorithms. The step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the critic update is on the fastest time-scale $\{\zeta_4(i)\}$, the policy and VaR parameter updates are on the intermediate time-scale, with the $\nu$-update $\{\zeta_3(i)\}$ being faster than the $\theta$-update $\{\zeta_2(i)\}$, and finally the Lagrange multiplier update is on the slowest time-scale $\{\zeta_1(i)\}$ (see Appendix B.1 of [13] for the conditions on these step-size schedules). This results in four time-scale stochastic approximation algorithms. We prove that these actor-critic algorithms converge to a (local) saddle point of the risk-sensitive objective function $L(\theta, \nu, \lambda)$ (see Appendix B.4 of [13]).
5.1
Gradient w.r.t. the Policy Parameters $\theta$
The gradient of our objective function w.r.t. the policy parameters $\theta$ in (5) may be rewritten as
$$\nabla_\theta L(\theta, \nu, \lambda) = \nabla_\theta\Big(\mathbb{E}\big[D^\theta(x^0)\big] + \frac{\lambda}{1-\alpha}\mathbb{E}\Big[\big(D^\theta(x^0) - \nu\big)^+\Big]\Big). \qquad (8)$$
Given the original MDP $\mathcal{M} = (\mathcal{X}, \mathcal{A}, C, P, P_0)$ and the parameter $\nu$, we define the augmented MDP $\bar{\mathcal{M}} = (\bar{\mathcal{X}}, \bar{\mathcal{A}}, \bar{C}, \bar{P}, \bar{P}_0)$ as $\bar{\mathcal{X}} = \mathcal{X} \times \mathbb{R}$, $\bar{\mathcal{A}} = \mathcal{A}$, $\bar{P}_0(x, s) = P_0(x)\mathbb{1}\{s_0 = s\}$, and
$$\bar{C}(x, s, a) = \begin{cases} \lambda(-s)^+/(1-\alpha) & \text{if } x = x_T \\ C(x, a) & \text{otherwise} \end{cases}, \qquad \bar{P}(x', s'|x, s, a) = \begin{cases} P(x'|x, a) & \text{if } s' = \big(s - C(x, a)\big)/\gamma \\ 0 & \text{otherwise} \end{cases}$$
where $x_T$ is any terminal state of the original MDP $\mathcal{M}$ and $s_T$ is the value of the $s$ part of the state when a policy $\theta$ reaches a terminal state $x_T$ after $T$ steps, i.e., $s_T = \frac{1}{\gamma^T}\big(\nu - \sum_{k=0}^{T-1} \gamma^k C(x_k, a_k)\big)$.
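The $s$-coordinate of the augmented state evolves deterministically given the sampled cost, which the following sketch makes explicit; sample_cost and sample_next are stubs for $C(x,a)$ and $P(\cdot|x,a)$, and all names are illustrative assumptions.

def augmented_transition(x, s, a, x_T, sample_cost, sample_next, gamma, lam, alpha):
    # One step of M_bar. At the terminal state the cost charges the CVaR penalty
    # lam * (-s)^+ / (1 - alpha); elsewhere the usual cost is paid and the
    # budget coordinate s is updated deterministically.
    if x == x_T:
        return lam * max(-s, 0.0) / (1.0 - alpha), x, s
    c = sample_cost(x, a)
    return c, sample_next(x, a), (s - c) / gamma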
We define a class of parameterized stochastic policies $\{\mu(\cdot|x, s;\theta), (x, s) \in \bar{\mathcal{X}}, \theta \in \Theta \subseteq \mathbb{R}^{\kappa_1}\}$ for this augmented MDP. Thus, the total (discounted) loss of a trajectory can be written as
$$\sum_{k=0}^{T-1} \gamma^k C(x_k, a_k) + \gamma^T \bar{C}(x_T, s_T, a) = D^\theta(x^0) + \frac{\lambda}{1-\alpha}\big(D^\theta(x^0) - \nu\big)^+. \qquad (9)$$
From (9), it is clear that the quantity in the parenthesis of (8) is the value function of the policy $\theta$ at state $(x^0, \nu)$ in the augmented MDP $\bar{\mathcal{M}}$, i.e., $V^\theta(x^0, \nu)$. Thus, it is easy to show that (the second equality in Eq. 10 is the result of the policy gradient theorem [21])
$$\nabla_\theta L(\theta, \nu, \lambda) = \nabla_\theta V^\theta(x^0, \nu) = \frac{1}{1-\gamma}\sum_{x,s,a} \pi^\theta_\gamma(x, s, a|x^0, \nu)\,\nabla\log\mu(a|x, s;\theta)\, Q^\theta(x, s, a), \qquad (10)$$
where $\pi^\theta_\gamma$ is the discounted visiting distribution (defined in Section 2) and $Q^\theta$ is the action-value function of policy $\theta$ in the augmented MDP $\bar{\mathcal{M}}$. We can show that $\frac{1}{1-\gamma}\nabla\log\mu(a_k|x_k, s_k;\theta)\cdot\delta_k$ is an unbiased estimate of $\nabla_\theta L(\theta, \nu, \lambda)$, where $\delta_k = \bar{C}(x_k, s_k, a_k) + \gamma\widehat{V}(x_{k+1}, s_{k+1}) - \widehat{V}(x_k, s_k)$ is the temporal-difference (TD) error in $\bar{\mathcal{M}}$, and $\widehat{V}$ is an unbiased estimator of $V^\theta$ (see e.g., [6, 7]).
In our actor-critic algorithms, the critic uses linear approximation for the value function, $V^\theta(x, s) \approx v^\top\phi(x, s) = \widetilde{V}^{\theta,v}(x, s)$, where the feature vector $\phi(\cdot)$ belongs to the low-dimensional space $\mathbb{R}^{\kappa_2}$.
5.2
Gradient w.r.t. the Lagrangian Parameter $\lambda$
We may rewrite the gradient of our objective function w.r.t. the Lagrangian parameter $\lambda$ in (7) as
$$\nabla_\lambda L(\theta, \nu, \lambda) = \nu - \beta + \nabla_\lambda\Big(\mathbb{E}\big[D^\theta(x^0)\big] + \frac{\lambda}{1-\alpha}\mathbb{E}\Big[\big(D^\theta(x^0) - \nu\big)^+\Big]\Big) \overset{(a)}{=} \nu - \beta + \nabla_\lambda V^\theta(x^0, \nu). \qquad (11)$$
Similar to Section 5.1, (a) comes from the fact that the quantity in the parenthesis in (11) is $V^\theta(x^0, \nu)$, the value function of the policy $\theta$ at state $(x^0, \nu)$ in the augmented MDP $\bar{\mathcal{M}}$. Note that the dependence of $V^\theta(x^0, \nu)$ on $\lambda$ comes from the definition of the cost function $\bar{C}$ in $\bar{\mathcal{M}}$. We now derive an expression for $\nabla_\lambda V^\theta(x^0, \nu)$, which in turn will give us an expression for $\nabla_\lambda L(\theta, \nu, \lambda)$.
Lemma 1 The gradient of $V^\theta(x^0, \nu)$ w.r.t. the Lagrangian parameter $\lambda$ may be written as
$$\nabla_\lambda V^\theta(x^0, \nu) = \frac{1}{1-\gamma}\sum_{x,s,a} \pi^\theta_\gamma(x, s, a|x^0, \nu)\,\frac{1}{1-\alpha}\,\mathbb{1}\{x = x_T\}(-s)^+. \qquad (12)$$
Proof. See Appendix B.2 of [13].
From Lemma 1 and (11), it is easy to see that $\nu - \beta + \frac{1}{(1-\gamma)(1-\alpha)}\mathbb{1}\{x = x_T\}(-s)^+$ is an unbiased estimate of $\nabla_\lambda L(\theta, \nu, \lambda)$. An issue with this estimator is that its value is fixed to $\nu_k - \beta$ all along a system trajectory, and only changes at the end to $\nu_k - \beta + \frac{1}{(1-\gamma)(1-\alpha)}(-s_T)^+$. This may affect the incremental nature of our actor-critic algorithm. To address this issue, we propose a different approach to estimate the gradients w.r.t. $\nu$ and $\lambda$ in Sec. 5.4 (of course this does not come for free).
Another important issue is that the above estimator is unbiased only if the samples are generated from the distribution $\pi^\theta_\gamma(\cdot|x^0, \nu)$. If we just follow the policy, then we may use $\nu_k - \beta + \frac{\gamma^k}{1-\alpha}\mathbb{1}\{x_k = x_T\}(-s_k)^+$ as an estimate for $\nabla_\lambda L(\theta, \nu, \lambda)$. Note that this is an issue for all discounted actor-critic algorithms, in that their (likelihood ratio based) estimate for the gradient is unbiased only if the samples are generated from $\pi^\theta_\gamma$, and not when we simply follow the policy. This might be a reason that we have no convergence analysis (to the best of our knowledge) for (likelihood ratio based) discounted actor-critic algorithms.²
5.3
Sub-Gradient w.r.t. the VaR Parameter $\nu$
We may rewrite the sub-gradient of our objective function w.r.t. the VaR parameter $\nu$ (Eq. 6) as
$$\partial_\nu L(\theta, \nu, \lambda) \ni \lambda\Big(1 - \frac{1}{1-\alpha}\,\mathbb{P}\Big(\sum_{k=0}^{\infty} \gamma^k C(x_k, a_k) \ge \nu \,\Big|\, x_0 = x^0; \theta\Big)\Big). \qquad (13)$$
From the definition of the augmented MDP $\bar{\mathcal{M}}$, the probability in (13) may be written as $\mathbb{P}(s_T \le 0 \,|\, x_0 = x^0, s_0 = \nu; \theta)$, where $s_T$ is the $s$ part of the state in $\bar{\mathcal{M}}$ when we reach a terminal state, i.e., $x = x_T$ (see Section 5.1). Thus, we may rewrite (13) as
$$\partial_\nu L(\theta, \nu, \lambda) \ni \lambda\Big(1 - \frac{1}{1-\alpha}\,\mathbb{P}\big(s_T \le 0 \,|\, x_0 = x^0, s_0 = \nu; \theta\big)\Big). \qquad (14)$$
From (14), it is easy to see that $\lambda - \lambda\mathbb{1}\{s_T \le 0\}/(1-\alpha)$ is an unbiased estimate of the sub-gradient of $L(\theta, \nu, \lambda)$ w.r.t. $\nu$. An issue with this (unbiased) estimator is that it can only be applied at the end of a system trajectory (i.e., when we reach the terminal state $x_T$), and thus, using it prevents us from having a fully incremental algorithm. In fact, this is the estimator that we use in our semi trajectory-based actor-critic algorithm.
One approach to estimate this sub-gradient incrementally is to use the simultaneous perturbation stochastic approximation (SPSA) method [8]. The idea of SPSA is to estimate the sub-gradient $g(\nu) \in \partial_\nu L(\theta, \nu, \lambda)$ using two values of $g$ at $\nu^- = \nu - \Delta$ and $\nu^+ = \nu + \Delta$, where $\Delta > 0$ is a positive perturbation (see [8, 17] for a detailed description of $\Delta$).³ In order to see how SPSA can help us to estimate our sub-gradient incrementally, note that
$$\partial_\nu L(\theta, \nu, \lambda) = \lambda + \partial_\nu\Big(\mathbb{E}\big[D^\theta(x^0)\big] + \frac{\lambda}{1-\alpha}\mathbb{E}\Big[\big(D^\theta(x^0) - \nu\big)^+\Big]\Big) \overset{(a)}{=} \lambda + \partial_\nu V^\theta(x^0, \nu). \qquad (15)$$
Similar to Section 5.1, (a) comes from the fact that the quantity in the parenthesis in (15) is $V^\theta(x^0, \nu)$, the value function of the policy $\theta$ at state $(x^0, \nu)$ in the augmented MDP $\bar{\mathcal{M}}$. Since the critic uses a linear approximation for the value function, i.e., $V^\theta(x, s) \approx v^\top\phi(x, s)$, in our actor-critic algorithms (see Section 5.1 and Algorithm 2), the SPSA estimate of the sub-gradient would be of the form $g(\nu) \approx \lambda + v^\top\big(\phi(x^0, \nu^+) - \phi(x^0, \nu^-)\big)/2\Delta$.
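A one-line version of this two-sided SPSA estimate, under the linear-critic assumption; v, phi, and delta are illustrative inputs:

def spsa_nu_subgradient(lam, v, phi, x0, nu, delta):
    # g(nu) ~ lam + v^T (phi(x0, nu + delta) - phi(x0, nu - delta)) / (2 delta)
    dv = v @ phi(x0, nu + delta) - v @ phi(x0, nu - delta)
    return lam + dv / (2.0 * delta)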
5.4
An Alternative Approach to Compute the Gradients
In this section, we present an alternative way to compute the gradients, especially those w.r.t. $\nu$ and $\lambda$. This allows us to estimate the gradient w.r.t. $\lambda$ in a (more) incremental fashion (compared to the method of Section 5.3), with the cost of the need to use two different linear function approximators
(instead of the one used in Algorithm 2).

²Note that the discounted actor-critic algorithm with convergence proof in [5] is based on SPSA.
³SPSA-based gradient estimation was first proposed in [25] and has been widely used in various settings, especially those involving high-dimensional parameters. The SPSA estimate described above is two-sided; it can also be implemented single-sided, using the values of the function at ν and ν⁺. We refer the reader to [8] for more details on SPSA and to [17] for its application to learning in risk-sensitive MDPs.

In this approach, we define the augmented MDP slightly
different from the one in Section 5.3. The only difference is in the definition of the cost function, which is defined here as (note that C(x, a) has been replaced by 0 and λ has been removed)

C̄(x, s, a) = (−s)⁺/(1−α) if x = x_T, and C̄(x, s, a) = 0 otherwise,
where x_T is any terminal state of the original MDP M. It is easy to see that the term (1/(1−α)) E[(D^θ(x⁰) − ν)⁺] appearing in the gradients of Eqs. 5–7 is the value function of the policy θ at state (x⁰, ν) in this augmented MDP. As a result, we have:
Gradient w.r.t. θ: It is easy to see that now this gradient (Eq. 5) is the gradient of the value function of the original MDP, ∇_θ V^θ(x⁰), plus λ times the gradient of the value function of the augmented MDP, ∇_θ V^θ(x⁰, ν), both at the initial states of these MDPs (with abuse of notation, we use V for the value function of both MDPs). Thus, using linear approximators u^⊤f(x, s) and v^⊤φ(x, s) for the value functions of the original and augmented MDPs, ∇_θ L(θ, ν, λ) can be estimated as ∇_θ log μ(a_k|x_k, s_k; θ) · (ε_k + λδ_k), where ε_k and δ_k are the TD-errors of these MDPs.
Gradient w.r.t. λ: Similar to the case for θ, it is easy to see that this gradient (Eq. 7) is ν − β plus the value function of the augmented MDP, V^θ(x⁰, ν), and thus, it can be estimated incrementally as ∇_λ L(θ, ν, λ) ≈ ν − β + v^⊤φ(x, s).
Sub-Gradient w.r.t. ν: This sub-gradient (Eq. 6) is λ times one plus the gradient w.r.t. ν of the value function of the augmented MDP, ∂_ν V^θ(x⁰, ν), and thus, it can be estimated incrementally using SPSA as λ ( 1 + (v^⊤φ(x⁰, ν⁺) − v^⊤φ(x⁰, ν⁻)) / (2Δ) ).
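A rough per-step sketch of the two-critic estimates above follows; `u`, `v` are the critic weights, `f`, `phi` the two feature maps, and all names and interfaces are our own illustrative assumptions rather than the authors' code.

```python
import numpy as np

def two_critic_gradients(u, v, f, phi, x, s, a, x2, s2, c, c_bar,
                         grad_log_pi, nu, lam, beta, gamma):
    """One-step gradient estimates with two linear critics: u @ f for the
    original MDP (cost c) and v @ phi for the augmented MDP (cost c_bar)."""
    eps_k = c + gamma * (u @ f(x2, s2)) - u @ f(x, s)            # TD-error, original MDP
    delta_k = c_bar + gamma * (v @ phi(x2, s2)) - v @ phi(x, s)  # TD-error, augmented MDP
    grad_theta = grad_log_pi * (eps_k + lam * delta_k)   # estimate of grad_theta L
    grad_lam = nu - beta + v @ phi(x, s)                 # estimate of grad_lambda L
    return grad_theta, grad_lam
```

The sub-gradient w.r.t. ν is then estimated with the SPSA difference of `v @ phi(x0, nu ± delta)` as in the expression above.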
Algorithm 3 in Appendix B.3 of [13] contains the pseudo-code of the resulting algorithm.
Algorithm 2 Actor-Critic Algorithms for CVaR Optimization
Input: parameterized policy μ(·|·; θ) and value function feature vector φ(·) (both over the augmented MDP M̄), confidence level α, and loss tolerance β
Initialization: policy parameters θ = θ₀; VaR parameter ν = ν₀; Lagrangian parameter λ = λ₀; value function weight vector v = v₀
// (1) SPSA-based Algorithm:
for k = 0, 1, 2, . . . do
  Draw action a_k ∼ μ(·|x_k, s_k; θ_k); observe cost C̄(x_k, s_k, a_k) (with ν = ν_k);
  Observe next state (x_{k+1}, s_{k+1}) ∼ P̄(·|x_k, s_k, a_k); // note that s_{k+1} = (s_k − C(x_k, a_k))/γ
  TD Error: δ_k = C̄(x_k, s_k, a_k) + γ v_k^⊤ φ(x_{k+1}, s_{k+1}) − v_k^⊤ φ(x_k, s_k)   (16)
  Critic Update: v_{k+1} = v_k + ζ₄(k) δ_k φ(x_k, s_k)   (17)
  ν Update: ν_{k+1} = Γ_ν( ν_k − ζ₃(k) ( λ_k + v_k^⊤ ( φ(x⁰, ν_k + Δ_k) − φ(x⁰, ν_k − Δ_k) ) / (2Δ_k) ) )   (18)
  θ Update: θ_{k+1} = Γ_θ( θ_k − (ζ₂(k)/(1−γ)) ∇_θ log μ(a_k|x_k, s_k; θ) · δ_k )   (19)
  λ Update: λ_{k+1} = Γ_λ( λ_k + ζ₁(k) ( ν_k − β + 1{x_k = x_T}(−s_k)⁺ / ((1−α)(1−γ)) ) )   (20)
  if x_k = x_T (reach a terminal state), then set (x_{k+1}, s_{k+1}) = (x⁰, ν_{k+1})
end for
// (2) Semi Trajectory-based Algorithm:
for k = 0, 1, 2, . . . do
  if x_k ≠ x_T then
    Draw action a_k ∼ μ(·|x_k, s_k; θ_k), observe cost C̄(x_k, s_k, a_k) (with ν = ν_k), and next state (x_{k+1}, s_{k+1}) ∼ P̄(·|x_k, s_k, a_k); update (θ_k, v_k, λ_k) using Eqs. 16, 17, 19, and 20
  else
    Update (θ_k, v_k, λ_k) using Eqs. 16, 17, 19, and 20; update ν as
    ν Update: ν_{k+1} = Γ_ν( ν_k − ζ₃(k) ( λ_k − (λ_k/(1−α)) 1{s_T ≤ 0} ) )   (21)
    Set (x_{k+1}, s_{k+1}) = (x⁰, ν_{k+1})
  end if
end for
return policy and value function parameters θ, ν, λ, v
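As a concrete illustration of the SPSA-based variant, the loop below simulates Algorithm 2 in Python. The environment interface (`env.step`, `env.is_terminal`, `env.x0`), the `policy` object, the projection operators `proj_*`, and the step-size schedules `zeta1..zeta4` are all assumed, hypothetical interfaces; the schedules must additionally satisfy the usual multi-timescale conditions for convergence.

```python
import numpy as np

def ac_cvar_spsa(env, phi, policy, num_steps, alpha, beta, gamma,
                 zeta1, zeta2, zeta3, zeta4, Delta,
                 proj_theta, proj_nu, proj_lam):
    """Sketch of the SPSA-based actor-critic loop of Algorithm 2."""
    theta, nu, lam = policy.theta0, 0.0, 0.0
    v = np.zeros(phi.dim)
    x, s = env.x0, nu
    for k in range(num_steps):
        a = policy.sample(x, s, theta)
        c_bar, x2, s2 = env.step(x, s, a, nu)   # s2 = (s - C(x, a)) / gamma
        # TD error of the augmented MDP (Eq. 16) and critic update (Eq. 17)
        delta_k = c_bar + gamma * (v @ phi(x2, s2)) - v @ phi(x, s)
        v = v + zeta4(k) * delta_k * phi(x, s)
        # nu update via two-sided SPSA (Eq. 18)
        spsa = (v @ phi(env.x0, nu + Delta(k)) -
                v @ phi(env.x0, nu - Delta(k))) / (2.0 * Delta(k))
        nu = proj_nu(nu - zeta3(k) * (lam + spsa))
        # policy update (Eq. 19)
        theta = proj_theta(theta - zeta2(k) / (1.0 - gamma)
                           * policy.grad_log(a, x, s, theta) * delta_k)
        # Lagrange multiplier update (Eq. 20)
        g_lam = nu - beta
        terminal = env.is_terminal(x)
        if terminal:
            g_lam += max(-s, 0.0) / ((1.0 - alpha) * (1.0 - gamma))
        lam = proj_lam(lam + zeta1(k) * g_lam)
        # reset the augmented state at terminal states
        x, s = (env.x0, nu) if terminal else (x2, s2)
    return theta, nu, lam, v
```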
6 Experimental Results
We consider an optimal stopping problem in which the state at each time step k ≤ T consists of the cost c_k and the time k, i.e., x = (c_k, k), where T is the stopping time. The agent (buyer) should decide either to accept the present cost or wait. If she accepts, or when k = T, the system reaches a terminal state and the cost c_k is received; otherwise, she receives the cost p_h and the new state is (c_{k+1}, k+1), where c_{k+1} is f_u c_k w.p. p and f_d c_k w.p. 1 − p (f_u > 1 and f_d < 1 are constants). Moreover, there is a discount factor γ ∈ (0, 1) to account for the increase in the buyer's affordability. The problem is described in more detail in Appendix C of [13]. Note that if we change cost to reward and minimization to maximization, this is exactly the American option pricing problem, a standard testbed for evaluating risk-sensitive algorithms (e.g., [26]). Since the state space is continuous, finding an exact solution via DP is infeasible, and thus approximation and sampling techniques are required.
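For concreteness, a minimal simulator of this cost process (our own sketch; parameter and function names are illustrative) rolls out one episode and returns the discounted cumulative cost:

```python
import random

def simulate_stopping(policy, c0, T, p, fu, fd, ph, gamma):
    """Roll out one episode of the optimal stopping problem and
    return the discounted cumulative cost D(x0)."""
    c, total, disc = c0, 0.0, 1.0
    for k in range(T + 1):
        if k == T or policy((c, k)) == "accept":
            total += disc * c          # terminal: the current cost is received
            break
        total += disc * ph             # cost of waiting one more step
        c = fu * c if random.random() < p else fd * c
        disc *= gamma
    return total
```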
We compare the performance of our risk-sensitive policy gradient Algorithm 1 (PG-CVaR) and the two actor-critic Algorithms 2 (AC-CVaR-SPSA, AC-CVaR-Semi-Traj) with their risk-neutral counterparts (PG and AC) (see Appendix C of [13] for the details of these experiments). Figure 1 shows the distribution of the discounted cumulative cost D^θ(x⁰) for the policy θ learned by each of these algorithms. The results indicate that the risk-sensitive algorithms yield a higher expected loss, but less variance, compared to the risk-neutral methods. More precisely, the loss distributions of the risk-sensitive algorithms have a lower right tail than their risk-neutral counterparts. Table 1 summarizes the performance of these algorithms. The numbers reiterate what we concluded from Figure 1.
[Figure 1 shows four histograms of probability vs. reward; the legends compare Mean-CVaR (and Mean-CVaR SPSA) against Mean policies.]
Figure 1: Loss distributions for the policies learned by the risk-sensitive and risk-neutral policy gradient and actor-critic algorithms. The two left figures correspond to the PG methods, and the two right figures correspond to the AC algorithms. In all cases, the loss tolerance equals β = 40.
                        E(D^θ(x⁰))   σ(D^θ(x⁰))   CVaR(D^θ(x⁰))
PG                        16.08        17.53         69.18
PG-CVaR                   19.75         7.06         25.75
AC                        16.96        32.09        122.61
AC-CVaR-SPSA              22.86         3.40         31.36
AC-CVaR-Semi-Traj.        23.01         4.98         34.81

Table 1: Performance comparison for the policies learned by the risk-sensitive and risk-neutral algorithms.
7 Conclusions and Future Work
We proposed novel policy gradient and actor-critic (AC) algorithms for CVaR optimization in MDPs. We provided proofs of convergence (in [13]) to locally risk-sensitive optimal policies for the proposed algorithms. Further, using an optimal stopping problem, we observed that our algorithms result in policies whose loss distributions have a lower right tail compared to their risk-neutral counterparts. This is extremely important for a risk-averse decision-maker, especially if the right tail contains catastrophic losses. Future work includes: 1) providing convergence proofs for our AC algorithms when the samples are generated by following the policy and not from its discounted visiting distribution, 2) using importance sampling methods [2, 27] to improve gradient estimates in the right tail of the loss distribution (worst-case events that are observed with low probability) of the CVaR objective function, and 3) evaluating our algorithms in more challenging problems.
Acknowledgement The authors would like to thank Professor Marco Pavone and Lucas Janson
for their comments that helped us with some technical details in the proofs of the algorithms.
References
[1] P. Artzner, F. Delbaen, J. Eber, and D. Heath. Coherent measures of risk. Journal of Mathematical Finance, 9(3):203–228, 1999.
[2] O. Bardou, N. Frikha, and G. Pagès. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications, 15(3):173–210, 2009.
[3] N. Bäuerle and J. Ott. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361–379, 2011.
[4] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[5] S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. Systems & Control Letters, 59(12):760–766, 2010.
[6] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Incremental natural actor-critic algorithms. In Proceedings of Advances in Neural Information Processing Systems 20, pages 105–112, 2008.
[7] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, 2009.
[8] S. Bhatnagar, H. Prasad, and L.A. Prashanth. Stochastic Recursive Algorithms for Optimization, volume 434. Springer, 2013.
[9] K. Boda and J. Filar. Time consistent dynamic risk measures. Mathematical Methods of Operations Research, 63(1):169–186, 2006.
[10] V. Borkar. A sensitivity formula for the risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters, 44:339–346, 2001.
[11] V. Borkar. Q-learning for risk-sensitive control. Mathematics of Operations Research, 27:294–311, 2002.
[12] V. Borkar and R. Jain. Risk-constrained Markov decision processes. IEEE Transactions on Automatic Control, 2014.
[13] Y. Chow, M. Ghavamzadeh, L. Janson, and M. Pavone. Algorithms for CVaR optimization in MDPs. arXiv:1406.3339, 2014.
[14] J. Filar, L. Kallenberg, and H. Lee. Variance-penalized Markov decision processes. Mathematics of Operations Research, 14(1):147–161, 1989.
[15] J. Filar, D. Krass, and K. Ross. Percentile performance criteria for limiting average Markov decision processes. IEEE Transactions on Automatic Control, 40(1):2–10, 1995.
[16] R. Howard and J. Matheson. Risk sensitive Markov decision processes. Management Science, 18(7):356–369, 1972.
[17] L.A. Prashanth and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. In Proceedings of Advances in Neural Information Processing Systems 26, pages 252–260, 2013.
[18] H. Markowitz. Portfolio Selection: Efficient Diversification of Investment. John Wiley and Sons, 1959.
[19] T. Morimura, M. Sugiyama, M. Kashima, H. Hachiya, and T. Tanaka. Nonparametric return distribution approximation for reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, pages 799–806, 2010.
[20] J. Ott. A Markov Decision Model for a Surveillance Application and Risk-Sensitive Markov Decision Processes. PhD thesis, Karlsruhe Institute of Technology, 2010.
[21] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proceedings of the Sixteenth European Conference on Machine Learning, pages 280–291, 2005.
[22] M. Petrik and D. Subramanian. An approximate solution method for large risk-averse Markov decision processes. In Proceedings of the 28th International Conference on Uncertainty in Artificial Intelligence, 2012.
[23] R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 26:1443–1471, 2002.
[24] M. Sobel. The variance of discounted Markov decision processes. Applied Probability, pages 794–802, 1982.
[25] J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
[26] A. Tamar, D. Di Castro, and S. Mannor. Policy gradients with variance related risk criteria. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, pages 387–396, 2012.
[27] A. Tamar, Y. Glassner, and S. Mannor. Policy gradients beyond expectations: Conditional value-at-risk. arXiv:1404.3862v1, 2014.
Sparse Multi-Task Reinforcement Learning

Daniele Calandriello*        Alessandro Lazaric*
Team SequeL
INRIA Lille – Nord Europe, France

Marcello Restelli†
DEIB
Politecnico di Milano, Italy
Abstract
In multi-task reinforcement learning (MTRL), the objective is to simultaneously
learn multiple tasks and exploit their similarity to improve the performance w.r.t.
single-task learning. In this paper we investigate the case when all the tasks can
be accurately represented in a linear approximation space using the same small
subset of the original (large) set of features. This is equivalent to assuming that
the weight vectors of the task value functions are jointly sparse, i.e., the set of
their non-zero components is small and it is shared across tasks. Building on existing results in multi-task regression, we develop two multi-task extensions of the
fitted Q-iteration algorithm. While the first algorithm assumes that the tasks are
jointly sparse in the given representation, the second one learns a transformation
of the features in the attempt of finding a more sparse representation. For both
algorithms we provide a sample complexity analysis and numerical simulations.
1 Introduction
Reinforcement learning (RL) and approximate dynamic programming (ADP) [24, 2] are effective
approaches to solve the problem of decision-making under uncertainty. Nonetheless, they may fail
in domains where a relatively small amount of samples can be collected (e.g., in robotics where
samples are expensive or in applications where human interaction is required, such as in automated
rehabilitation). Fortunately, the lack of samples can be compensated by leveraging on the presence
of multiple related tasks (e.g., different users). In this scenario, usually referred to as multi-task reinforcement learning (MTRL), the objective is to simultaneously solve multiple tasks and exploit their
similarity to improve the performance w.r.t. single-task learning (we refer to [26] and [15] for a comprehensive review of the more general setting of transfer RL). In this setting, many approaches have
been proposed, which mostly differ for the notion of similarity leveraged in the multi-task learning
process. In [28] the transition and reward kernels of all the tasks are assumed to be generated from
a common distribution and samples from different tasks are used to estimate the generative distribution and, thus, improving the inference on each task. A similar model, but for value functions, is
proposed in [16], where the parameters of all the different value functions are assumed to be drawn
from a common distribution. In [23] different shaping function approaches for Q-table initialization
are considered and empirically evaluated, while a model-based approach that estimates statistical information on the distribution of the Q-values is proposed in [25]. Similarity at the level of the MDPs
is also exploited in [17], where samples are transferred from source to target tasks. Multi-task reinforcement learning approaches have been also applied in partially observable environments [18].
In this paper we investigate the case when all the tasks can be accurately represented in a linear
approximation space using the same small subset of the original (large) set of features. This is
equivalent to assuming that the weight vectors of the task value functions are jointly sparse, i.e., the
set of their non-zero components is small and it is shared across tasks. Let us illustrate the concept
of shared sparsity using the blackjack card game. The player can rely on a very large number of
features such as: value and color of the cards in the player's hand, value and color of the cards on
*{daniele.calandriello,alessandro.lazaric}@inria.fr
†{marcello.restelli}@polimi.it
the table and/or already discarded, different scoring functions for the player's hand (e.g., sum of the values of the cards), and so on. The more the features, the more likely it is that the corresponding feature space could accurately represent the optimal value function. Nonetheless, depending on the rules of the game (i.e., the reward and dynamics), a very limited subset of features actually contributes to the value of a state, and we expect the optimal value function to display a high level of sparsity. Furthermore, if we consider multiple tasks differing for the behavior of the dealer (e.g., the value at which she stays) or slightly different rule sets, we may expect such sparsity to be shared across tasks. For instance, if the game uses an infinite number of decks, features based on the history of the cards played in previous hands have no impact on the optimal policy for any task and the corresponding value functions are all jointly sparse in this representation. Building on this intuition, in this paper we first introduce the notion of sparse MDPs in Section 3. Then we rely on existing results in multi-task regression [19, 1] to develop two multi-task extensions of the fitted Q-iteration algorithm (Sections 4 and 5) and we study their theoretical and empirical performance (Section 6). An extended description of the results, as well as the full proofs of the statements, are reported in [5].
2 Preliminaries
Multi-Task Reinforcement Learning (MTRL). A Markov decision process (MDP) is a tuple M = (X, A, R, P, γ), where the state space X is a bounded subset of the Euclidean space, the action space A is finite (i.e., |A| < ∞), R : X × A → [0, 1] is the reward of a state-action pair, P : X × A → P(X) is the transition distribution over the states achieved by taking an action in a given state, and γ ∈ (0, 1) is a discount factor. A deterministic policy π : X → A is a mapping from states to actions. We denote by B(X × A; b) the set of measurable bounded state-action functions f : X × A → [−b; b]. Solving an MDP corresponds to computing the optimal action-value function Q* ∈ B(X × A; Q_max = 1/(1−γ)), defined as the fixed point of the optimal Bellman operator T defined as T Q(x, a) = R(x, a) + γ Σ_y P(y|x, a) max_{a'} Q(y, a'). The optimal policy is obtained as the greedy policy w.r.t. the optimal value function as π*(x) = arg max_{a∈A} Q*(x, a). In this paper we study the multi-task reinforcement learning (MTRL) setting where the objective is to solve T tasks, defined as M_t = (X, A, P_t, R_t, γ) with t ∈ [T] = {1, . . . , T}, with the same state-action space but different dynamics and rewards. The objective of MTRL is to exploit similarities between tasks to improve the performance w.r.t. single-task learning. In particular, we choose linear fitted Q-iteration as the single-task baseline and we propose multi-task extensions tailored to exploit the sparsity in the structure of the tasks.
input: input sets S_t = {x_i}_{i=1}^{n_x} for t = 1, . . . , T; tol; K
Initialize W⁰ ← 0, k = 0
do
  k ← k + 1
  for a = 1, . . . , |A| do
    for t = 1, . . . , T and i = 1, . . . , n_x do
      Sample r^k_{i,a,t} = R_t(x_{i,t}, a) and y^k_{i,a,t} ∼ P_t(·|x_{i,t}, a)
      Compute z^k_{i,a,t} = r^k_{i,a,t} + γ max_{a'} Q̃^k_t(y^k_{i,a,t}, a')
    end for
    Build datasets D^k_{a,t} = {(x_{i,t}, a), z^k_{i,a,t}}_{i=1}^{n_x}
    Compute Ŵ^k_a on {D^k_{a,t}}_{t=1}^T (see Eqs. 2, 5, or 8)
  end for
while max_a ||W^k_a − W^{k−1}_a||₂ ≥ tol and k < K

Figure 1: Linear FQI with fixed design and fresh samples at each iteration in a multi-task setting.

Linear Fitted Q-iteration. Whenever X and A are large or continuous, we need to resort to approximation schemes to learn a near-optimal policy. One of the most popular ADP methods is the fitted Q-iteration (FQI) algorithm [7], which extends value iteration to approximate action-value functions. While exact value iteration proceeds by iterative applications of the Bellman operator (i.e., Q^k = T Q^{k−1}), at each iteration FQI approximates T Q^{k−1} by solving a regression problem. Among possible instances, here we focus on a specific implementation of FQI in the fixed design setting with
linear approximation and we assume access to a generative model of the MDP. Since the action space A is finite, we represent action-value functions as a collection of |A| independent state-value functions. We introduce a d_x-dimensional state-feature vector φ(·) = [φ₁(·), . . . , φ_{d_x}(·)]^⊤ with φ_i : X → R such that sup_x ||φ(x)||₂ ≤ L. From φ we obtain a linear approximation space for action-value functions as F = {f_w(x, a) = φ(x)^⊤ w_a, x ∈ X, a ∈ A, w_a ∈ R^{d_x}}. FQI receives as input a fixed set of states S = {x_i}_{i=1}^{n_x} (fixed design setting) and the space F. Starting from w⁰ = 0, at each iteration k, FQI first draws a (fresh) set of samples (r^k_{i,a}, y^k_{i,a})_{i=1}^{n_x} from the generative model of the MDP for each action a on each of the states {x_i}_{i=1}^{n_x} (i.e., r^k_{i,a} = R(x_i, a) and y^k_{i,a} ∼ P(·|x_i, a)) and builds |A| independent training sets D^k_a = {(x_i, a), z^k_{i,a}}_{i=1}^{n_x}, where z^k_{i,a} = r^k_{i,a} + γ max_{a'} Q̂^{k−1}(y^k_{i,a}, a') is an unbiased sample of T Q̂^{k−1} and Q̂^{k−1}(y^k_{i,a}, a') is computed using the weight vector learned at the previous iteration as φ(y^k_{i,a}, a')^⊤ w^{k−1}. Then FQI solves |A| linear regression problems, each fitting the training set D^k_a, and it returns vectors ŵ^k_a, which lead to the new action-value function f_{ŵ^k} with ŵ^k = [ŵ^k_1, . . . , ŵ^k_{|A|}]. At each iteration the total number of samples is n = |A| · n_x. The process is repeated up to K iterations or until no significant change in the weight vector is observed. Since in principle Q̂^{k−1} could be unbounded (due to numerical issues in the regression step), in computing the samples z^k_{i,a} we use a function Q̃^{k−1} obtained by truncating Q̂^{k−1} in [−Q_max; Q_max]. The convergence and the performance of FQI are studied in detail in [20] in the case of bounded approximation spaces, while linear FQI is studied in [17, Thm. 5] and [22, Lemma 5].
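As a minimal single-task sketch of one FQI iteration for one action, the following Python code builds the regression targets and fits them by OLS; the array shapes and names are assumptions of this illustration, not the paper's code.

```python
import numpy as np

def fqi_iteration(Phi, R, Y_next_feats, w_prev, gamma, q_max):
    """One linear FQI iteration for a single action (minimal sketch).

    Phi          : (n_x, d) feature matrix of the input states
    R            : (n_x,) sampled rewards r_{i,a}
    Y_next_feats : (n_x, |A|, d) features of the sampled next states, one per action
    w_prev       : (d, |A|) weights from the previous iteration
    """
    # Previous Q-values at the next states, truncated to [-q_max, q_max],
    # then maximized over actions
    q_next = np.einsum('nad,da->na', Y_next_feats, w_prev)
    q_next = np.clip(q_next, -q_max, q_max).max(axis=1)
    z = R + gamma * q_next                        # regression targets z_{i,a}
    w, *_ = np.linalg.lstsq(Phi, z, rcond=None)   # OLS fit of the target
    return w
```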
When moving to the multi-task setting, we consider different state sets {S_t}_{t=1}^T and we denote by Ŵ^k ∈ R^{d_x×T} the matrix with vector ŵ^k_{a,t} ∈ R^{d_x} as the t-th column. The general structure of FQI in a multi-task setting is reported in Fig. 1. Finally, we introduce the following matrix notation. For any matrix W ∈ R^{d×T}, [W]_t ∈ R^d is the t-th column and [W]^i ∈ R^T the i-th row of the matrix, Vec(W) is the R^{dT} vector obtained by stacking the columns of the matrix, Col(W) is its column-space and Row(W) is its row-space. Besides the ℓ₂- and ℓ₁-norms for vectors, we use the trace (or nuclear) norm ||W||_* = trace((W W^⊤)^{1/2}), the Frobenius norm ||W||_F = (Σ_{i,j} [W]²_{i,j})^{1/2}, and the ℓ_{2,1}-norm ||W||_{2,1} = Σ_{i=1}^d ||[W]^i||₂. We denote by O^d the set of orthonormal matrices, and for any pair of matrices V and W, V ⊥ Row(W) denotes the orthogonality between the spaces spanned by the two matrices.
3 Fitted Q-Iteration in Sparse MDPs
Depending on the regression algorithm employed at each iteration, FQI can be designed to take advantage of different characteristics of the functions at hand, such as smoothness (ℓ₂-regularization) and sparsity (ℓ₁-regularization). In this section we consider the high-dimensional regression scenario and we study the performance of FQI under sparsity assumptions. Let π_w(x) = arg max_a f_w(x, a) be the greedy policy w.r.t. f_w. We start with the following assumption.¹

Assumption 1. For any function f_w ∈ F, the Bellman operator T can be expressed as

T f_w(x, a) = R(x, a) + γ E_{x'∼P(·|x,a)}[f_w(x', π_w(x'))] = φ(x, a)^⊤ w_R + γ φ(x, a)^⊤ P^{π_w}_φ w.   (1)
This assumption implies that F is closed w.r.t. the Bellman operator, since for any f_w, its image T f_w can be computed as the product between the features φ(·, ·) and the vectors of weights w_R and P^{π_w}_φ w. As a result, the optimal value function Q* itself belongs to F and it can be computed as φ(x, a)^⊤ w*. This assumption encodes the intuition that in the high-dimensional feature space F induced by φ, the transition kernel P, and therefore the system dynamics, can be expressed as a linear combination of the features using the matrix P^{π_w}_φ, which depends on both the function f_w and the features φ. This condition is usually satisfied whenever the space F is spanned by a very large set of features that allows it to approximate a wide range of different functions, including the reward and transition kernel. Under this assumption, at each iteration k of FQI, there exists a weight vector w^k such that T Q̂^{k−1} = f_{w^k}, and an approximation of the target function f_{w^k} can be obtained by solving an ordinary least-squares
problem on the samples in D^k_a. Unfortunately, it is well known that OLS fails whenever the number of samples is not sufficient w.r.t. the number of features (i.e., d > n). For this reason, Asm. 1 is often joined with a sparsity assumption. Let J(w) = {i = 1, . . . , d : w_i ≠ 0} be the set of s non-zero components of vector w (i.e., s = |J(w)|) and J^c(w) be the complementary set. In supervised learning, the LASSO [11, 4] is effective in exploiting the sparsity assumption that s ≪ d and dramatically reduces the sample complexity. In RL the idea of sparsity has been successfully integrated into policy evaluation [14, 21, 8, 12] but rarely into the full policy iteration. In value iteration, it can be easily integrated into FQI by approximating the target weight vector w^k_a as

ŵ^k_a = arg min_{w∈R^{d_x}} (1/n_x) Σ_{i=1}^{n_x} ( φ(x_i)^⊤ w − z^k_{i,a} )² + λ||w||₁.   (2)
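In practice the per-action regression in Eq. 2 can be solved with any Lasso solver. A sketch using scikit-learn follows; note that sklearn's Lasso minimizes (1/(2n))||z − Φw||² + α||w||₁, so Eq. 2 maps to α = λ/2 (an assumption of this illustration; names are ours).

```python
from sklearn.linear_model import Lasso

def lasso_fqi_step(Phi, z, lam):
    """Solve Eq. 2 for one action: (1/n)||z - Phi w||^2 + lam ||w||_1."""
    model = Lasso(alpha=lam / 2.0, fit_intercept=False)
    model.fit(Phi, z)
    return model.coef_  # sparse weight vector w_hat of length d
```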
While this integration is technically simple, the conditions on the MDP structure that imply sparsity in the value functions are not fully understood. In fact, one may simply assume that Q* is sparse in F, with s non-zero weights, thus implying that d − s features capture aspects of states and actions that do not have any impact on the actual optimal value function. Nonetheless, this would provide
¹A similar assumption has been previously used in [9], where the transition P is embedded in an RKHS.
no guarantee about the actual level of sparsity encountered by FQI through the iterations, where the target functions f_{w^k} may not be sparse at all. For this reason we need stronger conditions on the structure of the MDP. We state the following assumption (see [10, 6] for similar conditions).

Assumption 2 (Sparse MDPs). There exists a set J (the set of useful features) for MDP M, with |J| = s ≪ d, such that for any i ∉ J and any policy π the rows [P^π_φ]^i are equal to 0, and there exists a function f_{w_R} = R such that J(w_R) ⊆ J.
This assumption implies that not only the reward function is sparse, but also that the features that are useless for the reward have no impact on the dynamics of the system. Since P^π_φ can be seen as a linear representation of the transition kernel embedded in the high-dimensional space F, this assumption corresponds to imposing that the matrix P^π_φ has all the rows corresponding to features outside of J set to 0. This in turn means that the future state-action vector E[φ(x', a')^⊤] = φ(x, a)^⊤ P^π_φ depends only on the features in J. In the blackjack scenario illustrated in the introduction, this assumption is verified by the features related to the history of the cards played so far. In fact, if we consider an infinite number of decks, the feature indicating whether an ace has already been played is not used in the definition of the reward function, it is completely unrelated to the other features, and thus it does not contribute to the optimal value function. An important consideration on this assumption can be derived from a closer look at the sparsity pattern of the matrix P^π_φ. Since the sparsity is required at the level of the rows, this does not mean that the features that do not belong to J have to be equal to 0 after each transition. Instead, their value will be governed simply by the interaction with the features in J. This means that the features outside of J can vary from completely unnecessary features with no dynamics to features that are redundant to those in J in describing the evolution of the system. Additional discussion on this assumption is available in [5]. Assumption 2, together with Asm. 1, leads to the following lemma.

Lemma 1. Under Assumptions 1 and 2, the application of the Bellman operator T to any function f_w ∈ F produces a function f_{w'} = T f_w ∈ F such that J(w') ⊆ J.
This lemma guarantees that at any iteration k of FQI, the target function f_{w^k} = T Q̂^{k−1} has a level of sparsity |J(w^k)| ≤ s. We are now ready to study the performance of LASSO-FQI over iterations. In order to simplify the comparison with the multi-task results in Sections 4 and 5, we analyze the average performance over multiple tasks. We consider that the previous assumptions extend to all the MDPs {M_t}_{t=1}^T, each with a set of useful features J_t and sparsity s_t. The action-value function learned after K iterations is evaluated by comparing the performance of the corresponding greedy policy π^K_t(x) = arg max_a Q^K_t(x, a) to the optimal policy. The performance loss is measured w.r.t. a target distribution μ ∈ P(X × A). We introduce the following standard assumption for LASSO [3].
Assumption 3 (Restricted Eigenvalues (RE)). Define n as the number of samples, and J^c as the complement of the set of indices J. For any s ∈ [d], there exists κ(s) ∈ R⁺ such that:

min { ||Φδ||₂ / (√n ||δ_J||₂) : |J| ≤ s, δ ∈ R^d\{0}, ||δ_{J^c}||₁ ≤ 3||δ_J||₁ } ≥ κ(s).   (3)
Theorem 1 (LASSO-FQI). Let the tasks {M_t}_{t=1}^T and the function space F satisfy Assumptions 1, 2 and 3 with average sparsity s̄ = Σ_t s_t/T, κ_min(s) = min_t κ(s_t), and features bounded as sup_x ||φ(x)||₂ ≤ L. If LASSO-FQI (Alg. 1 with Eq. 2) is run independently on all T tasks for K iterations with a regularizer λ = σ Q_max √(log(d)/n), for any numerical constant σ > 8, then with probability at least (1 − 2d^{1−σ/8})^{KT}, the performance loss is bounded as

(1/T) Σ_{t=1}^T ||Q*_t − Q^{π^K_t}_t||²_{2,μ} ≤ Õ( (1/(1−γ)⁴) · (Q²_max L² s̄ log d) / (κ⁴_min(s) n) ) + γ^K Q²_max.   (4)
Remark 1 (assumptions). Asm. 3 is a relatively weak constraint on the representation capability of the data. The RE assumption is common in regression, and it is extensively analyzed in [27]. Asm. 1 and 2 are specific to our setting and may pose significant constraints on the set of MDPs of interest. Asm. 1 is introduced to give a more explicit interpretation of the notion of sparse MDPs. Without Asm. 1, the bound in Eq. 4 would have an additional approximation error term similar to standard approximate value iteration results (see e.g., [20]). Asm. 2 is a potentially very loose sufficient condition to guarantee that the target functions encountered over the iterations of LASSO-FQI have a minimum level of sparsity. Thm. 1 requires that for any k ≤ K, the target function f_{w^{k+1}_t} = T f_{w^k_t} has weights w^{k+1}_t that are sparse, i.e., max_{t,k} s^k_t ≤ s̄ with s^k_t = |J(w^{k+1}_t)|. In other words, all the target functions encountered must be sparse, or LASSO-FQI could suffer a huge loss at an intermediate step. Such a condition could be obtained under much less restrictive assumptions than Asm. 2, which leaves it up to the MDP dynamics to re-sparsify the target function at each step, at the expense of interpretability. It could be sufficient to prove that the MDP dynamics do not enforce sparsity but simply do not reduce it across iterations, and use guarantees for LASSO reconstruction to maintain sparsity across iterations. Finally, we point out that even if "useless" features do not satisfy Asm. 2 and are weakly correlated with the dynamics and the reward function, their weights are discounted by γ at each step. As a result, the target functions could become "approximately" as sparse as Q* over iterations, and provide enough guarantees to be used for a variation of Thm. 1. We leave a more thorough investigation of these possible relaxations for future work.
4 Group-LASSO Fitted Q-Iteration
After introducing the concept of sparse MDPs in Sect. 3, we move to the multi-task scenario and we study the setting where there exists a suitable representation (i.e., set of features) under which all the tasks can be solved using roughly the same set of features, the so-called shared sparsity assumption. Given the set of useful features J_t for task t, we denote by J = ∪_{t=1}^T J_t the union of all the non-zero coefficients across all the tasks. Similar to Asm. 2 and Lemma 1, we first assume that the set of features "useful" for at least one of the tasks is relatively small compared to d, and then show how this propagates through the iterations.

Assumption 4. We assume that the joint useful features over all the tasks are such that |J| = s̃ ≪ d.

Lemma 2. Under Asm. 2 and 4, at any iteration k, the target weight matrix W^k has |J(W^k)| ≤ s̃.
The Algorithm. In order to exploit the similarity across tasks stated in Asm. 4, we resort to the Group LASSO (GL) algorithm [11, 19], which defines a joint optimization problem over all the tasks. GL is based on the intuition that given the weight matrix W ∈ R^{d×T}, the norm ||W||_{2,1} measures the level of shared sparsity across tasks. In fact, in ||W||_{2,1} the ℓ₂-norm measures the "relevance" of feature i across tasks, while the ℓ₁-norm "counts" the total number of relevant features, which we expect to be small in agreement with Asm. 4. Building on this intuition, we define the GL-FQI algorithm in which at each iteration, for each action a ∈ A, we compute (details about the implementation of GL-FQI are reported in [5, Appendix A])

Ŵ^k_a = arg min_{W_a} Σ_{t=1}^T ||Z^k_{a,t} − Φ_t w_{a,t}||²₂ + λ ||W_a||_{2,1}.   (5)
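When the input states are shared across tasks (S_t = S for all t), Eq. 5 reduces to a standard multi-task Lasso; with per-task design matrices Φ_t a block coordinate solver would be needed instead. A sketch for the shared-design case using scikit-learn's MultiTaskLasso (which minimizes (1/(2n))||Z − ΦW||²_F + α||W||_{2,1}; the α mapping below is an assumption of this illustration, since Eq. 5 carries no 1/n factor):

```python
from sklearn.linear_model import MultiTaskLasso

def gl_fqi_step(Phi, Z, lam):
    """Solve Eq. 5 for one action when all tasks share the same states.

    Phi : (n_x, d) shared feature matrix
    Z   : (n_x, T) regression targets, one column per task
    Returns W_hat of shape (d, T) with entire rows jointly shrunk to zero.
    """
    n = Phi.shape[0]
    model = MultiTaskLasso(alpha=lam / (2.0 * n), fit_intercept=False)
    model.fit(Phi, Z)
    return model.coef_.T  # sklearn stores the weights as (T, d)
```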
Theoretical Analysis. The regularization of GL-FQI is designed to take advantage of the shared-sparsity assumption at each iteration, and in this section we show that this may reduce the sample complexity w.r.t. using LASSO in FQI for each task separately. Before reporting the analysis of GL-FQI, we need to introduce a technical assumption defined in [19] for GL.

Assumption 5 (Multi-Task Restricted Eigenvalues). Define Φ as the block diagonal matrix composed of the T sample matrices Φ_t. For any s ∈ [d], there exists κ(s) ∈ R⁺ s.t.

min { ||Φ Vec(δ)||₂ / (√(nT) ||Vec(δ_J)||₂) : |J| ≤ s, δ ∈ R^{d×T}\{0}, ||δ_{J^c}||_{2,1} ≤ 3||δ_J||_{2,1} } ≥ κ(s).   (6)
Similar to Theorem 1, we evaluate the performance of GL-FQI as the performance loss of the returned policy w.r.t. the optimal policy and we obtain the following performance guarantee.

Theorem 2 (GL-FQI). Let the tasks {M_t}_{t=1}^T and the function space F satisfy Assumptions 1, 2, 4, and 5 with joint sparsity s̃ and features bounded as sup_x ||φ(x)||₂ ≤ L. If GL-FQI (Alg. 1 with Eq. 5) is run jointly on all T tasks for K iterations with a regularizer

λ = (L Q_max / √(nT)) ( 1 + (log d)^{3/2+δ} / √T )^{1/2},

for any numerical constant δ > 0, then with probability at least (1 − log(d)^{−δ})^K, the performance loss is bounded as

(1/T) Σ_{t=1}^T ||Q*_t − Q^{π^K_t}_t||²_{2,μ} ≤ Õ( (1/(1−γ)⁴) · (L² Q²_max s̃) / (κ⁴(2s̃) n) · ( 1 + (log d)^{3/2+δ} / √T ) ) + γ^K Q²_max.   (7)
Remark 2 (comparison with LASSO-FQI). Ignoring all the terms common to the two methods, constants, and logarithmic factors, we can summarize the bounds of LASSO-FQI and GL-FQI as Õ(s̄ log(d)/n) and Õ((s̃/n)(1 + log(d)/√T)), respectively. The first interesting aspect of the bound of GL-FQI is the role played by the number of tasks T. In LASSO-FQI the "cost" of discovering the s_t useful features is a factor log d, while GL-FQI has a factor 1 + log(d)/√T, which decreases with the number of tasks. This illustrates the advantage of the multi-task learning dimension of GL-FQI, where the samples of all tasks contribute to discovering the useful features, so that the more the tasks, the smaller the cost. In the limit, we notice that when T → ∞, the bound for GL-FQI does not depend on the dimensionality of the problem anymore. The other critical aspect of the bounds is the difference between s̄ and s̃. In fact, max_t s_t ≤ s̃ ≤ d, and if the shared-sparsity assumption does not hold, we can construct cases where the number of non-zero features s_t is very small for each task, but the union J = ∪_t J_t is still a full set, so that s̃ ≈ d. In this case, GL-FQI cannot leverage the shared sparsity across tasks and it may perform significantly worse than LASSO-FQI. This is the well-known negative transfer effect that happens whenever the wrong assumption over tasks is enforced, thus worsening the single-task learning performance.
5 Feature Learning Fitted Q-Iteration
Unlike other properties such as smoothness, the sparsity of a function is intrinsically related to the specific representation used to approximate it (i.e., the function space F). While Asm. 2 guarantees that F induces sparsity for each task separately, Asm. 4 requires that all the tasks share the same useful features in the given representation. As discussed in Rem. 2, whenever this is not the case, GL-FQI may perform worse than LASSO-FQI. In this section we investigate an alternative notion of sparsity in MDPs and we introduce the Feature Learning fitted Q-iteration (FL-FQI) algorithm.

Low-Rank Approximation. Since the poor performance of GL-FQI is due to the chosen representation (i.e., features), it is natural to ask whether there exists an alternative representation (i.e., different features) inducing a higher level of shared sparsity. Let us assume that there exists a space F̃ defined by features φ̃ such that the weight matrix of the optimal Q-functions A* ∈ R^{d×T} is such that |J(A*)| = s* ≪ d. As shown in Lemma 2, together with Asm. 2 and 4, this guarantees that at any iteration |J(A^k)| ≤ s*. Given the sets of states {S_t}_{t=1}^T, let Φ and Φ̃ be the feature matrices obtained by evaluating φ and φ̃ on the states. We assume that there exists a linear transformation of the features of F̃ to the features of F such that Φ = Φ̃U with U ∈ R^{d_x×d_x}. In this setting the samples used to define the regression problem can be formulated as noisy observations of Φ̃A^k_a for any action a. Together with the transformation U, this implies that there exists a weight matrix W^k_a such that Φ̃A^k_a = Φ̃UU^{−1}A^k_a = ΦW^k_a with W^k_a = U^{−1}A^k_a. Although A^k_a is indeed sparse, any attempt to learn W^k_a using GL would fail, since W^k_a may have a very low level of sparsity. On the other hand, an algorithm able to learn a suitable transformation U may be able to recover the representation φ̃ (and the corresponding space F̃) and exploit the high level of sparsity of A^k_a. While this additional step of representation (or feature) learning introduces additional complexity, it allows us to relax the strict assumption on the joint sparsity s̃ and may improve the performance of GL-FQI. Our assumption is formulated as follows.

Assumption 6. There exists an orthogonal matrix U ∈ O^d (a block diagonal matrix having matrices {U_a ∈ O^{d_x}} on the diagonal) such that the weight matrix A* obtained as A* = U^{−1}W* is jointly sparse, i.e., it has a set of "useful" features J(A*) = ∪_{t=1}^T J([A*]_t) with |J(A*)| = s* ≪ d.
Coherently with this assumption, we adapt the multi-task feature learning (MTFL) algorithm defined in [1] and at each iteration k, for any action a, we solve the optimization problem

(Û^k_a, Â^k_a) = arg min_{U_a∈O^d} min_{A_a∈R^{d×T}} Σ_{t=1}^T ||Z^k_{a,t} − Φ_t U_a [A_a]_t||² + λ ||A||_{2,1}.   (8)
In order to better characterize the solution to this optimization problem, we study in more detail the relationship between A* and W* and analyze the two directions of the equality A* = U^{−1}W*. When A* has s* non-zero rows, then any orthonormal transformation W* will have at most rank r* = s*. This suggests that instead of solving the joint optimization problem in Eq. 8 and explicitly recovering the transformation U, we may directly solve for low-rank weight matrices W. Then we need to show that a low-rank W* does indeed imply the existence of a transformation to a jointly-sparse matrix A*. Assume W* has low rank r*. It is then possible to perform a standard singular
value decomposition W* = UΣV^⊤ = UA*. Because Σ is diagonal with r* non-zero entries, A* will have r* non-zero rows, thus being jointly sparse. It is possible to derive the following equivalence.

Proposition 1 ([5, Appendix A]). Given A, W ∈ R^{d×T} and U ∈ O^d, the following equality holds, with the relationship between the optimal solutions being W* = UA*:

min_{A,U} Σ_{t=1}^T ||Z^k_{a,t} − Φ_t U_a [A_a]_t||² + λ||A||_{2,1} = min_W Σ_{t=1}^T ||Z^k_{a,t} − Φ_t [W_a]_t||² + λ||W||_*.   (9)
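The right-hand side of Eq. 9 can be solved, for instance, by proximal gradient descent, where the proximal operator of the trace norm is singular-value soft-thresholding. A minimal sketch follows (our own, with illustrative names; the fixed step size is assumed to satisfy the usual Lipschitz condition for convergence):

```python
import numpy as np

def svt(W, tau):
    """Singular-value soft-thresholding: prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_regression(Phis, Zs, lam, step=1e-3, n_iter=500):
    """Proximal gradient for min_W sum_t ||Z_t - Phi_t W_t||^2 + lam ||W||_*."""
    d, T = Phis[0].shape[1], len(Phis)
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # gradient of the smooth part, column by column
        G = np.stack([2.0 * Phis[t].T @ (Phis[t] @ W[:, t] - Zs[t])
                      for t in range(T)], axis=1)
        W = svt(W - step * G, step * lam)
    return W
```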
The previous proposition states the equivalence between solving a feature-learning version of GL and solving a nuclear-norm (or trace-norm) regularized problem. This penalty is equivalent to an ℓ₁-norm penalty on the singular values of the matrix W, thus forcing W to have low rank. Notice that assuming that W* has low rank can also be interpreted as the fact that either the task weights [W*]_t or the feature weights [W*]^i are linearly correlated. In the first case, it means that there is a dictionary of core tasks that is able to reproduce all the other tasks as linear combinations. As a result, Assumption 6 can be reformulated as Rank(W*) = s*. Building on this intuition we define the FL-FQI algorithm, where the regression is carried out according to Eq. 9.
Theoretical Analysis. Our aim is to obtain a bound similar to Theorem 2 for the new FL-FQI algorithm. We begin by introducing a slightly different assumption on the data available for regression.

Assumption 7 (Restricted Strong Convexity). Under Assumption 6, let W* = UDV^⊤ be a singular value decomposition of the optimal matrix W* of rank r, and U^r, V^r the submatrices associated with the top r singular values. Define B = {Δ ∈ R^{d×T} : Row(Δ) ⊥ U^r and Col(Δ) ⊥ V^r}, and the projection operator Π_B onto this set. There exists a positive constant κ such that

min { ||Φ Vec(Δ)||²₂ / (2nT ||Vec(Δ)||²₂) : Δ ∈ R^{d×T}, ||Π_B(Δ)||₁ ≤ 3||Δ − Π_B(Δ)||₁ } ≥ κ.   (10)
Theorem 3 (FL-FQI). Let the tasks {M_t}_{t=1}^T and the function space F satisfy Assumptions 1, 2, 6, and 7 with rank s*, features bounded as sup_x ||φ(x)||₂ ≤ L, and T > Ω(log n). If FL-FQI (Alg. 1 with Eq. 8) is run jointly on all T tasks for K iterations with a regularizer λ ≥ 2LQ_max √((d + T)/n), then with probability at least Ω((1 − exp{−(d + T)})^K), the performance loss is bounded as

(1/T) Σ_{t=1}^T ||Q*_t − Q^{π^K_t}_t||²_{2,μ} ≤ Õ( (1/(1−γ)⁴) · (Q²_max L⁴ s*/κ²) · (1/n)(1 + d/T) ) + γ^K Q²_max.
Remark 3 (comparison with GL-FQI). Unlike GL-FQI, the performance of FL-FQI does not depend on the shared sparsity s̃ of W* but on its rank, that is, the value s* of the most jointly-sparse representation that can be obtained through an orthogonal transformation U of the features. Whenever the tasks are somehow linearly dependent, even if the weight matrix W* is dense and s̃ ≈ d, the rank s* can be small, thus guaranteeing a dramatic improvement over GL-FQI. On the other hand, learning a new representation comes at the cost of a worse dependency on d. In fact, the term log(d)/√T in GL-FQI becomes d/T, implying that many more tasks are needed for FL-FQI to construct a suitable representation. This is not surprising since we introduced a d × d matrix U in the optimization problem and a larger number of parameters needs to be learned. As a result, although significantly reduced by the use of the trace norm instead of ℓ_{2,1}-regularization, the negative transfer is not completely removed. In particular, the introduction of new tasks that are not linear combinations of the previous tasks may again increase the rank s*, corresponding to the fact that no jointly-sparse representation can be constructed.
6 Experiments
We investigate the empirical performance of GL-FQI and FL-FQI and compare their results to single-task LASSO-FQI in two variants of the blackjack game. In the first variant (reduced variant) the player can choose to hit, to obtain a new card, or stay, to end the episode, while in the second one (full variant) she can also choose to double the bet on the first turn. Different tasks can be defined depending on several parameters of the game, such as the number of decks, the threshold at which the dealer stays, and whether she hits when the threshold is reached exactly with a soft hand.

Full variant experiment. The tasks are generated by selecting 2, 4, 6, 8 decks, by setting the stay threshold at {16, 17}, and by whether the dealer hits on soft, for a total of 16 tasks. We define a very
[Figure 2 shows two plots of the average house edge (HE) versus the number of samples n, with curves for GL-FQI, FL-FQI, and Lasso-FQI.]
Figure 2: Comparison of FL-FQI, GL-FQI and LASSO-FQI on the full (left) and reduced (right) variants. The y-axis is the average house edge (HE) computed across tasks.
rich description of the state space with the objective of satisfying Asm. 1. At the same time this is
likely to come with a large number of useless features, which makes it suitable for sparsification.
In particular, we include the player hand value, indicator functions for each possible player hand
value and dealer hand value, and a large description of the cards not dealt yet (corresponding to
the history of the game), under the form of indicator functions for various ranges. In total, the
representation contains d = 212 features. We notice that although none of the features is completely
useless (according to the definition in Asm. 2), the features related with the history of the game
are unlikely to be very useful for most of the tasks defined in this experiment. We collect samples
from up to 5000 episodes, although they may not be representative enough given the large state
space of all possible histories that the player can encounter and the high stochasticity of the game.
The evaluation is performed by simulating the learned policy for 2,000,000 episodes and computing
the average House Edge (HE) across tasks. For each algorithm we report the performance for the
best regularization parameter λ in the range {2, 5, 10, 20, 50}. Results are reported in Fig. 2 (left).
Although the set of features is quite large, we notice that all the algorithms succeed in learning a good policy even with relatively few samples, showing that all of them can take advantage of the sparsity of the representation. In particular, GL-FQI exploits the fact that all 16 tasks share the same useless features (although the sets of useful features may not overlap entirely) and its performance is the best. FL-FQI suffers from the increased complexity of representation learning, which in this case does not lead to any benefit since the initial representation is sparse, but it performs on par with LASSO-FQI.
Reduced variant experiment. We consider a representation for which we expect the weight matrix
to be dense. In particular, we only consider the value of the player?s hand and of the dealer?s hand and
we generate features as the Cartesian product of these two discrete variables plus a feature indicating
whether the hand is soft, for a total of 280 features. Similar to the previous setting, the tasks are
generated with 2, 4, 6, 8 decks, whether the dealer hits on soft, and a larger number of stay thresholds
in {15, 16, 17, 18}, for a total of 32 tasks. We used regularizers in the range {0.1, 1, 2, 5, 10}. Since
the history is not included, the different number of decks influences only the probability distribution
of the totals. Moreover, limiting the actions to either hit or stay further increases the similarity
among tasks. Therefore, we expect to be able to find a dense, low-rank solution. The results in Fig. 2 (right) confirm this conjecture, with FL-FQI performing significantly better than the other methods. In addition, GL-FQI and LASSO-FQI perform similarly, since the dense representation penalizes both single-task and shared sparsity; in fact, both methods favor low values of λ, meaning that the sparsity-inducing penalties are not effective.
7 Conclusions
We studied the multi-task reinforcement learning problem under shared-sparsity assumptions across the tasks. GL-FQI extends the FQI algorithm by introducing a Group-LASSO step at each iteration, and it leverages the fact that all the tasks are expected to share the same small set of useful features to improve over single-task learning. Whenever the assumption is not valid, GL-FQI may perform worse than LASSO-FQI. With FL-FQI we take a step further and learn a transformation of the given representation that can guarantee a higher level of shared sparsity. Future work will focus on relaxing the theoretical assumptions and on studying alternative multi-task regularization formulations such as those in [29] and [13].
Acknowledgments This work was supported by the French Ministry of Higher Education and Research, the
European Community?s Seventh Framework Programme under grant agreement 270327 (project CompLACS),
and the French National Research Agency (ANR) under project ExTra-Learn n.ANR-14-CE24-0010-01.
References
[1] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[2] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[3] Peter J. Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, pages 1705–1732, 2009.
[4] Peter Bühlmann and Sara van de Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, 1st edition, 2011.
[5] Daniele Calandriello, Alessandro Lazaric, and Marcello Restelli. Sparse multi-task reinforcement learning. https://hal.inria.fr/hal-01073513, 2014.
[6] A. Castelletti, S. Galelli, M. Restelli, and R. Soncini-Sessa. Tree-based feature selection for dimensionality reduction of large-scale control systems. In IEEE ADPRL, 2011.
[7] Damien Ernst, Pierre Geurts, Louis Wehenkel, and Michael L. Littman. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(4), 2005.
[8] Mohammad Ghavamzadeh, Alessandro Lazaric, Rémi Munos, Matt Hoffman, et al. Finite-sample analysis of Lasso-TD. In ICML, 2011.
[9] Steffen Grunewalder, Guy Lever, Luca Baldassarre, Massimiliano Pontil, and Arthur Gretton. Modelling transition dynamics in MDPs with RKHS embeddings. In ICML, 2012.
[10] H. Hachiya and M. Sugiyama. Feature selection for reinforcement learning: Evaluating implicit state-reward dependency via conditional mutual information. In ECML PKDD, 2010.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2009.
[12] M. Hoffman, A. Lazaric, M. Ghavamzadeh, and R. Munos. Regularized least squares temporal difference learning with nested ℓ2 and ℓ1 penalization. In EWRL, pages 102–114, 2012.
[13] Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert. Group lasso with overlap and graph lasso. In ICML, pages 433–440. ACM, 2009.
[14] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML, 2009.
[15] A. Lazaric. Transfer in reinforcement learning: a framework and a survey. In M. Wiering and M. van Otterlo, editors, Reinforcement Learning: State of the Art. Springer, 2011.
[16] Alessandro Lazaric and Mohammad Ghavamzadeh. Bayesian multi-task reinforcement learning. In ICML, 2010.
[17] Alessandro Lazaric and Marcello Restelli. Transfer from multiple MDPs. In NIPS, 2011.
[18] Hui Li, Xuejun Liao, and Lawrence Carin. Multi-task reinforcement learning in partially observable stochastic environments. Journal of Machine Learning Research, 10:1131–1186, 2009.
[19] Karim Lounici, Massimiliano Pontil, Sara van de Geer, Alexandre B. Tsybakov, et al. Oracle inequalities and optimal inference under group sparsity. The Annals of Statistics, 39(4):2164–2204, 2011.
[20] Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. The Journal of Machine Learning Research, 9:815–857, 2008.
[21] C. Painter-Wakefield and R. Parr. Greedy algorithms for sparse reinforcement learning. In ICML, 2012.
[22] Bruno Scherrer, Victor Gabillon, Mohammad Ghavamzadeh, and Matthieu Geist. Approximate modified policy iteration. In ICML, 2012.
[23] Matthijs Snel and Shimon Whiteson. Multi-task reinforcement learning: Shaping and feature selection. In EWRL, September 2011.
[24] Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
[25] F. Tanaka and M. Yamamura. Multitask reinforcement learning on the distribution of MDPs. In CIRA 2003, pages 1108–1113, 2003.
[26] Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(1):1633–1685, 2009.
[27] Sara A. van de Geer, Peter Bühlmann, et al. On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[28] A. Wilson, A. Fern, S. Ray, and P. Tadepalli. Multi-task reinforcement learning: A hierarchical Bayesian approach. In ICML, pages 1015–1022, 2007.
[29] Yi Zhang and Jeff G. Schneider. Learning multiple tasks with a sparse matrix-normal penalty. In NIPS, pages 2550–2558, 2010.
Yunpeng Pan and Evangelos A. Theodorou
Daniel Guggenheim School of Aerospace Engineering
Institute for Robotics and Intelligent Machines
Georgia Institute of Technology
Atlanta, GA 30332
ypan37@gatech.edu, evangelos.theodorou@ae.gatech.edu
Abstract
We present a data-driven, probabilistic trajectory optimization framework for systems with unknown dynamics, called Probabilistic Differential Dynamic Programming (PDDP). PDDP takes into account uncertainty explicitly for dynamics models using Gaussian processes (GPs). Based on the second-order local approximation of the value function, PDDP performs Dynamic Programming around a
nominal trajectory in Gaussian belief spaces. Different from typical gradientbased policy search methods, PDDP does not require a policy parameterization
and learns a locally optimal, time-varying control policy. We demonstrate the effectiveness and efficiency of the proposed algorithm using two nontrivial tasks.
Compared with the classical DDP and a state-of-the-art GP-based policy search
method, PDDP offers a superior combination of data-efficiency, learning speed,
and applicability.
1 Introduction
Differential Dynamic Programming (DDP) is a powerful trajectory optimization approach. Originally introduced in [1], DDP generates locally optimal feedforward and feedback control policies
along with an optimal state trajectory. Compared with global optimal control approaches, the local optimal DDP shows superior computational efficiency and scalability to high-dimensional problems. In the last decade, variations of DDP have been proposed in both control and machine learning
communities [2][3][4][5][6]. Recently, DDP was applied for high-dimensional policy search which
achieved promising results in challenging control tasks [7].
DDP is derived based on linear approximations of the nonlinear dynamics along state and control
trajectories, therefore it relies on accurate and explicit dynamics models. However, modeling a
dynamical system is in general a challenging task and model uncertainty is one of the principal
limitations of model-based methods. Various parametric and semi-parametric approaches have been
developed to address these issues, such as minimax DDP using Receptive Field Weighted Regression
(RFWR) by Morimoto and Atkeson [8], and DDP using expert-demonstrated trajectories by Abbeel
et al. [9]. Motivated by the complexity of the relationships between states, controls and observations
in autonomous systems, in this work we take a Bayesian non-parametric approach using Gaussian
Processes (GPs).
Over the last few years, GP-based control and Reinforcement Learning (RL) algorithms have drawn increasing attention in the control theory and machine learning communities. For instance,
the works by Rasmussen et al.[10], Nguyen-Tuong et al.[11], Deisenroth et al.[12][13][14] and
Hemakumara et al.[15] have demonstrated the remarkable applicability of GP-based control and RL
methods in robotics. In particular, a recently proposed GP-based policy search framework called
PILCO, developed by Deisenroth and Rasmussen [13] (an improved version has been developed by
Deisenroth, Fox and Rasmussen [14]) has achieved unprecedented performances in terms of data-
efficiency and policy learning speed. PILCO as well as most gradient-based policy search algorithms
require iterative methods (e.g.,CG or BFGS) for solving non-convex optimization to obtain optimal
policies.
The proposed approach does not require a policy parameterization. Instead, PDDP finds a linear, time-varying control policy based on a Bayesian non-parametric representation of the dynamics, and it outperforms PILCO in terms of control learning speed while maintaining comparable data-efficiency.
2 Proposed Approach
The proposed PDDP framework consists of 1) a Bayesian non-parametric representation of the unknown dynamics; 2) local approximations of the dynamics and value functions; 3) locally optimal
controller learning.
2.1 Problem formulation
We consider a general unknown stochastic system described by the following differential equation:

$$dx = f(x, u)\,dt + C(x, u)\,d\omega, \quad x(t_0) = x_0, \quad d\omega \sim \mathcal{N}(0, \Sigma_\omega), \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state, $u \in \mathbb{R}^m$ is the control, $t$ is time, and $\omega \in \mathbb{R}^p$ is standard Brownian motion noise. The trajectory optimization problem is defined as finding a sequence of states and controls that minimizes the expected cost

$$J^{\pi}\big(x(t_0)\big) = \mathbb{E}\Big[ h\big(x(T)\big) + \int_{t_0}^{T} L\big(x(t), \pi(x(t)), t\big)\,dt \Big], \qquad (2)$$

where $h(x(T))$ is the terminal cost, $L(x(t), \pi(x(t)), t)$ is the instantaneous cost rate, and $u(t) = \pi(x(t))$ is the control policy. The cost $J^{\pi}(x(t_0))$ is defined as the expectation of the total cost accumulated from $t_0$ to $T$. For the rest of our analysis, we denote $x_k = x(t_k)$ in discrete time, where $k = 0, 1, \ldots, H$ is the time step; we use this subscript rule for other variables as well.
2.2 Probabilistic dynamics model learning
The continuous functional mapping from the state-control pair $\tilde{x} = (x, u) \in \mathbb{R}^{n+m}$ to the state transition $dx$ can be viewed as an inference with the goal of inferring $dx$ given $\tilde{x}$. We view this inference as a nonlinear regression problem. In this subsection, we introduce the Gaussian process (GP) approach to learning the dynamics model in (1). A GP is defined as a collection of random variables, any finite subset of which has a joint Gaussian distribution. Given a sequence of state-control pairs $\tilde{X} = \{(x_0, u_0), \ldots, (x_H, u_H)\}$ and the corresponding state transitions $dX = \{dx_0, \ldots, dx_H\}$, a GP is completely defined by a mean function and a covariance function. The joint distribution of the observed outputs and the output $dx^*$ corresponding to a given test state-control pair $\tilde{x}^* = (x^*, u^*)$ can be written as

$$p\begin{bmatrix} dX \\ dx^* \end{bmatrix} \sim \mathcal{N}\left(0, \begin{bmatrix} K(\tilde{X}, \tilde{X}) + \sigma_n I & K(\tilde{X}, \tilde{x}^*) \\ K(\tilde{x}^*, \tilde{X}) & K(\tilde{x}^*, \tilde{x}^*) \end{bmatrix}\right).$$

The covariance of this multivariate Gaussian distribution is defined via a kernel matrix $K(\tilde{x}_i, \tilde{x}_j)$. In particular, in this paper we consider the Gaussian kernel $K(\tilde{x}_i, \tilde{x}_j) = \sigma_s^2 \exp\big(-\tfrac{1}{2}(\tilde{x}_i - \tilde{x}_j)^{\mathsf T} W (\tilde{x}_i - \tilde{x}_j)\big) + \sigma_n^2$, with hyper-parameters $\sigma_s$, $\sigma_n$, and $W$. The kernel function can be interpreted as a similarity measure on random variables: if the training pairs $\tilde{X}_i$ and $\tilde{X}_j$ are close to each other in the kernel space, their outputs $dx_i$ and $dx_j$ are highly correlated. The posterior distribution, which is also Gaussian, can be obtained by constraining the joint distribution to contain the output $dx^*$ that is consistent with the observations. Assuming independent outputs (no correlation between output dimensions) and given a test input $\tilde{x}_k = (x_k, u_k)$ at time step $k$, the one-step predictive mean and variance of the state transition are specified as

$$\mathbb{E}_f[dx_k] = K(\tilde{x}_k, \tilde{X})\big(K(\tilde{X}, \tilde{X}) + \sigma_n I\big)^{-1} dX,$$
$$\mathrm{VAR}_f[dx_k] = K(\tilde{x}_k, \tilde{x}_k) - K(\tilde{x}_k, \tilde{X})\big(K(\tilde{X}, \tilde{X}) + \sigma_n I\big)^{-1} K(\tilde{X}, \tilde{x}_k). \qquad (3)$$
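As an illustration of (3), a minimal numpy sketch of one-step GP prediction follows. It assumes precomputed training inputs (rows are state-control pairs) and outputs; the function names and the exact placement of the noise term are assumptions, not the authors' code.

```python
import numpy as np

def gaussian_kernel(A, B, sigma_s, W):
    # K(a, b) = sigma_s^2 * exp(-0.5 (a - b)^T W (a - b)), over all row pairs
    D = A[:, None, :] - B[None, :, :]
    return sigma_s**2 * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', D, W, D))

def gp_predict(x_test, X_train, dX_train, sigma_s, sigma_n, W):
    """One-step predictive mean/variance of eq. (3) for a single test
    state-control pair x_test; dX_train has one column per output dim."""
    K = gaussian_kernel(X_train, X_train, sigma_s, W)
    K += sigma_n * np.eye(len(X_train))                  # K(X,X) + sigma_n I
    k_star = gaussian_kernel(x_test[None, :], X_train, sigma_s, W)   # (1, N)
    alpha = np.linalg.solve(K, dX_train)                 # (K + sigma_n I)^-1 dX
    mean = (k_star @ alpha).ravel()                      # E_f[dx_k]
    k_ss = gaussian_kernel(x_test[None, :], x_test[None, :], sigma_s, W)
    var = float(k_ss - k_star @ np.linalg.solve(K, k_star.T))  # VAR_f[dx_k]
    return mean, var
```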
The state distribution at $k = 1$ is $p(x_1) \sim \mathcal{N}(\mu_1, \Sigma_1)$, where the state mean and variance are $\mu_1 = x_0 + \mathbb{E}_f[dx_0]$ and $\Sigma_1 = \mathrm{VAR}_f[dx_0]$. When propagating the GP-based dynamics over a trajectory of time horizon $H$, the input state-control pair $\tilde{x}_k$ becomes uncertain with a Gaussian distribution (initially $\tilde{x}_0$ is deterministic). Here we define the joint distribution over the state-control pair at $k$ as $p(\tilde{x}_k) = p(x_k, u_k) \sim \mathcal{N}(\tilde{\mu}_k, \tilde{\Sigma}_k)$. Thus the distribution over the state transition becomes $p(dx_k) = \int p\big(f(\tilde{x}_k) \mid \tilde{x}_k\big)\, p(\tilde{x}_k)\, d\tilde{x}_k$. Generally, this predictive distribution cannot be computed analytically because the nonlinear mapping of an input Gaussian distribution leads to a non-Gaussian predictive distribution. However, the predictive distribution can be approximated by a Gaussian $p(dx_k) \sim \mathcal{N}(d\mu_k, d\Sigma_k)$ [16]. Thus the state distribution at $k+1$ is also a Gaussian $\mathcal{N}(\mu_{k+1}, \Sigma_{k+1})$ [14]:

$$\mu_{k+1} = \mu_k + d\mu_k, \qquad \Sigma_{k+1} = \Sigma_k + d\Sigma_k + \mathrm{COV}_{f,\tilde{x}_k}[x_k, dx_k] + \mathrm{COV}_{f,\tilde{x}_k}[dx_k, x_k]. \qquad (4)$$

Given an input joint distribution $\mathcal{N}(\tilde{\mu}_k, \tilde{\Sigma}_k)$, we employ the moment matching approach [16][14] to compute the posterior GP. The predictive mean $d\mu_k$ is evaluated as

$$d\mu_k = \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_k]\big] = \int \mathbb{E}_f[dx_k]\, \mathcal{N}(\tilde{\mu}_k, \tilde{\Sigma}_k)\, d\tilde{x}_k.$$

Next, we compute the predictive covariance matrix

$$d\Sigma_k = \begin{bmatrix} \mathrm{VAR}_{f,\tilde{x}_k}[dx_{k1}] & \cdots & \mathrm{COV}_{f,\tilde{x}_k}[dx_{k1}, dx_{kn}] \\ \vdots & \ddots & \vdots \\ \mathrm{COV}_{f,\tilde{x}_k}[dx_{kn}, dx_{k1}] & \cdots & \mathrm{VAR}_{f,\tilde{x}_k}[dx_{kn}] \end{bmatrix},$$

where the variance term on the diagonal for output dimension $i$ is obtained as

$$\mathrm{VAR}_{f,\tilde{x}_k}[dx_{ki}] = \mathbb{E}_{\tilde{x}_k}\big[\mathrm{VAR}_f[dx_{ki}]\big] + \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{ki}]^2\big] - \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{ki}]\big]^2, \qquad (5)$$

and the off-diagonal covariance term for output dimensions $i, j$ is given by the expression

$$\mathrm{COV}_{f,\tilde{x}_k}[dx_{ki}, dx_{kj}] = \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{ki}]\,\mathbb{E}_f[dx_{kj}]\big] - \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{ki}]\big]\,\mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{kj}]\big]. \qquad (6)$$

The input-output cross-covariance is formulated as

$$\mathrm{COV}_{f,\tilde{x}_k}[\tilde{x}_k, dx_k] = \mathbb{E}_{\tilde{x}_k}\big[\tilde{x}_k\, \mathbb{E}_f[dx_k]^{\mathsf T}\big] - \mathbb{E}_{\tilde{x}_k}[\tilde{x}_k]\, \mathbb{E}_{f,\tilde{x}_k}[dx_k]^{\mathsf T}. \qquad (7)$$

$\mathrm{COV}_{f,\tilde{x}_k}[x_k, dx_k]$ can be easily obtained as a sub-matrix of (7). The kernel or hyper-parameters $\Theta = (\sigma_n, \sigma_s, W)$ can be learned by maximizing the log-likelihood of the training outputs given the inputs:

$$\Theta^* = \arg\max_{\Theta}\, \log p\big(dX \mid \tilde{X}, \Theta\big). \qquad (8)$$

This optimization problem can be solved using numerical methods such as conjugate gradient [17].
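The moment-matching integrals in (4)–(7) admit closed forms for the Gaussian kernel [16]. As a simple stand-in, the sketch below approximates the same moments by Monte-Carlo sampling of the uncertain input; this is an assumption made here for brevity, not the method used in the paper, and it presumes a `gp_predict` routine returning per-dimension predictive means and variances.

```python
import numpy as np

def propagate_moments_mc(mu_tilde, Sigma_tilde, gp_predict,
                         n_samples=1000, seed=0):
    """Monte-Carlo stand-in for the moment matching in (4)-(7): sample the
    uncertain input, push each sample through the GP, and collect the first
    two moments of the state transition. Assumes gp_predict(x) returns
    (mean, var), each an array with one entry per output dimension."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(mu_tilde, Sigma_tilde, size=n_samples)
    means = np.array([gp_predict(x)[0] for x in X])  # E_f[dx | x], (S, n)
    vars_ = np.array([gp_predict(x)[1] for x in X])  # VAR_f[dx | x], (S, n)
    d_mu = means.mean(axis=0)
    # Law of total variance: E[VAR_f] on the diagonal plus COV of the means.
    d_Sigma = np.diag(vars_.mean(axis=0)) + np.cov(means, rowvar=False)
    # Input-output cross-covariance, cf. eq. (7).
    cross = (X - X.mean(axis=0)).T @ (means - d_mu) / (n_samples - 1)
    return d_mu, d_Sigma, cross
```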
2.3 Local dynamics model
In DDP-related algorithms, a local model along a nominal trajectory $(\bar{x}_k, \bar{u}_k)$ is created based on: i) a first- or second-order linear approximation of the dynamics model; ii) a second-order local approximation of the value function. In our proposed PDDP framework, we will create a local model along a trajectory of state distribution-control pairs $(p(\bar{x}_k), \bar{u}_k)$. In order to incorporate uncertainty explicitly in the local model, we introduce the Gaussian belief augmented state vector $z^x_k = [\mu_k \;\; \mathrm{vec}(\Sigma_k)]^{\mathsf T} \in \mathbb{R}^{n+n^2}$, where $\mathrm{vec}(\Sigma_k)$ is the vectorization of $\Sigma_k$. Now we create a local linear model of the dynamics. Based on eq. (4), the dynamics model with the augmented state is

$$z^x_{k+1} = F(z^x_k, u_k). \qquad (9)$$

Define the control and state variations $\delta z^x_k = z^x_k - \bar{z}^x_k$ and $\delta u_k = u_k - \bar{u}_k$. In this work we consider the first-order expansion of the dynamics. More precisely, we have

$$\delta z^x_{k+1} = F^x_k\, \delta z^x_k + F^u_k\, \delta u_k, \qquad (10)$$

where the Jacobian matrices $F^x_k$ and $F^u_k$ are specified as

$$F^x_k = \nabla_{z^x_k} F = \begin{bmatrix} \dfrac{\partial \mu_{k+1}}{\partial \mu_k} & \dfrac{\partial \mu_{k+1}}{\partial \Sigma_k} \\ \dfrac{\partial \Sigma_{k+1}}{\partial \mu_k} & \dfrac{\partial \Sigma_{k+1}}{\partial \Sigma_k} \end{bmatrix} \in \mathbb{R}^{(n+n^2)\times(n+n^2)}, \qquad
F^u_k = \nabla_{u_k} F = \begin{bmatrix} \dfrac{\partial \mu_{k+1}}{\partial u_k} \\ \dfrac{\partial \Sigma_{k+1}}{\partial u_k} \end{bmatrix} \in \mathbb{R}^{(n+n^2)\times m}. \qquad (11)$$

The partial derivatives $\frac{\partial \mu_{k+1}}{\partial \mu_k}$, $\frac{\partial \mu_{k+1}}{\partial \Sigma_k}$, $\frac{\partial \Sigma_{k+1}}{\partial \mu_k}$, $\frac{\partial \Sigma_{k+1}}{\partial \Sigma_k}$, $\frac{\partial \mu_{k+1}}{\partial u_k}$, $\frac{\partial \Sigma_{k+1}}{\partial u_k}$ can be computed analytically. Their forms are provided in the supplementary document of this work. For numerical implementation, the dimension of the augmented state can be reduced by eliminating the redundancy of $\Sigma_k$, and the principal square root of $\Sigma_k$ may be used for numerical robustness [6].
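Although the paper evaluates these Jacobians analytically, a generic finite-difference version is a useful numerical cross-check. The following sketch (names assumed) differentiates any propagation function F over the augmented state.

```python
import numpy as np

def jacobians_fd(F, z, u, eps=1e-6):
    """Finite-difference Jacobians of z' = F(z, u) with respect to the
    augmented state z = [mu; vec(Sigma)] and the control u. Used here only
    to verify analytic derivatives; the paper computes them in closed form."""
    z_next = F(z, u)
    Fx = np.zeros((len(z_next), len(z)))
    Fu = np.zeros((len(z_next), len(u)))
    for i in range(len(z)):
        dz = np.zeros_like(z); dz[i] = eps
        Fx[:, i] = (F(z + dz, u) - z_next) / eps
    for j in range(len(u)):
        du = np.zeros_like(u); du[j] = eps
        Fu[:, j] = (F(z, u + du) - z_next) / eps
    return Fx, Fu
```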
2.4 Cost function
In the classical DDP and many optimal control problems, the following quadratic cost function is used:

$$L(x_k, u_k) = (x_k - x^{goal}_k)^{\mathsf T} Q\, (x_k - x^{goal}_k) + u_k^{\mathsf T} R\, u_k, \qquad (12)$$

where $x^{goal}_k$ is the target state. Given the distribution $p(x_k) \sim \mathcal{N}(\mu_k, \Sigma_k)$, the expectation of the original quadratic cost function is formulated as

$$\mathbb{E}_{x_k}\big[L(x_k, u_k)\big] = \mathrm{tr}(Q \Sigma_k) + (\mu_k - x^{goal}_k)^{\mathsf T} Q\, (\mu_k - x^{goal}_k) + u_k^{\mathsf T} R\, u_k. \qquad (13)$$

In PDDP, we use the cost function $L(z^x_k, u_k) = \mathbb{E}_{x_k}[L(x_k, u_k)]$. The analytic expressions of the partial derivatives $\frac{\partial}{\partial z^x_k} L(z^x_k, u_k)$ and $\frac{\partial}{\partial u_k} L(z^x_k, u_k)$ can be easily obtained. The cost function (13) scales linearly with the state covariance; therefore the exploration strategy of PDDP is balanced between the distance from the target and the variance of the state. This strategy fits well with DDP-related frameworks that rely on local approximations of the dynamics: a locally optimal controller obtained from high-risk explorations in uncertain regions might be highly undesirable.
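The expected cost (13) and its derivatives with respect to the augmented state are only a few lines of code; a minimal sketch (variable names assumed):

```python
import numpy as np

def expected_cost(mu, Sigma, u, x_goal, Q, R):
    """Eq. (13): tr(Q Sigma) + (mu - x_goal)^T Q (mu - x_goal) + u^T R u."""
    err = mu - x_goal
    return np.trace(Q @ Sigma) + err @ Q @ err + u @ R @ u

def expected_cost_grads(mu, Sigma, u, x_goal, Q, R):
    # Gradients with respect to the augmented state (mu, vec(Sigma)) and u.
    g_mu = 2.0 * Q @ (mu - x_goal)
    # d tr(Q Sigma) / d vec(Sigma) = vec(Q^T); Q is assumed symmetric here.
    g_Sigma = Q.flatten()
    g_u = 2.0 * R @ u
    return np.concatenate([g_mu, g_Sigma]), g_u
```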
2.5 Control policy
The Bellman equation for the value function in discrete time is specified as follows:

$$V(z^x_k, k) = \min_{u_k} \mathbb{E}\Big[ \underbrace{L(z^x_k, u_k) + V\big(F(z^x_k, u_k), k+1\big)}_{Q(z^x_k,\, u_k)} \,\Big|\, x_k \Big]. \qquad (14)$$

We create a quadratic local model of the value function by expanding the Q-function up to second order:

$$Q_k(z^x_k + \delta z^x_k,\, u_k + \delta u_k) \approx Q^0_k + Q^x_k \delta z^x_k + Q^u_k \delta u_k + \frac{1}{2} \begin{bmatrix} \delta z^x_k \\ \delta u_k \end{bmatrix}^{\mathsf T} \begin{bmatrix} Q^{xx}_k & Q^{xu}_k \\ Q^{ux}_k & Q^{uu}_k \end{bmatrix} \begin{bmatrix} \delta z^x_k \\ \delta u_k \end{bmatrix}, \qquad (15)$$

where the superscripts of the Q-function indicate derivatives; for instance, $Q^x_k = \nabla_x Q_k(z^x_k, u_k)$. For the rest of the paper, we will use this superscript rule for $L$ and $V$ as well. To find the optimal control policy, we compute the local variations in control $\delta \hat{u}_k$ that minimize the Q-function:

$$\delta \hat{u}_k = \arg\min_{\delta u_k} Q_k(z^x_k + \delta z^x_k,\, u_k + \delta u_k) = \underbrace{-(Q^{uu}_k)^{-1} Q^u_k}_{I_k} \;\underbrace{-\,(Q^{uu}_k)^{-1} Q^{ux}_k}_{L_k}\, \delta z^x_k = I_k + L_k \delta z^x_k. \qquad (16)$$

The optimal control can be found as $\hat{u}_k = \bar{u}_k + \delta \hat{u}_k$. The quadratic expansion of the value function is backward propagated based on the equations that follow:

$$Q^x_k = L^x_k + V^x_k F^x_k, \qquad Q^u_k = L^u_k + V^x_k F^u_k,$$
$$Q^{xx}_k = L^{xx}_k + (F^x_k)^{\mathsf T} V^{xx}_k F^x_k, \qquad Q^{ux}_k = L^{ux}_k + (F^u_k)^{\mathsf T} V^{xx}_k F^x_k, \qquad Q^{uu}_k = L^{uu}_k + (F^u_k)^{\mathsf T} V^{xx}_k F^u_k,$$
$$V_{k-1} = V_k + Q^u_k I_k, \qquad V^x_{k-1} = Q^x_k + Q^u_k L_k, \qquad V^{xx}_{k-1} = Q^{xx}_k + Q^{xu}_k L_k. \qquad (17)$$

The second-order local approximation of the value function is propagated backward in time iteratively. We use the learned controller to generate a locally optimal trajectory by propagating the dynamics forward in time. The control policy is a linear function of the augmented state $z^x_k$; therefore the controller is deterministic. The state propagations have been discussed in Sec. 2.2.
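A minimal sketch of one step of the backward sweep implementing (16)–(17) is given below; regularization of $Q^{uu}_k$ and the line search of Sec. 2.6 are omitted, and all names are assumptions rather than the authors' implementation.

```python
import numpy as np

def backward_step(Fx, Fu, Lx, Lu, Lxx, Lux, Luu, Vx, Vxx):
    """One step of the backward recursion (17) plus the policy terms (16).
    Fx, Fu are the Jacobians from (11); L* are cost derivatives at step k;
    Vx, Vxx are the value derivatives propagated from the later step."""
    Qx = Lx + Vx @ Fx
    Qu = Lu + Vx @ Fu
    Qxx = Lxx + Fx.T @ Vxx @ Fx
    Qux = Lux + Fu.T @ Vxx @ Fx
    Quu = Luu + Fu.T @ Vxx @ Fu
    Quu_inv = np.linalg.inv(Quu)   # assume Quu is positive definite
    I_k = -Quu_inv @ Qu            # feedforward term in (16)
    L_k = -Quu_inv @ Qux           # feedback gain in (16)
    Vx_new = Qx + Qu @ L_k         # value derivatives for the earlier step
    Vxx_new = Qxx + Qux.T @ L_k
    return I_k, L_k, Vx_new, Vxx_new
```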
2.6 Summary of algorithm
The proposed algorithm can be summarized in Algorithm 1. The algorithm consists of 8 modules. In Model learning (Steps 1–2) we sample trajectories from the original physical system in order to collect training data and learn a probabilistic model. In Local approximation (Step 4) we obtain a local linear approximation (10) of the learned probabilistic model along a nominal trajectory by computing the Jacobian matrices (11). In Controller learning (Step 5) we compute a locally optimal control sequence (16) by backward propagation of the value function (17). To ensure convergence, we employ the line search strategy as in [2]: we compute the control law as $\delta \hat{u}_k = \alpha I_k + L_k \delta z^x_k$, with $\alpha = 1$ initially, and then decrease $\alpha$ until the expected cost is smaller than the previous one. In Forward propagation (Step 6), we apply the control sequence from the last step and obtain a new nominal trajectory for the next iteration. In Convergence condition (Step 7), we set a threshold $J^*$ on the accumulated cost such that when $J^{\pi} < J^*$, the algorithm is terminated with the optimized state and control trajectory. In Interaction condition (Step 8), when the state covariance $\Sigma_k$ exceeds a threshold $\Sigma_{tol}$, we sample new trajectories from the physical system using the control obtained in Step 5 and go back to Step 2 to learn a more accurate model. The old GP training data points are removed from the training set to keep its size fixed. Finally, in Nominal trajectory update (Step 9), the trajectory obtained in Step 6 or 8 becomes the new nominal trajectory for the next iteration. A simple illustration of the algorithm is shown in Fig. 3a. Intuitively, PDDP requires interactions with the physical system only if the GP model no longer represents the true dynamics around the nominal trajectory.
Given: A system with unknown dynamics, target states
Goal: An optimized trajectory of states and controls
1. Generate N state trajectories by applying random control sequences to the physical system (1);
2. Obtain state and control training pairs from the sampled trajectories and optimize the hyper-parameters of the GP (8);
3. for i = 1 to I_max do
4.   Compute a linear approximation of the dynamics along $(\bar{z}^x_k, \bar{u}_k)$ (10);
5.   Backpropagate in time to get the locally optimal control $\hat{u}_k = \bar{u}_k + \delta\hat{u}_k$ and value function $V(z^x_k, k)$ according to (16), (17);
6.   Forward propagate the dynamics (9) by applying the optimal control $\hat{u}_k$ to obtain a new trajectory $(z^x_k, u_k)$;
7.   if converged then break the for loop;
8.   if $\Sigma_k > \Sigma_{tol}$ then apply the optimal control to the original physical system to generate a new nominal trajectory $(z^x_k, u_k)$ and $N-1$ additional trajectories by applying small variations of the learned controller, update the GP training set, and go back to step 2;
9.   Set $\bar{z}^x_k = z^x_k$, $\bar{u}_k = u_k$ and $i = i + 1$, go back to step 4;
10. end
11. Apply the optimized controller to the physical system to obtain the optimized trajectory.
Algorithm 1: PDDP algorithm
2.7 Computational complexity
Dynamics propagation: The major computational effort is devoted to GP inferences. In particular, the complexity of one-step moment matching (Sec. 2.2) is $O\big(N^2 n^2 (n+m)\big)$ [14], which is fixed during the iterative process of PDDP. We found that a small number of sampled trajectories ($N \le 5$) is able to provide good performance for a system of moderate size (6–12 state dimensions). However, for higher-dimensional problems, sparse or local approximations of GPs (e.g., [11][18][19]) may be used to reduce the computational cost of GP dynamics propagation.

Controller learning: According to (16), learning the policy parameters $I_k$ and $L_k$ requires computing the inverse of $Q^{uu}_k$, which has computational complexity $O(m^3)$, where $m$ is the dimension of the control input. As a local trajectory optimization method, PDDP offers scalability comparable to the classical DDP.
2.8 Relation to existing works
Here we summarize the novel features of PDDP in comparison with some notable DDP-related
frameworks for stochastic systems (see also Table 1). First, PDDP shares some similarities with
the belief space iLQG [6] framework, which approximates the belief dynamics using an extended
Kalman filter. Belief space iLQG assumes a dynamics model is given and the stochasticity comes
from the process noises. PDDP, however, is a data-driven approach that learns the dynamics models
and controls from sampled data, and it takes into account model uncertainties by using GPs. Second,
PDDP is also comparable with iLQG-LD [5], which applies Locally Weighted Projection Regression
(LWPR) to represent the dynamics. iLQG-LD does not incorporate model uncertainty and therefore
requires a large amount of data to learn an accurate model. Third, PDDP does not suffer from the
high computational cost of finite differences used to numerically compute the first-order expansions
[2][6] and second-order expansions [4] of the underlying stochastic dynamics. PDDP computes
Jacobian matrices analytically (11).
                 | PDDP              | Belief space iLQG [6] | iLQG-LD [5]       | iLQG [2] / sDDP [4]
State            | $\mu_k, \Sigma_k$ | $\mu_k, \Sigma_k$     | $x_k$             | $x_k$
Dynamics model   | Unknown           | Known                 | Unknown           | Known
Linearization    | Analytic Jacobian | Finite differences    | Analytic Jacobian | Finite differences

Table 1: Comparison with DDP-related frameworks
3 Experimental Evaluation
We evaluate the PDDP framework using two nontrivial simulated examples: i) cart-double inverted
pendulum swing-up; ii) six-link robotic arm reaching. We also compare the learning efficiency
of PDDP with the classical DDP [1] and PILCO [13][14]. All experiments were performed in
MATLAB.
3.1 Cart-double inverted pendulum swing-up
Cart-Double Inverted Pendulum (CDIP) swing-up is a challenging control problem because the system is highly underactuated, with 3 degrees of freedom and only 1 control input. The system has 6 state dimensions (cart position and velocity, link 1 and 2 angles and angular velocities). The swing-up problem is to find a sequence of control inputs that forces both pendulums from the initial position $(\pi, \pi)$ to the inverted position $(2\pi, 2\pi)$. The balancing task requires the velocity of the cart and the angular velocities of both pendulums to be zero. We sample 4 initial trajectories with time horizon H = 50. The CDIP swing-up problem has been solved by two controllers for swing-up and balancing, respectively [20]. PILCO [14] is one of the few RL methods able to complete this task without knowing the dynamics. The results are shown in Fig. 1.
[Figure 1 appears here: (a) optimized CDIP state trajectories (cart position, cart velocity, link 1 and link 2 angles and angular velocities) over 50 time steps; (b) CDIP cost over 50 time steps for PDDP, DDP, and PILCO.]
Figure 1: Results for the CDIP task. (a) Optimized state trajectories of PDDP. Solid lines indicate
means, errorbars indicate variances. (b) Cost comparison of PDDP, DDP and PILCO. Costs (eq. 13)
were computed based on sampled trajectories by applying the final controllers.
3.2 Six-link robotic arm
The six-link robotic arm model consists of six links of equal length and mass, connected in an open chain with revolute joints. The system has 6 degrees of freedom and 12 state dimensions (angle and angular velocity for each joint). The goal for the first 3 joints is to move to the target angle $\pi/4$ and for the remaining 3 joints to $-\pi/4$. The desired velocities for all 6 joints are zero. We sample 2 initial trajectories with time horizon H = 50. The results are shown in Fig. 2.
3.3 Comparative analysis
DDP: Originally introduced in the '70s, the classical DDP [1] is still one of the most effective and efficient trajectory optimization approaches.
[Figure 2 appears here: (a) 6-link arm joint angles and angular velocities over 50 time steps; (b) 6-link arm cost over 50 time steps for PDDP, DDP, and PILCO.]
Figure 2: Results for the 6-link arm task. (a) Optimized state trajectories of PDDP. Solid lines
indicate means, errorbars indicate variances. (b) Cost comparison of PDDP, DDP and PILCO. Costs
(eq. 13) were computed based on sampled trajectories by applying the final controllers.
The major differences between DDP and PDDP can be summarized as follows: firstly, DDP relies on a given, accurate dynamics model, while PDDP is a data-driven framework that learns a locally accurate model by forward sampling; secondly, DDP does not deal with model uncertainty, whereas PDDP takes model uncertainty into account using GPs and performs local dynamic programming in Gaussian belief spaces; thirdly, in applications of DDP the linearizations are generally performed using finite differences, while in PDDP the Jacobian matrices are computed analytically (11).
PILCO: The recently proposed PILCO [14] framework has demonstrated state-of-the-art learning efficiency compared with other methods such as [21][22]. The proposed PDDP differs from PILCO in several ways. Firstly, based on a local linear approximation of the dynamics and a quadratic approximation of the value function, PDDP finds a linear, time-varying feedforward and feedback policy, whereas PILCO requires an a priori policy parameterization and an extra optimization solver. Secondly, PDDP keeps a fixed size of training data for GP inferences, while PILCO adds new data to the training set after each trial (recently, the authors applied a sparse GP approximation [19] in an improved version of PILCO once the data size reaches a threshold). Thirdly, by using the Gaussian belief augmented state and the cost function (13), PDDP's exploration scheme is balanced between the distance from the target and the variance of the state. PILCO employs a saturating cost function, which leads to automatic exploration of high-variance regions in the early stages of learning.
In both tasks, PDDP, DDP and PILCO bring the system to the desired states. The resulting trajectories for PDDP are shown in Figs. 1a and 2a. The reason for the low variances of some optimized trajectories is that during the final stage of learning, interactions with the physical systems (forward samplings using the locally optimal controller) reduce the variances significantly. The costs are shown in Figs. 1b and 2b. For both tasks, PDDP and DDP perform similarly and slightly differently from PILCO in terms of cost reduction. The major reasons for this difference are: i) the different cost functions used by these methods; ii) we did not impose any convergence condition on the optimized trajectories for PILCO. We now compare PDDP with DDP and PILCO in terms of data-efficiency and controller learning speed.
Data-efficiency: As shown in Fig. 4a, in both tasks PDDP performs slightly worse than PILCO in terms of data-efficiency, measured by the number of interactions required with the physical systems. For the systems used for testing, PDDP requires around 15%–25% more interactions than PILCO. The number of interactions indicates the number of sampled trajectories required from the physical system; at each trial we sample N trajectories from the physical system (Algorithm 1). Possible reasons for the slightly worse performance are: i) PDDP's policy is linear, which is restrictive, while PILCO yields nonlinear policy parameterizations; ii) PDDP's exploration scheme is more conservative than PILCO's in the early stages of learning. We believe PILCO is the most data-efficient framework for these tasks. However, PDDP is able to offer close performance thanks to the probabilistic representation of the dynamics as well as the use of the Gaussian belief augmented state.
Learning speed: In terms of the total computational time required to obtain the final controller, PDDP outperforms PILCO significantly, as shown in Fig. 4b. For the 6- and 12-dimensional systems used for testing, PILCO requires an iterative method (e.g., CG or BFGS) to solve high-dimensional optimization problems (depending on the policy parameterization), while PDDP computes locally optimal controls (16) without an extra optimizer. In terms of computational time per iteration, as shown in Fig. 3b, PDDP is slower than the classical DDP due to the high computational cost of GP dynamics propagation. However, for DDP, the time dedicated to linearizing the dynamics model is around 70%–90% of the total time per iteration for the two tasks considered in this work. PDDP avoids the high computational cost of finite differences by evaluating all Jacobian matrices analytically; the time dedicated to linearization is less than 10% of the total time per iteration.
[Figure 3 appears here: (a) a block diagram of the PDDP framework (physical system, GP dynamics, local model and cost function, control policy); (b) bar charts of time per iteration in seconds for DDP and PDDP on the CDIP and 6-link arm tasks, split into dynamics linearization and forward/backward pass.]
Figure 3: (a) An intuitive illustration of the PDDP framework. (b) Comparison of PDDP and DDP
in terms of the computational time per iteration (in seconds) for the CDIP (left subfigure) and 6-link
arm (right subfigure) tasks. Green indicates time for performing linearization, cyan indicates time
for forward and backward sweeps (Sec. 2.6).
[Figure 4 appears here: bar charts comparing PDDP and PILCO on the CDIP and 6-link arm tasks: (a) number of interactions with the physical system; (b) total computational time in minutes.]
Figure 4: Comparison of PDDP and PILCO in terms of data-efficiency and controller learning speed.
(a) Number of interactions with the physical systems required to obtain the final results in Fig. 1
and 2. (b) Total computational time (in minutes) consumed to obtain the final controllers.
4 Conclusions
In this work we have introduced a probabilistic model-based control and trajectory optimization
method for systems with unknown dynamics based on Differential Dynamic Programming (DDP)
and Gaussian processes (GPs), called Probabilistic Differential Dynamic Programming (PDDP).
PDDP takes model uncertainty into account explicitly by representing the dynamics using GPs and
performing local Dynamic Programming in Gaussian belief spaces. Based on the quadratic approximation of the value function, PDDP yields a linear, locally optimal control policy and features a more
efficient control improvement scheme compared with typical gradient-based policy search methods.
Thanks to the probabilistic representation of the dynamics, PDDP offers reasonable data-efficiency
comparable to a state-of-the-art GP-based policy search method [14]. In general, local trajectory optimization is a powerful approach to challenging control and RL problems. Due to its model-based
nature, model inaccuracy has always been the major obstacle for advanced applications. Grounded
on the solid developments of classical trajectory optimization and Bayesian machine learning, the
proposed PDDP has demonstrated encouraging performance and potential for many applications.
Acknowledgments
This work was partially supported by a National Science Foundation grant NRI-1426945.
References
[1] D. Jacobson and D. Mayne. Differential Dynamic Programming. 1970.
[2] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In American Control Conference, pages 300–306, June 2005.
[3] Y. Tassa, T. Erez, and W. D. Smart. Receding horizon differential dynamic programming. In NIPS, pages 1465–1472.
[4] E. Theodorou, Y. Tassa, and E. Todorov. Stochastic differential dynamic programming. In American Control Conference, pages 1125–1132, June 2010.
[5] D. Mitrovic, S. Klanke, and S. Vijayakumar. Adaptive optimal feedback control with learned internal dynamics models. In From Motor Learning to Interaction Learning in Robots, pages 65–84. Springer, 2010.
[6] J. van den Berg, S. Patil, and R. Alterovitz. Motion planning under uncertainty using iterative local optimization in belief space. The International Journal of Robotics Research, 31(11):1263–1278, 2012.
[7] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In NIPS, pages 207–215, 2013.
[8] J. Morimoto and C. G. Atkeson. Minimax differential dynamic programming: An application to robust biped walking. In NIPS, pages 1539–1546, 2002.
[9] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng. An application of reinforcement learning to aerobatic helicopter flight. In NIPS, pages 1–8, 2007.
[10] C. E. Rasmussen and M. Kuss. Gaussian processes in reinforcement learning. In NIPS, pages 751–759, 2003.
[11] D. Nguyen-Tuong, J. Peters, and M. Seeger. Local Gaussian process regression for real time online model learning. In NIPS, pages 1193–1200, 2008.
[12] M. P. Deisenroth, C. E. Rasmussen, and J. Peters. Gaussian process dynamic programming. Neurocomputing, 72(7):1508–1524, 2009.
[13] M. P. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, pages 465–472, 2011.
[14] M. P. Deisenroth, D. Fox, and C. E. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:75–90, 2014.
[15] P. Hemakumara and S. Sukkarieh. Learning UAV stability and control derivatives using Gaussian processes. IEEE Transactions on Robotics, 29:813–824, 2013.
[16] J. Quiñonero-Candela, A. Girard, J. Larsen, and C. E. Rasmussen. Propagation of uncertainty in Bayesian kernel models — application to multiple-step ahead forecasting. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003.
[17] C. K. I. Williams and C. E. Rasmussen. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Computation, 14(3):641–668, 2002.
[19] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257–1264, 2005.
[20] W. Zhong and H. Rock. Energy and passivity based control of the double inverted pendulum on a cart. In International Conference on Control Applications, pages 896–901, Sept 2001.
[21] T. Raiko and M. Tornio. Variational Bayesian learning of nonlinear hidden state-space models for model predictive control. Neurocomputing, 72(16):3704–3712, 2009.
[22] H. van Hasselt. Insights in Reinforcement Learning. Hado van Hasselt, 2011.
with linear function approximation
A. Rupam Mahmood, Hado van Hasselt, Richard S. Sutton
Reinforcement Learning and Artificial Intelligence Laboratory
University of Alberta
Edmonton, Alberta, Canada T6G 1S2
{ashique,vanhasse,sutton}@cs.ualberta.ca
Abstract
Importance sampling is an essential component of off-policy model-free reinforcement learning algorithms. However, its most effective variant, weighted importance sampling, does not carry over easily to function approximation and, because of this, it is not utilized in existing off-policy learning algorithms. In this
paper, we take two steps toward bridging this gap. First, we show that weighted
importance sampling can be viewed as a special case of weighting the error of
individual training samples, and that this weighting has theoretical and empirical benefits similar to those of weighted importance sampling. Second, we show
that these benefits extend to a new weighted-importance-sampling version of off-policy LSTD(λ). We show empirically that our new WIS-LSTD(λ) algorithm can result in much more rapid and reliable convergence than conventional off-policy LSTD(λ) (Yu 2010, Bertsekas & Yu 2009).
1 Importance sampling and weighted importance sampling
Importance sampling (Kahn & Marshall 1953, Rubinstein 1981, Koller & Friedman 2009) is a well-known Monte Carlo technique for estimating an expectation under one distribution given samples from a different distribution. Consider that data samples $Y_k \in \mathbb{R}$ are generated i.i.d. from a sample distribution $l$, but we are interested in estimating the expected value of these samples, $v_g \doteq \mathbb{E}_g[Y_k]$, under a different distribution $g$. In importance sampling this is achieved simply by averaging the samples weighted by the ratio of their likelihoods $\rho_k \doteq \frac{g(Y_k)}{l(Y_k)}$, called the importance-sampling ratio. That is, $v_g$ is estimated as:

$$\hat{v}_g \doteq \frac{\sum_{k=1}^n \rho_k Y_k}{n}. \qquad (1)$$

This is an unbiased estimate because each of the samples it averages is unbiased:

$$\mathbb{E}_l[\rho_k Y_k] = \int_y l(y)\, \frac{g(y)}{l(y)}\, y\, dy = \int_y g(y)\, y\, dy = \mathbb{E}_g[Y_k] = v_g.$$
Unfortunately, this importance-sampling estimate is often of unnecessarily high variance. To see how this can happen, consider a case in which the samples $Y_k$ are all nearly the same (under both distributions) but the importance-sampling ratios $\rho_k$ vary greatly from sample to sample. This should be an easy case because the samples are so similar for the two distributions, but importance sampling will average the $\rho_k Y_k$, which will be of high variance, and thus its estimates will also be of high variance. In fact, without further bounds on the importance-sampling ratios, $\hat{v}_g$ may have infinite variance (Andradóttir et al. 1995, Robert & Casella 2004).
An important variation on importance sampling that often has much lower variance is weighted importance sampling (Rubinstein 1981, Koller & Friedman 2009). The weighted importance sampling
(WIS) estimator estimates $v_g$ as a weighted average of the samples, with importance-sampling ratios as weights:

$$\hat{v}_g \doteq \frac{\sum_{k=1}^n \rho_k Y_k}{\sum_{k=1}^n \rho_k}.$$
This estimate is biased, but consistent (asymptotically correct) and typically of much lower variance than the ordinary importance-sampling (OIS) estimate, as acknowledged by many authors (Hesterberg 1988, Casella & Robert 1998, Precup, Sutton & Singh 2000, Shelton 2001, Liu 2001, Koller & Friedman 2009). For example, in the problematic case sketched above (near-constant $Y_k$, widely varying $\rho_k$) the variance of the WIS estimate will be related to the variance of $Y_k$. Note also that when the samples are bounded, the WIS estimate has bounded variance, because the estimate itself is bounded by the highest absolute value of $Y_k$, no matter how large the ratios $\rho_k$ are (Precup, Sutton & Dasgupta 2001).
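The variance contrast between the two estimators is easy to reproduce numerically. The toy example below (constructed for this illustration, not taken from the paper) uses near-constant samples and wildly varying ratios with mean one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 2000
ois, wis = [], []
for _ in range(trials):
    # Near-constant samples under both distributions.
    Y = 1.0 + 0.1 * rng.standard_normal(n)
    # Heavy-tailed importance-sampling ratios with E[rho] = exp(-2 + 2) = 1.
    rho = rng.lognormal(mean=-2.0, sigma=2.0, size=n)
    ois.append(np.sum(rho * Y) / n)            # eq. (1), the OIS estimate
    wis.append(np.sum(rho * Y) / np.sum(rho))  # the WIS estimate
print("OIS variance:", np.var(ois))  # large: inherits the spread of rho
print("WIS variance:", np.var(wis))  # small: bounded by the spread of Y
```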
Although WIS is the more successful importance sampling technique, it has not yet been extended
to parametric function approximation. This is problematic for applications to off-policy reinforcement learning, in which function approximation is viewed as essential for large-scale applications
to sequential decision problems with large state and action spaces. Here an important subproblem is
the approximation of the value function?the expected sum of future discounted rewards as a function of state?for a designated target policy that may differ from that used to select actions. The
existing methods for off-policy value-function approximation either use OIS (Maei & Sutton 2010,
Yu 2010, Sutton et al. 2014, Geist & Scherrer 2014, Dann et al. 2014) or use WIS but are limited
to the tabular or non-parametric case (Precup et al. 2000, Shelton 2001). How to extend WIS to
parametric function approximation is important, but far from clear (as noted by Precup et al. 2001).
2 Importance sampling for linear function approximation
In this section, we take the first step toward bridging the gap between WIS and off-policy learning
with function approximation. In a general supervised learning setting with linear function approximation, we develop and analyze two importance-sampling methods. Then we show that these two
methods have theoretical properties similar to those of OIS and WIS. In the fully-representable case,
one of the methods becomes equivalent to the OIS estimate and the other to the WIS estimate.
The key idea is that OIS and WIS can be seen as least-squares solutions to two different empirical
objectives. The OIS estimate is the least-squares solution to an empirical mean-squared objective
where the samples are importance weighted:
$$\hat{v}_g = \arg\min_v \frac{1}{n}\sum_{k=1}^n (\rho_k Y_k - v)^2 \;\Longrightarrow\; \frac{1}{n}\sum_{k=1}^n (\rho_k Y_k - \hat{v}_g) = 0 \;\Longrightarrow\; \hat{v}_g = \frac{\sum_{k=1}^n \rho_k Y_k}{n}. \qquad (2)$$
Similarly, the WIS estimate is the least-squares solution to an empirical mean-squared objective
where the individual errors are importance weighted:
$$\hat{v}_g = \arg\min_v \frac{1}{n}\sum_{k=1}^n \rho_k (Y_k - v)^2 \;\Longrightarrow\; \frac{1}{n}\sum_{k=1}^n \rho_k (Y_k - \hat{v}_g) = 0 \;\Longrightarrow\; \hat{v}_g = \frac{\sum_{k=1}^n \rho_k Y_k}{\sum_{k=1}^n \rho_k}. \qquad (3)$$
We solve similar empirical objectives in a general supervised learning setting with linear function
approximation to derive the two new methods.
Consider two correlated random variables $X_k$ and $Y_k$, where $X_k$ takes values from a finite set $\mathcal{X}$, and where $Y_k \in \mathbb{R}$. We want to estimate the conditional expectation of $Y_k$ for each $x \in \mathcal{X}$ under a target distribution $g_{Y|X}$. However, the samples $(X_k, Y_k)$ are generated i.i.d. according to a joint sample distribution $l_{XY}(\cdot)$ with conditional probabilities $l_{Y|X}$ that may differ from the conditional target distribution. Each input is mapped to a feature vector $\phi_k \doteq \phi(X_k) \in \mathbb{R}^m$, and the goal is to estimate the expectation $\mathbb{E}_{g_{Y|X}}[Y_k \mid X_k = x]$ as a linear function of the features:

$$\theta^{\mathsf T} \phi(x) \approx v_g(x) \doteq \mathbb{E}_{g_{Y|X}}[Y_k \mid X_k = x].$$
Estimating this expectation is again difficult because the target joint distribution of the input-output pairs $g_{XY}$ can be different from the sample joint distribution $l_{XY}$. Generally, the discrepancy in
the joint distribution may arise from two sources: a difference in the marginal distribution of inputs, $g_X \neq l_X$, and a difference in the conditional distribution of outputs, $g_{Y|X} \neq l_{Y|X}$. Problems where only the former discrepancy arises are known as covariate shift problems (Shimodaira 2000). In these problems the conditional expectation of the outputs is assumed unchanged between the target and the sample distributions. In off-policy learning problems, the discrepancy between conditional probabilities is more important. Most off-policy learning methods correct only the discrepancy between the target and the sample conditional distributions of outputs (Hachiya et al. 2009, Maei & Sutton 2010, Yu 2010, Maei 2011, Geist & Scherrer 2014, Dann et al. 2014). In this paper, we also focus only on correcting the discrepancy between the conditional distributions.
The problem of estimating vg (x) as a linear function of features using samples generated from l can
be formulated as the minimization of the mean squared error (MSE) where the solution is as follows:
$$\theta_* = \arg\min_\theta \mathbb{E}_{l_X}\Big[\big(\mathbb{E}_{g_{Y|X}}[Y_k \mid X_k] - \theta^{\mathsf T}\phi_k\big)^2\Big] = \mathbb{E}_{l_X}\big[\phi_k \phi_k^{\mathsf T}\big]^{-1}\, \mathbb{E}_{l_X}\Big[\mathbb{E}_{g_{Y|X}}[Y_k \mid X_k]\, \phi_k\Big]. \qquad (4)$$
Similar to the empirical mean-squared objectives defined in (2) and (3), two different empirical
objectives can be defined to approximate this solution. In one case the importance weighting is
applied to the output samples, $Y_k$, and in the other case the importance weighting is applied to the error, $Y_k - \theta^{\mathsf T}\phi_k$:

$$\tilde{J}_n(\theta) \doteq \frac{1}{n}\sum_{k=1}^n \big(\rho_k Y_k - \theta^{\mathsf T}\phi_k\big)^2; \qquad \hat{J}_n(\theta) \doteq \frac{1}{n}\sum_{k=1}^n \rho_k \big(Y_k - \theta^{\mathsf T}\phi_k\big)^2,$$

where the importance-sampling ratios are defined by $\rho_k \doteq g_{Y|X}(Y_k \mid X_k)/l_{Y|X}(Y_k \mid X_k)$.
We can minimize these objectives by equating the derivatives of the above empirical objectives to
zero. Provided the relevant matrix inverses exist, the resulting solutions are, respectively
$$\bar{\theta}_n \doteq \left(\sum_{k=1}^{n}\phi_k\phi_k^\top\right)^{-1}\sum_{k=1}^{n}\rho_k Y_k\,\phi_k, \quad (5)$$
$$\tilde{\theta}_n \doteq \left(\sum_{k=1}^{n}\rho_k\phi_k\phi_k^\top\right)^{-1}\sum_{k=1}^{n}\rho_k Y_k\,\phi_k. \quad (6)$$
We call $\bar{\theta}$ the OIS-LS estimator and $\tilde{\theta}$ the WIS-LS estimator.
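A minimal numpy sketch of the two estimators follows; it assumes the features are stacked in an n-by-m matrix `Phi` and that the required inverses exist (illustrative code, not from the paper):

```python
import numpy as np

def ois_ls(Phi, Y, rho):
    """OIS-LS, Eq. (5): (sum_k phi_k phi_k^T)^{-1} sum_k rho_k Y_k phi_k."""
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ (rho * Y))

def wis_ls(Phi, Y, rho):
    """WIS-LS, Eq. (6): (sum_k rho_k phi_k phi_k^T)^{-1} sum_k rho_k Y_k phi_k."""
    return np.linalg.solve(Phi.T @ (rho[:, None] * Phi), Phi.T @ (rho * Y))
```

With orthonormal indicator features (one per input), `wis_ls` reduces to the per-input WIS estimate, in line with Theorem 6 below.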
A least-squares method similar to WIS-LS above was introduced for covariate shift problems by Hachiya, Sugiyama and Ueda (2012). Although superficially similar, that method uses importance-sampling ratios to correct for the discrepancy in the marginal distributions of inputs, whereas WIS-LS corrects for the discrepancy in the conditional expectations of the outputs. For the fully representable case, unlike WIS-LS, the method of Hachiya et al. becomes an ordinary Monte Carlo estimator with no importance sampling.
3 Analysis of the least-squares importance-sampling methods
The two least-squares importance-sampling methods have properties similar to those of the OIS and the WIS methods. In Theorems 1 and 2, we prove that when $v_g$ can be represented as a linear function of the features, then OIS-LS is an unbiased estimator of $\theta^*$, whereas WIS-LS is a biased estimator, similar to the WIS estimator. If the solution is not linearly representable, least-squares methods are not generally unbiased. In Theorems 3 and 4, we show that both least-squares estimators are consistent for $\theta^*$. Finally, we demonstrate that the least-squares methods are generalizations of OIS and WIS by showing, in Theorems 5 and 6, that in the fully representable case (when the features form an orthonormal basis) OIS-LS is equivalent to OIS and WIS-LS is equivalent to WIS.
Theorem 1. If $v_g$ is a linear function of the features, that is, $v_g(x) = \theta_*^\top\phi(x)$, then OIS-LS is an unbiased estimator, that is, $\mathbb{E}_{l_{XY}}[\bar{\theta}_n] = \theta^*$.
Theorem 2. Even if $v_g$ is a linear function of the features, that is, $v_g(x) = \theta_*^\top\phi(x)$, WIS-LS is in general a biased estimator, that is, $\mathbb{E}_{l_{XY}}[\tilde{\theta}_n] \neq \theta^*$.
Theorem 3. The OIS-LS estimator $\bar{\theta}_n$ is a consistent estimator of the MSE solution $\theta^*$ given in (4).
Theorem 4. The WIS-LS estimator $\tilde{\theta}_n$ is a consistent estimator of the MSE solution $\theta^*$ given in (4).
Theorem 5. If the features form an orthonormal basis, then the OIS-LS estimate $\bar{\theta}_n^\top\phi(x)$ of input $x$ is equivalent to the OIS estimate of the outputs corresponding to $x$.
Theorem 6. If the features form an orthonormal basis, then the WIS-LS estimate $\tilde{\theta}_n^\top\phi(x)$ of input $x$ is equivalent to the WIS estimate of the outputs corresponding to $x$.
Proofs of Theorems 1–6 are given in the Appendix.
The WIS-LS estimate is perhaps the more interesting of the two least-squares estimates, because it
generalizes WIS to parametric function approximation for the first time and extends its advantages.
4 A new off-policy LSTD(λ) with WIS
In sequential decision problems, off-policy learning methods based on importance sampling can suffer from the same high-variance issues as discussed above for the supervised case. To address this, we extend the idea of WIS-LS to off-policy reinforcement learning and construct a new off-policy WIS-LSTD(λ) algorithm.
We first explain the problem setting. Consider a learning agent that interacts with an environment
where at each step $t$ the state of the environment is $S_t$ and the agent observes a feature vector $\phi_t \doteq \phi(S_t) \in \mathbb{R}^m$. The agent takes an action $A_t$ based on a behavior policy $b(\cdot \mid S_t)$, that is typically
a function of the state features. The environment provides the agent a scalar (reward) signal Rt+1
and transitions to state St+1 . This process continues, generating a trajectory of states, actions and
rewards. The goal is to estimate the values of the states under the target policy $\pi$, defined as the
expected returns given by the sum of future discounted rewards:
"1
#
t
X
Y
.
v? (s) = E
Rt+1
(Sk ) | S0 = s, At ? ?(?|St ), 8t ,
t=0
k=1
where (Sk ) 2 [0, 1] is a state-dependent degree of discounting on arrival in Sk (as in Sutton et al.
2014). We assume the rewards and discounting are chosen such that v? (s) is well-defined and finite.
Our primary objective is to estimate $v_\pi$ as a linear function of the features: $v_\pi(s) \approx \theta^\top\phi(s)$, where $\theta \in \mathbb{R}^m$ is a parameter vector to be estimated. As before, we need to correct for the difference
in sample distribution resulting from the behavior policy and the target distribution as induced by
the target policy. Consider a partial trajectory from time step k to time t, consisting of a sequence
Sk , Ak , Rk , Sk+1 , . . . , St . The probability of this trajectory occurring given it starts at Sk under the
target policy will generally differ from its probability under the behavior policy. The importance-sampling ratio $\rho_k^t$ is defined to be the ratio of these probabilities. This importance-sampling ratio can be written in terms of the product of action-selection probabilities without needing a model of the environment (Sutton & Barto 1998):
$$\rho_k^t \doteq \frac{\prod_{i=k}^{t-1}\pi(A_i \mid S_i)}{\prod_{i=k}^{t-1} b(A_i \mid S_i)} = \prod_{i=k}^{t-1}\frac{\pi(A_i \mid S_i)}{b(A_i \mid S_i)} = \prod_{i=k}^{t-1}\rho_i,$$
where we use the shorthand $\rho_i \doteq \rho_i^{i+1} = \pi(A_i \mid S_i)/b(A_i \mid S_i)$.
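In code, this ratio is just a running product of per-step ratios; a sketch (assuming the action probabilities along the sub-trajectory are available):

```python
import numpy as np

def importance_ratio(pi_probs, b_probs):
    """rho_k^t = prod_{i=k}^{t-1} pi(A_i|S_i) / b(A_i|S_i)."""
    return float(np.prod(np.asarray(pi_probs) / np.asarray(b_probs)))
```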
We incorporate a technique common in reinforcement learning (RL) where updates are estimated
by bootstrapping, fully or partially, on previously constructed state-value estimates. Bootstrapping
potentially reduces the variance of the updates compared to using full returns and makes RL algorithms applicable to non-episodic tasks. In this paper we assume that the bootstrapping parameter $\lambda(s) \in [0,1]$ may depend on the state $s$ (as in Sutton & Barto 1998, Maei & Sutton 2010). In the following, we use the notational shorthands $\gamma_k \doteq \gamma(S_k)$ and $\lambda_k \doteq \lambda(S_k)$.
Following Sutton et al. (2014), we construct an empirical loss as a sum of pairs of squared corrected
and uncorrected errors, each corresponding to a different number of steps of lookahead, and each
weighted as a function of the intervening discounting and bootstrapping. Let $G_k^t \doteq R_{k+1} + \cdots + R_t$ be the undiscounted return truncated after looking ahead $t-k$ steps. Imagine constructing the
empirical loss for time 0. After leaving $S_0$ and observing $R_1$ and $S_1$, the first uncorrected error is $G_0^1 - \theta^\top\phi_0$, with weight equal to the probability of terminating, $1-\gamma_1$. If we do not terminate, then we continue to $S_1$ and form the first corrected error $G_0^1 + v^\top\phi_1 - \theta^\top\phi_0$ using the bootstrapping estimate $v^\top\phi_1$. The weight on this error is $\gamma_1(1-\lambda_1)$, and the parameter vector $v$ may differ from $\theta$. Continuing to the next time step, we obtain the second uncorrected error $G_0^2 - \theta^\top\phi_0$ and the second corrected error $G_0^2 + v^\top\phi_2 - \theta^\top\phi_0$, with respective weights $\gamma_1\lambda_1(1-\gamma_2)$ and $\gamma_1\lambda_1\gamma_2(1-\lambda_2)$. This goes on until we reach the horizon of our data, say at time $t$, when we bootstrap fully with $v^\top\phi_t$, generating a final corrected return error of $G_0^t + v^\top\phi_t - \theta^\top\phi_0$ with weight $\gamma_1\lambda_1\cdots\gamma_{t-1}\lambda_{t-1}\gamma_t$.
The general form for the uncorrected error is $\bar{\delta}_k^t(\theta) \doteq G_k^t - \theta^\top\phi_k$, and the general form for the corrected error is $\delta_k^t(\theta, v) \doteq G_k^t + v^\top\phi_t - \theta^\top\phi_k$. All these errors could be squared, weighted by their weights, and summed to form the overall empirical loss. In the off-policy case, we need to also weight the squares of the errors $\bar{\delta}_k^t$ and $\delta_k^t$ by the importance-sampling ratio $\rho_k^t$. Hence, the overall empirical loss at time $k$ for data up to time $t$ can be written as
$$\ell_k^t(\theta, v) = \rho_k\sum_{i=k+1}^{t-1} C_k^{i-1}\Big[(1-\gamma_i)\,\bar{\delta}_k^i(\theta)^2 + \gamma_i(1-\lambda_i)\,\delta_k^i(\theta, v)^2\Big] + \rho_k C_k^{t-1}\Big[(1-\gamma_t)\,\bar{\delta}_k^t(\theta)^2 + \gamma_t\,\delta_k^t(\theta, v)^2\Big], \quad \text{where } C_k^t \doteq \prod_{j=k+1}^{t}\gamma_j\lambda_j\rho_j.$$
This loss differs from that used by other LSTD(λ) methods in that importance weighting is applied to the individual errors within $\ell_k^t(\theta, v)$.
Now, we are ready to state the least-squares problem. As noted by Geist & Scherrer (2014), LSTD(λ) methods can be derived by solving least-squares problems where estimates at each step are matched with multi-step returns starting from those steps, given that bootstrapping is done using the solution itself. Our proposed new method, called WIS-LSTD(λ), computes at each time $t$ the solution to the least-squares problem:
$$\theta_t \doteq \arg\min_\theta \sum_{k=0}^{t-1}\ell_k^t(\theta, \theta_t).$$
At the solution, the derivative of the objective is zero: $\frac{\partial}{\partial\theta}\sum_{k=0}^{t-1}\ell_k^t(\theta, \theta_t)\big|_{\theta=\theta_t} = \sum_{k=0}^{t-1} 2\,\epsilon_{k,t}(\theta_t, \theta_t)\,\phi_k = 0$, where the errors $\epsilon_{k,t}$ are defined by
$$\epsilon_{k,t}(\theta, v) \doteq \rho_k\sum_{i=k+1}^{t-1} C_k^{i-1}\Big[(1-\gamma_i)\,\bar{\delta}_k^i(\theta) + \gamma_i(1-\lambda_i)\,\delta_k^i(\theta, v)\Big] + \rho_k C_k^{t-1}\Big[(1-\gamma_t)\,\bar{\delta}_k^t(\theta) + \gamma_t\,\delta_k^t(\theta, v)\Big].$$
Next, we separate the terms of $\epsilon_{k,t}(\theta_t, \theta_t)\,\phi_k$ that involve $\theta_t$ from those that do not: $\epsilon_{k,t}(\theta_t, \theta_t)\,\phi_k = b_{k,t} - A_{k,t}\theta_t$, where $b_{k,t} \in \mathbb{R}^m$, $A_{k,t} \in \mathbb{R}^{m\times m}$, and they are defined as
$$b_{k,t} \doteq \rho_k\sum_{i=k+1}^{t-1} C_k^{i-1}(1-\gamma_i\lambda_i)\,G_k^i\,\phi_k + \rho_k C_k^{t-1} G_k^t\,\phi_k,$$
$$A_{k,t} \doteq \rho_k\sum_{i=k+1}^{t-1} C_k^{i-1}\,\phi_k\big((1-\gamma_i\lambda_i)\phi_k - \gamma_i(1-\lambda_i)\phi_i\big)^\top + \rho_k C_k^{t-1}\,\phi_k(\phi_k - \gamma_t\phi_t)^\top.$$
Therefore, the solution can be found as follows:
$$\sum_{k=0}^{t-1}(b_{k,t} - A_{k,t}\theta_t) = 0 \;\Longrightarrow\; \theta_t = A_t^{-1}b_t, \quad \text{where } A_t \doteq \sum_{k=0}^{t-1}A_{k,t}, \quad b_t \doteq \sum_{k=0}^{t-1}b_{k,t}. \quad (7)$$
In the following we show that WIS-LS is a special case of the above algorithm defined by (7). As
Theorem 6 shows that WIS-LS generalizes WIS, it follows that the above algorithm generalizes WIS
as well.
Theorem 7. At termination, the algorithm defined by (7) is equivalent to the WIS-LS method in the sense that if $\gamma_0 = \cdots = \gamma_t = \lambda_0 = \cdots = \lambda_{t-1} = 1$ and $\lambda_t = 0$, then $\theta_t$ defined in (7) equals $\tilde{\theta}_t$ as defined in (6), with $Y_k \doteq G_k^t$. (Proved in the Appendix.)
Our last challenge is to find an equivalent efficient online algorithm for this method. The solution in
(7) cannot be computed incrementally in this form. When a new sample arrives at time t + 1, Ak,t+1
and bk,t+1 have to be computed for each k = 0, . . . , t, and hence the computational complexity of
this solution grows with time. It would be preferable if the solution at time t + 1 could be computed
incrementally based on the estimates from time t, requiring only constant computational complexity
per time step. It is not immediately obvious such an efficient update exists. For instance, for $\lambda = 1$
this method achieves full Monte Carlo (weighted) importance-sampling estimation, which means
whenever the target policy deviates from the behavior policy all previously made updates have to
be unmade so that no updates are made towards a trajectory which is impossible under the target
policy. Sutton et al. (2014) show it is possible to derive efficient updates in some cases with the use
of provisional parameters which keep track of the provisional updates that might need to be unmade
when a deviation occurs. In the following, we show that using such provisional parameters it is also
possible to achieve an equivalent efficient update for (7).
We first write both $b_{k,t}$ and $A_{k,t}$ recursively in $t$ (derivations in Appendix A.8):
$$b_{k,t+1} = b_{k,t} + \rho_k C_k^t R_{t+1}\,\phi_k + (\rho_t - 1)\,\gamma_t\lambda_t\,\rho_k C_k^{t-1} G_k^t\,\phi_k,$$
$$A_{k,t+1} = A_{k,t} + \rho_k C_k^t\,\phi_k(\phi_t - \gamma_{t+1}\phi_{t+1})^\top + (\rho_t - 1)\,\gamma_t\lambda_t\,\rho_k C_k^{t-1}\,\phi_k(\phi_k - \phi_t)^\top.$$
Using the above recursions, we can write the updates of both $b_t$ and $A_t$ incrementally. The vector $b_t$ can be updated incrementally as
$$b_{t+1} = \sum_{k=0}^{t} b_{k,t+1} = \sum_{k=0}^{t-1} b_{k,t+1} + b_{t,t+1} = \sum_{k=0}^{t-1} b_{k,t} + \rho_t R_{t+1}\phi_t + R_{t+1}\sum_{k=0}^{t-1}\rho_k C_k^t\phi_k + (\rho_t - 1)\,\gamma_t\lambda_t\sum_{k=0}^{t-1}\rho_k C_k^{t-1} G_k^t\phi_k = b_t + R_{t+1}\, e_t + (\rho_t - 1)\, u_t, \quad (8)$$
where the eligibility trace $e_t \in \mathbb{R}^m$ and the provisional vector $u_t \in \mathbb{R}^m$ are defined as follows:
$$e_t \doteq \rho_t\phi_t + \sum_{k=0}^{t-1}\rho_k C_k^t\phi_k = \rho_t\big(\phi_t + \gamma_t\lambda_t\, e_{t-1}\big), \quad (9)$$
$$u_t \doteq \gamma_t\lambda_t\sum_{k=0}^{t-1}\rho_k C_k^{t-1} G_k^t\,\phi_k = \gamma_t\lambda_t\big(\rho_{t-1}\, u_{t-1} + R_t\, e_{t-1}\big). \quad (10)$$
The matrix $A_t$ can be updated incrementally as
$$A_{t+1} = \sum_{k=0}^{t} A_{k,t+1} = \sum_{k=0}^{t-1} A_{k,t+1} + A_{t,t+1} = A_t + e_t(\phi_t - \gamma_{t+1}\phi_{t+1})^\top + (\rho_t - 1)\,V_t, \quad (11)$$
where the provisional matrix $V_t \in \mathbb{R}^{m\times m}$ is defined as
$$V_t \doteq \gamma_t\lambda_t\sum_{k=0}^{t-1}\rho_k C_k^{t-1}\,\phi_k(\phi_k - \phi_t)^\top = \gamma_t\lambda_t\big(\rho_{t-1}\, V_{t-1} + e_{t-1}(\phi_{t-1} - \phi_t)^\top\big). \quad (12)$$
Then the parameter vector can be updated as:
$$\theta_{t+1} = (A_{t+1})^{-1}\, b_{t+1}. \quad (13)$$
Equations (8–13) comprise our WIS-LSTD(λ). Its per-step computational complexity is $O(m^3)$, where $m$ is the number of features. The computational cost of this method does not increase with time. At present we are unsure whether or not there is an $O(m^2)$ implementation.
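The following is a minimal sketch of these updates, assuming numpy; it is an illustrative reconstruction of equations (8–13), not reference code, and the class and argument names are our own:

```python
import numpy as np

class WISLSTDLambda:
    """Sketch of the O(m^3)-per-step updates (8)-(13); names mirror the text."""

    def __init__(self, m, eta=1.0):
        self.A = eta * np.eye(m)   # matrix A_t, initialized to eta*I as in Sec. 5
        self.b = np.zeros(m)       # vector b_t
        self.e = np.zeros(m)       # eligibility trace e_{t-1}
        self.u = np.zeros(m)       # provisional vector u_{t-1}
        self.V = np.zeros((m, m))  # provisional matrix V_{t-1}
        self.phi_prev = np.zeros(m)
        self.rho_prev = 1.0
        self.R_prev = 0.0

    def step(self, phi, rho, gamma, lam, R_next, phi_next, gamma_next):
        # provisional updates use quantities from the previous step: (10), (12)
        u = gamma * lam * (self.rho_prev * self.u + self.R_prev * self.e)
        V = gamma * lam * (self.rho_prev * self.V
                           + np.outer(self.e, self.phi_prev - phi))
        # eligibility trace, (9)
        e = rho * (phi + gamma * lam * self.e)
        # main updates, (8) and (11)
        self.b += R_next * e + (rho - 1.0) * u
        self.A += np.outer(e, phi - gamma_next * phi_next) + (rho - 1.0) * V
        # save state for the next step
        self.e, self.u, self.V = e, u, V
        self.phi_prev, self.rho_prev, self.R_prev = phi, rho, R_next
        return np.linalg.solve(self.A, self.b)  # theta_{t+1}, (13)
```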
Theorem 8. The off-policy LSTD(λ) method defined in (8–13) is equivalent to the off-policy LSTD(λ) method defined in (7) in the sense that they compute the same $\theta_t$ at each time $t$.
Proof. The result follows immediately from the above derivation.
It is easy to see that in the on-policy case this method becomes equivalent to on-policy LSTD(λ) (Boyan 1999) by noting that the third term of both the $b_t$ and $A_t$ updates in (8) and (11) becomes zero, because in the on-policy case all the importance-sampling ratios are 1.
Recently Dann et al. (2014) proposed another least-squares based off-policy method called recursive LSTD-TO(λ). Unlike our algorithm, that algorithm does not specialize to WIS in the fully representable case, and it does not seem as closely related to WIS. While the Adaptive Per-Decision Importance Weighting (APDIW) method by Hachiya et al. (2009) is superficially similar to WIS-LSTD(λ), there are several important differences. APDIW is a one-step method that always fully bootstraps, whereas WIS-LSTD(λ) covers the full spectrum of multi-step backups, including both one-step backups and Monte Carlo updates. In the fully representable case, APDIW does not become equivalent to the WIS estimate, whereas WIS-LSTD(1) does. Moreover, APDIW does not find a consistent estimation of the off-policy target, whereas WIS algorithms do.
5 Experimental results
We compared the performance of the proposed WIS-LSTD(λ) method with the conventional off-policy LSTD(λ) of Yu (2010) on two random-walk tasks for off-policy policy evaluation. These
random-walk tasks consist of a Markov chain with 11 non-terminal and two terminal states. They
can be imagined to be laid out horizontally, where the two terminal states are at the left and the right
ends of the chain. From each non-terminal state, there are two actions available: left, which leads to
the state to the left and right, which leads to the state to the right. The reward is 0 for all transitions
except for the rightmost transition to the terminal state, where it is +1. The initial state was set to
the state in the middle of the chain. The behavior policy chooses an action uniformly randomly,
whereas the target policy chooses the right action with probability 0.99. The termination function
was set to 1 for the non-terminal states and 0 for the terminal states.
We used two tasks based on this Markov chain in our experiments. These tasks differ by how the
non-terminal states were mapped to features. The terminal states were always mapped to a vector
with all zero elements. For each non-terminal state, the features were normalized so that the L2 norm
of each feature vector was one. For the first task, the feature representation was tabular, that is, the
feature vectors were standard basis vectors. In this representation, each feature corresponded to only
one state. For the second task, the feature vectors were binary representations of state indices. There
were 11 non-terminal states, hence each feature vector had $\lfloor \log_2(11)\rfloor + 1 = 4$ components. These vectors for the states from left to right were $(0,0,0,1)^\top, (0,0,1,0)^\top, (0,0,1,1)^\top, \ldots, (1,0,1,1)^\top$,
which were then normalized to get unit vectors. These features heavily underrepresented the states,
due to the fact that 11 states were represented by only 4 features.
We tested both algorithms for different values of constant $\lambda$, from 0 to 0.9 in steps of 0.1 and from 0.9 to 1.0 in steps of 0.025. The matrix to be inverted in both methods was initialized to $\eta I$, where the regularization parameter $\eta$ was varied by powers of 10, with powers chosen from −3 to +3 in steps of
0.2. Performance was measured as the empirical mean squared error (MSE) between the estimated
value of the initial state and its true value under the target policy projected to the space spanned by
the given features. This error was measured at the end of each of 200 episodes for 100 independent
runs.
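For concreteness, a sketch of one behavior-policy episode of this task follows (our own illustrative code; the state indexing and helper name are assumptions):

```python
import numpy as np

def run_episode(rng, n_states=11, p_right_target=0.99):
    """One behavior-policy episode of the random-walk task."""
    s = n_states // 2                     # start in the middle state
    transitions = []
    while 0 <= s < n_states:              # terminals sit just outside [0, n_states)
        a = rng.integers(2)               # behavior policy: uniform over {left, right}
        pi = p_right_target if a == 1 else 1.0 - p_right_target
        rho = pi / 0.5                    # per-step importance-sampling ratio
        s_next = s + (1 if a == 1 else -1)
        r = 1.0 if s_next == n_states else 0.0  # +1 only on the rightmost transition
        transitions.append((s, rho, r, s_next))
        s = s_next
    return transitions
```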
Figure 1 shows the results for the two tasks in terms of empirical convergence rate, optimum performance and parameter sensitivity. Each curve shows MSE together with standard errors. The first row
shows results for the tabular task and the second row shows results for the function approximation
task. The first column shows learning curves using $(\lambda, \eta) = (0, 1)$ for the first task and $(0.95, 10)$ for the second. It shows that in both cases WIS-LSTD(λ) learned faster and gave lower error throughout the period of learning. The second column shows performance with respect to different $\lambda$, optimized over $\eta$. The x-axis is plotted in a reverse log scale, where higher values are more spread out than the lower values. In both tasks, WIS-LSTD(λ) outperformed the conventional LSTD(λ) for all values of $\lambda$. For the best parameter setting (best $\lambda$ and $\eta$), WIS-LSTD(λ) outperformed LSTD(λ) by an order
Figure 1: Empirical comparison of our WIS-LSTD(λ) with conventional off-policy LSTD(λ) on two random-walk tasks (tabular task, top row; function approximation task, bottom row; panels plot MSE against episodes and against the regularization parameter $\eta$ for several values of $\lambda$). The empirical Mean Squared Error shown is for the initial state at the end of each episode, averaged over 100 independent runs (and also over 200 episodes in columns 2 and 3).
of magnitude. The third column shows performance with respect to the regularization parameter $\eta$ for three representative values of $\lambda$. For a wide range of $\eta$, WIS-LSTD(λ) outperformed conventional LSTD(λ) by an order of magnitude. Both methods performed similarly for large $\eta$, as such large values essentially prevent learning for a long period of time. In the function approximation task, when smaller values of $\eta$ were chosen, $\lambda$ close to 1 led to more stable estimates, whereas smaller $\lambda$ introduced high variance for both methods. In both tasks, the better-performing regions of $\eta$ (the U-shaped depressions) were wider for WIS-LSTD(λ).
6 Conclusion
Although importance sampling is essential to off-policy learning and has become a key part of modern reinforcement learning algorithms, its most effective form, WIS, has been neglected because
of the difficulty of combining it with parametric function approximation. In this paper, we have
begun to overcome these difficulties. First, we have shown that the WIS estimate can be viewed as
the solution to an empirical objective where the squared errors of individual samples are weighted
by the importance-sampling ratios. Second, we have introduced a new method for general supervised learning called WIS-LS by extending the error-weighted empirical objective to linear function
approximation and shown that the new method has similar properties as those of the WIS estimate.
Finally, we have introduced a new off-policy LSTD algorithm, WIS-LSTD(λ), that extends the benefits of WIS to reinforcement learning. Our empirical results show that the new WIS-LSTD(λ) can outperform Yu's off-policy LSTD(λ) in both tabular and function approximation tasks and shows
robustness in terms of its parameters. An interesting direction for future work is to extend these
ideas to off-policy linear-complexity methods.
Acknowledgement
This work was supported by grants from Alberta Innovates Technology Futures, National Science
and Engineering Research Council, and Alberta Innovates Centre for Machine Learning.
References
Andradóttir, S., Heyman, D. P., Ott, T. J. (1995). On the choice of alternative measures in importance sampling with Markov chains. Operations Research, 43(3):509–519.
Bertsekas, D. P., Yu, H. (2009). Projected equation methods for approximate solution of large linear systems. Journal of Computational and Applied Mathematics, 227(1):27–50.
Boyan, J. A. (1999). Least-squares temporal difference learning. In Proceedings of the 17th International Conference on Machine Learning, pp. 49–56.
Casella, G., Robert, C. P. (1998). Post-processing accept-reject samples: recycling and rescaling. Journal of Computational and Graphical Statistics, 7(2):139–157.
Dann, C., Neumann, G., Peters, J. (2014). Policy evaluation with temporal differences: a survey and comparison. Journal of Machine Learning Research, 15:809–883.
Geist, M., Scherrer, B. (2014). Off-policy learning with eligibility traces: a survey. Journal of Machine Learning Research, 15:289–333.
Hachiya, H., Akiyama, T., Sugiyama, M., Peters, J. (2009). Adaptive importance sampling for value function approximation in off-policy reinforcement learning. Neural Networks, 22(10):1399–1410.
Hachiya, H., Sugiyama, M., Ueda, N. (2012). Importance-weighted least-squares probabilistic classifier for covariate shift adaptation with application to human activity recognition. Neurocomputing, 80:93–101.
Hesterberg, T. C. (1988). Advances in Importance Sampling. Ph.D. dissertation, Statistics Department, Stanford University.
Kahn, H., Marshall, A. W. (1953). Methods of reducing sample size in Monte Carlo computations. Journal of the Operations Research Society of America, 1(5):263–278.
Koller, D., Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.
Liu, J. S. (2001). Monte Carlo Strategies in Scientific Computing. Berlin, Springer-Verlag.
Maei, H. R., Sutton, R. S. (2010). GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the Third Conference on Artificial General Intelligence, pp. 91–96. Atlantis Press.
Maei, H. R. (2011). Gradient Temporal-Difference Learning Algorithms. PhD thesis, University of Alberta.
Precup, D., Sutton, R. S., Singh, S. (2000). Eligibility traces for off-policy policy evaluation. In Proceedings of the 17th International Conference on Machine Learning, pp. 759–766. Morgan Kaufmann.
Precup, D., Sutton, R. S., Dasgupta, S. (2001). Off-policy temporal-difference learning with function approximation. In Proceedings of the 18th International Conference on Machine Learning.
Robert, C. P., Casella, G. (2004). Monte Carlo Statistical Methods. New York, Springer-Verlag.
Rubinstein, R. Y. (1981). Simulation and the Monte Carlo Method. New York, Wiley.
Shelton, C. R. (2001). Importance Sampling for Reinforcement Learning with Multiple Objectives. PhD thesis, Massachusetts Institute of Technology.
Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244.
Sutton, R. S., Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
Sutton, R. S., Mahmood, A. R., Precup, D., van Hasselt, H. (2014). A new Q(λ) with interim forward view and Monte Carlo equivalence. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China.
Yu, H. (2010). Convergence of least squares temporal difference methods under general conditions. In Proceedings of the 27th International Conference on Machine Learning, pp. 1207–1214.
4,693 | 525 | A Network of Localized Linear Discriminants
Martin S. Glassman
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
msg@siemens.siemens.com
Abstract
The localized linear discriminant network (LLDN) has been designed to address
classification problems containing relatively closely spaced data from different
classes (encounter zones [1], the accuracy problem [2]). Locally trained hyperplane segments are an effective way to define the decision boundaries for these
regions [3]. The LLD uses a modified perceptron training algorithm for effective
discovery of separating hyperplane/sigmoid units within narrow boundaries. The
basic unit of the network is the discriminant receptive field (DRF) which combines
the LLD function with Gaussians representing the dispersion of the local training
data with respect to the hyperplane. The DRF implements a local distance measure [4], and obtains the benefits of networks of localized units [5]. A constructive
algorithm for the two-class case is described which incorporates DRF's into the
hidden layer to solve local discrimination problems. The output unit produces a
smoothed, piecewise linear decision boundary. Preliminary results indicate the
ability of the LLDN to efficiently achieve separation when boundaries are narrow
and complex, in cases where both the "standard" multilayer perceptron (MLP)
and k-nearest neighbor (KNN) yield high error rates on training data.
1 The LLD Training Algorithm and DRF Generation
The LLD is defined by the hyperplane normal vector V and its "midpoint" M (a translated
origin [1] near the center of gravity of the training data in feature space). Incremental
corrections to V and M accrue for each training token feature vector Y j in the training
set, as illustrated in figure 1 (exaggerated magnitudes). The surface of the hyperplane is
appropriately moved either towards or away from Yj by rotating V, and shifting M along
the axis defined by V; M is always shifted towards Yj in the "radial" direction Rj (which is the component of Dj orthogonal to V, where Dj = Yj − M):
[Figure 1 shows the correction vectors for a token on the correct side of the hyperplane and for a token on the wrong side: in each case the vectors V, M, Dj, Rj, Oj and the increments ΔV and ΔM are drawn.]
Figure 1: LLD incremental correction vectors associated with training token Y j are shown
above, and the corresponding LLD update rules below:
$$\Delta V = \mu(n)\sum_j \Delta V_j = \mu(n)\sum_j\left(-S_c W_c\,\delta_j\right)\frac{O_j}{\|D_j\|}$$
$$\Delta M_V = \nu(n)\sum_j \Delta M_{V,j} = \nu(n)\sum_j\left(-S_c W_c\,\delta_j\right)V$$
$$\Delta M_R = \beta(n)\sum_j \Delta M_{R,j} = \beta(n)\sum_j\left(W_c\,\delta_j\right)R_j$$
The batch mode summation is over tokens in the local training set, and n is the iteration index. The polarity of $\Delta V_j$ and $\Delta M_{R,j}$ is set by $S_c$ ($c$ = the class of $Y_j$), where $S_c = 1$ if $Y_j$ is classified correctly, and $S_c = -1$ if not. Corrections for each token are scaled by a sigmoidal error term: $\delta_j = 1/(1 + \exp(S_c\,\eta\,|V^\top D_j|/\lambda))$, a function of the distance of the token to the plane, the sign of $S_c$, and a data-dependent scaling parameter: $\lambda = |V^\top[B_1 - B_2]|$, where $\eta$ is a fixed (experimental) scaling parameter. The scaling of the sigmoid is proportional to an estimate of the boundary region width along the axis of V. $B_c$ is a weighted average of the class c token vectors: $B_c(n+1) = (1-\alpha)B_c(n) + \alpha W_c\sum_{j\in c}\psi_{j,c}(n)\,Y_j(n)$, where $\psi_{j,c}$ is a sigmoid with the same scaling as $\delta_j$, except that it is centered on $B_c$ instead of M, emphasizing tokens of class c nearest the hyperplane surface. For small $\eta$'s, $B_c$ will settle near the cluster center of gravity, and for large $\eta$'s, $B_c$ will approach the tokens closest to the hyperplane surface. (The rate of the movement of $B_c$ is limited by the value of $\alpha$, which is not critical.) The inverse of the number of tokens in class c, $W_c$, balances the weight of the corrections from each class. Since the slope of the sigmoid error term is limited and distribution dependent, the use of $W_c$, along with the nonlinear weighting of tokens near the hyperplane surface, is important for the development of separating planes in relatively narrow boundaries (the assumption is that the distributions near these boundaries are non-Gaussian). If a more Bayesian-like solution is required, the slope of $\delta$ can be made class dependent (for example, replacing $\eta$ with $\eta_c \propto W_c$). The setting of $\eta$ simultaneously (for convenience) controls the focus on the "inner edges" of the class clusters and the slope of the sigmoid relative to the distance between the inner edges, with some resultant control over generalization performance. This local scaling of the error also aids the convergence rate. The range of good values for $\eta$ has been found to be reasonably wide, and identical values have been used successfully with speech, ecg, and synthetic data; it could also be set/optimized using cross-validation. Separate adaptive learning rates ($\mu(n)$, $\nu(n)$, and $\beta(n)$) are used in order to take advantage of the distinct nature of the geometric function of each component. Convergence is also improved by maintaining M within the local region; this controls the rate at which the hyperplane can sweep through the boundary region, making the effect of $\Delta V$ more predictable. The LLD normal vector update is simply: $V(n+1) = (V(n) + \Delta V)/\|V(n) + \Delta V\|$, so that V is always normalized to unit magnitude. The midpoint is just shifted: $M(n+1) = M(n) + \Delta M_R + \Delta M_V$.
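A sketch of one batch iteration under the reconstructed update rules is given below (illustrative only; in particular, the rotation direction $O_j$ is defined in Figure 1 and is approximated here by the unit vector of $D_j$):

```python
import numpy as np

def lld_batch_update(V, M, tokens, eta, lam, mu, nu, beta):
    """One batch LLD iteration; `tokens` is a list of (Y, s_c, w_c) with
    s_c = +/-1 for correct/incorrect classification and w_c = 1/(class count).
    V is assumed unit length on entry. Illustrative reconstruction."""
    dV = np.zeros_like(V)
    dMv = 0.0
    dMr = np.zeros_like(M)
    for Y, s_c, w_c in tokens:
        D = Y - M
        R = D - np.dot(V, D) * V                            # radial component of D_j
        delta = 1.0 / (1.0 + np.exp(s_c * eta * abs(np.dot(V, D)) / lam))
        dV += (-s_c * w_c * delta) * D / np.linalg.norm(D)  # O_j approximated by D_j
        dMv += (-s_c * w_c * delta)
        dMr += (w_c * delta) * R
    V = V + mu * dV
    V /= np.linalg.norm(V)                                  # keep V unit magnitude
    M = M + nu * dMv * V + beta * dMr                       # shift along V and radially
    return V, M
```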
[Figure 2 sketch: λ is an estimate of the boundary region width; σV is the dispersion of the training data in the discriminant direction (V); σR is the dispersion of the training data in all directions orthogonal to V.]
Figure 2: Vectors and parameters associated with the DRF for class c, for LLD k
DRF's are used to localize the response of the LLD to the region of feature space in which it was trained, and are constructed after completion of LLD training. Each DRF represents one class, and the localizing component of the DRF is a Gaussian function based on simple statistics of the training data for that class. Two measures of the dispersion of the data are used: $\sigma_V$ ("normal" dispersion), obtained using the mean average deviation of the lengths of $P_{j,k,c}$, and $\sigma_R$ ("radial" dispersion), obtained correspondingly using the $O_{j,k,c}$'s. (As shown, $P_{j,k,c}$ is the normal component, and $O_{j,k,c}$ the radial component of $Y_j - B_{k,c}$.) The output in response to an input vector $Y_j$ from the class c DRF associated with the LLD k is $\phi_{j,k,c}$:
$$\phi_{j,k,c} = \Theta_{k,c}\,\left(\delta_{j,k} - 0.5\right)\big/\exp\!\big(d_{V,j,k,c}^2 + d_{R,j,k,c}^2\big);$$
Two components of the DRF incorporate the LLD discriminant; one is the sigmoid error function used in training the LLD but shifted down to a value of zero at the hyperplane surface. The other is $\Theta_{k,c}$, which is 1 if $Y_j$ is on the class c side of LLD k, and zero if not. (In retrospect, for generalization performance, it may not be desirable to introduce this discontinuity to the discriminant component.) The contribution of the Gaussian is based on the normal and radial dispersion weighted distances of the input vector to $B_{k,c}$:
$$d_{V,j,k,c} = \|P_{j,k,c}\|/\sigma_{V,k,c}, \quad \text{and} \quad d_{R,j,k,c} = \|O_{j,k,c}\|/\sigma_{R,k,c}.$$
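Under the reconstructed notation above, the DRF response can be sketched as follows (illustrative; the orientation handling of the sigmoid term is our assumption):

```python
import numpy as np

def drf_response(Y, V, M, B, sigma_v, sigma_r, eta, lam, c_sign):
    """Sketch of the class-c DRF output phi_{j,k,c}; c_sign flips V so that
    positive signed distance means the class-c side. A reconstruction, not
    reference code."""
    v_dist = c_sign * np.dot(V, Y - M)                  # signed distance to the LLD
    theta = 1.0 if v_dist > 0 else 0.0                  # Theta_{k,c}
    sig = 1.0 / (1.0 + np.exp(-eta * v_dist / lam))     # sigmoid, 0.5 at the surface
    P = np.dot(V, Y - B) * V                            # normal component of Y - B_{k,c}
    O = (Y - B) - P                                     # radial component of Y - B_{k,c}
    d_v = np.linalg.norm(P) / sigma_v
    d_r = np.linalg.norm(O) / sigma_r
    return theta * (sig - 0.5) / np.exp(d_v**2 + d_r**2)
```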
2 Network Construction
Segmentation of the boundary between classes is accomplished by "growing" LLD's within
the boundary region. An LLD is initialized using a closely spaced pair of tokens from each
class. The LLD is grown by adding nearby tokens to the training set, using the k-nearest
neighbors to the LLD midpoint at each growth stage as candidates for permanent inclusion.
Candidate DRF's are generated after incremental training of the LLD to accommodate each
new candidate token. Two error measures are used to assess the effect of each candidate: the peak value of $\delta_j$ over the local training set, and $\omega$, which is a measure of misclassification error due to the receptive fields of the candidate DRF's extending over the entire training set. The candidate token with the lowest average $\omega$ is permanently added, as long as both its $\delta_j$ and $\omega$ are below fixed thresholds. Growth of the LLD is halted if no candidate has both error measures below threshold. The $\delta_j$ and $\omega$ thresholds directly affect the granularity of the DRF representation of the data; they need to be set to minimize the number of DRF's generated, while allowing sufficient resolution of local discrimination problems. They should perhaps be adaptive so as to encourage coarse-grained solutions to develop before fine-grained structure.
Figure 3: Four "snapshots" in the growth of an LLD/DRF pair. The upper two are close-ups. The initial LLD/DRF pair is shown in the upper left, along with the seed pair. Filled
rectangles and ellipses represent the tokens from each class in the permanent local training
set at each stage. The large markers are the B points, and the cross is the LLD midpoint.
The amplitude of the DRF outputs are coded in grey scale.
At this point the DRF's are fixed and added to the network; this represents the addition of
two new localized features available for use by the network's output layer in solving the
global discrimination problem. In this implementation, the output "layer" is a single LLD
used to generate a two-class decision. The architecture is shown below:
[Figure 4 sketch: the input data feeds a layer of LLD's (each with its σ(V,R) dispersion parameters), which produce the localized features; these feed the output discriminant function (an LLD with a sigmoid). An error measure on training tokens is used to seed new LLD's or halt training.]
Figure 4: LLDN architecture for a two-dimensional, two-class problem
The output unit is completely retrained after addition of a new DRF pair, using the entire training set. The output of the network to the input $Y_j$ is: $\varphi_j = 1/(1 + \exp((-\gamma/\lambda_o)\,V^\top[\Phi_j - M]))$, where $\lambda_o = |V^\top[B_0 - B_1]|$, and $\Phi_j = [\phi_{j,1}, \ldots, \phi_{j,p}]$ is the p-dimensional vector of DRF outputs presented to the output unit. V is the output LLD normal vector, M the midpoint, and the $B_c$'s the cluster edge points in the internal feature space. The output error for each
token is then used to select a new seed pair for development of the next LLD/DRF pair.
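A sketch of the output unit's response under this notation (illustrative code; the argument names are ours):

```python
import numpy as np

def lldn_output(phi_vec, V, M, B0, B1, gamma):
    """Output-unit sigmoid over the LLD response to the DRF feature vector."""
    lam_o = abs(np.dot(V, B0 - B1))   # internal boundary-width estimate lambda_o
    return 1.0 / (1.0 + np.exp((-gamma / lam_o) * np.dot(V, phi_vec - M)))
```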
If all tokens are classified with sufficient confidence, of course, construction of the LLDN
is complete. There are three possibilities for insufficient confidence: a token is covered
by a DRF of the wrong class, it is not yet covered sufficiently by any DRF's, or it is in a
region of "conflict" between DRF's of different classes. A heuristic is used to prevent the
repeated selection of the same seed pair tokens, since there is no guarantee that a given DRF
will significantly reduce the error for the data it covers after output unit retraining. This
heuristic alternates between the types of error and the class for selection of the primary seed
token. Redundancy in DRF shapes is also minimized by error-weighting the dispersion
computations so that the resultant Gaussian focuses more on the higher error regions of the
local training data. A simple but reasonably effective pruning algorithm was incorporated
to further eliminate unnecessary DRF's.
A Network of Localized Linear Discriminants
Figure 5: Network response plots illustrating network development. The upper two
sequences, beginning with the first LLD/DRF pair, and the bottom two plots show final
network responses for these two problems. A solution to a harder version of the nested
squares problem is on the lower left.
3 Experimental Results
The first experiment demonstrates comparative convergence properties of the LLD and a
single hyperplane trained by the standard generalized delta rule (GDR) method (no hidden
units, single output unit "network" is used) on 14 linearly separable, minimal consonant
pair data sets. The data is 256 dimensional (time/frequency matrix, described in [6]), with
80 exemplars per consonant. The results compare the best performance obtainable from
each technique. The LLD converges roughly 12 times faster in iteration counts. The GDR
often fails to completely separate f/th, f/v, and s/sh; in the results in figure 6 it fails on
the f/th data set at a plateau of 25% error. In both experiments described in this paper,
networks were run for relatively long times to ensure confidence in declaring failure to
[Figure 6: Training a single hyperplane; iterations required per minimal pair, on a log scale up to 100K (the GDR does not separate one of the pairs).]
[Figure 7: Error rates vs. geometries; percent error per geometry for relative boundary widths of 29%, 4.4%, and 1%.]
solve the problem. The second experiment involves complete networks on synthetic two-dimensional problems. Two examples of the nested squares problem (random distributions
of tokens near the surface of squares of alternating class, 400 tokens total) are shown in
figure 5. Two parameters controlling data set generation are explored: the relative boundary
region width, and the relative offset from the origin of the data set center of gravity (while
keeping the upper right corner of the outside square near the (1,1) coordinate); all data is
kept within the unit square (except for geometry number 2). Relative boundary widths of
29%, 4.4%, and 1% are used with offsets of 0%, 76%, and 94%. The best results over
parameter settings are reported for each network for each geometry. Four MLP architectures
were used: 2:16:1,2:32:1, 2:64:1, and 2:16:16:1; all of these converge to a solution for
the easiest problem (wide boundaries, no offset), but all eventually fail as the boundaries
narrow and/or the offset increases. The worst performing net (2:64: 1) fails for 7/8 problems
(maximum error rate of 49%); the best net (2:16:16:1) fails in 3/8 (maximum of 24%
error). The LLDN is 1 to 3 orders of magnitude faster in cpu time when the MLP does
converge, even though it does not use adaptive learning rates in this experiment. (The
average running time for the LLDN was 34 minutes; for the MLP's it was 3481 minutes
[Stardent 3040, single cpu], which includes non-converging runs. The 2:16:16:1 net
did, however, take 4740 minutes to solve problem 6, which was solved in 7 minutes by the
LLDN.) The best LLDN's converge to zero errors over the problem set (fig. 6), and are not
too sensitive to parameter variation, which primarily affect convergence time and number
of DRF's generated. In contrast, finding good values for learning rate and momentum for
the MLP's for each problem was a time-consuming process. The effect of random weight
initialization in the MLP is not known because of the long running times required. The
KNN error rate was estimated using the leave-one-out method, and yields error rates of
0%, 10.5%, and 38.75% (for the best k's) respectively for the three values of boundary
width. The LLDN is insensitive to offset and scale (like the KNN) because of the use
of the local origin (M) and error scaling (A.). While global offset and scaling problems
for the MLP can be ameliorated through normalization and origin translation, this method
cannot guarantee elimination of local offset and scaling problems. The LLDN's utilization
of DRF's was reasonably efficient, with the smallest networks (after pruning) using 20, 32,
and 54 DRF's for the three boundary widths. A simple pruning algorithm, which starts up
after convergence, iteratively removes the DRF's with the lowest connection weights to the
output unit (which is retrained after each link is removed). A range of roughly 20% to 40%
of the DRF's were removed before developing misclassification errors on the training sets.
The LLDN was also tested on the "two-spirals" problem, which is know to be difficult for
the standard MLP methods. Because ofthe boundary segmentation process, solution ofthe
two-spirals problem was straightforward for the LLDN, and could be tuned to converge in
as fast as 2.5 minutes on an Apollo DN10000. The solution shown in fig. 5 uses 50 DRF's
(not pruned). The generalization pattern is relatively "nice" (for training on the sparse
version of the data set), and perhaps demonstrates the practical nature of the smoothed
piecewise linear boundary for nonlinear problems.
4 Discussion
The effect of LLDN parameters on generalization performance needs to be studied. In
the nested squares problem it is clear that the MLP's will have better generalization when
they converge; this illustrates the potential utility of a multi-scale approach to developing
localized discriminants. A number of extensions are possible: Localized feature selection
can be implemented by simply zeroing components of V. The DRF Gaussians could
model the radial dispersion of the data more effectively (in greater than two dimensions) by
generating principal component axes which are orthogonal to V. Extension to the multiclass
case can be based on DRF sets developed for discrimination between each class and all
other classes, using the DRF's as features for a multi-output classifier. The use of multiple
hidden layers offers the prospect of more complex localized receptive fields. Improvement
in generalization might be gained by including a procedure for merging neighboring DRF's.
While it is felt that the LLD parameters should remain fixed, it may be advantageous to
allow adjustment of the DRF Gaussian dispersions as part of the output layer training. A
stopping rule for LLD training needs to be developed so that adaptive learning rates can be
utilized effectively. This rule may also be useful in identifying poor token candidates early
in the incremental LLD training.
References
[1] J. Sklansky and G.N. Wassel. Pattern Classifiers and Trainable Machines. Springer-Verlag, New York, 1981.
[2] S. Makram-Ebeid, J.A. Sirat, and J.R. Viala. A rationalized error backpropagation learning algorithm. Proc. IJCNN, 373–380, 1988.
[3] J. Sklansky and Y. Park. Automated design of multiple-class piecewise linear classifiers. Journal of Classification, 6:195–222, 1989.
[4] R.D. Short and K. Fukunaga. A new nearest neighbor distance measure. Proc. Fifth Intl. Conf. on Pattern Recognition, 81–88.
[5] R. Lippmann. A critical overview of neural network pattern classifiers. Neural Networks for Signal Processing (IEEE), 267–275, 1991.
[6] M.S. Glassman and M.B. Starkey. Minimal consonant pair discrimination for speech therapy. Proc. European Conf. on Speech Comm. and Tech., 273–276, 1989.
4,694 | 5,250 | A Representation Theory for Ranking Functions
Harsh Pareek, Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
{harshp,pradeepr}@cs.utexas.edu
Abstract
This paper presents a representation theory for permutation-valued functions,
which in their general form can also be called listwise ranking functions. Pointwise ranking functions assign a score to each object independently, without taking
into account the other objects under consideration; whereas listwise loss functions
evaluate the set of scores assigned to all objects as a whole. In many supervised
learning to rank tasks, it might be of interest to use listwise ranking functions
instead; in particular, the Bayes Optimal ranking functions might themselves be
listwise, especially if the loss function is listwise. A key caveat to using listwise ranking functions has been the lack of an appropriate representation theory
for such functions. We show that a natural symmetricity assumption that we call
exchangeability allows us to explicitly characterize the set of such exchangeable
listwise ranking functions. Our analysis draws from the theories of tensor analysis, functional analysis and De Finetti theorems. We also present experiments
using a novel reranking method motivated by our representation theory.
1 Introduction
A permutation-valued function, also called a ranking function, outputs a ranking over a set of objects given features corresponding to the objects, and learning such ranking functions given data is
becoming an increasingly key machine learning task. For instance, tracking a set of objects given
a particular order of uncertain sensory inputs involves predicting the permutation of objects corresponding to the inputs at each time step. Collaborative filtering and recommender systems can
be modeled as ranking movies (or other consumer objects). Extractive document summarization
involves ranking sentences in order of their importance, while also taking diversity into account.
Learning rankings over documents, in particular, has received considerable attention in the Information Retrieval community, under the subfield of "learning to rank". The problems above involve
diverse kinds of supervision and diverse evaluation metrics, but with the common feature that the
object of interest is a ranking function, that when given an input set of objects, outputs a permutation
over the set of objects. In this paper, we will consider the standard generalization of ranking functions which output a real-valued score vector, which can be sorted to yield the desired permutation.
The tasks above then entail learning a ranking function given data, and given some evaluation metric
which captures the compatibility between two permutations. These evaluation metrics are domainspecific, and even in specific domains such as information retrieval, could be varied based on actual
user preferences. Popular IR evaluation metrics for instance include Mean Average Precision (MAP)
[1], Expected Reciprocal Rank (ERR) [7] and Normalized Discounted Cumulative Gain (NDCG)
[17]. A common characteristic of these evaluation loss functionals is that they are typically listwise: the loss evaluates the entire set of scores assigned to all the objects in a manner that
is not separable in the individual scores. Indeed, some tasks by their very nature require listwise
evaluation metrics. A key example is that of ranking with diversity[5], where the user prefers results that are not only relevant individually, but also diverse mutually; searching for web-pages with
the query "Jaguar" should not just return individually relevant results, but also results that cover
the car, the animal and the sports team, among others. Chapelle et al [8] also mention ranking for
diversity as an important future direction in learning to rank. Other fundamentally listwise ranking
problems include pseudo-relevance feedback, topic distillation, subtopic retrieval and ranking over
graphs (e.g., social networks) [22].
While these evaluation/loss functionals (and typically their corresponding surrogate loss functionals
as well) are listwise, most parameterizations of the ranking functions used within these (surrogate)
loss functionals are typically pointwise, i.e. they rank each object (e.g. document) independently
of the other objects. Why should we require listwise ranking functions for listwise ranking tasks?
Pointwise ranking functions have the advantage of computational efficiency: since these evaluate
each object independently, they can be parameterized very compactly. Moreover, for certain ranking
tasks, such as vanilla rank prediction with 0/1 loss or multilabel ranking with certain losses[11],
it can be shown that the Bayes-consistent ranking function is pointwise, so that one would lose
statistical efficiency by not restricting to the sub-class of pointwise ranking functions. However,
as noted above, many modern ranking tasks have an inherently listwise flavor, and correspondingly
their Bayes-consistent ranking functions are listwise as well. For instance, [24] show that the Bayesconsistent ranking function of the popular NDCG evaluation metric is inherently listwise.
There is however a caveat to using listwise ranking functions: a lack of representation theory, and
corresponding guidance to parameterizing such listwise ranking functions. Indeed, the most commonly used ranking functions are linear ranking functions and decision trees, both of which are
pointwise. With decision trees, gradient boosting is often used as a technique to increase the complexity of the function class. The Yahoo! Learning to Rank challenge [6] was dominated by such
methods, which comprise the state-of-the-art in learning to rank for information retrieval today. It
should be noted that gradient boosted decision trees, even if trained with listwise loss functions
(e.g. via LambdaMART [3]), are still a sum of pointwise ranking functions and therefore pointwise
ranking functions themselves, and hence subject to the theoretical limitations outlined in this paper.
In a key contribution of this paper, we impose a very natural assumption on general listwise ranking functions, which we term exchangeability, which formalizes the notion that the ranking function
depends only on the object features, and not the order in which the documents are presented. Specifically, as detailed further in Section 3, we define exchangeable ranking functions as those listwise
functions where if their set of input objects is permuted, their output permutation/score vector is
permuted in the same way. This simple assumption allows us to provide an explicit characterization
of the set of listwise ranking functions in the following form:
$$(f(\mathbf{x}))_i = h(x_i, \{x_{\setminus i}\}) = \sum_{t} \prod_{j \neq i} g_t(x_i, x_j) \qquad (1)$$
This representation theorem is the principal contribution of this work. We hope that this result will
provide a general recipe for designing learning to rank algorithms for diverse domains. For each
domain, practitioners would need to utilize domain knowledge to define a suitable class of pairwise
functions g parameterized by w, and use this ranking function in conjunction with a suitable listwise
loss. Individual terms in (1) can be fit via standard optimization methods such as gradient descent,
while multiple terms can be fit via gradient boosting.
In recent work, two papers have proposed specific listwise ranking functions. Qin et al. [22] suggest the use of conditional random fields (CRFs) to predict the relevance scores of the individual
documents via the most probable configuration of the CRF. They distinguish between "local
ranking," which we called ranking with pointwise ranking functions above, and "global ranking,"
which corresponds to listwise ranking functions, and argue that using CRFs would allow for global
ranking. Weston and Blitzer [26] propose a listwise ranking function ("Latent Structured Ranking")
assuming a low rank structure for the set of items to be ranked. Both of these ranking functions are
exchangeable as we detail in Appendix A. The improved performance of these specific classes of
ranking functions also provides empirical support for the need for a representation theory of general
listwise ranking functions.
We first consider the case where features are discrete and derive our representation theorem using the theory of symmetric tensor decomposition. For the more general continuous case, we first
present the case with three objects using functional analytic spectral theory. We then present
the extension to the general continuous case by drawing upon De Finetti's theorem. Our analysis
highlights the correspondences between these theories, and brings out an important open problem in
the functional analysis literature.
2 Problem Setup
We consider the general ranking setting, where the m objects to be ranked (possibly contingent on a
query), are represented by the feature vectors $\mathbf{x} = (x_1, x_2, \ldots, x_m) \in \mathcal{X}^m$. Typically, $\mathcal{X} = \mathbb{R}^k$ for
some k. The key object of interest in this paper is a ranking function:
Definition 2.1 (Ranking function) Given a set of object feature vectors $\mathbf{x}$ (possibly contingent on a
query q), a ranking function $f : \mathcal{X}^m \to \mathbb{R}^m$ is a function that takes as input the m object feature vectors, and has as output a vector of scores for the set of objects, so that $f(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_m(\mathbf{x}))$,
for some functions $f_j : \mathcal{X}^m \to \mathbb{R}$.
It is instructive at this juncture to distinguish between pointwise (local) and listwise (global) ranking
functions. A pointwise ranking function f would score each object xi independently, ignoring
the other objects, so that each component function fj (x) above depends only on xj , and can be
written as a function fj (xj ) with some overloading of notation. In contrast, the components fj (x)
of the output vector of a listwise ranking function would depend on the feature-vectors of all the
documents.
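To make the distinction concrete, the following minimal sketch (ours, not from the paper; the scorer functions are hypothetical) contrasts the two signatures: a pointwise ranker maps each feature vector to a score in isolation, while a listwise ranker may consume the whole set.

```python
import numpy as np

def pointwise_rank(x, score_one):
    # Pointwise: f_j(x) depends only on x_j, so each document is scored in isolation.
    return np.array([score_one(xj) for xj in x])

def listwise_rank(x, score_set):
    # Listwise: each component score may depend on all m feature vectors.
    return np.array([score_set(j, x) for j in range(len(x))])

x = np.random.randn(5, 3)  # m = 5 documents with 3 features each
pw = pointwise_rank(x, lambda xj: float(xj @ xj))
lw = listwise_rank(x, lambda j, xs: float(xs[j] @ xs[j]) - sum(float(xs[j] @ xs[i]) for i in range(len(xs)) if i != j))
```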
3 Representation theory
We investigate the class of ranking functions which satisfy a very natural property: exchanging the
feature-vectors of any two documents should cause their positions in the output ranking order to be
exchanged. Definition 3.1 formalizes this intuition.
Definition 3.1 (Exchangeable Ranking Function) A listwise ranking function $f : \mathcal{X}^m \to \mathbb{R}^m$ is
said to be exchangeable if $f(\sigma(\mathbf{x})) = \sigma(f(\mathbf{x}))$ for every permutation $\sigma \in S_m$ (where $S_m$ is the set of
all permutations of m objects).
Letting (f1 , f2 , . . . , fm ) denote the components of the ranking function f , we arrive at the following
key characterization of exchangeable ranking functions.
Theorem 3.2 Every exchangeable ranking function $f : \mathcal{X}^m \to \mathbb{R}^m$ can be written as $f(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x}))$ with
$$f_i(\mathbf{x}) = h(x_i, \{x_{\setminus i}\}) \qquad (2)$$
where $\{x_{\setminus i}\} = \{x_j \mid 1 \le j \le m,\ j \neq i\}$, and for some $h : \mathcal{X}^m \to \mathbb{R}$ symmetric in $\{x_{\setminus i}\}$
(i.e. $h(\mathbf{y}) = h(\sigma(\mathbf{y}))$ for all $\mathbf{y} \in \mathcal{X}^{m-1}$, $\sigma \in S_{m-1}$).
Proof The components of a ranking function $f : \mathcal{X}^m \to \mathbb{R}^m$, viz. $f_i(\mathbf{x})$, represent the score
assigned to each document. First, exchangeability implies that exchanging the feature values of
some two documents does not affect the scores of the remaining documents, i.e. $f_i(\mathbf{x})$ does not
change if i is not involved in the exchange; that is, $f_i(\mathbf{x})$ is symmetric in $\{x_{\setminus i}\}$. Second, exchanging the
feature values of documents 1 and i exchanges their scores, i.e.,
$$f_i(x_1, \ldots, x_i, \ldots, x_m) = f_1(x_i, \ldots, x_1, \ldots, x_m) \qquad (3)$$
Thus, the scoring function for the ith document can be expressed in terms of that of the first document. Call that scoring function h. Then, combining the two properties above, we have
$$f_i(\mathbf{x}) = h(x_i, \{x_{\setminus i}\}) \qquad (4)$$
where h is symmetric in $\{x_{\setminus i}\}$.
Theorem 3.2 entails the intuitive result that the component functions fi of exchangeable ranking
functions f can all be expressed in terms of a single partially symmetric function h whose first
argument is the document corresponding to that component and which is symmetric in the other
documents. Pointwise ranking functions then correspond to the special case where h is independent
of the other document-feature-vectors (so that h(xi , {x\i }) = h(xi ) with some overloading of
notation) and are thus trivially exchangeable.
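As a quick sanity check, the sketch below (our own illustration, with an arbitrary pairwise kernel g) verifies numerically that a function of the form (1) satisfies Definition 3.1: permuting the input objects permutes the output scores the same way.

```python
import numpy as np

def f(x, g):
    # (f(x))_i = prod_{j != i} g(x_i, x_j): a single term of representation (1).
    m = len(x)
    return np.array([np.prod([g(x[i], x[j]) for j in range(m) if j != i])
                     for i in range(m)])

g = lambda a, b: np.exp(-np.sum((a - b) ** 2))  # an arbitrary pairwise function
x = np.random.randn(4, 2)
perm = np.random.permutation(4)
# Exchangeability: f(sigma(x)) == sigma(f(x)).
assert np.allclose(f(x[perm], g), f(x, g)[perm])
```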
As the main result of this paper, we will characterize the class of such partially symmetric functions
h, and thus the set of exchangeable listwise ranking functions, for various classes $\mathcal{X}$, as
$$f_i(\mathbf{x}) = \sum_{t=1}^{\infty} \prod_{j \neq i} g_t(x_i, x_j) \qquad (5)$$
for some set of functions $\{g_t\}_{t=1}^{\infty}$, $g_t : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$.
3.1 The Discrete Case: Tensor Decomposition
We first consider a decomposition theorem for symmetric tensors, and then through a correspondence between symmetric tensors and symmetric functions with finite domains, derive the corresponding decomposition for symmetric functions. We then simply extend the analysis to obtain the
corresponding decomposition theorem for partially symmetric functions.
The term tensor may have connotations (from its use in Physics) with regards to how a quantity
behaves under linear transformations, but here we use it only to mean "multi-way array".
Definition 3.3 (Tensor) A real-valued order-k tensor is a collection of real-valued elements
$A_{i_1, i_2, \ldots, i_k} \in \mathbb{R}$ indexed by tuples $(i_1, i_2, \ldots, i_k) \in \mathcal{X}^k$.
Definition 3.4 (Symmetric tensor) An order-k tensor $A = [A_{i_1, i_2, \ldots, i_k}]$ is said to be symmetric iff
for any permutation $\sigma \in S_k$,
$$A_{i_1, i_2, \ldots, i_k} = A_{i_{\sigma(1)}, i_{\sigma(2)}, \ldots, i_{\sigma(k)}}. \qquad (6)$$
Comon et al. [9] show that such a symmetric tensor (sometimes called supersymmetric since it is
symmetric w.r.t. all dimensions) can be decomposed into a sum of rank-1 symmetric tensors, where
a rank-1 symmetric tensor is a k-way outer product of some vector v (we will use the standard
notation $\otimes$ to denote an outer product $u \otimes v \otimes \cdots \otimes z = [u_{j_1} v_{j_2} \cdots z_{j_k}]_{j_1, \ldots, j_k}$).
Proposition 3.5 (Decomposition theorem for symmetric tensors [9]) Any order-k symmetric tensor A can be decomposed as a sum of k-fold outer product tensors as follows:
$$A = \sum_{i=1}^{\infty} \otimes^k v_i \qquad (7)$$
The special matrix case (k = 2) of this theorem should be familiar to the reader as the spectral
theorem. In that case, the vi are orthogonal, the smallest such representation is unique and can be
recovered by tractable algorithms. In the general symmetric tensor case, the vi are not necessarily
orthogonal and the decomposition need not be unique; it is however finite [9]. While the spectral
theory for symmetric tensors is relatively straightforward, bearing similarity to that for matrices, the
theory for general non-symmetric tensors is nontrivial: we refer the interested reader to [21, 20, 10].
However, since we are interested not in general non-symmetric tensors, but partially symmetric
tensors, the above theorem can be extended in a straightforward way in our case as we shall see in
Theorem 3.7.
Our next step involves generalizing the earlier proposition to multivariate symmetric functions by
representing them as tensors, which then yields a corresponding spectral theorem of product decompositions for such functions. In particular, note that when the feature vector of each document
takes values only from a finite set $\mathcal{X}$, of size $|\mathcal{X}|$, a symmetric function $h(x_1, x_2, \ldots, x_m)$ can be
represented as an order-m symmetric tensor H where $H_{v_1 v_2 \ldots v_m} = h(v_1, v_2, \ldots, v_m)$ for $v_i \in \mathcal{X}$.
We can thus leverage Proposition 3.5 to obtain the result of the following proposition:
Proposition 3.6 (Symmetric product decomposition for multivariate functions (finite domain))
Any symmetric function $f : \mathcal{X}^m \to \mathbb{R}$ for a finite set $\mathcal{X}$ can be decomposed as
$$f(\mathbf{x}) = \sum_{t=1}^{T} \prod_{j} g_t(x_j), \qquad (8)$$
for some set of functions $\{g_t\}_{t=1}^{T}$, $g_t : \mathcal{X} \to \mathbb{R}$, $T < \infty$.
In the case of ranking three documents, each $f_i$ assigns a score to document i taking the other
document's features as arguments. $f_i$ then corresponds to a matrix and the functions $g_t$ correspond
to the set of eigenvectors of this matrix. In the general case of ranking m documents, $f_i$ is an order
m−1 tensor and the $g_t$ are the eigenvectors of a symmetric decomposition of the tensor.
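For intuition, the sketch below (our own illustration) treats a symmetric function h on a finite domain as a matrix and recovers a rank-1 decomposition in the spirit of (8) from its eigendecomposition; the signs and scalars that appear are absorbed into the $g_t$ in the paper's statement.

```python
import numpy as np

# A symmetric function h on a finite domain X = {0, ..., s-1}, viewed as a matrix H.
s = 6
A = np.random.randn(s, s)
H = A + A.T                                   # H[u, v] = h(u, v) = h(v, u)

lam, V = np.linalg.eigh(H)                    # spectral theorem: H = sum_t lam_t v_t v_t^T
H_rec = sum(lam[t] * np.outer(V[:, t], V[:, t]) for t in range(s))
assert np.allclose(H, H_rec)
# With g_t(u) = sqrt(|lam_t|) * V[u, t], this reads
# h(u, v) = sum_t sign(lam_t) * g_t(u) * g_t(v), a sum of rank-1 symmetric terms.
```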
Our class of exchangeable ranking functions corresponds to partially symmetric functions. In the
following, we extend the theory above to the partially symmetric case (proof in Appendix B).
Theorem 3.7 (Product decomposition for partially symmetric functions) A partially symmetric
function $h : \mathcal{X}^m \to \mathbb{R}$ symmetric in $x_2, \ldots, x_m$ on a finite set $\mathcal{X}$ can be decomposed as
$$h(x_1, \{x_{\setminus 1}\}) = \sum_{t=1}^{T} \prod_{j \neq 1} g_t(x_1, x_j) \qquad (9)$$
for some set of functions $\{g_t\}_{t=1}^{T}$, $g_t : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, $T < \infty$.
Remarks:
I. To the best of our knowledge, the study of partially symmetric tensors and their decompositions as above has not been considered in the literature. Notions such as rank and best successive
approximations would be interesting areas for future research.
II. The tensor view of learning to rank gives rise to a host of other interesting research directions.
Consider the learning to rank problem: each training example corresponds to one entry in the
resulting ranking tensor. A candidate approach to learning to rank might thus be tensor-completion,
perhaps using a convex nuclear tensor norm regularization [14].
3.2 The Continuous Case
In this section, we generalize the results of the previous section to the more realistic setting where
the feature space X is compact. The extension to the partially symmetric case from the symmetric
one is similar to that in the discrete case and is given as Theorem C.1 in Appendix C, so we discuss
only decomposition theorems for symmetric functions below.
3.2.1 Argument via Functional Analytic Spectral Theorem
We first recall some key definitions from functional analysis [25, pp. 203]. A linear operator T is
bounded if its norm $\|T\| = \sup_{\|x\|=1} \|Tx\|$ is finite. A bounded linear operator T is self-adjoint if
$T = T^*$, where $T^*$ is the adjoint operator. A linear operator A from a Banach space $\mathcal{X}$ to a Banach
space $\mathcal{Y}$ is compact if it takes bounded sets in $\mathcal{X}$ into relatively compact sets (i.e. sets whose closure is
compact) in $\mathcal{Y}$.
The Hilbert-Schmidt theorem [25] provides a spectral decomposition for such compact self-adjoint
operators. Let A be a compact self-adjoint operator on a Hilbert space $\mathcal{H}$. Then, by the Hilbert-Schmidt theorem, there is a complete orthonormal basis $\{\phi_n\}$ for $\mathcal{H}$ so that $A\phi_n = \lambda_n \phi_n$ and
$\lambda_n \to 0$ as $n \to \infty$. A can then be written as:
$$A = \sum_{n=1}^{\infty} \lambda_n \phi_n \langle \phi_n, \cdot \rangle. \qquad (10)$$
We refer the reader to [25] for further details. The compactness condition can be relaxed to boundedness, but in that case a discrete spectrum $\{\lambda_n\}$ does not exist and is replaced by a measure $\mu$,
and the summation in the Hilbert-Schmidt decomposition is replaced by an integral. We consider only
compact self-adjoint operators in this paper.
In the following key theorem, we provide a decomposition theorem for bivariate symmetric functions
Theorem 3.8 (Product decomposition for symmetric bivariate functions) A symmetric function
$f(x, y) \in L^2(\mathcal{X} \times \mathcal{X})$ corresponds to a compact self-adjoint operator, and can be decomposed as
$$f(x, y) = \sum_{t=1}^{\infty} \lambda_t g_t(x) g_t(y),$$
for some functions $g_t \in L^2(\mathcal{X})$, with $\lambda_t \to 0$ as $t \to \infty$.
The above result gives a corresponding decomposition theorem (via Theorem C.1) for partially
symmetric functions in three variables. Extending the result to beyond three variables would require
extending this decomposition result for linear operators to the general multilinear operator case.
Unfortunately, to the best of our knowledge, a decomposition theorem for multilinear operators is
an open problem in the functional analysis literature. Indeed, even the corresponding discrete tensor
case has only been studied recently. Instead, in the next section, we will use a result from probability
theory instead, and obtain a proof for our decomposition theorem under additional conditions.
3.2.2 Argument via De Finetti's Theorem
In the previous section, we leveraged the interpretation of multivariate functions as multilinear operators. However, it is also possible to interpret multivariate functions as measures on a product space.
Under appropriate assumptions, we will show that a De Finetti-like theorem gives us the required
decomposition theorem for symmetric measures.
We first review De Finetti's theorem and related terms.
Definition 3.9 (Infinite Exchangeability) An infinite sequence $X_1, X_2, \ldots$ of random variables is
said to be exchangeable if for any $n \in \mathbb{N}$ and any permutation $\sigma \in S_n$,
$$p(X_1, X_2, \ldots, X_n) = p(X_{\sigma(1)}, X_{\sigma(2)}, \ldots, X_{\sigma(n)}) \qquad (11)$$
We note that exchangeability as defined in the probability theory literature refers to symmetricity of
the kind above, and is a distinct if related notion compared to that used in the rest of this paper.
Then, we have a class of De-Finetti-like theorems:
Theorem 3.10 (De Finetti-like theorems) A sequence of random variables $X_1, X_2, \ldots$ is infinitely
exchangeable iff, for all n, there exists a probability distribution function $\mu$ such that
$$p(X_1, \ldots, X_n) = \int \prod_{i=1}^{n} p(X_i; \theta)\, \mu(d\theta) \qquad (12)$$
where p denotes the pdf of the corresponding distribution.
This decomposes the joint distribution over n variables into an integral over product distributions.
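A small numeric illustration (our own, with a Beta mixing measure as the assumed $\mu$) of the binary case: the joint probability of an exchangeable 0-1 sequence is an integral of i.i.d. Bernoulli products, and is invariant to permutations of the sequence.

```python
import numpy as np

thetas = np.linspace(0.001, 0.999, 2000)        # grid approximation of the integral
mu = thetas ** 1.0 * (1.0 - thetas) ** 2.0      # unnormalized Beta(2, 3) mixing density
mu /= mu.sum()

def joint(xs):
    # p(x_1..x_n) = integral of prod_i theta^{x_i} (1-theta)^{1-x_i} mu(dtheta)
    probs = np.prod([thetas ** x * (1 - thetas) ** (1 - x) for x in xs], axis=0)
    return float((probs * mu).sum())

xs = [1, 0, 0, 1, 1]
assert np.isclose(joint(xs), joint(list(reversed(xs))))  # exchangeability
```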
De Finetti originally proved this result for 0-1 random variables, in which case the $p(X_i; \theta)$ are
Bernoulli with parameter $\theta$, a real-valued random variable with $\theta = \lim_{n \to \infty} \sum_i X_i / n$. For accessible
proofs of this result, and a similar one for the case when the $X_i$ are instead discrete, we refer the reader to
[15, 2]. This result was later extended to the case where the variables $X_i$ take values in a compact set
$\mathcal{X}$ by Hewitt and Savage [16]. (The proof in [16] first shows that the set of symmetric measures is a
convex set whose set of extreme points is precisely the set of all product measures, i.e. independent
distributions. Then, it establishes a Choquet representation, i.e. an integral representation of this
convex set as a convex combination of its extreme points, giving us a De Finetti-like theorem as
above.) In this general case, the parameter $\theta$ can be interpreted as being distribution-valued, as
opposed to real-valued in the binary case described above. Our description of this result is terse for
lack of space; see [2, pp. 188] for details. Thus, we derive the following theorem:
Theorem 3.11 (Product decomposition for symmetric functions) Given an infinite sequence of
documents with features $x_i$ from a compact set $\mathcal{X}$, if a function $f : \mathcal{X}^m \to \mathbb{R}_+$ is symmetric in
every leading subset of n documents, and $\int f = M < \infty$, then $f/M$ corresponds to a probability
measure and f can be decomposed as
$$f(\mathbf{x}) = \int \prod_{j} g(x_j; \theta)\, \mu(d\theta) \qquad (13)$$
for some set of functions $\{g(\cdot; \theta)\}$, $g : \mathcal{X} \to \mathbb{R}$.
This theorem can also be applied to discrete valued features $X_i$, and we would obtain a representation similar to that obtained through tensor analysis in Section 3.1. Applied to features $X_i$
belonging to a compact set, we obtain the required representation theorem similar to the functional
analytic theory of Section 3.2.1. However, note that De Finetti's theorem integrates over products
of probabilities, so that each term is non-negative, a restriction not present in the functional analytic
case. Moreover, we have an integral in the De Finetti decomposition, while via tensor analysis in the
discrete case, we have a finite sum whose size is given by the rank of the tensor, and in the functional
analytic analysis, the spectrum for compact operators is discrete. De Finetti's theorem also requires
the existence of infinitely many objects for which every leading finite subsequence is exchangeable.
The similarities and differences between the functional analytic viewpoint and De Finetti's theorem
have been previously noted in the literature, for instance in Kingman's 1977 Wald Lecture [19], and
we discuss them further in Appendix E.
4 Experiments
For our experiments, we consider the information retrieval learning to rank task, where we are given
a training set consisting of n queries. Each query $q^{(i)}$ is associated with m documents, represented
via feature vectors $\mathbf{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, \ldots, x_m^{(i)}) \in \mathcal{X}^m$. The documents for $q^{(i)}$ have relevance levels
$\mathbf{r}^{(i)} = (r_1^{(i)}, r_2^{(i)}, \ldots, r_m^{(i)}) \in \mathcal{R}^m$. Typically, $\mathcal{R} = \{0, 1, \ldots, l-1\}$. The training set thus consists
of the tuples $T = \{\mathbf{x}^{(i)}, \mathbf{r}^{(i)}\}_{i=1}^{n}$. T is assumed sampled i.i.d. from a distribution $\mathcal{D}$ over $\mathcal{X}^m \times \mathcal{R}^m$.
Ranking Loss Functionals We are interested in the NDCG ranking evaluation metric, and hence
for the ranking loss functional, we focus on optimization-amenable listwise surrogates for NDCG;
specifically, a convex class of strongly NDCG-consistent loss functions introduced in [24] and
nonconvex listwise loss functions, ListNet [4] and the Cosine Loss. In addition, we impose an $\ell_2$
regularization penalty on $\|w\|$.
[24] exhaustively characterized the set of strongly NDCG-consistent surrogates as Bregman divergences $D_\phi$ corresponding to strictly convex $\phi$ (see Appendix F). We choose the following instances
of $\phi$: the Cross Entropy loss with $\phi(x) = 0.01(\sum_i x_i \log x_i - x_i)$, the square loss with $\phi(x) = \|x\|^2$,
and the q-norm loss with $\phi(x) = \|x\|_q^2$, $q = \log(m) + 2$ (where m is the number of documents).
Note that the multiplicative factor in $\phi$ is significant, as it does affect the induced divergence.
Ranking Functions The representation theory of the previous sections gives a functional form
for listwise ranking functions. In this section, we pick a simple class of ranking functions inspired
by this representation theory, and use it to rerank the scores output by various pointwise ranking
functions. Consider the following class of exchangeable ranking functions $f(\mathbf{x})$ where the score for
the ith document is given by:
$$f_i(\mathbf{x}) = b(x_i) \prod_{j \neq i} g(x_i, x_j; w) = b(x_i) \prod_{j \neq i} \exp\!\Big(\sum_{k} w_k S_k(x_i, x_j)\Big) \qquad (14)$$
where $b(x_i)$ is the score provided by the base ranker for the i-th document, and the $S_k$ are pairwise
functions ("kernels") applied to $x_i$ and $x_j$. Note that $w = 0$ yields the base ranking functions. Our
theory suggests that we can combine several such terms as $f_i(\mathbf{x}) = \sum_t b(x_i; v_t) \prod_{j \neq i} g(x_i, x_j; w_t)$.
For our experiments, we only use one such term. A Gradient Boosting procedure can be used on top
of our procedure to fit multiple terms of this series.
Our choice of g is motivated by computational considerations: for general functions g, the computation of (14) would require O(m) time per function evaluation, where m is the number of documents. However, the specific functional form in (14) allows O(1) time per function evaluation, as
$f_i(\mathbf{x}; w) = b(x_i) \prod_k \exp\!\big(w_k \sum_{j \neq i} S_k(x_i, x_j)\big)$, where the inner term $\sum_{j \neq i} S_k(x_i, x_j)$
does not depend on w and can be precomputed. Thus after the precomputation step, each function
evaluation is as efficient as that for a pointwise ranking function.
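The sketch below (ours; variable names are our own) illustrates this precomputation for (14): the inner sums over $j$ are computed once per query, after which evaluating $f_i$ for a new w costs O(1) per kernel.

```python
import numpy as np

def precompute_kernel_sums(S):
    # S has shape (K, m, m) with S[k, i, j] = S_k(x_i, x_j).
    # The inner term sum_{j != i} S_k(x_i, x_j) is independent of w.
    return S.sum(axis=2) - np.einsum('kii->ki', S)   # shape (K, m)

def rerank_scores(b, kernel_sums, w):
    # f_i(x; w) = b(x_i) * prod_k exp(w_k * sum_{j != i} S_k(x_i, x_j))
    return b * np.exp(w @ kernel_sums)               # shape (m,)

# Toy usage: m = 4 documents, K = 2 kernels, base scores b from some base ranker.
m, K = 4, 2
S = np.random.rand(K, m, m)
S = (S + S.transpose(0, 2, 1)) / 2                   # symmetric pairwise kernels
b = np.random.rand(m)
sums = precompute_kernel_sums(S)                     # done once per query
scores = rerank_scores(b, sums, np.array([0.1, -0.2]))
```

Gradient descent on w then only needs these cached sums, which is consistent with the small number of iterations reported below.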
As the base pointwise rankers b, we use those provided by RankLib1 : MART, RankNet, RankBoost,
AdaRank, Coordinate Ascent (CA), LambdaMART, ListNet, Random Forests, Linear regression.
We refer the reader to the RankLib website for details on these.
1
https://sourceforge.net/p/lemur/wiki/RankLib/
Table 1: Results for our reranking procedure across LETOR 3.0 datasets. For each dataset, the first
column is the base ranker, second column is the loss function used for reranking.

         | OHSUMED               | TD2003             | NP2003
         | Base       Reranked   | Base     Reranked  | Base     Reranked
         | RankBoost  Cross Ent  | CA       q-Norm    | MART     Square
ndcg@1   | 0.5104     0.5421     | 0.3500   0.3250    | 0.5467   0.5600
ndcg@2   | 0.4798     0.4901     | 0.2875   0.3375    | 0.6500   0.6567
ndcg@5   | 0.4547     0.4615     | 0.3228   0.3461    | 0.7112   0.7128
ndcg@10  | 0.4356     0.4445     | 0.3210   0.3385    | 0.7326   0.7344

         | HP2003                | HP2004                | NP2004
         | Base     Reranked     | Base       Reranked   | Base     Reranked
         | MART     Cross Ent    | RankBoost  q-Norm     | MART     Square
ndcg@1   | 0.6667   0.7333       | 0.5200     0.5333     | 0.3600   0.3733
ndcg@2   | 0.7667   0.7667       | 0.6067     0.6533     | 0.4733   0.4867
ndcg@5   | 0.7546   0.7618       | 0.7034     0.7042     | 0.5603   0.5719
ndcg@10  | 0.7740   0.7747       | 0.7387     0.7420     | 0.5951   0.6102
Results We use the LETOR 3.0 collection [23], which contains the OHSUMED dataset and the
Gov collection: HP2003/04, TD2003/04, NP2003/04, which respectively correspond to the listwise
Homepage Finding, Topic Distillation and Named Page Finding tasks. We use NDCG as evaluation
metric and show gains instead of losses, so larger values are better.
We use the following pairwise functions/kernels {Sk }: we construct a cosine similarity function for
documents using the Query Normalized document features for each LETOR dataset. In addition,
OHSUMED contains document similarity information for each query and the Gov datasets contain
link information and a sitemap, i.e. a parent-child relation. We use these relations directly as the
kernels Sk in (14). Thus, we have two kernels for OHSUMED and three for the Gov datasets, and
w is 2- and 3-dimensional respectively. To obtain the scores b for the baseline pointwise ranking
function, we used Ranklib v2.1-patched with its default parameter values.
LETOR contains 5 predefined folds with training, validation and test sets. We use these directly
and report averaged results on the test set. For the $\ell_2$ regularization parameter, we pick a C from
{0, 1e-5, 1e-2, 1e-1, 1, 10, 1e2, 1e3}, tuning for maximum NDCG@10 on the validation set. We used
gradient descent on w to fit parameters. Though our objective is nonconvex, we found that random
restarts did not affect the achieved minimum and used the initial value w = 0 for our experiments.
Since w = 0 corresponds to the base pointwise rankers, we expect the reranking method to perform
as well as the base rankers in the worst case. Table 1 shows some results across LETOR datasets
which show improvements over the base rankers. For each dataset, we compare the NDCG for
the specified base rankers with the NDCG for our reranking method with that base ranker and the
specified listwise loss. (Detailed results are presented in Appendix G). Gradient descent required on
average only 17 iterations and 20 function evaluations, thus the principal computational cost of this
method was the precomputation for eq. (14). The low computational cost and the empirical results shown for the reranking method are promising and validate our theoretical investigation. We hope that
this representation theory will enable the development of listwise ranking functions across diverse
domains, especially those less studied than ranking in information retrieval.
Acknowledgements
We acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033.
References
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern information retrieval. Addison Wesley, 1999.
[2] J. M. Bernardo and A. F. Smith. Bayesian theory, volume 405. John Wiley & Sons, 2009.
[3] C. J. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.
[4] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In International Conference on Machine Learning 24, pages 129-136. ACM, 2007.
[5] J. Carbonell and J. Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335-336. ACM, 1998.
[6] O. Chapelle and Y. Chang. Yahoo! learning to rank challenge overview. Journal of Machine Learning Research - Proceedings Track, 14:1-24, 2011.
[7] O. Chapelle, D. Metzler, Y. Zhang, and P. Grinspan. Expected reciprocal rank for graded relevance. In Conference on Information and Knowledge Management (CIKM), 2009.
[8] O. Chapelle, Y. Chang, and T. Liu. Future directions in learning to rank. In JMLR Workshop and Conference Proceedings, volume 14, pages 91-100, 2011.
[9] P. Comon, G. Golub, L. Lim, and B. Mourrain. Symmetric tensors and symmetric tensor rank. SIAM Journal on Matrix Analysis and Applications, 30(3):1254-1279, 2008.
[10] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253-1278, 2000.
[11] K. Dembczynski, W. Kotlowski, and E. Huellermeier. Consistent multilabel ranking through univariate losses. arXiv preprint arXiv:1206.6401, 2012.
[12] P. Diaconis. Finite forms of de Finetti's theorem on exchangeability. Synthese, 36(2):271-281, 1977.
[13] P. Diaconis and D. Freedman. Finite exchangeable sequences. The Annals of Probability, pages 745-764, 1980.
[14] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[15] D. Heath and W. Sudderth. De Finetti's theorem on exchangeable variables. The American Statistician, 30(4):188-189, 1976.
[16] E. Hewitt and L. J. Savage. Symmetric measures on Cartesian products. Transactions of the American Mathematical Society, pages 470-501, 1955.
[17] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In SIGIR '00: Proceedings of the 23rd annual international ACM SIGIR conference on research and development in information retrieval, pages 41-48, New York, NY, USA, 2000. ACM.
[18] E. T. Jaynes. Some applications and extensions of the de Finetti representation theorem. Bayesian Inference and Decision Techniques, 31:42, 1986.
[19] J. F. Kingman. Uses of exchangeability. The Annals of Probability, pages 183-197, 1978.
[20] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[21] L. Qi. The spectral theory of tensors (rough version). arXiv preprint arXiv:1201.3424, 2012.
[22] T. Qin, T. Liu, X. Zhang, D. Wang, and H. Li. Global ranking using continuous conditional random fields. In Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS 2008), 2008.
[23] T. Qin, T. Liu, J. Xu, and H. Li. LETOR: A benchmark collection for research on learning to rank for information retrieval. Information Retrieval, 13(4):346-374, 2010.
[24] P. Ravikumar, A. Tewari, and E. Yang. On NDCG consistency of listwise ranking methods. 2011.
[25] M. C. Reed and B. Simon. Methods of modern mathematical physics: Functional analysis, volume 1. Gulf Professional Publishing, 1980.
[26] J. Weston and J. Blitzer. Latent Structured Ranking. arXiv preprint arXiv:1210.4914, 2012.
4,695 | 5,251 | Near-Optimal-Sample Estimators for Spherical
Gaussian Mixtures
Jayadev Acharya*
MIT
jayadev@mit.edu
Ashkan Jafarpour, Alon Orlitsky, Ananda Theertha Suresh
UC San Diego
{ashkan, alon, asuresh}@ucsd.edu
Abstract
Many important distributions are high dimensional, and often they can be modeled
as Gaussian mixtures. We derive the first sample-efficient polynomial-time estimator for high-dimensional spherical Gaussian mixtures. Based on intuitive spectral reasoning, it approximates mixtures of k spherical Gaussians in d dimensions
to within $\ell_1$ distance $\epsilon$ using $\mathcal{O}(dk^9 (\log^2 d)/\epsilon^4)$ samples and $\mathcal{O}_{k,\epsilon}(d^3 \log^5 d)$
computation time. Conversely, we show that any estimator requires $\Omega(dk/\epsilon^2)$
samples, hence the algorithm's sample complexity is nearly optimal in the dimension. The implied time-complexity factor $\mathcal{O}_{k,\epsilon}$ is exponential in k, but much
smaller than previously known.
We also construct a simple estimator for one-dimensional Gaussian mixtures that
uses $\tilde{\mathcal{O}}(k/\epsilon^2)$ samples and $\tilde{\mathcal{O}}((k/\epsilon)^{3k+1})$ computation time.
1 Introduction
1.1 Background
Meaningful information often resides in high-dimensional spaces: voice signals are expressed in
many frequency bands, credit ratings are influenced by multiple parameters, and document topics
are manifested in the prevalence of numerous words. Some applications, such as topic modeling
and genomic analysis consider data in over 1000 dimensions [31, 14]. Typically, information can
be generated by different types of sources: voice is spoken by men or women, credit parameters
correspond to wealthy or poor individuals, and documents address topics such as sports or politics.
In such cases the overall data follow a mixture distribution [26, 27]. Mixtures of high-dimensional
distributions are therefore central to the understanding and processing of many natural phenomena.
Methods for recovering the mixture components from the data have consequently been extensively
studied by statisticians, engineers, and computer scientists.
Initially, heuristic methods such as expectation-maximization were developed [25, 21]. Over the
past decade, rigorous algorithms were derived to recover mixtures of d-dimensional spherical Gaussians [10, 18, 4, 8, 29] and general Gaussians [9, 2, 5, 19, 22, 3]. Many of these algorithms consider
mixtures where the $\ell_1$ distance between the mixture components is $2 - o_d(1)$, namely approaches
the maximum of 2 as d increases. They identify the distribution components in time and samples
that grow polynomially in d. Recently, [5, 19, 22] showed that the parameters of any k-component
d-dimensional Gaussian mixture can be recovered in time and samples that grow as a high-degree
polynomial in d and exponentially in k.
A different approach that avoids the large component-distance requirement and the high time and
sample complexity, considers a slightly relaxed notion of approximation, sometimes called PAC
learning [20], or proper learning, that does not approximate each mixture component, but instead
* Author was a student at UC San Diego at the time of this work.
derives a mixture distribution that is close to the original one. Specifically, given a distance bound
$\epsilon > 0$, error probability $\delta > 0$, and samples from the underlying mixture $\mathbf{f}$, where we use boldface
letters for d-dimensional objects, PAC learning seeks a mixture estimate $\hat{\mathbf{f}}$ with at most k components
such that $D(\mathbf{f}, \hat{\mathbf{f}}) \le \epsilon$ with probability $\ge 1 - \delta$, where $D(\cdot, \cdot)$ is some given distance measure, for
example $\ell_1$ distance or KL divergence.
An important and extensively studied special case of Gaussian mixtures is the mixture of spherical Gaussians [10, 18, 4, 8, 29], where for each component the d coordinates are distributed independently with the same variance, though possibly with different means. Note that different components
can have different variances. Due to their simple structure, spherical-Gaussian mixtures are easier to
analyze and under a minimum-separation assumption have provably-practical algorithms for clustering and parameter estimation. We consider spherical-Gaussian mixtures as they are important on
their own and form a natural first step towards learning general Gaussian mixtures.
1.2 Sample complexity
Reducing the number of samples required for learning is of great practical significance. For example,
in topic modeling every sample is a whole document, in credit analysis every sample is a person's
credit history, and in genetics, every sample is a human DNA. Hence samples can be very scarce
and obtaining them can be very costly. By contrast, current CPUs run at several Giga Hertz, hence
samples are typically much more scarce of a resource than time.
For one-dimensional distributions, the need for sample-efficient algorithms has been broadly recognized. The sample complexity of many problems is known quite accurately, often to within a constant factor. For example, for discrete distributions over $\{1, \ldots, s\}$, an approach was proposed in [23]
and its modifications were used in [28] to estimate the probability multiset using $\Theta(s/\log s)$ samples. Learning one-dimensional m-modal distributions over $\{1, \ldots, s\}$ requires $\Theta(m \log(s/m)/\epsilon^3)$
samples [11]. Similarly, one-dimensional mixtures of k structured distributions (log-concave, monotone hazard rate, and unimodal) over $\{1, \ldots, s\}$ can be learned with $\mathcal{O}(k/\epsilon^4)$, $\mathcal{O}(k \log(s/\epsilon)/\epsilon^4)$, and
$\mathcal{O}(k \log(s)/\epsilon^4)$ samples, respectively, and these bounds are tight up to a factor of $\epsilon$ [6].
Unlike the 1-dimensional case, in high dimensions, sample complexity bounds are quite weak. For
example, to learn a mixture of k = 2 spherical Gaussians, existing estimators use $\mathcal{O}(d^{12})$ samples,
and this number increases exponentially with k [16]. We close this gap by constructing estimators
with near-linear sample complexity.
1.3 Previous and new results
Our main contribution is PAC learning d-dimensional spherical Gaussian mixtures with near-linear
samples. In the process of deriving these results we also prove results for learning one-dimensional
Gaussians and for finding which distribution in a class is closest to the one generating samples.
d-dimensional Gaussian mixtures
Several papers considered PAC learning of discrete- and Gaussian-product mixtures. [17] considered
mixtures of two d-dimensional Bernoulli products where all probabilities are bounded away from 0.
They showed that this class is PAC learnable in $\tilde{\mathcal{O}}(d^2/\epsilon^4)$ time and samples, where the $\tilde{\mathcal{O}}$ notation
hides logarithmic factors. [15] eliminated the probability constraints and generalized the results
from binary to arbitrary discrete alphabets and from 2 to k mixture components, showing that these
mixtures are PAC learnable in $\tilde{\mathcal{O}}((d/\epsilon)^{2k^2(k+1)})$ time. Although they did not explicitly mention
sample complexity, their algorithm uses $\tilde{\mathcal{O}}((d/\epsilon)^{4(k+1)})$ samples. [16] generalized these results
to Gaussian products and showed that mixtures of k Gaussians, where the difference between the
means is bounded by B times the standard deviation, are PAC learnable in $\tilde{\mathcal{O}}((dB/\epsilon)^{2k^2(k+1)})$ time,
and can be shown to use $\tilde{\mathcal{O}}((dB/\epsilon)^{4(k+1)})$ samples. These algorithms consider the KL divergence
between the distribution and its estimate, but it can be shown that the `1 distance would result in
similar complexities. It can also be shown that these algorithms or their simple modifications have
similar time and sample complexities for spherical Gaussians as well.
Our main contribution for this problem is to provide an algorithm that PAC learns mixtures of
spherical Gaussians in $\ell_1$ distance with a number of samples that is nearly linear, and running time polynomial in the dimension d. Specifically, in Theorem 11 we show that mixtures of k spherical-Gaussian
distributions can be learned using
$$n = \mathcal{O}\!\left(\frac{dk^9}{\epsilon^4}\,\log^2\frac{d}{\delta}\right) = \mathcal{O}_{k,\epsilon}\!\left(d\,\log^2\frac{d}{\delta}\right)$$
samples and in time
$$\mathcal{O}\!\left(nd\log n + d\,\Big(\frac{k^7}{\epsilon^3}\log^2\frac{d}{\epsilon}\Big)^{k^2}\right) = \tilde{\mathcal{O}}_{k,\epsilon}(d^3).$$
Recall that for similar problems, previous algorithms used $\tilde{\mathcal{O}}((d/\epsilon)^{4(k+1)})$ samples. Furthermore,
recent algorithms typically construct the covariance matrix [29, 16], hence require $\ge nd^2$ time.
In that sense, for small k, the time complexity we derive is comparable to the best such algorithms one can hope for. Additionally, the exponential dependence on k in the time complexity
is $d\big(\frac{k^7}{\epsilon^3}\log^2\frac{d}{\epsilon}\big)^{k^2/2}$, significantly lower than the $d^{\mathcal{O}(k^3)}$ dependence in previous results.
Conversely, Theorem 2 shows that any algorithm for PAC learning a mixture of k spherical Gaussians requires $\Omega(dk/\epsilon^2)$ samples, hence our algorithms are nearly sample optimal in the dimension.
In addition, their time complexity significantly improves on previously known ones.
One-dimensional Gaussian mixtures
To prove the above results we derive two simpler results that are interesting on their own. We
construct a simple estimator that learns mixtures of k one-dimensional Gaussians using $\tilde{\mathcal{O}}(k\epsilon^{-2})$
samples and in time $\tilde{\mathcal{O}}((k/\epsilon)^{3k+1})$. We note that independently and concurrently with this work, [12]
showed that mixtures of two one-dimensional Gaussians can be learnt with $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples and in
time $\mathcal{O}(\epsilon^{-5})$. Combining with some of the techniques in this paper, they extend their algorithm to
mixtures of k Gaussians, and reduce the exponent to $3k - 1$.
Let $d(\mathbf{f}, \mathcal{F})$ be the smallest $\ell_1$ distance between a distribution $\mathbf{f}$ and any distribution in a collection
$\mathcal{F}$. The popular SCHEFFE estimator [13] takes a surprisingly small $\mathcal{O}(\log |\mathcal{F}|)$ independent samples
from an unknown distribution $\mathbf{f}$ and time $\mathcal{O}(|\mathcal{F}|^2)$ to find a distribution in $\mathcal{F}$ whose distance from $\mathbf{f}$
is at most a constant factor larger than $d(\mathbf{f}, \mathcal{F})$. In Lemma 1, we reduce the time complexity of the
Scheffe algorithm from $\mathcal{O}(|\mathcal{F}|^2)$ to $\tilde{\mathcal{O}}(|\mathcal{F}|)$, helping us reduce the running time of our algorithms.
A detailed analysis of several such estimators is provided in [1], and here we outline a proof for one
particular estimator for completeness.
1.4 The approach and technical contributions
Given the above, our goal is to construct a small class of distributions such that one of them is $\epsilon$-close
to the underlying distribution.
Consider for example mixtures of k components in one dimension with means and variances
bounded by B. Take the collection of all mixtures derived by quantizing the means and variances of
all components to $\epsilon_m$ accuracy, and quantizing the weights to $\epsilon_w$ accuracy. It can be shown that if
$\epsilon_m, \epsilon_w \le \epsilon/k^2$, then one of these candidate mixtures would be $\mathcal{O}(\epsilon)$-close to any mixture, and hence
to the underlying one. There are at most $(B/\epsilon_m)^{2k} \cdot (1/\epsilon_w)^k = (B/\epsilon)^{\mathcal{O}(k)}$ candidates, and running
SCHEFFE on these mixtures would lead to an estimate. However, this approach requires a bound on
S CHEFFE on these mixtures would lead to an estimate. However, this approach requires a bound on
the means and variances. We remove this requirement on the bound, by selecting the quantizations
based on samples and we describe it in Section 3.
In d dimensions, consider spherical Gaussians with the same variance and means bounded by B.
Again, take the collection of all distributions derived by quantizing the means of all components
in all coordinates to m accuracy, and quantizing the weights to w accuracy. It can be shown that
for d-dimensional Gaussian to get distance from the underlying distribution, it suffices to take
?
m , w ? 2 /poly(dk). There are at most (B/m )dk ? (1/w )k = 2O (dk) possible combinations of
the k mean vectors and weights. Hence S CHEFFE implies an exponential-time algorithm with sample
?
complexity O(dk).
To reduce the dependence on d, one can approximate the span of the k mean
vectors. This reduces the problem from d to k dimensions, allowing us to consider a distribution
2
collection of size 2O(k ) , with S CHEFFE sample complexity of just O(k 2 ). [15, 16] constructs the
sample correlation matrix and uses k of its columns to approximate the span of mean vectors. This
3
approach requires the k columns of the sample correlation matrix to be very close to the actual
correlation matrix, requiring a lot more samples.
We derive a spectral algorithm that approximates the span of the k mean vectors using the top k
eigenvectors of the sample covariance matrix. Since we use the entire covariance matrix instead of
just k columns, a weaker concentration suffices and the sample complexity can be reduced.
Using recent tools from non-asymptotic random matrix theory [30], we show that the span of the
?
means can be approximated with O(d)
samples. This result allows us to address most ?reasonable?
distributions, but still there are some ?corner cases? that need to be analyzed separately. To address
them, we modify some known clustering algorithms such as single-linkage, and spectral projections.
While the basic algorithms were known before, our contribution here, which takes a fair bit of effort
and space, is to show that judicious modifications of the algorithms and rigorous statistical analysis
yield polynomial time algorithms with near-linear sample complexity. We provide a simple and
practical spectral algorithm that estimates all such mixtures in Ok, (d log2 d) samples.
The paper is organized as follows. In Section 2, we introduce notations, describe results on the
Scheffe estimator, and state a lower bound. In Sections 3 and 4, we present the algorithms for onedimensional and d-dimensional Gaussian mixtures respectively. Due to space constraints, most of
the technical details and proofs are given in the appendix.
2 Preliminaries
2.1 Notation
For arbitrary product distributions $p_1, \ldots, p_k$ over a d dimensional space, let $p_{j,i}$ be the distribution
of $p_j$ over coordinate i, and let $\mu_{j,i}$ and $\sigma_{j,i}$ be the mean and variance of $p_{j,i}$ respectively. Let
$\mathbf{f} = (w_1, \ldots, w_k, p_1, \ldots, p_k)$ be the mixture of these distributions with mixing weights $w_1, \ldots, w_k$.
We denote estimates of a quantity x by $\hat{x}$; it can be the empirical mean or a more complex estimate. $\|\cdot\|$
denotes the spectral norm of a matrix and $\|\cdot\|_2$ is the $\ell_2$ norm of a vector. We use $D(\cdot, \cdot)$ to denote
the $\ell_1$ distance between two distributions.
2.2 Selection from a pool of distributions
Many algorithms for learning mixtures over the domain $\mathcal{X}$ first obtain a small collection $\mathcal{F}$ of mixtures and then perform a Maximum Likelihood test using the samples to output a distribution [15, 17].
Our algorithm also obtains a set of distributions containing at least one that is close to the underlying
in $\ell_1$ distance. The estimation problem now reduces to the following. Given a class $\mathcal{F}$ of distributions and samples from an unknown distribution $\mathbf{f}$, find a distribution in $\mathcal{F}$ that is close to $\mathbf{f}$. Let
$D(\mathbf{f}, \mathcal{F}) \stackrel{\mathrm{def}}{=} \min_{\mathbf{f}_i \in \mathcal{F}} D(\mathbf{f}, \mathbf{f}_i)$.
The well-known Scheffe method [13] uses $\mathcal{O}(\epsilon^{-2} \log |\mathcal{F}|)$ samples from the underlying distribution
$\mathbf{f}$, and in time $\mathcal{O}(\epsilon^{-2} |\mathcal{F}|^2 T \log |\mathcal{F}|)$ outputs a distribution in $\mathcal{F}$ with $\ell_1$ distance of at most $9.1 \cdot \max(D(\mathbf{f}, \mathcal{F}), \epsilon)$ from $\mathbf{f}$, where T is the time required to compute the probability of an $x \in \mathcal{X}$ by
a distribution in $\mathcal{F}$. A naive application of this algorithm requires time quadratic in the number of
distributions in $\mathcal{F}$. We propose a variant of this that works in near linear time. More precisely,
Lemma 1 (Appendix B). Let $\epsilon > 0$. For some constant c, given $\frac{c}{\epsilon^2} \log\frac{|\mathcal{F}|}{\delta}$ independent samples
from a distribution $\mathbf{f}$, with probability $\ge 1 - \delta$, the output $\hat{\mathbf{f}}$ of MODIFIED SCHEFFE satisfies $D(\hat{\mathbf{f}}, \mathbf{f}) \le 1000 \cdot \max(D(\mathbf{f}, \mathcal{F}), \epsilon)$. Furthermore, the algorithm runs in time $\mathcal{O}\big(\frac{|\mathcal{F}|\, T \log(|\mathcal{F}|/\delta)}{\epsilon^2}\big)$.
Several such estimators have been proposed in the past [11, 12]. A detailed analysis of the estimator
presented here was studied in [1]. We outline a proof in Appendix B for completeness. Note that
the constant 1000 in the above lemma has not been optimized. For our problem of estimating k-component mixtures in d dimensions, $T = \mathcal{O}(dk)$ and $|\mathcal{F}| = \tilde{\mathcal{O}}_{k,\epsilon}(d^2)$.
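For concreteness, here is a minimal sketch (ours; the actual MODIFIED SCHEFFE of Appendix B differs in how candidate pairs are scheduled) of the classical pairwise Scheffe test on one-dimensional densities: for a pair $(f_1, f_2)$, compare the empirical mass of the set $A = \{x : f_1(x) > f_2(x)\}$ with the two candidates' masses of A, and keep the closer one. A tournament of such comparisons over $\mathcal{F}$ gives the selection rule; the modified estimator avoids the quadratic number of pairs.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mass_of_A(f, f1, f2, n=20000):
    # P_f(A) for A = {x : f1(x) > f2(x)}, estimated by Monte Carlo from f.
    y = f.rvs(size=n, random_state=rng)
    return np.mean(f1.pdf(y) > f2.pdf(y))

def scheffe_winner(f1, f2, samples):
    # Keep the candidate whose mass of A is closer to the empirical mass of A.
    emp = np.mean(f1.pdf(samples) > f2.pdf(samples))
    d1 = abs(mass_of_A(f1, f1, f2) - emp)
    d2 = abs(mass_of_A(f2, f1, f2) - emp)
    return f1 if d1 <= d2 else f2

f1, f2 = norm(0, 1), norm(3, 1)
samples = norm(0.2, 1).rvs(size=500, random_state=rng)
best = scheffe_winner(f1, f2, samples)  # picks norm(0, 1) with high probability
```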
2.3 Lower bound
Using Fano's inequality, we show an information theoretic lower bound of $\Omega(dk/\epsilon^2)$ samples to
learn k-component d-dimensional spherical Gaussian mixtures for any algorithm. More precisely,
Theorem 2 (Appendix C). Any algorithm that learns all k-component d-dimensional spherical
Gaussian mixtures to $\ell_1$ distance $\epsilon$ with probability $\ge 1/2$ requires $\Omega(dk/\epsilon^2)$ samples.
3 Mixtures in one dimension
Over the past decade, estimation of one dimensional distributions has gained significant attention [24, 28, 11, 6, 12, 7]. We provide a simple estimator for learning one dimensional Gaussian
mixtures using the MODIFIED SCHEFFE estimator. Formally, given samples from f, a mixture of
Gaussian distributions $p_i \stackrel{\mathrm{def}}{=} \mathcal{N}(\mu_i, \sigma_i^2)$ with weights $w_1, w_2, \ldots, w_k$, our goal is to find a mixture
$\hat{f} = (\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_k, \hat{p}_1, \hat{p}_2, \ldots, \hat{p}_k)$ such that $D(f, \hat{f}) \le \epsilon$. We make no assumption on the weights,
means, or the variances of the components. While we do not use the one dimensional algorithm in
the d-dimensional setting, it provides insight into the usage of the MODIFIED SCHEFFE estimator and
may be of independent interest. As stated in Section 1.4, our quantizations are based on samples, and
the construction is an immediate consequence of the following observation for samples from a Gaussian distribution.
Lemma 3 (Appendix D.1). Given n independent samples $x_1, \ldots, x_n$ from $\mathcal{N}(\mu, \sigma^2)$, with probability $\ge 1 - \delta$ there are two samples $x_j, x_k$ such that $|x_j - \mu| \le \sigma\,\frac{7 \log(2/\delta)}{2n}$ and $|x_j - x_k - \sigma| \le 2\sigma\,\frac{7 \log(2/\delta)}{2n}$.
The above lemma states that given samples from a Gaussian distribution, there would be a sample
close to the mean and there would be two samples that are about a standard deviation apart. Hence,
if we consider the set of all Gaussians $\mathcal{N}(x_j, (x_j - x_k)^2)$, $1 \le j, k \le n$, then that set would contain
a Gaussian close to the underlying one. The same holds for mixtures and for a Gaussian mixture
and we can create the set of candidate mixtures as follows.
Lemma 4 (Appendix D.2). Given $n \ge \frac{120k \log(4k/\delta)}{\epsilon^2}$ samples from a mixture f of k Gaussians, let
$S = \{\mathcal{N}(x_j, (x_j - x_k)^2) : 1 \le j, k \le n\}$ and let $W = \{0, \frac{\epsilon}{2k}, \frac{2\epsilon}{2k}, \ldots, 1\}$ be a set of weights. Let
$\mathcal{F} \stackrel{\mathrm{def}}{=} \{(\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_k, \hat{p}_1, \hat{p}_2, \ldots, \hat{p}_k) : \hat{p}_i \in S\ \forall\, 1 \le i \le k,\ \hat{w}_i \in W\ \forall\, 1 \le i \le k-1,\ \hat{w}_k = 1 - (\hat{w}_1 + \cdots + \hat{w}_{k-1}) \ge 0\}$
be a set of $n^{2k} (2k/\epsilon)^{k-1} \le n^{3k-1}$ candidate distributions. There exists $\hat{f} \in \mathcal{F}$ such that $D(f, \hat{f}) \le \epsilon$.
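The following sketch (ours; kept deliberately tiny since the full set has $\mathcal{O}(n^{3k-1})$ elements) illustrates the candidate construction of Lemma 4: every pair of samples proposes a mean and a scale, and the weights are gridded.

```python
import itertools
import numpy as np

def candidate_mixtures(x, k, eps):
    # S: Gaussians N(x_j, (x_j - x_l)^2) built from ordered sample pairs;
    # we store (mean, std) with std = |x_j - x_l|.
    S = [(x[j], abs(x[j] - x[l]))
         for j in range(len(x)) for l in range(len(x)) if l != j]
    W = np.arange(0.0, 1.0 + 1e-9, eps / (2 * k))        # weight grid
    for comps in itertools.product(S, repeat=k):
        for ws in itertools.product(W, repeat=k - 1):
            wk = 1.0 - sum(ws)
            if wk >= 0:
                yield list(zip(list(ws) + [wk], comps))  # [(w_i, (mu_i, sigma_i))]

# Tiny example; MODIFIED SCHEFFE would then be run over these candidates.
x = np.random.randn(5)
cands = list(candidate_mixtures(x, k=2, eps=0.5))
```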
Running the MODIFIED SCHEFFE algorithm on the above set of candidates $\mathcal{F}$ yields a mixture that
is close to the underlying one. By Lemma 1 and the above lemma we obtain
Corollary 5 (Appendix D.3). Let $n \ge c \cdot \frac{k \log(k/\delta)}{\epsilon^2}$ for some constant c. There is an algorithm that
runs in time $\mathcal{O}\big(\big(\frac{k \log(k/\delta)}{\epsilon^2}\big)^{3k-1}\big)$, and returns a mixture $\hat{f}$ such that $D(f, \hat{f}) \le 1000\,\epsilon$ with probability $\ge 1 - 2\delta$.
[12] considered the one dimensional Gaussian mixture problem for two component mixtures. While
the process of identifying the candidate means is the same for both papers, the processes of identifying
the variances and the proof techniques are different.
4 Mixtures in d dimensions
Algorithm LEARN k-SPHERE learns mixtures of k spherical Gaussians using near-linear samples.
For clarity and simplicity of proofs, we first prove the result when all components have the same
variance $\sigma^2$, i.e., $p_i = \mathcal{N}(\boldsymbol{\mu}_i, \sigma^2 I_d)$ for $1 \le i \le k$. A modification of this algorithm works for components with different variances. The core ideas are the same, and we discuss the changes in Section 4.3.
The algorithm starts out by estimating $\sigma^2$, and we discuss this step later. We estimate the means in
three steps: a coarse single-linkage clustering, a recursive spectral clustering, and a search over the span of
the means. We now discuss the necessity of these steps.
4.1 Estimating the span of means
A simple modification of the one dimensional algorithm can be used to learn mixtures in d dimensions; however, the number of candidate mixtures would be exponential in d, the number of
dimensions. As stated in Section 1.4, given the span of the mean vectors $\boldsymbol{\mu}_i$, we can grid the k
dimensional span to the required accuracy $\epsilon_g$ and use MODIFIED SCHEFFE to obtain a polynomial
time algorithm. One of the natural and well-used methods to estimate the span of mean vectors is
using the correlation matrix [29]. Consider the correlation-type matrix
$$S = \frac{1}{n} \sum_{i=1}^{n} X(i) X(i)^t - \sigma^2 I_d.$$
For a sample X from a particular component j, E[XXt ] = ? 2 Id + ?j ?j t , and the expected fraction
of samples from pj is wj . Hence
k
E[S] = ? wj ?j ?j t .
j=1
Therefore, as n ? ?, S converges to
k
?j=1 wj ?j ?j t ,
and its top k eigenvectors span the means.
While the above intuition is well understood, the number of samples necessary for convergence
?
is not well studied. We wish O(d)
samples to be sufficient for the convergence irrespective of the
values of the means. However this is not true when the means are far apart. In the following example
we demonstrate that the convergence of averages can depend on their separation.
Example 6. Consider the special case, d = 1, k = 2, ? 2 = 1, w1 = w2 = 1/2, and mean differences
??1 ? ?2 ? = L ? 1. Given this prior information, one can estimate the average of the mixture, that
yields (?1 + ?2 )/2. Solving equations obtained by ?1 + ?2 and ?1 ? ?2 = L yields ?1 and ?2 . The
variance of the mixture is 1 + L2 /4 > L2 /4. With additional Chernoff type bounds, one can show
that given n samples the error in estimating the average is
?
??1 + ?2 ? ?
?1 ? ?
?2 ? ? ? (L/ n) .
Hence, estimating the means to high precision requires n ? L2 , i.e., the higher separation, the more
samples are necessary if we use the sample mean.
A similar phenomenon happens in the convergence of the correlation matrices, where the variances
of quantities of interest increase with separation. In other words, for the span to be accurate the
number of samples necessary increases with the separation. To overcome this, a natural idea is to
cluster the Gaussians such that the component means in the same cluster are close and then estimate
the span of means, and apply SCHEFFE on the span within each cluster.
For clustering, we use another spectral algorithm. Even though spectral clustering algorithms are
studied in [29, 2], they assume that the weights are strictly bounded away from 0, which does
not hold here. We use a simple recursive clustering ?
algorithm that takes a cluster C with average
?(C). If there is a component in the cluster such that wi ???i ? ?(C)??2 is ?(log(n/?)?), then the
algorithm divides the cluster into two nonempty clusters without any mis-clustering. For technical
reasons similar to the above example, we first use a coarse clustering algorithm that ensures that the
? 1/4 ?).
mean separation of any two components within each cluster is O(d
Our algorithm thus comprises of (i) variance estimation (ii) a coarse clustering ensuring that means
? 1/4 ?) of each other in each cluster (iii) a recursive spectral clustering that reduces
are within O(d
?
the mean separation to O( k 3 log(n/?)?) (iv) estimating the span of mean within each cluster,
and (v) quantizing the means and running M ODIFIED S CHFEE on the resulting candidate mixtures.
4.2
Sketch of correctness
We now describe the steps stating the performance of each step of Algorithm L EARN k-S PHERE.
To simplify the bounds and expressions, we assume that d > 1000 and ? ? min(2n2 e?d/10 , 1/3).
For smaller values of ?, we run the algorithm with error 1/3 and repeat it O(log 1? ) times to choose
a set of candidate mixtures F? . By the Chernoff-bound with error ? ?, F? contains a mixture -close
to f . Finally, we run MODIFIED SCHEFFE on F? to obtain a mixture that is close to f . By the union
bound and Lemma 1, the error of the new algorithm is ? 2?.
Variance estimation: Let ?
? be the variance estimate from step 1. If X(1) and X(2) are two samples
from the components i and j respectively, then X(1)?X(2) is distributed N (?i ??j , 2? 2 Id ). Hence
2
2
for large d, ??X(1) ? X(2)??2 concentrates around 2d? 2 + ???i ? ?j ??2 . By the pigeon-hole principle,
given k + 1 samples, two of them are from the same component. Therefore, the minimum pairwise
6
distance between k + 1 samples is close to 2d? 2 . This is made precise in the next lemma which
states that ?
? 2 is a good estimate of the variance.
Lemma 7 (Appendix
? E.1). Given n samples from the k-component mixture, with probability 1 ? 2?,
??
? 2 ? ? 2 ? ? 2.5? 2 log(n2 /?)/d.
Coarse single-linkage clustering: The second step is a single-linkage routine that clusters mixture
components with far means. Single-linkage is a simple clustering scheme that starts out with each
data point as a cluster, and at each step merges the two nearest clusters to form a larger cluster. The
algorithm stops when the distance between clusters is larger than a pre-specified threshold.
Suppose the samples are generated by a one-dimensional mixture of k components that are far,
then with high probability, when the algorithm generates k clusters all the samples within a cluster
are generated by a single component. More precisely, if ?i, j ? [k], ??i ? ?j ? = ?(? log n), then
all the n samples concentrate around their respective means and the separation between any two
samples from different components would be larger than the largest separation between any two
samples from the same component. Hence for a suitable value of threshold, single-linkage correctly
identifies the clusters. For d-dimensional Gaussian mixtures a similar property holds, with minimum
separation ?((d log n? )1/4 ?). More precisely,
Lemma 8 (Appendix E.2). After Step 2 of L EARN k-S PHERE, with probability ? 1?2?, all samples
from each component will be in the same cluster and the maximum distance between two components
2 1/4
within each cluster is ? 10k?(d log n? ) .
Algorithm L EARN k-S PHERE
Input: n samples x(1), x(2), . . . , x(n) from f and .
2
1. Sample variance: ?
? 2 = mina?b?a,b?[k+1] ??x(a) ? x(b)??2 /2d.
2. Coarse single-linkage clustering: Start with each sample as a cluster,
?
? While ? two clusters with squared-distance ? 2d?
? 2 + 23?
? 2 d log(n2 /?), merge them.
3. Recursive spectral-clustering: While there is a cluster C with ?C? ? n/5k and spectral
norm of its sample covariance matrix ? 12k 2 ?
? 2 log n3 /?,
? Use n/8k 2 of the samples to find the largest eigenvector and discard these samples.
? Project the remaining samples on the largest eigenvector.
? Perform?single-linkage in the projected space (as before) till the distance between clusters
is > 3?
? log(n2 k/?) creating new clusters.
?
?
2
32k log n2 /?
, and
4. Exhaustive search: Let g = /(16k 3/2 ), L = 200 k 4 ?1 log n? , L? =
def
2
G = {?L, . . . , ??
g , 0, g , 2g , . . . L}. Let W = {0, /(4k), 2/(4k), . . . 1} and ? = {? ?
2
2
?
?
2
? =?
? (1 + i/d 128dk ), ? ? L < i ? L }.
?
? For each cluster C find its top k ? 1 eigenvectors u1 , . . . uk?1 . Let Span(C) = {?(C)
+
k?1
? ui ? gi ? G}.
?i=1 gi ?
? Let Span = ?C??C?? n
Span(C).
5k
? i ? Span,
? For all wi? ? W , ? ?2 ? ?, ?
?
?
? 1 , ? ?2 ), . . . , N (?
? k , ? ?2 )} to F.
add {(w1? , . . . , wk?1
, 1 ? ?k?1
i=1 wi , N (?
5. Run MODIFIED SCHEFFE on F and output the resulting distribution.
Recursive spectral-clustering: The clusters formed at the beginning of this step consist of components with mean separation O(?d1/4 log n? ). We now recursively zoom into the clusters formed
and show that it is possible to cluster the components with much smaller mean separation. Note that
since the matrix is symmetric, the largest magnitude of the eigenvalue is the same as the spectral
norm. We first find the largest eigenvector of
def
S(C) =
1
t
?
?
( ? (x ? ?(C))(x
? ?(C))
)??
? 2 Id ,
?C? x?C
7
which is the sample covariance matrix with its diagonal term reduced by ?
? 2 . We then project our
samples to this vector and if there are two components with means far apart, then using singlelinkage we divide the cluster into two. The following lemma shows that this step performs accurate
clustering of components with well separated means.
4
3
Lemma 9 (Appendix E.3). Let n ? c ? dk log n? . After recursive clustering, with probability
? 1 ? 4?, the samples are divided
into clusters such that for each component i within a cluster
?
?
C, wi ???i ? ?(C)??2 ? 25? k 3 log(n3 /?) . Furthermore, all the samples from one component
remain in a single cluster.
Exhaustive search and ?
Scheffe: After step 3, all clusters have a small weighted radius
?
3
wi ???i ? ?(C)??2 ? 25? k 3 log n? . It can be shown that the eigenvectors give an accurate estimate of the span of ?i ? ?(C) within each cluster. More precisely,
9
Lemma 10 (Appendix E.4). Let n ? c ? dk
log2 d? for some constant c. After step 3, with probability
4
? 1 ? 7?, if ?C? ? n/5k, then the projection of [?i ? ?(C)]/ ???i ? ?(C)??2 on the space orthogonal
to the span of top k ? 1 eigenvectors has magnitude ? 8?2k?w ?
.
??? ??(C)??
i
i
2
We now have accurate estimates of the spans of the cluster means and each cluster has components
with close means. It is now possible to grid the set of possibilities in each cluster to obtain a set of
distributions such that one of them is close to the underlying. There is a trade-off between a dense
grid to obtain a good estimation and the computation time required. The final step takes the sparsest
grid possible to ensure an error ? . This is quantized below.
9
log2 d? for some constant c. Then Algorithm L EARN kTheorem 11 (Appendix E.5). Let n ? c ? dk
4
S PHERE, with probability ? 1 ? 9?, outputs a distribution ?f such that D(?f , f ) ? 1000. Furthermore,
2
the algorithm runs in time O(n
7
d log n + d( k3
log2 d? )
k2
2
).
Note that the run time is calculated based on an efficient implementation of single-linkage clustering
and the exponential term is not optimized.
4.3
Mixtures with unequal variances
We generalize the results to mixtures with components having different variances. Let pi =
N (?i , ?i2 Id ) be the ith component. The key differences between L EARN k-S PHERE and the algorithm for learning mixtures with unequal variances are:
1. In L EARN k-S PHERE, we first estimated the component variance ? and divided the samples
? 1/4 ?). We modify
into clusters such that within each cluster the means are separated by O(d
this step such that the samples are clustered such that within each cluster the components
not
?
?
d) apart.
only have mean separation O(d1/4 ?), but variances are also a factor at most 1+ O(1/
?
?
2. Once the variances in each cluster are within a multiplicative factor of 1 + O(1/
d) of each
other, it can be shown that the performance of the recursive spectral clustering step does not
change more than constants.
3. After obtaining clusters with similar means and variances, the exhaustive search algorithm follows, though instead of having a single ? ? for all clusters, we can have a different ? ? for each
cluster, which is estimated using the average pair wise distance between samples in the cluster.
The changes in the recursive clustering step and the exhaustive search step are easy to see and we
omit them. The coarse clustering step requires additional tools and we describe them in Appendix F.
5
Acknowledgements
We thank Sanjoy Dasgupta, Todd Kemp, and Krishnamurthy Vishwanathan for helpful discussions.
8
References
[1] J. Acharya, A. Jafarpour, A. Orlitksy, and A. T. Suresh. Sorting with adversarial comparators and application to density estimation. In ISIT, 2014.
[2] D. Achlioptas and F. McSherry. On spectral learning of mixtures of distributions. In COLT, 2005.
[3] J. Anderson, M. Belkin, N. Goyal, L. Rademacher, and J. R. Voss. The more, the merrier: the blessing of
dimensionality for learning large gaussian mixtures. In COLT, 2014.
[4] M. Azizyan, A. Singh, and L. A. Wasserman. Minimax theory for high-dimensional gaussian mixtures
with sparse mean separation. In NIPS, 2013.
[5] M. Belkin and K. Sinha. Polynomial learning of distribution families. In FOCS, 2010.
[6] S. O. Chan, I. Diakonikolas, R. A. Servedio, and X. Sun. Learning mixtures of structured distributions
over discrete domains. In SODA, 2013.
[7] S. O. Chan, I. Diakonikolas, R. A. Servedio, and X. Sun. Efficient density estimation via piecewise
polynomial approximation. In STOC, 2014.
[8] K. Chaudhuri, S. Dasgupta, and A. Vattani. Learning mixtures of gaussians using the k-means algorithm.
CoRR, abs/0912.0086, 2009.
[9] S. Dasgupta. Learning mixtures of gaussians. In FOCS, 1999.
[10] S. Dasgupta and L. J. Schulman. A two-round variant of EM for gaussian mixtures. In UAI, 2000.
[11] C. Daskalakis, I. Diakonikolas, and R. A. Servedio. Learning k-modal distributions via testing. In SODA,
2012.
[12] C. Daskalakis and G. Kamath. Faster and sample near-optimal algorithms for proper learning mixtures of
gaussians. In COLT, 2014.
[13] L. Devroye and G. Lugosi. Combinatorial methods in density estimation. Springer, 2001.
[14] I. S. Dhillon, Y. Guan, and J. Kogan. Iterative clustering of high dimensional text data augmented by local
search. In ICDM, 2002.
[15] J. Feldman, R. O?Donnell, and R. A. Servedio. Learning mixtures of product distributions over discrete
domains. In FOCS, 2005.
[16] J. Feldman, R. A. Servedio, and R. O?Donnell. PAC learning axis-aligned mixtures of gaussians with no
separation assumption. In COLT, 2006.
[17] Y. Freund and Y. Mansour. Estimating a mixture of two product distributions. In COLT, 1999.
[18] D. Hsu and S. M. Kakade. Learning mixtures of spherical gaussians: moment methods and spectral
decompositions. In ITCS, 2013.
[19] A. T. Kalai, A. Moitra, and G. Valiant. Efficiently learning mixtures of two gaussians. In STOC, 2010.
[20] M. J. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. E. Schapire, and L. Sellie. On the learnability of
discrete distributions. In STOC, 1994.
[21] J. Ma, L. Xu, and M. I. Jordan. Asymptotic convergence rate of the em algorithm for gaussian mixtures.
Neural Computation, 12(12), 2001.
[22] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of gaussians. In FOCS, 2010.
[23] A. Orlitsky, N. P. Santhanam, K. Viswanathan, and J. Zhang. On modeling profiles instead of values. In
UAI, 2004.
[24] L. Paninski. Variational minimax estimation of discrete distributions under kl loss. In NIPS, 2004.
[25] R. A. Redner and H. F. Walker. Mixture densities, maximum likelihood and the em algorithm. SIAM
Review, 26(2), 1984.
[26] D. A. Reynolds and R. C. Rose. Robust text-independent speaker identification using gaussian mixture
speaker models. IEEE Transactions on Speech and Audio Processing, 3(1):72?83, 1995.
[27] D. M. Titterington, A. F. Smith, and U. E. Makov. Statistical analysis of finite mixture distributions. Wiley
New York, 1985.
[28] G. Valiant and P. Valiant. Estimating the unseen: an n/log(n)-sample estimator for entropy and support
size, shown optimal via new clts. In STOC, 2011.
[29] S. Vempala and G. Wang. A spectral algorithm for learning mixtures of distributions. In FOCS, 2002.
[30] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. CoRR, abs/1011.3027,
2010.
[31] E. P. Xing, M. I. Jordan, and R. M. Karp. Feature selection for high-dimensional genomic microarray
data. In ICML, 2001.
[32] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam. Springer New York, 1997.
9
| 5251 |@word clts:1 polynomial:7 norm:4 d2:1 seek:1 covariance:5 decomposition:1 k7:1 mention:1 jafarpour:2 recursively:1 moment:1 necessity:1 contains:1 selecting:1 document:3 reynolds:1 past:3 existing:1 recovered:1 current:1 od:1 remove:1 xk:4 beginning:1 ith:1 smith:1 core:1 completeness:2 multiset:1 provides:1 coarse:6 quantized:1 ron:1 simpler:1 zhang:1 c2:1 focs:5 prove:3 introduce:1 pairwise:1 expected:1 p1:2 voss:1 spherical:17 cpu:1 actual:1 provided:1 estimating:8 underlying:9 bounded:5 notation:3 project:2 eigenvector:3 developed:1 titterington:1 spoken:1 finding:1 every:3 orlitsky:2 concave:1 k2:4 uk:1 omit:1 before:2 scientist:1 understood:1 modify:2 todd:1 local:1 consequence:1 id:6 merge:1 lugosi:1 studied:5 conversely:2 practical:3 testing:1 recursive:8 union:1 goyal:1 prevalence:1 suresh:2 empirical:1 significantly:2 projection:2 word:2 pre:1 get:1 close:16 selection:2 attention:1 independently:2 d12:1 simplicity:1 identifying:2 wasserman:1 estimator:17 insight:1 deriving:1 ity:1 notion:1 coordinate:3 krishnamurthy:1 diego:2 suppose:1 us:4 approximated:1 wang:1 wj:3 ensures:1 sun:2 kogan:1 trade:1 rose:1 intuition:1 complexity:18 ui:1 cam:2 depend:1 tight:1 solving:1 singh:1 xxt:1 alphabet:1 separated:2 describe:4 exhaustive:4 quite:2 heuristic:1 whose:1 larger:4 gi:2 unseen:1 final:1 eigenvalue:1 quantizing:5 propose:1 product:6 aligned:1 combining:1 mixing:1 till:1 chaudhuri:1 intuitive:1 convergence:5 cluster:46 requirement:2 rademacher:1 generating:1 converges:1 object:1 derive:4 alon:2 stating:1 nearest:1 recovering:1 implies:1 concentrate:2 radius:1 human:1 require:1 suffices:2 clustered:1 preliminary:1 isit:1 strictly:1 helping:1 hold:3 around:2 credit:4 considered:3 great:1 smallest:1 estimation:10 singlelinkage:1 combinatorial:1 lucien:1 largest:5 correctness:1 create:1 tool:2 weighted:1 hope:1 mit:2 concurrently:1 genomic:2 gaussian:32 modified:3 kalai:1 karp:1 corollary:1 derived:3 nd2:1 bernoulli:1 likelihood:2 contrast:1 rigorous:2 adversarial:1 sense:1 helpful:1 typically:3 entire:1 initially:1 provably:1 overall:1 colt:5 exponent:1 special:2 uc:2 construct:5 once:1 having:2 eliminated:1 chernoff:2 yu:1 comparators:1 nearly:3 icml:1 simplify:1 acharya:2 belkin:2 piecewise:1 divergence:2 zoom:1 individual:1 festschrift:1 statistician:1 ab:2 interest:2 possibility:1 mixture:98 analyzed:1 mcsherry:1 accurate:4 necessary:3 respective:1 orthogonal:1 iv:1 divide:2 sinha:1 column:3 modeling:3 maximization:1 deviation:2 learnability:2 learnt:1 vershynin:1 person:1 density:4 siam:1 donnell:2 off:1 pool:1 earn:7 w1:5 again:1 central:1 squared:1 moitra:2 containing:1 choose:1 possibly:1 woman:1 corner:1 creating:1 vattani:1 return:1 makov:1 student:1 wk:4 explicitly:1 later:1 multiplicative:1 lot:1 analyze:1 start:3 recover:1 xing:1 contribution:4 formed:2 accuracy:5 variance:25 efficiently:1 correspond:1 identify:1 yield:4 generalize:1 weak:1 identification:1 itcs:1 accurately:1 history:1 influenced:1 ashkan:2 servedio:5 frequency:1 proof:5 mi:1 stop:1 hsu:1 popular:1 recall:1 improves:1 dimensionality:1 organized:1 redner:1 routine:1 ok:4 higher:1 follow:1 modal:2 though:3 anderson:1 furthermore:4 just:2 achlioptas:1 correlation:6 sketch:1 usage:1 requiring:1 contain:1 true:1 hence:12 symmetric:1 dhillon:1 i2:2 round:1 speaker:2 generalized:2 mina:1 outline:2 theoretic:1 demonstrate:1 performs:1 reasoning:1 wise:1 variational:1 recently:1 fi:1 exponentially:2 extend:1 approximates:2 onedimensional:1 significant:1 feldman:2 odified:5 grid:4 similarly:1 fano:2 add:1 
closest:1 own:2 showed:4 hide:1 recent:2 chan:2 apart:4 discard:1 manifested:1 inequality:1 binary:1 minimum:3 additional:2 relaxed:1 recognized:1 signal:1 ii:1 multiple:1 unimodal:1 reduces:3 technical:3 faster:1 hazard:1 divided:2 icdm:1 ensuring:1 variant:2 basic:1 expectation:1 sometimes:1 background:1 addition:1 separately:1 grow:2 source:1 walker:1 microarray:1 w2:2 unlike:1 db:2 jordan:2 near:7 iii:1 easy:1 xj:7 reduce:4 idea:2 politics:1 expression:1 linkage:9 effort:1 speech:1 york:2 detailed:2 eigenvectors:5 band:1 extensively:2 dna:1 reduced:2 schapire:1 estimated:2 correctly:1 broadly:1 discrete:7 dasgupta:4 sellie:1 santhanam:1 n2k:1 key:1 threshold:2 d3:2 clarity:1 pj:4 monotone:1 fraction:1 run:8 letter:1 soda:2 family:1 reasonable:1 mial:1 separation:15 appendix:13 comparable:1 bit:1 bound:12 def:5 quadratic:1 log5:1 constraint:2 precisely:5 vishwanathan:1 n3:2 generates:1 u1:1 span:21 min:1 vempala:1 structured:2 rubinfeld:1 viswanathan:1 phere:7 combination:1 poor:1 hertz:1 smaller:3 slightly:1 remain:1 em:3 wi:5 kakade:1 modification:5 happens:1 resource:1 equation:1 previously:2 discus:3 nonempty:1 gaussians:24 apply:1 away:2 spectral:17 voice:2 original:1 top:4 clustering:23 running:5 denotes:1 remaining:1 ensure:1 log2:9 jayadev:2 implied:1 quantity:2 costly:1 dependence:3 concentration:1 diagonal:1 diakonikolas:3 distance:21 thank:1 topic:4 considers:1 kemp:1 reason:1 boldface:1 devroye:1 modeled:1 stoc:4 kamath:1 stated:2 implementation:1 proper:2 unknown:2 perform:2 allowing:1 observation:1 finite:1 immediate:1 precise:1 mansour:2 ucsd:1 arbitrary:2 rating:1 namely:1 required:4 kl:3 specified:1 optimized:2 pair:1 unequal:2 learned:2 merges:1 nip:2 address:3 below:1 max:2 suitable:1 natural:4 settling:1 scarce:2 minimax:2 scheme:1 numerous:1 identifies:1 axis:1 irrespective:1 naive:1 text:2 prior:1 understanding:1 l2:3 acknowledgement:1 schulman:1 review:1 asymptotic:3 freund:1 loss:1 men:1 interesting:1 degree:1 sufficient:1 principle:1 azizyan:1 pi:3 genetics:1 surprisingly:1 repeat:1 weaker:1 sparse:1 distributed:2 overcome:1 dimension:14 xn:1 calculated:1 avoids:1 resides:1 author:1 collection:5 made:1 san:2 projected:1 far:4 polynomially:1 transaction:1 approximate:3 obtains:1 uai:2 daskalakis:2 search:6 iterative:1 decade:2 additionally:1 learn:3 robust:1 obtaining:2 poly:1 complex:1 constructing:1 domain:3 did:1 significance:1 main:2 pk:2 dense:1 whole:1 profile:1 n2:5 fair:1 x1:1 augmented:1 xu:1 wiley:1 precision:1 comprises:1 wish:1 sparsest:1 exponential:5 candidate:9 guan:1 learns:4 theorem:3 pac:10 showing:1 learnable:3 dk:15 theertha:1 derives:1 exists:1 consist:1 quantization:2 corr:2 gained:1 valiant:4 magnitude:2 hole:1 gap:1 wealthy:1 easier:1 sorting:1 entropy:1 logarithmic:1 pigeon:1 paninski:1 expressed:1 sport:1 springer:2 satisfies:1 assouad:1 ma:1 goal:2 consequently:1 towards:1 change:3 judicious:1 specifically:2 reducing:1 ananda:1 engineer:1 called:1 lemma:15 sanjoy:1 blessing:1 kearns:1 meaningful:1 formally:1 giga:1 support:1 audio:1 d1:2 phenomenon:2 |
4,696 | 5,252 | Tighten after Relax: Minimax-Optimal Sparse PCA
in Polynomial Time
Zhaoran Wang
Huanran Lu
Han Liu
Department of Operations Research and Financial Engineering
Princeton University
Princeton, NJ 08540
{zhaoran,huanranl,hanliu}@princeton.edu
Abstract
We provide statistical and computational analysis of sparse Principal Component
Analysis (PCA) in high dimensions. The sparse PCA problem is highly nonconvex
in nature. Consequently, though its global solution attains the optimal statistical rate
of convergence, such solution is computationally intractable to obtain. Meanwhile,
although its convex relaxations are tractable to compute, they yield estimators with
suboptimal statistical rates of convergence. On the other hand, existing nonconvex
optimization procedures, such as greedy methods, lack statistical guarantees.
In this paper, we propose a two-stage sparse PCA procedure that attains the optimal
principal subspace estimator in polynomial time. The main stage employs a novel
algorithm named sparse orthogonal iteration pursuit, which iteratively solves the
underlying nonconvex problem. However, our analysis shows that this algorithm
only has desired computational and statistical guarantees within a restricted region,
namely the basin of attraction. To obtain the desired initial estimator that falls into
this region, we solve a convex formulation of sparse PCA with early stopping.
Under an integrated analytic framework, we simultaneously characterize the computational and statistical performance of this
? two-stage procedure. Computationally,
our procedure converges at the rate of 1/ t within the initialization stage, and at
a geometric rate within the main stage. Statistically, the final principal subspace
estimator achieves the minimax-optimal statistical rate of convergence with respect
to the sparsity level s? , dimension d and sample size n. Our procedure motivates a
general paradigm of tackling nonconvex statistical learning problems with provable
statistical guarantees.
1
Introduction
We denote by x1 , . . . , xn the n realizations of a random vector X ? Rd with population covariance
matrix ? ? Rd?d . The goal of Principal Component Analysis (PCA) is to recover the top k leading
eigenvectors u?1 , . . . , u?k of ?. In high dimensional settings with d n, [1?3] showed that classical
PCA can be inconsistent. Additional assumptions are needed to avoid such a curse of dimensionality.
For example, when the first leading eigenvector is of primary interest, one common assumption is that
u?1 is sparse ? the number of nonzero entries of u?1 , denoted by s? , is smaller than n. Under such
an assumption of sparsity, significant progress has been made on the methodological development
[4?13] as well as theoretical understanding [1, 3, 14?21] of sparse PCA.
However, there remains a significant gap between the computational and statistical aspects of sparse
PCA: No tractable algorithm is known to attain the statistical optimal sparse PCA estimator provably
without relying on the spiked covariance assumption. This gap arises from the nonconvexity of sparse
1
PCA. In detail, the sparse PCA estimator for the first leading eigenvector u?1 is
b
b 1 = argmin ?v T ?v,
u
subject to kvk0 = s? ,
(1)
kvk2 =1
b is the sample covariance estimator, k ? k2 is the Euclidean norm, k ? k0 gives the number of
where ?
nonzero coordinates, and s? is the sparsity level of u?1 . Although this estimator has been proven to
attain the optimal statistical rate of convergence [15, 17], its computation is intractable because it
requires minimizing a concave function over cardinality constraints [22]. Estimating the top k leading
b1 , . . . , u
b2 .
eigenvectors is even more challenging because of the extra orthogonality constraint on u
To address this computational issue, [5] proposed a convex relaxation approach, named DSPCA, for
estimating the first leading eigenvector. [13] generalized DSPCA to estimate the principal subspace
spanned by the top k leading
p eigenvectors. Nevertheless, [13] proved the obtained estimator only
attains the suboptimal s? log d/n statistical rate. Meanwhile, several methods have been proposed
to directly address the underlying nonconvex problem (1), e.g., variants of power methods or iterative
thresholding methods [10?12], greedy method [8], as well as regression-type methods [4, 6, 7, 18].
However, most of these methods lack statistical guarantees. There
p are several exceptions: (1) [11]
proposed the truncated power method, which attains the optimal s? log d/n
estimating
u?1 .
rate for
(0)
(0) ?
However, it hinges on the assumption that the initial estimator u satisfies sin ?(u , u ) ? 1?C,
where C ? (0, 1) is a constant. Suppose u(0) is chosen uniformly at random on the `2 sphere, this
assumption holds with probability decreasing to zero when d ? ? [23]. (2) [12] proposed an iterative
thresholding method, which attains a near optimal statistical rate when estimating several individual
leading eigenvectors. [18] proposed a regression-type method, which attains the optimal principal
subspace estimator. However, these two methods hinge on the spiked covariance assumption, and
require the data to be exactly Gaussian (sub-Gaussian not included). For them, the spiked covariance
assumption is crucial, because they use diagonal thresholding method [1] to obtain the initialization,
which would fail when the assumption of spiked covariance doesn?t hold, or each coordinate of X
has the same variance. Besides, except [12] and [18], all the computational procedures only recover
the first leading eigenvector, and leverage the deflation method [24] to recover the rest, which leads
to identifiability and orthogonality issues when the top k eigenvalues of ? are not distinct.
To close the gap between computational and statistical aspects of sparse PCA, we propose a two-stage
procedure for estimating the k-dimensional principal subspace U ? spanned by the top k leading
eigenvectors u?1 , . . . , u?k . The details of the two stages are as follows: (1) For the main stage, we
propose a novel algorithm, named sparse orthogonal iteration pursuit, to directly estimate the principal
subspace of ?. Our analysis shows, when its initialization falls into a restricted region, namely the
basin of attraction, this algorithm enjoys fast optimization rate of convergence, and attains the optimal
principal subspace estimator. (2) To obtain the desired initialization, we compute a convex relaxation
of sparse PCA. Unlike [5, 13], which calculate the exact minimizers, we early stop the corresponding
optimization algorithm as soon as the iterative sequence enters the basin of attraction for the main
stage. The rationale is, this convex optimization algorithm converges at a slow sublinear rate towards
a suboptimal estimator, and incurs relatively high computational overhead within each iteration.
Under a unified analytic framework, we provide simultaneous statistical and computational guarantees
for this two-stage procedure. Given the sample size n is sufficiently large, and the eigengap between
the k-th and (k + 1)-th eigenvalues of the population covariance matrix ? is nonzero, we prove: (1)
b
The
p final subspace estimator U attained by our two-stage procedure achieves the minimax-optimal
s? log d/n statistical rate of convergence. (2) Within the initialization stage, the iterative sequence
T
of subspace estimators U (t) t=0 (at the T -th iteration we early stop the initialization stage) satisfies
p
?
D U ? , U (t) ? ?1 (?) ? s? log d/n + ?2 (k, s? , d, n) ? 1/ t
{z
}
|
{z
} |
Statistical Error
(2)
Optimization Error
with high probability. Here D(?, ?) is the subspace distance, while s? is the sparsity level of U ? , both
of which will be defined in ?2. Here ?1 (?) is a quantity which depends on the population covariance
matrix ?, while ?2 (k, s? , d, n) depends on k, s? , d and n (see ?4 for details). (3) Within the main
T +Te
stage, the iterative sequence U (t)
(where Te denotes the total number of iterations of sparse
t=T +1
2
orthogonal iteration pursuit) satisfies
Optimal Rate
D U ?, U
(t)
zp }|
{
? ?3 (?, k) ? s? log d/n + ?(?)(t?T ?1)/4 ? D U ? , U (T +1)
{z
} |
|
{z
}
Statistical Error
(3)
Optimization Error
with high probability, where ?3 (?, k) is a quantity that only depends on ? and k, and
?(?) = [3?k+1 (?) + ?k (?)]/[?k+1 (?) + 3?k (?)] < 1.
(4)
Here ?k (?) and ?k+1 (?) are the k-th and (k + 1)-th eigenvalues of ?. See ?4 for more details.
Unlike previous works, our theory and method don?t depend on the spiked covariance assumption, or
require the data distribution to be Gaussian.
U init
U (t)
U
Suboptimal Rate
Optimal Rate
Basin of Attraction
Convex Relaxation
Sparse Orthogonal Iteration Pursuit
Figure 1: An illustration of our two-stage procedure.
?
Our analysis shows, at the initialization stage,
the optimization error decays to zero at the rate
of 1/ t.
p
However, the upper bound of D U ? , U (t) in (2) can?t be smaller than the suboptimal s? log d/n
rate of convergence, even with infinite number of iterations. This phenomenon, which is illustrated in
Figure 1, reveals the limit of the convex relaxation approaches for sparse PCA. Within the main stage,
as the optimization error term in (3) decreases to zero geometrically, the upper bound of D U ? , U (t)
p
decreases towards the s? log d/n statistical rate of convergence, which is minimax-optimal with
respect to the sparsity level s? , dimension d and sample size n [17]. Moreover, in Theorem 2 we will
show that, the basin of attraction for the proposed sparse orthogonal iteration pursuit algorithm can
be characterized as
nq
o
p
U : D U ? , U ? R = min
k?(?) 1 ? ?(?)1/2 /2, 2?(?)/4 .
(5)
Here ?(?) is defined in (4) and R denotes its radius.
The contribution of this paper is three-fold: (1) We propose the first tractable procedure that provably
attains the subspace estimator with minimax-optimal statistical rate of convergence with respect to the
sparsity level s? , dimension d and sample size n, without relying on the restrictive spiked covariance
assumption or the Gaussian assumption. (2) We propose a novel algorithm named sparse orthogonal
iteration pursuit, which converges to the optimal estimator at a fast geometric rate. The computation
within each iteration is highly efficient compared with convex relaxation approaches. (3) We build a
joint analytic framework that simultaneously captures the computational and statistical properties of
sparse PCA. Under this framework, we characterize the phenomenon of basin of attraction for the
proposed sparse orthogonal iteration pursuit algorithm. In comparison with our previous work on
nonconvex M -estimators [25], our analysis provides a more general paradigm of solving nonconvex
learning problems with provable guarantees. One byproduct of our analysis is novel techniques for
analyzing the statistical properties of the intermediate solutions of the Alternating Direction Method
of Multipliers [26].
Notation: Let A = [Ai,j ] ? Rd?d and v = (v1 , . . . , vd )T ? Rd . The `q norm (q ? 1) of v is kvkq .
Specifically, kvk0 gives the number of nonzero entries of v. For matrix A, the i-th largest eigenvalue
and singular value are ?i (A) and ?i (A). For q ? 1, kAkq is the matrix operator q-norm, e.g., we
have kAk2 = ?1 (A). The Frobenius norm is denoted as kAkF . For A1 and A2 , their inner product
is hA1 , A2 i = tr(AT1 A2 ). For a set S, |S| denotes its cardinality. The d ? d identity matrix is Id .
3
For index sets I, J ? {1, . . . , d}, we define AI,J ? Rd?d to be the matrix whose (i, j)-th entry is
Ai,j if i ? I and j ? J , and zero otherwise. When I = J , we abbreviate it as AI . If I or J is
{1, . . . , d}, we replace it with a dot, e.g., AI,? . We denote by Ai,? ? Rd the i-th row vector of A. A
matrix is orthonormal if its columns are unit length orthogonal vectors. The (p, q)-norm of a matrix,
denoted as kAkp,q , is obtained by first taking the `p norm of each row, and then taking `q norm of
these row norms. We denote diag(A) to be the vector consisting of the diagonal entries of A. With a
little abuse of notation, we denote by diag(v) the the diagonal matrix with v1 , . . . , vd on its diagonal.
Hereafter, we use generic numerical constants C, C 0 , C 00 , . . ., whose values change from line to line.
2
Background
In the following, we introduce the distance between subspaces and the notion of sparsity for subspace.
Subspace Distance: Let U and U 0 be two k-dimensional subspaces of Rd . We denote the projection
matrix onto them by ? and ?0 respectively. One definition of the distance between U and U 0 is
D(U, U 0 ) = k? ? ?0 kF .
(6)
This definition is invariant to the rotations of the orthonormal basis.
Subspace Sparsity: For the k-dimensional principal subspace U ? of ?, the definition of its sparsity
should be invariant to the choice of basis, because ??s top k eigenvalues might be not distinct. Here
we define the sparsity level s? of U ? to be the number of nonzero coefficients on the diagonal of its
projection matrix ?? . One can verify that (see [17] for details)
s? = supp[diag(?? )] = kU? k2,0 ,
(7)
where k ? k2,0 gives the row-sparsity level, i.e., the number of nonzero rows. Here the columns of U?
can be any orthonormal basis of U ? . This definition reduces to the sparsity of u?1 when k = 1.
Subspace Estimation: For the k-dimensional s? -sparse principal subspace U ? of ?, [17] considered
the following estimator for the orthonormal matrix U? consisting of the basis of U ? ,
b = argmin ? ?,
b UUT , subject to U orthonormal, and kUk2,0 ? s? ,
(8)
U
U?Rd?k
b is an estimator of ?. Let Ub be the column space of U.
b [17] proved that, assuming ?
b is
where ?
b
the sample covariance estimator, and the data are independent sub-Gaussian, U attains the optimal
statistical rate. However, direct computation of this estimator is NP-hard even for k = 1 [22].
3
A Two-stage Procedure for Sparse PCA
In this following, we present the two-stage procedure for sparse PCA. We will first introduce sparse
orthogonal iteration pursuit for the main stage and then present the convex relaxation for initialization.
Algorithm 1 Main stage: Sparse orthogonal iteration pursuit. Here T denotes the total number of
iterations of the initialization stage. To unify the later analysis, let t start from T + 1.
b ? Sparse Orthogonal Iteration Pursuit ?,
b Uinit
1: Function: U
b Initialization Uinit
2: Input: Covariance Matrix Estimator ?,
3: Parameter: Sparsity Parameter s
b, Maximum Number of Iterations Te
(T +1)
(T
+1)
e
e (T +1)
? Truncate Uinit , sb , U(T +1) , R2
? Thin QR U
4: Initialization: U
5: For t = T + 1, . . . , T + Te ? 1
(t+1)
e (t+1) ? ?
b ? U(t) ,
e (t+1)
6:
V
V(t+1) , R1
? Thin QR V
(t+1)
e (t+1) ? Truncate V(t+1) , sb ,
e (t+1)
7:
U
U(t+1) , R2
? Thin QR U
8: End For
b ? U(T +Te)
9: Output: U
4
Sparse Orthogonal Iteration Pursuit: For the main stage, we propose sparse orthogonal iteration
pursuit (Algorithm 1) to solve (8). In Algorithm 1, Truncate(?, ?) (Line 7) is defined in Algorithm
2. In Lines 6 and 7, Thin QR(?) denotes the thin QR decomposition (see [27] for details). In detail,
(t+1)
V(t+1) ? Rd?k and U(t+1) ? Rd?k are orthonormal matrices, and they satisfy V(t+1) ? R1
=
(t+1)
(t+1)
(t+1)
(t+1)
(t+1)
(t+1)
k?k
e
e
V
, and U
? R2
=U
, where R1
, R2
?R
. This decomposition can be
accomplished with O(k 2 d) operations using Householder algorithm [27]. Here remind that k is the
rank of the principal subspace of interest, which is much smaller than the dimension d.
Algorithm 1 consists of two steps: (1) Line 6 performs a matrix multiplication and a renormalization
using QR decomposition. This step is named orthogonal iteration in numerical analysis [27]. When
the first leading eigenvector (k = 1) is of interest, it reduces to the well-known power iteration. The
intuition behind this step can be understood as follows. We consider the minimization problem in (8)
b ? U(t) .
without the row-sparsity constraint. Note that the gradient of the objective function is ?2?
Hence, the gradient descent update scheme for this problem is
e (t+1) ? Porth U(t) + ? ? 2?
b ? U(t) ,
V
(9)
where ? is the step size, and Porth (?) denotes the renormalization step. [28] showed that the optimal
b (t) =Porth ??2??U
b (t) =Porth ??U
b (t) ,
step size ? is infinity. Thus we have Porth U(t) +??2??U
which implies that (9) is equivalent to Line 6. (2) In Line 7, we take a truncation step to enforce the
row-sparsity constraint in (8). In detail, we greedily select the sb most important rows. To enforce
the orthonormality constraint in (8), we perform another renormalization step after the truncation.
Note that the QR decomposition in Line 7 gives a both orthonormal and row-sparse U(t+1) , because
e (t+1) is row-sparse by truncation, and QR decomposition preserves its row-sparsity. By iteratively
U
performing these two steps, we are approximately solving the nonconvex problem in (8). Although
it is not clear whether this procedure achieves the global minimum of (8), we will prove that, the
obtained estimator enjoys the same optimal statistical rate of convergence as the global minimum.
Algorithm 2 Main stage: The Truncate(?, ?) function used in Line 7 of Algorithm 1.
e (t+1) ? Truncate V(t+1) , sb
1: Function: U
(t+1)
2: Row Sorting: Isb ? The set of row index i0 s with the top s
b largest
Vi,?
2 ?s
e (t+1) ? 1 i ? Isb ? V(t+1) , for all i ? {1, . . . , d}
3: Truncation: U
i,?
i,?
e (t+1)
4: Output: U
Algorithm 3 Initialization stage: Solving convex relaxation (10) using ADMM. In Lines 6 and 7,
b to A.
we need to solve two subproblems. The first one is equivalent to projecting ?(t) ??(t) +?/?
This projection can be computed using Algorithm 4 in [29]. The second can be solved by entry-wise
soft-thresholding shown in Algorithm 5 in [29]. We defer these two algorithms and their derivations
to the extended version [29] of this paper.
b
1: Function: Uinit ? ADMM ?
b
2: Input: Covariance Matrix Estimator ?
3: Parameter: Regularization Parameter ? > 0 in (10), Penalty Parameter ? > 0 of the Augmented
Lagrangian, Maximum Number of Iterations T
4: ?(0) ? 0, ?(0) ? 0, ?(0) ? 0
5: For t = 0, . . . , T ? 1
2
6:
?(t+1) ? argmin L ?, ?(t) , ?(t) + ?/2 ?
? ? ?(t)
F ? ? A
2
7:
?(t+1) ? argmin L ?(t+1) , ?, ?(t) + ?/2 ?
?(t+1) ? ?
F ? ? B
8:
?(t+1) ??(t) ? ? ?(t+1) ? ?(t+1)
9: End For
PT
10: ?(T ) ? 1/T ? t=0 ?(t) , let the columns of Uinit be the top k leading eigenvectors of ?(T )
11: Output: Uinit ? Rd?k
5
Convex Relaxation for Initialization: To obtain a good initialization for sparse orthogonal iteration
pursuit, we consider the following convex minimization problem proposed by [5, 13]
n
o
b ? + ?k?k1,1 tr(?) = k, 0 ? Id ,
minimize ? ?,
(10)
which relaxes the combinatorial optimization problem in (8). The intuition behind this relaxation can
be understood as follows: (1) ? is a reparametrization for UUT in (8), which is a projection matrix
with k nonzero eigenvalues of 1. In (10), this constraint is relaxed to tr(?) = k and 0 ? Id ,
which indicates that the eigenvalues of ? should be in [0, 1] while the sum of them is k. (2) For the
row-sparsity constraint in (8), [13] proved that k?? k0,0 ? |supp[diag(?? )]|2 = kU? k22,0 = (s? )2 .
Correspondingly, the row-sparsity constraint in (8) translates to k?k0,0 ? (s? )2 , which is relaxed to
the regularization term k?k1,1 in (10). For notational simplicity, we define
A = ? : ? ? Rd?d , tr(?) = k, 0 ? Id .
(11)
Note (10) has both nonsmooth regularization term and nontrivial constraint A. We use the Alternating
Direction Method of Multipliers (ADMM, Algorithm 3). It considers the equivalent form of (10)
n
o
b ? + ?k?k1,1 ? = ?, ? ? A, ? ? B , where B = Rd?d ,
minimize ? ?,
(12)
and iteratively minimizes the augmented Lagrangian L(?, ?, ?) + ?/2 ? k? ? ?k2F , where
b ? + ?k?k1,1 ? h?, ? ? ?i, ? ? A, ? ? B, ? ? Rd?d
L(?, ?, ?) = ? ?,
(13)
is the Lagrangian corresponding to (12), ? ? Rd?d is the Lagrange multiplier associated with the
equality constraint ? = ?, and ? > 0 is a penalty parameter that enforces such an equality constraint.
Note that other variants of ADMM, e.g., Peaceman-Rachford Splitting Method [30] is also applicable,
which would yield similar theoretical guarantees along with improved practical performance.
4
Theoretical Results
To describe our results, we define the model class Md (?, k, s? ) as follows,
?
?X = ?1/2 Z, where Z ? Rd is sub-Gaussian with mean zero,
?
Md (?, k, s ) :
variance proxy less than 1, and covariance matrix I ;
?The k-dimensional principal subspace U ? of ? is s? -sparse; ?d (?)??
k
k+1 (?)>0.
where ?1/2 satisfies ?1/2 ??1/2 = ?. Here remind the sparsity of U ? is defined in (7) and ?j (?) is
the j-th eigenvalue of ?. For notational simplicity, hereafter we abbreviate ?j (?) as ?j . This model
class doesn?t restrict ? to spiked covariance matrices, where the (k + 1), . . . , d-th eigenvalues of
? can only be identical. Moreover, we don?t require X to be exactly Gaussian, which is a crucial
requirement in several previous works, e.g., [12, 18].
We first introduce some notation. Remind D(?, ?) is the subspace distance defined in (6). Note that
?(?) < 1 is defined in (4) and will be abbreviated as ? hereafter. We define
nq
o2
p
nmin = C ? (s? )2 log d ? min
k ? ?(1 ? ? 1/2 )/2, 2?/4 ? (?k ? ?k+1 )2 /?21 ,
(14)
which denotes the required sample complexity. We also define
h p
i
p
1/4
?1 = [C?1 /(?k ??k+1 )] ? s? log d/n, ?2 = 4/ ?k ??k+1 ? k ? s? ? d2 log d/n
, (15)
which will be used in the analysis of the first stage, and
i p
hp
?
2
?1 = C k ? [?k /(?k ? ?k+1 )] ?
?1 ?k+1 /(?k ? ?k+1 ) ? s? ?(k + log d)/n,
(16)
which will be used in the analysis of the main stage. Meanwhile, remind the radius of the basin of
attraction for sparse orthogonal iteration pursuit is defined in (5). We define
Tmin = ?22 /(R ? ?1 )2 ,
Temin = 4 dlog(R/?1 )/log(1/?)e
(17)
as the required minimum numbers of iterations of the two stages respectively. The following results
will be proved in the extended version [29] of this paper accordingly.
Main Result: Recall that U (t) denotes the subspace spanned by the columns of U(t) in Algorithm 1.
6
Theorem 1. Let x1 , . . . , xn be independent realizations of X ? Md (?, k, s? ) with np? nmin , and
b be the sample covariance matrix. Suppose the regularization parameter ? = C?1 log d/n for
?
a sufficiently ?
large C > 0 in (10) and the penalty parameter ? of ADMM (Line 3 of Algorithm 3)
is ? = d ? ?/ k.
parameter sb in Algorithm 1 (Line 3) is chosen such
Also, suppose the sparsity
that sb = C max 4k/(? ?1/2 ? 1)2 , 1 ? s? , where C ? 1 is an integer constant. After T ? Tmin
e
iterations of Algorithm 3 and then Te ? Temin iterations of Algorithm 1, we obtain Ub = U (T +T ) and
hp
i p
?
2
?1 ?k+1 /(?k ? ?k+1 ) ? s? ?(k + log d)/n
D U ? , Ub ? C?1 = C 0 k ? [?k /(?k ? ?k+1 )] ?
with high probability. Here the equality follows from the definition of ?1 in (16).
Minimax-Optimality: To establish the optimality of Theorem 1, we consider a smaller model class
fd (?, k, s? , ?), which is the same as Md (?, k, s? ) except the eigengap of ? satisfies ?k ? ?k+1 >
M
??k for some constant ? > 0. This condition is mild compared to previous works, e.g., [12] assumes
f we assume that the rank k
?k ? ?k+1 ? ??1 , which is more restrictive because ?1 ? ?k . Within M,
of the principal subspace is fixed. This assumption is reasonable, e.g., in applications like population
genetics [31], the rank k of principal subspaces represents the number of population groups, which
doesn?t increase when the sparsity level s? , dimension d and sample size n are growing.
Theorem 3.1 of [17] implies the following minimax lower bound
e U ? 2 ? C?1 ?k+1 /(?k ??k+1 )2 ? (s? ?k) ? k + log[(d?k)/(s? ?k)] /n,
inf
sup E D U,
e
U
fd (?,k,s? )
X?M
where Ue denotes any principal subspace estimator. Suppose s? and d are sufficiently large (to avoid
trivial cases), the right-hand side is lower bounded
by C 0 ?1 ?k+1 /(?k ??k+1 )2 ?s? (k+1/4?log d)/n.
?
? b
By Lemma 2.1 in [29], we have D U , U ? 2k. For n, d and s? sufficiently large, it is easy to
derive the same upper bound in expectation from in Theorem 1. It attains the minimax lower bound
fd (?, k, s? , ?), up to the 1/4 constant in front of log d and a total constant of k ? ??4 .
above within M
Analysis of the Main Stage: Remind that U (t) is the subspace spanned by the columns of U(t) in
Algorithm 1, and the initialization is Uinit while its column space is U init .
Theorem 2. Under the same condition as in Theorem 1, and provided that D U ? , U init ? R, the
iterative sequence U (T +1) , U (T +2) , . . . , U (t) , . . . satisfies
D U ? , U (t) ?
C?1
+
? (t?T ?1)/4 ? ? ?1/2 R
(18)
|{z}
|
{z
}
Statistical Error
Optimization Error
with high probability, where ?1 is defined in (16), R is defined in (5), and ? is defined in (4).
Theorem 2 shows that, as long as U init falls into its basin of attraction, sparse orthogonal iteration
pursuit converges at a geometric rate of convergence in optimization error since ? < 1. According to
the definition of ? in (4), when ?k is close to ?k+1 , ? is close to 1, then the optimization error term
decays at a slower rate. Here the optimization error doesn?t increase with dimension d, which makes
this algorithm suitable to solve ultra high dimensional
problems. In (18), when t is sufficiently large
such that ? (t?T ?1)/4 ?? ?1/2 R ? ?1 , D U ? , U (t) is upper bounded by 2C?1 , which gives the optimal
statistical rate. Solving t in this inequality, we obtain that t = Te ? Temin , which is defined in (17).
Pt
Analysis of the Initialization Stage: Let ?(t) = 1/t? i=1 ?(i) where ?(i) is defined in Algorithm
3. Let U (t) be the k-dimensional subspace spanned by the top k leading eigenvectors of ?(t) .
Theorem 3. Under the same condition as in Theorem 1, the iterative sequence of k-dimensional
subspaces U (0) , U (1) , . . . , U (t) , . . . satisfies
?
D U ? , U (t) ?
?1
+
?2 ? 1/ t
(19)
|{z}
| {z }
Statistical Error
with high probability. Here ?1 and ?2 are defined in (15).
7
Optimization Error
D(U ? , U (t))
D(U ? , U (t))
3
2.5
2
1.5
0
10
?1
10
20
t
(a)
30
10
5 10 15 20
t
D(U ? , U (t)) ? D(U ? , U (T +Te))
0
10
Initial Stage
Main Stage
10
20
t
(b)
(c)
1
0.8
0.6
0.4
0.2
D(U ? , Ub)
Main Stage
Initial Stage
30
n = 60
d = 128
d = 192
d = 256
D(U ? , Ub)
?
In Theorem 3 the optimization
error
term
decays
to
zero
at
the
rate
of
1/
t. Note that ?2 increases
?
1/4
with d at the rate of d ? (log d) . That is to say, computationally convex relaxation is less efficient
than sparse orthogonal iteration pursuit, which justifies
the early stopping of ADMM. To ensure U (T )
?
enters the basin of attraction, we need ?1 + ?2 / T ? R. Solving T gives T ? Tmin where Tmin is
defined in (17). The proof of Theorem 3 is a nontrivial combination of optimization and statistical
analysis under the variational inequality framework, which is provided in the extended version [29]
of this paper with detail.
0.6
n = 100
d = 128
d = 192
d = 256
0.4
1p 1.5
2
s? log d/n
0.2
0.60.8
p 1 1.21.41.61.8
s? log d/n
(d)
(e)
Figure 2: An Illustration of main results. See ?5 for detailed experiment settings and the interpretation.
Table 1: A comparison of subspace estimation error with existing sparse PCA procedures. The error
b defined in (6). Standard deviations are provided in the parentheses.
is measured by D(U ? , U)
Procedure
Our Procedure
Convex Relaxation [13]
TPower [11] + Deflation Method [24]
GPower [10] + Deflation Method [24]
PathSPCA [8] + Deflation Method [24]
b for Setting (i)
D(U ? , U)
0.32 (0.0067)
1.62 (0.0398)
1.15 (0.1336)
1.84 (0.0226)
2.12 (0.0226)
b for Setting (ii)
D(U ? , U)
0.064 (0.00016)
0.57 (0.021)
0.01 (0.00042)
1.75 (0.029)
2.10 (0.018)
(i): d = 200, s = 10, k = 5, n = 50, ??s eigenvalues are {100, 100, 100, 100, 4, 1, . . . , 1};
(ii): The same as (i) except n = 100, ??s eigenvalues are {300, 240, 180, 120, 60, 1, . . . , 1}.
5
Numerical Results
Figure 2 illustrates the main theoretical results. For (a)-(c), we set d=200, s? =10, k=5,?
n=100, and
??s eigenvalues are {100, 100, 100, 100, 10, 1, . . . , 1}. In detail, (a) illustrates the 1/ t decay of
optimization error at the initialization stage; (b) illustrates the decay of the total estimation error (in
log-scale) at the main stage; (c) illustrates the basin of attraction phenomenon, as well as the geometric
decay of optimization error (in log-scale) of sparse orthogonal iteration pursuit as characterized in ?4.
For (d) and (e),p
the eigenstructure is the same, while d, n and s? take multiple values. They show that
the theoretical s? log d/n statistical rate of our estimator is tight in practice.
In Table 1, we compare the subspace error of our procedure with existing methods, where all except
our procedure and convex relaxation [13] leverage the deflation method [24] for subspace estimation
with k > 1. We consider two settings: Setting (i) is more challenging than setting (ii), since the top k
eigenvalues of ? are not distinct, the eigengap is small and the sample size is smaller. Our procedure
significantly outperforms other existing methods on subspace recovery in both settings.
Acknowledgement:
This research is partially supported by the grants NSF IIS1408910, NSF
IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841.
References
[1] I. Johnstone, A. Lu. On consistency and sparsity for principal components analysis in high dimensions,
Journal of the American Statistical Association 2009;104:682?693.
8
[2] D. Paul. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model, Statistica
Sinica 2007;17:1617.
[3] B. Nadler. Finite sample approximation results for principal component analysis: A matrix perturbation
approach, The Annals of Statistics 2008:2791?2817.
[4] I. Jolliffe, N. Trendafilov, M. Uddin. A modified principal component technique based on the Lasso,
Journal of Computational and Graphical Statistics 2003;12:531?547.
[5] A. d?Aspremont, L. E. Ghaoui, M. I. Jordan, G. R. Lanckriet. A Direct Formulation for Sparse PCA Using
Semidefinite Programming, SIAM Review 2007:434?448.
[6] H. Zou, T. Hastie, R. Tibshirani. Sparse principal component analysis, Journal of computational and
graphical statistics 2006;15:265?286.
[7] H. Shen, J. Huang. Sparse principal component analysis via regularized low rank matrix approximation,
Journal of Multivariate Analysis 2008;99:1015?1034.
[8] A. d?Aspremont, F. Bach, L. Ghaoui. Optimal solutions for sparse principal component analysis, The
Journal of Machine Learning Research 2008;9:1269?1294.
[9] D. Witten, R. Tibshirani, T. Hastie. A penalized matrix decomposition, with applications to sparse principal
components and canonical correlation analysis, Biostatistics 2009;10:515?534.
[10] M. Journ?ee, Y. Nesterov, P. Richt?arik, R. Sepulchre. Generalized power method for sparse principal
component analysis, The Journal of Machine Learning Research 2010;11:517?553.
[11] X.-T. Yuan, T. Zhang. Truncated power method for sparse eigenvalue problems, The Journal of Machine
Learning Research 2013;14:899?925.
[12] Z. Ma. Sparse principal component analysis and iterative thresholding, The Annals of Statistics 2013;41.
[13] V. Q. Vu, J. Cho, J. Lei, K. Rohe. Fantope projection and selection: A near-optimal convex relaxation of
sparse PCA, in Advances in Neural Information Processing Systems:2670?2678 2013.
[14] A. Amini, M. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal
components, The Annals of Statistics 2009;37:2877?2921.
[15] V. Q. Vu, J. Lei. Minimax Rates of Estimation for Sparse PCA in High Dimensions, in International
Conference on Artificial Intelligence and Statistics:1278?1286 2012.
[16] A. Birnbaum, I. M. Johnstone, B. Nadler, D. Paul, others. Minimax bounds for sparse PCA with noisy
high-dimensional data, The Annals of Statistics 2013;41:1055?1084.
[17] V. Q. Vu, J. Lei. Minimax sparse principal subspace estimation in high dimensions, The Annals of Statistics
2013;41:2905?2947.
[18] T. T. Cai, Z. Ma, Y. Wu, others. Sparse PCA: Optimal rates and adaptive estimation, The Annals of Statistics
2013;41:3074?3110.
[19] Q. Berthet, P. Rigollet. Optimal detection of sparse principal components in high dimension, The Annals of
Statistics 2013;41:1780?1815.
[20] Q. Berthet, P. Rigollet. Complexity Theoretic Lower Bounds for Sparse Principal Component Detection, in
COLT:1046-1066 2013.
[21] J. Lei, V. Q. Vu. Sparsistency and Agnostic Inference in Sparse PCA, arXiv:1401.6978 2014.
[22] B. Moghaddam, Y. Weiss, S. Avidan. Spectral bounds for sparse PCA: Exact and greedy algorithms,
Advances in neural information processing systems 2006;18:915.
[23] K. Ball. An elementary introduction to modern convex geometry, Flavors of geometry 1997;31:1?58.
[24] L. Mackey. Deflation methods for sparse PCA, Advances in neural information processing systems
2009;21:1017?1024.
[25] Z. Wang, H. Liu, T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex
learning problems, The Annals of Statistics 2014;42:2164?2201.
[26] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein. Distributed optimization and statistical learning via the
R in Machine Learning 2011;3:1?122.
alternating direction method of multipliers, Foundations and Trends
[27] G. H. Golub, C. F. Van Loan. Matrix computations. Johns Hopkins University Press 2012.
[28] R. Arora, A. Cotter, K. Livescu, N. Srebro. Stochastic optimization for PCA and PLS, in Communication,
Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on:861?868IEEE 2012.
[29] Z. Wang, H. Lu, H. Liu. Nonconvex statistical optimization: Minimax-optimal Sparse PCA in polynomial
time, arXiv:1408.5352 2014.
[30] B. He, H. Liu, Z. Wang, X. Yuan. A Strictly Contractive Peaceman?Rachford Splitting Method for Convex
Programming, SIAM Journal on Optimization 2014;24:1011?1040.
[31] B. E. Engelhardt, M. Stephens. Analysis of population structure: a unifying framework and novel methods
4,697 | 5,253 | Consistency of weighted majority votes
Daniel Berend
Computer Science Department and Mathematics Department
Ben Gurion University
Beer Sheva, Israel
berend@cs.bgu.ac.il
Aryeh Kontorovich
Computer Science Department
Ben Gurion University
Beer Sheva, Israel
karyeh@cs.bgu.ac.il
Abstract
We revisit from a statistical learning perspective the classical decision-theoretic
problem of weighted expert voting. In particular, we examine the consistency
(both asymptotic and finitary) of the optimal Nitzan-Paroush weighted majority
and related rules. In the case of known expert competence levels, we give sharp
error estimates for the optimal rule. When the competence levels are unknown,
they must be empirically estimated. We provide frequentist and Bayesian analyses
for this situation. Some of our proof techniques are non-standard and may be
of independent interest. The bounds we derive are nearly optimal, and several
challenging open problems are posed.
1 Introduction
Imagine independently consulting a small set of medical experts for the purpose of reaching a binary
decision (e.g., whether to perform some operation). Each doctor has some "reputation", which can
be modeled as his probability of giving the right advice. The problem of weighting the input of
several experts arises in many situations and is of considerable theoretical and practical importance.
The rigorous study of majority vote has its roots in the work of Condorcet [1]. By the 70s, the field
of decision theory was actively exploring various voting rules (see [2] and the references therein).
A typical setting is as follows. An agent is tasked with predicting some random variable $Y \in \{\pm 1\}$
based on input $X_i \in \{\pm 1\}$ from each of $n$ experts. Each expert $X_i$ has a competence level $p_i \in
(0, 1)$, which is the probability of making a correct prediction: $P(X_i = Y) = p_i$. Two simplifying
assumptions are commonly made:
(i) Independence: The random variables $\{X_i : i \in [n]\}$ are mutually independent conditioned
on the truth $Y$.
(ii) Unbiased truth: $P(Y = +1) = P(Y = -1) = 1/2$.
We will discuss these assumptions below in greater detail; for now, let us just take them as given.
(Since the bias of $Y$ can be easily estimated from data, only the independence assumption is truly
restrictive.) A decision rule is a mapping $f : \{\pm 1\}^n \to \{\pm 1\}$ from the $n$ expert inputs to the agent's
final decision. Our quantity of interest throughout the paper will be the agent's probability of error,
$$P(f(X) \neq Y). \qquad (1)$$
A decision rule $f$ is optimal if it minimizes the quantity in (1) over all possible decision rules. It
was shown in [2] that, when Assumptions (i)-(ii) hold and the true competences $p_i$ are known, the
optimal decision rule is obtained by an appropriately weighted majority vote:
$$f^{\mathrm{OPT}}(x) = \mathrm{sign}\left(\sum_{i=1}^{n} w_i x_i\right), \qquad (2)$$
where the weights $w_i$ are given by
$$w_i = \log\frac{p_i}{1 - p_i}, \qquad i \in [n]. \qquad (3)$$
Thus, $w_i$ is the log-odds of expert $i$ being correct, and the voting rule in (2), also known as naive
Bayes [3], may be seen as a simple consequence of the Neyman-Pearson lemma [4].
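As a concrete illustration, here is a minimal NumPy sketch (ours, not from the paper) that simulates a committee with known competences and applies rule (2) with weights (3); the simulated error rate can later be compared against the bounds of Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def nitzan_paroush_vote(x, p):
    """Weighted majority vote (2) with log-odds weights (3).

    x : array of expert votes in {-1, +1}
    p : array of expert competences in (0, 1)
    """
    w = np.log(p / (1 - p))   # log-odds weights (3)
    return np.sign(w @ x)     # a tie (sum exactly 0) returns 0, counted as an error

# Each expert reports the truth Y with probability p_i, independently.
n, trials = 15, 100_000
p = rng.uniform(0.55, 0.9, size=n)
y = rng.choice([-1.0, 1.0], size=trials)
correct = rng.random((trials, n)) < p
x = np.where(correct, y[:, None], -y[:, None])   # (trials, n) vote matrix
votes = np.sign(x @ np.log(p / (1 - p)))
print("empirical error:", np.mean(votes != y))
```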
Main results. The formula in (2) raises immediate questions, which apparently have not previously been addressed. The first one has to do with the consistency of the Nitzan-Paroush optimal
rule: under what conditions does the probability of error decay to zero and at what rate? In Section 3,
we show that the probability of error is controlled by the committee potential $\Phi$, defined by
$$\Phi = \sum_{i=1}^{n} \left(p_i - \tfrac12\right) w_i = \sum_{i=1}^{n} \left(p_i - \tfrac12\right) \log\frac{p_i}{1 - p_i}. \qquad (4)$$
More precisely, we prove in Theorem 1 that $\log P(f^{\mathrm{OPT}}(X) \neq Y) \asymp -\Phi$, where $\asymp$ denotes equivalence up to universal multiplicative constants.
Another issue not addressed by the Nitzan-Paroush result is how to handle the case where the competences $p_i$ are not known exactly but rather estimated empirically by $\hat p_i$. We present two solutions
to this problem: a frequentist and a Bayesian one. As we show in Section 4, the frequentist approach
does not admit an optimal empirical decision rule. Instead, we analyze empirical decision rules in
various settings: high-confidence (i.e., $|\hat p_i - p_i| \ll 1$) vs. low-confidence, adaptive vs. nonadaptive.
The low-confidence regime requires no additional assumptions, but gives weaker guarantees (Theorem 5). In the high-confidence regime, the adaptive approach produces error estimates in terms of
the empirical $\hat p_i$'s (Theorem 7), while the nonadaptive approach yields a bound in terms of the unknown $p_i$'s, which still leads to useful asymptotics (Theorem 6). The Bayesian solution sidesteps the
various cases above, as it admits a simple, provably optimal empirical decision rule (Section 5). Unfortunately, we are unable to compute (or even nontrivially estimate) the probability of error induced
by this rule; this is posed as a challenging open problem.
2 Related work
Machine learning theory typically clusters weighted majority [5, 6] within the framework of online
algorithms; see [7] for a modern treatment. Since the online setting is considerably more adversarial
than ours, we obtain very different weighted majority rules and consistency guarantees. The weights
$w_i$ in (2) bear a striking similarity to the AdaBoost update rule [8, 9]. However, the latter assumes
weak learners with access to labeled examples, while in our setting the experts are "static". Still, we
do not rule out a possible deeper connection between the Nitzan-Paroush decision rule and boosting.
In what began as the influential Dawid-Skene model [10] and is now known as crowdsourcing, one
attempts to extract accurate predictions by pooling a large number of experts, typically without the
benefit of being able to test any given expert's competence level. Still, under mild assumptions it
is possible to efficiently recover the expert competences to a high accuracy and to aggregate them
effectively [11]. Error bounds for the oracle MAP rule were obtained in this model by [12] and
minimax rates were given in [13].
In a recent line of work [14, 15, 16] have developed a PAC-Bayesian theory for the majority vote
of simple classifiers. This approach facilitates data-dependent bounds and is even flexible enough
to capture some simple dependencies among the classifiers; though, again, the latter are learners
as opposed to our experts. Even more recently, experts with adversarial noise have been considered [17], and efficient algorithms for computing optimal expert weights (without error analysis)
were given [18]. More directly related to the present work are the papers of [19], which characterizes the consistency of the simple majority rule, and [20, 21, 22] which analyze various models of
dependence among the experts.
3 Known competences
In this section we assume that the expert competences $p_i$ are known and analyze the consistency of
the Nitzan-Paroush optimal decision rule (2). Our main result here is that the probability of error
$P(f^{\mathrm{OPT}}(X) \neq Y)$ is small if and only if the committee potential $\Phi$ is large.
Theorem 1. Suppose that the experts $X = (X_1, \ldots, X_n)$ satisfy Assumptions (i)-(ii) and
$f^{\mathrm{OPT}} : \{\pm 1\}^n \to \{\pm 1\}$ is the Nitzan-Paroush optimal decision rule. Then
(i) $P(f^{\mathrm{OPT}}(X) \neq Y) \le \exp\left(-\tfrac12 \Phi\right)$.
(ii) $P(f^{\mathrm{OPT}}(X) \neq Y) \ge \dfrac{3}{8\left[1 + \exp\left(2\Phi + 4\sqrt{\Phi}\right)\right]}$.
As we show in the full paper [27], the upper and lower bounds are both asymptotically tight. The
remainder of this section is devoted to proving Theorem 1.
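A quick Monte Carlo comparison of both bounds against the true error is easy to run (a sketch, ours; the specific competence distribution is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 15, 500_000
p = rng.uniform(0.6, 0.85, size=n)
w = np.log(p / (1 - p))
Phi = np.sum((p - 0.5) * w)        # committee potential (4)

# Condition on Y = +1; by symmetry this loses no generality.
x = np.where(rng.random((trials, n)) < p, 1.0, -1.0)
err = np.mean(x @ w <= 0)          # ties count as errors

print(f"Monte Carlo error : {err:.2e}")
print(f"upper bound (i)   : {np.exp(-Phi / 2):.2e}")
print(f"lower bound (ii)  : {3 / (8 * (1 + np.exp(2 * Phi + 4 * np.sqrt(Phi)))):.2e}")
```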
3.1 Proof of Theorem 1(i)
Define the $\{0, 1\}$-indicator variables
$$\xi_i = \mathbf{1}_{\{X_i = Y\}}, \qquad (5)$$
corresponding to the event that the $i$th expert is correct. A mistake $f^{\mathrm{OPT}}(X) \neq Y$ occurs precisely
when^1 the sum of the correct experts' weights fails to exceed half the total mass:
$$P(f^{\mathrm{OPT}}(X) \neq Y) = P\left(\sum_{i=1}^{n} w_i \xi_i \le \frac12 \sum_{i=1}^{n} w_i\right). \qquad (6)$$
Since $\mathbb{E}\xi_i = p_i$, we may rewrite the probability in (6) as
$$P\left(\sum_i w_i \xi_i - \mathbb{E}\left[\sum_i w_i \xi_i\right] \le -\sum_i \left(p_i - \tfrac12\right) w_i\right). \qquad (7)$$
A standard tool for estimating such sum deviation probabilities is Hoeffding's inequality. Applied
to (7), it yields the bound
$$P(f^{\mathrm{OPT}}(X) \neq Y) \le \exp\left(-\frac{2\left(\sum_i (p_i - \tfrac12) w_i\right)^2}{\sum_i w_i^2}\right), \qquad (8)$$
which is far too crude for our purposes. Indeed, consider a finite committee of highly competent
experts with $p_i$'s arbitrarily close to 1 and $X_1$ the most competent of all. Raising $X_1$'s competence
sufficiently far above his peers will cause both the numerator and the denominator in the exponent
to be dominated by $w_1^2$, making the right-hand side of (8) bounded away from zero. The inability of
Hoeffding's inequality to guarantee consistency even in such a felicitous setting is an instance of its
generally poor applicability to highly heterogeneous sums, a phenomenon explored in some depth in
[23]. Bernstein's and Bennett's inequalities suffer from a similar weakness (see ibid.). Fortunately,
an inequality of Kearns and Saul [24] is sufficiently sharp to yield the desired estimate: For all
$p \in [0, 1]$ and all $t \in \mathbb{R}$,
$$(1 - p)e^{-tp} + pe^{t(1-p)} \le \exp\left(\frac{1 - 2p}{4 \log((1 - p)/p)}\, t^2\right). \qquad (9)$$
Remark. The Kearns-Saul inequality (9) may be seen as a distribution-dependent refinement of
Hoeffding's (which bounds the left-hand side of (9) by $e^{t^2/8}$), and is not nearly as straightforward
to prove. An elementary rigorous proof is given in [25]. Following up, [26] gave a "soft" proof
based on transportation and information-theoretic techniques.
^1 Without loss of generality, ties are considered to be errors.
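As an aside, inequality (9) and its relation to Hoeffding's bound are easy to check numerically (a sketch, ours):

```python
import numpy as np

def kearns_saul_rhs(p, t):
    # Right-hand side of (9); the limit p -> 1/2 recovers exp(t^2 / 8).
    if abs(p - 0.5) < 1e-12:
        return np.exp(t**2 / 8)
    return np.exp((1 - 2 * p) / (4 * np.log((1 - p) / p)) * t**2)

def mgf_lhs(p, t):
    # Left-hand side of (9): E exp(t(Z - p)) for Z ~ Bernoulli(p).
    return (1 - p) * np.exp(-t * p) + p * np.exp(t * (1 - p))

for p in (0.1, 0.3, 0.5, 0.9, 0.99):
    for t in (-2.0, 0.5, 3.0):
        assert mgf_lhs(p, t) <= kearns_saul_rhs(p, t) + 1e-12       # (9) holds
        assert kearns_saul_rhs(p, t) <= np.exp(t**2 / 8) + 1e-12    # refines Hoeffding
print("Kearns-Saul holds and is at least as tight as Hoeffding on this grid.")
```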
Put $\eta_i = \xi_i - p_i$, substitute into (6), and apply Markov's inequality:
$$P(f^{\mathrm{OPT}}(X) \neq Y) = P\left(-\sum_i w_i \eta_i \ge \Phi\right) \le e^{-t\Phi}\, \mathbb{E} \exp\left(-t \sum_i w_i \eta_i\right). \qquad (10)$$
Now
$$\mathbb{E}\, e^{-t w_i \eta_i} = p_i e^{-(1-p_i) w_i t} + (1 - p_i) e^{p_i w_i t} \le \exp\left(\frac{-1 + 2p_i}{4 \log(p_i/(1 - p_i))}\, w_i^2 t^2\right) = \exp\left(\tfrac12 \left(p_i - \tfrac12\right) w_i t^2\right),$$
where the inequality follows from (9). By independence,
$$\mathbb{E} \exp\left(-t \sum_i w_i \eta_i\right) = \prod_i \mathbb{E}\, e^{-t w_i \eta_i} \le \exp\left(\tfrac12 t^2 \sum_i \left(p_i - \tfrac12\right) w_i\right) = \exp\left(\tfrac12 \Phi t^2\right), \qquad (11)$$
and hence $P(f^{\mathrm{OPT}}(X) \neq Y) \le \exp\left(\tfrac12 \Phi t^2 - \Phi t\right)$. Choosing $t = 1$ yields the bound in Theorem 1(i).
3.2 Proof of Theorem 1(ii)
Define the $\{\pm 1\}$-indicator variables
$$\zeta_i = 2 \cdot \mathbf{1}_{\{X_i = Y\}} - 1, \qquad (12)$$
corresponding to the event that the $i$th expert is correct, and put $q_i = 1 - p_i$. The shorthand $w \cdot \zeta =
\sum_{i=1}^{n} w_i \zeta_i$ will be convenient. We will need some simple lemmata, whose proofs are deferred to
the journal version [27].
Lemma 2.
$$P(f^{\mathrm{OPT}}(X) = Y) = \tfrac12 \sum_{\zeta \in \{\pm 1\}^n} \max\{P(\zeta), P(-\zeta)\}$$
and
$$P(f^{\mathrm{OPT}}(X) \neq Y) = \tfrac12 \sum_{\zeta \in \{\pm 1\}^n} \min\{P(\zeta), P(-\zeta)\},$$
where $P(\zeta) = \prod_{i: \zeta_i = 1} p_i \prod_{i: \zeta_i = -1} q_i$.
Lemma 3. Suppose that $s, s' \in (0, \infty)^m$ satisfy $\sum_{i=1}^{m} (s_i + s_i') \ge a$ and $R^{-1} \le s_i/s_i' \le R$,
$i \in [m]$, for some $R < \infty$. Then $\sum_{i=1}^{m} \min\{s_i, s_i'\} \ge a/(1 + R)$.
Lemma 4. Define the function $F : (0, 1) \to \mathbb{R}$ by
$$F(x) = \frac{x(1 - x) \log(x/(1 - x))}{2x - 1}.$$
Then $\sup_{0 < x < 1} F(x) = \tfrac12$.
Continuing with the main proof, observe that
$$\mathbb{E}[w \cdot \zeta] = \sum_{i=1}^{n} (p_i - q_i) w_i = 2\Phi \qquad (13)$$
and $\mathrm{Var}[w \cdot \zeta] = 4 \sum_{i=1}^{n} p_i q_i w_i^2$. By Lemma 4, $p_i q_i w_i^2 \le \tfrac12 (p_i - q_i) w_i$, and hence
$$\mathrm{Var}[w \cdot \zeta] \le 4\Phi. \qquad (14)$$
Define the segment $I \subset \mathbb{R}$ by
$$I = \left[2\Phi - 4\sqrt{\Phi},\ 2\Phi + 4\sqrt{\Phi}\right]. \qquad (15)$$
Chebyshev's inequality together with (13) and (14) implies that
$$P(w \cdot \zeta \in I) \ge \frac34. \qquad (16)$$
Consider an atom $\zeta \in \{\pm 1\}^n$ for which $w \cdot \zeta \in I$. The proof of Lemma 2 shows that
$$\frac{P(\zeta)}{P(-\zeta)} = \exp(w \cdot \zeta) \le \exp\left(2\Phi + 4\sqrt{\Phi}\right), \qquad (17)$$
where the inequality follows from (15). Lemma 2 further implies that
$$P(f^{\mathrm{OPT}}(X) \neq Y) \ge \tfrac12 \sum_{\zeta \in \{\pm 1\}^n : w \cdot \zeta \in I} \min\{P(\zeta), P(-\zeta)\} \ge \frac{3/4}{1 + \exp\left(2\Phi + 4\sqrt{\Phi}\right)},$$
where the second inequality follows from Lemma 3, (16) and (17). This completes the proof.
4 Unknown competences: frequentist
Our goal in this section is to obtain, insofar as possible, analogues of Theorem 1 for unknown expert
competences. When the $p_i$'s are unknown, they must be estimated empirically before any useful
weighted majority vote can be applied. There are various ways to model partial knowledge of expert
competences [28, 29]. Perhaps the simplest scenario for estimating the $p_i$'s is to assume that the
$i$th expert has been queried independently $m_i$ times, out of which he gave the correct prediction $k_i$
times. Taking the $\{m_i\}$ to be fixed, define the committee profile by $k = (k_1, \ldots, k_n)$; this is the
aggregate of the agent's empirical knowledge of the experts' performance. An empirical decision
rule $\hat f : (x, k) \mapsto \{\pm 1\}$ makes a final decision based on the expert inputs $x$ together with the
committee profile. Analogously to (1), the probability of a mistake is
$$P(\hat f(X, K) \neq Y). \qquad (18)$$
Note that now the committee profile is an additional source of randomness. Here we run into our first
difficulty: unlike the probability in (1), which is minimized by the Nitzan-Paroush rule, the agent
cannot formulate an optimal decision rule $\hat f$ in advance without knowing the $p_i$'s. This is because no
decision rule is optimal uniformly over the range of possible $p_i$'s. Our approach will be to consider
weighted majority decision rules of the form
$$\hat f(x, k) = \mathrm{sign}\left(\sum_{i=1}^{n} \hat w(k_i)\, x_i\right) \qquad (19)$$
and to analyze their consistency properties under two different regimes: low-confidence and high-confidence. These refer to the confidence intervals of the frequentist estimate of $p_i$, given by
$$\hat p_i = \frac{k_i}{m_i}. \qquad (20)$$
4.1 Low-confidence regime
In the low-confidence regime, the sample sizes $m_i$ may be as small as 1, and we define^2
$$\hat w(k_i) = \hat w_i^{\mathrm{LC}} := \hat p_i - \tfrac12, \qquad i \in [n], \qquad (21)$$
which induces the empirical decision rule $\hat f^{\mathrm{LC}}$. It remains to analyze $\hat f^{\mathrm{LC}}$'s probability of error.
Recall the definition of $\xi_i$ from (5) and observe that
$$\mathbb{E}\left[\hat w_i^{\mathrm{LC}} \xi_i\right] = \mathbb{E}\left[(\hat p_i - \tfrac12)\xi_i\right] = \left(p_i - \tfrac12\right)p_i, \qquad (22)$$
since $\hat p_i$ and $\xi_i$ are independent. As in (6), the probability of error (18) is
$$P\left(\sum_{i=1}^{n} \hat w_i^{\mathrm{LC}} \xi_i \le \frac12 \sum_{i=1}^{n} \hat w_i^{\mathrm{LC}}\right) = P\left(\sum_{i=1}^{n} Z_i \le 0\right), \qquad (23)$$
^2 For $m_i \min\{p_i, q_i\} \ll 1$, the estimated competences $\hat p_i$ may well take values in $\{0, 1\}$, in which case
$\log(\hat p_i/\hat q_i) = \pm\infty$. The rule in (21) is essentially a first-order Taylor approximation to $w(\cdot)$ about $p = \tfrac12$.
where $Z_i = \hat w_i^{\mathrm{LC}}(\xi_i - \tfrac12)$. Now the $\{Z_i\}$ are independent random variables, $\mathbb{E}Z_i = (p_i - \tfrac12)^2$ (by
(22)), and each $Z_i$ takes values in an interval of length $\tfrac12$. Hence, the standard Hoeffding bound
applies:
$$P(\hat f^{\mathrm{LC}}(X, K) \neq Y) \le \exp\left(-\frac{8}{n}\left(\sum_{i=1}^{n} \left(p_i - \tfrac12\right)^2\right)^2\right). \qquad (24)$$
We summarize these calculations in
Theorem 5. A sufficient condition for $P(\hat f^{\mathrm{LC}}(X, K) \neq Y) \to 0$ is $n^{-1/2} \sum_{i=1}^{n} \left(p_i - \tfrac12\right)^2 \to \infty$.
Several remarks are in order. First, notice that the error bound in (24) is stated in terms of the unknown $\{p_i\}$, providing the agent with large-committee asymptotics but giving no finitary information; this limitation is inherent in the low-confidence regime. Secondly, the condition in Theorem 5
is considerably more restrictive than the consistency condition $\Phi \to \infty$ implicit in Theorem 1. Indeed, the empirical decision rule $\hat f^{\mathrm{LC}}$ is incapable of exploiting a single highly competent expert in
the way that $f^{\mathrm{OPT}}$ from (2) does. Our analysis could be sharpened somewhat for moderate sample
sizes $\{m_i\}$ by using Bernstein's inequality to take advantage of the low variance of the $\hat p_i$'s. For
sufficiently large sample sizes, however, the high-confidence regime (discussed below) begins to
take over. Finally, there is one sense in which this case is "easier" to analyze than that of known
$\{p_i\}$: since the summands in (23) are bounded, Hoeffding's inequality gives nontrivial results and
there is no need for more advanced tools such as the Kearns-Saul inequality (9) (which is actually
inapplicable in this case).
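In code, the low-confidence rule (19) with weights (21) is a one-line change from the oracle rule (a sketch, ours):

```python
import numpy as np

def low_confidence_vote(x, k, m):
    """Empirical rule (19) with low-confidence weights (21).

    x : expert votes in {-1, +1}
    k : number of correct answers each expert gave historically
    m : number of times each expert was queried (may be as small as 1)
    """
    w_lc = k / m - 0.5     # (21): first-order proxy for the log-odds weights
    return np.sign(w_lc @ x)
```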
4.2 High-confidence regime
In the high-confidence regime, each estimated competence $\hat p_i$ is close to the true value $p_i$ with high
probability. To formalize this, fix some $0 < \delta < 1$, $0 < \varepsilon \le 5$, and put $q_i = 1 - p_i$, $\hat q_i = 1 - \hat p_i$.
We will set the empirical weights according to the "plug-in" Nitzan-Paroush rule
$$\hat w_i^{\mathrm{HC}} := \log\frac{\hat p_i}{\hat q_i}, \qquad i \in [n], \qquad (25)$$
which induces the empirical decision rule $\hat f^{\mathrm{HC}}$ and raises immediate concerns about $\hat w_i^{\mathrm{HC}} = \pm\infty$. We
give two kinds of bounds on $P(\hat f^{\mathrm{HC}} \neq Y)$: nonadaptive and adaptive. In the nonadaptive analysis, we
show that for $m_i \min\{p_i, q_i\} \gg 1$, with high probability $|w_i - \hat w_i^{\mathrm{HC}}| \ll 1$, and thus a "perturbed"
version of Theorem 1(i) holds (and in particular, $\hat w_i^{\mathrm{HC}}$ will be finite with high probability). In the
adaptive analysis, we allow $\hat w_i^{\mathrm{HC}}$ to take on infinite values^3 and show (perhaps surprisingly) that this
decision rule also asymptotically achieves the rate of Theorem 1(i).
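The plug-in rule itself is again a small variation in code (a sketch, ours; the infinite-weight convention follows footnote 3):

```python
import numpy as np

def high_confidence_vote(x, k, m):
    """Empirical rule (19) with plug-in weights (25).

    Infinite weights (k_i = 0 or k_i = m_i) are allowed; an indeterminate
    sum (inf - inf) or a tie is counted as an error, per footnote 3.
    """
    with np.errstate(divide="ignore"):
        w_hc = np.log(k / m) - np.log(1 - k / m)   # +/- inf at the endpoints
    s = w_hc @ x
    if np.isnan(s) or s == 0:   # inf - inf produces NaN; treat as an error
        return 0.0
    return np.sign(s)
```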
Nonadaptive analysis. The following result captures our analysis of the nonadaptive agent:
Theorem 6. Let $0 < \delta < 1$ and $0 < \varepsilon < \min\{5, 2\Phi/n\}$. If
$$m_i \min\{p_i, q_i\} \ge 3\left(\frac{\sqrt{4\varepsilon + 1} - 1}{4}\right)^{-2} \log\frac{4n}{\delta}, \qquad i \in [n], \qquad (26)$$
then
$$P\left(\hat f^{\mathrm{HC}}(X, K) \neq Y\right) \le \delta + \exp\left(-\frac{(2\Phi - \varepsilon n)^2}{8\Phi}\right). \qquad (27)$$
Remark. For fixed $\{p_i\}$ and $\min_{i \in [n]} m_i \to \infty$, we may take $\delta$ and $\varepsilon$ arbitrarily small, and in
this limiting case, the bound of Theorem 1(i) is recovered.
^3 When the decision rule is faced with evaluating sums involving $\infty - \infty$, we automatically count this as
an error.
Adaptive analysis. Theorem 6 has the drawback of being nonadaptive, in that its assumptions
(26) and conclusions (27) depend on the unknown $\{p_i\}$ and hence cannot be evaluated by the agent
(the bound in (24) is also nonadaptive^4). In the adaptive (fully empirical) approach, all results are
stated in terms of empirically observed quantities:
Theorem 7. Choose any^5 $\delta \ge \sum_{i=1}^{n} m_i^{-1/2}$ and let $R$ be the event where
$\exp\left(-\tfrac12 \sum_{i=1}^{n} (\hat p_i - \tfrac12)\hat w_i^{\mathrm{HC}}\right) \le \tfrac{\delta}{2}$. Then $P\left(R \wedge \hat f^{\mathrm{HC}}(X, K) \neq Y\right) \le \delta$.
Remark 1. Our interpretation for Theorem 7 is as follows. The agent observes the committee profile
$K$, which determines the $\{\hat p_i, \hat w_i^{\mathrm{HC}}\}$, and then checks whether the event $R$ has occurred. If not, the
adaptive agent refrains from making a decision (and may choose to fall back on the low-confidence
approach described previously). If $R$ does hold, however, the agent predicts $Y$ according to $\hat f^{\mathrm{HC}}$.
Observe that the event $R$ will only occur if the empirical committee potential $\hat\Phi = \sum_{i=1}^{n} (\hat p_i - \tfrac12)\hat w_i^{\mathrm{HC}}$
is sufficiently large, i.e., if enough of the experts' competences are sufficiently far from $\tfrac12$. But if
this is not the case, little is lost by refraining from a high-confidence decision and defaulting to a
low-confidence one, since near $\tfrac12$, the two decision procedures are very similar.
As explained above, there does not exist a nontrivial a priori upper bound on $P(\hat f^{\mathrm{HC}}(X, K) \neq Y)$
absent any knowledge of the $p_i$'s. Instead, Theorem 7 bounds the probability of the agent being
"fooled" by an unrepresentative committee profile.^6 Note that we have done nothing to prevent
$\hat w_i^{\mathrm{HC}} = \pm\infty$, and this may indeed happen. Intuitively, there are two reasons for infinite $\hat w_i^{\mathrm{HC}}$: (a)
noisy $\hat p_i$ due to $m_i$ being too small, or (b) the $i$th expert is actually highly (in)competent, which
causes $\hat p_i \in \{0, 1\}$ to be likely even for large $m_i$. The $1/\sqrt{m_i}$ term in the bound insures against
case (a), while in case (b), choosing infinite $\hat w_i^{\mathrm{HC}}$ causes no harm (as we show in the proof).
Proof of Theorem 7. We will write the probability and expectation operators with subscripts (such
as $K$) to indicate the random variable(s) being summed over. Thus,
$$P_{K,X,Y}\left(R \wedge \hat f^{\mathrm{HC}}(X, K) \neq Y\right) = P_{K,\zeta}\left(R \wedge \hat w^{\mathrm{HC}} \cdot \zeta \le 0\right) = \mathbb{E}_K\left[\mathbf{1}_R \cdot P_\zeta\left(\hat w^{\mathrm{HC}} \cdot \zeta \le 0 \mid K\right)\right].$$
Recall that the random variable $\zeta \in \{\pm 1\}^n$, with probability mass function
$P(\zeta) = \prod_{i: \zeta_i = 1} p_i \prod_{i: \zeta_i = -1} q_i$, is independent of $K$, and hence
$$P_\zeta\left(\hat w^{\mathrm{HC}} \cdot \zeta \le 0 \mid K\right) = P_\zeta\left(\hat w^{\mathrm{HC}} \cdot \zeta \le 0\right). \qquad (28)$$
Define the random variable $\tilde\zeta \in \{\pm 1\}^n$ (conditioned on $K$) by the probability mass function
$P(\tilde\zeta) = \prod_{i: \zeta_i = 1} \hat p_i \prod_{i: \zeta_i = -1} \hat q_i$, and the set $A \subset \{\pm 1\}^n$ by $A = \{x : \hat w^{\mathrm{HC}} \cdot x \le 0\}$. Now
$$P_\zeta\left(\hat w^{\mathrm{HC}} \cdot \zeta \le 0\right) - P_{\tilde\zeta}\left(\hat w^{\mathrm{HC}} \cdot \tilde\zeta \le 0\right) = P_\zeta(A) - P_{\tilde\zeta}(A) \le \max_{A' \subset \{\pm 1\}^n} |P_\zeta(A') - P_{\tilde\zeta}(A')|$$
$$= \|P_\zeta - P_{\tilde\zeta}\|_{\mathrm{TV}} \le \sum_{i=1}^{n} |p_i - \hat p_i| =: M,$$
where the last inequality follows from a standard tensorization property of the total variation
norm $\|\cdot\|_{\mathrm{TV}}$, see e.g. [33, Lemma 2.2]. By Theorem 1(i), we have $P_{\tilde\zeta}\left(\hat w^{\mathrm{HC}} \cdot \tilde\zeta \le 0\right) \le
\exp\left(-\tfrac12 \sum_{i=1}^{n} (\hat p_i - \tfrac12)\hat w_i^{\mathrm{HC}}\right)$, and hence $P_\zeta\left(\hat w^{\mathrm{HC}} \cdot \zeta \le 0\right) \le M + \exp\left(-\tfrac12 \sum_{i=1}^{n} (\hat p_i - \tfrac12)\hat w_i^{\mathrm{HC}}\right)$.
Invoking (28), we substitute the right-hand side above into (28) to obtain
$$P_{K,X,Y}\left(R \wedge \hat f^{\mathrm{HC}}(X, K) \neq Y\right) \le \mathbb{E}_K\left[\mathbf{1}_R \cdot \left(M + \exp\left(-\tfrac12 \sum_{i=1}^{n} (\hat p_i - \tfrac12)\hat w_i^{\mathrm{HC}}\right)\right)\right]$$
$$\le \mathbb{E}_K[M] + \mathbb{E}_K\left[\mathbf{1}_R \exp\left(-\tfrac12 \sum_{i=1}^{n} (\hat p_i - \tfrac12)\hat w_i^{\mathrm{HC}}\right)\right].$$
By the definition of $R$, the second term on the last right-hand side is upper-bounded by $\delta/2$. To
estimate $M$, we invoke a simple mean absolute deviation bound (cf. [34]):
$$\mathbb{E}_K|p_i - \hat p_i| \le \sqrt{\frac{p_i(1 - p_i)}{m_i}} \le \frac{1}{2\sqrt{m_i}},$$
which finishes the proof.
Remark. The improvement mentioned in Footnote 5 is achieved via a refinement of the bound
$\|P_\zeta - P_{\tilde\zeta}\|_{\mathrm{TV}} \le \sum_{i=1}^{n} |p_i - \hat p_i|$ to $\|P_\zeta - P_{\tilde\zeta}\|_{\mathrm{TV}} \le \alpha(\{|p_i - \hat p_i| : i \in [n]\})$, where $\alpha(\cdot)$ is the function defined in [33, Lemma 4.2].
Open problem. As argued in Remark 1, Theorem 7 achieves the optimal asymptotic rate in $\{p_i\}$.
Can the dependence on $\{m_i\}$ be improved, perhaps through a better choice of $\hat w^{\mathrm{HC}}$?
^4 The term oracle was suggested by a referee for this setting.
^5 Actually, as the proof will show, we may take $\delta$ to be a smaller value, but with a more complex dependence
on $\{m_i\}$, which simplifies to $2[1 - (1 - (2\sqrt{m})^{-1})^n]$ for $m_i \equiv m$.
^6 These adaptive bounds are similar in spirit to empirical Bernstein methods, [30, 31, 32], where the agent's
confidence depends on the empirical variance.
5 Unknown competences: Bayesian
A shortcoming of Theorem 7 is that when condition $R$ fails, the agent is left with no estimate of the
error probability. An alternative (and in some sense cleaner) approach to handling unknown expert
competences $p_i$ is to assume a known prior distribution over the competence levels $p_i$. The natural
choice of prior for a Bernoulli parameter is the Beta distribution, namely $p_i \sim \mathrm{Beta}(\alpha_i, \beta_i)$ with
density $\frac{p_i^{\alpha_i - 1} q_i^{\beta_i - 1}}{B(\alpha_i, \beta_i)}$, where $\alpha_i, \beta_i > 0$, $q_i = 1 - p_i$ and $B(x, y) = \Gamma(x)\Gamma(y)/\Gamma(x + y)$. Our full
probabilistic model is as follows. Each of the $n$ expert competences $p_i$ is drawn independently from
a Beta distribution with known parameters $\alpha_i, \beta_i$. Then the $i$th expert, $i \in [n]$, is queried independently $m_i$ times, with $k_i$ correct predictions and $m_i - k_i$ incorrect ones. As before, $K = (k_1, \ldots, k_n)$
is the (random) committee profile. Absent direct knowledge of the $p_i$'s, the agent relies on an empirical decision rule $\hat f : (x, k) \mapsto \{\pm 1\}$ to produce a final decision based on the expert inputs $x$ together
with the committee profile $k$. A decision rule $\hat f^{\mathrm{Ba}}$ is Bayes-optimal if it minimizes $P(\hat f(X, K) \neq Y)$,
which is formally identical to (18) but semantically there is a difference: the former is over the $p_i$
in addition to $(X, Y, K)$. Unlike the frequentist approach, where no optimal empirical decision rule
was possible, the Bayesian approach readily admits one: $\hat f^{\mathrm{Ba}}(x, k) = \mathrm{sign}\left(\sum_{i=1}^{n} \hat w_i^{\mathrm{Ba}} x_i\right)$, where
$$\hat w_i^{\mathrm{Ba}} = \log\frac{\alpha_i + k_i}{\beta_i + m_i - k_i}. \qquad (29)$$
Notice that for $0 < p_i < 1$, we have $\hat w_i^{\mathrm{Ba}} \to w_i$ as $m_i \to \infty$, almost surely, both in the frequentist and
the Bayesian interpretations. Unfortunately, although $P(\hat f^{\mathrm{Ba}}(X, K) \neq Y) = P(\hat w^{\mathrm{Ba}} \cdot \zeta \le 0)$ is
a deterministic function of $\{\alpha_i, \beta_i, m_i\}$, we are unable to compute it at this point, or even give a
non-trivial bound. The main source of difficulty is the coupling between $\hat w^{\mathrm{Ba}}$ and $\zeta$.
Open problem. Give a non-trivial estimate for $P(\hat f^{\mathrm{Ba}}(X, K) \neq Y)$.
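Though its error probability resists analysis, the Bayes-optimal rule itself is simple to implement (a sketch, ours):

```python
import numpy as np

def bayes_vote(x, k, m, alpha, beta):
    """Bayes-optimal empirical rule with weights (29).

    alpha, beta : Beta prior parameters for each expert's competence.
    The weights are always finite for alpha, beta > 0; as m_i grows they
    converge to the oracle log-odds weights (3).
    """
    w_ba = np.log((alpha + k) / (beta + m - k))
    return np.sign(w_ba @ x)
```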
6 Discussion
The classic and seemingly well-understood problem of the consistency of weighted majority votes
continues to reveal untapped depth and suggest challenging unresolved questions. We hope that the
results and open problems presented here will stimulate future research.
References
[1] J.A.N. de Caritat, marquis de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions
rendues à la pluralité des voix. AMS Chelsea Publishing Series. Chelsea Publishing Company, 1785.
[2] S. Nitzan, J. Paroush. Optimal decision rules in uncertain dichotomous choice situations. International
Economic Review, 23(2):289-297, 1982.
[3] T. Hastie, R. Tibshirani, J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and
Prediction. 2009.
[4] J. Neyman, E. S. Pearson. On the problem of the most efficient tests of statistical hypotheses. Phil. Trans.
Royal Soc. A: Math., Phys. Eng. Sci., 231(694-706):289-337, 1933.
[5] N. Littlestone, M. K. Warmuth. The weighted majority algorithm. In FOCS, 1989.
[6] N. Littlestone, M. K. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212-261, 1994.
[7] N. Cesa-Bianchi, G. Lugosi. Prediction, learning, and games. 2006.
[8] Y. Freund, R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to
boosting. J. Comput. Syst. Sci., 55(1):119-139, 1997.
[9] R. E. Schapire, Y. Freund. Boosting. Foundations and algorithms. 2012.
[10] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM
algorithm. Applied Statistics, 28(1):20-28, 1979.
[11] F. Parisi, F. Strino, B. Nadler, Y. Kluger. Ranking and combining multiple predictors without labeled data.
Proc. Nat. Acad. Sci., 2014+.
[12] H. Li, B. Yu, D. Zhou. Error rate bounds in crowdsourcing models. CoRR, abs/1307.2674, 2013.
[13] C. Gao, D. Zhou. Minimax Optimal Convergence Rates for Estimating Ground Truth from Crowdsourced
Labels (arXiv:1310.5764), 2014.
[14] A. Lacasse, F. Laviolette, M. Marchand, P. Germain, N. Usunier. PAC-Bayes bounds for the risk of the
majority vote and the variance of the Gibbs classifier. In NIPS, 2006.
[15] F. Laviolette, M. Marchand. PAC-Bayes risk bounds for stochastic averages and majority votes of
sample-compressed classifiers. JMLR, 8:1461-1487, 2007.
[16] J.-F. Roy, F. Laviolette, M. Marchand. From PAC-Bayes bounds to quadratic programs for majority votes.
In ICML, 2011.
[17] Y. Mansour, A. Rubinstein, M. Tennenholtz. Robust aggregation of experts signals, preprint 2013.
[18] E. Eban, E. Mezuman, A. Globerson. Discrete Chebyshev classifiers. In ICML (2), 2014.
[19] D. Berend, J. Paroush. When is Condorcet's jury theorem valid? Soc. Choice Welfare, 15(4):481-488,
1998.
[20] P. J. Boland, F. Proschan, Y. L. Tong. Modelling dependence in simple and indirect majority systems. J.
Appl. Probab., 26(1):81-88, 1989.
[21] D. Berend, L. Sapir. Monotonicity in Condorcet's jury theorem with dependent voters. Social Choice and
Welfare, 28(3):507-528, 2007.
[22] D. P. Helmbold and P. M. Long. On the necessity of irrelevant variables. JMLR, 13:2145-2170, 2012.
[23] D. A. McAllester, L. E. Ortiz. Concentration inequalities for the missing mass and for histogram rule
error. JMLR, 4:895-911, 2003.
[24] M. J. Kearns, L. K. Saul. Large deviation methods for approximate probabilistic inference. In UAI, 1998.
[25] D. Berend, A. Kontorovich. On the concentration of the missing mass. Electron. Commun. Probab.,
18(3), 1-7, 2013.
[26] M. Raginsky, I. Sason. Concentration of measure inequalities in information theory, communications and
coding. Foundations and Trends in Communications and Information Theory, 10(1-2):1-247, 2013.
[27] D. Berend, A. Kontorovich. A finite-sample analysis of the naive Bayes classifier. Preprint, 2014.
[28] E. Baharad, J. Goldberger, M. Koppel, S. Nitzan. Distilling the wisdom of crowds: weighted aggregation
of decisions on multiple issues. Autonomous Agents and Multi-Agent Systems, 22(1):31-42, 2011.
[29] E. Baharad, J. Goldberger, M. Koppel, S. Nitzan. Beyond Condorcet: optimal aggregation rules using
voting records. Theory and Decision, 72(1):113-130, 2012.
[30] J.-Y. Audibert, R. Munos, C. Szepesvári. Tuning bandit algorithms in stochastic environments. In ALT,
2007.
[31] V. Mnih, C. Szepesvári, J.-Y. Audibert. Empirical Bernstein stopping. In ICML, 2008.
[32] A. Maurer, M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, 2009.
[33] A. Kontorovich. Obtaining measure concentration from Markov contraction. Markov Proc. Rel. Fields,
4:613-638, 2012.
[34] D. Berend, A. Kontorovich. A sharp estimate of the binomial mean absolute deviation with applications.
Statistics & Probability Letters, 83(4):1254-1259, 2013.
4,698 | 5,254 | Quantized Estimation of Gaussian Sequence Models
in Euclidean Balls
Yuancheng Zhu
John Lafferty
Department of Statistics
University of Chicago
Abstract
A central result in statistical theory is Pinsker's theorem, which characterizes the
minimax rate in the normal means model of nonparametric estimation. In this
paper, we present an extension to Pinsker's theorem where estimation is carried
out under storage or communication constraints. In particular, we place limits
on the number of bits used to encode an estimator, and analyze the excess risk
in terms of this constraint, the signal size, and the noise level. We give sharp
upper and lower bounds for the case of a Euclidean ball, which establishes the
Pareto-optimal minimax tradeoff between storage and risk in this setting.
1 Introduction
Classical statistical theory studies the rate at which the error in an estimation problem decreases as
the sample size increases. Methodology for a particular problem is developed to make estimation
efficient, and lower bounds establish how quickly the error can decrease in principle. Asymptotically
matching upper and lower bounds together yield the minimax rate of convergence
$$R_n(\mathcal{F}) = \inf_{\hat f} \sup_{f \in \mathcal{F}} R(\hat f, f).$$
This is the worst-case error in estimating an element of a model class $\mathcal{F}$, where $R(\hat f, f)$ is the risk
or expected loss, and $\hat f$ is an estimator constructed on a data sample of size $n$. The corresponding
sample complexity of the estimation problem is $n(\epsilon, \mathcal{F}) = \min\{n : R_n(\mathcal{F}) < \epsilon\}$.
In the classical setting, the infimum is over all estimators. In contemporary settings, it is increasingly
of interest to understand how error depends on computation. For instance, when the data are high
dimensional and the sample size is large, constructing the estimator using standard methods may
be computationally prohibitive. The use of heuristics and approximation algorithms may make
computation more efficient, but it is important to understand the loss in statistical efficiency that this
incurs. In the minimax framework, this can be formulated by placing computational constraints on
the estimator:
$$R_n(\mathcal{F}, B_n) = \inf_{\hat f : C(\hat f) \le B_n} \sup_{f \in \mathcal{F}} R(\hat f, f).$$
Here $C(\hat f) \le B_n$ indicates that the computation $C(\hat f)$ used to construct $\hat f$ is required to fall within
a "computational budget" $B_n$. Minimax lower bounds on the risk as a function of the computational budget thus determine a feasible region for computation-constrained estimation, and a Pareto-optimal tradeoff for error versus computation.
One important measure of computation is the number of floating point operations, or the running
time of an algorithm. Chandrasekaran and Jordan [3] have studied upper bounds for statistical
estimation with computational constraints of this form in the normal means model. However, useful
lower bounds are elusive. This is due to the difficult nature of establishing tight lower bounds for
this model of computation in the polynomial hierarchy, apart from any statistical concerns. Another
important measure of computation is storage, or the space used by a procedure. In particular, we
may wish to limit the number of bits used to represent our estimator $\hat f$. The question then becomes,
how does the excess risk depend on the budget $B_n$ imposed on the number of bits $C(\hat f)$ used to
encode the estimator?
This problem is naturally motivated by certain applications. For instance, the Kepler telescope
collects flux data for approximately 150,000 stars [6]. The central statistical task is to estimate
the lightcurve of each star nonparametrically, in order to denoise and detect planet transits. If this
estimation is done on board the telescope, the estimated function values may need to be sent back
to earth for further analysis. To limit communication costs, the estimates can be quantized. The
fundamental question is, what is lost in terms of statistical risk in quantizing the estimates? Or, in
a cloud computing environment (such as Amazon EC2), a large number of nonparametric estimates
might be constructed over a cluster of compute nodes and then stored (for example in Amazon S3)
for later analysis. To limit the storage costs, which could dominate the compute costs in many
scenarios, it is of interest to quantize the estimates. How much is lost in terms of risk, in principle,
by using different levels of quantization?
With such applications as motivation, we address in this paper the problem of risk-storage tradeoffs
in the normal means model of nonparametric estimation. The normal means model is a centerpiece
of nonparametric estimation. It arises naturally when representing an estimator in terms of an orthogonal basis [8, 11]. Our main result is a sharp characterization of the Pareto-optimal tradeoff
curve for quantized estimation of a normal means vector, in the minimax sense. We consider the
case of a Euclidean ball of unknown radius in $\mathbb{R}^n$. This case exhibits many of the key technical challenges that arise in nonparametric estimation over richer spaces, including the Stein phenomenon
and the problem of adaptivity.
As will be apparent to the reader, the problem we consider is intimately related to classical rate
distortion theory [7]. Indeed, our results require a marriage of minimax theory and rate distortion
ideas. We thus build on the fundamental connection between function estimation and lossy source
coding that was elucidated in Donoho's 1998 Wald Lectures [4]. This connection can also be used
to advantage for practical estimation schemes. As we discuss further below, recent advances on
computationally efficient, near-optimal lossy compression using sparse regression algorithms [12]
can perhaps be leveraged for quantized nonparametric estimation.
In the following section, we present relevant background and give a detailed statement of our results.
In Section 3 we sketch a proof of our main result on the excess risk for the Euclidean ball case.
Section 4 presents simulations to illustrate our theoretical analyses. Section 5 discusses related
work, and outlines future directions that our results suggest.
2 Background and problem formulation
In this section we briefly review the essential elements of rate-distortion theory and minimax theory,
to establish notation. We then state our main result, which bridges these classical theories.
In the rate-distortion setting we have a source that produces a sequence $X^n = (X_1, X_2, \ldots, X_n)$,
each component of which is independent and identically distributed as $N(0, \sigma^2)$. The goal is to
transmit a realization from this sequence of random variables using a fixed number of bits, in such
a way that results in the minimal expected distortion with respect to the original data $X^n$. Suppose
that we are allowed to use a total budget of $nB$ bits, so that the average number of bits per variable
is $B$, which is referred to as the rate. To transmit or store the data, the encoder describes the source
sequence $X^n$ by an index $\phi_n(X^n)$, where
$$\phi_n : \mathbb{R}^n \to \{1, 2, \ldots, 2^{nB}\} \equiv \mathcal{C}(B)$$
is the encoding function. The $nB$-bit index is then transmitted or stored without loss. A decoder,
when receiving or retrieving the data, represents $X^n$ by an estimate $\hat X^n$ based on the index using a
decoding function
$$\psi_n : \{1, 2, \ldots, 2^{nB}\} \to \mathbb{R}^n.$$
The image of the decoding function $\psi_n$ is called the codebook, which is a discrete set in $\mathbb{R}^n$ with
cardinality no larger than $2^{nB}$. The process is illustrated in Figure 1, and variously referred to as
[Figure 1: block diagrams. Top: $X^n \to$ Encoder $\phi_n \to \phi_n(X^n) \in \mathcal{C}(B) \to$ Decoder $\psi_n \to \hat X^n = \psi_n(\phi_n(X^n))$. Bottom: the same pipeline producing the quantized estimate $\tilde\theta^n = \psi_n(\phi_n(X^n))$.]
Figure 1: Encoding and decoding process for lossy compression (top) and quantized estimation
(bottom). For quantized estimation, the model (mean vector) $\theta^n$ is deterministic, not random.
source coding, lossy compression, or quantization. We call the pair of encoding and decoding functions $Q_n = (\phi_n, \psi_n)$ an $(n, B)$-rate distortion code. We will also use $Q_n$ to denote the composition
of the two functions, i.e., $Q_n(\cdot) = \psi_n(\phi_n(\cdot))$.
A distortion measure, or a loss function, $d : \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$ is used to evaluate the performance of
the above coding and transmission process. In this paper, we will use the squared loss $d(X_i, \hat X_i) =
(X_i - \hat X_i)^2$. The distortion between two sequences $X^n$ and $\hat X^n$ is then defined by $d_n(X^n, \hat X^n) =
\frac{1}{n}\sum_{i=1}^{n} (X_i - \hat X_i)^2$, the average of the per observation distortions. We drop the subscript $n$ in $d$ when
it is clear from the context. The distortion, or risk, for an $(n, B)$-rate distortion code $Q_n$ is defined as
the expected loss $\mathbb{E}\, d(X^n, Q_n(X^n))$. Denoting by $\mathcal{Q}_{n,B}$ the set of all $(n, B)$-rate distortion codes,
the distortion rate function is defined as
$$R(B, \sigma) = \liminf_{n \to \infty} \inf_{Q_n \in \mathcal{Q}_{n,B}} \mathbb{E}\, d(X^n, Q_n(X^n)).$$
This distortion rate function depends on the rate $B$ as well as the source distribution. For the i.i.d.
$N(0, \sigma^2)$ source, according to the well-known rate distortion theorem [7],
$$R(B, \sigma) = \sigma^2 2^{-2B}.$$
When $B$ is zero, meaning no information gets encoded at all, this bound becomes $\sigma^2$, which is the
expected loss when each random variable is represented by its mean. As $B$ approaches infinity, the
distortion goes to zero.
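To get a feel for this benchmark, a small simulation (ours, not from the paper) compares the distortion-rate bound at $B = 1$ with what a naive one-bit-per-coordinate scalar quantizer achieves; the gap illustrates that the bound requires block coding:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 1.0, 1_000_000
x = rng.normal(0.0, sigma, size=n)

# Shannon distortion-rate function for an i.i.d. N(0, sigma^2) source.
def gaussian_distortion_rate(B, sigma):
    return sigma**2 * 2.0 ** (-2 * B)

# Naive 1-bit scalar quantizer: x -> sign(x) * E|X|, the MSE-optimal level.
xhat = np.sign(x) * sigma * np.sqrt(2 / np.pi)
print("1-bit scalar quantizer distortion:", np.mean((x - xhat) ** 2))  # ~0.36
print("distortion-rate bound at B = 1  :", gaussian_distortion_rate(1, sigma))  # 0.25
```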
The previous discussion assumes the source random variables are independent and follow a common
distribution $N(0, \sigma^2)$. The goal is to minimize the expected distortion in the reconstruction of $X^n$
after transmitting or storing the data under a communication constraint. Now suppose that
$$X_i \stackrel{\mathrm{ind.}}{\sim} N(\theta_i, \sigma^2) \quad \text{for } i = 1, 2, \ldots, n.$$
We assume the variance $\sigma^2$ is known and the means $\theta^n = (\theta_1, \ldots, \theta_n)$ are unknown. Suppose, furthermore, that instead of trying to minimize the recovery distortion $d(X^n, \hat X^n)$, we want to estimate
the means with a risk as small as possible, but again using a budget of $B$ bits per index.
Without the communication constraint, this problem has been very well studied [10, 9]. Let
$\hat\theta(X^n) \equiv \hat\theta^n = (\hat\theta_1, \ldots, \hat\theta_n)$ denote an estimator of the true mean $\theta^n$. For a parameter space
$\Theta_n \subset \mathbb{R}^n$, the minimax risk over $\Theta_n$ is defined as
$$\inf_{\hat\theta^n} \sup_{\theta^n \in \Theta_n} \mathbb{E}\, d(\theta^n, \hat\theta^n) = \inf_{\hat\theta^n} \sup_{\theta^n \in \Theta_n} \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n} (\theta_i - \hat\theta_i)^2\right].$$
For the $L_2$ ball of radius $c$,
$$\Theta_n(c) = \left\{(\theta_1, \ldots, \theta_n) : \frac{1}{n}\sum_{i=1}^{n} \theta_i^2 \le c^2\right\}, \qquad (1)$$
Pinsker's theorem gives the exact, limiting form of the minimax risk
$$\liminf_{n \to \infty} \inf_{\hat\theta^n} \sup_{\theta^n \in \Theta_n(c)} \mathbb{E}\, d(\theta^n, \hat\theta^n) = \frac{\sigma^2 c^2}{\sigma^2 + c^2}.$$
To impose a communication constraint, we incorporate a variant of the source coding scheme described above into this minimax framework of estimation. Define an $(n, B)$-rate estimation code
[Figure 2: risk $R$ versus bits per symbol $B$; five decreasing curves, one per signal size.]
Figure 2. Our result establishes the Pareto-optimal tradeoff in the nonparametric normal means problem for risk versus number of bits:
$$R(\sigma^2, c^2, B) = \frac{\sigma^2 c^2}{\sigma^2 + c^2} + \frac{c^4}{\sigma^2 + c^2} 2^{-2B}.$$
Curves for five signal sizes are shown, $c^2 = 2, 3, 4, 5, 6$. The noise level is $\sigma^2 = 1$. With zero
bits, the rate is $c^2$, the highest point on the risk curve. The rate for large $B$ approaches the Pinsker
bound $\sigma^2 c^2/(\sigma^2 + c^2)$.
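The tradeoff curves in Figure 2 are straightforward to reproduce numerically (a sketch, ours):

```python
import numpy as np

def quantized_minimax_risk(B, sigma2, c2):
    """Risk-storage tradeoff: the Pinsker term plus the quantization excess."""
    return sigma2 * c2 / (sigma2 + c2) + c2**2 / (sigma2 + c2) * 2.0 ** (-2 * B)

B = np.linspace(0, 5, 101)
for c2 in (2, 3, 4, 5, 6):
    r = quantized_minimax_risk(B, 1.0, c2)
    print(f"c^2 = {c2}: risk at B=0 is {r[0]:.3f}, "
          f"Pinsker limit {1.0 * c2 / (1.0 + c2):.3f}")
```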
$M_n = (\phi_n, \psi_n)$, as a pair of encoding and decoding functions, as before. The encoding function
$\phi_n : \mathbb{R}^n \to \{1, 2, \ldots, 2^{nB}\}$ is a mapping from observations $X^n$ to an index set. The decoding
function is a mapping from indices to models $\tilde\theta^n \in \mathbb{R}^n$. We write the composition of the encoder
and decoder as $M_n(X^n) = \psi_n(\phi_n(X^n)) = \tilde\theta^n$, which we call a quantized estimator. Denoting by
$\mathcal{M}_{n,B}$ the set of all $(n, B)$-rate estimation codes, we then define the quantized minimax risk as
$$R_n(B, \sigma, \Theta_n) = \inf_{M_n \in \mathcal{M}_{n,B}} \sup_{\theta^n \in \Theta_n} \mathbb{E}\, d(\theta^n, M_n(X^n)).$$
We will focus on the case where our parameter space is the $L_2$ ball defined in (1), and write
$R_n(B, \sigma, c) = R_n(B, \sigma, \Theta_n(c))$.
In this setting, we let $n$ go to infinity and define the asymptotic quantized minimax risk as
$$R(B, \sigma, c) = \liminf_{n \to \infty} R_n(B, \sigma, c) = \liminf_{n \to \infty} \inf_{M_n \in \mathcal{M}_{n,B}} \sup_{\theta^n \in \Theta_n(c)} \mathbb{E}\, d(\theta^n, M_n(X^n)). \qquad (2)$$
Note that we could estimate $\theta^n$ based on the quantized data $\hat X^n = Q_n(X^n)$. Once again denoting
by $\mathcal{Q}_{n,B}$ the set of all $(n, B)$-rate distortion codes, such an estimator is written $\tilde\theta^n = \tilde\theta^n(Q_n(X^n))$.
Clearly, if the decoding functions $\psi_n$ of $Q_n$ are injective, then this formulation is equivalent. The
quantized minimax risk is then expressed as
$$R_n(B, \sigma, \Theta_n) = \inf_{\tilde\theta^n} \inf_{Q_n \in \mathcal{Q}_{n,B}} \sup_{\theta^n \in \Theta_n} \mathbb{E}\, d(\theta^n, \tilde\theta^n).$$
The many normal means problem exhibits much of the complexity and subtlety of general nonparametric regression and density estimation problems. It arises naturally in the estimation of a function
expressed in terms of an orthogonal function basis [8, 13]. Our main result sharply characterizes the
excess risk that communication constraints impose on minimax estimation for $\Theta(c)$.
3 Main results
Our first result gives a lower bound on the exact quantized asymptotic risk in terms of $B$, $\sigma$, and $c$.
Theorem 1. For $B \ge 0$, $\sigma > 0$ and $c > 0$, the asymptotic minimax risk defined in (2) satisfies
$$R(B, \sigma, c) \ge \frac{\sigma^2 c^2}{\sigma^2 + c^2} + \frac{c^4}{\sigma^2 + c^2} 2^{-2B}. \qquad (3)$$
This lower bound on the limiting minimax risk can be viewed as the usual minimax risk without
quantization, plus an excess risk term due to quantization. If we take $B$ to be zero, the risk becomes
$c^2$, which is obtained by estimating all of the means simply by zero. On the other hand, letting
$B \to \infty$, we recover the minimax risk in Pinsker's theorem. This tradeoff is illustrated in Figure 2.
The proof of the theorem is technical and we defer it to the supplementary material. Here we sketch
the basic idea of the proof. Suppose we are able to find a prior distribution $\pi_n$ on $\theta^n$ and a random
vector $\tilde\theta^n$ such that for any $(n, B)$-rate estimation code $M_n$ the following holds:
$$\frac{\sigma^2 c^2}{\sigma^2 + c^2} + \frac{c^4}{\sigma^2 + c^2} 2^{-2B} \stackrel{\mathrm{(I)}}{=} \int \mathbb{E}_{X^n} d(\theta^n, \tilde\theta^n)\, d\pi_n(\theta^n)$$
$$\stackrel{\mathrm{(II)}}{\le} \int \mathbb{E}_{X^n} d(\theta^n, M_n(X^n))\, d\pi_n(\theta^n)$$
$$\stackrel{\mathrm{(III)}}{\le} \sup_{\theta^n \in \Theta_n(c)} \mathbb{E}_{X^n} d(\theta^n, M_n(X^n)).$$
Then taking an infimum over $M_n \in \mathcal{M}_{n,B}$ gives us the desired result. In fact, we can take $\pi_n$,
the prior on $\theta^n$, to be $N(0, c^2 I_n)$, and the model becomes $\theta_i \sim N(0, c^2)$ and $X_i \mid \theta_i \sim N(\theta_i, \sigma^2)$.
Then according to Lemma 1, inequality (II) holds with $\tilde\theta^n$ being the minimizer to the optimization
problem
$$\min_{p(\tilde\theta^n \mid X^n, \theta^n)} \mathbb{E}\, d(\theta^n, \tilde\theta^n) \quad \text{subject to} \quad I(X^n; \tilde\theta^n) \le nB, \quad p(\tilde\theta^n \mid X^n, \theta^n) = p(\tilde\theta^n \mid X^n).$$
The equality (I) holds due to Lemma 2. The inequality (III) can be shown by a limiting concentration
argument on the prior distribution, which is included in the supplementary material.
Lemma 1. Suppose that $X_1, \ldots, X_n$ are independent and generated by $\theta_i \sim \pi(\theta_i)$ and $X_i \mid \theta_i \sim
p(x_i \mid \theta_i)$. Suppose $M_n$ is an $(n, B)$-rate estimation code with risk $\mathbb{E}\, d(\theta^n, M_n(X^n)) \le D$. Then
the rate $B$ is lower bounded by the solution to the following problem:
$$\min_{p(\tilde\theta^n \mid X^n, \theta^n)} I(X^n; \tilde\theta^n) \quad \text{subject to} \quad \mathbb{E}\, d(\theta^n, \tilde\theta^n) \le D, \quad p(\tilde\theta^n \mid X^n, \theta^n) = p(\tilde\theta^n \mid X^n). \qquad (4)$$
The next lemma gives the solution to problem (4) when we have $\theta_i \sim N(0, c^2)$ and $X_i \mid \theta_i \sim
N(\theta_i, \sigma^2)$.
Lemma 2. Suppose $\theta_i \sim N(0, c^2)$ and $X_i \mid \theta_i \sim N(\theta_i, \sigma^2)$ for $i = 1, \ldots, n$. For any random
vector $\tilde\theta^n$ satisfying $\mathbb{E}\, d(\theta^n, \tilde\theta^n) \le D$ and $p(\tilde\theta^n \mid X^n, \theta^n) = p(\tilde\theta^n \mid X^n)$ we have
$$I(X^n; \tilde\theta^n) \ge \frac{n}{2} \log \frac{c^4}{(\sigma^2 + c^2)\left(D - \frac{\sigma^2 c^2}{\sigma^2 + c^2}\right)}.$$
Combining the above two lemmas, we obtain a lower bound of the risk assuming that $\theta^n$ follows the
prior distribution $\pi_n$:
Corollary 1. Suppose $M_n$ is an $(n, B)$-rate estimation code for the source $\theta_i \sim N(0, c^2)$ and
$X_i \mid \theta_i \sim N(\theta_i, \sigma^2)$; then
$$\mathbb{E}\, d(\theta^n, M_n(X^n)) \ge \frac{\sigma^2 c^2}{\sigma^2 + c^2} + \frac{c^4}{\sigma^2 + c^2} 2^{-2B}. \qquad (5)$$
3.1 An adaptive source coding method
We now present a source coding method, which we will show attains the minimax lower bound
asymptotically with high probability.
Suppose that the encoder is given a sequence of observations (X1 , . . . , Xn ), and both the encoder
and the decoder know the radius c of the L2 ball in which the mean vector lies. The steps of the
source coding method are outlined below:
Step 1. Generating codebooks. The codebooks are distributed to both the encoder and the decoder.
(a) Generate codebook $\mathcal{B} = \{1/\sqrt{n}, 2/\sqrt{n}, \ldots, \lceil c^2\sqrt{n}\rceil/\sqrt{n}\}$.
(b) Generate codebook $\mathcal{X}$ which consists of $2^{nB}$ i.i.d. random vectors from the uniform
distribution on the $n$-dimensional unit sphere $S^{n-1}$.
Step 2. Encoding.
(a) Encode $\hat b^2 = \frac{1}{n}\|X^n\|^2 - \sigma^2$ by $\hat\beta^2 = \arg\min\{|b^2 - \hat b^2| : b^2 \in \mathcal{B}\}$.
(b) Encode $X^n$ by $\bar X^n = \arg\max\{\langle X^n, x^n\rangle : x^n \in \mathcal{X}\}$.
Step 3. Transmit or store $(\hat\beta^2, \bar X^n)$ by their corresponding indices using $\log c^2 + \frac12 \log n + nB$ bits.
Step 4. Decoding.
(a) Recover $(\hat\beta^2, \bar X^n)$ by the transmitted or stored indices.
(b) Estimate $\theta$ by
$$\tilde\theta^n = \sqrt{\frac{n\hat\beta^4(1 - 2^{-2B})}{\hat\beta^2 + \sigma^2}}\; \bar X^n.$$
We make several remarks on this quantized estimation method.
Remark 1. The rate of this coding method is $B + \frac{\log c^2}{n} + \frac{\log n}{2n}$, which is asymptotically $B$ bits.
Remark 2. The method is probabilistic; the randomness comes from the construction of the codebook $\mathcal{X}$. Denoting by $\bar{\mathcal{M}}_{n,B,\sigma,c}$ the ensemble of such random quantizers, there is then a natural one-to-one mapping between $\bar{\mathcal{M}}_{n,B,\sigma,c}$ and $(S^{n-1})^{2^{nB}}$, and we attach probability measure to $\bar{\mathcal{M}}_{n,B,\sigma,c}$
corresponding to the product uniform distribution on $(S^{n-1})^{2^{nB}}$.
Remark 3. The main idea behind this coding scheme is to encode the magnitude and the direction
of the observation vector separately, in such a way that the procedure adapts to sources with different
norms of the mean vectors.
Remark 4. The computational complexity of this source coding method is exponential in $n$. Therefore, like the Shannon random codebook, this is a demonstration of the asymptotic achievability
of the lower bound (3), rather than a practical scheme to be implemented. We discuss possible
computationally efficient algorithms in Section 5.
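For toy sizes the scheme can nonetheless be run directly; the sketch below (ours) brute-forces the $2^{nB}$ random codewords, following Steps 1-4 with the codebooks shared by encoder and decoder:

```python
import numpy as np

def quantized_estimate(x, sigma2, c2, B, rng):
    """Toy implementation of the adaptive coding method (Steps 1-4).

    Brute-force search over 2^{nB} codewords, so keep n * B small.
    """
    n = len(x)
    # Step 1: codebooks.
    grid = np.arange(1, int(np.ceil(c2 * np.sqrt(n))) + 1) / np.sqrt(n)
    codebook = rng.normal(size=(2 ** int(n * B), n))
    codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # uniform on S^{n-1}
    # Step 2: encode magnitude and direction separately.
    b2_hat = np.sum(x**2) / n - sigma2
    beta2 = grid[np.argmin(np.abs(grid - b2_hat))]
    j = np.argmax(codebook @ x)
    # Steps 3-4: only the two indices are stored; decode the estimate.
    scale = np.sqrt(n * beta2**2 * (1 - 2.0 ** (-2 * B)) / (beta2 + sigma2))
    return scale * codebook[j]

rng = np.random.default_rng(3)
n, c2, sigma2, B = 12, 4.0, 1.0, 1.0
theta = rng.normal(size=n)
theta *= np.sqrt(c2 * n) / np.linalg.norm(theta)   # hardest case: the boundary
x = theta + rng.normal(size=n) * np.sqrt(sigma2)
theta_tilde = quantized_estimate(x, sigma2, c2, B, rng)
print("per-coordinate loss:", np.mean((theta - theta_tilde) ** 2))
```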
The following shows that with high probability this procedure will attain the desired lower bound
asymptotically.
Theorem 2. For a sequence of vectors $\{\theta^n\}_{n=1}^{\infty}$ satisfying $\theta^n \in \mathbb{R}^n$ and $\|\theta^n\|^2/n = b^2 \le c^2$, as
$n \to \infty$,
$$P\left(d(\theta^n, M_n(X^n)) > \frac{\sigma^2 b^2}{\sigma^2 + b^2} + \frac{b^4}{\sigma^2 + b^2} 2^{-2B} + C\sqrt{\frac{\log n}{n}}\right) \to 0 \qquad (6)$$
for some constant $C$ that does not depend on $n$ (but could possibly depend on $b$, $\sigma$ and $B$). The
probability measure is with respect to both $M_n \in \bar{\mathcal{M}}_{n,B,\sigma,c}$ and $X^n \in \mathbb{R}^n$.
This theorem shows that the source coding method not only achieves the desired minimax lower
bound for the $L_2$ ball with high probability with respect to the random codebook and source distribution, but also adapts to the true magnitude of the mean vector $\theta^n$. It agrees with the intuition that
the hardest mean vector to estimate lies on the boundary of the $L_2$ ball. Based on Theorem 2 we can
obtain a uniform high probability bound for mean vectors in the $L_2$ ball.
Corollary 2. For any sequence of vectors $\{\theta^n\}_{n=1}^{\infty}$ satisfying $\theta^n \in \mathbb{R}^n$ and $\|\theta^n\|^2/n \le c^2$, as
$n \to \infty$,
$$P\left(d(\theta^n, M_n(X^n)) > \frac{\sigma^2 c^2}{\sigma^2 + c^2} + \frac{c^4}{\sigma^2 + c^2} 2^{-2B} + C'\sqrt{\frac{\log n}{n}}\right) \to 0$$
for some constant $C'$ that does not depend on $n$.
We include the details of the proof of Theorem 2 in the supplementary material, which carefully
analyzes the three terms in the following decomposition of the loss function:
[Figure 3: bar chart of estimates by index; legend: B=0.1, B=0.2, B=0.5, B=1, James-Stein.]
Figure 3: Comparison of the quantized estimates with different rates $B$, the James-Stein estimator, and the true
mean vector. The heights of the bars are the averaged estimates based on 100 replicates. Each large background
rectangle indicates the original mean component $\theta_j$.
2
2
1
n
1
n
d(?n , ??n ) =
?
? ? ?n
=
?
? ??
bX n + ?
bX n ? ? n
n
n
1
1
2
2
n
n
2
?
=
? ??
bX
+ kb
? X n ? ?n k + h??n ? ?
bX n , ?
bX n ? ? n i
n
n
n
|
{z
} |
{z
} |
{z
}
A1
A2
A3
b
b2
b
b2 +? 2
with bb2 = kX n k2 /n ? ? 2 . Term A1 characterizes the quantization error. Term
where ?
b=
A2 does not involve the random codebook, and is the loss of a type of James-Stein estimator. The
cross term A3 vanishes as n ? ?.
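The decomposition is a purely algebraic identity, so it can be sanity-checked numerically; the snippet below (our own toy setup, with an arbitrary stand-in estimator, since the identity holds for any $\hat{\theta}^n$) verifies that $A_1 + A_2 + A_3$ equals the total loss.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 1000, 1.0
theta = rng.standard_normal(n)
X = theta + np.sqrt(sigma2) * rng.standard_normal(n)

b2_hat = X @ X / n - sigma2
alpha = b2_hat / (b2_hat + sigma2)
theta_hat = 0.9 * alpha * X    # stand-in estimator; any choice works here

total = np.sum((theta_hat - theta) ** 2) / n
A1 = np.sum((theta_hat - alpha * X) ** 2) / n
A2 = np.sum((alpha * X - theta) ** 2) / n
A3 = 2.0 / n * np.dot(theta_hat - alpha * X, alpha * X - theta)
assert np.isclose(total, A1 + A2 + A3)
```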
4 Simulations
In this section we present a set of simulation results showing the empirical performance of the proposed quantized estimation method. Throughout the simulation, we fix the noise level $\sigma^2 = 1$, while varying the other parameters c and B.
First we show in Figure 3 the effect of quantized estimation and compare it with the James-Stein estimator. Setting n = 15 and c = 2, we randomly generate a mean vector $\theta^n \in \mathbb{R}^n$ with $\|\theta\|^2/n = c^2$. A random vector X is then drawn from $N(\theta^n, I_n)$, and quantized estimates with rates $B \in \{0.1, 0.2, 0.5, 1\}$ are calculated; for comparison we also compute the James-Stein estimator, given by $\hat{\theta}^n_{JS} = \big(1 - \frac{(n-2)\sigma^2}{\|X^n\|^2}\big) X^n$. We repeat this sampling and estimation procedure 100 times and report the averaged risk estimates in Figure 3. We see that the quantized estimator essentially shrinks the random vector towards zero. With small rates, the shrinkage is strong, with all the estimates close to zero. Estimates with larger rates approach the James-Stein estimator.
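A scaled-down version of this experiment can be reproduced with the quantized_estimate sketch given earlier (rates much above one bit are already expensive at n = 15, since the directional codebook has $2^{nB}$ entries):

```python
import numpy as np

def james_stein(X, sigma2=1.0):
    n = len(X)
    return (1 - (n - 2) * sigma2 / (X @ X)) * X

rng = np.random.default_rng(0)
n, c = 15, 2.0
theta = rng.standard_normal(n)
theta *= c * np.sqrt(n) / np.linalg.norm(theta)   # enforce ||theta||^2 / n = c^2

for B in (0.1, 0.2, 0.5, 1.0):
    errs = [np.sum((quantized_estimate(theta + rng.standard_normal(n),
                                       B=B, c=c, seed=rep) - theta) ** 2) / n
            for rep in range(100)]
    print(f"B={B}: average MSE {np.mean(errs):.3f}")
```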
In our second set of simulations, we choose c from {0.1, 0.5, 1, 5, 10} to reflect different signal-to-noise ratios, and choose B from {0.1, 0.2, 0.5, 1}. For each combination of the values of c and B, we vary n, the dimension of the mean vector, which is also the number of observations. Given a set of parameters c, B and n, a mean vector $\theta^n$ is generated uniformly on the sphere $\|\theta^n\|^2/n = c^2$ and data $X^n$ are generated following the distribution $N(\theta^n, \sigma^2 I_n)$. We quantize the data using the source coding method, and compute the mean squared error between the estimator and the true mean vector. The procedure is repeated 100 times for each of the parameter combinations, and the average and standard deviation of the mean squared errors are recorded. The results are shown in Figure 4. We see that as n increases, the average error decreases and approaches the theoretic lower bound in Theorem 1. Moreover, the standard deviation of the mean squared errors also decreases, confirming the result of Theorem 2 that the convergence is with high probability.
5 Discussion and future work
In this paper, we establish a sharp lower bound on the asymptotic minimax risk for quantized estimators of nonparametric normal means for the case of a Euclidean ball.
Figure 4: Mean squared errors and standard deviations of the quantized estimator versus n for different values of (B, c). The horizontal dashed lines indicate the lower bounds. [Panels for B = 0.1, 0.2, 0.5, 1 with curves for the different values of c omitted.]
Similar techniques can be applied to the setting where the parameter space is an ellipsoid $\Theta = \{\theta : \sum_{j=1}^{\infty} a_j^2 \theta_j^2 \le c^2\}$. A principal case of interest is the Sobolev ellipsoid of order m, where $a_j^2 \sim (\pi j)^{2m}$ as $j \to \infty$. The Sobolev ellipsoid arises naturally in nonparametric function estimation and is thus of great importance. We leave this to future work.
Donoho discusses the parallel between rate distortion theory and Pinsker's work in his Wald Lectures [4]. Focusing on the case of the Sobolev space of order m, which we denote by $F_m$, it is shown that the Kolmogorov entropy $H_\epsilon(F_m)$ and the rate distortion function $R(D, X)$ satisfy $H_\epsilon(F_m) \asymp \sup\{R(\epsilon^2, X) : P(X \in F_m) = 1\}$ as $\epsilon \to 0$. This connects the worst-case minimax analysis and the least-favorable rate distortion function for the function class. Another information-theoretic formulation of minimax rates lies in the so-called "le Cam equation" $H_{\epsilon_n}(F) = n\epsilon_n^2$ [14, 15]. However, both are different from the direction we pursue in this paper, which is to impose communication constraints in minimax analysis.
In other related work, researchers in communications theory have studied estimation problems in sensor networks under communication constraints. Draper and Wornell [5] obtain a result on the so-called "one-step problem" for the quadratic-Gaussian case, which is essentially the same as the statement in our Corollary 1. In fact, they consider a similar setting, but treat the mean vector as
random and generated independently from a known normal distribution. In contrast, we assume
a fixed but unknown mean vector and establish a minimax lower bound as well as an adaptive
source coding method that adapts to the fixed mean vector within the parameter space. Zhang et
al. [16] also consider minimax bounds with communication constraints. However, the analysis in
[16] is focused on distributed parametric estimation, where the data are distributed between several
machines. Information is shared between the machines in order to construct a parameter estimate,
and constraints are placed on the amount of communication that is allowed.
In addition to treating more general ellipsoids, an important direction for future work is to design
computationally efficient quantized nonparametric estimators. One possible method is to divide
the variables into smaller blocks and quantize them separately. A more interesting and promising
approach is to adapt the recent work of Venkataramanan et al. [12] that uses sparse regression for
lossy compression. We anticipate that with appropriate modifications, this scheme can be applied to
quantized nonparametric estimation to yield practical algorithms, trading off a worse error exponent
in the convergence rate to the optimal quantized minimax risk for reduced complexity encoders and
decoders.
Acknowledgements
Research supported in part by NSF grant IIS-1116730, AFOSR grant FA9550-09-1-0373, ONR
grant N000141210762, and an Amazon AWS in Education Machine Learning Research grant. The
authors thank Andrew Barron, John Duchi, and Alfred Hero for valuable comments on this work.
8
References
[1] T. Tony Cai, Jianqing Fan, and Tiefeng Jiang. Distributions of angles in random packing on spheres. The Journal of Machine Learning Research, 14(1):1837–1864, 2013.
[2] T. Tony Cai and Tiefeng Jiang. Phase transition in limiting distributions of coherence of high-dimensional random matrices. Journal of Multivariate Analysis, 107:24–39, 2012.
[3] Venkat Chandrasekaran and Michael I. Jordan. Computational and statistical tradeoffs via convex relaxation. PNAS, 110(13):E1181–E1190, March 2013.
[4] David L. Donoho. Wald lecture I: Counting bits with Kolmogorov and Shannon. 2000.
[5] Stark C. Draper and Gregory W. Wornell. Side information aware coding strategies for sensor networks. Selected Areas in Communications, IEEE Journal on, 22(6):966–976, 2004.
[6] Jon M. Jenkins et al. Overview of the Kepler science processing pipeline. The Astrophysical Journal Letters, 713(2):L87, 2010.
[7] Robert G. Gallager. Information Theory and Reliable Communication. John Wiley & Sons, 1968.
[8] Iain M. Johnstone. Function estimation and Gaussian sequence models. 2002. Unpublished manuscript.
[9] Michael Nussbaum. Minimax risk: Pinsker bound. Encyclopedia of Statistical Sciences, 3:451–460, 1999.
[10] Mark Semenovich Pinsker. Optimal filtering of square-integrable signals in Gaussian noise. Problemy Peredachi Informatsii, 16(2):52–68, 1980.
[11] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Series in Statistics, 1st edition, 2008.
[12] Ramji Venkataramanan, Tuhin Sarkar, and Sekhar Tatikonda. Lossy compression via sparse linear regression: Computationally efficient encoding and decoding. In IEEE International Symposium on Information Theory (ISIT), pages 1182–1186. IEEE, 2013.
[13] Larry Wasserman. All of Nonparametric Statistics. Springer-Verlag, 2006.
[14] Wing Hung Wong and Xiaotong Shen. Probability inequalities for likelihood ratios and convergence rates of sieve MLEs. The Annals of Statistics, 23:339–362, 1995.
[15] Yuhong Yang and Andrew Barron. Information-theoretic determination of minimax rates of convergence. The Annals of Statistics, 27(5):1564–1599, 1999.
[16] Yuchen Zhang, John Duchi, Michael Jordan, and Martin J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Advances in Neural Information Processing Systems, pages 2328–2336, 2013.
Submodular Function Minimization
Robert Nishihara, Stefanie Jegelka, Michael I. Jordan
Electrical Engineering and Computer Science
University of California
Berkeley, CA 94720
{rkn,stefje,jordan}@eecs.berkeley.edu
Abstract
Submodular functions describe a variety of discrete problems in machine learning, signal processing, and computer vision. However, minimizing submodular
functions poses a number of algorithmic challenges. Recent work introduced an
easy-to-use, parallelizable algorithm for minimizing submodular functions that
decompose as the sum of ?simple? submodular functions. Empirically, this algorithm performs extremely well, but no theoretical analysis was given. In this
paper, we show that the algorithm converges linearly, and we provide upper and
lower bounds on the rate of convergence. Our proof relies on the geometry of
submodular polyhedra and draws on results from spectral graph theory.
1
Introduction
A large body of recent work demonstrates that many discrete problems in machine learning can be
phrased as the optimization of a submodular set function [2]. A set function F : 2V ! R over a
ground set V of N elements is submodular if the inequality F (A) + F (B) F (A [ B) + F (A \ B)
holds for all subsets A, B ? V . Problems like clustering [33], structured sparse variable selection
[1], MAP inference with higher-order potentials [28], and corpus extraction problems [31] can be
reduced to the problem of submodular function minimization (SFM), that is
min F (A).
(P1)
A?V
Although SFM is solvable in polynomial time, existing algorithms can be inefficient on large-scale
problems. For this reason, the development of scalable, parallelizable algorithms has been an active
area of research [24, 25, 29, 35]. Approaches to solving Problem (P1) are either based on combinatorial optimization or on convex optimization via the Lov?asz extension.
Functions that occur in practice are usually not arbitrary and frequently possess additional exploitable structure. For example, a number of submodular functions admit specialized algorithms
that solve Problem (P1) very quickly. Examples include cut functions on certain kinds of graphs,
concave functions of the cardinality |A|, and functions counting joint ancestors in trees. We will use
the term simple to refer to functions F for which we have a fast subroutine for minimizing F + s,
where s 2 RN is any modular function. We treat these subroutines as black boxes. Many commonly occuring submodular functions (for example, graph cuts, hypergraph cuts, MAP inference
with higher-order potentials [16, 28, 37], co-segmentation [22], certain structured-sparsity inducing
functions [26], covering functions [35], and combinations thereof) can be expressed as a sum
XR
F (A) =
Fr (A)
(1)
r=1
of simple submodular functions. Recent work demonstrates that this structure offers important practical benefits [25, 29, 35]. For instance, it admits iterative algorithms that minimize each Fr separately and combine the results in a straightforward manner (for example, dual decomposition).
1
In particular, it has been shown that the minimization of decomposable functions can be rephrased
as a best-approximation problem, the problem of finding the closest points in two convex sets [25].
This formulation brings together SFM and classical projection methods and yields empirically fast,
parallel, and easy-to-implement algorithms. In these cases, the performance of projection methods
depends heavily on the specific geometry of the problem at hand and is not well understood in
general. Indeed, while Jegelka et al. [25] show good empirical results, the analysis of this alternative
approach to SFM was left as an open problem.
Contributions. In this work, we study the geometry of the submodular best-approximation problem
and ground the prior empirical results in theoretical guarantees. We show that SFM via alternating
projections, or block coordinate descent, converges at a linear rate. We show that this rate holds
for the best-approximation problem, relaxations of SFM, and the original discrete problem. More
importantly, we prove upper and lower bounds on the worst-case rate of convergence. Our proof
relies on analyzing angles between the polyhedra associated with submodular functions and draws
on results from spectral graph theory. It offers insight into the geometry of submodular polyhedra
that may be beneficial beyond the analysis of projection algorithms.
Submodular minimization. The first polynomial-time algorithm for minimizing arbitrary submodular functions was a consequence of the ellipsoid method [19]. Strongly and weakly polynomialtime combinatorial algorithms followed [32]. The current fastest running times are O(N 5 ?1 + N 6 )
[34] in general and O((N 4 ?1 + N 5 ) log Fmax ) for integer-valued functions [23], where Fmax =
maxA |F (A)| and ?1 is the time required to evaluate F . Some work has addressed decomposable
functions [25, 29, 35]. The running times in [29] apply to integer-valued functions and range from
O((N + R)2 log Fmax ) for cuts to O((N + Q2 R)(N + Q2 R + QR?2 ) log Fmax ), where Q ? N is
the maximal cardinality of the support of any Fr , and ?2 is the time required to minimize a simple
function. Stobbe and Krause [35] use a convex optimization approach based on Nesterov?s smoothing technique. They achieve a (sublinear) convergence rate of O(1/k) for the discrete SFM problem.
Their results and our results do not rely on the function being integral.
Projection methods. Algorithms based on alternating projections between convex sets (and related
methods such as the Douglas?Rachford algorithm) have been studied extensively for solving convex
feasibility and best-approximation problems [4, 5, 7, 11, 12, 20, 21, 36, 38]. See Deutsch [10] for a
survey of applications. In the simple case of subspaces, the convergence of alternating projections
has been characterized in terms of the Friedrichs angle cF between the subspaces [5, 6]. There are
often good ways to compute cF (see Lemma 6), which allow us to obtain concrete linear rates of
convergence for subspaces. The general case of alternating projections between arbitrary convex
sets is less well understood. Bauschke and Borwein [3] give a general condition for the linear
convergence of alternating projections in terms of the value ?? (defined in Section 3.1). However,
except in very limited cases, it is unclear how to compute or even bound ?? . While it is known that
?? < 1 for polyhedra [5, Corollary 5.26], the rate may be arbitrarily slow, and the challenge is
to bound the linear rate away from one. We are able to give a specific uniform linear rate for the
submodular polyhedra that arise in SFM.
Although both ?? and cF are useful quantities for understanding the convergence of projection
methods, they largely have been studied independently of one another. In this work, we relate
these two quantities for polyhedra, thereby obtaining some of the generality of ?? along with the
computability of cF . To our knowledge, we are the first to relate ?? and cF outside the case of
subspaces. We feel that this connection may be useful beyond the context of submodular polyhedra.
1.1
Background
Throughout this paper, we assume that F is a sum of simple submodular functions F1 , . .P
. , FR and
that F (;) = 0. Points s 2 RN can be identified with (modular) set functions via s(A) = n2A sn .
The base polytope of F is defined as the set of all modular functions that are dominated by F and
that sum to F (V ),
B(F ) = {s 2 RN | s(A) ? F (A) for all A ? V and s(V ) = F (V )}.
The Lov?asz extension f : RN ! R of F can be written as the support function of the base polytope,
that is f (x) = maxs2B(F ) s> x. Even though B(F ) may have exponentially many faces, the extension f can be evaluated in O(N log N ) time [15]. The discrete SFM problem (P1) can be relaxed to
The discrete SFM problem (P1) can be relaxed to the non-smooth convex optimization problem
$$\min_{x \in [0,1]^N} f(x) = \min_{x \in [0,1]^N} \sum_{r=1}^{R} f_r(x), \qquad (P2)$$
where $f_r$ is the Lovász extension of $F_r$. This relaxation is exact: rounding an optimal continuous solution yields the indicator vector of an optimal discrete solution. The formulation in Problem (P2) is amenable to dual decomposition [30] and smoothing techniques [35], but suffers from the non-smoothness of f [25]. Alternatively, we can formulate a proximal version of the problem
$$\min_{x \in \mathbb{R}^N} f(x) + \tfrac{1}{2}\|x\|^2 = \min_{x \in \mathbb{R}^N} \sum_{r=1}^{R} \left( f_r(x) + \tfrac{1}{2R}\|x\|^2 \right). \qquad (P3)$$
By thresholding the optimal solution of Problem (P3) at zero, we recover the indicator vector of an
optimal discrete solution [17], [2, Proposition 8.4].
Lemma 1. [25] The dual of the right-hand side of Problem (P3) is the best-approximation problem
$$\min \|a - b\|^2 \qquad a \in A,\; b \in B, \qquad (P4)$$
where $A = \{(a_1, \ldots, a_R) \in \mathbb{R}^{NR} \mid \sum_{r=1}^{R} a_r = 0\}$ and $B = B(F_1) \times \cdots \times B(F_R)$.
Lemma 1 implies that we can minimize a decomposable submodular function by solving Problem (P4), which means finding the closest points between the subspace A and the product B of base polytopes. Projecting onto A is straightforward because A is a subspace. Projecting onto B amounts to projecting onto each $B(F_r)$ separately. The projection $\Pi_{B(F_r)} z$ of a point z onto $B(F_r)$ may be solved by minimizing $F_r - z$ [25]. We can compute these projections easily because each $F_r$ is simple.
Throughout this paper, we use A and B to refer to the specific polyhedra defined in Lemma 1 (which live in $\mathbb{R}^{NR}$) and P and Q to refer to general polyhedra (sometimes arbitrary convex sets) in $\mathbb{R}^D$. Note that the polyhedron B depends on the submodular functions $F_1, \ldots, F_R$, but we omit the dependence to simplify our notation. Our bound will be uniform over all submodular functions.
2 Algorithm and Idea of Analysis
A popular class of algorithms for solving best-approximation problems is the class of projection methods [5]. The most straightforward approach uses alternating projections (AP) or block coordinate descent. Start with any point $a_0 \in A$, and inductively generate two sequences via $b_k = \Pi_B a_k$ and $a_{k+1} = \Pi_A b_k$. Given the nature of A and B, this algorithm is easy to implement and use in our setting, and it solves Problem (P4) [25]. This is the algorithm that we will analyze.
The sequence $(a_k, b_k)$ will eventually converge to an optimal pair $(a^*, b^*)$. We say that AP converges linearly with rate $\alpha < 1$ if $\|a_k - a^*\| \le C_1 \alpha^k$ and $\|b_k - b^*\| \le C_2 \alpha^k$ for all k and for some constants $C_1$ and $C_2$. Smaller values of $\alpha$ are better.
Analysis: Intuition. We will provide a detailed analysis of the convergence of AP for the polyhedra A and B. To motivate our approach, we first provide some intuition with the following much-simplified setup. Let U and V be one-dimensional subspaces spanned by the unit vectors u and v respectively. In this case, it is known that AP converges linearly with rate $\cos^2\theta$, where $\theta \in [0, \frac{\pi}{2}]$ is the angle such that $\cos\theta = u^\top v$. The smaller the angle, the slower the rate of convergence.
For subspaces U and V of higher dimension, the relevant generalization of the "angle" between the subspaces is the Friedrichs angle [11, Definition 9.4], whose cosine is given by
$$c_F(U, V) = \sup\left\{ u^\top v \;\middle|\; u \in U \cap (U \cap V)^\perp,\; v \in V \cap (U \cap V)^\perp,\; \|u\| \le 1,\; \|v\| \le 1 \right\}. \qquad (2)$$
In finite dimensions, $c_F(U, V) < 1$. In general, when U and V are subspaces of arbitrary dimension, AP will converge linearly with rate $c_F(U, V)^2$ [11, Theorem 9.8]. If U and V are affine spaces, AP still converges linearly with rate $c_F(U - u, V - v)^2$, where $u \in U$ and $v \in V$.
We are interested in rates for polyhedra P and Q, which we define as the intersection of finitely
many halfspaces.
Figure 1: The optimal sets E, H in Equation (4), the vector v, and the shifted polyhedron Q'. [Diagram omitted.]
We generalize the preceding results by considering all pairs $(P_x, Q_y)$ of
faces of P and Q and showing that the convergence rate of AP between P and Q is at worst $\max_{x,y} c_F(\mathrm{aff}_0(P_x), \mathrm{aff}_0(Q_y))^2$, where $\mathrm{aff}(C)$ is the affine hull of C and $\mathrm{aff}_0(C) = \mathrm{aff}(C) - c$ for some $c \in C$. The faces $\{P_x\}_{x \in \mathbb{R}^D}$ of P are defined as the nonempty maximizers of linear functions over P, that is,
$$P_x = \arg\max_{p \in P} x^\top p. \qquad (3)$$
While we look at angles between pairs of faces, we remark that Deutsch and Hundal [13] consider a different generalization of the "angle" between arbitrary convex sets.
Roadmap of the Analysis. Our analysis has two main parts. First, we relate the convergence rate
of AP between polyhedra P and Q to the angles between the faces of P and Q. To do so, we give a
general condition under which AP converges linearly (Theorem 2), which we show depends on the
angles between the faces of P and Q (Corollary 5) in the polyhedral case. Second, we specialize
to the polyhedra A and B, and we equate the angles with eigenvalues of certain matrices and use
tools from spectral graph theory to bound the relevant eigenvalues in terms of the conductance of a
specific graph. This yields a worst-case bound of $1 - \frac{1}{N^2 R^2}$ on the rate, stated in Theorem 12. In Theorem 14, we show a lower bound of $1 - \frac{2\pi^2}{N^2 R}$ on the worst-case convergence rate.
3 The Upper Bound
We first derive an upper bound on the rate of convergence of AP between the polyhedra A and B.
The results in this section are proved in Appendix A.
3.1 A Condition for Linear Convergence
We begin with a condition under which AP between two closed convex sets P and Q converges
linearly. This result is similar to that of Bauschke and Borwein [3, Corollary 3.14], but the rate we
achieve is twice as fast and relies on slightly weaker assumptions.
We will need a few definitions from Bauschke and Borwein [3]. Let $d(K_1, K_2) = \inf\{\|k_1 - k_2\| : k_1 \in K_1, k_2 \in K_2\}$ be the distance between sets $K_1$ and $K_2$. Define the sets of "closest points" as
$$E = \{p \in P \mid d(p, Q) = d(P, Q)\} \qquad H = \{q \in Q \mid d(q, P) = d(Q, P)\}, \qquad (4)$$
and let $v = \Pi_{Q - P}\, 0$ (see Figure 1). Note that $H = E + v$, and when $P \cap Q \ne \emptyset$ we have $v = 0$ and $E = H = P \cap Q$. Therefore, we can think of the pair (E, H) as a generalization of the intersection $P \cap Q$ to the setting where P and Q do not intersect. Pairs of points $(e, e + v) \in E \times H$ are solutions to the best-approximation problem between P and Q. In our analysis, we will mostly study the translated version $Q' = Q - v$ of Q that intersects P at E.
For $x \in \mathbb{R}^D \setminus E$, the function $\kappa$ relates the distance to E with the distances to P and Q',
$$\kappa(x) = \frac{d(x, E)}{\max\{d(x, P),\, d(x, Q')\}}.$$
If $\kappa$ is bounded, then whenever x is close to both P and Q', it must also be close to their intersection. If, for example, $D \ge 2$ and P and Q are balls of radius one whose centers are separated by distance exactly two, then $\kappa$ is unbounded. The maximum $\kappa_* = \sup_{x \in (P \cup Q') \setminus E} \kappa(x)$ is useful for bounding the convergence rate.
Theorem 2. Let P and Q be convex sets, and suppose that $\kappa_* < \infty$. Then AP between P and Q converges linearly with rate $1 - \frac{1}{\kappa_*^2}$. Specifically,
$$\|p_k - p^*\| \le 2\|p_0 - p^*\|\left(1 - \tfrac{1}{\kappa_*^2}\right)^k \quad \text{and} \quad \|q_k - q^*\| \le 2\|q_0 - q^*\|\left(1 - \tfrac{1}{\kappa_*^2}\right)^k.$$
3.2 Relating $\kappa_*$ to the Angles Between Faces of the Polyhedra
In this section, we consider the case of polyhedra P and Q, and we bound $\kappa_*$ in terms of the angles between pairs of their faces. In Lemma 3, we show that $\kappa$ is nondecreasing along the sequence of points generated by AP between P and Q'. We treat points p for which $\kappa(p) = 1$ separately because those are the points from which AP between P and Q' converges in one step. This lemma enables us to bound $\kappa(p)$ by initializing AP at p and bounding $\kappa$ at some later point in the resulting sequence.
Lemma 3. For any $p \in P \setminus E$, either $\kappa(p) = 1$ or $1 < \kappa(p) \le \kappa(\Pi_{Q'} p)$. Similarly, for any $q \in Q' \setminus E$, either $\kappa(q) = 1$ or $1 < \kappa(q) \le \kappa(\Pi_P q)$.
We can now bound $\kappa$ by angles between faces of P and Q.
Proposition 4. If P and Q are polyhedra and $p \in P \setminus E$, then there exist faces $P_x$ and $Q_y$ such that
$$1 - \frac{1}{\kappa(p)^2} \le c_F(\mathrm{aff}_0(P_x), \mathrm{aff}_0(Q_y))^2.$$
The analogous statement holds when we replace $p \in P \setminus E$ with $q \in Q' \setminus E$.
Note that $\mathrm{aff}_0(Q_y) = \mathrm{aff}_0(Q'_y)$. Proposition 4 immediately gives us the following corollary.
Corollary 5. If P and Q are polyhedra, then
$$1 - \frac{1}{\kappa_*^2} \le \max_{x, y \in \mathbb{R}^D} c_F(\mathrm{aff}_0(P_x), \mathrm{aff}_0(Q_y))^2.$$
3.3 Angles Between Subspaces and Singular Values
Corollary 5 leaves us with the task of bounding the Friedrichs angle. To do so, we first relate the
Friedrichs angle to the singular values of certain matrices in Lemma 6. We then specialize this to
base polyhedra of submodular functions. For convenience, we prove Lemma 6 in Appendix A.5,
though this result is implicit in the characterization of principal angles between subspaces given
in [27, Section 1]. Ideas connecting angles between subspaces and eigenvalues are also used by
Diaconis et al. [14].
Lemma 6. Let S and T be matrices with orthonormal rows and with equal numbers of columns. If all of the singular values of $ST^\top$ equal one, then $c_F(\mathrm{null}(S), \mathrm{null}(T)) = 0$. Otherwise, $c_F(\mathrm{null}(S), \mathrm{null}(T))$ is equal to the largest singular value of $ST^\top$ that is less than one.
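Lemma 6 gives a direct recipe for computing $c_F$ numerically; a sketch of our own, which first orthonormalizes the rows of the inputs (assumed full row rank) so that the lemma's hypotheses hold:

```python
import numpy as np

def friedrichs_cosine(S, T, tol=1e-10):
    """c_F(null(S), null(T)) via Lemma 6: the largest singular value of
    S T^T strictly below one, after orthonormalizing the rows."""
    S = np.linalg.qr(S.T)[0].T   # rows become orthonormal; nullspace unchanged
    T = np.linalg.qr(T.T)[0].T
    svals = np.linalg.svd(S @ T.T, compute_uv=False)
    below_one = svals[svals < 1 - tol]
    return float(below_one.max()) if below_one.size else 0.0
```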
Faces of relevant polyhedra. Let $A_x$ and $B_y$ be faces of the polyhedra A and B from Lemma 1. Since A is a vector space, its only nonempty face is $A_x = A$. Hence, $A_x = \mathrm{null}(S)$, where S is an $N \times NR$ matrix of $N \times N$ identity matrices $I_N$:
$$S = \frac{1}{\sqrt{R}} \underbrace{\begin{pmatrix} I_N & \cdots & I_N \end{pmatrix}}_{\text{repeated } R \text{ times}}. \qquad (5)$$
The matrix for $\mathrm{aff}_0(B_y)$ requires a bit more elaboration. Since B is a Cartesian product, we have $B_y = B(F_1)_{y_1} \times \cdots \times B(F_R)_{y_R}$, where $y = (y_1, \ldots, y_R)$ and $B(F_r)_{y_r}$ is a face of $B(F_r)$. To proceed, we use the following characterization of faces of base polytopes [2, Proposition 4.7].
Proposition 7. Let F be a submodular function, and let $B(F)_x$ be a face of B(F). Then there exists a partition of V into disjoint sets $A_1, \ldots, A_M$ such that
$$\mathrm{aff}(B(F)_x) = \bigcap_{m=1}^{M} \{s \in \mathbb{R}^N \mid s(A_1 \cup \cdots \cup A_m) = F(A_1 \cup \cdots \cup A_m)\}.$$
The following corollary is immediate.
Corollary 8. Define F, $B(F)_x$, and $A_1, \ldots, A_M$ as in Proposition 7. Then
$$\mathrm{aff}_0(B(F)_x) = \bigcap_{m=1}^{M} \{s \in \mathbb{R}^N \mid s(A_1 \cup \cdots \cup A_m) = 0\}.$$
By Corollary 8, for each $F_r$, there exists a partition of V into disjoint sets $A_{r1}, \ldots, A_{rM_r}$ such that
$$\mathrm{aff}_0(B_y) = \bigcap_{r=1}^{R} \bigcap_{m=1}^{M_r} \{(s_1, \ldots, s_R) \in \mathbb{R}^{NR} \mid s_r(A_{r1} \cup \cdots \cup A_{rm}) = 0\}.$$
In other words, we can write $\mathrm{aff}_0(B_y)$ as the nullspace of either of the block-diagonal matrices
$$T' = \begin{pmatrix} 1_{A_{11}}^\top & & \\ \vdots & & \\ 1_{A_{11} \cup \cdots \cup A_{1M_1}}^\top & & \\ & \ddots & \\ & & 1_{A_{R1}}^\top \\ & & \vdots \\ & & 1_{A_{R1} \cup \cdots \cup A_{RM_R}}^\top \end{pmatrix} \quad \text{or} \quad T = \begin{pmatrix} \frac{1_{A_{11}}^\top}{\sqrt{|A_{11}|}} & & \\ \vdots & & \\ \frac{1_{A_{1M_1}}^\top}{\sqrt{|A_{1M_1}|}} & & \\ & \ddots & \\ & & \frac{1_{A_{R1}}^\top}{\sqrt{|A_{R1}|}} \\ & & \vdots \\ & & \frac{1_{A_{RM_R}}^\top}{\sqrt{|A_{RM_R}|}} \end{pmatrix}, \qquad (6)$$
where $1_A$ is the indicator vector of $A \subseteq V$. For T', this follows directly from Equation (6). T can be obtained from T' via left multiplication by an invertible matrix, so T and T' have the same nullspace. Lemma 6 then implies that $c_F(\mathrm{aff}_0(A_x), \mathrm{aff}_0(B_y))$ equals the largest singular value of
$$ST^\top = \frac{1}{\sqrt{R}} \begin{pmatrix} \frac{1_{A_{11}}}{\sqrt{|A_{11}|}} & \cdots & \frac{1_{A_{1M_1}}}{\sqrt{|A_{1M_1}|}} & \cdots & \frac{1_{A_{R1}}}{\sqrt{|A_{R1}|}} & \cdots & \frac{1_{A_{RM_R}}}{\sqrt{|A_{RM_R}|}} \end{pmatrix}$$
that is less than one. We rephrase this conclusion in the following remark.
Remark 9. The largest eigenvalue of $(ST^\top)^\top(ST^\top)$ less than one equals $c_F(\mathrm{aff}_0(A_x), \mathrm{aff}_0(B_y))^2$.
Let $M_{\mathrm{all}} = M_1 + \cdots + M_R$. Then $(ST^\top)^\top(ST^\top)$ is the $M_{\mathrm{all}} \times M_{\mathrm{all}}$ square matrix whose rows and columns are indexed by (r, m) with $1 \le r \le R$ and $1 \le m \le M_r$ and whose entry corresponding to row $(r_1, m_1)$ and column $(r_2, m_2)$ equals
$$\frac{1}{R} \cdot \frac{1_{A_{r_1 m_1}}^\top 1_{A_{r_2 m_2}}}{\sqrt{|A_{r_1 m_1}| |A_{r_2 m_2}|}} = \frac{1}{R} \cdot \frac{|A_{r_1 m_1} \cap A_{r_2 m_2}|}{\sqrt{|A_{r_1 m_1}| |A_{r_2 m_2}|}}.$$
3.4 Bounding the Relevant Eigenvalues
It remains to bound the largest eigenvalue of $(ST^\top)^\top(ST^\top)$ that is less than one. To do so, we view the matrix in terms of the symmetric normalized Laplacian of a weighted graph. Let G be the graph whose vertices are indexed by (r, m) with $1 \le r \le R$ and $1 \le m \le M_r$. Let the edge between vertices $(r_1, m_1)$ and $(r_2, m_2)$ have weight $|A_{r_1 m_1} \cap A_{r_2 m_2}|$. We may assume that G is connected (the analysis in this case subsumes the analysis in the general case). The symmetric normalized Laplacian L of this graph is closely related to our matrix of interest,
$$(ST^\top)^\top(ST^\top) = I - \frac{R - 1}{R} L. \qquad (7)$$
Hence, the largest eigenvalue of $(ST^\top)^\top(ST^\top)$ that is less than one can be determined from the smallest nonzero eigenvalue $\lambda_2(L)$ of L. We bound $\lambda_2(L)$ via Cheeger's inequality (stated in Appendix A.6) by bounding the Cheeger constant $h_G$ of G.
Lemma 10. For $R \ge 2$, we have $h_G \ge \frac{2}{NR}$ and hence $\lambda_2(L) \ge \frac{2}{N^2 R^2}$.
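The graph G, its normalized Laplacian, and the bound of Lemma 10 are easy to check numerically for a concrete pair of partitions; a toy example of our own:

```python
import numpy as np

def lambda_2(partitions):
    """lambda_2 of the normalized Laplacian of G, built from one partition
    A_{r1}, ..., A_{rM_r} of V per component r (no self-loops)."""
    blocks = [A for part in partitions for A in part]
    k = len(blocks)
    W = np.array([[len(Ai & Aj) if i != j else 0
                   for j, Aj in enumerate(blocks)]
                  for i, Ai in enumerate(blocks)], dtype=float)
    d = W.sum(axis=1)
    L = np.eye(k) - W / np.sqrt(np.outer(d, d))
    return np.sort(np.linalg.eigvalsh(L))[1]

# R = 2 partitions of V = {0,...,5}; Lemma 10 predicts lambda_2 >= 2/(N R)^2.
parts = [[{0, 1}, {2, 3}, {4, 5}], [{0, 1, 2}, {3, 4, 5}]]
assert lambda_2(parts) >= 2 / (6 * 2) ** 2
```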
We prove Lemma 10 in Appendix A.7. Combining Remark 9, Equation (7), and Lemma 10, we obtain the following bound on the Friedrichs angle.
Proposition 11. Assuming that $R \ge 2$, we have
$$c_F(\mathrm{aff}_0(A_x), \mathrm{aff}_0(B_y))^2 \le 1 - \frac{R - 1}{R} \cdot \frac{2}{N^2 R^2} \le 1 - \frac{1}{N^2 R^2}.$$
Together with Theorem 2 and Corollary 5, Proposition 11 implies the final bound on the rate.
Theorem 12. The AP algorithm for Problem (P4) converges linearly with rate $1 - \frac{1}{N^2 R^2}$, i.e.,
$$\|a_k - a^*\| \le 2\|a_0 - a^*\|\left(1 - \tfrac{1}{N^2 R^2}\right)^k \quad \text{and} \quad \|b_k - b^*\| \le 2\|b_0 - b^*\|\left(1 - \tfrac{1}{N^2 R^2}\right)^k.$$
4 A Lower Bound
To probe the tightness of Theorem 12, we construct a "bad" submodular function and decomposition that lead to a slow rate. Appendix B gives the formal details. Our example is an augmented cut function on a cycle: for each $x, y \in V$, define $G_{xy}$ to be the cut function of a single edge (x, y),
$$G_{xy}(A) = \begin{cases} 1 & \text{if } |A \cap \{x, y\}| = 1 \\ 0 & \text{otherwise}. \end{cases}$$
Take N to be even and $R \ge 2$ and define the submodular function $F^{\mathrm{lb}} = F_1^{\mathrm{lb}} + \cdots + F_R^{\mathrm{lb}}$, where
$$F_1^{\mathrm{lb}} = G_{12} + G_{34} + \cdots + G_{(N-1)N} \qquad F_2^{\mathrm{lb}} = G_{23} + G_{45} + \cdots + G_{N1}$$
and $F_r^{\mathrm{lb}} = 0$ for all $r \ge 3$. The optimal solution to the best-approximation problem is the all-zeros vector.
Lemma 13. The cosine of the Friedrichs angle between A and $\mathrm{aff}(B^{\mathrm{lb}})$ is
$$c_F(A, \mathrm{aff}(B^{\mathrm{lb}}))^2 = 1 - \frac{1}{R}\left(1 - \cos\frac{2\pi}{N}\right).$$
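Lemma 13 makes the gap between the exact cycle rate and the two worst-case bounds easy to tabulate; a quick numeric check of our own:

```python
import math

R = 2
for N in (8, 32, 128):
    exact = 1 - (1 - math.cos(2 * math.pi / N)) / R      # Lemma 13
    upper = 1 - 1 / (N ** 2 * R ** 2)                    # Theorem 12
    lower = 1 - 2 * math.pi ** 2 / (N ** 2 * R)          # Theorem 14
    assert lower <= exact <= upper
    print(N, round(exact, 6), round(upper, 6), round(lower, 6))
```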
Around the optimal solution 0, the polyhedra A and $B^{\mathrm{lb}}$ behave like subspaces, and it is possible to pick initializations $a_0 \in A$ and $b_0 \in B^{\mathrm{lb}}$ such that the Friedrichs angle exactly determines the rate of convergence. That means $1 - 1/\kappa_*^2 = c_F(A, \mathrm{aff}(B^{\mathrm{lb}}))^2$, and
$$\|a_k\| = \left(1 - \tfrac{1}{R}\left(1 - \cos\tfrac{2\pi}{N}\right)\right)^k \|a_0\| \quad \text{and} \quad \|b_k\| = \left(1 - \tfrac{1}{R}\left(1 - \cos\tfrac{2\pi}{N}\right)\right)^k \|b_0\|.$$
Bounding $1 - \cos(x) \le \frac{1}{2}x^2$ leads to the following lower bound on the rate.
Theorem 14. There exists a decomposed function $F^{\mathrm{lb}}$ and initializations for which the convergence rate of AP is at least $1 - \frac{2\pi^2}{N^2 R}$.
This theoretical bound can also be observed empirically (Figure 3 in Appendix B).
5 Convergence of the Primal Objective
We have shown that AP generates a sequence of points $\{a_k\}_{k \ge 0}$ and $\{b_k\}_{k \ge 0}$ in $\mathbb{R}^{NR}$ such that $(a_k, b_k) \to (a^*, b^*)$ linearly, where $(a^*, b^*)$ minimizes the objective in Problem (P4). In this section, we show that this result also implies the linear convergence of the objective in Problem (P3) and of the original discrete objective in Problem (P1). The proofs may be found in Appendix C.
Define the matrix $\Pi = R^{1/2} S$, where S is the matrix defined in Equation (5). Multiplication by $\Pi$ maps a vector $(w_1, \ldots, w_R)$ to $\sum_r w_r$, where $w_r \in \mathbb{R}^N$ for each r. Set $x_k = -\Pi b_k$ and $x^* = -\Pi b^*$. As shown in Jegelka et al. [25], Problem (P3) is minimized by $x^*$.
Proposition 15. We have $f(x_k) + \frac{1}{2}\|x_k\|^2 \to f(x^*) + \frac{1}{2}\|x^*\|^2$ linearly with rate $1 - \frac{1}{N^2 R^2}$.
This linear rate of convergence translates into a linear rate for the original discrete problem.
Theorem 16. Choose $A^* \in \arg\min_{A \subseteq V} F(A)$. Let $A_k$ be the suplevel set of $x_k$ with smallest value of F. Then $F(A_k) \to F(A^*)$ linearly with rate $1 - \frac{1}{2N^2 R^2}$.
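The rounding in Theorem 16 scans the N + 1 suplevel sets of $x_k$; a minimal sketch (our own helper):

```python
import numpy as np

def best_suplevel_set(F, x):
    """Return the suplevel set of x with the smallest value of F."""
    best_A, best_val = frozenset(), F(frozenset())
    A = set()
    for i in np.argsort(-np.asarray(x, dtype=float)):
        A.add(int(i))
        val = F(frozenset(A))
        if val < best_val:
            best_A, best_val = frozenset(A), val
    return best_A, best_val
```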
6 Discussion
In this work, we analyze projection methods for parallel SFM and give upper and lower bounds on the linear rate of convergence. This means that the number of iterations required for an accuracy of $\epsilon$ is logarithmic in $1/\epsilon$, not linear as in previous work [35]. Our rate is uniform over all submodular functions. Moreover, our proof highlights how the number R of components and the facial structure of B affect the convergence rate. These insights may serve as guidelines when working with projection algorithms and aid in the analysis of special cases. For example, reducing R is often possible. Any collection of $F_r$ that have disjoint support, such as the cut functions corresponding to the rows or columns of a grid graph, can be grouped together without making the projection harder.
Our analysis also shows the effects of additional properties of F. For example, suppose that F is separable, that is, $F(V) = F(S) + F(V \setminus S)$ for some nonempty $S \subsetneq V$. Then the subsets $A_{rm} \subseteq V$ defining the relevant faces of B satisfy either $A_{rm} \subseteq S$ or $A_{rm} \subseteq S^c$ [2]. This makes G in Section 3.4 disconnected, and as a result, the N in Theorem 12 gets replaced by $\max\{|S|, |S^c|\}$ for an improved rate. This applies without the user needing to know S when running the algorithm.
A number of future directions suggest themselves. For example, Jegelka et al. [25] also considered the related Douglas-Rachford (DR) algorithm. DR between subspaces converges linearly with rate $c_F$ [6], as opposed to $c_F^2$ for AP. We suspect that our approach may be modified to analyze DR between polyhedra. Further questions include the extension to cyclic updates (instead of parallel ones), multiple polyhedra, and stochastic algorithms.
Acknowledgments. We would like to thank Mădălina Persu for suggesting the use of Cheeger's
inequality. This research is supported in part by NSF CISE Expeditions Award CCF-1139158,
LBNL Award 7076018, and DARPA XData Award FA8750-12-2-0331, and gifts from Amazon
Web Services, Google, SAP, The Thomas and Stacey Siebel Foundation, Apple, C3Energy, Cisco,
Cloudera, EMC, Ericsson, Facebook, GameOnTalis, Guavus, HP, Huawei, Intel, Microsoft, NetApp,
Pivotal, Splunk, Virdata, VMware, WANdisco, and Yahoo!. This work is supported in part by the
Office of Naval Research under grant number N00014-11-1-0688, the US ARL and the US ARO
under grant number W911NF-11-1-0391, and the NSF under grant number DGE-1106400.
References
[1] F. Bach. Structured sparsity-inducing norms through submodular functions. In Advances in Neural Information Processing Systems, 2011.
[2] F. Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
[3] H. H. Bauschke and J. M. Borwein. On the convergence of von Neumann's alternating projection algorithm for two sets. Set-Valued Analysis, 1(2):185–212, 1993.
[4] H. H. Bauschke and J. M. Borwein. Dykstra's alternating projection algorithm for two sets. Journal of Approximation Theory, 79(3):418–443, 1994.
[5] H. H. Bauschke and J. M. Borwein. On projection algorithms for solving convex feasibility problems. SIAM Review, 38(3):367–426, 1996.
[6] H. H. Bauschke, J. B. Cruz, T. T. Nghia, H. M. Phan, and X. Wang. The rate of linear convergence of the Douglas-Rachford algorithm for subspaces is the cosine of the Friedrichs angle. Journal of Approximation Theory, 185:63–79, 2014.
[7] A. Beck and L. Tetruashvili. On the convergence of block coordinate descent type methods. SIAM Journal on Optimization, 23(4):2037–2060, 2013.
[8] J. V. Burke and J. J. Moré. On the identification of active constraints. SIAM Journal on Numerical Analysis, 25(5):1197–1211, 1988.
[9] F. R. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[10] F. Deutsch. The method of alternating orthogonal projections. In Approximation Theory, Spline Functions and Applications, pages 105–121. Springer, 1992.
[11] F. Deutsch. Best Approximation in Inner Product Spaces, volume 7. Springer, 2001.
[12] F. Deutsch and H. Hundal. The rate of convergence of Dykstra's cyclic projections algorithm: The polyhedral case. Numerical Functional Analysis and Optimization, 15(5-6):537–565, 1994.
[13] F. Deutsch and H. Hundal. The rate of convergence for the cyclic projections algorithm I: angles between convex sets. Journal of Approximation Theory, 142(1):36–55, 2006.
[14] P. Diaconis, K. Khare, and L. Saloff-Coste. Stochastic alternating projections. Illinois Journal of Mathematics, 54(3):963–979, 2010.
[15] J. Edmonds. Combinatorial Structures and Their Applications, chapter Submodular Functions, Matroids and Certain Polyhedra, pages 69–87. Gordon and Breach, 1970.
[16] A. Fix, T. Joachims, S. Park, and R. Zabih. Structured learning of sum-of-submodular higher order energy functions. In Int. Conference on Computer Vision (ICCV), 2013.
[17] S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3–17, 2011.
[18] R. M. Gray. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory, 2(3):155–239, 2006.
[19] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169–197, 1981.
[20] L. Gubin, B. Polyak, and E. Raik. The method of projections for finding the common point of convex sets. USSR Computational Mathematics and Mathematical Physics, 7(6):1–24, 1967.
[21] I. Halperin. The product of projection operators. Acta Sci. Math. (Szeged), 23:96–99, 1962.
[22] D. Hochbaum and V. Singh. An efficient algorithm for co-segmentation. In Int. Conference on Computer Vision (ICCV), 2009.
[23] S. Iwata. A faster scaling algorithm for minimizing submodular functions. SIAM J. on Computing, 32:833–840, 2003.
[24] S. Jegelka, H. Lin, and J. Bilmes. On fast approximate submodular minimization. In Advances in Neural Information Processing Systems, 2011.
[25] S. Jegelka, F. Bach, and S. Sra. Reflection methods for user-friendly submodular optimization. In Advances in Neural Information Processing Systems, pages 1313–1321, 2013.
[26] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. JMLR, pages 2297–2334, 2011.
[27] A. V. Knyazev and M. E. Argentati. Principal angles between subspaces in an A-based scalar product: algorithms and perturbation estimates. SIAM Journal on Scientific Computing, 23(6):2008–2040, 2002.
[28] P. Kohli, L. Ladický, and P. Torr. Robust higher order potentials for enforcing label consistency. Int. Journal of Computer Vision, 82, 2009.
[29] V. Kolmogorov. Minimizing a sum of submodular functions. Discrete Applied Mathematics, 160(15):2246–2258, 2012.
[30] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE Trans. Pattern Analysis and Machine Intelligence, 2011.
[31] H. Lin and J. Bilmes. Optimal selection of limited vocabulary speech corpora. In Proc. Interspeech, 2011.
[32] S. McCormick. Handbook on Discrete Optimization, chapter Submodular Function Minimization, pages 321–391. Elsevier, 2006.
[33] M. Narasimhan and J. Bilmes. Local search for balanced submodular clusterings. In IJCAI, pages 981–986, 2007.
[34] J. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Programming, 118:237–251, 2009.
[35] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In Advances in Neural Information Processing Systems, 2010.
[36] P. Tseng. Alternating projection-proximal methods for convex programming and variational inequalities. SIAM Journal on Optimization, 7(4):951–965, 1997.
[37] S. Vicente, V. Kolmogorov, and C. Rother. Joint optimization of segmentation and appearance models. In Int. Conference on Computer Vision (ICCV), 2009.
[38] J. von Neumann. Functional Operators: The Geometry of Orthogonal Spaces. Princeton University Press, 1950.