A class of network models recoverable by spectral
clustering
Marina Meilă
Department of Statistics
University of Washington
Seattle, WA 98195-4322, USA
mmp@stat.washington.edu
Yali Wan
Department of Statistics
University of Washington
Seattle, WA 98195-4322, USA
yaliwan@washington.edu
Abstract
Finding communities in networks is a problem that remains difficult, in spite of the
amount of attention it has recently received. The Stochastic Block-Model (SBM)
is a generative model for graphs with "communities" for which, because of its
simplicity, the theoretical understanding has advanced fast in recent years. In particular, there have been various results showing that simple versions of spectral
clustering using the Normalized Laplacian of the graph can recover the communities almost perfectly with high probability. Here we show that essentially the
same algorithm used for the SBM and for its extension called Degree-Corrected
SBM, works on a wider class of Block-Models, which we call Preference Frame
Models, with essentially the same guarantees. Moreover, the parametrization we
introduce clearly exhibits the free parameters needed to specify this class of models, and results in bounds that expose with more clarity the parameters that control
the recovery error in this model class.
1 Introduction
There have been many recent advances in the recovery of communities in networks under "blockmodel" assumptions [19, 18, 9], and in particular in recovering communities by spectral clustering algorithms. These have been extended to models including node-specific propensities. In this paper, we argue that one can further expand the model class for which recovery by spectral clustering is possible, and describe a model that subsumes a number of existing models, which we call the PFM. We show that under the PFM model, the communities can be recovered with small error, w.h.p. Our results correspond to what [6] termed the "weak recovery" regime, in which w.h.p. the fraction of nodes that are mislabeled is $o(1)$ as $n \to \infty$.
2 The Preference Frame Model of graphs with communities
This model embodies the assumption that interactions at the community level (which we will also call the macro level) can be quantified by meaningful parameters. This general assumption underlies the (p, q) and the related parameterizations of the SBM as well. We define a preference frame to be a graph with $K$ nodes, one for each community, that encodes the connectivity pattern at the community level by a (non-symmetric) stochastic matrix $R$. Formally, given $[K] = \{1, \ldots, K\}$ and a $K \times K$ matrix $R$ ($\det(R) \neq 0$) representing the transition matrix of a reversible Markov chain on $[K]$, the weighted graph $H = ([K], R)$, with edge set $\mathrm{supp}\, R$ (edges correspond to entries of $R$ that are not 0), is called a K-preference frame. Requiring reversibility is equivalent to requiring that there is a set of symmetric weights on the edges from which $R$ can be derived ([17]). We note that without the reversibility assumption, we would be modeling directed graphs, which we leave for future work. We denote by $\pi$ the left principal eigenvector of $R$, satisfying $\pi^T R = \pi^T$. W.l.o.g. we can assume the eigenvalue 1 of $R$ has multiplicity 1¹ and therefore we call $\pi$ the stationary distribution of $R$.
We say that a deterministic weighted graph $G = (V, S)$ with weight matrix $S$ (and edge set $\mathrm{supp}\, S$) admits a K-preference frame $H = ([K], R)$ if and only if there exists a partition $\mathcal{C}$ of the nodes $V$ into $K$ clusters $\mathcal{C} = \{C_1, \ldots, C_K\}$ of sizes $n_1, \ldots, n_K$, respectively, so that the Markov chain on $V$ with transition matrix $P$ determined by $S$ satisfies the linear constraints
$$\sum_{j \in C_m} P_{ij} = R_{lm} \quad \text{for all } i \in C_l \text{ and all cluster indices } l, m \in \{1, 2, \ldots, K\}. \qquad (1)$$
The matrix $P$ is obtained from $S$ by the standard row-normalization $P = D^{-1}S$, where $D = \mathrm{diag}\{d_{1:n}\}$ and $d_i = \sum_{j=1}^n S_{ij}$.
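As a minimal computational illustration (a sketch assuming a numpy encoding of $S$ as a dense array and of $\mathcal{C}$ as a hypothetical label vector), constraint (1) and the row normalization can be checked directly:

```python
import numpy as np

def admits_preference_frame(S, labels, R, tol=1e-9):
    """Check constraint (1): for every node i in cluster l, the total
    transition probability from i into cluster m must equal R[l, m]."""
    d = S.sum(axis=1)                       # d_i = sum_j S_ij
    P = S / d[:, None]                      # P = D^{-1} S (row normalization)
    K = R.shape[0]
    for l in range(K):
        for m in range(K):
            # sum_{j in C_m} P_ij, one value per node i in C_l
            row_sums = P[np.ix_(labels == l, labels == m)].sum(axis=1)
            if not np.allclose(row_sums, R[l, m], atol=tol):
                return False
    return True
```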
A random graph family over node set $V$ admits a K-preference frame $H$, and is called a Preference Frame Model (PFM), if the edges $(i, j)$, $i < j$, are sampled independently from Bernoulli distributions with parameters $S_{ij}$. It is assumed that the edges obtained are undirected and that $S_{ij} \le 1$ for all pairs $i \neq j$. We denote a realization from this process by $A$. Furthermore, let $\hat d_i = \sum_{j \in V} A_{ij}$ and, in general, throughout this paper, we will denote computable quantities derived from the observed $A$ with the same letter as their model counterparts, decorated with the "hat" symbol. Thus, $\hat D = \mathrm{diag}\, \hat d_{1:n}$, $\hat P = \hat D^{-1} A$, and so on.
One question we will study is under what conditions the PFM model can be estimated from a given $A$ by a standard spectral clustering algorithm. Evidently, the difficult part of this estimation problem is recovering the partition $\mathcal{C}$. If this is obtained correctly, the remaining parameters are easily estimated in a Maximum Likelihood framework.
But another question we elucidate refers to the parametrization itself. It is known that in the SBM and Degree-Corrected SBM (DC-SBM) [18], in spite of their simplicity, there are dependencies between the community-level "intensive" parameters and the graph-level "extensive" parameters, as we will show below. In the parametrization of the PFM, we can explicitly show which parameters are free and which are dependent.
Several network models in wide use admit a preference frame. For example, the SBM(B) model, which we briefly describe here. This model has as parameters the cluster sizes $(n_{1:K})$ and the connectivity matrix $B \in [0,1]^{K \times K}$. For two nodes $i, j \in V$, the probability of an edge $(i, j)$ is $B_{kl}$ iff $i \in C_k$ and $j \in C_l$. The matrix $B$ need not be symmetric. When $B_{kk} = p$, $B_{kl} = q$ for $k, l \in [K]$, $k \neq l$, the model is denoted SBM(p, q). It is easy to verify that the SBM admits a preference frame. For instance, in the case of SBM(p, q), we have
$$d_i = p(n_l - 1) + q(n - n_l) \equiv d_{C_l}, \quad \text{for } i \in C_l,$$
$$R_{l,m} = \frac{q\, n_m}{d_{C_l}} \text{ if } l \neq m, \qquad R_{l,l} = \frac{p(n_l - 1)}{d_{C_l}}, \qquad \text{for } l, m \in \{1, 2, \ldots, K\}.$$
In the above we have introduced the notation $d_{C_l} = \sum_{j \in C_l} d_j$. One particular realization of the PFM is the Homogeneous K-Preference Frame model (HPFM). In a HPFM, each node $i \in V$ is characterized by a weight, or propensity to form ties, $w_i$. For each pair of communities $l, m$ with $l \le m$ and for each $i \in C_l$, $j \in C_m$, we sample $A_{ij}$ with probability $S_{ij}$ given by
$$S_{ij} = \frac{R_{ml}\, w_i w_j}{\pi_l}. \qquad (2)$$
This formulation ensures detailed balance in the edge expectations, i.e., $S_{ij} = S_{ji}$. The HPFM is virtually equivalent to what is known as the "degree model" [8] or "DC-SBM", up to a reparameterization². Proposition 1 relates the node weights to the expected node degrees $d_i$. We note that the main result we prove in this paper uses independent sampling of edges only to prove the concentration of the Laplacian matrix. The PFM model can be easily extended to other graph models with dependent edges if one could prove concentration and eigenvalue separation. For example, when $R$ has rational entries, the subgraph induced by each block of $A$ can be represented by a random d-regular graph with a specified degree.

¹ Otherwise the networks obtained would be disconnected.
² Here we follow the customary definition of this model, which does not enforce $S_{ii} = 0$, even though this implies a non-zero probability of self-loops.
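A minimal sampling sketch for the HPFM follows, assuming inputs `w`, `labels`, `R`, `pi` chosen so that every $S_{ij} \le 1$ (the function and argument names are illustrative, not from the paper):

```python
import numpy as np

def sample_hpfm(w, labels, R, pi, seed=None):
    """Draw an undirected adjacency matrix A from the HPFM, with
    S_ij = R[m, l] * w_i * w_j / pi[l] for i in C_l, j in C_m (Eq. (2))."""
    rng = np.random.default_rng(seed)
    n = len(w)
    # S[i, j] = R[labels[j], labels[i]] * w[i] * w[j] / pi[labels[i]]
    S = R[np.ix_(labels, labels)].T * np.outer(w, w) / pi[labels][:, None]
    assert S.max() <= 1 + 1e-12, "inputs must satisfy S_ij <= 1 (Assumption 2)"
    # reversibility of R makes S symmetric, so sampling the upper triangle suffices
    U = rng.random((n, n))
    A = (np.triu(U, 1) < np.triu(S, 1)).astype(int)
    return A + A.T
```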
Proposition 1 In a HPFM, $d_i = w_i \sum_{l=1}^K R_{kl} \frac{w_{C_l}}{\pi_l}$ whenever $i \in C_k$ and $k \in [K]$.
Equivalent statements that the expected degrees in each cluster are proportional to the weights exist in [7, 19], and they are instrumental in analyzing this model. This particular parametrization immediately shows in which case the degrees are globally proportional to the weights: this is, obviously, the situation when $w_{C_l} \propto \pi_l$ for all $l \in [K]$.
As we see, the node degrees in a HPFM are not directly determined by the propensities $w_i$, but depend on them through a multiplicative constant that varies with the cluster. This type of interaction between parameters has been observed in practically all extensions of the Stochastic Block-Model that we are aware of, making parameter interpretation more difficult. Our following result establishes the free parameters of the PFM and of its subclasses. As it will turn out, these parameters and their interactions are easily interpretable.
Proposition 2 Let $(n_1, \ldots, n_K)$ be a partition of $n$ (assumed to represent the cluster sizes of $\mathcal{C} = \{C_1, \ldots, C_K\}$, a partition of node set $V$), $R$ a non-singular $K \times K$ stochastic matrix, $\pi$ its left principal eigenvector, and $\rho^{C_1} \in [0,1]^{n_1}, \ldots, \rho^{C_K} \in [0,1]^{n_K}$ probability distributions over $C_{1:K}$. Then, there exists a PFM consistent with $H = ([K], R)$, with clustering $\mathcal{C}$, and whose node degrees are given by
$$d_i = d_{tot}\, \pi_k\, \rho^{C_k}_i, \quad \text{whenever } i \in C_k, \qquad (3)$$
where $d_{tot} = \sum_{i \in V} d_i$ is a user parameter which is only restricted from above by Assumption 2.
The proof of this result is constructive, and can be found in the extended version.
The parametrization shows to what extent one can specify independently the degree distribution of a
network model, and the connectivity parameters R. Moreover, it describes the pattern of connection
of a node i as a composition of a macro-level pattern, which gives the total probability of i to
form connections with a cluster l, and the micro-level distribution of connections between i and the
members of Cl . These parameters are meaningful on their own and can be specified or estimated
separately, as they have no hidden dependence on each other or on n, K.
The PFM enjoys a number of other interesting properties. As this paper will show, almost all the
properties that make SBMs popular and easy to understand hold also for the much more flexible
PFM. In the remainder of this paper we derive recovery guarantees for the PFM. As an additional
goal, we will show that in the frame we set with the PFM, the recovery conditions become clearer,
more interpretable, and occasionally less restrictive than for other models.
As already mentioned, the PFM includes many models that have been found useful by previous
authors. Yet, the PFM class is much more flexible than those individual models, in the sense that
it allows other unexplored degrees of freedom (or, in other words, achieves the same advantages as
previously studied models with fewer constraints on the data). Note that there is an infinite number
of possible random graphs G with the same parameters (d1:n , n1:k , R) satisfying the constraints (1)
and Proposition 2, yet for
Preliable community detection we do not need to control S fully, but only
aggregate statistics like j?C Aij .
3 Spectral clustering algorithm and main result
Now, we address the community recovery problem from a random graph $(V, A)$ sampled from the PFM defined as above. We make the standard assumption that $K$ is known. Our analysis is based on a very common spectral clustering algorithm used in [13] and described also in [14, 21].
Input: Graph $(V, A)$ with $|V| = n$ and $A \in \{0,1\}^{n \times n}$, number of clusters $K$
Output: Clustering $\hat{\mathcal{C}}$
1. Compute $\hat D = \mathrm{diag}(\hat d_1, \cdots, \hat d_n)$ and the Laplacian
$$\hat L = \hat D^{-1/2} A \hat D^{-1/2}. \qquad (4)$$
2. Calculate the $K$ eigenvectors $\hat Y_1, \cdots, \hat Y_K$ associated with the $K$ eigenvalues $|\hat\lambda_1| \ge \cdots \ge |\hat\lambda_K|$ of $\hat L$. Normalize the eigenvectors to unit length. We denote them as the first $K$ eigenvectors in the following text.
3. Set $\hat V_i = \hat D^{-1/2} \hat Y_i$, $i = 1, \cdots, K$. Form the matrix $\hat V = [\hat V_1 \cdots \hat V_K]$.
4. Treating each row of $\hat V$ as a point in $K$ dimensions, cluster the rows by the K-means algorithm to obtain the clustering $\hat{\mathcal{C}}$.
Algorithm 1: Spectral Clustering
Note that the vectors $\hat V$ are the first $K$ eigenvectors of $\hat P$. The K-means algorithm is assumed to find the global optimum. For more details on good initializations for K-means in step 4 see [16].
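A direct numpy/scipy rendering of Algorithm 1 might look as follows (a sketch; scikit-learn's KMeans is used as a stand-in for the exact K-means solver the analysis assumes):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(A, K):
    d = A.sum(axis=1).astype(float)
    d_isqrt = 1.0 / np.sqrt(d)                       # assumes no isolated nodes
    L = d_isqrt[:, None] * A * d_isqrt[None, :]      # Eq. (4)
    vals, vecs = eigh(L)                             # unit-length eigenvectors
    Y = vecs[:, np.argsort(-np.abs(vals))[:K]]       # K largest |eigenvalues|
    V = d_isqrt[:, None] * Y                         # step 3
    return KMeans(n_clusters=K, n_init=10).fit_predict(V)  # step 4
```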
We quantify the difference between $\hat{\mathcal{C}}$ and the true clustering $\mathcal{C}$ by the mis-clustering rate $p_{err}$, which is defined as
$$p_{err} = 1 - \frac{1}{n} \max_{\sigma: [K] \to [K]} \sum_k |\hat C_{\sigma(k)} \cap C_k|. \qquad (5)$$
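Since the maximization over permutations $\sigma$ is an assignment problem, $p_{err}$ can be computed exactly from the confusion matrix; a sketch using scipy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mis_clustering_rate(pred, true, K):
    """p_err of Eq. (5): the best label matching is an assignment problem."""
    n = len(true)
    confusion = np.zeros((K, K))
    for p, t in zip(pred, true):
        confusion[p, t] += 1
    rows, cols = linear_sum_assignment(-confusion)   # maximize total overlap
    return 1.0 - confusion[rows, cols].sum() / n
```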
Theorem 3 (Mis-clustering rate bound for HPFM and PFM) Let the $n \times n$ matrix $S$ admit a PFM, let $w_{1:n}, R, \pi, P, A, d_{1:n}$ have the usual meaning, and let $\lambda_{1:n}$ be the eigenvalues of $P$, with $|\lambda_i| \ge |\lambda_{i+1}|$. Let $d_{min} = \min d_{1:n}$ be the minimum expected degree, $\hat d_{min} = \min \hat d_i$, and $d_{max} = \max_{ij} nS_{ij}$. Let $\beta \ge 1$, $\epsilon > 0$ be arbitrary numbers. Assume:

Assumption 1: $S$ admits a HPFM model and (2) holds.
Assumption 2: $S_{ij} \le 1$.
Assumption 3: $\hat d_{min} \ge \log(n)$.
Assumption 4: $d_{min} \ge \log(n)$.
Assumption 5: $d_{max} \le \beta \log n$.
Assumption 6: $g_{row} > 0$, where $g_{row}$ is defined in Proposition 4.
Assumption 7: $\theta_{1:K}$ are the eigenvalues of $R$, and $|\theta_K| - |\lambda_{K+1}| = \delta > 0$.

We also assume that we run Algorithm 1 on $S$ and that K-means finds the optimal solution. Then, for $n$ sufficiently large, the following statements hold with probability at least $1 - e^{-\epsilon}$.

PFM: Assumptions 2-7 imply
$$p_{err} \le \frac{K d_{tot}\, C_0 \beta^4}{n\, d_{min}\, g_{row}\, \delta^2 \log n} + \frac{4(\log n)^{\epsilon}}{\hat d_{min}}. \qquad (6)$$

HPFM: Assumptions 1-6 imply
$$p_{err} \le \frac{K d_{tot}\, C_0 \beta^4}{n\, d_{min}\, g_{row}\, \theta_K^2 \log n} + \frac{4(\log n)^{\epsilon}}{\hat d_{min}}, \qquad (7)$$

where $C_0$ is a constant depending on $\beta$ and $\epsilon$.
Note that $p_{err}$ decreases at least as $1/\log(n)$ when $\hat d_{min} = d_{min} = \log(n)$. This is because $\hat d_{min}$ and $d_{min}$ help with the concentration of $L$. Using Proposition 4, the distances between rows of $V$, i.e., the true centers of the K-means step, are lower bounded by $g_{row}/d_{tot}$. After plugging in the assumptions for $d_{min}, \hat d_{min}, d_{max}$, we obtain
$$p_{err} \le \frac{K\beta\, C_0 \beta^4}{g_{row}\, \delta^2 \log n} + \frac{4}{(\log n)^{1-\epsilon}}. \qquad (8)$$
When $n$ is small, the first component on the right-hand side dominates because of the constant $C_0$, while the second part dominates when $n$ is very large. This shows that $p_{err}$ decreases almost as $1/\log n$. Of the remaining quantities, $\beta$ controls the spread of the degrees $d_i$. Notice that $\theta_K$ and $\delta$ are the eigengaps in the HPFM model and the PFM model respectively and depend only on the preference frame, and likewise for $g_{row}$. The eigengaps ensure the stability of the principal spaces and the separation from the spurious eigenvalues, as shown in Theorem 6. The term containing $(\log n)^{\epsilon}$ is designed to control the difference between $d_i$ and $\hat d_i$, with $\epsilon$ a small positive constant.
3.1 Proof outline, techniques and main concepts
The proof of Theorem 3 (given in the extended version of the paper) relies on three steps, which are to be found in most results dealing with spectral clustering. First, concentration bounds of the empirical Laplacian $\hat L$ w.r.t. $L$ are obtained. There are various conditions under which these can be obtained, and ours are most similar to the recent result of [9]. The other tools we use are Hoeffding bounds and tools from linear algebra. Second, one needs to bound the perturbation of the eigenvectors $Y$ as a function of the perturbation in $L$. This is based on the pivotal results of Davis and Kahan, see e.g. [18]. A crucial ingredient in this type of theorem is the size of the eigengap between the invariant subspace $Y$ and its orthogonal complement. This is a condition that is model-dependent, and therefore we discuss the techniques we introduce for solving this problem in the PFM in the next subsection.
The third step is to bound the error of the K-means clustering algorithm. This is done by a counting argument. The crux of this step is to ensure the separation of the K distinct rows of $V$. This, again, is model-dependent, and we present our result below. The details and proof are in the extended version. All proofs are for the PFM; to specialize to the HPFM, one replaces $\delta$ with $|\theta_K|$.
3.2 Cluster separation and bounding the spurious eigenvalues in the PFM
Proposition 4 (Cluster separation) Let $V, \pi, d_{1:n}$ have the usual meaning and define the cluster volume $d_{C_k} = \sum_{i \in C_k} d_i$, and $c_{max}, c_{min}$ as $\max_k$, $\min_k \frac{d_{C_k}}{n\pi_k}$. Let $i, j \in V$ be nodes belonging respectively to clusters $k, m$ with $k \neq m$. Then,
$$\|V_{i:} - V_{j:}\|^2 \;\ge\; \frac{1}{d_{tot}}\left[\frac{1}{c_{max}}\left(\frac{1}{\pi_k}+\frac{1}{\pi_m}\right) - \frac{1}{\sqrt{\pi_k\pi_m}}\left(\frac{1}{c_{min}}-\frac{1}{c_{max}}\right)\right] \;=\; \frac{g_{row}}{d_{tot}}, \qquad (9)$$
where $g_{row} = \frac{1}{c_{max}}\left[\frac{1}{\pi_k}+\frac{1}{\pi_m}\right] - \frac{1}{\sqrt{\pi_k\pi_m}}\left[\frac{1}{c_{min}}-\frac{1}{c_{max}}\right]$. Moreover, if the columns of $V$ are normalized to length 1, the above result holds by replacing $d_{tot}\,c_{max,min}$ with $\max_k, \min_k n\pi_k$.
In the square brackets, $c_{max,min}$ depend on the cluster-level degree distribution, while all the other quantities depend only on the preference frame. Hence, this expression is invariant with $n$, and as long as it is strictly positive, we have that the cluster separation is $\Omega(1/d_{tot})$.
The next theorem is crucial in proving that $L$ has a constant eigengap. We express the eigengap of $P$ in terms of the preference frame $H$ and the mixing inside each of the clusters $C_k$. For this, we resort to generalized stochastic matrices, i.e., rectangular positive matrices with equal row sums, and we relate their properties to the mixing of Markov chains on bipartite graphs.
These tools are introduced here, for the sake of intuition, together with the main spectral result, while the rest of the proofs are in the extended version.
Given $\mathcal{C}$, for any vector $x \in \mathbb{R}^n$, we denote by $x^k$, $k = 1, \ldots, K$, the block of $x$ indexed by elements of cluster $k$ of $\mathcal{C}$. Similarly, for any square matrix $A \in \mathbb{R}^{n \times n}$, we denote by $A^{kl} = [A_{ij}]_{i \in C_k, j \in C_l}$ the block with rows indexed by $i \in C_k$ and columns indexed by $j \in C_l$.
Denote by $\pi$, $\theta_{1:K}$, $u^{1:K} \in \mathbb{R}^K$, respectively, the stationary distribution, eigenvalues³, and eigenvectors of $R$.
We are interested in block-stochastic matrices $P$ for which the eigenvalues of $R$ are the principal eigenvalues. We call $\lambda_{K+1}, \ldots, \lambda_n$ spurious eigenvalues. Theorem 6 below is a sufficient condition that bounds $|\lambda_{K+1}|$ whenever each of the $K^2$ blocks of $P$ is "homogeneous" in a sense that will be defined below.
When we consider the matrix $L = D^{-1/2} S D^{-1/2}$ partitioned according to $\mathcal{C}$, it will be convenient to consider the off-diagonal blocks in pairs. This is why the next result describes the properties of matrices consisting of a pair of off-diagonal blocks.
Proposition 5 (Eigenvalues for the off-diagonal blocks) Let $M$ be the square matrix
$$M = \begin{bmatrix} 0 & B \\ A & 0 \end{bmatrix}, \qquad (10)$$
where $A \in \mathbb{R}^{n_2 \times n_1}$ and $B \in \mathbb{R}^{n_1 \times n_2}$, and let $x = \begin{bmatrix} x^1 \\ x^2 \end{bmatrix}$, $x^{1,2} \in \mathbb{C}^{n_{1,2}}$, be an eigenvector of $M$ with eigenvalue $\lambda$. Then
$$Bx^2 = \lambda x^1, \qquad Ax^1 = \lambda x^2, \qquad (11)$$
$$M^2 = \begin{bmatrix} BA & 0 \\ 0 & AB \end{bmatrix}, \qquad (12)$$
$$ABx^2 = \lambda^2 x^2, \qquad BAx^1 = \lambda^2 x^1. \qquad (13)$$
Moreover, if $M$ is symmetric, i.e. $B = A^T$, then $\lambda$ is a singular value of $A$, $x$ is real, and $-\lambda$ is also an eigenvalue of $M$ with eigenvector $[x^{1T}\; -x^{2T}]^T$. Assuming $n_2 \le n_1$, and that $A$ is full rank, one can write $A = V\Sigma U^T$ with $V \in \mathbb{R}^{n_2 \times n_2}$, $U \in \mathbb{R}^{n_1 \times n_2}$ orthogonal matrices, and $\Sigma$ a diagonal matrix of non-zero singular values.
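Proposition 5 is easy to sanity-check numerically; a minimal sketch with random blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 3
A = rng.random((n2, n1))   # A in R^{n2 x n1}
B = rng.random((n1, n2))   # B in R^{n1 x n2}
M = np.block([[np.zeros((n1, n1)), B],
              [A, np.zeros((n2, n2))]])
# (12): M^2 is block-diagonal with blocks BA and AB
assert np.allclose(M @ M, np.block([[B @ A, np.zeros((n1, n2))],
                                    [np.zeros((n2, n1)), A @ B]]))
# hence every eigenvalue lambda of M satisfies lambda^2 in spec(BA), as in (13);
# in the symmetric case B = A^T these are the squared singular values of A
```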
Theorem 6 (Bounding the spurious eigenvalues of L) Let $\mathcal{C}, L, P, D, S, R, \pi$ be defined as above, and let $\lambda$ be an eigenvalue of $P$. Assume that (1) $P$ is block-stochastic with respect to $\mathcal{C}$; (2) $\theta_{1:K}$ are the eigenvalues of $R$, and $|\theta_K| > 0$; (3) $\lambda$ is not an eigenvalue of $R$; (4) denoting by $\lambda^{kl}_3$ ($\lambda^{kk}_2$) the third (second) largest in magnitude eigenvalue of the block $M^{kl}$ ($L^{kk}$), assume that
$$\frac{|\lambda^{kl}_3|}{\sigma_{max}(M^{kl})} \le c < 1 \qquad \left(\text{respectively } \frac{|\lambda^{kk}_2|}{\sigma_{max}(L^{kk})} \le c\right).$$
Then, the spurious eigenvalues of $P$ are bounded by $c$ times a constant that depends only on $R$:
$$|\lambda| \le c \max_{k=1:K}\Big(r_{kk} + \sum_{l \neq k} \sqrt{r_{kl} r_{lk}}\Big). \qquad (14)$$
Remarks: The factor that multiplies $c$ can be further bounded by denoting $a = [\sqrt{r_{kl}}]^T_{l=1:K}$ and $b = [\sqrt{r_{lk}}]^T_{l=1:K}$:
$$r_{kk} + \sum_{l \neq k} \sqrt{r_{kl} r_{lk}} = a^T b \le \|a\|\,\|b\| = \sqrt{\sum_{l=1}^K r_{kl}}\,\sqrt{\sum_{l=1}^K r_{lk}} = \sqrt{\sum_{l=1}^K r_{lk}}. \qquad (15)$$
In other words,
$$|\lambda| \le c \max_{k=1:K} \sqrt{\sum_{l=1}^K r_{lk}}. \qquad (16)$$
The maximum column sum of a stochastic matrix is 1 if the matrix is doubly stochastic and larger than 1 otherwise, and can be as large as $K$. However, one must remember that the interesting $R$ matrices have "large" eigenvalues. In particular we will be interested in $\theta_K > c$. It is expected that under these conditions, the factor depending on $R$ is close to 1.
³ Here too, eigenvalues will always be ordered in decreasing order of their magnitudes, with positive values preceding negative ones of the same magnitude. Consequently, for any stochastic matrix, $\lambda_1 = 1$ always.
The second remark is on condition (4), that all blocks have small spurious eigenvalues. This condition is not merely a technical convenience. If a block had a large eigenvalue, near 1 or $-1$ (times its $\sigma_{max}$), then that block could itself be broken into two distinct clusters. In other words, the clustering $\mathcal{C}$ would not accurately capture the cluster structure of the matrix $P$. Hence, condition (4) amounts to requiring that no other cluster structure is present, in other words that within each block, the Markov chain induced by $P$ mixes well.
4 Related work
Previous results we used: The Laplacian concentration results use a technique introduced recently by [9], and some of the basic matrix-theoretic results are based on [14], which studied the $P$ and $L$ matrices in the context of spectral clustering. Like the many works we cite, we are indebted to the pioneering work on the perturbation of invariant subspaces of Davis and Kahan [18, 19, 20].
4.1 Previous related models
The configuration model for regular random graphs [4, 11] and for graphs with general fixed degrees
[10, 12] is very well known. It can be shown by a simple calculation that the configuration model
also admits a K-preference frame. In the particular case when the diagonal of the R matrix is 0 and
the connections between clusters are given by a bipartite configuration model with fixed degrees,
K-preference frames have been studied by [15] under the name "equitable graphs"; the object there
was to provide a way to calculate the spectrum of the graph.
Since the PFM is itself an extension of the SBM, many other extensions of the latter will bear
resemblance to PFM. Here we review only a subset of these, a series of strong relatively recent
advances, which exploit the spectral properties of the SBM and extend this to handle a large range
of degree distributions [7, 19, 5]. The PFM includes each of these models as a subclass⁴.
In [7] the authors study a model that coincides (up to some multiplicative constants) with the HPFM.
The paper introduces an elegant algorithm that achieves partial recovery or better, which is based
on the spectral properties of a random Laplacian-like matrix, and does not require knowledge of the
partition size K.
The PFM also coincides with the model of [1] and [8] called the expected degree model w.r.t the
distribution of intra-cluster edges, but not w.r.t the ambient edges, so the HPFM is a subclass of this
model.
A different approach to recovery: The papers [5, 18, 9] propose regularizing the normalized Laplacian with respect to the influence of low degrees, by adding the scaled unit matrix $\tau I$ to the adjacency matrix $A$, and thereby they achieve recovery for much more imbalanced degree distributions than us. Currently, we do not see an application of this interesting technique to the PFM, as the diagonal regularization destroys the separation of the intracluster and intercluster transitions, which guarantees the clustering property of the eigenvectors. Therefore, currently we cannot break the $n \log n$ limit into the ultra-sparse regime, although we recognize that this is an important current direction of research.
Recovery results like ours can be easily extended to weighted, non-random graphs, and in this sense
they are relevant to the spectral clustering of these graphs, when they are assumed to be noisy
versions of a G that admits a PFM.
4.2 An empirical comparison of the recovery conditions
As obtaining general results in comparing the various recovery conditions in the literature would be
a tedious task, here we undertake to do a numerical comparison. While the conclusions drawn from
this are not universal, they illustrate well the stringency of various conditions, as well as the gap
between theory and actual recovery. For this, we construct HPFM models, and verify numerically if
they satisfy the various conditions. We have also clustered random graphs sampled from this model,
with good results (shown in the extended version).
⁴ In particular, the models proposed in [7, 19, 5] are variations of the DC-SBM and thus forms of the homogeneous PFM.
We generate $S$ from the HPFM model with $K = 5$, $n = 5000$. Each $w_i$ is uniformly generated from $(0.5, 1)$, $n_{1:K} = (500, 1000, 1500, 1000, 1000)$, $g_{row} > 0$, $\theta_{1:K} = (1, 0.8, 0.6, 0.4, 0.2)$. The matrix $R$ is given below; note its last row, in which $r_{55} < \sum_{l=1}^4 r_{5l}$.
$$R = \begin{bmatrix}
.80 & .07 & .02 & .02 & .09\\
.04 & .52 & .24 & .12 & .08\\
.01 & .20 & .65 & .15 & .00\\
.01 & .08 & .12 & .70 & .08\\
.13 & .21 & .02 & .32 & .33
\end{bmatrix}, \qquad \pi = (.25, .44, .54, .65, .17). \qquad (17)$$
The conditions we verify include, besides ours, those obtained by [18], [19], [3] and [5]; since the original $S$ is a perfect case for spectral clustering of weighted graphs, we also verify the theoretical recovery conditions for spectral clustering in [2] and [16].
Our result, Theorem 3: Assumptions 1 and 2 automatically hold from the construction of the data. By simulating the data, we find that $d_{min} = 77.4$ and $\hat d_{min} = 63$, both of which are bigger than $\log n = 8.52$; therefore Assumptions 3 and 4 hold. $d_{max} = 509.3$ and $g_{row} = 1.82 > 0$, thus Assumptions 5 and 6 hold. After running Algorithm 1, the mis-clustering rate is $r = 0.0008$, which satisfies the theoretical bound. In conclusion, the dataset fits both the assumptions and the conclusion of Theorem 3.
Qin and Rohe [18]: This paper has an assumption on the lower bound on $\lambda_K$, of the form $\lambda_K \ge 8\sqrt{3}\,K\sqrt{\frac{K\ln(K/\epsilon)}{d_{min}}}$, so that the concentration bound holds with probability $1-\epsilon$. We set $\epsilon = 0.1$ and obtain $\lambda_K \ge 12.3$, which is impossible to hold since $\lambda_K$ is upper bounded by 1⁵.
Rohe, Chatterjee, Yu [19]: Here, one defines $\tau_n = \frac{d_{min}}{n}$, and requires $\tau_n^2 \log n > 2$ to ensure the concentration of $L$. To meet this assumption with $n = 5000$, one needs $d_{min} \ge 2422$, while in our case $d_{min} = 77.4$. The assumption requires a very dense graph and is not satisfied on this dataset.
Balcan, Borgs, Braverman, Chayes [3]: Their theorem is based on self-determined community structure. It requires all the nodes to be more connected within their own cluster. However, in our graph, 1296 out of 5000 nodes have more connections to outside nodes than to nodes in their own cluster.
Ng, Jordan, Weiss [16]: require $\lambda_2 < 1 - \delta$, where $\delta > (2+\sqrt{2})\epsilon$, $\epsilon = \sqrt{K(K-1)\epsilon_1 + K\epsilon_2^2}$,
$$\epsilon_1 \ge \max_{i_1,i_2 \in \{1,\cdots,K\}} \sum_{j \in C_{i_1}} \sum_{k \in C_{i_2}} \frac{A_{jk}^2}{\hat d_j \hat d_k}, \qquad \epsilon_2 \ge \max_{i \in \{1,\cdots,K\}} \sum_{j \in C_i} \Big(\sum_{k:\, k \notin S_i} \frac{A_{jk}^2}{\hat d_j \hat d_k}\Big)^{1/2}.$$
On the given data, we find that $\epsilon \ge 36.69$ and $\delta \ge 125.28$, which is impossible to hold since $\delta$ needs to be smaller than 1.
Chaudhuri, Chung, Tsiatas [5]: The recovery theorem of this paper requires $d_i \ge \frac{128}{9}\ln(6n/\delta)$, so that when all the assumptions hold, it recovers the clustering correctly with probability at least $1 - 6\delta$. We set $\delta = 0.01$, and obtain that $d_i = 77.40$ while $\frac{128}{9}\ln(6n/\delta) = 212.11$. Therefore the assumption fails as well.
For our method, the hardest condition to satisfy, and the most different from the others, was Assumption 6. We repeated this experiment with other weight distributions for which this assumption fails. The assumptions in the related papers continued to be violated. In [Qin and Rohe], we obtain $\lambda_K \ge 17.32$. In [Rohe, Chatterjee, Yu], we still need $d_{min} \ge 2422$. In [Balcan, Borgs, Braverman, Chayes], we get 1609 points more connected to the outside of their cluster. In [Balakrishnan, Xu, Krishnamurthy, Singh], we get $\sigma = 0.172$, which needs to satisfy $\sigma = o(0.3292)$. In [Ng, Jordan, Weiss], we obtain $\delta \ge 175.35$. Therefore, the assumptions in these papers are all violated as well.
5 Conclusion
In this paper, we have introduced the preference frame model, which is more flexible than and subsumes many current models, including the SBM and DC-SBM. It produces state-of-the-art recovery rates comparable to existing models. To accomplish this, we used a parametrization that is clearer and more intuitive. The theoretical results are based on new geometric techniques which control the eigengaps of matrices with piecewise constant eigenvectors.
We note that the main result, Theorem 3, uses independent sampling of edges only to prove the concentration of the Laplacian matrix. The PFM model can be easily extended to other graph models with dependent edges if one could prove concentration and eigenvalue separation. For example, when $R$ has rational entries, the subgraph induced by each block of $A$ can be represented by a random d-regular graph with a specified degree.
⁵ To make $\lambda_K \le 1$ possible, one needs $d_{min} \ge 11718$.
References
[1] Sanjeev Arora, Rong Ge, Sushant Sachdeva, and Grant Schoenebeck. Finding overlapping communities in social networks: toward a rigorous approach. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 37–54. ACM, 2012.
[2] Sivaraman Balakrishnan, Min Xu, Akshay Krishnamurthy, and Aarti Singh. Noise thresholds for spectral clustering. In Advances in Neural Information Processing Systems, pages 954–962, 2011.
[3] Maria-Florina Balcan, Christian Borgs, Mark Braverman, Jennifer Chayes, and Shang-Hua Teng. Finding endogenously formed communities. arXiv preprint arXiv:1201.4899v2, 2012.
[4] Bela Bollobas. Random Graphs. Cambridge University Press, second edition, 2001.
[5] K. Chaudhuri, F. Chung, and A. Tsiatas. Spectral clustering of graphs with general degrees in extended planted partition model. Journal of Machine Learning Research, pages 1–23, 2012.
[6] Yudong Chen and Jiaming Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. arXiv preprint arXiv:1402.1267, 2014.
[7] Amin Coja-Oghlan and Andre Lanka. Finding planted partitions in random graphs with general degree distributions. SIAM Journal on Discrete Mathematics, 23:1682–1714, 2009.
[8] M. O. Jackson. Social and Economic Networks. Princeton University Press, 2008.
[9] Can M. Le and Roman Vershynin. Concentration and regularization of random graphs. 2015.
[10] Brendan McKay. Asymptotics for symmetric 0-1 matrices with prescribed row sums. Ars Combinatoria, 19A:15–26, 1985.
[11] Brendan McKay and Nicholas Wormald. Uniform generation of random regular graphs of moderate degree. Journal of Algorithms, 11:52–67, 1990.
[12] Brendan McKay and Nicholas Wormald. Asymptotic enumeration by degree sequence of graphs with degrees $o(n^{1/2})$. Combinatorica, 11(4):369–382, 1991.
[13] Marina Meilă and Jianbo Shi. Learning segmentation by random walks. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 13, pages 873–879, Cambridge, MA, 2001. MIT Press.
[14] Marina Meilă and Jianbo Shi. A random walks view of spectral segmentation. In T. Jaakkola and T. Richardson, editors, Artificial Intelligence and Statistics AISTATS, 2001.
[15] M.E.J. Newman and Travis Martin. Equitable random graphs. 2014.
[16] Andrew Y Ng, Michael I Jordan, Yair Weiss, et al. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[17] J.R. Norris. Markov Chains. Cambridge University Press, 1997.
[18] Tai Qin and Karl Rohe. Regularized spectral clustering under the degree-corrected stochastic blockmodel. In Advances in Neural Information Processing Systems, pages 3120–3128, 2013.
[19] Karl Rohe, Sourav Chatterjee, Bin Yu, et al. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878–1915, 2011.
[20] Gilbert W Stewart, Ji-guang Sun, and Harcourt Brace Jovanovich. Matrix Perturbation Theory, volume 175. Academic Press, New York, 1990.
[21] Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
Monotone k-Submodular Function Maximization
with Size Constraints
Yuichi Yoshida
National Institute of Informatics, and
Preferred Infrastructure, Inc.
yyoshida@nii.ac.jp
Naoto Ohsaka
The University of Tokyo
ohsaka@is.s.u-tokyo.ac.jp
Abstract
A k-submodular function is a generalization of a submodular function, where the
input consists of k disjoint subsets, instead of a single subset, of the domain.
Many machine learning problems, including influence maximization with k kinds
of topics and sensor placement with k kinds of sensors, can be naturally modeled
as the problem of maximizing monotone k-submodular functions. In this paper,
we give constant-factor approximation algorithms for maximizing monotone k-submodular functions subject to several size constraints. The running time of our algorithms is almost linear in the domain size. We experimentally demonstrate
that our algorithms outperform baseline algorithms in terms of the solution quality.
1 Introduction
The task of selecting a set of items subject to constraints on the size or the cost of the set is versatile in machine learning problems. The objective can often be modeled as maximizing a function with the diminishing return property, where for a finite set $V$, a function $f: 2^V \to \mathbb{R}$ satisfies the diminishing return property if
$$f(S \cup \{e\}) - f(S) \ge f(T \cup \{e\}) - f(T)$$
for any $S \subseteq T$ and $e \in V \setminus T$. For example, sensor placement [13, 14], influence maximization in social networks [11], document summarization [15], and feature selection [12] involve objectives satisfying the diminishing return property. It is well known that the diminishing return property is equivalent to submodularity, where a function $f: 2^V \to \mathbb{R}$ is submodular if
$$f(S) + f(T) \ge f(S \cup T) + f(S \cap T)$$
holds for any $S, T \subseteq V$. When the objective function is submodular and hence satisfies the diminishing return property, we can find in polynomial time a solution with a provable guarantee on its quality, even with various constraints [2, 3, 18, 21].
In many practical applications, however, we want to select several disjoint sets of items instead of a
single set. To see this, let us describe two examples:
Influence maximization: Viral marketing is a cost-effective marketing strategy that promotes products by giving free (or discounted) items to a selected group of highly influential people in the hope
that, through the word-of-mouth effects, a large number of product adoptions will occur [4, 19].
Suppose that we have k kinds of items, each having a different topic and thus a different word-ofmouth effect. Then, we want to distribute these items to B people selected from a group V of n
people so as to maximize the (expected) number of product adoptions. It is natural to impose a constraint that each person can receive at most one item since giving many free items to one particular
person would be unfair.
Sensor placement: There are k kinds of sensors for different measures such as temperature, humidity, and illuminance. Suppose that we have $B_i$ many sensors of the $i$-th kind for each $i \in \{1, 2, \ldots, k\}$, and there is a set $V$ of $n$ locations, each of which can be instrumented with exactly one sensor. Then, we want to allocate those sensors so as to maximize the information gain.
When $k = 1$, these problems can be modeled as maximizing monotone submodular functions [11, 14] and admit a polynomial-time $(1-1/e)$-approximation [18]. Unfortunately, however, the case of general $k$ cannot be modeled as maximizing submodular functions, and we cannot apply the methods in the literature on maximizing submodular functions [2, 3, 18, 21]. We note that the problem of selecting $k$ disjoint sets can sometimes be modeled as maximizing monotone submodular functions over the extended domain $k \times V$ subject to a partition matroid. Although $(1-1/e)$-approximation algorithms are known [3, 5], the running time is around $O(k^8 n^8)$ and is prohibitively slow.
Our contributions: To address the problem of selecting k disjoint sets, we use the fact that the objectives can often be modeled as k-submodular functions. Let $(k+1)^V := \{(X_1, \ldots, X_k) \mid X_i \subseteq V\ \forall i \in \{1,\ldots,k\},\ X_i \cap X_j = \emptyset\ \forall i \neq j\}$ be the family of $k$ disjoint sets. Then, a function $f: (k+1)^V \to \mathbb{R}$ is called k-submodular [9] if, for any $\boldsymbol{x} = (X_1, \ldots, X_k)$ and $\boldsymbol{y} = (Y_1, \ldots, Y_k)$ in $(k+1)^V$, we have
$$f(\boldsymbol{x}) + f(\boldsymbol{y}) \ge f(\boldsymbol{x} \sqcup \boldsymbol{y}) + f(\boldsymbol{x} \sqcap \boldsymbol{y}),$$
where
$$\boldsymbol{x} \sqcap \boldsymbol{y} := (X_1 \cap Y_1, \ldots, X_k \cap Y_k) \quad \text{and}$$
$$\boldsymbol{x} \sqcup \boldsymbol{y} := \Big((X_1 \cup Y_1) \setminus \bigcup_{i \neq 1}(X_i \cup Y_i),\ \ldots,\ (X_k \cup Y_k) \setminus \bigcup_{i \neq k}(X_i \cup Y_i)\Big).$$
Roughly speaking, k-submodularity captures the property that, if we choose exactly one set $X_e \in \{X_1, \ldots, X_k\}$ that an element $e$ can belong to for each $e \in V$, then the resulting function is submodular (see Section 2 for details). When $k = 1$, k-submodularity coincides with submodularity.
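Concretely, encoding a tuple of disjoint sets as a vector $\boldsymbol{x} \in \{0, 1, \ldots, k\}^V$ (an identification made formal in Section 2, with 0 meaning "unassigned"), the two operations and the defining inequality can be written as the following sketch, for a user-supplied $f$:

```python
import numpy as np

def meet(x, y):
    """x ⊓ y: element e keeps value i only when x(e) = y(e) = i."""
    return np.where(x == y, x, 0)

def join(x, y):
    """x ⊔ y: e keeps the unique non-zero value among x(e), y(e);
    if x and y assign e two different non-zero values, e is dropped."""
    out = np.where(x == 0, y, x)
    out[(x != 0) & (y != 0) & (x != y)] = 0
    return out

def is_k_submodular_pair(f, x, y):
    """The defining inequality, checked for one pair (x, y)."""
    return f(x) + f(y) >= f(join(x, y)) + f(meet(x, y))
```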
In this paper, we give approximation algorithms for maximizing non-negative monotone k-submodular functions with several constraints on the sizes of the $k$ sets. Here, we say that $f$ is monotone if $f(\boldsymbol{x}) \le f(\boldsymbol{y})$ for any $\boldsymbol{x} = (X_1, \ldots, X_k)$ and $\boldsymbol{y} = (Y_1, \ldots, Y_k)$ with $X_i \subseteq Y_i$ for each $i \in \{1, \ldots, k\}$. Let $n = |V|$ be the size of the domain. For the total size constraint, under which the total size of the $k$ sets is bounded by $B \in \mathbb{Z}_+$, we show that a simple greedy algorithm outputs a 1/2-approximation in $O(knB)$ time. The approximation ratio of 1/2 is asymptotically tight since the lower bound of $\frac{k+1}{2k} + \epsilon$ for any $\epsilon > 0$ is known even when $B = n$ [10]. Combining the random sampling technique [17], we also give a randomized algorithm that outputs a 1/2-approximation with probability at least $1 - \delta$ in $O(kn \log B \log(B/\delta))$ time. Hence, even when $B$ is as large as $n$, the running time is almost linear in $n$. For the individual size constraint, under which the size of the $i$-th set is bounded by $B_i \in \mathbb{Z}_+$ for each $i \in \{1, \ldots, k\}$, we give a 1/3-approximation algorithm with running time $O(knB)$, where $B = \sum_{i=1}^k B_i$. We then give a randomized algorithm that outputs a 1/3-approximation with probability at least $1 - \delta$ in $O(k^2 n \log(B/k) \log(B/\delta))$ time.
To show the practicality of our algorithms, we apply them to the influence maximization problem
and the sensor placement problem, and we demonstrate that they outperform previous methods based
on submodular function maximization and several baseline methods in terms of the solution quality.
Related work: When $k = 2$, k-submodularity is called bisubmodularity, and [20] applied bisubmodular functions to machine learning problems. However, their algorithms do not have any approximation guarantee. Huber and Kolmogorov introduced k-submodularity as a generalization of submodularity and bisubmodularity [9], and minimizing k-submodular functions was successfully used in a computer vision application [8]. Iwata et al. [10] gave a 1/2-approximation algorithm and a $\frac{k}{2k-1}$-approximation algorithm for maximizing non-monotone and monotone k-submodular functions, respectively, when there is no constraint.
Organization: The rest of this paper is organized as follows. In Section 2, we review properties of k-submodular functions. Sections 3 and 4 are devoted to 1/2-approximation algorithms for the total size constraint and 1/3-approximation algorithms for the individual size constraint, respectively. We show our experimental results in Section 5. We conclude our paper in Section 6.
Algorithm 1 k-Greedy-TS
Input: a monotone k-submodular function $f: (k+1)^V \to \mathbb{R}_+$ and an integer $B \in \mathbb{Z}_+$.
Output: a vector $\boldsymbol{s}$ with $|\mathrm{supp}(\boldsymbol{s})| = B$.
1: $\boldsymbol{s} \leftarrow \boldsymbol{0}$.
2: for $j = 1$ to $B$ do
3: $(e, i) \leftarrow \arg\max_{e \in V \setminus \mathrm{supp}(\boldsymbol{s}),\, i \in [k]} \Delta_{e,i} f(\boldsymbol{s})$.
4: $\boldsymbol{s}(e) \leftarrow i$.
5: return $\boldsymbol{s}$.
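A literal implementation sketch of Algorithm 1 (here `f` acts on vectors in $\{0, \ldots, k\}^n$; maximizing $f$ of the tentative assignment is equivalent to maximizing $\Delta_{e,i}f(\boldsymbol{s})$, since $f(\boldsymbol{s})$ is fixed within an iteration):

```python
import numpy as np

def k_greedy_ts(f, n, k, B):
    s = np.zeros(n, dtype=int)                 # 0 = unassigned
    for _ in range(B):
        best_val, best = -np.inf, None
        for e in np.flatnonzero(s == 0):       # e in V \ supp(s)
            for i in range(1, k + 1):
                s[e] = i
                val = f(s)                     # same argmax as Delta_{e,i} f(s)
                s[e] = 0
                if val > best_val:
                    best_val, best = val, (e, i)
        e, i = best
        s[e] = i
    return s
```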
2 Preliminaries
For an integer $k \in \mathbb{N}$, $[k]$ denotes the set $\{1, 2, \ldots, k\}$. We define a partial order $\preceq$ on $(k+1)^V$ so that, for $\boldsymbol{x} = (X_1, \ldots, X_k)$ and $\boldsymbol{y} = (Y_1, \ldots, Y_k)$ in $(k+1)^V$, $\boldsymbol{x} \preceq \boldsymbol{y}$ if $X_i \subseteq Y_i$ for every $i \in [k]$. We also define
$$\Delta_{e,i} f(\boldsymbol{x}) = f(X_1, \ldots, X_{i-1}, X_i \cup \{e\}, X_{i+1}, \ldots, X_k) - f(X_1, \ldots, X_k)$$
for $\boldsymbol{x} \in (k+1)^V$, $e \notin \bigcup_{\ell \in [k]} X_\ell$, and $i \in [k]$, which is the marginal gain when adding $e$ to the $i$-th set of $\boldsymbol{x}$. Then, it is easy to see that the monotonicity of $f$ is equivalent to $\Delta_{e,i} f(\boldsymbol{x}) \ge 0$ for any $\boldsymbol{x} = (X_1, \ldots, X_k)$, $e \notin \bigcup_{\ell \in [k]} X_\ell$, and $i \in [k]$. Also it is not hard to show (see [22] for details) that the k-submodularity of $f$ implies the orthant submodularity, i.e.,
$$\Delta_{e,i} f(\boldsymbol{x}) \ge \Delta_{e,i} f(\boldsymbol{y})$$
for any $\boldsymbol{x}, \boldsymbol{y} \in (k+1)^V$ with $\boldsymbol{x} \preceq \boldsymbol{y}$, $e \notin \bigcup_{\ell \in [k]} Y_\ell$, and $i \in [k]$, and the pairwise monotonicity, i.e.,
$$\Delta_{e,i} f(\boldsymbol{x}) + \Delta_{e,j} f(\boldsymbol{x}) \ge 0$$
for any $\boldsymbol{x} \in (k+1)^V$, $e \notin \bigcup_{\ell \in [k]} X_\ell$, and $i, j \in [k]$ with $i \neq j$. Actually, the converse holds:

Theorem 2.1 (Ward and Živný [22]). A function $f: (k+1)^V \to \mathbb{R}$ is k-submodular if and only if $f$ is orthant submodular and pairwise monotone.

It is often convenient to identify $(k+1)^V$ with $\{0, 1, \ldots, k\}^V$ to analyze k-submodular functions. Namely, we associate $(X_1, \ldots, X_k) \in (k+1)^V$ with $\boldsymbol{x} \in \{0, 1, \ldots, k\}^V$ by $X_i = \{e \in V \mid \boldsymbol{x}(e) = i\}$ for $i \in [k]$. Hence we sometimes abuse notation and simply write $\boldsymbol{x} = (X_1, \ldots, X_k)$, regarding a vector $\boldsymbol{x}$ as $k$ disjoint sets of $V$. We define the support of $\boldsymbol{x} \in \{0, 1, \ldots, k\}^V$ as $\mathrm{supp}(\boldsymbol{x}) = \{e \in V \mid \boldsymbol{x}(e) \neq 0\}$. Analogously, for $\boldsymbol{x} \in \{0, 1, \ldots, k\}^V$ and $i \in [k]$, we define $\mathrm{supp}_i(\boldsymbol{x}) = \{e \in V \mid \boldsymbol{x}(e) = i\}$. Let $\boldsymbol{0}$ be the zero vector in $\{0, 1, \ldots, k\}^V$.
3 Maximizing k-submodular Functions with the Total Size Constraint

In this section, we give a 1/2-approximation algorithm for the problem of maximizing monotone k-submodular functions subject to the total size constraint. Namely, we consider
$$\max f(\boldsymbol{x}) \quad \text{subject to } |\mathrm{supp}(\boldsymbol{x})| \le B \text{ and } \boldsymbol{x} \in (k+1)^V,$$
where $f: (k+1)^V \to \mathbb{R}_+$ is monotone k-submodular and $B \in \mathbb{Z}_+$ is a non-negative integer.
3.1 A greedy algorithm
The first algorithm we propose is a simple greedy algorithm (Algorithm 1). We show the following:
Theorem 3.1. Algorithm 1 outputs a 1/2-approximate solution by evaluating f O(knB) times,
where n = |V |.
The number of evaluations of f is clearly O(knB). Hence in what follows, we focus on analyzing
the approximation ratio of Algorithm 1. Our analysis is based on the framework of [10].
Consider the $j$-th iteration of the for loop from Line 2. Let $(e^{(j)}, i^{(j)}) \in V \times [k]$ be the pair greedily chosen in this iteration, and let $\boldsymbol{s}^{(j)}$ be the solution after this iteration. We define $\boldsymbol{s}^{(0)} = \boldsymbol{0}$. Let $\boldsymbol{o}$ be the optimal solution.
Algorithm 2 k-Stochastic-Greedy-TS
Input: a monotone k-submodular function $f: (k+1)^V \to \mathbb{R}_+$, an integer $B \in \mathbb{Z}_+$, and a failure probability $\delta > 0$.
Output: a vector $\boldsymbol{s}$ with $|\mathrm{supp}(\boldsymbol{s})| = B$.
1: $\boldsymbol{s} \leftarrow \boldsymbol{0}$.
2: for $j = 1$ to $B$ do
3: $R \leftarrow$ a random subset of size $\min\{\frac{n-j+1}{B-j+1} \log \frac{B}{\delta}, n\}$ uniformly sampled from $V \setminus \mathrm{supp}(\boldsymbol{s})$.
4: $(e, i) \leftarrow \arg\max_{e \in R,\, i \in [k]} \Delta_{e,i} f(\boldsymbol{s})$.
5: $\boldsymbol{s}(e) \leftarrow i$.
6: return $\boldsymbol{s}$.
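The only change from k-Greedy-TS is that Line 3 restricts the argmax to a random candidate set $R$ whose size grows as the remaining budget shrinks; a sketch of that sampling step (`delta` is the failure probability):

```python
import math
import numpy as np

def sample_candidates(s, j, B, delta, rng):
    """Line 3 of Algorithm 2: draw min{(n-j+1)/(B-j+1) * log(B/delta), n}
    elements uniformly from V \\ supp(s)."""
    avail = np.flatnonzero(s == 0)             # |avail| = n - j + 1
    n = len(s)
    size = (n - j + 1) / (B - j + 1) * math.log(B / delta)
    size = min(math.ceil(size), len(avail))
    return rng.choice(avail, size=size, replace=False)
```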
We iteratively define $\boldsymbol{o}^{(0)} = \boldsymbol{o}, \boldsymbol{o}^{(1)}, \ldots, \boldsymbol{o}^{(B)}$ as follows. For each $j \in [B]$, let $S^{(j)} = \mathrm{supp}(\boldsymbol{o}^{(j-1)}) \setminus \mathrm{supp}(\boldsymbol{s}^{(j-1)})$. Then, we set $o^{(j)} = e^{(j)}$ if $e^{(j)} \in S^{(j)}$, and set $o^{(j)}$ to be an arbitrary element in $S^{(j)}$ otherwise. Then, we define $\boldsymbol{o}^{(j-1/2)}$ as the resulting vector obtained from $\boldsymbol{o}^{(j-1)}$ by assigning 0 to the $o^{(j)}$-th element, and then define $\boldsymbol{o}^{(j)}$ as the resulting vector obtained from $\boldsymbol{o}^{(j-1/2)}$ by assigning $i^{(j)}$ to the $e^{(j)}$-th element. Note that $|\mathrm{supp}(\boldsymbol{o}^{(j)})| = B$ holds for every $j \in \{0, 1, \ldots, B\}$ and $\boldsymbol{o}^{(B)} = \boldsymbol{s}^{(B)} = \boldsymbol{s}$. Moreover, we have $\boldsymbol{s}^{(j-1)} \preceq \boldsymbol{o}^{(j-1/2)}$ for every $j \in [B]$.
Proof of Theorem 3.1. We first show that, for each $j \in [B]$,
$$f(\boldsymbol{s}^{(j)}) - f(\boldsymbol{s}^{(j-1)}) \ge f(\boldsymbol{o}^{(j-1)}) - f(\boldsymbol{o}^{(j)}). \qquad (1)$$
For each $j \in [B]$, let $y^{(j)} = \Delta_{e^{(j)}, i^{(j)}} f(\boldsymbol{s}^{(j-1)})$, $a^{(j-1/2)} = \Delta_{o^{(j)}, \boldsymbol{o}^{(j-1)}(o^{(j)})} f(\boldsymbol{o}^{(j-1/2)})$, and $a^{(j)} = \Delta_{e^{(j)}, i^{(j)}} f(\boldsymbol{o}^{(j-1/2)})$. Then, note that $f(\boldsymbol{s}^{(j)}) - f(\boldsymbol{s}^{(j-1)}) = y^{(j)}$, and $f(\boldsymbol{o}^{(j-1)}) - f(\boldsymbol{o}^{(j)}) = a^{(j-1/2)} - a^{(j)}$. From the monotonicity of $f$, it suffices to show that $y^{(j)} \ge a^{(j-1/2)}$. Since $e^{(j)}$ and $i^{(j)}$ are chosen greedily, we have $y^{(j)} \ge \Delta_{o^{(j)}, \boldsymbol{o}^{(j-1)}(o^{(j)})} f(\boldsymbol{s}^{(j-1)})$. Since $\boldsymbol{s}^{(j-1)} \preceq \boldsymbol{o}^{(j-1/2)}$, we have $\Delta_{o^{(j)}, \boldsymbol{o}^{(j-1)}(o^{(j)})} f(\boldsymbol{s}^{(j-1)}) \ge a^{(j-1/2)}$ from the orthant submodularity. Combining these two inequalities, we establish (1).
Then, we have
$$f(\boldsymbol{o}) - f(\boldsymbol{s}) = \sum_{j=1}^B \big(f(\boldsymbol{o}^{(j-1)}) - f(\boldsymbol{o}^{(j)})\big) \le \sum_{j=1}^B \big(f(\boldsymbol{s}^{(j)}) - f(\boldsymbol{s}^{(j-1)})\big) = f(\boldsymbol{s}) - f(\boldsymbol{0}) \le f(\boldsymbol{s}),$$
which implies $f(\boldsymbol{s}) \ge f(\boldsymbol{o})/2$.
3.2 An almost linear-time algorithm by random sampling

In this section, we improve the number of evaluations of $f$ from $O(knB)$ to $O(kn \log B \log \frac{B}{\delta})$, where $\delta > 0$ is a failure probability.
Our algorithm is shown in Algorithm 2. The main difference from Algorithm 1 is that we sample a sufficiently large subset $R$ of $V$, and then greedily assign a value only looking at elements in $R$.
We reuse the notations $e^{(j)}, i^{(j)}, S^{(j)}$ and $\boldsymbol{s}^{(j)}$ from Section 3.1, and let $R^{(j)}$ be $R$ in the $j$-th iteration. We iteratively define $\boldsymbol{o}^{(0)} = \boldsymbol{o}, \boldsymbol{o}^{(1)}, \ldots, \boldsymbol{o}^{(B)}$ as follows. If $R^{(j)} \cap S^{(j)}$ is empty, then we regard that the algorithm failed. Suppose $R^{(j)} \cap S^{(j)}$ is non-empty. Then, we set $o^{(j)} = e^{(j)}$ if $e^{(j)} \in R^{(j)} \cap S^{(j)}$, and set $o^{(j)}$ to be an arbitrary element in $R^{(j)} \cap S^{(j)}$ otherwise. Finally, we define $\boldsymbol{o}^{(j-1/2)}$ and $\boldsymbol{o}^{(j)}$ as in Section 3.1 using $\boldsymbol{o}^{(j-1)}$, $o^{(j)}$, and $e^{(j)}$.
If the algorithm does not fail and $\boldsymbol{o}^{(1)}, \ldots, \boldsymbol{o}^{(B)}$ are well defined, or in other words, if $R^{(j)} \cap S^{(j)}$ is not empty for every $j \in [B]$, then the rest of the analysis is completely the same as in Section 3.1, and we achieve an approximation ratio of 1/2. Hence, it suffices to show that $\boldsymbol{o}^{(1)}, \ldots, \boldsymbol{o}^{(B)}$ are well defined with high probability.

Lemma 3.2. With probability at least $1 - \delta$, we have $R^{(j)} \cap S^{(j)} \neq \emptyset$ for every $j \in [B]$.
Algorithm 3 k-Greedy-IS
Input: a monotone k-submodular function $f: (k+1)^V \to \mathbb{R}_+$ and integers $B_1, \ldots, B_k \in \mathbb{Z}_+$.
Output: a vector $\boldsymbol{s}$ with $|\mathrm{supp}_i(\boldsymbol{s})| = B_i$ for each $i \in [k]$.
1: $\boldsymbol{s} \leftarrow \boldsymbol{0}$ and $B \leftarrow \sum_{i \in [k]} B_i$.
2: for $j = 1$ to $B$ do
3: $I \leftarrow \{i \in [k] \mid |\mathrm{supp}_i(\boldsymbol{s})| < B_i\}$.
4: $(e, i) \leftarrow \arg\max_{e \in V \setminus \mathrm{supp}(\boldsymbol{s}),\, i \in I} \Delta_{e,i} f(\boldsymbol{s})$.
5: $\boldsymbol{s}(e) \leftarrow i$.
6: return $\boldsymbol{s}$.
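Relative to k-Greedy-TS, the only change is that the inner argmax runs over the set $I$ of item kinds whose budgets are not yet exhausted; an implementation sketch:

```python
import numpy as np

def k_greedy_is(f, n, k, budgets):
    """Algorithm 3: greedy under individual size constraints budgets[0..k-1]."""
    s = np.zeros(n, dtype=int)
    counts = [0] * (k + 1)                     # counts[i] = |supp_i(s)|
    for _ in range(sum(budgets)):
        I = [i for i in range(1, k + 1) if counts[i] < budgets[i - 1]]
        best_val, best = -np.inf, None
        for e in np.flatnonzero(s == 0):
            for i in I:
                s[e] = i
                val = f(s)
                s[e] = 0
                if val > best_val:
                    best_val, best = val, (e, i)
        e, i = best
        s[e] = i
        counts[i] += 1
    return s
```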
Proof. Fix $j \in [B]$. If $|R^{(j)}| = n$, then we clearly have $\Pr[R^{(j)} \cap S^{(j)} = \emptyset] = 0$. Otherwise we have
$$\Pr[R^{(j)} \cap S^{(j)} = \emptyset] = \left(1 - \frac{|S^{(j)}|}{|V \setminus \mathrm{supp}(\boldsymbol{s}^{(j-1)})|}\right)^{|R^{(j)}|} \le e^{-\frac{B-j+1}{n-j+1} \cdot \frac{n-j+1}{B-j+1} \log \frac{B}{\delta}} = \frac{\delta}{B}.$$
By the union bound over $j \in [B]$, the lemma follows.
Theorem 3.3. Algorithm 2 outputs a 1/2-approximate solution with probability at least $1 - \delta$ by evaluating $f$ at most $O(k(n - B) \log B \log \frac{B}{\delta})$ times.

Proof. By Lemma 3.2 and the analysis in Section 3.1, Algorithm 2 outputs a 1/2-approximate solution with probability at least $1 - \delta$.
The number of evaluations of $f$ is at most
$$k \sum_{j \in [B]} \frac{n-j+1}{B-j+1} \log \frac{B}{\delta} = k \sum_{j \in [B]} \frac{n-B+j}{j} \log \frac{B}{\delta} = O\Big(kn \log B \log \frac{B}{\delta}\Big).$$

4 Maximizing k-submodular Functions with the Individual Size Constraint
In this section, we consider the problem of maximizing monotone k-submodular functions subject to the individual size constraint. Namely, we consider
$$\max f(\boldsymbol{x}) \quad \text{subject to } |\mathrm{supp}_i(\boldsymbol{x})| \le B_i\ \forall i \in [k] \text{ and } \boldsymbol{x} \in (k+1)^V,$$
where $f: (k+1)^V \to \mathbb{R}_+$ is monotone k-submodular, and $B_1, \ldots, B_k \in \mathbb{Z}_+$ are non-negative integers.
4.1 A greedy algorithm
We first consider a simple greedy algorithm described in Algorithm 3. We show the following:
Theorem 4.1. Algorithm 3 outputs a 1/3-approximate solution by evaluating f at most O(knB)
times.
It is clear that the number of evaluations of f is O(knB). The analysis of the approximation ratio is
given in Appendix A.
4.2 An almost linear-time algorithm by random sampling

We next improve the number of evaluations of $f$ from $O(knB)$ to $O\big(k^2 n \log \frac{B}{k} \log \frac{B}{\delta}\big)$. Our algorithm is given in Algorithm 4. In Appendix A, we show the following.

Theorem 4.2. Algorithm 4 outputs a 1/3-approximate solution with probability at least $1 - \delta$ by evaluating $f$ at most $O\big(k^2 n \log \frac{B}{k} \log \frac{B}{\delta}\big)$ times.
Algorithm 4 k-Stochastic-Greedy-IS
Input: a monotone k-submodular function $f: (k+1)^V \to \mathbb{R}_+$, integers $B_1, \ldots, B_k \in \mathbb{Z}_+$, and a failure probability $\delta > 0$.
Output: a vector $\boldsymbol{s}$ with $|\mathrm{supp}_i(\boldsymbol{s})| = B_i$ for each $i \in [k]$.
1: $\boldsymbol{s} \leftarrow \boldsymbol{0}$ and $B \leftarrow \sum_{i \in [k]} B_i$.
2: for $j = 1$ to $B$ do
3: $I \leftarrow \{i \in [k] \mid |\mathrm{supp}_i(\boldsymbol{s})| < B_i\}$ and $R \leftarrow \emptyset$.
4: loop
5: Add a random element in $V \setminus (\mathrm{supp}(\boldsymbol{s}) \cup R)$ to $R$.
6: $(e, i) \leftarrow \arg\max_{e \in R,\, i \in I} \Delta_{e,i} f(\boldsymbol{s})$.
7: if $|R| \ge \min\big\{\frac{n - |\mathrm{supp}(\boldsymbol{s})|}{B_i - |\mathrm{supp}_i(\boldsymbol{s})|} \log \frac{B}{\delta}, n\big\}$ then
8: $\boldsymbol{s}(e) \leftarrow i$.
9: break the loop.
10: return $\boldsymbol{s}$
5 Experiments
In this section, we experimentally demonstrate that our algorithms outperform baseline algorithms
and our almost linear-time algorithms significantly improve efficiency in practice. We conducted
experiments on a Linux server with Intel Xeon E5-2690 (2.90 GHz) and 264GB of main memory.
We implemented all algorithms in C++. We measured the computational cost in terms of the number
of function evaluations so that we can compare the efficiency of different methods independently
from concrete implementations.
5.1 Influence maximization with k topics under the total size constraint
We first apply our algorithms to the problem of maximizing the spread of influence on several topics. First we describe our information diffusion model, called the k-topic independent cascade (k-IC) model, which generalizes the independent cascade model [6, 7]. In the k-IC model, there are k kinds of items, each having a different topic, and thus k kinds of rumors independently spread through a social network. Let $G = (V, E)$ be a social network with an edge probability $p^i_{u,v}$ for each edge $(u, v) \in E$, representing the strength of influence from $u$ to $v$ on the $i$-th topic. Given a seed $\boldsymbol{s} \in (k+1)^V$, for each $i \in [k]$, the diffusion process of the rumor about the $i$-th topic starts by activating vertices in $\mathrm{supp}_i(\boldsymbol{s})$, independently from other topics. Then the process unfolds in discrete steps according to the following randomized rule: When a vertex $u$ becomes active in step $t$ for the first time, it is given a single chance to activate each currently inactive vertex $v$. It succeeds with probability $p^i_{u,v}$. If $u$ succeeds, then $v$ becomes active in step $t+1$. Whether or not $u$ succeeds, it cannot make any further attempt to activate $v$ in subsequent steps. The process runs until no more activations are possible.
The influence spread σ : (k+1)^V → R₊ in the k-IC model is defined as the expected total number
of vertices who eventually become active in one of the k diffusion processes given a seed s, namely,
σ(s) = E[ |∪_{i∈[k]} A_i(supp_i(s))| ], where A_i(supp_i(s)) is a random variable representing the set of
activated vertices in the diffusion process of the i-th topic. Given a directed graph G = (V, E), edge
probabilities p^i_{u,v} ((u, v) ∈ E, i ∈ [k]), and a budget B, the problem is to select a seed s ∈ (k+1)^V
that maximizes σ(s) subject to |supp(s)| ≤ B. It is easy to see that the influence spread function σ
is monotone k-submodular (see Appendix B for the proof).
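A small Monte Carlo sketch of σ(s) under the k-IC model just described; the graph encoding, probability lookup, and trial count are illustrative assumptions rather than the authors' implementation:

```python
import random

def estimate_spread(edges, probs, seed_assignment, k, trials=100):
    """Monte Carlo estimate of the k-IC influence spread sigma(s).

    edges: dict u -> list of out-neighbors v
    probs: dict (u, v, i) -> activation probability on topic i
    seed_assignment: dict vertex -> topic i in range(k)
    """
    total = 0
    for _ in range(trials):
        union_active = set()
        for i in range(k):
            # run one independent cascade for topic i
            active = {v for v, t in seed_assignment.items() if t == i}
            frontier = list(active)
            while frontier:
                nxt = []
                for u in frontier:
                    for v in edges.get(u, []):
                        if v not in active and random.random() < probs.get((u, v, i), 0.0):
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            union_active |= active
        total += len(union_active)
    return total / trials
```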
Experimental settings: We use a publicly available real-world dataset of a social news website
Digg.1 This dataset consists of a directed graph where each vertex represents a user and each edge
represents the friendship between a pair of users, and a log of user votes for stories. We set the
number of topics k to be 10, and estimated edge probabilities on each topic from the log using the
method of [1]. We set the value of B to 5, 10, . . . , 100 and compared the following algorithms:
1
http://www.isi.edu/~lerman/downloads/digg2009.html
[Figures 1 and 2: influence spread and number of influence estimations as functions of the budget B, comparing k-Greedy-TS, k-Stochastic-Greedy-TS, Single(3), Degree, and Random.]
Figure 1: Comparison of influence spreads.
Figure 2: The number of influence estimations.
• k-Greedy-TS (Algorithm 1).
• k-Stochastic-Greedy-TS (Algorithm 2). We chose δ = 0.1.
• Single(i): Greedily choose B vertices only considering the i-th topic and assign them items of the i-th topic.
• Degree: Choose B vertices in decreasing order of degrees and assign them items of random topics.
• Random: Randomly choose B vertices and assign them items of random topics.
For the first three algorithms, we implemented the lazy evaluation technique [16] for efficiency. For
k-Greedy-TS, we maintain an upper bound on the gain of inserting each pair (e, i) to apply the lazy
evaluation technique directly. For k-Stochastic-Greedy-TS, we maintain an upper bound on the
gain for each pair (e, i), and we pick up a pair in R with the largest gain for each iteration. During
the process of the algorithms, the influence spread was approximated by simulating the diffusion
process 100 times. When the algorithms terminate, we simulated the diffusion process 10,000 times
to obtain sufficiently accurate estimates of the influence spread.
Results: Figure 1 shows the influence spread achieved by each algorithm. We only show Single(3) among the Single(i) strategies since its influence spread is the largest. k-Greedy-TS and k-Stochastic-Greedy-TS clearly outperform the other methods owing to their theoretical guarantee
on the solution quality. Note that our two methods simulated the diffusion process 100 times to
choose a seed set, which is relatively small, because of the high computation cost. Consequently,
the approximate value of the influence spread has a relatively high variance, and this might have
caused the greedy method to choose seeds with small influence spreads. Note that Single(3)
works worse than Degree for B larger than 35, which means that focusing on a single topic may
significantly degrade the influence spread. Random shows a poor performance as expected.
Figure 2 reports the number of influence estimations of the greedy algorithms. We note that k-Stochastic-Greedy-TS outperforms k-Greedy-TS, which implies that the random sampling technique is effective even when combined with the lazy evaluation technique. The number of evaluations of k-Greedy-TS drastically increases when B is around 40, since we run out of influential
vertices and we need to reevaluate the remaining vertices. Indeed, the slope of k-Greedy-TS after
B = 40 is almost constant in Figure 1, which indicates that the remaining vertices have a similar
influence. Single(3) is faster than our algorithms since it only considers a single topic.
5.2
Sensor placement with k kinds of measures under the individual size constraint
Next we apply our algorithms for maximizing k-submodular functions with the individual size
constraint to the sensor placement problem that allows multiple kinds of sensors. In this problem, we want to determine the placement of multiple kinds of sensors that most effectively reduces the expected uncertainty. We need several notions to describe our model. Let Ω = {X₁, X₂, . . . , Xₙ} be a set of discrete random variables. The entropy of a subset S of Ω is defined as H(S) = −Σ_{s∈dom(S)} Pr[s] log Pr[s]. The conditional entropy of Ω having observed S is
H(Ω | S) := H(Ω) − H(S). Hence, in order to reduce the uncertainty of Ω, we want to find a set
S with as large an entropy as possible.
[Figures 3 and 4: entropy and number of entropy evaluations as functions of the value of b, comparing Single(1), Single(2), Single(3), k-Greedy-IS, and k-Stochastic-Greedy-IS. Figure 3: Comparison of entropy. Figure 4: The number of entropy evaluations.]
Now we formalize the sensor placement problem. There are k kinds of sensors for different measures. Suppose that we want to allocate B_i many sensors of the i-th kind for each i ∈ [k], and there
is a set V of n locations, each of which can be instrumented with exactly one sensor. Let X_e^i be the
random variable representing the observation collected from a sensor of the i-th kind if it is installed
at the e-th location, and let Ω = {X_e^i}_{i∈[k], e∈V}. Then, the problem is to select x ∈ (k+1)^V that
maximizes f(x) = H(∪_{e∈supp(x)} {X_e^{x(e)}}) subject to |supp_i(x)| ≤ B_i for each i ∈ [k]. It is easy
to see that f is monotone k-submodular (see Appendix B for details).
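A sketch of evaluating this entropy objective from empirical samples; the data layout and joint-counting scheme are illustrative assumptions:

```python
import math
from collections import Counter

def entropy_objective(samples, assignment):
    """Empirical entropy H of the joint observation of the chosen sensors.

    samples: list of dicts, each mapping (location e, kind i) -> discretized reading
    assignment: dict location e -> kind i (the selected placement x)
    """
    keys = sorted(assignment.items())
    joint = Counter(tuple(s.get((e, i)) for e, i in keys) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in joint.values())
```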
Experimental settings: We use the publicly available Intel Lab dataset.2 This dataset contains a
log of approximately 2.3 million readings collected from 54 sensors deployed in the Intel Berkeley
research lab between February 28th and April 5th, 2004. We extracted temperature, humidity, and
light values from each reading and discretized these values into several bins of 2 degrees Celsius
each, 5 points each, and 100 luxes each, respectively. Hence there are k = 3 kinds of sensors to be
allocated to n = 54 locations. Budgets for sensors measuring temperature, humidity, and light are
denoted by B1 , B2 , and B3 . We set B1 = B2 = B3 = b, where b is a parameter varying from 1 to
18. We compare the following algorithms:
• k-Greedy-IS (Algorithm 3).
• k-Stochastic-Greedy-IS (Algorithm 4). We chose δ = 0.1.
• Single(i): Allocate sensors of the i-th kind to Σ_j B_j greedily chosen places.
We again implemented these algorithms with the lazy evaluation technique in a similar way to the
previous experiment. Also note that Single(i) strategies do not satisfy the individual size constraint.
Results: Figure 3 shows the entropy achieved by each algorithm. k-Greedy-IS and k-Stochastic-Greedy-IS clearly outperform the Single(i) strategies. The maximum gap of entropies achieved by
k-Greedy-IS and k-Stochastic-Greedy-IS is only 0.18%.
Figure 4 shows the number of entropy evaluations of each algorithm. We observe that k-Stochastic-Greedy-IS outperforms k-Greedy-IS. Especially when b = 18, the number of entropy evaluations
is reduced by 31%. Single(i) strategies are faster because they only consider sensors of a fixed kind.
6
Conclusions
Motivated by real-world applications, we proposed approximation algorithms for maximizing monotone k-submodular functions. Our algorithms run in almost linear time and achieve the approximation ratio of 1/2 for the total size constraint and 1/3 for the individual size constraint. We empirically demonstrated that our algorithms outperform baseline methods for maximizing submodular
functions in terms of the solution quality. Improving the approximation ratio of 1/3 for the individual size constraint or showing it tight is an interesting open problem.
Acknowledgments
Y. Y. is supported by JSPS Grant-in-Aid for Young Scientists (B) (No. 26730009), MEXT Grantin-Aid for Scientific Research on Innovative Areas (24106003), and JST, ERATO, Kawarabayashi
Large Graph Project. N. O. is supported by JST, ERATO, Kawarabayashi Large Graph Project.
2
http://db.csail.mit.edu/labdata/labdata.html
References
[1] N. Barbieri, F. Bonchi, and G. Manco. Topic-aware social influence propagation models. In
ICDM, pages 81?90, 2012.
[2] N. Buchbinder, M. Feldman, J. S. Naor, and R. Schwartz. A tight linear time (1/2)approximation for unconstrained submodular maximization. In FOCS, pages 649?658, 2012.
[3] G. Calinescu, C. Chekuri, M. P?al, and J. Vondr?ak. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing, 40(6):1740?1766, 2011.
[4] P. Domingos and M. Richardson. Mining the network value of customers. In KDD, pages
57?66, 2001.
[5] Y. Filmus and J. Ward. Monotone submodular maximization over a matroid via non-oblivious
local search. SIAM Journal on Computing, 43(2):514?542, 2014.
[6] J. Goldenberg, B. Libai, and E. Muller. Talk of the network: A complex systems look at the
underlying process of word-of-mouth. Marketing Letters, 12(3):211?223, 2001.
[7] J. Goldenberg, B. Libai, and E. Muller. Using complex systems analysis to advance marketing
theory development: Modeling heterogeneity effects on new product growth through stochastic
cellular automata. Academy of Marketing Science Review, 9(3):1?18, 2001.
[8] I. Gridchyn and V. Kolmogorov. Potts model, parametric maxflow and k-submodular functions.
In ICCV, pages 2320?2327, 2013.
[9] A. Huber and V. Kolmogorov. Towards minimizing k-submodular functions. In Combinatorial
Optimization, pages 451?462. Springer Berlin Heidelberg, 2012.
[10] S. Iwata, S. Tanigawa, and Y. Yoshida. Improved approximation algorithms for k-submodular
function maximization. In SODA, 2016. to appear.
[11] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social
network. In KDD, pages 137–146, 2003.
[12] C.-W. Ko, J. Lee, and M. Queyranne. An exact algorithm for maximum entropy sampling.
Operations Research, 43(4):684?691, 1995.
[13] A. Krause, H. B. McMahon, C. Guestrin, and A. Gupta. Robust submodular observation
selection. The Journal of Machine Learning Research, 9:2761?2801, 2008.
[14] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in gaussian processes:
Theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research,
9:235?284, 2008.
[15] H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In NAACL/HLT, pages 912?920, 2010.
[16] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. Optimization Techniques, 7:234?243, 1978.
[17] B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi, J. Vondr?ak, and A. Krause. Lazier than lazy
greedy. In AAAI, pages 1812?1818, 2015.
[18] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions?I. Mathematical Programming, 14(1):265?294, 1978.
[19] M. Richardson and P. Domingos. Mining knowledge-sharing sites for viral marketing. In
KDD, pages 61?70, 2002.
[20] A. P. Singh, A. Guillory, and J. A. Bilmes. On bisubmodular maximization. In AISTATS, pages
1055?1063, 2012.
[21] M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32(1):41?43, 2004.
[22] J. Ward and S. Živný. Maximizing k-submodular functions and beyond. arXiv:1409.1399v1,
2014. A preliminary version appeared in SODA, pages 1468–1481, 2014.
5,202 | 571 | Oscillatory Neural Fields for
Globally Optimal Path Planning
Michael Lemmon
Dept. of Electrical Engineering
University of Notre Dame
Notre Dame, Indiana 46556
Abstract
A neural network solution is proposed for solving path planning problems
faced by mobile robots. The proposed network is a two-dimensional sheet
of neurons forming a distributed representation of the robot's workspace.
Lateral interconnections between neurons are "cooperative", so that the
network exhibits oscillatory behaviour. These oscillations are used to generate solutions of Bellman's dynamic programming equation in the context
of path planning. Simulation experiments imply that these networks locate
global optimal paths even in the presence of substantial levels of circuit
noise.
1
Dynamic Programming and Path Planning
Consider a 2-DOF robot moving about in a 2-dimensional world. A robot's location
is denoted by the real vector, p. The collection of all locations forms a set called the
workspace. An admissible point in the workspace is any location which the robot
may occupy. The set of all admissible points is called the free workspace. The
free workspace's complement represents a collection of obstacles. The robot moves
through the workspace along a path which is denoted by the parameterized curve,
p(t). An admissible path is one which lies wholly in the robot's free workspace.
Assume that there is an initial robot position, Po, and a desired final position, p J.
The robot path planning problem is to find an admissible path with Po and p J as
endpoints such that some "optimality" criterion is satisfied.
The path planning problem may be stated more precisely from an optimal control
theorist's viewpoint. Treat the robot as a dynamic system which is characterized
by a state vector, p, and a control vector, u. For the highest levels in a control
hierarchy, it can be assumed that the robot's dynamics are modeled by the differential equation
ṗ = u. This equation says that the state velocity equals the applied control. To define what is
meant by "optimal", a performance functional is introduced:
$$J(u) = \|p(t_f) - p_f\|^2 + \int_0^{t_f} c(p(t))\,\|u(t)\|^2\,dt \qquad (1)$$
where ‖x‖ is the norm of vector x and where the functional c(p) is unity if p lies
in the free workspace and is infinite otherwise. This weighting functional is used
to insure that the control does not take the robot into obstacles. Equation 1's
optimality criterion minimizes the robot's control effort while penalizing controls
which do not satisfy the terminal constraints.
With the preceding definitions, the optimal path planning problem states that for
some final time, t_f, find the control u(t) which minimizes the performance functional
J(u). One very powerful method for tackling this minimization problem is to use
dynamic programming (Bryson, 1975). According to dynamic programming, the
optimal control, u_opt, is obtained from the gradient of an "optimal return function",
J°(p). In other words, u_opt = ∇J°. The optimal return functional satisfies the
Hamilton-Jacobi-Bellman (HJB) equation. For the dynamic optimization problem
given above, the HJB equation is easily shown to be
(2)
This is a first order nonlinear partial differential equation (PDE) with terminal
(boundary) condition J°(t_f) = ‖p(t_f) − p_f‖². Once equation 2 has been solved
for J°, then the optimal "path" is determined by following the gradient of J°.
Solutions to equation 2 must generally be obtained numerically. One solution approach numerically integrates a full discretization of equation 2 backwards in time
using the terminal condition, J°(t_f), as the starting point. The proposed numerical
solution is attempting to find characteristic trajectories of the nonlinear first-order
PDE. The PDE nonlinearities, however, only insure that these characteristics exist
locally (i.e., in an open neighborhood about the terminal condition). The resulting
numerical solutions are, therefore, only valid in a "local" sense. This is reflected in
the fact that truncation errors introduced by the discretization process will eventually result in numerical solutions violating the underlying principle of optimality
embodied by the HJB equation.
In solving path planning problems, local solutions based on the numerical integration of equation 2 are not acceptable due to the "local" nature of the resulting
solutions. Global solutions are required and these may be obtained by solving an
associated variational problem (Benton, 1977). Assume that the optimal return
function at time t_f is known on a closed set B. The variational solution for equation 2 states that the optimal return at time t < t_f at a point p in the neighborhood
of the boundary set B will be given by
Y1l2}
JO(p, t) = min {JO(y, t,) + lip yeB
(t, - t)
(3)
Oscillatory Neural Fields for Globally Optimal Path Planning
where ‖p‖ denotes the L2 norm of vector p. Equation 3 is easily generalized to
other vector norms and only applies in regions where c(p) = 1 (i.e., the robot's free
workspace). For obstacles, J°(p, t) = J°(p, t_f) for all t < t_f. In other words, the
optimal return is unchanged in obstacles.
2
Oscillatory Neural Fields
The proposed neural network consists of M N neurons arranged as a 2-d sheet
called a "neural field". The neurons are put in a one-to-one correspondence with
the ordered pairs, (i, j), where i = 1, . . . , N and j = 1, . . . , M. The ordered pair
(i, j) will sometimes be called the (i, j)th neuron's "label". Associated with the
(i, j)th neuron is a set of neuron labels denoted by N_{i,j}. The neurons whose labels
lie in N_{i,j} are called the "neighbors" of the (i, j)th neuron.
The (i, j)th neuron is characterized by two states. The short term activity (STA)
state, x_{i,j}, is a scalar representing the neuron's activity in response to the currently
applied stimulus. The long term activity (LTA) state, w_{i,j}, is a scalar representing
the neuron's "average" activity in response to recently applied stimuli. Each neuron
produces an output, f(x_{i,j}), which is a unit step function of the STA state (i.e.,
f(x) = 1 if x > 0 and f(x) = 0 if x ≤ 0). A neuron will be called "active" or
"inactive" if its output is unity or zero, respectively.
Each neuron is also characterized by a set of constants. These constants are either
externally applied inputs or internal parameters. They are the disturbance y_{i,j},
the rate constant A_{i,j}, and the position vector p_{i,j}. The position vector is a 2-d
vector mapping the neuron onto the robot's workspace. The rate constant models
the STA state's underlying dynamic time constant. The rate constant is used to
encode whether or not a neuron maps onto an obstacle in the robot's workspace.
The external disturbance is used to initiate the network's search for the optimal
path.
The evolution of the STA and LTA states is controlled by the state equations. These
equations are assumed to change in a synchronous fashion. The STA state equation
is
$$x_{i,j}^{+} = G\Big(\, x_{i,j} + A_{i,j}\,y_{i,j} + A_{i,j} \sum_{(k,l)\in N_{i,j}} D_{kl}\, f(x_{kl}) \,\Big) \qquad (4)$$
where the summation is over all neurons contained within the neighborhood, N_{i,j},
of the (i,j)th neuron. The function G(x) is zero if x < 0 and is x if x ≥ 0.
This function is used to prevent the neuron's activity level from falling below zero.
D_{kl} are network parameters controlling the strength of lateral interactions between
neurons. The LTA state equation is
$$w_{i,j}^{+} = w_{i,j} + \big|\, f(x_{i,j}^{+}) - f(x_{i,j}) \,\big| \qquad (5)$$
Equation 5 means that the LTA state is incremented by one every time the (i, j)th
neuron's output changes.
Specific choices for the interconnection weights result in oscillatory behaviour. The
specific network under consideration is a cooperative field where Dkl 1 if (k, I) i=
=
541
542
Lemmon
?
=
=
(i,j) and Dkl -A < if (k, I) (i,j). Without loss of generality it will also be
assumed that the external disturbances are bounded between zero and one. It is also
assumed that the rate constants, A_{i,j}, are either zero or unity. In the path planning
application, rate constants will be used to encode whether or not a given neuron
represents an obstacle or a point in the free workspace. Consequently, any neuron
where A_{i,j} = 0 will be called an "obstacle" neuron and any neuron where A_{i,j} = 1
will be called a "free-space" neuron. Under these assumptions, it has been shown
(Lemmon, 1991a) that once a free-space neuron turns active it will be oscillating
with a period of 2 provided it has at least one free-space neuron as a neighbor.
3
Path Planning and Neural Fields
The oscillatory neural field introduced above can be used to generate solutions of
the Bellman iteration (Eq. 3) with respect to the supremum norm. Assume that all
neuron STA and LTA states are zero at time 0. Assume that the position vectors
form a regular grid of points, p_{i,j} = (iΔ, jΔ)ᵀ, where Δ is a constant controlling the
grid's size. Assume that all external disturbances but one are zero. In other words,
for a specific neuron with label (i,j), y_{k,l} = 1 if (k,l) = (i,j) and is zero otherwise.
Also assume a neighborhood structure where N_{i,j} consists of the (i,j)th neuron and
its eight nearest neighbors, N_{i,j} = {(i+k, j+l) : k = −1,0,1; l = −1,0,1}. With
these assumptions it has been shown (Lemmon, 1991a) that the LTA state of the
(k,l)th neuron at time n will be given by G(n − ρ_{k,l}), where ρ_{k,l} is the length of the
shortest path between p_{k,l} and p_{i,j} with respect to the supremum norm.
This fact can be seen quite clearly by examining the LTA state's dynamics in a
small closed neighborhood about the (k, I)th neuron. First note that the LTA state
equation simply increments the LTA state by one every time the neuron's STA state
toggles its output. Since a neuron oscillates after it has been initially activated, the
LTA state, will represent the time at which the neuron was first activated. This
time, in turn, will simply be the "length" of the shortest path from the site of
the initial distrubance. In particular, consider the neighborhood set for the (k,l)th
neuron and let's assume that the (k, I)th neuron has not yet been activated. If the
neighbor has been activated, with an LTA state of a given value, then we see that
the (k,l)th neuron will be activated on the next cycle and we have
Wk,l
=
max
(m,n)eN""
( wm,n - IIPk,,-pm,nlloo)
~
(6)
This is simply a dual form of the Bellman iteration shown in equation 3. In other
words, over the free-space neurons, we can conclude that the network is solving the
Bellman equation with respect to the supremum norm.
In light of the preceding discussion, the use of cooperative neural fields for path
planning is straightforward. First apply a disturbance at the neuron mapping onto
the desired terminal position, p_f, and allow the field to generate STA oscillations.
When the neuron mapping onto the robot's current position is activated, stop the
oscillatory behaviour. The resulting LTA state distribution for the (i, j)th neuron
equals the negative of the minimum distance (with respect to the sup norm) from
that neuron to the initial disturbance. The optimal path is then generated by a
sequence of controls which ascends the gradient of the LTA state distribution.
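As a concrete illustration, here is a small Python sketch of this procedure, with the oscillatory dynamics abstracted into a wavefront arrival-time computation (which is what the LTA states record); the array layout, unit grid spacing, and stopping bound are illustrative assumptions:

```python
import numpy as np

def lta_distances(free, target, max_steps=10000):
    """Propagate activity waves from `target` and read off LTA-like arrival times.

    free:   boolean array, True for free-space neurons, False for obstacles
            (the target cell is assumed to be free).
    target: (i, j) index of the desired terminal position.
    Returns an array whose entry at (k, l) is the arrival time of the wave,
    i.e. the sup-norm shortest-path length from (k, l) to the target.
    """
    dist = np.full(free.shape, np.inf)
    dist[target] = 0.0
    frontier = [target]
    step = 0
    while frontier and step < max_steps:
        step += 1
        nxt = []
        for (k, l) in frontier:
            for dk in (-1, 0, 1):
                for dl in (-1, 0, 1):
                    m, n = k + dk, l + dl
                    if (0 <= m < free.shape[0] and 0 <= n < free.shape[1]
                            and free[m, n] and dist[m, n] == np.inf):
                        dist[m, n] = step     # first activation time
                        nxt.append((m, n))
        frontier = nxt
    return dist
```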
fig 1. STA activity waves
fig 2. LTA distribution
Several simulations of the cooperative neural path planner have been implemented.
The most complex case studied by these simulations assumed an array of 100 by 100
neurons. Several obstacles of irregular shape and size were randomly distributed
over the workspace. An initial disturbance was introduced at the desired terminal
location and STA oscillations were observed. A snapshot of the neuronal outputs
is shown in figure 1. This figure clearly shows wavefronts of neuronal activity propagating away from the initial disturbance (neuron (70,10) in the upper right hand
corner of figure 1). The "activity" waves propagate around obstacles without any
reflections. When the activity waves reach the neuron mapping onto the robot's
current position, the STA oscillations were turned off. The LTA distribution resulting from this particular simulation run is shown in figure 2. In this figure, light
regions denote areas of large LTA state and dark regions denote areas of small LTA
state.
The generation of the optimal path can be computed as the robot is moving towards
its goal. Let the robot's current position be the (i,j)th neuron's position vector.
The robot will then generate a control which takes it to the position associated with
one of the (i,j)th neuron's neighbors. In particular, the control is chosen so that
the robot moves to the neuron whose LTA state is largest in the neighborhood set,
N_{i,j}. The next position vector to be chosen is p_{k,l} such that its LTA state is
$$w_{k,l} = \max_{(x,y)\in N_{i,j}} w_{x,y} \qquad (7)$$
Because of the LTA distribution's optimality property, this local control strategy is
guaranteed to generate the optimal path (with respect to the sup norm) connecting
the robot to its desired terminal position. It should be noted that the selection of
the control can also be done with an analog neural network. In this case, the LTA
states of neurons in the neighborhood set, N_{i,j}, are used as inputs to a competitively
inhibited neural net. The competitive interactions in this network will always select
the direction with the largest LTA state.
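A matching sketch of the control rule in Eq. 7: repeatedly move to the neighbor with the best LTA value, which, in terms of the arrival times computed by the sketch above, means the smallest value; the optional noise term is a stand-in for the selection noise discussed next:

```python
import numpy as np

def extract_path(dist, start, noise=0.0, rng=None):
    """Greedy ascent of the LTA distribution (descent of arrival times)."""
    rng = rng or np.random.default_rng()
    path = [start]
    k, l = start
    for _ in range(dist.size):                # safety bound on path length
        if dist[k, l] == 0:
            break                             # reached the terminal position
        best, best_val = None, np.inf
        for dk in (-1, 0, 1):
            for dl in (-1, 0, 1):
                m, n = k + dk, l + dl
                if ((dk, dl) != (0, 0)
                        and 0 <= m < dist.shape[0] and 0 <= n < dist.shape[1]):
                    val = dist[m, n] + noise * rng.uniform()   # selection noise
                    if val < best_val:
                        best, best_val = (m, n), val
        k, l = best
        path.append(best)
    return path
```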
Since neuronal dynamics are analog in nature, it is important to consider the impact
of noise on the implementation. Analog systems will generally exhibit noise levels
with effective dynamic ranges being at most 6 to 8 bits. Noise can enter the network
in several ways. The LTA state equation can have a noise term (LTA noise), so that
the LTA distribution may deviate from the optimal distribution. In our experiments,
we assumed that LTA noise was additive and white. Noise may also enter in the
selection of the robot's controls (selection noise). In this case, the robot's next
position is the position vector p_{k,l} such that w_{k,l} + v_{k,l} = max_{(x,y)∈N_{i,j}} (w_{x,y} + v_{x,y}),
where v_{x,y} is an i.i.d. array of stochastic processes. Simulation results reported
below assume that the noise processes, v_{x,y}, are positive and uniformly distributed
i.i.d. processes. The introduction of noise places constraints on the "quality" of
individual neurons, where quality is measured by the neuron's effective dynamic
range. Two sets of simulation experiments have been conducted to assess the neural
field's dynamic range requirements. In the following simulations, dynamic range is
defined by the equation −log₂|v_m|, where |v_m| is the maximum value the noise
process can take. The unit for this measure of dynamic range is "bits".
The first set of simulation experiments selected robotic controls in a noisy fashion.
Figure 3 shows the paths generated by a simulation run where the signal to noise
ratio was 1 (0 bits). The results indicate that the impact of "selection" noise is
to "confuse" the robot so it takes longer to find the desired terminal point. The
path shown in figure 3 represents a random walk about the true optimal path.
The important thing to note about this example is that the system is capable of
tolerating extremely large amounts of "selection" noise.
The second set of simulation experiments introduced LTA noise. These noise experiments had a detrimental effect on the robot's path planning abilities in that
several spurious extremals were generated in the LTA distribution. The result of
the spurious extremals is to fool the robot into thinking it has reached its terminal
destination when in fact it has not. As noise levels increase, the number of spurious
states increase. Figure 4, shows how this increase varies with the neuron's effective
dynamic range. The surprising thing about this result is that for neurons with as
little as 3 bits of effective dynamic range the LTA distribution is free of spurious
maxima. Even with less than 3 bits of dynamic range, the performance degradation
is not catastrophic. LTA noise may cause the robot to stop early; but upon stopping the robot is closer to the desired terminal state. Therefore, the path planning
module can be easily run again and because the robot is closer to its goal there will
be a greater probability of success in the second trial.
4
Extensions and Conclusions
This paper reported on the use of oscillatory neural networks to solve path planning problems. It was shown that the proposed neural field can compute dynamic
programming solutions to path planning problems with respect to the supremum
norm. Simulation experiments showed that this approach exhibited low sensitivity
Oscillatory Neural Fields for Globally Optimal Path Planning
545
~~---r----.---~----'----'
N
a
a
N
(/)
(1)
C6
U5
(/)
::l
o
.~
::l
a.
en
15
~
(1)
.0
E
Dynamic Range (bits)
::l
Z
o
fig 3. Selected Path
1
2
3
4
...c.
fig 4. Dynamic Range
to noise, thereby supporting the feasibility of analog VLSI implementations.
The work reported here is related to resistive grid approaches for solving optimization problems (Chua, 1984). Resistive grid approaches may be viewed as "passive"
relaxation methods, while the oscillatory neural field is an "active" approach. The
primary virtue of the "active" approach lies in the network's potential to control the
optimization criterion by selecting the interconnections and rate constants. In this
paper and (Lemmon, 1991a), lateral interconnections were chosen to induce STA
state oscillations and this choice yields a network which solves the Bellman equation
with respect to the supremum norm. A slight modification of this model is currently
under investigation in which the neuron's dynamics directly realize the iteration of
equation 6 with respect to more general path metrics. This analog network is based
on an SIMD approach originally proposed in (Lemmon, 1991). Results for this field
are shown in figures 5 and 6. These figures show paths determined by networks
utilizing different path metrics. In figure 5, the network penalizes movement in all
directions equally. In figure 6, there is a strong penalty for horizontal or vertical
movements. As a result of these penalties (which are implemented directly in the
interconnection constants D1:1), the two networks' "optimal" paths are different.
The path in figure 6 shows a clear preference for making diagonal rather than verticalor horizontal moves. These results clearly demonstrate the ability of an "active"
neural field to solve path planning problems with respect to general path metrics.
These different path metrics, of course, represent constraints on the system's path
planning capabilities and as a result suggest that "active" networks may provide a
systematic way of incorporating holonomic and nonholonomic constraints into the
path planning process.
A final comment must be made on the apparent complexity of this approach.
546
Lemmon
fig 5. No Direction Favored
Clearly, if this method is to be of practical significance, it must be extended beyond
the 2-DOF problem to arbitrary task domains. This extension, however, is nontrivial due to the "curse of dimensionality" experienced by straightforward applications
of dynamic programming. An important area of future research therefore addresses
the decomposition of real-world tasks into smaller sub tasks which are amenable to
the solution methodology proposed in this paper.
Acknowledgements
I would like to acknowledge the partial financial support of the National Science
Foundation, grant number NSF-IRI-91-09298.
References
S.H. Benton Jr., (1977) The Hamilton-Jacobi equation: A Global Approach. Academic Press.
A.E. Bryson and Y.C. Ho, (1975) Applied Optimal Control, Hemisphere Publishing.
Washington D.C.
L.O. Chua and G.N. Lin, (1984) Nonlinear programming without computation,
IEEE Trans. Circuits Syst., CAS-31:182-188
M.D. Lemmon, (1991) Real time optimal path planning using a distributed computing paradigm, Proceedings of the Americal Control Conference, Boston, MA, June
1991.
M.D. Lemmon, (1991a) 2-Degree-of-Freedom Robot Path Planning using Cooperative Neural Fields. Neural Computation 3(3):350-362.
5,203 | 5,710 | Smooth and Strong:
MAP Inference with Linear Convergence
Ofer Meshi
TTI Chicago
Mehrdad Mahdavi
TTI Chicago
Alexander G. Schwing
University of Toronto
Abstract
Maximum a-posteriori (MAP) inference is an important task for many applications. Although the standard formulation gives rise to a hard combinatorial optimization problem, several effective approximations have been proposed and studied in recent years. We focus on linear programming (LP) relaxations, which have
achieved state-of-the-art performance in many applications. However, optimization of the resulting program is in general challenging due to non-smoothness and
complex non-separable constraints.
Therefore, in this work we study the benefits of augmenting the objective function
of the relaxation with strong convexity. Specifically, we introduce strong convexity by adding a quadratic term to the LP relaxation objective. We provide theoretical guarantees for the resulting programs, bounding the difference between their
optimal value and the original optimum. Further, we propose suitable optimization
algorithms and analyze their convergence.
1
Introduction
Probabilistic graphical models are an elegant framework for reasoning about multiple variables with
structured dependencies. They have been applied in a variety of domains, including computer vision, natural language processing, computational biology, and many more. Throughout, finding the
maximum a-posteriori (MAP) configuration, i.e., the most probable assignment, is one of the central
tasks for these models. Unfortunately, in general the MAP inference problem is NP-hard. Despite
this theoretical barrier, in recent years it has been shown that approximate inference methods based
on linear programming (LP) relaxations often provide high quality MAP solutions in practice. Although tractable in principle, LP relaxations pose a real computational challenge. In particular, for
many applications, standard LP solvers perform poorly due to the large number of variables and
constraints [33]. Therefore, significant research effort has been put into designing efficient solvers
that exploit the special structure of the MAP inference problem.
Some of the proposed algorithms optimize the primal LP directly, however this is hard due to complex coupling constraints between the variables. Therefore, most of the specialized MAP solvers
optimize the dual function, which is often easier since it preserves the structure of the underlying
model and facilitates elegant message-passing algorithms. Nevertheless, the resulting optimization
problem is still challenging since the dual function is piecewise linear and therefore non-smooth.
In fact, it was recently shown that LP relaxations for MAP inference are not easier than general
LPs [22]. This result implies that there exists an inherent trade-off between the approximation error
(accuracy) of the relaxation and its optimization error (efficiency).
In this paper we propose new ways to explore this trade-off. Specifically, we study the benefits of
adding strong convexity in the form of a quadratic term to the MAP LP relaxation objective. We
show that adding strong convexity to the primal LP results in a new smooth dual objective, which
serves as an alternative to soft-max. This smooth objective can be computed efficiently and optimized via gradient-based methods, including accelerated gradient. On the other hand, introducing
strong convexity in the dual leads to a new primal formulation in which the coupling constraints
are enforced softly, through a penalty term in the objective. This allows us to derive an efficient
1
conditional gradient algorithm, also known as the Frank-Wolfe (FW) algorithm. We can then regularize both primal and dual to obtain a smooth and strongly convex objective, for which various
algorithms enjoy linear convergence rate. We provide theoretical guarantees for the new objective
functions, analyze the convergence rate of the proposed algorithms, and compare them to existing
approaches. All of our algorithms are guaranteed to globally converge to the optimal value of the
modified objective function. Finally, we show empirically that our methods are competitive with
other state-of-the-art algorithms for MAP LP relaxation.
2
Related Work
Several authors proposed efficient approximations for MAP inference based on LP relaxations [e.g.,
30]. Kumar et al. [12] show that LP relaxation dominates other convex relaxations for MAP inference. Due to the complex non-separable constraints, only few of the existing algorithms optimize
the primal LP directly. Ravikumar et al. [23] present a proximal point method that requires iterative
projections onto the constraints in the inner loop. Inexactness of these iterative projections complicates the convergence analysis of this scheme. In Section 4.1 we show that adding a quadratic term
to the dual problem corresponds to a much easier primal in which agreement constraints are enforced
softly through a penalty term that accounts for constraint violation. This enables us to derive a simpler projection-free algorithm based on conditional gradient for the primal relaxed program [4, 13].
Recently, Belanger et al. [1] used a different non-smooth penalty term for constraint violation, and
showed that it corresponds to box-constraints on dual variables. In contrast, our penalty terms are
smooth, which leads to a different objective function and faster convergence guarantees.
Most of the popular algorithms for MAP LP relaxations focus on the dual program and optimize
it in various ways. The subgradient algorithm can be applied to the non-smooth objective [11],
however its convergence rate is rather slow, both in theory and in practice. In particular, the algorithm
requires O(1/ε²) iterations to obtain an ε-accurate solution to the dual problem. Algorithms based
on coordinate minimization can also be applied [e.g., 6, 10, 31], and often converge fast, but they
might get stuck in suboptimal fixed points due to the non-smoothness of the objective. To overcome
this limitation it has been proposed to smooth the dual objective using a soft-max function [7, 8].
Coordinate minimization methods are then guaranteed to converge to the optimum of the smoothed
objective. Meshi et al. [17] have shown that the convergence rate of such algorithms is O(1/τε),
where τ is the smoothing parameter. Accelerated gradient algorithms have also been successfully
applied to the smooth dual, obtaining an improved convergence rate of O(1/√(τε)), which can be used
to obtain an O(1/ε) rate w.r.t. the original objective [24]. In Section 4.2 we propose an alternative
smoothing technique, based on adding a quadratic term to the primal objective. We then show how
gradient-based algorithms can be applied efficiently to optimize the new objective function.
Other globally convergent methods that have been proposed include augmented Lagrangian [15, 16],
bundle methods [9], and a steepest descent approach [25, 26]. However, the convergence rate of
these methods in the context of MAP inference has not been analyzed yet, making them hard to
compare to other algorithms.
3
Problem Formulation
In this section we formalize MAP inference in graphical models. Consider a set of n discrete variables X1 , . . . , Xn , and denote by xi a particular assignment to variable Xi . We refer to subsets of
these variables by r ⊆ {1, . . . , n}, also known as regions, and the total number of regions is referred
to as q. Each subset is associated with a local score function, or factor, θ_r(x_r). The MAP problem is
to find an assignment x which maximizes a global score function that decomposes over the factors:
$$\max_x\; \sum_r \theta_r(x_r).$$
The above combinatorial optimization problem is hard in general, and tractable only in several special cases. Most notably, for tree-structured graphs or super-modular pairwise score functions, efficient dynamic programming algorithms can be applied. Here we do not make such simplifying
assumptions and instead focus on approximate inference. In particular, we are interested in approx2
imations based on the LP relaxation, taking the following form:
$$\max_{\mu\in\mathcal{M}_L}\; f(\mu) := \sum_r \sum_{x_r} \theta_r(x_r)\,\mu_r(x_r) = \theta^\top\mu \qquad (1)$$
where:
$$\mathcal{M}_L = \Big\{\, \mu \ge 0 \;:\; \sum_{x_r} \mu_r(x_r) = 1 \;\; \forall r; \quad \sum_{x_p\setminus x_r} \mu_p(x_p) = \mu_r(x_r) \;\; \forall r, x_r, p : r \in p \,\Big\},$$
where "r ∈ p" represents a containment relationship between the regions p and r. The dual program
of the above LP is formulated as minimizing the re-parameterization of factors [32]:
$$\min_\lambda\; g(\lambda) := \sum_r \max_{x_r}\Big(\theta_r(x_r) + \sum_{c:c\in r} \lambda_{rc}(x_c) - \sum_{p:r\in p} \lambda_{pr}(x_r)\Big) = \sum_r \max_{x_r} \bar\theta_r^{\lambda}(x_r), \qquad (2)$$
This is a piecewise linear function in the dual variables λ. Hence, it is convex (but not strongly) and
non-smooth. Two commonly used optimization schemes for this objective are subgradient descent
and block coordinate minimization. While the convergence rate of the former can be upper bounded
by O(1/ε²), the latter is non-convergent due to the non-smoothness of the objective function.
To remedy this shortcoming, it has been proposed to smooth the objective by replacing the local
maximization with a soft-max [7, 8]. The resulting unconstrained program is:
$$\min_\lambda\; g_\tau(\lambda) := \tau \sum_r \log \sum_{x_r} \exp\!\Big(\frac{\bar\theta_r^{\lambda}(x_r)}{\tau}\Big). \qquad (3)$$
This dual form corresponds to adding local entropy terms to the primal given in Eq. (1), obtaining:
$$\max_{\mu\in\mathcal{M}_L}\; \sum_r \Big(\sum_{x_r} \theta_r(x_r)\,\mu_r(x_r) + \tau H(\mu_r)\Big), \qquad (4)$$
where H(μ_r) = −Σ_{x_r} μ_r(x_r) log μ_r(x_r) denotes the entropy. The following guarantee holds for
the smooth optimal value g_τ^*:
$$g^* \;\le\; g_\tau^* \;\le\; g^* + \tau \sum_r \log V_r, \qquad (5)$$
where g^* is the optimal value of the dual program given in Eq. (2), and V_r = |r| denotes the number
of variables in region r.
The dual given in Eq. (3) is a smooth function with Lipschitz constant L = (1/τ) Σ_r V_r [see 24]. In
this case coordinate minimization algorithms are globally convergent (to the smooth optimum), and
their convergence rate can be bounded by O(1/τε) [17]. Gradient-based algorithms can also be
applied to the smooth dual and have a similar convergence rate O(1/τε). This can be improved using
Nesterov's acceleration scheme to obtain an O(1/√(τε)) rate [24]. The gradient of Eq. (3) takes the
simple form:
$$\nabla_{\lambda_{pr}(x_r)}\, g_\tau = b_r(x_r) - \sum_{x_p\setminus x_r} b_p(x_p), \qquad \text{where } b_r(x_r) \propto \exp\!\Big(\frac{\bar\theta_r^{\lambda}(x_r)}{\tau}\Big). \qquad (6)$$
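To make the smoothed objective concrete, here is a sketch of computing the soft-max value and the beliefs b_r for a single region, assuming θ̄ is supplied as a plain vector of reparameterized scores:

```python
import numpy as np

def softmax_region(theta_bar, tau):
    """Soft-max smoothing of one region's max.

    Returns tau * log sum exp(theta_bar / tau) (computed stably) together
    with the belief vector b_r that appears in the gradient of Eq. (3).
    """
    m = theta_bar.max()
    w = np.exp((theta_bar - m) / tau)
    value = m + tau * np.log(w.sum())
    beliefs = w / w.sum()
    return value, beliefs
```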
4
Introducing Strong Convexity
In this section we study the effect of adding strong convexity to the objective function. Specifically,
we add the Euclidean norm of the variables to either the dual (Section 4.1) or primal (Section 4.2)
function. We study the properties of the objectives, and propose appropriate optimization schemes.
4.1
Strong Convexity in the Dual
As mentioned above, the dual given in Eq. (2) is a piecewise linear function, hence not smooth.
Introducing strong convexity to control the convergence rate is an alternative to smoothing. We
propose to introduce strong convexity by simply adding the L2 norm of the variables to the dual
Algorithm 1 Block-coordinate Frank-Wolfe for soft-constrained primal
1: Initialize: μ_r(x_r) = 1{x_r = argmax_{x'_r} θ̄_r^{(λ)}(x'_r)} for all r, x_r
2: while not converged do
3:   Pick r at random
4:   Let s_r(x_r) = 1{x_r = argmax_{x'_r} θ̄_r^{(λ)}(x'_r)} for all x_r
5:   Let η = (θ̄_r^{(λ)})^⊤(s_r − μ_r) / [ (1/γ) P_r ‖s_r − μ_r‖² + (1/γ) Σ_{c:c⊂r} ‖A_{rc}(s_r − μ_r)‖² ], and clip to [0, 1]
6:   Update μ_r ← (1 − η)μ_r + η s_r
7: end while
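A schematic Python version of one block update of Algorithm 1; the data layout (a dense score vector per region and a callable for the quadratic curvature) is an illustrative simplification, not the authors' implementation:

```python
import numpy as np

def bcfw_block_update(mu_r, theta_bar_r, grad_quad_r, gamma):
    """One block-coordinate Frank-Wolfe step on region r (a sketch).

    mu_r:         current marginal vector for region r (on the simplex)
    theta_bar_r:  reparameterized scores for region r (the block gradient
                  of the soft-constrained primal in Eq. 8)
    grad_quad_r:  callable mapping a direction d to the curvature term
                  of the quadratic penalty restricted to block r
    gamma:        penalty / strong-convexity parameter
    """
    # line 4: Frank-Wolfe vertex = indicator of the best single configuration
    s = np.zeros_like(mu_r)
    s[np.argmax(theta_bar_r)] = 1.0
    d = s - mu_r
    # line 5: exact line search for the quadratic objective, clipped to [0, 1]
    denom = grad_quad_r(d) / gamma
    eta = float(theta_bar_r @ d) / denom if denom > 0 else 1.0
    eta = min(max(eta, 0.0), 1.0)
    # line 6: convex-combination update keeps mu_r on the simplex
    return mu_r + eta * d
```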
program given in Eq. (2), i.e.,
$$\min_\lambda\; g_\gamma(\lambda) := g(\lambda) + \frac{\gamma}{2}\|\lambda\|^2. \qquad (7)$$
The corresponding primal objective is then (see Appendix A):
$$\max_{\mu\in\bar{\mathcal{M}}_L}\; f(\mu) := \theta^\top\mu - \frac{1}{2\gamma}\sum_{r,x_r,p:r\in p}\Big(\sum_{x_p\setminus x_r}\mu_p(x_p) - \mu_r(x_r)\Big)^{2} = \theta^\top\mu - \frac{1}{2\gamma}\|A\mu\|^2, \qquad (8)$$
where M̄_L preserves only the separable per-region simplex constraints in M_L, and for convenience
we define (Aμ)_{r,x_r,p} = Σ_{x_p∖x_r} μ_p(x_p) − μ_r(x_r). Importantly, this primal program is similar
to the original primal given in Eq. (1), but the non-separable marginalization constraints in M_L are
enforced softly, via a penalty term in the objective. Interestingly, the primal in Eq. (8) is somewhat
similar to the objective function obtained by the steepest descent approach proposed by Schwing
et al. [25], despite being motivated from different perspectives. Similar to Schwing et al. [25], our
algorithm below is also based on conditional gradient, however ours is a single-loop algorithm,
whereas theirs employs a double-loop procedure.
We obtain the following guarantee for the optimum of the strongly convex dual (see Appendix C):
$$g^* \;\le\; g_\gamma^* \;\le\; g^* + \frac{\gamma}{2} h, \qquad (9)$$
where h is chosen such that ‖λ^*‖² ≤ h. It can be shown that h = (4Mq‖θ‖₁)², where M =
max_r W_r, and W_r is the number of configurations of region r (see Appendix C). Notice that this
bound is worse than the soft-max bound stated in Eq. (5) due to the dependence on the magnitude
of the parameters θ and the number of configurations W_r.
Optimization It is easy to modify the subgradient algorithm to optimize the strongly convex dual
given in Eq. (7). It only requires adding the term γλ to the subgradient. Since the objective is
non-smooth and strongly convex, we obtain a convergence rate of O(1/γε) [19]. We note that
coordinate descent algorithms for the dual objective are still non-convergent, since the program
is still non-smooth. Instead, we propose to optimize the primal given in Eq. (8) via a conditional
gradient algorithm [4]. Specifically, in Algorithm 1 we implement the block-coordinate Frank-Wolfe
algorithm proposed by Lacoste-Julien et al. [13]. In Algorithm 1 we denote P_r = |{p : r ∈ p}|, we
define λ^{(μ)} as λ_{pr}^{(μ)}(x_r) = (1/γ)(Σ_{x_p∖x_r} μ_p(x_p) − μ_r(x_r)), and (A_{rc}μ_r)(x_c) = Σ_{x_r∖x_c} μ_r(x_r).
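For completeness, the modified subgradient step mentioned above, as a sketch; the subgradient oracle for g and the array type of λ are assumptions, and the step size is the standard choice for strongly convex objectives:

```python
def strongly_convex_subgradient_step(lam, subgrad_g, gamma, t):
    """One subgradient step on g(lam) + (gamma/2)||lam||^2.

    lam:       current dual iterate (e.g., a numpy array)
    subgrad_g: oracle returning a subgradient of the piecewise-linear g
    gamma:     strong-convexity parameter
    t:         iteration counter, giving the O(1/(gamma * t)) step size
    """
    g = subgrad_g(lam) + gamma * lam      # add the term gamma * lam
    step = 1.0 / (gamma * (t + 1))
    return lam - step * g
```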
?r (xr ) , and Arc ?r = xr \xc ?r (xr ).
xp \xr ?p (xp )
In Appendix D we show that the convergence rate of Algorithm 1 is O(1/ ?), similar to subgradient
in the dual. However, Algorithm 1 has several advantages over subgradient. First, the step-size requires no tuning since the optimal step ? is computed analytically. Second, it is easy to monitor the
P ? >
sub-optimality of the current solution by keeping track of the duality gap
(? ) (sr ?r ), which
r
r
provides a sound stopping condition.1 Notice that the basic operation for the update is maximization over the re-parameterization (maxxr ??r (xr )), which is similar to a subgradient computation.
This operation is sometimes cheaper than coordinate minimization, which requires computing max1
Similar rate guarantees can be derived for the duality gap.
4
marginals [see 28]. We also point out that, similar to Lacoste-Julien et al. [13], it is possible to
execute Algorithm 1 in terms of dual variables, without storing primal variables ?r (xr ) for large
parent regions (see Appendix E for details). As we demonstrate in Section 5, this can be important
when using global factors.
We note that Algorithm 1 can be used with minor modifications in the inner loop of an augmented
Lagrangian algorithm [15]. But we show later that this double-loop procedure is not necessary to
obtain good results for some applications. Finally, Meshi et al. [18] show how to use the objective
in Eq. (8) to obtain an efficient training algorithm for learning the score functions θ from data.
4.2  Strong Convexity in the Primal

We next consider appending the primal given in Eq. (1) with a similar L2 norm, obtaining:

    max_{μ∈M_L} f_λ(μ) := θ⊤μ − (λ/2) ‖μ‖₂².                                    (10)

It turns out that the corresponding dual function takes the form (see Appendix B):

    min_δ g_λ(δ) := Σ_r max_{u∈Δ_r} ( u⊤θ̄_r^δ − (λ/2) ‖u‖₂² )
                  = Σ_r ( (1/2λ) ‖θ̄_r^δ‖₂² − (λ/2) min_{u∈Δ_r} ‖u − θ̄_r^δ/λ‖₂² ).   (11)
Thus the dual objective involves scaling the factor reparameterization θ̄_r^δ by 1/λ, and then projecting the resulting vector onto the probability simplex. We denote the result of this projection by u_r (or just u when clear from context). The L2 norm in Eq. (10) has the same role as the entropy terms in Eq. (4), and serves to smooth the dual function. This is a consequence of the well known duality between strong convexity and smoothness [e.g., 21]. In particular, the dual stated in Eq. (11) is smooth with Lipschitz constant L = q/λ.

To calculate the objective value we need to compute the projection u_r onto the simplex for all factors. This can be done by sorting the elements of the scaled reparameterization θ̄_r^δ/λ, and then shifting all elements by the same value such that all positive elements sum to 1. The negative elements are then set to 0 [see, e.g., 3, for details]. Intuitively, we can think of u_r as a max-marginal which does not place weight 1 on the maximum element, but instead spreads the weight among the top scoring elements, if their score is close enough to the maximum. The effect is similar to the soft-max case, where b_r can also be thought of as a soft max-marginal (see Eq. (6)). On the other hand, unlike b_r, our max-marginal u_r will most likely be sparse, since only a few elements tend to have scores close to the maximum and hence non-zero value in u_r.
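The sort-based simplex projection just described can be written in a few lines. The following is a generic sketch of the standard procedure (in the spirit of Duchi et al. [3]) rather than code from the paper; in our setting it would be applied to the scaled reparameterization θ̄_r^δ/λ of each factor:

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex: sort,
    # find the threshold tau, shift, and clip the negative part to zero.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    tau = (css[k] - 1.0) / (k + 1.0)
    return np.maximum(v - tau, 0.0)

The returned vector is exactly the sparse max-marginal u_r discussed above.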
Another interesting property of the dual in Eq. (11) is invariance to shifting, which is also the case for the non-smooth dual provided in Eq. (2) and the soft-max dual given in Eq. (3). Specifically, shifting all elements of θ̄_r^δ(·) by the same value does not change the objective value, since the projection onto the simplex is shift-invariant.

We next bound the difference between the smooth optimum and the original one. The bound follows easily from the bounded norm of μ_r in the probability simplex:

    f* − (λ/2)q ≤ f_λ* ≤ f*,    or equivalently:    f* ≤ f_λ* + (λ/2)q ≤ f* + (λ/2)q.

We actually use the equivalent form on the right in order to get an upper bound rather than a lower bound.² From strong duality we immediately get a similar guarantee for the dual optimum:

    g* ≤ g_λ* + (λ/2)q ≤ g* + (λ/2)q.

Notice that this bound is better than the corresponding soft-max bound stated in Eq. (5), since it does not depend on the scope size of regions, i.e., V_r.

² In our experiments we show the shifted objective value.
[Table 1 appears here; its two-dimensional layout did not survive extraction. The recoverable structure: the columns distinguish convex from strongly convex duals (the latter covered in Section 4.1), the rows distinguish non-smooth, soft-max, and L2-max objectives (Sections 4.2 and 4.3), and each cell lists the corresponding primal and dual programs together with the applicable algorithms (subgradient, coordinate descent, gradient descent, accelerated gradient, proximal projections, Frank-Wolfe, SDCA) and their convergence rates; coordinate descent is marked non-convergent for the non-smooth dual.]

Table 1: Summary of objective functions, algorithms and rates. Row and column headers pertain to the dual objective. Previously known approaches are shaded.
Optimization  To solve the dual program given in Eq. (11) we can use gradient-based algorithms. The gradient takes the form:

    ∇_{δ_{pr}(x_r)} g_λ = u_r(x_r) − Σ_{x_p\x_r} u_p(x_p),

which only requires computing the projection u_r, as in the objective function. Notice that this form is very similar to the soft-max gradient (Eq. (6)), with projections u taking the role of beliefs b. The gradient descent algorithm applies the updates δ ← δ − (1/L) ∇g_λ iteratively. The convergence rate of this scheme for our smooth dual is O(1/λε), which is similar to the soft-max rate [20]. As in the soft-max case, Nesterov's accelerated gradient method achieves a better O(1/√(λε)) rate [see 24].

Unfortunately, it is not clear how to derive efficient coordinate minimization updates for the dual in Eq. (11), since the projection u_r depends on the dual variables in a non-linear manner.

Finally, we point out that the program in Eq. (10) is very similar to the one solved in the inner loop of proximal point methods [23]. Therefore our gradient-based algorithm can be used with minor modifications as a subroutine within such proximal algorithms (requires mapping the final dual solution to a feasible primal solution [see, e.g., 17]).

4.3  Smooth and Strong

In order to obtain a smooth and strongly convex objective function, we can add an L2 term to the smooth program given in Eq. (11) (similarly possible for the soft-max dual in Eq. (3)). Gradient-based algorithms have linear convergence rate in this case [20]. Equivalently, we can add an L2 term to the primal in Eq. (8). Although conditional gradient is not guaranteed to converge linearly in this case [5], stochastic coordinate ascent (SDCA) does enjoy linear convergence, and can even be accelerated to gain better dependence on the smoothing and convexity parameters [27]. This requires only minor modifications to the algorithms presented above, which are highlighted in Appendix F. To conclude this section, we summarize all objective functions and algorithms in Table 1.

5  Experiments

We now proceed to evaluate the proposed methods on real and synthetic data and compare them to existing state-of-the-art approaches. We begin with a synthetic model adapted from Kolmogorov [10]. This example was designed to show that coordinate descent algorithms might get stuck in suboptimal points due to non-smoothness. We compare the following MAP inference algorithms: non-smooth coordinate descent (CD), non-smooth subgradient descent, smooth CD (for soft-max), gradient descent (GD) and accelerated GD (AGD) with either soft-max or L2 smoothing (Section 4.2), our Frank-Wolfe Algorithm 1 (FW), and the linear convergence variants (Section 4.3). In Fig. 1
[Figure 1 appears here; only its legend survived extraction. It lists the compared methods: CD Non-smooth, Subgradient, CD Soft (three smoothing values), GD Soft, AGD Soft, GD L2, AGD L2 (three values), FW (three values), AGD and SDCA (smooth and strongly convex variants), and the non-smooth optimum; the plot shows objective value against iterations.]
Figure 1: Comparison of various inference algorithms on a synthetic model. The objective value as
a function of the iterations is plotted. The optimal value is shown as a thin dashed dark line.
we notice that non-smooth CD (light blue, dashed) is indeed stuck at the initial point. Second, we
observe that the subgradient algorithm (yellow) is extremely slow to converge. Third, we see that
smooth CD algorithms (green) converge nicely to the smooth optimum. Gradient-based algorithms
for the same smooth (soft-max) objective (purple) also converge to the same optimum, while AGD
is much faster than GD. We can also see that gradient-based algorithms for the L2 -smooth objective
(red) perform slightly better than their soft-max counterparts. In particular, they have faster convergence and a tighter objective for the same value of the smoothing parameter, as our theoretical analysis suggests. For example, compare the convergence of AGD soft and AGD L2, both with parameter 0.01. For the optimal value, compare CD soft and AGD L2, both with parameter 1. Fourth, we note that the FW algorithm (blue) requires smaller values of the strong-convexity parameter in order to achieve
high accuracy, as our bound in Eq. (9) predicts. We point out that the dependence on the smoothing
or strong convexity parameter is roughly linear, which is also aligned with our convergence bounds.
Finally, we see that for this model the smooth and strongly convex algorithms (gray) perform similarly to, or even slightly worse than, either the smooth-only or strongly-convex-only counterparts.
In our experiments we compare the number of iterations rather than runtime of the algorithms since
the computational cost per iteration is roughly the same for all algorithms (includes a pass over
all factors), and the actual runtime greatly depends on the implementation. For example, gradient
computation for L2 smoothing requires sorting factors rather than just maximizing over their values,
incurring worst-case cost of O(Wr log Wr ) per factor instead of just O(Wr ) for soft-max gradient.
However, one can use partitioning around a pivot value instead of sorting, yielding O(Wr ) cost
in expectation [3], and caching the pivot can also speed-up the runtime considerably. Moreover,
logarithm and exponent operations needed by the soft-max gradient are much slower than the basic
operations used for computing the L2 smooth gradient. As another example, we point out that AGD
algorithms can be further improved by searching for the effective Lipschitz constant rather than
using the conservative bound L (see [24] for more details). In order to abstract away these details
we compare the iteration cost of the vanilla versions of all algorithms.
We next conduct experiments on real data from a protein side-chain prediction problem from
Yanover et al. [33]. This problem can be cast as MAP inference in a model with unary and pairwise
factors. Fig. 2 (left) shows the convergence of various MAP algorithms for one of the proteins (similar behavior was observed for the other instances). The behavior is similar to the synthetic example
above, except for the much better performance of non-smooth coordinate descent. In particular, we
see that coordinate minimization algorithms perform very well in this setting, better than gradient-based and the FW algorithms (this finding is consistent with previous work [e.g., 17]). Only a closer
look (Fig. 2, left, bottom) reveals that smoothing actually helps to obtain a slightly better solution
here. In particular, the soft-max CD (with parameter 0.001) and L2-max AGD (with λ = 0.01), as well
as the primal (SDCA) and dual (AGD) algorithms for the smooth and strongly convex objective, are
able to recover the optimal solution within the allowed number of iterations. The non-smooth FW
algorithm also finds a near-optimal solution.
Finally, we apply our approach to an image segmentation problem with a global cardinality factor.
Specifically, we use the Weizmann Horse dataset for foreground-background segmentation [2]. All
images are resized to 150 × 150 pixels, and we use 50 images to learn the parameters of the model and the other 278 images to test inference. Our model consists of unary and pairwise factors along with a single global cardinality factor that serves to encourage segmentations where the number of foreground pixels is not too far from the trainset mean. Specifically, we use the cardinality factor from Li and Zemel [14], defined as θ_c(x) = −max{0, |s − s₀| − t}², where s = Σᵢ xᵢ. Here, s₀ is a reference cardinality computed from the training set, and t is a tolerance parameter, set to t = s₀/5.
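As a small illustration, the factor can be scored as below. This is a hedged sketch: the leading minus sign is our assumption (it makes larger scores better, matching the max-score convention used here), since the sign convention did not survive extraction cleanly:

def cardinality_score(x, s0, t):
    # x is a binary labeling; s counts the foreground pixels
    s = sum(x)
    return -max(0, abs(s - s0) - t) ** 2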
[Figure 2 appears here; only axis labels and the legend survived extraction. The left panel (protein side-chain prediction) compares CD Non-smooth, Subgradient, CD Soft, AGD L2, AGD Soft, FW, and the smooth strongly-convex AGD and SDCA variants; the right panel (image segmentation) compares Subgradient, MPLP, and FW. Both panels plot objective value against iterations.]
Figure 2: (Left) Comparison of MAP inference algorithms on a protein side-chain prediction problem. In the upper figure the solid lines show the optimized objective for each algorithm, and the
dashed lines show the score of the best decoded solution (obtained via simple rounding). The bottom figure shows the value of the decoded solution in more detail. (Right) Comparison of MAP
inference algorithms on an image segmentation problem. Again, solid lines show the value of the
optimized objective while dashed lines show the score of the best decoded solution so far.
First we notice that not all of the algorithms are efficient in this setting. In particular, algorithms that
optimize the smooth dual (either soft-max or L2 smoothing) need to enumerate factor configurations
in order to compute updates, which is prohibitive for the global cardinality factor. We therefore
take the non-smooth subgradient and coordinate descent [MPLP, 6] as baselines, and compare their
performance to that of our FW Algorithm 1 (with λ = 0.01). We use the variant that does not store
primal variables for the global factor (Appendix E). We point out that MPLP requires calculating
max-marginals for factors, rather than a simple maximization for subgradient and FW. In the case of
cardinality factors this can be done at similar cost using dynamic programming [29], however there
are other types of factors where max-marginal computation might be more expensive than max [28].
In Fig. 2 (right) we show a typical run for a single image, where we limit the number of iterations to
10K. We observe that subgradient descent is again very slow to converge, and coordinate descent
is also rather slow here (in fact, it is not even guaranteed to reach the optimum). In contrast, our
FW algorithm converges orders of magnitude faster and finds a high quality solution (for runtime
comparison see Appendix G). Over the entire 278 test instances we found that FW gets the highest
score solution for 237 images, while MPLP finds the best solution in only 41 images, and subgradient
never wins. To explain this success, recall that our algorithm enforces the agreement constraints
between factor marginals only softly. It makes sense that in this setting it is not crucial to reach full
agreement between the cardinality factor and the other factors in order to obtain a good solution.
6  Conclusion
In this paper we studied the benefits of strong convexity for MAP inference. We introduced a simple L2 term to make either the dual or primal LP relaxations strongly convex. We analyzed the
resulting objective functions and provided theoretical guarantees for their optimal values. We then
proposed several optimization algorithms and derived upper bounds on their convergence rates. Using the same machinery, we obtained smooth and strongly convex objective functions, for which our
algorithms retained linear convergence guarantees. Our approach offers new ways to trade-off the
approximation error of the relaxation and the optimization error. Indeed, we showed empirically that
our methods significantly outperform strong baselines on problems involving cardinality potentials.
To extend our work we aim at natural language processing applications since they share characteristics similar to the investigated image segmentation task. Finally, we were unable to derive
closed-form coordinate minimization updates for our L2 -smooth dual in Eq. (11). We hope to find
alternative smoothing techniques which facilitate even more efficient updates.
References
[1] D. Belanger, A. Passos, S. Riedel, and A. McCallum. Message passing for soft constraint dual decomposition. In UAI, 2014.
[2] E. Borenstein, E. Sharon, and S. Ullman. Combining top-down and bottom-up segmentation. In CVPR, 2004.
[3] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, pages 272-279, 2008.
[4] M. Frank and P. Wolfe. An algorithm for quadratic programming, volume 3, pages 95-110. 1956.
[5] D. Garber and E. Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. arXiv preprint arXiv:1301.4666, 2013.
[6] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS. MIT Press, 2008.
[7] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. IEEE Transactions on Information Theory, 56(12):6294-6316, 2010.
[8] J. Johnson. Convex Relaxation Methods for Graphical Models: Lagrangian and Maximum Entropy Approaches. PhD thesis, EECS, MIT, 2008.
[9] J. H. Kappes, B. Savchynskyy, and C. Schnörr. A bundle approach to efficient MAP-inference by Lagrangian relaxation. In CVPR, 2012.
[10] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10):1568-1583, 2006.
[11] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE PAMI, 2010.
[12] M. P. Kumar, V. Kolmogorov, and P. H. S. Torr. An analysis of convex relaxations for MAP estimation of discrete MRFs. JMLR, 10:71-106, 2009.
[13] S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML, pages 53-61, 2013.
[14] Y. Li and R. Zemel. High order regularization for semi-supervised learning of structured output problems. In ICML, pages 1368-1376, 2014.
[15] A. L. Martins, M. A. T. Figueiredo, P. M. Q. Aguiar, N. A. Smith, and E. P. Xing. An augmented Lagrangian approach to constrained MAP inference. In ICML, pages 169-176, 2011.
[16] O. Meshi and A. Globerson. An alternating direction method for dual MAP LP relaxation. In ECML, 2011.
[17] O. Meshi, T. Jaakkola, and A. Globerson. Convergence rate analysis of MAP coordinate minimization algorithms. In NIPS, pages 3023-3031, 2012.
[18] O. Meshi, N. Srebro, and T. Hazan. Efficient training of structured SVMs via soft constraints. In AISTATS, 2015.
[19] A. Nemirovski and D. Yudin. Problem complexity and method efficiency in optimization. Wiley, 1983.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Kluwer Academic Publishers, 2004.
[21] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog., 103(1):127-152, 2005.
[22] D. Prusa and T. Werner. Universality of the local marginal polytope. In CVPR, pages 1738-1743. IEEE, 2013.
[23] P. Ravikumar, A. Agarwal, and M. J. Wainwright. Message-passing for graph-structured linear programs: Proximal methods and rounding schemes. JMLR, 11:1043-1080, 2010.
[24] B. Savchynskyy, S. Schmidt, J. Kappes, and C. Schnörr. A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling. CVPR, 2011.
[25] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Globally convergent dual MAP LP relaxation solvers using Fenchel-Young margins. In Proc. NIPS, 2012.
[26] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Globally convergent parallel MAP LP relaxation solver using the Frank-Wolfe algorithm. In Proc. ICML, 2014.
[27] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, 2014.
[28] D. Sontag, A. Globerson, and T. Jaakkola. Introduction to dual decomposition for inference. In Optimization for Machine Learning, pages 219-254. MIT Press, 2011.
[29] D. Tarlow, I. Givoni, and R. Zemel. HOP-MAP: Efficient message passing with high order potentials. In AISTATS, volume 9, pages 812-819. JMLR: W&CP, 2010.
[30] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Transactions on Information Theory, 51(11):3697-3717, 2005.
[31] T. Werner. A linear programming approach to max-sum problem: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7):1165-1179, 2007.
[32] T. Werner. Revisiting the linear programming relaxation approach to Gibbs energy minimization and weighted constraint satisfaction. IEEE PAMI, 32(8):1474-1488, 2010.
[33] C. Yanover, T. Meltzer, and Y. Weiss. Linear programming relaxations and belief propagation: an empirical study. Journal of Machine Learning Research, 7:1887-1907, 2006.
5,204 | 5,711 | Stop Wasting My Gradients: Practical SVRG
Reza Babanezhad1 , Mohamed Osama Ahmed1 , Alim Virani2 , Mark Schmidt1
Department of Computer Science
University of British Columbia
1 {rezababa, moahmed, schmidtm}@cs.ubc.ca, 2 alim.virani@gmail.com
Jakub Konečný
School of Mathematics
University of Edinburgh
kubo.konecny@gmail.com
Scott Sallinen
Department of Electrical and Computer Engineering
University of British Columbia
scotts@ece.ubc.ca
Abstract
We present and analyze several strategies for improving the performance of
stochastic variance-reduced gradient (SVRG) methods. We first show that the
convergence rate of these methods can be preserved under a decreasing sequence
of errors in the control variate, and use this to derive variants of SVRG that use
growing-batch strategies to reduce the number of gradient calculations required
in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the
commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternative mini-batch selection strategies, and (iv) consider
the generalization error of the method.
1  Introduction
We consider the problem of optimizing the average of a finite but large sum of smooth functions,
    min_{x∈R^d} f(x) = (1/n) Σ_{i=1}^n f_i(x).                                  (1)
A huge proportion of the model-fitting procedures in machine learning can be mapped to this problem. This includes classic models like least squares and logistic regression but also includes more
advanced methods like conditional random fields and deep neural network models. In the highdimensional setting (large d), the traditional approaches for solving (1) are: full gradient (FG) methods which have linear convergence rates but need to evaluate the gradient fi for all n examples on
every iteration, and stochastic gradient (SG) methods which make rapid initial progress as they only
use a single gradient on each iteration but ultimately have slower sublinear convergence rates.
Le Roux et al. [1] proposed the first general method, stochastic average gradient (SAG), that only
considers one training example on each iteration but still achieves a linear convergence rate. Other
methods have subsequently been shown to have this property [2, 3, 4], but these all require storing a
previous evaluation of the gradient f_i′ or the dual variables for each i. For many objectives this only
requires O(n) space, but for general problems this requires O(np) space making them impractical.
Recently, several methods have been proposed with similar convergence rates to SAG but without the
memory requirements [5, 6, 7, 8]. They are known as mixed gradient, stochastic variance-reduced
gradient (SVRG), and semi-stochastic gradient methods (we will use SVRG). We give a canonical
SVRG algorithm in the next section, but the salient features of these methods are that they evaluate
two gradients on each iteration and occasionally must compute the gradient on all examples. SVRG
methods often dramatically outperform classic FG and SG methods, but these extra evaluations
mean that SVRG is slower than SG methods in the important early iterations. They also mean that
SVRG methods are typically slower than memory-based methods like SAG.
In this work we first show that SVRG is robust to inexact calculation of the full gradients it requires
(§3), provided the accuracy increases over time. We use this to explore growing-batch strategies that require fewer gradient evaluations when far from the solution, and we propose a mixed SG/SVRG method that may improve performance in the early iterations (§4). We next explore using support vectors to reduce the number of gradients required when close to the solution (§5), give a justification for the regularized SVRG update that is commonly used in practice (§6), consider alternative mini-batch strategies (§7), and finally consider the generalization error of the method (§8).
2  Notation and SVRG Algorithm

SVRG assumes f is μ-strongly convex, each f_i is convex, and each gradient f_i′ is Lipschitz-continuous with constant L. The method begins with an initial estimate x^0, sets x_0 = x^0, and then generates a sequence of iterates x_t using

    x_t = x_{t−1} − η( f′_{i_t}(x_{t−1}) − f′_{i_t}(x^s) + d^s ),               (2)

where η is the positive step size, we set d^s = f′(x^s), and i_t is chosen uniformly from {1, 2, ..., n}. After every m steps, we set x^{s+1} = x_t for a random t ∈ {1, ..., m}, and we reset t = 0 with x_0 = x^{s+1}.

To analyze the convergence rate of SVRG, we will find it convenient to define the function

    ρ(a, b) = (1 / (1 − 2ηa)) ( 1/(mημ) + 2bη ),

as it appears repeatedly in our results. We will use ρ(a) to indicate the value of ρ(a, b) when a = b, and we will simply use ρ for the special case when a = b = L. Johnson & Zhang [6] show that if η and m are chosen such that 0 < ρ < 1, the algorithm achieves a linear convergence rate of the form

    E[f(x^{s+1}) − f(x*)] ≤ ρ E[f(x^s) − f(x*)],

where x* is the optimal solution. This convergence rate is very fast for appropriate η and m. While this result relies on constants we may not know in general, practical choices with good empirical performance include setting m = n, η = 1/L, and using x^{s+1} = x_m rather than a random iterate.

Unfortunately, the SVRG algorithm requires 2m + n gradient evaluations for every m iterations of (2), since updating x_t requires two gradient evaluations and computing d^s requires n gradient evaluations. We can reduce this to m + n if we store the gradients f_i′(x^s), but this is not practical in most applications. Thus, SVRG requires many more gradient evaluations than classic SG iterations or memory-based methods like SAG.
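To make the update (2) concrete, here is a minimal sketch of plain SVRG for a generic finite sum. The function grad(i, x), the step size eta, and the epoch length m are assumed user-supplied inputs; this is an illustration, not the authors' implementation:

import numpy as np

def svrg(grad, n, x0, eta, m, epochs, rng=np.random.default_rng(0)):
    xs = x0.copy()                                   # snapshot x^s
    for s in range(epochs):
        ds = np.mean([grad(i, xs) for i in range(n)], axis=0)  # d^s = f'(x^s)
        x = xs.copy()
        for t in range(m):
            it = rng.integers(n)
            # variance-reduced stochastic gradient from Eq. (2)
            x = x - eta * (grad(it, x) - grad(it, xs) + ds)
        xs = x                                       # option I: x^{s+1} = x_m
    return xs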
3  SVRG with Error

We first give a result for the SVRG method where we assume that d^s is equal to f′(x^s) up to some error e^s. This is in the spirit of the analysis of [9], who analyze FG methods under similar assumptions. We assume that ‖x_t − x*‖ ≤ Z for all t, which has been used in related work [10] and is reasonable because of the coercivity implied by strong-convexity.

Proposition 1. If d^s = f′(x^s) + e^s and we set η and m so that ρ < 1, then the SVRG algorithm (2) with x^{s+1} chosen randomly from {x_1, x_2, ..., x_m} satisfies

    E[f(x^{s+1}) − f(x*)] ≤ ρ E[f(x^s) − f(x*)] + ( Z E‖e^s‖ + η E‖e^s‖² ) / (1 − 2ηL).

We give the proof in Appendix A. This result implies that SVRG does not need a very accurate approximation of f′(x^s) in the crucial early iterations since the first term in the bound will dominate. Further, this result implies that we can maintain the exact convergence rate of SVRG as long as the errors e^s decrease at an appropriate rate. For example, we obtain the same convergence rate provided that max{E‖e^s‖, E‖e^s‖²} ≤ γρ̃^s for any γ ≥ 0 and some ρ̃ < ρ. Further, we still obtain a linear convergence rate as long as ‖e^s‖ converges to zero with a linear convergence rate.
Algorithm 1 Batching SVRG
Input: initial vector x^0, update frequency m, learning rate η.
for s = 0, 1, 2, ... do
    Choose batch size |B^s|
    B^s = |B^s| elements sampled without replacement from {1, 2, ..., n}.
    d^s = (1/|B^s|) Σ_{i∈B^s} f_i′(x^s)
    x_0 = x^s
    for t = 1, 2, ..., m do
        Randomly pick i_t ∈ {1, ..., n}
        x_t = x_{t−1} − η( f′_{i_t}(x_{t−1}) − f′_{i_t}(x^s) + d^s )            (*)
    end for
    option I: set x^{s+1} = x_m
    option II: set x^{s+1} = x_t for random t ∈ {1, ..., m}
end for

3.1  Non-Uniform Sampling
Xiao & Zhang [11] show that non-uniform sampling (NUS) improves the performance of SVRG. They assume each f_i′ is L_i-Lipschitz continuous, and sample i_t = i with probability L_i/(nL̄), where L̄ = (1/n) Σ_{i=1}^n L_i. The iteration is then changed to

    x_t = x_{t−1} − η( (L̄/L_{i_t}) [ f′_{i_t}(x_{t−1}) − f′_{i_t}(x^s) ] + d^s ),

which maintains that the search direction is unbiased. In Appendix A, we show that if d^s is computed with error for this algorithm and if we set η and m so that 0 < ρ(L̄) < 1, then we have a convergence rate of

    E[f(x^{s+1}) − f(x*)] ≤ ρ(L̄) E[f(x^s) − f(x*)] + ( Z E‖e^s‖ + η E‖e^s‖² ) / (1 − 2ηL̄),

which can be faster since the average L̄ may be much smaller than the maximum value L.
3.2  SVRG with Batching

There are many ways we could allow an error in the calculation of d^s to speed up the algorithm. For example, if evaluating each f_i′ involves solving an optimization problem, then we could solve this optimization problem inexactly. For example, if we are fitting a graphical model with an iterative approximate inference method, we can terminate the iterations early to save time.

When the f_i are simple but n is large, a natural way to approximate d^s is with a subset (or 'batch') of training examples B^s (chosen without replacement),

    d^s = (1/|B^s|) Σ_{i∈B^s} f_i′(x^s).

The batch size |B^s| controls the error in the approximation, and we can drive the error to zero by increasing it to n. Existing SVRG methods correspond to the special case where |B^s| = n for all s. Algorithm 1 gives pseudo-code for an SVRG implementation that uses this sub-sampling strategy.

If we assume that the sample variance of the norms of the gradients is bounded by S² for all x^s,

    (1/(n−1)) ( Σ_{i=1}^n ‖f_i′(x^s)‖² − n ‖f′(x^s)‖² ) ≤ S²,

then we have that [12, Chapter 2]

    E‖e^s‖² ≤ ((n − |B^s|) / (n|B^s|)) S².

So if we want E‖e^s‖² ≤ γρ̃^{2s}, where γ ≥ 0 is a constant and ρ̃ < 1, we need

    |B^s| ≥ nS² / ( S² + nγρ̃^{2s} ).                                            (3)
Algorithm 2 Mixed SVRG and SG Method
Replace (*) in Algorithm 1 with the following lines:
if i_t ∈ B^s then
    x_t = x_{t−1} − η( f′_{i_t}(x_{t−1}) − f′_{i_t}(x^s) + d^s )
else
    x_t = x_{t−1} − η f′_{i_t}(x_{t−1})
end if
If the batch size satisfies the above condition then

    Z E‖e^{s−1}‖ + η E‖e^{s−1}‖² ≤ Z√γ ρ̃^s + ηγρ̃^{2s} ≤ 2 max{Z√γ, ηγ} ρ̃^s,

and the convergence rate of SVRG is unchanged compared to using the full batch on all iterations. The condition (3) guarantees a linear convergence rate under any exponentially-increasing sequence of batch sizes, the strategy suggested by [13] for classic SG methods. However, a tedious calculation shows that (3) has an inflection point at s = log(S²/(γn)) / (2 log(1/ρ̃)), corresponding to |B^s| = n/2. This was previously observed empirically [14, Figure 3], and occurs because we are sampling without replacement. This transition means we don't need to increase the batch size exponentially.
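A sketch of the growing-batch choice follows. The schedule implements condition (3) for assumed values of S², γ, and ρ̃ (hypothetical inputs, since these constants are generally unknown); in practice, simply doubling |B^s| until it reaches n behaves similarly:

import numpy as np

def batch_size(s, n, S2, gamma, rho_tilde):
    # smallest batch size satisfying condition (3) at outer iteration s
    b = n * S2 / (S2 + n * gamma * rho_tilde ** (2 * s))
    return min(n, int(np.ceil(b)))

def approx_full_gradient(grad, n, xs, s, S2, gamma, rho_tilde,
                         rng=np.random.default_rng(0)):
    # d^s over a without-replacement sample B^s, as in Algorithm 1
    Bs = rng.choice(n, size=batch_size(s, n, S2, gamma, rho_tilde), replace=False)
    ds = np.mean([grad(i, xs) for i in Bs], axis=0)
    return ds, set(Bs)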
4  Mixed SG and SVRG Method

An approximate d^s can drastically reduce the computational cost of the SVRG algorithm, but does not affect the 2 in the 2m + n gradients required for m SVRG iterations. This factor of 2 is significant in the early iterations, since this is when stochastic methods make the most progress and when we typically see the largest reduction in the test error.

To reduce this factor, we can consider a mixed strategy: if i_t is in the batch B^s then perform an SVRG iteration, but if i_t is not in the current batch then use a classic SG iteration. We illustrate this modification in Algorithm 2. This modification allows the algorithm to take advantage of the rapid initial progress of SG, since it predominantly uses SG iterations when far from the solution. Below we give a convergence rate for this mixed strategy.
Proposition 2. Let d^s = f′(x^s) + e^s and we set η and m so that 0 < ρ(L, αL) < 1 with α = |B^s|/n. If we assume E‖f_i′(x)‖² ≤ σ², then Algorithm 2 has

    E[f(x^{s+1}) − f(x*)] ≤ ρ(L, αL) E[f(x^s) − f(x*)] + ( Z E‖e^s‖ + η E‖e^s‖² + (ησ²/2)(1 − α) ) / (1 − 2ηL).
We give the proof in Appendix B. The extra term depending on the variance σ² is typically the bottleneck for SG methods. Classic SG methods require the step-size η to converge to zero because of this term. However, the mixed SG/SVRG method can keep the fast progress from using a constant η since the term depending on σ² converges to zero as α converges to one. Since α < 1 implies that ρ(L, αL) < ρ, this result implies that when [f(x^s) − f(x*)] is large compared to e^s and σ², the mixed SG/SVRG method actually converges faster.

Sharing a single step size η between the SG and SVRG iterations in Proposition 2 is sub-optimal. For example, if x is close to x* and |B^s| ≈ n, then the SG iteration might actually take us far away from the minimizer. Thus, we may want to use a decreasing sequence of step sizes for the SG iterations. In Appendix B, we show that using η = O(√((n − |B|)/(n|B|))) for the SG iterations can improve the dependence on the error e^s and variance σ².
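The inner loop of Algorithm 2 then amounts to the following sketch, where the optional separate SG step size reflects the decreasing-step-size refinement just mentioned (the helper name and signature are ours, for illustration):

def mixed_step(grad, x, xs, ds, Bs, it, eta, eta_sg=None):
    if it in Bs:
        # i_t was used to form d^s: full variance-reduced SVRG step
        return x - eta * (grad(it, x) - grad(it, xs) + ds)
    # otherwise fall back to a classic SG step
    step = eta if eta_sg is None else eta_sg
    return x - step * grad(it, x)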
5  Using Support Vectors
Using a batch B^s decreases the number of gradient evaluations required when SVRG is far from the solution, but its benefit diminishes over time. However, for certain objectives we can further
Algorithm 3 Heuristic for skipping evaluations of f_i at x
if sk_i = 0 then
    compute f_i′(x).
    if f_i′(x) = 0 then
        ps_i = ps_i + 1.    {Update the number of consecutive times f_i′(x) was zero.}
        sk_i = 2^{max{0, ps_i − 2}}.    {Skip an exponential number of future evaluations if it remains zero.}
    else
        ps_i = 0.    {This could be a support vector, do not skip it next time.}
    end if
    return f_i′(x).
else
    sk_i = sk_i − 1.    {In this case, we skip the evaluation.}
    return 0.
end if
reduce the number of gradient evaluations by identifying support vectors. For example, consider minimizing the Huberized hinge loss (HSVM) with threshold ε [15],

    min_{x∈R^d} (1/n) Σ_{i=1}^n f(b_i a_i⊤ x),    f(τ) = { 0                   if τ > 1 + ε,
                                                           1 − τ               if τ < 1 − ε,
                                                           (1 + ε − τ)²/(4ε)   if |1 − τ| ≤ ε.

In terms of (1), we have f_i(x) = f(b_i a_i⊤ x). The performance of this loss function is similar to logistic regression and the hinge loss, but it has the appealing properties of both: it is differentiable like logistic regression, meaning we can apply methods like SVRG, but it has support vectors like the hinge loss, meaning that many examples will have f_i(x*) = 0 and f_i′(x*) = 0. We can also construct Huberized variants of many non-smooth losses for regression and multi-class classification.

If we knew the support vectors where f_i(x*) > 0, we could solve the problem faster by ignoring the non-support vectors. For example, if there are 100000 training examples but only 100 support vectors in the optimal solution, we could solve the problem 1000 times faster. While we typically don't know the support vectors, in this section we outline a heuristic that gives large practical improvements by trying to identify them as the algorithm runs.
Our heuristic has two components. The first component is maintaining the list of non-support vectors at x^s. Specifically, we maintain a list of examples i where f_i′(x^s) = 0. When SVRG picks an example i_t that is part of this list, we know that f′_{i_t}(x^s) = 0 and thus the iteration only needs one gradient evaluation. This modification is not a heuristic, in that it still applies the exact SVRG algorithm. However, at best it can only cut the runtime in half.

The heuristic part of our strategy is to skip f_i′(x^s) or f_i′(x_t) if our evaluation of f_i′ has been zero more than two consecutive times (and to skip it an exponentially larger number of times each time it remains zero). Specifically, for each example i we maintain two variables, sk_i (for 'skip') and ps_i (for 'pass'). Whenever we need to evaluate f_i′ for some x^s or x_t, we run Algorithm 3, which may
are few support vectors, since many iterations will require no gradient evaluations.
Identifying support vectors to speed up computation has long been an important part of SVM solvers,
and is related to the classic shrinking heuristic [16]. While it has previously been explored in the context of dual coordinate ascent methods [17], this is the first work exploring it for linearly-convergent
stochastic gradient methods.
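A direct Python transcription of Algorithm 3 is given below as an illustrative sketch; sk and ps are assumed integer arrays of per-example counters, initialized to zero:

import numpy as np

def maybe_grad(grad, i, x, sk, ps):
    if sk[i] == 0:
        g = grad(i, x)
        if not np.any(g):                   # f_i'(x) = 0: likely non-support vector
            ps[i] += 1
            sk[i] = 2 ** max(0, ps[i] - 2)  # skip exponentially many future checks
        else:
            ps[i] = 0                       # possible support vector, re-check next time
        return g
    sk[i] -= 1                              # skip this evaluation
    return np.zeros_like(x)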
6  Regularized SVRG

We are often interested in the special case where problem (1) has the decomposition

    min_{x∈R^d} f(x) ≡ h(x) + (1/n) Σ_{i=1}^n g_i(x).                           (4)
A common choice of h is a scaled 1-norm of the parameter vector, h(x) = λ‖x‖₁. This non-smooth regularizer encourages sparsity in the parameter vector, and can be addressed with the proximal-SVRG method of Xiao & Zhang [11]. Alternately, if we want an explicit Z we could set h to the indicator function for a 2-norm ball containing x*. In Appendix C, we give a variant of Proposition 1 that allows errors in the proximal-SVRG method for non-smooth/constrained settings like this.

Another common choice is the ℓ2-regularizer, h(x) = (λ/2)‖x‖². With this regularizer, the SVRG updates can be equivalently written in the form

    x_{t+1} = x_t − η( h′(x_t) + g′_{i_t}(x_t) − g′_{i_t}(x^s) + d^s ),         (5)

where d^s = (1/n) Σ_{i=1}^n g_i′(x^s). That is, they take an exact gradient step with respect to the regularizer and an SVRG step with respect to the g_i functions. When the g_i′ are sparse, this form of the update allows us to implement the iteration without needing full-vector operations. A related update is used by Le Roux et al. to avoid full-vector operations in the SAG algorithm [1, §4]. In Appendix C, we prove the below convergence rate for this update.
Proposition 3. Consider instances of problem (1) that can be written in the form (4) where h′ is L_h-Lipschitz continuous and each g_i′ is L_g-Lipschitz continuous, and assume that we set η and m so that 0 < ρ(L_m) < 1 with L_m = max{L_g, L_h}. Then the regularized SVRG iteration (5) has

    E[f(x^{s+1}) − f(x*)] ≤ ρ(L_m) E[f(x^s) − f(x*)].

Since L_m ≤ L, and strictly so in the case of ℓ2-regularization, this result shows that for ℓ2-regularized problems SVRG actually converges faster than the standard analysis would indicate (a similar result appears in Konečný et al. [18]). Further, this result gives a theoretical justification for using the update (5) for other h functions where it is not equivalent to the original SVRG method.
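In code, the update (5) is a one-line change from the plain SVRG step. The sketch below assumes user-supplied gradients h_grad for the regularizer and g_grad(i, x) for the data terms, with ds = (1/n) Σ_i g_i′(x^s):

def reg_svrg_step(x, xs, ds, g_grad, h_grad, it, eta):
    # exact step on the regularizer, variance-reduced step on the data term
    return x - eta * (h_grad(x) + g_grad(it, x) - g_grad(it, xs) + ds)

When the g_i′ are sparse, only the coordinates touched by g′_{i_t} need fresh computation, with the dense h′(x) and d^s contributions applied lazily; this is what makes the full-vector-free implementations mentioned above possible.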
7  Mini-Batching Strategies

Konečný et al. [18] have also recently considered using batches of data within SVRG. They consider using 'mini-batches' in the inner iteration (the update of x_t) to decrease the variance of the method, but still use full passes through the data to compute d^s. This prior work is thus complementary to the current work (in practice, both strategies can be used to improve performance). In Appendix D we show that sampling the inner mini-batch proportional to L_i achieves a convergence rate of

    E[f(x^{s+1}) − f(x*)] ≤ ρ_M E[f(x^s) − f(x*)],

where M is the size of the mini-batch, while

    ρ_M = (1 / (1 − 2ηL̄/M)) ( 1/(mημ) + 2ηL̄/M ),

and we assume 0 < ρ_M < 1. This generalizes the standard rate of SVRG and improves on the result of Konečný et al. [18] in the smooth case. This rate can be faster than the rate of the standard SVRG method at the cost of a more expensive iteration, and may be clearly advantageous in settings where parallel computation allows us to compute several gradients simultaneously.
The regularized SVRG form (5) suggests an alternate mini-batch strategy for problem (1): consider a mini-batch that contains a 'fixed' set B_f and a 'random' set B_t. Without loss of generality, assume that we sort the f_i based on their L_i values so that L_1 ≥ L_2 ≥ ... ≥ L_n. For the fixed B_f we will always choose the M_f values with the largest L_i, B_f = {f_1, f_2, ..., f_{M_f}}. In contrast, we choose the members of the random set B_t by sampling from B_r = {f_{M_f+1}, ..., f_n} proportional to their Lipschitz constants, p_i = L_i/(M_r L̄_r) with L̄_r = (1/M_r) Σ_{i=M_f+1}^n L_i. In Appendix D, we show the following convergence rate for this strategy:
Proposition 4. Let g(x) = (1/n) Σ_{i∉B_f} f_i(x) and h(x) = (1/n) Σ_{i∈B_f} f_i(x). If we replace the SVRG update with

    x_{t+1} = x_t − η( h′(x_t) + (1/M_r) Σ_{i∈B_t} (L̄_r/L_i) ( f_i′(x_t) − f_i′(x^s) ) + g′(x^s) ),

then the convergence rate is

    E[f(x^{s+1}) − f(x*)] ≤ ρ(α, β) E[f(x^s) − f(x*)],

where α = (n − M_f) L̄_r / ((M − M_f) n) and β = max{L_1/n, α}.
If L_1 ≥ nL̄/M and M_f < (κ − 1)nM/(κn − M) with κ = L̄/L̄_r, then we get a faster convergence rate than SVRG with a mini-batch of size M. The scenario where this rate is slower than the existing mini-batch SVRG strategy is when L_1 ≤ nL̄/M. But we could relax this assumption by dividing each element of the fixed set B_f into two functions: αf_i and (1 − α)f_i, where α = 1/M, then replacing each function f_i in B_f with αf_i and adding (1 − α)f_i to the random set B_r. This result may be relevant if we have access to a field-programmable gate array (FPGA) or graphical processing unit (GPU) that can compute the gradient for a fixed subset of the examples very efficiently. However, our experiments (Appendix F) indicate this strategy only gives marginal gains.
In Appendix F, we also consider constructing mini-batches by sampling proportional to f_i(x^s) or ‖f_i′(x^s)‖. These seemed to work as well as Lipschitz sampling on all but one of the datasets in our experiments, and this strategy is appealing because we have access to these values while we may not know the L_i values. However, these strategies diverged on one of the datasets.
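For reference, Lipschitz-proportional sampling of an inner mini-batch is a one-liner; this sketch assumes the constants L_i are available:

import numpy as np

def sample_minibatch(L, M, rng=np.random.default_rng(0)):
    # sample M indices with probability proportional to L_i
    p = np.asarray(L, dtype=float)
    return rng.choice(len(p), size=M, replace=True, p=p / p.sum())

Sampling proportional to f_i(x^s) or ‖f_i′(x^s)‖ simply replaces L with those values, with the caveat on divergence noted above.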
8  Learning efficiency

In this section we compare the performance of SVRG as a large-scale learning algorithm to that of FG and SG methods. Following Bottou & Bousquet [19], we can formulate the generalization error E of a learning algorithm as the sum of three terms

    E = E_app + E_est + E_opt,

where the approximation error E_app measures the effect of using a limited class of models, the estimation error E_est measures the effect of using a finite training set, and the optimization error E_opt measures the effect of inexactly solving problem (1). Bottou & Bousquet [19] study the asymptotic performance of various algorithms for a fixed approximation error and under certain conditions on the distribution of the data, depending on parameters α or ν. In Appendix E, we discuss how SVRG can be analyzed in their framework. The table below includes SVRG among their results.
Algorithm | Time to reach E_opt ≤ ε    | Time to reach E = O(E_app + ε)
----------+----------------------------+------------------------------------------
FG        | O(nκd log(1/ε))            | O((d²κ/ε^{1/α}) log(1/ε))
SG        | O(dνκ²/ε)                  | O(dνκ²/ε)
SVRG      | O((n + κ)d log(1/ε))       | O((d/ε^{1/α}) log(1/ε) + κd log(1/ε))

[A third column, comparing against the previously best-known times under a growth condition relating κ and n, did not survive extraction.]
In this table, the condition number is κ = L/μ. In this setting, linearly-convergent stochastic gradient methods can obtain better bounds for ill-conditioned problems, with a better dependence on the dimension and without depending on the noise variance ν.
9  Experimental Results

In this section, we present experimental results that evaluate our proposed variations on the SVRG method. We focus on logistic regression classification: given a set of training data (a_1, b_1), ..., (a_n, b_n) where a_i ∈ R^d and b_i ∈ {−1, +1}, the goal is to find the x ∈ R^d solving

    argmin_{x∈R^d} (λ/2)‖x‖² + (1/n) Σ_{i=1}^n log(1 + exp(−b_i a_i⊤ x)).
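For this objective, the component gradients used by SVRG take a simple closed form; the sketch below is our illustration (A holds the rows a_i, b the labels in {−1, +1}):

import numpy as np

def logistic_grad(i, x, A, b, lam):
    # f_i'(x) for the l2-regularized logistic loss above
    margin = b[i] * A[i].dot(x)
    return lam * x - b[i] * A[i] / (1.0 + np.exp(margin))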
We consider the datasets used by [1], whose properties are listed in the supplementary material. As
in their work we add a bias variable, normalize dense features, and set the regularization parameter λ to 1/n. We used a step-size of η = 1/L and we used m = |B^s|, which gave good performance across methods and datasets. In our first experiment, we compared three variants of SVRG: the original strategy that uses all n examples to form d^s (Full), a growing batch strategy that sets |B^s| = 2^s
(Grow), and the mixed SG/SVRG described by Algorithm 2 under this same choice (Mixed). While
a variety of practical batching methods have been proposed in the literature [13, 20, 21], we did not
find that any of these strategies consistently outperformed the doubling used by the simple Grow
[Figure 1 appears here; only axis labels and the legend survived extraction. Both rows plot effective passes through the data against the training objective minus the optimum (left) and the test error (right), comparing Full, Grow, and Mixed in the top row and Full, Grow, SV(Full), and SV(Grow) in the bottom row.]
Figure 1: Comparison of training objective (left) and test error (right) on the spam dataset for the
logistic regression (top) and the HSVM (bottom) losses under different batch strategies for choosing
d^s (Full, Grow, and Mixed) and whether we attempt to identify support vectors (SV).
strategy. Our second experiment focused on the `2 -regularized HSVM on the same datasets, and we
compared the original SVRG algorithm with variants that try to identify the support vectors (SV).
We plot the experimental results for one run of the algorithms on one dataset in Figure 1, while
Appendix F reports results on the other 8 datasets over 10 different runs. In our results, the growing
batch strategy (Grow) always had better test error performance than using the full batch, while for
large datasets it also performed substantially better in terms of the training objective. In contrast,
the Mixed strategy sometimes helped performance and sometimes hurt performance. Utilizing support vectors often improved the training objective, often by large margins, but its effect on the test
objective was smaller.
10  Discussion
As SVRG is the only memory-free method among the new stochastic linearly-convergent methods,
it represents the natural method to use for a huge variety of machine learning problems. In this
work we show that the convergence rate of the SVRG algorithm can be preserved even under an
inexact approximation to the full gradient. We also showed that using mini-batches to approximate
d^s gives a natural way to do this, explored the use of support vectors to further reduce the number of
gradient evaluations, gave an analysis of the regularized SVRG update, and considered several new
mini-batch strategies. Our theoretical and experimental results indicate that many of these simple
modifications should be considered in any practical implementation of SVRG.
Acknowledgements
We would like to thank the reviewers for their helpful comments. This research was supported by
the Natural Sciences and Engineering Research Council of Canada (RGPIN 312176-2010, RGPIN
311661-08, RGPIN-06068-2015). Jakub Konečný is supported by a Google European Doctoral
Fellowship.
References
[1] N. Le Roux, M. Schmidt, and F. Bach, "A stochastic gradient method with an exponential convergence rate for strongly-convex optimization with finite training sets," Advances in Neural Information Processing Systems (NIPS), 2012.
[2] S. Shalev-Shwartz and T. Zhang, "Stochastic dual coordinate ascent methods for regularized loss minimization," Journal of Machine Learning Research, vol. 14, pp. 567-599, 2013.
[3] J. Mairal, "Optimization with first-order surrogate functions," International Conference on Machine Learning (ICML), 2013.
[4] A. Defazio, F. Bach, and S. Lacoste-Julien, "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives," Advances in Neural Information Processing Systems (NIPS), 2014.
[5] M. Mahdavi, L. Zhang, and R. Jin, "Mixed optimization for smooth functions," Advances in Neural Information Processing Systems (NIPS), 2013.
[6] R. Johnson and T. Zhang, "Accelerating stochastic gradient descent using predictive variance reduction," Advances in Neural Information Processing Systems (NIPS), 2013.
[7] L. Zhang, M. Mahdavi, and R. Jin, "Linear convergence with condition number independent access of full gradients," Advances in Neural Information Processing Systems (NIPS), 2013.
[8] J. Konečný and P. Richtárik, "Semi-stochastic gradient descent methods," arXiv preprint, 2013.
[9] M. Schmidt, N. Le Roux, and F. Bach, "Convergence rates of inexact proximal-gradient methods for convex optimization," Advances in Neural Information Processing Systems (NIPS), 2011.
[10] C. Hu, J. Kwok, and W. Pan, "Accelerated gradient methods for stochastic optimization and online learning," Advances in Neural Information Processing Systems (NIPS), 2009.
[11] L. Xiao and T. Zhang, "A proximal stochastic gradient method with progressive variance reduction," SIAM Journal on Optimization, vol. 24, no. 2, pp. 2057-2075, 2014.
[12] S. Lohr, Sampling: Design and Analysis. Cengage Learning, 2009.
[13] M. P. Friedlander and M. Schmidt, "Hybrid deterministic-stochastic methods for data fitting," SIAM Journal of Scientific Computing, vol. 34, no. 3, pp. A1351-A1379, 2012.
[14] A. Aravkin, M. P. Friedlander, F. J. Herrmann, and T. van Leeuwen, "Robust inversion, dimensionality reduction, and randomized sampling," Mathematical Programming, vol. 134, no. 1, pp. 101-125, 2012.
[15] S. Rosset and J. Zhu, "Piecewise linear regularized solution paths," The Annals of Statistics, vol. 35, no. 3, pp. 1012-1030, 2007.
[16] T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods - Support Vector Learning (B. Schölkopf, C. Burges, and A. Smola, eds.), ch. 11, pp. 169-184, Cambridge, MA: MIT Press, 1999.
[17] N. Usunier, A. Bordes, and L. Bottou, "Guarantees for approximate incremental SVMs," International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[18] J. Konečný, J. Liu, P. Richtárik, and M. Takáč, "mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting," arXiv preprint, 2014.
[19] L. Bottou and O. Bousquet, "The tradeoffs of large scale learning," Advances in Neural Information Processing Systems (NIPS), 2007.
[20] R. H. Byrd, G. M. Chin, J. Nocedal, and Y. Wu, "Sample size selection in optimization methods for machine learning," Mathematical Programming, vol. 134, no. 1, pp. 127-155, 2012.
[21] K. van den Doel and U. Ascher, "Adaptive and stochastic algorithms for EIT and DC resistivity problems with piecewise constant solutions and many measurements," SIAM Journal of Scientific Computing, vol. 34, 2012.
| 5711 |@word inversion:1 advantageous:1 proportion:1 norm:3 bf:6 tedious:1 hu:1 bn:1 decomposition:1 pick:2 minus:2 reduction:4 initial:4 liu:1 contains:1 ati:2 existing:2 current:2 com:2 skipping:2 gmail:2 must:1 written:2 gpu:1 fn:1 plot:1 update:11 half:1 fewer:1 intelligence:1 lr:1 iterates:1 zhang:8 mathematical:2 prove:2 fitting:3 x0:6 rapid:2 growing:4 multi:1 decreasing:2 byrd:1 solver:1 increasing:2 provided:2 begin:1 notation:1 bounded:1 argmin:1 complimentary:1 substantially:1 scient:1 impractical:1 wasting:1 guarantee:2 pseudo:1 every:3 sag:5 runtime:1 k2:9 scaled:1 schwartz:1 control:2 unit:1 positive:1 engineering:2 path:1 might:1 doctoral:1 suggests:1 limited:1 kfi:1 bi:4 practical:7 practice:2 implement:1 procedure:1 empirical:1 composite:1 convenient:1 alim:2 get:1 close:2 selection:2 context:1 equivalent:1 deterministic:1 reviewer:1 convex:5 focused:1 formulate:1 roux:4 identifying:2 array:1 utilizing:1 dominate:1 classic:7 coordinate:2 justification:2 variation:1 hurt:1 annals:1 exact:3 programming:2 us:3 element:2 expensive:1 updating:1 cut:1 observed:1 kxk1:1 bottom:1 preprint:2 electrical:1 richt:2 decrease:3 convexity:1 ultimately:1 solving:4 predictive:1 f2:1 efficiency:1 eit:1 chapter:1 various:1 regularizer:4 fast:3 effective:4 artificial:1 choosing:1 h0:2 shalev:1 whose:1 heuristic:6 larger:1 solve:3 supplementary:1 relax:1 statistic:2 gi:3 rgpin:3 online:1 sequence:4 kxt:1 advantage:1 differentiable:1 propose:1 reset:1 relevant:1 fi0:14 normalize:1 olkopf:1 convergence:26 requirement:1 optimum:2 incremental:2 converges:5 derive:1 sallinen:1 illustrate:1 depending:4 virani:1 ac:1 school:1 progress:4 strong:1 dividing:1 c:1 involves:1 indicate:4 implies:4 skip:6 direction:1 aravkin:1 stochastic:18 subsequently:1 material:1 require:5 f1:1 generalization:3 proposition:6 exploring:1 strictly:1 considered:3 exp:1 diverged:1 lm:4 achieves:3 early:6 consecutive:2 estimation:1 diminishes:1 outperformed:1 council:1 largest:2 minimization:1 mit:1 clearly:1 always:2 arik:2 rather:1 pn:3 avoid:1 focus:1 joachim:1 improvement:1 consistently:1 contrast:2 inflection:1 helpful:1 inference:1 typically:4 bt:3 tak:1 interested:1 dual:3 classification:2 among:2 ill:1 ms2gd:1 constrained:1 special:3 marginal:1 field:2 equal:1 construct:1 saving:1 sampling:10 represents:1 lit:1 progressive:1 icml:1 ascher:1 future:1 np:1 report:1 piecewise:2 few:1 randomly:2 simultaneously:1 replacement:3 maintain:3 attempt:1 huge:3 evaluation:17 analyzed:1 nl:3 kone:7 accurate:1 lh:2 iv:1 theoretical:2 leeuwen:1 instance:1 cost:2 subset:2 uniform:2 fpga:1 johnson:2 sv:6 proximal:4 my:1 rosset:1 eopt:3 international:2 siam:3 randomized:1 nm:1 containing:1 choose:3 return:2 li:9 mahdavi:2 includes:3 later:2 try:1 performed:1 helped:1 analyze:3 sort:1 option:2 maintains:1 parallel:1 square:1 accuracy:1 variance:9 who:1 efficiently:1 correspond:1 identify:3 b1s:1 drive:1 app:1 resistivity:1 reach:2 sharing:1 whenever:1 ed:1 inexact:3 frequency:1 mohamed:1 pp:7 proof:2 psi:5 stop:1 sampled:1 gain:1 dataset:2 improves:3 dimensionality:1 actually:3 appears:2 improved:1 strongly:3 generality:1 smola:1 replacing:1 google:1 minibatch:1 schmidtm:1 logistic:5 scientific:1 effect:4 unbiased:1 ekfi0:1 regularization:2 encourages:1 trying:1 chin:1 outline:1 l1:3 meaning:2 fi:21 recently:2 predominantly:1 common:2 mlr:1 empirically:1 reza:1 exponentially:3 significant:1 measurement:1 cambridge:1 ai:2 rd:6 mathematics:1 had:1 access:3 add:1 showed:1 optimizing:1 scenario:1 occasionally:1 store:1 certain:2 mr:2 
converge:1 ii:2 semi:3 full:17 needing:1 smooth:6 faster:7 calculation:4 bach:3 long:3 a1:1 variant:5 regression:6 arxiv:2 iteration:31 sometimes:2 kernel:1 preserved:2 justified:1 want:3 huberized:2 fellowship:1 addressed:1 else:3 grow:10 crucial:1 sch:1 extra:2 ascent:2 pass:5 comment:1 member:1 spirit:1 iii:1 iterate:1 affect:1 variate:1 fit:7 gave:2 variety:2 reduce:8 inner:2 cn:7 br:2 tradeoff:1 bottleneck:1 whether:1 defazio:1 accelerating:1 fmf:2 repeatedly:1 programmable:1 deep:1 dramatically:1 listed:1 svms:1 reduced:2 outperform:1 canonical:1 vol:7 salient:1 threshold:1 kes:1 lacoste:1 nocedal:1 sum:2 run:4 reasonable:1 wu:1 appendix:12 bound:2 convergent:3 x2:1 bousquet:3 generates:1 speed:2 min:3 department:2 alternate:2 ball:1 smaller:2 across:1 pan:1 appealing:2 making:2 b:1 modification:4 den:1 kfi0:1 ln:1 previously:2 remains:2 discus:1 ln1:1 know:4 end:5 usunier:1 generalizes:1 operation:2 apply:1 kwok:1 away:1 appropriate:2 batching:4 save:1 batch:29 alternative:1 schmidt:3 slower:4 gate:1 original:3 assumes:1 top:1 include:1 graphical:2 log2:2 hinge:3 maintaining:1 doel:1 exploit:1 unchanged:1 implied:1 objective:9 occurs:1 strategy:27 dependence:2 traditional:1 surrogate:1 gradient:46 thank:1 mapped:1 considers:1 code:1 mini:13 minimizing:1 equivalently:1 lg:2 unfortunately:1 implementation:2 design:1 ski:5 perform:1 datasets:7 finite:3 jin:2 descent:3 dc:1 canada:1 required:4 nu:1 alternately:1 nip:8 suggested:1 eest:2 below:3 scott:2 xm:3 sparsity:1 max:5 memory:4 natural:4 hybrid:1 regularized:10 indicator:1 advanced:1 zhu:1 improve:3 julien:1 columbia:2 prior:1 sg:22 l2:1 literature:1 kf:1 acknowledgement:1 friedlander:2 asymptotic:1 loss:8 sublinear:1 mixed:15 proportional:3 xiao:3 gi0:2 storing:1 pi:1 bordes:1 changed:1 supported:2 free:1 svrg:67 drastically:1 bias:1 allow:1 burges:1 sparse:1 fg:5 edinburgh:1 benefit:1 van:2 dimension:1 evaluating:1 lipschitzcontinuous:1 transition:1 seemed:1 commonly:2 hsvm:3 herrmann:1 adaptive:1 spam:1 far:4 approximate:5 keep:1 mairal:1 b1:1 knew:1 don:2 continuous:3 search:1 iterative:1 table:2 terminate:1 robust:2 ca:2 ignoring:1 improving:1 bottou:4 european:1 constructing:1 did:1 aistats:1 dense:1 linearly:3 noise:1 eapp:2 x1:1 n:1 shrinking:1 sub:2 explicit:1 saga:1 exponential:2 comput:1 kxk2:2 british:2 xt:30 jakub:2 list:3 x:40 svm:2 explored:2 adding:1 conditioned:1 margin:1 mf:5 simply:1 explore:2 doubling:1 applies:1 ch:1 ubc:2 minimizer:1 satisfies:2 relies:1 inexactly:2 ma:1 conditional:1 goal:1 lipschitz:5 replace:2 specifically:2 uniformly:1 pas:1 ece:1 e:6 experimental:4 highdimensional:1 mark:1 support:19 accelerated:1 evaluate:4 |
5,205 | 5,712 | Spectral Norm Regularization of Orthonormal
Representations for Graph Transduction
Rakesh Shivanna
Google Inc.
Mountain View, CA, USA
rakeshshivanna@google.com
Bibaswan Chatterjee
Dept. of Computer Science & Automation
Indian Institute of Science, Bangalore
bibaswan.chatterjee@csa.iisc.ernet.in
Raman Sankaran, Chiranjib Bhattacharyya
Dept. of Computer Science & Automation
Indian Institute of Science, Bangalore
ramans,chiru@csa.iisc.ernet.in
Francis Bach
INRIA - Sierra Project-team
École Normale Supérieure, Paris, France
francis.bach@ens.fr
Abstract
Recent literature [1] suggests that embedding a graph on a unit sphere leads to better generalization for graph transduction. However, the choice of the optimal embedding and an efficient algorithm to compute it remain open. In this paper, we show that orthonormal representations, a class of unit-sphere graph embeddings, are PAC learnable. Existing PAC-based analyses do not apply, as the VC dimension of the function class is infinite. We propose an alternative PAC-based bound, which does not depend on the VC dimension of the underlying function class but is related to the famous Lovász ϑ function. The main contribution of the paper is SPORE, a SPectral regularized ORthonormal Embedding for graph transduction, derived from the PAC bound. SPORE is posed as a non-smooth convex minimization over an elliptope. Such problems are usually solved as semi-definite programs (SDPs) with time complexity O(n^6). We present Infeasible Inexact Proximal (IIP), an inexact proximal method which performs a subgradient procedure on an approximate projection, not necessarily feasible. IIP is more scalable than an SDP solver, has O(1/√T) convergence, and is generally applicable whenever a suitable approximate projection is available. We use IIP to compute SPORE, where the approximate projection step is computed by FISTA, an accelerated gradient descent procedure. We show that the method has a convergence rate of O(1/√T). The proposed algorithm easily scales to 1000's of vertices, while the standard SDP computation does not scale beyond a few hundred vertices. Furthermore, the analysis presented here easily extends to the multiple-graph setting.
1 Introduction
Learning problems on graph-structured data have received significant attention in recent years [11,
17, 20]. We study an instance of graph transduction, the problem of learning labels on vertices of
simple graphs1 . A typical example is webpage classification [20], where a very small part of the
entire web is manually classified. Even for simple graphs, predicting binary labels of the unlabeled
vertices is NP-complete [6].
More formally, let G = (V, E), V = [n], be a simple graph with unknown labels y ∈ {±1}^n. Without loss of generality, let the labels of the first m ∈ [n] vertices be observable, and let u := n − m.
1 A simple graph is an unweighted, undirected graph with no self-loops or multiple edges.
Let y_S and y_S̄ denote the labels of S = [m] and S̄ = V \ S. Given G and y_S, the goal is to learn soft predictions ŷ ∈ R^n such that er_S̄^ℓ[ŷ] := (1/|S̄|) Σ_{j∈S̄} ℓ(y_j, ŷ_j) is small, where ℓ is any loss function. The following formulation has been extensively used [19, 20]:

    min_{ŷ∈R^n}  er_S^ℓ[ŷ] + λ ŷ^T K^{-1} ŷ,    (1)
where K is a graph-dependent kernel and λ > 0 is a regularization constant. Let ŷ* be the solution to (1), given G and S ⊂ V, |S| = m. [1] proposed the following generalization bound:

    E_{S⊂V}[ er_S̄^ℓ[ŷ*] ]  ≤  c_1 inf_{ŷ∈R^n} ( er_V^ℓ[ŷ] + λ ŷ^T K^{-1} ŷ )  +  c_2 √(tr_p(K)) / (λ|S|),    (2)

where c_1, c_2 depend on ℓ and tr_p(K) = ( (1/n) Σ_{i∈[n]} K_ii^p )^{1/p}, p > 0. [1] argued that tr_p(K)
should be constant, which can be enforced by normalizing the diagonal entries of K to be 1. This is important advice in graph transduction; however, the set of normalized kernels is quite large, and (2) gives little insight into choosing the optimal kernel.
Normalizing the diagonal entries of K can be viewed geometrically as embedding the graph on a
unit sphere. Recently, [16] studied a rich class of unit sphere graph embeddings, called orthonormal
representations [13], and find that it is statistically consistent for graph transduction. However, the
choice of the optimal orthonormal embedding is not clear. We study orthonormal representations from a probably approximately correct (PAC) learning point of view, for the following equivalent [19] kernel learning formulation of (1), with C = 1/(λm):

    ω_C(K, y_S) = max_{α∈R^n}  Σ_{i∈S} α_i − (1/2) Σ_{i,j∈S} α_i α_j y_i y_j K_ij   s.t.  0 ≤ α_i ≤ C ∀i ∈ S,  α_j = 0 ∀j ∉ S.    (3)

Note that the final predictions are given by ŷ_i = Σ_{j∈S} K_ij α*_j y_j for all i ∈ [n], where α* is the optimal solution to (3).
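For a fixed kernel K ∈ K(G), (3) is a standard SVM dual, so it can be solved with an off-the-shelf solver. The sketch below is an assumed workflow, not the authors' implementation; note that scikit-learn's SVC also fits an intercept, which (3) does not have, so the correspondence is only approximate.

```python
import numpy as np
from sklearn.svm import SVC

def spore_predict(K, S, y_S, C):
    # Solve the dual (3) on the labelled block of the kernel matrix.
    svc = SVC(C=C, kernel="precomputed")
    svc.fit(K[np.ix_(S, S)], y_S)
    alpha_y = np.zeros(len(S))
    alpha_y[svc.support_] = svc.dual_coef_[0]  # entries equal alpha_j * y_j
    # y_hat_i = sum_{j in S} K_ij alpha_j y_j for every vertex i.
    return K[:, S] @ alpha_y
```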
Contributions. We make the following contributions:
- Using (3), we show that the class of orthonormal representations is efficiently PAC learnable over a large class of graph families, including power-law and random graphs.
- The above analysis suggests that spectral norm regularization could be beneficial in computing the best embedding. To this end we pose the problem of SPectral norm regularized ORthonormal Embedding (SPORE) for graph transduction, namely that of minimizing a convex function over an elliptope. One could solve such problems as SDPs, which unfortunately do not scale well beyond a few hundred vertices.
- We propose the infeasible inexact proximal (IIP) method, a novel projected subgradient descent algorithm in which the projection is approximated by an inexact proximal method. We suggest a novel approximation criterion that approximates the proximal operator for the support function of the feasible set to within a given precision. One can compute an approximation to the projection from the inexact proximal point, which may not be feasible, hence the name IIP. We prove that IIP converges to the optimal minimum of a non-smooth convex function at rate O(1/√T) in T iterations.
- The IIP algorithm is then applied to the case where the set of interest is the intersection of two convex sets. The proximal operator for the support function of the set of interest can be obtained using the FISTA algorithm, once we know the proximal operators for the support functions of the individual sets involved.
- Our analysis paves the way for learning labels on multiple graphs by adopting an MKL-style approach. We present both algorithmic and generalization results.
Notations. Let ‖·‖ and ‖·‖_F denote the Euclidean and Frobenius norms respectively. Let S^n and S^n_+ denote the sets of n×n symmetric and symmetric positive semi-definite matrices respectively. Let R^n_+ be the non-negative orthant. Let S^{n−1} = { u ∈ R^n_+ : ‖u‖_1 = 1 } denote the (n−1)-dimensional simplex. Let [n] := {1, . . . , n}. For any M ∈ S^n, let λ_1(M) ≥ . . . ≥ λ_n(M) denote its eigenvalues. We denote the adjacency matrix of a graph G by A. Let Ḡ denote the complement graph of G, with adjacency matrix Ā = 11^T − I − A, where 1 is the all-ones vector and I is the identity matrix. Let Y = {±1} and Ŷ = R be the label and soft-prediction spaces over V. Given y ∈ Y and ŷ ∈ Ŷ, we use ℓ^{0-1}(y, ŷ) = 1[yŷ < 0] and ℓ^{hng}(y, ŷ) = (1 − yŷ)_+ to denote the 0-1 and hinge losses respectively. The notations O, o, Ω, Θ denote standard measures in asymptotic analysis [4].
Related work. [1]'s analysis was restricted to Laplacian matrices and does not give insight into choosing the optimal unit-sphere embedding. [2] studied graph transduction using the PAC model; however, for graph orthonormal embeddings there is no known sample complexity estimate. [16] showed that working with orthonormal embeddings leads to consistency. However, the choice of the optimal embedding and an efficient algorithm to compute it remain open issues. Furthermore, we show that [16]'s sample complexity estimate is sub-optimal.
Preliminaries. An orthonormal embedding [13] of a simple graph G = (V, E), V = [n], is defined by a matrix U = [u_1, . . . , u_n] ∈ R^{d×n} such that u_i^T u_j = 0 whenever (i, j) ∉ E and ‖u_i‖ = 1 for all i ∈ [n]. Let Lab(G) denote the set of all possible orthonormal embeddings of the graph G: Lab(G) := { U | U is an orthonormal embedding }. Recently, [8] showed an interesting connection to the set of graph kernel matrices

    K(G) := { K ∈ S^n_+ | K_ii = 1 ∀i ∈ [n]; K_ij = 0 ∀(i, j) ∉ E }.

Note that K ∈ K(G) is positive semidefinite, and hence there exists U ∈ R^{d×n} such that K = U^T U. Note that K_ij = u_i^T u_j, where u_i is the i-th column of U; hence by inspection it is clear that U ∈ Lab(G). By a similar argument, for any U ∈ Lab(G) the matrix K = U^T U ∈ K(G). Thus the two sets Lab(G) and K(G) are equivalent.

Furthermore, orthonormal embeddings are associated with an interesting quantity, the Lovász ϑ function [13, 7]. However, computing ϑ requires solving an SDP, which is impractical.
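As a quick illustration of the constraint set K(G), the following check (a hypothetical helper, not from the paper) verifies that a candidate K is a valid graph kernel matrix up to numerical tolerance.

```python
import numpy as np

def in_K_of_G(K, A, tol=1e-8):
    """A is the 0/1 adjacency matrix of G; K is the candidate kernel."""
    n = len(K)
    non_edges = (A == 0) & ~np.eye(n, dtype=bool)
    return (np.allclose(np.diag(K), 1.0, atol=tol)        # unit diagonal
            and np.allclose(K[non_edges], 0.0, atol=tol)  # zeros on non-edges
            and np.linalg.eigvalsh(K).min() >= -tol)      # positive semidefinite
```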
2 Generalization Bound for Graph Transduction using Orthonormal Embeddings
In this section we derive a generalization bound, used in the sequel for PAC analysis. We derive the
following error bound, valid for any orthonormal embedding (supplementary material, Section B).
Theorem 1 (Generalization bound). Let G = (V, E) be a simple graph with unknown binary labels y ∈ Y^n on the vertices V. Let K ∈ K(G). Given G and the labels of a randomly drawn subgraph S, let ŷ ∈ Ŷ^n be the predictions learnt by ω_C(K, y_S) in (3). Then, for m ≤ n/2, with probability ≥ 1 − δ over the choice of S ⊂ V such that |S| = m,

    er_S̄^{0-1}[ŷ] ≤ (1/m) Σ_{i∈S} ℓ^{hng}(y_i, ŷ_i) + 2C √(2 λ_1(K)) + O( √( (1/m) log(1/δ) ) ).    (4)
Note that the above is a high-probability bound, in contrast to the expected-value analysis in (2). The result also suggests that graph embeddings with low spectral norm and low empirical error lead to better generalization. [1]'s analysis in (2) suggests that we should embed a graph on a unit sphere, but does not help to choose the optimal embedding for graph transduction. Exploiting our analysis from (4), we present a spectral norm regularized algorithm in Section 3.
We would also like to study the PAC learnability of orthonormal embeddings, defined as follows: given G and y, does there exist m̄ < n such that, w.p. ≥ 1 − δ over S ⊂ V with |S| ≥ m̄, the generalization error satisfies er_S̄^{0-1} ≤ ε? The quantity m̄ is termed the labelled sample complexity [2]. Existing analyses [2] do not apply to orthonormal embeddings, as discussed in the related work in Section 1. Theorem 1 allows us to derive improved statistical estimates (Section 3).
3 SPORE Formulation and PAC Analysis

Theorem 1 suggests that penalizing the spectral norm of K should lead to better generalization. To this end we motivate the following formulation:

    Ψ_{C,λ}(G, y_S) = min_{K∈K(G)} g(K),  where  g(K) = ω_C(K, y_S) + λ λ_1(K).    (5)

2 (a)_+ = max(a, 0) for all a ∈ R.
(5) gives an optimal orthonormal embedding, the optimal K, which we will refer to as SPORE. In
this section we first study the PAC learnability of SPORE, and derive a labelled sample complexity
estimate. Next, we study efficient computation of SPORE. Though SPORE can be posed as an SDP,
we show in Section 4 that it is possible to exploit the structure, and solve efficiently.
Given G and y_S, the function ω_C(K, y_S) is convex in K, as it is a maximum of affine functions of K. The spectral norm λ_1(K) is also convex, and hence g(K) is a convex function. Furthermore, K(G) is an elliptope [5], a convex body described by the intersection of a positive semi-definite constraint and affine constraints. It follows that (5) is a convex problem. Usually such formulations are posed as SDPs, which do not scale beyond a few hundred vertices. In Section 4 we derive an efficient first-order method which can solve for 1000's of vertices. Let K* be the optimal embedding computed from (5). Note that once the kernel is fixed, the predictions depend only on ω_C(K*, y_S). Let α* be the solution to ω_C(K*, y_S) as in (3); the final predictions of (5) are then given by ŷ_i = Σ_{j∈S} K*_ij α*_j y_j for all i ∈ [n].
given by y?i = j?S Kij
?j yj , ?i ? [n].
At this point, we derive an interesting graph-dependent error convergence rate. We gather two
important results, the proof of which appears in the supplementary material, Section C.
?
Lemma 2. Given a simple graph G = (V, E), maxK?K(G) ?1 (K) = ?(G).
Lemma 3. Given G and y, for any S ? V and C > 0, minK?K(G) ?C (K, yS ) ? ?(G)/2.
In the standard PAC setting, there is a complete disconnect between the data distribution and the target hypothesis. However, in the presence of unlabeled nodes, without any assumption on the data it is impossible to learn the labels. Following existing literature [1, 9], we work with similarity graphs, where the presence of an edge means that two nodes are similar, and derive the following (supplementary material, Section C).
Theorem 4. Let G = (V, E), V = [n], be a simple graph with unknown binary labels y ∈ Y^n on the vertices V. Given G and the labels of a randomly drawn subgraph S ⊂ V, m = |S|, let ŷ be the predictions learnt by SPORE (5), for parameters C = ( ϑ(G) / (m ϑ(Ḡ)) )^{1/2} and λ = ϑ(G)/√(ϑ(Ḡ)). Then, for m ≤ n/2, with probability ≥ 1 − δ over the choice of S ⊂ V such that |S| = m,

    er_S̄^{0-1}[ŷ] = O( ( (1/m) ( √(n ϑ(G)) + log(1/δ) ) )^{1/2} ).    (6)
Proof (sketch). Let K* be the kernel learnt by SPORE (5). Using Theorem 1 and Lemma 2, for ŷ we have

    er_S̄^{0-1}[ŷ] ≤ (1/m) Σ_{i∈S} ℓ^{hng}(y_i, ŷ_i) + 2C √(2 ϑ(Ḡ)) + O( √( (1/m) log(1/δ) ) ).    (7)

From the primal formulation of (3), using Lemmas 2 and 3, we get

    C Σ_{i∈S} ℓ^{hng}(y_i, ŷ_i) ≤ ω_C(K*, y_S) ≤ Ψ_{C,λ}(G, y_S) ≤ ϑ(G)/2 + λ ϑ(Ḡ).

Plugging this back into (7), choosing λ such that λ ϑ(Ḡ)/(Cm) = 2C √(2 ϑ(Ḡ)), and optimizing over C gives the choice of parameters stated above. Finally, using ϑ(G) ϑ(Ḡ) ≥ n [13] proves the result.
In the theorem above, Ḡ is the complement graph of G. The optimal orthonormal embedding K* tends to embed vertices in nearby regions if they have connecting edges; hence, the notion of similarity is implicitly captured in the embedding. From (6), for fixed n and m, note that the error converges at a faster rate for a dense graph (ϑ is small) than for a sparse graph (ϑ is large). Such connections to graph structural properties were previously unavailable [1].

We also estimate the labelled sample complexity by bounding (6) by ε > 0, obtaining m̄ = Ω( (1/ε²) ( √(ϑn) + log(1/δ) ) ). This connection supports the intuition that for a sparse graph one would need a larger number of labelled vertices than for a dense graph. For constants ε, δ, we obtain a fractional labelled sample complexity estimate of m/n = Ω̃( (ϑ/n)^{1/2} ), a significant improvement over the recently proposed Ω̃( (ϑ/n)^{1/3} ) [16]. The use of the stronger machinery of Rademacher averages (supplementary material, Section C), instead of VC dimension [2], and specializing to SPORE allows us to improve over existing analyses [1, 16]. The proposed sample complexity estimate is interesting for ϑ = o(n); examples of such graphs include random graphs (ϑ(G(n, p)) = Θ(√n)) and power-law graphs (ϑ̄ = O(√n)).
4 Inexact Proximal Methods for SPORE
In this section, we propose an efficient algorithm to solve SPORE (see (5)). The optimization problem SPORE can be posed as an SDP. Generic SDP solvers have a runtime complexity of O(n^6) and often do not scale well to large graphs. We study first-order methods, such as projected subgradient procedures, as an alternative to SDPs for minimizing g(K). The main computational challenge in developing such procedures is that it is difficult to compute the projection onto the elliptope. One could potentially use the seminal Dykstra's algorithm [3] for finding a feasible point in the intersection of two convex sets. That algorithm finds a point in the intersection only asymptotically, which is a serious disadvantage when using it as a projection sub-routine. It would be useful to have an algorithm which, after a finite number of iterations, yields an approximate projection from which a subsequent descent algorithm can still converge. Motivated by SPORE, we study the problem of minimizing non-smooth convex functions where the projection onto the feasible set can be computed only approximately. Recently there has been increasing interest in studying inexact proximal methods [15, 18]. In the sequel we design an inexact proximal method which yields an O(1/√T) algorithm to solve (5). The algorithm is based on approximating the prox function by an iterative procedure satisfying a suitably designed criterion.
4.1 An Infeasible Inexact Proximal (IIP) Algorithm
Let f be a convex function with a well-defined sub-differential ∂f(x) at every x ∈ X. Consider the following optimization problem:

    min_{x∈X⊂R^d} f(x).    (8)

A subgradient projection iteration of the form

    x_{k+1} = P_X(x_k − η_k h_k),  h_k ∈ ∂f(x_k)    (9)

is often used to arrive at an ε-accurate solution by running the iterations O(1/ε²) times, where P_X(v) = argmin_{x∈X} (1/2)‖v − x‖_F² is the projection of v ∈ R^d on X ⊂ R^d. In many situations, such as X = K(G), it is not possible to compute the projection exactly in a finite amount of time, and one may obtain only an approximate projection. Using the Moreau decomposition P_X(v) + prox_{σ_X}(v) = v [14], one can compute the projection if one can compute prox_{σ_X}, where σ_X(a) = max_{x∈X} x^T a is the support function of X and prox_{g'} denotes the proximal operator of a function g' at v, defined as (see footnote 3)

    prox_{g'}(v) = argmin_{z∈Dom(g')} { p_{g'}(z; v) := (1/2)‖v − z‖² + g'(z) }.    (10)

We assume that one can compute z_X^ε(v), not necessarily in X, such that

    p_{σ_X}(z_X^ε(v); v) ≤ min_{z∈R^n} p_{σ_X}(z; v) + ε,  and set  P_X^ε(v) = v − z_X^ε(v).    (11)

Note that z_X^ε is an inexact prox, and the resulting estimate P_X^ε of the projection can be infeasible, though hopefully not too far away. Setting ε = 0 recovers the exact case. The next theorem confirms that it is possible to converge to the true optimum for a non-zero ε (supplementary material, Section D.5).
it is possible to converge to the true optimum for a non-zero (supplementary material, Section D.5).
Theorem 5. Consider the optimization problem (8). Starting from any kx0 ? x? k ? R, where x? is
a solution of (8), for every k let us assume that we could obtain PX (yk ) such that zk = yk q
? PX (yk )
satisfy (11), where yk = xk ? ?k hk , ?k = khsk k , khk k ? L, kxk ? x? k ? R, s =
Then the iterates
xk+1 = PX (xk ? ?k hk ), hk ? ?f (xk )
3
A more general definition of the proximal operator is ? prox?g0 (v) = argminz?Dom(g0 )
5
1
2?
R2
T
+ .
(12)
kv?zk2 +g 0 (z)
r
yield
fT?
?
?f ?L
R2
+ .
T
(13)
Related Work on Inexact Proximal methods: There has been recent interest in deriving inexact proximal methods such as projected gradient descent, see [15, 18] for a comprehensive list of
references. To the best of our knowledge, composite functions have been analyzed but no one has explored the case that f is non-smooth. The results presented here are thus complementary to [15, 18].
Note the subtlety in using the proper approximation criteria. Using a distance criterion between the
true projection and the approximate projection, or an approximate optimality criteria on the optimal
distance would lead to a worse bound; using a dual approximate optimality criterion (here through
the proximal operator for the support function) is key (as noted in [15, 18] and references therein).
As an immediate consequence of Theorem 5, note that suppose we have an algorithm to compute
prox?X which guarantees after S iterations that
p?X (zS ; v) ? min p?X (z; v) ?
z?Rd
?2
R
,
S2
? particular to the set over which p? is defined. We can initialize =
for a constant R
X
?
that may suggest that one could use S = T iterations to yield
q
?
LR
?
?
?2.
?
fT ? f ? ?
where R = R2 + R
T
(14)
?2
R
S2
in (13),
(15)
Remarks: Computational efficiency dictates that the number of projection steps should?be kept at a
minimum. To this end we see that number of projection steps need to be at least S = T with the
current choice of stepsizes. Let cp be the cost of one iteration of FISTA step and c0 be the cost of
one outer iteration. The total computation cost can be then estimated as T 3/2 ? cp + T ? c0 .
4.2 Applying IIP to compute SPORE

The problem of computing SPORE can be posed as minimizing a non-smooth convex function over an intersection of two sets: K(G) = S^n_+ ∩ P(G), the intersection of the positive semi-definite cone S^n_+ and the polytope of equality constraints P(G) := { M ∈ S^n | M_ii = 1, M_ij = 0 ∀(i, j) ∉ E }. The algorithm described in Theorem 5 readily applies to this setting if the projection can be computed efficiently. The proximal operator for σ_X can be derived as (see footnote 4)

    prox_{σ_X}(v) = argmin_{a,b∈R^d} { p_{σ_X}(a, b; v) := (1/2)‖(a + b) − v‖² + σ_A(a) + σ_B(b) }.    (16)

This means that even if we do not have an efficient procedure for computing prox_{σ_X}(v) directly, we can devise an algorithm guaranteeing the approximation (11) if we can compute prox_{σ_A}(v) and prox_{σ_B}(v) efficiently. This can be done by applying the popular FISTA algorithm to (16), which also guarantees (14). Algorithm 1 (detailed in the supplementary, named IIP_FISTA) computes the following simple steps, followed by the usual FISTA variable updates, at each iteration t: (a) a gradient descent step on a and b with respect to the smooth term (1/2)‖(a + b) − v‖², and (b) a proximal step with respect to σ_A and σ_B using the expressions (14), (21) (supplementary material).
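The FISTA step for (16) is straightforward because the smooth coupling term has a 2-Lipschitz gradient in (a, b). The sketch below assumes prox operators prox_sigma_A(x, tau) and prox_sigma_B(x, tau) for τ·σ_A and τ·σ_B are available (for example via Moreau from the projections onto A and B); it illustrates the splitting and is not the supplementary's IIP_FISTA.

```python
import numpy as np

def prox_split(v, prox_sigma_A, prox_sigma_B, iters):
    a = np.zeros_like(v); b = np.zeros_like(v)
    ua, ub, t = a.copy(), b.copy(), 1.0
    for _ in range(iters):
        g = ua + ub - v                           # gradient of 0.5*||(a+b)-v||^2
        a_new = prox_sigma_A(ua - 0.5 * g, 0.5)   # step 1/L with L = 2
        b_new = prox_sigma_B(ub - 0.5 * g, 0.5)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2  # FISTA momentum
        ua = a_new + (t - 1) / t_new * (a_new - a)
        ub = b_new + (t - 1) / t_new * (b_new - b)
        a, b, t = a_new, b_new, t_new
    return a + b  # approximate prox of the support function of X = A ∩ B at v
```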
Using the tools discussed above, we design Algorithm 1 to solve the SPORE formulation (5) using IIP. The proposed algorithm readily applies to general convex sets; however, we confine ourselves to the specific sets of interest in our problem. The following theorem states the convergence rate of the proposed procedure.

Theorem 6. Consider the optimization problem (8) with X = A ∩ B, where A and B are S^n_+ and P(G) respectively. Starting from any K_0 ∈ A, the iterates K_t in Algorithm 1 satisfy

    min_{t=0,...,T} f(K_t) − f(K*) ≤ (L/√T) √(R² + R̂²).

Proof. An immediate extension of Theorem 5; see supplementary material, Section D.6.

4 The derivation is presented in supplementary material, Claim 6.
Algorithm 1 IIP for SPORE
1: function APPROX-PROJ-SUBG(K_0, L, R, R̂, T)
2:    s ← √(R² + R̂²) / (L√T)    ▷ compute stepsize
3:    Initialize t_0 = 1.
4:    for t = 1, . . . , T do
5:        compute h_{t−1}    ▷ subgradient of f(K) at K_{t−1}; see equation (5)
6:        v_t ← K_{t−1} − (s/‖h_{t−1}‖) h_{t−1}
7:        K̂_t ← IIP_FISTA(v_t, √T)    ▷ FISTA for √T steps; use Algorithm 1 (supp.)
8:        K_t ← Proj_A(K̂_t) = K̂_t − prox_{σ_A}(K̂_t)    ▷ K_t needs to be PSD for the next SVM call; use (14) (supp.)
9:    end for
10: end function
Equating problem (8) with the SPORE problem (5), we have f(K) = ω_C(K, y_S) + λ λ_1(K). The set of subgradients of f at iteration t is given by ∂f(K_t) = { −(1/2) Y α_t α_t^T Y + λ v_t v_t^T }, where α_t is returned by the SVM, v_t is the unit eigenvector corresponding to λ_1(K_t) (see footnote 5), and Y is the diagonal matrix with Y_ii = y_i for i ∈ S and 0 otherwise. The step size is calculated using estimates of L, R and R̂, which can be derived as L = nC², R = n, R̂ = n^{2.5} for the SPORE problem. Check the supplementary material for the derivations.
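A hypothetical numpy rendering of one such subgradient, combining the SVM dual variables on the labelled set with the top eigenvector of K_t, might look as follows.

```python
import numpy as np

def subgrad_f(K, S, y_S, alpha, lam):
    n = len(K)
    ay = np.zeros(n)
    ay[S] = alpha * y_S                 # (Y alpha)_i = alpha_i y_i on labelled nodes
    _, V = np.linalg.eigh(K)
    v = V[:, -1]                        # unit eigenvector for lambda_1(K)
    return -0.5 * np.outer(ay, ay) + lam * np.outer(v, v)
```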
5 Multiple Graph Transduction
Multiple graph transduction is of recent interest in multi-view settings, where individual views are represented by graphs. This includes many practical problems in bioinformatics [17], spam detection [21], etc. We propose an MKL-style extension of SPORE, with improved PAC bounds.

Formally, the problem of multiple graph transduction is stated as follows: let G = {G^(1), . . . , G^(M)} be a set of simple graphs G^(k) = (V, E^(k)) defined on a common vertex set V = [n]. Given G and y_S as before, the goal is to accurately predict y_S̄. Following the standard technique of taking convex combinations of graph kernels [16], we propose the following MKL-SPORE formulation:

    Ψ_{C,λ}(G, y_S) = min_{K^(k)∈K(G^(k))} min_{η∈S^{M−1}} ω_C( Σ_{k∈[M]} η_k K^(k), y_S ) + λ max_{k∈[M]} λ_1(K^(k)).    (17)
Similar to Theorem 4, we can show the following (supplementary material, Theorem 8):

    er_S̄^{0-1}[ŷ] = O( ( (1/m) ( √(n ϑ(G)) + log(1/δ) ) )^{1/2} ),  where  ϑ(G) ≤ min_{k∈[M]} ϑ(G^(k)).    (18)

It immediately follows that combining multiple graphs improves the error convergence rate (see (6)), and hence the labelled sample complexity. The bound also suggests that the presence of at least one good graph is sufficient for MKL-SPORE to learn accurate predictions. This motivates us to use the proposed formulation in the presence of noisy graphs (Section 6). We can also apply the IIP algorithm described in Section 4 to solve (17) (supplementary material, Section F).
6 Experiments
We conducted experiments on both real-world and synthetic graphs to illustrate our theoretical observations. All experiments were run on a CPU with two Xeon quad-core processors (2.66 GHz, 12 MB L2 cache) and 16 GB memory, running CentOS 5.3.
5 α_t = argmax_{α ∈ R^n_+, ‖α‖_∞ ≤ C, α_j = 0 ∀j ∉ S} ( α^T 1 − (1/2) α^T Y K_t Y α ), and v_t = argmax_{v ∈ R^n, ‖v‖ = 1} v^T K_t v.
Table 1: SPORE comparison.
Dataset
Un-Lap N-Lap KS SPORE
breast-cancer 88.22 93.33 92.77 96.67
diabetes
68.89 69.33 69.44 73.33
70.00 70.00 70.44 78.00
fourclass
heart
71.97 75.56 76.42 81.97
ionosphere
67.77 68.00 68.11 76.11
sonar
58.81 58.97 59.29 63.92
mnist-1vs2
75.55 80.55 79.66 85.77
mnist-3v8
76.88 81.88 83.33 86.11
mnist-4v9
68.44 72.00 72.22 74.88
Table 2: Large Scale, 2000 Nodes.
Dataset
Un-Lap N-Lap KS SPORE
mnist-1vs2 83.80 96.23 94.95 96.72
mnist-3vs8 55.15 87.35 87.35 91.35
mnist-5vs6 96.30 94.90 92.05 97.35
mnist-1vs7 90.65 96.80 96.55 97.25
mnist-4vs9 65.55 65.05 61.30 87.40
Graph Transduction (SPORE): We use two sources of datasets, UCI [12] and MNIST [10]. For the UCI datasets we use the RBF kernel (see footnote 6) and threshold at the mean; for the MNIST datasets we construct a similarity matrix using cosine distance for a random sample of 500 nodes and threshold at 0.4 to obtain unweighted graphs. With 10% labelled nodes, we compare SPORE with formulation (3) using graph kernels: the unnormalized Laplacian (c_1 I + L)^{−1}, the normalized Laplacian (c_2 I + D^{−1/2} L D^{−1/2})^{−1}, and K-Scaling [1], where L = D − A and D is the diagonal matrix of degrees. We choose parameters c_1, c_2, C and λ by cross-validation. Table 1 summarizes the results, averaged over 5 different labelled samples; each entry is accuracy in % w.r.t. the 0-1 loss. As expected from Section 3, SPORE significantly outperforms existing methods. We also tackle large-scale graph transduction: Table 2 shows the superior performance of Algorithm 1 for a random sample of 2000 nodes, with only 5 outer iterations and 20 inner projections.
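For reproducibility, the graph construction just described for the UCI datasets can be sketched as follows (the helper and its parameters are assumed, not taken from the paper; the RBF width and mean threshold follow the description above).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def build_graph(X, thresh=None):
    D = squareform(pdist(X))
    sigma = D[D > 0].mean()                      # sigma = mean pairwise distance
    W = np.exp(-D ** 2 / (2 * sigma ** 2))       # RBF affinities
    t = W.mean() if thresh is None else thresh   # mean threshold, as for UCI
    A = (W > t).astype(int)
    np.fill_diagonal(A, 0)                       # simple graph: no self-loops
    return A
```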
Multiple Graph Transduction (MKL-SPORE): We illustrate the effectiveness of combining multiple graphs using mixtures of random graphs G(p, q), p, q ∈ [0, 1], where we fix |V| = n = 100 and the labels y ∈ Y^n such that y_i = 1 if i ≤ n/2 and −1 otherwise. An edge (i, j) is present with probability p if y_i = y_j, and with probability q otherwise. We generate three datasets to simulate the homogeneous, heterogeneous and noisy cases, shown in Table 3.
Table 3: Synthetic multiple graphs dataset.
Graph   Homo.        Heter.       Noisy
G(1)    G(0.7, 0.3)  G(0.7, 0.5)  G(0.7, 0.3)
G(2)    G(0.7, 0.3)  G(0.6, 0.4)  G(0.6, 0.4)
G(3)    G(0.7, 0.3)  G(0.5, 0.3)  G(0.5, 0.5)

Table 4: Superior performance of MKL-SPORE.
Graph            Homo.  Heter.  Noisy
G(1)             84.4   69.2    84.4
G(2)             84.8   68.6    68.2
G(3)             86.4   72.0    54.4
Union            85.5   69.3    69.3
Intersection     83.8   67.5    69.0
Majority         93.7   76.9    76.6
Multiple Graphs  95.6   80.6    81.9
MKL-SPORE was compared with the individual graphs, and with the union, intersection and majority graphs (see footnote 7). We use SPORE to solve single-graph transduction, and the results were averaged over 10 random samples of 5% labelled nodes. With the same comparison metric as before, Table 4 shows that combining multiple graphs improves classification accuracy. Furthermore, the noisy case illustrates the robustness of the proposed formulation, a key observation from (18).
7 Conclusion
We show that the class of orthonormal graph embeddings is efficiently PAC learnable. Our analysis motivates a spectral-norm regularized formulation, SPORE, for graph transduction. Using an inexact proximal method, we design an efficient first-order algorithm to solve the proposed formulation. The algorithm and analysis presented readily generalize to the multiple-graph setting.
Acknowledgments
We acknowledge support from a grant from Indo-French Center for Applied Mathematics (IFCAM).
6 The (i, j)-th entry of an RBF kernel is given by exp( −‖x_i − x_j‖² / (2σ²) ), where σ is set to the mean pairwise distance.
7 The majority graph is the graph in which an edge (i, j) is present if a majority of the graphs have the edge (i, j).
References
[1] R. K. Ando and T. Zhang. Learning on graph with Laplacian regularization. In NIPS, 2007.
[2] N. Balcan and A. Blum. An augmented PAC model for semi-supervised learning. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning. MIT Press, Cambridge, 2006.
[3] J. P. Boyle and R. L. Dykstra. A method for finding projections onto the intersection of convex sets in Hilbert spaces. In Advances in Order Restricted Statistical Inference, volume 37 of Lecture Notes in Statistics, pages 28-47. Springer New York, 1986.
[4] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, volume 2. MIT Press, Cambridge, 2001.
[5] M. Eisenberg-Nagy, M. Laurent, and A. Varvitsiotis. Forbidden minor characterizations for low-rank optimal solutions to semidefinite programs over the elliptope. J. Comb. Theory, Ser. B, 108:40-80, 2014.
[6] A. Erdem and M. Pelillo. Graph transduction as a non-cooperative game. Neural Computation, 24(3):700-723, 2012.
[7] M. X. Goemans. Semidefinite programming in combinatorial optimization. Mathematical Programming, 79(1-3):143-161, 1997.
[8] V. Jethava, A. Martinsson, C. Bhattacharyya, and D. P. Dubhashi. The Lovász ϑ function, SVMs and finding large dense subgraphs. In NIPS, pages 1169-1177, 2012.
[9] R. Johnson and T. Zhang. On the effectiveness of Laplacian normalization for graph semi-supervised learning. JMLR, 8(7):1489-1517, 2007.
[10] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
[11] M. Leordeanu, A. Zanfir, and C. Sminchisescu. Semi-supervised learning and optimization for hypergraph matching. In ICCV, pages 2274-2281. IEEE, 2011.
[12] M. Lichman. UCI machine learning repository, 2013.
[13] L. Lovász. On the Shannon capacity of a graph. Information Theory, IEEE Transactions on, 25(1):1-7, 1979.
[14] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123-231, 2013.
[15] M. Schmidt, N. Le Roux, and F. R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In NIPS, pages 1458-1466, 2011.
[16] R. Shivanna and C. Bhattacharyya. Learning on graphs using orthonormal representation is statistically consistent. In NIPS, pages 3635-3643, 2014.
[17] L. Tran. Application of three graph Laplacian based semi-supervised learning methods to protein function prediction problem. IJBB, 2013.
[18] S. Villa, S. Salzo, L. Baldassarre, and A. Verri. Accelerated and inexact forward-backward algorithms. SIAM Journal on Optimization, 23(3):1607-1633, 2013.
[19] T. Zhang and R. K. Ando. Analysis of spectral kernel design based semi-supervised learning. NIPS, 18:1601, 2005.
[20] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. NIPS, 16(16):321-328, 2004.
[21] D. Zhou and C. J. C. Burges. Spectral clustering and transductive learning with multiple views. In ICML, pages 1159-1166. ACM, 2007.
| 5712 |@word repository:1 norm:9 stronger:1 suitably:1 c0:2 open:2 confirms:1 decomposition:1 ld:1 lichman:1 ecole:1 bhattacharyya:3 pprox:1 outperforms:1 existing:5 kx0:1 current:1 com:1 readily:3 subsequent:1 designed:1 update:1 inspection:1 xk:7 ysp:1 core:1 lr:1 iterates:2 characterization:1 node:7 zhang:3 mathematical:1 c2:4 differential:1 prove:1 khk:1 comb:1 lov:4 expected:2 sdp:6 multi:1 little:1 cpu:1 quad:1 solver:1 increasing:1 cache:1 iisc:2 project:1 underlying:1 notation:2 rivest:1 mountain:1 cm:1 argmin:2 maxa:1 z:1 finding:3 impractical:1 guarantee:3 every:2 tackle:1 runtime:1 k2:1 ser:1 unit:6 grant:1 positive:4 before:2 kuk1:1 local:1 consequence:1 laurent:1 approximately:2 inria:1 therein:1 studied:2 equating:1 k:2 suggests:6 statistically:2 averaged:2 practical:1 acknowledgment:1 lecun:1 yj:4 union:2 definite:4 digit:1 procedure:7 empirical:1 significantly:1 composite:1 projection:22 dictate:1 matching:1 boyd:1 refers:1 suggest:2 protein:1 get:1 onto:2 unlabeled:2 operator:7 impossible:1 seminal:1 applying:1 equivalent:2 center:1 attention:1 starting:2 convex:17 roux:1 immediately:1 boyle:1 subgraphs:1 insight:2 orthonormal:22 deriving:1 embedding:18 notion:1 target:1 suppose:1 exact:1 programming:2 hypothesis:1 diabetes:1 trend:1 approximated:1 cooperative:1 database:1 ft:2 solved:1 region:1 yk:4 intuition:1 complexity:10 ui:1 hypergraph:1 dom:2 motivate:1 depend:1 solving:1 efficiency:1 easily:2 k0:2 regularizer:1 derivation:2 choosing:3 quite:1 posed:5 solve:9 supplementary:12 larger:1 jethava:1 otherwise:3 statistic:1 transductive:1 noisy:5 final:2 eigenvalue:1 propose:5 tran:1 mb:1 fr:1 yii:1 loop:1 combining:3 uci:3 subgraph:2 frobenius:1 kv:3 olkopf:2 webpage:1 convergence:7 exploiting:1 optimum:1 rademacher:1 converges:2 sierra:1 help:2 derive:7 illustrate:2 fourclass:1 pose:1 minor:1 received:1 pelillo:1 vc:3 material:11 adjacency:2 argued:1 kii:1 fix:1 generalization:9 preliminary:1 extension:2 confine:1 exp:1 algorithmic:1 predict:1 claim:1 varvitsiotis:1 baldassarre:1 applicable:1 label:13 combinatorial:1 tool:1 mit:2 normale:1 zhou:2 stepsizes:1 derived:3 improvement:1 properly:1 vk:1 check:1 rank:1 hk:5 vk2:1 inference:1 dependent:4 entire:1 proj:1 france:1 issue:1 classification:2 dual:1 ernet:2 initialize:2 homogenous:1 once:2 construct:1 manually:1 icml:1 simplex:1 np:1 bangalore:2 few:3 serious:1 randomly:2 composed:1 comprehensive:1 individual:3 argmax:1 argminx:1 ourselves:1 n1:1 ando:2 psd:1 detection:1 interest:6 leiserson:1 homo:2 analyzed:1 mixture:1 kvk:1 semidefinite:3 primal:1 accurate:2 kt:11 edge:6 machinery:1 euclidean:1 theoretical:1 instance:1 xeon:1 column:1 kij:5 soft:2 disadvantage:1 cost:3 vertex:13 entry:4 hundred:3 conducted:1 johnson:1 too:1 learnability:3 learnt:3 proximal:22 synthetic:2 kxi:1 siam:1 sequel:2 connecting:1 salzo:1 infn:1 iip:14 choose:2 worse:1 style:2 supp:2 prox:12 automation:2 includes:1 inc:1 satisfy:2 view:5 lab:5 vs2:2 francis:2 sup:1 contribution:3 square:2 accuracy:2 v9:1 efficiently:5 yield:5 xk2f:1 generalize:1 famous:1 sdps:4 handwritten:1 accurately:2 zx:4 processor:1 classified:1 whenever:2 definition:1 inexact:15 involved:1 resultant:1 associated:1 proof:3 recovers:1 dataset:3 popular:1 knowledge:1 fractional:1 improves:2 hilbert:1 routine:1 back:1 appears:1 supervised:6 improved:2 yb:1 formulation:13 done:1 though:1 verri:1 generality:1 furthermore:5 working:1 sketch:1 web:1 vs6:1 hopefully:1 google:2 mkl:7 french:1 name:1 usa:1 usage:1 normalized:2 true:2 regularization:3 hence:7 equality:1 
symmetric:2 game:1 self:1 noted:2 cosine:1 unnormalized:1 criterion:6 complete:2 performs:1 cp:2 balcan:1 novel:2 recently:4 parikh:1 common:1 superior:2 volume:2 discussed:2 martinsson:1 approximates:1 relating:1 significant:1 refer:1 cambridge:2 rd:7 erieure:1 consistency:2 mathematics:1 chapelle:1 similarity:3 etc:1 recent:4 showed:2 forbidden:1 optimizing:1 subg:1 termed:1 binary:3 vt:6 yi:7 devise:1 captured:1 minimum:2 argmaxv:1 converge:1 semi:10 zien:1 multiple:14 smooth:6 faster:1 bach:3 sphere:6 cross:1 y:17 plugging:1 graphs1:1 laplacian:6 prediction:9 scalable:1 specializing:1 breast:1 metric:1 iteration:11 kernel:11 adopting:1 normalization:1 c1:4 sch:2 asz:4 probably:1 tend:1 undirected:1 effectiveness:2 call:1 structural:1 presence:4 embeddings:11 vs9:1 erv:1 xj:1 inner:1 t0:1 motivated:1 expression:1 gb:1 returned:1 york:1 remark:1 v8:1 generally:1 useful:1 clear:2 detailed:1 amount:1 stein:1 extensively:1 svms:1 argminz:1 generate:1 exist:1 estimated:1 correctly:1 ist:2 key:2 threshold:2 blum:1 drawn:2 penalizing:1 ht:1 kept:1 backward:1 graph:83 subgradient:5 geometrically:1 asymptotically:1 year:1 cone:1 enforced:1 run:1 named:1 extends:1 family:1 arrive:1 mii:1 raman:1 summarizes:1 scaling:1 bound:11 followed:1 convergent:1 constraint:2 nearby:1 bousquet:1 u1:1 simulate:1 argument:1 min:8 optimality:2 subgradients:1 px:9 structured:1 developing:1 maxn:1 combination:1 cormen:1 beneficial:1 s1:1 lem:1 restricted:2 iccv:1 heart:1 ybn:1 chiranjib:1 equation:1 remains:2 previously:1 know:1 end:5 zk2:2 studying:1 available:1 apply:3 away:1 spectral:11 generic:1 stepsize:1 vs7:1 alternative:2 ers0:4 robustness:1 eigen:1 schmidt:1 running:2 include:1 clustering:1 hinge:1 exploit:1 uj:2 prof:1 approximating:1 dykstra:3 dubhashi:1 g0:2 spore:36 quantity:2 usual:1 diagonal:4 pave:1 ys1:1 villa:1 gradient:4 distance:4 capacity:1 majority:4 outer:2 polytope:1 reason:1 minn:1 minimizing:4 nc:1 difficult:1 unfortunately:1 potentially:1 negative:1 mink:1 stated:2 design:4 proper:1 motivates:2 unknown:3 observation:2 datasets:4 finite:2 acknowledge:1 descent:5 orthant:1 immediate:2 maxk:1 situation:1 team:1 rn:6 pg0:1 complement:2 namely:1 paris:1 connection:3 lal:1 elliptope:5 kht:1 heterogenous:1 nip:6 beyond:3 andp:1 vs8:1 usually:2 challenge:1 program:2 including:1 max:2 memory:1 power:2 suitable:1 regularized:4 predicting:1 improve:1 n6:2 sn:8 literature:2 l2:1 kf:1 asymptotic:2 law:2 eisenberg:1 loss:4 lecture:1 interesting:4 validation:1 foundation:1 degree:1 affine:2 gather:1 consistent:2 sufficient:1 editor:1 cancer:1 infeasible:4 disconnection:1 nagy:1 institute:2 burges:1 taking:1 sparse:2 moreau:1 ghz:1 dimension:3 calculated:1 valid:1 world:1 unweighted:2 rich:1 computes:1 forward:1 projected:3 spam:1 far:1 transaction:1 approximate:8 observable:1 implicitly:1 global:1 un:3 iterative:1 sonar:1 table:8 learn:3 zk:1 ca:1 unavailable:1 sminchisescu:1 csa:2 kui:1 necessarily:2 main:2 dense:3 bounding:1 s2:2 n2:1 complementary:1 body:1 augmented:1 advice:1 en:1 transduction:19 precision:1 sub:3 indo:1 jmlr:1 theorem:15 embed:2 specific:1 pac:16 er:4 learnable:3 r2:5 list:1 explored:1 svm:2 ionosphere:1 normalizing:2 cortes:1 exists:1 mnist:11 illustrates:1 chatterjee:2 intersection:9 lt:1 lap:4 kxk:1 expressed:1 ykt:1 trp:3 subtlety:1 leordeanu:1 applies:2 springer:1 mij:1 satisfies:1 acm:1 weston:1 chiru:1 goal:2 viewed:1 identity:1 rbf:2 labelled:9 feasible:5 fista:6 infinite:1 typical:1 lemma:4 called:1 total:1 goemans:1 e:1 rakesh:1 shannon:1 formally:2 support:6 
bioinformatics:1 indian:2 accelerated:2 dept:2 |
5,206 | 5,713 | Differentially Private Learning
of Structured Discrete Distributions
Ilias Diakonikolas*
University of Edinburgh
Moritz Hardt
Google Research
Ludwig Schmidt
MIT
Abstract
We investigate the problem of learning an unknown probability distribution over
a discrete population from random samples. Our goal is to design efficient algorithms that simultaneously achieve low error in total variation norm while guaranteeing Differential Privacy to the individuals of the population.
We describe a general approach that yields near sample-optimal and computationally efficient differentially private estimators for a wide range of well-studied and
natural distribution families. Our theoretical results show that for a wide variety
of structured distributions there exist private estimation algorithms that are nearly
as efficient, both in terms of sample size and running time, as their non-private
counterparts. We complement our theoretical guarantees with an experimental
evaluation. Our experiments illustrate the speed and accuracy of our private estimators on both synthetic mixture models and a large public data set.
1
Introduction
The majority of available data in modern machine learning applications come in a raw and unlabeled
form. An important class of unlabeled data is naturally modeled as samples from a probability
distribution over a very large discrete domain. Such data occurs in almost every setting imaginable?
financial transactions, seismic measurements, neurobiological data, sensor networks, and network
traffic records, to name a few. A classical problem in this context is that of density estimation or
distribution learning: Given a number of iid samples from an unknown target distribution, we want
to compute an accurate approximation of the distribution. Statistical and computational efficiency
are the primary performance criteria for a distribution learning algorithm. More specifically, we
would like to design an algorithm whose sample size requirements are information-theoretically
optimal, and whose running time is nearly linear in its sample size.
Beyond computational and statistical efficiency, however, data analysts typically have a variety
of additional criteria they must balance. In particular, data providers often need to maintain the
anonymity and privacy of those individuals whose information was collected. How can we reveal
useful statistics about a population, while still preserving the privacy of individuals? In this paper,
we study the problem of density estimation in the presence of privacy constraints, focusing on the
notion of differential privacy [1].
Our contributions. Our main findings suggest that the marginal cost of ensuring differential privacy in the context of distribution learning is only moderate. In particular, for a broad class of
shape-constrained density estimation problems, we give private estimation algorithms that are nearly
as efficient, both in terms of sample size and running time, as a nearly optimal non-private baseline. As our learning algorithm approximates the underlying distribution up to small error in total
variation norm, all crucial properties of the underlying distribution are preserved. In particular, the
analyst is free to compose our learning algorithm with an arbitrary non-private analysis.
* The authors are listed in alphabetical order.
Our strong positive results apply to all distribution families that can be well-approximated by piecewise polynomial distributions, extending a recent line of work [2, 3, 4] to the differentially private
setting. This is a rich class of distributions including several natural mixture models, log-concave
distributions, and monotone distributions amongst many other examples. Our algorithm is agnostic so that even if the unknown distribution does not conform exactly to any of these distribution
families, it continues to find a good approximation.
These surprising positive results stand in sharp contrast with a long line of worst-case hardness
results and lower bounds in differential privacy, which show separations between private and nonprivate learning in various settings.
Complementing our theoretical guarantees, we present a novel heuristic method to achieve empirically strong performance. Our heuristic always guarantees privacy and typically converges rapidly.
We show on various data sets that our method scales easily to input sizes that were previously
prohibitive for any implemented differentially private algorithm. At the same time, the algorithm
approaches the estimation error of the best known non-private method for a sufficiently large number
of samples.
Technical overview. We briefly introduce a standard model of learning an unknown probability distribution from samples (namely, that of [5]), which is essentially equivalent to the minimax rate of convergence in ℓ1-distance [6]. A distribution learning problem is defined by a class C of distributions. The algorithm has access to independent samples from an unknown distribution p, and its goal is to output a hypothesis distribution h that is "close" to p. We measure the closeness between distributions in total variation distance, which is equivalent to the ℓ1-distance and is sometimes also called statistical distance. In the "noiseless" setting, we are promised that p ∈ C, and the goal is to construct a hypothesis h such that (with high probability) the total variation distance d_TV(h, p) between h and p is at most ε, where ε > 0 is the accuracy parameter.

The more challenging "noisy" or agnostic model captures the situation of having arbitrary (or even adversarial) noise in the data. In this setting, we make no assumptions about the target distribution p, and the goal is to find a hypothesis h that is almost as accurate as the best approximation of p by any distribution in C. Formally, given sample access to a (potentially arbitrary) target distribution p and ε > 0, the goal of an agnostic learning algorithm for C is to compute a hypothesis distribution h such that d_TV(h, p) ≤ C·opt_C(p) + ε, where opt_C(p) is the total variation distance between p and the closest distribution to it in C, and C ≥ 1 is a universal constant.
It is a folklore fact that learning an arbitrary discrete distribution over a domain of size N to constant accuracy requires Ω(N) samples and running time. The underlying algorithm is straightforward: output the empirical distribution. For distributions over very large domains, a linear dependence on N is of course impractical, and one might hope that drastically better results can be obtained for most natural settings. Indeed, there are many natural and fundamental distribution estimation problems where significant improvements are possible. Consider for example the class of all unimodal distributions over [N]. In sharp contrast to the Ω(N) lower bound for the unrestricted case, an algorithm of Birgé [7] is known to learn any unimodal distribution over [N] with running time and sample complexity O(log(N)).
The starting point of our work is a recent technique [3, 8, 4] for learning univariate distributions
via piecewise polynomial approximation. Our first main contribution is a generalization of this
technique to the setting of approximate differential privacy. To achieve this result, we exploit a connection between structured distribution learning and private "Kolmogorov approximations". More specifically, we show in Section 3 that, for the class of structured distributions we consider, a private algorithm that approximates an input histogram in the Kolmogorov distance, combined with the algorithmic framework of [4], yields sample- and computationally efficient private learners under the total variation distance.
Our approach crucially exploits the structure of the underlying distributions, as the Kolmogorov
distance is a much weaker metric than the total variation distance. Combined with a recent private
algorithm [9], we obtain differentially private learners for a wide range of structured distributions
over [N]. The sample complexity of our algorithms matches their non-private analogues up to a standard dependence on the privacy parameters and a multiplicative factor of at most O(2^{log* N}), where log* denotes the iterated logarithm function. The running time of our algorithm is nearly linear in the sample size and logarithmic in the domain size.
Related Work. There is a long history of research in statistics on estimating structured families of
distributions going back to the 1950's [10, 11, 12, 13], and it is still a very active research area [14,
15, 16]. Theoretical computer scientists have also studied these problems with an explicit focus on
the computational efficiency [5, 17, 18, 19, 3]. In statistics, the study of inference questions under
privacy constraints goes back to the classical work of Warner [20]. Recently, Duchi et al. [21, 22]
study the trade-off between statistical efficiency and privacy in a local model of privacy obtaining
sample complexity bounds for basic inference problems. We work in the non-local model and our
focus is on both statistical and computational efficiency.
There is a large literature on answering so-called "range queries" or "threshold queries" over an ordered domain subject to differential privacy. See, for example, [23] as well as the recent work [24] and many references therein. If the output of the algorithm represents a histogram over the domain that is accurate on all such queries, then this task is equivalent to approximating a sample in Kolmogorov distance, which is the task we consider. Apart from the work of Beimel et al. [25] and Bun et al. [9], to the best of our knowledge all algorithms in this literature (e.g., [23, 24]) have a running time that depends polynomially on the domain size $N$. Moreover, except for the aforementioned works, we know of no other algorithm that achieves a sub-logarithmic dependence on the domain size in its approximation guarantee. Of all algorithms in this area, we believe that ours is the first implemented algorithm that scales to very large domains with strong empirical performance, as we demonstrate in Section 5.
2 Preliminaries
Notation and basic definitions. For $N \in \mathbb{Z}^+$, we write $[N]$ to denote the set $\{1, \dots, N\}$. The $\ell_1$-norm of a vector $v \in \mathbb{R}^N$ (or equivalently, a function from $[N]$ to $\mathbb{R}$) is $\|v\|_1 = \sum_{i=1}^N |v_i|$. For a discrete probability distribution $p : [N] \to [0, 1]$, we write $p(i)$ to denote the probability of element $i \in [N]$ under $p$. For a subset of the domain $S \subseteq [N]$, we write $p(S)$ to denote $\sum_{i \in S} p(i)$. The total variation distance between two distributions $p$ and $q$ over $[N]$ is $d_{TV}(p, q) := \max_{S \subseteq [N]} |p(S) - q(S)| = (1/2) \cdot \|p - q\|_1$. The Kolmogorov distance between $p$ and $q$ is defined as $d_K(p, q) := \max_{j \in [N]} |\sum_{i=1}^j p(i) - \sum_{i=1}^j q(i)|$. Note that $d_K(p, q) \le d_{TV}(p, q)$. Given a set $S$ of $n$ independent samples $s_1, \dots, s_n$ drawn from a distribution $p : [N] \to [0, 1]$, the empirical distribution $\hat{p}_n : [N] \to [0, 1]$ is defined as follows: for all $i \in [N]$, $\hat{p}_n(i) = |\{j \in [n] \mid s_j = i\}| / n$.
Definition 1 (Distribution Learning). Let $\mathcal{C}$ be a family of distributions over a domain $\Omega$. Given sample access to an unknown distribution $p$ over $\Omega$ and $0 < \alpha, \beta < 1$, the goal of an $(\alpha, \beta)$-agnostic learning algorithm for $\mathcal{C}$ is to compute a hypothesis distribution $h$ such that with probability at least $1 - \beta$ it holds that $d_{TV}(h, p) \le C \cdot \mathrm{opt}_{\mathcal{C}}(p) + \alpha$, where $\mathrm{opt}_{\mathcal{C}}(p) := \inf_{q \in \mathcal{C}} d_{TV}(q, p)$ and $C \ge 1$ is a universal constant.
Differential Privacy. A database $D \in [N]^n$ is an $n$-tuple of items from $[N]$. Given a database $D = (d_1, \dots, d_n)$, we let $\mathrm{hist}(D)$ denote the normalized histogram corresponding to $D$. That is, $\mathrm{hist}(D) = \frac{1}{n}\sum_{i=1}^n e_{d_i}$, where $e_j$ denotes the $j$-th standard basis vector in $\mathbb{R}^N$.
Definition 2 (Differential Privacy). A randomized algorithm $M : [N]^n \to \mathcal{R}$ (where $\mathcal{R}$ is some arbitrary range) is $(\epsilon, \delta)$-differentially private if for all pairs of inputs $D, D' \in [N]^n$ differing in only one entry, we have that for all subsets of the range $S \subseteq \mathcal{R}$, the algorithm satisfies:
$$\Pr[M(D) \in S] \le \exp(\epsilon) \Pr[M(D') \in S] + \delta.$$
In the context of private distribution learning, the database $D$ is the sample set $S$ from the unknown target distribution $p$. In this case, the normalized histogram corresponding to $D$ is the same as the empirical distribution corresponding to $S$, i.e., $\mathrm{hist}(S) = \hat{p}_n$.
Basic tools from probability. We recall some probabilistic inequalities that will be crucial for our analysis. Our first tool is the well-known VC inequality. Given a family of subsets $\mathcal{A}$ over $[N]$, define $\|p\|_{\mathcal{A}} := \sup_{A \in \mathcal{A}} |p(A)|$. The VC dimension of $\mathcal{A}$ is the maximum size of a subset $X \subseteq [N]$ that is shattered by $\mathcal{A}$ (a set $X$ is shattered by $\mathcal{A}$ if for every $Y \subseteq X$ some $A \in \mathcal{A}$ satisfies $A \cap X = Y$).
Theorem 1 (VC inequality, [6, p. 31]). Let $\hat{p}_n$ be an empirical distribution of $n$ samples from $p$. Let $\mathcal{A}$ be a family of subsets of VC dimension $k$. Then $\mathbb{E}[\|p - \hat{p}_n\|_{\mathcal{A}}] \le O(\sqrt{k/n})$.
We note that the RHS above is best possible (up to constant factors) and independent of the domain size $N$. The Dvoretzky–Kiefer–Wolfowitz (DKW) inequality [26] can be obtained as a consequence of the VC inequality by taking $\mathcal{A}$ to be the class of all intervals. The DKW inequality implies that for $n = \Omega(1/\alpha^2)$, with probability at least 9/10 (over the draw of $n$ samples from $p$) the empirical distribution $\hat{p}_n$ will be $\alpha$-close to $p$ in Kolmogorov distance.
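As a quick numerical illustration of the DKW inequality (again our own sketch, not from the paper), the Kolmogorov error of the empirical distribution concentrates at the $O(1/\sqrt{n})$ scale, independently of the domain size:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, trials = 1000, 10_000, 50
p = rng.dirichlet(np.ones(N))

errs = []
for _ in range(trials):
    samples = rng.choice(N, size=n, p=p)
    p_hat = np.bincount(samples, minlength=N) / n
    errs.append(np.max(np.abs(np.cumsum(p_hat) - np.cumsum(p))))

# By DKW, the typical error is on the order of sqrt(1/n) ~ 0.01 here,
# even though learning p in total variation would require Omega(N) samples.
print(np.mean(errs))
```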
We will also use the following uniform convergence bound:
Theorem 2 ([6, p. 17]). Let $\mathcal{A}$ be a family of subsets over $[N]$, and $\hat{p}_n$ be an empirical distribution of $n$ samples from $p$. Let $X$ be the random variable $\|p - \hat{p}_n\|_{\mathcal{A}}$. Then we have $\Pr[X - \mathbb{E}[X] > \eta] \le e^{-2n\eta^2}$.
Connection to Synthetic Data. Distribution learning is closely related to the problem of generating synthetic data. Any dataset $D$ of size $n$ over a universe $X$ can be interpreted as a distribution over the domain $\{1, \dots, |X|\}$. The weight of item $x \in X$ corresponds to the fraction of elements in $D$ that are equal to $x$. In fact, this histogram view is convenient in a number of algorithms in differential privacy. If we manage to learn this unknown distribution, then we can take $n$ samples from it to obtain another synthetic dataset $D'$. Hence, the quality of the distribution learner dictates the statistical properties of the synthetic dataset. Learning in total variation distance is particularly appealing from this point of view. If two datasets represented as distributions $p, q$ satisfy $d_{TV}(p, q) \le \alpha$, then for every test function $f : X \to \{0, 1\}$ we must have that $|\mathbb{E}_{x \sim p} f(x) - \mathbb{E}_{x \sim q} f(x)| \le \alpha$. Put in different terminology, this means that the answer to any statistical query (i.e., the average of a predicate over the dataset) differs by at most $\alpha$ between the two distributions.
3 A Differentially Private Learning Framework
In this section, we describe our private distribution learning upper bounds. We start with the simple
case of privately learning an arbitrary discrete distribution over [N ]. We then extend this bound to
the case of privately and agnostically learning a histogram distribution over an arbitrary but known
partition of [N ]. Finally, we generalize the recent framework of [4] to obtain private agnostic learners for histogram distributions over an arbitrary unknown partition, and more generally piecewise
polynomial distributions.
Our first theorem gives a differentially private algorithm for arbitrary distributions over $[N]$ that essentially matches the best non-private algorithm. Let $\mathcal{C}_N$ be the family of all probability distributions over $[N]$. We have the following:
Theorem 3. There is a computationally efficient $(\epsilon, 0)$-differentially private $(\alpha, \beta)$-learning algorithm for $\mathcal{C}_N$ that uses $n = O((N + \log(1/\beta))/\alpha^2 + N \log(1/\beta)/(\epsilon\alpha))$ samples.
The algorithm proceeds as follows: Given a dataset $S$ of $n$ samples from the unknown target distribution $p$ over $[N]$, it outputs the hypothesis $h = \mathrm{hist}(S) + \nu = \hat{p}_n + \nu$, where $\nu \in \mathbb{R}^N$ is sampled from the $N$-dimensional Laplace distribution with standard deviation $1/(\epsilon n)$ in each coordinate. The simple analysis is deferred to Appendix A.
A $t$-histogram over $[N]$ is a function $h : [N] \to \mathbb{R}$ that is piecewise constant with at most $t$ interval pieces, i.e., there is a partition $\mathcal{I}$ of $[N]$ into intervals $I_1, \dots, I_t$ such that $h$ is constant on each $I_i$. Let $\mathcal{H}_t(\mathcal{I})$ be the family of all $t$-histogram distributions over $[N]$ with respect to the partition $\mathcal{I} = \{I_1, \dots, I_t\}$. Given sample access to a distribution $p$ over $[N]$, our goal is to output a hypothesis $h : [N] \to [0, 1]$ that satisfies $d_{TV}(h, p) \le C \cdot \mathrm{opt}_t(p) + \alpha$, where $\mathrm{opt}_t(p) = \inf_{g \in \mathcal{H}_t(\mathcal{I})} d_{TV}(g, p)$. We show the following:
Theorem 4. There is a computationally efficient $(\epsilon, 0)$-differentially private $(\alpha, \beta)$-agnostic learning algorithm for $\mathcal{H}_t(\mathcal{I})$ that uses $n = O((t + \log(1/\beta))/\alpha^2 + t \log(1/\beta)/(\epsilon\alpha))$ samples.
The main idea of the proof is that the differentially private learning problem for $\mathcal{H}_t(\mathcal{I})$ can be reduced to the same problem over distributions supported on $[t]$. The theorem then follows by an
application of Theorem 3. See Appendix A for further details. Theorem 4 gives differentially private
learners for any family of distributions over [N ] that can be well-approximated by histograms over
a fixed partition, including monotone distributions and distributions with a known mode.
In the remainder of this section, we focus on approximate privacy, i.e., $(\epsilon, \delta)$-differential privacy for $\delta > 0$, and show that for a wide range of natural and well-studied distribution families there exists a computationally efficient and differentially private algorithm whose sample size is at most a factor of $2^{O(\log^* N)}$ worse than its non-private counterpart. In particular, we give a differentially private version of the algorithm in [4]. For a wide range of distributions, our algorithm has near-optimal sample complexity and runs in time that is nearly-linear in the sample size and logarithmic in the domain size.
We can view our overall private learning algorithm as a reduction. For the sake of concreteness, we state our approach for the case of histograms, the generalization to piecewise polynomials being essentially identical. Let $\mathcal{H}_t$ be the family of all $t$-histogram distributions over $[N]$ (with unknown partition). In the non-private setting, a combination of Theorems 1 and 2 (see appendix) implies that $\mathcal{H}_t$ is $(\alpha, \beta)$-agnostically learnable using $n = \Theta((t + \log(1/\beta))/\alpha^2)$ samples. The algorithm of [4] starts with the empirical distribution $\hat{p}_n$ and post-processes it to obtain an $(\alpha, \beta)$-accurate hypothesis $h$. Let $\mathcal{A}_k$ be the collection of subsets of $[N]$ that can be expressed as unions of at most $k$ disjoint intervals. The important property of the empirical distribution $\hat{p}_n$ is that with high probability, $\hat{p}_n$ is $\alpha$-close to the target distribution $p$ in $\mathcal{A}_k$-distance for any $k = O(t)$.
The crucial observation that enables our generalization is that the algorithm of [4] achieves the same performance guarantees starting from any hypothesis $q$ such that $\|p - q\|_{\mathcal{A}_{O(t)}} \le \alpha$ (a potential difference is in the running time of the algorithm, which depends on the support and structure of the distribution $q$). This observation motivates the following two-step differentially private algorithm: (1) Starting from the empirical distribution $\hat{p}_n$, efficiently construct an $(\epsilon, \delta)$-differentially private hypothesis $q$ such that with probability at least $1 - \beta/2$ it holds that $\|q - \hat{p}_n\|_{\mathcal{A}_{O(t)}} \le \alpha/2$. (2) Pass $q$ as input to the learning algorithm of [4] with parameters $(\alpha/2, \beta/2)$ and return its output hypothesis $h$.
We now proceed to sketch correctness. Since $q$ is $(\epsilon, \delta)$-differentially private and the algorithm of Step (2) is only a function of $q$, the composition theorem implies that $h$ is also $(\epsilon, \delta)$-differentially private. Recall that with probability at least $1 - \beta/2$ we have $\|p - \hat{p}_n\|_{\mathcal{A}_{O(t)}} \le \alpha/2$. By the properties of $q$ in Step (1), a union bound and an application of the triangle inequality imply that with probability at least $1 - \beta$ we have $\|p - q\|_{\mathcal{A}_{O(t)}} \le \alpha$. Hence, the output $h$ of Step (2) is an $(\alpha, \beta)$-accurate agnostic hypothesis.
We have thus sketched a proof of the following lemma:
Lemma 1. Suppose there is an $(\epsilon, \delta)$-differentially private synthetic data algorithm under the $\mathcal{A}_{O(t)}$-distance metric that is $(\alpha/2, \beta/2)$-accurate on databases of size $n$, where $n = \Omega((t + \log(1/\beta))/\alpha^2)$. Then, there exists an $(\alpha, \beta)$-accurate agnostic learning algorithm for $\mathcal{H}_t$ with sample complexity $n$.
Recent work of Bun et al. [9] gives an efficient differentially private synthetic data algorithm under the Kolmogorov distance metric:
Proposition 1 ([9]). There is an $(\epsilon, \delta)$-differentially private $(\alpha, \beta)$-accurate synthetic data algorithm with respect to the $d_K$-distance on databases of size $n$ over $[N]$, assuming $n = \Omega((1/(\epsilon\alpha)) \cdot 2^{O(\log^* N)} \cdot \ln(1/(\alpha\beta\delta)))$. The algorithm runs in time $O(n \cdot \log N)$.
Note that the Kolmogorov distance is equivalent to the $\mathcal{A}_2$-distance up to a factor of 2. Hence, by applying the above proposition with $\alpha' = \alpha/t$ one obtains an $(\alpha, \beta)$-accurate synthetic data algorithm with respect to the $\mathcal{A}_t$-distance. Combining the above, we obtain the following:
Theorem 5. There is an $(\epsilon, \delta)$-differentially private $(\alpha, \beta)$-agnostic learning algorithm for $\mathcal{H}_t$ that uses $n = O((t/\alpha^2) \cdot \ln(1/\beta) + (t/(\epsilon\alpha)) \cdot 2^{O(\log^* N)} \cdot \ln(1/(\alpha\beta\delta)))$ samples and runs in time $\tilde{O}(n) + O(n \cdot \log N)$.
As an immediate corollary of Theorem 5, we obtain nearly sample-optimal and computationally efficient differentially private estimators for all the structured discrete distribution families studied in [3, 4]. These include well-known classes of shape-restricted densities, including (mixtures of) unimodal and multimodal densities (with unknown mode locations), monotone hazard rate (MHR) and log-concave distributions, and others. Due to space constraints, we do not enumerate the full descriptions of these classes or statements of these results here, but instead refer the interested reader to [3, 4].
4 Maximum Error Rule for Private Kolmogorov Distance Approximation
In this section, we describe a simple and fast algorithm for privately approximating an input histogram with respect to the Kolmogorov distance. Our private algorithm relies on a simple (non-private) iterative greedy algorithm to approximate a given histogram (empirical distribution) in Kolmogorov distance, which we term the Maximum Error Rule. This algorithm performs a set of basic operations on the data and can be effectively implemented in the private setting.
To describe the non-private version of the Maximum Error Rule, we point out a connection of the Kolmogorov distance approximation problem to the problem of approximating a monotone univariate function by a piecewise linear function. Let $\hat{p}_n$ be the empirical probability distribution over $[N]$, and let $\hat{P}_n$ denote the corresponding empirical CDF. Note that $\hat{P}_n : [N] \to [0, 1]$ is monotone non-decreasing and piecewise constant with at most $n$ pieces. We would like to approximate $\hat{p}_n$ by a piecewise uniform distribution with a corresponding piecewise linear CDF. It is easy to see that this is exactly the problem of approximating a monotone function by a piecewise linear function in $\ell_\infty$-norm.
The Maximum Error Rule works as follows: Starting with the trivial linear approximation that interpolates between $(0, 0)$ and $(N, 1)$, the algorithm iteratively refines its approximation to the target empirical CDF using a greedy criterion. In each iteration, it finds the point $(x, y)$ of the true curve (the empirical CDF $\hat{P}_n$) at which the current piecewise linear approximation disagrees most strongly with the target CDF (in $\ell_\infty$-norm). It then refines the previous approximation by adding the point $(x, y)$ and interpolating linearly between the new point and the previous two adjacent points of the approximation. See Figure 1 for a graphical illustration of our algorithm. The Maximum Error Rule is a popular method for monotone curve approximation whose convergence rate has been analyzed under certain assumptions on the structure of the input curve. For example, if the monotone input curve satisfies a Lipschitz condition, it is known that the $\ell_\infty$-error after $T$ iterations scales as $O(1/T^2)$ (see, e.g., [27] and references therein).
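To fix ideas, here is a minimal non-private Python sketch of the rule (our own rendering with hypothetical names; the private version in Figure 2 replaces the exact maximization with the choosing mechanism and perturbs the interval weights with Laplace noise):

```python
import numpy as np

def max_error_rule(samples, N, T):
    """Greedy piecewise-linear approximation of the empirical CDF.
    Non-private sketch: each round adds the point where the current
    approximation disagrees most with the empirical CDF (l_inf error)."""
    xs = np.sort(samples)
    n = len(xs)
    grid = np.unique(xs)
    ecdf = np.searchsorted(xs, grid, side="right") / n
    knots_x, knots_y = [0, N], [0.0, 1.0]          # trivial initial approximation
    for _ in range(T):
        approx = np.interp(grid, knots_x, knots_y)
        i = int(np.argmax(np.abs(ecdf - approx)))  # maximum-error point
        j = int(np.searchsorted(knots_x, grid[i]))
        if knots_x[j] != grid[i]:                  # insert knot, keep x sorted
            knots_x.insert(j, int(grid[i]))
            knots_y.insert(j, float(ecdf[i]))
    return np.array(knots_x), np.array(knots_y)

rng = np.random.default_rng(3)
s = rng.binomial(1000, 0.3, size=5000) + 1         # samples in [N], N = 1001
kx, ky = max_error_rule(s, N=1001, T=8)
```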
There are a number of challenges towards making this algorithm differentially private. The first is
that we cannot exactly select the maximum error point. Instead, we can only choose an approximate
maximizer using a differentially private sub-routine. The standard algorithm for choosing such
a point would be the exponential mechanism of McSherry and Talwar [28]. Unfortunately, this
algorithm falls short of our goals in two respects. First, it introduces a linear dependence on the
domain size in the running time making the algorithm prohibitively inefficient over large domains.
Second, it introduces a logarithmic dependence on the domain size in the error of our approximation.
In place of the exponential mechanism, we design a sub-routine using the "choosing mechanism" of Beimel, Nissim, and Stemmer [25]. Our sub-routine runs in logarithmic time in the domain size and achieves a doubly-logarithmic dependence in the error. See Figure 2 for a pseudocode of our
algorithm. In the description of the algorithm, we think of $A_t$ as a CDF defined by a sequence of points $(0, 0), (x_1, y_1), \dots, (x_k, y_k), (N, 1)$ specifying the value of the CDF at various discrete points of the domain. We denote by $\mathrm{weight}(I, A_t) \in [0, 1]$ the weight of the interval $I$ according to the CDF $A_t$, where the value at missing points in the domain is obtained by linear interpolation. In other words, $A_t$ represents a piecewise-linear CDF (corresponding to a piecewise constant histogram). Similarly, we let $\mathrm{weight}(I, S) \in [0, 1]$ denote the weight of interval $I$ according to the sample $S$, that is, $|S \cap I|/|S|$.
We show that our algorithm satisfies $(\epsilon, \delta)$-differential privacy (see Appendix B):
Lemma 2. For every $\epsilon \in (0, 2)$ and $\delta > 0$, MaximumErrorRule satisfies $(\epsilon, \delta)$-differential privacy.
Next, we provide two performance guarantees for our algorithm. The first shows that the running time per iteration is at most $O(n \log N)$. The second shows that if at any step $t$ there is a "bad" interval in $\mathcal{I}$ that has large error, then our algorithm finds such a bad interval where the quantitative
Figure 1: CDF approximation after T = 0, 1, 2, 3 iterations.
MaximumErrorRule($S \in [N]^n$; privacy parameters $\epsilon, \delta$)
For $t = 1$ to $T$:
  1. $I$ = FindBadInterval($A_{t-1}$, $S$)
  2. $A_t$ = Update($A_{t-1}$, $S$, $I$)

FindBadInterval
  1. Let $\mathcal{I}$ be the collection of all dyadic intervals of the domain.
  2. For each $J \in \mathcal{I}$, let $q(J; S) = |\mathrm{weight}(J, A_{t-1}) - \mathrm{weight}(J, S)|$.
  3. Output an $I \in \mathcal{I}$ sampled from the choosing mechanism with score function $q$ over the collection $\mathcal{I}$, with privacy parameters $(\epsilon/2T, \delta/T)$.

Update
  1. Let $I = (l, r)$ be the input interval. Compute $w_l = \mathrm{weight}([1, l], S) + \mathrm{Laplace}(0, 1/(2\epsilon n))$ and $w_r = \mathrm{weight}([l+1, r], S) + \mathrm{Laplace}(0, 1/(2\epsilon n))$.
  2. Output the CDF obtained from $A_{t-1}$ by adding the points $(l, w_l)$ and $(r, w_l + w_r)$ to the graph of $A_{t-1}$.
Figure 2: Maximum Error Rule (MERR).
loss depends only doubly-logarithmically on the domain size (see Appendix B for the proof of the following proposition).
Proposition 2. MERR runs in time $O(Tn \log N)$. Furthermore, for every step $t$, with probability $1 - \beta$, we have that the interval $I$ selected at step $t$ satisfies
$$|\mathrm{weight}(I, A_{t-1}) - \mathrm{weight}(I, S)| \ge \mathrm{OPT} - O\Big(\frac{1}{\epsilon n} \cdot \log n \cdot \log\log N \cdot \log(1/(\beta\delta))\Big).$$
Recall that $\mathrm{OPT} = \max_{J \in \mathcal{I}} |\mathrm{weight}(J, A_{t-1}) - \mathrm{weight}(J, S)|$.
5 Experiments
In addition to our theoretical results from the previous sections, we also investigate the empirical performance of our private distribution learning algorithm based on the maximum error rule. The focus of our experiments is the learning error achieved by the private algorithm for various distributions. For this, we employ two types of data sets: multiple synthetic data sets derived from mixtures of well-known distributions (see Appendix C), and a data set from Higgs experiments [29]. The synthetic data sets allow us to vary a single parameter (in particular, the domain size) while keeping the remaining problem parameters constant. We have chosen a distribution from the Higgs data set because it gives rise to a large domain size. Our results show that the maximum error rule finds a good approximation of the underlying distribution, matching the learning error of a non-private baseline when the number of samples is sufficiently large. Moreover, our algorithm is very efficient and runs in less than 5 seconds for $n = 10^7$ samples on a domain of size $N = 10^{18}$.
We implemented our algorithm in the Julia programming language (v0.3) and ran the experiments on an Intel Core i5-4690K CPU (3.5–3.9 GHz, 6 MB cache). In all experiments involving our private learning algorithm, we set the privacy parameters to $\epsilon = 1$ and $\delta = 1/n$. Since the noise magnitude depends on $1/(\epsilon n)$, varying $\epsilon$ has the same effect as varying the sample size $n$. Similarly, changes in $\delta$ are related to changes in $n$, and therefore we only consider this setting of privacy parameters.
Higgs data. In addition to the synthetic data mentioned above, we use the lepton $p_T$ (transverse momentum) feature of the Higgs data set (see Figure 2e of [29]). The data set contains roughly 11 million samples, which we use as the unknown distribution. Since the values are specified with 18 digits of accuracy, we interpret them as discrete values in $[N]$ for $N = 10^{18}$. We then generate a sample from this data set by taking the first $n$ samples and pass this subset as input to our private distribution learning algorithm. This time, we measure the error as the Kolmogorov distance between the hypothesis returned by our algorithm and the CDF given by the full set of 11 million samples.
In this experiment (Figure 3), we again see that the maximum error rule achieves a good learning error. Moreover, we investigate the following two aspects of the algorithm: (i) The number of steps taken by the maximum error rule influences the learning error. In particular, a smaller number of steps leads to a better approximation for small values of $n$, while more samples allow us to achieve a better error with a larger number of steps. (ii) Our algorithm is very efficient. Even for the largest sample size $n = 10^7$ and the largest number of MERR steps, our algorithm runs in less than 5 seconds. Note that on the same machine, simply sorting $n = 10^7$ floating point numbers takes about 0.6 seconds. Since our algorithm involves a sorting step, this shows that the overhead added by the maximum error rule is only about 7x compared to sorting. In particular, this implies that no algorithm that relies on sorted samples can outperform our algorithm by a large margin.
Limitations and future work. As we previously saw, the performance of the algorithm varies
with the number of iterations. Currently this is a parameter that must be optimized over separately,
for example, by choosing the best run privately from the exponential mechanism. This is standard
practice in the privacy literature, but it would be more appealing to find an adaptive method of
choosing this parameter on the fly as the algorithm obtains more information about the data.
There remains a gap in sample complexity between the private and the non-private algorithm. One
reason for this are the relatively large constants in the privacy analysis of the choosing mechanism [9]. With a tighter privacy analysis, one could hope to reduce the sample size requirements of
our algorithm by up to an order of magnitude.
It is likely that our algorithm could also benefit from certain post-processing steps such as smoothing
the output histogram. We did not evaluate such techniques here for simplicity and clarity of the
experiments, but this is a promising direction.
[Figure 3 about here: two log-log panels over the Higgs data, plotting Kolmogorov error (left) and running time in seconds (right) against sample sizes $n = 10^3, \dots, 10^7$, with one curve per number of MERR steps $m = 4, 8, 12, 16, 20$.]
Figure 3: Evaluation of our private learning algorithm on the Higgs data set. The left plot shows the Kolmogorov error achieved for various sample sizes $n$ and number of steps taken by the maximum error rule ($m$). The right plot displays the corresponding running times of our algorithm.
Acknowledgments
Ilias Diakonikolas was supported by EPSRC grant EP/L021749/1 and a Marie Curie Career Integration grant. Ludwig Schmidt was supported by MADALGO and a grant from the MIT-Shell
Initiative.
References
[1] C. Dwork. The differential privacy frontier (extended abstract). In TCC, pages 496–502, 2009.
[2] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Learning mixtures of structured distributions over discrete domains. In SODA, 2013.
[3] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Efficient density estimation via piecewise polynomial approximation. In STOC, pages 604–613, 2014.
[4] J. Acharya, I. Diakonikolas, J. Li, and L. Schmidt. Sample-optimal density estimation in nearly-linear time. Available at http://arxiv.org/abs/1506.00671, 2015.
[5] M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. Schapire, and L. Sellie. On the learnability of discrete distributions. In Proc. 26th STOC, pages 273–282, 1994.
[6] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer Series in Statistics, Springer, 2001.
[7] L. Birgé. Estimation of unimodal densities without smoothness assumptions. Annals of Statistics, 25(3):970–981, 1997.
[8] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Near-optimal density estimation in near-linear time using variable-width histograms. In NIPS, pages 1844–1852, 2014.
[9] M. Bun, K. Nissim, U. Stemmer, and S. P. Vadhan. Differentially private release and learning of threshold functions. CoRR, abs/1504.07553, 2015.
[10] U. Grenander. On the theory of mortality measurement. Skand. Aktuarietidskr., 39:125–153, 1956.
[11] B. L. S. Prakasa Rao. Estimation of a unimodal density. Sankhya Ser. A, 31:23–36, 1969.
[12] P. Groeneboom. Estimating a monotone density. In Proc. of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, pages 539–555, 1985.
[13] L. Birgé. Estimating a density under order restrictions: Nonasymptotic minimax risk. Ann. of Stat., pages 995–1012, 1987.
[14] F. Balabdaoui and J. A. Wellner. Estimation of a k-monotone density: Limit distribution theory and the spline connection. The Annals of Statistics, 35(6):2536–2564, 2007.
[15] L. Dümbgen and K. Rufibach. Maximum likelihood estimation of a log-concave density and its distribution function: Basic properties and uniform consistency. Bernoulli, 15(1):40–68, 2009.
[16] G. Walther. Inference and modeling with log-concave distributions. Stat. Science, 2009.
[17] Y. Freund and Y. Mansour. Estimating a mixture of two product distributions. In COLT, 1999.
[18] J. Feldman, R. O'Donnell, and R. Servedio. Learning mixtures of product distributions over discrete domains. In FOCS, pages 501–510, 2005.
[19] C. Daskalakis, I. Diakonikolas, and R. A. Servedio. Learning k-modal distributions via testing. In SODA, pages 1371–1385, 2012.
[20] S. L. Warner. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309), 1965.
[21] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In FOCS, pages 429–438, 2013.
[22] J. C. Duchi, M. J. Wainwright, and M. I. Jordan. Local privacy and minimax bounds: Sharp rates for probability estimation. In NIPS, pages 1529–1537, 2013.
[23] M. Hardt, K. Ligett, and F. McSherry. A simple and practical algorithm for differentially-private data release. In NIPS, 2012.
[24] C. Li, M. Hay, G. Miklau, and Y. Wang. A data- and workload-aware query answering algorithm for range queries under differential privacy. PVLDB, 7(5):341–352, 2014.
[25] A. Beimel, K. Nissim, and U. Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. In RANDOM, pages 363–378, 2013.
[26] A. Dvoretzky, J. Kiefer, and J. Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Mathematical Statistics, 27(3):642–669, 1956.
[27] G. Rote. The convergence rate of the sandwich algorithm for approximating convex functions. Computing, 48:337–361, 1992.
[28] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103, 2007.
[29] P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, (5), 2014.
[30] C. Dwork, G. N. Rothblum, and S. Vadhan. Boosting and differential privacy. In FOCS, 2010.
Robust Portfolio Optimization
Fang Han
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21205
fhan@jhu.edu
Huitong Qiu
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21205
hqiu7@jhu.edu
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ 08544
hanliu@princeton.edu
Brian Caffo
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21205
bcaffo@jhsph.edu
Abstract
We propose a robust portfolio optimization approach based on quantile statistics.
The proposed method is robust to extreme events in asset returns, and accommodates large portfolios under limited historical data. Specifically, we show that the
risk of the estimated portfolio converges to the oracle optimal risk with parametric
rate under weakly dependent asset returns. The theory does not rely on higher order moment assumptions, thus allowing for heavy-tailed asset returns. Moreover,
the rate of convergence quantifies that the size of the portfolio under management
is allowed to scale exponentially with the sample size of the historical data. The
empirical effectiveness of the proposed method is demonstrated under both synthetic and real stock data. Our work extends existing ones by achieving robustness
in high dimensions, and by allowing serial dependence.
1 Introduction
Markowitz's mean-variance analysis sets the basis for modern portfolio optimization theory [1].
However, the mean-variance analysis has been criticized for being sensitive to estimation errors in
the mean and covariance matrix of the asset returns [2, 3]. Compared to the covariance matrix,
the mean of the asset returns is more influential and harder to estimate [4, 5]. Therefore, many
studies focus on the global minimum variance (GMV) formulation, which only involves estimating
the covariance matrix of the asset returns.
Estimating the covariance matrix of asset returns is challenging due to the high dimensionality and
heavy-tailedness of asset return data. Specifically, the number of assets under management is usually
much larger than the sample size of exploitable historical data. On the other hand, extreme events
are typical in financial asset prices, leading to heavy-tailed asset returns.
To overcome the curse of dimensionality, structured covariance matrix estimators are proposed for
asset return data. [6] considered estimators based on factor models with observable factors. [7,
8, 9] studied covariance matrix estimators based on latent factor models. [10, 11, 12] proposed to
shrink the sample covariance matrix towards highly structured covariance matrices, including the
identity matrix, order 1 autoregressive covariance matrices, and one-factor-based covariance matrix
estimators. These estimators are commonly based on the sample covariance matrix. (sub)Gaussian
tail assumptions are required to guarantee consistency.
For heavy-tailed data, robust estimators of covariance matrices are desired. Classic robust covariance matrix estimators include M-estimators, minimum volume ellipsoid (MVE) and minimum covariance determinant (MCD) estimators, S-estimators, and estimators based on data outlyingness and depth [13]. These estimators are specifically designed for data with very low dimensions and large sample sizes. For generalizing the robust estimators to high dimensions, [14] proposed the Orthogonalized Gnanadesikan–Kettenring (OGK) estimator, which extends [15]'s estimator by re-estimating the eigenvalues; [16, 17] studied shrinkage estimators based on Tyler's M-estimator. However, although OGK is computationally tractable in high dimensions, consistency is only guaranteed under fixed dimension. The shrunken Tyler's M-estimator involves iteratively inverting large matrices. Moreover, its consistency is only guaranteed when the dimension is in the same order as the sample size. The aforementioned robust estimators are analyzed under independent data points. Their performance under time series data is questionable.
In this paper, we build on a quantile-based scatter matrix estimator (a scatter matrix is defined to be any matrix proportional to the covariance matrix by a constant), and propose a robust portfolio optimization approach. Our contributions are in three aspects. First, we show that the proposed method accommodates high dimensional data by allowing the dimension to scale exponentially with the sample size. Secondly, we verify that consistency of the proposed method is achieved without any tail conditions, thus allowing for heavy-tailed asset return data. Thirdly, we consider weakly dependent time series, and demonstrate how the degree of dependence affects the consistency of the proposed method.
2 Background
In this section, we introduce the notation system, and provide a review of the gross-exposure constrained portfolio optimization that will be exploited in this paper.
2.1 Notation
Let $v = (v_1, \dots, v_d)^T$ be a $d$-dimensional real vector, and $M = [M_{jk}] \in \mathbb{R}^{d_1 \times d_2}$ be a $d_1 \times d_2$ matrix with $M_{jk}$ as the $(j, k)$ entry. For $0 < q < \infty$, we define the $\ell_q$ vector norm of $v$ as $\|v\|_q := (\sum_{j=1}^d |v_j|^q)^{1/q}$ and the $\ell_\infty$ vector norm of $v$ as $\|v\|_\infty := \max_{j=1}^d |v_j|$. Let the matrix $\ell_{\max}$ norm of $M$ be $\|M\|_{\max} := \max_{jk} |M_{jk}|$, and the Frobenius norm be $\|M\|_F := \sqrt{\sum_{jk} M_{jk}^2}$. Let $X = (X_1, \dots, X_d)^T$ and $Y = (Y_1, \dots, Y_d)^T$ be two random vectors. We write $X \stackrel{d}{=} Y$ if $X$ and $Y$ are identically distributed. We use $\mathbf{1}, \mathbf{2}, \dots$ to denote vectors with $1, 2, \dots$ at every entry.
2.2 Gross-exposure Constrained GMV Formulation
Under the GMV formulation, [18] found that imposing a no-short-sale constraint improves portfolio
efficiency. [19] relaxed the no-short-sale constraint by a gross-exposure constraint, and showed that
portfolio efficiency can be further improved.
Let $X \in \mathbb{R}^d$ be a random vector of asset returns. A portfolio is characterized by a vector of investment allocations, $w = (w_1, \dots, w_d)^T$, among the $d$ assets. The gross-exposure constrained GMV portfolio optimization can be formulated as
$$\min_w\ w^T \Sigma w \quad \text{s.t.}\quad \mathbf{1}^T w = 1,\ \|w\|_1 \le c. \tag{2.1}$$
Here $\mathbf{1}^T w = 1$ is the budget constraint, and $\|w\|_1 \le c$ is the gross-exposure constraint. $c \ge 1$ is called the gross-exposure constant, which controls the percentage of long and short positions allowed in the portfolio [19]. The optimization problem (2.1) can be converted into a quadratic programming problem, and solved by standard software [19].
3 Method
In this section, we introduce the quantile-based portfolio optimization approach. Let $Z \in \mathbb{R}$ be a random variable with distribution function $F$, and $\{z_t\}_{t=1}^T$ be a sequence of observations from $Z$. For a constant $q \in [0, 1]$, we define the $q$-quantiles of $Z$ and $\{z_t\}_{t=1}^T$ to be
$$Q(Z; q) = Q(F; q) := \inf\{z : \mathbb{P}(Z \le z) \ge q\},$$
$$\hat{Q}(\{z_t\}_{t=1}^T; q) := z^{(k)},\quad \text{where } k = \min\{t : t/T \ge q\}.$$
Here $z^{(1)} \le \dots \le z^{(T)}$ are the order statistics of $\{z_t\}_{t=1}^T$. We say $Q(Z; q)$ is unique if there exists a unique $z$ such that $\mathbb{P}(Z \le z) = q$. We say $\hat{Q}(\{z_t\}_{t=1}^T; q)$ is unique if there exists a unique $z \in \{z_1, \dots, z_T\}$ such that $z = z^{(k)}$. Following the estimator $Q_n$ [20], we define the population and sample quantile-based scales to be
$$\sigma^Q(Z) := Q(|Z - \tilde{Z}|; 1/4) \quad\text{and}\quad \hat{\sigma}^Q(\{z_t\}_{t=1}^T) := \hat{Q}(\{|z_s - z_t|\}_{1 \le s < t \le T}; 1/4). \tag{3.1}$$
Here $\tilde{Z}$ is an independent copy of $Z$. Based on $\sigma^Q$ and $\hat{\sigma}^Q$, we can further define robust scatter matrices for asset returns. In detail, let $X = (X_1, \dots, X_d)^T \in \mathbb{R}^d$ be a random vector representing the returns of $d$ assets, and $\{X_t\}_{t=1}^T$ be a sequence of observations from $X$, where $X_t = (X_{t1}, \dots, X_{td})^T$. We define the population and sample quantile-based scatter matrices (QNE) to be
$$R^Q := [R^Q_{jk}] \quad\text{and}\quad \hat{R}^Q := [\hat{R}^Q_{jk}],$$
where the entries of $R^Q$ and $\hat{R}^Q$ are given by
$$R^Q_{jj} := \sigma^Q(X_j)^2, \qquad \hat{R}^Q_{jj} := \hat{\sigma}^Q(\{X_{tj}\}_{t=1}^T)^2,$$
$$R^Q_{jk} := \frac{1}{4}\Big[\sigma^Q(X_j + X_k)^2 - \sigma^Q(X_j - X_k)^2\Big],$$
$$\hat{R}^Q_{jk} := \frac{1}{4}\Big[\hat{\sigma}^Q(\{X_{tj} + X_{tk}\}_{t=1}^T)^2 - \hat{\sigma}^Q(\{X_{tj} - X_{tk}\}_{t=1}^T)^2\Big].$$
Since $\hat{\sigma}^Q$ can be computed in $O(T \log T)$ time [20], the computational complexity of $\hat{R}^Q$ is $O(d^2 T \log T)$. Since $T \ll d$ in practice, $\hat{R}^Q$ can be computed almost as efficiently as the sample covariance matrix, which has $O(d^2 T)$ complexity.
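For concreteness, the following sketch (our own, with hypothetical names) computes $\hat{\sigma}^Q$ and $\hat{R}^Q$; it uses the naive $O(T^2)$ pairwise-difference computation per pair of coordinates rather than the $O(T \log T)$ algorithm of [20]:

```python
import numpy as np
from itertools import combinations

def sigma_q(z):
    """Sample quantile-based scale: the empirical 1/4-quantile of the pairwise
    absolute differences |z_s - z_t|, s < t, as in (3.1).  Naive O(T^2) version."""
    diffs = np.abs(z[:, None] - z[None, :])[np.triu_indices(len(z), k=1)]
    m = len(diffs)
    k = int(np.ceil(m / 4))                  # k = min{t : t/m >= 1/4}
    return np.partition(diffs, k - 1)[k - 1]

def qne(X):
    """Sample quantile-based scatter matrix (QNE) of a T x d return matrix X."""
    T, d = X.shape
    R = np.zeros((d, d))
    for j in range(d):
        R[j, j] = sigma_q(X[:, j]) ** 2
    for j, k in combinations(range(d), 2):
        R[j, k] = R[k, j] = (sigma_q(X[:, j] + X[:, k]) ** 2
                             - sigma_q(X[:, j] - X[:, k]) ** 2) / 4.0
    return R
```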
Let $w = (w_1, \dots, w_d)^T$ be the vector of investment allocations among the $d$ assets. For a matrix $M$, we define a risk function $R : \mathbb{R}^d \times \mathbb{R}^{d \times d} \to \mathbb{R}$ by
$$R(w; M) := w^T M w.$$
When $X$ has covariance matrix $\Sigma$, $R(w; \Sigma) = \mathrm{Var}(w^T X)$ is the variance of the portfolio return, $w^T X$, and is employed as the objective function in the GMV formulation. However, estimating $\Sigma$ is difficult due to the heavy tails of asset returns. In this paper, we adopt $R(w; R^Q)$ as a robust alternative to the moment-based risk metric $R(w; \Sigma)$, and consider the following oracle portfolio optimization problem:
$$w^{opt} = \mathop{\mathrm{argmin}}_w\ R(w; R^Q) \quad \text{s.t.}\quad \mathbf{1}^T w = 1,\ \|w\|_1 \le c. \tag{3.2}$$
Here $\|w\|_1 \le c$ is the gross-exposure constraint introduced in Section 2.2. In practice, $R^Q$ is unknown and has to be estimated. For convexity of the risk function, we project $\hat{R}^Q$ onto a cone of positive definite matrices:
$$\tilde{R}^Q = \mathop{\mathrm{argmin}}_{R}\ \|\hat{R}^Q - R\|_{\max} \quad \text{s.t.}\quad R \in \mathcal{S} := \{M \in \mathbb{R}^{d \times d} : M^T = M,\ \lambda_{\min} I_d \preceq M \preceq \lambda_{\max} I_d\}. \tag{3.3}$$
Here $\lambda_{\min}$ and $\lambda_{\max}$ set the lower and upper bounds for the eigenvalues of $\tilde{R}^Q$. The optimization problem (3.3) can be solved by a projection and contraction algorithm [21]. We summarize the algorithm in the supplementary material. Using $\tilde{R}^Q$, we formulate the empirical robust portfolio optimization by
$$\tilde{w}^{opt} = \mathop{\mathrm{argmin}}_w\ R(w; \tilde{R}^Q) \quad \text{s.t.}\quad \mathbf{1}^T w = 1,\ \|w\|_1 \le c. \tag{3.4}$$
Remark 3.1. The robust portfolio optimization approach involves three parameters: $\lambda_{\min}$, $\lambda_{\max}$, and $c$. Empirically, setting $\lambda_{\min} = 0.005$ and $\lambda_{\max} = \infty$ proves to work well. $c$ is typically provided by investors for controlling the percentages of short positions. When a data-driven choice is desired, we refer to [19] for a cross-validation-based approach.
Remark 3.2. The rationale behind the positive definite projection (3.3) lies in two aspects. First, in order for the portfolio optimization to be convex and well conditioned, a positive definite matrix with lower bounded eigenvalues is needed. This is guaranteed by setting $\lambda_{\min} > 0$. Secondly, the projection (3.3) is more robust compared to the OGK estimate [14]. OGK induces positive definiteness by re-estimating the eigenvalues using the variances of the principal components. Robustness is lost when the data, possibly containing outliers, are projected onto the principal directions for estimating the principal components.
Remark 3.3. We adopt the 1/4 quantile in the definitions of $\sigma^Q$ and $\hat{\sigma}^Q$ to achieve a 50% breakdown point. However, we note that our methodology and theory carry through if 1/4 is replaced by any absolute constant $q \in (0, 1)$.
4 Theoretical Properties
In this section, we provide theoretical analysis of the proposed portfolio optimization approach. For an optimized portfolio $\hat{w}^{opt}$, based on an estimate $R$ of $R^Q$, the next lemma shows that the error between the risks $R(\hat{w}^{opt}; R^Q)$ and $R(w^{opt}; R^Q)$ is essentially related to the estimation error in $R$.
Lemma 4.1. Let $\hat{w}^{opt}$ be the solution to
$$\min_w\ R(w; R) \quad \text{s.t.}\quad \mathbf{1}^T w = 1,\ \|w\|_1 \le c \tag{4.1}$$
for an arbitrary matrix $R$. Then, we have
$$|R(\hat{w}^{opt}; R^Q) - R(w^{opt}; R^Q)| \le 2c^2 \|R - R^Q\|_{\max},$$
where $w^{opt}$ is the solution to the oracle portfolio optimization problem (3.2), and $c$ is the gross-exposure constant.
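The key inequality behind Lemma 4.1 is elementary; the following one-step sketch is our own reconstruction of the main argument, not the paper's full proof. For any $w$ with $\|w\|_1 \le c$,
$$|R(w; R) - R(w; R^Q)| = |w^T(R - R^Q)w| \le \sum_{j,k} |w_j|\,|w_k|\,|R_{jk} - R^Q_{jk}| \le \|w\|_1^2\, \|R - R^Q\|_{\max} \le c^2 \|R - R^Q\|_{\max}.$$
Applying this bound at both $\hat{w}^{opt}$ and $w^{opt}$, and using the optimality of each portfolio for its own objective, yields the factor $2c^2$.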
Next, we derive the rate of convergence for $R(\tilde{w}^{opt}; R^Q)$, which relates to the rate of convergence in $\|\tilde{R}^Q - R^Q\|_{\max}$. To this end, we first introduce a dependence condition on the asset return series.
Definition 4.2. Let $\{X_t\}_{t \in \mathbb{Z}}$ be a stationary process. Denote by $\mathcal{F}_{-\infty}^0 := \sigma(X_t : t \le 0)$ and $\mathcal{F}_n^\infty := \sigma(X_t : t \ge n)$ the $\sigma$-fields generated by $\{X_t\}_{t \le 0}$ and $\{X_t\}_{t \ge n}$, respectively. The $\phi$-mixing coefficient is defined by
$$\phi(n) := \sup_{B \in \mathcal{F}_{-\infty}^0,\ A \in \mathcal{F}_n^\infty,\ \mathbb{P}(B) > 0} |\mathbb{P}(A \mid B) - \mathbb{P}(A)|.$$
The process $\{X_t\}_{t \in \mathbb{Z}}$ is $\phi$-mixing if and only if $\lim_{n \to \infty} \phi(n) = 0$.
Condition 1. $\{X_t \in \mathbb{R}^d\}_{t \in \mathbb{Z}}$ is a stationary process such that for any $j \ne k \in \{1, \dots, d\}$, $\{X_{tj}\}_{t \in \mathbb{Z}}$, $\{X_{tj} + X_{tk}\}_{t \in \mathbb{Z}}$, and $\{X_{tj} - X_{tk}\}_{t \in \mathbb{Z}}$ are $\phi$-mixing processes satisfying $\phi(n) \le 1/n^{1+\gamma}$ for any $n > 0$ and some constant $\gamma > 0$.
The parameter $\gamma$ determines the rate of decay of $\phi(n)$, and characterizes the degree of dependence in $\{X_t\}_{t \in \mathbb{Z}}$. Next, we introduce an identifiability condition on the distribution function of the asset returns.
Condition 2. Let $\tilde{X} = (\tilde{X}_1, \dots, \tilde{X}_d)^T$ be an independent copy of $X_1$. For any $j \ne k \in \{1, \dots, d\}$, let $F_{1;j}$, $F_{2;j,k}$, and $F_{3;j,k}$ be the distribution functions of $|X_{1j} - \tilde{X}_j|$, $|X_{1j} + X_{1k} - \tilde{X}_j - \tilde{X}_k|$, and $|X_{1j} - X_{1k} - \tilde{X}_j + \tilde{X}_k|$. We assume there exist constants $\kappa > 0$ and $\eta > 0$ such that
$$\inf_{|y - Q(F; 1/4)| \le \eta} \frac{d}{dy} F(y) \ge \kappa$$
for any $F \in \{F_{1;j}, F_{2;j,k}, F_{3;j,k} : j \ne k = 1, \dots, d\}$.
Condition 2 guarantees the identifiability of the 1/4 quantiles, and is standard in the literature on quantile statistics [22, 23]. Based on Conditions 1 and 2, we can present the rates of convergence for $\hat{R}^Q$ and $\tilde{R}^Q$.
Theorem 4.3. Let $\{X_t\}_{t \in \mathbb{Z}}$ be an absolutely continuous stationary process satisfying Conditions 1 and 2. Suppose $\log d / T \to 0$ as $T \to \infty$. Then, for any $\delta \in (0, 1)$ and $T$ large enough, with probability no smaller than $1 - 8\delta^2$, we have
$$\|\hat{R}^Q - R^Q\|_{\max} \le r_T. \tag{4.2}$$
Here the rate of convergence $r_T$ is defined by
$$r_T = \max\Bigg\{ \frac{2}{\kappa^2}\bigg[\sqrt{\frac{4(1 + 2C_\gamma)(\log d - \log \delta)}{T}} + \frac{4C_\gamma}{T}\bigg]^2,\ \frac{4\sigma^Q_{\max}}{\kappa}\bigg[\sqrt{\frac{4(1 + 2C_\gamma)(\log d - \log \delta)}{T}} + \frac{4C_\gamma}{T}\bigg] \Bigg\}, \tag{4.3}$$
where $\sigma^Q_{\max} := \max\{\sigma^Q(X_j),\ \sigma^Q(X_j + X_k),\ \sigma^Q(X_j - X_k) : j \ne k \in \{1, \dots, d\}\}$ and $C_\gamma := \sum_{k=1}^\infty 1/k^{1+\gamma}$. Moreover, if $R^Q \in \mathcal{S}$ for $\mathcal{S}$ defined in (3.3), we further have
$$\|\tilde{R}^Q - R^Q\|_{\max} \le 2 r_T. \tag{4.4}$$
The implications of Theorem 4.3 are as follows.
1. When the parameters $\kappa$, $\gamma$, and $\sigma^Q_{\max}$ do not scale with $T$, the rate of convergence reduces to $O_P(\sqrt{\log d / T})$. Thus, the number of assets under management is allowed to scale exponentially with sample size $T$. Compared to similar rates of convergence obtained for sample-covariance-based estimators [24, 25, 9], we do not require any moment or tail conditions, thus accommodating heavy-tailed asset return data.
2. The effect of serial dependence on the rate of convergence is characterized by $C_\gamma$. Specifically, as $\gamma$ approaches 0, $C_\gamma = \sum_{k=1}^\infty 1/k^{1+\gamma}$ increases towards infinity, inflating $r_T$. $\gamma$ is allowed to scale with $T$ such that $C_\gamma = o(T / \log d)$.
3. The rate of convergence $r_T$ is inversely related to the lower bound, $\kappa$, on the marginal density functions around the 1/4 quantiles. This is because when $\kappa$ is small, the distribution functions are flat around the 1/4 quantiles, making the population quantiles harder to estimate.
Combining Lemma 4.1 and Theorem 4.3, we obtain the rate of convergence for $R(\tilde{w}^{opt}; R^Q)$.
Theorem 4.4. Let $\{X_t\}_{t \in \mathbb{Z}}$ be an absolutely continuous stationary process satisfying Conditions 1 and 2. Suppose that $\log d / T \to 0$ as $T \to \infty$ and $R^Q \in \mathcal{S}$. Then, for any $\delta \in (0, 1)$ and $T$ large enough, we have
$$|R(\tilde{w}^{opt}; R^Q) - R(w^{opt}; R^Q)| \le 2c^2 r_T, \tag{4.5}$$
where $r_T$ is defined in (4.3) and $c$ is the gross-exposure constant.
Theorem 4.4 shows that the risk of the estimated portfolio converges to the oracle optimal risk with parametric rate $r_T$. The number of assets, $d$, is allowed to scale exponentially with the sample size $T$. Moreover, the rate of convergence does not rely on any tail conditions on the distribution of the asset returns.
For the rest of this section, we build the connection between the proposed robust portfolio optimization and its moment-based counterpart. Specifically, we show that they are consistent under the elliptical model.
Definition 4.5 ([26]). A random vector $X \in \mathbb{R}^d$ follows an elliptical distribution with location $\mu \in \mathbb{R}^d$ and scatter $S \in \mathbb{R}^{d \times d}$ if and only if there exist a nonnegative random variable $\xi \in \mathbb{R}$, a matrix $A \in \mathbb{R}^{d \times r}$ with $\mathrm{rank}(A) = r$, and a random vector $U \in \mathbb{R}^r$ independent of $\xi$ and uniformly distributed on the $r$-dimensional sphere $\mathbb{S}^{r-1}$, such that
$$X \stackrel{d}{=} \mu + \xi A U.$$
Here $S = AA^T$ has rank $r$. We denote $X \sim EC_d(\mu, S, \xi)$. $\xi$ is called the generating variate.
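The stochastic representation in Definition 4.5 suggests a direct sampler; the following sketch (our own, with an assumed generating-variate sampler) draws from $EC_d(\mu, S, \xi)$ with $S = AA^T$:

```python
import numpy as np

def sample_elliptical(n, mu, A, xi_sampler, rng):
    """Draw n samples X = mu + xi * A @ U, with U uniform on the unit sphere
    S^{r-1} and xi >= 0 the generating variate, independent of U (sketch)."""
    d, r = A.shape
    U = rng.standard_normal((n, r))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # uniform direction on S^{r-1}
    xi = xi_sampler(n)
    return mu + xi[:, None] * (U @ A.T)

rng = np.random.default_rng(4)
d = 5
A = rng.standard_normal((d, d))
# log-normal generating variate log N(0, 2), as in model D3 of Section 5.1
X = sample_elliptical(2000, np.zeros(d), A,
                      lambda n: rng.lognormal(0.0, np.sqrt(2.0), n), rng)
```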
Commonly used elliptical distributions include the Gaussian distribution and the t-distribution. Elliptical distributions have been widely used for modeling financial return data, since they naturally capture many stylized properties including heavy tails and tail dependence [27, 28, 29, 30, 31, 32]. The next theorem relates $R^Q$ and $R(w; R^Q)$ to their moment-based counterparts, $\Sigma$ and $R(w; \Sigma)$, under the elliptical model.
Theorem 4.6. Let $X = (X_1, \dots, X_d)^T \sim EC_d(\mu, S, \xi)$ be an absolutely continuous elliptical random vector and $\tilde{X} = (\tilde{X}_1, \dots, \tilde{X}_d)^T$ be an independent copy of $X$. Then, we have
$$R^Q = m^Q S \tag{4.6}$$
for some constant $m^Q$ only depending on the distribution of $X$. Moreover, if $0 < \mathbb{E}\xi^2 < \infty$, we have
$$R^Q = c^Q \Sigma \quad\text{and}\quad R(w; R^Q) = c^Q R(w; \Sigma), \tag{4.7}$$
where $\Sigma = \mathrm{Cov}(X)$ is the covariance matrix of $X$, and $c^Q$ is a constant given by
$$c^Q = Q\Big(\frac{(X_j - \tilde{X}_j)^2}{\mathrm{Var}(X_j)}; \frac{1}{4}\Big) = Q\Big(\frac{(X_j + X_k - \tilde{X}_j - \tilde{X}_k)^2}{\mathrm{Var}(X_j + X_k)}; \frac{1}{4}\Big) = Q\Big(\frac{(X_j - X_k - \tilde{X}_j + \tilde{X}_k)^2}{\mathrm{Var}(X_j - X_k)}; \frac{1}{4}\Big). \tag{4.8}$$
Here the last two equalities hold when $\mathrm{Var}(X_j + X_k) > 0$ and $\mathrm{Var}(X_j - X_k) > 0$.
By Theorem 4.6, under the elliptical model, minimizing the robust risk metric $R(w; R^Q)$ is equivalent to minimizing the standard moment-based risk metric $R(w; \Sigma)$. Thus, the robust portfolio optimization (3.2) is equivalent to its moment-based counterpart (2.1) at the population level. Plugging (4.7) into (4.5) leads to the following theorem.
Theorem 4.7. Let $\{X_t\}_{t \in \mathbb{Z}}$ be an absolutely continuous stationary process satisfying Conditions 1 and 2. Suppose that $X_1 \sim EC_d(\mu, S, \xi)$ follows an elliptical distribution with covariance matrix $\Sigma$, and $\log d / T \to 0$ as $T \to \infty$. Then, we have
$$|R(\tilde{w}^{opt}; \Sigma) - R(w^{opt}; \Sigma)| \le \frac{2c^2}{c^Q} r_T,$$
where $c$ is the gross-exposure constant, $c^Q$ is defined in (4.8), and $r_T$ is defined in (4.3).
Thus, under the elliptical model, the optimal portfolio $\tilde{w}^{opt}$ obtained from the robust portfolio optimization also leads to a parametric rate of convergence for the standard moment-based risk.
5 Experiments
In this section, we investigate the empirical performance of the proposed portfolio optimization approach. In Section 5.1, we demonstrate the robustness of the proposed approach using synthetic heavy-tailed data. In Section 5.2, we simulate portfolio management using the Standard & Poor's 500 (S&P 500) stock index data.
The proposed portfolio optimization approach (QNE) is compared with three competitors. These competitors are constructed by replacing the covariance matrix $\Sigma$ in (2.1) by commonly used covariance/scatter matrix estimators:
1. OGK: The orthogonalized Gnanadesikan–Kettenring estimator constructs a pilot scatter matrix estimate using a robust $\tau$-estimator of scale, then re-estimates the eigenvalues using the variances of the principal components [14].
2. Factor: The principal factor estimator iteratively solves for the specific variances and the factor loadings [33].
3. Shrink: The shrinkage estimator shrinks the sample covariance matrix towards a one-factor covariance estimator [10].
5.1 Synthetic Data
Following [19], we construct the covariance matrix of the asset returns using a three-factor model:
$$X_j = b_{j1} f_1 + b_{j2} f_2 + b_{j3} f_3 + \epsilon_j,\quad j = 1, \dots, d, \tag{5.1}$$
where $X_j$ is the return of the $j$-th stock, $b_{jk}$ is the loading of the $j$-th stock on factor $f_k$, and $\epsilon_j$ is the idiosyncratic noise independent of the three factors. Under this model, the covariance matrix of the stock returns is given by
$$\Sigma = B \Sigma_f B^T + \mathrm{diag}(\sigma_1^2, \dots, \sigma_d^2), \tag{5.2}$$
where $B = [b_{jk}]$ is a $d \times 3$ matrix consisting of the factor loadings, $\Sigma_f$ is the covariance matrix of the three factors, and $\sigma_j^2$ is the variance of the noise $\epsilon_j$. We adopt the covariance in (5.2) in our simulations. Following [19], we generate the factor loadings $B$ from a trivariate normal distribution, $N_3(\mu_b, \Sigma_b)$, where the mean $\mu_b$ and covariance $\Sigma_b$ are specified in Table 1. After the factor loadings are generated, they are fixed as parameters throughout the simulations. The covariance matrix, $\Sigma_f$, of the three factors is also given in Table 1. The standard deviations $\sigma_1, \dots, \sigma_d$ of the idiosyncratic noises are generated independently from a truncated gamma distribution with shape 3.3586 and scale 0.1876, restricting the support to $[0.195, \infty)$. Again, these standard deviations are fixed as parameters once they are generated. According to [19], these parameters are obtained by fitting the three-factor model (5.1) using three-year daily return data of 30 Industry Portfolios from May 1, 2002 to Aug. 29, 2005. The covariance matrix $\Sigma$ is fixed throughout the simulations. Since we are only interested in risk optimization, we set the mean of the asset returns to be $\mu = 0$. The dimension of the stocks under consideration is fixed at $d = 100$.
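For reference, a sketch (our own) of the covariance construction (5.2) with the Table 1 parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 100
mu_b = np.array([0.7828, 0.5180, 0.4100])
Sigma_b = np.array([[0.02915,  0.02387,  0.01018],
                    [0.02387,  0.05395, -0.00697],
                    [0.01018, -0.00697,  0.08686]])
Sigma_f = np.array([[ 1.2507, -0.0350, -0.2042],
                    [-0.0350,  0.3156, -0.0023],
                    [-0.2042, -0.0023,  0.1930]])

B = rng.multivariate_normal(mu_b, Sigma_b, size=d)   # factor loadings, then fixed
# sigma_j ~ Gamma(shape=3.3586, scale=0.1876) truncated to [0.195, inf), by rejection
sig = rng.gamma(3.3586, 0.1876, size=10 * d)
sig = sig[sig >= 0.195]
assert len(sig) >= d
sig = sig[:d]
Sigma = B @ Sigma_f @ B.T + np.diag(sig ** 2)        # Equation (5.2)
```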
Given the covariance matrix $\Sigma$, we generate the asset return data from the following three distributions:
D1: the multivariate Gaussian distribution, $N_d(0, \Sigma)$;
D2: the multivariate t distribution with 3 degrees of freedom and covariance matrix $\Sigma$;
D3: the elliptical distribution with log-normal generating variate, $\log N(0, 2)$, and covariance matrix $\Sigma$.
Table 1: Parameters for generating the covariance matrix in Equation (5.2).
Parameters for factor loadings
1.8
2.0
risk
0.4
1.0
gross?exposure constant (c)
1.2
1.0
1.8
2.0
1.0
Factor
Shrink
1.0
1.2
1.4
1.6
1.8
gross?exposure constant (c)
Gaussian
2.0
1.6
1.8
2.0
QNE
OGK
Factor
Shrink
0.2
0.0
0.2
0.4
0.6
matching rate
0.8
QNE
OGK
1.4
elliptical log-normal
0.0
0.0
0.2
0.4
0.6
matching rate
0.8
Factor
Shrink
1.2
gross?exposure constant (c)
0.8
1.0
1.6
multivariate t
Gaussian
QNE
OGK
1.4
gross?exposure constant (c)
1.0
1.6
Factor
Shrink
0.6
1.4
Oracle
QNE
OGK
0.4
1.2
-0.2042
-0.0023
0.1930
0.2
0.4
risk
Factor
Shrink
0.2
0.2
1.0
-0.035
0.3156
-0.0023
0.8
Oracle
QNE
OGK
1.2507
-0.0350
-0.2042
1.0
1.0
0.01018
-0.00697
0.08686
0.8
Factor
Shrink
0.6
0.8
Oracle
QNE
OGK
0.02387
0.05395
-0.00697
0.4
risk
0.02915
0.02387
0.01018
?f
0.6
1.0
0.7828
0.5180
0.4100
matching rate
Parameters for factor returns
?b
0.6
?b
1.0
1.2
1.4
1.6
1.8
gross?exposure constant (c)
multivariate t
2.0
1.0
1.2
1.4
1.6
1.8
2.0
gross?exposure constant (c)
elliptical log-normal
Figure 1: Portfolio risks, selected number of stocks, and matching rates to the oracle optimal portfolios.
D2: multivariate t distribution with 3 degrees of freedom and covariance matrix Σ;
D3: elliptical distribution with log-normal generating variate, log N(0, 2), and covariance matrix Σ.
Under each distribution, we generate asset return series of half a year (T = 126). We estimate
the covariance/scatter matrices using QNE and the three competitors, and plug them into (2.1) to
optimize the portfolio allocations. We also solve (2.1) with the true covariance matrix, ?, to obtain
the oracle optimal portfolios as benchmarks. We range the gross-exposure constraint, c, from 1 to 2.
The results are based on 1,000 simulations.
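To make the protocol concrete, here is a sketch of the return generators and the optimization step. We read (2.1) as the gross-exposure constrained risk minimization of [19] (min_w wᵀΣw subject to wᵀ1 = 1 and ‖w‖₁ ≤ c); cvxpy is our solver choice, not necessarily the authors', and the D3 sampler encodes our reading of the log-normal generating variate.

import numpy as np
import cvxpy as cp

def sample_returns(Sigma, T, model, rng):
    # Draw T return vectors with covariance Sigma under D1, D2 or D3.
    d = Sigma.shape[0]
    if model == "D1":
        return rng.multivariate_normal(np.zeros(d), Sigma, size=T)
    if model == "D2":
        nu = 3.0
        # t draws have covariance (nu/(nu-2)) * S, so pre-scale S to hit Sigma
        S = Sigma * (nu - 2.0) / nu
        Z = rng.multivariate_normal(np.zeros(d), S, size=T)
        W = rng.chisquare(nu, size=T) / nu
        return Z / np.sqrt(W)[:, None]
    if model == "D3":
        # Elliptical with log-normal generating variate: X = xi * A u with u
        # uniform on the unit sphere; we read "log N(0, 2)" as log(xi) ~ N(0, 2)
        # (variance 2), giving E[xi^2] = exp(4), and rescale so Cov(X) = Sigma.
        xi = np.exp(rng.normal(0.0, np.sqrt(2.0), size=T))
        G = rng.standard_normal((T, d))
        U = G / np.linalg.norm(G, axis=1, keepdims=True)
        A = np.linalg.cholesky(Sigma) * np.sqrt(d / np.exp(4.0))
        return xi[:, None] * (U @ A.T)
    raise ValueError(model)

def min_risk_portfolio(Sigma_hat, c):
    # Problem (2.1), read as in [19]: min w' Sigma w  s.t.  sum(w) = 1, ||w||_1 <= c.
    # Sigma_hat must be symmetric PSD (project it first if an estimator is not).
    d = Sigma_hat.shape[0]
    w = cp.Variable(d)
    problem = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma_hat)),
                         [cp.sum(w) == 1, cp.norm1(w) <= c])
    problem.solve()
    return w.value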
Figure 1 shows the portfolio risks R(ŵ; Σ) and the matching rates between the optimized portfolios
and the oracle optimal portfolios². Here the matching rate is defined as follows. For two portfolios
P₁ and P₂, let S₁ and S₂ be the corresponding sets of selected assets, i.e., the assets for which
the weights, wᵢ, are non-zero. The matching rate between P₁ and P₂ is defined as r(P₁, P₂) =
|S₁ ∩ S₂|/|S₁ ∪ S₂|, where |S| denotes the cardinality of set S.
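In code, the matching rate is the Jaccard index of the two supports; a minimal sketch (the tolerance for "non-zero" is our choice, since solver output is only approximately sparse):

import numpy as np

def matching_rate(w1, w2, tol=1e-8):
    # r(P1, P2) = |S1 & S2| / |S1 | S2| for the supports S1, S2
    s1 = set(np.flatnonzero(np.abs(w1) > tol))
    s2 = set(np.flatnonzero(np.abs(w2) > tol))
    return len(s1 & s2) / len(s1 | s2)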
We note two observations from Figure 1. (i) The four estimators lead to comparable portfolio
risks under the Gaussian model D1. However, under the heavy-tailed distributions D2 and D3, QNE
achieves lower portfolio risk. (ii) The matching rates of QNE are stable across the three models,
and are higher than those of the competing methods under the heavy-tailed distributions D2 and D3. Thus, we
conclude that QNE is robust to heavy tails in both risk minimization and asset selection.
5.2 Real Data
In this section, we simulate portfolio management using the S&P 500 stocks. We collect 1,258
adjusted daily closing prices³ for 435 stocks that stayed in the S&P 500 index from January 1, 2003
² Due to the ℓ₁ regularization in the gross-exposure constraint, the solution is generally sparse.
³ The adjusted closing prices account for all corporate actions including stock splits, dividends, and rights offerings.
Table 2: Annualized Sharpe ratios, returns, and risks under 4 competing approaches, using S&P 500
index data.

Sharpe ratio     c=1.0   c=1.2   c=1.4   c=1.6   c=1.8   c=2.0
  QNE            2.04    1.89    1.61    1.56    1.55    1.53
  OGK            1.64    1.39    1.24    1.31    1.48    1.51
  Factor         1.29    1.22    1.34    1.38    1.41    1.43
  Shrink         0.92    0.74    0.72    0.75    0.78    0.83

return (in %)    c=1.0   c=1.2   c=1.4   c=1.6   c=1.8   c=2.0
  QNE            20.46   18.41   15.58   15.02   14.77   14.51
  OGK            16.59   13.15   11.30   11.48   12.39   12.27
  Factor         13.18   10.79   10.88   10.68   10.57   10.60
  Shrink         9.84    7.20    6.55    6.49    6.58    6.76

risk (in %)      c=1.0   c=1.2   c=1.4   c=1.6   c=1.8   c=2.0
  QNE            10.02   9.74    9.70    9.63    9.54    9.48
  OGK            10.09   9.46    9.10    8.75    8.39    8.13
  Factor         10.19   8.83    8.12    7.71    7.51    7.43
  Shrink         10.70   9.76    9.14    8.68    8.38    8.18
to December 31, 2007. Using the closing prices, we obtain 1,257 daily returns as the daily growth
rates of the prices.
We manage a portfolio consisting of the 435 stocks from January 1, 2003 to December 31, 2007.⁴
On days i = 42, 43, . . . , 1,256, we optimize the portfolio allocations using the past 2 months of stock
return data (42 sample points). We hold the portfolio for one day, and evaluate the portfolio return
on day i + 1. In this way, we obtain 1,215 portfolio returns. We repeat the process for each of the
four methods under comparison, and range the gross-exposure constant c from 1 to 2.⁵
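A sketch of this one-day-ahead rolling backtest and of the annualization behind Table 2 (252 trading days per year is our assumption; estimate_scatter stands for any of the four estimators, and min_risk_portfolio is the (2.1) solver sketched in Section 5.1):

import numpy as np

def backtest(returns, estimate_scatter, c, window=42):
    # On each day i, estimate on days i-window..i-1, then hold over day i.
    # With 1,257 daily returns and window=42 this yields 1,215 portfolio returns.
    T, d = returns.shape
    realized = []
    for i in range(window, T):
        Sigma_hat = estimate_scatter(returns[i - window:i])
        w = min_risk_portfolio(Sigma_hat, c)
        realized.append(returns[i] @ w)
    return np.array(realized)

def annualized_sharpe(daily_returns, periods_per_year=252):
    # sqrt(252) * mean(daily) / std(daily), with zero benchmark rate
    r = np.asarray(daily_returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std()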
Since the true covariance matrix of the stock returns is unknown, we adopt the Sharpe ratio for
evaluating the performances of the portfolios. Table 2 summarizes the annualized Sharpe ratios,
mean returns, and empirical risks (i.e., standard deviations of the portfolio returns). We observe that
QNE achieves the largest Sharpe ratios under all values of the gross-exposure constant, indicating
the lowest risks under the same returns (or equivalently, the highest returns under the same risk).
6 Discussion
In this paper, we propose a robust portfolio optimization framework, building on a quantile-based
scatter matrix. We obtain non-asymptotic rates of convergence for the scatter matrix estimators and
the risk of the estimated portfolio. The relations of the proposed framework with its moment-based
counterpart are well understood.
The main contribution of the robust portfolio optimization approach lies in its robustness to heavy
tails in high dimensions. Heavy tails present unique challenges in high dimensions compared to
low dimensions. For example, asymptotic theory of M-estimators guarantees consistency at the rate
O_P(√(d/n)) even for non-Gaussian data [34, 35]. If d ≪ n, the statistical error diminishes rapidly with
increasing n. However, when d ≫ n, the statistical error may scale rapidly with dimension. Thus,
stringent tail conditions, such as subGaussian conditions, are required to guarantee consistency for
moment-based estimators in high dimensions [36]. In this paper, based on quantile statistics, we
achieve consistency for portfolio risk without assuming any tail conditions, while allowing d to
scale nearly exponentially with n.
Another contribution of this work lies in the theoretical analysis of how serial dependence may affect
the consistency of the estimation. We measure the degree of serial dependence using the β-mixing
coefficient, β(n). We show that the effect of the serial dependence on the rate of convergence is
summarized by the parameter C, which characterizes the size of Σ_{n=1}^∞ β(n).
⁴ We drop the data after 2007 to avoid the financial crisis, when the stock prices are likely to violate the stationarity assumption.
⁵ c = 2 imposes a 50% upper bound on the percentage of short positions. In practice, the percentage of short positions is usually strictly controlled to be much lower.
References
[1] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
[2] Michael J Best and Robert R Grauer. On the sensitivity of mean-variance-efficient portfolios to changes in asset means: some analytical and computational results. Review of Financial Studies, 4(2):315–342, 1991.
[3] Vijay Kumar Chopra and William T Ziemba. The effect of errors in means, variances, and covariances on optimal portfolio choice. The Journal of Portfolio Management, 19(2):6–11, 1993.
[4] Robert C Merton. On estimating the expected return on the market: An exploratory investigation. Journal of Financial Economics, 8(4):323–361, 1980.
[5] Jarl G Kallberg and William T Ziemba. Mis-specifications in portfolio selection problems. In Risk and Capital, pages 74–87. Springer, 1984.
[6] Jianqing Fan, Yingying Fan, and Jinchi Lv. High dimensional covariance matrix estimation using a factor model. Journal of Econometrics, 147(1):186–197, 2008.
[7] James H Stock and Mark W Watson. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association, 97(460):1167–1179, 2002.
[8] Jushan Bai, Kunpeng Li, et al. Statistical analysis of factor models of high dimension. The Annals of Statistics, 40(1):436–465, 2012.
[9] Jianqing Fan, Yuan Liao, and Martina Mincheva. Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(4):603–680, 2013.
[10] Olivier Ledoit and Michael Wolf. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of Empirical Finance, 10(5):603–621, 2003.
[11] Olivier Ledoit and Michael Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, 2004.
[12] Olivier Ledoit and Michael Wolf. Honey, I shrunk the sample covariance matrix. The Journal of Portfolio Management, 30(4):110–119, 2004.
[13] Peter J Huber. Robust Statistics. Wiley, 1981.
[14] Ricardo A Maronna and Ruben H Zamar. Robust estimates of location and dispersion for high-dimensional datasets. Technometrics, 44(4):307–317, 2002.
[15] Ramanathan Gnanadesikan and John R Kettenring. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, 28(1):81–124, 1972.
[16] Yilun Chen, Ami Wiesel, and Alfred O Hero. Robust shrinkage estimation of high-dimensional covariance matrices. IEEE Transactions on Signal Processing, 59(9):4097–4107, 2011.
[17] Romain Couillet and Matthew R McKay. Large dimensional analysis and optimization of robust shrinkage covariance matrix estimators. Journal of Multivariate Analysis, 131:99–120, 2014.
[18] Ravi Jagannathan and T Ma. Risk reduction in large portfolios: Why imposing the wrong constraints helps. The Journal of Finance, 58(4):1651–1683, 2003.
[19] Jianqing Fan, Jingjin Zhang, and Ke Yu. Vast portfolio selection with gross-exposure constraints. Journal of the American Statistical Association, 107(498):592–606, 2012.
[20] Peter J Rousseeuw and Christophe Croux. Alternatives to the median absolute deviation. Journal of the American Statistical Association, 88(424):1273–1283, 1993.
[21] M. H. Xu and H. Shao. Solving the matrix nearness problem in the maximum norm by applying a projection and contraction method. Advances in Operations Research, 2012:1–15, 2012.
[22] Alexandre Belloni and Victor Chernozhukov. ℓ1-penalized quantile regression in high-dimensional sparse models. The Annals of Statistics, 39(1):82–130, 2011.
[23] Lan Wang, Yichao Wu, and Runze Li. Quantile regression for analyzing heterogeneity in ultra-high dimension. Journal of the American Statistical Association, 107(497):214–222, 2012.
[24] Peter J Bickel and Elizaveta Levina. Covariance regularization by thresholding. The Annals of Statistics, 36(6):2577–2604, 2008.
[25] T Tony Cai, Cun-Hui Zhang, and Harrison H Zhou. Optimal rates of convergence for covariance matrix estimation. The Annals of Statistics, 38(4):2118–2144, 2010.
[26] Kai-Tai Fang, Samuel Kotz, and Kai Wang Ng. Symmetric Multivariate and Related Distributions. Chapman and Hall, 1990.
[27] Harry Joe. Multivariate Models and Dependence Concepts. Chapman and Hall, 1997.
[28] Rafael Schmidt. Tail dependence for elliptically contoured distributions. Mathematical Methods of Operations Research, 55(2):301–327, 2002.
[29] Svetlozar Todorov Rachev. Handbook of Heavy Tailed Distributions in Finance. Elsevier, 2003.
[30] Svetlozar T Rachev, Christian Menn, and Frank J Fabozzi. Fat-tailed and Skewed Asset Return Distributions: Implications for Risk Management, Portfolio Selection, and Option Pricing. Wiley, 2005.
[31] Kevin Dowd. Measuring Market Risk. Wiley, 2007.
[32] Torben Gustav Andersen. Handbook of Financial Time Series. Springer, 2009.
[33] Jushan Bai and Shuzhong Shi. Estimating high dimensional covariance matrices and its applications. Annals of Economics and Finance, 12(2):199–215, 2011.
[34] Sara Van De Geer and SA Van De Geer. Empirical Processes in M-estimation. Cambridge University Press, Cambridge, 2000.
[35] Alastair R Hall. Generalized Method of Moments. Oxford University Press, Oxford, 2005.
[36] Peter Bühlmann and Sara Van De Geer. Statistics for High-dimensional Data: Methods, Theory and Applications. Springer, 2011.
5,208 | 5,715 | Bayesian Optimization with Exponential Convergence
Kenji Kawaguchi
MIT
Cambridge, MA, 02139
kawaguch@mit.edu
Leslie Pack Kaelbling
MIT
Cambridge, MA, 02139
lpk@csail.mit.edu
Tomás Lozano-Pérez
MIT
Cambridge, MA, 02139
tlp@csail.mit.edu
Abstract
This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming
and hard to implement in practice. Also, the existing Bayesian optimization
method with exponential convergence [1] requires access to the δ-cover sampling,
which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.
1 Introduction
We consider a general global optimization problem: maximize f(x) subject to x ∈ Ω ⊂ R^D, where
f : Ω → R is a non-convex black-box deterministic function. Such a problem arises in many real-world applications, such as parameter tuning in machine learning [3], engineering design problems
[4], and model parameter fitting in biology [5]. For this problem, one performance measure of an
algorithm is the simple regret, r_n, which is given by r_n = sup_{x∈Ω} f(x) − f(x⁺), where x⁺ is the
best input vector found by the algorithm. For brevity, we use the term "regret" to mean simple regret.
The general global optimization problem is known to be intractable if we make no further assumptions [6]. The simplest additional assumption to restore tractability is to assume the existence of a
bound on the slope of f . A well-known variant of this assumption is Lipschitz continuity with a
known Lipschitz constant, and many algorithms have been proposed in this setting [7, 8, 9]. These
algorithms successfully guaranteed certain bounds on the regret. However appealing from a theoretical point of view, a practical concern was soon raised regarding the assumption that a tight Lipschitz
constant is known. Some researchers relaxed this somewhat strong assumption by proposing procedures to estimate a Lipschitz constant during the optimization process [10, 11, 12].
Bayesian optimization is an efficient way to relax this assumption of complete knowledge of the Lipschitz constant, and has become a well-recognized method for solving global optimization problems
with non-convex black-box functions. In the machine learning community, Bayesian optimization,
especially by means of a Gaussian process (GP), is an active research area [13, 14, 15]. With the
requirement of access to the δ-cover sampling procedure (it samples the function uniformly such
that the density of samples doubles in the feasible regions at each iteration), de Freitas et al. [1] recently proposed a theoretical procedure that maintains an exponential convergence rate (exponential
regret). However, as pointed out by Wang et al. [2], one remaining problem is to derive a GP-based
optimization method with an exponential convergence rate without the δ-cover sampling procedure,
which is computationally too demanding in many cases.
In this paper, we propose a novel GP-based global optimization algorithm, which maintains an
exponential convergence rate and converges rapidly without the δ-cover sampling procedure.
2 Gaussian Process Optimization
In Gaussian process optimization, we estimate the distribution over function f and use this information to decide which point of f should be evaluated next. In a parametric approach, we consider a parameterized function f(x; θ), with θ being distributed according to some prior. In contrast, the nonparametric GP approach directly puts the GP prior over f as f(·) ∼ GP(m(·), κ(·, ·)), where m(·) is
the mean function and κ(·, ·) is the covariance function or the kernel. That is, m(x) = E[f(x)] and
κ(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))ᵀ]. For a finite set of points, the GP model is simply a
joint Gaussian: f(x_{1:N}) ∼ N(m(x_{1:N}), K), where K_{i,j} = κ(x_i, x_j) and N is the number of data
points. To predict the value of f at a new data point, we first consider the joint distribution over f
of the old data points and the new data point:

    [f(x_{1:N}); f(x_{N+1})] ∼ N( [m(x_{1:N}); m(x_{N+1})], [K, k; kᵀ, κ(x_{N+1}, x_{N+1})] ),

where k = κ(x_{1:N}, x_{N+1}) ∈ R^{N×1}. Then, after factorizing the joint distribution using the Schur
complement for the joint Gaussian, we obtain the conditional distribution, conditioned on observed
entities D_N := {x_{1:N}, f(x_{1:N})} and x_{N+1}, as:

    f(x_{N+1}) | D_N, x_{N+1} ∼ N(μ(x_{N+1}|D_N), σ²(x_{N+1}|D_N)),

where μ(x_{N+1}|D_N) = m(x_{N+1}) + kᵀK⁻¹(f(x_{1:N}) − m(x_{1:N})) and σ²(x_{N+1}|D_N) =
κ(x_{N+1}, x_{N+1}) − kᵀK⁻¹k. One advantage of GP is that this closed-form solution simplifies both
its analysis and implementation.
To use a GP, we must specify the mean function and the covariance function. The mean function is
usually set to be zero. With this zero mean function, the conditional mean μ(x_{N+1}|D_N) can still
be flexibly specified by the covariance function, as shown in the above equation for μ. For the covariance function, there are several common choices, including the Matérn kernel and the Gaussian
kernel. For example, the Gaussian kernel is defined as

    κ(x, x′) = exp(−(1/2)(x − x′)ᵀ Σ⁻¹ (x − x′)),

where Σ⁻¹ is the kernel parameter matrix. The kernel parameters or hyperparameters can be estimated by empirical Bayesian methods [16]; see [17] for more information about GP.
The flexibility and simplicity of the GP prior make it a common choice for continuous objective
functions in the Bayesian optimization literature. Bayesian optimization with GP selects the next
query point that optimizes the acquisition function generated by GP. Commonly used acquisition
functions include the upper confidence bound (UCB) and expected improvement (EI). For brevity,
we consider Bayesian optimization with UCB, which works as follows. At each iteration, the UCB
function U is maintained as U(x|D_N) = μ(x|D_N) + ς σ(x|D_N), where ς ∈ R is a parameter of the
algorithm. To find the next query x_{N+1} for the objective function f, GP-UCB solves an additional
non-convex optimization problem with U as x_{N+1} = arg max_x U(x|D_N). This is often carried out
by other global optimization methods such as DIRECT and CMA-ES. The justification for introducing a new optimization problem lies in the assumption that the cost of evaluating the objective
function f dominates that of solving the additional optimization problem.
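As an illustration of the closed-form posterior and the UCB above, here is a minimal numpy sketch with the Gaussian kernel and zero mean (the jitter term is a standard numerical-stability device, not part of the text; varsigma plays the role of ς):

import numpy as np

def gaussian_kernel(X1, X2, inv_Sigma):
    # kappa(x, x') = exp(-(1/2) (x - x')^T Sigma^{-1} (x - x'))
    diff = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', diff, inv_Sigma, diff))

def gp_posterior(X, y, x_new, inv_Sigma, jitter=1e-10):
    # Zero-mean GP posterior at x_new given data (X, y):
    #   mu  = k^T K^{-1} y
    #   var = kappa(x_new, x_new) - k^T K^{-1} k  (= 1 - ... for this kernel)
    K = gaussian_kernel(X, X, inv_Sigma) + jitter * np.eye(len(X))
    k = gaussian_kernel(X, x_new[None, :], inv_Sigma)[:, 0]
    mu = k @ np.linalg.solve(K, y)
    var = 1.0 - k @ np.linalg.solve(K, k)
    return mu, max(var, 0.0)

def ucb(X, y, x_new, inv_Sigma, varsigma):
    # U(x | D) = mu(x | D) + varsigma * sigma(x | D)
    mu, var = gp_posterior(X, y, x_new, inv_Sigma)
    return mu + varsigma * np.sqrt(var)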
For deterministic functions, de Freitas et al. [1] recently presented a theoretical procedure that maintains an exponential convergence rate. However, their own paper and the follow-up research [1, 2] point
out that this result relies on an impractical sampling procedure, the δ-cover sampling. To overcome
this issue, Wang et al. [2] combined GP-UCB with a hierarchical partitioning optimization method,
the SOO algorithm [18], providing a regret bound with polynomial dependence on the number of
function evaluations. They concluded that creating a GP-based algorithm with an exponential convergence rate without the impractical sampling procedure remained an open problem.
3 Infinite-Metric GP Optimization
3.1 Overview
The GP-UCB algorithm can be seen as a member of the class of bound-based search methods,
which includes Lipschitz optimization, A* search, and PAC-MDP algorithms with optimism in the
face of uncertainty. Bound-based search methods have a common property: the tightness of the
bound determines its effectiveness. The tighter the bound is, the better the performance becomes.
However, it is often difficult to obtain a tight bound while maintaining correctness. For example,
in A* search, admissible heuristics maintain the correctness of the bound, but the estimated bound
with admissibility is often too loose in practice, resulting in a long period of global search.
The GP-UCB algorithm has the same problem. The bound in GP-UCB is represented by UCB,
which has the following property: f(x) ≤ U(x|D) with some probability. We formalize this property in the analysis of our algorithm. The problem is essentially due to the difficulty of obtaining a
tight bound U(x|D) such that f(x) ≤ U(x|D) and f(x) ≈ U(x|D) (with some probability). Our
solution strategy is to first admit that the bound encoded in GP prior may not be tight enough to be
useful by itself. Instead of relying on a single bound given by the GP, we leverage the existence of
an unknown bound encoded in the continuity at a global optimizer.
Assumption 1. (Unknown Bound) There exists a global optimizer x* and an unknown semi-metric
ℓ such that for all x ∈ Ω, f(x*) ≤ f(x) + ℓ(x, x*) and ℓ(x, x*) < ∞.
In other words, we do not expect the known upper bound due to GP to be tight, but instead expect that
there exists some unknown bound that might be tighter. Notice that in the case where the bound by
GP is as tight as the unknown bound by semi-metric ℓ in Assumption 1, our method still maintains an
exponential convergence rate and an advantage over GP-UCB (no need for auxiliary optimization).
Our method is expected to become relatively much better when the known bound due to GP is less
tight compared to the unknown bound by ℓ.
As the semi-metric ℓ is unknown, there are infinitely many possible candidates that we can think of
for `. Accordingly, we simultaneously conduct global and local searches based on all the candidates
of the bounds. The bound estimated by GP is used to reduce the number of candidates. Since
the bound estimated by GP is known, we can ignore the candidates of the bounds that are looser
than the bound estimated by GP. The source code of the proposed algorithm is publicly available at
http://lis.csail.mit.edu/code/imgpo.html.
3.2 Description of Algorithm
Figure 1 illustrates how the algorithm works with a simple 1-dimensional objective function. We
employ hierarchical partitioning to maintain hyperintervals, as illustrated by the line segments in the
figure. We consider a hyperrectangle as our hyperinterval, with its center being the evaluation point
of f (blue points in each line segment in Figure 1). For each iteration t, the algorithm performs the
following procedure for each interval size:
(i) Select the interval with the maximum center value among the intervals of the same size.
(ii) Keep the interval selected by (i) if it has a center value greater than that of any larger
interval.
(iii) Keep the interval accepted by (ii) if it contains a UCB greater than the center value of any
smaller interval.
(iv) If an interval is accepted by (iii), divide it along with the longest coordinate into three new
intervals.
(v) For each new interval, if the UCB of the evaluation point is less than the best function value
found so far, skip the evaluation and use the UCB value as the center value until the interval
is accepted in step (ii) on some future iteration; otherwise, evaluate the center value.
(vi) Repeat steps (i)-(v) until every size of intervals is considered.
Then, at the end of each iteration, the algorithm updates the GP hyperparameters. Here, the purpose
of steps (i)-(iii) is to select an interval that might contain the global optimizer. Steps (i) and (ii)
select the possible intervals based on the unknown bound by ℓ, while Step (iii) does so based on the
bound by GP.
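Step (iv) is the standard trisection used in DIRECT/SOO-style partitioning; a minimal sketch, assuming the hyperrectangle is stored by its lower and upper corners:

import numpy as np

def trisect(lo, hi):
    # Divide the box [lo, hi] along its longest coordinate into three equal
    # boxes and return the three (lo, hi) children. The middle child keeps the
    # parent's center, so its center value need not be re-evaluated.
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    j = int(np.argmax(hi - lo))          # longest coordinate
    w = (hi[j] - lo[j]) / 3.0
    children = []
    for m in range(3):                   # left, center, right thirds
        lo_m, hi_m = lo.copy(), hi.copy()
        lo_m[j] = lo[j] + m * w
        hi_m[j] = lo[j] + (m + 1) * w
        children.append((lo_m, hi_m))
    return children                      # centers: (lo_m + hi_m) / 2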
We now explain the procedure using the example in Figure 1. Let n be the number of divisions of
intervals and let N be the number of function evaluations. t is the number of iterations. Initially,
there is only one interval (the center of the input region Ω ⊂ R) and thus this interval is divided,
resulting in the first diagram of Figure 1. At the beginning of iteration t = 2, step (i) selects the third
interval from the left side in the first diagram (t = 1, n = 2), as its center value is the maximum.
Because there are no intervals of different size at this point, steps (ii) and (iii) are skipped. Step
(iv) divides the third interval, and then the GP hyperparameters are updated, resulting in the second
diagram (t = 2, n = 3).
Figure 1: An illustration of IMGPO: t is the number of iterations, n is the number of divisions (or
splits), N is the number of function evaluations.
At the beginning of iteration t = 3, it starts conducting steps (i)-(v) for the
largest intervals. Step (i) selects the second interval from the left side and step (ii) is skipped. Step
(iii) accepts the second interval, because the UCB within this interval is no less than the center value
of the smaller intervals, resulting in the third diagram (t = 3, n = 4). Iteration t = 3 continues
by conducting steps (i)-(v) for the smaller intervals. Step (i) selects the second interval from the
left side, step (ii) accepts it, and step (iii) is skipped, resulting in the fourth diagram (t = 3, n = 4).
The effect of the step (v) can be seen in the diagrams for iteration t = 9. At n = 16, the far right
interval is divided, but no function evaluation occurs. Instead, UCB values given by GP are placed
in the new intervals indicated by the red asterisks. One of the temporary dummy values is resolved
at n = 17 when the interval is queried for division, as shown by the green asterisk. The effect of
step (iii) for the rejection case is illustrated in the last diagram for iteration t = 10. At n = 18, t is
increased to 10 from 9, meaning that the largest intervals are first considered for division. However,
the three largest intervals are all rejected in step (iii), resulting in the division of a very small interval
near the global optimum at n = 18.
3.3 Technical Detail of Algorithm
We define h to be the depth of the hierarchical partitioning tree, and c_{h,i} to be the center point
of the i-th hyperrectangle at depth h. Ngp is the number of the GP evaluations. Define depth(T)
to be the largest integer h such that the set T_h is not empty. To compute UCB U, we use
ς_M = √(2 log(π²M²/(12η))), where M is the number of the calls made so far for U (i.e., each time we use U,
we increment M by one). This particular form of ς_M is to maintain the property f(x) ≤ U(x|D)
during an execution of our algorithm with probability at least 1 − η. Here, η is the parameter of
IMGPO. Ξmax is another parameter, but it is only used to limit the possibly long computation of
step (iii) (in the worst case, step (iii) computes UCBs 3^{Ξmax} times, although this would rarely happen).
The pseudocode is shown in Algorithm 1. Lines 8 to 23 correspond to steps (i)-(iii). These lines
compute the index i*_h of the candidate of the rectangle that may contain a global optimizer for each
depth h. For each depth h, a non-null index i*_h at Line 24 indicates the remaining candidate of a
rectangle that we want to divide. Lines 24 to 33 correspond to steps (iv)-(v) where the remaining
candidates of the rectangles for all h are divided. To provide a simple executable division scheme
(line 29), we assume Ω to be a hyperrectangle (see the last paragraph of section 4 for a general case).
Lines 8 to 17 correspond to steps (i)-(ii). Specifically, line 10 implements step (i) where a single
candidate is selected for each depth, and lines 11 to 12 conduct step (ii) where some candidates are
screened out. Lines 13 to 17 resolve the temporary dummy values computed by GP. Lines 18
to 23 correspond to step (iii) where the candidates are further screened out. At line 21, T′_{h+ξ}(c_{h,i*_h})
indicates the set of all center points of a fully expanded tree until depth h + ξ within the region
covered by the hyperrectangle centered at c_{h,i*_h}. In other words, T′_{h+ξ}(c_{h,i*_h}) contains the nodes of
the fully expanded tree rooted at c_{h,i*_h} with depth ξ and can be computed by dividing the current
rectangle at c_{h,i*_h} and recursively dividing all the resulting new rectangles until depth ξ (i.e., depth ξ
from c_{h,i*_h}, which is depth h + ξ in the whole tree).
Algorithm 1 Infinite-Metric GP Optimization (IMGPO)
Input: an objective function f, the search domain Ω, the GP kernel κ, Ξmax ∈ N+ and η ∈ (0, 1)
1: Initialize the set T_h = {∅} ∀h ≥ 0
2: Set c_{0,0} to be the center point of Ω and T_0 ← {c_{0,0}}
3: Evaluate f at c_{0,0}: g(c_{0,0}) ← f(c_{0,0})
4: f⁺ ← g(c_{0,0}), D ← {(c_{0,0}, g(c_{0,0}))}
5: n, N ← 1, Ngp ← 0, Ξ ← 1
6: for t = 1, 2, 3, ... do
7:   υmax ← −∞
8:   for h = 0 to depth(T) do   # for-loop for steps (i)-(ii)
9:     while true do
10:      i*_h ← arg max_{i: c_{h,i} ∈ T_h} g(c_{h,i})
11:      if g(c_{h,i*_h}) < υmax then
12:        i*_h ← ∅, break
13:      else if g(c_{h,i*_h}) is not labeled as GP-based then
14:        υmax ← g(c_{h,i*_h}), break
15:      else
16:        g(c_{h,i*_h}) ← f(c_{h,i*_h}) and remove the GP-based label from g(c_{h,i*_h})
17:        N ← N + 1, Ngp ← Ngp − 1
18:        D ← {D, (c_{h,i*_h}, g(c_{h,i*_h}))}
19:  for h = 0 to depth(T) do   # for-loop for step (iii)
20:    if i*_h ≠ ∅ then
21:      ξ ← the smallest positive integer s.t. i*_{h+ξ} ≠ ∅ and ξ ≤ min(Ξ, Ξmax) if it exists, and 0 otherwise
22:      z(h, i*_h) = max_{k: c_{h+ξ,k} ∈ T′_{h+ξ}(c_{h,i*_h})} U(c_{h+ξ,k}|D)
23:      if ξ ≠ 0 and z(h, i*_h) < g(c_{h+ξ,i*_{h+ξ}}) then
24:        i*_h ← ∅, break
25:  υmax ← −∞
26:  for h = 0 to depth(T) do   # for-loop for steps (iv)-(v)
27:    if i*_h ≠ ∅ and g(c_{h,i*_h}) ≥ υmax then
28:      n ← n + 1
29:      Divide the hyperrectangle centered at c_{h,i*_h} along the longest coordinate into three new hyperrectangles with the following centers:
         S = {c_{h+1,i(left)}, c_{h+1,i(center)}, c_{h+1,i(right)}}
30:      T_{h+1} ← {T_{h+1}, S}
31:      T_h ← T_h \ c_{h,i*_h}, g(c_{h+1,i(center)}) ← g(c_{h,i*_h})
32:      for i_new = {i(left), i(right)} do
33:        if U(c_{h+1,i_new}|D) ≥ f⁺ then
34:          g(c_{h+1,i_new}) ← f(c_{h+1,i_new})
35:          D ← {D, (c_{h+1,i_new}, g(c_{h+1,i_new}))}
36:          N ← N + 1, f⁺ ← max(f⁺, g(c_{h+1,i_new})), υmax = max(υmax, g(c_{h+1,i_new}))
37:        else
38:          g(c_{h+1,i_new}) ← U(c_{h+1,i_new}|D) and label g(c_{h+1,i_new}) as GP-based
39:          Ngp ← Ngp + 1
     Update Ξ: if f⁺ was updated, Ξ ← Ξ + 2², and otherwise, Ξ ← max(Ξ − 2⁻¹, 1)
     Update GP hyperparameters by an empirical Bayesian method

3.4 Relationship to Previous Algorithms
The most closely related algorithm is the BaMSOO algorithm [2], which combines SOO with GP-UCB. However, it only achieves a polynomial regret bound while IMGPO achieves an exponential
regret bound. IMGPO can achieve exponential regret because it utilizes the information encoded in
the GP prior/posterior to reduce the degree of the unknownness of the semi-metric ℓ.
The idea of considering a set of infinitely many bounds was first proposed by Jones et al. [19]. Their
DIRECT algorithm has been successfully applied to real-world problems [4, 5], but it only maintains
the consistency property (i.e., convergence in the limit) from a theoretical viewpoint. DIRECT takes
an input parameter to balance the global and local search efforts. This idea was generalized to the
case of an unknown semi-metric and strengthened with a theoretical support (finite regret bound) by
Munos [18] in the SOO algorithm. By limiting the depth of the search tree with a parameter hmax ,
the SOO algorithm achieves a finite regret bound that depends on the near-optimality dimension.
4 Analysis
In this section, we prove an exponential convergence rate of IMGPO and theoretically discuss the
reason why the novel idea underlying IMGPO is beneficial. The proofs are provided in the supplementary material. To examine the effect of considering infinitely many possible candidates of the
bounds, we introduce the following term.
Definition 1. (Infinite-metric exploration loss). The infinite-metric exploration loss ρ_t is the number
of intervals to be divided during iteration t.
The infinite-metric exploration loss ρ_t can be computed as ρ_t = Σ_{h=1}^{depth(T)} 1(i*_h ≠ ∅) at line
25. It is the cost (in terms of the number of function evaluations) incurred by not committing to
any particular upper bound. If we were to rely on a specific bound, ρ_t would be minimized to 1.
For example, the DOO algorithm [18] has ρ_t = 1 ∀t ≥ 1. Even if we know a particular upper
bound, relying on this knowledge and thus minimizing ρ_t is not a good option unless the known
bound is tight enough compared to the unknown bound leveraged in our algorithm. This will be
clarified in our analysis. Let ρ̄_t be the maximum of the averages of ρ_{1:t′} for t′ = 1, 2, ..., t (i.e.,
ρ̄_t ≡ max({(1/t′) Σ_{τ=1}^{t′} ρ_τ : t′ = 1, 2, ..., t})).
Assumption 2. There exist L > 0, α > 0 and p ≥ 1 in R such that for all x, x′ ∈ Ω, ℓ(x′, x) ≤
L‖x′ − x‖_p^α.
In Theorem 1, we show that the exponential convergence rate O(λ^{N+Ngp}) with λ < 1 is achieved.
We define Ξ_n ≤ Ξmax to be the largest Ξ used so far with n total node expansions. For simplicity,
we assume that Ω is a square, which we satisfied in our experiments by scaling the original Ω.
Theorem 1. Assume Assumptions 1 and 2. Let β = sup_{x,x′∈Ω} (1/2)‖x − x′‖_∞. Let λ = 3^{−α/(2CD ρ̄_t)} < 1.
Then, with probability at least 1 − η, the regret of IMGPO is bounded as

    r_N ≤ L(3βD^{1/p})^α exp(−α [ (N + Ngp)/(2CD ρ̄_t) − Ξ_n − 2 ] ln 3) = O(λ^{N+Ngp}).
Importantly, our bound holds for the best values of the unknown L, α and p even though these
values are not given. The closest result in previous work is that of BaMSOO [2], which obtained
Õ(n^{−2α/(D(4−α))}) with probability 1 − δ for α = {1, 2}. As can be seen, we have improved the regret
bound. Additionally, in our analysis, we can see how L, p, and α affect the bound, allowing us
to view the inherent difficulty of an objective function from a theoretical perspective. Here, C is a
constant in N and is used in previous work [18, 2]. For example, if we conduct 2^D or 3^D − 1
function evaluations per node expansion and if p = ∞, we have that C = 1.
We note that λ can get close to one as the input dimension D increases, which suggests that there
is a remaining challenge in scalability for higher dimensionality. One strategy for addressing this
problem would be to leverage additional assumptions such as those in [14, 20].
Remark 1. (The effect of the tightness of UCB by GP) If UCB computed by GP is "useful" such
that N/ρ̄_t = Θ(N), then our regret bound becomes O(exp(−((N + Ngp)/(2CD)) α ln 3)). If the bound due to
UCB by GP is too loose (and thus useless), ρ̄_t can increase up to O(N/t) (due to ρ̄_t ≤ Σ_{i=1}^t i/t ≤
O(N/t)), resulting in the regret bound of O(exp(−(t(1 + Ngp/N)/(2CD)) α ln 3)), which can be bounded
by O(exp(−((N + Ngp)/(2CD)) max(1/√N, t/N) α ln 3)).¹ This is still better than the known results.
Remark 2. (The effect of GP) Without the use of GP, our regret bound would be as follows: r_N ≤
L(3βD^{1/p})^α exp(−α[N/(2CD ρ̄′_t) − 2] ln 3), where ρ̄′_t ≥ ρ̄_t is the infinite-metric exploration loss without
GP. Therefore, the use of GP reduces the regret bound by increasing Ngp and decreasing ρ̄_t, but may
potentially increase the bound by increasing Ξ_n ≤ Ξmax.
¹ This can be done by limiting the depth of the search tree as depth(T) = O(√N). Our proof works with this
additional mechanism, but results in the regret bound with N being replaced by √N. Thus, if we assume to
have at least "not useless" UCBs such that N/ρ̄_t = Ω(√N), this additional mechanism can be disadvantageous. Accordingly, we do not adopt it in our experiments.
Remark 3. (The effect of infinite-metric optimization) To understand the effect of considering all
the possible upper bounds, we consider the case without GP. If we consider all the possible bounds,
we have the regret bound L(3βD^{1/p})^α exp(−α[N/(2CD ρ̄′_t) − 2] ln 3) for the best unknown L, α and p.
For standard optimization with an estimated bound, we have L′(3βD^{1/p′})^{α′} exp(−α′[N/(2C′D) − 2] ln 3)
for an estimated L′, α′, and p′. By algebraic manipulation, considering all the possible bounds has
a better regret when

    ρ̄′_t^{−1} ≥ (2CD/(N ln 3^α)) ( (N/(2C′D) − 2) ln 3^{α′} + 2 ln 3^α − ln [L′(3βD^{1/p′})^{α′} / (L(3βD^{1/p})^α)] ).

For an intuitive insight, we can simplify the above by assuming α′ = α and C′ = C as
ρ̄′_t^{−1} ≥ 1 − (Cc₂D/N) ln(L′D^{α/p′}/(LD^{α/p})).
Because L and p are the ones that achieve the lowest bound, the logarithm on the right-hand side is
always non-negative. Hence, ρ̄′_t = 1 always satisfies the condition. When L′ and p′ are not tight
enough, the logarithmic term increases in magnitude, allowing ρ̄′_t to increase. For example, if the
second term on the right-hand side has a magnitude of greater than 0.5, then ρ̄′_t = 2 satisfies the
inequality. Therefore, even if we know the upper bound of the function, we can see that it may be
better not to rely on this, but rather take the infinitely many possibilities into account.
One may improve the algorithm with different division procedures than the one presented in Algorithm
1. Accordingly, in the supplementary material, we derive an abstract version of the regret bound for
IMGPO with a family of division procedures that satisfy some assumptions. This information could
be used to design a new division procedure.
5 Experiments
In this section, we compare the IMGPO algorithm with the SOO, BaMSOO, GP-PI and GP-EI algorithms [18, 2, 3]. In previous work, BaMSOO and GP-UCB were tested with a pair of a handpicked
good kernel and hyperparameters for each function [2]. In our experiments, we assume that the
knowledge of a good kernel and hyperparameters is unavailable, which is usually the case in practice.
Thus, for IMGPO, BaMSOO, GP-PI and GP-EI, we simply used one of the most popular kernels,
the isotropic Matérn kernel with ν = 5/2. This is given by κ(x, x′) = g(√5 ‖x − x′‖₂ / l), where
g(z) = σ²(1 + z + z²/3) exp(−z). Then, we blindly initialized the hyperparameters to σ = 1

[Figure 2 appears here, with nine panels: (a) Sin1: [1, 1.92, 2]; (b) Sin2: [2, 3.37, 3]; (c) Peaks: [2, 3.14, 4]; (d) Rosenbrock2: [2, 3.41, 4]; (e) Branin: [2, 4.44, 2]; (f) Hartmann3: [3, 4.11, 3]; (g) Hartmann6: [6, 4.39, 4]; (h) Shekel5: [4, 3.95, 4]; (i) Sin1000: [1000, 3.95, 4].]
Figure 2: Performance Comparison: in the order, the digits inside of the parentheses [ ] indicate the
dimensionality of each function, and the variables ρ̄_t and Ξ_n at the end of computation for IMGPO.
Table 1: Average CPU time (in seconds) for the experiment with each test function

Algorithm   Sin1    Sin2     Peaks   Rosenbrock2   Branin    Hartmann3   Hartmann6   Shekel5
GP-PI       29.66   115.90   47.90   921.82        1124.21   573.67      657.36      611.01
GP-EI       12.74   115.79   44.94   893.04        1153.49   562.08      604.93      558.58
SOO         0.19    0.19     0.24    0.744         0.33      0.30        0.25        0.29
BaMSOO      43.80   4.61     7.83    12.09         14.86     14.14       26.68       371.36
IMGPO       1.61    3.15     4.70    11.11         5.73      6.80        13.47       15.92
and l = 0.25 for all the experiments; these values were updated with an empirical Bayesian method
after each iteration. To compute the UCB by GP, we used η = 0.05 for IMGPO and BaMSOO.
For IMGPO, Ξmax was fixed to be 2² (the effect of selecting different values is discussed later).
For BaMSOO and SOO, the parameter hmax was set to √n, according to Corollary 4.3 in [18].
For GP-PI and GP-EI, we used the SOO algorithm and a local optimization method using gradients
to solve the auxiliary optimization. For SOO, BaMSOO and IMGPO, we used the corresponding
deterministic division procedure (given Ω, the initial point is fixed and no randomness exists). For
GP-PI and GP-EI, we randomly initialized the first evaluation point and report the mean and one
standard deviation for 50 runs.
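A sketch of this Matérn ν = 5/2 kernel exactly as written above (σ and l are the two hyperparameters; their empirical-Bayes updates are omitted):

import numpy as np

def matern52(x, x_prime, sigma=1.0, l=0.25):
    # kappa(x, x') = g(sqrt(5) ||x - x'||_2 / l),
    # g(z) = sigma^2 (1 + z + z^2 / 3) exp(-z)
    z = np.sqrt(5.0) * np.linalg.norm(np.asarray(x) - np.asarray(x_prime)) / l
    return sigma**2 * (1.0 + z + z**2 / 3.0) * np.exp(-z)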
The experimental results for eight different objective functions are shown in Figure 2. The vertical
axis is log₁₀(f(x*) − f(x⁺)), where f(x*) is the global optimum and f(x⁺) is the best value found
by the algorithm. Hence, the lower the plotted value on the vertical axis, the better the algorithm's
performance. The last five functions are standard benchmarks for global optimization [21]. The first
two were used in [18] to test SOO, and can be written as f_{Sin1}(x) = (sin(13x) sin(27x) + 1)/2 for Sin1
and f_{Sin2}(x) = f_{Sin1}(x₁) f_{Sin1}(x₂) for Sin2. The form of the third function is given in Equation
(16) and Figure 2 in [22]. The last function is Sin2 embedded in 1000 dimensions in the same manner
described in Section 4.1 in [14], which is used here to illustrate a possibility of using IMGPO as a
main subroutine to scale up to higher dimensions with additional assumptions. For this function,
we used REMBO [14] with IMGPO and BaMSOO as its Bayesian optimization subroutine. All of
these functions are multimodal, except for Rosenbrock2, with dimensionality from 1 to 1000.
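For reference, the two SOO test functions in code; the domain [0, 1] (and [0, 1]² for Sin2) is the standard choice for these benchmarks, an assumption not restated in the text:

import numpy as np

def f_sin1(x):
    # f(x) = (sin(13x) sin(27x) + 1) / 2 on [0, 1]
    return (np.sin(13.0 * x) * np.sin(27.0 * x) + 1.0) / 2.0

def f_sin2(x1, x2):
    # f(x1, x2) = f_sin1(x1) * f_sin1(x2) on [0, 1]^2
    return f_sin1(x1) * f_sin1(x2)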
As we can see from Figure 2, IMGPO outperformed the other algorithms in general. SOO produced
competitive results for Rosenbrock2 because our GP prior was misleading (i.e., it did not model
the objective function well and thus the property f(x) ≤ U(x|D) did not hold many times). As can
be seen in Table 1, IMGPO is much faster than traditional GP optimization methods although it is
slower than SOO. For Sin1, Sin2, Branin and Hartmann3, increasing Ξmax does not affect IMGPO
because Ξ_n did not reach Ξmax = 2² (Figure 2). For the rest of the test functions, we would be able
to improve the performance of IMGPO by increasing Ξmax at the cost of extra CPU time.
6 Conclusion
We have presented the first GP-based optimization method with an exponential convergence rate
O(λ^{N+Ngp}) (λ < 1) without the need of auxiliary optimization and the δ-cover sampling. Perhaps
more importantly from the viewpoint of a broader global optimization community, we have provided
a practically oriented analysis framework, enabling us to see why not relying on a particular bound
is advantageous, and how a non-tight bound can still be useful (in Remarks 1, 2 and 3). Following
the advent of the DIRECT algorithm, the literature diverged along two paths, one with a particular
bound and one without. GP-UCB can be categorized into the former. Our approach illustrates the
benefits of combining these two paths.
As stated in Section 3.1, our solution idea was to use a bound-based method but rely less on the
estimated bound by considering all the possible bounds. It would be interesting to see if a similar
principle can be applicable to other types of bound-based methods such as planning algorithms (e.g.,
A* search and the UCT or FSSS algorithm [23]) and learning algorithms (e.g., PAC-MDP algorithms
[24]).
Acknowledgments
The authors would like to thank Dr. Remi Munos for his thoughtful comments and suggestions. We
gratefully acknowledge support from NSF grant 1420927, from ONR grant N00014-14-1-0486, and
from ARO grant W911NF1410433. Kenji Kawaguchi was supported in part by the Funai Overseas
Scholarship. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of our sponsors.
References
[1] N. De Freitas, A. J. Smola, and M. Zoghi. Exponential regret bounds for Gaussian process bandits with deterministic observations. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[2] Z. Wang, B. Shakibi, L. Jin, and N. de Freitas. Bayesian Multi-Scale Optimistic Optimization. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1005–1014, 2014.
[3] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Proceedings of Advances in Neural Information Processing Systems (NIPS), pages 2951–2959, 2012.
[4] R. G. Carter, J. M. Gablonsky, A. Patrick, C. T. Kelley, and O. J. Eslinger. Algorithms for noisy problems in gas transmission pipeline optimization. Optimization and Engineering, 2(2):139–157, 2001.
[5] J. W. Zwolak, J. J. Tyson, and L. T. Watson. Globally optimised parameters for a model of mitotic control in frog egg extracts. IEEE Proceedings - Systems Biology, 152(2):81–92, 2005.
[6] L. C. W. Dixon. Global optima without convexity. Numerical Optimisation Centre, Hatfield Polytechnic, 1977.
[7] B. O. Shubert. A sequential method seeking the global maximum of a function. SIAM Journal on Numerical Analysis, 9(3):379–388, 1972.
[8] D. Q. Mayne and E. Polak. Outer approximation algorithm for nondifferentiable optimization problems. Journal of Optimization Theory and Applications, 42(1):19–30, 1984.
[9] R. H. Mladineo. An algorithm for finding the global maximum of a multimodal, multivariate function. Mathematical Programming, 34(2):188–200, 1986.
[10] R. G. Strongin. Convergence of an algorithm for finding a global extremum. Engineering Cybernetics, 11(4):549–555, 1973.
[11] D. E. Kvasov, C. Pizzuti, and Y. D. Sergeyev. Local tuning and partition strategies for diagonal GO methods. Numerische Mathematik, 94(1):93–106, 2003.
[12] S. Bubeck, G. Stoltz, and J. Y. Yu. Lipschitz bandits without the Lipschitz constant. In Algorithmic Learning Theory, pages 144–158. Springer, 2011.
[13] J. Gardner, M. Kusner, K. Weinberger, and J. Cunningham. Bayesian Optimization with Inequality Constraints. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 937–945, 2014.
[14] Z. Wang, M. Zoghi, F. Hutter, D. Matheson, and N. De Freitas. Bayesian optimization in high dimensions via random embeddings. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pages 1778–1784. AAAI Press, 2013.
[15] N. Srinivas, A. Krause, M. Seeger, and S. M. Kakade. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 1015–1022, 2010.
[16] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, page 521, 2012.
[17] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] R. Munos. Optimistic optimization of deterministic functions without the knowledge of its smoothness. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2011.
[19] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79(1):157–181, 1993.
[20] K. Kandasamy, J. Schneider, and B. Poczos. High dimensional Bayesian optimisation and bandits via additive models. arXiv preprint arXiv:1503.01673, 2015.
[21] S. Surjanovic and D. Bingham. Virtual library of simulation experiments: Test functions and datasets. Retrieved November 30, 2014, from http://www.sfu.ca/~ssurjano, 2014.
[22] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy. Global and local optimization using radial basis function response surface models. Applied Mathematical Modelling, 31(10):2095–2110, 2007.
[23] T. J. Walsh, S. Goschin, and M. L. Littman. Integrating Sample-Based Planning and Model-Based Reinforcement Learning. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI), 2010.
[24] A. L. Strehl, L. Li, and M. L. Littman. Reinforcement learning in finite MDPs: PAC analysis. The Journal of Machine Learning Research (JMLR), 10:2413–2444, 2009.
5,209 | 5,716 | Fast Randomized Kernel Ridge Regression with
Statistical Guarantees∗
Ahmed El Alaoui †
Michael W. Mahoney ‡
† Electrical Engineering and Computer Sciences
‡ Statistics and International Computer Science Institute
University of California, Berkeley, Berkeley, CA 94720.
{elalaoui@eecs,mmahoney@stat}.berkeley.edu
Abstract
One approach to improving the running time of kernel-based methods is to build
a small sketch of the kernel matrix and use it in lieu of the full matrix in the
machine learning task of interest. Here, we describe a version of this approach
that comes with running time guarantees as well as improved guarantees on its
statistical performance. By extending the notion of statistical leverage scores to
the setting of kernel ridge regression, we are able to identify a sampling distribution that reduces the size of the sketch (i.e., the required number of columns to
be sampled) to the effective dimensionality of the problem. This latter quantity is
often much smaller than previous bounds that depend on the maximal degrees of
freedom. We give empirical evidence supporting this fact. Our second contribution is to present a fast algorithm to quickly compute coarse approximations to
these scores in time linear in the number of samples. More precisely, the running
time of the algorithm is O(np²), with p depending only on the trace of the kernel
matrix and the regularization parameter. This is obtained via a variant of squared
length sampling that we adapt to the kernel setting. Lastly, we discuss how this
new notion of the leverage of a data point captures a fine notion of the difficulty
of the learning problem.
1 Introduction
We consider the low-rank approximation of symmetric positive semi-definite (SPSD) matrices that
arise in machine learning and data analysis, with an emphasis on obtaining good statistical guarantees. This is of interest primarily in connection with kernel-based machine learning methods. Recent
work in this area has focused on one or the other of two very different perspectives: an algorithmic perspective, where the focus is on running time issues and worst-case quality-of-approximation
guarantees, given a fixed input matrix; and a statistical perspective, where the goal is to obtain good
inferential properties, under some hypothesized model, by using the low-rank approximation in
place of the full kernel matrix. The recent results of Gittens and Mahoney [2] provide the strongest
example of the former, and the recent results of Bach [3] are an excellent example of the latter.
In this paper, we combine ideas from these two lines of work in order to obtain a fast randomized
kernel method with statistical guarantees that are improved relative to the state-of-the-art.
To understand our approach, recall that several papers have established the crucial importance,
from the algorithmic perspective, of the statistical leverage scores, as they capture structural nonuniformities of the input matrix and they can be used to obtain very sharp worst-case approximation
guarantees. See, e.g., work on CUR matrix decompositions [5, 6], work on the fast approximation of the statistical leverage scores [7], and the recent review [8] for more details. Here, we
∗ A technical report version of this conference paper is available at [1].
simply note that, when restricted to an n × n SPSD matrix K and a rank parameter k, the statistical
leverage scores relative to the best rank-k approximation to K, call them ℓ_i, for i ∈ {1, . . . , n}, are
the diagonal elements of the projection matrix onto the best rank-k approximation of K. That is,
ℓ_i = diag(K_k K_k^†)_i, where K_k is the best rank-k approximation of K and K_k^† is the Moore–Penrose inverse of K_k. The recent work by Gittens and Mahoney [2] showed that qualitatively
improved worst-case bounds for the low-rank approximation of SPSD matrices could be obtained
in one of two related ways: either compute (with the fast algorithm of [7]) approximations to the
leverage scores, and use those approximations as an importance sampling distribution in a random
sampling algorithm; or rotate (with a Gaussian-based or Hadamard-based random projection) to a
random basis where those scores are uniformized, and sample randomly in that rotated basis.
In this paper, we extend these ideas, and we show that, from the statistical perspective, we are
able to obtain a low-rank approximation that comes with improved statistical guarantees by using
a variant of this more traditional notion of statistical leverage. In particular, we improve the recent
bounds of Bach [3], which provide the first known statistical convergence result when substituting the kernel matrix by its low-rank approximation. To understand the connection, recall that a
key component of Bach's approach is the quantity d_mof = n ‖diag(K(K + nλI)^{-1})‖_∞, which he
calls the maximal marginal degrees of freedom.¹ Bach's main result is that by constructing a low-rank approximation of the original kernel matrix by sampling uniformly at random p = O(d_mof/ε)
columns, i.e., performing the vanilla Nyström method, and then by using this low-rank approximation in a prediction task, the statistical performance is within a factor of 1 + ε of the performance
when the entire kernel matrix is used. Here, we show that this uniform sampling is suboptimal. We
do so by sampling with respect to a coarse but quickly-computable approximation of a variant of the
statistical leverage scores, given in Definition 1 below, and we show that we can obtain similar 1 + ε
guarantees by sampling only O(d_eff/ε) columns, where d_eff = Tr(K(K + nλI)^{-1}) < d_mof. The
quantity d_eff is called the effective dimensionality of the learning problem, and it can be interpreted
as the implicit number of parameters in this nonparametric setting [9, 10].
We expect that our results and insights will be useful much more generally. As an example of this,
we can directly compare the Nyström sampling method to a related divide-and-conquer approach,
thereby answering an open problem of Zhang et al. [9]. Recall that the Zhang et al. divide-and-conquer method consists of dividing the dataset {(x_i, y_i)}_{i=1}^n into m random partitions of equal size,
computing estimators on each partition in parallel, and then averaging the estimators. They prove
the minimax optimality of their estimator, although their multiplicative constants are suboptimal;
and, in terms of the number of kernel evaluations, their method requires m × (n/m)², with m in the
order of n/d_eff², which gives a total number of O(n d_eff²) evaluations. They noticed that the scaling
of their estimator was not directly comparable to that of the Nyström sampling method (which was
proven to only require O(n d_mof) evaluations, if the sampling is uniform [3]), and they left it as an
open problem to determine which, if either, method is fundamentally better than the other. Using
our Theorem 3, we are able to put both results on a common ground for comparison. Indeed, the
estimator obtained by our non-uniform Nyström sampling requires only O(n d_eff) kernel evaluations
(compared to O(n d_eff²) and O(n d_mof)), and it obtains the same bound on the statistical predictive
performance as in [3]. In this sense, our result combines "the best of both worlds," by having the
reduced sample complexity of [9] and the sharp approximation bound of [3].
2 Preliminaries and notation
Let {(x_i, y_i)}_{i=1}^n be n pairs of points in X × Y, where X is the input space and Y is the response
space. The kernel-based learning problem can be cast as the following minimization problem:
\[ \min_{f \in \mathcal{F}}\ \frac{1}{n}\sum_{i=1}^n \ell\big(y_i, f(x_i)\big) + \frac{\lambda}{2}\,\|f\|_{\mathcal{F}}^2, \tag{1} \]
where F is a reproducing kernel Hilbert space and ℓ : Y × Y → R is a loss function. We denote by
k : X × X → R the positive definite kernel corresponding to F and by φ : X → F a corresponding
feature map. That is, k(x, x′) = ⟨φ(x), φ(x′)⟩_F for every x, x′ ∈ X. The representer theorem
[11, 12] allows us to reduce Problem (1) to a finite-dimensional optimization problem, in which
¹ We will refer to it as the maximal degrees of freedom.
case Problem (1) boils down to finding the vector α ∈ R^n that solves
\[ \min_{\alpha \in \mathbb{R}^n}\ \frac{1}{n}\sum_{i=1}^n \ell\big(y_i, (K\alpha)_i\big) + \frac{\lambda}{2}\,\alpha^\top K \alpha, \tag{2} \]
where K_ij = k(x_i, x_j). We let UΣU^⊤ be the eigenvalue decomposition of K, with Σ =
Diag(σ_1, · · · , σ_n), σ_1 ≥ · · · ≥ σ_n ≥ 0, and U an orthogonal matrix. The underlying data model is
\[ y_i = f^*(x_i) + \sigma\,\epsilon_i, \qquad i = 1, \cdots, n, \]
with f^* ∈ F, (x_i)_{1≤i≤n} a deterministic sequence, and the ε_i i.i.d. standard normal random variables.
We consider ℓ to be the squared loss, in which case we will be interested in the mean squared error
as a measure of statistical risk: for any estimator f̂, let
\[ R(\hat f) := \frac{1}{n}\,\mathbb{E}_\epsilon \|\hat f - f^*\|_2^2 \tag{3} \]
be the risk function of f̂, where E_ε denotes the expectation under the randomness induced by ε. In
this setting the problem is called Kernel Ridge Regression (KRR). The solution to Problem (2) is
α = (K + nλI)^{-1} y, and the estimate of f^* at any training point x_i is given by f̂(x_i) = (Kα)_i.
We will use f̂_K as a shorthand for the vector (f̂(x_i))_{1≤i≤n} ∈ R^n when the matrix K is used as a
kernel matrix. This notation will be used accordingly for other kernel matrices (e.g., f̂_L for a matrix
L). Recall that the risk of the estimator f̂_K can then be decomposed into a bias and a variance term:
\[
\begin{aligned}
R(\hat f_K) &= \frac{1}{n}\,\mathbb{E}_\epsilon \big\|K(K + n\lambda I)^{-1}(f^* + \sigma\epsilon) - f^*\big\|_2^2 \\
&= \frac{1}{n}\big\|\big(K(K + n\lambda I)^{-1} - I\big)f^*\big\|_2^2 + \frac{\sigma^2}{n}\,\mathbb{E}_\epsilon\big\|K(K + n\lambda I)^{-1}\epsilon\big\|_2^2 \\
&= n\lambda^2 \big\|(K + n\lambda I)^{-1} f^*\big\|_2^2 + \frac{\sigma^2}{n}\,\mathrm{Tr}\big(K^2 (K + n\lambda I)^{-2}\big) \\
&:= \mathrm{bias}(K)^2 + \mathrm{variance}(K).
\end{aligned} \tag{4}
\]
Solving Problem (2), either by a direct method or by an optimization algorithm, needs at least
quadratic and often cubic running time in n, which is prohibitive in the large-scale setting. The
so-called Nyström method approximates the solution to Problem (2) by substituting K with a low-rank approximation to K. In practice, this approximation is often not only fast to construct, but
the resulting learning problem is also often easier to solve [13, 14, 15, 2]. The method operates
as follows. A small number of columns K_1, · · · , K_p are randomly sampled from K. If we let
C = [K_1, · · · , K_p] ∈ R^{n×p} denote the matrix containing the sampled columns and W ∈ R^{p×p} the
overlap between C and C^⊤ in K, then the Nyström approximation of K is the matrix
\[ L = C W^\dagger C^\top. \]
More generally, if we let S ∈ R^{n×p} be an arbitrary sketching matrix, i.e., a tall and skinny matrix
that, when left-multiplied by K, produces a "sketch" of K that preserves some desirable properties,
then the Nyström approximation associated with S is
\[ L = K S (S^\top K S)^\dagger S^\top K. \]
For instance, for random sampling algorithms, S would contain a non-zero entry at position (i, j) if
the i-th column of K is chosen at the j-th trial of the sampling process. Alternatively, S could also
be a random projection matrix; or S could be constructed with some other (perhaps deterministic)
method, as long as it verifies some structural properties, depending on the application [8, 2, 6, 5].
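For concreteness, here is a minimal NumPy sketch of the column-sampling construction above; the function name and the uniform-sampling choice are our own illustrative assumptions, not a prescription from the text.

```python
import numpy as np

def nystrom_approximation(K, p, rng=None):
    """Rank-p Nystrom approximation L = C W^+ C^T of an SPSD matrix K,
    using p columns sampled uniformly with replacement (the vanilla scheme)."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    idx = rng.choice(n, size=p, replace=True)   # sampled column indices
    C = K[:, idx]                               # n x p: the sampled columns
    W = K[np.ix_(idx, idx)]                     # p x p: overlap of C and C^T in K
    return C @ np.linalg.pinv(W) @ C.T          # Moore-Penrose pseudoinverse of W
```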
We will focus in this paper on analyzing this approximation in the statistical prediction context
related to the estimation of f^* by solving Problem (2). We proceed by revisiting and improving
upon prior results from three different areas. The first result (Theorem 1) is on the behavior of the
bias of f̂_L, when L is constructed using a general sketching matrix S. This result underlies the
statistical analysis of the Nyström method. To see this, first, it is not hard to prove that L ⪯ K
in the sense of the usual order on the positive semi-definite cone. Second, one can prove that the
variance is matrix-increasing, hence the variance will decrease when replacing K by L. On the other
hand, the bias (while not matrix monotone in general) can be proven to not increase too much when
replacing K by L. This latter statement will be the main technical difficulty for obtaining a bound
on R(f̂_L) (see Appendix A). A form of this result is due to Bach [3] in the case where S is a uniform
sampling matrix. The second result (Theorem 2) is a concentration bound for approximating matrix
multiplication when the rank-one components of the product are sampled non-uniformly. This result
is derived from the matrix Bernstein inequality, and yields a sharp quantification of the deviation
of the approximation from the true product. The third result (Definition 1) is an extension of the
definition of the leverage scores to the context of kernel ridge regression. Whereas the notion of
leverage is established as an algorithmic tool in randomized linear algebra, we introduce a natural
counterpart of it in this statistical setting. By combining these contributions, we are able to give a
sharp statistical statement on the behavior of the Nyström method if one is allowed to sample non-uniformly. All the proofs are deferred to the appendix (or see [1]).
3 Revisiting prior work and new results
3.1 A structural result
We begin by stating a "structural" result that upper-bounds the bias of the estimator constructed
using the approximation L. This result is deterministic: it only depends on the properties of the
input data, and holds for any sketching matrix S that satisfies certain conditions. This way the
randomness of the construction of S is decoupled from the rest of the analysis. We highlight the fact
that this view offers a possible way of improving the current results, since a better construction of S
(whether deterministic or random) satisfying the data-related conditions would immediately lead to
downstream algorithmic and statistical improvements in this setting.
Theorem 1. Let S ∈ R^{n×p} be a sketching matrix and L the corresponding Nyström approximation. For γ > 0, let Σ̄ = Σ(Σ + nγI)^{-1}. If the sketching matrix S satisfies
λ_max(Σ̄ − Σ̄^{1/2} U^⊤ S S^⊤ U Σ̄^{1/2}) ≤ t for t ∈ (0, 1), and γ ≥ (1/(1−t)) ‖S‖_op² λ_max(K)/n, where λ_max denotes the
maximum eigenvalue and ‖·‖_op is the operator norm, then
\[ \mathrm{bias}(L) \;\le\; \Big(1 + \frac{\gamma/\lambda}{1 - t}\Big)\,\mathrm{bias}(K). \tag{5} \]
In the special case where S contains one non-zero entry equal to 1/√(pn) in every column, with p the
number of sampled columns, the result and its proof can be found in [3] (Appendix B.2), although
we believe that their argument contains a problematic statement. We propose an alternative and
complete proof in Appendix A. The subsequent analysis unfolds in two steps: (1) assuming the
sketching matrix S satisfies the conditions stated in Theorem 1, we will have R(f̂_L) ≲ R(f̂_K), and
(2) matrix concentration is used to show that an appropriate random construction of S satisfies the
said conditions. We start by stating the concentration result that is the source of our improvement
(Section 3.2), define a notion of statistical leverage scores (Section 3.3), and then state and prove
the main statistical result (Theorem 3, Section 3.4). We then present our main algorithmic result,
consisting of a fast approximation to this new notion of leverage scores (Section 3.5).
3.2 A concentration bound on matrix multiplication
Next, we state our result for approximating matrix products of the form ΦΦ^⊤ when a few columns
from Φ are sampled to form the approximate product Φ_I Φ_I^⊤, where Φ_I contains the chosen columns.
The proof relies on a matrix Bernstein inequality (see, e.g., [16]) and is presented at the end of the
paper (Appendix B).
Theorem 2. Let n, m be positive integers. Consider a matrix Φ ∈ R^{n×m} and denote by Φ_i the i-th
column of Φ. Let p ≤ m and I = {i_1, · · · , i_p} be a subset of {1, · · · , m} formed by p elements
chosen randomly with replacement, according to a distribution satisfying
\[ \forall i \in \{1, \cdots, m\}, \qquad \Pr(\text{choosing } i) = p_i \ge \beta\,\frac{\|\Phi_i\|_2^2}{\|\Phi\|_F^2} \tag{6} \]
for some β ∈ (0, 1]. Let S ∈ R^{m×p} be a sketching matrix such that S_{ij} = 1/\sqrt{p\, p_i} only if i = i_j,
and 0 elsewhere. Then
\[ \Pr\Big(\lambda_{\max}\big(\Phi\Phi^\top - \Phi S S^\top \Phi^\top\big) \ge t\Big) \le n \exp\left( \frac{-p\,t^2/2}{\lambda_{\max}(\Phi\Phi^\top)\big(\|\Phi\|_F^2/\beta + t/3\big)} \right). \tag{7} \]
Remarks: 1. This result will be used for Φ = Σ̄^{1/2} U^⊤, in conjunction with Theorem 1, to prove
our main result in Theorem 3. Notice that Φ^⊤ is a scaled version of the eigenvectors, with a scaling
given by the diagonal matrix Σ̄ = Σ(Σ + nλI)^{-1}, which should be considered as a "soft projection"
matrix that smoothly selects the top part of the spectrum of K. The setting of Gittens et al. [2], in
which Σ̄ is a 0-1 diagonal, is the closest analog of our setting.
2. It is known that p_i = ‖Φ_i‖₂²/‖Φ‖_F² is the optimal sampling distribution in terms of minimizing the
expected error E‖ΦΦ^⊤ − ΦSS^⊤Φ^⊤‖_F² [17]. The above result exhibits a robustness property by
allowing the chosen sampling distribution to be different from the optimal one by a factor β.² The
sub-optimality of such a distribution is reflected in the upper bound (7) by the amplification of the
squared Frobenius norm of Φ by a factor 1/β. For instance, if the sampling distribution is chosen
to be uniform, i.e., p_i = 1/m, then the value of β for which (6) is tight is ‖Φ‖_F²/(m max_i ‖Φ_i‖₂²), in which
case we recover a concentration result proven by Bach [3]. Note that Theorem 2 is derived from
one of the state-of-the-art bounds on matrix concentration, but it is one among many others in the
literature; and while it constitutes the base of our improvement, it is possible that a concentration
bound more tailored to the problem might yield sharper results.
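To illustrate the sampling scheme of Theorem 2, here is a small NumPy sketch (our own) that draws p columns of Φ with probabilities proportional to their squared norms (β = 1 in (6)) and forms the rescaled approximation ΦSS^⊤Φ^⊤.

```python
import numpy as np

def approx_gram(Phi, p, rng=None):
    """Approximate Phi Phi^T by sampling p columns of Phi with probabilities
    p_i = ||Phi_i||_2^2 / ||Phi||_F^2 and rescaling by 1/sqrt(p * p_i)."""
    rng = np.random.default_rng(rng)
    probs = np.sum(Phi ** 2, axis=0) / np.sum(Phi ** 2)
    idx = rng.choice(Phi.shape[1], size=p, replace=True, p=probs)
    Phi_S = Phi[:, idx] / np.sqrt(p * probs[idx])   # columns of Phi S
    return Phi_S @ Phi_S.T                          # = Phi S S^T Phi^T
```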
3.3 An extended definition of leverage
We introduce an extended notion of leverage scores that is specifically tailored to the ridge regression
problem, and that we call the λ-ridge leverage scores.
Definition 1. For λ > 0, the λ-ridge leverage scores associated with the kernel matrix K and the
parameter λ are
\[ \forall i \in \{1, \cdots, n\}, \qquad l_i(\lambda) = \sum_{j=1}^n U_{ij}^2\, \frac{\sigma_j}{\sigma_j + n\lambda}. \tag{8} \]
setting the analogs of the so-called leverage scores in the statistical literature, as they characterize the
data points that ?stick out?, and consequently that most affect the result of a statistical procedure.
They are classically defined as the row norms of the left singular matrix U of the input matrix,
and they have been used in regression diagnostics for outlier detection [18], and more recently in
randomized matrix algorithms as they often provide an optimal importance sampling distribution
for constructing random sketches for low rank approximation [17, 19, 5, 6, 2] and least squares
regression [20] when the input matrix is tall and skinny (n ? m). In the case where the input matrix
is square, this definition is vacuous as the row norms of U are all equal to 1. Recently, Gittens and
Mahoney [2] used a truncated version of these scores (that they called leverage scores relative to the
best rank-k space) to obtain the best algorithmic results known to date on low rank approximation
of positive semi-definite matrices. Definition 1 is a weighted version of the classical leverage scores,
where the weights depend on the spectrum of K and a regularization parameter ?. In this sense, it is
an interpolation between Gittens? scores and the classical (tall-and-skinny) leverage scores, where
the parameter ? plays the role of a rank parameter. In addition, we point out that Bach?s maximal
degrees of freedom dmof is to the ?-ridge leverage scores what the coherence is to Gittens? leverage
scores, i.e. their (scaled) maximum value: dmof /n = maxi li (?); and that while the sum of Gittens?
scores is the rank parameter k, the sum of the ?-ridge leverage scores is the effective dimensionality
deff . We argue in the following that Definition 1 provides a relevant notion of leverage in the context
of kernel ridge regression. It is the natural counterpart of the algorithmic notion of leverage in the
prediction context. We use it in the next section to make a statistical statement on the performance
of the Nystr?om method.
2
In their work [17], Drineas et al. have a comparable robust statement for controlling the expected error.
Our result is a robust quantification of the tail probability of the error, which is a much stronger statement.
5
3.4 Main statistical result: an error bound on approximate kernel ridge regression
Now we are able to give an improved version of a theorem by Bach [3] that establishes a performance
guarantee for the use of the Nyström method in the context of kernel ridge regression. It is improved
in the sense that the sufficient number of columns that should be sampled in order to incur no
(or little) loss in the prediction performance is lower. This is due to a more data-sensitive way of
sampling the columns of K (depending on the λ-ridge leverage scores) during the construction of
the approximation L. The proof is in Appendix C.
Theorem 3. Let λ, ε > 0, δ ∈ (0, 1/2), n ≥ 2, and let L be a Nyström approximation of K obtained by choosing
p columns randomly with replacement according to a probability distribution (p_i)_{1≤i≤n} such that
∀i ∈ {1, · · · , n}, p_i ≥ β · l_i(λ)/Σ_{j=1}^n l_j(λ) for some β ∈ (0, 1]. Let l = min_i l_i(λ). If
\[ p \ge \frac{8}{\beta}\Big(\frac{d_{\mathrm{eff}}}{\epsilon} + \frac{1}{6}\Big)\log\frac{n}{\delta} \qquad\text{and}\qquad \lambda \ge \frac{2}{\epsilon}\Big(1 + \frac{1}{l}\Big)\frac{\lambda_{\max}(K)}{n}, \]
with d_eff = Σ_{i=1}^n l_i(λ) = Tr(K(K + nλI)^{-1}), then
\[ R(\hat f_L) \le (1 + 2\epsilon)^2\, R(\hat f_K) \]
with probability at least 1 − 2δ, where the (l_i)_i are introduced in Definition 1 and R is defined in (3).
Theorem 3 asserts that substituting the kernel matrix K by a Nyström approximation of rank p in the
KRR problem induces an arbitrarily small prediction loss, provided that p scales linearly with the
effective dimensionality d_eff³ and that λ is not too small⁴. The leverage-based sampling appears to be
crucial for obtaining this dependence, as the λ-ridge leverage scores provide information on which
columns (and hence which data points) capture most of the difficulty of the estimation problem.
Also, as a sanity check, the smaller the target accuracy ε, the higher d_eff, and the more uniform the
sampling distribution (l_i(λ))_i becomes. In the limit ε → 0, p is in the order of n and the scores
are uniform, and the method is essentially equivalent to using the entire matrix K. Moreover, if
the sampling distribution (p_i)_i is a factor β away from optimal, a slight oversampling (i.e., increasing
p by a factor 1/β) achieves the same performance. In this sense, the above result shows robustness to the
sampling distribution. This property is very beneficial from an implementation point of view, as
the error bounds still hold when only an approximation of the leverage scores is available. If the
columns are sampled uniformly, a worse lower bound on p, depending on d_mof, is obtained [3].
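Concretely, the sampling distribution of Theorem 3 plugs (possibly approximate) λ-ridge leverage scores into the Nyström construction; a self-contained sketch of this step, with names of our choosing, is:

```python
import numpy as np

def leverage_sampled_nystrom(K, scores, p, rng=None):
    """Nystrom approximation with columns drawn with probability proportional
    to the (possibly approximate) lambda-ridge leverage scores (Theorem 3)."""
    rng = np.random.default_rng(rng)
    probs = scores / scores.sum()          # p_i proportional to l_i(lambda)
    idx = rng.choice(K.shape[0], size=p, replace=True, p=probs)
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```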
3.5 Main algorithmic result: a fast approximation to the λ-ridge leverage scores
Although the λ-ridge leverage scores can be naively computed using an SVD, the exact computation is
as costly as solving the original Problem (2). Therefore, the central role they play in the above result
motivates the problem of a fast approximation, in a similar way that the importance of the usual leverage
scores motivated Drineas et al. to approximate them in random projection time [7]. A success in
this task will allow us to combine the running time benefits with the improved statistical guarantees
we have provided.
Algorithm:
• Inputs: data points (x_i)_{1≤i≤n}, probability vector (p_i)_{1≤i≤n}, sampling parameter p ∈ {1, 2, · · ·}, λ > 0, ε ∈ (0, 1/2).
• Output: (l̂_i)_{1≤i≤n}, approximations to (l_i(λ))_{1≤i≤n}.
1. Sample p data points from (x_i)_{1≤i≤n} with replacement, with probabilities (p_i)_{1≤i≤n}.
2. Compute the corresponding columns K_1, · · · , K_p of the kernel matrix.
3. Construct C = [K_1, · · · , K_p] ∈ R^{n×p} and W ∈ R^{p×p} as presented in Section 2.
4. Construct B ∈ R^{n×p} such that BB^⊤ = CW^†C^⊤.
5. For every i ∈ {1, · · · , n}, set
\[ \hat l_i = B_i^\top (B^\top B + n\lambda I)^{-1} B_i, \tag{9} \]
where B_i is the i-th row of B, and return it.
³ Note that d_eff depends on the precision parameter ε, which is absent in the classical definition of the
effective dimensionality [10, 9, 3]. However, the following bound holds: d_eff ≤ (1/ε) Tr(K(K + nλI)^{-1}).
⁴ This condition on λ is not necessary if one constructs L as KS(S^⊤KS + nλI)^{-1}S^⊤K (see proof).
Running time: The running time of the above algorithm is dominated by steps 4 and 5. Indeed,
constructing B can be done using a Cholesky factorization of W and then a multiplication of C by
the inverse of the obtained Cholesky factor, which yields a running time of O(p³ + np²). Computing
the approximate leverage scores (l̂_i)_{1≤i≤n} in step 5 also runs in O(p³ + np²). Thus, for p ≪ n,
the overall algorithm runs in O(np²). Note that formula (9) only involves matrices and vectors of
size p (everything is computed in the smaller dimension p), and the fact that this yields a correct
approximation relies on the matrix inversion lemma (see proof in Appendix D). Also, only the
relevant columns of K are computed, and we never have to form the entire kernel matrix. This
improves over earlier methods [2] that require that all of K be written down in memory. The
improved running time is obtained by considering the construction (9), which is quite different from
the regular setting of approximating the leverage scores of a rectangular matrix [7]. We now give
both additive and multiplicative error bounds on its approximation quality.
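A minimal NumPy implementation of the five steps above might look as follows; this is a sketch under the assumption that a kernel function `k` and the raw data are given, and all helper names are ours. It uses a (jittered) Cholesky factor of W to build B, as in the running-time discussion, so the jittered W^{-1} stands in for W^†.

```python
import numpy as np

def approximate_ridge_leverage_scores(X, k, p, lam, rng=None):
    """Approximate lambda-ridge leverage scores via steps 1-5 above, in O(n p^2).
    X: (n, d) data array; k: kernel function k(x, y); p: number of samples."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    diag_K = np.array([k(x, x) for x in X])
    probs = diag_K / diag_K.sum()                  # p_i = K_ii / Tr(K) (Theorem 4)
    idx = rng.choice(n, size=p, replace=True, p=probs)              # step 1
    C = np.array([[k(X[i], X[j]) for j in idx] for i in range(n)])  # steps 2-3
    W = C[idx, :]                                  # p x p overlap block of K
    R = np.linalg.cholesky(W + 1e-10 * np.eye(p))  # jittered Cholesky: W ~ R R^T
    B = C @ np.linalg.inv(R).T                     # step 4: B B^T = C W^{-1} C^T
    G = np.linalg.inv(B.T @ B + n * lam * np.eye(p))
    return np.einsum('ip,pq,iq->i', B, G, B)       # step 5, Eq. (9)
```

For p ≪ n this costs O(np²), and the returned scores can then drive the leverage-based column sampling of Theorem 3.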
Theorem 4. Let ε ∈ (0, 1/2), δ ∈ (0, 1) and λ > 0. Let L be a Nyström approximation of K obtained by
choosing p columns at random with probabilities p_i = K_ii/Tr(K), i = 1, · · · , n. If
\[ p \ge 8\Big(\frac{\mathrm{Tr}(K)}{n\lambda} + \frac{1}{6}\Big)\log\frac{n}{\delta}, \]
then we have, for all i ∈ {1, · · · , n},
\[ \text{(additive error bound)}\qquad l_i(\lambda) - 2\epsilon \;\le\; \hat l_i \;\le\; l_i(\lambda), \]
and
\[ \text{(multiplicative error bound)}\qquad \frac{\sigma_n - \epsilon\, n\lambda}{\sigma_n + n\lambda}\; l_i(\lambda) \;\le\; \hat l_i \;\le\; l_i(\lambda), \]
with probability at least 1 − δ.
Remarks: 1. Theorem 4 states that if the columns of K are sampled proportionally to K_ii, then
O(Tr(K)/(nλ)) is a sufficient number of samples. Recall that K_ii = ‖φ(x_i)‖_F², so our procedure is akin
to sampling according to the squared lengths of the data vectors, which has been extensively used in
different contexts of randomized matrix approximation [21, 17, 19, 8, 2].
2. Due to how λ is defined in eq. (1), the n in the denominator is artificial: nλ should be thought of as
a "rescaled" regularization parameter λ₀. In some settings, the λ that yields the best generalization
error scales like O(1/√n), hence p = O(Tr(K)/√n) is sufficient. On the other hand, if the columns
are sampled uniformly, one would get p = O(d_mof) = O(n max_i l_i(λ)).
4 Experiments
We test our results on several datasets: a synthetic regression problem from [3], chosen to illustrate the importance of the λ-ridge leverage scores, the Pumadyn family consisting of three datasets
pumadyn-32fm, pumadyn-32fh and pumadyn-32nh⁵, and the Gas Sensor Array Drift Dataset from
the UCI database⁶. The synthetic case consists of a regression problem on the interval X = [0, 1]
where, given a sequence (x_i)_{1≤i≤n} and a sequence of noise (ε_i)_{1≤i≤n}, we observe the sequence
\[ y_i = f(x_i) + \sigma^2 \epsilon_i, \qquad i \in \{1, \cdots, n\}. \]
The function f belongs to the RKHS F generated by the kernel k(x, y) = (1/(2s)!)\,B_{2s}(x − y − ⌊x − y⌋),
where B_{2s} is the 2s-th Bernoulli polynomial [3]. One important feature of this regression problem
is the distribution of the points (x_i)_{1≤i≤n} on the interval X: if they are spread uniformly over the
interval, the λ-ridge leverage scores (l_i(λ))_{1≤i≤n} are uniform for every λ > 0, and uniform column
sampling is optimal in this case. In fact, if x_i = (i − 1)/n for i = 1, · · · , n, the kernel matrix K is
a circulant matrix [3], in which case we can prove that the λ-ridge leverage scores are constant.
Otherwise, if the data points are distributed asymmetrically on the interval, the λ-ridge leverage
scores are non-uniform, and importance sampling is beneficial (Figure 1). In this experiment, the
data points x_i ∈ (0, 1) have been generated with a distribution symmetric about 1/2, having a high
density on the borders of the interval (0, 1) and a low density on the center of the interval. The
number of observations is n = 500. In Figure 1, we can see that there are few data points with
⁵ http://www.cs.toronto.edu/~delve/data/pumadyn/desc.html
⁶ https://archive.ics.uci.edu/ml/datasets/Gas+Sensor+Array+Drift+Dataset
Figure 1: The λ-ridge leverage scores for the synthetic Bernoulli data set described in the text (left) and
the MSE risk vs. the number of sampled columns used to construct the Nyström approximation for different
sampling methods (right).
high leverage, and those correspond to the region that is underrepresented in the data sample (i.e., the
region close to the center of the interval, since it is the one that has the lowest density of observations).
The λ-ridge leverage scores are able to capture the importance of these data points, thus providing a
way to detect them (e.g., with an analysis of outliers), had we not known of their existence.
For all datasets, we determine λ and the bandwidth of k by cross-validation, and we compute the
effective dimensionality d_eff and the maximal degrees of freedom d_mof. Table 1 summarizes the
experiments. It is often the case that d_eff ≪ d_mof and R(f̂_L)/R(f̂_K) ≃ 1, in agreement with
Theorem 3.
kernel | dataset   | n    | nb. feat | band width | λ      | d_eff | d_mof | risk ratio R(f̂_L)/R(f̂_K)
Bern   | Synth     | 500  | -        | -          | 1e-6   | 24    | 500   | 1.01 (p = 2 d_eff)
Linear | Gas2      | 1244 | 128      | -          | 1e-3   | 126   | 1244  | 1.10 (p = 2 d_eff)
Linear | Gas3      | 1586 | 128      | -          | 1e-3   | 125   | 1586  | 1.09 (p = 2 d_eff)
Linear | Pum-32fm  | 2000 | 32       | -          | 1e-3   | 31    | 2000  | 0.99 (p = 2 d_eff)
Linear | Pum-32fh  | 2000 | 32       | -          | 1e-3   | 31    | 2000  | 0.99 (p = 2 d_eff)
Linear | Pum-32nh  | 2000 | 32       | -          | 1e-3   | 32    | 2000  | 0.99 (p = 2 d_eff)
RBF    | Gas2      | 1244 | -        | 1          | 4.5e-4 | 1135  | 1244  | 1.56 (p = d_eff)
RBF    | Gas3      | 1586 | -        | 1          | 5e-4   | 1450  | 1586  | 1.50 (p = d_eff)
RBF    | Pum-32fm  | 2000 | -        | 5          | 0.5    | 142   | 1897  | 1.00 (p = d_eff)
RBF    | Pum-32fh  | 2000 | -        | 5          | 5e-2   | 747   | 1989  | 1.00 (p = d_eff)
RBF    | Pum-32nh  | 2000 | -        | 5          | 1.3e-2 | 1337  | 1997  | 0.99 (p = d_eff)
Table 1: Parameters and quantities of interest for the different datasets and using different kernels: the synthetic
dataset using the Bernoulli kernel (denoted by Synth), the Gas Sensor Array Drift Dataset (batches 2 and 3,
denoted by Gas2 and Gas3) and the Pumadyn datasets (Pum-32fm, Pum-32fh, Pum-32nh) using linear and
RBF kernels.
5 Conclusion
We showed in this paper that, in the case of kernel ridge regression, the sampling complexity of the
Nyström method can be reduced to the effective dimensionality of the problem, hence bridging and
improving upon different previous attempts that established weaker forms of this result. This was
achieved by defining a natural analog of the notion of leverage scores in this statistical context, and
using it as a column sampling distribution. We obtained this result by combining and improving
upon results that have emerged from two different perspectives on low-rank matrix approximation.
We also present a way to approximate these scores that is computationally tractable, i.e., runs in time
O(np²), with p depending only on the trace of the kernel matrix and the regularization parameter.
One natural unanswered question is whether it is possible to further reduce the sampling complexity,
or whether the effective dimensionality is also a lower bound on p. And, as pointed out by previous work
[22, 3], it is likely that the same results hold for smooth losses beyond the squared loss (e.g., logistic
regression). However, the situation is unclear for non-smooth losses (e.g., support vector regression).
Acknowledgements: We thank Xixian Chen for pointing out a mistake in an earlier draft of this
paper [1]. We thank Francis Bach for stimulating discussions and for contributing to a rectified proof
of Theorem 1. We thank Jason Lee and Aaditya Ramdas for fruitful discussions regarding the proof
of Theorem 1. We thank Yuchen Zhang for pointing out the connection to his work.
References
[1] Ahmed El Alaoui and Michael W. Mahoney. Fast randomized kernel methods with statistical guarantees. arXiv preprint arXiv:1411.0306, 2014.
[2] Alex Gittens and Michael W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. In Proceedings of The 30th International Conference on Machine Learning, pages 567–575, 2013.
[3] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In Proceedings of The 26th Conference on Learning Theory, pages 185–209, 2013.
[4] Francis Bach. Personal communication, October 2015.
[5] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008.
[6] Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
[7] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. The Journal of Machine Learning Research, 13(1):3475–3506, 2012.
[8] Michael W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123–224, 2011.
[9] Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression. In Proceedings of The 26th Conference on Learning Theory, pages 592–617, 2013.
[10] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
[11] George Kimeldorf and Grace Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
[12] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In Computational Learning Theory, pages 416–426. Springer, 2001.
[13] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. The Journal of Machine Learning Research, 2:243–264, 2002.
[14] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Proceedings of the 14th Annual Conference on Neural Information Processing Systems, pages 682–688, 2001.
[15] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling techniques for the Nyström method. In International Conference on Artificial Intelligence and Statistics, pages 304–311, 2009.
[16] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
[17] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132–157, 2006.
[18] Samprit Chatterjee and Ali S. Hadi. Influential observations, high leverage points, and outliers in linear regression. Statistical Science, pages 379–393, 1986.
[19] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, 36(1):158–183, 2006.
[20] Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. Numerische Mathematik, 117(2):219–249, 2011.
[21] Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004.
[22] Francis Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4:384–414, 2010.
5,210 | 5,717 | Taming the Wild: A Unified Analysis of
HOGWILD!-Style Algorithms
Christopher De Sa, Ce Zhang, Kunle Olukotun, and Christopher Ré
cdesa@stanford.edu, czhang@cs.wisc.edu,
kunle@stanford.edu, chrismre@stanford.edu
Departments of Electrical Engineering and Computer Science
Stanford University, Stanford, CA 94309
Abstract
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that
enables us to capture the rich noise models that may arise from such techniques.
Specifically, we use our new analysis in three ways: (1) we derive convergence
rates for the convex case (HOGWILD!) with relaxed assumptions on the sparsity
of the problem; (2) we analyze asynchronous SGD algorithms for non-convex
matrix problems including matrix completion; and (3) we design and analyze
an asynchronous SGD algorithm, called BUCKWILD!, that uses lower-precision
arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
1 Introduction
Many problems in machine learning can be written as a stochastic optimization problem
\[ \text{minimize } \mathbb{E}[\tilde f(x)] \text{ over } x \in \mathbb{R}^n, \]
where f̃ is a random objective function. One popular method to solve this is stochastic gradient
descent (SGD), an iterative method which, at each timestep t, chooses a random objective sample f̃_t
and updates
\[ x_{t+1} = x_t - \eta \nabla \tilde f_t(x_t), \tag{1} \]
where η is the step size. For most problems, this update step is easy to compute, and perhaps
because of this SGD is a ubiquitous algorithm with a wide range of applications in machine learning [1], including neural network backpropagation [2, 3, 13], recommendation systems [8, 19], and
optimization [20]. For non-convex problems, SGD is popular; in particular, it is widely used in
deep learning, but its success there is poorly understood theoretically.
Given SGD's success in industry, practitioners have developed methods to speed up its computation.
One popular method to speed up SGD and related algorithms is asynchronous execution.
In an asynchronous algorithm, such as HOGWILD! [17], multiple threads run an update rule such
as Equation 1 in parallel without locks. HOGWILD! and other lock-free algorithms have been
applied to a variety of uses, including PageRank approximations (FrogWild! [16]), deep learning
(Dogwild! [18]) and recommender systems [24]. Many asynchronous versions of other stochastic
algorithms have been individually analyzed, such as stochastic coordinate descent (SCD) [14, 15]
and accelerated parallel proximal coordinate descent (APPROX) [6], producing rate results that are
similar to those of HOGWILD! Recently, Gupta et al. [9] gave an empirical analysis of the effects of
a low-precision variant of SGD on neural network training. Other variants of stochastic algorithms
have been proposed [5, 11, 12, 21, 22, 23]; only a fraction of these algorithms have been analyzed in
the asynchronous case. Unfortunately, a new variant of SGD (or a related algorithm) may violate the
assumptions of existing analyses, and hence there are gaps in our understanding of these techniques.
One approach to filling this gap is to analyze each purpose-built extension from scratch: an entirely
new model for each type of asynchrony, each type of precision, etc. In a practical sense, this may
be unavoidable, but ideally there would be a single technique that could analyze many models. In
this vein, we prove a martingale-based result that enables us to treat many different extensions as
different forms of noise within a unified model. We demonstrate our technique with three results:
1. For the convex case, HOGWILD! requires strict sparsity assumptions. Using our techniques, we are able to relax these assumptions and still derive convergence rates. Moreover,
under HOGWILD!'s stricter assumptions, we recover the previous convergence rates.
2. We derive convergence results for an asynchronous SGD algorithm for a non-convex matrix
completion problem. We derive the first rates for asynchronous SGD following the recent
(synchronous) non-convex SGD work of De Sa et al. [4].
3. We derive convergence rates in the presence of quantization errors such as those introduced by fixed-point arithmetic. We validate our results experimentally, and show that
BUCKWILD! can achieve speedups of up to 2.3× over HOGWILD!-based algorithms for
logistic regression.
One can combine these different methods both theoretically and empirically. We begin with our
main result, which describes our martingale-based approach and our model.
2 Main Result
Analyzing asynchronous algorithms is challenging because, unlike in the sequential case, where there
is a single copy of the iterate x, in the asynchronous case each core has a separate copy of x in its
own cache. Writes from one core may take some time to be propagated to another core's copy of
x, which results in race conditions where stale data is used to compute the gradient updates. This
difficulty is compounded in the non-convex case, where a series of unlucky random events (bad
initialization, inauspicious steps, and race conditions) can cause the algorithm to get stuck near a
saddle point or in a local minimum.
Broadly, we analyze algorithms that repeatedly update x by running an update step
\[ x_{t+1} = x_t - \tilde G_t(x_t), \tag{2} \]
for some i.i.d. update function G̃_t. For example, for SGD, we would have G̃(x) = η∇f̃_t(x). The
goal of the algorithm must be to produce an iterate in some success region S; for example, a ball
centered at the optimum x*. For any T, after running the algorithm for T timesteps, we say that the
algorithm has succeeded if x_t ∈ S for some t ≤ T; otherwise, we say that the algorithm has failed,
and we denote this failure event as F_T.
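In code, this success criterion is just a stopping test on the iterates; a schematic driver (our own sketch, with names we chose) is:

```python
def run_until_success(x0, update, in_success_region, T):
    """Run x_{t+1} = x_t - G_t(x_t) for at most T steps; report failure event F_T."""
    x = x0
    for t in range(T):
        if in_success_region(x):
            return x, False          # succeeded: F_T did not occur
        x = x - update(x)            # one i.i.d. update G_t applied to x_t
    return x, True                   # F_T: no iterate entered S by time T
```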
Our main result is a technique that allows us to bound the convergence rates of asynchronous SGD
and related algorithms, even for some non-convex problems. We use martingale methods, which
have produced elegant convergence rate results for both convex and some non-convex [4] algorithms.
Martingales enable us to model multiple forms of error (for example, from stochastic sampling,
random initialization, and asynchronous delays) within a single statistical model. Compared to
standard techniques, they also allow us to analyze algorithms that sometimes get stuck, which is
useful for non-convex problems. Our core contribution is that a martingale-based proof for the
convergence of a sequential stochastic algorithm can be easily modified to give a convergence rate
for an asynchronous version.
A supermartingale [7] is a stochastic process W_t such that E[W_{t+1} | W_t] ≤ W_t. That is, the expected
value is non-increasing over time. A martingale-based proof of convergence for the sequential version of this algorithm must construct a supermartingale W_t(x_t, x_{t−1}, . . . , x_0) that is a function of
both the time and the current and past iterates; this function informally represents how unhappy we
are with the current state of the algorithm. Typically, it will have the following properties.
Definition 1. For a stochastic algorithm as described above, a non-negative process W_t : R^{n×t} → R
is a rate supermartingale with horizon B if the following conditions are true. First, it must be a
supermartingale; that is, for any sequence x_t, . . . , x_0 and any t ≤ B,
\[ \mathbb{E}\big[W_{t+1}(x_t - \tilde G_t(x_t), x_t, \ldots, x_0)\big] \le W_t(x_t, x_{t-1}, \ldots, x_0). \tag{3} \]
Second, for all times T ≤ B and for any sequence x_T, . . . , x_0, if the algorithm has not succeeded
by time T (that is, x_t ∉ S for all t < T), it must hold that
\[ W_T(x_T, x_{T-1}, \ldots, x_0) \ge T. \tag{4} \]
This represents the fact that we are unhappy with running for many iterations without success.
Using this, we can easily bound the convergence rate of the sequential version of the algorithm.
Statement 1. Assume that we run a sequential stochastic algorithm for which W is a rate supermartingale. For any T ≤ B, the probability that the algorithm has not succeeded by time T is
\[ P(F_T) \le \frac{\mathbb{E}[W_0(x_0)]}{T}. \]
Proof. In what follows, we let W_t denote the actual value taken on by the function in a process
defined by (2). That is, W_t = W_t(x_t, x_{t−1}, . . . , x_0). By applying (3) recursively, for any T,
\[ \mathbb{E}[W_T] \le \mathbb{E}[W_0] = \mathbb{E}[W_0(x_0)]. \]
By the law of total expectation applied to the failure event F_T,
\[ \mathbb{E}[W_0(x_0)] \ge \mathbb{E}[W_T] = P(F_T)\,\mathbb{E}[W_T \mid F_T] + P(\neg F_T)\,\mathbb{E}[W_T \mid \neg F_T]. \]
Applying (4), i.e., E[W_T | F_T] ≥ T, and recalling that W is nonnegative, results in
\[ \mathbb{E}[W_0(x_0)] \ge P(F_T)\, T; \]
rearranging terms produces the result in Statement 1.
This technique is very general; in subsequent sections we show that rate supermartingales can be
constructed for SGD on all convex problems and for some algorithms for non-convex problems.
2.1 Modeling Asynchronicity
The behavior of an asynchronous SGD algorithm depends both on the problem it is trying to solve
and on the hardware it is running on. For ease of analysis, we assume that the hardware has the
following characteristics. These are basically the same assumptions used to prove the original HOGWILD! result [17].
• There are multiple threads running iterations of (2), each with their own cache. At any point
in time, these caches may hold different values for the variable x, and they communicate
via some cache coherency protocol.
• There exists a central store S (typically RAM) at which all writes are serialized. This
provides a consistent value for the state of the system at any point in real time.
• If a thread performs a read R of a previously written value X, and then writes another
value Y (dependent on R), then the write that produced X will be committed to S before
the write that produced Y.
• Each write from an iteration of (2) is to only a single entry of x and is done using an atomic
read-add-write instruction. That is, there are no write-after-write races (handling these is
possible, but complicates the analysis).
Notice that, if we let x_t denote the value of the vector x in the central store S after t writes have
occurred, then since the writes are atomic, the value of x_{t+1} is solely dependent on the single thread
that produces the write that is serialized next in S. If we let G̃_t denote the update function sample
that is used by that thread for that write, and ṽ_t denote the cached value of x used by that write, then
\[ x_{t+1} = x_t - \tilde G_t(\tilde v_t). \tag{5} \]
Our hardware model further constrains the value of ṽ_t: all the read elements of ṽ_t must have been
written to S at some time before t. Therefore, for some nonnegative variable τ̃_{i,t},
\[ e_i^\top \tilde v_t = e_i^\top x_{t - \tilde\tau_{i,t}}, \tag{6} \]
where e_i is the i-th standard basis vector. We can think of τ̃_{i,t} as the delay in the i-th coordinate
caused by the parallel updates.
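As a concrete (toy) illustration of this execution model, the following Python sketch runs lock-free SGD updates from several threads on a shared vector, in the style of Eq. (5). It is our own schematic, not the authors' implementation; Python threads serialize on the GIL, so this only illustrates the execution model rather than achieving real parallel speedups.

```python
import threading
import numpy as np

def hogwild_sgd(grad_sample, x, eta, steps_per_thread, n_threads=4):
    """Lock-free asynchronous SGD: each thread applies sparse updates to the
    shared iterate x without synchronization (cf. Eq. (5))."""
    def worker():
        for _ in range(steps_per_thread):
            v = x.copy()                 # possibly stale snapshot of shared state
            g = grad_sample(v)           # sparse stochastic gradient at the snapshot
            for i in g.nonzero()[0]:     # one write per nonzero coordinate
                x[i] -= eta * g[i]       # unsynchronized read-add-write
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return x
```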
We can conceive of this system as a stochastic process with two sources of randomness: the noisy update function samples G̃_t and the delays τ̃_{i,t}. We assume that the G̃_t are independent and identically
distributed; this is reasonable because they are sampled independently by the updating threads. It
would be unreasonable, though, to assume the same for the τ̃_{i,t}, since delays may very well be correlated in the system. Instead, we assume that the delays are bounded from above by some random
variable τ̃. Specifically, if F_t, the filtration, denotes all random events that occurred before timestep
t, then for any i, t, and k,
\[ P\big(\tilde\tau_{i,t} \ge k \,\big|\, \mathcal{F}_t\big) \le P\big(\tilde\tau \ge k\big). \tag{7} \]
We let τ = E[τ̃], and call τ the worst-case expected delay.
2.2 Convergence Rates for Asynchronous SGD
Now that we are equipped with a stochastic model for the asynchronous SGD algorithm, we show
how we can use a rate supermartingale to give a convergence rate for asynchronous algorithms. To
do this, we need some continuity and boundedness assumptions; we collect these into a definition,
and then state the theorem.
Definition 2. An algorithm with rate supermartingale W is (H, R, ξ)-bounded if the following
conditions hold. First, W must be Lipschitz continuous in the current iterate with parameter H; that
is, for any t, u, v, and sequence x_t, . . . , x_0,
\[ \|W_t(u, x_{t-1}, \ldots, x_0) - W_t(v, x_{t-1}, \ldots, x_0)\| \le H \|u - v\|. \tag{8} \]
Second, G̃ must be Lipschitz continuous in expectation with parameter R; that is, for any u and v,
\[ \mathbb{E}\big[\|\tilde G(u) - \tilde G(v)\|\big] \le R \|u - v\|_1. \tag{9} \]
Third, the expected magnitude of the update must be bounded by ξ. That is, for any x,
\[ \mathbb{E}\big[\|\tilde G(x)\|\big] \le \xi. \tag{10} \]
Theorem 1. Assume that we run an asynchronous stochastic algorithm with the above hardware
model, for which W is an (H, R, ξ)-bounded rate supermartingale with horizon B. Further assume
that HRξτ < 1. For any T ≤ B, the probability that the algorithm has not succeeded by time T is
\[ P(F_T) \le \frac{\mathbb{E}[W_0(x_0)]}{(1 - HR\xi\tau)\,T}. \]
Note that this rate depends only on the worst-case expected delay τ and not on any other properties
of the hardware model. Compared to the result of Statement 1, the probability of failure has only
increased by a factor of (1 − HRξτ)^{-1}. In most practical cases, HRξτ ≪ 1, so this increase in
probability is negligible.
Since the proof of this theorem is simple, but uses non-standard techniques, we outline it here.
First, notice that the process Wt , which was a supermartingale in the sequential case, is not in the
asynchronous case because of the delayed updates. Our strategy is to use W to produce a new
process $V_t$ that is a supermartingale in this case. For any $t$ and $x_t$, if $x_u \notin S$ for all $u < t$, we define
$$V_t(x_t, \ldots, x_0) = W_t(x_t, \ldots, x_0) - HR\xi\tau\, t + HR \sum_{k=1}^{\infty} \|x_{t-k+1} - x_{t-k}\| \sum_{m=k}^{\infty} P(\tilde{\tau} \ge m).$$
Compared with W , there are two additional terms here. The first term is negative, and cancels out
some of the unhappiness from (4) that we ascribed to running for many iterations. We can interpret
this as us accepting that we may need to run for more iterations than in the sequential case. The
second term measures the distance between recent iterates; we would be unhappy if this becomes
large because then the noise from the delayed updates would also be large. On the other hand, if
$x_u \in S$ for some $u < t$, then we define
$$V_t(x_t, \ldots, x_u, \ldots, x_0) = V_u(x_u, \ldots, x_0).$$
We call $V_t$ a stopped process because its value doesn't change after success occurs. It is straightforward to show that $V_t$ is a supermartingale for the asynchronous algorithm. Once we know this, the
same logic used in the proof of Statement 1 can be used to prove Theorem 1.
Theorem 1 gives us a straightforward way of bounding the convergence time of any asynchronous
stochastic algorithm. First, we find a rate supermartingale for the problem; this is typically no
harder than proving sequential convergence. Second, we find parameters such that the problem is
$(H, R, \xi)$-bounded; this is typically easy for well-behaved problems, using differentiation
to bound the Lipschitz constants. Third, we apply Theorem 1 to get a rate for asynchronous SGD.
Using this method, analyzing an asynchronous algorithm is really no more difficult than analyzing
its sequential analog.
3 Applications
Now that we have proved our main result, we turn our attention to applications. We show, for
a couple of algorithms, how to construct a rate supermartingale. We demonstrate that doing this
allows us to recover known rates for HOGWILD! algorithms as well as analyze cases where no
known rates exist.
3.1 Convex Case, High Precision Arithmetic
First, we consider the simple case of using asynchronous SGD to minimize a convex function $f(x)$
using unbiased gradient samples $\nabla \tilde{f}(x)$. That is, we run the update rule
$$x_{t+1} = x_t - \alpha \nabla \tilde{f}_t(x_t). \qquad (11)$$
We make the standard assumption that $f$ is strongly convex with parameter $c$; that is, for all $x$ and $y$,
$$(x - y)^T (\nabla f(x) - \nabla f(y)) \ge c \|x - y\|^2. \qquad (12)$$
We also assume continuous differentiability of $\nabla \tilde{f}$ with 1-norm Lipschitz constant $L$,
$$E[\|\nabla \tilde{f}(x) - \nabla \tilde{f}(y)\|] \le L \|x - y\|_1. \qquad (13)$$
We require that the second moment of the gradient sample is also bounded for some $M > 0$ by
$$E[\|\nabla \tilde{f}(x)\|^2] \le M^2. \qquad (14)$$
For some $\epsilon > 0$, we let the success region be
$$S = \{x \mid \|x - x^*\|^2 \le \epsilon\}.$$
Under these conditions, we can construct a rate supermartingale for this algorithm.
Lemma 1. There exists a $W_t$ where, if the algorithm hasn't succeeded by timestep $t$,
$$W_t(x_t, \ldots, x_0) = \frac{\epsilon}{2\alpha c - \alpha^2 M^2} \log\!\left(e\, \epsilon^{-1} \|x_t - x^*\|^2\right) + t,$$
such that $W_t$ is a rate supermartingale for the above algorithm with horizon $B = \infty$. Furthermore, it
is $(H, R, \xi)$-bounded with parameters $H = 2\sqrt{\epsilon}\,(2\alpha c - \alpha^2 M^2)^{-1}$, $R = \alpha L$, and $\xi = \alpha M$.
Using this and Theorem 1 gives us a direct bound on the failure rate of convex HOGWILD! SGD.
Corollary 1. Assume that we run an asynchronous version of the above SGD algorithm, where for
some constant $\vartheta \in (0, 1)$ we choose step size
$$\alpha = \frac{c \vartheta \epsilon}{M^2 + 2 L M \tau \sqrt{\epsilon}}.$$
Then for any $T$, the probability that the algorithm has not succeeded by time $T$ is
$$P(F_T) \le \frac{M^2 + 2 L M \tau \sqrt{\epsilon}}{c^2 \vartheta \epsilon\, T} \log\!\left(e\, \epsilon^{-1} \|x_0 - x^*\|^2\right).$$
This result is more general than the result in Niu et al. [17]. The main differences are: that we make
no assumptions about the sparsity structure of the gradient samples; and that our rate depends only
on the second moment of $\tilde{G}$ and the expected value of $\tilde{\tau}$, as opposed to requiring absolute bounds
on their magnitude. Under their stricter assumptions, the result of Corollary 1 recovers their rate.
3.2 Convex Case, Low Precision Arithmetic
One of the ways BUCKWILD! achieves high performance is by using low-precision fixed-point
arithmetic. This introduces additional noise to the system in the form of round-off error. We consider
this error to be part of the BUCKWILD! hardware model. We assume that the round-off error can
be modeled by an unbiased rounding function operating on the update samples. That is, for some
chosen precision factor $\psi$, there is a random quantization function $\tilde{Q}$ such that, for any $x \in \mathbb{R}$, it
holds that $E[\tilde{Q}(x)] = x$, and the round-off error is bounded by $|\tilde{Q}(x) - x| < \psi \alpha M$. Using this
function, we can write a low-precision asynchronous update rule for convex SGD as
$$x_{t+1} = x_t - \tilde{Q}_t\!\left(\alpha \nabla \tilde{f}_t(\tilde{v}_t)\right), \qquad (15)$$
where $\tilde{Q}_t$ operates only on the single nonzero entry of $\nabla \tilde{f}_t(\tilde{v}_t)$. In the same way as we did in the
high-precision case, we can use these properties to construct a rate supermartingale for the low-precision version of the convex SGD algorithm, and then use Theorem 1 to bound the failure rate of
convex BUCKWILD!
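An unbiased quantizer of this kind can be realized with standard stochastic rounding to a fixed-point grid. The sketch below is one common construction (our own code, not from the paper; the grid step and test values are arbitrary) satisfying $E[\tilde{Q}(x)] = x$ with round-off error below one grid step:

```python
import numpy as np

def stochastic_round(x, step, rng):
    """Round x to a multiple of `step`, unbiasedly: E[output] = x."""
    scaled = np.asarray(x, dtype=float) / step
    lo = np.floor(scaled)
    # round up with probability equal to the fractional part
    up = rng.random(scaled.shape) < (scaled - lo)
    return (lo + up) * step

rng = np.random.default_rng(1)
x = 0.3137                                      # arbitrary test value
samples = stochastic_round(np.full(100_000, x), step=1 / 256, rng=rng)
print(samples.mean())                           # ~0.3137: rounding is unbiased
print(np.abs(samples - x).max() <= 1 / 256)     # error below one grid step
```

Unbiasedness follows because the rounded value is $(\lfloor x/s \rfloor + \mathrm{Bernoulli}(\mathrm{frac}))\, s$, whose expectation is exactly $x$.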
Corollary 2. Assume that we run asynchronous low-precision convex SGD, and for some $\vartheta \in (0, 1)$,
we choose step size
$$\alpha = \frac{c \vartheta \epsilon}{M^2 (1 + \psi^2) + L M \tau \sqrt{\epsilon}\, (2 + \psi^2)}.$$
Then for any $T$, the probability that the algorithm has not succeeded by time $T$ is
$$P(F_T) \le \frac{M^2 (1 + \psi^2) + L M \tau \sqrt{\epsilon}\, (2 + \psi^2)}{c^2 \vartheta \epsilon\, T} \log\!\left(e\, \epsilon^{-1} \|x_0 - x^*\|^2\right).$$
Typically, we choose a precision such that $\psi \ll 1$; in this case, the increased error compared to the
result of Corollary 1 will be negligible and we will converge in a number of samples that is very
similar to the high-precision, sequential case. Since each BUCKWILD! update runs in less time than
an equivalent HOGWILD! update, this result means that an execution of BUCKWILD! will produce
same-quality output in less wall-clock time compared with HOGWILD!
3.3 Non-Convex Case, High Precision Arithmetic
Many machine learning problems are non-convex, but are still solved in practice with SGD. In this
section, we show that our technique can be adapted to analyze non-convex problems. Unfortunately,
there are no general convergence results that provide rates for SGD on non-convex problems, so it
would be unreasonable to expect a general proof of convergence for non-convex HOGWILD! Instead,
we focus on a particular problem, low-rank least-squares matrix completion,
$$\text{minimize } E[\|\tilde{A} - x x^T\|_F^2] \quad \text{subject to } x \in \mathbb{R}^n, \qquad (16)$$
for which there exists a sequential SGD algorithm with a martingale-based rate that has already
been proven. This problem arises in general data analysis, subspace tracking, principal component
analysis, recommendation systems, and other applications [4]. In what follows, we let $A = E[\tilde{A}]$.
We assume that $A$ is symmetric, and has unit eigenvectors $u_1, u_2, \ldots, u_n$ with corresponding eigenvalues $\lambda_1 > \lambda_2 \ge \cdots \ge \lambda_n$. We let $\Delta$, the eigengap, denote $\Delta = \lambda_1 - \lambda_2$.
De Sa et al. [4] provide a martingale-based rate of convergence for a particular SGD algorithm,
Alecton, running on this problem. For simplicity, we focus on only the rank-1 version of the problem, and we assume that, at each timestep, a single entry of $A$ is used as a sample. Under these
conditions, Alecton uses the update rule
$$x_{t+1} = \left(I + \eta n^2 e_{\tilde{i}_t} e_{\tilde{i}_t}^T \tilde{A}\, e_{\tilde{j}_t} e_{\tilde{j}_t}^T\right) x_t, \qquad (17)$$
where $\tilde{i}_t$ and $\tilde{j}_t$ are randomly-chosen indices in $[1, n]$. It initializes $x_0$ uniformly on the sphere of
some radius centered at the origin. We can equivalently think of this as a stochastic power iteration
algorithm. For any $\epsilon > 0$, we define the success set $S$ to be
$$S = \{x \mid (u_1^T x)^2 \ge (1 - \epsilon) \|x\|^2\}. \qquad (18)$$
That is, we are only concerned with the direction of $x$, not its magnitude; this algorithm only recovers
the dominant eigenvector of $A$, not its eigenvalue. In order to show convergence for this entrywise
sampling scheme, De Sa et al. [4] require that the matrix $A$ satisfy a coherence bound [10].
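For concreteness, here is a minimal sequential simulation of the rank-1 entrywise update (17), written by us for illustration only: the matrix, step size, iteration count, and the per-step renormalization are all our own ad hoc choices (the paper instead assumes bounded iterates), not the tuned settings from [4].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50                                      # toy size, our choice
U = np.linalg.qr(rng.normal(size=(n, n)))[0]
lam = np.concatenate(([5.0, 2.0], rng.uniform(0.0, 1.0, n - 2)))
A = (U * lam) @ U.T                         # symmetric, eigengap Delta = 3
u1 = U[:, 0]

x = rng.normal(size=n)
x /= np.linalg.norm(x)
eta = 2e-4                                  # ad hoc step size
for _ in range(500_000):
    i, j = rng.integers(n), rng.integers(n)
    x[i] += eta * n**2 * A[i, j] * x[j]     # update (17) from one sampled entry
    x /= np.linalg.norm(x)                  # our stabilization, see lead-in note
print("(u1^T x)^2 / ||x||^2 =", (u1 @ x) ** 2 / (x @ x))
```

With a small enough step size and enough samples, the alignment with $u_1$ should climb toward 1, which is exactly the success criterion (18).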
Table 1: Training loss of SGD as a function of arithmetic precision for logistic regression.

Dataset   Rows    Columns   Size     32-bit float   16-bit int   8-bit int
Reuters   8K      18K       1.2GB    0.5700         0.5700       0.5709
Forest    581K    54        0.2GB    0.6463         0.6463       0.6447
RCV1      781K    47K       0.9GB    0.1888         0.1888       0.1879
Music     515K    91        0.7GB    0.8785         0.8785       0.8781
Definition 3. A matrix $A \in \mathbb{R}^{n \times n}$ is incoherent with parameter $\mu$ if for every standard basis vector
$e_j$, and for all unit eigenvectors $u_i$ of the matrix, $(e_j^T u_i)^2 \le \mu^2 n^{-1}$.
They also require that the step size be set, for some constants $0 < \vartheta \le 1$ and $0 < \gamma < (1 + \epsilon)^{-1}$, as
$$\eta = \frac{\gamma \vartheta \epsilon \Delta}{2 n \mu^4 \|A\|_F^2}.$$
For ease of analysis, we add the additional assumption that our algorithm runs in some bounded
space. That is, for some constant $C$, at all times $t$, $1 \le \|x_t\|$ and $\|x_t\|_1 \le C$. As in the convex
case, by following the martingale-based approach of De Sa et al. [4], we are able to generate a rate
supermartingale for this algorithm; to save space, we only state its initial value and not the full
expression.
Lemma 2. For the problem above, choose any horizon $B$ such that $\gamma \vartheta \Delta B \le 1$. Then there exists
a function $W_t$ such that $W_t$ is a rate supermartingale for the above non-convex SGD algorithm with
parameters $H = 8 n\, \epsilon^{-1} \Delta^{-1} \gamma^{-1} \vartheta^{\frac{1}{2}}$, $R = \eta \mu \|A\|_F$, and $\xi = \eta \mu \|A\|_F\, C$, and
$$E[W_0(x_0)] \le 2 \gamma^{-1} \vartheta^{-1} \log\!\left(e n \gamma^{-1} \vartheta^{-1}\right) + B \sqrt{2 \gamma \vartheta}.$$
Note that the analysis parameter $\gamma$ allows us to trade off between $B$, which determines how long we
can run the algorithm, and the initial value of the supermartingale $E[W_0(x_0)]$. We can now produce
a corollary about the convergence rate by applying Theorem 1 and setting $B$ and $T$ appropriately.
Corollary 3. Assume that we run HOGWILD! Alecton under these conditions for $T$ timesteps, as
defined below. Then the probability of failure, $P(F_T)$, will be bounded as below.
$$T = \frac{4 n \mu^4 \|A\|_F^2}{\Delta^2 \gamma \vartheta \sqrt{2 \gamma \vartheta}} \log\!\left(\frac{e n}{\gamma \vartheta}\right), \qquad P(F_T) \le \frac{8 \gamma \tau \mu^2}{\Delta^2} + 4 C \gamma \tau \sqrt{\epsilon}.$$
The fact that we are able to use our technique to analyze a non-convex algorithm illustrates its
generality. Note that it is possible to combine our results to analyze asynchronous low-precision
non-convex SGD, but the resulting formulas are complex, so we do not include them here.
4 Experiments
We validate our theoretical results for both asynchronous non-convex matrix completion and BUCKWILD!, a HOGWILD! implementation with lower-precision arithmetic. Like HOGWILD!, a BUCKWILD! algorithm has multiple threads running an update rule (2) in parallel without locking. Compared with HOGWILD!, which uses 32-bit floating point numbers to represent input data, BUCKWILD! uses limited-precision arithmetic by rounding the input data to 8-bit or 16-bit integers. This
not only decreases the memory usage, but also allows us to take advantage of single-instruction-multiple-data (SIMD) instructions for integers on modern CPUs.
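To illustrate the kind of representation this exploits, one can store a vector as 8-bit integers with a shared scale and keep the inner accumulation in integer arithmetic. The sketch below is our own simplified illustration (the actual implementation uses SIMD intrinsics, and the quantization scheme here is an assumption, not the paper's exact one):

```python
import numpy as np

def quantize_int8(v):
    """Quantize a float vector to int8 with a single shared scale."""
    scale = np.abs(v).max() / 127.0
    q = np.clip(np.rint(v / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(3)
x = rng.normal(size=4096).astype(np.float32)   # "input data" vector
w = rng.normal(size=4096).astype(np.float32)   # model vector

x_q, sx = quantize_int8(x)
w_q, sw = quantize_int8(w)
# the accumulation is pure integer math (what integer SIMD makes fast)
approx = int(x_q.astype(np.int32) @ w_q.astype(np.int32)) * sx * sw
print(float(x @ w), approx)
```

The approximate dot product matches the full-precision one to within the quantization noise, consistent with the near-identical training losses in Table 1.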
We verified our main claims by running HOGWILD! and BUCKWILD! algorithms on the discussed
applications. Table 1 shows how the training loss of SGD for logistic regression, a convex problem,
varies as the precision is changed. We ran SGD with step size $\alpha = 0.0001$; however, results are
similar across a range of step sizes. We analyzed all four datasets reported in DimmWitted [25] that
favored HOGWILD!: Reuters and RCV1, which are text classification datasets; Forest, which arises
from remote sensing; and Music, which is a music classification dataset. We implemented all GLM
models reported in DimmWitted, including SVM, Linear Regression, and Logistic Regression, and
report Logistic Regression because other models have similar performance. The results illustrate
that there is almost no increase in training loss as the precision is decreased for these problems. We
also investigated 4-bit and 1-bit computation: the former was slower than 8-bit due to a lack of 4-bit
SIMD instructions, and the latter discarded too much information to produce good quality results.

[Figure 1: Experiments compare the training loss, performance, and convergence of HOGWILD! and BUCKWILD! algorithms with sequential and/or high-precision versions. (a) Speedup of BUCKWILD! (32-bit float, 16-bit int, 8-bit int) for the dense RCV1 dataset, measured against both 32-bit sequential SGD and best-case 32-bit HOGWILD!, as a function of the number of threads (1 to 24). (b) Convergence trajectories, $(u_1^T x)^2 \|x\|^{-2}$ versus iterations (billions), for sequential versus 12-thread HOGWILD! Alecton with $n = 10^6$.]
Figure 1(a) displays the speedup of BUCKWILD! running on the dense version of the RCV1 dataset
compared to both full-precision sequential SGD (left axis) and best-case HOGWILD! (right axis).
Experiments ran on a machine with two Xeon X650 CPUs, each with six hyperthreaded cores, and
24GB of RAM. This plot illustrates that incorporating low-precision arithmetic into our algorithm
allows us to achieve significant speedups over both sequential and HOGWILD! SGD. (Note that we
don't get full linear speedup because we are bound by the available memory bandwidth; beyond
this limit, adding additional threads provides no benefits while increasing conflicts and thrashing
the L1 and L2 caches.) This result, combined with the data in Table 1, suggests that by doing low-precision asynchronous updates, we can get speedups of up to 2.3× on these sorts of datasets without
a significant increase in error.
Figure 1(b) compares the convergence trajectories of HOGWILD! and sequential versions of the non-convex Alecton matrix completion algorithm on a synthetic data matrix $A \in \mathbb{R}^{n \times n}$ with ten random
eigenvalues $\lambda_i > 0$. Each plotted series represents a different run of Alecton; the trajectories differ
somewhat because of the randomness of the algorithm. The plot shows that the sequential and
asynchronous versions behave qualitatively similarly, and converge to the same noise floor. For this
dataset, sequential Alecton took 6.86 seconds to run while 12-thread HOGWILD! Alecton took 1.39
seconds, a 4.9× speedup.
5 Conclusion
This paper presented a unified theoretical framework for producing results about the convergence
rates of asynchronous and low-precision random algorithms such as stochastic gradient descent. We
showed how a martingale-based rate of convergence for a sequential, full-precision algorithm can
be easily leveraged to give a rate for an asynchronous, low-precision version. We also introduced
BUCKWILD!, a strategy for SGD that is able to take advantage of modern hardware resources for
both task and data parallelism, and showed that it achieves near linear parallel speedup over sequential algorithms.
Acknowledgments
The BUCKWILD! name arose out of conversations with Benjamin Recht. Thanks also to Madeleine Udell
for helpful conversations. The authors acknowledge the support of: DARPA FA8750-12-2-0335; NSF IIS-1247701; NSF CCF-1111943; DOE 108845; NSF CCF-1337375; DARPA FA8750-13-2-0039; NSF IIS-1353606; ONR N000141210041 and N000141310129; NIH U54EB020405; Oracle; NVIDIA; Huawei; SAP
Labs; Sloan Research Fellowship; Moore Foundation; American Family Insurance; Google; and Toshiba.
References
[1] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT'2010, pages 177–186. Springer, 2010.
[2] Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421–436. Springer, 2012.
[3] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS, volume 20, pages 161–168. NIPS Foundation, 2008.
[4] Christopher De Sa, Kunle Olukotun, and Christopher Ré. Global convergence of stochastic gradient descent for some nonconvex matrix problems. ICML, 2015.
[5] John C Duchi, Peter L Bartlett, and Martin J Wainwright. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization, 22(2):674–701, 2012.
[6] Olivier Fercoq and Peter Richtárik. Accelerated, parallel and proximal coordinate descent. arXiv preprint arXiv:1312.5799, 2013.
[7] Thomas R Fleming and David P Harrington. Counting processes and survival analysis. volume 169, pages 56–57. John Wiley & Sons, 1991.
[8] Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Zadeh. WTF: The who to follow service at Twitter. WWW '13, pages 505–514, 2013.
[9] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. ICML, 2015.
[10] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, pages 665–674. ACM, 2013.
[11] Björn Johansson, Maben Rabi, and Mikael Johansson. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157–1170, 2009.
[12] Jakub Konečný, Zheng Qu, and Peter Richtárik. S2CD: Semi-stochastic coordinate descent. In NIPS Optimization in Machine Learning workshop, 2014.
[13] Yann Le Cun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks, Tricks of the Trade. 1998.
[14] Ji Liu and Stephen J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIOPT, 25(1):351–376, 2015.
[15] Ji Liu, Stephen J Wright, Christopher Ré, Victor Bittorf, and Srikrishna Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. JMLR, 16:285–322, 2015.
[16] Ioannis Mitliagkas, Michael Borokhovich, Alexandros G. Dimakis, and Constantine Caramanis. FrogWild!: Fast pagerank approximations on graph engines. PVLDB, 2015.
[17] Feng Niu, Benjamin Recht, Christopher Ré, and Stephen Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, pages 693–701, 2011.
[18] Cyprien Noel and Simon Osindero. Dogwild!: Distributed Hogwild for CPU & GPU. 2014.
[19] Shameem Ahamed Puthiya Parambath. Matrix factorization methods for recommender systems. 2013.
[20] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. ICML, 2012.
[21] Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. Mathematical Programming, pages 1–52, 2012.
[22] Qing Tao, Kang Kong, Dejun Chu, and Gaowei Wu. Stochastic coordinate descent methods for regularized smooth and nonsmooth losses. In Machine Learning and Knowledge Discovery in Databases, pages 537–552. Springer, 2012.
[23] Rachael Tappenden, Martin Takáč, and Peter Richtárik. On the complexity of parallel coordinate descent. arXiv preprint arXiv:1503.03033, 2015.
[24] Hsiang-Fu Yu, Cho-Jui Hsieh, Si Si, and Inderjit S Dhillon. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. In ICDM, pages 765–774, 2012.
[25] Ce Zhang and Christopher Ré. DimmWitted: A study of main-memory statistical analytics. PVLDB, 2014.
5,211 | 5,718 | Beyond Convexity: Stochastic
Quasi-Convex Optimization
Elad Hazan
Princeton University
Kfir Y. Levy
Technion
Shai Shalev-Shwartz
The Hebrew University
ehazan@cs.princeton.edu
kfiryl@tx.technion.ac.il
shais@cs.huji.ac.il
Abstract
Stochastic convex optimization is a basic and well studied primitive in machine
learning. It is well known that convex and Lipschitz functions can be minimized
efficiently using Stochastic Gradient Descent (SGD).
The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient
Descent, which updates according to the direction of the gradients, rather than the
gradients themselves. In this paper we analyze a stochastic version of NGD and
prove its convergence to a global minimum for a wider class of functions: we
require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity
broadens the concept of unimodality to multidimensions and allows for certain
types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent variants.
Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient
descent algorithm provably requires a minimal minibatch size.
1 Introduction
The benefits of using the Stochastic Gradient Descent (SGD) scheme for learning could not be
stressed enough. For convex and Lipschitz objectives, SGD is guaranteed to find an $\epsilon$-optimal solution within $O(1/\epsilon^2)$ iterations and requires only an unbiased estimator for the gradient, which
is obtained with only one (or a few) data samples. However, when applied to non-convex problems several drawbacks are revealed. In particular, SGD is widely used for deep learning [2], one
of the most interesting fields where stochastic non-convex optimization problems arise. Often, the
objective in these kinds of problems demonstrates two extreme phenomena [3]: on the one hand
plateaus, regions with vanishing gradients; and on the other hand cliffs, exceedingly high gradients. As expected, applying SGD to such problems is often reported to yield unsatisfactory results.
In this paper we analyze a stochastic version of the Normalized Gradient Descent (NGD) algorithm,
which we denote by SNGD. Each iteration of SNGD is as simple and efficient as SGD, but is
much more appropriate for non-convex optimization problems, overcoming some of the pitfalls that
SGD may encounter. Particularly, we define a family of locally-quasi-convex and locally-Lipschitz
functions, and prove that SNGD is suitable for optimizing such objectives.
Local-Quasi-convexity is a generalization of unimodal functions to multidimensions, which includes
quasi-convex and convex functions as a subclass. Locally-Quasi-convex functions allow for certain
types of plateaus and saddle points which are difficult for SGD and other gradient descent variants.
Local-Lipschitzness is a generalization of Lipschitz functions that only assumes Lipschitzness in a
small region around the minima, whereas farther away the gradients may be unbounded. Gradient
explosion is, thus, another difficulty that is successfully tackled by SNGD and poses difficulties for
other stochastic gradient descent variants.
Our contributions:
• We introduce local-quasi-convexity, a property that extends quasi-convexity and captures
unimodal functions which are not quasi-convex. We prove that NGD finds an $\epsilon$-optimal
minimum for such functions within $O(1/\epsilon^2)$ iterations. As a special case, we show that the
above rate can be attained for quasi-convex functions that are Lipschitz in an $\Omega(\epsilon)$-region
around the optimum (gradients may be unbounded outside this region). For objectives that
are also smooth in an $\Omega(\sqrt{\epsilon})$-region around the optimum, we prove a faster rate of $O(1/\epsilon)$.
• We introduce a new setup: stochastic optimization of locally-quasi-convex functions; and
show that this setup captures Generalized Linear Models (GLM) regression, [14]. For this
setup, we devise a stochastic version of NGD (SNGD), and show that it converges within
$O(1/\epsilon^2)$ iterations to an $\epsilon$-optimal minimum.
• The above positive result requires that at each iteration of SNGD, the gradient should be
estimated using a minibatch of a minimal size. We provide a negative result showing that
if the minibatch size is too small then the algorithm might indeed diverge.
• We report experimental results supporting our theoretical guarantees and demonstrate an
accelerated convergence attained by SNGD.
1.1 Related Work
Quasi-convex optimization problems arise in numerous fields, spanning economics [20, 12], industrial organization [21], and computer vision [8]. It is well known that quasi-convex optimization
tasks can be solved by a series of convex feasibility problems [4]; however, generally solving such
feasibility problems may be very costly [6]. There exists a rich literature concerning quasi-convex
optimization in the offline case, [17, 22, 9, 18]. A pioneering paper by [15] was the first to suggest
an efficient algorithm, namely Normalized Gradient Descent, and prove that this algorithm attains an $\epsilon$-optimal solution within $O(1/\epsilon^2)$ iterations given a differentiable quasi-convex objective. This work
was later extended by [10], establishing the same rate for upper semi-continuous quasi-convex objectives. In [11] faster rates for quasi-convex optimization are attained, but they assume to know the
optimal value of the objective, an assumption that generally does not hold in practice.
Among the deep learning community there have been several attempts to tackle plateaus/gradient-explosion. Ideas spanning gradient-clipping [16], smart initialization [5], and more [13], have been shown
to improve training in practice. Yet, none of these works provides a theoretical analysis showing better
convergence guarantees. To the best of our knowledge, there are no previous results on stochastic
versions of NGD, nor results regarding locally-quasi-convex/locally-Lipschitz functions.
1.2 Plateaus and Cliffs - Difficulties for GD
Gradient descent with fixed step sizes, including its stochastic variants, is known to perform poorly
when the gradients are too small in a plateau area of the function, or alternatively when the other
extreme happens: gradient explosions. These two phenomena have been reported in certain types of
non-convex optimization, such as training of deep networks.

[Figure 1: A quasi-convex locally-Lipschitz function with plateaus and cliffs; $\|\nabla f(x)\| = M \to \infty$ at the cliffs and $\|\nabla f(x)\| = m \to 0$ on the plateaus.]
Figure 1 depicts a one-dimensional family of functions for which GD behaves provably poorly. With
a large step-size, GD will hit the cliffs and then oscillate between the two boundaries. Alternatively,
with a small step size, the low gradients will cause GD to miss the middle valley which has constant
size of 1/2. On the other hand, this exact function is quasi-convex and locally-Lipschitz, and hence
the NGD algorithm provably converges to the optimum quickly.
2 Definitions and Notations
We use $\|\cdot\|$ to denote the Euclidean norm. $\mathbb{B}_d(x, r)$ denotes the $d$-dimensional Euclidean ball of
radius $r$, centered around $x$, and $\mathbb{B}_d := \mathbb{B}_d(0, 1)$. $[N]$ denotes the set $\{1, \ldots, N\}$.
For simplicity, throughout the paper we always assume that functions are differentiable (but if not
stated explicitly, we do not assume any bound on the norm of the gradients).
Definition 2.1 (Local-Lipschitzness and Local-Smoothness). Let $z \in \mathbb{R}^d$, $G, \epsilon \ge 0$. A function
$f: K \mapsto \mathbb{R}$ is called $(G, \epsilon, z)$-Locally-Lipschitz if for every $x, y \in \mathbb{B}_d(z, \epsilon)$, we have
$$|f(x) - f(y)| \le G \|x - y\|.$$
Similarly, the function is $(\beta, \epsilon, z)$-locally-smooth if for every $x, y \in \mathbb{B}_d(z, \epsilon)$ we have
$$|f(y) - f(x) - \langle \nabla f(y), x - y \rangle| \le \frac{\beta}{2} \|x - y\|^2.$$
Next we define quasi-convex functions:
Definition 2.2 (Quasi-Convexity). We say that a function $f: \mathbb{R}^d \mapsto \mathbb{R}$ is quasi-convex if $\forall x, y \in \mathbb{R}^d$ such that $f(y) \le f(x)$, it follows that
$$\langle \nabla f(x), y - x \rangle \le 0.$$
We further say that $f$ is strictly-quasi-convex, if it is quasi-convex and its gradients vanish only at
the global minima, i.e., $\forall y: f(y) > \min_{x \in \mathbb{R}^d} f(x) \Rightarrow \|\nabla f(y)\| > 0$.
Informally, the above characterization states that the (opposite) gradient of a quasi-convex function
directs us in a global descent direction. Following is an equivalent (more common) definition:
Definition 2.3 (Quasi-Convexity). We say that a function $f: \mathbb{R}^d \mapsto \mathbb{R}$ is quasi-convex if any
$\alpha$-sublevel-set of $f$ is convex, i.e., $\forall \alpha \in \mathbb{R}$ the set
$$\mathcal{L}_\alpha(f) = \{x : f(x) \le \alpha\}$$
is convex.
The equivalence between the above definitions can be found in [4]. During this paper we denote the
sublevel-set of $f$ at $x$ by
$$\mathcal{S}_f(x) = \{y : f(y) \le f(x)\}. \qquad (1)$$

3 Local-Quasi-Convexity
Quasi-convexity does not fully capture the notion of unimodality in several dimensions. As an example let $x = (x_1, x_2) \in [-10, 10]^2$, and consider the function
$$g(x) = (1 + e^{-x_1})^{-1} + (1 + e^{-x_2})^{-1}. \qquad (2)$$
It is natural to consider $g$ as unimodal since it acquires no local minima but for the unique
global minimum at $x^* = (-10, -10)$. However, $g$ is not quasi-convex: consider the points
$x = (\log 16, -\log 4)$, $y = (-\log 4, \log 16)$, which belong to the 1.2-sub-level set; their average
does not belong to the same sub-level set, since $g(x/2 + y/2) = 4/3$.
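This counterexample is easy to verify numerically; the following check (our own code, purely to confirm the arithmetic above) evaluates $g$ at the two points and at their midpoint:

```python
import numpy as np

def g(x):
    # g from eq. (2): sum of sigmoids over the coordinates
    return float(np.sum(1.0 / (1.0 + np.exp(-np.asarray(x)))))

x = np.array([np.log(16), -np.log(4)])
y = np.array([-np.log(4), np.log(16)])
print(g(x), g(y))        # both 16/17 + 1/5 ~ 1.141, inside the 1.2-sublevel set
print(g((x + y) / 2))    # midpoint value is 4/3 ~ 1.333 > 1.2: set not convex
```

Since the midpoint of two points in the 1.2-sublevel set leaves that set, Definition 2.3 fails, even though $g$ is clearly unimodal on the box.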
Quasi-convex functions always enable us to explore, meaning that the gradient always directs us
in a global descent direction. Intuitively, from an optimization point of view, we only need such a
direction whenever we do not exploit, i.e., whenever we are not approximately optimal.
In what follows we define local-quasi-convexity, a property that enables us to either explore/exploit.
This property captures a wider class of unimodal functions (such as $g$ above) rather than mere quasi-convexity. Later we justify this definition by showing that it captures Generalized Linear Models
(GLM) regression, see [14, 7].
Definition 3.1 (Local-Quasi-Convexity). Let $x, z \in \mathbb{R}^d$, $\kappa, \epsilon > 0$. We say that $f: \mathbb{R}^d \mapsto \mathbb{R}$ is
$(\epsilon, \kappa, z)$-Strictly-Locally-Quasi-Convex (SLQC) in $x$, if at least one of the following applies:
1. $f(x) - f(z) \le \epsilon$.
2. $\|\nabla f(x)\| > 0$, and for every $y \in \mathbb{B}(z, \epsilon/\kappa)$ it holds that $\langle \nabla f(x), y - x \rangle \le 0$.
Note that if $f$ is a $G$-Lipschitz and strictly-quasi-convex function, then $\forall x, z \in \mathbb{R}^d$, $\forall \epsilon > 0$, it
holds that $f$ is $(\epsilon, G, z)$-SLQC in $x$. Recalling the function $g$ that appears in Equation (2), it
can be shown that $\forall \epsilon \in (0, 1]$, $\forall x \in [-10, 10]^2$, this function is $(\epsilon, 1, x^*)$-SLQC in $x$, where
$x^* = (-10, -10)$.
3.1 Generalized Linear Models (GLM)
3.1.1 The Idealized GLM
In this setup we have a collection of $m$ samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^m \in \mathbb{B}_d \times [0, 1]$, and an activation
function $\phi: \mathbb{R} \mapsto \mathbb{R}$. We are guaranteed to have $\mathbf{w}^* \in \mathbb{R}^d$ such that $y_i = \phi\langle \mathbf{w}^*, \mathbf{x}_i \rangle$, $\forall i \in [m]$ (we
denote $\phi\langle \mathbf{w}, \mathbf{x} \rangle := \phi(\langle \mathbf{w}, \mathbf{x} \rangle)$). The performance of a predictor $\mathbf{w} \in \mathbb{R}^d$ is measured by the average
square error over all samples,
$$\widehat{\mathrm{err}}_m(\mathbf{w}) = \frac{1}{m} \sum_{i=1}^m \left( y_i - \phi\langle \mathbf{w}, \mathbf{x}_i \rangle \right)^2. \qquad (3)$$
In [7] it is shown that the Perceptron problem with $\gamma$-margin is a private case of GLM regression.
The sigmoid function $\phi(z) = (1 + e^{-z})^{-1}$ is a popular activation function in the field of deep
learning. The next lemma states that in the idealized GLM problem with sigmoid activation, the
error function is SLQC (but not quasi-convex). As we will see in Section 4, this implies that
Algorithm 1 finds an $\epsilon$-optimal minimum of $\widehat{\mathrm{err}}_m(\mathbf{w})$ within $\mathrm{poly}(1/\epsilon)$ iterations.
Lemma 3.1. Consider the idealized GLM problem with the sigmoid activation, and assume that
$\|\mathbf{w}^*\| \le W$. Then the error function appearing in Equation (3) is $(\epsilon, e^W, \mathbf{w}^*)$-SLQC in $\mathbf{w}$, $\forall \epsilon > 0$, $\forall \mathbf{w} \in \mathbb{B}_d(0, W)$ (but it is not generally quasi-convex).
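A minimal construction of this idealized sigmoid-GLM objective (our own code, for illustration; the sizes, the choice of $\mathbf{w}^*$, and the data distribution are assumptions) looks as follows:

```python
import numpy as np

rng = np.random.default_rng(4)
m, d, W = 500, 5, 2.0                      # illustrative sizes
X = rng.normal(size=(m, d))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # x_i in unit ball
w_star = rng.normal(size=d)
w_star *= W / np.linalg.norm(w_star)       # ||w*|| = W
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = sigmoid(X @ w_star)                    # idealized labels: y_i = phi(<w*, x_i>)

def err(w):
    """Empirical squared error, eq. (3)."""
    return float(np.mean((y - sigmoid(X @ w)) ** 2))

print(err(np.zeros(d)), err(w_star))       # err(w*) = 0 by construction
```

The objective is zero at $\mathbf{w}^*$ and, per Lemma 3.1, is SLQC everywhere in the ball of radius $W$, which is what the NGD analysis in Section 4 needs.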
3.1.2 The Noisy GLM
In the noisy GLM setup (see [14, 7]), we may draw i.i.d. samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^m \in \mathbb{B}_d \times [0, 1]$
from an unknown distribution $\mathcal{D}$. We assume that there exists a predictor $\mathbf{w}^* \in \mathbb{R}^d$ such that
$E_{(\mathbf{x}, y) \sim \mathcal{D}}[y | \mathbf{x}] = \phi\langle \mathbf{w}^*, \mathbf{x} \rangle$, where $\phi$ is an activation function. Given $\mathbf{w} \in \mathbb{R}^d$ we define its expected
error as follows:
$$\mathcal{E}(\mathbf{w}) = E_{(\mathbf{x}, y) \sim \mathcal{D}} \left( y - \phi\langle \mathbf{w}, \mathbf{x} \rangle \right)^2,$$
and it can be shown that $\mathbf{w}^*$ is a global minimum of $\mathcal{E}$. We are interested in schemes that obtain an
$\epsilon$-optimal minimum to $\mathcal{E}$ within $\mathrm{poly}(1/\epsilon)$ samples and optimization steps. Given $m$ samples from $\mathcal{D}$,
their empirical error $\widehat{\mathrm{err}}_m(\mathbf{w})$ is defined as in Equation (3). The following lemma states that in this
setup, letting $m = \Omega(1/\epsilon^2)$, then $\widehat{\mathrm{err}}_m$ is SLQC with high probability. This property will enable us
to apply Algorithm 2, to obtain an $\epsilon$-optimal minimum to $\mathcal{E}$, within $\mathrm{poly}(1/\epsilon)$ samples from $\mathcal{D}$, and
$\mathrm{poly}(1/\epsilon)$ optimization steps.
Lemma 3.2. Let $\delta, \epsilon \in (0, 1)$. Consider the noisy GLM problem with the sigmoid activation,
and assume that $\|\mathbf{w}^*\| \le W$. Given a fixed point $\mathbf{w} \in \mathbb{B}(0, W)$, then w.p. $\ge 1 - \delta$, after
$m \ge \frac{8 e^{2W} (W + 1)^2}{\epsilon^2} \log(1/\delta)$ samples, the empirical error function appearing in Equation (3) is
$(\epsilon, e^W, \mathbf{w}^*)$-SLQC in $\mathbf{w}$.
Note that if we had required the SLQC to hold $\forall \mathbf{w} \in \mathbb{B}(0, W)$, then we would need the number of
samples to depend on the dimension, $d$, which we would like to avoid. Instead, we require SLQC
to hold for a fixed $\mathbf{w}$. This satisfies the conditions of Algorithm 2, enabling us to find an $\epsilon$-optimal
solution with a sample complexity that is independent of the dimension.
4 NGD for Locally-Quasi-Convex Optimization

Here we present the NGD algorithm, and prove the convergence rate of this algorithm for SLQC
objectives. Our analysis is simple, enabling us to extend the convergence rate presented in [15]
beyond quasi-convex functions. We then show that quasi-convex and locally-Lipschitz objectives are
SLQC, implying that NGD converges even if the gradients are unbounded outside a small region
around the minima. For quasi-convex and locally-smooth objectives, we show that NGD attains a
faster convergence rate.

Algorithm 1 Normalized Gradient Descent (NGD)
Input: #Iterations $T$, $x_1 \in \mathbb{R}^d$, learning rate $\eta$
for $t = 1 \ldots T$ do
    Update: $x_{t+1} = x_t - \eta \hat{g}_t$, where $g_t = \nabla f(x_t)$, $\hat{g}_t = \frac{g_t}{\|g_t\|}$
end for
Return: $\bar{x}_T = \arg\min_{\{x_1, \ldots, x_T\}} f(x_t)$
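A direct Python transcription of Algorithm 1 (ours, for illustration; the demo objective is the function $g$ from eq. (2), and for brevity we ignore the box constraint $[-10, 10]^2$, so this is a sketch rather than the paper's exact setting):

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def ngd(f, grad_f, x1, eta, T):
    """Algorithm 1: step along the *direction* of the gradient only."""
    x = np.array(x1, dtype=float)
    best = x.copy()
    for _ in range(T):
        g = grad_f(x)
        nrm = np.linalg.norm(g)
        if nrm == 0:                 # gradient vanished: at a global minimum
            return x
        x = x - eta * g / nrm
        if f(x) < f(best):           # track argmin over the iterates
            best = x.copy()
    return best

# demo on g from eq. (2); coordinatewise gradient is sig'(x) = sig*(1 - sig)
f = lambda x: float(np.sum(sig(x)))
grad = lambda x: sig(x) * (1.0 - sig(x))
print(f(ngd(f, grad, x1=np.array([9.0, -3.0]), eta=0.1, T=2000)))
```

Starting on the plateau at $x_1 = 9$, where the raw gradient is tiny, the normalized step still makes unit-speed progress, which is exactly the robustness the next paragraph discusses.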
NGD is presented in Algorithm 1. NGD is similar to GD, except we normalize the gradients. It is
intuitively clear that to obtain robustness to plateaus (where the gradient can be arbitrarily small)
and to exploding gradients (where the gradient can be arbitrarily large), one must ignore the size
of the gradient. It is more surprising that the information in the direction of the gradient suffices to
guarantee convergence.
Following is the main theorem of this section:
Theorem 4.1. Fix $\epsilon > 0$, let $f: \mathbb{R}^d \mapsto \mathbb{R}$, and $x^* \in \arg\min_{x \in \mathbb{R}^d} f(x)$. Given that $f$ is $(\epsilon, \kappa, x^*)$-SLQC
in every $x \in \mathbb{R}^d$, then running the NGD algorithm with $T \ge \kappa^2 \|x_1 - x^*\|^2 / \epsilon^2$ and $\eta = \epsilon/\kappa$, we have that $f(\bar{x}_T) - f(x^*) \le \epsilon$.
Theorem 4.1 states that $(\epsilon, \kappa, x^*)$-SLQC functions admit a $\mathrm{poly}(1/\epsilon)$ convergence rate using NGD.
The intuition behind this lies in Definition 3.1, which asserts that at a point $x$ either the (opposite) gradient points out a global optimization direction, or we are already $\epsilon$-optimal. Note that the
requirement of $(\epsilon, \kappa, \cdot)$-SLQC in any $x$ is not restrictive; as we have seen in Section 3, there are
interesting examples of functions that admit this property $\forall \epsilon \in [0, 1]$, and for any $x$.
For simplicity we have presented NGD for unconstrained problems. Using projections we can easily extend the algorithm and its analysis to constrained optimization over convex sets. This
will enable us to achieve convergence of $O(1/\epsilon^2)$ for the objective presented in Equation (2), and the
idealized GLM problem presented in Section 3.1.1. We are now ready to prove Theorem 4.1:
Proof of Theorem 4.1. First note that if the gradient of $f$ vanishes at $x_t$, then by the SLQC assumption we must have that $f(x_t) - f(x^*) \le \epsilon$. Assume next that we perform $T$ iterations and the gradient
of $f$ at $x_t$ never vanishes in these iterations. Consider the update rule of NGD (Algorithm 1); then
by standard algebra we get
$$\|x_{t+1} - x^*\|^2 = \|x_t - x^*\|^2 - 2\eta \langle \hat{g}_t, x_t - x^* \rangle + \eta^2.$$
Assume that $\forall t \in [T]$ we have $f(x_t) - f(x^*) > \epsilon$. Take $y = x^* + (\epsilon/\kappa) \hat{g}_t$, and observe that
$\|y - x^*\| \le \epsilon/\kappa$. The $(\epsilon, \kappa, x^*)$-SLQC assumption implies that $\langle \hat{g}_t, y - x_t \rangle \le 0$, and therefore
$$\langle \hat{g}_t, x^* + (\epsilon/\kappa) \hat{g}_t - x_t \rangle \le 0 \;\Rightarrow\; \langle \hat{g}_t, x_t - x^* \rangle \ge \epsilon/\kappa.$$
Setting $\eta = \epsilon/\kappa$, the above implies
$$\|x_{t+1} - x^*\|^2 \le \|x_t - x^*\|^2 - 2\eta\epsilon/\kappa + \eta^2 = \|x_t - x^*\|^2 - \epsilon^2/\kappa^2.$$
Thus, after $T$ iterations for which $f(x_t) - f(x^*) > \epsilon$ we get
$$0 \le \|x_{T+1} - x^*\|^2 \le \|x_1 - x^*\|^2 - T\epsilon^2/\kappa^2.$$
Therefore, we must have $T \le \kappa^2 \|x_1 - x^*\|^2 / \epsilon^2$.
4.1 Locally-Lipschitz/Smooth Quasi-Convex Optimization
It can be shown that strict-quasi-convexity and $(G, \epsilon/G, x^*)$-local-Lipschitzness of $f$ imply that $f$
is $(\epsilon, G, x^*)$-SLQC $\forall x \in \mathbb{R}^d$, $\forall \epsilon \ge 0$, and $x^* \in \arg\min_{x \in \mathbb{R}^d} f(x)$. Therefore the following is a
direct corollary of Theorem 4.1:
Algorithm 2 Stochastic Normalized Gradient Descent (SNGD)
Input: #Iterations $T$, $x_1 \in \mathbb{R}^d$, learning rate $\eta$, minibatch size $b$
for $t = 1 \ldots T$ do
    Sample: $\{\psi_i\}_{i=1}^b \sim \mathcal{D}^b$, and define
    $$f_t(x) = \frac{1}{b} \sum_{i=1}^b \psi_i(x)$$
    Update: $x_{t+1} = x_t - \eta \hat{g}_t$, where $g_t = \nabla f_t(x_t)$, $\hat{g}_t = \frac{g_t}{\|g_t\|}$
end for
Return: $\bar{x}_T = \arg\min_{\{x_1, \ldots, x_T\}} f_t(x_t)$
Corollary 4.1. Fix $\epsilon > 0$, let $f: \mathbb{R}^d \mapsto \mathbb{R}$, and $x^* \in \arg\min_{x \in \mathbb{R}^d} f(x)$. Given that $f$ is
strictly quasi-convex and $(G, \epsilon/G, x^*)$-locally-Lipschitz, then running the NGD algorithm with
$T \ge G^2 \|x_1 - x^*\|^2 / \epsilon^2$ and $\eta = \epsilon/G$, we have that $f(\bar{x}_T) - f(x^*) \le \epsilon$.
In case $f$ is also locally-smooth, we state an even faster rate:
Theorem 4.2. Fix $\epsilon > 0$, let $f: \mathbb{R}^d \mapsto \mathbb{R}$, and $x^* \in \arg\min_{x \in \mathbb{R}^d} f(x)$. Given that $f$ is strictly
quasi-convex and $(\beta, \sqrt{2\epsilon/\beta}, x^*)$-locally-smooth, then running the NGD algorithm with $T \ge \beta \|x_1 - x^*\|^2 / 2\epsilon$ and $\eta = \sqrt{2\epsilon/\beta}$, we have that $f(\bar{x}_T) - f(x^*) \le \epsilon$.
Remark 1. The above corollary (resp. theorem) implies that $f$ could have arbitrarily large gradients
and second derivatives outside $\mathbb{B}(x^*, \epsilon/G)$ (resp. $\mathbb{B}(x^*, \sqrt{2\epsilon/\beta})$), yet NGD is still ensured to output
an $\epsilon$-optimal point within $G^2 \|x_1 - x^*\|^2 / \epsilon^2$ (resp. $\beta \|x_1 - x^*\|^2 / 2\epsilon$) iterations. We are not familiar
with a similar guarantee for GD even in the convex case.
5 SNGD for Stochastic SLQC Optimization

Here we describe the setting of stochastic SLQC optimization. Then we describe our SNGD algorithm, which is ensured to yield an $\epsilon$-optimal solution within $\mathrm{poly}(1/\epsilon)$ queries. We also show that
the (noisy) GLM problem described in Section 3.1.2 is an instance of stochastic SLQC optimization, allowing us to provably solve this problem within $\mathrm{poly}(1/\epsilon)$ samples and optimization steps
using SNGD.
The stochastic SLQC optimization setup: Consider the problem of minimizing a function $f: \mathbb{R}^d \mapsto \mathbb{R}$, and assume there exists a distribution over functions $\mathcal{D}$, such that:
$$f(x) := E_{\psi \sim \mathcal{D}}[\psi(x)].$$
We assume that we may access $f$ by randomly sampling minibatches of size $b$, and querying
the gradients of these minibatches. Thus, upon querying a point $x_t \in \mathbb{R}^d$, a random minibatch
$\{\psi_i\}_{i=1}^b \sim \mathcal{D}^b$ is sampled, and we receive $\nabla f_t(x_t)$, where $f_t(x) = \frac{1}{b} \sum_{i=1}^b \psi_i(x)$. We make the
following assumption regarding the minibatch averages:
Assumption 5.1. Let $T, \epsilon, \delta > 0$, $x^* \in \arg\min_{x \in \mathbb{R}^d} f(x)$. There exists $\kappa > 0$, and a function
$b_0: \mathbb{R}^3 \mapsto \mathbb{R}$, such that for $b \ge b_0(\epsilon, \delta, T)$, then w.p. $\ge 1 - \delta$ and $\forall t \in [T]$, the minibatch average $f_t(x) = \frac{1}{b} \sum_{i=1}^b \psi_i(x)$ is $(\epsilon, \kappa, x^*)$-SLQC in $x_t$. Moreover, we assume $|f_t(x)| \le M$, $\forall t \in [T]$, $x \in \mathbb{R}^d$.
Note that we assume that $b_0 = \mathrm{poly}(1/\epsilon, \log(T/\delta))$.
Justification of Assumption 5.1: Noisy GLM regression (see Section 3.1.2) is an interesting
instance of a stochastic optimization problem where Assumption 5.1 holds. Indeed, according to
Lemma 3.2, given $\epsilon, \delta, T > 0$, then for $b \ge \Omega(\log(T/\delta)/\epsilon^2)$ samples, the average minibatch function is $(\epsilon, \kappa, x^*)$-SLQC in $x_t$, $\forall t \in [T]$, w.p. $\ge 1 - \delta$.
Local-quasi-convexity of minibatch averages is a plausible assumption when we optimize an expected sum of quasi-convex functions that share common global minima (or when the different
global minima are close by). As seen from the examples presented in Equation (2), and in Sections 3.1.1 and 3.1.2, this sum is generally not quasi-convex, but is more often locally-quasi-convex.
Note that in the general case when the objective is a sum of quasi-convex functions, the number of
local minima of such an objective may grow exponentially with the dimension $d$, see [1]. This might
imply that a general setup where each $\psi \sim \mathcal{D}$ is quasi-convex may be generally hard.
5.1 Main Results
SNGD is presented in Algorithm 2. SNGD is similar to SGD, except we normalize the gradients.
The normalization is crucial in order to take advantage of the SLQC assumption, and in order to
overcome the hurdles of plateaus and cliffs. Following is our main theorem:
Theorem 5.1. Fix $\delta, \epsilon, G, M, \kappa > 0$. Suppose we run SNGD with $T \ge \kappa^2 \|x_1 - x^*\|^2 / \epsilon^2$ iterations,
$\eta = \epsilon/\kappa$, and $b \ge \max\left\{ \frac{M^2 \log(4T/\delta)}{2\epsilon^2},\, b_0(\epsilon, \delta, T) \right\}$. Assume that for $b \ge b_0(\epsilon, \delta, T)$, then w.p. $\ge 1 - \delta$
and $\forall t \in [T]$, the function $f_t$ defined in the algorithm is $M$-bounded, and is also $(\epsilon, \kappa, x^*)$-SLQC in
$x_t$. Then, with probability of at least $1 - 2\delta$, we have that $f(\bar{x}_T) - f(x^*) \le 3\epsilon$.
We prove Theorem 5.1 at the end of this section.
Remark 2. Since strict-quasi-convexity and $(G, \epsilon/G, x^*)$-local-Lipschitzness are equivalent to
SLQC, the theorem implies that $f$ could have arbitrarily large gradients outside $\mathbb{B}(x^*, \epsilon/G)$, yet
SNGD is still ensured to output an $\epsilon$-optimal point within $G^2 \|x_1 - x^*\|^2 / \epsilon^2$ iterations. We are not
familiar with a similar guarantee for SGD even in the convex case.
Remark 3. Theorem 5.1 requires the minibatch size to be $\Omega(1/\epsilon^2)$. In the context of learning,
the number of functions, $n$, corresponds to the number of training examples. By standard sample
complexity bounds, $n$ should also be of order $1/\epsilon^2$. Therefore, one may wonder if the size of the
minibatch should be of order $n$. This is not true, since the required training set size is $1/\epsilon^2$ times
the VC dimension of the hypothesis class. In many practical cases, the VC dimension is more
significant than $1/\epsilon^2$, and therefore $n$ will be much larger than the required minibatch size. The
reason our analysis requires a minibatch of size $1/\epsilon^2$, without the VC dimension factor, is because
we are just "validating" and not "learning".
In SGD, for the case of convex functions, even a minibatch of size 1 suffices for guaranteed
convergence. In contrast, for SNGD we require a minibatch of size $1/\epsilon^2$. The theorem below shows
that the requirement for a large minibatch is not an artifact of our analysis but is truly required.
Theorem 5.2. Let $\epsilon \in (0, 0.1]$. There exists a distribution over convex functions such that, running
SNGD with a minibatch size of $b = 0.2/\epsilon$, with high probability it never reaches an $\epsilon$-optimal solution.
The gap between the upper bound of $1/\epsilon^2$ and the lower bound of $1/\epsilon$ remains an open question.
We now provide a sketch of the proof of Theorem 5.1:
Proof of Theorem 5.1. Theorem 5.1 is a consequence of the following two lemmas. In the first we
show that whenever all the $f_t$'s are SLQC, there exists some $t$ such that $f_t(x_t) - f_t(x^*) \le \epsilon$. In the
second lemma, we show that for a large enough minibatch size $b$, then for any $t \in [T]$ we have
$f(x_t) \le f_t(x_t) + \epsilon$, and $f(x^*) \ge f_t(x^*) - \epsilon$. Combining these two lemmas we conclude that
$f(\bar{x}_T) - f(x^*) \le 3\epsilon$.
Lemma 5.1. Let $\epsilon, \delta > 0$. Suppose we run SNGD for $T \ge \kappa^2 \|x_1 - x^*\|^2 / \epsilon^2$ iterations, $b \ge b_0(\epsilon, \delta, T)$, and $\eta = \epsilon/\kappa$. Assume that w.p. $\ge 1 - \delta$ all the $f_t$'s are $(\epsilon, \kappa, x^*)$-SLQC in $x_t$ whenever
$b \ge b_0(\epsilon, \delta, T)$. Then w.p. $\ge 1 - \delta$ we must have some $t \in [T]$ for which $f_t(x_t) - f_t(x^*) \le \epsilon$.
Lemma 5.1 is proved similarly to Theorem 4.1. We omit the proof due to space constraints.
The second lemma relates $f_t(x_t) - f_t(x^*) \le \epsilon$ to a bound on $f(x_t) - f(x^*)$.
Lemma 5.2. Suppose $b \ge \frac{M^2 \log(4T/\delta)}{2\epsilon^2}$. Then w.p. $\ge 1 - \delta$, for every $t \in [T]$:
$$f(x_t) \le f_t(x_t) + \epsilon, \quad \text{and also} \quad f(x^*) \ge f_t(x^*) - \epsilon.$$
[Figure 2: Comparison between optimization schemes (MSGD, Nesterov, and SNGD). Left: test error vs. iteration. Middle: objective value (on the training set) vs. iteration. Right: the objective of SNGD for different minibatch sizes ($b = 1, 10, 100, 500$).]
Lemma 5.2 is a direct consequence of Hoeffding's bound. Using the definition of $\bar{x}_T$ (Alg. 2),
together with Lemma 5.2, gives
$$f(\bar{x}_T) - f(x^*) \le f_t(x_t) - f_t(x^*) + 2\epsilon, \quad \forall t \in [T].$$
Combining the latter with Lemma 5.1 establishes Theorem 5.1.
6 Experiments
A better understanding of how to train deep neural networks is one of the greatest challenges in
current machine learning and optimization. Since learning NN (Neural Network) architectures essentially requires solving a hard non-convex program, we have decided to focus our empirical study
on this type of task. As a test case, we train a Neural Network with a single hidden layer of 100
units over the MNIST data set. We use a ReLU activation function, and minimize the square loss.
We employ a regularization over weights with a parameter of $\lambda = 5 \times 10^{-4}$.
At first we were interested in comparing the performance of SNGD to MSGD (Minibatch Stochastic
Gradient Descent), and to a stochastic variant of Nesterov's accelerated gradient method [19], which
is considered to be state-of-the-art. For MSGD and Nesterov's method we used a step size rule of
the form $\eta_t = \eta_0 (1 + \gamma t)^{-3/4}$, with $\eta_0 = 0.01$ and $\gamma = 10^{-4}$. For SNGD we used the constant
step size of 0.1. In Nesterov's method we used a momentum of 0.95. The comparison appears in
Figures 2(a), 2(b). As expected, MSGD converges relatively slowly. Conversely, the performance of
SNGD is comparable with Nesterov's method. All methods employed a minibatch size of 100.
Later, we were interested in examining the effect of minibatch size on the performance of SNGD. We
employed SNGD with different minibatch sizes. As seen in Figure 2(c), the performance improves
significantly with the increase of minibatch size.
7 Discussion
We have presented the first provable gradient-based algorithm for stochastic quasi-convex optimization. This is a first attempt at generalizing the well-developed machinery of stochastic convex optimization to the challenging non-convex problems facing machine learning, and better characterizing
the border between NP-hard non-convex optimization and tractable cases such as the ones studied
herein.
Amongst the numerous challenging questions that remain, we note that there is a gap between the
upper and lower bound of the minibatch size sufficient for SNGD to provably converge.
Acknowledgments
The research leading to these results has received funding from the European Union's Seventh
Framework Programme (FP7/2007-2013) under grant agreement no. 336078, ERC-SUBLRN. Shai
S-Shwartz is supported by ISF no. 1673/14 and by Intel's ICRI-CI.
References
[1] Peter Auer, Mark Herbster, and Manfred K Warmuth. Exponentially many local minima for single neurons. Advances in Neural Information Processing Systems, pages 316–322, 1996.
[2] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[3] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
[4] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[5] Kenji Doya. Bifurcations of recurrent neural networks in gradient descent learning. IEEE Transactions on Neural Networks, 1:75–80, 1993.
[6] Jean-Louis Goffin, Zhi-Quan Luo, and Yinyu Ye. Complexity analysis of an interior cutting plane method for convex feasibility problems. SIAM Journal on Optimization, 6(3):638–652, 1996.
[7] Adam Tauman Kalai and Ravi Sastry. The isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
[8] Qifa Ke and Takeo Kanade. Quasiconvex optimization for robust geometric reconstruction. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(10):1834–1847, 2007.
[9] Rustem F Khabibullin. A method to find a point of a convex set. Issled. Prik. Mat., 4:15–22, 1977.
[10] Krzysztof C Kiwiel. Convergence and efficiency of subgradient methods for quasiconvex minimization. Mathematical Programming, 90(1):1–25, 2001.
[11] Igor V Konnov. On convergence properties of a subgradient method. Optimization Methods and Software, 18(1):53–62, 2003.
[12] Jean-Jacques Laffont and David Martimort. The theory of incentives: the principal-agent model. Princeton University Press, 2009.
[13] James Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1033–1040, 2011.
[14] P. McCullagh and J.A. Nelder. Generalised linear models. London: Chapman and Hall/CRC, 1989.
[15] Yu E Nesterov. Minimization methods for nonsmooth convex and quasiconvex functions. Matekon, 29:519–531, 1984.
[16] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pages 1310–1318, 2013.
[17] Boris T Polyak. A general method of solving extremum problems. Dokl. Akademii Nauk SSSR, 174(1):33, 1967.
[18] Jarosław Sikorski. Quasi subgradient algorithms for calculating surrogate constraints. In Analysis and Algorithms of Optimization Problems, pages 203–236. Springer, 1986.
[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1139–1147, 2013.
[20] Hal R Varian. Price discrimination and social welfare. The American Economic Review, pages 870–875, 1985.
[21] Elmar Wolfstetter. Topics in microeconomics: Industrial organization, auctions, and incentives. Cambridge University Press, 1999.
[22] Yaroslav Ivanovich Zabotin, A.I. Korablev, and Rustem F Khabibullin. The minimization of quasicomplex functionals. Izv. Vyssh. Uch. Zaved. Mat., (10):27–33, 1972.
5,212 | 5,719 | On the Limitation of Spectral Methods:
From the Gaussian Hidden Clique Problem to
Rank-One Perturbations of Gaussian Tensors
Andrea Montanari
Department of Electrical Engineering and Department of Statistics. Stanford University.
montanari@stanford.edu
Daniel Reichman
Department of Cognitive and Brain Sciences, University of California, Berkeley, CA
daniel.reichman@gmail.com
Ofer Zeitouni
Faculty of Mathematics, Weizmann Institute, Rehovot 76100, Israel
and Courant Institute, New York University
ofer.zeitouni@weizmann.ac.il
Abstract
We consider the following detection problem: given a realization of a symmetric
matrix X of dimension n, distinguish between the hypothesis that all upper triangular variables are i.i.d. Gaussians with mean 0 and variance 1 and the hypothesis that there is a planted principal submatrix B of dimension L for which all upper triangular variables are i.i.d. Gaussians with mean 1 and variance 1, whereas all other upper triangular elements of X not in B are i.i.d. Gaussians with mean 0 and variance 1. We refer to this as the "Gaussian hidden clique problem". When L = (1 + ε)√n (ε > 0), it is possible to solve this detection problem with probability 1 − o_n(1) by computing the spectrum of X and considering the largest eigenvalue of X. We prove that when L ≤ (1 − ε)√n no algorithm that examines only the eigenvalues of X can detect the existence of a hidden Gaussian clique, with error probability vanishing as n → ∞. The result above is an immediate consequence of a more general result on rank-one perturbations of k-dimensional Gaussian tensors. In this context we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.
1 Introduction
Consider the following detection problem. One is given a symmetric matrix X = X(n) of dimension n, such that the (n² + n)/2 entries (X_{i,j})_{i≤j} are mutually independent random variables. Given (a realization of) X one would like to distinguish the hypothesis that all random variables X_{i,j} have the same distribution F0 from the hypothesis that there is a set U ⊆ [n], with L := |U|, so that all random variables in the submatrix X_U := (X_{s,t} : s, t ∈ U) have a distribution F1 that is different from the distribution of all other elements in X, which are still distributed as F0. We refer to X_U as the hidden submatrix.
The same problem was recently studied in [1, 8] and, for the asymmetric case (where no symmetry
assumption is imposed on the independent entries of X), in [6, 18, 20]. Detection problems with
similar flavor (such as the hidden clique problem) have been studied over the years in several fields
including computer science, physics and statistics. We refer to Section 5 for further discussion
of the related literature. An intriguing outcome of these works is that, while the two hypotheses are statistically distinguishable as soon as L ≥ C log n (for C a sufficiently large constant) [7], practical algorithms require significantly larger L. In this paper we study the class of spectral (or eigenvalue-based) tests detecting the hidden submatrix. Our proof technique naturally allows us to consider two further generalizations of this problem that are of independent interest. We briefly summarize our results below.
The Gaussian hidden clique problem. This is a special case of the above hypothesis testing setting,
whereby F0 = N(0, 1) and F1 = N(1, 1) (entries on the diagonal are defined slightly differently in
order to simplify calculations). Here and below N(m, σ²) denotes the Gaussian distribution of mean m and variance σ². Equivalently, let Z be a random matrix from the Gaussian Orthogonal Ensemble (GOE), i.e. Z_{ij} ~ N(0, 1/n) independently for i < j, and Z_{ii} ~ N(0, 2/n). Then, under hypothesis H1,L we have X = n^{−1/2} 1_U 1_U^T + Z (1_U being the indicator vector of U), and under hypothesis H0, X = Z (the factor n in the normalization is for technical convenience). The Gaussian hidden clique problem can be thought of as the following clustering problem: there are n elements and the entry (i, j) measures the similarity between elements i and j. The hidden submatrix corresponds to a cluster of similar elements, and our goal is to determine, given the matrix, whether there is a large cluster of similar elements or, alternatively, whether all similarities are essentially random (Gaussian) noise.
Our focus in this work is on the following restricted hypothesis testing question. Let λ1 ≥ λ2 ≥ ··· ≥ λn be the ordered eigenvalues of X. Is there a test that depends only on λ1, …, λn and that distinguishes H0 from H1,L "reliably", i.e. with error probability converging to 0 as n → ∞? Notice that the eigenvalue distribution does not depend on U as long as this is independent from the noise Z. We can therefore think of U as fixed for this question. Historically, the first polynomial-time algorithm for detecting a planted clique of size O(√n) in a random graph [2] relied on spectral methods (see Section 5 for more details). This is one reason for our interest in spectral tests for the Gaussian hidden clique problem.
If L ≥ (1 + ε)√n then [11] implies that a simple test checking whether λ1 ≥ 2 + δ for some δ = δ(ε) > 0 is reliable for the Gaussian hidden clique problem. We prove that this result is tight, in the sense that no spectral test is reliable for L ≤ (1 − ε)√n.
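As a concrete illustration of this threshold, the following simulation sketch (ours; the dimension, seed, and clique sizes are arbitrary choices, not taken from the paper) samples X under H0 and H1,L and applies the largest-eigenvalue test:

    # Hypothetical simulation (ours) of the largest-eigenvalue test.
    import numpy as np

    def sample_X(n, L, rng):
        """X = n^{-1/2} 1_U 1_U^T + Z under H_{1,L}; set L = 0 for H_0."""
        G = rng.standard_normal((n, n))
        Z = (G + G.T) / np.sqrt(2 * n)   # GOE: Var(Z_ij) = 1/n, Var(Z_ii) = 2/n
        X = Z
        if L > 0:
            U = rng.choice(n, size=L, replace=False)
            X[np.ix_(U, U)] += 1.0 / np.sqrt(n)   # planted principal submatrix
        return X

    rng = np.random.default_rng(0)
    n = 1000
    for L in (0, int(0.5 * np.sqrt(n)), int(2 * np.sqrt(n))):
        lam1 = np.linalg.eigvalsh(sample_X(n, L, rng))[-1]
        print(f"L = {L:3d}: largest eigenvalue = {lam1:.3f}")  # null bulk edge is 2

For L below √n the top eigenvalue sits at the bulk edge 2, while for L above √n it separates, consistent with the BBP-type transition cited from [11].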
Rank-one matrices in Gaussian noise. Our proof technique builds on a simple observation. Since the noise Z is invariant under orthogonal transformations¹, the above question is equivalent to the following testing problem. For β ∈ R≥0, and v ∈ R^n, ‖v‖₂ = 1 a uniformly random unit vector, test H0: X = Z versus H1: X = β vv^T + Z. (The correspondence between the two problems yields β = L/√n.)
Again, this problem (and a closely related asymmetric version [22]) has been studied in the literature, and it follows from [11] that a reliable test exists for β ≥ 1 + ε. We provide a simple proof (based on the second moment method) that no test is reliable for β ≤ 1 − ε.
Rank-one tensors in Gaussian noise. It turns out that the same proof applies to an even more general problem: detecting a rank-one signal in a noisy tensor. We carry out our analysis in this more general setting for two reasons. First, we think that this clarifies what aspects of the model are important for our proof technique to apply. Second, the problem of estimating tensors from noisy data has attracted significant interest recently within the machine learning community [15, 21].
More precisely, we consider a noisy tensor X ∈ ⊗^k R^n, of the form X = β v^{⊗k} + Z, where Z is Gaussian noise, and v is a random unit vector. We consider the problem of testing this hypothesis against H0: X = Z. We establish a threshold β_k^{2nd} such that no test can be reliable for β < β_k^{2nd} (in particular β_2^{2nd} = 1). Two differences are worth remarking for k ≥ 3 with respect to the more familiar matrix case k = 2. First, we do not expect the second moment bound β_k^{2nd} to be tight, i.e. a reliable test to exist for all β > β_k^{2nd}. On the other hand, we can show that it is tight up to
¹ By this we mean that, for any orthogonal matrix R ∈ O(n), independent of Z, RZR^T is distributed as Z.
a universal (k and n independent) constant. Second, below β_k^{2nd} the problem is more difficult than the matrix version below β_2^{2nd} = 1: not only does no reliable test exist but, asymptotically, any test behaves as random guessing. For more details on our results regarding noisy tensors, see Theorem 3.
2 Main result for spectral detection
Let Z be a GOE matrix as defined in the previous section. Equivalently, if G is an (asymmetric) matrix with i.i.d. entries G_{i,j} ~ N(0, 1),
    Z = (1/√(2n)) (G + G^T).   (1)
For a deterministic sequence of vectors v(n), ‖v(n)‖₂ = 1, we consider the two hypotheses
    H0:    X = Z,
    H1,β:  X = β vv^T + Z.   (2)
A special example is provided by the Gaussian hidden clique problem, in which case β = L/√n and v = 1_U/√L for some set U ⊆ [n], |U| = L:
    H0:    X = Z,
    H1,L:  X = (1/√n) 1_U 1_U^T + Z.   (3)
Observe that the distribution of eigenvalues of X, under either alternative, is invariant to the choice of the vector v (or subset U), as long as the norm of v is kept fixed. Therefore, any successful algorithm that examines only the eigenvalues will distinguish between H0 and H1,β but not give any information on the vector v (or subset U, in the case of H1,L).
We let Q0 = Q0(n) (respectively, Q1 = Q1(n)) denote the distribution of the eigenvalues of X under H0 (respectively H1 = H1,β or H1,L).
A spectral statistical test for distinguishing between H0 and H1 (or simply a spectral test) is a measurable map T_n: (λ1, …, λn) ↦ {0, 1}. To formulate precisely what we mean by the word "distinguish", we introduce the following notion.
Definition 1. For each n ∈ N, let P0,n, P1,n be two probability measures on the same measure space (Ω_n, F_n). We say that the sequence (P1,n) is contiguous with respect to (P0,n) if, for any sequence of events A_n ∈ F_n,
    lim_{n→∞} P0,n(A_n) = 0   ⟹   lim_{n→∞} P1,n(A_n) = 0.   (4)
Note that contiguity is not in general a symmetric relation.
In the context of the spectral statistical tests described above, the sequences A_n in Definition 1 (with P_n = Q0(n) and Q_n = Q1(n)) can be put in correspondence with spectral statistical tests T_n by taking A_n = {(λ1, …, λn) : T_n(λ1, …, λn) = 0}. We will thus say that H1 is spectrally contiguous with respect to H0 if Q_n is contiguous with respect to P_n.
Our main result on the Gaussian hidden clique problem is the following.
Theorem 1. For any sequence L = L(n) satisfying lim sup_{n→∞} L(n)/√n < 1, the hypotheses H1,L are spectrally contiguous with respect to H0.
2.1 Contiguity and integrability
Contiguity is related to a notion of uniform absolute continuity of measures. Recall that a probability measure μ on a measure space is absolutely continuous with respect to another probability measure ν if for every measurable set A, ν(A) = 0 implies that μ(A) = 0, in which case there exists a ν-integrable, non-negative function f ≡ dμ/dν (the Radon–Nikodym derivative of μ with respect to ν), so that μ(A) = ∫_A f dν for every measurable set A. We then have the following known useful fact:
Lemma 2. Within the setting of Definition 1, assume that P1,n is absolutely continuous with respect to P0,n, and denote by L_n ≡ dP1,n/dP0,n its Radon–Nikodym derivative.
(a) If lim sup_{n→∞} E0,n(L_n²) < ∞, then (P1,n) is contiguous with respect to (P0,n).
(b) If lim_{n→∞} E0,n(L_n²) = 1, then lim_{n→∞} ‖P0,n − P1,n‖_TV = 0, where ‖·‖_TV denotes the total variation distance, i.e.
    ‖P0,n − P1,n‖_TV ≡ sup_A |P0,n(A) − P1,n(A)|.
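For completeness, the standard one-line argument behind part (a), reconstructed here (it is left implicit in the text), is a Cauchy–Schwarz bound:

    % Cauchy--Schwarz step behind Lemma 2(a):
    P_{1,n}(A_n) = \mathbb{E}_{0,n}\big[L_n \mathbf{1}_{A_n}\big]
                 \le \big(\mathbb{E}_{0,n}[L_n^2]\big)^{1/2}\, P_{0,n}(A_n)^{1/2}

so bounded second moments turn P0,n(A_n) → 0 into P1,n(A_n) → 0, which is exactly contiguity.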
2.2 Method and structure of the paper
Consider problem (2). We use the fact that the law of the eigenvalues under both H0 and H1,β is invariant under conjugation by an orthogonal matrix. Once we conjugate matrices sampled under the hypothesis H1,β by an independent orthogonal matrix sampled according to the Haar distribution, we get a matrix distributed as
    X = β vv^T + Z,   (5)
where v is uniform on the n-dimensional sphere, and Z is a GOE matrix (with off-diagonal entries of variance 1/n). Letting P1,n denote the law of β vv^T + Z and P0,n denote the law of Z, we show that P1,n is contiguous with respect to P0,n, which implies that the law of eigenvalues Q1(n) is contiguous with respect to Q0(n).
To show the contiguity, we consider a more general setup, of independent interest, of Gaussian tensors of order k, and in that setup show that the Radon–Nikodym derivative L_n = dP1,n/dP0,n is uniformly square integrable under P0,n; an application of Lemma 2 then quickly yields Theorem 1.
The structure of the paper is as follows. In the next section, we define formally the detection problem for a symmetric tensor of order k ≥ 2. We show the existence of a threshold under which detection is not possible (Theorem 3), and show how Theorem 1 follows from this. Section 4 is devoted to the proof of Theorem 3, and concludes with some additional remarks and consequences of Theorem 3. Finally, Section 5 is devoted to a description of the relation between the Gaussian hidden clique problem and the hidden clique problem in computer science, and related literature.
3 A symmetric tensor model and a reduction
Exploiting rotational invariance, we will reduce the spectral detection problem to a standard detection problem between random matrices. Since the latter generalizes to a tensor setup, we first introduce a general Gaussian hypothesis testing problem for k-tensors, which is of independent interest. We then explain how the spectral detection problem reduces to the special case of k = 2.
3.1 Preliminaries and notation
We use lower-case boldface for vectors (e.g. u, v) and upper-case boldface for matrices and tensors (e.g. X, Z). The ordinary scalar product and ℓ_p norm over vectors are denoted by ⟨u, v⟩ = Σ_{i=1}^n u_i v_i and ‖v‖_p. We write S^{n−1} for the unit sphere in n dimensions:
    S^{n−1} ≡ { x ∈ R^n : ‖x‖₂ = 1 }.   (6)
Given X ∈ ⊗^k R^n a real k-th order tensor, we let {X_{i1,…,ik}} denote its coordinates. The outer product of two tensors is X ⊗ Y, and, for v ∈ R^n, we define v^{⊗k} = v ⊗ ··· ⊗ v ∈ ⊗^k R^n as the k-th outer power of v. We define the inner product of two tensors X, Y ∈ ⊗^k R^n as
    ⟨X, Y⟩ = Σ_{i1,…,ik ∈ [n]} X_{i1,…,ik} Y_{i1,…,ik}.   (7)
We define the Frobenius (Euclidean) norm of a tensor X by ‖X‖_F = √⟨X, X⟩, and its operator norm by
    ‖X‖_op ≡ max{ ⟨X, u1 ⊗ ··· ⊗ uk⟩ : ∀i ∈ [k], ‖ui‖₂ ≤ 1 }.   (8)
It is easy to check that this is indeed a norm. For the special case k = 2, it reduces to the ordinary ℓ₂ matrix operator norm (equivalently, to the largest singular value of X).
For a permutation π ∈ S_k, we will denote by X^π the tensor with permuted indices X^π_{i1,…,ik} = X_{π(i1),…,π(ik)}. We call the tensor X symmetric if, for any permutation π ∈ S_k, X^π = X. It is proved [23] that, for symmetric tensors, we have the equivalent representation
    ‖X‖_op ≡ max{ |⟨X, u^{⊗k}⟩| : ‖u‖₂ ≤ 1 }.   (9)
We define R̄ ≡ R ∪ {∞} with the usual conventions of arithmetic operations.
3.2 The symmetric tensor model and main result
We denote by G ∈ ⊗^k R^n a tensor with independent and identically distributed entries G_{i1,…,ik} ~ N(0, 1) (note that this tensor is not symmetric). We define the symmetric standard normal noise tensor Z ∈ ⊗^k R^n by
    Z = (1/k!) √(2/n) Σ_{π ∈ S_k} G^π.   (10)
Note that the subset of entries with unequal indices forms an i.i.d. collection {Z_{i1,i2,…,ik}}_{i1<···<ik} ~ N(0, 2/(n·k!)).
With this normalization, we have, for any symmetric tensor A ∈ ⊗^k R^n,
    E{ e^{⟨A,Z⟩} } = exp{ (1/n) ‖A‖_F² }.   (11)
We will also use the fact that Z is invariant in distribution under conjugation by orthogonal transformations; that is, for any orthogonal matrix U ∈ O(n), {Z_{i1,…,ik}} has the same distribution as { Σ_{j1,…,jk} (Π_{ℓ=1}^k U_{iℓ,jℓ}) Z_{j1,…,jk} }.
Given a parameter β ∈ R≥0, we consider the following model for a random symmetric tensor X:
    X ≡ β v^{⊗k} + Z,   (12)
with Z a standard normal tensor, and v uniformly distributed over the unit sphere S^{n−1}. In the case k = 2 this is the standard rank-one deformation of a GOE matrix.
We let P_β = P_β^{(k)} denote the law of X under model (12).
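A minimal, unoptimized sketch (ours; β, n, k and the seed are arbitrary) of sampling from model (12), with the noise symmetrized as in (10):

    # Hypothetical sampler (ours) for the spiked symmetric tensor model (12).
    import itertools, math
    import numpy as np

    def spiked_tensor(n, k, beta, rng):
        G = rng.standard_normal((n,) * k)
        Z = sum(np.transpose(G, axes=p) for p in itertools.permutations(range(k)))
        Z = Z * np.sqrt(2.0 / n) / math.factorial(k)   # symmetrization as in (10)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)                         # uniform on S^{n-1}
        vk = v
        for _ in range(k - 1):
            vk = np.multiply.outer(vk, v)              # builds v^{(tensor) k}
        return beta * vk + Z

    X = spiked_tensor(n=30, k=3, beta=1.5, rng=np.random.default_rng(1))

For k = 2 the symmetrization reduces exactly to Z = (G + G^T)/√(2n), matching (1).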
Theorem 3. For k ≥ 2, let
    β_k^{2nd} ≡ inf_{q ∈ (0,1)} √( −(1/q^k) log(1 − q²) ).   (13)
Assume β < β_k^{2nd}. Then, for any k ≥ 3, we have
    lim_{n→∞} ‖P_β − P_0‖_TV = 0.   (14)
Further, for k = 2 and β < β_2^{2nd} = 1, P_β is contiguous with respect to P_0.
A few remarks are in order, following Theorem 3.
First, it is not difficult to derive the asymptotics β_k^{2nd} = √(log(k/2)) + o_k(1) for large k.
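The threshold (13) is also easy to evaluate numerically; the following grid search (ours; grid resolution arbitrary) recovers β_2^{2nd} = 1 and the values for larger k:

    # Grid evaluation (ours) of the second-moment threshold in (13).
    import numpy as np

    def beta_2nd(k):
        q = np.linspace(1e-4, 1 - 1e-6, 200_000)
        return float(np.min(np.sqrt(-np.log(1.0 - q**2) / q**k)))

    for k in (2, 3, 4, 10):
        print(k, round(beta_2nd(k), 4))   # k = 2 gives 1.0, as stated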
Second, for k = 2 we get, using log(1 − q²) ≤ −q², that β_2^{2nd} = 1. Recall that for k = 2 and β > 1, it is known that the largest eigenvalue of X, λ1(X), converges almost surely to (β + 1/β) [11]. As a consequence ‖P_0 − P_β‖_TV → 1 for all β > 1: the second moment bound is tight.
For k ≥ 3, it follows by the triangle inequality that ‖X‖_op ≥ β − ‖Z‖_op, and further lim sup_{n→∞} ‖Z‖_op ≤ τ_k almost surely as n → ∞ [19, 5] for some bounded τ_k. It follows that ‖P_0 − P_β‖_TV → 1 for all β > 2τ_k [21]. Hence, the second moment bound is off by a k-dependent factor. For large k, 2τ_k = √(2 log k) + O_k(1), and hence the factor is indeed bounded in k.
Behavior below the threshold. Let us stress an important qualitative difference between k = 2 and k ≥ 3, for β < β_k^{2nd}. For k ≥ 3, the two models are indistinguishable and any test is essentially as good as random guessing. Formally, for any measurable function T : ⊗^k R^n → {0, 1}, we have
    lim_{n→∞} [ P_0(T(X) = 1) + P_β(T(X) = 0) ] = 1.   (15)
For k = 2, our result implies that, for β < 1, ‖P_0 − P_β‖_TV is bounded away from 1. On the other hand, it is easy to see that it is bounded away from 0 as well, i.e.
    0 < lim inf_{n→∞} ‖P_0 − P_β‖_TV ≤ lim sup_{n→∞} ‖P_0 − P_β‖_TV < 1.   (16)
Indeed, consider for instance the statistic S = Tr(X). Under P_0, S ~ N(0, 2), while under P_β, S ~ N(β, 2). Hence
    lim inf_{n→∞} ‖P_0 − P_β‖_TV ≥ ‖N(0, 1) − N(β/√2, 1)‖_TV = 1 − 2Φ(−β/(2√2)) > 0.   (17)
(Here Φ(x) = ∫_{−∞}^x e^{−z²/2} dz/√(2π) is the Gaussian distribution function.) The same phenomenon for rectangular matrices (k = 2) is discussed in detail in [22].
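A quick Monte Carlo check (ours; all sizes arbitrary) of the trace statistic confirms its n-independent distribution:

    # S = Tr(X) is N(0, 2) under P_0 and N(beta, 2) under P_beta, for every n.
    import numpy as np

    rng = np.random.default_rng(2)
    n, beta, reps = 400, 0.8, 500
    S0, S1 = np.empty(reps), np.empty(reps)
    for r in range(reps):
        G = rng.standard_normal((n, n))
        Z = (G + G.T) / np.sqrt(2 * n)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        S0[r] = np.trace(Z)
        S1[r] = np.trace(beta * np.outer(v, v) + Z)
    print(S0.mean(), S0.var())   # approx 0, 2
    print(S1.mean(), S1.var())   # approx beta, 2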
3.3 Reduction of spectral detection to the symmetric tensor model, k = 2
Recall that in the setup of Theorem 1, Q0,n is the law of the eigenvalues of X under H0 and Q1,n is the law of the eigenvalues of X under H1,L. Then Q1,n is invariant by conjugation of orthogonal matrices. Therefore, the detection problem is not changed if we replace X = n^{−1/2} 1_U 1_U^T + Z by
    X̂ ≡ R X R^T = (1/√n) R1_U (R1_U)^T + R Z R^T,   (18)
where R ∈ O(n) is an orthogonal matrix sampled according to the Haar measure. A direct calculation yields
    X̂ = β vv^T + Z̃,   (19)
where v is uniform on the n-dimensional sphere, β = L/√n, and Z̃ is a GOE matrix (with off-diagonal entries of variance 1/n). Furthermore, v and Z̃ are independent of one another.
Let P1,n be the law of X̂. Note that P1,n = P_β^{(k=2)} with β = L/√n. We can relate the detection problem of H0 vs. H1,L to the detection problem of P0,n vs. P1,n as follows.
Lemma 4. (a) If P1,n is contiguous with respect to P0,n then H1,L is spectrally contiguous with respect to H0.
(b) We have
    ‖Q0,n − Q1,n‖_TV ≤ ‖P0,n − P1,n‖_TV.
In view of Lemma 4, Theorem 1 is an immediate consequence of Theorem 3.
4 Proof of Theorem 3
The proof uses the following large deviations lemma, which follows, for instance, from [9, Proposition 2.3].
Lemma 5. Let v be a uniformly random vector on the unit sphere S^{n−1} and let ⟨v, e1⟩ be its first coordinate. Then, for any interval [a, b] with −1 ≤ a < b ≤ 1,
    lim_{n→∞} (1/n) log P(⟨v, e1⟩ ∈ [a, b]) = max{ (1/2) log(1 − q²) : q ∈ [a, b] }.   (20)
Proof of Theorem 3. We denote by L_n the Radon–Nikodym derivative of P_β with respect to P_0. By definition E_0 L_n = 1. It is easy to derive the following formula:
    L_n = ∫ exp{ −nβ²/4 + (nβ/2)⟨X, v^{⊗k}⟩ } μ_n(dv),   (21)
where μ_n is the uniform measure on S^{n−1}. Squaring and using (11), we get
    E_0 L_n² = e^{−nβ²/2} E_0 ∫ exp{ (nβ/2)⟨X, v1^{⊗k} + v2^{⊗k}⟩ } μ_n(dv1) μ_n(dv2)
             = e^{−nβ²/2} ∫ exp{ (nβ²/4) ‖v1^{⊗k} + v2^{⊗k}‖_F² } μ_n(dv1) μ_n(dv2)
             = ∫ exp{ (nβ²/2) ⟨v1, v2⟩^k } μ_n(dv1) μ_n(dv2)
             = ∫ exp{ (nβ²/2) ⟨v, e1⟩^k } μ_n(dv),   (22)
where in the first step we used (11) and in the last step we used rotational invariance.
Let F_β : [−1, 1] → R̄ be defined by
    F_β(q) ≡ (1/2) β² q^k + (1/2) log(1 − q²).   (23)
Using Lemma 5 and Varadhan's lemma, for any −1 ≤ a < b ≤ 1,
    ∫ exp{ (nβ²/2) ⟨v, e1⟩^k } I(⟨v, e1⟩ ∈ [a, b]) μ_n(dv) = exp{ n · max_{q∈[a,b]} F_β(q) + o(n) }.   (24)
It follows from the definition of β_k^{2nd} that max_{|q|≥δ} F_β(q) < 0 for any δ > 0. Hence
    E_0 L_n² ≤ ∫ exp{ (nβ²/2) ⟨v, e1⟩^k } I(|⟨v, e1⟩| ≤ δ) μ_n(dv) + e^{−c(δ)n},   (25)
for some c(δ) > 0 and all n large enough. Next notice that, under μ_n, ⟨v, e1⟩ is distributed as G/(G² + Z_{n−1})^{1/2}, where G ~ N(0, 1) and Z_{n−1} is a χ² with n − 1 degrees of freedom independent of G. Then, letting Z_n ≡ G² + Z_{n−1} (a χ² with n degrees of freedom),
    E_0 L_n² ≤ E exp{ (nβ²/2) |G|^k / Z_n^{k/2} · I(|G/Z_n^{1/2}| ≤ δ) } + e^{−c(δ)n}
             ≤ E exp{ (nβ²/2) |G|^k / Z_n^{k/2} · I(|G/Z_n^{1/2}| ≤ δ) I(Z_{n−1} ≥ n(1 − ε)) }
                 + e^{nβ²δ^k/2} P(Z_{n−1} ≤ n(1 − ε)) + e^{−c(δ)n}
             ≤ E exp{ (n^{1−k/2} β²)/(2(1 − ε)^{k/2}) · |G|^k I(|G| ≤ 2δ√n) }
                 + e^{nβ²δ^k/2} P(Z_{n−1} ≤ n(1 − ε)) + e^{−c(δ)n}
             = (2/√(2π)) ∫_0^{2δ√n} exp{ C(β, ε) n^{1−k/2} x^k − x²/2 } dx
                 + e^{nβ²δ^k/2} P(Z_{n−1} ≤ n(1 − ε)) + e^{−c(δ)n},   (26)
where C(β, ε) = β²/(2(1 − ε)^{k/2}). Now, for any ε > 0, we can (and will) choose δ small enough so that both e^{nβ²δ^k/2} P(Z_{n−1} ≤ n(1 − ε)) → 0 exponentially fast (by tail bounds on χ² random variables) and, if k ≥ 3, the argument of the exponent in the integral on the right-hand side of (26) is bounded above by −x²/4; this is possible since that argument vanishes only at x of order n^{1/2}. Hence, for any δ > 0, and all n large enough, we have
    E_0 L_n² ≤ (2/√(2π)) ∫_0^{2δ√n} exp{ C(β, ε) n^{1−k/2} x^k − x²/2 } dx + e^{−c(δ)n},   (27)
for some c(δ) > 0.
Now, for k ≥ 3 the integrand in (27) is dominated by e^{−x²/4} and converges pointwise (as n → ∞) to e^{−x²/2}, so the right-hand side of (27) converges to 1. Therefore, since E_0 L_n² ≥ (E_0 L_n)² = 1,
    k ≥ 3:   lim_{n→∞} E_0 L_n² = 1.   (28)
For k = 2, the argument of the exponential is independent of n and the integral can be evaluated immediately, yielding (after taking the limit ε → 0)
    k = 2:   lim sup_{n→∞} E_0 L_n² ≤ 1/√(1 − β²).   (29)
(Indeed, the above calculation implies that the limit exists and is given by the right-hand side.)
The proof is completed by invoking Lemma 2.
5 Related work
In the classical G(n, 1/2) planted clique problem, the computational problem is to find the planted clique (of cardinality k) in polynomial time, where we assume the location of the planted clique is hidden and is not part of the input. There are several algorithms that recover the planted clique in polynomial time when k = C√n, where C > 0 is a constant independent of n [2, 8, 10]. Despite significant effort, no polynomial time algorithm for this problem is known when k = o(√n). In the decision version of the planted clique problem, one seeks an efficient algorithm that distinguishes between a random graph distributed as G(n, 1/2) or a random graph containing a planted clique of size k ≥ (2 + δ) log n (for δ > 0; the natural threshold for the problem is the size of the largest clique in a random sample of G(n, 1/2), which is asymptotic to 2 log n [14]). No polynomial time algorithm is known for this decision problem if k = o(√n).
As another example, consider the following setting introduced by [4] (see also [1]): one is given a realization of an n-dimensional Gaussian vector x := (x1, …, xn) with i.i.d. entries. The goal is to distinguish between the following two hypotheses. Under the first hypothesis, all entries in x are i.i.d. standard normals. Under the second hypothesis, one is given a family of subsets C := {S1, …, Sm} such that for every 1 ≤ k ≤ m, Sk ⊆ {1, …, n}, and there exists an i ∈ {1, …, m} such that, for any α ∈ Si, x_α is a Gaussian random variable with mean μ > 0 and unit variance, whereas for every α ∉ Si, x_α is standard normal. (The second hypothesis does not specify the index i, only its existence.) The main question is how large μ must be such that one can reliably distinguish between these two hypotheses. In [4], the α are vertices in certain undirected graphs and the family C is a set of pre-specified paths in these graphs.
The Gaussian hidden clique problem is related to various applications in statistics and computational biology [6, 18]. That detection is statistically possible when L ≳ log n was established in [1]. In terms of polynomial time detection, [8] shows that detection is possible when L = Ω(√n) for the symmetric case. As noted, no polynomial time algorithm is known for the Gaussian hidden clique problem when L = o(√n). In [1, 20] it was hypothesized that the Gaussian hidden clique problem should be difficult when L ≪ √n.
The closest results to ours are the ones of [22]. In the language of the present paper, these authors consider a rectangular matrix of the form X = β v1 v2^T + Z ∈ R^{n1×n2}, whereby Z has i.i.d. entries Z_{ij} ~ N(0, 1/n1), v1 is deterministic of unit norm, and v2 has entries which are i.i.d. N(0, 1/n1), independent of Z. They consider the problem of testing this distribution against β = 0. Setting c = lim_{n→∞} n1/n2, it is proved in [22] that the distributions of the singular values of X under the null and the alternative are mutually contiguous if β < √c and not mutually contiguous if β > √c. While [22] derive some more refined results, their proofs rely on advanced tools from random matrix theory [13], while our proof is simpler, and generalizable to other settings (e.g. tensors).
References
[1] L. Addario-Berry, N. Broutin, L. Devroye, G. Lugosi. On combinatorial testing problems. Annals of Statistics 38(5) (2011), 3063–3092.
[2] N. Alon, M. Krivelevich and B. Sudakov. Finding a large hidden clique in a random graph. Random Structures and Algorithms 13 (1998), 457–466.
[3] G. W. Anderson, A. Guionnet and O. Zeitouni. An introduction to random matrices. Cambridge University Press (2010).
[4] E. Arias-Castro, E. J. Candès, H. Helgason and O. Zeitouni. Searching for a trail of evidence in a maze. Annals of Statistics 36 (2008), 1726–1757.
[5] A. Auffinger, G. Ben Arous, and J. Cerny. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics 66(2) (2013), 165–201.
[6] S. Balakrishnan, M. Kolar, A. Rinaldo, A. Singh, and L. Wasserman. Statistical and computational tradeoffs in biclustering. NIPS Workshop on Computational Trade-offs in Statistical Learning (2011).
[7] S. Bhamidi, P. S. Dey, and A. B. Nobel. Energy landscape for large average submatrix detection problems in Gaussian random matrices. arXiv:1211.2284.
[8] Y. Deshpande and A. Montanari. Finding hidden cliques of size √(N/e) in nearly linear time. Foundations of Computational Mathematics (2014), 1–60.
[9] A. Dembo and O. Zeitouni. Matrix optimization under random external fields. arXiv:1409.4606.
[10] U. Feige and R. Krauthgamer. Finding and certifying a large hidden clique in a semi-random graph. Random Struct. Algorithms 16(2) (1999), 195–208.
[11] D. Féral and S. Péché. The largest eigenvalue of rank one deformation of large Wigner matrices. Comm. Math. Phys. 272 (2007), 185–228.
[12] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica 1 (1981), 233–241.
[13] A. Guionnet and M. Maida. A Fourier view on R-transform and related asymptotics of spherical integrals. Journal of Functional Analysis 222 (2005), 435–490.
[14] G. R. Grimmett and C. J. H. McDiarmid. On colouring random graphs. Math. Proc. Cambridge Philos. Soc. 77 (1975), 313–324.
[15] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences 78(5) (2012), 1460–1480.
[16] M. Jerrum. Large cliques elude the Metropolis process. Random Struct. Algorithms 3(4) (1992), 347–360.
[17] A. Knowles and J. Yin. The isotropic semicircle law and deformation of Wigner matrices. Communications on Pure and Applied Mathematics 66(11) (2013), 1663–1749.
[18] M. Kolar, S. Balakrishnan, A. Rinaldo, and A. Singh. Minimax localization of structural information in large noisy matrices. Neural Information Processing Systems (NIPS) (2011), 909–917.
[19] M. Talagrand. Free energy of the spherical mean field model. Probability Theory and Related Fields 134(3) (2006), 339–382.
[20] Z. Ma and Y. Wu. Computational barriers in minimax submatrix detection. arXiv:1309.5914.
[21] A. Montanari and E. Richard. A Statistical Model for Tensor PCA. Neural Information Processing Systems (NIPS) (2014), 2897–2905.
[22] A. Onatski, M. J. Moreira, M. Hallin, et al. Asymptotic power of sphericity tests for high-dimensional data. The Annals of Statistics 41(3) (2013), 1204–1231.
[23] W. C. Waterhouse. The absolute-value estimate for symmetric multilinear forms. Linear Algebra and its Applications 128 (1990), 97–105.
| 5719 |@word briefly:1 version:3 faculty:1 polynomial:7 norm:7 nd:2 hu:1 seek:1 p0:16 q1:7 invoking:1 tr:1 arous:1 carry:1 reduction:2 moment:4 zij:2 ktv:12 daniel:2 ours:1 com:1 si:2 gmail:1 intriguing:1 attracted:1 dx:2 must:1 fn:2 j1:1 v:2 isotropic:1 yi1:1 dembo:1 vanishing:1 detecting:3 math:2 location:1 mcdiarmid:1 simpler:1 zhang:1 zii:1 kvk2:1 direct:1 ik:15 qualitative:1 prove:2 introduce:2 indeed:4 behavior:1 p1:15 cand:1 andrea:1 brain:1 spherical:2 kp0:9 considering:1 cardinality:1 provided:1 estimating:1 notation:1 bounded:5 null:1 israel:1 what:2 contiguity:4 spectrally:3 generalizable:1 sudakov:1 finding:3 transformation:1 berkeley:1 every:4 k2:2 eha:1 uk:1 unit:7 engineering:1 limit:2 consequence:4 despite:1 v2t:1 path:1 lugosi:1 studied:3 statistically:2 weizmann:2 practical:1 testing:7 asymptotics:1 universal:1 semicircle:1 significantly:1 thought:1 word:1 pre:1 get:3 cannot:1 convenience:1 operator:2 put:1 context:2 equivalent:2 imposed:1 deterministic:2 measurable:4 map:1 dz:1 independently:1 rectangular:2 formulate:1 immediately:1 pure:2 wasserman:1 examines:2 searching:1 notion:2 variation:1 coordinate:2 annals:3 elude:1 distinguishing:1 us:1 hypothesis:19 trail:1 element:6 satisfying:1 jk:2 asymmetric:3 electrical:1 hv:8 trade:1 vanishes:1 comm:1 ui:1 complexity:1 depend:1 tight:4 singh:2 algebra:1 localization:1 triangle:1 differently:1 various:1 fast:1 ech:1 detected:1 outcome:1 h0:15 refined:1 stanford:2 solve:1 larger:1 say:2 triangular:3 statistic:7 gi:1 jerrum:1 think:2 transform:1 noisy:5 sequence:5 eigenvalue:14 product:3 realization:3 description:1 frobenius:1 kv:1 exploiting:1 cluster:2 converges:2 ben:1 derive:3 alon:1 ac:1 uredi:1 soc:1 implies:5 convention:1 closely:1 require:1 hx:6 f1:2 generalization:1 preliminary:1 proposition:1 multilinear:1 sufficiently:1 normal:4 exp:12 auffinger:1 proc:1 combinatorial:1 largest:5 kq0:1 tool:1 nn12:1 offs:1 sphericity:1 moreira:1 gaussian:27 pn:3 kzkop:2 focus:1 rank:8 integrability:1 check:1 detect:1 sense:1 glass:1 dependent:1 squaring:1 integrated:1 hidden:23 relation:2 zn1:1 i1:5 denoted:1 exponent:1 bhamidi:1 special:4 field:4 once:1 biology:1 vvt:4 nearly:1 hv1:1 simplify:1 richard:1 few:1 distinguishes:2 familiar:1 n1:5 freedom:2 detection:20 interest:5 yielding:1 devoted:2 kxkop:3 reichman:2 integral:2 orthogonal:9 euclidean:1 e0:12 deformation:3 instance:2 contiguous:12 kxkf:1 zn:9 ordinary:2 deviation:1 entry:13 subset:4 vertex:1 uniform:4 successful:1 kn:1 dp0:2 physic:1 off:2 xi1:2 quickly:1 again:1 rn1:1 containing:1 choose:1 cognitive:1 external:1 derivative:4 kvkp:1 depends:1 vi:2 h1:18 view:2 sup:3 relied:1 recover:1 il:1 spin:1 square:1 variance:7 qk:1 ensemble:1 yield:3 clarifies:1 landscape:1 rx:1 worth:1 dp1:2 explain:1 phys:1 definition:5 against:2 energy:2 deshpande:1 naturally:1 proof:12 sampled:3 hsu:1 proved:2 colouring:1 recall:3 lim:13 ok:2 courant:1 specify:1 anderson:1 furthermore:1 dey:1 talagrand:1 hand:4 o:1 continuity:1 hypothesized:1 hence:5 symmetric:16 q0:5 i2:1 indistinguishable:1 whereby:2 noted:1 stress:1 tn:3 wigner:2 recently:2 behaves:1 permuted:1 functional:1 exponentially:1 discussed:1 tail:1 refer:3 significant:2 cambridge:2 dv1:3 philos:1 mathematics:4 varadhan:1 language:1 f0:3 similarity:2 gt:1 closest:1 inf:3 certain:1 inequality:1 yi:1 integrable:2 additional:1 surely:2 determine:1 signal:3 arithmetic:1 semi:1 reduces:2 technical:1 calculation:3 long:2 sphere:5 e1:8 converging:1 involving:1 essentially:2 arxiv:3 normalization:2 whereas:2 interval:1 singular:2 
limn:3 undirected:1 balakrishnan:2 call:1 structural:1 easy:3 identically:1 enough:3 zi:1 reduce:1 regarding:1 inner:1 tradeoff:1 whether:3 pca:1 effort:1 york:1 remark:2 krivelevich:1 useful:1 broutin:1 exist:1 notice:2 rehovot:1 write:1 threshold:4 kept:1 v1:4 graph:8 asymptotically:2 year:1 almost:2 family:2 knowles:1 wu:1 decision:2 radon:4 eral:1 submatrix:7 bound:5 conjugation:3 distinguish:6 correspondence:2 precisely:2 helgason:1 x2:3 certifying:1 dominated:1 aspect:1 u1:1 argument:3 integrand:1 fourier:1 department:3 tv:1 according:2 conjugate:1 feige:1 slightly:1 kakade:1 metropolis:1 s1:1 castro:1 dv:4 restricted:1 invariant:5 mutually:3 turn:1 goe:5 letting:2 koml:1 ofer:2 gaussians:3 generalizes:1 operation:1 apply:1 observe:1 away:2 spectral:14 v2:4 grimmett:1 alternative:2 struct:2 existence:3 denotes:1 clustering:1 completed:1 krauthgamer:1 zeitouni:5 build:1 establish:2 classical:1 tensor:30 question:4 planted:8 usual:1 diagonal:3 guessing:2 supn:3 distance:1 outer:2 kak2f:1 reason:2 boldface:2 nobel:1 devroye:1 index:3 pointwise:1 ratio:1 rotational:2 kolar:2 equivalently:3 difficult:3 setup:4 relate:1 negative:1 reliably:2 upper:4 observation:1 markov:1 sm:1 immediate:2 zi1:2 communication:2 rn:4 perturbation:2 community:1 introduced:1 specified:1 california:1 unequal:1 established:1 nip:3 below:6 remarking:1 summarize:1 including:1 reliable:7 max:5 power:2 critical:1 event:1 natural:1 rely:1 haar:2 indicator:1 advanced:1 minimax:2 historically:1 concludes:1 sn:5 literature:3 berry:1 checking:1 waterhouse:1 asymptotic:3 law:9 expect:1 permutation:2 limitation:1 versus:1 foundation:1 degree:2 znk:1 nikodym:4 changed:1 dv2:3 soon:1 last:1 free:1 side:2 allow:1 addario:1 institute:2 taking:2 barrier:1 absolute:2 distributed:6 dimension:4 xn:1 maze:1 qn:2 author:1 collection:1 ec:2 clique:27 xi:3 alternatively:1 spectrum:1 continuous:2 sk:4 ca:1 symmetry:1 kui:1 main:4 montanari:4 noise:8 n2:2 xu:2 x1:1 en:4 kxk2:1 theorem:14 kuk2:1 formula:1 uut:1 x:1 zj1:1 evidence:1 exists:5 workshop:1 aria:1 nk:7 flavor:1 cerny:1 yin:1 distinguishable:1 simply:1 gi1:1 rinaldo:2 ordered:1 g2:2 scalar:1 biclustering:1 applies:1 corresponds:1 ma:1 goal:2 replace:1 uniformly:4 principal:1 lemma:9 total:1 invariance:2 e:1 formally:2 combinatorica:1 latter:1 absolutely:2 phenomenon:1 |
5,213 | 572 | Data Analysis using G/SPLINES
David Rogers?
Research Institute for Advanced Computer Science
MS T041-5, NASA/Ames Research Center
Moffett Field, CA 94035
INTERNET: drogerS@riacs.edu
Abstract
G/SPLINES is an algorithm for building functional models of data. It
uses genetic search to discover combinations of basis functions which
are then used to build a least-squares regression model. Because it
produces a population of models which evolve over time rather than a
single model, it allows analysis not possible with other regression-based
approaches.
1 INTRODUCTION
G/SPLINES is a hybrid of Friedman's Multivariable Adaptive Regression Splines
(MARS) algorithm (Friedman, 1990) with Holland's Genetic Algorithm (Holland, 1975).
G/SPLINES has advantages over MARS in that it requires fewer least-squares
computations, is easily extendable to non-spline basis functions, may discover models
inaccessible to local-variable selection algorithms, and allows significantly larger
problems to be considered. These issues are discussed in (Rogers, 1991).
This paper begins with a discussion of linear regression models, followed by a description
of the G/SPLINES algorithm, and finishes with a series of experiments illustrating its
performance, robustness, and analysis capabilities.
* Currently at Polygen/Molecular Simulations, Inc., 796 N. Pastoria Ave., Sunnyvale, CA 94086,
INTERNET: drogers@msi.com.
2 LINEAR MODELS
A common assumption used in data modeling is that the data samples are derived from an
underlying function:
    y_i = f(X_i) + error_i = f(x_{i1}, ..., x_{in}) + error_i
The goal of analysis is to develop a model F(X) which minimizes the least-squares error:
    LSE(F) = (1/N) Σ_{i=1}^N (y_i − F(X_i))²
The function F(X) can then be used to estimate the underlying function f at previously-seen data samples (recall) or at new data samples (prediction). Samples used to construct the function F(X) are in the training set; samples used to test prediction are in the test set.
In constructing F(X), if we assume the model F can be written as a linear combination of basis functions {Φ_k}:
    F(X) = a_0 + Σ_{k=1}^M a_k Φ_k(X)
then standard least-squares regression can find the optimal coefficients {a_k}. However, selecting an appropriate set of basis functions for high-dimensional models can be difficult. G/SPLINES is primarily a method for selecting this set.
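For concreteness, a minimal sketch (ours) of this regression step; the example basis function is a hypothetical placeholder, not one of Rogers' generated functions:

    # Fitting the coefficients {a_k} by least squares, given a chosen basis.
    import numpy as np

    def fit_model(X, y, basis_functions):
        """X: (N, d) samples; basis_functions: callables mapping X -> (N,) arrays."""
        Phi = np.column_stack([np.ones(len(X))] + [f(X) for f in basis_functions])
        a, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # a[0] is the intercept a_0
        predict = lambda Xnew: np.column_stack(
            [np.ones(len(Xnew))] + [f(Xnew) for f in basis_functions]) @ a
        return a, predict

    # e.g. a truncated linear spline basis function max(0, x_3 - 0.5):
    basis = [lambda X: np.maximum(0.0, X[:, 2] - 0.5)]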
3 G/SPLINES
Many techniques develop a regression model by incremental addition or deletion of basis functions to a single model. The primary idea of G/SPLINES is to keep a collection of models, and use the genetic algorithm to recombine among these models.
G/SPLINES begins with a collection of models containing randomly-generated basis functions:
    F1: {Φ1 Φ2 Φ3 ... Φ14}
    F2: {Θ1 Θ2 Θ3 ... Θ11}
    ...
    FK: {Ψ1 Ψ2 Ψ3 ... Ψ12}
The basis functions are functions which use a small number of the variables in the data set, such as sin(X2 − 1) or (X4 − A)(X5 − .1). The model coefficients {a_k} are determined using least-squares regression.
Each model is scored using Friedman's "lack of fit" (LOF) measure, which is a penalized least-squares measure for goodness of fit; this measure takes into account factors such as the number of data samples, the least-squares error, and the number of model parameters.
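Rogers does not reproduce the formula here; the sketch below (ours) shows a GCV-style score of the kind Friedman uses, where the effective-parameter cost C below is an assumption chosen for illustration, not Rogers' or Friedman's exact choice:

    # Hedged sketch (ours) of a MARS-style "lack of fit" score.
    import numpy as np

    def lof(y, y_hat, n_basis, smoothing=3.0):
        N = len(y)
        C = n_basis + smoothing * (n_basis - 1) / 2.0   # effective parameters
        assert C < N, "penalty assumes many more samples than parameters"
        return (np.sum((y - y_hat) ** 2) / N) / (1.0 - C / N) ** 2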
At this point, we repeatedly perform the genetic crossover operation:
• Two good models are probabilistically selected as "parents". The likelihood of being chosen is inversely proportional to a model's LOF score.
• Each parent is randomly "cut" into two sections, and a new model is created using a piece from each parent:
    [diagram: first parent + second parent → new model, spliced at the cut points]
• Optional mutation operators may alter the newly-created model.
• The model with the worst LOF score is replaced by this new model.
This process ends when the average fitness of the population stops improving.
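A pseudocode-style sketch (ours) of one such crossover step; models are plain lists of basis functions and `rng` is a numpy Generator:

    # Hypothetical implementation (ours) of the crossover step described above.
    import numpy as np

    def crossover_step(population, scores, rng):
        w = 1.0 / np.asarray(scores)                  # selection prob. ~ 1/LOF
        i, j = rng.choice(len(population), size=2, replace=False, p=w / w.sum())
        pa, pb = population[i], population[j]
        child = pa[: rng.integers(0, len(pa) + 1)] + pb[rng.integers(0, len(pb) + 1):]
        population[int(np.argmax(scores))] = child    # replace the worst model
        return child                                  # (re-scoring omitted here)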
Some features of the G/SPLINES algorithm are significantly different from MARS:
• Unlike incremental search, full-sized models are tested at every step.
• The algorithm automatically determines the proper size for models.
• Many fewer models are tested than with MARS.
• A population of models offers information not available from single-model methods.
4 MUTATION OPERATORS
Additional mutation operators were added to the system to counteract some negative
tendencies of a purely crossover-based algorithm.
Problem: genetic diversity is reduced as the process proceeds (fewer distinct basis functions in the population)
NEW: creates a new basis function by randomly choosing a basis function type and then
randomly filling in the parameters.
Problem: need process for constructing useful multidimensional basis functions
MERGE: takes a random basis function from each parent, and creates a new basis function
by multiplying them together.
Problem: models contain "hitchhiking" basis functions which contribute little
DELETION: ranks the basis functions in order of their contribution to the approximation. It removes one or more of the least-contributing basis functions.
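Sketches (ours) of the three operators; `random_basis_function` and `contribution` are hypothetical helpers standing in for details the paper leaves unspecified, and `rng` is a numpy Generator:

    # Hypothetical mutation operators (ours), mirroring NEW, MERGE, DELETION.
    def mutate_new(model, rng):                        # operator NEW
        model.append(random_basis_function(rng))

    def mutate_merge(child, parent_a, parent_b, rng):  # operator MERGE
        f = parent_a[rng.integers(len(parent_a))]
        g = parent_b[rng.integers(len(parent_b))]
        child.append(lambda X, f=f, g=g: f(X) * g(X))  # product basis function

    def mutate_delete(model, X, y, frac=0.10):         # operator DELETION
        ranked = sorted(model, key=lambda f: contribution(f, model, X, y))
        for f in ranked[: max(1, int(frac * len(model)))]:
            model.remove(f)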
5 EXPERIMENTAL
Experiments were conducted on data derived from a function used by Friedman (1988):
    f(X) = sin(πX1X2) + 20(X3 − 1/2)² + 10X4 + 5X5
Standard experimental conditions are as follows. Experiments used a training set
containing 200 samples, and a test set containing 200 samples. Each sample contained 10
predictor variables (5 informative, 5 noninformative) and a response. Sample points were randomly selected from within the unit hypercube. The signal/noise ratio was 4.8/1.0.
The G/SPLINES population consisted of 100 models. Linear truncated-power splines were
used as basis functions. After each crossover, a model had a 50% chance of getting a new
basis function created by operator NEW or MERGE and the least-contributing 10% of its
basis functions deleted using operator DELETE.
The standard training phase involved 10,000 crossover operations. After training, the
models were tested against a set of 200 previously-unseen test samples.
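Putting this setup together, a data generator (ours); by assumption, the stated 4.8/1.0 signal/noise ratio is interpreted as a ratio of standard deviations:

    # Hypothetical reproduction (ours) of the experimental data generation.
    import numpy as np

    def make_data(n_samples, rng, snr=4.8):
        X = rng.uniform(size=(n_samples, 10))          # unit hypercube, 10 vars
        f = (np.sin(np.pi * X[:, 0] * X[:, 1])
             + 20.0 * (X[:, 2] - 0.5) ** 2
             + 10.0 * X[:, 3] + 5.0 * X[:, 4])         # only first 5 informative
        y = f + rng.normal(scale=f.std() / snr, size=n_samples)
        return X, y

    rng = np.random.default_rng(3)
    X_train, y_train = make_data(200, rng)
    X_test, y_test = make_data(200, rng)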
5.1 G/SPLINES VS. MARS
Question: is G/SPLINE competitive with MARS?
[Figure 1 here: test least-squares score versus number of least-squares operations ×100; curves for the best G/SPLINES test LS score and the MARS test LS score.]
Figure 1. Test least-squares scores versus number of least-squares regressions for
G/SPLINES and MARS.
The MARS algorithm was close to convergence after 50,000 least-squares regressions, and showed no further improvement after 80,000. The G/SPLINES algorithm was close to convergence after 4,000 least-squares regressions, and showed no further improvement after 10,000. [Note: the number of least-squares regressions is not a direct measure of the computational efficiency of the algorithms, as MARS uses a technique (applicable only to linear truncated-power splines) to greatly reduce the cost of doing least-squares regression.]
To complete the comparison, we need results on the quality of the discovered models:
    Final average least-squared error of the best 4 G/SPLINES models:  ~1.17
    Final least-squared error of the MARS model:                       ~1.12
    The "best" model's least-squared error (from the added noise):     ~1.08
Using only linear truncated-power splines, G/SPLINES builds models comparable
(though slightly inferior) to MARS. However, by using basis functions other than linear
truncated power splines, G/SPLINES can build improved models. If we repeat the
experiment with additional basis function types of step functions, linear splines, and
quadratic splines, we get improved results:
With additional basis functions, the final average least-squared error was ~1.095.
I suggest that by including basis functions which reflect the underlying structure of f, the
quality of the discovered models is improved.
5.2 VARIABLE ELIMINATION
Question: does variable usage in the population reflect the underlying function? (Recall
that the data samples contained 10 variables; only the first 5 were used to calculate f.)
[Figure 2 here: number of basis functions using each variable Var(1)–Var(10) (y-axis, 0–1400) versus number of genetic operations ×100.]
Figure 2. # of basis functions using a variable vs. # of crossover operations.
G/SPLINES correctly focuses on basis functions which use the first five variables. The relative usage of these five variables reflects the complexity of the relationship between an input variable and the response in a given dimension.
Question: is the rate of elimination of variables affected by sample size?
[Figure 3 here: two panels plotting the number of basis functions using each noninformative variable Var(6)–Var(10) versus number of genetic operations ×100.]
Figure 3. Close-up of Figure 2, showing the five variables not affecting the response. The left graph is the standard experiment; the right from a training with 50 samples.
The left graph plots the number of basis functions containing a variable versus the number
of genetic operations for the five noninformative variables in the standard experiment. The
variables are slowly eliminated from consideration. The right graph plots the same
information, using a training set size of 50 samples. The variables are rapidly eliminated. Smaller training sets force the algorithm to work with the most predictive variables, causing a faster elimination of less predictive variables.
Question: Is variable elimination effective with increased numbers of noninformative variables?
This experiment used the standard conditions but increased the number of predictor variables in the training and test sets to 100 (5 informative, 95 noninformative).
[Figure 4 here: number of basis functions using each variable versus variable index 1–100; usage concentrates on the first five variables.]
Figure 4. Number of basis functions which used a variable vs. variable index, after
10,000 genetic operations.
Figure 4 shows that elimination behavior was still apparent in this high-dimensional data set. The five informative variables were the first five in order of use.
5.3 MODEL SIZE
Question: What is the effect of the genetic algorithm on model size?
[Figure 5 here: left panel, best and average LOF scores versus number of genetic operations ×100; right panel, average number of basis functions per model versus number of genetic operations ×100.]
Figure 5. Model scores on training set and average function length.
The left graph plots the best and average LOF score for the training set versus the number
of genetic operations. The right graph plots the average number of basis functions in a
model versus the number of genetic operations.
Even after the LOF error is minimized, the average model length continues to decrease. This is likely due to pressure from the genetic algorithm; a compact representation is more likely to survive the crossover operation without loss. (In fact, due to the nature of the LOF function, the least-squared errors of the best models are slightly increased by this procedure. The system considers the increase a fair trade-off for smaller model size.)
5.4 RESISTANCE TO OVERFITTING
Question: Does Friedman's LOF function resist overfitting with small training sets?
Training was conducted with data sets of two sizes: 200 and 50. The left graph in Figure 6
plots the population average least-squared error for the training set and the test set versus
the number of genetic operations, using a training set size of 200 samples. The right graph
[Figure 6 here: two panels plotting average least-squares error on the training and test sets versus number of genetic operations ×100; left panel for a 200-sample training set, right panel for 50 samples.]
Figure 6. LS error vs. # of operations for training with 200 and 50 samples.
plots the same information, but for a system using a training set size of 50 samples.
In both cases, little overfitting is seen, even when the algorithm is allowed to run long after
the point where improvement ceases. Training with a small number of samples still leads
to models which resist overfitting.
Question: What is the effect of additive noise on overfitting?
[Figure 7 here: two panels plotting average training and test least-squares error versus number of LS operations ×100 for the noisy data.]
Figure 7. LS error vs. # of operations for low and high noise data sets.
Training was conducted with training sets having a signal/noise ratio of 1.0/1.0. The left
graph plots the least-squared error for the training and test set versus the number of
genetic operations. The right graph plots the same information, but with a higher setting of
Friedman's smoothing parameter.
Noisy data results in a higher risk of overfitting. However, this can be accommodated if
we set a higher value for Friedman's smoothing parameter.
5.5 ADDITIONAL BASIS FUNCTION TYPES AND TRAINING SET SIZES
Question: What is the effect of changes in training set size on the type of basis functions
selected?
The experiment in Figure 8 used the standard conditions, but using many additional basis
function types. The left graph plots the use of different types of basis functions using a
training set of size 50. The right graph plots the same information using a training set size
of 200. Simply put, different training set sizes lead to significant changes in preferences
among function types. A detailed analysis of these graphs can give insight into the nature
of the data and the best components for model construction.
[Figure 8 here: two panels plotting the number of basis functions of each type (linear spline, linear, quadratic, step, spline order 2, B-spline orders 0–2) versus number of genetic operations ×100; left panel for a 50-sample training set, right panel for 200 samples.]
Figure 8. # of basis functions of a given type vs. # of genetic operations, for training
sets of 50 and 200 samples.
6 CONCLUSIONS
G/SPLINES is a new algorithm related to state-of-the-art statistical modeling techniques
such as MARS. The strengths of this algorithm are that G/SPLINES builds models that are
comparable in quality to MARS, with a greatly reduced number of intermediate model
constructions; is capable of building models from data sets that are too large for the
MARS algorithm; and is easily extendable to basis functions that are not spline-based.
Weaknesses of this algorithm include the ad-hoc nature of the mutation operators; the lack
of studies of the real-time performance of G/SPLINES vs. other model builders such as
MARS; the need for theoretical analysis of the algorithm's convergence behavior; the
LOF function needs to be changed to reflect additional basis function types.
The WOLF program source code, which implements G/SPLINES, is available free to
other researchers in either Macintosh or UNIX/C formats. Contact the author
(drogerS@riacs.edu) for information.
Acknowledgments
This work was supported in part by Cooperative Agreements NCC 2-387 and NCC 2-408
between the National Aeronautics and Space Administration (NASA) and the Universities
Space Research Association (USRA). Special thanks to my domestic partner Doug
Brockman, who shared my enthusiasm even though he didn't know what the hell I was up
to; and my father, Philip, who made me want to become a scientist.
References
Friedman, J., "Multivariate Adaptive Regression Splines," Technical Report No. 102,
Laboratory for Computational Statistics, Department of Statistics, Stanford University,
November 1988 (revised August 1990).
Holland, J., Adaptation in Artificial and Natural Systems, University of Michigan Press,
Ann Arbor, MI, 1975.
Rogers, David, "G/SPLINES: A Hybrid of Friedman's Multivariate Adaptive Splines
(MARS) Algorithm with Holland's Genetic Algorithm," in Proceedings of the Fourth
International Conference on Genetic Algorithms, San Diego, July, 1991.
| 572 |@word illustrating:1 simulation:1 pressure:1 series:1 score:15 selecting:2 tlo:1 genetic:22 com:1 written:1 riacs:2 additive:1 informative:2 noninformative:2 remove:1 plot:10 v:7 fewer:3 selected:3 contribute:1 ames:1 preference:1 five:6 direct:1 become:1 fitting:1 behavior:2 automatically:1 little:2 domestic:1 begin:2 discover:2 underlying:4 didn:1 what:4 minimizes:1 every:1 multidimensional:1 fat:1 unit:1 scientist:1 local:1 era:1 ak:3 merge:2 p_:1 acknowledgment:1 implement:1 procedure:1 crossover:6 significantly:2 suggest:1 get:1 close:3 selection:1 operator:6 put:1 risk:1 center:1 l:12 insight:1 population:7 construction:2 diego:1 us:2 agreement:1 continues:1 cut:1 cooperative:1 worst:1 calculate:1 decrease:1 trade:1 inaccessible:1 complexity:1 predictive:2 purely:1 recombine:1 creates:2 f2:1 efficiency:1 basis:35 easily:2 tx:1 leo:1 effective:1 artificial:1 choosing:1 apparent:1 larger:1 stanford:1 statistic:2 unseen:1 gp:1 noisy:1 final:3 hoc:1 advantage:1 adaptation:1 causing:1 rapidly:1 description:1 getting:1 parent:6 convergence:3 macintosh:1 produce:1 incremental:2 develop:2 rogers:6 sunnyvale:1 elimination:5 f1:1 hell:1 considered:1 lof:8 itd:1 applicable:1 currently:1 infonnation:1 builder:1 reflects:1 rather:1 probabilistically:1 derived:2 focus:1 improvement:3 rank:1 likelihood:1 greatly:2 ave:1 issue:1 among:2 smoothing:2 art:1 special:1 field:1 construct:1 having:1 eliminated:2 x4:1 survive:1 filling:1 alter:1 fcn:1 minimized:1 report:1 spline:40 brockman:1 primarily:1 randomly:5 national:1 fitness:1 replaced:1 phase:1 friedman:9 weakness:1 capable:1 accommodated:1 theoretical:1 delete:1 increased:3 modeling:2 infonnative:2 goodness:1 cost:1 predictor:2 father:1 conducted:3 too:1 my:3 extendable:2 thanks:1 international:1 ops:2 off:1 together:1 squared:8 reflect:3 containing:4 slowly:1 account:1 diversity:1 coefficient:2 inc:1 ad:1 piece:1 doing:1 competitive:1 capability:1 mutation:4 contribution:1 square:12 figme:1 who:2 multiplying:1 researcher:1 ncc:2 against:1 involved:1 mi:1 stop:1 newly:1 recall:2 nasa:2 higher:3 response:3 improved:3 ooo:1 though:2 mar:17 lesl:3 hitchhiking:1 lack:2 quality:3 usage:2 effect:3 building:2 contain:1 consisted:1 oil:1 laboratory:1 sin:2 x5:1 lob:1 inferior:1 m:1 multivariable:1 complete:1 lse:1 consideration:1 fi:2 common:1 functional:1 enthusiasm:1 discussed:1 association:1 he:1 significant:1 fk:1 had:1 aeronautics:1 multivariate:2 showed:2 yi:2 seen:1 minimum:1 additional:6 signal:2 ii:8 july:1 full:1 technical:1 faster:1 offer:1 long:1 molecular:1 prediction:2 regression:13 addition:1 affecting:1 want:1 source:1 unlike:1 lest:1 slop:1 intermediate:1 finish:1 fit:2 reduce:1 idea:1 administration:1 resistance:1 repeatedly:1 useful:1 detailed:1 reduced:2 correctly:1 affected:1 deleted:1 graph:12 counteract:1 run:1 unix:1 fourth:1 comparable:2 internet:2 ct:15 followed:1 quadratic:2 strength:1 x2:1 format:1 department:1 combination:2 smaller:2 slightly:2 b:1 previously:1 know:1 end:1 available:2 operation:16 appropriate:1 robustness:1 include:1 build:4 hypercube:1 contact:1 added:2 question:8 primary:1 philip:1 me:1 partner:1 considers:1 length:2 code:1 msi:1 relationship:1 index:2 ratio:2 difficult:1 negative:1 proper:1 perform:1 revised:1 november:1 optional:1 truncated:4 discovered:2 august:1 david:2 resist:2 deletion:2 proceeds:1 program:1 including:1 power:4 natural:1 hybrid:2 force:1 advanced:1 inversely:1 created:3 doug:1 usra:1 evolve:1 contributing:2 relative:1 loss:1 proportional:1 moffett:1 versus:6 var:15 s0:1 cd:3 
penalized:1 changed:1 repeat:1 supported:1 free:1 institute:1 dimension:1 author:1 collection:2 adaptive:3 avg:9 made:1 san:1 compact:1 keep:1 overfitting:5 search:2 nature:3 ca:2 improving:1 constructing:2 noise:5 scored:1 fair:1 allowed:1 xu:1 showing:1 cease:1 ci:1 michigan:1 simply:1 likely:2 contained:2 bo:1 holland:4 wolf:1 determines:1 chance:1 goal:1 sized:1 ann:1 shared:1 change:2 determined:1 tendency:1 experimental:2 arbor:1 tested:3 |
5,214 | 5,720 | Regularized EM Algorithms: A Unified Framework
and Statistical Guarantees
Constantine Caramanis
Dept. of Electrical and Computer Engineering
The University of Texas at Austin
constantine@utexas.edu
Xinyang Yi
Dept. of Electrical and Computer Engineering
The University of Texas at Austin
yixy@utexas.edu
Abstract
Latent models are a fundamental modeling tool in machine learning applications, but they present significant computational and analytical challenges. The popular EM algorithm and its variants are a much-used algorithmic tool, yet our rigorous understanding of their performance is highly incomplete. Recently, work in [1] has demonstrated that for an important class of problems, EM exhibits linear local convergence. In the high-dimensional setting, however, the M-step may not be well defined. We address precisely this setting through a unified treatment using regularization. While regularization for high-dimensional problems is by now well understood, the iterative EM algorithm requires a careful balancing of making progress towards the solution while identifying the right structure (e.g., sparsity or low rank). In particular, regularizing the M-step using the state-of-the-art high-dimensional prescriptions (e.g., à la [19]) is not guaranteed to provide this balance. Our algorithm and analysis are linked in a way that reveals the balance between optimization and statistical errors. We specialize our general framework to sparse Gaussian mixture models, high-dimensional mixed regression, and regression with missing variables, obtaining statistical guarantees for each of these examples.
1 Introduction
We give general conditions for the convergence of the EM method for high-dimensional estimation. We specialize these conditions to several problems of interest, including high-dimensional sparse and low-rank mixed regression, sparse Gaussian mixture models, and regression with missing covariates. As we explain below, the key problem in the high-dimensional setting is the M-step. A natural idea is to modify this step via appropriate regularization, yet choosing the appropriate sequence of regularizers is a critical problem. As we know from the theory of regularized M-estimators (e.g., [19]), the regularizer should be chosen proportional to the target estimation error. For EM, however, the target estimation error changes at each step.
The main contribution of our work is technical: we show how to perform this iterative regularization. We show that the regularization sequence must be chosen so that it converges to a quantity controlled by the ultimate estimation error. In existing work, the estimation error is given by the relationship between the population and empirical M-step operators, but this too is not well defined in the high-dimensional setting. Thus a key step, related both to our algorithm and its convergence analysis, is obtaining a different characterization of statistical error for the high-dimensional setting.
Background and Related Work
EM (e.g., [8, 12]) is a general algorithmic approach for handling latent variable models (including mixtures), popular largely because it is typically computationally highly scalable and easy to implement. On the flip side, despite a fairly long history of studying EM in theory (e.g., [12, 17, 21]), very little has been understood about general statistical guarantees until recently. Very recent work in [1] establishes a general local convergence theorem (i.e., assuming the initialization lies in a local region around the true parameter) and statistical guarantees for EM, which is then specialized to obtain near-optimal rates for several specific low-dimensional problems (low-dimensional in the sense of the classical statistical setting where the samples outnumber the dimension). A central challenge in extending EM (and as a corollary, the analysis in [1]) to the high-dimensional regime is the M-step. On the algorithmic side, the M-step will not be stable (or even well defined in some cases) in the high-dimensional setting. To make matters worse, any analysis that relies on showing that the finite-sample M-step is somehow "close" to the M-step performed with infinite data (the population-level M-step) simply cannot apply in the high-dimensional regime. Recent work in [20] treats high-dimensional EM using a truncated M-step. This works in some settings, but also requires specialized treatment for every different setting, precisely because of the difficulty with the M-step.
In contrast to the work in [20], we pursue a high-dimensional extension via regularization. The central challenge, as mentioned above, is in picking the sequence of regularization coefficients, as this must control the optimization error (related to the special structure of $\theta^*$), as well as the statistical error. Finally, we note that for finite mixture regression, Städler et al. [16] consider an $\ell_1$-regularized EM algorithm for which they develop some asymptotic analysis and an oracle inequality. However, this work does not establish the theoretical properties of local optima arising from regularized EM. Our work addresses this issue from a local convergence perspective by using a novel choice of regularization.
2 Classical EM and Challenges in High Dimensions
The EM algorithm is an iterative algorithm designed to combat the non-convexity of maximum likelihood due to latent variables. For space concerns we omit the standard derivation, and only give the definitions we need in the sequel. Let $Y$, $Z$ be random variables taking values in $\mathcal{Y}$, $\mathcal{Z}$, with joint distribution $f_\theta(y, z)$ depending on a model parameter $\theta \in \Omega \subseteq \mathbb{R}^p$. We observe samples of $Y$ but not of the latent variable $Z$. EM seeks to maximize a lower bound on the maximum likelihood function for $\theta$. Letting $\kappa_\theta(z|y)$ denote the conditional distribution of $Z$ given $Y = y$, letting $y_{\theta^*}(y)$ denote the marginal distribution of $Y$, and defining the function
$$Q_n(\theta'|\theta) := \frac{1}{n}\sum_{i=1}^{n} \int_{\mathcal{Z}} \kappa_\theta(z|y_i)\,\log f_{\theta'}(y_i, z)\,dz, \qquad (2.1)$$
one iteration of the EM algorithm, mapping $\theta^{(t)}$ to $\theta^{(t+1)}$, consists of the following two steps:
- E-step: Compute the function $Q_n(\cdot|\theta^{(t)})$ given $\theta^{(t)}$.
- M-step: $\theta^{(t+1)} \leftarrow M_n(\theta^{(t)}) := \arg\max_{\theta' \in \Omega} Q_n(\theta'|\theta^{(t)})$.
We can define the population (infinite-sample) versions of $Q_n$ and $M_n$ in a natural manner:
$$Q(\theta'|\theta) := \int_{\mathcal{Y}} y_{\theta^*}(y) \int_{\mathcal{Z}} \kappa_\theta(z|y)\,\log f_{\theta'}(y, z)\,dz\,dy \qquad (2.2)$$
$$M(\theta) = \arg\max_{\theta' \in \Omega} Q(\theta'|\theta). \qquad (2.3)$$
This paper is about the high-dimensional setting where the number of samples $n$ may be far less than the dimensionality $p$ of the parameter $\theta$, but where $\theta^*$ exhibits some special structure, e.g., it may be a sparse vector or a low-rank matrix. In such a setting, the M-step of the EM algorithm may be highly problematic. In many settings, for example sparse mixed regression, the M-step may not even be well defined. More generally, when $n \ll p$, $M_n(\theta)$ may be far from the population version $M(\theta)$, and in particular, the minimum estimation error $\|M_n(\theta^*) - M(\theta^*)\|$ can be much larger than the signal strength $\|\theta^*\|$. This quantity is used in [1], as well as in the follow-up work in [20], as a measure of statistical error. In the high-dimensional setting, something else is needed.
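To see the issue concretely, a small numeric illustration (ours, not from the paper): in mixed regression the unregularized M-step solves a least-squares problem, and when n < p its normal equations are rank-deficient, so M_n is not even uniquely defined.

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                      # high-dimensional regime: n << p
X = rng.standard_normal((n, p))
G = X.T @ X / n                     # Gram matrix of the M-step's least-squares problem
print(np.linalg.matrix_rank(G))     # prints 50: rank <= n < p, so the maximizer is not unique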
3 Algorithm
The basis of our algorithm is the by-now well understood concept of regularized high-dimensional estimators, where the regularization is tuned to the underlying structure of $\theta^*$, thus defining a regularized M-step via
$$M_n^r(\theta) := \arg\max_{\theta' \in \Omega} Q_n(\theta'|\theta) - \lambda_n \mathcal{R}(\theta'), \qquad (3.1)$$
where $\mathcal{R}(\cdot)$ denotes an appropriate regularizer chosen to match the structure of $\theta^*$. The key challenge is how to choose the sequence of regularizers $\{\lambda_n^{(t)}\}$ in the iterative process, so as to control optimization and statistical error. As detailed in Algorithm 1, our sequence of regularizers attempts to match the target estimation error at each step of the EM iteration. For an intuition of what this might look like, consider the estimation error at step $t$: $\|M_n^r(\theta^{(t)}) - \theta^*\|_2$. By the triangle inequality, we can bound this by a sum of two terms: the optimization error and the final estimation error:
$$\|M_n^r(\theta^{(t)}) - \theta^*\|_2 \le \|M_n^r(\theta^{(t)}) - M_n^r(\theta^*)\|_2 + \|M_n^r(\theta^*) - \theta^*\|_2. \qquad (3.2)$$
Since we expect (and show) linear convergence of the optimization, it is natural to update $\lambda_n^{(t)}$ via a recursion of the form $\lambda_n^{(t)} = \kappa\lambda_n^{(t-1)} + \Delta$ as in (3.3), where the first term represents the optimization error, and $\Delta$ represents the final statistical error, i.e., the last term above in (3.2). A key part of our analysis shows that this error (and hence $\Delta$) is controlled by $\|\nabla Q_n(\theta^*|\theta) - \nabla Q(\theta^*|\theta)\|_{\mathcal{R}^*}$, which in turn can be bounded uniformly for a variety of important applications of EM, including the three discussed in this paper (see Section 5). While a technical point, it is this key insight that enables the right choice of algorithm and its analysis. In the cases we consider, we obtain minimax-optimal rates of convergence, demonstrating that no algorithm, let alone another variant of EM, can perform better.
Algorithm 1 Regularized EM Algorithm
Input: Samples $\{y_i\}_{i=1}^n$, regularizer $\mathcal{R}$, number of iterations $T$, initial parameter $\theta^{(0)}$, initial regularization parameter $\lambda_n^{(0)}$, estimated statistical error $\Delta$, contractive factor $\kappa < 1$.
1: For $t = 1, 2, \ldots, T$ do
2:   Regularization parameter update:
     $$\lambda_n^{(t)} \leftarrow \kappa\lambda_n^{(t-1)} + \Delta. \qquad (3.3)$$
3:   E-step: Compute function $Q_n(\cdot|\theta^{(t-1)})$ according to (2.1).
4:   Regularized M-step:
     $$\theta^{(t)} \leftarrow M_n^r(\theta^{(t-1)}) := \arg\max_{\theta \in \Omega} Q_n(\theta|\theta^{(t-1)}) - \lambda_n^{(t)} \cdot \mathcal{R}(\theta).$$
5: End For
Output: $\theta^{(T)}$.
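For concreteness, the outer loop of Algorithm 1 fits in a few lines of Python. This is a sketch under the assumption that a model-specific regularized M-step is supplied as a callable (the interface below is ours, not the paper's); the examples in Section 5 provide such M-steps.

def regularized_em(y, reg_m_step, theta0, lam0, delta, kappa, T):
    """Sketch of Algorithm 1. reg_m_step(y, theta, lam) must return
    argmax_theta' Qn(theta'|theta) - lam * R(theta'); forming Qn(.|theta),
    i.e., the E-step, is folded into that callable."""
    theta, lam = theta0, lam0
    for _ in range(T):
        lam = kappa * lam + delta       # regularization update (3.3)
        theta = reg_m_step(y, theta, lam)
    return theta                        # theta^(T)

Extra model parameters (such as the noise level sigma in Section 5) can be bound into reg_m_step via a closure or functools.partial.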
4 Statistical Guarantees
We now turn to the theoretical analysis of the regularized EM algorithm. We first set up a general analytical framework for regularized EM where the key ingredients are a decomposable regularizer and several technical conditions on the population-based $Q(\cdot|\cdot)$ and the sample-based $Q_n(\cdot|\cdot)$. In Section 4.3, we provide our main result (Theorem 1), which characterizes both the computational and statistical performance of the proposed variant of the regularized EM algorithm.
4.1 Decomposable Regularizers
Decomposable regularizers (e.g., [3, 6, 14, 19]) have been shown to be useful both empirically and theoretically for high-dimensional structural estimation, and they also play an important role in our analytical framework. Recall that for $\mathcal{R} : \mathbb{R}^p \to \mathbb{R}_+$ a norm, and a pair of subspaces $(\mathcal{S}, \bar{\mathcal{S}})$ in $\mathbb{R}^p$ such that $\mathcal{S} \subseteq \bar{\mathcal{S}}$, we have the following definition:
Definition 1 (Decomposability). Regularizer $\mathcal{R} : \mathbb{R}^p \to \mathbb{R}_+$ is decomposable with respect to $(\mathcal{S}, \bar{\mathcal{S}})$ if
$$\mathcal{R}(u + v) = \mathcal{R}(u) + \mathcal{R}(v), \quad \text{for any } u \in \mathcal{S},\ v \in \bar{\mathcal{S}}^\perp.$$
Typically, the structure of the model parameter $\theta^*$ can be characterized by specifying a subspace $\mathcal{S}$ such that $\theta^* \in \mathcal{S}$. The common use of a regularizer is thus to penalize the components of a solution that live outside $\mathcal{S}$. We are interested in bounding the estimation error in some norm $\|\cdot\|$. The following quantity is critical in connecting $\mathcal{R}$ to $\|\cdot\|$.
Definition 2 (Subspace Compatibility Constant). For any subspace $\mathcal{S} \subseteq \mathbb{R}^p$, a given regularizer $\mathcal{R}$ and some norm $\|\cdot\|$, the subspace compatibility constant of $\mathcal{S}$ with respect to $\mathcal{R}$, $\|\cdot\|$ is given by
$$\Psi(\mathcal{S}) := \sup_{u\in\mathcal{S}\setminus\{0\}} \frac{\mathcal{R}(u)}{\|u\|}.$$
As is standard, the dual norm of $\mathcal{R}$ is defined as $\mathcal{R}^*(v) := \sup_{\mathcal{R}(u)\le 1} \langle u, v\rangle$. To simplify notation, we let $\|u\|_{\mathcal{R}} := \mathcal{R}(u)$ and $\|u\|_{\mathcal{R}^*} := \mathcal{R}^*(u)$.
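As a concrete standard instance (the $\ell_1$ case used throughout Section 5; this is textbook material, not specific to this paper): for a support set $A \subseteq \{1, \ldots, p\}$ with $|A| = s$, take $\mathcal{S} = \bar{\mathcal{S}} = \{u \in \mathbb{R}^p : u_j = 0 \text{ for } j \notin A\}$. Vectors in $\bar{\mathcal{S}}$ and $\bar{\mathcal{S}}^\perp$ have disjoint supports, so
$$\|u + v\|_1 = \|u\|_1 + \|v\|_1 \quad (u \in \bar{\mathcal{S}},\ v \in \bar{\mathcal{S}}^\perp), \qquad \Psi(\bar{\mathcal{S}}) = \sup_{u\in\bar{\mathcal{S}}\setminus\{0\}} \frac{\|u\|_1}{\|u\|_2} = \sqrt{s}$$
by Cauchy-Schwarz; this $\sqrt{s}$ is exactly the compatibility factor that shows up in the rates of Section 5.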
4.2 Conditions on $Q(\cdot|\cdot)$ and $Q_n(\cdot|\cdot)$
Next, we review three technical conditions, originally proposed by [1], on the population-level $Q(\cdot|\cdot)$ function, and then we give two important conditions that the empirical function $Q_n(\cdot|\cdot)$ must satisfy, including one that characterizes the statistical error.
It is well known that the performance of the EM algorithm is sensitive to initialization. Following the low-dimensional development in [1], our results are local, and apply to an $r$-neighborhood region around $\theta^*$: $\mathbb{B}(r; \theta^*) := \{u \in \Omega : \|u - \theta^*\| \le r\}$.
We first require that $Q(\cdot|\theta^*)$ is self-consistent as stated below. This is satisfied, in particular, when $\theta^*$ maximizes the population log-likelihood function, as happens in most settings of interest [12].
Condition 1 (Self-Consistency). Function $Q(\cdot|\theta^*)$ is self-consistent, namely
$$\theta^* = \arg\max_{\theta\in\Omega} Q(\theta|\theta^*).$$
We also require that the function $Q(\cdot|\cdot)$ satisfies a certain strong concavity condition and is smooth over $\Omega$.
Condition 2 (Strong Concavity and Smoothness $(\gamma, \mu, r)$). $Q(\cdot|\theta^*)$ is $\gamma$-strongly concave over $\Omega$, i.e.,
$$Q(\theta_2|\theta^*) - Q(\theta_1|\theta^*) - \langle \nabla Q(\theta_1|\theta^*),\, \theta_2 - \theta_1\rangle \le -\frac{\gamma}{2}\|\theta_2 - \theta_1\|^2, \quad \forall\, \theta_1, \theta_2 \in \Omega. \qquad (4.1)$$
For any $\theta \in \mathbb{B}(r; \theta^*)$, $Q(\cdot|\theta)$ is $\mu$-smooth over $\Omega$, i.e.,
$$Q(\theta_2|\theta) - Q(\theta_1|\theta) - \langle \nabla Q(\theta_1|\theta),\, \theta_2 - \theta_1\rangle \ge -\frac{\mu}{2}\|\theta_2 - \theta_1\|^2, \quad \forall\, \theta_1, \theta_2 \in \Omega. \qquad (4.2)$$
The next condition is key in guaranteeing that the curvature of $Q(\cdot|\theta)$ is similar to that of $Q(\cdot|\theta^*)$ when $\theta$ is close to $\theta^*$. It has also been called First-Order Stability in [1].
Condition 3 (Gradient Stability $(\tau, r)$). For any $\theta \in \mathbb{B}(r; \theta^*)$, we have
$$\|\nabla Q(M(\theta)|\theta) - \nabla Q(M(\theta)|\theta^*)\| \le \tau\|\theta - \theta^*\|.$$
The above condition only requires that the gradient be stable at the one point $M(\theta)$. This is sufficient for our analysis. In fact, for many concrete examples, one can verify a stronger version of Condition 3, namely $\|\nabla Q(\theta'|\theta) - \nabla Q(\theta'|\theta^*)\| \le \tau\|\theta - \theta^*\|$ for all $\theta' \in \mathbb{B}(r; \theta^*)$.
Next we require two conditions on the empirical function $Q_n(\cdot|\cdot)$, which is computed from a finite number of samples according to (2.1). Our first condition, parallel to Condition 2, imposes a curvature constraint on $Q_n(\cdot|\cdot)$. In order to guarantee that the estimation error $\|\theta^{(t)} - \theta^*\|$ in step $t$ of the EM algorithm is well controlled, we would like $Q_n(\cdot|\theta^{(t-1)})$ to be strongly concave at $\theta^*$. However, in the setting where $n \ll p$, there might exist directions along which $Q_n(\cdot|\theta^{(t-1)})$ is flat, e.g., as in mixed linear regression and missing covariate regression. In contrast with Condition 2, we only require $Q_n(\cdot|\cdot)$ to be strongly concave over a particular set $\mathcal{C}(\mathcal{S}, \bar{\mathcal{S}}; \mathcal{R})$ that is defined in terms of the subspace pair $(\mathcal{S}, \bar{\mathcal{S}})$ and the regularizer $\mathcal{R}$. This set is defined as follows:
$$\mathcal{C}(\mathcal{S}, \bar{\mathcal{S}}; \mathcal{R}) := \left\{u \in \mathbb{R}^p : \big\|\Pi_{\bar{\mathcal{S}}^\perp}(u)\big\|_{\mathcal{R}} \le 2\big\|\Pi_{\bar{\mathcal{S}}}(u)\big\|_{\mathcal{R}} + 2\Psi(\bar{\mathcal{S}})\|u\|\right\}, \qquad (4.3)$$
where the projection operator $\Pi_{\mathcal{S}} : \mathbb{R}^p \to \mathbb{R}^p$ is defined as $\Pi_{\mathcal{S}}(u) := \arg\min_{v\in\mathcal{S}} \|v - u\|$. The restricted strong concavity (RSC) condition is as follows.
Condition 4 (RSC $(\gamma_n, \mathcal{S}, \bar{\mathcal{S}}, r, \delta)$). For any fixed $\theta \in \mathbb{B}(r; \theta^*)$, with probability at least $1 - \delta$, we have that for all $\theta' - \theta^* \in \mathcal{C}(\mathcal{S}, \bar{\mathcal{S}}; \mathcal{R})$,
$$Q_n(\theta'|\theta) - Q_n(\theta^*|\theta) - \langle \nabla Q_n(\theta^*|\theta),\, \theta' - \theta^*\rangle \le -\frac{\gamma_n}{2}\|\theta' - \theta^*\|^2.$$
The above condition states that $Q_n(\cdot|\theta)$ is strongly concave in the directions $\theta' - \theta^*$ that belong to $\mathcal{C}(\mathcal{S}, \bar{\mathcal{S}}; \mathcal{R})$. It is instructive to compare Condition 4 with a related condition proposed by [14] for analyzing high-dimensional M-estimators. They require the loss function to be strongly convex over the cone $\{u \in \mathbb{R}^p : \|\Pi_{\bar{\mathcal{S}}^\perp}(u)\|_{\mathcal{R}} \lesssim \|\Pi_{\bar{\mathcal{S}}}(u)\|_{\mathcal{R}}\}$. Therefore our restrictive set (4.3) is similar to that cone but has the additional term $2\Psi(\bar{\mathcal{S}})\|u\|$. The main purpose of the term $2\Psi(\bar{\mathcal{S}})\|u\|$ is to allow the regularization parameter $\lambda_n$ to jointly control optimization and statistical error. We note that while Condition 4 is stronger than the usual RSC condition for M-estimators, in typical settings the difference is immaterial. This is because $\|\Pi_{\bar{\mathcal{S}}}(u)\|_{\mathcal{R}}$ is within a constant factor of $\Psi(\bar{\mathcal{S}})\|u\|$, and hence checking RSC over $\mathcal{C}$ amounts to checking it over $\|\Pi_{\bar{\mathcal{S}}^\perp}(u)\|_{\mathcal{R}} \lesssim \Psi(\bar{\mathcal{S}})\|u\|$, which is indeed what is typically also done in the M-estimator setting.
Finally, we establish the condition that characterizes the achievable statistical error.
Condition 5 (Statistical Error $(\Delta_n, r, \delta)$). For any fixed $\theta \in \mathbb{B}(r; \theta^*)$, with probability at least $1 - \delta$, we have
$$\big\|\nabla Q_n(\theta^*|\theta) - \nabla Q(\theta^*|\theta)\big\|_{\mathcal{R}^*} \le \Delta_n. \qquad (4.4)$$
This quantity replaces the term $\|M_n(\theta) - M(\theta)\|$ which appears in [1] and [20], and which presents problems in the high-dimensional regime.
4.3 Main Results
In this section, we provide the theoretical guarantees for a resampled version of our regularized EM algorithm: we split the whole dataset into $T$ pieces and use a fresh piece of data in each iteration of regularized EM. As in [1], resampling makes it possible to check that Conditions 4-5 are satisfied without requiring them to hold uniformly for all $\theta \in \mathbb{B}(r; \theta^*)$ with high probability. Our empirical results indicate that resampling is not in fact required and is an artifact of the analysis. We refer to this resampled version as Algorithm 2. In the sequel, we let $m := n/T$ denote the sample complexity in each iteration. We let $\alpha := \sup_{u\in\mathbb{R}^p\setminus\{0\}} \|u\|_*/\|u\|$, where $\|\cdot\|_*$ is the dual norm of $\|\cdot\|$.
For Algorithm 2, our main result is as follows. The proof is deferred to the Supplemental Material.
Theorem 1. Assume the model parameter $\theta^* \in \mathcal{S}$ and the regularizer $\mathcal{R}$ is decomposable with respect to $(\mathcal{S}, \bar{\mathcal{S}})$, where $\mathcal{S} \subseteq \bar{\mathcal{S}} \subseteq \mathbb{R}^p$. Assume $r > 0$ is such that $\mathbb{B}(r; \theta^*) \subseteq \Omega$. Further, assume the function $Q(\cdot|\cdot)$, defined in (2.2), is self-consistent and satisfies Conditions 2-3 with parameters $(\gamma, \mu, r)$ and $(\tau, r)$. Given $n$ samples and $T$ iterations, let $m := n/T$. Assume $Q_m(\cdot|\cdot)$, computed from any $m$ i.i.d. samples according to (2.1), satisfies Conditions 4-5 with parameters $(\gamma_m, \mathcal{S}, \bar{\mathcal{S}}, r, 0.5\delta/T)$ and $(\Delta_m, r, 0.5\delta/T)$. Let $\kappa^* := 5\tau/\gamma_m$, and assume $0 < \tau < \gamma$ and $0 < \kappa^* \le 3/4$. Define $\bar{\Delta} := r\gamma_m/[60\Psi(\bar{\mathcal{S}})]$ and assume $\Delta_m$ is such that $\Delta_m \le \bar{\Delta}$.
Consider Algorithm 2 with initialization $\theta^{(0)} \in \mathbb{B}(r; \theta^*)$ and with regularization parameters given by
$$\lambda_m^{(t)} = \kappa^t\,\frac{\gamma_m}{5\Psi(\bar{\mathcal{S}})}\,\|\theta^{(0)} - \theta^*\| + \frac{1-\kappa^t}{1-\kappa}\,\Delta, \quad t = 1, 2, \ldots, T \qquad (4.5)$$
for any $\Delta \in [3\Delta_m, 3\bar{\Delta}]$ and $\kappa \in [\kappa^*, 3/4]$. Then with probability at least $1 - \delta$, we have that for any $t \in [T]$,
$$\|\theta^{(t)} - \theta^*\| \le \kappa^t\,\|\theta^{(0)} - \theta^*\| + \frac{5}{\gamma_m}\,\frac{1-\kappa^t}{1-\kappa}\,\Psi(\bar{\mathcal{S}})\,\Delta. \qquad (4.6)$$
The estimation error is bounded by a term decaying linearly with the number of iterations $t$, which we can think of as the optimization error, plus a second term that characterizes the ultimate estimation error of our algorithm. With $T = O(\log n)$ and a suitable choice of $\Delta$ such that $\Delta = O(\Delta_{n/T})$, we bound the ultimate estimation error as
$$\|\theta^{(T)} - \theta^*\| \lesssim \frac{1}{(1-\kappa)\gamma_{n/T}}\,\Psi(\bar{\mathcal{S}})\,\Delta_{n/T}. \qquad (4.7)$$
We note that overestimating the initial error $\|\theta^{(0)} - \theta^*\|$ is not important, as it may slightly increase the overall number of iterations, but will not impact the ultimate estimation error.
The constraint $\Delta_m \lesssim r\gamma_m/\Psi(\bar{\mathcal{S}})$ ensures that $\theta^{(t)}$ is contained in $\mathbb{B}(r; \theta^*)$ for all $t \in [T]$. This constraint is quite mild in the sense that if $\Delta_m = \Omega(r\gamma_m/\Psi(\bar{\mathcal{S}}))$, then $\theta^{(0)}$ is already a decent estimator, with estimation error $O(\Psi(\bar{\mathcal{S}})\Delta_m/\gamma_m)$ that already matches our expectation.
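For intuition on where the schedule (4.5) comes from, unrolling the update (3.3) is a one-line derivation (using the geometric series):
$$\lambda_m^{(t)} = \kappa\,\lambda_m^{(t-1)} + \Delta = \kappa^t\,\lambda_m^{(0)} + \big(1 + \kappa + \cdots + \kappa^{t-1}\big)\Delta = \kappa^t\,\lambda_m^{(0)} + \frac{1-\kappa^t}{1-\kappa}\,\Delta,$$
so choosing $\lambda_m^{(0)} = \frac{\gamma_m}{5\Psi(\bar{\mathcal{S}})}\|\theta^{(0)} - \theta^*\|$ recovers (4.5): the first term tracks the geometrically decaying optimization error, and the second the accumulated statistical error.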
5 Examples: Applying the Theory
Now we introduce three well-known latent variable models. For each model, we first review the standard EM algorithm formulation and discuss the extension to the high-dimensional setting. Then we apply Theorem 1 to obtain the statistical guarantee of the regularized EM with data splitting (Algorithm 2). The key ingredient underlying these results is to check that the technical conditions in Section 4 hold for each model. We postpone these tedious details to the Supplemental Material.
5.1 Gaussian Mixture Model
We consider the balanced isotropic Gaussian mixture model (GMM) with two components, where the distribution of the random variables $(Y, Z) \in \mathbb{R}^p \times \{-1, 1\}$ is characterized as
$$\Pr(Y = y \,|\, Z = z) = \phi(y;\, z\cdot\theta^*,\, \sigma^2 I_p), \quad \Pr(Z = 1) = \Pr(Z = -1) = 1/2.$$
Here we use $\phi(\cdot\,;\mu, \Sigma)$ to denote the probability density function of $N(\mu, \Sigma)$. In this example, $Z$ is the latent variable that indicates the cluster id of each sample. Given $n$ i.i.d. samples $\{y_i\}_{i=1}^n$, the function $Q_n(\cdot|\cdot)$ defined in (2.1) corresponds to
$$Q_n^{GMM}(\theta'|\theta) = -\frac{1}{2n}\sum_{i=1}^{n}\Big[ w(y_i;\theta)\,\|y_i - \theta'\|_2^2 + \big(1 - w(y_i;\theta)\big)\,\|y_i + \theta'\|_2^2\Big], \qquad (5.1)$$
where $w(y;\theta) := \exp\!\big(-\tfrac{\|y-\theta\|_2^2}{2\sigma^2}\big)\Big[\exp\!\big(-\tfrac{\|y-\theta\|_2^2}{2\sigma^2}\big) + \exp\!\big(-\tfrac{\|y+\theta\|_2^2}{2\sigma^2}\big)\Big]^{-1}$. We assume $\theta^* \in \mathbb{B}_0(s; p) := \{u \in \mathbb{R}^p : |\mathrm{supp}(u)| \le s\}$. Naturally, we choose the regularizer $\mathcal{R}(\cdot)$ to be the $\ell_1$ norm. We define the signal-to-noise ratio $\mathrm{SNR} := \|\theta^*\|_2/\sigma$.
Corollary 1 (Sparse Recovery in GMM). There exist constants $\varrho, C$ such that if $\mathrm{SNR} \ge \varrho$, $n/T \ge \big[80C(\|\theta^*\|_\infty + \sigma)/\|\theta^*\|_2\big]^2\, s\log p$, and $\theta^{(0)} \in \mathbb{B}(\|\theta^*\|_2/4;\, \theta^*)$, then with probability at least $1 - T/p$, Algorithm 2 with parameters $\Delta = C(\|\theta^*\|_\infty + \sigma)\sqrt{T\log p/n}$, $\lambda_{n/T}^{(0)} = 0.2\,\|\theta^{(0)} - \theta^*\|_2/\sqrt{s}$, any $\kappa \in [1/2, 3/4]$ and $\ell_1$ regularization generates $\theta^{(t)}$ that has estimation error
$$\|\theta^{(t)} - \theta^*\|_2 \le \kappa^t\,\|\theta^{(0)} - \theta^*\|_2 + \frac{5C(\|\theta^*\|_\infty + \sigma)}{1-\kappa}\sqrt{\frac{s\log p}{n}\,T}, \quad \text{for all } t \in [T]. \qquad (5.2)$$
Note that by setting $T \asymp \log(n/\log p)$, the order of the final estimation error turns out to be $(\|\theta^*\|_\infty + \sigma)\sqrt{(s\log p)/n}\,\sqrt{\log(n/\log p)}$. The minimax rate for estimating an $s$-sparse vector in a single Gaussian cluster is $\sqrt{s\log p/n}$; thereby the rate is optimal in $(n, p, s)$ up to a log factor.
5.2 Mixed Linear Regression
Mixed linear regression (MLR), as considered in some recent work [5, 7, 22], is the problem of recovering two or more linear vectors from mixed linear measurements. In the case of mixed linear regression with two symmetric and balanced components, the response-covariate pair $(Y, X) \in \mathbb{R} \times \mathbb{R}^p$ is linked through
$$Y = \langle X,\, Z\cdot\beta^*\rangle + W,$$
where $W$ is the noise term and $Z$ is the latent variable that has a Rademacher distribution over $\{-1, 1\}$. We assume $X \sim N(0, I_p)$, $W \sim N(0, \sigma^2)$. In this setting, with $n$ i.i.d. samples $\{y_i, x_i\}_{i=1}^n$ of the pair $(Y, X)$, the function $Q_n(\cdot|\cdot)$ corresponds to
$$Q_n^{MLR}(\beta'|\beta) = -\frac{1}{2n}\sum_{i=1}^{n}\Big[w(y_i, x_i;\beta)\,\big(y_i - \langle x_i, \beta'\rangle\big)^2 + \big(1 - w(y_i, x_i;\beta)\big)\,\big(y_i + \langle x_i, \beta'\rangle\big)^2\Big], \qquad (5.3)$$
where $w(y, x;\beta) := \exp\!\big(-\tfrac{(y - \langle x,\beta\rangle)^2}{2\sigma^2}\big)\Big[\exp\!\big(-\tfrac{(y - \langle x,\beta\rangle)^2}{2\sigma^2}\big) + \exp\!\big(-\tfrac{(y + \langle x,\beta\rangle)^2}{2\sigma^2}\big)\Big]^{-1}$.
We consider two kinds of structure on $\beta^*$:
Sparse Recovery. Assume $\beta^* \in \mathbb{B}_0(s; p)$. Then let $\mathcal{R}$ be the $\ell_1$ norm, as in the previous section. We define $\mathrm{SNR} := \|\beta^*\|_2/\sigma$.
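Here, too, the regularized M-step is a familiar object: completing the square in (5.3) shows that $Q_n^{MLR}(\beta'|\beta)$ equals, up to constants, $-\frac{1}{2n}\sum_i(\tilde y_i - \langle x_i, \beta'\rangle)^2$ with pseudo-responses $\tilde y_i = (2w(y_i, x_i;\beta) - 1)\,y_i$, so the $\ell_1$-regularized M-step is a Lasso. A sketch (ours; the inner solver below is plain ISTA, chosen for self-containedness):

import numpy as np

def mlr_weights(y, X, beta, sigma):
    """E-step weights w(y_i, x_i; beta) for symmetric 2-component MLR (5.3):
    by the same algebra as the GMM case, w = sigmoid(2 y <x, beta> / sigma^2)."""
    r = X @ beta
    return 1.0 / (1.0 + np.exp(-2.0 * y * r / sigma**2))

def mlr_regularized_m_step(y, X, beta, sigma, lam, n_iter=200):
    """Regularized M-step as a Lasso on pseudo-responses, solved by ISTA."""
    n, p = X.shape
    w = mlr_weights(y, X, beta, sigma)
    y_t = (2.0 * w - 1.0) * y
    L = np.linalg.norm(X, 2)**2 / n       # Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y_t) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b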
Corollary 2 (Sparse recovery in MLR). There exist constants $\varrho, C, C'$ such that if $\mathrm{SNR} \ge \varrho$, $n/T \ge C'\big[(\|\beta^*\|_2 + \sigma)/\|\beta^*\|_2\big]^2\, s\log p$, and $\beta^{(0)} \in \mathbb{B}(\|\beta^*\|_2/240;\, \beta^*)$, then with probability at least $1 - T/p$, Algorithm 2 with parameters $\Delta = C(\|\beta^*\|_2 + \sigma)\sqrt{T\log p/n}$, $\lambda_{n/T}^{(0)} = \|\beta^{(0)} - \beta^*\|_2/(15\sqrt{s})$, any $\kappa \in [1/2, 3/4]$ and $\ell_1$ regularization generates $\beta^{(t)}$ that has estimation error
$$\|\beta^{(t)} - \beta^*\|_2 \le \kappa^t\,\|\beta^{(0)} - \beta^*\|_2 + \frac{15C(\|\beta^*\|_2 + \sigma)}{1-\kappa}\sqrt{\frac{s\log p}{n}\,T}, \quad \text{for all } t \in [T].$$
Performing $T \asymp \log(n/(s\log p))$ iterations gives us the estimation rate $(\|\beta^*\|_2 + \sigma)\sqrt{(s\log p/n)\log(n/(s\log p))}$, which is near-optimal in $(s, p, n)$. The dependence on $\|\beta^*\|_2$, which also appears in the analysis of EM in the classical (low-dimensional) setting [1], arises from fundamental limits of EM. Removing such dependence for MLR is possible by convex relaxation [7]. It is interesting to study how to remove it in the high-dimensional setting.
Low Rank Recovery. Second, we consider the setting where the model parameter is a matrix $\Theta^* \in \mathbb{R}^{p_1\times p_2}$ with $\mathrm{rank}(\Theta^*) = \vartheta \ll \min(p_1, p_2)$. We further assume $X \in \mathbb{R}^{p_1\times p_2}$ is an i.i.d. Gaussian matrix, i.e., the entries of $X$ are independent random variables with distribution $N(0, 1)$. We apply nuclear norm regularization to serve the low-rank structure, i.e., $\mathcal{R}(\Theta) = \sum_{i=1}^{\min(p_1,p_2)} |s_i(\Theta)|$, where $s_i(\Theta)$ is the $i$th singular value of $\Theta$. Similarly, we let $\mathrm{SNR} := \|\Theta^*\|_F/\sigma$.
Corollary 3 (Low-rank recovery in MLR). There exist constants $\varrho, C, C'$ such that if $\mathrm{SNR} \ge \varrho$, $n/T \ge C'\big[(\|\Theta^*\|_F + \sigma)/\|\Theta^*\|_F\big]^2\, \vartheta(p_1 + p_2)$, and $\Theta^{(0)} \in \mathbb{B}(\|\Theta^*\|_F/1600;\, \Theta^*)$, then with probability at least $1 - T\exp(-p_1 - p_2)$, Algorithm 2 with parameters $\Delta = C(\|\Theta^*\|_F + \sigma)\sqrt{T(p_1 + p_2)/n}$, $\lambda_{n/T}^{(0)} = 0.01\,\|\Theta^{(0)} - \Theta^*\|_F/\sqrt{2\vartheta}$, any $\kappa \in [1/2, 3/4]$ and nuclear norm regularization generates $\Theta^{(t)}$ that has estimation error
$$\|\Theta^{(t)} - \Theta^*\|_F \le \kappa^t\,\|\Theta^{(0)} - \Theta^*\|_F + \frac{100C'(\|\Theta^*\|_F + \sigma)}{1-\kappa}\sqrt{\frac{2\vartheta(p_1 + p_2)}{n}\,T}, \quad \text{for all } t \in [T].$$
The standard low-rank matrix recovery problem with a single component, including other sensing matrix designs beyond Gaussianity, has been studied extensively (e.g., [2, 4, 13, 15]). To the best of our knowledge, the theoretical study of mixed low-rank matrix recovery has not been considered.
5.3 Missing Covariate Regression
As our last example, we consider the missing covariate regression (MCR) problem. To parallel standard linear regression, $\{y_i, x_i\}_{i=1}^n$ are samples of $(Y, X)$ linked through $Y = \langle X, \beta^*\rangle + W$. However, we assume each entry of $x_i$ is missing independently with probability $\epsilon \in (0, 1)$. Therefore, the observed covariate vector $\tilde{x}_i$ takes the form
$$\tilde{x}_{i,j} = \begin{cases} x_{i,j} & \text{with probability } 1 - \epsilon, \\ * & \text{otherwise.} \end{cases}$$
We assume the model is under the Gaussian design $X \sim N(0, I_p)$, $W \sim N(0, \sigma^2)$. We refer the reader to our Supplementary Material for the specific $Q_n(\cdot|\cdot)$ function. In the high-dimensional case, we assume $\beta^* \in \mathbb{B}_0(s; p)$. We define $\omega := \|\beta^*\|_2/\sigma$ to be the SNR and $\varsigma := r/\|\beta^*\|_2$ to be the relative contractivity radius. In particular, let $\zeta := (1 + \varsigma)\omega$.
Corollary 4 (Sparse Recovery in MCR). There exist constants $C, C', C_0, C_1$ such that if $(1 + \varsigma)\omega \le C_0 < 1$, $\epsilon < C_1$, $n/T \ge C'\max\{\omega^2(\varsigma\omega)^{-1}, 1\}\, s\log p$, and $\beta^{(0)} \in \mathbb{B}(\varsigma\|\beta^*\|_2;\, \beta^*)$, then with probability at least $1 - T/p$, Algorithm 2 with parameters $\Delta = C\sigma\sqrt{T\log p/n}$, $\lambda_{n/T}^{(0)} = \|\beta^{(0)} - \beta^*\|_2/(45\sqrt{s})$, any $\kappa \in [1/2, 3/4]$ and $\ell_1$ regularization generates $\beta^{(t)}$ that has estimation error
$$\|\beta^{(t)} - \beta^*\|_2 \le \kappa^t\,\|\beta^{(0)} - \beta^*\|_2 + \frac{45C\sigma}{1-\kappa}\sqrt{\frac{s\log p}{n}\,T}, \quad \text{for all } t \in [T].$$
Unlike the previous two models, we require an upper bound on the signal-to-noise ratio. This unusual constraint is in fact unavoidable [10]. By optimizing $T$, the order of the final estimation error turns out to be $\sigma\sqrt{(s\log p/n)\log(n/(s\log p))}$.
6 Simulations
We now provide some simulation results to back up our theory. Note that while Theorem 1 requires resampling, we believe that in practice this is unnecessary. This is validated by our results, where we apply Algorithm 1 to the four latent variable models discussed in Section 5.
Convergence Rate. We first evaluate the convergence of Algorithm 1, assuming only that the initialization is a bounded distance from $\theta^*$. For a given relative error $\xi$, the initial parameter $\theta^{(0)}$ is picked randomly from the sphere centered at $\theta^*$ with radius $\xi\|\theta^*\|_2$. We use Algorithm 1 with $T = 7$, $\kappa = 0.7$, and $\lambda_n^{(0)}$ as in Theorem 1. The choice of the critical parameter $\Delta$ is given in the Supplementary Material. For every single trial, we report the estimation error $\|\theta^{(t)} - \theta^*\|_2$ and the optimization error $\|\theta^{(t)} - \theta^{(T)}\|_2$ at every iteration. We plot the log of the errors over the iteration $t$ in Figure 1.
[Figure 1 shows four panels, (a) GMM, (b) MLR (sparse), (c) MLR (low rank), and (d) MCR, each plotting the log estimation error and the log optimization error (legend: Est error, Opt error) against the number of iterations.]
Figure 1: Convergence of the regularized EM algorithm. In each panel, one curve is plotted for each independent trial. Settings: (a,b,d) $(n, p, s) = (500, 800, 5)$; (c) $(n, p, \vartheta) = (600, 30, 3)$; (a-c) $\mathrm{SNR} = 5$; (d) $(\mathrm{SNR}, \epsilon) = (0.5, 0.2)$; (a-d) $\kappa = 0.5$.
Statistical Rate. We now evaluate the statistical rate. We set $T = 7$ and compute the estimation error of $\hat\theta := \theta^{(T)}$. In Figure 2, we plot $\|\hat\theta - \theta^*\|_2$ over the normalized sample complexity, i.e., $n/(s\log p)$ for an $s$-sparse parameter and $n/(\vartheta p)$ for a rank-$\vartheta$ $p$-by-$p$ parameter. We refer the reader to Figure 1 for the other settings. We observe that the same normalized sample complexity leads to almost identical estimation error in practice, which thus supports the corresponding statistical rates established in Section 5.
[Figure 2 shows four panels, (a) GMM, (b) MLR (sparse), (c) MLR (low rank), and (d) MCR, plotting the final estimation error ($\|\hat\theta - \theta^*\|_2$, or $\|\hat\Theta - \Theta^*\|_F$ in panel (c)) against the normalized sample complexity $n/(s\log p)$ (or $n/(\vartheta p)$ in panel (c)), for several values of $p$.]
Figure 2: Statistical rates. Each point is an average of 20 independent trials. Settings: (a,b,d) $s = 5$; (c) $\vartheta = 3$.
Acknowledgments
The authors would like to acknowledge NSF grants 1056028, 1302435 and 1116955. This research
was also partially supported by the U.S. Department of Transportation through the Data-Supported
Transportation Operations and Planning (D-STOP) Tier 1 University Transportation Center.
References
[1] Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[2] T. Tony Cai and Anru Zhang. ROP: Matrix recovery via rank-one projections. The Annals of Statistics, 43(1):102-138, 2015.
[3] Emmanuel Candes and Terence Tao. The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics, pages 2313-2351, 2007.
[4] Emmanuel J. Candès and Yaniv Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory, 57(4):2342-2359, 2011.
[5] Arun Tejasvi Chaganty and Percy Liang. Spectral experts for estimating mixtures of linear regressions. arXiv preprint arXiv:1306.3729, 2013.
[6] Yudong Chen, Sujay Sanghavi, and Huan Xu. Improved graph clustering. IEEE Transactions on Information Theory, 60(10):6440-6455, October 2014.
[7] Yudong Chen, Xinyang Yi, and Constantine Caramanis. A convex formulation for mixed regression with two components: Minimax optimal rates. In Conference on Learning Theory, 2014.
[8] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), pages 1-38, 1977.
[9] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. In Advances in Neural Information Processing Systems, pages 2726-2734, 2011.
[10] Po-Ling Loh and Martin J. Wainwright. Corrupted and missing predictors: Minimax bounds for high-dimensional linear regression. In 2012 IEEE International Symposium on Information Theory Proceedings (ISIT), pages 2601-2605. IEEE, 2012.
[11] Jinwen Ma and Lei Xu. Asymptotic convergence properties of the EM algorithm with respect to the overlap in the mixture. Neurocomputing, 68:105-129, 2005.
[12] Geoffrey McLachlan and Thriyambakam Krishnan. The EM Algorithm and Extensions, volume 382. John Wiley & Sons, 2007.
[13] Sahand Negahban and Martin J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069-1097, 2011.
[14] Sahand Negahban, Bin Yu, Martin J. Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, pages 1348-1356, 2009.
[15] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[16] Nicolas Städler, Peter Bühlmann, and Sara van de Geer. L1-penalization for mixture regression models. Test, 19(2):209-256, 2010.
[17] Paul Tseng. An analysis of the EM algorithm and entropy-like proximal point methods. Mathematics of Operations Research, 29(1):27-44, 2004.
[18] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[19] Martin J. Wainwright. Structured regularizers for high-dimensional problems: Statistical and computational issues. Annual Review of Statistics and Its Application, 1:233-253, 2014.
[20] Zhaoran Wang, Quanquan Gu, Yang Ning, and Han Liu. High dimensional expectation-maximization algorithm: Statistical optimization and asymptotic normality. arXiv preprint arXiv:1412.8729, 2014.
[21] C. F. Jeff Wu. On the convergence properties of the EM algorithm. The Annals of Statistics, pages 95-103, 1983.
[22] Xinyang Yi, Constantine Caramanis, and Sujay Sanghavi. Alternating minimization for mixed linear regression. arXiv preprint arXiv:1310.3745, 2013.
5,215 | 5,721 | Black-box optimization of noisy functions with
unknown smoothness
Jean-Bastien Grill
Michal Valko
SequeL team, INRIA Lille - Nord Europe, France
jean-bastien.grill@inria.fr
michal.valko@inria.fr
Rémi Munos
Google DeepMind, UK (on leave from the SequeL team, INRIA Lille - Nord Europe, France)
munos@google.com
Abstract
We study the problem of black-box optimization of a function f of any dimension, given function evaluations perturbed by noise. The function is assumed to
be locally smooth around one of its global optima, but this smoothness is unknown. Our contribution is an adaptive optimization algorithm, POO or parallel
optimistic optimization, that is able to deal with this setting. POO performs almost
as well as the best known algorithms requiring the knowledge of the smoothness.
Furthermore, POO works for a larger class of functions than what was previously
considered, especially for functions that are difficult to optimize, in a very precise
sense. We provide a finite-time analysis of POO's performance, which shows that its error after $n$ evaluations is at most a factor of $\sqrt{\ln n}$ away from the error of the best known optimization algorithms using the knowledge of the smoothness.
1 Introduction
We treat the problem of optimizing a function $f : \mathcal{X} \to \mathbb{R}$ given a finite budget of $n$ noisy evaluations. We consider that the cost of any of these function evaluations is high. That means we care about assessing the optimization performance in terms of the sample complexity, i.e., the number $n$ of function evaluations. This is typically the case when one needs to tune parameters for a complex system seen as a black box, whose performance can only be evaluated by a costly simulation. One such example is hyper-parameter tuning where the sensitivity to perturbations is large and the derivatives of the objective function with respect to these parameters do not exist or are unknown.
Such a setting fits the sequential decision-making setting under bandit feedback. In this setting, the actions are the points that lie in a domain $\mathcal{X}$. At each step $t$, an algorithm selects an action $x_t \in \mathcal{X}$ and receives a reward $r_t$, which is a noisy function evaluation such that $r_t = f(x_t) + \epsilon_t$, where $\epsilon_t$ is a bounded noise with $\mathbb{E}[\epsilon_t \,|\, x_t] = 0$. After $n$ evaluations, the algorithm outputs its best guess $x(n)$, which can be different from $x_n$. The performance measure we want to minimize is the value of the function at the returned point compared to the optimum, also referred to as the simple regret,
$$R_n \stackrel{\text{def}}{=} \sup_{x\in\mathcal{X}} f(x) - f(x(n)).$$
We assume there exists at least one point $x^* \in \mathcal{X}$ such that $f(x^*) = \sup_{x\in\mathcal{X}} f(x)$.
The relationship with bandit settings motivated UCT [10, 8], an empirically successful heuristic that hierarchically partitions the domain $\mathcal{X}$ and selects the next point $x_t \in \mathcal{X}$ using upper confidence bounds [1]. The empirical success of UCT on one side, but the absence of performance guarantees for it on the other, incited research on similar but theoretically founded algorithms [4, 9, 12, 2, 6].
As the global optimization of the unknown function without absolutely any assumptions would be a daunting needle-in-a-haystack problem, most of the algorithms assume at least a very weak
assumption that the function does not decrease faster than a known rate around one of its global optima. In other words, they assume a certain local smoothness property of $f$. This smoothness is often expressed in the form of a semi-metric $\ell$ that quantifies this regularity [4]. Naturally, this regularity also influences the guarantees that these algorithms are able to furnish. Many of them define a near-optimality dimension $d$ or a zooming dimension. These are $\ell$-dependent quantities used to bound the simple regret $R_n$ or a related notion called cumulative regret.
Our work focuses on a notion of near-optimality dimension $d$ that does not directly relate the smoothness property of $f$ to a specific metric $\ell$ but directly to the hierarchical partitioning $\mathcal{P} = \{P_{h,i}\}$, a tree-based representation of the space used by the algorithm. Indeed, an interesting fundamental question is to determine a good characterization of the difficulty of the optimization for an algorithm that uses a given hierarchical partitioning of the space $\mathcal{X}$ as its input. The kind of hierarchical partitioning $\{P_{h,i}\}$ we consider is similar to the ones introduced in prior work: for any depth $h \ge 0$ in the tree representation, the set of cells $\{P_{h,i}\}_{1\le i\le I_h}$ forms a partition of $\mathcal{X}$, where $I_h$ is the number of cells at depth $h$. At depth 0, the root of the tree, there is a single cell $P_{0,1} = \mathcal{X}$. A cell $P_{h,i}$ of depth $h$ is split into several children subcells $\{P_{h+1,j}\}_j$ of depth $h + 1$. We refer to the standard partitioning as the one where each cell is split into regular same-sized subcells [13].
An important insight, detailed in Section 2, is that a near-optimality dimension $d$ that is independent of the partitioning used by an algorithm (as defined in prior work [4, 9, 2]) does not embody the optimization difficulty perfectly. This is easy to see, as for any $f$ we could define a partitioning perfectly suited for $f$. An example is a partitioning that at the root splits $\mathcal{X}$ into $\{x^*\}$ and $\mathcal{X}\setminus\{x^*\}$, which makes the optimization trivial, whatever $d$ is. This insight was already observed by Slivkins [14] and Bull [6], whose zooming dimension depends both on the function and the partitioning.
In this paper, we define a notion of near-optimality dimension $d$ which measures the complexity of the optimization problem directly in terms of the partitioning used by an algorithm. First, we make the following local smoothness assumption about the function, expressed in terms of the partitioning and not any metric: For a given partitioning $\mathcal{P}$, we assume that there exist $\nu > 0$ and $\rho \in (0, 1)$, such that
$$\forall h \ge 0,\ \forall x \in P_{h,i_h^*},\quad f(x) \ge f(x^*) - \nu\rho^h,$$
where $(h, i_h^*)$ is the (unique) cell of depth $h$ containing $x^*$. Then, we define the near-optimality dimension $d(\nu, \rho)$ as
$$d(\nu, \rho) \stackrel{\text{def}}{=} \inf\left\{d' \in \mathbb{R}_+ : \exists C > 0,\ \forall h \ge 0,\ \mathcal{N}_h(2\nu\rho^h) \le C\rho^{-d'h}\right\},$$
where for all $\epsilon > 0$, $\mathcal{N}_h(\epsilon)$ is the number of cells $P_{h,i}$ of depth $h$ such that $\sup_{x\in P_{h,i}} f(x) \ge f(x^*) - \epsilon$.
Intuitively, functions with smaller $d$ are easier to optimize, and we denote the pair $(\nu, \rho)$ for which $d(\nu, \rho)$ is the smallest by $(\nu^*, \rho^*)$. Obviously, $d(\nu, \rho)$ depends on $\mathcal{P}$ and $f$, but does not depend on any choice of a specific metric. In Section 2, we argue that this definition of $d$ (for clarity, we use the simplified notation $d$ instead of $d(\nu, \rho)$ when no confusion is possible) encompasses the optimization complexity better. We stress that this is not an artifact of our analysis, and previous algorithms, such as HOO [4], TaxonomyZoom [14], or HCT [2], can be shown to scale with this new notion of $d$.
Most of the prior bandit-based algorithms proposed for function optimization, for either the deterministic or the stochastic setting, assume that the smoothness of the optimized function is known. This is the case of a known semi-metric [4, 2] or pseudo-metric [9]. This assumption limits the applicability of these algorithms and opened a very compelling question of whether this knowledge is necessary.
Prior work responded with algorithms not requiring this knowledge. Bubeck et al. [5] provided an algorithm for the optimization of Lipschitz functions without the knowledge of the Lipschitz constant. However, they have to assume that $f$ is twice differentiable and that a bound on the second-order derivative is known. Combes and Proutière [7] treat unimodal $f$ restricted to dimension one. Slivkins [14] considered a general optimization problem embedded in a taxonomy (similar to the hierarchical partitioning previously defined) and provided guarantees as a function of the quality of the taxonomy. The quality refers to the probability of reaching two cells belonging to the same branch that can have values that differ by more than half of the diameter (expressed by the true metric) of the branch. The problem is that the algorithm needs a lower bound on this quality (which can be tiny) and the performance depends inversely on this quantity. Also, it assumes that the quality is strictly positive. In this paper, we do not rely on the knowledge of the quality and also consider a more general class of functions for which the quality can be 0 (Appendix E).
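To make the definition of $d(\nu, \rho)$ concrete, consider a small worked instance (ours, for illustration): $f(x) = -|x - x^*|$ on $[0, 1]$ with the standard binary partitioning, $\nu = 1$, $\rho = 1/2$. A depth-$h$ cell has width $2^{-h}$ and satisfies $\sup_{x\in P_{h,i}} f(x) \ge -\epsilon$ exactly when its distance to $x^*$ is at most $\epsilon$; with $\epsilon = 2\nu\rho^h = 2^{1-h}$, all such cells lie in an interval of length $2\epsilon + 2\cdot 2^{-h} = 6\cdot 2^{-h}$ around $x^*$, so
$$\mathcal{N}_h(2\nu\rho^h) \le \frac{6\cdot 2^{-h}}{2^{-h}} + 1 = 7 \quad\text{for all } h,$$
a constant, hence $d(1, 1/2) = 0$. The function of Figure 1 below is, in this sense, a hard instance: its two envelopes of different smoothness force $d > 0$ for the standard partitioning.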
[Figure 1 shows two panels: on the left, the function $f(x)$ plotted over $[0, 1]$; on the right, the simple regret of HOO after 5000 evaluations as a function of $\rho$.]
Figure 1: Difficult function $f : x \mapsto s\big(\log_2|x - 0.5|\big)\cdot\big(\sqrt{|x - 0.5|} - (x - 0.5)^2\big) - \sqrt{|x - 0.5|}$, where $s(x) = 1$ if the fractional part of $x$, that is, $x - \lfloor x\rfloor$, is in $[0, 0.5]$, and $s(x) = 0$ if it is in $(0.5, 1)$. Left: Oscillation between two envelopes of different smoothness, leading to a nonzero $d$ for a standard partitioning. Right: Regret of HOO after 5000 evaluations for different values of $\rho$.
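To make the figure reproducible, here is a direct NumPy transcription of $f$ (ours; the point $x = 0.5$ is handled separately since $\log_2 0$ is undefined):

import numpy as np

def difficult_f(x):
    """The two-envelope function of Figure 1: it oscillates between the
    envelopes -(x - 0.5)^2 and -sqrt(|x - 0.5|) as x approaches x* = 0.5."""
    u = np.abs(np.asarray(x, dtype=float) - 0.5)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.log2(u)                     # -inf at the maximizer itself
        s = (np.mod(t, 1.0) <= 0.5)        # s(t) = 1 iff frac(t) in [0, 0.5]
    return np.where(u == 0.0, 0.0, s * (np.sqrt(u) - u**2) - np.sqrt(u))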
Another direction has been followed by Munos [11], where in the deterministic case (the function evaluations are not perturbed by noise), the SOO algorithm performs almost as well as the best known algorithms without the knowledge of the function smoothness. SOO was later extended to StoSOO [15] for the stochastic case. However, StoSOO only extends SOO to a limited case of easy instances of functions for which there exists a semi-metric under which $d = 0$. Also, Bull [6] provided a similar regret bound for the ATB algorithm for a class of functions, called zooming continuous functions, which is related to the class of functions for which there exists a semi-metric under which the near-optimality dimension is $d = 0$. But none of the prior work considers a more general class of functions where there is no semi-metric adapted to the standard partitioning for which $d = 0$.
To give an example of a difficult function, consider the function in Figure 1. It possesses a lower and an upper envelope around its global optimum that are equivalent to $x^2$ and $\sqrt{x}$, and that therefore have different smoothness. Thus, for a standard partitioning, there is no semi-metric of the form $\ell(x, y) = \|x - y\|^\alpha$ for which the near-optimality dimension is $d = 0$, as shown by Valko et al. [15]. Other examples of nonzero near-optimality dimension are functions that, for a standard partitioning, behave differently depending on the direction, for instance $f : (x, y) \mapsto 1 - |x| - y^2$.
Using a bad value for the $\rho$ parameter can have dramatic consequences on the simple regret. In Figure 1, we show the simple regret after 5000 function evaluations for different values of $\rho$. For values of $\rho$ that are too low, the algorithm does not explore enough and gets stuck in a local maximum, while for values of $\rho$ that are too high, the algorithm wastes evaluations by exploring too much.
In this paper, we provide a new algorithm, POO, parallel optimistic optimization, which competes with the best algorithms that assume the knowledge of the function smoothness, for a larger class of functions than was previously done. Indeed, POO handles a panoply of functions, including hard instances, i.e., such that $d > 0$, like the function illustrated above. We also recover the results of StoSOO and ATB for functions with $d = 0$. In particular, we bound POO's simple regret as
$$\mathbb{E}[R_n] \le O\Big(\big(\ln^2 n/n\big)^{1/(2+d(\nu^*,\rho^*))}\Big).$$
This result should be compared to the simple regret of the best known algorithms that use the knowledge of the metric under which the function is smooth, or equivalently $(\nu, \rho)$, which is of the order of $O\big((\ln n/n)^{1/(2+d)}\big)$. Thus, POO's performance is at most a factor of $(\ln n)^{1/(2+d)}$ away from that of the best known optimization algorithms that require the knowledge of the function smoothness. Interestingly, this factor decreases with the complexity measure $d$: the harder the function is to optimize, the less important it is to know its precise smoothness.
2 Background and assumptions
2.1 Hierarchical optimistic optimization
POO optimizes functions without the knowledge of their smoothness using a subroutine: an anytime algorithm that optimizes functions using the knowledge of their smoothness. In this paper, we use a modified version of HOO [4] as such a subroutine. Therefore, we embark on a quick review of HOO. HOO follows an optimistic strategy close to UCT [10], but unlike UCT, it uses proper confidence bounds to provide theoretical guarantees. HOO refines a partition of the space based on a hierarchical partitioning, where at each step, a yet unexplored cell (a leaf of the corresponding tree) is selected,
and the function is evaluated at a point within this cell. The selected path (from the root to the leaf) is the one that maximizes the minimum value $U_{h,i}(t)$ among all cells of each depth, where the value $U_{h,i}(t)$ of any cell $P_{h,i}$ is defined as
$$U_{h,i}(t) = \hat\mu_{h,i}(t) + \sqrt{\frac{2\ln t}{N_{h,i}(t)}} + \nu\rho^h,$$
where $t$ is the number of evaluations done so far, $\hat\mu_{h,i}(t)$ is the empirical average of all evaluations done within $P_{h,i}$, and $N_{h,i}(t)$ is the number of them. The second term in the definition of $U_{h,i}(t)$ is a Chernoff-Hoeffding-type confidence interval, measuring the estimation error induced by the noise. The third term, $\nu\rho^h$ with $\rho \in (0, 1)$, is, by assumption, a bound on the difference $f(x^*) - f(x)$ for any $x \in P_{h,i_h^*}$, a cell containing $x^*$. It is this bound where HOO relies on the knowledge of the smoothness, because the algorithm requires the values of $\nu$ and $\rho$. In the next sections, we clarify the assumptions made by HOO vs. related algorithms and point out the differences with POO.
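The $U$-value above is a one-liner; the following sketch (ours, omitting HOO's tree bookkeeping) transcribes it directly, with the usual convention that an unexplored cell gets an infinite, i.e., maximally optimistic, value:

import math

def u_value(mean, count, t, nu, rho, depth):
    """Upper confidence value U_{h,i}(t) of a cell, as in the display above."""
    if count == 0:
        return math.inf                    # unexplored cells are fully optimistic
    return mean + math.sqrt(2.0 * math.log(t) / count) + nu * rho**depth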
2.2 Assumptions made in prior work
Most of the previous work relies on the knowledge of a semi-metric on $\mathcal{X}$ such that the function is either locally smooth near one of its maxima with respect to this metric [11, 15, 2], or satisfies a stronger, weakly-Lipschitz assumption [4, 12, 2]. Furthermore, Kleinberg et al. [9] assume a full metric. Note that a semi-metric does not require the triangle inequality to hold. For instance, consider the semi-metric $\ell(x, y) = \|x - y\|^\alpha$ on $\mathbb{R}^p$ with $\|\cdot\|$ being the Euclidean metric. When $\alpha > 1$, this semi-metric does not satisfy the triangle inequality; however, it is a metric for $\alpha \le 1$. Therefore, using only a semi-metric allows us to consider a larger class of functions.
Prior work typically requires two assumptions. The first one is on the semi-metric $\ell$ and the function. An example is the weakly-Lipschitz assumption needed by Bubeck et al. [4], which requires that
$$\forall x, y \in \mathcal{X},\quad f(x^*) - f(y) \le f(x^*) - f(x) + \max\{f(x^*) - f(x),\ \ell(x, y)\}.$$
It is a weak version of a Lipschitz condition, restricting $f$ in particular for the values close to $f(x^*)$. More recent results [11, 15, 2] assume only a local smoothness around one of the function maxima,
$$\forall x \in \mathcal{X},\quad f(x^*) - f(x) \le \ell(x^*, x).$$
The second common assumption links the hierarchical partitioning with the semi-metric. It requires the partitioning to be adapted to the (semi-)metric. More precisely, the well-shaped assumption states that there exist $\rho < 1$ and $\nu_1 \ge \nu_2 > 0$ such that for any depth $h \ge 0$ and index $i = 1, \ldots, I_h$, the subset $P_{h,i}$ is contained in an open ball of radius $\nu_1\rho^h$ and contains an open ball of radius $\nu_2\rho^h$, where the balls are with respect to the same semi-metric used in the definition of the function smoothness.
"Local smoothness" is weaker than "weakly Lipschitz" and therefore preferable. Algorithms requiring the local-smoothness assumption always sample a cell $P_{h,i}$ at a special representative point and, in the stochastic case, collect several function evaluations from the same point before splitting the cell. This is not the case of HOO, which allows sampling any point inside the selected cell and expanding each cell after one sample. This additional flexibility comes at the price of requiring the stronger weakly-Lipschitz assumption. Nevertheless, although HOO does not wait before expanding a cell, it does something similar by selecting a path from the root to this leaf that maximizes the minimum of the $U$-value over the cells of the path, as mentioned in Section 2.1. The fact that HOO follows an optimistic strategy even after reaching the cell that possesses the minimal $U$-value along the path is not used in the analysis of the HOO algorithm.
Furthermore, a reason for the better dependency on the smoothness in other algorithms, e.g., HCT [2], is not only algorithmic: HCT needs to assume a slightly stronger condition on the cells, i.e., that the single center of the two balls (one that covers the cell and one that is contained in it) is actually the same point that HCT uses for sampling. This is stronger than just assuming that there simply exist such centers of the two balls, which are not necessarily the same points where we sample (which is the HOO assumption). Therefore, this is in contrast with HOO, which samples any point from the cell. In fact, it is straightforward to modify HOO to only sample at a representative point in each cell and only require the local-smoothness assumption. In our analysis and the algorithm, we use this modified version of HOO, thereby profiting from this weaker assumption.
Prior work [9, 4, 11, 2, 12] often defined some "dimension" $d$ of the near-optimal space of $f$, measured according to the (semi-)metric $\ell$. For example, the so-called near-optimality dimension [4] measures the size of the near-optimal space $\mathcal{X}_\epsilon = \{x \in \mathcal{X} : f(x) > f(x^*) - \epsilon\}$ in terms of packing numbers: For any $c > 0$ and $\epsilon_0 > 0$, the $(c, \epsilon_0)$-near-optimality dimension $d$ of $f$ with respect to $\ell$ is defined as
$$\inf\big\{d \in [0, \infty) : \exists C \text{ s.t. } \forall \epsilon \le \epsilon_0,\ \mathcal{N}(\mathcal{X}_{c\epsilon}, \ell, \epsilon) \le C\epsilon^{-d}\big\}, \qquad (1)$$
where for any subset $\mathcal{A} \subseteq \mathcal{X}$, the packing number $\mathcal{N}(\mathcal{A}, \ell, \epsilon)$ is the maximum number of disjoint $\ell$-balls of radius $\epsilon$ contained in $\mathcal{A}$.
2.3 Our assumption
Contrary to the previous approaches, we need only a single assumption. We do not introduce any (semi-)metric and instead directly relate f to the hierarchical partitioning P, defined in Section 1. Let K be the maximum number of children cells (P_{h+1,j_k})_{1≤k≤K} per cell P_{h,i}. We remind the reader that given a global maximum x* of f, i*_h denotes the index of the unique cell of depth h containing x*, i.e., such that x* ∈ P_{h,i*_h}. With this notation we can state our sole assumption on both the partitioning (P_{h,i}) and the function f.

Assumption 1. There exist ν > 0 and ρ ∈ (0, 1) such that

∀h ≥ 0, ∀x ∈ P_{h,i*_h},  f(x) ≥ f(x*) − νρ^h.

The values (ν, ρ) define a lower bound on the possible drop of f near the optimum x* according to the partitioning. The choice of the exponential rate νρ^h is made to cover a very large class of functions, as well as to relate to results from prior work. In particular, for a standard partitioning of R^p and any ν, α > 0, any function f such that f(x) ≈ f(x*) − ‖x − x*‖^α as x → x* fits this assumption. This is also the case for more complicated functions such as the one illustrated in Figure 1. An example of a function and a partitioning that do not satisfy this assumption is the function f : x ↦ 1/ln x with a standard partitioning of [0, 1), because the function decreases too fast around x* = 0. As observed by Valko et al. [15], this assumption can be weakened to hold only for values of f that are ε-close to f(x*), up to an ε-dependent constant in the regret.
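To make Assumption 1 concrete, the following is a minimal numerical check for a power function of the kind above on the standard dyadic partitioning of [0, 1]; the constants, the tolerance, and all variable names are illustrative choices, not part of the paper.

import numpy as np

x_star, alpha = 0.5, 0.8
f = lambda x: -np.abs(x - x_star) ** alpha        # so that f(x_star) = 0
nu, rho = 1.0, 2.0 ** (-alpha)                    # candidate (nu, rho)

for h in range(1, 20):
    k = min(int(x_star * 2 ** h), 2 ** h - 1)     # dyadic cell of depth h containing x_star
    lo, hi = k / 2 ** h, (k + 1) / 2 ** h
    worst = min(f(lo), f(hi))                     # f decreases away from x_star on the cell
    assert f(x_star) - worst <= nu * rho ** h + 1e-12
print("Assumption 1 holds numerically for (nu, rho) =", (nu, rho))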
Let us note that the set of assumptions made by prior work (Section 2.2) can be reformulated using solely Assumption 1. For example, for any f(x) ≈ f(x*) − ‖x − x*‖^α, one could consider the semi-metric ℓ(x, y) = ‖x − y‖^α, for which the corresponding near-optimality dimension defined by Equation 1 for a standard partitioning is d = 0. Yet we argue that our setting provides a more natural way to describe the complexity of the optimization problem for a given hierarchical partitioning. Indeed, existing algorithms that use a hierarchical partitioning of X, like HOO, do not use the full metric information but instead only use the values ν and ρ, paired up with the partitioning. Hence, the precise value of the metric impacts neither the algorithms' decisions nor their performance. What really matters is how the hierarchical partitioning of X fits f. Indeed, this fit is what we measure. To reinforce this argument, notice again that any function can be trivially optimized given a perfectly adapted partitioning, for instance the one that associates x* to one child of the root. Also, the previous analyses tried to provide performance guarantees based only on the metric and f. However, since the metric is assumed to be such that the cells of the partitioning are well shaped, the large diversity of possible metrics vanishes. Choosing such a metric then comes down to choosing only ν, ρ, and a hierarchical decomposition of X. Another way of seeing this is to remark that previous works make one assumption on both the function and the metric, and another on both the metric and the partitioning. We underline that the metric is actually there just to create a link between the function and the partitioning. By discarding the metric, we merge the two assumptions into a single one and convert a topological problem into a combinatorial one, leading to easier analysis.
To proceed, we define a new near-optimality dimension. For any ν > 0 and ρ ∈ (0, 1), the near-optimality dimension d(ν, ρ) of f with respect to the partitioning P is defined as follows.

Definition 1. The near-optimality dimension of f is

d(ν, ρ) := inf{ d' ∈ R+ : ∃C > 0, ∀h ≥ 0, N_h(2νρ^h) ≤ C ρ^{−d'h} },

where N_h(ε) is the number of cells P_{h,i} of depth h such that sup_{x ∈ P_{h,i}} f(x) ≥ f(x*) − ε.
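The quantity N_h(2νρ^h) can be estimated directly from samples of f. The sketch below does this on a dyadic partitioning of [0, 1]; the grid of 33 representatives per cell and the range of depths are arbitrary choices. For the power function used earlier, the printed exponent estimates tend to zero, consistent with d = 0.

import numpy as np

f = lambda x: -np.abs(x - 0.5) ** 0.8
nu, rho = 1.0, 2.0 ** (-0.8)

for h in range(2, 14, 2):
    # sup of f over each dyadic cell of depth h, via 33 representatives per cell
    grid = (np.arange(2 ** h)[:, None] + np.linspace(0, 1, 33)[None, :]) / 2 ** h
    sup_f = f(grid).max(axis=1)
    N_h = int((sup_f >= f(0.5) - 2 * nu * rho ** h).sum())
    # if N_h(2 nu rho^h) <= C rho^(-d h), then log(N_h)/(h log(1/rho)) estimates d
    print(h, N_h, np.log(N_h) / (h * np.log(1 / rho)))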
The hierarchical decomposition of the space X is the only prior information available to the algorithm. The (new) near-optimality dimension is a measure of how well this partitioning is adapted to f. More precisely, it is a measure of the size of the near-optimal set, i.e., of the cells such that sup_{x ∈ P_{h,i}} f(x) ≥ f(x*) − ε. Intuitively, this corresponds to the set of cells that any algorithm would have to sample in order to discover the optimum.

As an example, any f such that f(x) ≈ f(x*) − ‖x − x*‖^α, for any α > 0, has a zero near-optimality dimension with respect to the standard partitioning and an appropriate choice of ν. As discussed by Valko et al. [15], any function such that the upper and lower envelopes of f near its maximum are of the same order has a near-optimality dimension of zero for a standard partitioning of [0, 1]. An example of a function with d > 0 for the standard partitioning is in Figure 1. Functions that behave differently in different dimensions also have d > 0 for the standard partitioning. Nonetheless, for some handcrafted partitioning, it is possible to have d = 0 even for those troublesome functions.

Under our new assumption and our new definition of the near-optimality dimension, one can prove the same regret bound for HOO as Bubeck et al. [4], and the same can be done for other related algorithms.
3 The POO algorithm

3.1 Description of POO

The POO algorithm uses, as a subroutine, an optimization algorithm that requires the knowledge of the function smoothness. We use HOO [4] as the base algorithm, but other algorithms, such as HCT [2], could be used as well. POO, whose pseudocode is in Algorithm 1, runs several HOO instances in parallel, hence the name parallel optimistic optimization. The number of base HOO instances and the other parameters are adapted to the budget of evaluations and are automatically decided on the fly.
Each instance of HOO requires two real numbers ν and ρ. Running HOO parametrized with (ν, ρ) far from the optimal ones (ν*, ρ*)³ would cause HOO to underperform. Surprisingly, our analysis of this suboptimality gap reveals that it does not decrease too fast as we stray away from (ν*, ρ*). This motivates the following observation. If we simultaneously run a slew of HOOs with different (ν, ρ)s, one of them is going to perform decently well. In fact, we show that to achieve good performance, we only require O(ln n) HOO instances, where n is the current number of function evaluations. Notice that we do not require knowing the total number of rounds in advance, which hints that we can hope for a naturally anytime algorithm.
Algorithm 1 POO
Parameters: K, P = {P_{h,i}}
Optional parameters: ν_max, ρ_max
Initialization:
  D_max ← ln K / ln(1/ρ_max)
  n ← 0 {number of evaluations performed}
  N ← 1 {number of HOO instances}
  S ← {(ν_max, ρ_max)} {set of HOO instances}
while computational budget is available do
  while N ≤ (1/2) D_max ln(n/(ln n)) do
    for i ← 1, ..., N do {start new HOOs}
      s ← (ν_max, ρ_max^{2N/(2i+1)})
      S ← S ∪ {s}
      Perform n/N function evaluations with HOO(s)
      Update the average reward μ̂[s] of HOO(s)
    end for
    n ← 2n
    N ← 2N
  end while {ensure there are enough HOOs}
  for s ∈ S do
    Perform a function evaluation with HOO(s)
    Update the average reward μ̂[s] of HOO(s)
  end for
  n ← n + N
end while
s* ← argmax_{s ∈ S} μ̂[s]
Output: A random point evaluated by HOO(s*)
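To make the control flow concrete, here is a minimal Python sketch of this driver loop. It is a sketch of Algorithm 1 under stated assumptions, not a faithful implementation: make_hoo, step(), average_reward, and evaluated_points are hypothetical stand-ins for a HOO implementation, and the small-n edge cases are handled loosely.

import math, random

def poo(make_hoo, budget, K=2, rho_max=0.9, nu_max=1.0):
    # make_hoo(nu, rho) must return an object with a step() method (one
    # function evaluation), an average_reward attribute, and a list
    # evaluated_points -- all hypothetical stand-ins for HOO.
    d_max = math.log(K) / math.log(1.0 / rho_max)
    instances = [make_hoo(nu_max, rho_max)]
    n = 0
    while n < budget:
        needed = 0.5 * d_max * math.log(max(n, 3) / math.log(max(n, 3)))
        while len(instances) <= needed:          # double the set of instances
            N = len(instances)
            for i in range(1, N + 1):
                instances.append(make_hoo(nu_max, rho_max ** (2 * N / (2 * i + 1))))
            for hoo in instances[N:]:            # bring new instances on par
                for _ in range(max(n, 1) // N):
                    hoo.step()
            n = 2 * max(n, 1)
        for hoo in instances:                    # one evaluation per instance
            hoo.step()
        n += len(instances)
    best = max(instances, key=lambda h: h.average_reward)
    return random.choice(best.evaluated_points)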
The strategy of POO is quite simple: it consists of running N instances of HOO in parallel, all launched with different (ν, ρ)s. At the end of the whole process, POO selects the instance s* which performed the best and returns one of the points selected by this instance, chosen uniformly at random. Note that just using a doubling trick in HOO with increasing values of ν and ρ is not enough to guarantee good performance. Indeed, it is important to keep track of all HOO instances. Otherwise, the regret rate would suffer far too much from using a value of ρ that is too far from the optimal one.
³ The parameters (ν, ρ) satisfying Assumption 1 for which d(ν, ρ) is the smallest.

For clarity, the pseudo-code of Algorithm 1 takes ν_max and ρ_max as parameters, but in Appendix C we show how to set ν_max and ρ_max automatically as functions of the number of evaluations, i.e., ν_max(n), ρ_max(n). Furthermore, in Appendix D, we explain how to share information between the HOO instances, which makes the empirical performance light-years better.
Since POO is anytime, the number of instances N(n) is time-dependent and does not need to be known in advance. In fact, N(n) is increased alongside the execution of the algorithm. More precisely, we want to ensure that

N(n) ≥ (1/2) D_max ln(n/ln n),  where  D_max := (ln K)/ln(1/ρ_max).

To keep the set of different (ν, ρ)s well distributed, the number of HOOs is not increased one by one but instead is doubled when needed. Moreover, we also require that the HOOs run in parallel perform the same number of function evaluations. Consequently, when we start running new instances, we first ensure to bring these instances on par with the already existing ones in terms of the number of evaluations. Finally, as our analysis reveals, a good choice of parameters (ρ_i) is not a uniform grid on [0, 1]. Instead, as suggested by our analysis, we require that 1/ln(1/ρ_i) forms a uniform grid on [0, 1/ln(1/ρ_max)]. As a consequence, we add HOO instances in batches such that ρ_i = ρ_max^{N/i}.
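In code, this grid is one line; note that 1/ln(1/ρ_i) = i/(N ln(1/ρ_max)) is indeed uniform in i. The values below are illustrative.

rho_max, N = 0.9, 8                               # illustrative values
rhos = [rho_max ** (N / i) for i in range(1, N + 1)]
# 1/ln(1/rho_i) = i / (N * ln(1/rho_max)), a uniform grid in i as required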
3.2 Upper bound on POO's regret

POO does not require the knowledge of a (ν, ρ) verifying Assumption 1⁴ and yet we prove that it achieves a performance close⁵ to the one obtained by HOO using the best parameters (ν*, ρ*). This result solves the open question of Valko et al. [15] of whether the stochastic optimization of f with unknown parameters (ν, ρ) is possible when d > 0 for the standard partitioning.
Theorem 1. Let R_n be the simple regret of POO at step n. For any (ν, ρ) verifying Assumption 1 such that ν ≤ ν_max and ρ ≤ ρ_max, there exists κ such that for all n,

E[R_n] ≤ κ ( (D_max ln² n) / n )^{1/(d(ν,ρ)+2)}.

Moreover, κ = α √D_max (ν_max/ν)^{D_max}, where α is a constant independent of ν_max and ρ_max.
We prove Theorem 1 in Appendices A and B. Notice that Theorem 1 holds for any ν ≤ ν_max and ρ ≤ ρ_max, and in particular for the parameters (ν*, ρ*) for which d(ν, ρ) is minimal, as long as ν* ≤ ν_max and ρ* ≤ ρ_max. In Appendix C, we show how to make ν_max and ρ_max optional. To give some intuition about D_max, it is easy to prove that it is the attainable upper bound on the near-optimality dimension of functions verifying Assumption 1 with ρ ≤ ρ_max. Moreover, any function on [0, 1]^p that is Lipschitz for the Euclidean metric has (ln K)/ln(1/ρ) = p for a standard partitioning.
POO's performance should be compared to the simple regret of HOO run with the best parameters ν* and ρ*, which is of order

O( ((ln n)/n)^{1/(d(ν*,ρ*)+2)} ).

Thus POO's performance is only a factor of O( (ln n)^{1/(d(ν*,ρ*)+2)} ) away from the optimally fitted HOO. Furthermore, our regret bound for POO is slightly better than the known regret bound for StoSOO [15] in the case when d(ν, ρ) = 0 for the same partitioning, i.e., E[R_n] = O(ln² n / √n). With our algorithm and analysis, we generalize this bound to any value of d ≥ 0.

Note that we only give a simple regret bound for POO, whereas HOO ensures a bound on both the cumulative and the simple regret.⁶ Notice that since POO runs several HOOs with non-optimal values of the (ν, ρ) parameters, this algorithm explores much more than the optimally fitted HOO, which dramatically impacts the cumulative regret. As a consequence, our result applies to the simple regret only.
⁴ Note that several possible values of those parameters are possible for the same function.
⁵ Up to a logarithmic term ln n in the simple regret.
⁶ In fact, the bound on the simple regret is a direct consequence of the bound on the cumulative regret [3].
[Figure 2 here: two panels showing the simple regret against the number of evaluations, for HOO with ρ ∈ {0.0, 0.3, 0.66, 0.9} and for POO; the left panel covers the first 500 evaluations, the right panel shows 5000 evaluations on a log-log scale.]
Figure 2: Regret of POO and HOO run for different values of ρ.
4 Experiments

We ran experiments on the function plotted in Figure 1 for HOO algorithms with different values of ρ and for the POO⁷ algorithm with ρ_max = 0.9. This function, as described in Section 1, has upper and lower envelopes that are not of the same order, and therefore has d > 0 for a standard partitioning. In Figure 2, we show the simple regret of the algorithms as a function of the number of evaluations. In the figure on the left, we plot the simple regret after 500 evaluations. In the right one, we plot the regret after 5000 evaluations on a log-log scale, in order to see the trend better. The HOO algorithms return a random point chosen uniformly among those evaluated. POO does the same for the best empirical instance of HOO. We compare the algorithms according to the expected simple regret, which is the difference between the optimum and the expected value of the function at the point they return. We compute it as the average of the value of the function over all evaluated points. While we did not investigate possibly different heuristics, we believe that returning the deepest evaluated point would give a better empirical performance.
As expected, the HOO algorithms using values of ρ that are too low do not explore enough and quickly become stuck in a local optimum. This is the case for both UCT (HOO run with ρ = 0) and HOO run with ρ = 0.3. The HOO algorithms using a ρ that is too high waste their budget on exploring too much. This way, we empirically confirmed that the performance of the HOO algorithm is greatly impacted by the choice of the ρ parameter for the function we considered. In particular, at T = 500, the empirical regret of HOO with ρ = 0.66 was half of the regret of UCT.

In our experiments, HOO with ρ = 0.66 performed the best, which is a bit lower than what the theory would suggest, since ρ* = 1/√2 ≈ 0.7. The performance of HOO using this parameter is almost matched by POO. This is surprising, considering the fact that POO was simultaneously running 100 different HOOs. It shows that carefully sharing information between the instances of HOO, as described and justified in Appendix D, has a major impact on empirical performance. Indeed, among the 100 HOO instances, only two (on average) actually needed a fresh function evaluation; the other 98 could reuse the ones performed by another HOO instance.

5 Conclusion

We introduced POO for global optimization of stochastic functions with unknown smoothness and showed that it competes with the best known optimization algorithms that know this smoothness. This result extends the previous work of Valko et al. [15], which is only able to deal with a near-optimality dimension d = 0. POO is provably able to deal with a trove of functions for which d ≥ 0 for a standard partitioning. Furthermore, we gave new insight on several assumptions required by prior work and provided a more natural measure of the complexity of optimizing a function given a hierarchical partitioning of the space, without relying on any (semi-)metric.
Acknowledgements  The research presented in this paper was supported by the French Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council, a doctoral grant of École Normale Supérieure in Paris, the Inria and Carnegie Mellon University associated-team project EduBand, and the French National Research Agency project ExTra-Learn (n.ANR-14-CE24-0010-01).

⁷ Code available at https://sequel.lille.inria.fr/Software/POO
References
[1] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 47(2-3):235–256, 2002.
[2] Mohammad Gheshlaghi Azar, Alessandro Lazaric, and Emma Brunskill. Online Stochastic Optimization under Correlated Bandit Feedback. In International Conference on Machine Learning, 2014.
[3] Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure Exploration in Finitely-Armed and Continuously-Armed Bandits. Theoretical Computer Science, 412:1832–1852, 2011.
[4] Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed Bandits. Journal of Machine Learning Research, 12:1587–1627, 2011.
[5] Sébastien Bubeck, Gilles Stoltz, and Jia Yuan Yu. Lipschitz Bandits without the Lipschitz Constant. In Algorithmic Learning Theory, 2011.
[6] Adam D. Bull. Adaptive-treed bandits. Bernoulli, 21(4):2289–2307, 2015.
[7] Richard Combes and Alexandre Proutière. Unimodal Bandits without Smoothness. ArXiv e-prints: http://arxiv.org/abs/1406.7447, 2015.
[8] Pierre-Arnaud Coquelin and Rémi Munos. Bandit Algorithms for Tree Search. In Uncertainty in Artificial Intelligence, 2007.
[9] Robert Kleinberg, Alexander Slivkins, and Eli Upfal. Multi-armed Bandit Problems in Metric Spaces. In Symposium on Theory Of Computing, 2008.
[10] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo Planning. In European Conference on Machine Learning, 2006.
[11] Rémi Munos. Optimistic Optimization of Deterministic Functions without the Knowledge of its Smoothness. In Neural Information Processing Systems, 2011.
[12] Rémi Munos. From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Optimization and Planning. Foundations and Trends in Machine Learning, 7(1):1–130, 2014.
[13] Philippe Preux, Rémi Munos, and Michal Valko. Bandits Attack Function Optimization. In Congress on Evolutionary Computation, 2014.
[14] Aleksandrs Slivkins. Multi-armed Bandits on Implicit Metric Spaces. In Neural Information Processing Systems, 2011.
[15] Michal Valko, Alexandra Carpentier, and Rémi Munos. Stochastic Simultaneous Optimistic Optimization. In International Conference on Machine Learning, 2013.
5,216 | 5,722 | Combinatorial Cascading Bandits
Branislav Kveton
Adobe Research
San Jose, CA
kveton@adobe.com
Zheng Wen
Yahoo Labs
Sunnyvale, CA
zhengwen@yahoo-inc.com
Azin Ashkan
Technicolor Research
Los Altos, CA
azin.ashkan@technicolor.com
Csaba Szepesvári
Department of Computing Science
University of Alberta
szepesva@cs.ualberta.ca
Abstract
We propose combinatorial cascading bandits, a class of partial monitoring problems where at each step a learning agent chooses a tuple of ground items subject
to constraints and receives a reward if and only if the weights of all chosen items
are one. The weights of the items are binary, stochastic, and drawn independently
of each other. The agent observes the index of the first chosen item whose weight
is zero. This observation model arises in network routing, for instance, where the
learning agent may only observe the first link in the routing path which is down,
and blocks the path. We propose a UCB-like algorithm for solving our problems,
CombCascade; and prove gap-dependent and gap-free upper bounds on its n-step
regret. Our proofs build on recent work in stochastic combinatorial semi-bandits
but also address two novel challenges of our setting, a non-linear reward function
and partial observability. We evaluate CombCascade on two real-world problems
and show that it performs well even when our modeling assumptions are violated.
We also demonstrate that our setting requires a new learning algorithm.
1 Introduction
Combinatorial optimization [16] has many real-world applications. In this work, we study a class of
combinatorial optimization problems with a binary objective function that returns one if and only if
the weights of all chosen items are one. The weights of the items are binary, stochastic, and drawn
independently of each other. Many popular optimization problems can be formulated in our setting.
Network routing is a problem of choosing a routing path in a computer network that maximizes the
probability that all links in the chosen path are up. Recommendation is a problem of choosing a list
of items that minimizes the probability that none of the recommended items are attractive. Both of
these problems are closely related and can be solved using similar techniques (Section 2.3).
Combinatorial cascading bandits are a novel framework for online learning of the aforementioned
problems where the distribution over the weights of items is unknown. Our goal is to maximize the
expected cumulative reward of a learning agent in n steps. Our learning problem is challenging for
two main reasons. First, the reward function is non-linear in the weights of chosen items. Second,
we only observe the index of the first chosen item with a zero weight. This kind of feedback arises
frequently in network routing, for instance, where the learning agent may only observe the first link
in the routing path which is down, and blocks the path. This feedback model was recently proposed
in the so-called cascading bandits [10]. The main difference in our work is that the feasible set can
be arbitrary. The feasible set in cascading bandits is a uniform matroid.
Stochastic online learning with combinatorial actions has been previously studied with semi-bandit
feedback and a linear reward function [8, 11, 12], and its monotone transformation [5]. Established
algorithms for multi-armed bandits, such as UCB1 [3], KL-UCB [9], and Thompson sampling [18, 2], can usually be adapted easily to stochastic combinatorial semi-bandits. However, it is non-trivial to
show that the algorithms are statistically efficient, in the sense that their regret matches some lower
bound. Kveton et al. [12] recently showed this for CombUCB1, a form of UCB1. Our analysis builds
on this recent advance but also addresses two novel challenges of our problem, a non-linear reward
function and partial observability. These challenges cannot be addressed straightforwardly based on
Kveton et al. [12, 10].
We make multiple contributions. In Section 2, we define the online learning problem of combinatorial cascading bandits and propose CombCascade, a variant of UCB1, for solving it. CombCascade
is computationally efficient on any feasible set where a linear function can be optimized efficiently.
A minor-looking improvement to the UCB1 upper confidence bound, which exploits the fact that the
expected weights of items are bounded by one, is necessary in our analysis. In Section 3, we derive
gap-dependent and gap-free upper bounds on the regret of CombCascade, and discuss the tightness
of these bounds. In Section 4, we evaluate CombCascade on two practical problems and show that
the algorithm performs well even when our modeling assumptions are violated. We also show that
CombUCB1 [8, 12] cannot solve some instances of our problem, which highlights the need for a new
learning algorithm.
2 Combinatorial Cascading Bandits

This section introduces our learning problem, its applications, and also our proposed algorithm. We discuss the computational complexity of the algorithm and then introduce the so-called disjunctive variant of our problem. We denote random variables by boldface letters. The cardinality of set A is |A| and we assume that min ∅ = +∞. The binary AND operation is denoted by ∧, and the binary OR by ∨.

2.1 Setting

We model our online learning problem as a combinatorial cascading bandit. A combinatorial cascading bandit is a tuple B = (E, P, Θ), where E = {1, ..., L} is a finite set of L ground items, P is a probability distribution over the binary hypercube {0, 1}^E, Θ ⊆ Π(E), and:

Π(E) = {(a_1, ..., a_k) : k ≥ 1; a_1, ..., a_k ∈ E; a_i ≠ a_j for any i ≠ j}

is the set of all tuples of distinct items from E. We refer to Θ as the feasible set and to A ∈ Θ as a feasible solution. We abuse our notation and also treat A as the set of items in solution A. Without loss of generality, we assume that the feasible set Θ covers the ground set, E = ∪Θ.
Let (w_t)_{t=1}^n be an i.i.d. sequence of n weights drawn from distribution P, where w_t ∈ {0, 1}^E. At time t, the learning agent chooses solution A_t = (a_1^t, ..., a_{|A_t|}^t) ∈ Θ based on its past observations and then receives a binary reward:

r_t = min_{e ∈ A_t} w_t(e) = ∧_{e ∈ A_t} w_t(e)

as a response to this choice. The reward is one if and only if the weights of all items in A_t are one. The key step in our solution and its analysis is that the reward can be expressed as r_t = f(A_t, w_t), where f : Θ × [0, 1]^E → [0, 1] is a reward function, which is defined as:

f(A, w) = ∏_{e ∈ A} w(e),  A ∈ Θ, w ∈ [0, 1]^E.

At the end of time t, the agent observes the index of the first item in A_t whose weight is zero, and +∞ if such an item does not exist. We denote this feedback by O_t and define it as:

O_t = min{1 ≤ k ≤ |A_t| : w_t(a_k^t) = 0}.

Note that O_t fully determines the weights of the first min{O_t, |A_t|} items in A_t. In particular:

w_t(a_k^t) = 1{k < O_t},  k = 1, ..., min{O_t, |A_t|}.    (1)

Accordingly, we say that item e is observed at time t if e = a_k^t for some 1 ≤ k ≤ min{O_t, |A_t|}. Note that the order of items in A_t affects the feedback O_t but not the reward r_t. This differentiates our problem from combinatorial semi-bandits.
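The following sketch simulates one round of this observation model. The mean weights, the chosen tuple, and all variable names are made up for illustration; only the logic follows the definitions of r_t, O_t, and (1) above.

import numpy as np

rng = np.random.default_rng(0)
w_bar = np.array([0.9, 0.8, 0.7, 0.6])        # illustrative mean weights
A_t = [2, 0, 3]                               # a chosen tuple of ground items

w_t = rng.random(w_bar.size) < w_bar          # independent Bernoulli weights
reward = int(all(w_t[e] for e in A_t))        # one iff all chosen weights are one
zeros = [k for k, e in enumerate(A_t, 1) if not w_t[e]]
O_t = zeros[0] if zeros else float("inf")     # index of the first item that is down
observed = A_t[: int(min(O_t, len(A_t)))]     # items whose weights are revealed by O_t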
The goal of our learning agent is to maximize its expected cumulative reward. This is equivalent to minimizing the expected cumulative regret in n steps:

R(n) = E[ Σ_{t=1}^n R(A_t, w_t) ],

where R(A_t, w_t) = f(A*, w_t) − f(A_t, w_t) is the instantaneous stochastic regret of the agent at time t and A* = argmax_{A ∈ Θ} E[f(A, w)] is the optimal solution in hindsight of knowing P. For simplicity of exposition, we assume that A*, as a set, is unique.

A major simplifying assumption, which simplifies our optimization problem and its learning, is that the distribution P is factored:

P(w) = ∏_{e ∈ E} P_e(w(e)),    (2)

where P_e is a Bernoulli distribution with mean w̄(e). We borrow this assumption from the work of Kveton et al. [10] and it is critical to our results. We would face computational difficulties without it. Under this assumption, the expected reward of solution A ∈ Θ, the probability that the weight of each item in A is one, can be written as E[f(A, w)] = f(A, w̄), and depends only on the expected weights of individual items in A. It follows that:

A* = argmax_{A ∈ Θ} f(A, w̄).

In Section 4, we experiment with two problems that violate our independence assumption. We also discuss implications of this violation.
Several interesting online learning problems can be formulated as combinatorial cascading bandits. Consider the problem of learning routing paths in the Simple Mail Transfer Protocol (SMTP) that maximize the probability of e-mail delivery. The ground set in this problem consists of all links in the network and the feasible set of all routing paths. At time t, the learning agent chooses routing path A_t and observes if the e-mail is delivered. If the e-mail is not delivered, the agent observes the first link in the routing path which is down. This kind of information is available in SMTP. The weight of item e at time t is an indicator of link e being up at time t. The independence assumption in (2) requires that all links fail independently. This assumption is common in existing network routing models [6]. We return to the problem of network routing in Section 4.2.
2.2 CombCascade Algorithm

Our proposed algorithm, CombCascade, is described in Algorithm 1. This algorithm belongs to the family of UCB algorithms. At time t, CombCascade operates in three stages. First, it computes the upper confidence bounds (UCBs) U_t ∈ [0, 1]^E on the expected weights of all items in E. The UCB of item e at time t is defined as:

U_t(e) = min{ ŵ_{T_{t−1}(e)}(e) + c_{t−1, T_{t−1}(e)}, 1 },    (3)

where ŵ_s(e) is the average of s observed weights of item e, T_t(e) is the number of times that item e is observed in t steps, and c_{t,s} = √((1.5 log t)/s) is the radius of a confidence interval around ŵ_s(e) after t steps, such that w̄(e) ∈ [ŵ_s(e) − c_{t,s}, ŵ_s(e) + c_{t,s}] holds with high probability. After the UCBs are computed, CombCascade chooses the optimal solution with respect to these UCBs:

A_t = argmax_{A ∈ Θ} f(A, U_t).

Finally, CombCascade observes O_t and updates its estimates of the expected weights based on the weights of the observed items in (1), i.e., for all items a_k^t such that k ≤ O_t.
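The UCB in (3) is a one-liner in code; the clip at one is the minor-looking improvement over the plain UCB1 index mentioned in the introduction. A hypothetical helper:

import math

def ucb(w_hat, s, t):
    # w_hat: average of s observed weights of one item, after t steps overall
    c = math.sqrt(1.5 * math.log(t) / s)      # confidence radius c_{t,s}
    return min(w_hat + c, 1.0)                # the clip at one is essential here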
For simplicity of exposition, we assume that CombCascade is initialized by one sample w_0 ∼ P. If w_0 is unavailable, we can formulate the problem of obtaining w_0 as an optimization problem on Θ with a linear objective [12]. The initialization procedure of Kveton et al. [12] tracks observed items and adaptively chooses solutions with the maximum number of unobserved items. This approach is computationally efficient on any feasible set Θ where a linear function can be optimized efficiently.

CombCascade has two attractive properties. First, the algorithm is computationally efficient, in the sense that A_t = argmax_{A ∈ Θ} Σ_{e ∈ A} log(U_t(e)) is the problem of maximizing a linear function on Θ.
Algorithm 1 CombCascade for combinatorial cascading bandits.
// Initialization
Observe w_0 ∼ P
∀e ∈ E: T_0(e) ← 1
∀e ∈ E: ŵ_1(e) ← w_0(e)
for all t = 1, ..., n do
  // Compute UCBs
  ∀e ∈ E: U_t(e) = min{ ŵ_{T_{t−1}(e)}(e) + c_{t−1, T_{t−1}(e)}, 1 }
  // Solve the optimization problem and get feedback
  A_t ← argmax_{A ∈ Θ} f(A, U_t)
  Observe O_t ∈ {1, ..., |A_t|, +∞}
  // Update statistics
  ∀e ∈ E: T_t(e) ← T_{t−1}(e)
  for all k = 1, ..., min{O_t, |A_t|} do
    e ← a_k^t
    T_t(e) ← T_t(e) + 1
    ŵ_{T_t(e)}(e) ← (T_{t−1}(e) ŵ_{T_{t−1}(e)}(e) + 1{k < O_t}) / T_t(e)
  end for
end for
This problem can be solved efficiently for various feasible sets Θ, such as matroids, matchings, and paths. Second, CombCascade is sample efficient because the UCB of solution A, f(A, U_t), is a product of the UCBs of all items in A, which are estimated separately. The regret of CombCascade does not depend on |Θ| and is polynomial in all other quantities of interest.
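For the routing application, this maximization step reduces to a shortest-path computation, since maximizing ∏_{e ∈ A} U_t(e) over paths equals minimizing Σ_{e ∈ A} −log U_t(e), and −log U_t(e) ≥ 0. A sketch using networkx, where the graph and the UCB values are made up:

import math
import networkx as nx

G = nx.Graph()
U = {("s", "a"): 0.9, ("a", "t"): 0.8, ("s", "b"): 0.99, ("b", "t"): 0.5}
for (u, v), ucb in U.items():
    G.add_edge(u, v, weight=-math.log(ucb))   # nonnegative, since ucb <= 1

A_t = nx.dijkstra_path(G, "s", "t")           # ['s', 'a', 't']: product 0.72 beats 0.495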
2.3 Disjunctive Objective

Our reward model is conjunctive: the reward is one if and only if the weights of all chosen items are one. A natural alternative is a disjunctive model r_t = max_{e ∈ A_t} w_t(e) = ∨_{e ∈ A_t} w_t(e); the reward is one if the weight of any item in A_t is one. This model arises in recommender systems, where the recommender is rewarded when the user is satisfied with any recommended item. The feedback O_t is the index of the first item in A_t whose weight is one, as in cascading bandits [10].

Let f_∨ : Θ × [0, 1]^E → [0, 1] be a reward function, which is defined as f_∨(A, w) = 1 − ∏_{e ∈ A}(1 − w(e)). Then under the independence assumption in (2), E[f_∨(A, w)] = f_∨(A, w̄) and:

A* = argmax_{A ∈ Θ} f_∨(A, w̄) = argmin_{A ∈ Θ} ∏_{e ∈ A}(1 − w̄(e)) = argmin_{A ∈ Θ} f(A, 1 − w̄).

Therefore, A* can be learned by a variant of CombCascade where the observations are 1 − w_t and each UCB U_t(e) is substituted with a lower confidence bound (LCB) on 1 − w̄(e):

L_t(e) = max{ 1 − ŵ_{T_{t−1}(e)}(e) − c_{t−1, T_{t−1}(e)}, 0 }.

Let R(A_t, w_t) = f(A_t, 1 − w_t) − f(A*, 1 − w_t) be the instantaneous stochastic regret at time t. Then we can bound the regret of CombCascade as in Theorems 1 and 2. The only difference is that Δ_{e,min} and f* are redefined as:

Δ_{e,min} = min_{A ∈ Θ: e ∈ A, Δ_A > 0} f(A, 1 − w̄) − f(A*, 1 − w̄),  f* = f(A*, 1 − w̄).
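A sketch of the corresponding LCB; as with the UCB helper above, the names are illustrative:

import math

def lcb(w_hat, s, t):
    # lower confidence bound on 1 - w_bar(e), mirroring the UCB above
    c = math.sqrt(1.5 * math.log(t) / s)
    return max(1.0 - w_hat - c, 0.0)

The disjunctive variant then selects A_t = argmin_{A ∈ Θ} ∏_{e ∈ A} L_t(e), so the same per-item confidence machinery carries over.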
3 Analysis

We prove gap-dependent and gap-free upper bounds on the regret of CombCascade in Section 3.1. We discuss these bounds in Section 3.2.

3.1 Upper Bounds

We define the suboptimality gap of solution A = (a_1, ..., a_{|A|}) as Δ_A = f(A*, w̄) − f(A, w̄) and the probability that all items in A are observed as p_A = ∏_{k=1}^{|A|−1} w̄(a_k). For convenience, we define the shorthands f* = f(A*, w̄) and p* = p_{A*}. Let Ē = E \ A* be the set of suboptimal items, the items that are not in A*. Then the minimum gap associated with suboptimal item e ∈ Ē is:

Δ_{e,min} = f(A*, w̄) − max_{A ∈ Θ: e ∈ A, Δ_A > 0} f(A, w̄).

Let K = max{|A| : A ∈ Θ} be the maximum number of items in any solution and assume f* > 0. Then the regret of CombCascade is bounded as follows.
Theorem 1. The regret of CombCascade is bounded as

R(n) ≤ (K/f*) Σ_{e ∈ Ē} (4272/Δ_{e,min}) log n + (π²/3) L.

Proof. The proof is in Appendix A. The main idea is to reduce our analysis to that of CombUCB1 in stochastic combinatorial semi-bandits [12]. This reduction is challenging for two reasons. First, our reward function is non-linear in the weights of chosen items. Second, we only observe some of the chosen items.

Our analysis can be trivially reduced to semi-bandits by conditioning on the event of observing all items. In particular, let H_t = (A_1, O_1, ..., A_{t−1}, O_{t−1}, A_t) be the history of CombCascade up to choosing solution A_t, the first t − 1 observations and t actions. Then we can express the expected regret at time t conditioned on H_t as:

E[R(A_t, w_t) | H_t] = E[ Δ_{A_t} (1/p_{A_t}) 1{Δ_{A_t} > 0, O_t ≥ |A_t|} | H_t ]

and analyze our problem under the assumption that all items in A_t are observed. This reduction is problematic because the probability p_{A_t} can be low, and as a result we get a loose regret bound.

We address this issue by formalizing the following insight into our problem. When f(A, w̄) ≪ f*, CombCascade can distinguish A from A* without learning the expected weights of all items in A. In particular, CombCascade acts implicitly on the prefixes of suboptimal solutions, and we choose them in our analysis such that the probability of observing all items in the prefixes is "close" to f*, and the gaps are "close" to those of the original solutions.
Lemma 1. Let A = (a_1, ..., a_{|A|}) ∈ Θ be a feasible solution and let B_k = (a_1, ..., a_k) be a prefix of k ≤ |A| items of A. Then k can be set such that Δ_{B_k} ≥ (1/2)Δ_A and p_{B_k} ≥ (1/2)f*.

Then we count the number of times that the prefixes can be chosen instead of A* when all items in the prefixes are observed. The last remaining issue is that f(A, U_t) is non-linear in the confidence radii of the items in A. Therefore, we bound it from above based on the following lemma.

Lemma 2. Let 0 ≤ p_1, ..., p_K ≤ 1 and u_1, ..., u_K ≥ 0. Then:

∏_{k=1}^K min{p_k + u_k, 1} ≤ ∏_{k=1}^K p_k + Σ_{k=1}^K u_k.

This bound is tight when p_1, ..., p_K = 1 and u_1, ..., u_K = 0.
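Lemma 2 is easy to sanity-check numerically; the following is a randomized check, not a proof:

import random

for _ in range(10000):
    K = random.randint(1, 6)
    p = [random.random() for _ in range(K)]
    u = [random.random() for _ in range(K)]
    lhs = rhs = 1.0
    for pk, uk in zip(p, u):
        lhs *= min(pk + uk, 1.0)
    for pk in p:
        rhs *= pk
    assert lhs <= rhs + sum(u) + 1e-12        # Lemma 2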
The rest of our analysis is along the lines of Theorem 5 in Kveton et al. [12]. We can achieve linear dependency on K in exchange for a multiplicative factor of 534 in our upper bound.

We also prove the following gap-free bound.

Theorem 2. The regret of CombCascade is bounded as

R(n) ≤ (131/f*) √(K L n log n) + (π²/3) L.

Proof. The proof is in Appendix B. The key idea is to decompose the regret of CombCascade into two parts, where the gaps Δ_{A_t} are at most ε and larger than ε. We analyze each part separately and then set ε to get the desired result.
3.2 Discussion

In Section 3.1, we prove two upper bounds on the n-step regret of CombCascade:

Theorem 1: O( KL (1/f*) (1/Δ) log n ),  Theorem 2: O( √(KLn log n) (1/f*) ),

where Δ = min_{e ∈ Ē} Δ_{e,min}. These bounds do not depend on the total number of feasible solutions |Θ| and are polynomial in any other quantity of interest.
[Figure 1 here: three panels showing the regret of CombCascade and CombUCB1 against step n (up to 10k), for w̄ = (0.4, 0.4, 0.2, 0.2), w̄ = (0.4, 0.4, 0.9, 0.1), and w̄ = (0.4, 0.4, 0.3, 0.3).]
Figure 1: The regret of CombCascade and CombUCB1 in the synthetic experiment (Section 4.1). The
results are averaged over 100 runs.
The bounds match, up to O(1/f*) factors, the upper bounds of CombUCB1 in stochastic combinatorial semi-bandits [12]. Since CombCascade receives less feedback than CombUCB1, this is rather surprising and unexpected. The upper bounds of Kveton et al. [12] are known to be tight up to polylogarithmic factors. We believe that our upper bounds are also tight in a setting similar to that of Kveton et al. [12], where the expected weight of each item is close to 1 and the items are likely to be observed.

The assumption that f* is large is often reasonable. In network routing, the optimal routing path is likely to be reliable. In recommender systems, the optimal recommended list often does not satisfy a reasonably large fraction of users.
4 Experiments

We evaluate CombCascade in three experiments. In Section 4.1, we compare it to CombUCB1 [12], a state-of-the-art algorithm for stochastic combinatorial semi-bandits with a linear reward function. This experiment shows that CombUCB1 cannot solve all instances of our problem, which highlights the need for a new learning algorithm. It also shows the limitations of CombCascade. We evaluate CombCascade on two real-world problems in Sections 4.2 and 4.3.

4.1 Synthetic

In the first experiment, we compare CombCascade to CombUCB1 [12] on a synthetic problem. This problem is a combinatorial cascading bandit with L = 4 items and Θ = {(1, 2), (3, 4)}. CombUCB1 is a popular algorithm for stochastic combinatorial semi-bandits with a linear reward function. We approximate max_{A ∈ Θ} f(A, w) by min_{A ∈ Θ} Σ_{e ∈ A}(1 − w(e)). This approximation is motivated by the fact that f(A, w) = ∏_{e ∈ A} w(e) ≈ 1 − Σ_{e ∈ A}(1 − w(e)) as min_{e ∈ E} w(e) → 1. We update the estimates of w̄ in CombUCB1 as in CombCascade, based on the weights of the observed items in (1).

We experiment with three different settings of w̄ and report our results in Figure 1. The settings of w̄ are reported in our plots. We assume that the w_t(e) are distributed independently, except for the last plot where w_t(3) = w_t(4). Our plots represent three common scenarios that we encountered in our experiments. In the first plot, argmax_{A ∈ Θ} f(A, w̄) = argmin_{A ∈ Θ} Σ_{e ∈ A}(1 − w̄(e)). In this case, both CombCascade and CombUCB1 can learn A*. The regret of CombCascade is slightly lower than that of CombUCB1. In the second plot, argmax_{A ∈ Θ} f(A, w̄) ≠ argmin_{A ∈ Θ} Σ_{e ∈ A}(1 − w̄(e)). In this case, CombUCB1 cannot learn A* and therefore suffers linear regret. In the third plot, we violate our modeling assumptions. Perhaps surprisingly, CombCascade can still learn the optimal solution A*, although it suffers higher regret than CombUCB1.
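The sketch below reproduces the gist of this instance for the second setting of w̄ (with 0-indexed items; the variable names are illustrative) and shows why the linear surrogate misleads CombUCB1:

import numpy as np

w_bar = np.array([0.4, 0.4, 0.9, 0.1])        # second setting in Figure 1
Theta = [(0, 1), (2, 3)]                      # the two feasible solutions, 0-indexed

f = lambda A: np.prod(w_bar[list(A)])         # expected reward f(A, w_bar)
g = lambda A: np.sum(1 - w_bar[list(A)])      # linear surrogate minimized by CombUCB1
print([(A, float(f(A)), float(g(A))) for A in Theta])
# f prefers (0, 1): 0.16 > 0.09, while the surrogate prefers (2, 3): 1.0 < 1.2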
4.2 Network Routing

In the second experiment, we evaluate CombCascade on a problem of network routing. We experiment with six networks from the RocketFuel dataset [17], which are described in Figure 2a.

Our learning problem is formulated as follows. The ground set E consists of the links in the network. The feasible set Θ consists of all paths in the network. At time t, we generate a random pair of starting and end nodes, and the learning agent chooses a routing path between these nodes. The goal of the agent is to maximize the probability that all links in the path are up. The feedback is the index of the first link in the path which is down. The weight of link e at time t, w_t(e), is an indicator of link e being up at time t.
Network  Nodes  Links
1221      108    153
1239      315    972
1755       87    161
3257      161    328
3967       79    147
6461      141    374

[Figure 2b here: the n-step regret of CombCascade on the six networks, plotted against step n up to 300k.]

Figure 2: a. The description of six networks from our network routing experiment (Section 4.2). b. The n-step regret of CombCascade in these networks. The results are averaged over 50 runs.
We model w_t(e) as an independent Bernoulli random variable w_t(e) ∼ B(w̄(e)) with mean w̄(e) = 0.7 + 0.2 local(e), where local(e) is an indicator of link e being local. We say that a link is local when its expected latency is at most 1 millisecond. About a half of the links in our networks are local. To summarize, the local links are up with probability 0.9, and are more reliable than the global links, which are up only with probability 0.7.
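A sketch of this link model; the is_local labels below are made up:

import numpy as np

rng = np.random.default_rng(2)
is_local = np.array([1, 0, 1, 1, 0])          # hypothetical link labels
w_bar = 0.7 + 0.2 * is_local                  # mean weight of each link
w_t = rng.random(is_local.size) < w_bar       # links that are up at time t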
Our results are reported in Figure 2b. We observe that the n-step regret of CombCascade flattens as time n increases. This means that CombCascade learns near-optimal policies in all networks.

4.3 Diverse Recommendations

In our last experiment, we evaluate CombCascade on a problem of diverse recommendations. This problem is motivated by on-demand media streaming services like Netflix, which often recommend groups of movies, such as "Popular on Netflix" and "Dramas". We experiment with the MovieLens dataset [13] from March 2015. The dataset contains 138k people who assigned 20M ratings to 27k movies between January 1995 and March 2015.

Our learning problem is formulated as follows. The ground set E consists of 200 movies from our dataset: the 25 most rated animated movies, 75 random animated movies, the 25 most rated non-animated movies, and 75 random non-animated movies. The feasible set Θ consists of all K-permutations of E where K/2 movies are animated. The weight of item e at time t, w_t(e), indicates that item e attracts the user at time t. We assume that w_t(e) = 1 if and only if the user rated item e in our dataset. This indicates that the user watched movie e at some point in time, perhaps because the movie was attractive. The user at time t is drawn randomly from our pool of users. The goal of the learning agent is to learn a list of items A* = argmax_{A ∈ Θ} E[f_∨(A, w)] that maximizes the probability that at least one item is attractive. The feedback is the index of the first attractive item in the list (Section 2.3). We would like to point out that our modeling assumptions are violated in this experiment. In particular, the w_t(e) are correlated across items e because the users do not rate movies independently. The result is that A* ≠ argmax_{A ∈ Θ} f_∨(A, w̄). It is NP-hard to compute A*. However, E[f_∨(A, w)] is submodular and monotone in A, and therefore a (1 − 1/e) approximation to A* can be computed greedily. We denote this approximation by Ã and show it for K = 8 in Figure 3a.
a concave function of time n for all studied K. This indicates that CombCascade solutions improve
over time. We note that the regret does not flatten as in Figure 2b. The reason is that CombCascade
does not learn A? . Nevertheless, it performs well and we expect comparably good performance in
other domains where our modeling assumptions are not satisfied. Our current theory cannot explain
this behavior and we leave it for future work.
5
Related Work
Our work generalizes cascading bandits of Kveton et al. [10] to arbitrary combinatorial constraints.
The feasible set in cascading bandits is a uniform matroid, any list of K items out of L is feasible.
Our generalization significantly expands the applicability of the original model and we demonstrate
this on two novel real-world problems (Section 4). Our work also extends stochastic combinatorial
semi-bandits with a linear reward function [8, 11, 12] to the cascade model of feedback. A similar
model to cascading bandits was recently studied by Combes et al. [7].
7
Movie title                Animation
Pulp Fiction               No
Forrest Gump               No
Independence Day           No
Shawshank Redemption       No
Toy Story                  Yes
Shrek                      Yes
Who Framed Roger Rabbit?   Yes
Aladdin                    Yes

[Figure 3b here: the n-step regret of CombCascade for K ∈ {8, 12, 16}, plotted against step n up to 100k.]

Figure 3: a. The optimal list of 8 movies in the diverse recommendations experiment (Section 4.3). b. The n-step regret of CombCascade in this experiment. The results are averaged over 50 runs.
Our generalization is significant for two reasons. First, CombCascade is a novel learning algorithm. CombUCB1 [12] chooses solutions with the largest sum of the UCBs. CascadeUCB1 [10] chooses K items out of L with the largest UCBs. CombCascade chooses solutions with the largest product of the UCBs. All three algorithms can find the optimal solution in cascading bandits. However, when the feasible set is not a matroid, it is critical to maximize the product of the UCBs. CombUCB1 may learn a suboptimal solution in this setting, and we illustrate this in Section 4.1.

Second, our analysis is novel. The proof of Theorem 1 is different from those of Theorems 2 and 3 in Kveton et al. [10]. These proofs are based on counting the number of times that each suboptimal item is chosen instead of any optimal item. They can only be applied to special feasible sets, such as a matroid, because they require that the items in the feasible solutions are exchangeable. We build on the recent work of Kveton et al. [12] to achieve linear dependency on K in Theorem 1. The rest of our analysis is novel.

Our problem is a partial monitoring problem where some of the chosen items may be unobserved. Agrawal et al. [1] and Bartók et al. [4] studied partial monitoring problems and proposed learning algorithms for solving them. These algorithms are impractical in our setting. As an example, if we formulate our problem as in Bartók et al. [4], we get |Θ| actions and 2^L unobserved outcomes; the learning algorithm then reasons over pairs of actions and requires O(2^L) space. Lin et al. [15] also studied combinatorial partial monitoring. Their feedback is a linear function of the weights of chosen items. Our feedback is a non-linear function of the weights.

Our reward function is non-linear in the unknown parameters. Chen et al. [5] studied stochastic combinatorial semi-bandits with a non-linear reward function, which is a known monotone function of an unknown linear function. The feedback in Chen et al. [5] is semi-bandit, which is more informative than in our work. Le et al. [14] studied a network optimization problem where the reward function is a non-linear function of observations.

6 Conclusions

We propose combinatorial cascading bandits, a class of stochastic partial monitoring problems that can model many practical problems, such as learning a routing path in an unreliable communication network that maximizes the probability of packet delivery, and learning to recommend a list of attractive items. We propose a practical UCB-like algorithm for our problems, CombCascade, and prove upper bounds on its regret. We evaluate CombCascade on two real-world problems and show that it performs well even when our modeling assumptions are violated.

Our results and analysis apply to any combinatorial action set, and therefore are quite general. The strongest assumption in our work is that the weights of items are distributed independently of each other. This assumption is critical and hard to eliminate (Section 2.1). Nevertheless, it can be easily relaxed to conditional independence given the features of items, along the lines of Wen et al. [19]. We leave this for future work. From the theoretical point of view, we want to derive a lower bound on the n-step regret in combinatorial cascading bandits, and show that the factor of f* in Theorems 1 and 2 is intrinsic.
References
[1] Rajeev Agrawal, Demosthenis Teneketzis, and Venkatachalam Anantharam. Asymptotically efficient adaptive allocation schemes for controlled i.i.d. processes: Finite parameter space. IEEE Transactions on Automatic Control, 34(3):258–267, 1989.
[2] Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceeding of the 25th Annual Conference on Learning Theory, pages 39.1–39.26, 2012.
[3] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
[4] Gábor Bartók, Navid Zolghadr, and Csaba Szepesvári. An adaptive algorithm for finite stochastic partial monitoring. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[5] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework, results and applications. In Proceedings of the 30th International Conference on Machine Learning, pages 151–159, 2013.
[6] Baek-Young Choi, Sue Moon, Zhi-Li Zhang, Konstantina Papagiannaki, and Christophe Diot. Analysis of point-to-point packet delay in an operational network. In Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies, 2004.
[7] Richard Combes, Stefan Magureanu, Alexandre Proutière, and Cyrille Laroche. Learning to rank: Regret lower bounds and efficient algorithms. In Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, 2015.
[8] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking, 20(5):1466–1478, 2012.
[9] Aurélien Garivier and Olivier Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceeding of the 24th Annual Conference on Learning Theory, pages 359–376, 2011.
[10] Branislav Kveton, Csaba Szepesvári, Zheng Wen, and Azin Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[11] Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Brian Eriksson. Matroid bandits: Fast combinatorial optimization with learning. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pages 420–429, 2014.
[12] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[13] Shyong Lam and Jon Herlocker. MovieLens Dataset. http://grouplens.org/datasets/movielens/, 2015.
[14] Thanh Le, Csaba Szepesvári, and Rong Zheng. Sequential learning for multi-channel wireless network monitoring with channel switching costs. IEEE Transactions on Signal Processing, 62(22):5919–5929, 2014.
[15] Tian Lin, Bruno Abrahao, Robert Kleinberg, John Lui, and Wei Chen. Combinatorial partial monitoring game with linear feedback and its applications. In Proceedings of the 31st International Conference on Machine Learning, pages 901–909, 2014.
[16] Christos Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization. Dover Publications, Mineola, NY, 1998.
[17] Neil Spring, Ratul Mahajan, and David Wetherall. Measuring ISP topologies with Rocketfuel. IEEE/ACM Transactions on Networking, 12(1):2–16, 2004.
[18] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285–294, 1933.
[19] Zheng Wen, Branislav Kveton, and Azin Ashkan. Efficient learning in large-scale combinatorial semi-bandits. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
5,217 | 5,723 | Adaptive Primal-Dual Splitting Methods for
Statistical Learning and Image Processing
Thomas Goldstein
Department of Computer Science
University of Maryland
College Park, MD
tomg@cs.umd.edu
Min Li
School of Economics and Management
Southeast University
Nanjing, China
limin@seu.edu.cn
Xiaoming Yuan
Department of Mathematics
Hong Kong Baptist University
Kowloon Tong, Hong Kong
xmyuan@hkbu.edu.hk
Abstract
The alternating direction method of multipliers (ADMM) is an important tool for
solving complex optimization problems, but it involves minimization sub-steps
that are often difficult to solve efficiently. The Primal-Dual Hybrid Gradient
(PDHG) method is a powerful alternative that often has simpler sub-steps than
ADMM, thus producing lower complexity solvers. Despite the flexibility of this
method, PDHG is often impractical because it requires the careful choice of multiple stepsize parameters. There is often no intuitive way to choose these parameters
to maximize efficiency, or even achieve convergence. We propose self-adaptive
stepsize rules that automatically tune PDHG parameters for optimal convergence.
We rigorously analyze our methods, and identify convergence rates. Numerical
experiments show that adaptive PDHG has strong advantages over non-adaptive
methods in terms of both efficiency and simplicity for the user.
1 Introduction
Splitting methods such as ADMM [1, 2, 3] have recently become popular for solving problems
in distributed computing, statistical regression, and image processing. ADMM allows complex
problems to be broken down into sequences of simpler sub-steps, usually involving large-scale least
squares minimizations. However, in many cases these least squares minimizations are difficult to
directly compute. In such situations, the Primal-Dual Hybrid Gradient method (PDHG) [4, 5],
also called the linearized ADMM [4, 6], enables the solution of complex problems with a simpler
sequence of sub-steps that can often be computed in closed form. This flexibility comes at a cost: the PDHG method requires the user to choose multiple stepsize parameters that jointly determine
the convergence of the method. Without having extensive analytical knowledge about the problem
being solved (such as eigenvalues of linear operators), there is no intuitive way to select stepsize
parameters to obtain fast convergence, or even guarantee convergence at all.
In this article we introduce and analyze self-adaptive variants of PDHG, variants that automatically
tune stepsize parameters to attain (and guarantee) fast convergence without user input. Applying
adaptivity to splitting methods is a difficult problem. It is known that naive adaptive variants of
ADMM are non-convergent, however recent results prove convergence when specific mathematical
requirements are enforced on the stepsizes [7]. Despite this progress, the requirements for convergence of adaptive PDHG have been unexplored. This is surprising, given that stepsize selection is a
much bigger issue for PDHG than for ADMM because it requires multiple stepsize parameters.
The contributions of this paper are as follows. First, we describe applications of PDHG and its
advantages over ADMM. We then introduce a new adaptive variant of PDHG. The new algorithm not
only tunes parameters for fast convergence, but contains a line search that guarantees convergence
when stepsize restrictions are unknown to the user. We analyze the convergence of adaptive PDHG,
and rigorously prove convergence rate guarantees. Finally, we use numerical experiments to show
the advantages of adaptivity on both convergence speed and ease of use.
2 The Primal-Dual Hybrid Gradient Method
The PDHG scheme has its roots in the Arrow-Hurwicz method, which was studied by Popov [8].
Research in this direction was reinvigorated by the introduction of PDHG, which converges rapidly
for a wider range of stepsizes than Arrow-Hurwicz. PDHG was first presented in [9] and analyzed
for convergence in [4, 5]. It was later studied extensively for image segmentation [10]. An extensive
technical study of the method and its variants is given by He and Yuan [11]. Several extensions
of PDHG, including simplified iterations for the case that f or g is differentiable, are presented by
Condat [12]. Several authors have also derived PDHG as a preconditioned form of ADMM [4, 6].
PDHG solves saddle-point problems of the form
min_{x∈X} max_{y∈Y}  f(x) + y^T A x − g(y)    (1)
for convex f and g. We will see later that an incredibly wide range of problems can be cast as (1).
The steps of PDHG are given by
x̂^{k+1} = x^k − τ_k A^T y^k    (2)
x^{k+1} = argmin_{x∈X}  f(x) + (1/(2τ_k)) ‖x − x̂^{k+1}‖²    (3)
ŷ^{k+1} = y^k + σ_k A(2x^{k+1} − x^k)    (4)
y^{k+1} = argmin_{y∈Y}  g(y) + (1/(2σ_k)) ‖y − ŷ^{k+1}‖²    (5)
where {τ_k} and {σ_k} are stepsize parameters. Steps (2) and (3) of the method update x, decreasing the energy (1) by first taking a gradient descent step with respect to the inner product term in (1) and then taking a "backward" or proximal step involving f. In steps (4) and (5), the energy (1) is
increased by first marching up the gradient of the inner product term with respect to y, and then a
backward step is taken with respect to g.
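To make the iteration concrete, here is a minimal Python sketch of one PDHG step for the generic problem (1). It is an illustration written for this text (the function names and the NumPy formulation are our own, not the authors' code), and it assumes the proximal operators of f and g are supplied as callbacks:

```python
import numpy as np

def pdhg_step(x, y, A, prox_f, prox_g, tau, sigma):
    """One PDHG iteration (2)-(5) for min_x max_y f(x) + y^T A x - g(y).

    prox_f(v, t) should return argmin_x f(x) + ||x - v||^2 / (2 t);
    prox_g(w, s) is the analogous proximal operator of g.
    Assumes x and y are 1-D numpy arrays.
    """
    x_hat = x - tau * (A.T @ y)                # forward gradient step, eq. (2)
    x_new = prox_f(x_hat, tau)                 # backward (proximal) step in f, eq. (3)
    y_hat = y + sigma * (A @ (2 * x_new - x))  # gradient ascent step, eq. (4)
    y_new = prox_g(y_hat, sigma)               # backward step in g, eq. (5)
    return x_new, y_new
```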
PDHG has been analyzed in the case of constant stepsizes, τ_k = τ and σ_k = σ. In particular, it is known to converge as long as στ < 1/ρ(A^T A) [4, 5, 11]. However, PDHG typically does not converge when non-constant stepsizes are used, even in the case that σ_k τ_k < 1/ρ(A^T A) [13]. Furthermore, it is unclear how to select stepsizes when the spectral properties of A are unknown. In this article, we identify the specific stepsize conditions that guarantee convergence in the presence of adaptivity, and propose a backtracking scheme that can be used when the spectral radius of A is unknown.
3 Applications
Linear Inverse Problems Many inverse problems and statistical regressions have the form
minimize  h(Sx) + f(Ax − b)    (6)
where f (the data term) is some convex function, h is a (convex) regularizer (such as the `1 -norm),
A and S are linear operators, and b is a vector of data. Recently, the alternating direction method
of multipliers (ADMM) has become a popular method for solving such problems. The ADMM
relies on the change of variables y ← Sx, and generates the following sequence of iterates for some stepsize τ:
x^{k+1} = argmin_x  f(Ax − b) + (Sx − y^k)^T λ^k + (τ/2) ‖Sx − y^k‖²
y^{k+1} = argmin_y  h(y) + (Sx^{k+1} − y)^T λ^k + (τ/2) ‖Sx^{k+1} − y‖²    (7)
λ^{k+1} = λ^k + τ (Sx^{k+1} − y^{k+1}).
The x-update in (7) requires the solution of a (potentially large) least-square problem involving both
A and S. Common formulations such as the consensus ADMM [14] solve these large sub-problems
with direct matrix factorizations, however this is often impractical when either the data matrices are
extremely large or fast transforms (such as FFT, DCT, or Hadamard) cannot be used.
The problem (6) can be put into the form (1) using the Fenchel conjugate of the convex function h, denoted h*, which satisfies the important identity
h(z) = max_y  y^T z − h*(y)
for all z in the domain of h. Replacing h in (6) with this expression involving its conjugate yields
min_x max_y  f(Ax − b) + y^T Sx − h*(y)
which is of the form (1). The forward (gradient) steps of PDHG handle the matrix A explicitly,
allowing linear inverse problems to be solved without any difficult least-squares sub-steps. We will
see several examples of this below.
Scaled Lasso The square-root lasso [15] or scaled lasso [16] is a variable selection regression that
obtains sparse solutions to systems of linear equations. Scaled lasso has several advantages over
classical lasso: it is more robust to noise and it enables setting penalty parameters without cross
validation [15, 16]. Given a data matrix D and a vector b, the scaled lasso finds a sparse solution to
the system Dx = b by solving
min_x  μ‖x‖₁ + ‖Dx − b‖₂    (8)
for some scaling parameter μ. Note the ℓ₂ term in (8) is not squared as in classical lasso. If we write
μ‖x‖₁ = max_{‖y₁‖_∞ ≤ μ}  y₁^T x,   and   ‖Dx − b‖₂ = max_{‖y₂‖₂ ≤ 1}  y₂^T (Dx − b),
we can put (8) in the form (1):
min_x  max_{‖y₁‖_∞ ≤ μ, ‖y₂‖₂ ≤ 1}  y₁^T x + y₂^T (Dx − b).    (9)
Unlike ADMM, PDHG does not require the solution of least-squares problems involving D.
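As an illustration of how simple these sub-steps are, the proximal update for g in the saddle-point form (9) reduces to two projections. The following sketch (our own illustrative code, under the same conventions as the PDHG step above) computes it:

```python
import numpy as np

def prox_g_scaled_lasso(y1_hat, y2_hat, b, sigma, mu):
    """Proximal step for g in (9), where g(y) = b^T y2 plus the indicator
    functions of the constraint sets ||y1||_inf <= mu and ||y2||_2 <= 1."""
    y1 = np.clip(y1_hat, -mu, mu)         # project onto the l-infinity ball
    z = y2_hat - sigma * b                # the linear term b^T y2 shifts the point
    y2 = z / max(1.0, np.linalg.norm(z))  # project onto the unit l2 ball
    return y1, y2
```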
Total-Variation Minimization   Total variation [17] is commonly used to solve problems of the form
min_x  μ‖∇x‖₁ + (1/2) ‖Ax − f‖₂²    (10)
where x is a 2D array (image), ∇ is the discrete gradient operator, A is a linear operator, and f contains data. If we add a dual variable y and write μ‖∇x‖₁ = max_{‖y‖_∞ ≤ μ} y^T ∇x, we obtain
max_{‖y‖_∞ ≤ μ} min_x  (1/2) ‖Ax − f‖₂² + y^T ∇x    (11)
which is clearly of the form (1).
The PDHG solver using formulation (11) avoids the inversion of the gradient operator that is required
by ADMM. This is useful in many applications. For example, in compressive sensing the matrix A
may be a sub-sampled orthogonal Hadamard [18], wavelet, or Fourier transform [19, 20]. In this
case, the proximal sub-steps of PDHG are solvable in closed form using fast transforms because they
do not involve the gradient operator ∇. The sub-steps of ADMM involve both the gradient operator
and the matrix A simultaneously, and thus require inner loops with expensive iterative solvers.
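For readers implementing (10)-(11), a common concrete choice for ∇ and its adjoint is forward differences with Neumann boundary conditions; the paper does not fix this choice, so the sketch below is one standard assumption:

```python
import numpy as np

def grad(x):
    """Forward-difference discrete gradient of a 2D image x."""
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return np.stack([gx, gy])

def div(g):
    """Discrete divergence, defined so that <grad(x), g> = <x, -div(g)>."""
    gx, gy = g
    dx = np.zeros_like(gx)
    dy = np.zeros_like(gy)
    dx[0, :] = gx[0, :]
    dx[1:-1, :] = gx[1:-1, :] - gx[:-2, :]
    dx[-1, :] = -gx[-2, :]
    dy[:, 0] = gy[:, 0]
    dy[:, 1:-1] = gy[:, 1:-1] - gy[:, :-2]
    dy[:, -1] = -gy[:, -2]
    return dx + dy
```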
4 Adaptive Formulation
The convergence of PDHG can be measured by the size of the residuals, or gradients of (1) with
respect to the primal and dual variables x and y. These primal and dual gradients are simply
p^{k+1} = ∂f(x^{k+1}) + A^T y^{k+1},   and   d^{k+1} = ∂g(y^{k+1}) − A x^{k+1}    (12)
where ∂f and ∂g denote the sub-differentials of f and g. The sub-differential can be directly evaluated from the sequence of PDHG iterates using the optimality condition for (3): 0 ∈ ∂f(x^{k+1}) + (1/τ_k)(x^{k+1} − x̂^{k+1}). Rearranging this yields (1/τ_k)(x̂^{k+1} − x^{k+1}) ∈ ∂f(x^{k+1}). The same method can be applied to (5) to obtain ∂g(y^{k+1}). Applying these results to (12) yields the closed form residuals
p^{k+1} = (1/τ_k)(x^k − x^{k+1}) − A^T (y^k − y^{k+1}),   d^{k+1} = (1/σ_k)(y^k − y^{k+1}) − A(x^k − x^{k+1}).    (13)
When choosing the stepsize for PDHG, there is a tradeoff between the primal and dual residuals. Choosing a large τ_k and a small σ_k drives down the primal residuals at the cost of large dual residuals. Choosing a small τ_k and large σ_k results in small dual residuals but large primal errors. One would like to choose stepsizes so that the larger of p^{k+1} and d^{k+1} is as small as possible. If we assume the residuals on step k+1 change monotonically with τ_k, then max{p^{k+1}, d^{k+1}} is minimized when p^{k+1} = d^{k+1}. This suggests that we tune τ_k to "balance" the primal and dual residuals.
To achieve residual balancing, we first select a parameter α₀ < 1 that controls the aggressiveness of adaptivity. On each iteration, we check whether the primal residual is at least twice the dual. If so, we increase the primal stepsize to τ_{k+1} = τ_k/(1 − α_k) and decrease the dual to σ_{k+1} = σ_k(1 − α_k). If the dual residual is at least twice the primal, we do the opposite. When we modify the stepsize, we shrink the adaptivity level to α_{k+1} = η α_k, for η ∈ (0, 1). We will see in Section 5 that this adaptivity level decay is necessary to guarantee convergence. In our implementation we use α₀ = η = 0.95.
In addition to residual balancing, we check the following backtracking condition after each iteration:
(c/(2τ_k)) ‖x^{k+1} − x^k‖² − 2(y^{k+1} − y^k)^T A(x^{k+1} − x^k) + (c/(2σ_k)) ‖y^{k+1} − y^k‖² > 0    (14)
where c ∈ (0, 1) is a constant (we use c = 0.9 in our experiments). If condition (14) fails, then we shrink τ_k and σ_k before the next iteration. We will see in Section 5 that the backtracking condition (14) is sufficient to guarantee convergence. The complete scheme is listed in Algorithm 1.
Algorithm 1 Adaptive PDHG
1: Choose x⁰, y⁰, large τ₀ and σ₀, and set α₀ = η = 0.95.
2: while ‖p^k‖, ‖d^k‖ > tolerance do
3:   Compute (x^{k+1}, y^{k+1}) from (x^k, y^k) using the PDHG updates (2-5)
4:   Check the backtracking condition (14) and if it fails set τ_k ← τ_k/2, σ_k ← σ_k/2
5:   Compute the residuals (13), and use them for the following two adaptive updates
6:   If 2‖p^{k+1}‖ < ‖d^{k+1}‖, then set τ_{k+1} = τ_k(1 − α_k), σ_{k+1} = σ_k/(1 − α_k), and α_{k+1} = α_k η
7:   If ‖p^{k+1}‖ > 2‖d^{k+1}‖, then set τ_{k+1} = τ_k/(1 − α_k), σ_{k+1} = σ_k(1 − α_k), and α_{k+1} = α_k η
8:   If no adaptive updates were triggered, then τ_{k+1} = τ_k, σ_{k+1} = σ_k, and α_{k+1} = α_k
9: end while
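A compact Python sketch of Algorithm 1 follows. It reuses the `pdhg_step` function sketched in Section 2 and is our own illustrative reconstruction; in particular, re-attempting the step after a backtracking failure is one plausible reading of line 4, not something the pseudocode states explicitly:

```python
import numpy as np

def adaptive_pdhg(x, y, A, prox_f, prox_g, tau=1.0, sigma=1.0,
                  alpha=0.95, eta=0.95, c=0.9, tol=0.05, max_iter=1000):
    """Adaptive PDHG (Algorithm 1): residual balancing plus backtracking."""
    for _ in range(max_iter):
        x_new, y_new = pdhg_step(x, y, A, prox_f, prox_g, tau, sigma)
        dx, dy = x_new - x, y_new - y
        # Backtracking condition (14); on failure halve both stepsizes.
        bt = (c / (2 * tau)) * (dx @ dx) - 2 * dy @ (A @ dx) \
             + (c / (2 * sigma)) * (dy @ dy)
        if bt <= 0:
            tau, sigma = tau / 2, sigma / 2
            continue
        # Closed-form residuals (13); the signs do not affect the norms.
        pn = np.linalg.norm(dx / tau - A.T @ dy)
        dn = np.linalg.norm(dy / sigma - A @ dx)
        x, y = x_new, y_new
        if pn < tol and dn < tol:
            break
        # Residual balancing with geometrically decaying adaptivity level.
        if 2 * pn < dn:      # dual residual dominates: shrink tau, grow sigma
            tau, sigma, alpha = tau * (1 - alpha), sigma / (1 - alpha), alpha * eta
        elif pn > 2 * dn:    # primal residual dominates: grow tau, shrink sigma
            tau, sigma, alpha = tau / (1 - alpha), sigma * (1 - alpha), alpha * eta
    return x, y
```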
5 Convergence Theory
In this section, we analyze Algorithm 1 and its rate of convergence. In our analysis, we consider
adaptive variants of PDHG that satisfy the following assumptions. We will see later that these
assumptions guarantee convergence of PDHG with rate O(1/k).
Algorithm 1 trivially satisfies Assumption A. The sequence {φ_k} measures the adaptive aggressiveness on iteration k, and serves the same role as α_k in Algorithm 1. The geometric decay of α_k ensures that Assumption B holds. The backtracking rule explicitly guarantees Assumption C.
Assumptions for Adaptive PDHG
A  The sequences {τ_k} and {σ_k} are positive and bounded.
B  The sequence {φ_k} is summable, where φ_k = max{ (τ_k − τ_{k+1})/τ_{k+1}, (σ_k − σ_{k+1})/σ_{k+1}, 0 }.
C  Either X or Y is bounded, and there is a constant c ∈ (0, 1) such that for all k > 0
   (c/(2τ_k)) ‖x^{k+1} − x^k‖² − 2(y^{k+1} − y^k)^T A(x^{k+1} − x^k) + (c/(2σ_k)) ‖y^{k+1} − y^k‖² > 0.
5.1 Variational Inequality Formulation
For notational simplicity, we define the composite vector u^k = (x^k, y^k) and the matrices
M_k = [[(1/τ_k) I, −A^T], [−A, (1/σ_k) I]],   H_k = [[(1/τ_k) I, 0], [0, (1/σ_k) I]],   and   Q(u) = (A^T y, −Ax).    (15)
This notation allows us to formulate the optimality conditions for (1) as a variational inequality (VI). If u* = (x*, y*) is a solution to (1), then x* is a minimizer of (1). More formally,
f(x) − f(x*) + (x − x*)^T A^T y* ≥ 0   ∀x ∈ X.    (16)
Likewise, (1) is maximized by y*, and so
−g(y) + g(y*) + (y − y*)^T A x* ≤ 0   ∀y ∈ Y.    (17)
Subtracting (17) from (16) and letting h(u) = f(x) + g(y) yields the VI formulation
h(u) − h(u*) + (u − u*)^T Q(u*) ≥ 0   ∀u ∈ Ω,    (18)
where Ω = X × Y. We say ū is an approximate solution to (1) with VI accuracy ε if
h(u) − h(ū) + (u − ū)^T Q(ū) ≥ −ε   ∀u ∈ B₁(ū) ∩ Ω,    (19)
where B₁(ū) is a unit ball centered at ū. In Theorem 1, we prove O(1/k) ergodic convergence of adaptive PDHG using the VI notion of convergence.
5.2 Preliminary Results
We now prove several results about the PDHG iterates that are needed to obtain a convergence rate.
Lemma 1. The iterates generated by PDHG (2-5) satisfy
‖u^k − u*‖²_{M_k} ≥ ‖u^{k+1} − u^k‖²_{M_k} + ‖u^{k+1} − u*‖²_{M_k}.
The proof of this lemma follows standard techniques, and is presented in the supplementary material.
This next lemma bounds the iterates generated by PDHG.
Lemma 2. Suppose the stepsizes for PDHG satisfy Assumptions A, B and C. Then
‖u^k − u*‖²_{H_k} ≤ C_U
for some upper bound C_U > 0.
The proof of this lemma is given in the supplementary material.
Lemma 3. Under Assumptions A, B, and C, we have
Σ_{k=1}^n ( ‖u^k − u‖²_{M_k} − ‖u^k − u‖²_{M_{k−1}} ) ≤ 2C_φ C_U + 2C_φ C_H ‖u − u*‖²
where C_φ = Σ_{k=0}^∞ φ_k and C_H is a constant such that ‖u − u*‖²_{H_k} ≤ C_H ‖u − u*‖².
Proof. Using the definition of M_k we obtain
Σ_{k=1}^n ( ‖u^k − u‖²_{M_k} − ‖u^k − u‖²_{M_{k−1}} )
  = Σ_{k=1}^n [ (1/τ_k − 1/τ_{k−1}) ‖x^k − x‖² + (1/σ_k − 1/σ_{k−1}) ‖y^k − y‖² ]
  ≤ Σ_{k=1}^n φ_{k−1} [ (1/τ_{k−1}) ‖x^k − x‖² + (1/σ_{k−1}) ‖y^k − y‖² ]    (20)
  = Σ_{k=1}^n φ_{k−1} ‖u^k − u‖²_{H_{k−1}}
  ≤ Σ_{k=1}^n 2φ_{k−1} ( ‖u^k − u*‖²_{H_{k−1}} + ‖u − u*‖²_{H_{k−1}} )
  ≤ 2C_φ C_U + 2C_φ C_H ‖u − u*‖²,
where we have used the bound ‖u^k − u*‖²_{H_k} ≤ C_U from Lemma 2 and C_φ = Σ_{k=0}^∞ φ_k.
This final lemma provides a VI interpretation of the PDHG iteration.
Lemma 4. The iterates u^k = (x^k, y^k) generated by PDHG satisfy
h(u) − h(u^{k+1}) + (u − u^{k+1})^T [ Q(u^{k+1}) + M_k(u^{k+1} − u^k) ] ≥ 0   ∀u ∈ Ω.    (21)
Proof. Let u^k = (x^k, y^k) be a pair of PDHG iterates. The minimizers in (3) and (5) of PDHG satisfy the following for all x ∈ X
f(x) − f(x^{k+1}) + (x − x^{k+1})^T [ A^T y^{k+1} − A^T(y^{k+1} − y^k) + (1/τ_k)(x^{k+1} − x^k) ] ≥ 0,    (22)
and also for all y ∈ Y
g(y) − g(y^{k+1}) + (y − y^{k+1})^T [ −A x^{k+1} − A(x^{k+1} − x^k) + (1/σ_k)(y^{k+1} − y^k) ] ≥ 0.    (23)
Adding these two inequalities and using the notation (15) yields the result.
5.3 Convergence Rate
We now combine the above lemmas into our final convergence result.
Theorem 1. Suppose that the stepsizes in PDHG satisfy Assumptions A, B, and C. Consider the sequence defined by
ū_t = (1/t) Σ_{k=1}^t u^k.
This sequence satisfies the convergence bound
h(u) − h(ū_t) + (u − ū_t)^T Q(ū_t) ≥ ( ‖u − ū_t‖²_{M_t} − ‖u − u⁰‖²_{M_0} − 2C_φ C_U − 2C_φ C_H ‖u − u*‖² ) / (2t).
Thus ū_t converges to a solution of (1) with rate O(1/k) in the VI sense (19).
Proof. We begin with the following identity (a special case of the polar identity for vector spaces):
(u − u^{k+1})^T M_k (u^k − u^{k+1}) = (1/2)( ‖u − u^{k+1}‖²_{M_k} − ‖u − u^k‖²_{M_k} ) + (1/2) ‖u^k − u^{k+1}‖²_{M_k}.    (24)
We apply this to the VI formulation of the PDHG iteration (21) to get
h(u) − h(u^{k+1}) + (u − u^{k+1})^T Q(u^{k+1}) ≥ (1/2)( ‖u − u^{k+1}‖²_{M_k} − ‖u − u^k‖²_{M_k} ) + (1/2) ‖u^k − u^{k+1}‖²_{M_k}.    (25)
Note that
(u − u^{k+1})^T Q(u − u^{k+1}) = (x − x^{k+1})^T A^T (y − y^{k+1}) − (y − y^{k+1})^T A(x − x^{k+1}) = 0,
and so (u − u^{k+1})^T Q(u) = (u − u^{k+1})^T Q(u^{k+1}). Also, Assumption C guarantees that ‖u^k − u^{k+1}‖²_{M_k} ≥ 0. These observations reduce (25) to
h(u) − h(u^{k+1}) + (u − u^{k+1})^T Q(u) ≥ (1/2)( ‖u − u^{k+1}‖²_{M_k} − ‖u − u^k‖²_{M_k} ).    (26)
We now sum (26) for k = 0 to t − 1, and invoke Lemma 3, to obtain
2 Σ_{k=0}^{t−1} [ h(u) − h(u^{k+1}) + (u − u^{k+1})^T Q(u) ]
  ≥ ‖u − u^t‖²_{M_t} − ‖u − u⁰‖²_{M_0} + Σ_{k=1}^t ( ‖u − u^k‖²_{M_{k−1}} − ‖u − u^k‖²_{M_k} )
  ≥ ‖u − u^t‖²_{M_t} − ‖u − u⁰‖²_{M_0} − 2C_φ C_U − 2C_φ C_H ‖u − u*‖².    (27)
Because h is convex,
Σ_{k=0}^{t−1} h(u^{k+1}) = Σ_{k=1}^t h(u^k) ≥ t h( (1/t) Σ_{k=1}^t u^k ) = t h(ū_t).
The left side of (27) therefore satisfies
2t [ h(u) − h(ū_t) + (u − ū_t)^T Q(u) ] ≥ 2 Σ_{k=0}^{t−1} [ h(u) − h(u^{k+1}) + (u − u^{k+1})^T Q(u) ].    (28)
Combining (27) and (28) yields the bound
h(u) − h(ū_t) + (u − ū_t)^T Q(u) ≥ ( ‖u − u^t‖²_{M_t} − ‖u − u⁰‖²_{M_0} − 2C_φ C_U − 2C_φ C_H ‖u − u*‖² ) / (2t).
Applying (19) proves the theorem.
6 Numerical Results
We apply the original and adaptive PDHG to the test problems described in Section 3. We terminate the algorithms when both the primal and dual residual norms (i.e. ‖p^k‖ and ‖d^k‖) are smaller than 0.05. We consider four variants of PDHG. The method "Adapt: Backtrack" denotes adaptive PDHG with backtracking. The method "Adapt: τσ = L" refers to the adaptive method without backtracking, with τ₀ = σ₀ = 0.95 ρ(A^T A)^{−1/2}. We also consider the non-adaptive PDHG with two different stepsize choices. The method "Const: τ, σ = √L" refers to the constant-stepsize method with both stepsize parameters equal to √L = ρ(A^T A)^{−1/2}. The method "Const: τ-final" refers to the constant-stepsize method, where the stepsizes are chosen to be the final values of the stepsizes used by "Adapt: τσ = L". This final method is meant to demonstrate the performance of PDHG with a stepsize that is customized to the problem at hand, but still non-adaptive. The specifics of each test problem are described below:
[Figure 1 appears here: two panels. The left panel, titled "ROF Convergence Curves, μ = 0.05", plots the energy gap against iteration for all four methods (Adapt: Backtrack; Adapt: τσ = L; Const: τ, σ = √L; Const: τ-final). The right panel plots the primal stepsize τ_k against iteration for the two adaptive schemes.]
Figure 1: (left) Convergence curves for the TV denoising experiment with μ = 0.05. The y-axis displays the difference between the objective (10) at the kth iterate and the optimal objective value. (right) Stepsize sequences, {τ_k}, for both adaptive schemes.
Table 1: Iteration counts for each problem with runtime (sec) in parentheses.

Problem            | Adapt: Backtrack | Adapt: τσ = L | Const: τ, σ = √L | Const: τ-final
Scaled Lasso (50%) | 212 (0.33)       | 240 (0.38)    | 342 (0.60)       | 156 (0.27)
Scaled Lasso (20%) | 349 (0.22)       | 330 (0.21)    | 437 (0.25)       | 197 (0.11)
Scaled Lasso (10%) | 360 (0.21)       | 322 (0.18)    | 527 (0.28)       | 277 (0.15)
TV, μ = .25        | 16 (0.0475)      | 16 (0.041)    | 78 (0.184)       | 48 (0.121)
TV, μ = .05        | 50 (0.122)       | 51 (0.122)    | 281 (0.669)      | 97 (0.228)
TV, μ = .01        | 109 (0.262)      | 122 (0.288)   | 927 (2.17)       | 152 (0.369)
Compressive (20%)  | 163 (4.08)       | 168 (4.12)    | 501 (12.54)      | 246 (6.03)
Compressive (10%)  | 244 (5.63)       | 274 (6.21)    | 908 (20.6)       | 437 (9.94)
Compressive (5%)   | 382 (9.54)       | 438 (10.7)    | 1505 (34.2)      | 435 (9.95)
Scaled Lasso We test our methods on (8) using the synthetic problem suggested in [21]. The test
problem recovers a 1000 dimensional vector with 10 nonzero components using a Gaussian matrix.
Total Variation Minimization We apply the model (10) with A = I to the "Cameraman" image. The image is scaled to the range [0, 255], and contaminated with noise of standard deviation 10. The image is denoised with μ = 0.25, 0.05, and 0.01. See Table 1 for time trial results. Note the similar performance of Algorithm 1 with and without backtracking, indicating that there is no advantage to knowing the constant L = ρ(A^T A)^{−1}. We plot convergence curves and show the evolution of τ_k in Figure 1. Note that τ_k is large for the first several iterates and then decays over time.
Compressed Sensing We reconstruct a Shepp-Logan phantom from sub-sampled Hadamard measurements. Data is generated by applying the Hadamard transform to a 256 × 256 discretization of the Shepp-Logan phantom, and then sampling 5%, 10%, and 20% of the coefficients at random.
7 Discussion and Conclusion
Several interesting observations can be made from the results in Table 1. First, both the backtracking ("Adapt: Backtrack") and non-backtracking ("Adapt: τσ = L") methods have similar performance on average for the imaging problems, with neither algorithm showing consistently better performance. Thus there is no cost to using backtracking instead of knowing the ideal stepsize ρ(A^T A). Finally, the method "Const: τ-final" (using non-adaptive, "optimized" stepsizes) did not always outperform the constant, non-optimized stepsizes. This occurs because the true "best" stepsize choice depends on the active set of the problem and the structure of the remaining error, and thus evolves over time. This is depicted in Figure 1, which shows the time dependence of τ_k. This shows that adaptive methods can achieve superior performance by evolving the stepsize over time.
8 Acknowledgments
This work was supported by the National Science Foundation (#1535902), the Office of Naval Research (#N00014-15-1-2676), and the Hong Kong Research Grants Council's General Research
Fund (HKBU 12300515). The second author was supported in part by the Program for New Century
Excellent University Talents under Grant No. NCET-12-0111, and the Qing Lan Project.
References
[1] R. Glowinski and A. Marroco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Rev. Française d'Automat. Inf. Recherche Opérationnelle, 9(2):41-76, 1975.
[2] Roland Glowinski and Patrick Le Tallec. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1989.
[3] Tom Goldstein and Stanley Osher. The Split Bregman method for L1 regularized problems. SIAM J. Img. Sci., 2(2):323-343, April 2009.
[4] Ernie Esser, Xiaoqun Zhang, and Tony F. Chan. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM Journal on Imaging Sciences, 3(4):1015-1046, 2010.
[5] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Convergence, 40(1):1-49, 2010.
[6] Yuyuan Ouyang, Yunmei Chen, Guanghui Lan, and Eduardo Pasiliao Jr. An accelerated linearized alternating direction method of multipliers. arXiv preprint arXiv:1401.6607, 2014.
[7] B. He, H. Yang, and S.L. Wang. Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. Journal of Optimization Theory and Applications, 106(2):337-356, 2000.
[8] L.D. Popov. A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical Notes of the Academy of Sciences of the USSR, 28:845-848, 1980.
[9] Mingqiang Zhu and Tony Chan. An efficient primal-dual hybrid gradient algorithm for total variation image restoration. UCLA CAM technical report, 08-34, 2008.
[10] T. Pock, D. Cremers, H. Bischof, and A. Chambolle. An algorithm for minimizing the Mumford-Shah functional. In Computer Vision, 2009 IEEE 12th International Conference on, pages 1133-1140, 2009.
[11] Bingsheng He and Xiaoming Yuan. Convergence analysis of primal-dual algorithms for a saddle-point problem: From contraction perspective. SIAM J. Img. Sci., 5(1):119-149, January 2012.
[12] Laurent Condat. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications, 158(2):460-479, 2013.
[13] Silvia Bonettini and Valeria Ruggiero. On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. Journal of Mathematical Imaging and Vision, 44(3):236-253, 2012.
[14] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 2010.
[15] A. Belloni, Victor Chernozhukov, and L. Wang. Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4):791-806, 2011.
[16] Tingni Sun and Cun-Hui Zhang. Scaled sparse linear regression. Biometrika, 99(4):879-898, 2012.
[17] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259-268, 1992.
[18] Tom Goldstein, Lina Xu, Kevin Kelly, and Richard Baraniuk. The STONE transform: Multi-resolution image enhancement and real-time compressive video. Preprint available at Arxiv.org (arXiv:1311.34056), 2013.
[19] M. Lustig, D. Donoho, and J. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58:1182-1195, 2007.
[20] Xiaoqun Zhang and J. Froment. Total variation based Fourier reconstruction and regularization for computer tomography. In Nuclear Science Symposium Conference Record, 2005 IEEE, volume 4, pages 2332-2336, Oct 2005.
[21] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1994.
| 5723 |@word kong:3 cu:9 mri:1 inversion:1 trial:1 norm:2 k2hk:5 linearized:2 contraction:1 automat:1 contains:2 series:1 discretization:1 surprising:1 dx:3 chu:1 dct:1 numerical:3 enables:2 plot:1 update:5 fund:1 rudin:1 une:1 xk:27 recherche:1 record:1 iterates:8 provides:1 org:1 simpler:3 zhang:3 mathematical:3 direct:1 become:2 differential:2 symposium:1 yuan:3 prove:4 combine:1 introduce:2 x0:1 rapid:1 p1:2 mechanic:1 multi:1 y2y:2 decreasing:1 automatically:2 solver:3 begin:1 project:1 bounded:2 notation:2 ouyang:1 compressive:5 impractical:2 eduardo:1 guarantee:10 unexplored:1 runtime:1 biometrika:2 k2:17 scaled:10 uk:36 control:1 unit:1 grant:2 producing:1 before:1 positive:1 pock:2 modify:1 despite:2 k2mt:4 laurent:1 twice:2 china:1 studied:2 suggests:1 co:2 ease:1 factorization:1 range:3 acknowledgment:1 evolving:1 attain:1 composite:2 boyd:1 refers:3 nanjing:1 cannot:1 get:1 selection:3 operator:8 put:2 applying:4 restriction:1 phantom:2 lagrangian:1 economics:1 incredibly:1 convex:8 ergodic:1 formulate:1 resolution:1 simplicity:2 splitting:5 recovery:1 pasiliao:1 rule:2 array:1 nuclear:1 century:1 handle:1 notion:1 variation:7 suppose:2 user:4 programming:1 pa:1 trend:1 expensive:1 yuyuan:1 sxk:2 kxk1:2 role:1 preprint:2 solved:2 wang:2 ensures:1 sun:1 decrease:1 yk:1 broken:1 complexity:1 miny:1 rigorously:2 cam:1 solving:4 efficiency:2 bingsheng:1 regularizer:1 fast:5 describe:1 kevin:1 choosing:3 y1t:2 solve:3 larger:1 say:1 supplementary:2 reconstruct:1 compressed:2 jointly:1 transform:3 final:7 advantage:5 sequence:10 eigenvalue:1 analytical:1 differentiable:1 triggered:1 propose:2 subtracting:1 reconstruction:1 product:2 hadamard:4 loop:1 rapidly:1 combining:1 flexibility:2 achieve:3 academy:1 intuitive:2 ky:5 convergence:34 enhancement:1 requirement:2 converges:2 wider:1 measured:1 op:1 school:1 progress:1 strong:1 solves:1 c:1 involves:1 come:1 direction:6 radius:1 xiaoqun:2 centered:1 aggressiveness:2 material:2 require:2 preliminary:1 extension:1 physica:1 hold:1 xk2:2 polar:1 erationelle:1 chernozhukov:1 council:1 southeast:1 tool:1 minimization:5 clearly:1 kowloon:1 gaussian:1 always:1 ck:2 shrinkage:1 stepsizes:12 office:1 derived:1 ax:6 naval:1 notational:1 consistently:1 check:3 hk:1 industrial:1 sense:1 minimizers:1 typically:1 issue:1 dual:21 arg:4 denoted:1 ussr:1 resonance:1 special:1 equal:1 having:1 sampling:1 park:1 minimized:1 contaminated:1 report:1 richard:1 franc:1 simultaneously:1 seu:1 national:1 qing:1 analyzed:2 primal:21 bregman:1 popov:2 necessary:1 orthogonal:1 logan:2 mk:4 increased:1 fenchel:1 tingni:1 restoration:2 cost:3 deviation:1 kdx:2 proximal:2 synthetic:1 guanghui:1 st:2 international:1 siam:3 invoke:1 squared:1 management:1 choose:4 summable:1 hkbu:2 li:1 de:2 sec:1 coefficient:1 satisfy:6 cremers:1 explicitly:2 valeria:1 vi:7 depends:1 later:3 root:3 closed:3 analyze:4 denoised:1 contribution:1 minimize:1 square:7 accuracy:1 efficiently:1 likewise:1 yield:6 identify:2 maximized:1 proximable:1 backtrack:3 rx:2 drive:1 kpk:4 definition:1 energy:3 proof:5 recovers:1 sampled:2 popular:2 knowledge:1 ut:8 stanley:1 segmentation:1 goldstein:3 tom:2 april:1 formulation:6 evaluated:1 shrink:2 chambolle:2 furthermore:1 hand:1 replacing:1 axk:2 nonlinear:2 k22:1 true:1 multiplier:4 evolution:1 regularization:1 alternating:5 nonzero:1 self:3 hong:3 eaires:1 stone:1 complete:1 demonstrate:1 image:10 variational:3 recently:2 fi:1 parikh:1 common:1 superior:1 functional:1 volume:1 he:3 interpretation:1 measurement:1 probl:1 talent:1 trivially:1 
mathematics:2 esser:1 yk2:3 add:1 patrick:1 recent:1 chan:2 perspective:1 inf:1 n00014:1 inequality:4 victor:1 pauly:1 mr:1 determine:1 maximize:1 converge:2 monotonically:1 signal:1 u0:4 multiple:3 yunmei:1 technical:2 adapt:7 cross:1 long:1 lin:1 roland:1 bigger:1 lina:1 parenthesis:1 kax:2 involving:6 regression:5 variant:7 vision:2 arxiv:4 iteration:10 addition:1 x2x:2 umd:1 unlike:1 presence:1 ideal:1 yang:1 split:1 fft:1 iterate:1 lasso:13 opposite:1 inner:3 reduce:1 cn:1 knowing:2 tradeoff:1 hurwicz:3 whether:1 expression:1 penalty:2 useful:1 involve:2 tune:4 listed:1 transforms:2 extensively:1 tomography:1 outperform:1 tibshirani:1 write:2 discrete:1 four:1 lan:2 lustig:1 neither:1 kuk:13 backward:2 imaging:6 ordre:1 monotone:1 sum:1 enforced:1 inverse:3 powerful:1 baraniuk:1 k2m0:4 scaling:1 bound:5 convergent:1 display:1 ements:1 belloni:1 ucla:1 generates:1 fourier:2 speed:1 min:9 extremely:1 optimality:2 emes:1 xiaoming:2 department:2 tv:4 ball:1 conjugate:2 jr:1 smaller:1 cun:1 evolves:1 rev:1 modification:1 osher:2 taken:1 marroco:1 equation:1 cameraman:1 count:1 needed:1 letting:1 end:1 serf:1 finis:1 available:1 apply:3 spectral:2 magnetic:1 stepsize:24 alternative:1 ky1:2 shah:1 thomas:2 original:1 denotes:1 remaining:1 dirichlet:1 tony:2 const:5 lipschitzian:1 medicine:1 k1:2 prof:1 classical:2 society:2 objective:2 occurs:1 mumford:1 dependence:1 md:1 unclear:1 gradient:14 minx:1 y2t:2 kth:1 maryland:1 sci:2 consensus:1 preconditioned:1 fatemi:1 sur:1 kk:1 balance:1 minimizing:1 difficult:4 robert:1 potentially:1 ba:2 implementation:1 unknown:3 allowing:1 upper:1 observation:2 descent:1 january:1 situation:1 glowinski:2 peleato:1 cast:1 required:1 pair:1 extensive:2 optimized:2 tallec:1 bischof:1 eckstein:1 rof:1 shepp:2 suggested:1 usually:1 below:2 program:1 including:1 max:9 video:1 royal:1 pdhg:49 hybrid:5 regularized:1 solvable:1 residual:14 customized:1 zhu:1 scheme:4 tasty:1 conic:1 axis:1 naive:1 dualit:1 ky2:2 geometric:1 kelly:1 removal:1 par:2 adaptivity:6 interesting:1 validation:1 foundation:2 sufficient:1 article:2 bk2:2 balancing:2 quk:1 supported:2 side:1 wide:1 taking:2 sparse:5 distributed:2 tolerance:1 curve:3 avoids:1 author:2 forward:1 adaptive:26 commonly:1 simplified:1 kdk:4 made:1 approximate:1 obtains:1 active:1 b1:2 img:2 search:2 iterative:1 un:1 table:3 ku:26 terminate:1 robust:1 rearranging:1 excellent:1 complex:3 domain:1 did:1 pk:5 arrow:3 silvia:1 noise:3 antonin:1 condat:2 pivotal:1 xu:1 aise:1 augmented:1 tong:1 esolution:1 sub:12 fails:2 classe:1 kyk1:1 wavelet:1 down:2 theorem:3 specific:3 showing:1 sensing:3 dk:5 decay:3 adding:1 hui:1 kx:2 sx:4 gap:1 marching:1 chen:1 depicted:1 backtracking:11 simply:1 saddle:3 limin:1 kxk:3 ch:8 minimizer:1 satisfies:4 relies:1 oct:1 identity:3 donoho:1 careful:1 admm:15 change:2 denoising:1 lemma:11 called:1 total:7 la:1 indicating:1 select:3 college:1 formally:1 meant:1 accelerated:1 |
5,218 | 5,724 | Sum-of-Squares Lower Bounds for Sparse PCA
Tengyu Ma (1) and Avi Wigderson (2)
(1) Department of Computer Science, Princeton University. Supported in part by a Simons Award for Graduate Students in Theoretical Computer Science.
(2) School of Mathematics, Institute for Advanced Study. Supported in part by NSF grant CCF-1412958.
Abstract
This paper establishes a statistical versus computational trade-off for solving
a basic high-dimensional machine learning problem via a basic convex relaxation method. Specifically, we consider the Sparse Principal Component
Analysis (Sparse PCA) problem, and the family of Sum-of-Squares (SoS, aka
Lasserre/Parrilo) convex relaxations. It was well known that in large dimension p, a planted k-sparse unit vector can in principle be detected using only n ≈ k log p (Gaussian or Bernoulli) samples, but all known efficient (polynomial time) algorithms require n ≈ k² samples. It was also known that this quadratic gap cannot be improved by the most basic semi-definite (SDP, aka spectral) relaxation, equivalent to degree-2 SoS algorithms. Here we prove that degree-4 SoS algorithms also cannot improve this quadratic gap. This average-case lower bound adds
to the small collection of hardness results in machine learning for this powerful
family of convex relaxation algorithms. Moreover, our design of moments (or "pseudo-expectations") for this lower bound is quite different from previous lower bounds. Establishing lower bounds for higher degree SoS algorithms remains a challenging problem.
1 Introduction
We start with a general discussion of the tension between sample size and computational efficiency in
statistical and learning problems. We then describe the concrete model and problem at hand: Sum-of-Squares algorithms and the Sparse-PCA problem. All are broad topics studied from different
viewpoints, and the given references provide more information.
1.1 Statistical vs. computational sample-size
Modern machine learning and statistical inference problems are often high dimensional, and it is
highly desirable to solve them using far fewer samples than the ambient dimension. Luckily, we often
know, or assume, some underlying structure of the objects sought, which allows such savings in
principle. Typical such assumption is that the number of real degrees of freedom is far smaller
than the dimension; examples include sparsity constraints for vectors, and low rank for matrices
and tensors. The main difficulty that occurs in nearly all these problems is that while information
theoretically the sought answer is present (with high probability) in a small number of samples,
actually computing (or even approximating) it from these many samples is a computationally hard
problem. It is often expressed as a non-convex optimization program which is NP-hard in the worst
case, and seemingly hard even on random instances.
Given this state of affairs, relaxed formulations of such non-convex programs were proposed, which
can be solved efficiently, but sometimes to achieve accurate results seem to require far more samples
than existential bounds provide. This phenomenon has been coined the "statistical versus computational trade-off" by Chandrasekaran and Jordan [1], who motivate and formalize one framework to
study it in which efficient algorithms come from the Sum-of-Squares family of convex relaxations
(which we shall presently discuss). They further give a detailed study of this trade-off for the basic
de-noising problem [2, 3, 4] in various settings (some exhibiting the trade-off and others that do
not). This trade-off was observed in other practical machine learning problems, in particular for the
Sparse PCA problem that will be our focus, by Berthet and Rigollet [5].
As it turns out, the study of the same phenomenon was proposed even earlier in computational
complexity, primarily from theoretical motivations. Decatur, Goldreich and Ron [6] initiate the study of "computational sample complexity" to study statistical versus computational trade-offs in sample-size. In their framework efficient algorithms are arbitrary polynomial time ones, not restricted to any
particular structure like convex relaxations. They point out for example that in the distribution-free
PAC-learning framework of Vapnik-Chervonenkis and Valiant, there is often no such trade-off. The
reason is that the number of samples is essentially determined (up to logarithmic factors, which we
will mostly ignore here) by the VC-dimension of the given concept class learned, and moreover,
an "Occam algorithm" (computing any consistent hypothesis) suffices for classification from these
many samples. So, in the many cases where efficiently finding a hypothesis consistent with the
data is possible, enough samples to learn are enough to do so efficiently! This paper also provides
examples where this is not the case in PAC learning, and then turns to an extensive study of possible
trade-offs for learning various concept classes under the uniform distribution. This direction was
further developed by Servedio [7].
The fast growth of Big Data research, the variety of problems successfully attacked by various
heuristics, and the attempts to find efficient algorithms with provable guarantees form a growing area of
interaction between statisticians and machine learning researchers on the one hand, and optimization and computer scientists on the other. The trade-offs between sample size and computational
complexity, which seem to be present for many such problems, reflect a curious "conflict" between these fields, as in the first more data is good news, as it allows more accurate inference and
prediction, whereas in the second it is bad news, as a larger input size is a source of increased complexity and inefficiency. More importantly, understanding this phenomenon can serve as a guide to
the design of better algorithms from both a statistical and computational viewpoints, especially for
problems in which data acquisition itself is costly, and not just computation. A basic question is
thus for which problems is such trade-off inherent, and to establish the limits of what is achievable
by efficient methods.
Establishing a trade-off has two parts. One has to prove an existential, information theoretic upper
bound on the number of samples needed when efficiency is not an issue, and then prove a computational lower bound on the number of samples for the class of efficient algorithms at hand. Needless
to say, it is desirable that the lower bounds hold for as wide a class of algorithms as possible, and that
it will match the best known upper bound achieved by algorithms from this class. The most general
one, the computational complexity framework of [6, 7] allows all polynomial-time algorithms. Here
one cannot hope for unconditional lower bounds, and so existing lower bounds rely on computational assumptions ("cryptographic assumptions"), e.g. that factoring integers has no polynomial
time algorithm, or other average case assumptions. For example, hardness of refuting random 3CNF
was used for establishing the sample-computational tradeoff for learning halfspaces [8], and hardness of finding planted clique in random graphs was used for tradeoff in sparse PCA [5, 9]. On the
other hand, in frameworks such as [1], where the class of efficient algorithms is more restricted (e.g.
a family of convex relaxations), one can hope to prove unconditional lower bounds, which are called
"integrality gaps" in the optimization and algorithms literature. Our main result is of this nature,
adding to the small number of such lower bounds for machine learning problems.
We now describe and motivate SoS convex relaxations algorithms, and the Sparse PCA problem.
1.2 Sum-of-Squares convex relaxations
Sum-of-Squares algorithms (sometimes called the Lasserre hierarchy) encompasses perhaps the
strongest known algorithmic technique for a diverse set of optimization problems. It is a family
of convex relaxations introduced independently around the year 2000 by Lasserre [10], Parrilo [11],
and in the (equivalent) context of proof systems by Grigoriev [12]. These papers followed better
and better understanding in real algebraic geometry [13, 14, 15, 16, 17, 18, 19] of David Hilbert's
famous 17th problem on certifying the non-negativity of a polynomial by writing it as a sum of
squares (which explains the name of this method). We only briefly describe this important class of
algorithms; far more can be found in the book [20] and the excellent extensive survey [21].
The SoS method provides a principled way of adding constraints to a linear or convex program in a
way that obtains tighter and tighter convex sets containing all solutions of the original problem. This
family of algorithms is parametrized by their degree d (sometimes called the number of rounds); as
d gets larger, the approximation becomes better, but the running time becomes slower, specifically
nO(d) . Thus in practice one hopes that small degree (ideally constant) would provide sufficiently
good approximation, so that the algorithm would run in polynomial time. This method extends
the standard semi-definite relaxation (SDP, sometimes called spectral), that is captured already by
degree-2 SoS algorithms. Moreover, it is more powerful than two earlier families of relaxations: the
Sherali-Adams [22] and Lovász-Schrijver [23] hierarchies.
The introduction of these algorithms has made a huge splash in the optimization community, and
numerous applications of it to problems in diverse fields were found that greatly improve solution
quality and time performance over all past methods. For large classes of problems they are considered the strongest algorithmic technique known. Relevant to us is the very recent growing set of
applications of constant-degree SoS algorithms to machine learning problems, such as [24, 25, 26].
The survey [27] contains some of these exciting developments. Section 2.1 contains some selfcontained material about the general framework SoS algorithms as well.
Given their power, it was natural to consider proving lower bounds on what SoS algorithms can do.
There has been impressive progress on SoS degree lower bounds (via beautiful techniques) for
a variety of combinatorial optimization problems [28, 12, 29, 30]. However, for machine learning
problems relatively few such lower bounds (above SDP level) are known [26, 31] and follow via
reductions to the above bounds. So it is interesting to enrich the set of techniques for proving such
limits on the power of SoS for ML. The lower bound we prove indeed seems to follow a different
route than previous such proofs.
1.3 Sparse PCA
Sparse principal component analysis, the version of the classical PCA problem which assumes that
the direction of variance of the data has a sparse structure, is by now a central problem of high-dimensional statistical analysis. In this paper we focus on the single-spiked covariance model introduced by Johnstone [32]. One observes n samples from a p-dimensional Gaussian distribution with covariance Σ = λvv^T + I, where the planted vector v is assumed to be a unit-norm sparse vector with at most k non-zero entries, and λ > 0 represents the strength of the signal. The task is to
find (or estimate) the sparse vector v. More general versions of the problem allow several sparse
directions/components and general covariance matrix [33, 34]. Sparse PCA and its variants have a
wide variety of applications ranging from signal processing to biology: see, e.g., [35, 36, 37, 38].
The hardness of Sparse PCA, at least in the worst case, can be seen through its connection to the
(NP-hard) Clique problem in graphs. Note that if Σ is a {0, 1} adjacency matrix of a graph (with 1's on the diagonal), then it has a k-sparse eigenvector v with eigenvalue k if and only if the graph has
a k-clique. This connection between these two problems is actually deeper, and will appear again
below, for our real, average case version above.
From a theoretical point of view, Sparse PCA is one of the simplest examples where we observe a
gap between the number of samples needed information theoretically and the number of samples
needed for a polynomial time estimator: It has been well understood [39, 40, 41] that information
theoretically, given n = O(k log p) samples (see footnote 1), one can estimate v up to constant error (in Euclidean
norm), using a non-convex (therefore not polynomial time) optimization algorithm. On the other
hand, all the existing provable polynomial time algorithms [36, 42, 34, 43], which use either diagonal thresholding (for the single spiked model) or semidefinite programming (for general covariance),
first introduced for this problem in [44], need at least quadratically many samples to solve the problem, namely n = O(k²). Moreover, Krauthgamer, Nadler and Vilenchik [45] and Berthet and
Rigollet [41] have shown that for semi-definite programs (SDP) this bound is tight. Specifically,
the natural SDP cannot even solve the detection problem: to distinguish the data from covariance Σ = λvv^T + I from the null hypothesis in which no sparse vector is planted, namely the n samples are drawn from the Gaussian distribution with covariance matrix I.
[Footnote 1: We treat λ as a constant, so that we omit the dependence on it for simplicity throughout the introduction section.]
Recall that the natural SDP for this problem (and many others) is just the first level of the SoS
hierarchy, namely degree-2. Given the importance of the Sparse PCA, it is an intriguing question
whether one can solve it efficiently with far fewer samples by allowing degree-d SoS algorithms with
larger d. A very interesting conditional negative answer was suggested by Berthet and Rigollet [41].
They gave an efficient reduction from the Planted Clique problem (see footnote 2) to Sparse PCA, which shows in
particular that degree-d SoS algorithms for Sparse PCA will imply similar ones for Planted Clique.
Gao, Ma and Zhou [9] strengthen the result by establishing the hardness of the Gaussian single-spiked covariance model, which is an interesting subset of the models considered by [5]. These are
useful as nontrivial constant-degree SoS lower bounds for Planted Clique were recently proved
by [30, 46] (see there for the precise description, history and motivation for Planted Clique). As [41,
9] argue, strong yet believed bounds, if true, would imply that the quadratic gap is tight for any
constant d. Before the submission of this paper, the known lower bounds above for planted clique
were not strong enough yet to yield any lower bound for Sparse PCA beyond the minimax sample
complexity. We also note that the recent progress [47, 48] that show the tight lower bounds for
planted clique, together with the reductions of [5, 9], also imply the tight lower bounds for Sparse
PCA, as shown in this paper.
1.4 Our contribution
We give a direct, unconditional lower bound proof for computing Sparse PCA using degree-4 SoS algorithms, showing that they too require n = Ω̃(k²) samples to solve the detection problem (Theorem 3.1), which is tight up to polylogarithmic factors when the strength of the signal λ is a constant. Indeed the theorem gives a lower bound for every strength λ, which becomes weaker as λ gets larger.
Our proof proceeds by constructing the necessary pseudo-moments for the SoS program that achieve
too high an objective value (in the jargon of optimization, we prove an "integrality gap" for these
programs). As usual in such proofs, there is tension between having the pseudo-moments satisfy the
constraints of the program and keeping them positive semidefinite (PSD). Differing from past lower
bound proofs, we construct two different PSD moments, each approximately satisfying one set of constraints in the program and negligible on the rest. Thus, their sum gives PSD moments which
approximately satisfy all constraints. We then perturb these moments to satisfy constraints exactly,
and show that with high probability over the random data, this perturbation leaves the moments
PSD.
We note several features of our lower bound proof which make the result particularly strong and general. First, it applies not only for the Gaussian distribution, but also for Bernoulli and other distributions. Indeed, we give a set of natural (pseudorandomness) conditions on the sampled data vectors under which the SoS algorithm is "fooled", and show that these conditions are satisfied with high probability under many similar distributions (possessing strong concentration of measure). Next, our lower bound holds even if the hidden sparse vector is discrete, namely its entries come from the set {0, ±1/√k}. We also extend the lower bound for the detection problem to apply also
to the estimation problem, in the regime when the ambient dimension is linear in the number of
samples, namely n ≤ p ≤ Bn for a constant B.
Organization: Section 2 provides more background on sparse PCA and SoS algorithms. We state our main results in Section 3. A complete paper is available as supplementary material or on arXiv.
2 Formal description of the model and problem
Notation: We will assume that n, k, p are all sufficiently large (see footnote 3), and that n ≤ p. Throughout this paper, by "with high probability some event happens", we mean the failure probability is bounded by p^{−c} for every constant c, as p tends to infinity.
Sparse PCA estimation and detection problems We will consider the simplest setting of sparse PCA, which is called the single-spiked covariance model in the literature [32] (note that restricting to a special case makes our lower bound hold in all generalizations of this simple model). In this model, the task is to recover a single sparse vector from noisy samples as follows. The "hidden data" is an unknown k-sparse vector v ∈ R^p with |v|₀ = k and ‖v‖ = 1. To make the task easier (and so the lower bound stronger), we even assume that v has discrete entries, namely that v_i ∈ {0, ±1/√k} for all i ∈ [p]. We observe n noisy samples X^1, ..., X^n ∈ R^p that are generated as follows. Each X^j is independently drawn as X^j = √λ g^j v + ξ^j from a distribution which generalizes both Gaussian and Bernoulli noise to v. Namely, the g^j's are i.i.d. real random variables with mean 0 and variance 1, and the ξ^j's are i.i.d. random vectors which have independent entries with mean zero and variance 1. Therefore under this model, the covariance of X^j is equal to λvv^T + I. Moreover, we assume that g^j and the entries of ξ^j are sub-gaussian (see footnote 4) with variance proxy O(1). Given these samples, the estimation problem is to approximate the unknown sparse vector v (up to sign flip).
[Footnote 2: An average-case version of the Clique problem in which the input is a random graph in which a much larger than expected clique is planted.]
[Footnote 3: Or we assume that they go to infinity, as typically done in statistics.]
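For concreteness, here is a minimal Python sketch of sampling from this model (our own illustrative code; the names and the row-major layout are assumptions, not from the paper):

```python
import numpy as np

def sample_spiked(n, p, k, lam, rng=np.random.default_rng(0)):
    """Draw n samples X^j = sqrt(lam) * g_j * v + xi_j, with a planted
    k-sparse v whose nonzero entries are +/- 1/sqrt(k)."""
    v = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
    g = rng.standard_normal(n)              # i.i.d. signal coefficients
    xi = rng.standard_normal((n, p))        # i.i.d. isotropic noise vectors
    X = np.sqrt(lam) * np.outer(g, v) + xi  # row j is the sample X^j
    return X, v

# With rows as samples, the empirical covariance is X.T @ X / n.
```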
It is also interesting to consider the sparse component detection problem [41, 5], which is the decision problem of distinguishing from random samples the following two distributions:
H_0: data X^j = ξ^j is purely random;
H_v: data X^j = ξ^j + √λ g^j v contains a hidden sparse signal with strength λ.
Rigollet [49] observed that a polynomial time algorithm for the estimation version of sparse PCA with constant error implies an algorithm for the detection problem with twice the number of samples. Thus, for polynomial time lower bounds, it suffices to consider the detection problem. We will use X as a shorthand for the p × n matrix [X^1, ..., X^n]. We denote the rows of X as X_1^T, ..., X_p^T; therefore the X_i's are n-dimensional column vectors. The empirical covariance matrix is defined as Σ̂ = (1/n) X X^T.
Statistically optimal estimator/detector It is well known that the following non-convex program achieves the optimal statistical minimax rate for the estimation problem and the optimal sample complexity for the detection problem. Note that we scale the variables x up by a factor of √k for simplicity (the hidden vector now has entries from {0, ±1}).
λ^k_max(Σ̂) = max  (1/k) ⟨Σ̂, xx^T⟩    (2.1)
subject to  ‖x‖₂² = k,  ‖x‖₀ = k    (2.2)
Proposition 2.1 ([42], [41], [39], informally stated). The non-convex program (2.1) statistically optimally solves the sparse PCA problem when n ≥ Ck log p / λ² for some sufficiently large C. Namely, the following hold with high probability. If X is generated from H_v, then the optimal solution x_opt of program (2.1) satisfies ‖(1/k) x_opt x_opt^T − vv^T‖ ≤ 1/3, and the objective value λ^k_max(Σ̂) is at least 1 + 2λ/3. On the other hand, if X is generated from the null hypothesis H_0, then λ^k_max(Σ̂) is at most 1 + λ/3.
Therefore, for the detection problem, one can simply use the test λ^k_max(Σ̂) > 1 + λ/2 to distinguish the case of H_0 and H_v, with n = Θ̃(k/λ²) samples. However, this test is highly inefficient, as the best known ways for computing λ^k_max(Σ̂) take exponential time! We now turn to consider efficient ways of solving this problem.
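To make the exponential-time test above concrete, here is a brute-force evaluation of program (2.1) for tiny instances (our own illustrative code; it enumerates every support and sign pattern, so it is only usable for very small p and k):

```python
import itertools
import numpy as np

def lambda_k_max(Sigma_hat, k):
    """Value of program (2.1): maximize <Sigma_hat, x x^T> / k over x with
    exactly k nonzero entries, each in {+1, -1}.  Exponential time."""
    p = Sigma_hat.shape[0]
    best = -np.inf
    for support in itertools.combinations(range(p), k):
        for signs in itertools.product([-1.0, 1.0], repeat=k):
            x = np.zeros(p)
            x[list(support)] = signs
            best = max(best, x @ Sigma_hat @ x / k)
    return best

# Detection test from Proposition 2.1: report H_v when
# lambda_k_max(Sigma_hat, k) > 1 + lam / 2, and H_0 otherwise.
```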
2.1 Sum of Squares (Lasserre) Relaxations
Here we will only briefly introduce the basic ideas of Sum-of-Squares (Lasserre) relaxation that will
be used for this paper. We refer readers to the extensive [20, 21, 27] for detailed discussions of sum
of squares algorithms and proofs and their applications to algorithm design.
Let R[x]d denote the set of all real polynomials of degree at most d with n variables x1 , . . . , xn .
We start by defining the notion of pseudo-moment (sometimes called pseudo-expectation). The
intuition is that these pseudo-moments behave like the actual first d moments of a real probability
distribution.
[Footnote 4: A real random variable X is subgaussian with variance proxy σ² if it has similar tail behavior as a gaussian distribution with variance σ². More formally, if for any t ∈ R, E[exp(tX)] ≤ exp(t²σ²/2).]
Definition 2.2 (pseudo-moment). A degree-d pseudo-moment M is a linear operator that maps R[x]_d to R and satisfies M(1) = 1 and M(p²(x)) ≥ 0 for all real polynomials p(x) of degree at most d/2.

For a multi-set S ⊆ [n], we use x_S to denote the monomial ∏_{i∈S} x_i. Since M is a linear operator, it is clearly described by its values on the monomials of degree at most d; that is, the values M(x_S) for multi-sets S of size at most d uniquely determine M. Moreover, the nonnegativity constraint M(p(x)²) ≥ 0 is equivalent to the positive semidefiniteness of the matrix-form (as defined below), and therefore the set of all pseudo-moments is convex.
Definition 2.3 (matrix-form). For an even integer d and any degree-d pseudo-moment M, we define the matrix-form of M as the natural way of viewing all the values of M on monomials as a matrix: we use mat(M) to denote the matrix that is indexed by multi-subsets S of [n] of size at most d/2, with mat(M)_{S,T} = M(x_S x_T).
Given polynomials p(x) and q_1(x), . . . , q_m(x) of degree at most d, consider the polynomial program

Maximize p(x)
Subject to q_i(x) = 0, ∀i ∈ [m].   (2.3)
We can write a sum of squares based relaxation in the following way. Instead of searching over x ∈ R^n, we search over all the possible "pseudo-moments" M of a hypothetical distribution over solutions x that satisfy the constraints above. The key to the relaxation is to consider only moments up to degree d. Concretely, we have the following semidefinite program in roughly n^d variables:
Variables: M(x_S), for all S with |S| ≤ d
Maximize M(p(x))   (2.4)
Subject to M(q_i(x) x^K) = 0, ∀i, K with |K| + deg(q_i) ≤ d
           mat(M) ⪰ 0
Note that (2.4) is a valid relaxation because for any solution x* of (2.3), if we define M(x_S) to be the value of the monomial x_S at x*, then M satisfies all the constraints and the objective value is p(x*). Therefore it is guaranteed that the optimal value of (2.4) is always at least that of (2.3).
Finally, the key point is that this program can be solved efficiently, in polynomial time in its size, namely in time n^{O(d)}. As d grows, the added constraints make the "pseudo-distribution" defined by the moments closer and closer to an actual distribution, thus providing a tighter relaxation, at the cost of a larger running time to solve it. In the next section we apply this relaxation to the Sparse PCA problem and state our results.
3 Main Results

To exploit the sum of squares relaxation framework described in Section 2.1, we first convert the statistically optimal estimator/detector (2.1) into the "polynomial" program version below:
Maximize ⟨Σ̂, xx^T⟩   (3.1)
subject to ‖x‖₂² = k, and x_i³ = x_i, ∀i ∈ [p]   (3.2 & 3.3)
           |x|₁ ≤ k   (3.4)
The non-convex sparsity constraint (2.2) is replaced by the polynomial constraint (3.3), which ensures that any solution vector x has entries in {0, ±1}, and so together with the constraint (3.2) guarantees that it has precisely k non-zero ±1 entries. The constraint (3.3) implies other natural constraints that one may add to the program in order to make it stronger: for example, an upper bound on each entry x_i, a lower bound on the non-zero entries of x, and the constraint ‖x‖₄⁴ ≤ k which is used as a surrogate for k-sparse vectors in [25, 24]. Note that we also added an ℓ₁ sparsity constraint (3.4) (which is convex), as is often used in practice; it makes our lower bound even stronger. Of course, it is formally implied by the other constraints, but not in low-degree SoS.
Now we are ready to apply the sum-of-squares relaxation scheme described in Section 2.1 to the polynomial program above. For the degree-4 relaxation we obtain the following semidefinite program SoS4(Σ̂), which we view as an algorithm for both the detection and estimation problems. Note that the same objective function, with only the three constraints (C1&2) and (C6), gives the degree-2 relaxation, which is precisely the standard SDP relaxation of Sparse PCA studied in [42, 41, 45]. So clearly SoS4(Σ̂) subsumes the SDP relaxation.
Algorithm 1 SoS4(Σ̂): Degree-4 Sum of Squares Relaxation
Solve the following SDP and obtain the optimal objective value SoS4(Σ̂) and maximizer M*.
Variables: M(S), for all multi-sets S of size at most 4.

SoS4(Σ̂) = max Σ_{i,j} M(x_i x_j) Σ̂_{ij}   (Obj)

subject to
  Σ_{i∈[p]} M(x_i²) = k and Σ_{i,j∈[p]} |M(x_i x_j)| ≤ k²   (C1&2)
  M(x_i³ x_j) = M(x_i x_j) and Σ_{ℓ∈[p]} M(x_ℓ² x_i x_j) = k·M(x_i x_j), ∀i, j ∈ [p]   (C3&4)
  Σ_{i,j,s,t∈[p]} |M(x_i x_j x_s x_t)| ≤ k⁴ and M ⪰ 0   (C5&6)

Output: 1. For the detection problem: output H_v if SoS4(Σ̂) > (1 + ½λ)k, H_0 otherwise.
        2. For the estimation problem: output M₂* = (M*(x_i x_j))_{i,j∈[p]}.
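For comparison, the degree-2 relaxation just mentioned (objective (Obj) with only the constraints (C1&2) and (C6)) is a small semidefinite program over the p × p matrix M₂ = (M(x_i x_j)). The sketch below states it with cvxpy; the solver choice and names are our own, and this is an illustration rather than reference code.

```python
import cvxpy as cp

def sos2_value(Sigma_hat, k):
    # Degree-2 relaxation: maximize <Sigma_hat, M> subject to trace(M) = k,
    # sum_ij |M_ij| <= k^2, and M PSD -- the standard sparse PCA SDP.
    p = Sigma_hat.shape[0]
    M = cp.Variable((p, p), PSD=True)          # plays the role of (M(x_i x_j))_{i,j}
    constraints = [
        cp.trace(M) == k,                      # (C1): sum_i M(x_i^2) = k
        cp.sum(cp.abs(M)) <= k ** 2,           # (C2): sum_ij |M(x_i x_j)| <= k^2
    ]
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(Sigma_hat, M))), constraints)
    prob.solve()
    return prob.value                          # compare against (1 + lam/2) * k
```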
Before stating the lower bounds for both detection and estimation in the next two subsections, we comment on the choices made for the outputs of the algorithm in both, as clearly other choices can be made that would be interesting to investigate. For detection, we pick the natural threshold (1 + ½λ)k from the statistically optimal detection algorithm of Section 2. Our lower bound on the objective under H_0 is actually a large constant multiple of λk, so we could have taken a higher threshold. To analyze even higher ones would require analyzing the behavior of SoS4 under the (planted) alternative distribution H_v. For estimation we output the maximizer M₂* of the objective function, and prove that it is not too correlated with the rank-1 matrix vv^T in the planted distribution H_v. This suggests, but does not prove, that the leading eigenvector of M₂* (which is a natural estimator for v) is not too correlated with v. We finally note that Rigollet's efficient reduction from detection to estimation is not in the SoS framework, and so our detection lower bound does not automatically imply the one for estimation.
For the detection problem, we prove that SoS4(Σ̂) gives a large objective value under the null hypothesis H_0.

Theorem 3.1. There exist absolute constants C and r such that for 1 ≤ λ < min{k^{1/4}, √n} and any p ≥ Cλn, k ≥ Cλ^{7/6}√n log^r p, the following holds. When the data X is drawn from the null hypothesis H_0, then with high probability (1 − p^{−10}), the objective value of the degree-4 sum of squares relaxation SoS4(Σ̂) is at least 10λk. Consequently, Algorithm 1 cannot solve the detection problem.
To parse the theorem and to understand its consequences, consider first the case when λ is a constant (which is also arguably the most interesting regime). Then the theorem says that when we have only n ≪ k² samples, the degree-4 SoS relaxation SoS4 still overfits heavily to the randomness of the data X under the null hypothesis. Therefore, using SoS4(Σ̂) > (1 + λ/2)k (or even 10λk) as a threshold will fail with high probability to distinguish H_0 and H_v.
We note that for constant λ our result is essentially tight in terms of the dependencies between n, k, p. The condition p = Ω̃(n) is necessary, since otherwise when p = o(n), even without the sum of squares relaxation, the objective value is controlled by (1 + o(1))k, since Σ̂ has maximum eigenvalue 1 + o(1) in this regime. Furthermore, as mentioned in the introduction, k ≥ Ω̃(√n) is also necessary (up to poly-logarithmic factors), since when n ≫ k², a simple diagonal thresholding algorithm works for this simple single-spike model.
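For contrast, the diagonal thresholding algorithm alluded to above admits a very short sketch (our own illustration; the threshold constant is an arbitrary choice): under H_v the support coordinates have population variance 1 + λ/k, which stands out once n ≫ k², up to log factors.

```python
import numpy as np

def diagonal_thresholding_detect(X, k, lam):
    # X is p x n; the diagonal of (1/n) X X^T estimates per-coordinate variances.
    p, n = X.shape
    variances = np.einsum('ij,ij->i', X, X) / n
    # Null coordinates concentrate around 1 at scale ~ sqrt(log(p)/n).
    threshold = 1 + lam / (2 * k)             # illustrative choice of threshold
    return np.sort(variances)[-k:].min() > threshold  # True -> declare H_v
```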
When λ is not considered a constant, the dependence of the lower bound on λ is not optimal, but close. Ideally one could expect that as long as k ≥ √n and p ≥ λn, the objective value on the null hypothesis is at least Ω(λk). Tightening the λ^{1/6} slack, and possibly extending the range of λ, are left to future study. Finally, we note that the result can be extended to a lower bound for the estimation problem, which is presented in the supplementary material.
References
[1] Venkat Chandrasekaran and Michael I. Jordan. Computational and statistical tradeoffs via convex relaxation. Proceedings of the National Academy of Sciences, 110(13):E1181?E1190, 2013.
[2] IM Johnstone. Function estimation and gaussian sequence models. Unpublished manuscript, 2002.
[3] D. L. Donoho. De-noising by soft-thresholding. IEEE Trans. Inf. Theor., 41(3):613?627, May 1995.
[4] David L. Donoho and Iain M. Johnstone. Minimax estimation via wavelet shrinkage. Ann. Statist.,
26(3):879?921, 06 1998.
[5] Quentin Berthet and Philippe Rigollet. Complexity theoretic lower bounds for sparse principal component
detection. In COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton
University, NJ, USA, pages 1046?1066, 2013.
[6] Scott Decatur, Oded Goldreich, and Dana Ron. Computational sample complexity. In Proceedings of
the Tenth Annual Conference on Computational Learning Theory, COLT ?97, pages 130?142, New York,
NY, USA, 1997. ACM.
[7] Rocco A. Servedio. Computational sample complexity and attribute-efficient learning. Journal of Computer and System Sciences, 60(1):161 ? 178, 2000.
[8] Amit Daniely, Nati Linial, and Shai Shalev-Shwartz. More data speeds up training time in learning
halfspaces over sparse vectors. In Christopher J. C. Burges, L?eon Bottou, Zoubin Ghahramani, and
Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 26: 27th Annual
Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December
5-8, 2013, Lake Tahoe, Nevada, United States., pages 145?153, 2013.
[9] C. Gao, Z. Ma, and H. H. Zhou. Sparse CCA: Adaptive Estimation and Computational Barriers. ArXiv
e-prints, September 2014.
[10] Jean B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on
Optimization, 11(3):796?817, 2001.
[11] Pablo A. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness
and Optimization. PhD thesis, California Institute of Technology, 2000.
[12] Dima Grigoriev. Linear lower bound on degrees of positivstellensatz calculus proofs for the parity. Theoretical Computer Science, 259(1):613?622, 2001.
[13] Emil Artin. Über die Zerlegung definiter Funktionen in Quadrate. In Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, volume 5, pages 100–115. Springer, 1927.
[14] Jean-Louis Krivine. Anneaux préordonnés. Journal d'analyse mathématique, 1964.
[15] Gilbert Stengle. A nullstellensatz and a positivstellensatz in semialgebraic geometry. Mathematische
Annalen, 207(2):87?97, 1974.
[16] N.Z. Shor. An approach to obtaining global extremums in polynomial mathematical programming problems. Cybernetics, 23(5):695?700, 1987.
[17] Konrad Schmüdgen. The K-moment problem for compact semi-algebraic sets. Mathematische Annalen, 289(1):203–206, 1991.
[18] Mihai Putinar. Positive polynomials on compact semi-algebraic sets. Indiana University Mathematics
Journal, 42(3):969?984, 1993.
[19] Yurii Nesterov. Squared functional systems and optimization problems. In Hans Frenk, Kees Roos,
Tamás Terlaky, and Shuzhong Zhang, editors, High Performance Optimization, volume 33 of Applied
Optimization, pages 405?440. Springer US, 2000.
[20] Jean Bernard Lasserre. An introduction to polynomial and semi-algebraic optimization. Cambridge Texts
in Applied Mathematics. Cambridge: Cambridge University Press. , 2015.
[21] Monique Laurent. Sums of squares, moment matrices and optimization over polynomials. In Mihai
Putinar and Seth Sullivant, editors, Emerging Applications of Algebraic Geometry, volume 149 of The
IMA Volumes in Mathematics and its Applications, pages 157?270. Springer New York, 2009.
[22] Hanif D. Sherali and Warren P. Adams. A hierarchy of relaxations between the continuous and convex hull
representations for zero-one programming problems. SIAM Journal on Discrete Mathematics, 3(3):411?
430, 1990.
[23] L. Lovász and A. Schrijver. Cones of matrices and set-functions and 0–1 optimization. SIAM Journal on Optimization, 1(2):166–190, 1991.
[24] Boaz Barak, Jonathan A. Kelner, and David Steurer. Dictionary learning and tensor decomposition via
the sum-of-squares method. In Proceedings of the Forty-seventh Annual ACM Symposium on Theory of
Computing, STOC ?15, 2015.
[25] Boaz Barak, Jonathan A. Kelner, and David Steurer. Rounding sum-of-squares relaxations. In STOC,
pages 31?40, 2014.
[26] Boaz Barak and Ankur Moitra. Tensor prediction, rademacher complexity and random 3-xor. CoRR,
abs/1501.06521, 2015.
[27] Boaz Barak and David Steurer. Sum-of-squares proofs and the quest toward optimal algorithms. In
Proceedings of International Congress of Mathematicians (ICM), 2014. To appear.
[28] D. Grigoriev. Complexity of positivstellensatz proofs for the knapsack. computational complexity,
10(2):139?154, 2001.
[29] Grant Schoenebeck. Linear level lasserre lower bounds for certain k-csps. In Proceedings of the 2008 49th
Annual IEEE Symposium on Foundations of Computer Science, FOCS ?08, pages 593?602, Washington,
DC, USA, 2008. IEEE Computer Society.
[30] Raghu Meka, Aaron Potechin, and Avi Wigderson. Sum-of-squares lower bounds for planted clique.
CoRR, abs/1503.06447, 2015.
[31] Z. Wang, Q. Gu, and H. Liu. Statistical Limits of Convex Relaxations. ArXiv e-prints, March 2015.
[32] Iain M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. Ann.
Statist., 29(2):295?327, 04 2001.
[33] Zongming Ma. Sparse principal component analysis and iterative thresholding. Ann. Statist., 41(2):772?
801, 04 2013.
[34] Vincent Q. Vu and Jing Lei. Minimax sparse principal subspace estimation in high dimensions. Ann.
Statist., 41(6):2905?2947, 12 2013.
[35] U. Alon, N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine. Broad patterns
of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences, 96(12):6745?6750, 1999.
[36] Iain M. Johnstone and Arthur Yu Lu. On consistency and sparsity for principal components analysis in
high dimensions. Journal of the American Statistical Association, 104(486):pp. 682?703, 2009.
[37] Xi Chen. Adaptive elastic-net sparse principal component analysis for pathway association testing. Statistical Applications in Genetics and Molecular Biology, 10, 2011.
[38] Rodolphe Jenatton, Guillaume Obozinski, and Francis R. Bach. Structured sparse principal component
analysis. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, pages 366?373, 2010.
[39] Vincent Q. Vu and Jing Lei. Minimax rates of estimation for sparse PCA in high dimensions. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2012, La
Palma, Canary Islands, April 21-23, 2012, pages 1278?1286, 2012.
[40] Debashis Paul and Iain M Johnstone. Augmented sparse principal component analysis for high dimensional data. arXiv preprint arXiv:1202.1242, 2012.
[41] Quentin Berthet and Philippe Rigollet. Optimal detection of sparse principal components in high dimension. The Annals of Statistics, 41(4):1780?1815, 2013.
[42] Arash A. Amini and Martin J. Wainwright. High-dimensional analysis of semidefinite relaxations for
sparse principal components. Ann. Statist., 37(5B):2877?2921, 10 2009.
[43] Yash Deshpande and Andrea Montanari. Sparse PCA via covariance thresholding. In Advances in Neural
Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014,
December 8-13 2014, Montreal, Quebec, Canada, pages 334?342, 2014.
[44] Alexandre d?Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert R. G. Lanckriet. A direct
formulation for sparse pca using semidefinite programming. SIAM Review, 49(3):434?448, 2007.
[45] Robert Krauthgamer, Boaz Nadler, and Dan Vilenchik. Do semidefinite relaxations solve sparse pca up
to the information limit? The Annals of Statistics, 43(3):1300?1322, 2015.
[46] Y. Deshpande and A. Montanari. Improved Sum-of-Squares Lower Bounds for Hidden Clique and Hidden
Submatrix Problems. ArXiv e-prints, February 2015.
[47] Prasad Raghavendra and Tselil Schramm. Tight lower bounds for planted clique in the degree-4 SOS
program. CoRR, abs/1507.05136, 2015.
[48] Samuel B. Hopkins, Pravesh K. Kothari, and Aaron Potechin. Sos and planted clique: Tight analysis of
MPW moments at all degrees and an optimal lower bound at degree four. CoRR, abs/1507.05230, 2015.
[49] Tengyu Ma and Philippe Rigollet. personal communication, 2014.
Online Gradient Boosting
Alina Beygelzimer
Yahoo Labs
New York, NY 10036
beygel@yahoo-inc.com
Elad Hazan
Princeton University
Princeton, NJ 08540
ehazan@cs.princeton.edu
Satyen Kale
Yahoo Labs
New York, NY 10036
satyen@yahoo-inc.com
Haipeng Luo
Princeton University
Princeton, NJ 08540
haipengl@cs.princeton.edu
Abstract
We extend the theory of boosting for regression problems to the online
learning setting. Generalizing from the batch setting for boosting, the notion of a weak learning algorithm is modeled as an online learning algorithm
with linear loss functions that competes with a base class of regression functions, while a strong learning algorithm is an online learning algorithm with
smooth convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm
that converts a weak online learning algorithm into a strong one where the
larger class of functions is the linear span of the base class. We also give a
simpler boosting algorithm that converts a weak online learning algorithm
into a strong one where the larger class of functions is the convex hull of
the base class, and prove its optimality.
1 Introduction
Boosting algorithms [21] are ensemble methods that convert a learning algorithm for a base
class of models with weak predictive power, such as decision trees, into a learning algorithm
for a class of models with stronger predictive power, such as a weighted majority vote over
base models in the case of classification, or a linear combination of base models in the case
of regression.
Boosting methods such as AdaBoost [9] and Gradient Boosting [10] have found tremendous
practical application, especially using decision trees as the base class of models. These
algorithms were developed in the batch setting, where training is done over a fixed batch of
sample data. However, with the recent explosion of huge data sets which do not fit in main
memory, training in the batch setting is infeasible, and online learning techniques which
train a model in one pass over the data have proven extremely useful.
A natural goal therefore is to extend boosting algorithms to the online learning setting.
Indeed, there has already been some work on online boosting for classification problems [20,
11, 17, 12, 4, 5, 2]. Of these, the work by Chen et al. [4] provided the first theoretical study
of online boosting for classification, which was later generalized by Beygelzimer et al. [2] to
obtain optimal and adaptive online boosting algorithms.
However, extending boosting algorithms for regression to the online setting has been elusive
and escaped theoretical guarantees thus far. In this paper, we rigorously formalize the
setting of online boosting for regression and then extend the very commonly used gradient
boosting methods [10, 19] to the online setting, providing theoretical guarantees on their
performance.
The main result of this paper is an online boosting algorithm that competes with any linear combination of the base functions, given an online linear learning algorithm over the base class. This algorithm is the online analogue of the batch boosting algorithm of Zhang
and Yu [24], and in fact our algorithmic technique, when specialized to the batch boosting
setting, provides exponentially better convergence guarantees.
We also give an online boosting algorithm that competes with the best convex combination
of base functions. This is a simpler algorithm which is analyzed along the lines of the Frank-Wolfe algorithm [8]. While the algorithm has weaker theoretical guarantees, it can still be
useful in practice. We also prove that this algorithm obtains the optimal regret bound (up
to constant factors) for this setting.
Finally, we conduct some proof-of-concept experiments which show that our online boosting
algorithms do obtain performance improvements over different classes of base learners.
1.1 Related Work
While the theory of boosting for classification in the batch setting is well-developed (see
[21]), the theory of boosting for regression is comparatively sparse. The foundational theory
of boosting for regression can be found in the statistics literature [14, 13], where boosting
is understood as a greedy stagewise algorithm for fitting of additive models. The goal is to
achieve the performance of linear combinations of base models, and to prove convergence to
the performance of the best such linear combination.
While the earliest works on boosting for regression such as [10] do not have such convergence
proofs, later works such as [19, 6] do have convergence proofs but without a bound on the
speed of convergence. Bounds on the speed of convergence have been obtained by Du?y
and Helmbold [7] relying on a somewhat strong assumption on the performance of the base
learning algorithm. A di?erent approach to boosting for regression was taken by Freund and
Schapire [9], who give an algorithm that reduces the regression problem to classification and
then applies AdaBoost; the corresponding proof of convergence relies on an assumption on
the induced classification problem which may be hard to satisfy in practice. The strongest
result is that of Zhang and Yu [24], who prove convergence to the performance of the best
linear combination of base functions, along with a bound on the rate of convergence, making
essentially no assumptions on the performance of the base learning algorithm. Telgarsky [22]
proves similar results for logistic (or similar) loss using a slightly simpler boosting algorithm.
The results in this paper are a generalization of the results of Zhang and Yu [24] to the online
setting. However, we emphasize that this generalization is nontrivial and requires different
algorithmic ideas and proof techniques. Indeed, we were not able to directly generalize
the analysis in [24] by simply adapting the techniques used in recent online boosting work
[4, 2], but we made use of the classical Frank-Wolfe algorithm [8]. On the other hand, while
an important part of the convergence analysis for the batch setting is to show statistical
consistency of the algorithms [24, 1, 22], in the online setting we only need to study the
empirical convergence (that is, the regret), which makes our analysis much more concise.
2 Setup

Examples are chosen from a feature space X, and the prediction space is R^d. Let ‖·‖ denote some norm on R^d. In the setting of online regression, in each round t, for t = 1, 2, . . . , T, an adversary selects an example x_t ∈ X and a loss function ℓ_t : R^d → R, and presents x_t to the online learner. The online learner outputs a prediction y_t ∈ R^d, obtains the loss function ℓ_t, and incurs loss ℓ_t(y_t).

Let F denote a reference class of regression functions f : X → R^d, and let C denote a class of loss functions ℓ : R^d → R. Also, let R : N → R₊ be a non-decreasing function. We say that the function class F is online learnable for losses in C with regret R if there is an online learning algorithm A that, for every T ∈ N and every sequence (x_t, ℓ_t) ∈ X × C for
t = 1, 2, . . . , T chosen by the adversary, generates predictions¹ A(x_t) ∈ R^d such that

Σ_{t=1}^T ℓ_t(A(x_t)) ≤ inf_{f∈F} Σ_{t=1}^T ℓ_t(f(x_t)) + R(T).   (1)
If the online learning algorithm is randomized, we require the above bound to hold with
high probability.
The above definition is simply the online generalization of standard empirical risk minimization (ERM) in the batch setting. A concrete example is 1-dimensional regression, i.e. the prediction space is R. For a labeled data point (x, y*) ∈ X × R, the loss for the prediction y ∈ R is given by ℓ(y*, y), where ℓ(·, ·) is a fixed loss function that is convex in the second argument (such as squared loss, logistic loss, etc.). Given a batch of T labeled data points {(x_t, y_t*) | t = 1, 2, . . . , T} and a base class of regression functions F (say, the set of bounded-norm linear regressors), an ERM algorithm finds the function f ∈ F that minimizes Σ_{t=1}^T ℓ(y_t*, f(x_t)).
In the online setting, the adversary reveals the data (x_t, y_t*) in an online fashion, only presenting the true label y_t* after the online learner A has chosen a prediction y_t. Thus, setting ℓ_t(y_t) = ℓ(y_t*, y_t), we observe that if A satisfies the regret bound (1), then it makes predictions with total loss almost as small as that of the empirical risk minimizer, up to the regret term. If F is the set of all bounded-norm linear regressors, for example, the algorithm A could be online gradient descent [25] or online Newton Step [16].
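For concreteness, here is a minimal sketch of such a base online linear learner, online projected gradient descent over bounded-norm linear regressors (our own illustration; the learning rate and clipping are simplistic choices, not prescribed by the text):

```python
import numpy as np

class OGDLinearLearner:
    """Online gradient descent over {x -> <w, x> : ||w|| <= D}; with linear
    losses l_t(y) = a_t * y this attains O(sqrt(T)) regret [25]."""
    def __init__(self, dim, D, lr=0.1):
        self.w = np.zeros(dim)
        self.D = D
        self.lr = lr
        self.last_x = None

    def predict(self, x):
        self.last_x = x
        # Keep the prediction bounded by D, matching ||A(x_t)|| <= D.
        return float(np.clip(self.w @ x, -self.D, self.D))

    def update(self, a):
        # Linear loss feedback l(y) = a * y, so the gradient in w is a * x.
        self.w -= self.lr * a * self.last_x
        norm = np.linalg.norm(self.w)
        if norm > self.D:                    # project back onto the D-ball
            self.w *= self.D / norm
```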
At a high level, in the batch setting, "boosting" is understood as a procedure that, given a batch of data and access to an ERM algorithm for a function class F (this is called a "weak" learner), obtains an approximate ERM algorithm for a richer function class F′ (this is called a "strong" learner). Generally, F′ is the set of finite linear combinations of functions in F. The efficiency of boosting is measured by how many times, N, the base ERM algorithm needs to be called (i.e., the number of boosting steps) to obtain an ERM algorithm for the richer function class within the desired approximation tolerance. Convergence rates [24] give bounds on how quickly the approximation error goes to 0 as N → ∞.

We now extend this notion of boosting to the online setting in the natural manner. To capture the full generality of the techniques, we also specify a class of loss functions that the online learning algorithm can work with. Informally, an online boosting algorithm is a reduction that, given access to an online learning algorithm A for a function class F and loss function class C with regret R, and a bound N on the total number of calls made in each iteration to copies of A, obtains an online learning algorithm A′ for a richer function class F′, a richer loss function class C′, and (possibly larger) regret R′. The bound N on the total number of calls made to all the copies of A corresponds to the number of boosting stages in the batch setting, and in the online setting it may be viewed as a resource constraint on the algorithm. The efficacy of the reduction is measured by R′, which is a function of R, N, and certain parameters of the comparator class F′ and loss function class C′. We desire online boosting algorithms such that (1/T)·R′(T) → 0 quickly as N → ∞ and T → ∞. We make the notions of richness in the above informal description more precise now.
Comparator function classes. A given function class F is said to be D-bounded if for all x ∈ X and all f ∈ F, we have ‖f(x)‖ ≤ D. Throughout this paper, we assume that F is symmetric:² i.e. if f ∈ F, then −f ∈ F; and that it contains the constant zero function, which we denote, with some abuse of notation, by 0.

¹There is a slight abuse of notation here. A(·) is not a function but rather the output of the online learning algorithm A computed on the given example using its internal state.
²This is without loss of generality; as will be seen momentarily, our base assumption only requires an online learning algorithm A for F for linear losses ℓ_t. By running the Hedge algorithm on two copies of A, one of which receives the actual loss functions ℓ_t and the other receives −ℓ_t, we get an algorithm which competes with negations of functions in F and the constant zero function as well. Furthermore, since the loss functions are convex (indeed, linear), this can be made into a deterministic reduction by choosing the convex combination of the outputs of the two copies of A with mixing weights given by the Hedge algorithm.
Given F, we define two richer function classes F′: the convex hull of F, denoted CH(F), is the set of convex combinations of a finite number of functions in F, and the span of F, denoted span(F), is the set of linear combinations of finitely many functions in F. For any f ∈ span(F), define

‖f‖₁ := inf{ max{1, Σ_{g∈S} |w_g|} : f = Σ_{g∈S} w_g g, S ⊆ F, |S| < ∞, w_g ∈ R }.

Since functions in span(F) are not bounded, it is not possible to obtain a uniform regret bound for all functions in span(F); rather, the regret of an online learning algorithm A for span(F) is specified in terms of regret bounds for individual comparator functions f ∈ span(F), viz.

R_f(T) := Σ_{t=1}^T ℓ_t(A(x_t)) − Σ_{t=1}^T ℓ_t(f(x_t)).
Loss function classes. The base loss function class we consider is L, the set of all linear functions ℓ : R^d → R with Lipschitz constant bounded by 1. A function class F that is online learnable with the loss function class L is called online linear learnable for short. The richer loss function class we consider is denoted by C and is a set of convex loss functions ℓ : R^d → R satisfying some regularity conditions, specified in terms of certain parameters described below.
We define a few parameters of the class C. For any b > 0, let B_d(b) = {y ∈ R^d : ‖y‖ ≤ b} be the ball of radius b. The class C is said to have Lipschitz constant L_b on B_d(b) if for all ℓ ∈ C and all y ∈ B_d(b) there is an efficiently computable subgradient ∇ℓ(y) with norm at most L_b. Next, C is said to be β_b-smooth on B_d(b) if for all ℓ ∈ C and all y, y′ ∈ B_d(b) we have

ℓ(y′) ≤ ℓ(y) + ∇ℓ(y) · (y′ − y) + (β_b/2) ‖y′ − y‖².

Next, define the projection operator Π_b : R^d → B_d(b) as Π_b(y) := argmin_{y′∈B_d(b)} ‖y − y′‖, and define η_b := sup_{y∈R^d, ℓ∈C} (ℓ(Π_b(y)) − ℓ(y)) / ‖Π_b(y) − y‖.

3 Online Boosting Algorithms
The setup is that we are given a D-bounded reference class of functions F with an online linear learning algorithm A with regret bound R(·). For normalization, we also assume that the output of A at any time is bounded in norm by D, i.e. ‖A(x_t)‖ ≤ D for all t. We further assume that for every b > 0, we can compute³ a Lipschitz constant L_b, a smoothness parameter β_b, and the parameter η_b for the class C over B_d(b). Furthermore, the online boosting algorithm may make up to N calls per iteration to any copies of A it maintains, for a given budget parameter N.
Given this setup, our main result is an online boosting algorithm, Algorithm 1, competing with span(F). The algorithm maintains N copies of A, denoted A^i, for i = 1, 2, . . . , N. Each copy corresponds to one stage in boosting. When it receives a new example x_t, it passes it to each A^i and obtains their predictions A^i(x_t), which it then combines into a prediction for y_t using a linear combination. At the most basic level, this linear combination is simply the sum of all the predictions scaled by a step size parameter η. Two tweaks are made to this sum in step 8 to facilitate the analysis:

1. While constructing the sum, the partial sum y_t^{i−1} is multiplied by a shrinkage factor (1 − σ_t^i η). This shrinkage term is tuned using an online gradient descent algorithm in step 14. The goal of the tuning is to induce the partial sums y_t^{i−1} to be aligned with a descent direction for the loss functions, as measured by the inner product ∇ℓ_t(y_t^{i−1}) · y_t^{i−1}.
2. The partial sums y_t^i are made to lie in B_d(B), for some parameter B, by using the projection operator Π_B. This is done to ensure that the Lipschitz constant and smoothness of the loss function are suitably bounded.
³It suffices to compute upper bounds on these parameters.
Algorithm 1 Online Gradient Boosting for span(F)
Require: number of weak learners N, step size parameter η ∈ [1/N, 1].
1: Let B = min{ηND, inf{b ≥ D : η β_b b² ≥ η_b D}}.
2: Maintain N copies of the algorithm A, denoted A^i for i = 1, 2, . . . , N.
3: For each i, initialize σ_1^i = 0.
4: for t = 1 to T do
5:   Receive example x_t.
6:   Define y_t^0 = 0.
7:   for i = 1 to N do
8:     Define y_t^i = Π_B((1 − σ_t^i η) y_t^{i−1} + η A^i(x_t)).
9:   end for
10:  Predict y_t = y_t^N.
11:  Obtain loss function ℓ_t and suffer loss ℓ_t(y_t).
12:  for i = 1 to N do
13:    Pass loss function ℓ_t^i(y) = (1/L_B) ∇ℓ_t(y_t^{i−1}) · y to A^i.
14:    Set σ_{t+1}^i = max{min{σ_t^i + α_t ∇ℓ_t(y_t^{i−1}) · y_t^{i−1}, 1}, 0}, where α_t = 1/(L_B B √t).
15:  end for
16: end for
Once the boosting algorithm makes the prediction y_t and obtains the loss function ℓ_t, each A^i is updated using a suitably scaled linear approximation to the loss function at the partial sum y_t^{i−1}, i.e. the linear loss function (1/L_B) ∇ℓ_t(y_t^{i−1}) · y. This forces A^i to produce predictions that are aligned with a descent direction for the loss function.

For lack of space, we provide the analysis of the algorithm in Section B in the supplementary material. The analysis yields the following regret bound for the algorithm:
Theorem 1. Let η ∈ [1/N, 1] be a given parameter. Let B = min{ηND, inf{b ≥ D : η β_b b² ≥ η_b D}}. Algorithm 1 is an online learning algorithm for span(F) and losses in C with the following regret bound for any f ∈ span(F):

R′_f(T) ≤ (1 − η/‖f‖₁)^N Δ₀ + (3η β_B B²/2) ‖f‖₁ T + L_B ‖f‖₁ R(T) + 2 L_B B ‖f‖₁ √T,

where Δ₀ := Σ_{t=1}^T ℓ_t(0) − ℓ_t(f(x_t)).
The regret bound in this theorem depends on several parameters such as B, β_B and L_B. In applications of the algorithm to 1-dimensional regression with commonly used loss functions, however, these parameters are essentially modest constants; see Section 3.1 for calculations of the parameters for various loss functions. Furthermore, if η is appropriately set (e.g. η = (log N)/N), then the average regret R′_f(T)/T clearly converges to 0 as N → ∞ and T → ∞. While the requirement that N → ∞ may raise concerns about computational efficiency, this is in fact analogous to the guarantee in the batch setting: the algorithms converge only when the number of boosting stages goes to infinity. Moreover, our lower bound (Theorem 4) shows that this is indeed necessary.
We also present a simpler boosting algorithm, Algorithm 2, that competes with CH(F).
Algorithm 2 is similar to Algorithm 1, with some simplifications: the final prediction is
simply a convex combination of the predictions of the base learners, with no projections or
shrinkage necessary. While Algorithm 1 is more general, Algorithm 2 may still be useful in
practice when a bound on the norm of the comparator function is known in advance, using
the observations in Section 4.2. Furthermore, its analysis is cleaner and easier to understand
for readers who are familiar with the Frank-Wolfe method, and this serves as a foundation
for the analysis of Algorithm 1. This algorithm has an optimal (up to constant factors)
regret bound as given in the following theorem, proved in Section A in the supplementary
material. The upper bound in this theorem is proved along the lines of the Frank-Wolfe [8]
algorithm, and the lower bound using information-theoretic arguments.
5
Theorem 2. Algorithm 2 is an online learning algorithm for CH(F) for losses in C with
the regret bound
8 D D2
R0 (T ) ?
T + LD R(T ).
N
Furthermore, the dependence of this regret bound on N is optimal up to constant factors.
The dependence of the regret bound on R(T ) is unimprovable without additional assumptions: otherwise, Algorithm 2 will be an online linear learning algorithm over F with better
than R(T ) regret.
Algorithm 2 Online Gradient Boosting for CH(F)
1: Maintain N copies of the algorithm A, denoted A^1, A^2, . . . , A^N, and let η_i = 2/(i+1) for i = 1, 2, . . . , N.
2: for t = 1 to T do
3:   Receive example x_t.
4:   Define y_t^0 = 0.
5:   for i = 1 to N do
6:     Define y_t^i = (1 − η_i) y_t^{i−1} + η_i A^i(x_t).
7:   end for
8:   Predict y_t = y_t^N.
9:   Obtain loss function ℓ_t and suffer loss ℓ_t(y_t).
10:  for i = 1 to N do
11:    Pass loss function ℓ_t^i(y) = (1/L_D) ∇ℓ_t(y_t^{i−1}) · y to A^i.
12:  end for
13: end for
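To make the pseudocode concrete, a compact sketch of Algorithm 2 for 1-dimensional regression with squared loss follows (our own illustration; the base-learner interface matches the OGD sketch above, and L_D should upper bound the gradient magnitude on [−D, D]):

```python
class OnlineBoostCH:
    """Online gradient boosting over CH(F) (Algorithm 2), for d = 1."""
    def __init__(self, learners, L_D):
        self.learners = learners                        # copies A^1, ..., A^N
        self.etas = [2.0 / (i + 2) for i in range(len(learners))]  # eta_i = 2/(i+1)
        self.L_D = L_D
        self.partial = []                               # y_t^0, ..., y_t^{N-1}

    def predict(self, x):
        y, self.partial = 0.0, []
        for eta, A in zip(self.etas, self.learners):
            self.partial.append(y)                      # store y_t^{i-1}
            y = (1 - eta) * y + eta * A.predict(x)      # step 6
        return y                                        # y_t = y_t^N

    def update(self, y_star):
        # Squared loss l_t(y) = (y - y_star)^2, so grad l_t(y) = 2 (y - y_star).
        for y_prev, A in zip(self.partial, self.learners):
            A.update(2.0 * (y_prev - y_star) / self.L_D)  # step 11
```

For instance, booster = OnlineBoostCH([OGDLinearLearner(dim, D=1.0) for _ in range(N)], L_D=4.0) boosts N copies of the OGD learner from the earlier sketch.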
Using a deterministic base online linear learning algorithm. If the base online linear learning algorithm A is deterministic, then our results can be improved, because our online boosting algorithms are also deterministic, and using a standard simple reduction, we can now allow C to be any set of convex functions (smooth or not) with a computable Lipschitz constant L_b over the domain B_d(b) for any b > 0.

This reduction converts arbitrary convex loss functions into linear functions: viz. if y_t is the output of the online boosting algorithm, then the loss function provided to the boosting algorithm as feedback is the linear function ℓ′_t(y) = ∇ℓ_t(y_t) · y. This reduction immediately implies that the base online linear learning algorithm A, when fed the loss functions (1/L_D) ℓ′_t, is already an online learning algorithm for CH(F) with losses in C with the regret bound R′(T) ≤ L_D R(T).

As for competing with span(F), since linear loss functions are 0-smooth, we obtain the following easy corollary of Theorem 1:
Corollary 1. Let η ∈ [1/N, 1] be a given parameter, and set B = ηND. Algorithm 1 is an online learning algorithm for span(F) for losses in C with the following regret bound for any f ∈ span(F):

R′_f(T) ≤ (1 − η/‖f‖₁)^N Δ₀ + L_B ‖f‖₁ R(T) + 2 L_B B ‖f‖₁ √T,

where Δ₀ := Σ_{t=1}^T ℓ_t(0) − ℓ_t(f(x_t)).
3.1 The parameters for several basic loss functions

In this section we consider the application of our results to 1-dimensional regression, where we assume, for normalization, that the true labels of the examples and the predictions of the functions in the class F are in [−1, 1]. In this case ‖·‖ denotes the absolute value norm. Thus, in each round, the adversary chooses a labeled data point (x_t, y_t*) ∈ X × [−1, 1], and the loss for the prediction y_t ∈ [−1, 1] is given by ℓ_t(y_t) = ℓ(y_t*, y_t), where ℓ(·, ·) is a fixed loss function that is convex in the second argument. Note that D = 1 in this setting. We give examples of several such loss functions below, and compute the parameters L_b, β_b and η_b for every b > 0, as well as B from Theorem 1.

1. Linear loss: ℓ(y*, y) = −y*·y. We have L_b = 1, β_b = 0, η_b = 1, and B = ηN.
2. p-norm loss, for some p ≥ 2: ℓ(y*, y) = |y* − y|^p. We have L_b = p(b+1)^{p−1}, β_b = p(p−1)(b+1)^{p−2}, η_b = max{p(1−b)^{p−1}, 0}, and B = 1.
3. Modified least squares: ℓ(y*, y) = ½ max{1 − y*·y, 0}². We have L_b = b + 1, β_b = 1, η_b = max{1 − b, 0}, and B = 1.
4. Logistic loss: ℓ(y*, y) = ln(1 + exp(−y*·y)). We have L_b = exp(b)/(1 + exp(b)), β_b = 1/4, η_b = exp(−b)/(1 + exp(−b)), and B = min{ηN, ln(4/η)}.

4 Variants of the boosting algorithms
Our boosting algorithms and the analysis are considerably flexible: it is easy to modify the algorithms to work with a different (and perhaps more natural) kind of base learner which does greedy fitting, or to incorporate a scaling of the base functions which improves performance. Also, when specialized to the batch setting, our algorithms provide better convergence rates than previous work.
4.1 Fitting to actual loss functions

The choice of an online linear learning algorithm over the base function class in our algorithms was made to ease the analysis. In practice, it is more common to have an online algorithm which produces predictions with comparable accuracy to the best function in hindsight for the actual sequence of loss functions. In particular, a common heuristic in boosting algorithms such as the original gradient boosting algorithm by Friedman [10] or the matching pursuit algorithm of Mallat and Zhang [18] is to build a linear combination of base functions by iteratively augmenting the current linear combination via greedily choosing a base function and a step size for it that minimizes the loss with respect to the residual label. Indeed, the boosting algorithm of Zhang and Yu [24] also uses this kind of greedy fitting algorithm as the base learner.

In the online setting, we can model greedy fitting as follows. We first fix a step size σ ≥ 0 in advance. Then, in each round t, the base learner A receives not only the example x_t, but also an offset y_t⁰ ∈ R^d for the prediction, and produces a prediction A(x_t) ∈ R^d, after which it receives the loss function ℓ_t and suffers loss ℓ_t(y_t⁰ + σA(x_t)). The predictions of A satisfy

Σ_{t=1}^T ℓ_t(y_t⁰ + σA(x_t)) ≤ inf_{f∈F} Σ_{t=1}^T ℓ_t(y_t⁰ + σf(x_t)) + R(T),

where R is the regret. Our algorithms can be made to work with this kind of base learner as well. The details can be found in Section C.1 of the supplementary material.
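A sketch of this greedy-fitting interface for squared loss (our own illustration; the wrapper turns an online learner such as the OGD sketch above into a residual fitter with the fixed step size σ):

```python
class GreedyFitLearner:
    """Receives an offset y0 and is judged on l_t(y0 + sigma * A(x))."""
    def __init__(self, inner, sigma):
        self.inner = inner          # e.g. an OGDLinearLearner
        self.sigma = sigma

    def predict(self, x, y0):
        self.y0 = y0
        self.y_pred = self.inner.predict(x)
        return self.y0 + self.sigma * self.y_pred

    def update(self, y_star):
        # d/dy (y0 + sigma*y - y_star)^2 at y = y_pred: a scaled residual.
        grad = 2.0 * (self.y0 + self.sigma * self.y_pred - y_star) * self.sigma
        self.inner.update(grad)
```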
4.2 Improving the regret bound via scaling

Given an online linear learning algorithm A over the function class F with regret R, then for any scaling parameter λ > 0 we trivially obtain an online linear learning algorithm, denoted λA, over a λ-scaling of F, viz. λF := {λf | f ∈ F}, simply by multiplying the predictions of A by λ. The corresponding regret scales by λ as well, i.e. it becomes λR.

The performance of Algorithm 1 can be improved by using such an online linear learning algorithm over λF for a suitably chosen scaling λ ≥ 1 of the function class F. The regret bound from Theorem 1 improves because the 1-norm of f measured with respect to λF, i.e. ‖f‖′₁ = max{1, ‖f‖₁/λ}, is smaller than ‖f‖₁, but degrades because the parameter B′ = min{ηNλD, inf{b ≥ λD : η β_b b² ≥ η_b λD}} is larger than B. But, as detailed in Section C.2 of the supplementary material, in many situations the improvement due to the former compensates for the degradation due to the latter, and overall we can get improved regret bounds using a suitable value of λ.
4.3 Improvements for batch boosting

Our algorithmic technique can be easily specialized and modified to the standard batch setting with a fixed batch of training examples and a base learning algorithm operating over the batch, exactly as in [24]. The main difference compared to the algorithm of [24] is the use of the σ variables to scale the coefficients of the weak hypotheses appropriately. While a seemingly innocuous tweak, this allows us to derive bounds analogous to those of Zhang and Yu [24] on the optimization error that show that our boosting algorithm converges exponentially faster. A detailed comparison can be found in Section C.3 of the supplementary material.
5 Experimental Results

Is it possible to boost in an online fashion in practice with real base learners? To study this question, we implemented and evaluated Algorithms 1 and 2 within the Vowpal Wabbit (VW) open source machine learning system [23]. The three online base learners used were VW's default linear learner (a variant of stochastic gradient descent), two-layer sigmoidal neural networks with 10 hidden units, and regression stumps.

Regression stumps were implemented by doing stochastic gradient descent on each individual feature, and predicting with the best-performing non-zero valued feature in the current example.
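A sketch of such a regression-stump learner (our reading of the description above; VW's actual implementation may differ in details such as the notion of "best-performing"):

```python
import numpy as np

class RegressionStumps:
    """Per-feature SGD regressors; predict with the non-zero feature of the
    current example whose single-feature model has the lowest running loss."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)       # one scalar weight per feature
        self.loss = np.zeros(dim)    # running squared loss per feature
        self.lr = lr

    def predict(self, x):
        nz = np.nonzero(x)[0]
        best = nz[np.argmin(self.loss[nz])]
        return self.w[best] * x[best]

    def update(self, x, y_star):
        nz = np.nonzero(x)[0]
        preds = self.w[nz] * x[nz]
        self.loss[nz] += (preds - y_star) ** 2
        self.w[nz] -= self.lr * 2 * (preds - y_star) * x[nz]
```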
All experiments were done on a collection of 14 publicly available regression and classification datasets (described in Section D in the supplementary material) using squared loss. The only parameters tuned were the learning rate and the number of weak learners, as well as the step size parameter for Algorithm 1. Parameters were tuned based on progressive validation loss on half of the dataset; reported is progressive validation loss on the remaining half. Progressive validation is a standard online validation technique, where each training example is used for testing before it is used for updating the model [3].
The following table reports the average and the median, over the datasets, of the relative improvement in squared loss over the respective base learner. Detailed results can be found in Section D in the supplementary material.

Base learner       | Avg. relative improvement     | Median relative improvement
                   | Algorithm 1 | Algorithm 2     | Algorithm 1 | Algorithm 2
SGD                | 1.65%       | 1.33%           | 0.03%       | 0.29%
Regression stumps  | 20.22%      | 15.9%           | 10.45%      | 13.69%
Neural networks    | 7.88%       | 0.72%           | 0.72%       | 0.33%
Note that both SGD (stochastic gradient descent) and neural networks are already very strong learners. Naturally, boosting is much more effective for regression stumps, which is a weak base learner.

6 Conclusions and Future Work

In this paper we generalized the theory of boosting for regression problems to the online setting and provided online boosting algorithms with theoretical convergence guarantees. Our algorithmic technique also improves convergence guarantees for batch boosting algorithms. We also provide experimental evidence that our boosting algorithms do improve prediction accuracy over commonly used base learners in practice, with greater improvements for weaker base learners. The main remaining open question is whether the boosting algorithm for competing with the span of the base functions is optimal in any sense, similar to our proof of optimality for the boosting algorithm for competing with the convex hull of the base functions.
References
[1] Peter L. Bartlett and Mikhail Traskin. AdaBoost is consistent. JMLR, 8:2347?2368,
2007.
[2] Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms
for online boosting. In ICML, 2015.
[3] Avrim Blum, Adam Kalai, and John Langford. Beating the hold-out: Bounds for k-fold
and progressive cross-validation. In COLT, pages 203?208, 1999.
[4] Shang-Tse Chen, Hsuan-Tien Lin, and Chi-Jen Lu. An Online Boosting Algorithm
with Theoretical Justifications. In ICML, 2012.
[5] Shang-Tse Chen, Hsuan-Tien Lin, and Chi-Jen Lu. Boosting with Online Binary Learners for the Multiclass Bandit Problem. In ICML, 2014.
[6] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost
and Bregman distances. In COLT, 2000.
[7] Nigel Duffy and David Helmbold. Boosting methods for regression. Machine Learning,
47(2/3):153?200, 2002.
[8] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval
Res. Logis. Quart., 3:95?110, 1956.
[9] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line
learning and an application to boosting. JCSS, 55(1):119?139, August 1997.
[10] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine.
Annals of Statistics, 29(5), October 2001.
[11] Helmut Grabner and Horst Bischof. On-line boosting and vision. In CVPR, volume 1,
pages 260?267, 2006.
[12] Helmut Grabner, Christian Leistner, and Horst Bischof. Semi-supervised on-line boosting for robust tracking. In ECCV, pages 234?247, 2008.
[13] Trevor Hastie and Robert J. Tibshirani. Generalized Additive Models. Chapman and Hall, 1990.
[14] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. Springer Verlag, 2001.
[15] Elad Hazan and Satyen Kale. Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization. JMLR, 15(1):2489?2512, 2014.
[16] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online
convex optimization. Machine Learning, 69(2-3):169?192, 2007.
[17] Xiaoming Liu and Ting Yu. Gradient feature selection for online boosting. In ICCV,
pages 1?8, 2007.
[18] Stéphane G. Mallat and Zhifeng Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, December 1993.
[19] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Boosting algorithms
as gradient descent. In NIPS, 2000.
[20] Nikunj C. Oza and Stuart Russell. Online bagging and boosting. In AISTATS, pages
105?112, 2001.
[21] Robert E. Schapire and Yoav Freund. Boosting: Foundations and Algorithms. MIT
Press, 2012.
[22] Matus Telgarsky. Boosting with the logistic loss is consistent. In COLT, 2013.
[23] VW. URL https://github.com/JohnLangford/vowpal_wabbit/.
[24] Tong Zhang and Bin Yu. Boosting with early stopping: Convergence and consistency.
Annals of Statistics, 33(4):1538?1579, 2005.
[25] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient
ascent. In ICML, 2003.
9
Regularization-Free Estimation in Trace Regression with
Symmetric Positive Semidefinite Matrices
Matthias Hein
Department of Computer Science
Department of Mathematics
Saarland University
Saarbrücken, Germany
hein@cs.uni-saarland.de
Martin Slawski
Ping Li
Department of Statistics & Biostatistics
Department of Computer Science
Rutgers University
Piscataway, NJ 08854, USA
{martin.slawski@rutgers.edu,
pingli@stat.rutgers.edu}
Abstract
Trace regression models have received considerable attention in the context of
matrix completion, quantum state tomography, and compressed sensing. Estimation of the underlying matrix via regularization-based approaches promoting
low-rankedness, notably nuclear norm regularization, has enjoyed great popularity. In this paper, we argue that such regularization may no longer be necessary
if the underlying matrix is symmetric positive semidefinite (spd) and the design
satisfies certain conditions. In this situation, simple least squares estimation subject to an spd constraint may perform as well as regularization-based approaches
with a proper choice of regularization parameter, which entails knowledge of the
noise level and/or tuning. By contrast, constrained least squares estimation comes
without any tuning parameter and may hence be preferred due to its simplicity.
1 Introduction
Trace regression models of the form
$$y_i = \mathrm{tr}(X_i^\top \Theta^*) + \varepsilon_i, \qquad i = 1, \dots, n, \qquad (1)$$
where $\Theta^* \in \mathbb{R}^{m_1 \times m_2}$ is the parameter of interest to be estimated given measurement matrices $X_i \in \mathbb{R}^{m_1 \times m_2}$ and observations $y_i$ contaminated by errors $\varepsilon_i$, $i = 1, \dots, n$, have attracted considerable
interest in high-dimensional statistical inference, machine learning and signal processing over the past few years. Research in these areas has focused on a setting with few measurements $n \ll m_1 \cdot m_2$ and $\Theta^*$ being (approximately) of low rank $r \ll \min\{m_1, m_2\}$. Such a setting is relevant to problems such as matrix completion [6, 23], compressed sensing [5, 17], quantum state tomography [11] and phase retrieval [7]. A common thread in these works is the use of the nuclear norm of a matrix as a convex surrogate for its rank [18] in regularized estimation amenable to modern optimization techniques. This approach can be seen as a natural generalization of $\ell_1$-norm (aka lasso) regularization for the linear regression model [24] that arises as a special case of model (1) in which both $\Theta^*$ and $\{X_i\}_{i=1}^n$ are diagonal. It is inarguable that in general regularization is essential if $n < m_1 \cdot m_2$. The situation is less clear if $\Theta^*$ is known to satisfy additional constraints that can be incorporated in
estimation. Specifically, in the present paper we consider the case in which $m_1 = m_2 = m$ and $\Theta^*$ is known to be symmetric positive semidefinite (spd), i.e. $\Theta^* \in \mathbb{S}^m_+$, with $\mathbb{S}^m_+$ denoting the positive semidefinite cone in the space of symmetric real $m \times m$ matrices $\mathbb{S}^m$. The set $\mathbb{S}^m_+$ deserves specific interest as it includes covariance matrices and Gram matrices in kernel-based learning [20]. It is rather common for these matrices to be of low rank (at least approximately), given the widespread use of principal components analysis and low-rank kernel approximations [28]. In the present paper, we focus on the usefulness of the spd constraint for estimation. We argue that if $\Theta^*$ is spd and the measurement matrices $\{X_i\}_{i=1}^n$ obey certain conditions, constrained least squares estimation
$$\min_{\Theta \in \mathbb{S}^m_+} \frac{1}{2n} \sum_{i=1}^n \big(y_i - \mathrm{tr}(X_i^\top \Theta)\big)^2 \qquad (2)$$
may perform similarly well in prediction and parameter estimation as approaches employing nuclear
norm regularization with proper choice of the regularization parameter, including the interesting
regime $n < \delta_m$, where $\delta_m = \dim(\mathbb{S}^m) = m(m+1)/2$. Note that the objective in (2) only consists
of a data fitting term and is hence convenient to work with in practice since there is no free parameter.
Our findings can be seen as a non-commutative extension of recent results on non-negative least
squares estimation for linear regression [16, 21].
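For concreteness, the following sketch solves (2) by projected gradient descent, using the standard eigenvalue-clipping projection onto the PSD cone; the step size rule and iteration count are illustrative assumptions, not part of the paper.

```python
import numpy as np

def project_psd(S):
    """Euclidean projection onto the positive semidefinite cone."""
    S = (S + S.T) / 2
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0, None)) @ V.T

def constrained_ls(Xs, y, iters=500):
    """Projected gradient descent for problem (2); Xs has shape (n, m, m)."""
    n, m, _ = Xs.shape
    # Crude Lipschitz bound (1/n) * sum_i ||X_i||_F^2 for the step size.
    L = np.sum(Xs.reshape(n, -1) ** 2) / n
    Theta = np.zeros((m, m))
    for _ in range(iters):
        resid = y - np.einsum('ijk,jk->i', Xs, Theta)   # y_i - tr(X_i Theta)
        grad = -np.einsum('i,ijk->jk', resid, Xs) / n   # gradient of the objective
        Theta = project_psd(Theta - grad / L)
    return Theta
```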
Related work. Model (1) with $\Theta^* \in \mathbb{S}^m_+$ has been studied in several recent papers. A good deal of these papers consider the setup of compressed sensing in which the $\{X_i\}_{i=1}^n$ can be chosen by the user, with the goal to minimize the number of observations required to (approximately) recover $\Theta^*$. For example, in [27], recovery of $\Theta^*$ being low-rank from noiseless observations ($\varepsilon_i = 0$, $i = 1, \dots, n$) by solving a feasibility problem over $\mathbb{S}^m_+$ is considered, which is equivalent to the constrained least squares problem (2) in a noiseless setting.
In [3, 8], recovery from rank-one measurements is considered, i.e., for $\{x_i\}_{i=1}^n \subset \mathbb{R}^m$,
$$y_i = x_i^\top \Theta^* x_i + \varepsilon_i = \mathrm{tr}(X_i^\top \Theta^*) + \varepsilon_i, \quad \text{with } X_i = x_i x_i^\top, \quad i = 1, \dots, n. \qquad (3)$$
As opposed to [3, 8], where estimation based on nuclear norm regularization is proposed, the present work is devoted to regularization-free estimation. While rank-one measurements as in (3) are also in the center of interest herein, our framework is not limited to this case. In [3] an application of (3) to covariance matrix estimation given only one-dimensional projections $\{x_i^\top z_i\}_{i=1}^n$ of the data points is discussed, where the $\{z_i\}_{i=1}^n$ are i.i.d. from a distribution with zero mean and covariance matrix $\Theta^*$. In fact, this fits the model under study with observations
$$y_i = (x_i^\top z_i)^2 = x_i^\top z_i z_i^\top x_i = x_i^\top \Theta^* x_i + \varepsilon_i, \quad \varepsilon_i = x_i^\top \{z_i z_i^\top - \Theta^*\} x_i, \quad i = 1, \dots, n. \qquad (4)$$
Specializing (3) to the case in which $\Theta^* = \theta^* (\theta^*)^\top$, one obtains the quadratic model
$$y_i = |x_i^\top \theta^*|^2 + \varepsilon_i \qquad (5)$$
which (with complex-valued $\theta^*$) is relevant to the problem of phase retrieval [14]. The approach of [7] treats (5) as an instance of (1) and uses nuclear norm regularization to enforce rank-one solutions. In follow-up work [4], the authors show a refined recovery result stating that imposing an spd constraint, without regularization, suffices. A similar result has been proven independently by [10]. However, the results in [4] and [10] only concern model (5). After posting an extended version [22] of the present paper, a generalization of [4, 10] to general low-rank spd matrices has been achieved in [13]. Since [4, 10, 13] consider bounded noise, whereas the analysis herein assumes Gaussian noise, our results are not directly comparable to those in [4, 10, 13].
Notation. $\mathbb{M}^d$ denotes the space of real $d \times d$ matrices with inner product $\langle M, M' \rangle := \mathrm{tr}(M^\top M')$. The subspace of symmetric matrices $\mathbb{S}^d$ has dimension $\delta_d := d(d+1)/2$. $M \in \mathbb{S}^d$ has an eigen-decomposition $M = U \Lambda U^\top = \sum_{j=1}^d \lambda_j(M)\, u_j u_j^\top$, where $\lambda_1(M) = \lambda_{\max}(M) \ge \lambda_2(M) \ge \dots \ge \lambda_d(M) = \lambda_{\min}(M)$, $\Lambda = \mathrm{diag}(\lambda_1(M), \dots, \lambda_d(M))$, and $U = [u_1 \dots u_d]$. For $q \in [1, \infty)$ and $M \in \mathbb{S}^d$, $\|M\|_q := (\sum_{j=1}^d |\lambda_j(M)|^q)^{1/q}$ denotes the Schatten-$q$-norm ($q=1$: nuclear norm; $q=2$: Frobenius norm $\|M\|_F$; $q=\infty$: spectral norm $\|M\|_\infty := \max_{1 \le j \le d} |\lambda_j(M)|$). Let $\mathbb{S}_1(d) = \{M \in \mathbb{S}^d : \|M\|_1 = 1\}$ and $\mathbb{S}_1^+(d) = \mathbb{S}_1(d) \cap \mathbb{S}^d_+$. The symbols $\succeq, \succ, \preceq, \prec$ refer to the semidefinite ordering. For a set $A$ and $\alpha \in \mathbb{R}$, $\alpha A := \{\alpha a,\; a \in A\}$.
It is convenient to re-write model (1) as $y = \mathcal{X}(\Theta^*) + \varepsilon$, where $y = (y_i)_{i=1}^n$, $\varepsilon = (\varepsilon_i)_{i=1}^n$, and $\mathcal{X} : \mathbb{M}^m \to \mathbb{R}^n$ is a linear map defined by $(\mathcal{X}(M))_i = \mathrm{tr}(X_i^\top M)$, $i = 1, \dots, n$, referred to as the sampling operator. Its adjoint $\mathcal{X}^* : \mathbb{R}^n \to \mathbb{M}^m$ is given by the map $v \mapsto \sum_{i=1}^n v_i X_i$.
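A direct translation of the sampling operator and its adjoint into code, with a numerical adjoint check; the array layout (a stack of symmetric matrices) is our own convention.

```python
import numpy as np

def sampling_op(Xs, M):
    """X(M): the sampling operator, (X(M))_i = tr(X_i^T M)."""
    return np.einsum('ijk,jk->i', Xs, M)

def sampling_adjoint(Xs, v):
    """X*(v) = sum_i v_i X_i: the adjoint map from R^n to matrices."""
    return np.einsum('i,ijk->jk', v, Xs)

# Adjoint check: <X(M), v> should equal <M, X*(v)> up to rounding.
rng = np.random.default_rng(0)
Xs = rng.standard_normal((5, 4, 4)); Xs = (Xs + Xs.transpose(0, 2, 1)) / 2
M = rng.standard_normal((4, 4)); M = (M + M.T) / 2
v = rng.standard_normal(5)
assert np.isclose(sampling_op(Xs, M) @ v, np.sum(M * sampling_adjoint(Xs, v)))
```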
Supplement. The appendix contains all proofs, additional experiments and figures.
2 Analysis
Preliminaries. Throughout this section, we consider a special instance of model (1) in which
$$y_i = \mathrm{tr}(X_i \Theta^*) + \varepsilon_i, \quad \text{where } \Theta^* \in \mathbb{S}^m_+,\; X_i \in \mathbb{S}^m, \text{ and } \varepsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2),\; i = 1, \dots, n. \qquad (6)$$
The assumption that the errors $\{\varepsilon_i\}_{i=1}^n$ are Gaussian is made for convenience as it simplifies the stochastic part of our analysis, which could be extended to sub-Gaussian errors.
Note that w.l.o.g., we may assume that $\{X_i\}_{i=1}^n \subset \mathbb{S}^m$. In fact, since $\Theta^* \in \mathbb{S}^m$, for any $M \in \mathbb{M}^m$ we have that $\mathrm{tr}(M \Theta^*) = \mathrm{tr}(M^{\mathrm{sym}} \Theta^*)$, where $M^{\mathrm{sym}} = (M + M^\top)/2$.
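In code, this w.l.o.g. symmetrization step is a one-liner (the array layout is our convention):

```python
import numpy as np

def symmetrize(Xs):
    """Replace each X_i by its symmetric part; tr(M Theta*) is unchanged
    for symmetric Theta*, so nothing is lost (the w.l.o.g. step above)."""
    return (Xs + Xs.transpose(0, 2, 1)) / 2
```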
In the sequel, we study the statistical performance of the constrained least squares estimator
$$\widehat{\Theta} \in \operatorname*{argmin}_{\Theta \in \mathbb{S}^m_+} \frac{1}{2n} \|y - \mathcal{X}(\Theta)\|_2^2 \qquad (7)$$
under model (6). More specifically, under certain conditions on $\mathcal{X}$, we shall derive bounds on
$$\text{(a)}\;\; \frac{1}{n}\|\mathcal{X}(\Theta^*) - \mathcal{X}(\widehat{\Theta})\|_2^2, \quad \text{and} \quad \text{(b)}\;\; \|\widehat{\Theta} - \Theta^*\|_1, \qquad (8)$$
where (a) will be referred to as "prediction error" below. The most basic method for estimating $\Theta^*$
is ordinary least squares (ols) estimation
$$\widehat{\Theta}^{\mathrm{ols}} \in \operatorname*{argmin}_{\Theta \in \mathbb{S}^m} \frac{1}{2n} \|y - \mathcal{X}(\Theta)\|_2^2, \qquad (9)$$
which is computationally simpler than (7). While (7) requires convex programming, (9) boils down to solving a linear system of equations in $\delta_m = m(m+1)/2$ variables. On the other hand, the prediction error of ols scales as $O_P(\dim(\mathrm{range}(\mathcal{X}))/n)$, where $\dim(\mathrm{range}(\mathcal{X}))$ can be as large as $\min\{n, \delta_m\}$, in which case the prediction error vanishes only if $\delta_m/n \to 0$ as $n \to \infty$. Moreover, the estimation error $\|\widehat{\Theta}^{\mathrm{ols}} - \Theta^*\|_1$ is unbounded unless $n \ge \delta_m$. Research conducted over the past few years has thus focused on methods dealing successfully with the case $n < \delta_m$ as long as the target $\Theta^*$ has additional structure, notably low-rankedness. Indeed, if $\Theta^*$ has rank $r \ll m$, the intrinsic dimension of the problem becomes (roughly) $mr \ll \delta_m$. In a large body of work, nuclear norm regularization, which serves as a convex surrogate of rank regularization, is considered as a computationally convenient alternative for which a series of adaptivity properties to underlying low-rankedness has been established, e.g. [5, 15, 17, 18, 19]. Complementing (9) with nuclear norm regularization yields the estimator
$$\widehat{\Theta}_{\lambda} \in \operatorname*{argmin}_{\Theta \in \mathbb{S}^m} \frac{1}{2n} \|y - \mathcal{X}(\Theta)\|_2^2 + \lambda \|\Theta\|_1, \qquad (10)$$
where $\lambda > 0$ is a regularization parameter. In case an spd constraint is imposed, (10) becomes
$$\widehat{\Theta}_{\lambda}^{+} \in \operatorname*{argmin}_{\Theta \in \mathbb{S}^m_+} \frac{1}{2n} \|y - \mathcal{X}(\Theta)\|_2^2 + \lambda\, \mathrm{tr}(\Theta). \qquad (11)$$
Our analysis aims at elucidating potential advantages of the spd constraint in the constrained least squares problem (7) from a statistical point of view. It turns out that depending on properties of $\mathcal{X}$, the behaviour of $\widehat{\Theta}$ can range from a performance similar to the least squares estimator $\widehat{\Theta}^{\mathrm{ols}}$ on the one hand to a performance similar to the nuclear norm regularized estimator $\widehat{\Theta}_{\lambda}^{+}$ with properly chosen/tuned $\lambda$ on the other hand. The latter case appears to be remarkable: $\widehat{\Theta}$ may enjoy similar adaptivity properties as nuclear norm regularized estimators even though $\widehat{\Theta}$ is obtained from a pure data-fitting problem without explicit regularization.
2.1 Negative result
We first discuss a negative example of $\mathcal{X}$ for which the spd-constrained estimator $\widehat{\Theta}$ does not improve (substantially) over the unconstrained estimator $\widehat{\Theta}^{\mathrm{ols}}$. At the same time, this example provides clues on conditions to be imposed on $\mathcal{X}$ to achieve substantially better performance.
Random Gaussian design. Consider the Gaussian orthogonal ensemble (GOE)
$$\mathrm{GOE}(m) = \Big\{X = (x_{jk}),\; \{x_{jj}\}_{j=1}^m \overset{\text{i.i.d.}}{\sim} N(0,1),\; \{x_{jk} = x_{kj}\}_{1 \le j < k \le m} \overset{\text{i.i.d.}}{\sim} N(0, 1/2)\Big\}.$$
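A sampler for GOE(m) matching the variances above may be sketched as follows (the construction via a scaled Gaussian matrix is one of several equivalent choices):

```python
import numpy as np

def sample_goe(m, rng=np.random.default_rng()):
    """Draw one matrix from GOE(m): N(0,1) diagonal, N(0,1/2) off-diagonal."""
    A = rng.standard_normal((m, m)) / np.sqrt(2)
    X = (A + A.T) / np.sqrt(2)                    # off-diagonal variance 1/2
    np.fill_diagonal(X, rng.standard_normal(m))   # diagonal variance 1
    return X
```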
Gaussian measurements are common in compressed sensing. It is hence of interest to study measurements $\{X_i\}_{i=1}^n \overset{\text{i.i.d.}}{\sim} \mathrm{GOE}(m)$ in the context of the constrained least squares problem (7). The following statement points to a serious limitation associated with such measurements.
Proposition 1. Consider $X_i \overset{\text{i.i.d.}}{\sim} \mathrm{GOE}(m)$, $i = 1, \dots, n$. For any $\varepsilon > 0$, if $n \le (1-\varepsilon)\delta_m/2$, with probability at least $1 - 32\exp(-\varepsilon^2 \delta_m)$, there exists $\Delta \in \mathbb{S}^m_+$, $\Delta \ne 0$, such that $\mathcal{X}(\Delta) = 0$.
Proposition 1 implies that if the number of measurements drops below 1/2 of the ambient dimension $\delta_m$, estimating $\Theta^*$ based on (7) becomes ill-posed; the estimation error $\|\widehat{\Theta} - \Theta^*\|_1$ is unbounded, irrespective of the rank of $\Theta^*$. Geometrically, the consequence of Proposition 1 is that the convex cone $\mathcal{C}_{\mathcal{X}} = \{z \in \mathbb{R}^n : z = \mathcal{X}(\Theta),\; \Theta \in \mathbb{S}^m_+\}$ contains $0$. Unless $0$ is contained in the boundary of $\mathcal{C}_{\mathcal{X}}$ (we conjecture that this event has measure zero), this means that $\mathcal{C}_{\mathcal{X}} = \mathbb{R}^n$, i.e. the spd constraint becomes vacuous.
2.2 Slow Rate Bound on the Prediction Error
We present a positive result on the spd-constrained least squares estimator $\widehat{\Theta}$ under an additional condition on the sampling operator $\mathcal{X}$. Specifically, the prediction error will be bounded as
$$\frac{1}{n}\|\mathcal{X}(\Theta^*) - \mathcal{X}(\widehat{\Theta})\|_2^2 = O(\lambda_0 \|\Theta^*\|_1 + \lambda_0^2), \quad \text{where } \lambda_0 = \frac{1}{n}\|\mathcal{X}^*(\varepsilon)\|_\infty, \qquad (12)$$
with $\lambda_0$ typically being of the order $O(\sqrt{m/n})$ (up to log factors). The rate in (12) can be a significant improvement of what is achieved by $\widehat{\Theta}^{\mathrm{ols}}$ if $\|\Theta^*\|_1 = \mathrm{tr}(\Theta^*)$ is small. If $\lambda_0 = o(\|\Theta^*\|_1)$, that rate coincides with those of the nuclear norm regularized estimators (10), (11) with regularization parameter $\lambda \asymp \lambda_0$, cf. Theorem 1 in [19]. For nuclear norm regularized estimators, the rate $O(\lambda_0 \|\Theta^*\|_1)$ is achieved for any choice of $\mathcal{X}$ and is slow in the sense that the squared prediction error only decays at the rate $n^{-1/2}$ instead of $n^{-1}$.
Condition on $\mathcal{X}$. In order to arrive at a suitable condition to be imposed on $\mathcal{X}$ so that (12) can be achieved, it makes sense to re-consider the negative example of Proposition 1, which states that as long as $n$ is bounded away from $\delta_m/2$ from above, there is a non-trivial $\Delta \in \mathbb{S}^m_+$ such that $\mathcal{X}(\Delta) = 0$. Equivalently, $\mathrm{dist}(\mathcal{P}_{\mathcal{X}}, 0) = \min_{\Delta \in \mathbb{S}_1^+(m)} \|\mathcal{X}(\Delta)\|_2 = 0$, where
$$\mathcal{P}_{\mathcal{X}} := \{z \in \mathbb{R}^n : z = \mathcal{X}(\Delta),\; \Delta \in \mathbb{S}_1^+(m)\}, \quad \text{and} \quad \mathbb{S}_1^+(m) := \{\Delta \in \mathbb{S}^m_+ : \mathrm{tr}(\Delta) = 1\}.$$
In this situation, it is impossible to derive a non-trivial bound on the prediction error, as $\mathrm{dist}(\mathcal{P}_{\mathcal{X}}, 0) = 0$ may imply $\mathcal{C}_{\mathcal{X}} = \mathbb{R}^n$ so that $\|\mathcal{X}(\Theta^*) - \mathcal{X}(\widehat{\Theta})\|_2^2 = \|\varepsilon\|_2^2$. To rule this out, the condition
$\mathrm{dist}(\mathcal{P}_{\mathcal{X}}, 0) > 0$ is natural. More strongly, one may ask for the following:
$$\text{There exists a constant } \tau > 0 \text{ such that } \delta_0^2(\mathcal{X}) := \min_{\Delta \in \mathbb{S}_1^+(m)} \frac{1}{n}\|\mathcal{X}(\Delta)\|_2^2 \ge \tau^2. \qquad (13)$$
An analogous condition is sufficient for a slow rate bound in the vector case, cf. [21]. However, the condition for the slow rate bound in Theorem 1 below is somewhat stronger than (13).
Condition 1. There exist constants $R_* > 1$, $\tau_* > 0$ s.t. $\tau^2(\mathcal{X}, R_*) \ge \tau_*^2$, where for $R \in \mathbb{R}$,
$$\tau^2(\mathcal{X}, R) = \mathrm{dist}^2(R\,\mathcal{P}_{\mathcal{X}}, \mathcal{P}_{\mathcal{X}})/n = \min_{A \in R\,\mathbb{S}_1^+(m),\; B \in \mathbb{S}_1^+(m)} \frac{1}{n}\|\mathcal{X}(A) - \mathcal{X}(B)\|_2^2.$$
The following condition is sufficient for Condition 1 and in some cases much easier to check.
Proposition 2. Suppose there exists $a \in \mathbb{R}^n$, $\|a\|_2 \le 1$, and constants $0 < \phi_{\min} \le \phi_{\max}$ s.t.
$$\lambda_{\min}(n^{-1/2}\mathcal{X}^*(a)) \ge \phi_{\min}, \quad \text{and} \quad \lambda_{\max}(n^{-1/2}\mathcal{X}^*(a)) \le \phi_{\max}.$$
Then for any $\mu > 1$, $\mathcal{X}$ satisfies Condition 1 with $R_* = \mu(\phi_{\max}/\phi_{\min})$ and $\tau_*^2 = (\mu - 1)^2 \phi_{\max}^2$.
The condition of Proposition 2 can be phrased as having a positive definite matrix in the image of the unit ball under $\mathcal{X}^*$, which, after scaling by $1/\sqrt{n}$, has its smallest eigenvalue bounded away from zero and a bounded condition number. As a simple example, suppose that $X_1 = \sqrt{n}\, I$. Invoking Proposition 2 with $a = (1, 0, \dots, 0)^\top$ and $\mu = 2$, we find that Condition 1 is satisfied with $R_* = 2$ and $\tau_*^2 = 1$. A more interesting example is random design where the $\{X_i\}_{i=1}^n$ are (sample) covariance matrices, where the underlying random vectors satisfy appropriate tail or moment conditions.
Corollary 1. Let $\Pi_m$ be a probability distribution on $\mathbb{R}^m$ with second moment matrix $\Gamma := \mathbb{E}_{z \sim \Pi_m}[z z^\top]$ satisfying $\lambda_{\min}(\Gamma) > 0$. Consider the random matrix ensemble
$$\mathcal{M}(\Pi_m, q) = \Big\{\tfrac{1}{q}\textstyle\sum_{k=1}^q z_k z_k^\top,\; \{z_k\}_{k=1}^q \overset{\text{i.i.d.}}{\sim} \Pi_m\Big\}. \qquad (14)$$
Suppose that $\{X_i\}_{i=1}^n \overset{\text{i.i.d.}}{\sim} \mathcal{M}(\Pi_m, q)$ and let $\widehat{\Gamma}_n := \frac{1}{n}\sum_{i=1}^n X_i$ and $0 < \varepsilon_n < \lambda_{\min}(\Gamma)$. Under the event $\{\|\Gamma - \widehat{\Gamma}_n\|_\infty \le \varepsilon_n\}$, $\mathcal{X}$ satisfies Condition 1 with
$$R_* = \frac{2(\lambda_{\max}(\Gamma) + \varepsilon_n)}{\lambda_{\min}(\Gamma) - \varepsilon_n} \quad \text{and} \quad \tau_*^2 = (\lambda_{\max}(\Gamma) + \varepsilon_n)^2.$$
It is instructive to spell out Corollary 1 with $\Pi_m$ as the standard Gaussian distribution on $\mathbb{R}^m$. The matrix $\widehat{\Gamma}_n$ equals the sample covariance matrix computed from $N = n \cdot q$ samples. It is well-known [9] that for $m, N$ large, $\lambda_{\max}(\widehat{\Gamma}_n)$ and $\lambda_{\min}(\widehat{\Gamma}_n)$ concentrate sharply around $(1+\eta_n)^2$ and $(1-\eta_n)^2$, respectively, where $\eta_n = \sqrt{m/N}$. Hence, for any $\xi > 0$, there exists $C_\xi > 1$ so that if $N \ge C_\xi\, m$, it holds that $R_* \le 2 + \xi$. Similar though weaker concentration results for $\|\Gamma - \widehat{\Gamma}_n\|_\infty$ exist for the broader class of distributions $\Pi_m$ with finite fourth moments [26]. Specialized to $q = 1$, Corollary 1 yields a statement about $\mathcal{X}$ made up from random rank-one measurements $X_i = z_i z_i^\top$, $i = 1, \dots, n$, cf. (3). The preceding discussion indicates that Condition 1 tends to be satisfied in this case.
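The following sketch instantiates Proposition 2 for rank-one measurements with the test vector $a = n^{-1/2}(1, \dots, 1)^\top$, for which $n^{-1/2}\mathcal{X}^*(a) = \widehat{\Gamma}_n$; the choice $\mu = 2$ and the sample sizes are illustrative.

```python
import numpy as np

def condition1_via_prop2(Xs, mu=2.0):
    """Constants (R_*, tau_*^2) from Proposition 2 with a = ones/sqrt(n),
    so that n^{-1/2} X*(a) equals the average measurement matrix."""
    G = Xs.mean(axis=0)
    lam = np.linalg.eigvalsh(G)           # ascending eigenvalues
    phi_min, phi_max = lam[0], lam[-1]
    if phi_min <= 0:
        return None                        # Proposition 2 does not apply
    return mu * phi_max / phi_min, (mu - 1) ** 2 * phi_max ** 2

rng = np.random.default_rng(1)
m, n = 20, 2000
zs = rng.standard_normal((n, m))
Xs = np.einsum('ij,ik->ijk', zs, zs)      # rank-one measurements z_i z_i^T
print(condition1_via_prop2(Xs))           # prints (R_*, tau_*^2)
```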
Theorem 1. Suppose that model (6) holds with $\mathcal{X}$ satisfying Condition 1 with constants $R_*$ and $\tau_*^2$. We then have
$$\frac{1}{n}\|\mathcal{X}(\Theta^*) - \mathcal{X}(\widehat{\Theta})\|_2^2 \le \max\left\{2(1+R_*)\lambda_0\|\Theta^*\|_1,\;\; 2\lambda_0\|\Theta^*\|_1 + 8\left(\frac{R_*}{\tau_*}\,\lambda_0\right)^2\right\},$$
where, for any $\alpha \ge 0$, with probability at least $1 - (2m)^{-\alpha}$,
$$\lambda_0 \le \sigma\sqrt{2(1+\alpha)\log(2m)\,\frac{V_n^2}{n}}, \quad \text{where } V_n^2 = \Big\|\frac{1}{n}\sum_{i=1}^n X_i^2\Big\|_\infty.$$
Remark: Under the scalings $R_* = O(1)$ and $\tau_*^2 = \Omega(1)$, the bound of Theorem 1 is of the order $O(\lambda_0\|\Theta^*\|_1 + \lambda_0^2)$ as announced at the beginning of this section. For given $\mathcal{X}$, the quantity $\tau^2(\mathcal{X}, R)$ can be evaluated by solving a least squares problem with spd constraints. Hence it is feasible to check in practice whether Condition 1 holds. For later reference, we evaluate the term $V_n^2$ for $\mathcal{M}(\Pi_m, q)$ with $\Pi_m$ as standard Gaussian distribution. As shown in the supplement, with high probability, $V_n^2 = O(m \log n)$ holds as long as $m = O(nq)$.
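As a hedged illustration of the remark, the simpler quantity $\delta_0^2(\mathcal{X})$ from (13) can be approximated by projected gradient descent over the unit-trace PSD matrices; the spectral simplex projection and step size below are standard choices, not prescribed by the paper.

```python
import numpy as np

def project_simplex(w):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(w) + 1)
    k = np.nonzero(u - css / idx > 0)[0][-1]
    return np.clip(w - css[k] / idx[k], 0.0, None)

def delta0_sq(Xs, iters=2000, step=None):
    """Approximate delta_0^2(X) = min over unit-trace PSD Delta of ||X(Delta)||_2^2 / n.
    Convex problem; projected gradient converges for a small enough step."""
    n, m, _ = Xs.shape
    if step is None:
        step = 0.5 * n / np.sum(Xs.reshape(n, -1) ** 2)   # about 1/L
    D = np.eye(m) / m
    for _ in range(iters):
        v = np.einsum('ijk,jk->i', Xs, D)
        G = 2.0 * np.einsum('i,ijk->jk', v, Xs) / n       # gradient
        S = (D - step * G + (D - step * G).T) / 2
        w, V = np.linalg.eigh(S)
        D = (V * project_simplex(w)) @ V.T                # project onto {PSD, tr = 1}
    v = np.einsum('ijk,jk->i', Xs, D)
    return (v @ v) / n
```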
2.3 Bound on the Estimation Error
In the previous subsection, we did not make any assumptions about $\Theta^*$ apart from $\Theta^* \in \mathbb{S}^m_+$. Henceforth, we suppose that $\Theta^*$ is of low rank $1 \le r \le m$ and study the performance of the constrained least squares estimator (7) for prediction and estimation in such a setting.
Preliminaries. Let $\Theta^* = U \Lambda U^\top$ be the eigenvalue decomposition of $\Theta^*$, where
$$U = \begin{bmatrix} U_{\parallel} & U_{\perp} \end{bmatrix}, \quad U_{\parallel} \in \mathbb{R}^{m \times r},\; U_{\perp} \in \mathbb{R}^{m \times (m-r)}, \qquad \Lambda = \begin{bmatrix} \Lambda_r & 0_{r \times (m-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (m-r)} \end{bmatrix},$$
where $\Lambda_r$ is diagonal with positive diagonal entries. Consider the linear subspace
$$\mathbb{T}^\perp = \{M \in \mathbb{S}^m : M = U_{\perp} A U_{\perp}^\top,\; A \in \mathbb{S}^{m-r}\}.$$
From $U_{\perp}^\top U_{\parallel} = 0$, it follows that $\Theta^*$ is contained in the orthogonal complement
$$\mathbb{T} = \{M \in \mathbb{S}^m : M = U_{\parallel} B + B^\top U_{\parallel}^\top,\; B \in \mathbb{R}^{r \times m}\},$$
of dimension $mr - r(r-1)/2 \ll \delta_m$ if $r \ll m$. The image of $\mathbb{T}$ under $\mathcal{X}$ is denoted by $T$.
Conditions on $\mathcal{X}$. We introduce the key quantities the bound in this subsection depends on.
Separability constant.
$$\tau^2(\mathbb{T}) = \frac{1}{n}\,\mathrm{dist}^2(T, \mathcal{P}_{\mathcal{X}}^{\perp}) = \min_{\Theta \in \mathbb{T},\; \Delta \in \mathbb{S}_1^+(m) \cap \mathbb{T}^\perp} \frac{1}{n}\|\mathcal{X}(\Theta) - \mathcal{X}(\Delta)\|_2^2,$$
$$\text{where} \quad \mathcal{P}_{\mathcal{X}}^{\perp} := \{z \in \mathbb{R}^n : z = \mathcal{X}(\Delta),\; \Delta \in \mathbb{T}^\perp \cap \mathbb{S}_1^+(m)\}.$$
Restricted eigenvalue.
$$\phi^2(\mathbb{T}) = \min_{0 \ne \Theta \in \mathbb{T}} \frac{\|\mathcal{X}(\Theta)\|_2^2/n}{\|\Theta\|_1^2}.$$
As indicated by the following statement concerning the noiseless case, for bounding $\|\widehat{\Theta} - \Theta^*\|$ it is inevitable to have lower bounds on the above two quantities.
Proposition 3. Consider the trace regression model (1) with $\varepsilon_i = 0$, $i = 1, \dots, n$. Then
$$\operatorname*{argmin}_{\Theta \in \mathbb{S}^m_+} \frac{1}{2n}\|\mathcal{X}(\Theta^*) - \mathcal{X}(\Theta)\|_2^2 = \{\Theta^*\} \quad \text{for all } \Theta^* \in \mathbb{T} \cap \mathbb{S}^m_+$$
if and only if it holds that $\tau^2(\mathbb{T}) > 0$ and $\phi^2(\mathbb{T}) > 0$.
Correlation constant. Moreover, we make use of the following quantity. It is not clear to us if it is intrinsically required, or if its appearance in our bound is for merely technical reasons.
$$\mu(\mathbb{T}) = \max\Big\{\tfrac{1}{n}\langle \mathcal{X}(\Theta), \mathcal{X}(\Theta')\rangle : \|\Theta\|_1 \le 1,\; \Theta \in \mathbb{T},\; \Theta' \in \mathbb{S}_1^+(m) \cap \mathbb{T}^\perp\Big\}.$$
We are now in position to provide a bound on $\|\widehat{\Theta} - \Theta^*\|_1$.
Theorem 2. Suppose that model (6) holds with $\Theta^*$ as considered throughout this subsection and let $\lambda_0$ be defined as in Theorem 1. We then have
$$\|\widehat{\Theta} - \Theta^*\|_1 \le \max\left\{8\lambda_0\left(\frac{\mu(\mathbb{T})}{\tau^2(\mathbb{T})\,\phi^2(\mathbb{T})} + \frac{3}{2}\,\frac{1}{\phi^2(\mathbb{T})}\right) + 4\lambda_0\left(\frac{\mu(\mathbb{T})}{\phi^2(\mathbb{T})} + \frac{1}{\tau^2(\mathbb{T})}\right),\;\; \frac{8\lambda_0}{\phi^2(\mathbb{T})}\left(1 + \frac{\mu(\mathbb{T})}{\tau^2(\mathbb{T})}\right),\;\; \frac{8\lambda_0}{\tau^2(\mathbb{T})}\right\}.$$
Remark. Given Theorem 2, an improved bound on the prediction error scaling with $\lambda_0^2$ in place of $\lambda_0$ can be derived, cf. (26) in Appendix D.
The quality of the bound of Theorem 2 depends on how the quantities $\tau^2(\mathbb{T})$, $\phi^2(\mathbb{T})$ and $\mu(\mathbb{T})$ scale with $n$, $m$ and $r$, which is design-dependent. Accordingly, the estimation error in nuclear norm can be non-finite in the worst case and $O(\lambda_0 r)$ in the best case, which matches existing bounds for nuclear norm regularization (cf. Theorem 2 in [19]).
• The quantity $\tau^2(\mathbb{T})$ is specific to the geometry of the constrained least squares problem (7) and hence of critical importance. For instance, it follows from Proposition 1 that for standard Gaussian measurements, $\tau^2(\mathbb{T}) = 0$ with high probability once $n < \delta_m/2$. The situation can be much better for random spd measurements (14) as exemplified for measurements $X_i = z_i z_i^\top$ with $z_i \overset{\text{i.i.d.}}{\sim} N(0, I)$ in Appendix H. Specifically, it turns out that $\tau^2(\mathbb{T}) = \Omega(1/r)$ as long as $n = \Omega(m \cdot r)$.
• It is not restrictive to assume $\phi^2(\mathbb{T})$ is positive. Indeed, without that assumption, even an oracle estimator based on knowledge of the subspace $\mathbb{T}$ would fail. Reasonable sampling operators $\mathcal{X}$ have rank $\min\{n, \delta_m\}$ so that the nullspace of $\mathcal{X}$ only has a trivial intersection with the subspace $\mathbb{T}$ as long as $n \ge \dim(\mathbb{T}) = mr - r(r-1)/2$.
• For fixed $\mathbb{T}$, computing $\mu(\mathbb{T})$ entails solving a biconvex (albeit non-convex) optimization problem in $\Theta \in \mathbb{T}$ and $\Theta' \in \mathbb{S}_1^+(m) \cap \mathbb{T}^\perp$. Block coordinate descent is a practical approach to such optimization problems for which a globally optimal solution is out of reach. In this manner we explore the scaling of $\mu(\mathbb{T})$ numerically as done for $\tau^2(\mathbb{T})$. We find that $\mu(\mathbb{T}) = O(\delta_m/n)$ so that $\mu(\mathbb{T}) = O(1)$ apart from the regime $n/\delta_m \to 0$, without ruling out the possibility of undersampling, i.e. $n < \delta_m$.
3 Numerical Results
In this section, we empirically study properties of the estimator $\widehat{\Theta}$. In particular, its performance relative to regularization-based methods is explored. We also present an application to spiked covariance estimation for the CBCL face image data set and stock prices from NASDAQ.
Comparison with regularization-based approaches. We here empirically evaluate $\|\widehat{\Theta} - \Theta^*\|_1$ relative to well-known regularization-based methods.
Setup. We consider rank-one Wishart measurement matrices $X_i = z_i z_i^\top$, $z_i \overset{\text{i.i.d.}}{\sim} N(0, I)$, $i = 1, \dots, n$, fix $m = 50$ and let $n \in \{0.24, 0.26, \dots, 0.36, 0.4, \dots, 0.56\} \cdot m^2$ and $r \in \{1, 2, \dots, 10\}$ vary. Each configuration of $(n, r)$ is run with 50 replications. In each of these, we generate data
$$y_i = \mathrm{tr}(X_i \Theta^*) + \sigma \varepsilon_i, \quad \sigma = 0.1, \quad i = 1, \dots, n, \qquad (15)$$
where $\Theta^*$ is generated randomly as a rank-$r$ Wishart matrix and the $\{\varepsilon_i\}_{i=1}^n$ are i.i.d. $N(0, 1)$.
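A sketch of one replication of this setup; the exact scaling of the random rank-$r$ Wishart target is not specified above, so the unnormalized choice $W W^\top$ below is an assumption.

```python
import numpy as np

def make_instance(m=50, r=4, n=1200, sigma=0.1, rng=np.random.default_rng()):
    """Generate one replication of (15): rank-r Wishart target,
    rank-one Wishart measurements, Gaussian noise."""
    W = rng.standard_normal((m, r))
    Theta_star = W @ W.T                          # random rank-r Wishart matrix
    zs = rng.standard_normal((n, m))
    Xs = np.einsum('ij,ik->ijk', zs, zs)          # X_i = z_i z_i^T
    y = np.einsum('ijk,jk->i', Xs, Theta_star) + sigma * rng.standard_normal(n)
    return Xs, y, Theta_star
```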
[Figure 1 appears here: six panels, one for each $r \in \{1, 2, 4, 6, 8, 10\}$, plotting the estimation error $\|\widehat{\Sigma} - \Sigma^*\|_1$ against $n$; curves shown: constrained LS, regularized LS #, regularized LS, Chen et al. #, Chen et al., and oracle.]
Figure 1: Average estimation error (over 50 replications) in nuclear norm for fixed $m = 50$ and certain choices of $n$ and $r$. In the legend, "LS" is used as a shortcut for "least squares". Chen et al. refers to (16). "#" indicates an oracular choice of the tuning parameter. "oracle" refers to the ideal error $\sigma r \sqrt{m/n}$. Best seen in color.
Regularization-based approaches. We compare $\widehat{\Theta}$ to the corresponding nuclear norm regularized estimator in (11). Regarding the choice of the regularization parameter $\lambda$, we consider the grid $\lambda/\lambda_* \in \{0.01, 0.05, 0.1, 0.3, 0.5, 1, 2, 4, 8, 16\}$, where $\lambda_* = \sigma\sqrt{m/n}$ as recommended in [17], and pick $\lambda$ so that the prediction error on a separate validation data set of size $n$ generated from (15) is minimized. Note that in general, neither $\sigma$ is known nor an extra validation data set is available. Our goal here is to ensure that the regularization parameter is properly tuned. In addition, we consider an oracular choice of $\lambda$ where $\lambda$ is picked from the above grid such that the performance measure of interest (the distance to the target in the nuclear norm) is minimized. We also compare to the
constrained nuclear norm minimization approach of [8]:
$$\min_{\Theta} \mathrm{tr}(\Theta) \quad \text{subject to} \quad \Theta \succeq 0, \;\text{and}\; \|y - \mathcal{X}(\Theta)\|_1 \le \lambda'. \qquad (16)$$
For $\lambda'$, we consider the grid $\lambda'/(n\sigma\sqrt{2/\pi}) \in \{0.2, 0.3, \dots, 1, 1.25\}$. This specific choice is motivated by the fact that $\mathbb{E}[\|y - \mathcal{X}(\Theta^*)\|_1] = \mathbb{E}[\|\varepsilon\|_1] = n\sigma\sqrt{2/\pi}$. Apart from that, tuning of $\lambda'$ is performed as for the nuclear norm regularized estimator. In addition, we have assessed the performance of the approach in [3], which does not impose an spd constraint but adds another constraint to (16). That additional constraint significantly complicates optimization and yields a second tuning parameter. Thus, instead of doing a 2D-grid search, we use fixed values given in [3] for known $\sigma$. The results are similar or worse than those of (16) (note in particular that positive semidefiniteness is not taken advantage of in [3]) and are hence not reported.
Discussion of the results. We conclude from Figure 1 that in most cases, the performance of
the constrained least squares estimator does not differ much from that of the regularization-based
methods with careful tuning. For larger values of r, the constrained least squares estimator seems to
require slightly more measurements to achieve competitive performance.
Real Data Examples. We now present an application to recovery of spiked covariance matrices, which are of the form $\Theta^* = \sum_{j=1}^r \lambda_j u_j u_j^\top + \sigma^2 I$, where $r \ll m$ and $\lambda_j \gg \sigma^2 > 0$, $j = 1, \dots, r$. This model appears frequently in connection with principal components analysis (PCA).
Extension to the spiked case. So far, we have assumed that $\Theta^*$ is of low rank, but it is straightforward to extend the proposed approach to the case in which $\Theta^*$ is spiked as long as $\sigma^2$ is known or an estimate is available. A constrained least squares estimator of $\Theta^*$ takes the form $\widehat{\Theta} + \sigma^2 I$, where
$$\widehat{\Theta} \in \operatorname*{argmin}_{\Theta \in \mathbb{S}^m_+} \frac{1}{2n}\|y - \mathcal{X}(\Theta + \sigma^2 I)\|_2^2. \qquad (17)$$
The case of $\sigma^2$ unknown or general (unknown) diagonal perturbation is left for future research.
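Assuming the projected-gradient sketch `constrained_ls` from Section 1, (17) reduces to the plain constrained problem after shifting the observations, since $\mathrm{tr}(X_i(\Theta + \sigma^2 I)) = \mathrm{tr}(X_i \Theta) + \sigma^2 \mathrm{tr}(X_i)$:

```python
import numpy as np

def spiked_constrained_ls(Xs, y, sigma2, **kwargs):
    # Shift out the known sigma^2 * tr(X_i) part, solve (2), and add back sigma^2 I.
    y_shift = y - sigma2 * np.trace(Xs, axis1=1, axis2=2)
    Theta_hat = constrained_ls(Xs, y_shift, **kwargs)   # sketch from Section 1
    return Theta_hat + sigma2 * np.eye(Xs.shape[1])
```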
[Figure 2 appears here: two panels (CBCL, NASDAQ) plotting $\log_{10}\|\widehat{\Sigma} - \Sigma^*\|_F$ against $n/(m \cdot r)$ for $\alpha \in \{1/N, 0.008, 0.08, 0.4, 1\}$, with the "oracle" error shown for reference.]
Figure 2: Average reconstruction errors $\log_{10}\|\widehat{\Sigma} - \Sigma^*\|_F$ in dependence of $n/(mr)$ and the parameter $\alpha$. "oracle" refers to the best rank-$r$ approximation $\Sigma_r$.
Data sets. (i) The CBCL facial image data set [1] consists of $N = 2429$ images of $19 \times 19$ pixels (i.e., $m = 361$). We take $\Sigma^*$ as the sample covariance matrix of this data set. It turns out that $\Sigma^*$ can be well approximated by $\Sigma_r$, $r = 50$, where $\Sigma_r$ is the best rank-$r$ approximation to $\Sigma^*$ obtained from computing its eigendecomposition and setting to zero all but the top $r$ eigenvalues. (ii) We construct a second data set from the daily end prices of $m = 252$ stocks from the technology sector in NASDAQ, starting from the beginning of the year 2000 to the end of the year 2014 (in total $N = 3773$ days, retrieved from finance.yahoo.com). We take $\Sigma^*$ as the resulting sample correlation matrix and choose $r = 100$.
Experimental setup. As in preceding measurements, we consider $n$ random Wishart measurements for the operator $\mathcal{X}$, where $n = C(mr)$, with $C$ ranging from 0.25 to 12. Since $\|\Sigma_r - \Sigma^*\|_F/\|\Sigma^*\|_F \approx 10^{-3}$ for both data sets, we work with $\sigma^2 = 0$ in (17) for simplicity. To make recovery of $\Sigma^*$ more difficult, we make the problem noisy by using observations
$$y_i = \mathrm{tr}(X_i S_i), \quad i = 1, \dots, n, \qquad (18)$$
where $S_i$ is an approximation to $\Sigma^*$ obtained from the sample covariance respectively sample correlation matrix of $\alpha N$ data points randomly sampled with replacement from the entire data set, $i = 1, \dots, n$, where $\alpha$ ranges from 0.4 to $1/N$ ($S_i$ is computed from a single data point). For each choice of $n$ and $\alpha$, the reported results are averages over 20 replications.
Results. For the CBCL data set, as shown in Figure 2, $\widehat{\Sigma}$ accurately approximates $\Sigma^*$ once the number of measurements crosses $2mr$. Performance degrades once additional noise is introduced to the problem by using measurements (18). Even under significant perturbations ($\alpha = 0.08$), reasonable reconstruction of $\Sigma^*$ remains possible, albeit the number of required measurements increases accordingly. In the extreme case $\alpha = 1/N$, the error is still decreasing with $n$, but millions of samples seem to be required to achieve reasonable reconstruction error. The general picture is similar for the NASDAQ data set, but the differences between using measurements based on the full sample correlation matrix on the one hand and approximations based on random subsampling (18) on the other hand are more pronounced.
4 Conclusion
We have investigated trace regression in the situation that the underlying matrix is symmetric positive semidefinite. Under restrictions on the design, constrained least squares enjoys similar statistical
properties as methods employing nuclear norm regularization. This may come as a surprise, as regularization is widely regarded as necessary in small sample settings.
Acknowledgments
The work of Martin Slawski and Ping Li is partially supported by NSF-DMS-1444124, NSF-III-1360971, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137.
References
[1] CBCL face dataset. http://cbcl.mit.edu/software-datasets/FaceData2.html.
[2] D. Amelunxen, M. Lotz, M. McCoy, and J. Tropp. Living on the edge: phase transitions in convex programs with random data. Information and Inference, 3:224–294, 2014.
[3] T. Cai and A. Zhang. ROP: Matrix recovery via rank-one projections. The Annals of Statistics, 43:102–138, 2015.
[4] E. Candes and X. Li. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. Foundations of Computational Mathematics, 14:1017–1026, 2014.
[5] E. Candes and Y. Plan. Tight oracle bounds for low-rank matrix recovery from a minimal number of noisy measurements. IEEE Transactions on Information Theory, 57:2342–2359, 2011.
[6] E. Candes and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9:2053–2080, 2009.
[7] E. Candes, T. Strohmer, and V. Voroninski. PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66:1241–1274, 2012.
[8] Y. Chen, Y. Chi, and A. Goldsmith. Exact and stable covariance estimation from quadratic sampling via convex programming. IEEE Transactions on Information Theory, 61:4034–4059, 2015.
[9] K. Davidson and S. Szarek. Handbook of the Geometry of Banach Spaces, volume 1, chapter Local operator theory, random matrices and Banach spaces, pages 317–366. 2001.
[10] L. Demanet and P. Hand. Stable optimizationless recovery from phaseless measurements. Journal of Fourier Analysis and its Applications, 20:199–221, 2014.
[11] D. Gross, Y.-K. Liu, S. Flammia, S. Becker, and J. Eisert. Quantum state tomography via compressed sensing. Physical Review Letters, 105:150401–150404, 2010.
[12] R. Horn and C. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[13] M. Kabanava, R. Kueng, H. Rauhut, and U. Terstiege. Stable low rank matrix recovery via null space properties. arXiv:1507.07184, 2015.
[14] M. Klibanov, P. Sacks, and A. Tikhonravov. The phase retrieval problem. Inverse Problems, 11:1–28, 1995.
[15] V. Koltchinskii, K. Lounici, and A. Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 39:2302–2329, 2011.
[16] N. Meinshausen. Sign-constrained least squares estimation for high-dimensional regression. The Electronic Journal of Statistics, 7:1607–1631, 2013.
[17] S. Negahban and M. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39:1069–1097, 2011.
[18] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52:471–501, 2010.
[19] A. Rohde and A. Tsybakov. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39:887–930, 2011.
[20] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, Massachusetts, 2002.
[21] M. Slawski and M. Hein. Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization. The Electronic Journal of Statistics, 7:3004–3056, 2013.
[22] M. Slawski, P. Li, and M. Hein. Regularization-free estimation in trace regression with positive semidefinite matrices. arXiv:1504.06305, 2015.
[23] N. Srebro, J. Rennie, and T. Jaakkola. Maximum margin matrix factorization. In Advances in Neural Information Processing Systems 17, pages 1329–1336, 2005.
[24] R. Tibshirani. Regression shrinkage and variable selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:671–686, 1996.
[25] J. Tropp. User-friendly tools for random matrices: An introduction. 2014. http://users.cms.caltech.edu/~jtropp/.
[26] R. Vershynin. How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability, 153:405–419, 2012.
[27] M. Wang, W. Xu, and A. Tang. A unique "nonnegative" solution to an underdetermined system: from vectors to matrices. IEEE Transactions on Signal Processing, 59:1007–1016, 2011.
[28] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 14, pages 682–688, 2001.
Convergence Analysis of Prediction Markets via
Randomized Subspace Descent
Rafael Frongillo
Department of Computer Science
University of Colorado, Boulder
raf@colorado.edu
Mark D. Reid
Research School of Computer Science
The Australian National University & NICTA
mark.reid@anu.edu.au
Abstract
Prediction markets are economic mechanisms for aggregating information about
future events through sequential interactions with traders. The pricing mechanisms in these markets are known to be related to optimization algorithms in machine learning and through these connections we have some understanding of how
equilibrium market prices relate to the beliefs of the traders in a market. However,
little is known about rates and guarantees for the convergence of these sequential
mechanisms, and two recent papers cite this as an important open question.
In this paper we show how some previously studied prediction market trading
models can be understood as a natural generalization of randomized coordinate
descent which we call randomized subspace descent (RSD). We establish convergence rates for RSD and leverage them to prove rates for the two prediction
market models above, answering the open questions. Our results extend beyond
standard centralized markets to arbitrary trade networks.
1 Introduction
In recent years, there has been an increasing appreciation of the shared mathematical foundations
between prediction markets and a variety of techniques in machine learning. Prediction markets
consist of agents who trade securities that pay out depending on the outcome of some uncertain,
future event. As trading takes place, the prices of these securities reflect an aggregation of the
beliefs the traders have about the future event. A popular class of mechanisms for updating these
prices as trading occurs has been shown to be closely related to techniques from online learning [7,
1, 21], convex optimization [10, 19, 13], probabilistic aggregation [24, 14], and crowdsourcing [3].
Building these connections serve several purposes, however one important line of research has been
to use insights from machine learning to better understand how to interpret prices in a prediction
market as aggregations of trader beliefs, and moreover, how the market together with the traders can
be viewed as something akin to a distributed machine learning algorithm [24].
The analysis in this paper was motivated in part by two pieces of work that considered the equilibria of prediction markets with specific models of trader behavior: traders as risk minimizers [13];
and traders who maximize expected exponential utility using beliefs from exponential families [2].
In both cases, the focus was on understanding the properties of the market at convergence, and
questions concerning whether and how convergence happened were left as future work. In [2], the
authors note that "we have not considered the dynamics by which such an equilibrium would be reached, nor the rate of convergence etc., yet we think such questions provide fruitful directions for future research." In [13], "One area of future work would be conducting a detailed analysis of this framework using the tools of convex optimisation. A particularly interesting topic is to find the conditions under which the market will converge."
The main contribution of this paper is to answer these questions of convergence. We do so by first proposing a new and very general model of trading networks and dynamics (§3) that subsumes the models used in [2] and [13], and provide a key structural result for what we call efficient trades in these networks (Theorem 2). As an aside, this structural result provides an immediate generalization of an existing aggregation result in [2] to trade networks of "compatible" agents (Theorem 8). In §4, we argue that efficient trades in our networks model can be viewed as steps of what we call the Randomized Subspace Descent (RSD) algorithm (Algorithm 1). This novel generalization of coordinate descent allows an objective to be minimized by taking steps along affinely constrained subspaces, and may be of independent interest beyond prediction market analysis. We provide a convergence analysis of RSD under two sets of regularity constraints (Theorems 3 & 9) and show how these can be used to derive (slow & fast) convergence rates in trade networks (Theorems 4 & 5).
Before introducing our general trading networks and convergence rate results, we first introduce the
now standard presentation of potential-based prediction markets [1] and the recent variant in which
all agents determine their trades using risk measures [13]. We will then state informal versions of
our main results so as to highlight how we address issues of convergence in existing frameworks.
2 Background and Informal Results
Prediction markets are mechanisms for eliciting and aggregating distributed information or beliefs about uncertain future events. The set of events or outcomes under consideration in the market will be denoted $\Omega$ and may be finite or infinite. For example, each outcome $\omega \in \Omega$ might represent a certain presidential candidate winning an election, the location of a missing submarine, or an unknown label for an item in a data set. Following [1], the goods that are traded in a prediction market are $k$ outcome-dependent securities $\{\phi(\omega)_i\}_{i=1}^k$ that pay $\phi(\omega)_i$ dollars should the outcome $\omega \in \Omega$ occur. We denote the set of distributions over $\Omega$ by $\Delta_\Omega$ and note, for any $p \in \Delta_\Omega$, that the expected payoff for the securities under $p$ is $\mathbb{E}_{\omega \sim p}[\phi(\omega)]$ and the set of all expected payoffs is just the convex hull, denoted $\Pi := \mathrm{conv}(\phi(\Omega))$. A simple and commonly studied case is when $\Omega = [k] := \{1, \dots, k\}$ (i.e., when there are exactly $k$ outcomes) and the securities are the Arrow-Debreu securities that pay out \$1 should a specific outcome occur and nothing otherwise (i.e., $\phi(\omega)_i = 1$ if $\omega = i$ and $\phi(\omega)_i = 0$ for $\omega \ne i$). Here, the securities are just basis vectors for $\mathbb{R}^k$ and $\Pi = \Delta_\Omega$.
Traders in a prediction market hold portfolios of securities $r \in \mathbb{R}^k$ called positions that pay out a total of $r \cdot \phi(\omega) = \sum_{i=1}^k r_i \phi(\omega)_i$ dollars should outcome $\omega$ occur. We denote the set of positions by $\mathcal{R} = \mathbb{R}^k$. We will assume that $\mathcal{R}$ always contains a position $r_\$$ that returns a dollar regardless of which outcome occurs, meaning $r_\$ \cdot \phi(\omega) = 1$ for all $\omega \in \Omega$. We therefore interpret $r_\$$ as "cash" within the market in the sense that buying or selling $r_\$$ guarantees a fixed change in wealth.
In order to address the questions about convergence in [2, 13] we will consider a common form of prediction market that is run through a market maker. This is an automated agent that is willing to buy or sell securities in return for cash. The specific and well-studied prediction market mechanism we consider is the potential-based market maker [1]. Here, traders interact with the market maker sequentially, and the cost for each trade is determined by a convex potential function $C : \mathcal{R} \to \mathbb{R}$ applied to the market maker's state $s \in \mathcal{R}$. Specifically, the cost for a trade $dr$ when the market maker has state $s$ is given by $\mathrm{cost}(dr; s) = C(s - dr) - C(s)$, i.e., the change in potential value of the market maker's position due to the market maker accepting the trade. After a trade, the market maker updates the state to $s \leftarrow s - dr$.¹ As noted in the next section, the usual axiomatic requirements for a cost function (e.g., in [1]) specify a function that is effectively a risk measure, commonly studied in mathematical finance (see, e.g., [9]).
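To make the mechanism concrete, here is a minimal sketch of a potential-based market maker with the entropic-risk potential of Example 1 below, specialized to Arrow-Debreu securities (so $r \cdot \phi(\omega) = r_\omega$); the uniform default for $q$ and the parameter names are our own.

```python
import numpy as np

def entropic_potential(s, beta=1.0, q=None):
    """C(s) = beta * log E_q[exp(-s_w / beta)], applied to the maker's
    position s (one coordinate per Arrow-Debreu security)."""
    s = np.asarray(s, float)
    q = np.full(len(s), 1.0 / len(s)) if q is None else np.asarray(q, float)
    z = -s / beta
    zmax = z.max()
    return beta * (zmax + np.log(np.sum(q * np.exp(z - zmax))))  # stable log-sum-exp

def trade(s, dr, beta=1.0, q=None):
    """Cost of trade dr at state s, plus the updated state s <- s - dr."""
    cost = entropic_potential(s - dr, beta, q) - entropic_potential(s, beta, q)
    return cost, s - dr

s = np.zeros(3)
cost, s = trade(s, np.array([1.0, 0.0, 0.0]))   # buy one share of outcome 1
```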
2.1 Risk Measures
As in [13], agents in our framework will each quantify their uncertainty in positions using what is known as a risk measure. This is a function that assigns dollar values to positions. As Example 1 below shows, this assumption will also cover the case of agents maximizing exponential utility, as considered in [2].
¹It is more common in the prediction market literature for $s$ to be a liability vector, tracking what the market maker stands to lose instead of gain. Here we adopt positive positions to match the convention for risk measures.
A (convex monetary) risk measure is a function $\rho : \mathcal{R} \to \mathbb{R}$ satisfying, for all $r, r' \in \mathcal{R}$:
• Monotonicity: $\forall \omega\;\; r \cdot \phi(\omega) \le r' \cdot \phi(\omega) \implies \rho(r) \ge \rho(r')$.
• Cash invariance: $\rho(r + c\, r_\$) = \rho(r) - c$ for all $c \in \mathbb{R}$.
• Convexity: $\rho(\lambda r + (1-\lambda) r') \le \lambda \rho(r) + (1-\lambda)\rho(r')$ for all $\lambda \in (0, 1)$.
• Normalization: $\rho(0) = 0$.
The reasonableness of these properties is usually argued as follows (see, e.g., [9]). Monotonicity
ensures that positions that result in strictly smaller payoffs regardless of the outcome are considered
more risky. Cash invariance captures the idea that if a guaranteed payment of $c is added to the
payment on each outcome then the risk will decrease by $c. Convexity states that merging positions
results in lower risk. Finally, normalization requires that holding no securities should carry no risk.
This last condition is only for convenience since any risk without this condition can trivially have its
argument translated so it holds without affecting the other three properties. A key result concerning
convex risk measures is the following representation theorem (cf. [9, Theorem 4.15]).
Theorem 1 (Risk Representation). A functional $\rho : \mathcal{R} \to \mathbb{R}$ is a convex risk measure if and only if there is a closed convex function $\alpha : \Pi \to \mathbb{R} \cup \{\infty\}$ such that $\rho(r) = \sup_{\pi \in \mathrm{relint}(\Pi)} \langle \pi, -r \rangle - \alpha(\pi)$.
Here $\mathrm{relint}(\Pi)$ denotes the relative interior of $\Pi$, the interior relative to the affine hull of $\Pi$. Notice that if $f^*$ denotes the convex conjugate $f^*(y) := \sup_x \langle y, x \rangle - f(x)$, then this theorem states that $\rho(r) = \alpha^*(-r)$, that is, $\rho$ and $\alpha$ are "dual" in the same way prices and positions are dual [5, §5.4.4]. This suggests that the function $\alpha$ can be interpreted as a penalty function, assigning a measure of "unlikeliness" $\alpha(\pi)$ to each expected value $\pi$ of the securities defined above. Equivalently, $\alpha(\mathbb{E}_p[\phi])$ measures the unlikeliness of distribution $p$ over the outcomes. We can then see that the risk is the greatest expected loss under each distribution, taking into account the penalties assigned by $\alpha$.
Example 1. A well-studied risk measure is the entropic risk relative to a reference distribution
q ∈ Δ_Ω [9]. This is defined on positions r ∈ R by ρ_β(r) := β log E_{ω∼q}[exp(−r · φ(ω)/β)]. The
cost function C(r) = ρ_β(−r) associated with this risk exactly corresponds to the logarithmic market
scoring rule (LMSR). Its associated convex function α_β over distributions is the scaled relative
entropy α_β(p) = β KL(p ∥ q). As discussed in [2, 13], the entropic risk is closely related to exponential
utility U_β(w) := −exp(−w/β). Indeed, ρ_β(r) = −U_β^{−1}(E_{ω∼q}[U_β(r · φ(ω))]), which is just
the negative certainty equivalent of the position r, i.e., the amount of cash an agent with utility
U_β and belief q would be willing to trade for the uncertain position r. Due to the monotonicity of
U_β^{−1}, it follows that a trader maximizing expected utility E_{ω∼q}[U_β(r · φ(ω))] of holding position r
is equivalent to minimizing the entropic risk ρ_β(r).
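As a quick sanity check of these definitions, the sketch below evaluates the entropic risk and verifies cash invariance and normalization numerically; the payoff matrix and all numbers are arbitrary test values chosen for this illustration.

import numpy as np

def entropic_risk(r, q, phi, beta=1.0):
    # rho_beta(r) = beta * log E_{omega ~ q}[exp(-r . phi(omega) / beta)]
    payouts = phi @ r              # r . phi(omega), one entry per outcome
    return beta * np.log(np.dot(q, np.exp(-payouts / beta)))

k = 3
phi = np.eye(k)                    # Arrow-Debreu securities
q = np.array([0.2, 0.3, 0.5])
r_cash = np.ones(k)                # r_$ pays one dollar in every outcome

r = np.array([0.7, -1.2, 0.4])
c = 2.5
lhs = entropic_risk(r + c * r_cash, q, phi)                  # rho(r + c r_$)
rhs = entropic_risk(r, q, phi) - c                           # rho(r) - c
print(np.isclose(lhs, rhs))                                  # True: cash invariance
print(np.isclose(entropic_risk(np.zeros(k), q, phi), 0.0))   # True: normalization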
For technical reasons, in addition to the standard assumptions for convex risk measures, we will also
make two weak regularity assumptions. These are similar to properties required of cost functions in
the prediction market literature (cf. [1, Theorem 3.2]):
• Expressiveness: ρ is everywhere differentiable, and closure{−∇ρ(r) : r ∈ R} = Π.
• Strict risk aversion: the Convexity inequality is strict unless r − r′ = c r_$ for some c ∈ ℝ.
As discussed in [1], expressiveness is related to the dual formulation given above; roughly, it says
that the agent must take into account every possible expected value of the securities when calculating
the risk. Strict risk aversion says that an agent should strictly prefer a mixture of positions, unless
of course the difference is outcome-independent.
Under these assumptions, the representation result of Theorem 1 and a similar result for cost
functions [1, Theorem 3.2] coincide, and we are able to show that cost functions and risk measures
are exactly the same object; we write ρ_C(r) = C(r) when we think of C as a risk measure. Unfolding
the definition of cost and using cash invariance, we have ρ_C(s − dr + cost(dr; s) r_$) =
ρ_C(s − dr) − cost(dr; s) = C(s − dr) − C(s − dr) + C(s) = ρ_C(s). Thus, we may view a
potential-based market maker as a constant-risk agent.
2.2
Trading Dynamics and Aggregation
As described above, we consider traders who approach the market maker sequentially and at random,
and select the optimal trade based on their current position, the market state, and the cost function C.
3
As we just observed, we may think of the market maker as a constant-risk agent with ρ_C = C. Let
us examine the optimization problem faced by the trader with position r when the current market
state is s. This trader will choose a portfolio dr* from the market maker so as to minimise her risk:

dr* ∈ arg min_{dr∈ℝ^k} ρ(r + dr − cost(dr; s) r_$) = arg min_{dr∈ℝ^k} ρ(r + dr) + ρ_C(s − dr).   (1)

Indeed, by the cash invariance of ρ and the definition of cost, the first objective equals ρ(r + dr) +
ρ_C(s − dr) − ρ_C(s), and ρ_C(s) does not depend on dr. Thus, if we think of F(r, s) = ρ(r) + ρ_C(s) as
a kind of "social risk", we can define the surplus as simply the net risk taken away by an optimal
trade, namely F(r, s) − F(r + dr*, s − dr*).
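The optimization in eq. (1) is easy to explore numerically. The sketch below computes a trader's optimal trade and the resulting surplus for entropic risks with made-up beliefs (a purely illustrative setup, not the paper's experiments); note the minimizer is only unique up to adding multiples of the cash position r_$.

import numpy as np
from scipy.optimize import minimize

def entropic_risk(x, q, beta=1.0):
    # entropic risk for Arrow-Debreu securities, phi(omega) = e_omega
    return beta * np.log(np.dot(q, np.exp(-x / beta)))

rho   = lambda x: entropic_risk(x, np.array([0.6, 0.2, 0.2]))   # trader belief
rho_C = lambda x: entropic_risk(x, np.array([1/3, 1/3, 1/3]))   # maker reference

r = np.zeros(3)          # trader's current position
s = np.zeros(3)          # market maker's state
obj = lambda dr: rho(r + dr) + rho_C(s - dr)      # the objective in eq. (1)
res = minimize(obj, np.zeros(3), method="BFGS")
surplus = (rho(r) + rho_C(s)) - res.fun           # net risk removed by the trade
print(res.x, surplus)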
We can now state our central question: if a set of N such traders arrive at random and execute
optimal (or perhaps near-optimal) trades with the market maker, will the market state converge to
the optimal risk, and if so how fast? As discussed in the introduction, this is precisely the question
asked in [2, 13] that we set out to answer. To do so we will draw a close connection to the literature
on distributed optimization algorithms for machine learning. Specifically, if we encode the entire
state of our system in the positions R = (r_0 = s, r_1, …, r_n) of the market maker and each of the
n traders, we may view the optimal trade in eq. (1) as performing a coordinate descent step, by
optimizing only with respect to coordinates 0 and i. We build on this connection in Section 4 and
leverage a generalization of coordinate descent methods to show the following in Theorem 4: If a
set of risk-based traders is sampled at random to sequentially trade in the market, the market state
and prices converge to within ε of the optimal total risk in O(1/ε) rounds.
In fact, under mild smoothness assumptions on the cost potential function C, we can improve this
rate to O(log(1/ε)). We can also relax the optimality of the trader behavior; as long as traders find
a trade dr which extracts at least a constant fraction of the surplus, the rate remains intact.
With convergence rates in hand, the next natural question might be: to what does the market converge?
Abernethy et al. [2] show that when traders minimize expected exponential utility and have
exponential family beliefs, the market equilibrium price can be thought of as a weighted average of
the parameters of the traders, with the weights being a measure of their risk tolerance. Even though
our setting is far more general than exponential utility and exponential families, the framework we
develop can also be used to show that their results can be extended to interactions between traders
who have what we call "compatible" risks and beliefs. Specifically, for any risk-based trader possessing
a risk ρ with dual α, we can think of that trader's "belief" as the least surprising distribution
p according to α. This view induces a family of distributions (which happen to be generalized
exponential families [11]) that are parameterized by the initial positions of the traders. Furthermore,
the risk tolerance b is given by how sensitive this belief is to small changes of an agent's position.
The results of [2] are then a special case of our Theorem 8 for agents with ρ being entropic risk (cf.
Example 1): If each trader i has risk tolerance b_i and a belief parameterized by θ_i, and the initial
market state is θ_0, then the equilibrium state of the market, to which the market converges, is given
by

θ* = (θ_0 + Σ_i b_i θ_i) / (1 + Σ_i b_i).
As the focus of this paper is on the convergence, the details for this result are given in Appendix C.
The main insight that drives the above analysis of the interaction between a risk-based trader and a
market maker is that each trade minimizes a global objective for the market that is the infimal convolution
[6] of the traders' and market maker's risks. In fact, this observation naturally generalizes to
trades between three or more agents, and the same convergence analysis applies. In other words, our
analysis also holds when bilateral trade with a fixed market maker is replaced by multilateral trade
among arbitrarily overlapping subsets of agents. Viewed as a graph with agents as nodes, the standard
prediction market framework is represented by the star graph, where the central market maker
interacts with traders sequentially and individually. More generally we have what we call a trading
network, in which the structure of trades can form arbitrary connected graphs or even hypergraphs.
An obvious choice is the complete graph, which can model a decentralized market, and in fact we
can even compare the convergence rate of our dynamics between the centralized and decentralized
models; see Appendix D.2 and the discussion in § 5.
4
3
General Trading Dynamics
The previous section described the two-agent case of what is more generally known as the optimal
risk allocation problem [6], where two or more agents express their preferences for positions via
risk measures. This is formalized by considering N agents with risk measures ρ_i : R → ℝ for
i ∈ [N] := {1, …, N} who are asked to split a position r ∈ R into per-agent positions r_i ∈ R
satisfying Σ_i r_i = r so as to minimise the total risk Σ_i ρ_i(r_i). They note that the value of the total
risk is given by the infimal convolution □_i ρ_i of the individual agent risks, that is,

(□_i ρ_i)(r) := inf { Σ_i ρ_i(r_i) : Σ_i r_i = r, r_i ∈ R }.   (2)
A key property of the infimal convolution, which will underlie much of our analysis, is that its convex
conjugate is the sum of the conjugates of its constituent functions. See e.g. [23] for a proof:

(□_i ρ_i)* = Σ_{i∈[N]} ρ_i*.   (3)
One can think of □_i ρ_i as the "market risk", which captures the risk of the entire market (i.e., as
if it were a single risk-based agent) as a function of the net position Σ_i r_i of its constituents. By
definition, eq. (2) says that the market is trying to reallocate the risk so as to minimize this net risk.
This interpretation is confirmed by eq. (3) when we interpret the duals as penalty functions as above:
the penalty of the market risk is the sum of the penalties of the market participants.
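Eq. (3) can be checked numerically in the entropic case: since the dual of the entropic risk with tolerance b is b · KL(p ∥ q), the duals add, so inf-convolving two entropic risks with a common belief q should give an entropic risk whose tolerances add. A small sketch under these assumptions, with all numbers illustrative:

import numpy as np
from scipy.optimize import minimize

def entropic_risk(x, q, beta):
    return beta * np.log(np.dot(q, np.exp(-x / beta)))

q = np.array([0.2, 0.3, 0.5])
b1, b2 = 1.0, 2.0
r = np.array([1.0, -0.5, 0.3])

# (rho_1 box rho_2)(r) = inf over r1 of rho_1(r1) + rho_2(r - r1)
infconv = minimize(lambda r1: entropic_risk(r1, q, b1)
                              + entropic_risk(r - r1, q, b2),
                   r / 2, method="BFGS").fun
print(infconv, entropic_risk(r, q, b1 + b2))   # the two values agree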
As alluded to above, we allow our agents to interact round by round by conducting trades, which are
simply the exchange of outcome-contingent securities. Since by assumption our position space R is
closed under linear combinations, a trade between two agents is simply a position which is added to
one agent and subtracted from another. Generalizing from this two-agent interaction, a trade among
a set of agents S ⊆ [N] is just a collection of trade vectors, one for each agent, which sum to 0.
Formally, let S ⊆ [N] be a subset of agents. A trade on S is then a vector of positions dr ∈ R^N
(i.e., a matrix in ℝ^{N×k}) such that Σ_{i∈S} dr_i = 0 ∈ R and dr_i = 0 for all i ∉ S. This last condition
specifies that agents not in S do not change their position.
A key quantity in our analysis is a measure of how much the total risk of a collection of traders drops
due to trading. Given some subset of traders S, the S-surplus is a function δ_S : R^N → ℝ defined
by δ_S(r) = Σ_{i∈S} ρ_i(r_i) − (□_{i∈S} ρ_i)(Σ_{i∈S} r_i), which measures the maximum achievable drop in risk
(since □_{i∈S} ρ_i is an infimum). In particular, δ(r) := δ_{[N]}(r) is the surplus function. The trades that
achieve this optimal drop in risk are called efficient: given current state r ∈ R^N, a trade dr ∈ R^N
on S ⊆ [N] is efficient if δ_S(r + dr) = 0.
The following key result shows that efficient trades have a remarkable structure: once the state r and
the subset S are specified, there is a unique efficient trade, up to cash transfers. In other words, the
surplus is removed from the position vectors and then redistributed as cash to the traders; the choice
of trade is merely in how this redistribution takes place. The fact that the derivatives match has a strong
intuition from prediction markets: agents must agree on the price.2 The proof is in Appendix A.1.
Theorem 2. Let r ∈ R^N and S ⊆ [N] be given.
i. The surplus is always finite: 0 ≤ δ_S(r) < ∞.
ii. The set of efficient trades on S is nonempty.
iii. Efficient trades are unique up to zero-sum cash transfers: given efficient trades dr, dr′ ∈ R^N
on S, we have dr = dr′ + (z_1 r_$, …, z_N r_$) for some z ∈ ℝ^N with Σ_i z_i = 0.
iv. Traders agree on "prices": a trade dr on S is efficient if and only if for all i, j ∈ S,
∇ρ_i(r_i + dr_i) = ∇ρ_j(r_j + dr_j).
v. There is a unique "efficient price": if dr is an efficient trade on S, for all i ∈ S we have
∇ρ_i(r_i + dr_i) = −π*_S, where π*_S = arg min_{π∈Π} Σ_{i∈S} α_i(π) − ⟨π, Σ_{i∈S} r_i⟩.
2
As intuition for the term "price", consider that the highest price-per-unit agent i would be willing to pay
for an infinitesimal quantity of a position dr_i is dr_i · (−∇ρ_i(r_i)), and likewise the lowest price-per-unit to sell.
Thus, the entries of −∇ρ_i(r_i) act as the "fair" prices for their corresponding basis positions/securities.
5
The above properties of efficient trades drive the remainder of our convergence analysis of network
dynamics. It also allows us to write a simple closed form for the market price when traders share
a common risk profile (Theorem 8). Details are in Appendix C. Beyond our current focus on rates,
Theorem 2 has implications for a variety of other economic properties of trade networks. For example, in Appendix B we show that efficient trades correspond to fixed points for more general
dynamics, market clearing equilibria, and equilibria of natural bargaining games among the traders.
Recall that in the prediction market framework of [13], each round has a single trader, say i > 1,
interacting with the market maker, who we will assume has index 1. In the notation just defined this
corresponds to choosing S = {1, i}. We now wish to consider richer dynamics where groups of two
or more agents trade efficiently each round. To this end we will call a collection S = {S_j ⊆ [N]}_{j=1}^m
of groups of traders a trading network and assume there is some fixed distribution D over S with
full support. A trade dynamic over S is a process that begins at t = 0 with some initial positions
r^0 ∈ R^N for the N traders, and at each round t, draws a random group of traders S^t ∈ S according
to D, selects some efficient trade dr^t on S^t, then updates the trader positions using r^{t+1} = r^t + dr^t.
For the purposes of proving the convergence of trade dynamics, a crucial property is whether all
traders can directly or indirectly affect the others. To capture this we will say a trade network
is connected if the hypergraph on [N] with edges given by S is connected; i.e., information can
propagate throughout the entire network. Dynamics over classical prediction markets are always
connected, since any pair of groups from such a network will always contain the market maker.
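Connectivity of a trade network is a plain hypergraph-connectivity check. A minimal sketch using union-find; the example networks below are hypothetical:

def is_connected(num_agents, groups):
    """Check that the hypergraph on [N] with edges `groups` is connected."""
    parent = list(range(num_agents))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for g in groups:
        g = list(g)
        for other in g[1:]:
            union(g[0], other)
    return len({find(a) for a in range(num_agents)}) == 1

# star network: market maker 0 trades bilaterally with everyone
print(is_connected(5, [(0, i) for i in range(1, 5)]))   # True
print(is_connected(5, [(1, 2), (3, 4)]))                # False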
4
Convergence Analysis of Randomized Subspace Descent
Before briefly reviewing the literature on coordinate descent, let us see why this might be a useful
way to think of our dynamics. Recall that we have a set S of subsets of agents, and that in each step,
an efficient trade dr is chosen which only modifies the positions of agents in the sampled S ∈ S.
Thinking of (r_1, …, r_N) as a vector of dimension N·k (recall R = ℝ^k), changing r^t to
r^{t+1} = r^t + dr thus only modifies |S| blocks of k entries. Moreover, efficiency ensures that dr
minimizes the sum of the risks of agents in S. Hence, ignoring for now the constraint that the sum
of the positions must remain constant, the trade dynamic seems to be performing a kind of block
coordinate descent on the surplus function δ.
4.1
Randomized Subspace Descent
Several randomized coordinate descent methods have appeared in the literature recently, with increasing
levels of sophistication. While earlier methods focused on updates which only modified
disjoint blocks of coordinates [18, 22], more recent methods allow for more general configurations,
such as overlapping blocks [17, 16, 20]. In fact, these last three methods are closest to what we study
here; the authors consider an objective which decomposes as the sum of convex functions on each
coordinate, and study coordinate updates which follow a graph structure, all under the constraint
that coordinates sum to 0. Despite the similarity of these methods to our trade dynamics, we require
even more general updates, as we allow the updated coordinate blocks to correspond to arbitrary subsets S ⊆ [N].
Instead, we establish a unification of these methods which we call randomized subspace descent
(RSD), listed in Algorithm 1. Rather than blocks of coordinates or specific linear constraints, RSD
abstracts away these constructs by simply specifying "coordinate subspaces" in which the optimization
is to be performed. Specifically, the algorithm takes a list of projection matrices {Π_i}_{i=1}^m which
define the subspaces, and at each step t selects a Π_i at random and tries to optimize the objective
under the constraint that it may only move within the image space of Π_i; that is, if the current point
is x^t, then x^{t+1} − x^t ∈ im(Π_i).
Before stating our convergence results for Algorithm 1, we will need a notion of smoothness relative
to our subspaces. Specifically, we say F is L_i-Π_i-smooth if for all i there are constants L_i > 0 such
that for all y ∈ im(Π_i),

F(x + y) ≤ F(x) + ⟨∇F(x), y⟩ + (L_i/2)‖y‖².   (4)

Finally, let F^min := min_{y∈span{im(Π_i)}_i} F(x^0 + y) be the global minimum of F subject to the
constraints from the Π_i. Then we have the following result for a constant R(x^0) which increases in:
6
ALGORITHM 1: Randomized Subspace Descent
Input: smooth convex function F : ℝ^n → ℝ, initial point x^0 ∈ ℝ^n, matrices {Π_i ∈ ℝ^{n×n}}_{i=1}^m,
smoothness parameters {L_i}_{i=1}^m, distribution p ∈ Δ_m
for iteration t in {0, 1, 2, …} do
    sample i from p
    x^{t+1} ← x^t − (1/L_i) Π_i ∇F(x^t)
end
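A direct transcription of Algorithm 1 is short. The sketch below runs RSD on a toy quadratic with two overlapping coordinate blocks; the test objective and the blocks are made up for illustration.

import numpy as np

def randomized_subspace_descent(grad_F, x0, projs, L, p, num_iters, rng=None):
    """projs[i] is the n x n projection matrix Pi_i, L[i] its smoothness
    constant, and p the sampling distribution over subspaces."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(num_iters):
        i = rng.choice(len(projs), p=p)
        x = x - projs[i] @ grad_F(x) / L[i]
    return x

# toy: F(x) = 0.5 ||x||^2, gradient x, two overlapping coordinate blocks
n = 4
P1 = np.diag([1.0, 1.0, 1.0, 0.0])   # subspace spanned by coordinates 0-2
P2 = np.diag([0.0, 1.0, 1.0, 1.0])   # subspace spanned by coordinates 1-3
x = randomized_subspace_descent(lambda x: x, np.ones(n),
                                [P1, P2], L=[1.0, 1.0],
                                p=[0.5, 0.5], num_iters=200)
print(np.linalg.norm(x))             # approaches 0, the global minimum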
(1) the distance from the point x^0 to the furthest minimizer of F, (2) the Lipschitz constants of F w.r.t.
the Π_i, and (3) the connectivity of the hypergraph induced by the projections.
Theorem 3. Let F, {Π_i}_i, {L_i}_i, x^0, and p be given as in Algorithm 1, with the condition that F is
L_i-Π_i-smooth for all i. Then E[F(x^t)] − F^min ≤ 2R²(x^0)/t.
The proof is in Appendix D. Additionally, when F is strongly convex, meaning it has a uniform local
quadratic lower bound, RSD enjoys faster, linear convergence. Formally, this condition requires F
to be σ-strongly convex for some constant σ > 0, that is, for all x, y ∈ dom F we require

F(y) ≥ F(x) + ∇F(x) · (y − x) + (σ/2)‖y − x‖².   (5)

The statement and details of this stronger result are given in Appendix D.1.
Importantly for our setting these results only track the progress per iteration. Thus, they apply to
more sophisticated update steps than a simple gradient step as long as they improve the objective
by at least as much. For example, if in each step the algorithm computed the exact minimizer
xt+1 = arg miny?im(?i ) F (xt + y), both theorems would still hold.
4.2
Convergence Rates for Trade Dynamics
To apply Theorem 3 to the convergence of trading dynamics, we let F = δ and x = (r_1, …, r_N) ∈
R^N ≅ ℝ^{Nk} be the joint position of all agents. For each subset S ∈ S of agents, we have a subspace
of R^N consisting of all possible trades on S, namely {dr ∈ R^N : dr_i = 0 for i ∉ S, Σ_{i∈S} dr_i =
0}, with corresponding projection matrix Π_S. For the special case of prediction markets with a
centralized market maker, we have N − 1 subspaces S = {{1, i} : i ∈ {2, …, N}}, and Π_{1,i}
projects onto {dr ∈ R^N : dr_i = −dr_1, dr_j = 0 for j ≠ 1, i}. The intuition of coordinate descent is
clear now: the subset S of agents seeks to minimize the total surplus within the subspace of trades on
S, and thus the coordinate descent steps of Algorithm 1 will correspond to roughly efficient trades.
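For concreteness, here is one way to build the projection matrix Π_S onto the trade subspace for a group S. The agent-major coordinate layout is a choice made for this sketch, not prescribed by the paper.

import numpy as np

def trade_projection(N, k, S):
    """Projection onto {dr in R^(N*k) : dr_i = 0 for i not in S and
    sum_{i in S} dr_i = 0}, built per security coordinate."""
    P = np.zeros((N * k, N * k))
    S = list(S)
    m = len(S)
    for j in range(k):                            # one security at a time
        idx = [i * k + j for i in S]
        P[np.ix_(idx, idx)] = np.eye(m) - np.ones((m, m)) / m  # zero-mean
    return P

# star market example: Pi_{1,i} keeps only the maker (agent 0) and trader 2,
# forcing their trade vectors to be opposite
P = trade_projection(N=3, k=2, S=[0, 2])
dr = P @ np.arange(6.0)
print(dr.reshape(3, 2))    # rows 0 and 2 are opposite, row 1 is zero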
We now apply Theorem 3 to show that trade dynamics achieve surplus ε > 0 in time O(1/ε). Note
that we will have to assume the risk measure ρ_i of agent i is L_i-smooth for some L_i > 0. This is a
very loose restriction, as our risk measures are all differentiable by the expressiveness condition.
Theorem 4. Let ρ_i be an L_i-smooth risk measure for all i. Then for any connected trade dynamic,
we have E[δ(r^t)] = O(1/t).
Proof. Taking L_S = max_{i∈S} L_i, one can check that F is L_S-Π_S-smooth for all S ∈ S by eq. (4).
Since Algorithm 1 has no state aside from x^t, and the proof of Theorem 3 depends only on the drop
in F per step, any algorithm selecting the sets S ∈ S with the same distribution and satisfying
F(x^{t+1}) ≤ F(x^t − (1/L_i) Π_i ∇F(x^t)) will yield the same convergence rate. As trade dynamics satisfy
F(x^{t+1}) = min_{y∈ℝ^{Nk}} F(x^t − Π_i y), this property trivially holds, and so Theorem 3 applies.
If we assume slightly more, that our risk measures have local quadratic lower bounds, then we can
obtain linear convergence. Note that this is also a relatively weak assumption, and holds whenever
the risk measure has a Hessian with only one zero eigenvalue (for r_$) at each point. This is satisfied,
for example, by all the variants of entropic risk we discuss in the paper. The proof is in Appendix D.
Theorem 5. Suppose for each i we have a continuous function σ_i : R → ℝ₊ such that for all r,
risk ρ_i is σ_i(r)-strongly convex with respect to r_$^⊥ in a neighborhood of r; in other words, eq. (5)
holds for F = ρ_i, σ = σ_i(r), and all y in a neighborhood of r such that (r − y) · r_$ = 0. Then for
all connected trade dynamics, E[δ(r^t)] = O(2^{−t}).
7
Graph      |V(G)|   |E(G)|        λ₂(G)
K_n        n        n(n−1)/2      n
P_n        n        n−1           2(1 − cos(π/n))
C_n        n        n             2(1 − cos(2π/n))
K_{ℓ,k}    ℓ+k      ℓk            k
B_k        2^k      k·2^{k−1}     2
Table 1: Algebraic connectivities for common graphs.
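The entries of Table 1 are easy to reproduce by computing the second-smallest eigenvalue of the graph Laplacian. A quick check in Python:

import numpy as np

def lambda2(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

def complete_graph(n):
    return np.ones((n, n)) - np.eye(n)

def path_graph(n):
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

n = 6
print(lambda2(complete_graph(n)), n)                        # lambda2(K_n) = n
print(lambda2(path_graph(n)), 2 * (1 - np.cos(np.pi / n)))  # lambda2(P_n)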
Figure 1: Average (in bold) of 30 market simulations
for the complete and star graphs. The empirical gap in
iteration complexity is just under 2 (cf. Fig. 3).
Amazingly, the convergence rates in Theorem 4 and Theorem 5 hold for all connected trade dynamics.
The constant hidden in the O(·) does depend on the structure of the network, but it can be
explicitly determined in terms of its algebraic connectivity. This is discussed further in Appendix D.2.
The intuition behind the convergence rates given here is that the agents in whichever group S is chosen
always trade to fully minimize their surplus. Because the proofs (in Appendix D) of these methods
merely track the reduction in surplus per trading round, the bounds apply as long as the update is at
least as good as a gradient step. In fact, we can say even more: if only an ε fraction of the surplus is
taken at each round, the rates are still O(1/(εt)) and O((1 − ε)^t), respectively. This suggests that
our convergence results are robust with respect to the model of rationality one employs; if agents
have bounded rationality and can only compute positions which approximately minimize their risk,
the rates remain intact (up to constant factors) as long as the inefficiency is bounded.
5
Conclusions & Future Work
Using the tools of convex analysis to analyse the behavior of markets allows us to make precise,
quantitative statements about their global behavior. In this paper we have seen that, with appropriate
assumptions on trader behaviour, we can determine the rate at which the market will converge to
equilibrium prices, thereby closing some open questions raised in [2] and [13].
In addition, our newly proposed trading networks model allows us to consider a variety of prediction
market structures. As discussed in §3, the usual prediction market setting is centralized, and corresponds
to a star graph with the market maker at the center. A decentralized market where any trader
can trade with any other corresponds to a complete graph over the traders. We can also model more
exotic networks, such as two or more market maker-based prediction markets with a risk-minimizing
arbitrageur, or small-world networks where agents only trade with a limited number of "neighbours".
Furthermore, because these arrangements are all instances of trade networks, we can immediately
compare the convergence rates across various constraints on how traders may interact. For example,
in Appendix D.2, we show that a market that trades through a centralized market maker incurs a
quantifiable efficiency overhead: convergence takes twice as long (see Figure 1). More generally,
we show that the rates scale as λ₂(G)/|E(G)|, allowing us to make similar comparisons between
arbitrary networks; see Table 1. This raises an interesting question for future work: given some
constraints, such as a bound on how many traders a single agent can trade with, the total number of
edges, etc., which network optimizes the convergence rate of the market? These new models and
the analysis of their convergence may provide new principles for building and analyzing distributed
systems of heterogeneous and self-interested learning agents.
Acknowledgments
We would like to thank Matus Telgarsky for his generous help, as well as the lively discussions with,
and helpful comments of, Sébastien Lahaie, Miro Dudík, Jenn Wortman Vaughan, Yiling Chen,
David Parkes, and Nageeb Ali. MDR is supported by an ARC Discovery Early Career Research
Award (DE130101605). Part of this work was developed while he was visiting Microsoft Research.
8
References
[1] Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex
optimization, and a connection to online learning. ACM Transactions on Economics and Computation,
1(2):12, 2013.
[2] Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in exponential family markets. In Proceedings of the Fifteenth ACM Conference on Economics and Computation,
pages 395–412. ACM, 2014.
[3] Jacob D Abernethy and Rafael M Frongillo. A collaborative mechanism for crowdsourcing prediction
problems. In Advances in Neural Information Processing Systems, pages 2600?2608, 2011.
[4] Aharon Ben-Tal and Marc Teboulle. An old-new concept of convex risk measures: The optimized certainty equivalent. Mathematical Finance, 17(3):449–476, 2007.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] Christian Burgert and Ludger Rüschendorf. On the optimal risk allocation problem. Statistics & Decisions,
24(1):153–171, 2006.
[7] Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via no-regret
learning. In Proceedings of the 11th ACM Conference on Electronic Commerce, pages 189–198. ACM,
2010.
[8] Nair Maria Maia de Abreu. Old and new results on algebraic connectivity of graphs. Linear Algebra and
its Applications, 423(1):53–73, 2007.
[9] Hans Föllmer and Alexander Schied. Stochastic Finance: An Introduction in Discrete Time, volume 27
of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin, 2nd edition, 2004.
[10] Rafael M. Frongillo, Nicolás Della Penna, and Mark D. Reid. Interpreting prediction markets: a stochastic
approach. In Proceedings of Neural Information Processing Systems, 2012.
[11] P.D. Grünwald and A.P. Dawid. Game theory, maximum entropy, minimum discrepancy and robust
Bayesian decision theory. The Annals of Statistics, 32(4):1367–1433, 2004.
[12] J.B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms II, volume 306 of
Grundlehren der mathematischen Wissenschaften. Springer, 1993.
[13] Jinli Hu and Amos Storkey. Multi-period trading prediction markets with connections to machine learning. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
[14] Jono Millin, Krzysztof Geras, and Amos J Storkey. Isoelastic agents and wealth updates in machine
learning markets. In Proceedings of the 29th International Conference on Machine Learning (ICML-12),
pages 1815?1822, 2012.
[15] Bojan Mohar. The Laplacian spectrum of graphs. In Graph Theory, Combinatorics, and Applications,
1991.
[16] I Necoara, Y Nesterov, and F Glineur. A random coordinate descent method on large-scale optimization
problems with linear constraints. Technical Report, 2014.
[17] Ion Necoara. Random coordinate descent algorithms for multi-agent convex optimization over networks.
Automatic Control, IEEE Transactions on, 58(8):2001?2012, 2013.
[18] Yurii Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM
Journal on Optimization, 22(2):341?362, 2012.
[19] Mindika Premachandra and Mark Reid. Aggregating predictions via sequential mini-trading. In Asian
Conference on Machine Learning, pages 373?387, 2013.
[20] Sashank Reddi, Ahmed Hefny, Carlton Downey, Avinava Dubey, and Suvrit Sra. Large-scale randomized-coordinate descent methods with non-separable linear constraints. arXiv preprint arXiv:1409.2617, 2014.
[21] Mark D. Reid, Rafael M. Frongillo, Robert C. Williamson, and Nishant Mehta. Generalized mixability via
entropic duality. In Proc. of Conference on Learning Theory (COLT), 2015.
[22] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods
for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014.
[23] R.T. Rockafellar. Convex analysis. Princeton University Press, 1997.
[24] Amos J Storkey. Machine learning markets. In International Conference on Artificial Intelligence and
Statistics, pages 716?724, 2011.
9
5,222 | 5,728 | Accelerated Proximal Gradient Methods for
Nonconvex Programming
Huan Li
Zhouchen Lin B
Key Lab. of Machine Perception (MOE), School of EECS, Peking University, P. R. China
Cooperative Medianet Innovation Center, Shanghai Jiaotong University, P. R. China
lihuanss@pku.edu.cn
zlin@pku.edu.cn
Abstract
Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving the nonconvex and nonsmooth optimization problems remains a big challenge.
Accelerated proximal gradient (APG) is an excellent method for convex programming. However, it is still unknown whether the usual APG can ensure the convergence to a critical point in nonconvex programming. In this paper, we extend
APG for general nonconvex and nonsmooth programs by introducing a monitor
that satisfies the sufficient descent property. Accordingly, we propose a monotone
APG and a nonmonotone APG. The latter waives the requirement on monotonic
reduction of the objective function and needs less computation in each iteration.
To the best of our knowledge, we are the first to provide APG-type algorithms
for general nonconvex and nonsmooth problems ensuring that every
accumulation point is a critical point, and the convergence rates remain O(1/k²) when the problems are convex, in which k is the number of iterations. Numerical results testify
to the advantage of our algorithms in speed.
1
Introduction
In recent years, sparse and low rank learning has been a hot research topic and leads to a wide
variety of applications in signal/image processing, statistics and machine learning. l1 -norm and
nuclear norm, as the continuous and convex surrogates of l0 -norm and rank, respectively, have been
used extensively in the literature. See e.g., the recent collections [1]. Although l1 -norm and nuclear
norm have achieved great success, in many cases they are suboptimal as they can promote sparsity
and low-rankness only under very limited conditions [2, 3]. To address this issue, many nonconvex
regularizers have been proposed, such as lp -norm [4], Capped-l1 penalty [3], Log-Sum Penalty [2],
Minimax Concave Penalty [5], Geman Penalty [6], Smoothly Clipped Absolute Deviation [7] and
Schatten-p norm [8]. This trend motivates a revived interest in the analysis and design of algorithms
for solving nonconvex and nonsmooth problems, which can be formulated as
min_{x∈ℝⁿ} F(x) = f(x) + g(x),   (1)
where f is differentiable (it can be nonconvex) and g can be both nonconvex and nonsmooth.
Accelerated gradient methods have been at the heart of convex optimization research. In a series of
celebrated works [9, 10, 11, 12, 13, 14], several accelerated gradient methods are proposed for problem (1) with convex f and g. In these methods, k iterations are sufficient to find a solution within
O(1/k²) error from the optimal objective value. Recently, Ghadimi and Lan [15] presented a unified
treatment of accelerated gradient method (UAG) for convex, nonconvex and stochastic optimiza-
Table 1: Comparisons of GD (General Descent Method), iPiano, GIST, GDPA, IR, IFB, APG, UAG
and our method for problem (1). The measurements include the assumption, whether the methods
accelerate for convex programs (CP) and converge for nonconvex programs (NCP).
Method name    Assumption                                  Accelerate (CP)   Converge (NCP)
GD [16, 17]    f + g: KL                                   No                Yes
iPiano [18]    nonconvex f, convex g                       No                Yes
GIST [19]      nonconvex f, g = g₁ − g₂, g₁, g₂ convex     No                Yes
GDPA [20]      nonconvex f, g = g₁ − g₂, g₁, g₂ convex     No                Yes
IR [8, 21]     special f and g                             No                Yes
IFB [22]       nonconvex f, nonconvex g                    No                Yes
APG [12, 13]   convex f, convex g                          Yes               Unclear
UAG [15]       nonconvex f, convex g                       Yes               Yes
Ours           nonconvex f, nonconvex g                    Yes               Yes
tion. They proved that their algorithm converges1 in nonconvex programming with nonconvex f but convex g, and accelerates with an O(1/k²) convergence rate in convex programming for problem (1).
Convergence rate about the gradient mapping is also analyzed in [15].
Attouch et al. [16] proposed a unified framework to prove the convergence of a general class of
descent methods using the Kurdyka-Łojasiewicz (KL) inequality for problem (1), and Frankel et
al. [17] studied the convergence rates of general descent methods under the assumption that the
desingularising function φ in the KL property has the form (C/θ)tᶿ. A typical example in their framework is the proximal gradient method. However, there is no literature showing that there exists an
accelerated gradient method satisfying the conditions in their framework.
Other typical methods for problem (1) include Inertial Forward-Backward (IFB) [22], iPiano [18],
General Iterative Shrinkage and Thresholding (GIST) [19], Gradient Descent with Proximal Average (GDPA) [20] and Iteratively Reweighted algorithms (IR) [8, 21]. Table 1 demonstrates that the
existing methods are not ideal. GD and IFB cannot accelerate the convergence for convex programs.
GIST and GDPA require that g be explicitly written as a difference of two convex functions.
iPiano demands the convexity of g, and IR is suitable only for some special cases of problem (1). APG
can accelerate the convergence for convex programs; however, it is unclear whether APG can converge to critical points for nonconvex programs. UAG can ensure the convergence for nonconvex
programming; however, it requires g to be convex. This restricts the applications of UAG to solving
nonconvexly regularized problems, such as sparse and low rank learning. To the best of our knowledge, extending the accelerated
gradient method to general nonconvex and nonsmooth programs
while keeping the O(1/k²) convergence rate in the convex case remains an open problem.
In this paper we aim to extend Beck and Teboulle's APG [12, 13] to solve the general nonconvex and
nonsmooth problem (1). APG first extrapolates a point y_k by combining the current point and
the previous point, then solves a proximal mapping problem. When extending APG to nonconvex
programs, the chief difficulty lies in the extrapolated point y_k: we have little control over F(y_k)
when convexity is absent. In fact, F(y_k) can be arbitrarily larger than F(x_k) when y_k is a bad
extrapolation, especially when F is oscillatory. When x_{k+1} is computed by a proximal mapping at
a bad y_k, F(x_{k+1}) may also be arbitrarily larger than F(x_k). Beck and Teboulle's monotone APG
[12] ensures F(x_{k+1}) ≤ F(x_k). However, this is not enough to ensure convergence to critical
points. To address this issue, we introduce a monitor satisfying the sufficient descent property to
prevent a bad extrapolation y_k and then correct it. In summary, our contributions
include:
1. We propose APG-type algorithms for general nonconvex and nonsmooth programs (1). We
first extend Beck and Teboulle?s monotone APG [12] by replacing their descent condition
with sufficient descent condition. This critical change ensures that every accumulation point
is a critical point. Our monotone APG satisfies some modified conditions for the framework
of [16, 17] and thus stronger results on convergence rate can be obtained under the KL
1
Except for the work under the KL assumption, convergence for nonconvex problems in this paper and the
references of this paper means that every accumulation point is a critical point.
2
assumption. Then we propose a nonmonotone APG, which allows for larger stepsizes
when line search is used and reduces the average number of proximal mappings in each
iteration. Thus it can further speed up the convergence in practice.
2. For our APGs, the convergence rates remain O(1/k²) when the problems are convex. This
result is of great significance when the objective function is locally convex in the neighborhoods of local minimizers even if it is globally nonconvex.
2
2.1
Preliminaries
Basic Assumptions
Note that a function g : ℝⁿ → (−∞, +∞] is said to be proper if dom g ≠ ∅, where dom g =
{x ∈ ℝⁿ : g(x) < +∞}. g is lower semicontinuous at point x₀ if lim inf_{x→x₀} g(x) ≥ g(x₀). In
problem (1), we assume that f is a proper function with Lipschitz continuous gradients and g is
proper and lower semicontinuous. We assume that F(x) is coercive, i.e., F is bounded from below
and F(x) → ∞ when ‖x‖ → ∞, where ‖·‖ is the l₂-norm.
2.2
KL Inequality
Definition 1. [23] A function f : ℝⁿ → (−∞, +∞] is said to have the KL property at u ∈
dom ∂f := {x ∈ ℝⁿ : ∂f(x) ≠ ∅} if there exist η ∈ (0, +∞], a neighborhood U of u, and a
function φ ∈ Φ_η, such that for all u′ ∈ U ∩ {u′ ∈ ℝⁿ : f(u) < f(u′) < f(u) + η}, the following
inequality holds:

φ′(f(u′) − f(u)) dist(0, ∂f(u′)) > 1,   (2)

where Φ_η stands for the class of functions φ : [0, η) → ℝ₊ satisfying: (1) φ is concave and C¹ on
(0, η); (2) φ is continuous at 0 with φ(0) = 0; and (3) φ′(x) > 0 for all x ∈ (0, η).
All semi-algebraic functions and subanalytic functions satisfy the KL property. Specially, the desingularising function φ(t) of semi-algebraic functions can be chosen to be of the form (C/θ)tᶿ with
θ ∈ (0, 1]. Typical semi-algebraic functions include real polynomial functions, ‖x‖_p with p ≥ 0,
rank(X), the indicator function of the PSD cone, Stiefel manifolds and constant rank matrices [23].
2.3
Review of APG in the Convex Case
We first review APG in the convex case. Beck and Teboulle [13] extend Nesterov's accelerated
gradient method to the nonsmooth case. It is named the Accelerated Proximal Gradient method and
consists of the following steps:
y_k = x_k + ((t_{k−1} − 1)/t_k)(x_k − x_{k−1}),   (3)
x_{k+1} = prox_{α_k g}(y_k − α_k ∇f(y_k)),   (4)
t_{k+1} = (√(4t_k² + 1) + 1)/2,   (5)
where the proximal mapping is defined as prox_{αg}(x) = argmin_u g(u) + (1/2α)‖x − u‖². APG is not
a monotone algorithm, which means that F(x_{k+1}) may not be smaller than F(x_k). So Beck and
Teboulle [12] further proposed a monotone APG, which consists of the following steps:
y_k = x_k + (t_{k−1}/t_k)(z_k − x_k) + ((t_{k−1} − 1)/t_k)(x_k − x_{k−1}),   (6)
z_{k+1} = prox_{α_k g}(y_k − α_k ∇f(y_k)),   (7)
t_{k+1} = (√(4t_k² + 1) + 1)/2,   (8)
x_{k+1} = z_{k+1}, if F(z_{k+1}) ≤ F(x_k); x_{k+1} = x_k, otherwise.   (9)
3
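For readers less familiar with proximal methods, the following sketch shows the proximal mapping for the l₁ norm, the standard closed-form example, together with one iteration of the basic APG steps (3)-(5); the test vector is arbitrary.

import numpy as np

def prox_l1(x, alpha):
    # prox_{alpha ||.||_1}(x) = argmin_u ||u||_1 + (1/(2 alpha)) ||x - u||^2,
    # i.e. soft thresholding, the classic example of the proximal mapping
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

def apg_step(x, x_prev, t, t_prev, grad_f, prox_g, alpha):
    # one iteration of the convex APG, steps (3)-(5)
    y = x + (t_prev - 1.0) / t * (x - x_prev)
    x_new = prox_g(y - alpha * grad_f(y), alpha)
    t_new = (np.sqrt(4.0 * t ** 2 + 1.0) + 1.0) / 2.0
    return x_new, x, t_new, t

print(prox_l1(np.array([3.0, -0.2, 0.5]), 1.0))   # [ 2. -0.  0.]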
3
APGs for Nonconvex Programs
In this section, we propose two APG-type algorithms for general nonconvex and nonsmooth problems.
We establish convergence in the nonconvex case and the O(1/k²) convergence rate in the convex
case. When the KL property is satisfied, we also provide stronger results on the convergence rate.
3.1
Monotone APG
We give two reasons for the difficulty of the convergence analysis of the usual APG [12, 13]
for nonconvex programs: (1) y_k may be a bad extrapolation, and (2) in [12] only the descent property,
F(x_{k+1}) ≤ F(x_k), is ensured. To address these issues, we need to monitor and correct y_k when
it has the potential to fail, and the monitor should enjoy the sufficient descent property, which is
critical to ensure convergence to a critical point. As is known, proximal gradient methods ensure
sufficient descent [16] (cf. (15)). So we use a proximal gradient step as the monitor. More
specifically, our algorithm consists of the following steps:
y_k = x_k + (t_{k−1}/t_k)(z_k − x_k) + ((t_{k−1} − 1)/t_k)(x_k − x_{k−1}),   (10)
z_{k+1} = prox_{α_y g}(y_k − α_y ∇f(y_k)),   (11)
v_{k+1} = prox_{α_x g}(x_k − α_x ∇f(x_k)),   (12)
t_{k+1} = (√(4t_k² + 1) + 1)/2,   (13)
x_{k+1} = z_{k+1}, if F(z_{k+1}) ≤ F(v_{k+1}); x_{k+1} = v_{k+1}, otherwise.   (14)
where α_y and α_x can be fixed constants satisfying α_y < 1/L and α_x < 1/L, or dynamically computed
by backtracking line search initialized by the Barzilai-Borwein rule2. L is the Lipschitz constant of ∇f.
Our algorithm is an extension of Beck and Teboulle's monotone APG [12]. The difference lies in
the extra v, in the role of the monitor, and in the correction step of the x-update: in (9) F(z_{k+1}) is compared
with F(x_k), while in (14) F(z_{k+1}) is compared with F(v_{k+1}). A further difference is that Beck
and Teboulle's algorithm only ensures descent, while our algorithm ensures sufficient descent,
which means

F(x_{k+1}) ≤ F(x_k) − δ‖v_{k+1} − x_k‖²,   (15)

where δ > 0 is a small constant. It is not difficult to see that the descent property alone cannot
ensure convergence to a critical point in nonconvex programming. We present our convergence
result in the following theorem3.
Theorem 1. Let f be a proper function with Lipschitz continuous gradients and g be proper and
lower semicontinuous. For nonconvex f and nonconvex nonsmooth g, assume that F(x) is coercive.
Then {x_k} and {v_k} generated by (10)-(14) are bounded. Let x* be any accumulation point of
{x_k}; we have 0 ∈ ∂F(x*), i.e., x* is a critical point.
A remarkable aspect of our algorithm is that although we have made some modifications to Beck
and Teboulle's algorithm, the O(1/k²) convergence rate in the convex case still holds. Similar to
Theorem 5.1 in [12], we have the following theorem on the accelerated convergence in the convex
case:
Theorem 2. For convex f and g, assume that ∇f is Lipschitz continuous and let x* be any global
optimum. Then {x_k} generated by (10)-(14) satisfies

F(x_{N+1}) − F(x*) ≤ 2‖x_0 − x*‖² / (α_y (N + 1)²).   (16)

When the objective function is locally convex in the neighborhood of local minimizers, Theorem
2 means that APG ensures an O(1/k²) convergence rate when approaching a local
minimizer, thus accelerating the convergence.
For better reference, we summarize the proposed monotone APG algorithm in Algorithm 1.
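As a minimal illustration of Algorithm 1, the sketch below implements the iteration (10)-(14) with fixed stepsizes. The lasso-type toy problem is convex and synthetic, chosen only so the behavior is easy to check; this is a sketch, not the authors' Matlab implementation.

import numpy as np

def monotone_apg(grad_f, prox_g, F, x0, alpha_y, alpha_x, num_iters):
    """Monotone APG, steps (10)-(14), with fixed alpha_y, alpha_x < 1/L."""
    x_prev, x, z = x0.copy(), x0.copy(), x0.copy()
    t_prev, t = 0.0, 1.0
    for _ in range(num_iters):
        y = x + t_prev / t * (z - x) + (t_prev - 1.0) / t * (x - x_prev)  # (10)
        z = prox_g(y - alpha_y * grad_f(y), alpha_y)                      # (11)
        v = prox_g(x - alpha_x * grad_f(x), alpha_x)                      # (12)
        t, t_prev = (np.sqrt(4.0 * t ** 2 + 1.0) + 1.0) / 2.0, t          # (13)
        x_prev, x = x, (z if F(z) <= F(v) else v)                         # (14)
    return x

# toy problem: f(x) = 0.5 ||A x - b||^2 with an l1 regularizer g
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 10)), rng.normal(size=20)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda x, a: np.sign(x) * np.maximum(np.abs(x) - lam * a, 0.0)
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
x = monotone_apg(grad_f, prox_g, F, np.zeros(10), 0.9 / L, 0.9 / L, 300)
print(F(x))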
2 For the details of line search with Barzilai-Borwein initialization, please see the Supplementary Materials.
3 The proofs in this paper can be found in the Supplementary Materials.
4
Algorithm 1 Monotone APG
Initialize z_1 = x_1 = x_0, t_1 = 1, t_0 = 0, α_y < 1/L, α_x < 1/L.
for k = 1, 2, 3, … do
    update y_k, z_{k+1}, v_{k+1}, t_{k+1} and x_{k+1} by (10)-(14).
end for
3.2
Convergence Rate under the KL Assumption
The KL property is a powerful tool and is studied by [16], [17] and [23] for a class of general
descent methods. The usual APG in [12, 13] does not satisfy the sufficient descent property, which
is crucial for using the KL property, and thus has no conclusions under the KL assumption. On the
other hand, due to the intermediate variables y_k, v_k and z_k, our algorithm is more complex than
the general descent methods and also does not satisfy the conditions therein. However, thanks to the
monitor-corrector steps (12) and (14), some modified conditions4 can be satisfied, and we can still
obtain strong results under the KL assumption. Within the framework of [17], we have the
following theorem.
Theorem 3. Let f be a proper function with Lipschitz continuous gradients and g be proper and
lower semicontinuous. For nonconvex f and nonconvex nonsmooth g, assume that F(x) is coercive.
If we further assume that f and g satisfy the KL property and the desingularising function has the
form φ(t) = (C/θ)tᶿ for some C > 0, θ ∈ (0, 1], then:
1. If θ = 1, then there exists k₁ such that F(x_k) = F* for all k > k₁ and the algorithm
terminates in finite steps.
2. If θ ∈ [1/2, 1), then there exists k₂ such that for all k > k₂,

F(x_k) − F* ≤ (d₁C² / (1 + d₁C²))^{k−k₂} r_{k₂}.   (17)

3. If θ ∈ (0, 1/2), then there exists k₃ such that for all k > k₃,

F(x_k) − F* ≤ (C / ((k − k₃) d₂ (1 − 2θ)))^{1/(1−2θ)},   (18)
where F* is the common function value at all the accumulation points of {x_k}, r_k = F(v_k) − F*,
d₁ = (1/α_x + L) / (1/(2α_x) − L/2)², and d₂ = min{ 1/(2d₁C), (C/(1−2θ))(2^{(2θ−1)/(2θ−2)} − 1) r₀^{2θ−1} }.
When F(x) is a semi-algebraic function, the desingularising function φ(t) can be chosen to be of the
form (C/θ)tᶿ with θ ∈ (0, 1] [23]. In this case, as shown in Theorem 3, our algorithm converges in
finitely many iterations when θ = 1, converges with a linear rate when θ ∈ [1/2, 1), and with a sublinear rate (at
least O(1/k)) when θ ∈ (0, 1/2) for the gap F(x_k) − F*. This is the same as the results mentioned in
[17], although our algorithm does not satisfy the conditions therein.
3.3
Nonmonotone APG
Algorithm 1 is a monotone algorithm. When the problem is ill-conditioned, a monotone algorithm
has to creep along the bottom of a narrow curved valley so that the objective function value does not
increase, resulting in short stepsizes or even zigzagging and hence slow convergence [24]. Removing
the requirement on monotonicity can improve convergence speed because larger stepsizes can be
adopted when line search is used.
On the other hand, in Algorithm 1 we need to compute zk+1 and vk+1 in each iteration and use
vk+1 to monitor and correct zk+1 . This is a conservative strategy. In fact, we can accept zk+1 as
xk+1 directly if it satisfies some criterion showing that yk is a good extrapolation. Then vk+1 is
computed only when this criterion is not met. Thus, we can reduce the average number of proximal
4
For the details of difference please see Supplementary Materials.
5
mappings, accordingly the computation cost, in each iteration. So in this subsection we propose a
nonmonotone APG to speed up convergence.
In monotone APG, (15) is ensured. In nonmonotone APG, we allow x_{k+1} to yield a larger objective function value than F(x_k). Specifically, we allow x_{k+1} to yield an objective function value
smaller than c_k, a relaxation of F(x_k). c_k should not be too far from F(x_k), so the average of
F(x_k), F(x_{k−1}), …, F(x_1) is a good choice. Thus we follow [24] to define c_k as a convex combination of F(x_k), F(x_{k−1}), …, F(x_1) with exponentially decreasing weights:
c_k = Σ_{j=1}^k η^{k−j} F(x_j) / Σ_{j=1}^k η^{k−j},   (19)
where η ∈ [0, 1) controls the degree of nonmonotonicity. In practice c_k can be efficiently computed
by the following recursion:

q_{k+1} = η q_k + 1,   (20)
c_{k+1} = (η q_k c_k + F(x_{k+1})) / q_{k+1},   (21)

where q_1 = 1 and c_1 = F(x_1).
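The recursion (20)-(21) is one line of state per iteration. A tiny sketch, with made-up objective values:

def update_relaxation(c, q, F_new, eta):
    # one step of (20)-(21): exponentially weighted average of past F values
    q_new = eta * q + 1.0
    c_new = (eta * q * c + F_new) / q_new
    return c_new, q_new

c, q = 10.0, 1.0                     # c_1 = F(x_1), q_1 = 1
for F_val in [9.0, 9.5, 8.0]:        # hypothetical objective values
    c, q = update_relaxation(c, q, F_val, eta=0.8)
    print(c, q)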
According to (14), we can split (15) into two parts by the different choices of x_{k+1}. Accordingly, in
nonmonotone APG we consider the following two conditions to replace (15):

F(z_{k+1}) ≤ c_k − δ‖z_{k+1} − y_k‖²,   (22)
F(v_{k+1}) ≤ c_k − δ‖v_{k+1} − x_k‖².   (23)
We choose (22) as the criterion mentioned before. When (22) holds, we deem that y_k is a good
extrapolation and accept z_{k+1} directly; we do not compute v_{k+1} in this case. However, (22)
does not hold all the time. When it fails, we deem that y_k may not be a good extrapolation. In this
case, we compute v_{k+1} by (12) so that (23) holds, and then monitor and correct z_{k+1} by (14). (23) is
ensured when α_x ≤ 1/L. When backtracking line search is used, such a v_{k+1} satisfying (23) can
be found in finitely many steps5.
Combining (20), (21), (22) and x_{k+1} = z_{k+1}, we have

c_{k+1} ≤ c_k − δ‖x_{k+1} − y_k‖² / q_{k+1}.   (24)

Similarly, replacing (22) and x_{k+1} = z_{k+1} by (23) and x_{k+1} = v_{k+1}, respectively, we have

c_{k+1} ≤ c_k − δ‖x_{k+1} − x_k‖² / q_{k+1}.   (25)
This means that we replace the sufficient descent condition of F (xk ) in (15) by the sufficient descent
of ck .
We summarize the nonmonotone APG in Algorithm 26. Similar to monotone APG, nonmonotone
APG also enjoys the convergence property in the nonconvex case and the O(1/k²) convergence rate
in the convex case. We present our convergence result in Theorem 4. Theorem 2 still holds for
Algorithm 2 with no modification, so we omit it here.
Define Ω₁ = {k₁, k₂, …, k_j, …} and Ω₂ = {m₁, m₂, …, m_j, …} such that in Algorithm 2,
(22) holds and x_{k+1} = z_{k+1} is executed for all k = k_j ∈ Ω₁, while for all k = m_j ∈ Ω₂, (22) does
not hold and (14) is executed. Then we have Ω₁ ∩ Ω₂ = ∅, Ω₁ ∪ Ω₂ = {1, 2, 3, …}, and the
following theorem holds.
Theorem 4. Let f be a proper function with Lipschitz continuous gradients and g be proper and lower semicontinuous. For nonconvex f and nonconvex nonsmooth g, assume that F(x) is coercive. Then {xk}, {vk} and {ykj} where kj ∈ Ω1, generated by Algorithm 2, are bounded, and

1. if Ω1 or Ω2 is finite, then for any accumulation point x* of {xk}, we have 0 ∈ ∂F(x*);

2. if Ω1 and Ω2 are both infinite, then for any accumulation point x* of {xkj+1}, y* of {ykj} where kj ∈ Ω1, and any accumulation point v* of {vmj+1}, x* of {xmj} where mj ∈ Ω2, we have 0 ∈ ∂F(x*), 0 ∈ ∂F(y*) and 0 ∈ ∂F(v*).

5 See Lemma 2 in Supplementary Materials.
6 Please see Supplementary Materials for nonmonotone APG with line search.

Algorithm 2 Nonmonotone APG
Initialize z1 = x1 = x0, t1 = 1, t0 = 0, η ∈ [0, 1), δ > 0, c1 = F(x1), q1 = 1, αx < 1/L, αy < 1/L.
for k = 1, 2, 3, ... do
    yk = xk + (tk−1/tk)(zk − xk) + ((tk−1 − 1)/tk)(xk − xk−1),
    zk+1 = prox_{αy g}(yk − αy ∇f(yk))
    if F(zk+1) ≤ ck − δ‖zk+1 − yk‖² then
        xk+1 = zk+1.
    else
        vk+1 = prox_{αx g}(xk − αx ∇f(xk)),
        xk+1 = zk+1 if F(zk+1) ≤ F(vk+1), and xk+1 = vk+1 otherwise.
    end if
    tk+1 = (√(4 tk² + 1) + 1) / 2,
    qk+1 = η qk + 1,
    ck+1 = (η qk ck + F(xk+1)) / qk+1.
end for
4 Numerical Results
In this section, we test the performance of our algorithm on the problem of Sparse Logistic Regression (LR)^7. Sparse LR is an attractive extension to LR as it can reduce overfitting and perform feature selection simultaneously. Sparse LR is widely used in areas such as bioinformatics [25] and text categorization [26]. In this subsection, we follow Gong et al. [19] to consider Sparse LR with a nonconvex regularizer:

    min_w (1/n) Σ_{i=1}^n log(1 + exp(−yi xiᵀ w)) + r(w).    (26)
We choose r(w) as the capped l1 penalty [3], defined as

    r(w) = λ Σ_{i=1}^d min(|wi|, θ),  θ > 0.    (27)
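For illustration, the capped l1 penalty and its (nonconvex) proximal mapping can be sketched as below; the prox is separable and, per coordinate, compares the minimizers of the two branches of min(|x|, θ). This is our sketch of the closed form used by GIST-style solvers, not code from [19]; parameter names are ours.

import numpy as np

def capped_l1(w, lam, theta):
    return lam * np.sum(np.minimum(np.abs(w), theta))

def prox_capped_l1(u, a, lam, theta):
    # argmin_x 0.5*(x - u)^2 + a*lam*min(|x|, theta), coordinate-wise.
    x1 = np.sign(u) * np.maximum(np.abs(u), theta)   # branch |x| >= theta
    x2 = np.sign(u) * np.minimum(theta, np.maximum(np.abs(u) - a * lam, 0.0))
    h = lambda x: 0.5 * (x - u) ** 2 + a * lam * np.minimum(np.abs(x), theta)
    return np.where(h(x1) <= h(x2), x1, x2)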
We compare monotone APG (mAPG) and nonmonotone APG (nmAPG) with monotone GIST^8 (mGIST), nonmonotone GIST (nmGIST) [19] and IFB [22]. We test the performance on the real-sim data set^9, which contains 72309 samples of 20958 dimensions. We follow [19] to set λ = 0.0001, θ = 0.1λ and the starting point as zero vectors. In nmAPG we set η = 0.8. In IFB the inertial parameter β is set at 0.01 and the Lipschitz constant is computed by backtracking. To make a fair comparison, we first run mGIST. The algorithm is terminated when the relative change of two consecutive objective function values is less than 10^-5 or the number of iterations exceeds 1000. This termination condition is the same as in [19]. Then we run nmGIST, mAPG, nmAPG and IFB. These four algorithms are terminated when they achieve an equal or smaller objective function value than that of mGIST or the number of iterations exceeds 1000. We randomly choose 90% of the data as training data and the rest as test data. The experiment result is averaged over 10 runs. All algorithms are run on Matlab 2011a and Windows 7 with an Intel Core i3 2.53 GHz CPU and 4GB memory. The result is reported in Table 2. We also plot the curves of objective function values vs. iteration number and CPU time in Figure 1.
7 For the sake of space limitation we leave another experiment, Sparse PCA, in Supplementary Materials.
8 http://www.public.asu.edu/~yje02/Software/GIST
9 http://www.csie.ntu.tw/~cjlin/libsvmtools/datasets
Table 2: Comparisons of APG, GIST and IFB on the sparse logistic regression problem. The quantities include number of iterations, average number of line searches in each iteration, computing time (in seconds) and test error. They are averaged over 10 runs.

    Method    #Iter.    #Line search    Time      Test error
    mGIST     994       2.19            300.42    2.94%
    nmGIST    806       1.69            222.22    2.94%
    IFB       635       2.59            215.82    2.96%
    mAPG      175       2.99            133.23    2.93%
    nmAPG     146       1.01            42.99     2.97%
We have the following observations: (1) APG-type methods need many fewer iterations and less computing time than GIST and IFB to reach the same (or smaller) objective function values. As GIST is in fact a Proximal Gradient method (PG) and IFB is an extension of PG, this verifies that APG can indeed accelerate convergence in practice. (2) nmAPG is faster than mAPG. We give two reasons: nmAPG avoids the computation of vk most of the time and reduces the number of line searches in each iteration. We mention that in mAPG line search is performed in both (11) and (12), while in nmAPG only the computation of zk+1 needs line search in every iteration; vk+1 is computed only when necessary. We note that the average number of line searches in nmAPG is nearly one. This means that (22) holds most of the time. So we can trust that zk works well most of the time, and only occasionally is vk computed to correct zk and yk. On the other hand, nonmonotonicity allows for larger stepsizes, which results in fewer line searches.
[Figure 1: Compare the objective function values produced by APG, GIST and IFB. (a) Objective function value vs. iteration; (b) objective function value vs. CPU time. Curves shown for mGIST, nmGIST, IFB, mAPG and nmAPG.]
5 Conclusions
In this paper, we propose two APG-type algorithms for efficiently solving general nonconvex nonsmooth problems, which are abundant in machine learning. We provide a detailed convergence analysis, showing that every accumulation point is a critical point for general nonconvex nonsmooth programs and that the O(1/k²) convergence rate is maintained for convex programs. Nonmonotone APG allows for larger stepsizes and needs less computation in each iteration, and is thus faster than monotone APG in practice. Numerical experiments testify to the advantage of the two algorithms.
Acknowledgments
Zhouchen Lin is supported by National Basic Research Program of China (973 Program) (grant no.
2015CB352502), National Natural Science Foundation (NSF) of China (grant nos. 61272341 and
61231002), and Microsoft Research Asia Collaborative Research Program. He is the corresponding
author.
References
[1] Y. Fu, editor. Low-rank and sparse modeling for visual analysis. Springer, 2014.
[2] E. J. Candes, M. B. Wakin, and S. P. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 14(5):877-905, 2008.
[3] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. The Journal of Machine Learning Research, 11:1081-1107, 2010.
[4] S. Foucart and M. J. Lai. Sparsest solutions of underdetermined linear systems via lq minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis, 26(3):395-407, 2009.
[5] C. H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894-942, 2010.
[6] D. Geman and C. Yang. Nonlinear image recovery with half-quadratic regularization. IEEE Transactions on Image Processing, 4(7):932-946, 1995.
[7] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348-1360, 2001.
[8] K. Mohan and M. Fazel. Iterative reweighted algorithms for matrix rank minimization. The Journal of Machine Learning Research, 13(1):3441-3473, 2012.
[9] Y. E. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Soviet Mathematics Doklady, 27(2):372-376, 1983.
[10] Y. E. Nesterov. Smooth minimization of nonsmooth functions. Mathematical Programming, 103(1):127-152, 2005.
[11] Y. E. Nesterov. Gradient methods for minimizing composite objective functions. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, 2007.
[12] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419-2434, 2009.
[13] A. Beck and M. Teboulle. A fast iterative shrinkage thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[14] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Technical report, University of Washington, Seattle, 2008.
[15] S. Ghadimi and G. Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. arXiv preprint arXiv:1310.3787, 2013.
[16] H. Attouch, J. Bolte, and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming, 137:91-129, 2013.
[17] P. Frankel, G. Garrigos, and J. Peypouquet. Splitting methods with variable metric for Kurdyka-Lojasiewicz functions and general convergence rates. Journal of Optimization Theory and Applications, 165:874-900, 2014.
[18] P. Ochs, Y. Chen, T. Brox, and T. Pock. iPiano: Inertial proximal algorithms for nonconvex optimization. SIAM Journal on Imaging Sciences, 7(2):1388-1419, 2014.
[19] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye. A general iterative shrinkage and thresholding algorithm for nonconvex regularized optimization problems. In ICML, pages 37-45, 2013.
[20] W. Zhong and J. Kwok. Gradient descent with proximal average for nonconvex and composite regularization. In AAAI, 2014.
[21] P. Ochs, A. Dosovitskiy, T. Brox, and T. Pock. On iteratively reweighted algorithms for non-smooth non-convex optimization in computer vision. SIAM Journal on Imaging Sciences, 2014.
[22] R. I. Bot, E. R. Csetnek, and S. László. An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. Preprint, 2014.
[23] J. Bolte, S. Sabach, and M. Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 146(1-2):459-494, 2014.
[24] H. Zhang and W. W. Hager. A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization, 14:1043-1056, 2004.
[25] S. K. Shevade and S. S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246-2253, 2003.
[26] A. Genkin, D. D. Lewis, and D. Madigan. Large-scale Bayesian logistic regression for text categorization. Technometrics, 49(14):291-304, 2007.
| 5728 |@word polynomial:1 norm:8 stronger:2 open:1 termination:1 d2:2 semicontinuous:5 linearized:1 pg:2 q1:2 mention:1 hager:1 reduction:1 celebrated:1 series:1 contains:1 ours:1 existing:1 nonmonotone:14 current:1 kx0:1 written:1 numerical:3 plot:1 gist:10 update:2 v:1 half:1 asu:1 zlin:1 fewer:2 accordingly:3 xk:71 ckq:1 short:1 ojasiewicz:2 core:2 lr:5 zhang:4 mathematical:3 along:1 prove:1 consists:3 introduce:1 x0:5 indeed:2 dist:1 multi:1 globally:1 decreasing:1 little:1 xti:1 window:1 cpu:3 deem:2 bounded:3 coercive:4 unified:2 every:5 concave:4 ensured:3 k2:12 demonstrates:1 doklady:1 control:1 grant:2 enjoy:1 omit:1 t1:2 before:1 local:3 pock:2 rule2:1 therein:2 china:4 studied:2 dynamically:1 limited:1 averaged:3 fazel:1 acknowledgment:1 practice:4 area:1 composite:2 boyd:1 madigan:1 get:1 cannot:2 valley:1 selection:4 cb352502:1 accumulation:9 ghadimi:2 restriction:1 www:2 center:2 attention:1 starting:1 convex:36 recovery:1 splitting:2 m2:1 nuclear:2 variation:1 conditions4:1 annals:1 barzilai:2 programming:11 deblurring:1 trend:1 satisfying:5 ochs:2 econometrics:1 geman:2 cooperative:1 bottom:1 role:1 csie:1 preprint:2 aszl:1 ensures:3 yk:30 mentioned:2 tame:1 convexity:2 nesterov:4 dom:3 solving:4 accelerate:5 regularizer:1 soviet:1 subanalytic:1 fast:2 medianet:1 neighborhood:3 larger:7 solve:1 supplementary:6 widely:1 otherwise:3 statistic:3 g1:4 advantage:2 differentiable:1 propose:6 combining:1 achieve:1 r02:1 convergence:39 seattle:1 requirement:2 extending:2 optimum:1 categorization:2 converges:3 leave:1 tk:23 gong:2 school:1 received:1 sim:1 solves:1 met:1 correct:5 stochastic:2 libsvmtools:1 material:6 public:1 ipiano:5 require:1 preliminary:1 ntu:1 extension:3 correction:1 hold:9 exp:1 great:2 k3:3 mapping:6 consecutive:1 tool:1 minimization:7 aim:1 modified:2 ck:16 i3:1 shrinkage:3 zhong:1 stepsizes:5 l0:1 waif:1 vk:23 rank:7 likelihood:1 minimizers:2 accept:2 issue:3 ill:1 constrained:1 special:2 initialize:2 brox:2 equal:1 apgs:2 washington:1 icml:1 nearly:2 promote:1 nonsmooth:18 report:2 dosovitskiy:1 few:1 randomly:1 genkin:1 simultaneously:1 national:2 beck:9 keerthi:1 maintain:1 microsoft:1 testify:2 psd:1 technometrics:1 interest:1 analyzed:1 d11:1 kvk:2 regularizers:1 necessary:1 huan:1 pku:2 initialized:1 abundant:1 modeling:1 teboulle:11 cost:2 introducing:1 deviation:1 too:1 reported:1 eec:1 proximal:18 gd:3 siam:4 borwein:2 aaai:1 satisfied:2 choose:3 huang:1 american:1 li:2 combing:1 sabach:1 potential:1 prox:7 includes:1 satisfy:5 explicitly:1 tion:1 performed:1 extrapolation:6 lab:1 candes:1 contribution:1 collaborative:1 ir:4 qk:9 efficiently:2 yield:1 yes:11 bayesian:1 produced:1 lu:1 oscillatory:1 reach:1 definition:1 proof:1 proved:1 treatment:1 knowledge:2 lim:1 subsection:2 inertial:4 follow:3 asia:1 stage:1 shevade:1 hand:3 replacing:2 trust:1 nonlinear:2 logistic:4 name:1 attouch:2 ye:1 rk2:1 unbiased:1 hence:1 regularization:3 alternating:1 iteratively:2 attractive:1 reweighted:4 please:3 maintained:1 criterion:3 yun:1 l1:8 cp:2 stiefel:1 image:7 harmonic:1 recently:2 xkj:1 shanghai:1 exponentially:1 extend:4 he:1 m1:1 association:1 measurement:1 unconstrained:2 zhouchen:2 similarly:1 mathematics:1 peypouquet:1 recent:2 inf:1 nonconvex:51 inequality:3 success:1 arbitrarily:2 yi:1 frankel:2 jiaotong:1 creep:1 converge:3 signal:2 semi:5 reduces:2 seidel:1 exceeds:2 smooth:2 faster:2 technical:2 lin:2 lai:1 peking:1 ensuring:1 basic:2 regression:4 enhancing:1 metric:1 vision:1 arxiv:2 iteration:18 achieved:1 c1:2 else:1 crucial:1 extra:1 
rest:1 specially:2 sure:2 nonconcave:1 yang:1 ideal:1 intermediate:1 split:1 enough:1 variety:1 xj:1 approaching:1 suboptimal:1 reduce:2 cn:2 absent:1 t0:2 whether:3 pca:1 gb:1 accelerating:1 penalty:6 algebraic:5 matlab:1 detailed:1 revived:1 extensively:1 locally:2 http:2 restricts:1 nsf:1 uk2:1 bot:1 key:1 four:1 iter:1 lan:2 monitor:10 prevent:1 backward:3 imaging:2 relaxation:2 monotone:17 year:1 sum:2 cone:1 run:5 inverse:1 powerful:1 named:1 clipped:1 accelerates:1 apg:49 fan:1 quadratic:1 oracle:1 extrapolates:1 software:1 sake:1 aspect:1 speed:4 fourier:1 min:4 according:1 combination:1 remain:1 smaller:4 terminates:1 garrigos:1 wi:1 lp:1 tw:1 modification:2 heart:1 remains:2 fail:1 cjlin:1 end:3 adopted:1 operation:1 kwok:1 ensure:6 include:4 cf:1 wakin:1 k1:4 especially:1 establish:1 objective:15 quantity:1 strategy:1 usual:3 surrogate:1 unclear:2 said:2 gradient:24 schatten:1 zigzagging:1 topic:1 manifold:1 tseng:1 reason:2 corrector:1 minimizing:1 innovation:1 difficult:1 executed:2 design:1 motivates:1 proper:9 unknown:1 perform:1 observation:1 datasets:1 finite:4 descent:21 curved:1 rn:5 moe:1 kl:15 z1:2 louvain:1 narrow:1 address:3 capped:2 below:1 perception:1 sparsity:2 challenge:1 summarize:2 program:16 memory:1 hot:1 critical:12 suitable:1 difficulty:2 natural:1 regularized:3 indicator:1 recursion:1 minimax:2 improve:1 kj:4 text:2 review:2 literature:2 l2:2 nonmonotonicity:2 relative:1 sublinear:1 limitation:1 remarkable:1 foundation:1 degree:1 sufficient:10 thresholding:3 exciting:1 editor:1 kxkp:1 summary:1 extrapolated:1 supported:1 penalized:1 keeping:1 enjoys:1 allow:2 understand:1 wide:1 absolute:1 sparse:11 k12:9 ghz:1 kzk:2 dimension:1 xn:1 stand:1 curve:1 avoids:1 forward:3 collection:1 made:1 author:1 far:1 transaction:2 gene:1 monotonicity:1 global:1 overfitting:1 continuous:7 iterative:4 search:14 chief:1 table:4 mj:3 optimiza1:1 ncp:2 zk:30 excellent:1 complex:1 significance:1 pk:2 terminated:2 big:1 verifies:1 fair:1 x1:6 intel:1 slow:1 fails:1 sparsest:1 lq:1 lie:2 theorem:12 rk:1 removing:1 bad:4 showing:3 foucart:1 exists:5 mohan:1 conditioned:1 demand:1 rankness:1 kx:1 gap:1 chen:1 bolte:2 smoothly:1 ifb:14 backtracking:3 kurdyka:2 visual:1 kxk:3 g2:4 monotonic:1 springer:1 minimizer:1 satisfies:5 lewis:1 formulated:1 lipschitz:7 replace:2 considerable:1 change:2 typical:3 except:1 specifically:1 infinite:1 denoising:1 lemma:1 argminu:1 conservative:1 total:1 gauss:1 latter:1 bioinformatics:2 accelerated:12 d1:3 ykj:2 |
5,223 | 5,729 | Nearly-Optimal Private LASSO*
Kunal Talwar
Google Research
kunal@google.com
Abhradeep Thakurta
(Previously) Yahoo! Labs
guhathakurta.abhradeep@gmail.com
Li Zhang
Google Research
liqzhang@google.com
Abstract
We present a nearly optimal differentially private version of the well known LASSO estimator. Our algorithm provides privacy protection with respect to each training example. The excess risk of our algorithm, compared to the non-private version, is Õ(1/n^{2/3}), assuming all the input data has bounded ℓ∞ norm. This is the first differentially private algorithm that achieves such a bound without a polynomial dependence on p under no additional assumptions on the design matrix. In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms.
1 Introduction
A common task in supervised learning is to select the model that best fits the data. This is frequently
achieved by selecting a loss function that associates a real-valued loss with each datapoint d and model θ, and then selecting, from a class of admissible models, the model θ that minimizes the average loss over all data points in the training set. This procedure is commonly referred to as Empirical Risk Minimization (ERM).
The availability of large datasets containing sensitive information from individuals has motivated the
study of learning algorithms that guarantee the privacy of individuals contributing to the database. A
rigorous and by-now standard privacy guarantee is via the notion of differential privacy. In this work,
we study the design of differentially private algorithms for Empirical Risk Minimization, continuing
a long line of work. (See [2] for a survey.)
In particular, we study adding privacy protection to the classical LASSO estimator, which has been
widely used and analyzed. We first present a differentially private optimization algorithm for the
LASSO estimator. The algorithm is the combination of the classical Frank-Wolfe algorithm [15]
and the exponential mechanism for guaranteeing the privacy [21]. We then show that our algorithm
achieves nearly optimal risk among all the differentially private algorithms. This lower bound proof
relies on recently developed techniques with roots in cryptography [4, 14].
Consider the training dataset D consisting of n pairs of data di = (xi, yi), where xi ∈ R^p, usually called the feature vector, and yi ∈ R, the prediction. The LASSO estimator, or sparse linear regression, solves for

    θ̂ = argmin_θ L(θ; D) = (1/n) Σ_i |⟨xi, θ⟩ − yi|²  subject to ‖θ‖1 ≤ c.

To simplify presentation, we assume c = 1, but our results directly extend to general c. The ℓ1 constraint tends to induce a sparse θ̂ and so is widely used in the high-dimensional setting when p ≫ n. Here, we will study approximating the LASSO estimator with the minimum possible error while protecting the privacy of each individual di. Below we define the setting more formally.
* Part of this work was done at Microsoft Research Silicon Valley Campus.
Problem definition: Given a data set D = {d1, ..., dn} of n samples from a domain D, a constraint set C ⊆ R^p, and a loss function L : C × D → R, for any model θ, define its excess empirical risk as

    R(θ; D) = (1/n) Σ_{i=1}^n L(θ; di) − min_{θ'∈C} (1/n) Σ_{i=1}^n L(θ'; di).    (1)

For LASSO, the constraint set is the ℓ1 ball, and the loss is the quadratic loss function. We define the risk of a mechanism A on a data set D as R(A; D) = E[R(A(D); D)], where the expectation is over the internal randomness of A, and the risk R(A) = max_{D∈D^n} R(A; D) is the maximum risk over all possible data sets. Our objective is then to design a mechanism A which preserves (ε, δ)-differential privacy (Definition 1.3) and achieves as low risk as possible. We call the minimum achievable risk the privacy risk, defined as min_A R(A), where the min is over all (ε, δ)-differentially private mechanisms A.
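In code, these risk quantities are straightforward to evaluate; a minimal sketch (ours), where the constrained minimum is assumed to be supplied by any LASSO solver:

import numpy as np

def loss(theta, X, y):                     # mean squared LASSO loss
    return np.mean((X @ theta - y) ** 2)

def excess_risk(theta, X, y, min_loss):    # R(theta; D), equation (1)
    return loss(theta, X, y) - min_loss

def mechanism_risk(A, X, y, min_loss, trials=100):
    # R(A; D) = E[R(A(D); D)], estimated over A's internal randomness.
    return float(np.mean([excess_risk(A(X, y), X, y, min_loss)
                          for _ in range(trials)]))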
There has been much work on studying the privacy risk for the LASSO estimator. However, all the previous results either make strong assumptions about the input data or have a polynomial dependence on the dimension p. First [20] and then [24] studied the LASSO estimator with a differential privacy guarantee. They showed that one can avoid the polynomial dependence on p in the excess empirical risk if the data matrix X satisfies the restricted strong convexity and mutual incoherence properties. While such assumptions seem necessary to prove that LASSO recovers the exact support in the worst case, they are often violated in practice, where LASSO still leads to useful models. It is therefore desirable to design and analyze private versions of LASSO in the absence of such assumptions. In this work, we do so by analyzing the loss achieved by the private optimizer, compared to the true optimizer.
We make primarily two contributions in this paper. First we present an algorithm that achieves a privacy risk of Õ(1/n^{2/3}) for the LASSO problem^1. Compared to the previous work, we only assume that the input data has bounded ℓ∞ norm. In addition, the above risk bound has only a logarithmic dependence on p, which fits particularly well for LASSO, as we usually assume n ≪ p when applying LASSO. This bound is achieved by a private version of the Frank-Wolfe algorithm. Assuming that each data point di satisfies ‖di‖∞ ≤ 1, we have
Theorem 1.1. There exists an (ε, δ)-differentially private algorithm A for LASSO such that

    R(A) = O( √(log(np) log(1/δ)) / (nε)^{2/3} ).
Our second contribution is to show that, surprisingly, this simple algorithm gives a nearly tight bound. We show that this rather unusual n^{-2/3} dependence is not an artifact of the algorithm or the analysis, but is in fact the right dependence for the LASSO problem: no differentially private algorithm can do better! We prove a lower bound by employing fingerprinting-codes-based techniques developed in [4, 14].

Theorem 1.2. For the sparse linear regression problem where ‖xi‖∞ ≤ 1, for ε = 0.1 and δ = o(1/n²), any (ε, δ)-differentially private algorithm A must have

    R(A) = Ω(1/(n log n)^{2/3}).
Our improved privacy risk crucially depends on the fact that the constraint set is a polytope with
few (polynomial in dimensions) vertices. This allows us to use a private version of the Frank-Wolfe
algorithm, where at each step, we use the exponential mechanism to select one of the vertices of
the polytope. We also present a variant of Frank-Wolfe that uses objective perturbation instead of
the exponential mechanism. We show that (Theorem 2.6) we can obtain a risk bound dependent on
the Gaussian width of the constraint set, which often results in tighter bounds compared to bounds
based, e.g., on diameter. While more general, this variant adds much more noise than the FrankWolfe based algorithm, as it is effectively publishing the whole gradient at each step. When C is not
a polytope with a small number of vertices, one can still use the exponential mechanism as long as
one has a small list of candidate points which contains an approximate optimizer for every direction.
For many simple cases, for example the ℓq ball with 1 < q < 2, the bounds attained in this way have
1 Throughout the paper, we use Õ to hide logarithmic factors.
an additional polynomial dependence on the dimension p, instead of the logarithmic dependence in the above result. For example, when q = 1, the upper bound from this variant has an extra factor of p^{1/3}. Whereas such a dependence is provably needed for q = 2, the upper bound jumps rather abruptly from the logarithmic dependence for q = 1 to a polynomial dependence on p for q > 1. We leave open the question of resolving this discontinuity and interpolating more smoothly between the ℓ1 case and the ℓ2 case.
Our results enlarge the set of problems for which privacy comes "for free". Given n samples from a distribution, suppose that θ* is the empirical risk minimizer and θ^priv is the differentially private approximate minimizer. Then the non-private ERM algorithm outputs θ* and incurs expected (on the distribution) loss equal to loss(θ*, training-set) + generalization error, where the generalization error term depends on the loss function, C and on the number of samples n. The differentially private algorithm incurs an additional loss of the privacy risk. If the privacy risk is asymptotically no larger than the generalization error, we can think of privacy as coming for free, since under the assumption of n being large enough to make the generalization error small, we are also making n large enough to make the privacy risk small. In the case when C is the ℓ1-ball, and the loss function is the squared loss with ‖x‖∞ ≤ 1 and |y| ≤ 1, the best known generalization error bounds dominate the privacy risk when n = Ω(log³ p) [1, Theorem 18].
1.1 Related work

There has been much work on private LASSO or, more generally, private ERM algorithms. The error bounds mainly depend on the shape of the constraint set and the Lipschitz condition of the loss function. Here we summarize these related results. Related to our results, we distinguish two settings: i) the constraint set is bounded in the ℓ1-norm and the loss function is 1-Lipschitz in the ℓ1-norm (call it the (ℓ1/ℓ∞)-setting); this is directly related to our bounds on LASSO; and ii) the constraint set has bounded ℓ2 norm and the loss function is 1-Lipschitz in the ℓ2 norm (the (ℓ2/ℓ2)-setting), which is related to our bounds using Gaussian width.
The (ℓ1/ℓ∞)-setting: The results in this setting include [20, 24, 19, 25]. The first two works make certain assumptions about the instance (restricted strong convexity (RSC) and mutual incoherence). Under these assumptions, they obtain privacy risk guarantees that depend logarithmically on the dimension p, thus allowing the guarantees to be meaningful even when p ≫ n. In fact their bound of O(polylog p/n) can be better than our tight bound of O(polylog p/n^{2/3}). However, these assumptions on the data are strong and may not hold in practice. Our guarantees do not require any such data-dependent assumptions. The result of [19] captures the scenario where the constraint set C is the probability simplex and the loss function is a generalized linear model, but provides a worse bound of O(polylog p/n^{1/3}). For the special case of linear loss functions, which are interesting primarily in the online prediction setting, the techniques of [19, 25] provide a bound of O(polylog p/n).
The (ℓ2/ℓ2)-setting: In all the works on private convex optimization that we are aware of, either the excess risk guarantees depend polynomially on the dimensionality of the problem (p), or special structure is assumed for the loss (e.g., generalized linear models [19] or linear losses [25]). A similar dependence is also present in the online version of the problem [18, 26]. [2] recently showed that in the private ERM setting, this polynomial dependence on p is in general unavoidable. In our work we show that one can replace this dependence on p with the Gaussian width of the constraint set C, which can be much smaller.
Effect of Gaussian width in risk minimization: Our result on general C depends on the Gaussian width of C. This geometric concept has previously appeared in other contexts. For example, [1] bounds the excess generalization error by the Gaussian width of the constraint set C. Recently [5] showed that the Gaussian width of a constraint set C is very closely related to the number of generic linear measurements one needs to perform to recover an underlying model θ* ∈ C. The notion of Gaussian width has also been used by [22, 11] in the context of differentially private query release mechanisms, but in the very different context of answering multiple linear queries over a database.
1.2 Background
Differential Privacy: The notion of differential privacy (Definition 1.3) is by now a de facto standard for statistical data privacy [10, 12]. One of the reasons why differential privacy has become so popular is that it provides meaningful guarantees even in the presence of arbitrary auxiliary information. At a semantic level, the privacy guarantee ensures that an adversary learns almost the same thing about an individual independent of his presence or absence in the data set. The parameters (ε, δ) quantify the amount of information leakage. For reasons beyond the scope of this work, ε ≤ 0.1 and δ = 1/n^{ω(1)} are a good choice of parameters. Here n refers to the number of samples in the data set.

Definition 1.3. A randomized algorithm A is (ε, δ)-differentially private if, for all neighboring data sets D and D' (i.e., they differ in one record, or equivalently, dH(D, D') = 1) and for all events S in the output space of A, we have

    Pr(A(D) ∈ S) ≤ e^ε Pr(A(D') ∈ S) + δ.

Here dH(D, D') refers to the Hamming distance.
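As a concrete standard instance of Definition 1.3 (our illustration, not one of this paper's algorithms), the Laplace mechanism releases a statistic of ℓ1-sensitivity Δ with additive Lap(Δ/ε) noise and is (ε, 0)-differentially private:

import numpy as np

def laplace_mechanism(f, D, sensitivity, eps, rng=None):
    # Releases f(D) plus Lap(sensitivity / eps) noise, coordinate-wise.
    rng = rng or np.random.default_rng()
    value = np.atleast_1d(np.asarray(f(D), dtype=float))
    return value + rng.laplace(scale=sensitivity / eps, size=value.shape)

# Example: privately release the mean of n values in [0, 1]; changing one
# record moves the mean by at most 1/n, so the sensitivity is 1/n.
D = np.linspace(0.0, 1.0, 1000)
private_mean = laplace_mechanism(np.mean, D, sensitivity=1.0 / len(D), eps=0.1)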
ℓq-norm, q ≥ 1: For q ≥ 1, the ℓq-norm of any vector v ∈ R^p is defined as (Σ_{i=1}^p |v(i)|^q)^{1/q}, where v(i) is the i-th coordinate of the vector v.
L-Lipschitz continuity w.r.t. a norm ‖·‖: A function ℓ : C → R is L-Lipschitz within a set C w.r.t. a norm ‖·‖ if the following holds:

    ∀θ1, θ2 ∈ C,  |ℓ(θ1) − ℓ(θ2)| ≤ L · ‖θ1 − θ2‖.
Gaussian width of a set C: Let b ∼ N(0, Ip) be a Gaussian random vector in R^p. The Gaussian width of a set C is defined as GC = E_b[ sup_{w∈C} |⟨b, w⟩| ].
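For intuition, the Gaussian width of simple sets is easy to estimate by Monte Carlo; e.g., for the unit ℓ1 ball, sup_{w∈C} |⟨b, w⟩| = ‖b‖∞, so GC = E‖b‖∞ ≈ √(2 log p), which is why width-based bounds can beat dimension-based ones. A sketch (ours):

import numpy as np

p, trials = 10_000, 200
rng = np.random.default_rng(1)
# Width of the unit l1 ball: the support function of b is ||b||_inf.
samples = [np.max(np.abs(rng.standard_normal(p))) for _ in range(trials)]
print("Monte Carlo width:", np.mean(samples))
print("sqrt(2 log p)    :", np.sqrt(2.0 * np.log(p)))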
2 Private Convex Optimization by Frank-Wolfe algorithm
In this section we analyze a differentially private variant of the classical Frank-Wolfe algorithm [15]. We show that for the setting where the constraint set C is a polytope with k vertices, and the loss function L(θ; d) is Lipschitz w.r.t. the ℓ1-norm, one can obtain an excess privacy risk of roughly O(log k/n^{2/3}). This in particular captures the high-dimensional linear regression setting. One such example is the classical LASSO algorithm [27], which computes argmin_{θ:‖θ‖1≤1} (1/n)‖Xθ − y‖₂². In the usual case of |xij|, |yj| = O(1), L(θ) = (1/n)‖Xθ − y‖₂² is O(1)-Lipschitz with respect to the ℓ1-norm, and we show that one can achieve the nearly optimal privacy risk of Õ(1/n^{2/3}).
The Frank-Wolfe algorithm [15] can be regarded as a "greedy" algorithm which moves towards the optimum solution in a first-order approximation (see Algorithm 1 for the description). How fast the Frank-Wolfe algorithm converges depends on L's "curvature", defined as follows according to [8, 17]. We remark that a β-smooth function on C has curvature constant bounded by β‖C‖₂².
Definition 2.1 (Curvature constant). For L : C → R, define ΓL as below:

    ΓL := sup_{θ1,θ2∈C, γ∈(0,1], θ3=θ1+γ(θ2−θ1)} (2/γ²) (L(θ3) − L(θ1) − ⟨θ3 − θ1, ∇L(θ1)⟩).
Remark 1. A useful bound can be derived for a quadratic loss L(θ) = θᵀAᵀAθ + ⟨b, θ⟩. In this case, by [8], ΓL ≤ max_{a,b∈AC} ‖a − b‖₂², where AC = {Aθ : θ ∈ C}. When C is centrally symmetric, we have the bound ΓL ≤ 4 max_{θ∈C} ‖Aθ‖₂². For LASSO, A = (1/√n) X.
Define θ* = argmin_{θ∈C} L(θ). The following theorem bounds the convergence rate of the Frank-Wolfe algorithm.
Algorithm 1 Frank-Wolfe algorithm
Input: C ⊆ R^p, L : C → R, μt
1: Choose an arbitrary θ1 from C
2: for t = 1 to T − 1 do
3:    Compute θ̃t = argmin_{θ∈C} ⟨∇L(θt), θ − θt⟩
4:    Set θt+1 = θt + μt(θ̃t − θt)
5: return θT.
Theorem 2.2 ([8, 17]). If we set μt = 2/(t + 2), then L(θT) − L(θ*) = O(ΓL/T).
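For the ℓ1 ball, step 3 reduces to scanning the 2p vertices ±e_i: the minimizer is −sign(∇_i L(θt)) e_i at the coordinate i with the largest |∇_i L(θt)|. A non-private numpy sketch on synthetic data (our illustration):

import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 1000
X = rng.uniform(-1.0, 1.0, size=(n, p))
theta_star = np.zeros(p); theta_star[:5] = 0.2       # ||theta*||_1 = 1
y = X @ theta_star + 0.01 * rng.standard_normal(n)

theta = np.zeros(p)
for t in range(1, 200):
    grad = (2.0 / n) * X.T @ (X @ theta - y)         # grad of (1/n)||X theta - y||^2
    i = int(np.argmax(np.abs(grad)))
    vertex = np.zeros(p)
    vertex[i] = -np.sign(grad[i])                    # best of the 2p vertices
    theta += (2.0 / (t + 2.0)) * (vertex - theta)
print("final loss:", np.mean((X @ theta - y) ** 2))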
While the Frank-Wolfe algorithm does not necessarily provide faster convergence compared to gradient-descent based methods, it has two major advantages. First, in Line 3, it reduces the problem to minimizing a linear function. When C is defined by a small number of vertices, e.g. when C is an ℓ1 ball, the minimization can be done by checking ⟨∇L(θt), x⟩ for each vertex x of C. This can be done efficiently. Secondly, each step of Frank-Wolfe takes a convex combination of θt and θ̃t, which is on the boundary of C. Hence each intermediate solution is always inside C (the method is sometimes called projection free), and the final outcome θT is the convex combination of up to T points on the boundary of C (or vertices of C when C is a polytope). Such an outcome may be desirable, for example when C is a polytope, as it corresponds to a sparse solution. For these reasons the Frank-Wolfe algorithm has found many applications in machine learning [23, 16, 8]. As we shall see below, these properties are also useful for obtaining low risk bounds for its private version.
2.1 Private Frank-Wolfe Algorithm
We now present a private version of the Frank-Wolfe algorithm. The algorithm accesses the private
data only through the loss function in step 3 of the algorithm. Thus to achieve privacy, it suffices to
replace this step by a private version.
To do so, we apply the exponential mechanism [21] to select an approximate optimizer. In the case when the set C is a polytope, it suffices to optimize over the vertices of C due to the following basic fact:

Fact 2.3. Let C ⊆ R^p be the convex hull of a compact set S ⊆ R^p. For any vector v ∈ R^p,

    argmin_{θ∈C} ⟨θ, v⟩ ∩ S ≠ ∅.
Thus it suffices to run the exponential mechanism to select ?t+1 from amongst the vertices of C.
This leads to a differentially private algorithm with risk logarithmically dependent on |S|. When
|S| is polynomial in p, it leads to an error bound with log p dependence. We can bound the error
in terms of the `1 -Lipschitz constant, which can be much smaller than the `2 -Lipschitz constant. In
particular, as we show in the next section, the private Frank-Wolfe algorithm is nearly optimal for
the important high-dimensional sparse linear regression problem.
Algorithm 2 A_Noise-FW(polytope): Differentially Private Frank-Wolfe Algorithm (Polytope Case)
Input: Data set: D = {d1, ..., dn}, loss function: L(θ; D) = (1/n) Σ_{i=1}^n L(θ; di) (with ℓ1-Lipschitz constant L1 for L), privacy parameters: (ε, δ), convex set: C = conv(S) with ‖C‖1 denoting max_{s∈S} ‖s‖1.
1: Choose an arbitrary θ1 from C
2: for t = 1 to T − 1 do
3:    ∀s ∈ S, αs ← ⟨s, ∇L(θt; D)⟩ + Lap( L1‖C‖1 √(8T log(1/δ)) / (nε) ), where Lap(λ) has density (1/(2λ)) e^{−|x|/λ}.
4:    θ̃t ← argmin_{s∈S} αs.
5:    θt+1 ← (1 − μt)θt + μt θ̃t, where μt = 2/(t + 2).
6: Output θ^priv = θT.
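A numpy sketch of Algorithm 2 specialized to LASSO follows; the noisy minimum over the 2p vertex scores implements line 3. This is our illustration using the noise scale above, not an audited differential-privacy implementation:

import numpy as np

def private_fw_lasso(X, y, eps, delta, T, L1=1.0, C1=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    scale = L1 * C1 * np.sqrt(8.0 * T * np.log(1.0 / delta)) / (n * eps)
    theta = np.zeros(p)
    for t in range(1, T):
        grad = (2.0 / n) * X.T @ (X @ theta - y)
        scores = np.concatenate([grad, -grad])   # <s, grad> for s = +e_i and -e_i
        scores += rng.laplace(scale=scale, size=2 * p)
        j = int(np.argmin(scores))
        vertex = np.zeros(p)
        vertex[j % p] = 1.0 if j < p else -1.0
        theta += (2.0 / (t + 2.0)) * (vertex - theta)
    return theta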
Theorem 2.4 (Privacy guarantee). Algorithm 2 is (ε, δ)-differentially private.
Since each data item is assumed to have bounded ℓ∞ norm, for two neighboring databases D and D' and any θ ∈ C, s ∈ S, we have that

    |⟨s, ∇L(θ; D)⟩ − ⟨s, ∇L(θ; D')⟩| = O(L1‖C‖1/n).

The proof of privacy then follows from a straightforward application of the exponential mechanism [21] (or its noisy maximum version [3, Theorem 5]) and the strong composition theorem [13]. In Theorem 2.5 we prove the utility guarantee for the private Frank-Wolfe algorithm in the convex polytope case. Define ΓL as the maximum of the curvature constant of L(·; D) over all possible data sets D ∈ D^n.
Theorem 2.5 (Utility guarantee). Let L1, S and ‖C‖1 be defined as in Algorithm 2 (Algorithm A_Noise-FW(polytope)). Let ΓL be an upper bound on the curvature constant (defined in Definition 2.1) for the loss function L(·; d) that holds for all d ∈ D. In Algorithm A_Noise-FW(polytope), if we set

    T = ΓL^{2/3} (nε)^{2/3} / (L1‖C‖1)^{2/3},

then

    E[L(θ^priv; D)] − min_{θ∈C} L(θ; D) = O( ΓL^{1/3} (L1‖C‖1)^{2/3} √(log(n|S|) log(1/δ)) / (nε)^{2/3} ).

Here the expectation is over the randomness of the algorithm.
The proof of utility uses known bounds on noisy Frank-Wolfe [17], along with error bounds for the
exponential mechanism. The details can be found in the full version.
General C. While a variant of this mechanism can be applied to the case when C is not a polytope, its error would depend on the size of a cover of the boundary of C, which can be exponential in p, leading to an error bound with polynomial dependence on p. In the full version, we analyze another variant of private Frank-Wolfe that uses objective perturbation to ensure privacy. This variant is well-suited to a general convex set C, and the following result, proven in the Appendix, bounds its excess risk in terms of the Gaussian width of C. For this mechanism, we only need C to be bounded in ℓ2 diameter, but our error now depends on the ℓ2-Lipschitz constant of the loss functions.
Theorem 2.6. Suppose that each loss function is L2-Lipschitz with respect to the ℓ2 norm, and that C has ℓ2 diameter at most ‖C‖2. Let GC be the Gaussian width of the convex set C ⊆ R^p, and let ΓL be the curvature constant (defined in Definition 2.1) for the loss function ℓ(θ; d) for all θ ∈ C and d ∈ D. Then there is an (ε, δ)-differentially private algorithm A_Noise-FW with excess empirical risk:

    E[L(θ^priv; D)] − min_{θ∈C} L(θ; D) = O( ΓL^{1/3} (L2 GC)^{2/3} log²(n/δ) / (nε)^{2/3} ).

Here the expectation is over the randomness of the algorithm.
2.2 Private LASSO algorithm
We now apply the private Frank-Wolfe algorithm A_Noise-FW(polytope) to the important case of the sparse linear regression (or LASSO) problem.

Problem definition: Given a data set D = {(x1, y1), ..., (xn, yn)} of n samples from the domain D = {(x, y) : x ∈ R^p, y ∈ [−1, 1], ‖x‖∞ ≤ 1}, and the convex set C equal to the unit ℓ1 ball in R^p, define the mean squared loss

    L(θ; D) = (1/n) Σ_{i∈[n]} (⟨xi, θ⟩ − yi)².    (2)

The objective is to compute θ^priv ∈ C to minimize L(θ; D) while preserving privacy with respect to any change of an individual (xi, yi) pair. The non-private setting of the above problem is a variant of the least squares problem with ℓ1 regularization, whose study was started by the work on LASSO [27, 28], and which has been intensively studied in the past years.
Since the ℓ1 ball is the convex hull of 2p vertices, we can apply the private Frank-Wolfe algorithm A_Noise-FW(polytope). For the above setting, it is easy to check that the ℓ1-Lipschitz constant is bounded by O(1). Further, by applying the bound for quadratic programming in Remark 1, we have the curvature bound 4 max_{θ∈C} (1/n)‖Xθ‖₂² = O(1), since C is the unit ℓ1 ball and |xij| ≤ 1. Hence ΓL = O(1). Now applying Theorem 2.5, we have
Corollary 2.7. Let D = {(x1, y1), ..., (xn, yn)} be a set of n samples from the domain D = {(x, y) : ‖x‖∞ ≤ 1, |y| ≤ 1}, and let the convex set C equal the unit ℓ1-ball. The output θ^priv of Algorithm A_Noise-FW(polytope) ensures the following:

    E[L(θ^priv; D) − min_{θ∈C} L(θ; D)] = O( log(np/δ) / (nε)^{2/3} ).
Remark 2. Compared to the previous work [20, 24], the above upper bound makes no assumption of restricted strong convexity or mutual incoherence, which might be too strong for realistic settings. Also our results significantly improve the bounds of [19], from Õ(1/n^{1/3}) to Õ(1/n^{2/3}), which considered the case of the set C being the probability simplex and the loss being a generalized linear model.
3 Optimality of Private LASSO
In the following, we shall show that, to ensure privacy, the error bound in Corollary 2.7 is nearly optimal in terms of the dominant factor of 1/n^{2/3}.

Theorem 3.1 (Optimality of private Frank-Wolfe). Let C be the ℓ1-ball and L be the mean squared loss in equation (2). For every sufficiently large n, for every (ε, δ)-differentially private algorithm A, with ε ≤ 0.1 and δ = o(1/n²), there exists a data set D = {(x1, y1), ..., (xn, yn)} of n samples from the domain D = {(x, y) : ‖x‖∞ ≤ 1, |y| ≤ 1} such that

    E[L(A(D); D) − min_{θ∈C} L(θ; D)] = Ω̃(1/n^{2/3}).
We prove the lower bound by following the fingerprinting codes argument of [4] for lower-bounding the error of (ε, δ)-differentially private algorithms. Similar to [4] and [14], we start with the following lemma, which is implicit in [4]. The matrix X in Theorem 3.2 is the padded Tardos code used in [14, Section 5]. For any matrix X, denote by X(i) the matrix obtained by removing the i-th row of X. Call a column of a matrix a consensus column if the entries in the column are either all 1 or all −1. The sign of a consensus column is simply the consensus value of the column. Write w = m/log m and p = 1000m². The following theorem follows immediately from the proof of Corollary 16 in [14].

Theorem 3.2 ([14, Corollary 16], restated). Let m be a sufficiently large positive integer. There exists a matrix X ∈ {−1, 1}^{(w+1)×p} with the following property. For each i ∈ [1, w + 1], there are at least 0.999p consensus columns Wi in each X(i). In addition, for an algorithm A on input matrix X(i) where i ∈ [1, w + 1], if with probability at least 2/3, A(X(i)) produces a p-dimensional sign vector which agrees with at least (3/4)p of the columns in Wi, then A is not (ε, δ)-differentially private with respect to a single row change (to some other row in X).
Write τ = 0.001. Let k = τwp. We first form a k × p matrix Y whose column vectors are mutually orthogonal {1, −1} vectors. This is possible as k ≥ p. Now we construct w + 1 databases Di for 1 ≤ i ≤ w + 1 as follows. All the databases contain the common set of examples (zj, 0) (i.e., vector zj with label 0) for 1 ≤ j ≤ k, where zj = (Yj1, ..., Yjp) is the j-th row vector of Y. In addition, each Di contains w examples (xj, 1) for xj = (Xj1, ..., Xjp) for j ≠ i. Then L(θ; Di) is defined as follows (for ease of notation, in this proof we work with the un-normalized loss; this does not affect the generality of the arguments in any way):
    L(θ; Di) = Σ_{j≠i} (⟨xj, θ⟩ − 1)² + Σ_{j=1}^k ⟨zj, θ⟩² = Σ_{j≠i} (⟨xj, θ⟩ − 1)² + k‖θ‖₂².
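The matrix Y can be realized, for instance, from a Sylvester Hadamard matrix; this particular construction is our choice for illustration, the argument only needs Y to have mutually orthogonal ±1 columns (YᵀY = kI):

import numpy as np

def hadamard(k):                    # Sylvester construction; k a power of two
    H = np.ones((1, 1))
    while H.shape[0] < k:
        H = np.block([[H, H], [H, -H]])
    return H

k, p = 16, 8
Y = hadamard(k)[:, :p]              # k x p, orthogonal {+1, -1} columns
assert np.array_equal(Y.T @ Y, k * np.eye(p))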
The last equality is due to the fact that the columns of Y are mutually orthogonal {−1, 1} vectors. For each such Di, consider θ* ∈ {−1/p, 1/p}^p such that the sign of each coordinate of θ* matches the sign of the corresponding consensus column of X(i). Plugging θ* into L(θ*; Di),

    L(θ*; Di) ≤ Σ_{j=1}^w (2τ)² + k/p = (τ + 4τ²)w,    (3)

since the number of consensus columns is at least (1 − τ)p.
We now prove the crucial lemma, which states that if θ is such that ‖θ‖1 ≤ 1 and L(θ; Di) is small, then θ has to agree with the sign of most of the consensus columns of X(i).

Lemma 3.3. Suppose that ‖θ‖1 ≤ 1 and L(θ; Di) < 1.1τw. For j ∈ Wi, denote by sj the sign of the consensus column j. Then we have

    |{j ∈ Wi : sign(θj) = sj}| ≥ (3/4)p.
Proof. For any S ⊆ {1, ..., p}, denote by θ|S the projection of θ onto the coordinate subset S. Consider three subsets S1, S2, S3, where

    S1 = {j ∈ Wi : sign(θj) = sj},
    S2 = {j ∈ Wi : sign(θj) ≠ sj},
    S3 = {1, ..., p} \ Wi.

The proof is by contradiction. Assume that |S1| < (3/4)p. Further denote θi = θ|Si for i = 1, 2, 3. Now we will bound ‖θ1‖1 and ‖θ3‖1 using the inequality ‖x‖2 ≥ ‖x‖1/√d for any d-dimensional vector x:

    ‖θ3‖₂² ≥ ‖θ3‖₁²/|S3| ≥ ‖θ3‖₁²/(τp).

Hence k‖θ3‖₂² ≥ w‖θ3‖₁². But k‖θ3‖₂² ≤ k‖θ‖₂² ≤ 1.1τw, so that ‖θ3‖1 ≤ √(1.1τ) ≤ 0.04. Similarly, by the assumption |S1| < (3/4)p,

    ‖θ1‖₂² ≥ ‖θ1‖₁²/|S1| ≥ 4‖θ1‖₁²/(3p).

Again using k‖θ‖₂² < 1.1τw, we have that ‖θ1‖1 ≤ √(1.1 · 3/4) ≤ 0.91.

Now we have ⟨xi, θ⟩ − 1 = ‖θ1‖1 − ‖θ2‖1 + ζi − 1, where |ζi| ≤ ‖θ3‖1 ≤ 0.04. By ‖θ1‖1 + ‖θ2‖1 + ‖θ3‖1 ≤ 1, we have

    |⟨xi, θ⟩ − 1| ≥ 1 − ‖θ1‖1 − |ζi| ≥ 1 − 0.91 − 0.04 = 0.05.

Hence we have that L(θ; Di) ≥ (0.05)²w ≥ 1.1τw. This leads to a contradiction. Hence we must have |S1| ≥ (3/4)p.
With Theorem 3.2 and Lemma 3.3, we can now prove Theorem 3.1.

Proof. Suppose that A is private, and that for the datasets constructed above,

    E[L(A(Di); Di) − min_θ L(θ; Di)] ≤ cw,

for a sufficiently small constant c. By Markov's inequality, we have with probability at least 2/3 that L(A(Di); Di) − min_θ L(θ; Di) ≤ 3cw. By (3), we have min_θ L(θ; Di) ≤ (τ + 4τ²)w. Hence if we choose the constant c small enough, we have with probability 2/3,

    L(A(Di); Di) < (τ + 4τ² + 3c)w ≤ 1.1τw.    (4)

By Lemma 3.3, (4) implies that A(Di) agrees with at least (3/4)p consensus columns in X(i). However, by Theorem 3.2, this violates the privacy of A. Hence there exists an i such that

    E[L(A(Di); Di) − min_θ L(θ; Di)] > cw.

Recall that w = m/log m and n = w + τwp = O(m³/log m). Hence we have that

    E[L(A(Di); Di) − min_θ L(θ; Di)] = Ω(n^{1/3}/log^{2/3} n).

The proof is completed by converting the above bound to the normalized version of Ω(1/(n log n)^{2/3}).
References
[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463-482, 2003.
[2] R. Bassily, A. Smith, and A. Thakurta. Private empirical risk minimization, revisited. In FOCS, 2014.
[3] R. Bhaskar, S. Laxman, A. Smith, and A. Thakurta. Discovering frequent patterns in sensitive data. In KDD, New York, NY, USA, 2010.
[4] M. Bun, J. Ullman, and S. Vadhan. Fingerprinting codes and the price of approximate differential privacy. In STOC, 2014.
[5] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805-849, 2012.
[6] K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In NIPS, 2008.
[7] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. JMLR, 12:1069-1109, 2011.
[8] K. L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms, 2010.
[9] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In FOCS, 2013.
[10] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265-284. Springer, 2006.
[11] C. Dwork, A. Nikolov, and K. Talwar. Efficient algorithms for privately releasing marginals via convex relaxations. arXiv preprint arXiv:1308.1385, 2013.
[12] C. Dwork and A. Roth. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. NOW Publishers, 2014.
[13] C. Dwork, G. N. Rothblum, and S. P. Vadhan. Boosting and differential privacy. In FOCS, 2010.
[14] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze Gauss: optimal bounds for privacy-preserving principal component analysis. In STOC, 2014.
[15] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
[16] E. Hazan and S. Kale. Projection-free online learning. In ICML, 2012.
[17] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
[18] P. Jain, P. Kothari, and A. Thakurta. Differentially private online learning. In COLT, pages 24.1-24.34, 2012.
[19] P. Jain and A. Thakurta. (Near) dimension independent risk bounds for differentially private learning. In International Conference on Machine Learning (ICML), 2014.
[20] D. Kifer, A. Smith, and A. Thakurta. Private convex empirical risk minimization and high-dimensional regression. In COLT, pages 25.1-25.40, 2012.
[21] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94-103. IEEE, 2007.
[22] A. Nikolov, K. Talwar, and L. Zhang. The geometry of differential privacy: The sparse and approximate cases. In STOC, 2013.
[23] S. Shalev-Shwartz, N. Srebro, and T. Zhang. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization, 2010.
[24] A. Smith and A. Thakurta. Differentially private feature selection via stability arguments, and the robustness of the Lasso. In COLT, 2013.
[25] A. Smith and A. Thakurta. Follow the perturbed leader is differentially private with optimal regret guarantees. Manuscript in preparation, 2013.
[26] A. Smith and A. Thakurta. Nearly optimal algorithms for private online learning in full-information and bandit settings. In NIPS, 2013.
[27] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 1996.
[28] R. Tibshirani et al. The Lasso method for variable selection in the Cox model. Statistics in Medicine, 16(4):385-395, 1997.
[29] J. Ullman. Private multiplicative weights beyond linear queries. CoRR, abs/1407.1571, 2014.
| 5729 |@word h:3 private:59 version:13 cox:1 polynomial:9 norm:14 achievable:1 bun:1 open:1 crucially:1 incurs:2 contains:2 series:1 selecting:2 denoting:1 frankwolfe:1 past:1 ksk1:1 ka:2 com:3 protection:2 si:1 gmail:1 must:2 realistic:1 kdd:1 shape:1 greedy:2 discovering:1 item:1 smith:7 record:1 provides:3 defacto:1 revisited:1 boosting:1 zhang:4 dn:3 along:1 constructed:1 differential:11 become:1 focs:4 prove:6 inside:1 privacy:43 expected:1 roughly:1 p1:4 frequently:1 conv:1 bounded:8 campus:1 underlying:1 notation:1 argmin:4 minimizes:1 maxa:1 developed:2 guarantee:13 every:3 unit:1 yn:3 laxman:1 positive:1 local:1 tends:1 analyzing:1 incoherence:3 rothblum:1 might:2 eb:1 studied:2 ease:1 yj:2 practice:2 regret:1 procedure:1 empirical:9 significantly:1 projection:4 induce:1 refers:2 valley:1 selection:3 risk:36 applying:3 context:3 optimize:1 roth:1 kale:1 convex:16 survey:1 restated:1 immediately:1 m2:1 estimator:6 contradiction:2 regarded:1 dominate:1 his:1 stability:1 notion:3 coordinate:3 tardos:1 suppose:4 exact:1 programming:2 us:3 kunal:2 associate:1 wolfe:26 logarithmically:2 trend:1 particularly:1 database:5 kxk1:1 preprint:1 capture:2 worst:1 revisiting:1 ensures:2 convexity:3 complexity:1 depend:4 tight:2 solving:1 jain:2 fast:1 query:3 outcome:2 shalev:1 widely:2 valued:1 larger:1 statistic:1 think:1 noisy:2 ip:1 online:5 final:1 advantage:1 coming:1 frequent:1 neighboring:2 chaudhuri:2 achieve:2 description:1 differentially:27 convergence:2 optimum:1 rademacher:1 produce:1 guaranteeing:1 leave:1 converges:1 polylog:4 solves:1 strong:7 auxiliary:1 come:1 implies:1 quantify:1 differ:1 direction:1 trading:1 closely:1 hull:2 violates:1 require:1 suffices:3 generalization:6 tighter:1 secondly:1 hold:3 sufficiently:3 considered:1 scope:1 algorithmic:1 major:1 achieves:4 optimizer:4 estimation:1 label:1 thakurta:10 sensitive:2 agrees:2 minimization:8 gaussian:14 always:1 rather:2 priv:7 avoid:1 shrinkage:1 corollary:4 release:1 derived:1 naval:1 methodological:1 check:1 mainly:1 rigorous:1 lowerbounding:1 dependent:3 bandit:1 provably:1 arg:2 among:1 colt:3 yahoo:1 special:2 mutual:3 equal:2 aware:1 construct:1 enlarge:1 icml:3 nearly:9 problem1:1 simplex:2 np:2 simplify:1 primarily:2 few:1 preserve:1 individual:5 geometry:2 consisting:1 n1:7 microsoft:1 ab:1 dwork:5 analyzed:1 mcsherry:2 necessary:1 orthogonal:2 continuing:1 desired:1 xjk:1 theoretical:1 rsc:1 instance:1 column:14 cover:1 vertex:10 entry:1 subset:2 too:1 perturbed:1 kxi:1 recht:1 international:1 randomized:1 sensitivity:1 siam:1 squared:3 again:1 unavoidable:1 containing:1 choose:3 worse:1 leading:1 return:1 ullman:2 li:1 parrilo:1 wk:1 availability:1 coresets:1 satisfy:1 depends:4 vi:1 multiplicative:1 root:1 lab:1 analyze:4 sup:2 hazan:1 start:1 recover:1 contribution:2 minimize:1 square:1 accuracy:1 efficiently:1 straight:1 randomness:3 j6:2 datapoint:1 monteleoni:2 definition:8 proof:9 di:31 recovers:1 hamming:1 dataset:1 popular:1 intensively:1 recall:1 dimensionality:1 manuscript:1 attained:1 supervised:1 follow:1 improved:1 done:3 generality:1 implicit:1 google:4 continuity:1 logistic:1 artifact:1 usa:1 effect:1 k22:9 normalized:2 true:1 concept:1 contain:1 calibrating:1 regularization:1 equality:1 hence:8 xj1:1 symmetric:1 wp:2 semantic:1 width:12 generalized:3 mina:1 duchi:1 l1:6 recently:3 common:2 sarwate:1 extend:1 marginals:1 silicon:1 measurement:1 bk22:1 composition:1 mathematics:1 similarly:1 hxi:3 access:1 add:1 dominant:1 curvature:5 jaggi:1 showed:1 hide:1 scenario:1 certain:1 inequality:2 
maxd:1 yi:5 preserving:3 minimum:2 additional:3 converting:1 nikolov:2 ii:1 resolving:1 full:3 desirable:1 multiple:1 reduces:1 d0:5 smooth:1 faster:1 match:1 long:2 plugging:1 prediction:2 variant:8 regression:8 basic:1 expectation:3 arxiv:2 sometimes:1 achieved:3 abhradeep:2 addition:4 whereas:1 background:1 crucial:1 publisher:1 extra:1 releasing:1 subject:1 thing:1 seem:1 jordan:1 bhaskar:1 call:3 integer:1 structural:1 vadhan:2 presence:2 yk22:2 intermediate:1 enough:3 easy:1 hb:2 near:1 xj:4 fit:2 affect:1 lasso:28 motivated:1 utility:3 bartlett:1 abruptly:1 clarkson:1 york:1 remark:4 useful:3 generally:1 amount:1 diameter:3 kck2:2 xij:2 zj:3 s3:3 sign:9 tibshirani:2 write:2 shall:2 asymptotically:1 relaxation:1 padded:1 year:1 run:1 talwar:5 inverse:1 throughout:1 almost:1 chandrasekaran:1 appendix:1 bound:43 def:2 distinguish:1 centrally:1 quadratic:4 constraint:14 argument:3 min:12 optimality:2 according:1 combination:3 ball:8 smaller:2 wi:8 making:1 s1:6 restricted:3 pr:2 erm:4 equation:1 mutually:2 previously:2 agree:1 kdi:1 mechanism:15 needed:1 unusual:1 yj1:1 studying:1 kifer:1 apply:3 quarterly:1 generic:1 robustness:1 rp:9 assumes:1 include:1 ensure:2 publishing:1 completed:1 log2:2 medicine:1 k1:14 approximating:1 classical:4 society:1 leakage:1 objective:4 move:1 question:1 dependence:17 usual:1 amongst:2 gradient:2 distance:1 cw:3 polytope:16 nissim:1 consensus:9 reason:3 willsky:1 assuming:2 code:4 kk:5 equivalently:1 stoc:3 frank:26 design:5 perform:1 allowing:1 upper:4 kothari:1 datasets:2 markov:1 minh:1 protecting:1 descent:1 logistics:1 y1:3 gc:3 perturbation:2 arbitrary:3 fingerprinting:3 pair:2 discontinuity:1 nip:2 beyond:2 adversary:1 usually:2 below:3 pattern:1 appeared:1 sparsity:2 summarize:1 max:4 royal:1 wainwright:1 event:1 minimax:1 improve:1 started:1 geometric:1 l2:2 checking:1 contributing:1 loss:32 interesting:1 proven:1 srebro:1 foundation:3 row:4 surprisingly:1 last:1 free:5 sparse:9 boundary:3 dimension:5 kck1:6 xn:3 computes:1 forward:1 commonly:1 jump:1 employing:1 polynomially:1 log3:1 excess:8 approximate:5 compact:1 sj:4 assumed:1 xi:5 shwartz:1 leader:1 un:1 why:1 obtaining:1 interpolating:1 necessarily:1 cl:2 domain:4 privately:1 whole:1 noise:2 s2:2 n2:6 cryptography:2 x1:3 referred:1 bassily:1 ny:1 exponential:9 candidate:1 kxk2:1 answering:1 jmlr:1 learns:1 admissible:1 theorem:20 removing:1 k21:5 list:1 exists:4 mendelson:1 adding:1 effectively:1 corr:1 kx:3 suited:1 smoothly:1 logarithmic:4 lap:2 simply:1 kxk:4 springer:1 corresponds:1 minimizer:2 satisfies:1 relies:1 dh:2 acm:1 presentation:1 towards:1 lipschitz:12 absence:2 replace:2 fw:7 change:2 price:1 transations:1 lemma:5 principal:1 called:2 gauss:1 m3:1 meaningful:2 select:4 formally:1 internal:1 support:1 violated:1 preparation:1 d1:2 |
5,224 | 573 | Active Exploration in Dynamic Environments
Sebastian B. Thrun
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
E-mail: thrun@cs.cmu.edu

Knut Möller
University of Bonn
Dept. of Computer Science
Römerstr. 164
D-5300 Bonn, Germany
Abstract
Whenever an agent learns to control an unknown environment, two opposing principles have to be combined, namely: exploration (long-term optimization) and exploitation (short-term optimization). Many real-valued connectionist approaches to learning control realize exploration by randomness in action selection. This might be disadvantageous when costs are assigned to "negative experiences". The basic idea presented in this paper is to make an agent explore unknown regions in a more directed manner. This is achieved by a so-called competence map, which is trained to predict the controller's accuracy, and is used for guiding exploration. Based on this, a bistable system enables smoothly switching attention between two behaviors - exploration and exploitation - depending on expected costs and knowledge gain.

The appropriateness of this method is demonstrated by a simple robot navigation task.
INTRODUCTION
The need for exploration in adaptive control has been recognized by various authors [MB89, Sut90, Moo90, Sch90, BBS91]. Many connectionist approaches (e.g. [Mel89, MB89]) distinguish a random exploration phase, at which a controller is constructed by generating actions randomly, and a subsequent exploitation phase. Random exploration usually suffers from three major disadvantages:

[Figure 1: The training of the model network is a system identification task. Weights and biases of the network are estimated by gradient descent using the backpropagation algorithm.]

• Whenever costs are assigned to certain experiences - which is the case for various real-world tasks such as autonomous robot learning, chemical control, flight control etc. -, exploration may become unnecessarily expensive. Intuitively speaking, a child would burn itself again and again simply because it is in its random phase.
? Random exploration is often inefficient in terms of learning time, too [Whi9l,
Thr92]. Random actions usually make an agent waste plenty of time in already
well-explored regions in state space, while other regions may still be poorly
explored. Exploration happens by chance and is thus undirected .
• Once the exploitation phase begins, learning is finished and the system is unable
to adapt to time-varying, dynamic environments.
However, more efficient exploration techniques rely on knowledge about the learning process itself, which is used for guiding exploration. Rather than selecting actions randomly, these exploration techniques select actions such that the expected
knowledge gain is maximal. In discrete domains, this may be achieved by preferring
states (or state-action pairs) that have been visited less frequently [BS90], or less
recently [Sut90], or have previously shown a high prediction error [Moo90, Sch91]¹.
For various discrete deterministic domains such exploration heuristics have been
proved to prevent from exponential learning time [Thr92] (exponential in size of
the state space). However, such techniques require a variable associated with each
state-action pair, which is not feasible if states and actions are real-valued.
A novel real-valued generalization of these approaches is presented in this paper.
A so-called competence map estimates the controller's accuracy. Using this estimation, the agent is driven into regions in state space with low accuracy, where
the resulting learning effect is assumed to be maximal. This technique defines a
directed exploration rule. In order to minimize costs during learning, exploration is
combined with an exploitation mechanism using selective attention, which allows
for switching between exploration and exploitation.
INDIRECT CONTROL USING FORWARD MODELS
In this paper we focus on an adaptive control scheme adopted from Jordan [Jor89]:
System identification (Fig. 1): Observing the input-output behavior of the unknown world (environment), a model is constructed by minimizing the difference of
the observed outcome and its corresponding predictions. This is done with backpropagation.
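As a concrete illustration of this system-identification step, here is a minimal sketch in Python with NumPy. The linear stand-in model and all names are our own illustrative assumptions; the paper instead trains a neural network with backpropagation.

```python
import numpy as np

def train_model_step(W, state, action, next_state, lr=0.01):
    """One system-identification step for a linear stand-in model s' ~ W @ [s; a]:
    minimize 0.5 * ||W @ [s; a] - s'||^2 by a single gradient step."""
    x = np.concatenate([state, action])
    err = W @ x - next_state          # prediction error of the model
    W -= lr * np.outer(err, x)        # gradient of the squared error w.r.t. W
    return W
```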
Action search using the model network (Fig. 2): Let an actual state s and
a goal state s* be given. Optimal actions are searched using gradient descent in
action space: starting with an initial action (e.g. randomly chosen), the next state
¹ Note that these two approaches [Moo90, Sch91] are real-valued.
Figure 2: Using the model for optimizing actions (exploitation). Starting with some
initial action, gradient descent through the model network progressively improves actions.
ŝ is predicted with the world model. The exploitation energy function

    E_exploit = (s* − ŝ)ᵀ (s* − ŝ)
measures the LMS-deviation of the predicted and the desired state. Since the
model network is differentiable, gradients of E_exploit can be propagated back through
the model network. Using these gradients, actions are optimized progressively by
gradient descent in action space, minimizing E_exploit. The resulting actions exploit
the world.
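The action-search loop can be sketched as follows (a hedged illustration in Python/NumPy; `model`, `d_model`, and the linear dynamics are stand-ins of our own for the trained network and its backpropagated gradients):

```python
import numpy as np

def search_action(model, d_model, state, goal, a0, lr=0.1, steps=50):
    """Gradient descent in action space: minimize E_exploit = ||goal - model(state, a)||^2.

    model   : forward model, maps (state, action) -> predicted next state
    d_model : Jacobian of the predicted state w.r.t. the action
    """
    a = a0.copy()
    for _ in range(steps):
        pred = model(state, a)                      # predicted next state
        err = pred - goal                           # d E_exploit / d pred = 2 * err
        grad_a = 2.0 * d_model(state, a).T @ err    # back-propagate through the model
        a -= lr * grad_a                            # improve the action
    return a

# Illustrative linear forward model s' = A s + B a (A, B assumed known here;
# in the paper they would be a trained network).
A = np.eye(2)
B = 0.5 * np.eye(2)
model = lambda s, a: A @ s + B @ a
d_model = lambda s, a: B                            # Jacobian ds'/da of the linear model

state, goal = np.zeros(2), np.array([1.0, -1.0])
a_star = search_action(model, d_model, state, goal, a0=np.zeros(2))
print(a_star)  # approaches [2, -2], since 0.5 * a must equal the goal
```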
THE COMPETENCE MAP
The general principle of many enhanced exploration schemes [BS90, Sut90, Mo090,
TM91, Sch91, Thr92] is to select actions such that the resulting observations are
expected to optimally improve the controller. In terms of the above control scheme,
this may be realized by driving the agent into regions in state-action space where
the accuracy of the model network is assumed to be low, and thus the knowledge
gain by visiting these regions is assumed to be high. In order to estimate the
accuracy of the model network, we introduce the notion of a competence network
[Sch91, TM91]. Basically, this map estimates some upper bound of the LMS-error
of the model network. This estimation is used for exploring the world by selecting
actions which minimize the expected competence of the model, and thus maximize
the resulting learning effect.
However, training the competence map is not as straightforward, since it is impossible to exactly predict the accuracy of the model network for regions in state space
not visited for some time. The training procedure for the competence map is based
on the assumption that the error increases (and thus competence decreases) slowly
for such regions due to relearning and environmental dynamics:
1. At each time tick, backpropagation learning is applied using the last state-action pair as input, and the observed LMS-prediction error of the model as
target value (cf. Fig. 3), normalized to (0, c_max) (0 ≤ c_max ≤ 1; so far we used
c_max = 1).
2. For some² randomly generated state-action pairs, the competence map is subsequently trained with target 1.0 (≈ largest possible error c_max) [ACL+90]. This
training step establishes a heuristic, realizing the loss of accuracy in unvisited
regions: over time, the output values of the competence map increase for these
reglOns.
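A schematic of this two-part training procedure follows (a sketch only; the `competence` and `model` objects with their `.predict`/`.fit_one` methods are hypothetical stand-ins for the two networks):

```python
import numpy as np

def train_competence_step(competence, model, state, action, next_state,
                          c_max=1.0, n_random=5, lr=0.05):
    """One round of competence-map training (schematic)."""
    # 1. Target = observed LMS prediction error of the model, clipped to (0, c_max).
    pred_err = np.sum((model.predict(state, action) - next_state) ** 2)
    competence.fit_one(state, action, min(pred_err, c_max), lr=lr)

    # 2. For a few random state-action pairs, train towards the maximal error c_max,
    #    so the estimated error slowly grows again in regions not visited for a while.
    for _ in range(n_random):
        s = np.random.uniform(-1, 1, state.shape)
        a = np.random.uniform(-1, 1, action.shape)
        competence.fit_one(s, a, c_max, lr=lr * 0.1)   # small learning rate, as in the paper
```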
Actions are now selected with respect to an energy function E which combines both
² In our simulations: five, with a small learning rate.
Figure 3: Training the competence map to predict the error of the model by gradient
descent (see text).
exploration and exploitation:

    E = (1 − Γ) · E_explore + Γ · E_exploit    (1)

with gain parameter Γ (0 < Γ < 1). Here the exploration energy

    E_explore = 1 − competence(action)
is evaluated using the competence map; minimizing E_explore is equivalent to maximizing the predicted model error. Since both the model net and the competence net
are differentiable, gradient descent in action space may be used for minimizing Eq.
(1). E combines exploration with exploitation: on the one hand minimizing E_exploit
serves to avoid costs (short-term optimization), and on the other hand minimizing
E_explore ensures exploration (long-term optimization). Γ determines the portion of
both target functions, which can be viewed to represent behaviors, in the action
selection process.
Note that c_max determines the character of exploration: if c_max is large, the agent
is attracted by regions in state space which have previously shown high prediction
error. The smaller c_max is, the more the agent is attracted by rarely-visited regions.
EXPLORATION AND SELECTIVE ATTENTION
Clearly, exploration and exploitation are often conflicting and can hinder each other.
E.g. if exploration and exploitation pull a mobile robot into opposite directions, the
system will stay where it is. It therefore makes sense not to keep Γ constant during
learning, but sometimes to focus more on exploration and sometimes more on exploitation, depending on expected costs and improvements. In our approach, this is
achieved by determining the focus of attention Γ using the following bistable recursive function, which allows for smoothly switching attention between both policies.
At each step of action search, let e_exploit = ΔE_exploit(a) and e_explore = ΔE_explore(a)
denote the expected change of both energy functions by action a. With f(·) being
a positive and monotonically increasing function³,

    κ = Γ · f(e_exploit) − (1 − Γ) · f(e_explore)    (2)

compares the influence of action a on both energy functions under the current focus
of attention Γ. The new Γ is then derived by squashing κ (with c > 0):

    Γ = 1 / (1 + e^(−c·κ))    (3)

³ We chose f(x) = e^x in our simulations.
Figure 4: (a) Robot world - note that there are two equally good paths leading around
the obstacle. (b) Potential field: In addition to the x-y-state vector, the environment
returns for each state a potential field value (the darker the color, the larger the value).
Gradient ascent in the potential field yields both optimal paths depicted. Learning this
potential field function is part of the system identification task.
If κ > 0, the learning system is in exploitation mood and Γ > 0.5. Likewise, if
κ < 0, the system is in exploration mood and Γ < 0.5. Since the actual attention
Γ weighs both competing energy functions, in most cases Eqs. (2) and (3) establish
two stable points (fixpoints), close to 0 and 1, respectively. Attention is switched
only if κ changes its sign. The scalar c serves as a stability factor: the larger c is,
the closer Γ is to its extremal values and the larger the switching factors Γ(1 − Γ)⁻¹
(taken from Eq. (2)).
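In code, the complete attention update of Eqs. (2) and (3) is only a few lines (Python sketch; the constant c = 5 is an arbitrary illustrative choice):

```python
import numpy as np

def update_attention(gamma, e_exploit, e_explore, c=5.0, f=np.exp):
    """Bistable focus-of-attention update, Eqs. (2)-(3): kappa compares the expected
    energy changes under the current gamma, and the new gamma squashes kappa."""
    kappa = gamma * f(e_exploit) - (1.0 - gamma) * f(e_explore)   # Eq. (2)
    return 1.0 / (1.0 + np.exp(-c * kappa))                       # Eq. (3)

# Iterating the update keeps gamma near 0 or 1 until kappa changes sign.
gamma = 0.9
for _ in range(5):
    gamma = update_attention(gamma, e_exploit=0.1, e_explore=0.3)
```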
A ROBOT NAVIGATION TASK
We now will demonstrate the benefits of active exploration using a competence map
with selective attention by a simple robot navigation example. The environment is
a 2-dimensional room with one obstacle and walls (see Fig. 4a), and x-y-states are
evaluated by a potential field function (Fig. 4b). The goal is to navigate the robot
from the start to the goal position without colliding with the obstacle or a wall.
Using a model network without hidden units for state prediction and a model with
two hidden layers (10 units with Gaussian activation functions in the first hidden
layer, and 8 logistic units in the second) for potential field value prediction, we
compared the following exploration techniques - Table 1 summarizes the results:
• Pure random exploration. In Fig. 5a the best result out of 20 runs is
shown. The dark color in the middle indicates that the obstacle was touched
extremely often. Moreover, the resulting controller (exploitation phase) did
not find a path to the goal.
• Pure exploitation (see Fig. 5b). (With a bit of randomness in the beginning)
this exploration technique found one of two paths but failed in both finding the
other path and performing proper system identification. The number of crashes
Figure 5: Resulting models of the potential field function. (a) Random exploration.
The dark color in the middle indicates the high number of crashes against the obstacle.
Note that the agent is restarted whenever it crashes against a wall or the obstacle - the
probability for reaching the goal is 0.0007. (b) Pure exploitation: The resulting model
is accurate along the path, but inaccurate elsewhere. Only one of two paths is identified.
Figure 6: Active exploration. (a) Resulting model of the potential field function. This
model is most accurate, and the number of crashes during training is the smallest. Both
paths are found about equally often. (b) "Typical" competence map: The arrows indicate
actions which maximize Eexplore (pure exploration) .
                        # runs    # crashes    # paths found    L2-model error
random exploration      10000     9993         0                2.5 %
pure exploitation       15000     11000        1                0.7 %
active exploration      15000     4000         2                0.4 %
Table 1: Results (averaged over 20 runs). The L2 -model error is measured in relation to
its initial value (= 100%).
Figure 7: Three examples of trajectories during learning demonstrate the switching attention mechanism described in the paper. Thick lines indicate exploration mode (Γ < 0.2),
and thin lines indicate exploitation (Γ > 0.8). The arrows mark some points where exploration is switched off due to a predicted collision.
during learning was significantly smaller than with random exploration .
• Directed exploration with selective attention. Using a competence network with two hidden layers (6 units each hidden layer), a proper model was
found in all simulations we performed (Fig. 6a), and the number of collisions
was the smallest. An intermediate state of the competence map is depicted in
Fig. 6b, and three exploration runs are shown in Fig. 7.
DISCUSSION
We have presented an adaptive strategy for efficient exploration in non-discrete
environments. A so-called competence map is trained to estimate the competence
(error) of the world model, and is used for driving the agent to less familiar regions.
In order to avoid unnecessary exploration costs, a selective attention mechanism
switches between exploration and exploitation. The resulting learning system is
dynamic in the sense that whenever one particular region in state space is preferred
for several runs, sooner or later the exploration behavior forces the agent to leave
this region. Benefits of this exploration technique have been demonstrated on a
robot navigation task.
However, it should be noted that the exploration method presented seeks to explore more or less the whole state-action space. This may be reasonable for the
above robot navigation task, but many state spaces, e.g. those typically found in
traditional AI, are too large for getting exhaustively explored even once. In order
to deal with such spaces, this method should be extended by some mechanism for
cutting off exploration in "irrelevant" regions in state-action space, which may be
determined by some notion of "relevance".
Note that the technique presented here does not depend on the particular control
scheme at hand. E.g., some exploration techniques in the context of reinforcement
learning may be found in [Sut90, BBS91], and are surveyed and compared in [Thr92].
Acknowledgements
The authors wish to thank Jonathan Bachrach, Andy Barto, Jorg Kindermann,
Long-Ji Lin, Alexander Linden, Tom Mitchell, Andy Moore, Satinder Singh, Don
Sofge, Alex Waibel, and the reinforcement learning group at CMU for interesting
and fruitful discussions. S. Thrun gratefully acknowledges the support by German
National Research Center for Computer Science (GMD) where part of the research
was done, and also the financial support from Siemens Corp.
References
[ACL+90] L. Atlas, D. Cohn, R. Ladner, M.A. El-Sharkawi, R.J. Marks, M.E. Aggoune, and D.C. Park. Training connectionist networks with queries and selective sampling. In D. Touretzky (ed.), Advances in Neural Information Processing Systems 2, San Mateo, CA, 1990. Morgan Kaufmann.
[BBS91] A.G. Barto, S.J. Bradtke, and S.P. Singh. Real-time learning and control using asynchronous dynamic programming. Technical Report COINS 91-57, Department of Computer Science, University of Massachusetts, MA, Aug. 1991.
[BS90] A.G. Barto and S.P. Singh. On the computational economics of reinforcement learning. In D.S. Touretzky et al. (eds.), Connectionist Models, Proceedings of the 1990 Summer School, San Mateo, CA, 1990. Morgan Kaufmann.
[Jor89] M.I. Jordan. Generic constraints on underspecified target trajectories. In Proceedings of the First International Joint Conference on Neural Networks, Washington, DC, IEEE TAB Neural Network Committee, San Diego, 1989.
[MB89] M.C. Mozer and J.R. Bachrach. Discovering the structure of a reactive environment by exploration. Technical Report CU-CS-451-89, Dept. of Computer Science, University of Colorado, Boulder, Nov. 1989.
[Mel89] B.W. Mel. Murphy: A neurally-inspired connectionist approach to learning and performance in vision-based robot motion planning. Technical Report CCSR-89-17A, Center for Complex Systems Research, Beckman Institute, University of Illinois, 1989.
[Moo90] A.W. Moore. Efficient Memory-based Learning for Robot Control. PhD thesis, Trinity Hall, University of Cambridge, England, 1990.
[Sch90] J.H. Schmidhuber. Making the world differentiable: On using supervised learning in fully recurrent neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report, Technische Universität München, Germany, 1990.
[Sch91] J.H. Schmidhuber. Adaptive confidence and adaptive curiosity. Technical Report FKI-149-91, Technische Universität München, Germany, 1991.
[Sut90] R.S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, June 1990.
[TM91] S.B. Thrun and K. Möller. On planning and exploration in non-discrete environments. Technical Report 528, GMD, St. Augustin, FRG, 1991.
[Thr92] S.B. Thrun. Efficient exploration in reinforcement learning. Technical Report CMU-CS-92-102, Carnegie Mellon University, Pittsburgh, Jan. 1992.
[Whi91] S.D. Whitehead. A study of cooperative mechanisms for faster reinforcement learning. Technical Report 365, University of Rochester, Computer Science Department, Rochester, NY, March 1991.
5,225 | 5,730 | Minimax Time Series Prediction
Alan Malek
UC Berkeley
malek@berkeley.edu
Wouter M. Koolen
Centrum Wiskunde & Informatica
wmkoolen@cwi.nl
Peter L. Bartlett
UC Berkeley & QUT
bartlett@cs.berkeley.edu
Yasin Abbasi-Yadkori
Queensland University of Technology
yasin.abbasiyadkori@qut.edu.au
Abstract
We consider an adversarial formulation of the problem of predicting a time series
with square loss. The aim is to predict an arbitrary sequence of vectors almost
as well as the best smooth comparator sequence in retrospect. Our approach allows natural measures of smoothness such as the squared norm of increments.
More generally, we consider a linear time series model and penalize the comparator sequence through the energy of the implied driving noise terms. We derive
the minimax strategy for all problems of this type and show that it can be implemented efficiently. The optimal predictions are linear in the previous observations.
We obtain an explicit expression for the regret in terms of the parameters defining
the problem. For typical, simple definitions of smoothness, the computation of the
optimal predictions involves only sparse matrices. In the case of norm-constrained
data, where the smoothness is defined in terms of the squared norm of the comparator's increments, we show that the regret grows as T/√λ_T, where T is the
length of the game and λ_T is an increasing limit on comparator smoothness.
1 Introduction
In time series prediction, tracking, and filtering problems, a learner sees a stream of (possibly noisy,
vector-valued) data and needs to predict the future path. One may think of robot poses, meteorological measurements, stock prices, etc. Popular stochastic models for such tasks include the
auto-regressive moving average (ARMA) model in time series analysis, Brownian motion models in
finance, and state space models in signal processing.
In this paper, we study the time series prediction problem in the regret framework; instead of making assumptions on the data generating process, we ask: can we predict the data sequence online
almost as well as the best offline prediction method in some comparison class (in this case, offline
means that the comparator only needs to model the data sequence after seeing all of it)? Our main
contribution is computing the exact minimax strategy for a range of time series prediction problems.
As a concrete motivating example, let us pose the simplest nontrivial such minimax problem
    min_{a₁} max_{x₁∈B} ⋯ min_{a_T} max_{x_T∈B} { Σ_{t=1}^T ‖a_t − x_t‖²  −  min_{â₁,…,â_T} [ Σ_{t=1}^T ‖â_t − x_t‖² + λ_T Σ_{t=1}^{T+1} ‖â_t − â_{t−1}‖² ] }    (1)

Here the first sum is the loss of the learner, the bracketed sum is the loss of the comparator, and the λ_T term is the comparator complexity.
as the natural generalization of L2 regularization to deal with non-stationarity comparators. We offer
two motivations for this regularization. First, one can interpret the complexity term as the magnitude
of the noise required to generate the comparator using a multivariate Gaussian random walk, and,
generalizing slightly, as the energy of the innovations required to model the comparator using a
single, fixed linear time series model (e.g. specific ARMA coefficients). Second, we can view the
comparator term in Equation (1) as akin to the Lagrangian of a constrained optimization problem.
Rather than competing with the comparator sequence â₁, …, â_T that minimizes the cumulative loss
subject to a hard constraint on the complexity term, the learner must compete with the comparator
sequence that best trades off the cumulative loss and the smoothness. The Lagrange multiplier, λ_T,
controls the trade-off. Notice that it is natural to allow λ_T to grow with T, since that penalizes the
comparator's change per round more than the loss per round.
For the particular problem (1) we obtain an efficient algorithm using amortized O(d) time per round,
where d is the dimension of the data; there is no nasty dependence on T as often happens with minimax algorithms. Our general minimax analysis extends to more advanced complexity terms. For
example, we may regularize instead by higher-order smoothness (magnitude of increments of increments, etc.), or more generally, we may consider a fixed linear process and regularize the comparator
by the energy of its implied driving noise terms (innovations). We also deal with arbitrary sequences
of rank-one quadratic constraints on the data.
We show that the minimax algorithm is of a familiar nature; it is a linear filter, with a twist. Its
coefficients are not time-invariant but instead arise from the intricate interplay between the regularization and the range of the data, combined with shrinkage. Fortunately, they may be computed in
a pre-processing step by a simple recurrence. An unexpected detail of the analysis is the following. As we will show, the regret objective in (1) is a convex quadratic function of all data, and the
sub-problem objectives that arise from the backward induction steps in the minimax analysis remain
quadratic functions of the past. However, they may be either concave or convex. Changing direction
of curvature is typically a source of technical difficulty: the minimax solution is different in either
case. Quite remarkably, we show that one can determine a priori which rounds are convex and which
are concave and apply the appropriate solution method in each.
We also consider what happens when the assumptions we need to make for the minimax analysis to
go through are violated. We will show that the obtained minimax algorithm is in fact highly robust.
Simply applying it unlicensed anyway results in adaptive regret bounds that scale naturally with the
realized data magnitude (or, more generally, its energy).
1.1 Related Work
There is a rich history of tracking problems in the expert setting. In this setting, the learner has some
finite number of actions to play and must select a distribution over actions to play each round in
such a way as to guarantee that the loss is almost as small as the best single action in hindsight. The
problem of tracking the best expert forces the learner to compare with sequences of experts (usually
with some fixed number of switches). The fixed-share algorithm [2] was an early solution, but there
has been more recent work [3, 4, 5, 6]. Tracking experts has been applied to other areas; see e.g. [7]
for an application to sequential allocation. An extension to linear combinations of experts where the
expert class is penalized by the p-norm of the sequence was considered in [1].
Minimax algorithms for squared Euclidean loss have been studied in several contexts such as Gaussian density estimation [8] and linear regression [9]. In [10], the authors showed that the minimax
algorithm for quadratic loss is Follow the Leader (i.e. predicting the previous data mean) when the
player is constrained to play in a ball around the previous data mean. Additionally, Moroshko and
Krammer [11, 12] propose a weak notion of non-stationarity that allows them to apply the last-step
minimax approach to a regression-like framework.
The tracking problem in the regret setting has been considered previously, e.g. [1], where the authors
studied the best linear predictor with a comparison class of all sequences with bounded smoothness
Σ_t ‖a_t − a_{t−1}‖² and proposed a general method for converting regret bounds in the static setting
to ones in the shifting setting (where the best expert is allowed to change).
Outline We start by presenting the formal setup in Section 2 and derive the optimal offline predictions. In Section 3 we zoom in to single-shot quadratic games, and solve these both in the convex
and concave case. With this in hand, we derive the minimax solution to the time series prediction
problem by backward induction in Section 4. In Section 5 we focus on the motivating problem
(1) for which we give a faster implementation and tightly sandwich the minimax regret. Section 6
concludes with discussion, conjectures and open problems.
2 Protocol and Offline Problem
The game protocol is described in Figure 1 and is the usual online prediction game with
squared Euclidean loss. The goal of the learner is to incur small regret, that is, to predict
the data almost as well as the best complexity-penalized sequence â₁ ⋯ â_T chosen in hindsight. Our motivating problem (1) gauged complexity by the sum of squared norms of the
increments, thus encouraging smoothness. Here we generalize to complexity terms defined
by a complexity matrix K ≽ 0, and charge the comparator â₁ ⋯ â_T by Σ_{s,t} K_{s,t} â_s^⊤ â_t.
We recover the smoothness penalty of (1) by taking K to be the T × T tridiagonal matrix
    K = ⎡  2 −1            ⎤
        ⎢ −1  2 −1         ⎥
        ⎢     ⋱  ⋱  ⋱      ⎥ ,    (2)
        ⎢       −1  2 −1   ⎥
        ⎣          −1  2   ⎦

Figure 1: Protocol. For t = 1, 2, …, T: the Learner predicts a_t ∈ R^d; the Environment reveals x_t ∈ R^d; the Learner suffers loss ‖a_t − x_t‖².

but we may also regularize by e.g. the sum of squared norms (K = I), the sum of norms of higher
order increments, or more generally, we may consider a fixed linear process and take K^{1/2} to be the
matrix that recovers the driving noise terms from the signal, and then our penalty is exactly the energy of the implied noise for that linear process. We now turn to computing the identity and quality
of the best competitor sequence in hindsight.
Theorem 1. For any complexity matrix K ≽ 0, regularization scalar λ_T ≥ 0, and d × T data
matrix X_T = [x₁ ⋯ x_T], the problem

    L* := min_{â₁,…,â_T} Σ_{t=1}^T ‖â_t − x_t‖² + λ_T Σ_{s,t} K_{s,t} â_s^⊤ â_t

has linear minimizer and quadratic value given by

    [â₁ ⋯ â_T] = X_T (I + λ_T K)⁻¹   and   L* = tr(X_T (I − (I + λ_T K)⁻¹) X_T^⊤).
Proof. Writing Â = [â₁ ⋯ â_T], we can compactly express the offline problem as

    L* = min_Â tr((Â − X_T)^⊤(Â − X_T) + λ_T K Â^⊤Â).

The derivative of the objective with respect to Â is 2(Â − X_T) + 2λ_T ÂK. Setting this to zero yields
the minimizer Â = X_T (I + λ_T K)⁻¹. Back-substitution and simplification result in the value
tr(X_T (I − (I + λ_T K)⁻¹) X_T^⊤).
Note that for the choice of K in (2), computing the optimal Â can be performed in O(dT) time by
solving the linear system Â(I + λ_T K_T) = X_T directly. This system decomposes into d (one per
dimension) independent tridiagonal systems, each in T (one per time step) variables, which can each
be solved in linear time using Gaussian elimination.
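For illustration, a short Python sketch of this O(dT) computation using SciPy's banded solver (assuming the tridiagonal K of Eq. (2); the function and variable names are our own):

```python
import numpy as np
from scipy.linalg import solve_banded

def offline_comparator(X, lam):
    """Compute [a_1 ... a_T] = X (I + lam*K)^{-1} for the tridiagonal K of Eq. (2),
    in O(dT) time, by solving (I + lam*K) y = x once per data dimension."""
    d, T = X.shape
    # Banded representation of I + lam*K: superdiagonal, main diagonal, subdiagonal.
    ab = np.zeros((3, T))
    ab[0, 1:] = -lam             # superdiagonal
    ab[1, :] = 1.0 + 2.0 * lam   # main diagonal
    ab[2, :-1] = -lam            # subdiagonal
    # (I + lam*K) is symmetric, so each solve yields one row of the comparator matrix.
    return np.vstack([solve_banded((1, 1), ab, X[i]) for i in range(d)])

X = np.random.randn(3, 100)   # d = 3 dimensions, T = 100 rounds
A_hat = offline_comparator(X, lam=10.0)
```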
This theorem shows that the objective of our minimax problem is a quadratic function of the data.
In order to solve a T round minimax problem with quadratic regret objective, we first solve simple
single round quadratic games.
3 Minimax Single-shot Squared Loss Games
One crucial tool in the minimax analysis of our tracking problem will be solving particular single-shot min-max games. In such games, the player and adversary play prediction a and data x, resulting
in payoff given by the following square loss plus a quadratic in x:

    V(a, x) := ‖a − x‖² + (α − 1)‖x‖² + 2b^⊤x.    (3)
The quadratic and linear terms in x have coefficients α ∈ R and b ∈ R^d. Note
that V(a, x) is convex in a and either convex or concave in x, as decided by
the sign of α. The following result, proved in Appendix B.1 and illustrated for
‖b‖ = 1 by the figure to the right, gives the minimax analysis for both cases.
Theorem 2. Let V(a, x) be as in (3). If ‖b‖ ≤ 1, then the minimax problem

    V* := min_{a∈R^d} max_{x∈R^d: ‖x‖≤1} V(a, x)

has value and minimizer

    V* = ‖b‖²/(1 − α)  with  a = b/(1 − α)   if α ≤ 0,
    V* = ‖b‖² + α      with  a = b           if α ≥ 0.    (4)

[Figure: the value V* as a function of α, shown for ‖b‖ = 1.]
We also want to look at the performance of this strategy when we do not impose the norm bound
‖x‖ ≤ 1 nor make the assumption ‖b‖ ≤ 1. By evaluating (3) we obtain an adaptive expression
that scales with the actual norm ‖x‖² of the data.
Theorem 3. Let a be the strategy from (4). Then, for any data x ∈ R^d and any b ∈ R^d,

    V(a, x) = ‖b‖²/(1 − α) + α ‖b/(1 − α) − x‖²   if α ≤ 0,   and
    V(a, x) = ‖b‖² + α‖x‖²                         if α ≥ 0.
These two theorems point out that the strategy in (4) is amazingly versatile. The former theorem
establishes minimax optimality under the data constraint ‖x‖ ≤ 1, assuming that ‖b‖ ≤ 1. Yet the latter
theorem tells us that, even without constraints and assumptions, this strategy is still an extremely
useful heuristic, for its actual regret is bounded by the minimax regret we would have incurred if
we had known the scale of the data ‖x‖ (and ‖b‖) in advance. The norm bound we imposed
in the derivation induces the complexity measure for the data to which the strategy adapts. This
robustness property will extend to the minimax strategy for time series prediction.
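The strategy (4) and the adaptive guarantee of Theorem 3 are easy to check numerically; a small Python sketch of our own (the chosen α, b, and x are arbitrary):

```python
import numpy as np

def single_shot_strategy(alpha, b):
    """Minimax play from Eq. (4): a = b/(1-alpha) if alpha <= 0, else a = b."""
    return b / (1.0 - alpha) if alpha <= 0 else b.copy()

def payoff(a, x, alpha, b):
    """V(a, x) from Eq. (3)."""
    return np.sum((a - x) ** 2) + (alpha - 1.0) * np.sum(x ** 2) + 2.0 * b @ x

# Sanity check of Theorem 3 in the concave case (alpha <= 0): the realized payoff
# equals ||b||^2/(1-alpha) + alpha * ||b/(1-alpha) - x||^2 for any x.
alpha, b = -0.5, np.array([0.3, 0.4])
a = single_shot_strategy(alpha, b)
x = np.random.randn(2)
lhs = payoff(a, x, alpha, b)
rhs = b @ b / (1 - alpha) + alpha * np.sum((b / (1 - alpha) - x) ** 2)
assert np.isclose(lhs, rhs)
```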
Finally, it remains to note that we present the theorems in the canonical case. Problems with a
constraint of the form ‖x − c‖ ≤ σ may be canonized by re-parameterizing by x′ = (x − c)/σ and
a′ = (a − c)/σ and scaling the objective by σ⁻². We find
Corollary 4. Fix σ ≥ 0 and c ∈ R^d. Let V*(α, b) denote the minimax value from (4) with
parameters α, b. If ‖(α − 1)c + b‖ ≤ σ, then

    min_a max_{x: ‖x−c‖≤σ} V(a, x) = σ² V*(α, ((α − 1)c + b)/σ) + 2b^⊤c + (α − 1)‖c‖².
With this machinery in place, we continue the minimax analysis of time series prediction problems.
4 Minimax Time Series Prediction
In this section, we give the minimax solution to the online prediction problem. Recall that the
evaluation criterion, the regret, is defined by
    R := Σ_{t=1}^T ‖a_t − x_t‖²  −  min_{â₁,…,â_T} [ Σ_{t=1}^T ‖â_t − x_t‖² + λ_T tr(K Â^⊤Â) ]    (5)
where K ≽ 0 is a fixed T × T matrix measuring the complexity of the comparator sequence. Since
all the derivations ahead will be for a fixed T, we drop the T subscript on the λ. We study the
minimax problem

    R* := min_{a₁} max_{x₁} ⋯ min_{a_T} max_{x_T} R    (6)

under the constraint on the data that ‖X_t v_t‖ ≤ 1 in each round t, for some fixed sequence v₁, …, v_T
such that v_t ∈ R^t. This constraint generalizes the norm bound constraint from the motivating
problem (1), which is recovered by taking v_t = e_t. This natural generalization allows us to also
consider bounded norms of increments, bounded higher order discrete derivative norms, etc.
We compute the minimax regret and get an expression for the minimax algorithm. We show that,
at any point in the game, the value is a quadratic function of the past samples and the minimax
algorithm is linear: it always predicts with a weighted sum of all past samples.
Most intriguingly, the value function can either be a convex or concave quadratic in the last data
point, depending on the regularization. We saw in the previous section that these two cases require
a different minimax solution. It is therefore an extremely fortunate fact that the particular case we
find ourselves in at each round is not a function of the past data, but just a property of the problem
parameters K and v_t.
We are going to solve the sequential minimax problem (6) one round at a time. To do so, it is
convenient to define the value-to-go of the game from any state X_t = [x₁ ⋯ x_t] recursively by

    V(X_T) := −L*   and   V(X_{t−1}) := min_{a_t} max_{x_t: ‖X_t v_t‖ ≤ 1} ‖a_t − x_t‖² + V(X_t).
We are interested in the minimax algorithm and minimax regret R* = V(X₀). We will show that
the minimax value and strategy are a quadratic and a linear function of the observations, respectively. To express the
value and strategy and state the necessary condition on the problem, we will need a series of scalars
d_t and matrices R_t ∈ R^{t×t} for t = 1, …, T, which, as we will explain below, arise naturally from
the minimax analysis. The matrices, which depend on the regularization parameter λ, comparator
complexity matrix K and data constraints v_t, are defined recursively back-to-front.
The base case is R_T := (I + λK)⁻¹. Using the convenient abbreviations v_t = (u_t; w_t) and
R_t = [ A_t, b_t; b_t^⊤, c_t ], we then recursively define R_{t−1} and set d_t by

    R_{t−1} := A_t + (b_t − c_t u_t)(b_t − c_t u_t)^⊤ − c_t u_t u_t^⊤,   d_t := c_t/w_t²   if c_t ≥ 0,    (7a)
    R_{t−1} := A_t + b_t b_t^⊤/(1 − c_t),                                 d_t := 0          if c_t ≤ 0.    (7b)
Using this recursion for d_t and R_t, we can perform the exact minimax analysis under a certain
condition on the interplay between the data constraint and the regularization. We then show below
that the obtained algorithm has a condition-free data-dependent regret bound.
Theorem 5. Assume that K and v_t are such that any data sequence X_T satisfying the constraint
‖X_t v_t‖ ≤ 1 for all rounds t ≤ T also satisfies ‖X_{t−1}((c_t − 1)u_t − b_t)‖ ≤ 1/w_t for all rounds
t ≤ T. Then the minimax value of and strategy for problem (6) are given by

    V(X_t) = tr(X_t (R_t − I) X_t^⊤) + Σ_{s=t+1}^T d_s,
    a_t = X_{t−1} b_t/(1 − c_t)   if c_t ≤ 0,      a_t = X_{t−1}(b_t − c_t u_t)   if c_t ≥ 0.

In particular, this shows that the minimax regret (6) is given by R* = Σ_{t=1}^T d_t.
Proof. By induction. The base case V(X_T) is Theorem 1. For any t < T we apply the definition
of V(X_{t−1}) and the induction hypothesis to get

    V(X_{t−1}) = min_{a_t} max_{x_t: ‖X_t v_t‖≤1} ‖a_t − x_t‖² + tr(X_t (R_t − I) X_t^⊤) + Σ_{s=t+1}^T d_s
               = tr(X_{t−1} (A_t − I) X_{t−1}^⊤) + Σ_{s=t+1}^T d_s + C

where we abbreviated

    C := min_{a_t} max_{x_t: ‖X_t v_t‖≤1} ‖a_t − x_t‖² + (c_t − 1) x_t^⊤ x_t + 2 x_t^⊤ X_{t−1} b_t.

Without loss of generality, assume w_t > 0. Now, as ‖X_t v_t‖ ≤ 1 iff ‖X_{t−1} u_t + x_t‖ ≤ 1/w_t,
application of Corollary 4 with α = c_t, b = X_{t−1} b_t, σ = 1/w_t and c = −X_{t−1} u_t, followed by
Theorem 2, results in the optimal strategy

    a_t = X_{t−1} b_t/(1 − c_t)                if c_t ≤ 0,
    a_t = −c_t X_{t−1} u_t + X_{t−1} b_t       if c_t ≥ 0,

and value

    C = (c_t − 1)‖X_{t−1} u_t‖² − 2 b_t^⊤ X_{t−1}^⊤ X_{t−1} u_t + ‖X_{t−1}((c_t − 1)u_t − b_t)‖²/(1 − c_t)   if c_t ≤ 0,
    C = (c_t − 1)‖X_{t−1} u_t‖² − 2 b_t^⊤ X_{t−1}^⊤ X_{t−1} u_t + ‖X_{t−1}((c_t − 1)u_t − b_t)‖² + c_t/w_t²   if c_t ≥ 0.

Expanding all squares and rearranging (cycling under the trace) completes the proof.
On the one hand, from a technical perspective the condition of Theorem 5 is rather natural. It
guarantees that the prediction of the algorithm will fall within the constraint imposed on the data.
(If it would not, we could benefit by clipping the prediction. This would be guaranteed to reduce the
loss, and it would wreck the backwards induction.) Similar clipping conditions arise in the minimax
analyses for linear regression [9] and square loss prediction with Mahalanobis losses [13].
In practice we typically do not have a hard bound on the data. Still, by running the above minimax
algorithm obtained for data complexity bounds ‖X_t v_t‖ ≤ 1, we get an adaptive regret bound that
scales with the actual data complexity ‖X_t v_t‖², as can be derived by replacing the application of
Theorem 2 in the proof of Theorem 5 by an invocation of Theorem 3.
Theorem 6. Let K ≽ 0 and v_t be arbitrary. The minimax algorithm obtained in Theorem 5 keeps
the regret (5) bounded by R ≤ Σ_{t=1}^T d_t ‖X_t v_t‖² for any data sequence X_T.
4.1 Computation, sparsity
In the important special case (typical application) where the regularization K and the data
constraint v_t encode some order of smoothness, we find that K is banded diagonal
and v_t only has a few tail non-zero entries. It is hence the case that R_T⁻¹ = I + λK
is sparse. We now argue that the recursive updates (7) preserve sparsity of the inverse R_t⁻¹. In
Appendix C we derive an update for R_{t−1}⁻¹ in terms of R_t⁻¹. For computation it hence makes sense
to tabulate R_t⁻¹ directly. We now argue (proof in Appendix B.2) that all R_t⁻¹ are sparse.
Theorem 7. Say the v_t are V-sparse (all but their tail V entries are zero), and say that K is
D-banded (all but the main and the D − 1 adjacent diagonals to either side are zero). Then each
R_t⁻¹ is the sum of the D-banded matrix I + λK_{1:t,1:t} and a (D + V − 2)-blocked matrix (i.e. all
but the lower-right block of size D + V − 2 is zero).
So what does this sparsity argument buy us? We only need to maintain the original D-banded matrix
K and the (D + V − 2)² entries of the block perturbation. These entries can be updated backwards
from t = T, …, 1 in O((D + V − 2)³) time per round using block matrix inverses. This means that
the run-time of the entire pre-processing step is linear in T. For updates and prediction we need c_t
and b_t, which we can compute using Gaussian elimination from R_t⁻¹ in O(t(D + V)) time. In the
next section we will see a special case in which we can update and predict in constant time.
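For concreteness, here is a dense (non-sparse) Python sketch of recursion (7), returning the per-round quantities that Theorem 5's strategy and regret need. The names and the list-of-vectors interface are our own; the banded bookkeeping described above is what would make this linear in T:

```python
import numpy as np

def backward_recursion(K, V, lam):
    """Dense sketch of recursion (7). K: T x T complexity matrix; V: list of
    constraint vectors v_1, ..., v_T with v_t in R^t; lam: regularization.
    Returns per-round (b_t, c_t, u_t, d_t) as used by Theorem 5."""
    T = K.shape[0]
    R = np.linalg.inv(np.eye(T) + lam * K)           # base case R_T
    coeffs = [None] * T
    for t in range(T, 0, -1):                        # t = T, ..., 1
        A, b, c = R[:t-1, :t-1], R[:t-1, t-1], R[t-1, t-1]
        u, w = V[t-1][:t-1], V[t-1][t-1]
        d = c / w**2 if c >= 0 else 0.0              # d_t from (7a)/(7b)
        coeffs[t-1] = (b, c, u, d)
        if c >= 0:                                   # convex-in-x case, Eq. (7a)
            R = A + np.outer(b - c*u, b - c*u) - c * np.outer(u, u)
        else:                                        # concave-in-x case, Eq. (7b)
            R = A + np.outer(b, b) / (1.0 - c)
    return coeffs
```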
5 Norm-bounded Data with Increment Squared Regularization
We return to our motivating problem (1) with complexity matrix K = K_T given by (2) and norm-constrained data, i.e. v_t = e_t. We show that the R_t matrices are very simple: their inverse is
I + λK_t with its lower-right entry perturbed. Using this, we show that the prediction is a linear
combination of the past observations with weights decaying exponentially backward in time. We
derive a constant-time update equation for the minimax prediction and tightly sandwich the regret.
Here, we will calculate a few quantities that will be useful throughout this section. The inverse
(I + λK_T)⁻¹ can be computed in closed form as a direct application of the results in [14]:

Lemma 8. Recall that sinh(x) = (e^x − e^{−x})/2 and cosh(x) = (e^x + e^{−x})/2. For any λ ≥ 0,

    (I + λK_T)⁻¹_{i,j} = [cosh((T + 1 − |i − j|)ρ) − cosh((T + 1 − i − j)ρ)] / [2λ sinh(ρ) sinh((T + 1)ρ)],

where ρ = cosh⁻¹(1 + 1/(2λ)).
We need some control on this inverse. We will use the abbreviations

    z_t := (I + λK_t)⁻¹ e_t,    (8)
    h_t := e_t^⊤ (I + λK_t)⁻¹ e_t = e_t^⊤ z_t,    (9)
    h := 2 / (1 + 2λ + √(1 + 4λ)).    (10)
We now show that these quantities are easily computable (see Appendix B for proofs).
Lemma 9. Let λ be as in Lemma 8. Then we can write

    h_t = h (1 − (λh)^{2t}) / (1 − (λh)^{2t+2}),

and lim_{t→∞} h_t = h from below, exponentially fast.
A direct application of block matrix inversion (Lemma 12) results in
Lemma 10. We have

    h_t = 1 / (1 + 2λ − λ² h_{t−1})   and   z_t = h_t (−λ z_{t−1}; 1).
Intriguingly, following the optimal algorithm for all T rounds can be done in O(Td) computation
and O(d) memory. These resource requirements are surprising, as playing weighted averages typically requires O(T²d). We found that the weighted averages are similar between rounds and can be
updated cheaply.
We are now ready to state the main result of this section, proved in Appendix B.3.
Theorem 11. Let z_t and h_t be as in (8) and K_t as in (2). For the minimax problem (1) we have

    R_t⁻¹ = I + λK_t + γ_t e_t e_t^⊤

and the minimax prediction in round t is given by

    a_t = −c_t X_{t−1} z_{t−1},

where γ_t = 1/c_t − 1/h_t and the c_t satisfy the recurrence c_T = h_T and c_{t−1} = h_{t−1} + λ² h_{t−1}² c_t (1 + c_t).

5.1 Implementation
Theorem 11 states that the minimax prediction is a_t = −c_t X_{t−1} z_{t−1}. Using Lemma 10, we can
derive an incremental update for a_t by defining a₁ = 0 and

    a_{t+1} = −c_{t+1} X_t z_t = −c_{t+1} [X_{t−1}  x_t] h_t (−λ z_{t−1}; 1)
            = −c_{t+1} h_t (−λ X_{t−1} z_{t−1} + x_t)
            = −c_{t+1} h_t (λ a_t / c_t + x_t).
This means we can predict in constant time O(d) per round.
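A complete constant-time-per-round predictor for this setting, as a hedged Python sketch of our own (the precomputation follows Lemma 10 and Theorem 11, and the prediction loop follows the update just derived):

```python
import numpy as np

def minimax_predictor(X, lam):
    """Minimax predictions for problem (1) with K = K_T from (2) and v_t = e_t."""
    d, T = X.shape
    h = np.zeros(T + 1)                   # h[t] = h_t, with the convention h_0 = 0
    for t in range(1, T + 1):
        h[t] = 1.0 / (1.0 + 2.0 * lam - lam**2 * h[t - 1])   # Lemma 10
    c = np.zeros(T + 2)
    c[T] = h[T]
    for t in range(T, 1, -1):             # c_{t-1} = h_{t-1} + lam^2 h_{t-1}^2 c_t (1 + c_t)
        c[t - 1] = h[t - 1] + lam**2 * h[t - 1]**2 * c[t] * (1.0 + c[t])
    A = np.zeros((d, T))                  # A[:, t-1] holds prediction a_t; a_1 = 0
    a = np.zeros(d)
    for t in range(1, T):                 # O(d) incremental update per round
        a = -c[t + 1] * h[t] * (lam * a / c[t] + X[:, t - 1])
        A[:, t] = a
    return A

X = np.random.randn(2, 50)
A = minimax_predictor(X, lam=25.0)
```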
5.2 Lower Bound
By Theorem 5, using that w_t = 1 so that d_t = c_t, the minimax regret equals Σ_{t=1}^T c_t. For convenience, we define r_t := 1 − (λ_T h)^{2t} (and r_{T+1} = 1) so that h_t = h r_t/r_{t+1}. We can obtain a lower
bound on c_t from the expression given in Theorem 11 by ignoring the (positive) c_t² term, to obtain
c_{t−1} ≥ h_{t−1} + λ_T² h_{t−1}² c_t. By unpacking this lower bound recursively, we arrive at

    c_t ≥ h Σ_{k=t}^T (λ_T h)^{2(k−t)} r_t² / (r_k r_{k+1}).
Since r_t²/(r_i r_{i+1}) is a decreasing function in i for every t, we have r_t²/(r_i r_{i+1}) ≥ r_t/r_{t+1}, which leads to

    Σ_{t=1}^T c_t ≥ h Σ_{t=1}^T Σ_{k=t}^T (λ_T h)^{2(k−t)} r_t/r_{t+1} ≥ h ∫_0^{T−1} ∫_{t+1}^T (λ_T h)^{2(k−t)} r_t/r_{t+1} dk dt = −hT / (2 log(λ_T h)),
k=t
where we have exploited the fact that the integrand is monotonic and concave in k and monotonic
and convex in t to lower bound the sums with an integral. See Claim 14 in the appendix for more
?
PT
details. Since ? log(?T h) = O(1/ ?T ) and h = ?(1/?T ), we have that t=1 ct = ?( ?T? ),
T
matching the upper bound below.
5.3 Upper Bound
As h ≥ h_t, the alternative recursion c′_{T+1} = 0 and c′_{t−1} = h + λ²h² c′_t (1 + c′_t) satisfies c′_t ≥ c_t.
A simple induction¹ shows that c′_t is increasing with decreasing t, and it must hence have a limit.
This limit is a fixed point of c ↦ h + λ²h² c(1 + c). This results in a quadratic equation, which has
two solutions. Our starting point c′_{T+1} = 0 lies below the half-way point (1 − λ²h²)/(2λ²h²) > 0, so the sought
limit is the smaller solution:

    c = [ −λ²h² + 1 − √((λ²h² − 1)² − 4λ²h³) ] / (2λ²h²).
This is monotonic in h. Plugging in the definition of h yields a closed-form, if unwieldy, expression
for c in terms of λ alone. Series expansion around λ → ∞ results in c ≈ (1 + λ)^{−1/2}.
T
?
?
R = O
,
1 + ?T
where we have written the explicit T dependence of ?. As discussed in the introduction, allowing
?T to grow with T is natural and necessary for sub-linear regret. If ?T were constant, the regret term
and complexity term would grow with T at the same rate, effectively forcing the learner to compete
with sequences that could track the xt sequence arbitrarily well.
6 Discussion
We looked at obtaining the minimax solution to simple tracking/filtering/time series prediction problems with square loss, square norm regularization and square norm data constraints. We obtained a
computational method to get the minimax result. Surprisingly, the problem turns out to be a mixture
of per-step quadratic minimax problems that can be either concave or convex. These two problems
have different solutions. Since the type of problem that is faced in each round is not a function
of the past data, but only of the regularization, the coefficients of the value-to-go function can still
be computed recursively. However, extending the analysis beyond quadratic loss and constraints is
difficult; the self-dual property of the 2-norm is central to the calculations.
Several open problems arise. The stability of the coefficient recursion is so far elusive. For the case
of norm bounded data, we found that the c_t are positive and essentially constant. However, for higher
order smoothness constraints on the data (norm bounded increments, increments of increments,
...) the situation is more intricate. We find negative c_t and oscillating c_t, both diminishing and
increasing. Understanding the behavior of the minimax regret and algorithm as a function of the
regularization K (so that we can tune λ appropriately) is an intriguing and elusive open problem.
Acknowledgments
We gratefully acknowledge the support of the NSF through grant CCF-1115788, and of the Australian Research Council through an Australian Laureate Fellowship (FL110100281) and through
the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Thanks also to the Simons Institute for the Theory of Computing Spring 2015 Information Theory Program.
¹ For the base case, c′_{T+1} = 0 ≤ c′_T = h. Then c′_{t−1} = h + λ²h² c′_t (1 + c′_t) ≥ h + λ²h² c′_{t+1}(1 + c′_{t+1}) = c′_t.
References
[1] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. The Journal of Machine Learning Research, 1:281-309, 2001.
[2] Mark Herbster and Manfred K. Warmuth. Tracking the best expert. Machine Learning, 32:151-178, 1998.
[3] Claire Monteleoni. Online learning of non-stationary sequences. Master's thesis, MIT, May 2003. Artificial Intelligence Report 2003-11.
[4] Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu. An online learning-based framework for tracking. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI), pages 101-108, 2010.
[5] Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing past posteriors. The Journal of Machine Learning Research, 3:363-396, 2003.
[6] Nicolò Cesa-Bianchi, Pierre Gaillard, Gábor Lugosi, and Gilles Stoltz. Mirror Descent meets Fixed Share (and feels no regret). In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 980-988. Curran Associates, Inc., 2012.
[7] Avrim Blum and Carl Burch. On-line learning and the metrical task system problem. Machine Learning, 39(1):35-58, 2000.
[8] Eiji Takimoto and Manfred K. Warmuth. The minimax strategy for Gaussian density estimation. In 13th COLT, pages 100-106, 2000.
[9] Peter L. Bartlett, Wouter M. Koolen, Alan Malek, Manfred K. Warmuth, and Eiji Takimoto. Minimax fixed-design linear regression. In P. Grünwald, E. Hazan, and S. Kale, editors, Proceedings of The 28th Annual Conference on Learning Theory (COLT), pages 226-239, 2015.
[10] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory (COLT 2008), pages 415-423, December 2008.
[11] Edward Moroshko and Koby Crammer. Weighted last-step min-max algorithm with improved sub-logarithmic regret. In N. H. Bshouty, G. Stoltz, N. Vayatis, and T. Zeugmann, editors, Algorithmic Learning Theory - 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings, volume 7568 of Lecture Notes in Computer Science, pages 245-259. Springer, 2012.
[12] Edward Moroshko and Koby Crammer. A last-step regression algorithm for non-stationary online learning. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2013, Scottsdale, AZ, USA, April 29 - May 1, 2013, volume 31 of JMLR Proceedings, pages 451-462. JMLR.org, 2013.
[13] Wouter M. Koolen, Alan Malek, and Peter L. Bartlett. Efficient minimax strategies for square loss games. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems (NIPS) 27, pages 3230-3238, December 2014.
[14] G. Y. Hu and Robert F. O'Connell. Analytical inversion of symmetric tridiagonal matrices. Journal of Physics A: Mathematical and General, 29(7):1511, 1996.
5,226 | 5,731 | Communication Complexity of Distributed
Convex Learning and Optimization
Ohad Shamir
Weizmann Institute of Science
Rehovot 7610001, Israel
ohad.shamir@weizmann.ac.il
Yossi Arjevani
Weizmann Institute of Science
Rehovot 7610001, Israel
yossi.arjevani@weizmann.ac.il
Abstract
We study the fundamental limits to communication-efficient distributed methods
for convex learning and optimization, under different assumptions on the information available to individual machines, and the types of functions considered. We
identify cases where existing algorithms are already worst-case optimal, as well as
cases where room for further improvement is still possible. Among other things,
our results indicate that without similarity between the local objective functions
(due to statistical data similarity or otherwise) many communication rounds may
be required, even if the machines have unbounded computational power.
1 Introduction
We consider the problem of distributed convex learning and optimization, where a set of m machines, each with access to a different local convex function $F_i : \mathbb{R}^d \to \mathbb{R}$ and a convex domain $\mathcal{W} \subseteq \mathbb{R}^d$, attempt to solve the optimization problem
$$\min_{w \in \mathcal{W}} F(w) \quad\text{where}\quad F(w) = \frac{1}{m}\sum_{i=1}^{m} F_i(w). \tag{1}$$
A prominent application is empirical risk minimization, where the goal is to minimize the average
loss over some dataset, where each machine has access to a different subset of the data. Letting $\{z_1, \ldots, z_N\}$ be the dataset composed of N examples, and assuming the loss function $\ell(w,z)$ is convex in w, the empirical risk minimization problem $\min_{w\in\mathcal{W}}\frac{1}{N}\sum_{i=1}^{N}\ell(w,z_i)$ can be written as in Eq. (1), where $F_i(w)$ is the average loss over machine i's examples.
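To make the objective concrete, here is a minimal numpy sketch (our own toy example with hypothetical data and the squared loss; not code from the paper) of how an empirical risk decomposes into the per-machine averages of Eq. (1):

```python
import numpy as np

def local_objective(w, X, y):
    """F_i(w): average squared loss over one machine's data shard."""
    return 0.5 * np.mean((X @ w - y) ** 2)

def global_objective(w, shards):
    """F(w) = (1/m) * sum_i F_i(w), the objective of Eq. (1)."""
    return np.mean([local_objective(w, X, y) for X, y in shards])

# Toy instance: N = m * n points split evenly across m machines.
rng = np.random.default_rng(0)
m, n, d = 4, 250, 10
X_all = rng.normal(size=(m * n, d))
y_all = X_all @ rng.normal(size=d) + 0.1 * rng.normal(size=m * n)
shards = [(X_all[i * n:(i + 1) * n], y_all[i * n:(i + 1) * n]) for i in range(m)]

w0 = np.zeros(d)
# Equals the average loss over the full dataset, since the shards have equal size.
print(global_objective(w0, shards))
```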
The main challenge in solving such problems is that communication between the different machines
is usually slow and constrained, at least compared to the speed of local processing. On the other
hand, the datasets involved in distributed learning are usually large and high-dimensional. Therefore,
machines cannot simply communicate their entire data to each other, and the question is how well
can we solve problems such as Eq. (1) using as little communication as possible.
As datasets continue to increase in size, and parallel computing platforms becoming more and more
common (from multiple cores on a single CPU to large-scale and geographically distributed computing grids), distributed learning and optimization methods have been the focus of much research
in recent years, with just a few examples including [25, 4, 2, 27, 1, 5, 13, 23, 16, 17, 8, 7, 9, 11, 20,
19, 3, 26]. Most of this work studied algorithms for this problem, which provide upper bounds on
the required time and communication complexity.
In this paper, we take the opposite direction, and study what are the fundamental performance limitations in solving Eq. (1), under several different sets of assumptions. We identify cases where
existing algorithms are already optimal (at least in the worst-case), as well as cases where room for
further improvement is still possible.
Since a major constraint in distributed learning is communication, we focus on studying the amount of communication required to optimize Eq. (1) up to some desired accuracy ε. More precisely, we consider the number of communication rounds that are required, where in each communication round the machines can generally broadcast to each other information linear in the problem's dimension d (e.g. a point in $\mathcal{W}$ or a gradient). This applies to virtually all algorithms for large-scale learning we are aware of, where sending vectors and gradients is feasible, but computing and sending larger objects, such as Hessians (d × d matrices), is not.
Our results pertain to several possible settings (see Sec. 2 for precise definitions). First, we distinguish between the local functions being merely convex or strongly-convex, and whether they are
smooth or not. These distinctions are standard in studying optimization algorithms for learning, and
capture important properties such as the regularization and the type of loss function used. Second,
we distinguish between a setting where the local functions are related (e.g., because they reflect statistical similarities in the data residing at different machines) and a setting where no relationship is assumed. For example, in the extreme case where data was split uniformly at random between machines, one can show that quantities such as the values, gradients and Hessians of the local functions differ only by $\delta = O(1/\sqrt{n})$, where n is the sample size per machine, due to concentration of measure effects. Such similarities can be used to speed up the optimization/learning process, as was done in e.g. [20, 26]. Both the δ-related and the unrelated setting can be considered in a unified way, by letting δ be a parameter and studying the attainable lower bounds as a function of δ. Our results can be summarized as follows:
• First, we define a mild structural assumption on the algorithm (which is satisfied by reasonable approaches we are aware of), which allows us to provide the lower bounds described below on the number of communication rounds required to reach a given suboptimality ε.
• When the local functions can be unrelated, we prove a lower bound of $\Omega(\sqrt{1/\lambda}\,\log(1/\epsilon))$ for smooth and λ-strongly convex functions, and $\Omega(\sqrt{1/\epsilon})$ for smooth convex functions. These lower bounds are matched by a straightforward distributed implementation of accelerated gradient descent. In particular, the results imply that many communication rounds may be required to get a high-accuracy solution, and moreover, that no algorithm satisfying our structural assumption would be better, even if we endow the local machines with unbounded computational power. For non-smooth functions, we show a lower bound of $\Omega(\sqrt{1/(\lambda\epsilon)})$ for λ-strongly convex functions, and $\Omega(1/\epsilon)$ for general convex functions. Although we leave a full derivation to future work, it seems these lower bounds can be matched in our framework by an algorithm combining acceleration and Moreau proximal smoothing of the local functions.
• When the local functions are related (as quantified by the parameter δ), we prove a communication round lower bound of $\Omega(\sqrt{\delta/\lambda}\,\log(1/\epsilon))$ for smooth and λ-strongly convex functions. For quadratics, this bound is matched (up to constants and logarithmic factors) by the recently-proposed DISCO algorithm [26]. However, getting an optimal algorithm for general strongly convex and smooth functions in the δ-related setting, let alone for non-smooth or non-strongly convex functions, remains open.
• We also study the attainable performance without posing any structural assumptions on the algorithm, but in the more restricted case where only a single round of communication is allowed. We prove that in a broad regime, the performance of any distributed algorithm may be no better than a "trivial" algorithm which returns the minimizer of one of the local functions, as long as the number of bits communicated is less than $\Omega(d^2)$. Therefore, in our setting, no communication-efficient 1-round distributed algorithm can provide non-trivial performance in the worst case.
Related Work
There have been several previous works which considered lower bounds in the context of distributed
learning and optimization, but to the best of our knowledge, none of them provide a similar type of
results. Perhaps the most closely-related paper is [22], which studied the communication complexity of distributed optimization, and showed that $\Omega(d\log(1/\epsilon))$ bits of communication are necessary between the machines, for d-dimensional convex problems. However, in our setting this does not lead to any non-trivial lower bound on the number of communication rounds (indeed, just specifying a d-dimensional vector up to accuracy ε requires $O(d\log(1/\epsilon))$ bits). More recently, [2] considered lower bounds for certain types of distributed learning problems, but not convex ones in an agnostic
distribution-free framework. In the context of lower bounds for one-round algorithms, the results of [6] imply that $\Omega(d^2)$ bits of communication are required to solve linear regression in one round of communication. However, that paper assumes a different model than ours, where the function to be optimized is not split among the machines as in Eq. (1), where each $F_i$ is convex. Moreover, issues such as strong convexity and smoothness are not considered. [20] proves an impossibility result for a one-round distributed learning scheme, even when the local functions are not merely δ-related, but actually result from splitting data uniformly at random between machines. On the flip side, that result is for a particular algorithm, and doesn't apply to any possible method.
Finally, we emphasize that distributed learning and optimization can be studied under many settings,
including ones different than those studied here. For example, one can consider distributed learning
on a stream of i.i.d. data [19, 7, 10, 8], or settings where the computing architecture is different, e.g.
where the machines have a shared memory, or the function to be optimized is not split as in Eq. (1).
Studying lower bounds in such settings is an interesting topic for future work.
2 Notation and Framework
The only vector and matrix norms used in this paper are the Euclidean norm and the spectral norm, respectively. $e_j$ denotes the j-th standard unit vector. We let $\nabla G(w)$ and $\nabla^2 G(w)$ denote the gradient and Hessian of a function G at w, if they exist. G is smooth (with parameter L) if it is differentiable and the gradient is L-Lipschitz. In particular, if $w^* = \arg\min_{w\in\mathcal{W}} G(w)$, then $G(w) - G(w^*) \le \frac{L}{2}\|w - w^*\|^2$. G is strongly convex (with parameter λ) if for any $w, w' \in \mathcal{W}$, $G(w') \ge G(w) + \langle g, w' - w\rangle + \frac{\lambda}{2}\|w' - w\|^2$, where $g \in \partial G(w)$ is a subgradient of G at w. In particular, if $w^* = \arg\min_{w\in\mathcal{W}} G(w)$, then $G(w) - G(w^*) \ge \frac{\lambda}{2}\|w - w^*\|^2$. Any convex function is also strongly-convex with λ = 0. A special case of smooth convex functions are quadratics, where $G(w) = w^\top A w + b^\top w + c$ for some positive semidefinite matrix A, vector b and scalar c. In this case, λ and L correspond to the smallest and largest eigenvalues of A.
We model the distributed learning algorithm as an iterative process, where in each round the machines may perform some local computations, followed by a communication round where each machine broadcasts a message to all other machines. We make no assumptions on the computational complexity of the local computations. After all communication rounds are completed, a designated machine provides the algorithm's output (possibly after additional local computation).
Clearly, without any assumptions on the number of bits communicated, the problem can be trivially solved in one round of communication (e.g. each machine communicates the function $F_i$ to the designated machine, which then solves Eq. (1)). However, in practical large-scale scenarios, this is non-feasible, and the size of each message (measured by the number of bits) is typically on the order of $\tilde{O}(d)$, enough to send a d-dimensional real-valued vector¹, such as points in the optimization domain or gradients, but not larger objects such as d × d Hessians.
In this model, our main question is the following: How many rounds of communication are necessary in order to solve problems such as Eq. (1) to some given accuracy ε?
As discussed in the introduction, we first need to distinguish between different assumptions on the
possible relation between the local functions. One natural situation is when no significant relationship can be assumed, for instance when the data is arbitrarily split or is gathered by each machine
from statistically dissimilar sources. We denote this as the unrelated setting. However, this assumption is often unnecessarily pessimistic. Often the data allocation process is more random, or we can
assume that the different data sources for each machine have statistical similarities (to give a simple example, consider learning from users' activity across a geographically distributed computing
grid, each servicing its own local population). We will capture such similarities, in the context of
quadratic functions, using the following definition:
Definition 1. We say that a set of quadratic functions
$$F_i(w) := w^\top A_i w + b_i^\top w + c_i, \qquad A_i \in \mathbb{R}^{d\times d},\ b_i \in \mathbb{R}^d,\ c_i \in \mathbb{R}$$
are δ-related, if for any $i, j \in \{1, \ldots, k\}$, it holds that
$$\|A_i - A_j\| \le \delta, \qquad \|b_i - b_j\| \le \delta, \qquad |c_i - c_j| \le \delta.$$
¹The $\tilde{O}$ notation hides constants and factors logarithmic in the required accuracy of the solution. The idea is that we can represent real numbers up to some arbitrarily high machine precision, enough so that finite-precision issues are not a problem.
For example, in the context of linear regression with the squared loss over a bounded subset of $\mathbb{R}^d$, and assuming mn data points with bounded norm are randomly and equally split among m machines, it can be shown that the conditions above hold with $\delta = O(1/\sqrt{n})$ [20]. The choice of δ provides us with a spectrum of learning problems ranked by difficulty: When $\delta = \Omega(1)$, this generally corresponds to the unrelated setting discussed earlier. When $\delta = O(1/\sqrt{n})$, we get the situation typical of randomly partitioned data. When δ = 0, all the local functions have essentially the same minimizers, in which case Eq. (1) can be trivially solved with zero communication, just by letting one machine optimize its own local function. We note that although Definition 1 can be generalized to non-quadratic functions, we do not need it for the results presented here.
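As a quick illustration of this scaling (a hypothetical simulation of ours, not an experiment from the paper), one can randomly split a synthetic least-squares dataset between two machines and measure the spectral-norm gap between the local quadratic coefficient matrices; the gap shrinks roughly as $1/\sqrt{n}$:

```python
import numpy as np

def quadratic_part(X):
    """A_i = (1/n) X_i^T X_i, the quadratic coefficient of machine i's loss."""
    return X.T @ X / X.shape[0]

rng = np.random.default_rng(1)
d = 5
for n in [100, 1000, 10000]:
    A1 = quadratic_part(rng.normal(size=(n, d)))
    A2 = quadratic_part(rng.normal(size=(n, d)))
    gap = np.linalg.norm(A1 - A2, ord=2)
    print(f"n={n:6d}  ||A_1 - A_2|| = {gap:.4f}  sqrt(n)*gap = {np.sqrt(n) * gap:.3f}")
```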
We end this section with an important remark. In this paper, we prove lower bounds for the δ-related setting, which includes as a special case the commonly-studied setting of randomly partitioned data (in which case $\delta = O(1/\sqrt{n})$). However, our bounds do not apply for random partitioning, since they use δ-related constructions which do not correspond to randomly partitioned data. In fact, very recent work [12] has cleverly shown that for randomly partitioned data, and for certain reasonable regimes of strong convexity and smoothness, it is actually possible to get better performance than what is indicated by our lower bounds. However, this encouraging result crucially relies on the random partition property, and in parameter regimes which limit how much each data point needs to be "touched", hence preserving key statistical independence properties. We suspect that it may be difficult to improve on our lower bounds under substantially weaker assumptions.
3 Lower Bounds Using a Structural Assumption
In this section, we present lower bounds on the number of communication rounds, where we impose a certain mild structural assumption on the operations performed by the algorithm. Roughly
speaking, our lower bounds pertain to a very large class of algorithms, which are based on linear
operations involving points, gradients, and vector products with local Hessians and their inverses,
as well as solving local optimization problems involving such quantities. At each communication
round, the machines can share any of the vectors they have computed so far. Formally, we consider algorithms which satisfy the assumption stated below. For convenience, we state it for smooth
functions (which are differentiable) and discuss the case of non-smooth functions in Sec. 3.2.
Assumption 1. For each machine j, define a set $W_j \subseteq \mathbb{R}^d$, initially $W_j = \{0\}$. Between communication rounds, each machine j iteratively computes and adds to $W_j$ some finite number of points w, each satisfying
$$\gamma w + \nu\nabla F_j(w) \in \mathrm{span}\Bigl\{w',\ \nabla F_j(w'),\ (\nabla^2 F_j(w') + D)w'',\ (\nabla^2 F_j(w') + D)^{-1}w''\ \Bigm|\ w', w'' \in W_j,\ D\ \text{diagonal},\ \nabla^2 F_j(w')\ \text{exists},\ (\nabla^2 F_j(w') + D)^{-1}\ \text{exists}\Bigr\} \tag{2}$$
for some $\gamma, \nu \ge 0$ such that $\gamma + \nu > 0$. After every communication round, let $W_j := \bigcup_{i=1}^{m} W_i$ for all j. The algorithm's final output (provided by the designated machine j) is a point in the span of $W_j$.
This assumption requires several remarks:
• Note that $W_j$ is not an explicit part of the algorithm: It simply includes all points computed by machine j so far, or communicated to it by other machines, and is used to define the set of new points which the machine is allowed to compute.
• The assumption bears some resemblance to, but is far weaker than, standard assumptions used to provide lower bounds for iterative optimization algorithms. For example, a common assumption (see [14]) is that each computed point w must lie in the span of the previous gradients. This corresponds to a special case of Assumption 1, where γ = 1, ν = 0, and the span is only over gradients of previously computed points. Moreover, it also allows (for instance) exact optimization of each local function, which is a subroutine in some distributed algorithms (e.g. [27, 25]), by setting γ = 0, ν = 1 and computing a point w satisfying $\gamma w + \nu\nabla F_j(w) = 0$. By allowing the span to include previous gradients, we also incorporate algorithms which perform optimization of the local function plus terms involving previous gradients and points, such as [20], as well as algorithms which rely on local Hessian information and preconditioning, such as [26]. In summary, the assumption is satisfied by most techniques for black-box convex optimization that we are aware of. Finally, we emphasize that we do not restrict the number or computational complexity of the operations performed between communication rounds.
• The requirement that γ, ν ≥ 0 is to exclude algorithms which solve non-convex local optimization problems of the form $\min_w F_j(w) + \lambda\|w\|^2$ with λ < 0, which are unreasonable in practice and can sometimes break our lower bounds.
• The assumption that $W_j$ is initially {0} (namely, that the algorithm starts from the origin) is purely for convenience, and our results can be easily adapted to any other starting point by shifting all functions accordingly.
The techniques we employ in this section are inspired by lower bounds on the iteration complexity of first-order methods for standard (non-distributed) optimization (see for example [14]). These are based on the construction of "hard" functions, where each gradient (or subgradient) computation can only provide a small improvement in the objective value. In our setting, the dynamics are roughly similar, but the necessity of many gradient computations is replaced by many communication rounds. This is achieved by constructing suitable local functions, where at any time point no individual machine can "progress" on its own, without information from other machines.
3.1 Smooth Local Functions
We begin by presenting a lower bound when the local functions Fi are strongly-convex and smooth:
Theorem 1. For any even number m of machines, any distributed algorithm which satisfies Assumption 1, and for any λ ∈ [0,1), δ ∈ (0,1), there exist m local quadratic functions over $\mathbb{R}^d$ (where d is sufficiently large) which are 1-smooth, λ-strongly convex, and δ-related, such that if $w^* = \arg\min_{w\in\mathbb{R}^d} F(w)$, then the number of communication rounds required to obtain $\hat{w}$ satisfying $F(\hat{w}) - F(w^*) \le \epsilon$ (for any ε > 0) is at least
$$\frac{1}{4}\left(\sqrt{\frac{1+\delta/\lambda}{2}} - 1\right)\log\left(\frac{\lambda\|w^*\|^2}{4\epsilon}\right) \;=\; \Omega\!\left(\sqrt{\frac{\delta}{\lambda}}\,\log\left(\frac{\lambda\|w^*\|^2}{\epsilon}\right)\right)$$
if λ > 0, and at least $\sqrt{\frac{3\delta}{32\epsilon}}\,\|w^*\| - 2$ if λ = 0.
The assumption of m being even is purely for technical convenience, and can be discarded at the
cost of making the proof slightly more complex. Also, note that m does not appear explicitly in
the bound, but may appear implicitly, via δ (for example, in a statistical setting δ may depend on
the number of data points per machine, and may be larger if the same dataset is divided to more
machines).
Let us contrast our lower bound with some existing algorithms and guarantees in the literature. First,
regardless of whether the local functions are similar or not, we can always simulate any gradient-based method designed for a single machine, by iteratively computing gradients of the local functions, and performing a communication round to compute their average. Clearly, this will be a gradient of the objective function $F(\cdot) = \frac{1}{m}\sum_{i=1}^{m} F_i(\cdot)$, which can be fed into any gradient-based method such as gradient descent or accelerated gradient descent [14]. The resulting number of required communication rounds is then equal to the number of iterations. In particular, using accelerated gradient descent for smooth and λ-strongly convex functions yields a round complexity of $O(\sqrt{1/\lambda}\,\log(\|w^*\|^2/\epsilon))$, and $O(\|w^*\|\sqrt{1/\epsilon})$ for smooth convex functions. This matches our lower bound (up to constants and log factors) when the local functions are unrelated ($\delta = \Omega(1)$).
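The following sketch (a minimal simulation of ours with hypothetical quadratic local functions; only the message pattern matters here) makes the baseline explicit: each iteration of a gradient method costs exactly one communication round, in which the machines average their local gradients.

```python
import numpy as np

def distributed_gd(local_grads, d, step, rounds):
    """Plain distributed gradient descent: one averaging round per iteration."""
    w = np.zeros(d)
    for _ in range(rounds):
        # One communication round: every machine broadcasts its local gradient.
        g = np.mean([grad(w) for grad in local_grads], axis=0)
        w = w - step * g
    return w

# Toy instance: F_i(w) = 0.5 * w^T A_i w - b_i^T w on m machines.
rng = np.random.default_rng(2)
d, m = 8, 4
As = []
for _ in range(m):
    B = rng.normal(size=(d, d))
    As.append(B @ B.T / d + np.eye(d))       # positive definite quadratic part
bs = [rng.normal(size=d) for _ in range(m)]
grads = [lambda w, A=A, b=b: A @ w - b for A, b in zip(As, bs)]
w_hat = distributed_gd(grads, d, step=0.1, rounds=300)
```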
When the functions are related, however, the upper bounds above are highly sub-optimal: Even if the local functions are completely identical, and δ = 0, the number of communication rounds will remain the same as when $\delta = \Omega(1)$. To utilize function similarity while guaranteeing arbitrarily small ε, the two most relevant algorithms are DANE [20], and the more recent DISCO [26]. For smooth and λ-strongly convex functions, which are either quadratic or satisfy a certain self-concordance condition, DISCO achieves $\tilde{O}(1+\sqrt{\delta/\lambda})$ round complexity ([26, Thm.2]), which matches our lower bound in terms of dependence on δ, λ. However, for non-quadratic losses, the round complexity bounds are somewhat worse, and there are no guarantees for strongly convex and smooth functions which are not self-concordant. Thus, the question of the optimal round complexity for such functions remains open.
The full proof of Thm. 1 appears in the supplementary material, and is based on the following idea:
For simplicity, suppose we have two machines, with local functions $F_1, F_2$ defined as follows:
$$F_1(w) = \frac{\delta(1-\lambda)}{4}\, w^\top A_1 w - \frac{\delta(1-\lambda)}{2}\, e_1^\top w + \frac{\lambda}{2}\|w\|^2$$
$$F_2(w) = \frac{\delta(1-\lambda)}{4}\, w^\top A_2 w + \frac{\lambda}{2}\|w\|^2, \quad\text{where}$$
$$A_1 = \begin{pmatrix} 1 & -1 & & & & \\ -1 & 1 & & & & \\ & & 1 & -1 & & \\ & & -1 & 1 & & \\ & & & & \ddots & \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 1 & & & & & \\ & 1 & -1 & & & \\ & -1 & 1 & & & \\ & & & 1 & -1 & \\ & & & -1 & 1 & \\ & & & & & \ddots \end{pmatrix} \tag{3}$$
It is easy to verify that for δ, λ ≤ 1, both $F_1(w)$ and $F_2(w)$ are 1-smooth and λ-strongly convex, as well as δ-related. Moreover, the optimum of their average is a point $w^*$ with non-zero entries at all coordinates. However, since each local function has a block-diagonal quadratic term, it can be shown that for any algorithm satisfying Assumption 1, after T communication rounds, the points computed by the two machines can only have the first T+1 coordinates non-zero. No machine will be able to further "progress" on its own, and cause additional coordinates to become non-zero, without another communication round. This leads to a lower bound on the optimization error which depends on T, resulting in the theorem statement after a few computations.
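The mechanism behind this argument can be made visible with a small numpy sketch (our own illustration, not from the supplementary material): with $A_1$ coupling the coordinate pairs (1,2), (3,4), ... and $A_2$ the pairs (2,3), (4,5), ..., a matrix-vector product against one local function extends the non-zero prefix of w by one coordinate only when the "right" machine acts, i.e. at most once per communication round.

```python
import numpy as np

def block_matrix(d, start):
    """2x2 blocks [[1,-1],[-1,1]] on coordinate pairs (start, start+1), (start+2, ...)."""
    A = np.zeros((d, d))
    for k in range(start, d - 1, 2):
        A[k:k + 2, k:k + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A

d = 8
A1 = block_matrix(d, 0)   # couples coordinates (1,2), (3,4), ...
A2 = block_matrix(d, 1)   # couples coordinates (2,3), (4,5), ...
A2[0, 0] = 1.0            # the lone corner entry of A_2

w = np.zeros(d)
w[0] = 1.0                # machine 1's e_1 term lights up coordinate 1
for rnd in range(1, 6):
    A = A1 if rnd % 2 else A2          # the machines alternate in being useful
    w = w + A @ w                       # a local (gradient-like) operation
    print(f"after round {rnd}: non-zero coordinates = {np.flatnonzero(w) + 1}")
```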
3.2 Non-smooth Local Functions
Remaining in the framework of algorithms satisfying Assumption 1, we now turn to discuss the
situation where the local functions are not necessarily smooth or differentiable. For simplicity, our
formal results here will be in the unrelated setting, and we only informally discuss their extension
to a "δ-related" setting (in a sense relevant to non-smooth functions). Formally defining δ-related
non-smooth functions is possible but not altogether trivial, and is therefore left to future work.
We adapt Assumption 1 to the non-smooth case, by allowing gradients to be replaced by arbitrary subgradients at the same points. Namely, we replace Eq. (2) by the requirement that for some $g \in \partial F_j(w)$, and $\gamma, \nu \ge 0$, $\gamma + \nu > 0$,
$$\gamma w + \nu g \in \mathrm{span}\Bigl\{w',\ g',\ (\nabla^2 F_j(w') + D)w'',\ (\nabla^2 F_j(w') + D)^{-1}w''\ \Bigm|\ w', w'' \in W_j,\ g' \in \partial F_j(w'),\ D\ \text{diagonal},\ \nabla^2 F_j(w')\ \text{exists},\ (\nabla^2 F_j(w') + D)^{-1}\ \text{exists}\Bigr\}.$$
The lower bound for this setting is stated in the following theorem.
Theorem 2. For any even number m of machines, any distributed optimization algorithm which satisfies Assumption 1, and for any λ ≥ 0, there exist λ-strongly convex, (1+λ)-Lipschitz continuous convex local functions $F_1(w)$ and $F_2(w)$ over the unit Euclidean ball in $\mathbb{R}^d$ (where d is sufficiently large), such that if $w^* = \arg\min_{w:\|w\|\le 1} F(w)$, the number of communication rounds required to obtain $\hat{w}$ satisfying $F(\hat{w}) - F(w^*) \le \epsilon$ (for any sufficiently small ε > 0) is at least $\frac{1}{8\epsilon} - 2$ for λ = 0, and at least $\sqrt{\frac{1}{16\lambda\epsilon}} - 2$ for λ > 0.
As in Thm. 1, we note that the assumption of even m is for technical convenience.
This theorem, together with Thm. 1, implies that both strong convexity and smoothness are necessary for the number of communication rounds to scale logarithmically with the required accuracy ε. We emphasize that this is true even if we allow the machines unbounded computational power, to perform arbitrarily many operations satisfying Assumption 1. Moreover, a preliminary analysis indicates that performing accelerated gradient descent on smoothed versions of the local functions (using Moreau proximal smoothing, e.g. [15, 24]) can match these lower bounds up to log factors². We leave a full formal derivation (which has some subtleties) to future work.
The full proof of Thm. 2 appears in the supplementary material. The proof idea relies on the following construction: Assume that we fix the number of communication rounds to be T , and (for
simplicity) that T is even and the number of machines is 2. Then we use local functions of the form
$$F_1(w) = \frac{1}{2}|b - w_1| + \frac{1}{\sqrt{2(T+2)}}\bigl(|w_2 - w_3| + |w_4 - w_5| + \cdots + |w_T - w_{T+1}|\bigr) + \frac{\lambda}{2}\|w\|^2$$
$$F_2(w) = \frac{1}{\sqrt{2(T+2)}}\bigl(|w_1 - w_2| + |w_3 - w_4| + \cdots + |w_{T+1} - w_{T+2}|\bigr) + \frac{\lambda}{2}\|w\|^2,$$
where b is a suitably chosen parameter. It is easy to verify that both local functions are λ-strongly convex and (1+λ)-Lipschitz continuous over the unit Euclidean ball. Similar to the smooth case, we argue that after T communication rounds, the resulting points w computed by machine 1 will be non-zero only on the first T+1 coordinates, and the points w computed by machine 2 will be non-zero only on the first T coordinates. As in the smooth case, these functions allow us to "control" the progress of any algorithm which satisfies Assumption 1.
Finally, although the result is in the unrelated setting, it is straightforward to have a similar construction in a "δ-related" setting, by multiplying $F_1$ and $F_2$ by δ. The resulting two functions have their gradients and subgradients at most δ-different from each other, and the construction above leads to a lower bound of $\Omega(\delta/\epsilon)$ for convex Lipschitz functions, and $\Omega(\delta\sqrt{1/(\lambda\epsilon)})$ for λ-strongly convex Lipschitz functions. In terms of upper bounds, we are actually unaware of any relevant algorithm in the literature adapted to such a setting, and the question of attainable performance here remains wide open.
4 One Round of Communication
In this section, we study what lower bounds are attainable without any kind of structural assumption
(such as Assumption 1). This is a more challenging setting, and the result we present will be limited to algorithms using a single round of communication. We note that this still captures a
realistic non-interactive distributed computing scenario, where we want each machine to broadcast
a single message, and a designated machine is then required to produce an output. In the context of
distributed optimization, a natural example is a one-shot averaging algorithm, where each machine
optimizes its own local data, and the resulting points are averaged (e.g. [27, 25]).
Intuitively, with only a single round of communication, getting an arbitrarily small error may be
infeasible. The following theorem establishes a lower bound on the attainable error, depending on
the strong convexity parameter λ and the similarity measure δ between the local functions, and compares this with a "trivial" zero-communication algorithm, which just returns the optimum of a
single local function:
Theorem 3. For any even number m of machines, any dimension d larger than some numerical constant, any λ ≥ 3δ > 0, and any (possibly randomized) algorithm which communicates at most $d^2/128$ bits in a single round of communication, there exist m quadratic functions over $\mathbb{R}^d$, which are δ-related, λ-strongly convex and 9λ-smooth, for which the following hold for some positive numerical constants c, c′:
• The point $\hat{w}$ returned by the algorithm satisfies
$$\mathbb{E}\bigl[F(\hat{w})\bigr] - \min_{w\in\mathbb{R}^d} F(w) \;\ge\; c\,\frac{\delta^2}{\lambda}$$
in expectation over the algorithm's randomness.
• For any machine j, if $\hat{w}_j = \arg\min_{w\in\mathbb{R}^d} F_j(w)$, then $F(\hat{w}_j) - \min_{w\in\mathbb{R}^d} F(w) \le c'\,\delta^2/\lambda$.
²Roughly speaking, for any γ > 0, this smoothing creates a $\frac{1}{\gamma}$-smooth function which is γ-close to the original function. Plugging these into the guarantees of accelerated gradient descent and tuning γ yields our lower bounds. Note that, in order to execute this algorithm each machine must be sufficiently powerful to obtain the gradient of the Moreau envelope of its local function, which is indeed the case in our framework.
The theorem shows that unless the communication budget is extremely large (quadratic in the dimension), there are functions which cannot be optimized to non-trivial accuracy in one round of communication, in the sense that the same accuracy (up to a universal constant) can be obtained with a "trivial" solution where we just return the optimum of a single local function. This complements an earlier result in [20], which showed that a particular one-round algorithm is no better than returning the optimum of a local function, under the stronger assumption that the local functions are not merely δ-related, but are actually the average loss over some randomly partitioned data.
The full proof appears in the supplementary material, but we sketch the main ideas below. As before, focusing on the case of two machines, and assuming machine 2 is responsible for providing the output, we use
$$F_1(w) = 3\delta\, w^\top\!\left(\Bigl(\tfrac{1}{2}I + \tfrac{1}{2c\sqrt{d}}M\Bigr)^{-1} - I\right)w, \qquad F_2(w) = \frac{3\delta}{2}\|w\|^2 - \delta\, e_j^\top w,$$
where M is essentially a randomly chosen {−1,+1}-valued d × d symmetric matrix with spectral norm at most $c\sqrt{d}$, and c is a suitable constant. These functions can be shown to be δ-related as well as λ-strongly convex. Moreover, the optimum of $F(w) = \frac{1}{2}(F_1(w) + F_2(w))$ equals
$$w^* = \frac{1}{6\delta}\left(I + \frac{1}{2c\sqrt{d}}M\right)e_j.$$
Thus, we see that the optimal point $w^*$ depends on the j-th column of M. Intuitively, the machines need to approximate this column, and this is the source of hardness in this setting: Machine 1 knows M but not j, yet needs to communicate to machine 2 enough information to construct its j-th column. However, given a communication budget much smaller than the size of M (which is d²), it is difficult to convey enough information on the j-th column without knowing what j is. Carefully formalizing this intuition, and using some information-theoretic tools, allows us to prove the first part of Thm. 3. Proving the second part of Thm. 3 is straightforward, using a few computations.
5 Summary and Open Questions
In this paper, we studied lower bounds on the number of communication rounds needed to solve
distributed convex learning and optimization problems, under several different settings. Our results
indicate that when the local functions are unrelated, then regardless of the local machines' computational power, many communication rounds may be necessary (scaling polynomially with 1/ε or 1/λ), and that the worst-case optimal algorithm (at least for smooth functions) is just a straightforward distributed implementation of accelerated gradient descent. When the functions are related, we
show that the optimal performance is achieved by the algorithm of [26] for quadratic and strongly
convex functions, but designing optimal algorithms for more general functions remains open. Beside these results, which required a certain mild structural assumption on the algorithm employed,
we also provided an assumption-free lower bound for one-round algorithms, which implies that
even for strongly convex quadratic functions, such algorithms can sometimes only provide trivial
performance.
Besides the question of designing optimal algorithms for the remaining settings, several additional
questions remain open. First, it would be interesting to get assumption-free lower bounds for algorithms with multiple rounds of communication. Second, our work focused on communication
complexity, but in practice the computational complexity of the local computations is no less important. Thus, it would be interesting to understand what is the attainable performance with simple,
runtime-efficient algorithms. Finally, it would be interesting to study lower bounds for other distributed learning and optimization scenarios.
Acknowledgments: This research is supported in part by an FP7 Marie Curie CIG grant, the Intel
ICRI-CI Institute, and Israel Science Foundation grant 425/13. We thank Nati Srebro for several
helpful discussions and insights.
References
[1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear
learning system. CoRR, abs/1110.4198, 2011.
[2] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In COLT, 2012.
[3] M.-F. Balcan, V. Kanchanapally, Y. Liang, and D. Woodruff. Improved distributed principal
component analysis. In NIPS, 2014.
[4] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up machine learning: Parallel and
distributed approaches. Cambridge University Press, 2011.
[5] S.P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via ADMM. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[6] K. Clarkson and D. Woodruff. Numerical linear algebra in the streaming model. In STOC,
2009.
[7] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated
gradient methods. In NIPS, 2011.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction
using mini-batches. Journal of Machine Learning Research, 13:165-202, 2012.
[9] J. Duchi, A. Agarwal, and M. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Trans. Automat. Contr., 57(3):592-606, 2012.
[10] R. Frostig, R. Ge, S. Kakade, and A. Sidford. Competing with the empirical risk minimizer in
a single pass. arXiv preprint arXiv:1412.6606, 2014.
[11] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M. Jordan.
Communication-efficient distributed dual coordinate ascent. In NIPS, 2014.
[12] J. Lee, T. Ma, and Q. Lin. Distributed stochastic variance reduced gradient methods. CoRR,
1507.07595, 2015.
[13] D. Mahajan, S. Keerthy, S. Sundararajan, and L. Bottou. A parallel SGD method with strong
convergence. CoRR, abs/1311.0636, 2013.
[14] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer, 2004.
[15] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical programming,
103(1):127-152, 2005.
[16] B. Recht, C. Ré, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[17] P. Richtárik and M. Takáč. Distributed coordinate descent method for learning with big data.
CoRR, abs/1310.2059, 2013.
[18] O. Shamir. Fundamental limits of online and distributed algorithms for statistical learning and
estimation. In NIPS, 2014.
[19] O. Shamir and N. Srebro. On distributed stochastic optimization and learning. In Allerton
Conference on Communication, Control, and Computing, 2014.
[20] O. Shamir, N. Srebro, and T. Zhang. Communication-efficient distributed optimization using
an approximate newton-type method. In ICML, 2014.
[21] T. Tao. Topics in random matrix theory, volume 132. American Mathematical Soc., 2012.
[22] J. Tsitsiklis and Z.-Q. Luo. Communication complexity of convex optimization. J. Complexity,
3(3):231-243, 1987.
[23] T. Yang. Trading computation for communication: Distributed SDCA. In NIPS, 2013.
[24] Y.-L. Yu. Better approximation and faster algorithm using proximal average. In NIPS, 2013.
[25] Y. Zhang, J. Duchi, and M. Wainwright. Communication-efficient algorithms for statistical
optimization. Journal of Machine Learning Research, 14:3321-3363, 2013.
[26] Y. Zhang and L. Xiao. Communication-efficient distributed optimization of self-concordant
empirical loss. In ICML, 2015.
[27] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In
NIPS, 2010.
5,227 | 5,732 | Explore no more: Improved high-probability regret bounds for non-stochastic bandits
Gergely Neu*
SequeL team
INRIA Lille - Nord Europe
Abstract
This work addresses the problem of regret minimization in non-stochastic multiarmed bandit problems, focusing on performance guarantees that hold with high
probability. Such results are rather scarce in the literature since proving them requires a large deal of technical effort and significant modifications to the standard,
more intuitive algorithms that come only with guarantees that hold on expectation.
One of these modifications
? is forcing the learner to sample arms from the uniform
distribution at least ?( T ) times over T rounds, which can adversely affect performance if many of the arms are suboptimal. While it is widely conjectured that
this property is essential for proving high-probability regret bounds, we show in
this paper that it is possible to achieve such strong results without this undesirable
exploration component. Our result relies on a simple and intuitive loss-estimation
strategy called Implicit eXploration (IX) that allows a remarkably clean analysis. To demonstrate the flexibility of our technique, we derive several improved
high-probability bounds for various extensions of the standard multi-armed bandit
framework. Finally, we conduct a simple experiment that illustrates the robustness
of our implicit exploration technique.
1
Introduction
Consider the problem of regret minimization in non-stochastic multi-armed bandits, as defined in
the classic paper of Auer, Cesa-Bianchi, Freund, and Schapire [5]. This sequential decision-making
problem can be formalized as a repeated game between a learner and an environment (sometimes
called the adversary). In each round t = 1, 2, . . . , T , the two players interact as follows: The
learner picks an arm (also called an action) It ? [K] = {1, 2, . . . , K} and the environment selects
a loss function `t : [K] ? [0, 1], where the loss associated with arm i ? [K] is denoted as `t,i .
Subsequently, the learner incurs and observes the loss `t,It . Based solely on these observations, the
goal of the learner is to choose its actions so as to accumulate as little loss as possible during the
course of the game. As traditional in the online learning literature [10], we measure the performance
of the learner in terms of the regret defined as
RT =
T
X
`t,It ? min
i?[K]
t=1
T
X
`t,i .
t=1
We say that the environment is oblivious if it selects the sequence of loss vectors irrespective of
the past actions taken by the learner, and adaptive (or non-oblivious) if it is allowed to choose `t
as a function of the past actions It?1 , . . . , I1 . An equivalent formulation of the multi-armed bandit
game uses the concept of rewards (also called gains or payoffs) instead of losses: in this version,
?
The author is currently with the Department of Information and Communication Technologies, Pompeu
Fabra University, Barcelona, Spain.
1
the adversary chooses the sequence of reward functions (rt ) with rt,i denoting the reward given
to the learner for choosing action i in round t. In this game, the learner aims at maximizing its
total rewards. We will refer to the above two formulations as the loss game and the reward game,
respectively.
Our goal in this paper is to construct algorithms for the learner that guarantee that the regret grows
sublinearly. Since it is well known that no deterministic learning algorithm can achieve this goal
[10], we are interested in randomized algorithms. Accordingly, the regret RT then becomes a random variable that we need to bound in some probabilistic sense. Most of the existing literature on
non-stochastic bandits is concerned with bounding the pseudo-regret (or weak regret) defined as
" T
#
T
X
X
b
`t,I ?
`t,i ,
RT = max E
i?[K]
t
t=1
t=1
where the expectation integrates over the randomness injected by the learner. Proving bounds on
the actual regret that hold with high probability is considered to be a significantly harder task that
can be achieved by serious changes made to the learning algorithms and much more complicated
analyses. One particular common belief is that in order to guarantee high-confidence performance
guarantees,
?
the learner cannot avoid repeatedly sampling arms from a uniform distribution, typically
? KT times [5, 4, 7, 9]. It is easy to see that such explicit exploration can impact the empirical
performance of learning algorithms in a very negative way if there are many arms with high losses:
even if the base learning algorithm quickly learns to focus on good arms, explicit exploration still
forces the regret to grow at a steady rate. As a result, algorithms with high-probability performance
guarantees tend to perform poorly even in very simple problems [25, 7].
In the current paper, we propose an algorithm that guarantees strong regret bounds that hold with
high probability without the explicit exploration component. One component that we preserve from
the classical recipe for such algorithms is the biased estimation of losses, although our bias is of
a much more delicate nature, and arguably more elegant than previous approaches. In particular,
we adopt the implicit exploration (IX) strategy first proposed by Koc?ak, Neu, Valko, and Munos
[19] for the problem of online learning with side-observations. As we show in the current paper, this simple loss-estimation strategy allows proving high-probability bounds for a range of nonstochastic bandit problems including bandits with expert advice, tracking the best arm and bandits
with side-observations. Our proofs are arguably cleaner and less involved than previous ones, and
very elementary in the sense that they do not rely on advanced results from probability theory like
Freedman?s inequality [12]. The resulting bounds are tighter than all previously known bounds and
hold simultaneously for all confidence levels, unlike most previously known bounds [5, 7]. For the
first time in the literature, we also provide high-probability bounds for anytime algorithms that do
not require prior knowledge of the time horizon T . A minor conceptual improvement in our analysis
is a direct treatment of the loss game, as opposed to previous analyses that focused on the reward
game, making our treatment more coherent with other state-of-the-art results in the online learning
literature1 .
The rest of the paper is organized as follows. In Section 2, we review the known techniques for proving high-probability regret bounds for non-stochastic bandits and describe our implicit exploration
strategy in precise terms. Section 3 states our main result concerning the concentration of the IX
loss estimates and shows applications of this result to several problem settings. Finally, we conduct
a set of simple experiments to illustrate the benefits of implicit exploration over previous techniques
in Section 4.
2
Explicit and implicit exploration
Most principled learning algorithms for the non-stochastic bandit problem are constructed by using
a standard online learning algorithm such as the exponentially weighted forecaster ([26, 20, 13])
or follow the perturbed leader ([14, 18]) as a black box, with the true (unobserved) losses replaced
by some appropriate estimates. One of the key challenges is constructing reliable estimates of the
losses `t,i for all i ? [K] based on the single observation `t,It . Following Auer et al. [5], this is
1
In fact, studying the loss game is colloquially known to allow better constant factors in the bounds in many
settings (see, e.g., Bubeck and Cesa-Bianchi [9]). Our result further reinforces these observations.
2
traditionally achieved by using importance-weighted loss/reward estimates of the form
`t,i
`bt,i =
I{I =i}
pt,i t
or
rbt,i =
rt,i
I{I =i}
pt,i t
(1)
where pt,i = P [ It = i| Ft?1 ] is the probability that the learner picks action i in round t, conditioned
on the observation history Ft?1 of the learner up to the beginning of round t. It is easy to show that
these estimates are unbiased for all i with pt,i > 0 in the sense that E`bt,i = `t,i for all such i.
For concreteness, consider the E XP 3 algorithm of Auer et al. [5] as described in Bubeck and CesaBianchi [9, Section 3]. In every round t, this algorithmuses the loss estimates defined in Equation (1)
Pt?1
to compute the weights wt,i = exp ?? s=1 `bs?1,i for all i and some positive parameter ? that
is often called the learning rate. Having computed these weights, E XP 3 draws arm It = i with
probability proportional to wt,i . Relying on the unbiasedness of the estimates
(1) and an optimized
?
setting of ?, one can prove that E XP 3 enjoys a pseudo-regret bound of 2T K log K. However, the
fluctuations of the loss estimates around the true losses are too large to permit bounding the true
regret with high probability. To keep these fluctuations under control, Auer et al. [5] propose to use
the biased reward-estimates
?
ret,i = rbt,i +
(2)
pt,i
with an appropriately chosen ? > 0. Given these
estimates, the E XP 3.P algorithm of Auer et al. [5]
Pt?1
computes the weights wt,i = exp ? s=1 res,i for all arms i and then samples It according to the
distribution
wt,i
?
pt,i = (1 ? ?) PK
+ ,
K
j=1 wt,j
where ? ? [0, 1] is the exploration parameter. The argument for this explicit exploration is that it
helps to keep the range (and thus the variance) of the above reward estimates bounded, thus enabling
the use of (more or less) standard concentration results2 . In particular, the key element in the analysis
of E XP 3.P [5, 9, 7, 6] is showing that the inequality
T
X
(rt,i ? ret,i ) ?
t=1
log(K/?)
?
holds simultaneously for all i with probability at least 1 ? ?. In other words, this shows that the
PT
PT
cumulative estimates t=1 ret,i are upper confidence bounds for the true rewards t=1 rt,i .
In the current paper, we propose to use the loss estimates defined as
`et,i =
`t,i
I{I =i} ,
pt,i + ?t t
(3)
for all i and an appropriately chosen ?t > 0, and then use the resulting estimates in an exponentialweights algorithm scheme without any explicit exploration. Loss estimates of this form were first
used by Koc?ak et al. [19]?following them, we refer to this technique as Implicit eXploration, or,
in short, IX. In what follows, we argue that that IX as defined above achieves a similar variancereducing effect as the one achieved by the combination of explicit exploration and the biased reward
estimates of Equation (2). In particular, we show that the IX estimates (3) constitute a lower confidence bound for the true losses which allows proving high-probability bounds for a number of
variants of the multi-armed bandit problem.
3
High-probability regret bounds via implicit exploration
In this section, we present a concentration result concerning the IX loss estimates of Equation (3),
and apply this result to prove high-probability performance guarantees for a number of nonstochastic bandit problems. The following lemma states our concentration result in its most general
form:
2
Explicit exploration is believed to be inevitable for proving bounds in the reward game for various other
reasons, too?see Bubeck and Cesa-Bianchi [9] for a discussion.
3
Lemma 1. Let (?t ) be a fixed non-increasing sequence with ?t ? 0 and let ?t,i be nonnegative
Ft?1 -measurable random variables satisfying ?t,i ? 2?t for all t and i. Then, with probability at
least 1 ? ?,
T X
K
X
?t,i `et,i ? `t,i ? log (1/?) .
t=1 i=1
A particularly important special case of the above lemma is the following:
Corollary 1. Let ?t = ? ? 0 for all t. With probability at least 1 ? ?,
T
X
t=1
log (K/?)
`et,i ? `t,i ?
.
2?
simultaneously holds for all i ? [K].
This corollary follows from applying Lemma 1 to the functions ?t,i = 2?I{i=j} for all j and
applying the union bound. The full proof of Lemma 1 is presented in the Appendix. For didactic
purposes, we now present a direct proof for Corollary 1, which is essentially a simpler version of
Lemma 1.
Proof of Corollary 1. For convenience, we will use the notation ? = 2?. First, observe that
`t,i
1
2?`t,i /pt,i
1
`t,i
I{It =i} ?
I{It =i} =
?
I{It =i} ? ? log 1 + ? `bt,i ,
`et,i =
pt,i + ?
pt,i + ?`t,i
2? 1 + ?`t,i /pt,i
?
z
where the first step follows from `t,i ? [0, 1] and last one from the elementary inequality 1+z/2
?
log(1 + z) that holds for all z ? 0. Using the above inequality, we get that
\[
\mathbb{E}\bigl[ \exp\bigl( \beta \widetilde{\ell}_{t,i} \bigr) \,\big|\, \mathcal{F}_{t-1} \bigr]
\le \mathbb{E}\bigl[ 1 + \beta \widehat{\ell}_{t,i} \,\big|\, \mathcal{F}_{t-1} \bigr]
\le 1 + \beta \ell_{t,i} \le \exp\bigl( \beta \ell_{t,i} \bigr),
\]
where the second and third steps are obtained by using \(\mathbb{E}[\widehat{\ell}_{t,i} \mid \mathcal{F}_{t-1}] \le \ell_{t,i}\), which holds by definition of \(\widehat{\ell}_{t,i}\), and the inequality \(1 + z \le e^z\) that holds for all \(z \in \mathbb{R}\). As a result, the process \(Z_t = \exp\bigl( \beta \sum_{s=1}^{t} (\widetilde{\ell}_{s,i} - \ell_{s,i}) \bigr)\) is a supermartingale with respect to \((\mathcal{F}_t)\): \(\mathbb{E}[Z_t \mid \mathcal{F}_{t-1}] \le Z_{t-1}\). Observe that, since \(Z_0 = 1\), this implies \(\mathbb{E}[Z_T] \le \mathbb{E}[Z_{T-1}] \le \cdots \le 1\), and thus by Markov's inequality,
\[
\mathbb{P}\left[ \sum_{t=1}^{T} \bigl( \widetilde{\ell}_{t,i} - \ell_{t,i} \bigr) > \varepsilon \right]
\le \mathbb{E}\left[ \exp\left( \beta \sum_{t=1}^{T} \bigl( \widetilde{\ell}_{t,i} - \ell_{t,i} \bigr) \right) \right] \exp(-\beta\varepsilon)
\le \exp(-\beta\varepsilon)
\]
holds for any \(\varepsilon > 0\). The statement of the lemma follows from solving \(\exp(-\beta\varepsilon) = \delta/K\) for \(\varepsilon\) and using the union bound over all arms i.
In what follows, we put Lemma 1 to use and prove improved high-probability performance guarantees for several well-studied variants of the non-stochastic bandit problem, namely, the multi-armed bandit problem with expert advice, tracking the best arm for multi-armed bandits, and bandits with side-observations. The general form of Lemma 1 will allow us to prove high-probability bounds for anytime algorithms that can operate without prior knowledge of T. For clarity, we will only provide such bounds for the standard multi-armed bandit setting; extending the derivations to other settings is left as an easy exercise. For all algorithms, we prove bounds that scale linearly with \(\log(1/\delta)\) and hold simultaneously for all levels δ. Note that this dependence can be improved to \(\sqrt{\log(1/\delta)}\) for a fixed confidence level δ, if the algorithm can use this δ to tune its parameters. This is the way that Table 1 presents our new bounds side-by-side with the best previously known ones.
Setting                        | Best known regret bound                              | Our new regret bound
Multi-armed bandits            | \(5.15\sqrt{TK\log(K/\delta)}\)                      | \(2\sqrt{2TK\log(K/\delta)}\)
Bandits with expert advice     | \(6\sqrt{TK\log(N/\delta)}\)                         | \(2\sqrt{2TK\log(N/\delta)}\)
Tracking the best arm          | \(7\sqrt{KT\bar{S}\log(KT/(\delta\bar{S}))}\)        | \(2\sqrt{2KT\bar{S}\log(KT/(\delta\bar{S}))}\)
Bandits with side-observations | \(\widetilde{O}(\sqrt{mT})\)                         | \(\widetilde{O}(\sqrt{\alpha T})\)

Table 1: Our results compared to the best previously known results in the four settings considered in Sections 3.1–3.4. See the respective sections for references and notation.
3.1 Multi-armed bandits
In this section, we propose a variant of the EXP3 algorithm of Auer et al. [5] that uses the IX loss estimates (3): EXP3-IX. The algorithm in its most general form uses two nonincreasing sequences of nonnegative parameters: \((\eta_t)\) and \((\gamma_t)\). In every round, EXP3-IX chooses action \(I_t = i\) with probability proportional to
\[
p_{t,i} \propto w_{t,i} = \exp\left( -\eta_t \sum_{s=1}^{t-1} \widetilde{\ell}_{s,i} \right), \tag{4}
\]
without mixing any explicit exploration term into the distribution. A fixed-parameter version of EXP3-IX is presented as Algorithm 1.

Algorithm 1 EXP3-IX
Parameters: η > 0, γ > 0.
Initialization: \(w_{1,i} = 1\).
for t = 1, 2, ..., T, repeat
  1. \(p_{t,i} = w_{t,i} \big/ \sum_{j=1}^{K} w_{t,j}\).
  2. Draw \(I_t \sim p_t = (p_{t,1}, \ldots, p_{t,K})\).
  3. Observe loss \(\ell_{t,I_t}\).
  4. \(\widetilde{\ell}_{t,i} \leftarrow \frac{\ell_{t,i}}{p_{t,i} + \gamma} \mathbb{I}_{\{I_t = i\}}\) for all \(i \in [K]\).
  5. \(w_{t+1,i} \leftarrow w_{t,i}\, e^{-\eta \widetilde{\ell}_{t,i}}\) for all \(i \in [K]\).
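To make the scheme concrete, here is a minimal Python sketch of Algorithm 1. It is an illustration rather than the authors' code; the Bernoulli loss interface and the log-domain weight bookkeeping are our own choices.

```python
import numpy as np

def exp3_ix(loss_fn, K, T, eta, gamma, seed=0):
    """Run EXP3-IX for T rounds; loss_fn(t, arm) returns a loss in [0, 1]."""
    rng = np.random.default_rng(seed)
    log_w = np.zeros(K)                      # log-weights for numerical stability
    total_loss = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        p = w / w.sum()                      # step 1: sampling distribution
        arm = rng.choice(K, p=p)             # step 2: draw I_t
        loss = loss_fn(t, arm)               # step 3: observe incurred loss
        total_loss += loss
        ix_loss = loss / (p[arm] + gamma)    # step 4: IX estimate, drawn arm only
        log_w[arm] -= eta * ix_loss          # step 5: exponential-weights update
    return total_loss

K, T = 10, 50_000
eta = 2 * np.sqrt(np.log(K) / (K * T))       # the tuning of Theorem 1, eta = 2*gamma
gamma = eta / 2
means = np.linspace(0.3, 0.7, K)
rng = np.random.default_rng(42)
loss = exp3_ix(lambda t, a: float(rng.random() < means[a]), K, T, eta, gamma)
print("regret estimate:", loss - T * means.min())
```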
Our theorem below states a high-probability bound on the regret of EXP3-IX. Notably, our bound exhibits the best known constant factor of \(2\sqrt{2}\) in the leading term, improving on the factor of 5.15 due to Bubeck and Cesa-Bianchi [9]. The best known leading constant for the pseudo-regret bound of EXP3 is \(\sqrt{2}\), also proved in Bubeck and Cesa-Bianchi [9].
Theorem 1. Fix an arbitrary δ > 0. With \(\eta_t = 2\gamma_t = 2\sqrt{\frac{\log K}{KT}}\) for all t, EXP3-IX guarantees
\[
R_T \le 2\sqrt{2KT\log K} + \left( \sqrt{\frac{2KT}{\log K}} + 1 \right) \log(2/\delta)
\]
with probability at least 1 − δ. Furthermore, setting \(\eta_t = 2\gamma_t = \sqrt{\frac{\log K}{Kt}}\) for all t, the bound becomes
\[
R_T \le 4\sqrt{KT\log K} + \left( 2\sqrt{\frac{KT}{\log K}} + 1 \right) \log(2/\delta).
\]
Proof. Let us fix an arbitrary \(\delta' \in (0, 1)\). Following the standard analysis of EXP3 in the loss game with nonincreasing learning rates [9], we can obtain the bound
\[
\sum_{t=1}^{T} \sum_{i=1}^{K} p_{t,i} \widetilde{\ell}_{t,i} - \sum_{t=1}^{T} \widetilde{\ell}_{t,j}
\le \frac{\log K}{\eta_T} + \sum_{t=1}^{T} \frac{\eta_t}{2} \sum_{i=1}^{K} p_{t,i} \widetilde{\ell}_{t,i}^{\,2}
\]
for any j. Now observe that
\[
\sum_{i=1}^{K} p_{t,i} \widetilde{\ell}_{t,i}
= \sum_{i=1}^{K} \frac{\ell_{t,i}(p_{t,i} + \gamma_t)}{p_{t,i} + \gamma_t} \mathbb{I}_{\{I_t=i\}}
- \gamma_t \sum_{i=1}^{K} \frac{\ell_{t,i}}{p_{t,i} + \gamma_t} \mathbb{I}_{\{I_t=i\}}
= \ell_{t,I_t} - \gamma_t \sum_{i=1}^{K} \widetilde{\ell}_{t,i}. \tag{5}
\]
Similarly, \(\sum_{i=1}^{K} p_{t,i} \widetilde{\ell}_{t,i}^{\,2} \le \sum_{i=1}^{K} \widetilde{\ell}_{t,i}\) holds by the boundedness of the losses. Thus, we get that
\[
\sum_{t=1}^{T} \bigl( \ell_{t,I_t} - \ell_{t,j} \bigr)
\le \sum_{t=1}^{T} \bigl( \widetilde{\ell}_{t,j} - \ell_{t,j} \bigr) + \frac{\log K}{\eta_T}
+ \sum_{t=1}^{T} \left( \frac{\eta_t}{2} + \gamma_t \right) \sum_{i=1}^{K} \widetilde{\ell}_{t,i}
\le \frac{\log(K/\delta')}{2\gamma_T} + \frac{\log K}{\eta_T}
+ \sum_{t=1}^{T} \left( \frac{\eta_t}{2} + \gamma_t \right) \sum_{i=1}^{K} \ell_{t,i} + \log(1/\delta')
\]
holds with probability at least 1 − 2δ', where the last line follows from an application of Lemma 1 with \(\alpha_{t,i} = \eta_t/2 + \gamma_t\) for all t, i and taking the union bound. By taking \(j = \arg\min_i L_{T,i}\) and \(\delta' = \delta/2\), and using the boundedness of the losses, we obtain
\[
R_T \le \frac{\log(2K/\delta)}{2\gamma_T} + \frac{\log K}{\eta_T}
+ K \sum_{t=1}^{T} \left( \frac{\eta_t}{2} + \gamma_t \right) + \log(2/\delta).
\]
The statements of the theorem then follow immediately, noting that \(\sum_{t=1}^{T} 1/\sqrt{t} \le 2\sqrt{T}\).
3.2 Bandits with expert advice
We now turn to the setting of multi-armed bandits with expert advice, as defined in Auer et al. [5], and later revisited by McMahan and Streeter [22] and Beygelzimer et al. [7]. In this setting, we assume that in every round t = 1, 2, ..., T, the learner observes a set of N probability distributions \(\xi_t(1), \xi_t(2), \ldots, \xi_t(N) \in [0,1]^K\) over the K arms, such that \(\sum_{i=1}^{K} \xi_{t,i}(n) = 1\) for all \(n \in [N]\). We assume that the sequences \((\xi_t(n))\) are measurable with respect to \((\mathcal{F}_t)\). The nth of these vectors represents the probabilistic advice of the corresponding nth expert. The goal of the learner in this setting is to pick a sequence of arms so as to minimize the regret against the best expert:
\[
R_T^{*} = \sum_{t=1}^{T} \ell_{t,I_t} - \min_{n \in [N]} \sum_{t=1}^{T} \sum_{i=1}^{K} \xi_{t,i}(n)\, \ell_{t,i}.
\]
To tackle this problem, we propose a modification of the EXP4 algorithm of Auer et al. [5] that uses the IX loss estimates (3), and also drops the explicit exploration component of the original algorithm. Specifically, EXP4-IX uses the loss estimates defined in Equation (3) to compute the weights
\[
w_{t,n} = \exp\left( -\eta \sum_{s=1}^{t-1} \sum_{i=1}^{K} \xi_{s,i}(n)\, \widetilde{\ell}_{s,i} \right)
\]
for every expert \(n \in [N]\), and then draws arm i with probability \(p_{t,i} \propto \sum_{n=1}^{N} w_{t,n}\, \xi_{t,i}(n)\). We now state the performance guarantee of EXP4-IX. Our bound improves the best known leading constant of 6 due to Beygelzimer et al. [7] to \(2\sqrt{2}\) and is a factor of 2 worse than the best known constant in the pseudo-regret bound for EXP4 [9]. The proof of the theorem is presented in the Appendix.
Theorem 2. Fix an arbitrary δ > 0 and set \(\eta = 2\gamma = 2\sqrt{\frac{\log N}{KT}}\) for all t. Then, with probability at least 1 − δ, the regret of EXP4-IX satisfies
\[
R_T^{*} \le 2\sqrt{2KT\log N} + \left( \sqrt{\frac{2KT}{\log N}} + 1 \right) \log(2/\delta).
\]
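Here is a minimal sketch of the EXP4-IX update loop described above (our own illustration; the interfaces advice_fn and loss_fn are assumed names, not the paper's notation).

```python
import numpy as np

def exp4_ix(advice_fn, loss_fn, N, K, T, eta, gamma, seed=0):
    """advice_fn(t) returns an (N, K) array whose rows sum to one."""
    rng = np.random.default_rng(seed)
    log_w = np.zeros(N)                       # one weight per expert
    for t in range(T):
        xi = advice_fn(t)
        w = np.exp(log_w - log_w.max())
        q = w / w.sum()
        p = q @ xi                            # p_{t,i} proportional to sum_n w_{t,n} xi_{t,i}(n)
        arm = rng.choice(K, p=p)
        loss = loss_fn(t, arm)
        ix = np.zeros(K)
        ix[arm] = loss / (p[arm] + gamma)     # IX loss estimate of Equation (3)
        log_w -= eta * (xi @ ix)              # propagate the estimate to every expert
    return log_w
```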
3.3 Tracking the best sequence of arms
In this section, we consider the problem of competing with sequences of actions. Similarly to Herbster and Warmuth [17], we consider the class of sequences that switch at most S times between actions. We measure the performance of the learner in this setting in terms of the regret against the best sequence from this class \(C(S) \subseteq [K]^T\), defined as
\[
R_T^{S} = \sum_{t=1}^{T} \ell_{t,I_t} - \min_{(J_t) \in C(S)} \sum_{t=1}^{T} \ell_{t,J_t}.
\]
Similarly to Auer et al. [5], we now propose to adapt the Fixed Share algorithm of Herbster and Warmuth [17] to our setting. Our algorithm, called EXP3-SIX, updates a set of weights \(w_{t,\cdot}\) over the arms in a recursive fashion. In the first round, EXP3-SIX sets \(w_{1,i} = 1/K\) for all i. In the following rounds, the weights are updated for every arm i as
\[
w_{t+1,i} = (1 - \alpha)\, w_{t,i}\, e^{-\eta \widetilde{\ell}_{t,i}} + \frac{\alpha}{K} \sum_{j=1}^{K} w_{t,j}\, e^{-\eta \widetilde{\ell}_{t,j}}.
\]
In round t, the algorithm draws arm \(I_t = i\) with probability \(p_{t,i} \propto w_{t,i}\). Below, we give the performance guarantees of EXP3-SIX; a short code sketch of the weight recursion follows first. Note that our leading factor of \(2\sqrt{2}\) again improves over the best previously known leading factor of 7, shown by Audibert and Bubeck [3]. The proof of the theorem is given in the Appendix.
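A minimal sketch of the recursion above (an illustration; the uniform sharing step is what lets the algorithm track switching sequences):

```python
import numpy as np

def six_update(w, ix_loss, eta, alpha):
    """One round of the EXP3-SIX weight recursion for K arms."""
    v = w * np.exp(-eta * ix_loss)                      # exponential-weights step
    return (1 - alpha) * v + alpha * v.sum() / len(v)   # share mass uniformly

w = np.full(4, 0.25)
print(six_update(w, np.array([1.0, 0.0, 0.0, 0.0]), eta=0.1, alpha=0.01))
```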
Theorem 3. Fix an arbitrary δ > 0 and set \(\eta = 2\gamma = 2\sqrt{\frac{\bar{S}\log K}{KT}}\) and \(\alpha = \frac{\bar{S}}{T-1}\), where \(\bar{S} = S + 1\). Then, with probability at least 1 − δ, the regret of EXP3-SIX satisfies
\[
R_T^{S} \le 2\sqrt{2KT\bar{S}\log\frac{eKT}{\bar{S}}} + \left( \sqrt{\frac{2KT}{\bar{S}\log K}} + 1 \right) \log(2/\delta).
\]
3.4 Bandits with side-observations
Let us now turn to the problem of online learning in bandit problems in the presence of side observations, as defined by Mannor and Shamir [21] and later elaborated by Alon et al. [1]. In this setting, the learner and the environment interact exactly as in the multi-armed bandit problem, the main difference being that in every round, the learner observes the losses of some arms other than its actually chosen arm \(I_t\). The structure of the side observations is described by the directed graph G: nodes of G correspond to individual arms, and the presence of an arc \(i \to j\) implies that the learner will observe \(\ell_{t,j}\) upon selecting \(I_t = i\).
Implicit exploration and EXP3-IX were first proposed by Kocák et al. [19] for this precise setting. To describe this variant, let us introduce the notations \(O_{t,i} = \mathbb{I}_{\{I_t=i\}} + \mathbb{I}_{\{(I_t \to i) \in G\}}\) and \(o_{t,i} = \mathbb{E}[O_{t,i} \mid \mathcal{F}_{t-1}]\). Then, the IX loss estimates in this setting are defined for all t, i as \(\widetilde{\ell}_{t,i} = \frac{O_{t,i}\, \ell_{t,i}}{o_{t,i} + \gamma_t}\). With these estimates at hand, EXP3-IX draws arm \(I_t\) from the exponentially weighted distribution defined in Equation (4). The following theorem provides the regret bound concerning this algorithm.
Theorem 4. Fix an arbitrary δ > 0. Assume that \(T \ge K^2/(8\alpha)\) and set \(\eta = 2\gamma = \sqrt{\frac{\log K}{2\alpha T \log(KT)}}\), where α is the independence number of G. With probability at least 1 − δ, EXP3-IX guarantees
\[
R_T \le \left( 4 + 2\sqrt{\log(4/\delta)} \right) \sqrt{2\alpha T \bigl( \log K + \log(KT) \bigr)}
+ 2\sqrt{\frac{\alpha T \log(KT)}{\log K}}\, \log(4/\delta) + \sqrt{\frac{T \log(4/\delta)}{2}}.
\]
The proof of the theorem is given in the Appendix. While the proof of this statement is significantly more involved than the other proofs presented in this paper, it provides a fundamentally new result. In particular, our bound is in terms of the independence number α and thus matches the minimax regret bound proved by Alon et al. [1] for this setting up to logarithmic factors. In contrast, the only high-probability regret bound for this setting, due to Alon et al. [2], scales with the size m of the maximal acyclic subgraph of G, which can be much larger than α in general (i.e., α may be o(m) for some graphs [1]).
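A small sketch of the side-observation estimates above (our illustration; G is a boolean adjacency matrix with G[i, j] = True when playing i reveals the loss of j, and we assume G has no self-loops):

```python
import numpy as np

def side_obs_ix_estimates(G, p, arm, losses, gamma):
    """IX estimates with graph feedback: O_{t,i} = I{I_t=i} + I{(I_t -> i) in G}."""
    K = len(p)
    observed = G[arm].copy()
    observed[arm] = True              # the played arm is always observed
    o = p + p @ G                     # o_{t,i} = E[O_{t,i} | F_{t-1}]
    est = np.zeros(K)
    est[observed] = losses[observed] / (o[observed] + gamma)
    return est

G = np.array([[False, True, False],
              [False, False, True],
              [False, False, False]])
print(side_obs_ix_estimates(G, np.array([0.5, 0.3, 0.2]), 0,
                            np.array([0.7, 0.2, 0.9]), gamma=0.05))
```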
4 Empirical evaluation
We conduct a simple experiment to demonstrate the robustness of EXP3-IX as compared to EXP3, and its superior performance as compared to EXP3.P. Our setting is a 10-arm bandit problem where all losses are independent draws of Bernoulli random variables. The mean losses of arms 1 through 8 are 1/2 and the mean loss of arm 9 is 1/2 − ε for all rounds t = 1, 2, ..., T. The mean losses of arm 10 change over time: for rounds t ≤ T/2, the mean is 1/2 + ε, and 1/2 − 4ε afterwards. This choice ensures that up to at least round T/2, arm 9 is clearly better than the other arms. In the second half of the game, arm 10 starts to outperform arm 9 and eventually becomes the leader.
We evaluated the performance of EXP3, EXP3.P and EXP3-IX in the above setting with T = 10⁶ and ε = 0.1. For fairness of comparison, we evaluate all three algorithms for a wide range of parameters. In particular, for all three algorithms, we set a base learning rate η according to the best known theoretical results [9, Theorems 3.1 and 3.3] and varied the multiplier of the respective base parameters between 0.01 and 100. Other parameters are set as γ = η/2 for EXP3-IX and β = γ/K = η for EXP3.P. We studied the regret up to two interesting rounds in the game: up to T/2, where the losses are i.i.d., and up to T, where the algorithms have to notice the shift in the loss distributions.
[Figure 1 appears here: two panels plotting regret at T/2 (left) and regret at T (right) against the η multiplier on a logarithmic scale from 10⁻² to 10², with curves for EXP3, EXP3.P, and EXP3-IX.]
Figure 1: Regret of EXP3, EXP3.P, and EXP3-IX, respectively, in the problem described in Section 4.
Figure 1 shows the empirical means and standard deviations over 50 runs of the regrets of the three algorithms as a function of the multipliers. The results clearly show that EXP3-IX largely improves on the empirical performance of EXP3.P and is also much more robust in the non-stochastic regime than vanilla EXP3.
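For readers who want to replicate this setup, a small sketch of the loss-generation process follows (our reconstruction of the description above; T is reduced so the snippet runs quickly).

```python
import numpy as np

def make_losses(T, eps, rng):
    """Bernoulli loss sequence of Section 4: arm 9 is best early, arm 10 late."""
    means = np.full(10, 0.5)
    means[8] = 0.5 - eps                       # arm 9 (index 8)
    losses = rng.random((T, 10)) < means       # arms 1..9 are stationary
    half = T // 2
    losses[:half, 9] = rng.random(half) < 0.5 + eps        # arm 10, first half
    losses[half:, 9] = rng.random(T - half) < 0.5 - 4 * eps  # arm 10, second half
    return losses.astype(float)

losses = make_losses(T=10_000, eps=0.1, rng=np.random.default_rng(3))
print(losses[:5_000].mean(axis=0).round(2))    # arm 9 is best in the first half
```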
5 Discussion
In this paper, we have shown that, contrary to popular belief, explicit exploration is not necessary to achieve high-probability regret bounds for non-stochastic bandit problems. Interestingly, however, we have observed in several of our experiments that our IX-based algorithms still draw every arm roughly \(\sqrt{T}\) times, even though this is not explicitly enforced by the algorithm. This suggests a need for a more complete study of the role of exploration, to find out whether pulling every single arm \(\Omega(\sqrt{T})\) times is necessary for achieving near-optimal guarantees.
One can argue that tuning the IX parameter that we introduce may actually be just as difficult in practice as tuning the parameters of EXP3.P. However, every aspect of our analysis suggests that \(\gamma_t = \eta_t/2\) is the most natural choice for these parameters, and thus this is the choice that we recommend. One limitation of our current analysis is that it only permits deterministic learning-rate and IX parameters (see the conditions of Lemma 1). That is, proving adaptive regret bounds in the vein of [15, 24, 23] that hold with high probability is still an open challenge.
Another interesting direction for future work is whether the implicit exploration approach can help in advancing the state of the art in the more general setting of linear bandits. All known algorithms for this setting rely on explicit exploration techniques, and the strength of the obtained results depends crucially on the choice of the exploration distribution (see [8, 16] for recent advances). Interestingly, IX has a natural extension to the linear bandit problem. To see this, consider the vector \(V_t = e_{I_t}\) and the matrix \(P_t = \mathbb{E}[V_t V_t^{\top}]\). Then, the IX loss estimates can be written as \(\widetilde{\ell}_t = (P_t + \gamma I)^{-1} V_t V_t^{\top} \ell_t\). Whether or not this estimate is the right choice for linear bandits remains to be seen.
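A tiny sketch of this linear-bandit extension (our illustration; in the multi-armed special case the result matches the usual IX estimate on the drawn coordinate):

```python
import numpy as np

def linear_ix_estimate(p, arm, losses, gamma):
    """Compute (P_t + gamma I)^{-1} V_t V_t^T ell_t with V_t = e_{I_t}."""
    K = len(p)
    P = np.diag(p)                       # P_t = E[V_t V_t^T] for V_t = e_{I_t}
    V = np.zeros((K, 1)); V[arm] = 1.0
    return np.linalg.solve(P + gamma * np.eye(K), V @ V.T @ losses)

p = np.array([0.2, 0.3, 0.5]); ell = np.array([0.4, 0.9, 0.1])
print(linear_ix_estimate(p, arm=2, losses=ell, gamma=0.05))
# Coordinate 2 equals ell_2 / (p_2 + gamma); all other coordinates are zero.
```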
Finally, we note that our estimates (3) are certainly not the only ones that allow avoiding explicit exploration. In fact, the careful reader may deduce from the proof of Lemma 1 that the same concentration can be shown to hold for the alternative loss estimates \(\ell_{t,i}\,\mathbb{I}_{\{I_t=i\}} / (p_{t,i} + \gamma \ell_{t,i})\) and \(\log\bigl( 1 + 2\gamma \ell_{t,i}\,\mathbb{I}_{\{I_t=i\}}/p_{t,i} \bigr) / (2\gamma)\). Actually, a variant of the latter estimate was used previously for proving high-probability regret bounds in the reward game by Audibert and Bubeck [4]; however, their proof still relied on explicit exploration. It is not hard to verify that all the results we presented in this paper (except Theorem 4) can be shown to hold for the above two estimates, too.
Acknowledgments This work was supported by INRIA, the French Ministry of Higher Education and Research, and by FUI project Hermès. The author wishes to thank Haipeng Luo for catching a bug in an earlier version of the paper, and the anonymous reviewers for their helpful suggestions.
References
[1] N. Alon, N. Cesa-Bianchi, C. Gentile, and Y. Mansour. From Bandits to Experts: A Tale of Domination and Independence. In NIPS-25, pages 1610–1618, 2012.
[2] N. Alon, N. Cesa-Bianchi, C. Gentile, S. Mannor, Y. Mansour, and O. Shamir. Nonstochastic multi-armed bandits with graph-structured feedback. arXiv preprint arXiv:1409.8428, 2014.
[3] J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009.
[4] J.-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2785–2836, 2010.
[5] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002. ISSN 0097-5397.
[6] P. L. Bartlett, V. Dani, T. P. Hayes, S. Kakade, A. Rakhlin, and A. Tewari. High-probability regret bounds for bandit online linear optimization. In COLT, pages 335–342, 2008.
[7] A. Beygelzimer, J. Langford, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandit algorithms with supervised learning guarantees. In AISTATS 2011, pages 19–26, 2011.
[8] S. Bubeck, N. Cesa-Bianchi, and S. M. Kakade. Towards minimax policies for online linear optimization with bandit feedback. 2012.
[9] S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Now Publishers Inc, 2012.
[10] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[11] N. Cesa-Bianchi, P. Gaillard, G. Lugosi, and G. Stoltz. Mirror descent meets fixed share (and feels no regret). In NIPS-25, pages 989–997, 2012.
[12] D. A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3:100–118, 1975.
[13] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
[14] J. Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97–139, 1957.
[15] E. Hazan and S. Kale. Better algorithms for benign bandits. The Journal of Machine Learning Research, 12:1287–1311, 2011.
[16] E. Hazan, Z. Karnin, and R. Meka. Volumetric spanners: an efficient exploration basis for learning. In COLT, pages 408–422, 2014.
[17] M. Herbster and M. Warmuth. Tracking the best expert. Machine Learning, 32:151–178, 1998.
[18] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291–307, 2005.
[19] T. Kocák, G. Neu, M. Valko, and R. Munos. Efficient learning by implicit exploration in bandit problems with side observations. In NIPS-27, pages 613–621, 2014.
[20] N. Littlestone and M. Warmuth. The weighted majority algorithm. Information and Computation, 108:212–261, 1994.
[21] S. Mannor and O. Shamir. From Bandits to Experts: On the Value of Side-Observations. In Neural Information Processing Systems, 2011.
[22] H. B. McMahan and M. Streeter. Tighter bounds for multi-armed bandits with expert advice. In COLT, 2009.
[23] G. Neu. First-order regret bounds for combinatorial semi-bandits. In COLT, pages 1360–1375, 2015.
[24] A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In COLT, pages 993–1019, 2013.
[25] Y. Seldin, N. Cesa-Bianchi, P. Auer, F. Laviolette, and J. Shawe-Taylor. PAC-Bayes-Bernstein inequality for martingales and its application to multiarmed bandits. In Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2, 2012.
[26] V. Vovk. Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory (COLT), pages 371–386, 1990.
| 5732 |@word exploitation:1 version:4 nd:1 open:1 forecaster:1 crucially:1 pick:3 incurs:1 boundedness:2 harder:1 selecting:1 denoting:1 interestingly:2 past:2 existing:1 current:4 com:1 contextual:1 beygelzimer:3 luo:1 gmail:1 written:1 benign:1 drop:1 update:1 half:1 warmuth:4 accordingly:1 rts:1 beginning:1 short:1 provides:2 mannor:3 revisited:1 node:1 boosting:1 simpler:1 constructed:1 direct:2 prove:5 introduce:2 tlog:1 notably:1 sublinearly:1 vtt:2 roughly:1 multi:14 relying:1 little:1 armed:14 actual:1 increasing:1 becomes:3 spain:1 colloquially:1 bounded:1 notation:3 project:1 what:2 ret:3 unobserved:1 guarantee:15 pseudo:4 every:9 tackle:1 exactly:1 control:1 arguably:2 positive:1 aggregating:1 ak:4 meet:1 solely:1 fluctuation:2 lugosi:2 inria:2 black:1 might:1 initialization:1 studied:2 suggests:2 range:3 directed:1 acknowledgment:1 union:3 regret:41 recursive:1 practice:1 empirical:4 significantly:2 confidence:5 word:1 get:2 cannot:1 undesirable:1 convenience:1 put:1 risk:1 applying:2 equivalent:1 deterministic:2 measurable:2 reviewer:1 maximizing:1 kale:1 focused:1 formalized:1 immediately:1 pompeu:1 proving:9 classic:1 traditionally:1 updated:1 feel:1 pt:36 shamir:3 annals:1 play:1 us:6 element:1 satisfying:1 particularly:1 vein:1 observed:1 ft:10 role:1 preprint:1 ensures:1 observes:3 principled:1 environment:4 predictable:1 reward:13 depend:1 solving:1 upon:1 learner:20 basis:1 eit:1 various:2 derivation:1 describe:2 choosing:1 widely:1 larger:1 say:1 online:9 sequence:11 propose:6 maximal:1 reyzin:1 mixing:1 flexibility:1 achieve:3 poorly:1 subgraph:1 intuitive:2 bug:1 haipeng:1 recipe:1 extending:1 help:2 derive:1 illustrate:1 alon:5 tale:1 minor:1 p2:1 strong:2 come:1 implies:2 trading:1 direction:1 stochastic:11 subsequently:1 exploration:30 education:1 require:1 fix:5 generalization:1 anonymous:1 tighter:2 elementary:2 extension:2 hold:17 around:1 considered:2 exp:11 achieves:1 adopt:1 purpose:1 e2t:1 estimation:3 integrates:1 combinatorial:1 currently:1 gaillard:1 weighted:4 minimization:2 dani:1 clearly:2 aim:1 rather:1 kalai:1 avoid:1 pn:1 corollary:4 focus:1 improvement:1 bernoulli:1 contrast:1 adversarial:1 sense:3 helpful:1 typically:1 bt:6 bandit:46 selects:2 i1:1 interested:1 arg:1 colt:7 denoted:1 art:2 special:1 construct:1 karnin:1 having:1 sampling:1 lille:1 fairness:1 inevitable:1 future:1 recommend:1 fundamentally:1 serious:1 oblivious:2 preserve:1 simultaneously:4 individual:1 replaced:1 delicate:1 evaluation:1 certainly:1 nonincreasing:2 kt:19 partial:1 necessary:2 respective:2 stoltz:1 conduct:3 taylor:1 littlestone:1 re:1 catching:1 theoretical:1 earlier:1 deviation:1 uniform:2 too:3 perturbed:1 chooses:2 unbiasedness:1 herbster:3 randomized:1 siam:1 sequel:1 probabilistic:2 fui:1 quickly:1 gergely:2 w1:2 again:1 cesa:13 opposed:1 choose:2 worse:1 adversely:1 expert:12 leading:5 li:1 inc:1 explicitly:1 audibert:4 later:2 hazan:2 start:1 relied:1 bayes:2 complicated:1 elaborated:1 contribution:1 minimize:1 variance:1 ekt:1 largely:1 correspond:1 weak:1 rbt:2 monitoring:1 randomness:1 history:1 koc:4 neu:5 definition:1 volumetric:1 against:2 involved:2 associated:1 proof:12 gain:1 proved:2 treatment:2 popular:1 anytime:2 knowledge:2 improves:3 organized:1 auer:11 actually:3 focusing:1 higher:1 supervised:1 follow:2 improved:4 formulation:2 evaluated:1 box:1 though:1 furthermore:1 just:1 implicit:11 langford:1 hand:1 french:1 pulling:1 grows:1 usa:1 effect:1 concept:1 true:5 unbiased:1 multiplier:4 verify:1 deal:1 round:16 game:16 during:1 
supermartingale:1 steady:1 complete:1 demonstrate:2 theoretic:1 common:1 superior:1 mt:1 exponentially:2 tail:1 accumulate:1 significant:1 multiarmed:3 refer:2 cambridge:1 meka:1 tuning:2 vanilla:1 similarly:3 shawe:1 europe:1 deduce:1 base:3 recent:1 conjectured:1 forcing:1 inequality:7 vt:3 seen:1 ministry:1 gentile:2 cesabianchi:1 semi:1 full:1 afterwards:1 hannan:1 technical:1 match:1 adapt:1 exp3:6 believed:1 concerning:3 impact:1 prediction:1 variant:5 essentially:1 expectation:2 arxiv:2 sometimes:1 represent:1 achieved:3 remarkably:1 grow:1 publisher:1 appropriately:2 biased:3 rest:1 unlike:1 operate:1 ot:5 tend:1 elegant:1 contrary:1 sridharan:1 near:1 noting:1 presence:2 bernstein:1 easy:3 concerned:1 results2:1 switch:1 affect:1 independence:3 nonstochastic:5 competing:1 suboptimal:1 shift:1 whether:3 six:4 bartlett:1 effort:1 york:1 constitute:1 action:9 repeatedly:1 tewari:1 cleaner:1 tune:1 schapire:4 outperform:1 notice:1 reinforces:1 didactic:1 key:2 four:1 achieving:1 clarity:1 changing:1 clean:1 advancing:1 graph:3 concreteness:1 enforced:1 run:1 injected:1 reader:1 draw:7 decision:3 appendix:4 bound:46 nonnegative:2 annual:2 strength:1 aspect:1 argument:1 min:4 vempala:1 department:1 structured:1 according:2 combination:1 kakade:2 modification:3 making:2 b:1 taken:1 equation:5 previously:6 remains:1 turn:2 eventually:1 studying:1 permit:2 apply:1 observe:5 appropriate:1 alternative:1 robustness:2 original:1 laviolette:1 classical:1 skt:1 strategy:5 concentration:5 rt:15 fabra:1 traditional:1 dependence:1 p2t:1 exhibit:1 thank:1 majority:1 argue:2 reason:1 issn:1 mini:1 difficult:1 statement:3 nord:1 negative:1 zt:5 policy:3 perform:1 bianchi:13 upper:1 observation:13 markov:1 arc:1 enabling:1 descent:1 payoff:1 communication:1 team:1 precise:2 mansour:2 varied:1 arbitrary:5 namely:1 optimized:1 coherent:1 barcelona:1 nip:3 address:1 adversary:2 below:2 regime:1 challenge:2 spanner:1 max:1 including:1 reliable:1 belief:2 natural:2 force:1 rely:2 valko:2 scarce:1 advanced:1 arm:34 nth:2 scheme:1 minimax:4 technology:1 irrespective:1 prior:2 literature:4 review:1 freund:3 loss:41 interesting:2 limitation:1 proportional:2 suggestion:1 acyclic:1 xp:41 share:2 course:1 repeat:1 last:2 supported:1 enjoys:1 bias:1 side:11 allow:3 wide:1 taking:2 munos:2 benefit:1 feedback:2 cumulative:1 computes:1 author:2 made:1 adaptive:2 keep:2 hayes:1 conceptual:1 herm:1 leader:2 streeter:2 table:2 nature:1 robust:1 improving:1 interact:2 constructing:1 aistats:1 pk:5 main:2 linearly:1 bounding:2 freedman:2 repeated:2 allowed:1 advice:7 fashion:1 martingale:2 ny:1 explicit:14 wish:1 exercise:1 mcmahan:2 comput:1 third:2 ix:34 learns:1 z0:1 theorem:12 jt:2 showing:1 pac:1 rakhlin:2 essential:1 workshop:2 sequential:1 importance:1 mirror:1 illustrates:1 conditioned:1 horizon:1 lt:1 logarithmic:1 explore:1 bubeck:11 seldin:1 ez:1 tracking:5 satisfies:2 relies:1 goal:4 careful:1 towards:1 change:1 hard:1 specifically:1 except:1 vovk:1 wt:16 lemma:12 called:6 total:1 e:3 player:1 domination:1 latter:1 evaluate:1 avoiding:1 |
5,228 | 5,733 | A Nonconvex Optimization Framework for Low Rank Matrix Estimation*
Tuo Zhao
Johns Hopkins University
Zhaoran Wang
Han Liu
Princeton University
Abstract
We study the estimation of low rank matrices via nonconvex optimization. Compared with convex relaxation, nonconvex optimization exhibits superior empirical
performance for large scale instances of low rank matrix estimation. However, the understanding of its theoretical guarantees is limited. In this paper, we define the notion of projected oracle divergence, based on which we establish sufficient conditions for the success of nonconvex optimization. We illustrate the consequences
of this general framework for matrix sensing. In particular, we prove that a broad
class of nonconvex optimization algorithms, including alternating minimization
and gradient-type methods, geometrically converge to the global optimum and
exactly recover the true low rank matrices under standard conditions.
1 Introduction
Let \(M^* \in \mathbb{R}^{m \times n}\) be a rank-k matrix with k much smaller than m and n. Our goal is to estimate \(M^*\) based on partial observations of its entries. For example, matrix sensing is based on linear measurements \(\langle A_i, M^* \rangle\), where \(i \in \{1, \ldots, d\}\) with d much smaller than mn and \(A_i\) is the sensing matrix. In the past decade, significant progress has been made on the recovery of low rank matrices [4, 5, 23, 18, 15, 16, 12, 22, 7, 25, 19, 6, 14, 11, 13, 8, 9, 10, 27]. Among all these existing works, most are based upon convex relaxation with a nuclear norm constraint or regularization. Nevertheless, solving these convex optimization problems can be computationally prohibitive in high dimensional regimes with large m and n [27]. A computationally more efficient alternative is nonconvex optimization. In particular, we reparameterize the m × n matrix variable M in the optimization problem as \(UV^{\top}\) with \(U \in \mathbb{R}^{m \times k}\) and \(V \in \mathbb{R}^{n \times k}\), and optimize over U and V. Such a reparametrization automatically enforces the low rank structure and leads to low computational cost per iteration. Due to this reason, the nonconvex approach is widely used in large scale applications such as recommendation systems [17].
Despite the superior empirical performance of the nonconvex approach, the understanding of its theoretical guarantees is relatively limited in comparison with the convex relaxation approach. Only recently has there been progress on coordinate descent-type nonconvex optimization methods, known as alternating minimization [14, 8, 9, 10]. These works show that, provided a desired initialization, the alternating minimization algorithm converges at a geometric rate to \(U^* \in \mathbb{R}^{m \times k}\) and \(V^* \in \mathbb{R}^{n \times k}\), which satisfy \(M^* = U^* V^{*\top}\). Meanwhile, [15, 16] establish the convergence of gradient-type methods, and [27] further establish the convergence of a broad class of nonconvex algorithms including both gradient-type and coordinate descent-type methods. However, [15, 16, 27] only establish asymptotic convergence for an infinite number of iterations, rather than an explicit rate of convergence. Besides these works, [18, 12, 13] consider projected gradient-type methods, which optimize over the matrix variable \(M \in \mathbb{R}^{m \times n}\) rather than \(U \in \mathbb{R}^{m \times k}\) and \(V \in \mathbb{R}^{n \times k}\). These methods involve calculating the top k singular vectors of an m × n matrix at each iteration. For
* Research supported by NSF IIS1116730, NSF IIS1332109, NSF IIS1408910, NSF IIS1546482-BIGDATA, NSF DMS1454377-CAREER, NIH R01GM083084, NIH R01HG06841, NIH R01MH102339, and FDA HHSF223201000072C.
k much smaller than m and n, they incur much higher computational cost per iteration than the aforementioned methods that optimize over U and V. All these works, except [27], focus on specific algorithms, while [27] does not establish an explicit optimization rate of convergence.
In this paper, we propose a general framework that unifies a broad class of nonconvex algorithms for low rank matrix estimation. At the core of this framework is a quantity named the projected oracle divergence, which sharply captures the evolution of generic optimization algorithms in the presence of nonconvexity. Based on the projected oracle divergence, we establish sufficient conditions under which the iteration sequences geometrically converge to the global optima. For matrix sensing, a direct consequence of this general framework is that a broad family of nonconvex algorithms, including gradient descent, coordinate gradient descent and coordinate descent, converge at a geometric rate to the true low rank matrices \(U^*\) and \(V^*\). In particular, our general framework covers alternating minimization as a special case and recovers the results of [14, 8, 9, 10] under standard conditions. Meanwhile, our framework covers gradient-type methods, which are also widely used in practice [28, 24]. To the best of our knowledge, our framework is the first one that establishes exact recovery guarantees and geometric rates of convergence for a broad family of nonconvex matrix sensing algorithms.
To achieve maximum generality, our unified analytic framework significantly differs from previous
works. In detail, [14, 8, 9, 10] view alternating minimization as a perturbed version of the power
method. However, their point of view relies on the closed form solution of each iteration of alternating
minimization, which makes it hard to generalize to other algorithms, e.g., gradient-type methods.
Meanwhile, [27] take a geometric point of view. In detail, they show that the global optimum of the
optimization problem is the unique stationary point within its neighborhood and thus a broad class of
algorithms succeed. However, such geometric analysis of the objective function does not characterize
the convergence rate of specific algorithms towards the stationary point. Unlike existing analytic
frameworks, we analyze nonconvex optimization algorithms as perturbed versions of their convex
counterparts. For example, under our framework we view alternating minimization as a perturbed
version of coordinate descent on convex objective functions. We use the key quantity, projected oracle
divergence, to characterize such a perturbation effect, which results from the local nonconvexity
at intermediate solutions. This framework allows us to establish explicit rates of convergence in a manner analogous to existing convex optimization analysis.
Notation: For a vector \(v = (v_1, \ldots, v_d)^{\top} \in \mathbb{R}^d\), let the vector \(\ell_q\) norm be \(\|v\|_q^q = \sum_j |v_j|^q\). For a matrix \(A \in \mathbb{R}^{m \times n}\), we use \(A_{*j} = (A_{1j}, \ldots, A_{mj})^{\top}\) to denote the j-th column of A, and \(A_{i*} = (A_{i1}, \ldots, A_{in})^{\top}\) to denote the i-th row of A. Let \(\sigma_{\max}(A)\) and \(\sigma_{\min}(A)\) be the largest and smallest nonzero singular values of A. We define the following matrix norms: \(\|A\|_F^2 = \sum_j \|A_{*j}\|_2^2\) and \(\|A\|_2 = \sigma_{\max}(A)\). Moreover, we define \(\|A\|_*\) to be the sum of all singular values of A. Given another matrix \(B \in \mathbb{R}^{m \times n}\), we define the inner product as \(\langle A, B \rangle = \sum_{i,j} A_{ij} B_{ij}\). We define \(e_i\) as an indicator vector, where the i-th entry is one and all other entries are zero. For a bivariate function f(u, v), we define \(\nabla_u f(u, v)\) to be the gradient with respect to u. Moreover, we use the common notations \(\Omega(\cdot)\), \(O(\cdot)\), and \(o(\cdot)\) to characterize the asymptotics of two real sequences.
2 Problem Formulation and Algorithms
Let \(M^* \in \mathbb{R}^{m \times n}\) be the unknown low rank matrix of interest. We have d sensing matrices \(A_i \in \mathbb{R}^{m \times n}\) with \(i \in \{1, \ldots, d\}\). Our goal is to estimate \(M^*\) based on \(b_i = \langle A_i, M^* \rangle\) in the high dimensional regime with d much smaller than mn. Under such a regime, a common assumption is \(\operatorname{rank}(M^*) = k \ll \min\{d, m, n\}\). Existing approaches generally recover \(M^*\) by solving the following convex optimization problem
\[
\min_{M \in \mathbb{R}^{m \times n}} \|M\|_* \quad \text{subject to } b = \mathcal{A}(M), \tag{2.1}
\]
where \(b = [b_1, \ldots, b_d]^{\top} \in \mathbb{R}^d\), and \(\mathcal{A}(M) : \mathbb{R}^{m \times n} \to \mathbb{R}^d\) is an operator defined as
\[
\mathcal{A}(M) = [\langle A_1, M \rangle, \ldots, \langle A_d, M \rangle]^{\top} \in \mathbb{R}^d. \tag{2.2}
\]
Existing convex optimization algorithms for solving (2.1) are computationally inefficient, in the sense that they incur high per-iteration computational cost, and only attain sublinear rates of convergence to the global optimum [14]. Instead, in large scale settings we usually consider the following nonconvex optimization problem
\[
\min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} \mathcal{F}(U, V), \quad \text{where } \mathcal{F}(U, V) = \frac{1}{2} \| b - \mathcal{A}(U V^{\top}) \|_2^2. \tag{2.3}
\]
The reparametrization of \(M = UV^{\top}\), though making the optimization problem in (2.3) nonconvex, significantly improves the computational efficiency. Existing literature [17, 28, 21, 24] has established convincing empirical evidence that (2.3) can be effectively solved by a broad variety of gradient-based nonconvex optimization algorithms, including gradient descent, alternating exact minimization (i.e., alternating least squares or coordinate descent), as well as alternating gradient descent (i.e., coordinate gradient descent), which are shown in Algorithm 1.
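As a concrete reference for the objective in (2.3), here is a minimal NumPy sketch of F(U, V) and its partial gradients (our own illustration; A_ops is an assumed (d, m, n) array stacking the sensing matrices).

```python
import numpy as np

def sensing_objective(U, V, A_ops, b):
    """F(U, V) = 0.5 * || A(U V^T) - b ||_2^2."""
    resid = np.einsum("dmn,mn->d", A_ops, U @ V.T) - b
    return 0.5 * np.sum(resid ** 2)

def sensing_gradients(U, V, A_ops, b):
    """Partial gradients of F with respect to U and V."""
    resid = np.einsum("dmn,mn->d", A_ops, U @ V.T) - b
    G = np.einsum("d,dmn->mn", resid, A_ops)   # dF/dM = sum_i resid_i * A_i
    return G @ V, G.T @ U                      # grad_U, grad_V
```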
It is worth noting that the QR decomposition and the rank k singular value decomposition in Algorithm 1 can be accomplished efficiently. In particular, the QR decomposition can be accomplished in \(O(k^2 \max\{m, n\})\) operations, while the rank k singular value decomposition can be accomplished in \(O(kmn)\) operations. In fact, the QR decomposition is not necessary for particular update schemes; e.g., [14] prove that the alternating exact minimization update schemes with or without the QR decomposition are equivalent.
Algorithm 1 A family of nonconvex optimization algorithms for matrix sensing. Here (U, D, V) ← KSVD(M) is the rank k singular value decomposition of M: D is a diagonal matrix containing the top k singular values of M in decreasing order, and U and V contain the corresponding top k left and right singular vectors of M. Here (V̄, R_V) ← QR(V) is the QR decomposition, where V̄ is the corresponding orthonormal matrix and R_V is the corresponding upper triangular matrix.
Input: \(\{b_i\}_{i=1}^{d}\), \(\{A_i\}_{i=1}^{d}\)
Parameters: step size η, total number of iterations T
Initialization: \((\bar{U}^{(0)}, D^{(0)}, \bar{V}^{(0)}) \leftarrow \mathrm{KSVD}\bigl(\sum_{i=1}^{d} b_i A_i\bigr)\), \(V^{(0)} \leftarrow \bar{V}^{(0)} D^{(0)}\), \(U^{(0)} \leftarrow \bar{U}^{(0)} D^{(0)}\)
For t = 0, ..., T − 1:
  Updating V:
    Alternating exact minimization: \(V^{(t+0.5)} \leftarrow \operatorname{argmin}_V \mathcal{F}(\bar{U}^{(t)}, V)\); \((\bar{V}^{(t+1)}, R_V^{(t+0.5)}) \leftarrow \mathrm{QR}(V^{(t+0.5)})\)
    Alternating gradient descent: \(V^{(t+0.5)} \leftarrow V^{(t)} - \eta \nabla_V \mathcal{F}(\bar{U}^{(t)}, V^{(t)})\); \((\bar{V}^{(t+1)}, R_V^{(t+0.5)}) \leftarrow \mathrm{QR}(V^{(t+0.5)})\); \(U^{(t)} \leftarrow U^{(t)} R_V^{(t+0.5)\top}\)
    Gradient descent: \(V^{(t+0.5)} \leftarrow V^{(t)} - \eta \nabla_V \mathcal{F}(\bar{U}^{(t)}, V^{(t)})\); \((\bar{V}^{(t+1)}, R_V^{(t+0.5)}) \leftarrow \mathrm{QR}(V^{(t+0.5)})\); \(U^{(t+1)} \leftarrow U^{(t)} R_V^{(t+0.5)\top}\)
  Updating U:
    Alternating exact minimization: \(U^{(t+0.5)} \leftarrow \operatorname{argmin}_U \mathcal{F}(U, \bar{V}^{(t+1)})\); \((\bar{U}^{(t+1)}, R_U^{(t+0.5)}) \leftarrow \mathrm{QR}(U^{(t+0.5)})\)
    Alternating gradient descent: \(U^{(t+0.5)} \leftarrow U^{(t)} - \eta \nabla_U \mathcal{F}(U^{(t)}, \bar{V}^{(t+1)})\); \((\bar{U}^{(t+1)}, R_U^{(t+0.5)}) \leftarrow \mathrm{QR}(U^{(t+0.5)})\); \(V^{(t+1)} \leftarrow V^{(t+1)} R_U^{(t+0.5)\top}\)
    Gradient descent: \(U^{(t+0.5)} \leftarrow U^{(t)} - \eta \nabla_U \mathcal{F}(U^{(t)}, \bar{V}^{(t)})\); \((\bar{U}^{(t+1)}, R_U^{(t+0.5)}) \leftarrow \mathrm{QR}(U^{(t+0.5)})\); \(V^{(t+1)} \leftarrow V^{(t)} R_U^{(t+0.5)\top}\)
End for
Output: \(M^{(T)} \leftarrow U^{(T-0.5)} \bar{V}^{(T)\top}\) (for gradient descent we use \(M^{(T)} \leftarrow U^{(T)} \bar{V}^{(T)\top}\))
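To make the schemes concrete, here is a small Python sketch of one alternating exact minimization iteration under the objective of (2.3). It is an illustration rather than the authors' implementation; solving the V-subproblem by vectorized least squares is our own choice and is only practical for small n·k.

```python
import numpy as np

def argmin_V(U, A_ops, b):
    """Solve min_V || A(U V^T) - b ||_2 by least squares over the n*k unknowns."""
    d, m, n = A_ops.shape
    k = U.shape[1]
    # <A_i, U V^T> = sum_{j,l} (A_i^T U)_{jl} V_{jl}: build a d x (n*k) design.
    design = np.einsum("dmn,mk->dnk", A_ops, U).reshape(d, n * k)
    V_flat, *_ = np.linalg.lstsq(design, b, rcond=None)
    return V_flat.reshape(n, k)

def alt_min_iteration(U_bar, A_ops, b):
    V_half = argmin_V(U_bar, A_ops, b)               # V^{(t+0.5)}
    V_bar, _ = np.linalg.qr(V_half)                  # QR renormalization
    # The U-step is the V-step applied to the transposed sensing matrices.
    U_half = argmin_V(V_bar, A_ops.transpose(0, 2, 1), b)
    U_bar, _ = np.linalg.qr(U_half)
    return U_bar, V_bar, U_half @ V_bar.T            # current estimate of M*
```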
3 Theoretical Analysis
We analyze the convergence properties of the general family of nonconvex optimization algorithms illustrated in §2. Before we present the main results, we first introduce a unified analytic framework based on a key quantity named the projected oracle divergence. Such a unified framework equips our theory with the maximum generality. Without loss of generality, we assume m ≤ n throughout the rest of this paper.
3.1 Projected Oracle Divergence
We first provide an intuitive explanation for the success of nonconvex optimization algorithms, which forms the basis of our later proof for the main results. Recall that (2.3) is a special instance of the following optimization problem,
\[
\min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} f(U, V). \tag{3.1}
\]
A key observation is that, given fixed U, f(U, ·) is strongly convex and smooth in V under suitable conditions, and the same also holds for U given fixed V correspondingly. For the convenience of discussion, we summarize this observation in the following technical condition, which will be later verified for matrix sensing under suitable conditions.
Condition 3.1 (Strong Biconvexity and Bismoothness). There exist universal constants \(\mu_+ > 0\) and \(\mu_- > 0\) such that
\[
\frac{\mu_-}{2} \|U' - U\|_F^2 \le f(U', V) - f(U, V) - \langle U' - U, \nabla_U f(U, V) \rangle \le \frac{\mu_+}{2} \|U' - U\|_F^2 \quad \text{for all } U, U',
\]
\[
\frac{\mu_-}{2} \|V' - V\|_F^2 \le f(U, V') - f(U, V) - \langle V' - V, \nabla_V f(U, V) \rangle \le \frac{\mu_+}{2} \|V' - V\|_F^2 \quad \text{for all } V, V'.
\]
For the simplicity of discussion, for now we assume \(U^*\) and \(V^*\) are the unique global minimizers of the generic optimization problem in (3.1). Assuming \(U^*\) is given, we can obtain \(V^*\) by
\[
V^* = \operatorname*{argmin}_{V \in \mathbb{R}^{n \times k}} f(U^*, V). \tag{3.2}
\]
Condition 3.1 implies the objective function in (3.2) is strongly convex and smooth. Hence, we can choose any gradient-based algorithm to obtain \(V^*\). For example, we can directly solve for \(V^*\) in
\[
\nabla_V f(U^*, V) = 0, \tag{3.3}
\]
or iteratively solve for \(V^*\) using gradient descent, i.e.,
\[
V^{(t)} = V^{(t-1)} - \eta \nabla_V f(U^*, V^{(t-1)}), \tag{3.4}
\]
where η is the step size. For the simplicity of discussion, we put aside the renormalization issue for now. In the example of gradient descent, by invoking classical convex optimization results [20], it is easy to prove that
\[
\|V^{(t)} - V^*\|_F \le \kappa \|V^{(t-1)} - V^*\|_F \quad \text{for all } t = 0, 1, 2, \ldots,
\]
where \(\kappa \in (0, 1)\) is a contraction coefficient, which depends on \(\mu_+\) and \(\mu_-\) in Condition 3.1.
However, the first-order oracle \(\nabla_V f(U^*, V^{(t-1)})\) is not accessible in practice, since we do not know \(U^*\). Instead, we only have access to \(\nabla_V f(U, V^{(t-1)})\), where U is arbitrary. To characterize the divergence between the ideal first-order oracle \(\nabla_V f(U^*, V^{(t-1)})\) and the accessible first-order oracle \(\nabla_V f(U, V^{(t-1)})\), we define a key quantity named the projected oracle divergence, which takes the form
\[
D(V, V', U) = \Bigl\langle \nabla_V f(U^*, V') - \nabla_V f(U, V'),\; \frac{V - V^*}{\|V - V^*\|_F} \Bigr\rangle, \tag{3.5}
\]
where V' is the point for evaluating the gradient; in the above example, it holds for \(V' = V^{(t-1)}\). Later we will illustrate that the projection of the difference of first-order oracles onto a specific one dimensional space, i.e., the direction of \(V - V^*\), is critical to our analysis. In the above example of gradient descent, we will prove later that for \(V^{(t)} = V^{(t-1)} - \eta \nabla_V f(U, V^{(t-1)})\), we have
\[
\|V^{(t)} - V^*\|_F \le \kappa \|V^{(t-1)} - V^*\|_F + 2/\mu_+ \cdot D(V^{(t)}, V^{(t-1)}, U). \tag{3.6}
\]
In other words, the projection of the divergence of first-order oracles onto the direction of \(V^{(t)} - V^*\) captures the perturbation effect of employing the accessible first-order oracle \(\nabla_V f(U, V^{(t-1)})\) instead of the ideal \(\nabla_V f(U^*, V^{(t-1)})\). For \(V^{(t)} = \operatorname{argmin}_V f(U, V)\), we will prove that
\[
\|V^{(t)} - V^*\|_F \le 1/\mu_- \cdot D(V^{(t)}, V^{(t)}, U). \tag{3.7}
\]
According to the update schemes shown in Algorithm 1, for alternating exact minimization, we set \(U = U^{(t)}\) in (3.7), while for gradient descent or alternating gradient descent, we set \(U = U^{(t-1)}\) or \(U = U^{(t)}\) in (3.6), respectively. Correspondingly, similar results hold for \(\|U^{(t)} - U^*\|_F\).
To establish the geometric rate of convergence towards the global minima \(U^*\) and \(V^*\), it remains to establish upper bounds for the projected oracle divergence. In the example of gradient descent we will prove that for some \(\beta \in (0, 1 - \kappa)\),
\[
2/\mu_+ \cdot D(V^{(t)}, V^{(t-1)}, U^{(t-1)}) \le \beta \|U^{(t-1)} - U^*\|_F,
\]
which together with (3.6) (where we take \(U = U^{(t-1)}\)) implies
\[
\|V^{(t)} - V^*\|_F \le \kappa \|V^{(t-1)} - V^*\|_F + \beta \|U^{(t-1)} - U^*\|_F. \tag{3.8}
\]
Correspondingly, similar results hold for \(\|U^{(t)} - U^*\|_F\), i.e.,
\[
\|U^{(t)} - U^*\|_F \le \kappa \|U^{(t-1)} - U^*\|_F + \beta \|V^{(t-1)} - V^*\|_F. \tag{3.9}
\]
Combining (3.8) and (3.9) we then establish the contraction
\[
\max\{\|V^{(t)} - V^*\|_F, \|U^{(t)} - U^*\|_F\} \le (\kappa + \beta) \cdot \max\{\|V^{(t-1)} - V^*\|_F, \|U^{(t-1)} - U^*\|_F\},
\]
which further implies geometric convergence, since \(\beta \in (0, 1 - \kappa)\). Respectively, we can establish similar results for alternating exact minimization and alternating gradient descent. Based upon such a unified analytic framework, we now simultaneously establish the main results.
Remark 3.2. Our proposed projected oracle divergence is inspired by previous work [3, 2, 1], which analyzes the Wirtinger Flow algorithm for phase retrieval, the expectation-maximization (EM) algorithm for latent variable models, and the gradient descent algorithm for sparse coding. Though their analyses exploit similar nonconvex structures, they work on completely different problems, and the delivered technical results are also fundamentally different.
3.2 Matrix Sensing
Before we present our main results, we first introduce an assumption known as the restricted isometry property (RIP). Recall that k is the rank of the target low rank matrix \(M^*\).
Assumption 3.3. The linear operator \(\mathcal{A}(\cdot) : \mathbb{R}^{m \times n} \to \mathbb{R}^d\) defined in (2.2) satisfies 2k-RIP with parameter \(\delta_{2k} \in (0, 1)\), i.e., for all \(\Delta \in \mathbb{R}^{m \times n}\) such that \(\operatorname{rank}(\Delta) \le 2k\), it holds that
\[
(1 - \delta_{2k}) \|\Delta\|_F^2 \le \|\mathcal{A}(\Delta)\|_2^2 \le (1 + \delta_{2k}) \|\Delta\|_F^2.
\]
Several random matrix ensembles satisfy 2k-RIP for a sufficiently large d with high probability. For example, suppose that each entry of \(A_i\) is independently drawn from a sub-Gaussian distribution; then \(\mathcal{A}(\cdot)\) satisfies 2k-RIP with parameter \(\delta_{2k}\) with high probability for \(d = \Omega(\delta_{2k}^{-2}\, kn \log n)\).
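A quick empirical illustration of the RIP bounds in Assumption 3.3 for a Gaussian ensemble follows (a sanity check on random rank-2k test matrices, not a proof; all dimensions are small illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, d = 20, 25, 2, 1500
A_ops = rng.normal(size=(d, m, n)) / np.sqrt(d)   # normalized Gaussian ensemble

ratios = []
for _ in range(200):
    L = rng.normal(size=(m, 2 * k))
    R = rng.normal(size=(n, 2 * k))
    Delta = L @ R.T                                # a random rank-2k matrix
    ratios.append(np.sum(np.einsum("dmn,mn->d", A_ops, Delta) ** 2)
                  / np.sum(Delta ** 2))
print("||A(Delta)||^2 / ||Delta||_F^2 in [%.3f, %.3f]" % (min(ratios), max(ratios)))
```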
The following theorem establishes the geometric rate of convergence of the nonconvex optimization algorithms summarized in Algorithm 1.
Theorem 3.4. Assume there exists a sufficiently small constant \(C_1\) such that \(\mathcal{A}(\cdot)\) satisfies 2k-RIP with \(\delta_{2k} \le C_1/k\), and the largest and smallest nonzero singular values of \(M^*\) are constants, which do not scale with (d, m, n, k). For any pre-specified precision ε, there exist an η and universal constants \(C_2\) and \(C_3\) such that for all \(T \ge C_2 \log(C_3/\epsilon)\), we have \(\|M^{(T)} - M^*\|_F \le \epsilon\).
The proof of Theorem 3.4 is provided in Section 4.1 and Appendices A.1 and A.2. Theorem 3.4 implies that all three nonconvex optimization algorithms geometrically converge to the global optimum. Moreover, assuming that each entry of \(A_i\) is independently drawn from a sub-Gaussian distribution with mean zero and variance proxy one, our result further suggests that, to achieve exact low rank matrix recovery, our algorithm requires the number of measurements d to satisfy
\[
d = \Omega(k^3 n \log n), \tag{3.10}
\]
since we assume that \(\delta_{2k} \le C_1/k\). This sample complexity result matches the state-of-the-art result for nonconvex optimization methods, which is established by [14]. In comparison with their result, which only covers the alternating exact minimization algorithm, our results hold for a broader variety of nonconvex optimization algorithms.
Note that the sample complexity in (3.10) depends on a polynomial of \(\sigma_{\max}(M^*)/\sigma_{\min}(M^*)\), which is treated as a constant in our paper. If we allow \(\sigma_{\max}(M^*)/\sigma_{\min}(M^*)\) to increase with the dimension, we can plug the nonconvex optimization algorithms into the multi-stage framework proposed by [14]. Following similar lines to the proof of Theorem 3.4, we can derive a new sample complexity, which is independent of \(\sigma_{\max}(M^*)/\sigma_{\min}(M^*)\). See more details in [14].
4 Proof of Main Results
Due to space limitation, we only sketch the proof of Theorem 3.4 for alternating exact minimization. The proof of Theorem 3.4 for alternating gradient descent and gradient descent, and related lemmas, are provided in the appendix. For notational simplicity, let \(\sigma_1 = \sigma_{\max}(M^*)\) and \(\sigma_k = \sigma_{\min}(M^*)\).
Before we proceed with the main proof, we first introduce the following lemma, which verifies Condition 3.1.
Lemma 4.1. Suppose that \(\mathcal{A}(\cdot)\) satisfies 2k-RIP with parameter \(\delta_{2k}\). Given an arbitrary orthonormal matrix \(\bar{U} \in \mathbb{R}^{m \times k}\), for any \(V, V' \in \mathbb{R}^{n \times k}\), we have
\[
\frac{1 - \delta_{2k}}{2} \|V' - V\|_F^2
\le \mathcal{F}(\bar{U}, V') - \mathcal{F}(\bar{U}, V) - \langle \nabla_V \mathcal{F}(\bar{U}, V), V' - V \rangle
\le \frac{1 + \delta_{2k}}{2} \|V' - V\|_F^2.
\]
The proof of Lemma 4.1 is provided in Appendix B.1. Lemma 4.1 implies that \(\mathcal{F}(\bar{U}, \cdot)\) is strongly convex and smooth in V given a fixed orthonormal matrix \(\bar{U}\), as specified in Condition 3.1. Equipped with Lemma 4.1, we now lay out the proof for each update scheme in Algorithm 1.
4.1 Proof of Theorem 3.4 (Alternating Exact Minimization)
Proof. Throughout the proof of alternating exact minimization, we define a constant \(\xi \in (1, \infty)\) for notational simplicity. We assume that at the t-th iteration, there exists a matrix factorization of \(M^* = \bar{U}^{*(t)} V^{*(t)\top}\), where \(\bar{U}^{*(t)}\) is orthonormal. We choose the projected oracle divergence as
\[
D(V^{(t+0.5)}, V^{(t+0.5)}, \bar{U}^{(t)})
= \Bigl\langle \nabla_V \mathcal{F}(\bar{U}^{*(t)}, V^{(t+0.5)}) - \nabla_V \mathcal{F}(\bar{U}^{(t)}, V^{(t+0.5)}),\;
\frac{V^{(t+0.5)} - V^{*(t)}}{\|V^{(t+0.5)} - V^{*(t)}\|_F} \Bigr\rangle.
\]
Remark 4.2. Note that the matrix factorization is not necessarily unique, because given a factorization of \(M^* = UV^{\top}\), we can always obtain a new factorization \(M^* = \widetilde{U} \widetilde{V}^{\top}\), where \(\widetilde{U} = UO\) and \(\widetilde{V} = VO\) for an arbitrary unitary matrix \(O \in \mathbb{R}^{k \times k}\). However, this is not an issue for our convergence analysis. As will be shown later, we can prove that there always exists a factorization of \(M^*\) satisfying the desired computational properties for each iteration (see Lemma 4.5 and Corollaries 4.7 and 4.8).
The following lemma establishes an upper bound for the projected oracle divergence.
Lemma 4.3. Suppose that \(\delta_{2k}\) and \(\bar{U}^{(t)}\) satisfy
\[
\delta_{2k} \le \frac{\sqrt{2}\,(1 - \delta_{2k})\,\sigma_k^2}{4 \xi k (1 + \delta_{2k})\, \sigma_1}
\quad \text{and} \quad
\|\bar{U}^{(t)} - \bar{U}^{*(t)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi (1 + \delta_{2k})\, \sigma_1}. \tag{4.1}
\]
Then we have
\[
D(V^{(t+0.5)}, V^{(t+0.5)}, \bar{U}^{(t)}) \le \frac{(1 - \delta_{2k})\,\sigma_k}{2\xi}\, \|\bar{U}^{(t)} - \bar{U}^{*(t)}\|_F.
\]
The proof of Lemma 4.3 is provided in Appendix B.2. Lemma 4.3 shows that the projected oracle divergence for updating V diminishes with the estimation error of \(\bar{U}^{(t)}\). The following lemma quantifies the progress of an exact minimization step using the projected oracle divergence.
Lemma 4.4. We have \(\|V^{(t+0.5)} - V^{*(t)}\|_F \le \frac{1}{1 - \delta_{2k}} \cdot D(V^{(t+0.5)}, V^{(t+0.5)}, \bar{U}^{(t)})\).
The proof of Lemma 4.4 is provided in Appendix B.3. Lemma 4.4 illustrates that the estimation error of \(V^{(t+0.5)}\) diminishes with the projected oracle divergence. The following lemma characterizes the effect of the renormalization step using the QR decomposition.
Lemma 4.5. Suppose that \(V^{(t+0.5)}\) satisfies
\[
\|V^{(t+0.5)} - V^{*(t)}\|_F \le \sigma_k / 4. \tag{4.2}
\]
Then there exists a factorization of \(M^* = U^{*(t+1)} \bar{V}^{*(t+1)\top}\) such that \(\bar{V}^{*(t+1)} \in \mathbb{R}^{n \times k}\) is an orthonormal matrix, and it satisfies \(\|\bar{V}^{(t+1)} - \bar{V}^{*(t+1)}\|_F \le 2/\sigma_k \cdot \|V^{(t+0.5)} - V^{*(t)}\|_F\).
The proof of Lemma 4.5 is provided in Appendix B.4. The next lemma quantifies the accuracy of the initialization \(\bar{U}^{(0)}\).
Lemma 4.6. Suppose that \(\delta_{2k}\) satisfies
\[
\delta_{2k} \le \frac{(1 - \delta_{2k})^2\, \sigma_k^4}{192\, \xi^2 k (1 + \delta_{2k})^2\, \sigma_1^4}. \tag{4.3}
\]
Then there exists a factorization of \(M^* = \bar{U}^{*(0)} V^{*(0)\top}\) such that \(\bar{U}^{*(0)} \in \mathbb{R}^{m \times k}\) is an orthonormal matrix, and it satisfies \(\|\bar{U}^{(0)} - \bar{U}^{*(0)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi (1 + \delta_{2k})\, \sigma_1}\).
The proof of Lemma 4.6 is provided in Appendix B.5. Lemma 4.6 implies that the initial solution \(\bar{U}^{(0)}\) attains a sufficiently small estimation error.
Combining the above lemmas, we obtain the next corollary for a complete iteration of updating V.
Corollary 4.7. Suppose that \(\delta_{2k}\) and \(\bar{U}^{(t)}\) satisfy
\[
\delta_{2k} \le \frac{(1 - \delta_{2k})^2\, \sigma_k^4}{192\, \xi^2 k (1 + \delta_{2k})^2\, \sigma_1^4}
\quad \text{and} \quad
\|\bar{U}^{(t)} - \bar{U}^{*(t)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi (1 + \delta_{2k})\, \sigma_1}. \tag{4.4}
\]
We then have
\[
\|\bar{V}^{(t+1)} - \bar{V}^{*(t+1)}\|_F \le \frac{1}{\xi}\, \|\bar{U}^{(t)} - \bar{U}^{*(t)}\|_F
\quad \text{and} \quad
\|\bar{V}^{(t+1)} - \bar{V}^{*(t+1)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi (1 + \delta_{2k})\, \sigma_1}.
\]
Moreover, we also have \(\|V^{(t+0.5)} - V^{*(t)}\|_F \le \frac{\sigma_k}{2\xi}\, \|\bar{U}^{(t)} - \bar{U}^{*(t)}\|_F\).
The proof of Corollary 4.7 is provided in Appendix B.6. Since the alternating exact minimization algorithm updates U and V in a symmetric manner, we can establish similar results for a complete iteration of updating U in the next corollary.
Corollary 4.8. Suppose that \(\delta_{2k}\) and \(\bar{V}^{(t+1)}\) satisfy
\[
\delta_{2k} \le \frac{(1 - \delta_{2k})^2\, \sigma_k^4}{192\, \xi^2 k (1 + \delta_{2k})^2\, \sigma_1^4}
\quad \text{and} \quad
\|\bar{V}^{(t+1)} - \bar{V}^{*(t+1)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi (1 + \delta_{2k})\, \sigma_1}. \tag{4.5}
\]
Then there exists a factorization of \(M^* = \bar{U}^{*(t+1)} V^{*(t+1)\top}\) such that \(\bar{U}^{*(t+1)}\) is an orthonormal matrix, and it satisfies \(\|\bar{U}^{(t+1)} - \bar{U}^{*(t+1)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi (1 + \delta_{2k})\, \sigma_1}\). Moreover, we also have \(\|\bar{U}^{(t+1)} - \bar{U}^{*(t+1)}\|_F \le \frac{1}{\xi}\, \|\bar{V}^{(t+1)} - \bar{V}^{*(t+1)}\|_F\) and \(\|U^{(t+0.5)} - U^{*(t+1)}\|_F \le \frac{\sigma_k}{2\xi}\, \|\bar{V}^{(t+1)} - \bar{V}^{*(t+1)}\|_F\).
The proof of Corollary 4.8 directly follows Appendix B.6, and is therefore omitted.
We then proceed with the proof of Theorem 3.4 for alternating exact minimization. Lemma 4.6 ensures that (4.4) of Corollary 4.7 holds for \(\bar{U}^{(0)}\). Then Corollary 4.7 ensures that (4.5) of Corollary 4.8 holds for \(\bar{V}^{(1)}\). By induction, Corollaries 4.7 and 4.8 can be applied recursively for all T iterations. Thus we obtain
\[
\|\bar{V}^{(T)} - \bar{V}^{*(T)}\|_F
\le \frac{1}{\xi}\, \|\bar{U}^{(T-1)} - \bar{U}^{*(T-1)}\|_F
\le \frac{1}{\xi^2}\, \|\bar{V}^{(T-1)} - \bar{V}^{*(T-1)}\|_F
\le \cdots
\le \frac{1}{\xi^{2T-1}}\, \|\bar{U}^{(0)} - \bar{U}^{*(0)}\|_F
\le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi^{2T} (1 + \delta_{2k})\, \sigma_1}, \tag{4.6}
\]
where the last inequality comes from Lemma 4.6. Therefore, for a pre-specified accuracy ε, we need at most \(T = \bigl\lceil \frac{1}{2} \log\bigl( \frac{(1 - \delta_{2k})\,\sigma_k}{2 \epsilon (1 + \delta_{2k})\, \sigma_1} \bigr) \big/ \log \xi \bigr\rceil\) iterations such that
\[
\|\bar{V}^{(T)} - \bar{V}^{*(T)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k}{4 \xi^{2T} (1 + \delta_{2k})\, \sigma_1} \le \frac{\epsilon}{2 \sigma_1}. \tag{4.7}
\]
Moreover, Corollary 4.8 implies
\[
\|U^{(T-0.5)} - U^{*(T)}\|_F \le \frac{\sigma_k}{2\xi}\, \|\bar{V}^{(T)} - \bar{V}^{*(T)}\|_F
\le \frac{(1 - \delta_{2k})\,\sigma_k^2}{8 \xi^{2T+1} (1 + \delta_{2k})\, \sigma_1},
\]
where the last inequality comes from (4.6). Therefore, we need at most \(T = \bigl\lceil \frac{1}{2} \log\bigl( \frac{(1 - \delta_{2k})\,\sigma_k^2}{4 \xi \epsilon (1 + \delta_{2k})\, \sigma_1} \bigr) \big/ \log \xi \bigr\rceil\) iterations such that
\[
\|U^{(T-0.5)} - U^{*(T)}\|_F \le \frac{(1 - \delta_{2k})\,\sigma_k^2}{8 \xi^{2T+1} (1 + \delta_{2k})\, \sigma_1} \le \frac{\epsilon}{2}. \tag{4.8}
\]
Then combining (4.7) and (4.8), we obtain
\[
\|M^{(T)} - M^*\|_F
= \|U^{(T-0.5)} \bar{V}^{(T)\top} - U^{*(T)} \bar{V}^{*(T)\top}\|_F
\le \|\bar{V}^{(T)}\|_2\, \|U^{(T-0.5)} - U^{*(T)}\|_F + \|U^{*(T)}\|_2\, \|\bar{V}^{(T)} - \bar{V}^{*(T)}\|_F
\le \epsilon, \tag{4.9}
\]
where the last inequality follows from \(\|\bar{V}^{(T)}\|_2 = 1\) (since \(\bar{V}^{(T)}\) is orthonormal) and \(\|U^{*(T)}\|_2 = \|M^*\|_2 = \sigma_1\) (since \(U^{*(T)} \bar{V}^{*(T)\top} = M^*\) and \(\bar{V}^{*(T)}\) is orthonormal). Thus we complete the proof.
5 Extension to Matrix Completion
Under the same setting as matrix sensing, we observe a subset of the entries of \(M^*\), namely, \(\mathcal{W} \subseteq \{1, \ldots, m\} \times \{1, \ldots, n\}\). We assume that \(\mathcal{W}\) is drawn uniformly at random, i.e., \(M^*_{i,j}\) is observed independently with probability \(\rho^* \in (0, 1]\). To exactly recover \(M^*\), a common assumption is the incoherence of \(M^*\), which will be specified later. A popular approach for recovering \(M^*\) is to solve the following convex optimization problem
\[
\min_{M \in \mathbb{R}^{m \times n}} \|M\|_* \quad \text{subject to } \mathcal{P}_{\mathcal{W}}(M^*) = \mathcal{P}_{\mathcal{W}}(M), \tag{5.1}
\]
where \(\mathcal{P}_{\mathcal{W}}(M) : \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}\) is an operator defined as \([\mathcal{P}_{\mathcal{W}}(M)]_{ij} = M_{ij}\) if \((i, j) \in \mathcal{W}\), and 0 otherwise. Similar to matrix sensing, existing algorithms for solving (5.1) are computationally
inefficient. Hence, in practice we usually consider the following nonconvex optimization problem
\[
\min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} \mathcal{F}_{\mathcal{W}}(U, V),
\quad \text{where } \mathcal{F}_{\mathcal{W}}(U, V) = \frac{1}{2} \|\mathcal{P}_{\mathcal{W}}(M^*) - \mathcal{P}_{\mathcal{W}}(U V^{\top})\|_F^2. \tag{5.2}
\]
Similar to matrix sensing, (5.2) can also be efficiently solved by gradient-based algorithms. Due to space limitation, we present these matrix completion algorithms in Algorithm 2 of Appendix D. For the convenience of later convergence analysis, we partition the observation set \(\mathcal{W}\) into 2T + 1 subsets \(\mathcal{W}_0, \ldots, \mathcal{W}_{2T}\) using Algorithm 4 in Appendix D. However, in practice we do not need the partition scheme, i.e., we simply set \(\mathcal{W}_0 = \cdots = \mathcal{W}_{2T} = \mathcal{W}\).
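A minimal sketch of the completion objective (5.2) and its partial gradients follows (our illustration; the observation set is encoded as a boolean mask, an assumed representation).

```python
import numpy as np

def completion_objective(U, V, M_obs, mask):
    """F_W(U, V) = 0.5 * || P_W(M*) - P_W(U V^T) ||_F^2."""
    R = mask * (U @ V.T - M_obs)      # residual on observed entries only
    return 0.5 * np.sum(R ** 2)

def completion_gradients(U, V, M_obs, mask):
    R = mask * (U @ V.T - M_obs)
    return R @ V, R.T @ U             # grad_U, grad_V
```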
Before we present the main results, we introduce an assumption known as the incoherence property.
Assumption 5.1. The target rank k matrix \(M^*\) is incoherent with parameter μ, i.e., given the rank k singular value decomposition \(M^* = \bar{U}^* \Sigma^* \bar{V}^{*\top}\), we have
\[
\max_i \|\bar{U}^*_{i*}\|_2 \le \mu \sqrt{k/m}
\quad \text{and} \quad
\max_j \|\bar{V}^*_{j*}\|_2 \le \mu \sqrt{k/n}.
\]
The incoherence assumption guarantees that \(M^*\) is far from a sparse matrix, which makes it feasible to complete \(M^*\) when its entries are missing uniformly at random. The following theorem establishes the iteration complexity and the estimation error under the Frobenius norm.
Theorem 5.2. Suppose that there exists a universal constant \(C_4\) such that \(\rho^*\) satisfies
\[
\rho^* \ge C_4\, \mu^2 k^3 \log n \log(1/\epsilon) / m, \tag{5.3}
\]
where ε is the pre-specified precision. Then there exist an η and universal constants \(C_5\) and \(C_6\) such that for any \(T \ge C_5 \log(C_6/\epsilon)\), we have \(\|M^{(T)} - M^*\|_F \le \epsilon\) with high probability.
Due to space limitations, we defer the proof of Theorem 5.2 to the longer version of this paper. Theorem 5.2 implies that all three nonconvex optimization algorithms converge to the global optimum at a geometric rate. Furthermore, our results indicate that the completion of the true low rank matrix \(M^*\) up to ε-accuracy requires the entry observation probability \(\rho^*\) to satisfy
\[
\rho^* = \Omega(\mu^2 k^3 \log n \log(1/\epsilon) / m). \tag{5.4}
\]
This result matches the result established by [8], which is the state-of-the-art result for alternating minimization. Moreover, our analysis covers three nonconvex optimization algorithms.
6 Experiments
We present numerical experiments for matrix sensing to support our theoretical analysis. We choose m = 30, n = 40, and k = 5, and vary d from 300 to 900. Each entry of the \(A_i\)'s is independently sampled from N(0, 1). We then generate \(M = \widetilde{U} \widetilde{V}^{\top}\), where \(\widetilde{U} \in \mathbb{R}^{m \times k}\) and \(\widetilde{V} \in \mathbb{R}^{n \times k}\) are two matrices with all their entries independently sampled from N(0, 1/k). We then generate d measurements by \(b_i = \langle A_i, M \rangle\) for i = 1, ..., d. Figure 1 illustrates the empirical performance of the alternating exact minimization and alternating gradient descent algorithms for a single realization. The step size for the alternating gradient descent algorithm is determined by the backtracking line search procedure. We see that both algorithms attain a linear rate of convergence for d = 600 and d = 900. Both algorithms fail for d = 300, because d = 300 is below the minimum requirement of sample complexity for the exact matrix recovery.
[Figure 1 appears here: two semi-log panels plotting estimation error against the number of iterations (0 to 40), with curves for d = 300, 600, and 900.]
(a) Alternating Exact Minimization
(b) Alternating Gradient Descent
Figure 1: Two illustrative examples for matrix sensing. The vertical axis corresponds to the estimation error \(\|M^{(t)} - M\|_F\). The horizontal axis corresponds to the number of iterations. Both the alternating exact minimization and alternating gradient descent algorithms attain a linear rate of convergence for d = 600 and d = 900. But both algorithms fail for d = 300, because d = 300 is below the minimum requirement of sample complexity for the exact matrix recovery.
References
[1] Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient, and neural algorithms for sparse coding. arXiv preprint arXiv:1503.00778, 2015.
[2] Sivaraman Balakrishnan, Martin J Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[3] Emmanuel J Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[4] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[5] Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[6] Yudong Chen. Incoherence-optimal matrix completion. arXiv preprint arXiv:1310.0154, 2013.
[7] David Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[8] Moritz Hardt. Understanding alternating minimization for matrix completion. In Symposium on Foundations of Computer Science, pages 651–660, 2014.
[9] Moritz Hardt, Raghu Meka, Prasad Raghavendra, and Benjamin Weitz. Computational limits for matrix completion. arXiv preprint arXiv:1402.2331, 2014.
[10] Moritz Hardt and Mary Wootters. Fast matrix completion without the condition number. arXiv preprint arXiv:1407.4070, 2014.
[11] Trevor Hastie, Rahul Mazumder, Jason Lee, and Reza Zadeh. Matrix completion and low-rank SVD via fast alternating least squares. arXiv preprint arXiv:1410.2596, 2014.
[12] Prateek Jain, Raghu Meka, and Inderjit S Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, pages 937–945, 2010.
[13] Prateek Jain and Praneeth Netrapalli. Fast exact matrix completion with finite samples. arXiv preprint arXiv:1411.1087, 2014.
[14] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Symposium on Theory of Computing, pages 665–674, 2013.
[15] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.
[16] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010.
[17] Yehuda Koren. The BellKor solution to the Netflix grand prize. Netflix Prize Documentation, 81, 2009.
[18] Kiryung Lee and Yoram Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. IEEE Transactions on Information Theory, 56(9):4402–4416, 2010.
[19] Sahand Negahban and Martin J Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069–1097, 2011.
[20] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer, 2004.
[21] Arkadiusz Paterek. Improving regularized singular value decomposition for collaborative filtering. In Proceedings of KDD Cup and Workshop, volume 2007, pages 5–8, 2007.
[22] Benjamin Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413–3430, 2011.
[23] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[24] Benjamin Recht and Christopher Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. Mathematical Programming Computation, 5(2):201–226, 2013.
[25] Angelika Rohde and Alexandre B Tsybakov. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39(2):887–930, 2011.
[26] Gilbert W Stewart, Ji-guang Sun, and Harcourt B Jovanovich. Matrix Perturbation Theory, volume 175. Academic Press, New York, 1990.
[27] Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. arXiv preprint arXiv:1411.8003, 2014.
[28] Gábor Takács, István Pilászy, Bottyán Németh, and Domonkos Tikk. Major components of the gravity recommendation system. ACM SIGKDD Explorations Newsletter, 9(2):80–83, 2007.
5,229 | 5,734 | Individual Planning in Infinite-Horizon Multiagent
Settings: Inference, Structure and Scalability
Xia Qu
Epic Systems
Verona, WI 53593
quxiapisces@gmail.com
Prashant Doshi
THINC Lab, Dept. of Computer Science
University of Georgia, Athens, GA 30622
pdoshi@cs.uga.edu
Abstract
This paper provides the first formalization of self-interested planning in multiagent settings using expectation-maximization (EM). Our formalization in the context of infinite-horizon and finitely-nested interactive POMDPs (I-POMDP) is
distinct from EM formulations for POMDPs and cooperative multiagent planning
frameworks. We exploit the graphical model structure specific to I-POMDPs, and
present a new approach based on block-coordinate descent for further speed up.
Forward filtering-backward sampling, a combination of exact filtering with sampling, is explored to exploit problem structure.
1 Introduction
Generalization of bounded policy iteration (BPI) to finitely-nested interactive partially observable
Markov decision processes (I-POMDP) [1] is currently the leading method for infinite-horizon self-interested multiagent planning and obtaining finite-state controllers as solutions. However, interactive BPI is acutely prone to converge to local optima, which severely limits the quality of its solutions
despite the limited ability to escape from these local optima.

Attias [2] posed planning using an MDP as a likelihood maximization problem where the "data" is
the initial state and the final goal state or the maximum total reward. Toussaint et al. [3] extended
this to infer finite-state automata for infinite-horizon POMDPs. Experiments reveal good quality
controllers of small sizes, although run time is a concern. Given BPI's limitations and the compelling
potential of this approach in bringing advances in inferencing to bear on planning, we generalize it
to infinite-horizon and finitely-nested I-POMDPs. Our generalization allows its use toward planning
for an individual agent in noncooperation, where we may not assume common knowledge of initial
beliefs or common rewards, due to which others' beliefs, capabilities and preferences are modeled.

Analogously to POMDPs, we formulate a mixture of finite-horizon DBNs. However, the DBNs
differ by including models of other agents in a special model node. Our approach, labeled as I-EM,
improves on the straightforward extension of Toussaint et al.'s EM to I-POMDPs by utilizing various
types of structure. Instead of ascribing as many level 0 finite-state controllers as candidate models
and improving each using its own EM, we use the underlying graphical structure of the model node
and its update to formulate a single EM that directly provides the marginal of others' actions across
all models. This rests on a new insight, which considerably simplifies and speeds EM at level 1.

We present a general approach based on block-coordinate descent [4, 5] for speeding up the non-asymptotic rate of convergence of the iterative EM. The problem is decomposed into optimization
subproblems in which the objective function is optimized with respect to a small subset (block) of
variables, while holding other variables fixed. We discuss the unique challenges and present the first
effective application of this iterative scheme to multiagent planning.

Finally, sampling offers a way to exploit the embedded problem structure, such as information in distributions. The exact forward-backward E-step is replaced with forward filtering-backward sampling
(FFBS), which generates trajectories weighted with rewards; these are used to update the parameters of
the controller. While sampling has been integrated in EM previously [6], FFBS specifically mitigates
error accumulation over long horizons due to the exact forward step.
2 Overview of Interactive POMDPs
A finitely-nested I-POMDP [7] for an agent i with strategy level, l, interacting with agent j is:

$$\text{I-POMDP}_{i,l} = \langle IS_{i,l}, A, T_i, \Omega_i, O_i, R_i, OC_i \rangle$$

- $IS_{i,l}$ denotes the set of interactive states, defined as $IS_{i,l} = S \times M_{j,l-1}$, where $M_{j,l-1} = \{\Theta_{j,l-1} \cup SM_j\}$ for $l \ge 1$, and $IS_{i,0} = S$, where S is the set of physical states. $\Theta_{j,l-1}$ is the set of computable, intentional models ascribed to agent j: $\theta_{j,l-1} = \langle b_{j,l-1}, \hat{\theta}_j \rangle$. Here $b_{j,l-1}$ is agent j's level l-1 belief, $b_{j,l-1} \in \Delta(IS_{j,l-1})$, where $\Delta(\cdot)$ is the space of distributions, and $\hat{\theta}_j = \langle A, T_j, \Omega_j, O_j, R_j, OC_j \rangle$ is j's frame. At level l = 0, $b_{j,0} \in \Delta(S)$ and an intentional model reduces to a POMDP. $SM_j$ is the set of subintentional models of j; an example is a finite state automaton.
- $A = A_i \times A_j$ is the set of joint actions of all agents.
- Other parameters (transition function, $T_i$, observations, $\Omega_i$, observation function, $O_i$, and preference function, $R_i$) have their usual semantics analogously to POMDPs but involve joint actions.
- Optimality criterion, $OC_i$, here is the discounted infinite horizon sum.

An agent's belief over its interactive states is a sufficient statistic fully summarizing the agent's
observation history. Given the associated belief update, the solution to an I-POMDP is a policy. Using
the Bellman equation, each belief state in an I-POMDP has a value which is the maximum payoff
the agent can expect starting from that belief and over the future.
3 Planning in I-POMDP as Inference

We may represent the policy of agent i for the infinite horizon case as a stochastic finite state
controller (FSC), defined as $\pi_i = \langle N_i, T_i, L_i, V_i \rangle$, where $N_i$ is the set of nodes in the controller;
$T_i : N_i \times A_i \times \Omega_i \times N_i \to [0,1]$ represents the node transition function; $L_i : N_i \times A_i \to [0,1]$ denotes agent i's action distribution at each node; and an initial distribution over the nodes is denoted
by $V_i : N_i \to [0,1]$. For convenience, we group $V_i$, $T_i$ and $L_i$ in $\hat{f}_i$. Define a controller at level l for
agent i as $\pi_{i,l} = \langle N_{i,l}, \hat{f}_{i,l} \rangle$, where $N_{i,l}$ is the set of nodes in the controller and $\hat{f}_{i,l}$ groups the remaining parameters of the controller as mentioned before. Analogously to POMDPs [3], we formulate
planning in multiagent settings formalized by I-POMDPs as a likelihood maximization problem:

$$\pi^*_{i,l} = \arg\max_{\pi_{i,l}} \; (1-\gamma) \sum_{T=0}^{\infty} \gamma^T \Pr(r_i^T = 1 \mid T; \pi_{i,l}) \quad (1)$$

where $\pi_{i,l}$ ranges over all level-l FSCs of agent i, and $r_i^T$ is a binary random variable whose value is 0 or 1,
emitted after T time steps with probability proportional to the reward $R_i(s, a_i, a_j)$.
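Before formulating the mixture, it helps to fix a concrete encoding of a stochastic FSC. The following minimal Python sketch uses array shapes and method names that are our illustrative assumptions, not notation from the paper.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FSC:
    """Stochastic finite state controller pi_i = <N_i, T_i, L_i, V_i>.

    V : (N,)         initial distribution over nodes
    L : (N, A)       action distribution at each node
    T : (N, A, O, N) node transition distribution
    """
    V: np.ndarray
    L: np.ndarray
    T: np.ndarray

    def sample_action(self, rng, node):
        # Draw a_i ~ L_i(node, .)
        return rng.choice(self.L.shape[1], p=self.L[node])

    def sample_next_node(self, rng, node, a, o):
        # After observing o, draw the next node ~ T_i(node, a, o, .)
        return rng.choice(self.T.shape[3], p=self.T[node, a, o])
```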
[Figure 1: the two graphical models (a) and (b) described in the caption below; the node-level drawing is omitted.]
Figure 1: (a) Mixture of DBNs with 1 to T time slices for I-POMDP$_{i,1}$ with i's level-1 policy represented as
a standard FSC whose "node state" is denoted by $n_{i,l}$. The DBNs differ from those for POMDPs by containing
special model nodes (hexagons) whose values are candidate models of other agents. (b) Hexagonal model nodes
and edges in bold for one other agent j in (a) decompose into this level-0 DBN. Values of the node $m^t_{j,0}$ are the
candidate models. The CPT of chance node $a^t_j$, denoted by $\phi_{j,0}(m^t_{j,0}, a^t_j)$, is inferred using likelihood maximization.
The planning problem is modeled as a mixture of DBNs of increasing time from T = 0 onwards
(Fig. 1). The transition and observation functions of I-POMDP$_{i,l}$ parameterize the chance nodes s
and $o_i$, respectively, along with

$$\Pr(r_i^T \mid a_i^T, a_j^T, s^T) \;\propto\; \frac{R_i(s^T, a_i^T, a_j^T) - R_{\min}}{R_{\max} - R_{\min}}.$$

Here, $R_{\max}$ and $R_{\min}$ are the maximum and minimum reward values in $R_i$.
The networks include nodes, $n_{i,l}$, of agent i's level-l FSC. Therefore, functions in $\hat{f}_{i,l}$ parameterize
the network as well, and these are to be inferred. Additionally, the network includes the hexagonal
model nodes, one for each other agent, that contain the candidate level 0 models of the agent.
Each model node provides the expected distribution over another agent's actions. Without loss of
generality, no edges exist between model nodes in the same time step. Correlations between agents
could be included as state variables in the models.

Agent j's model nodes and the edges (in bold) between them, and between the model and chance
action nodes, represent a DBN of length T as shown in Fig. 1(b). Values of the chance node, $m^0_{j,0}$, are
the candidate models of agent j. Agent i's initial belief over the state and models of j becomes the
parameters of $s^0$ and $m^0_{j,0}$. The likelihood maximization at level 0 seeks to obtain the distribution,
$\Pr(a_j \mid m^0_{j,0})$, for each candidate model in node $m^0_{j,0}$, using EM on the DBN.

Proposition 1 (Correctness). The likelihood maximization problem as defined in Eq. 1 with the
mixture models as given in Fig. 1 is equivalent to the problem of solving the original I-POMDP$_{i,l}$
with discounted infinite horizon whose solution assumes the form of a finite state controller.

All proofs are given in the supplement. Given the unique mixture models above, the challenge is to
generalize the EM-based iterative maximization for POMDPs to the framework of I-POMDPs.
3.1 Single EM for Level 0 Models

The straightforward approach is to infer a likely FSC for each level 0 model. However, this approach
does not scale to many models. Proposition 2 below shows that the dynamic $\Pr(a^t_j \mid s^t)$ is sufficient
predictive information about the other agent from its candidate models at time t, to obtain the most
likely policy of agent i. This is markedly different from using behavioral equivalence [8], which clusters
models with identical solutions: the latter continues to require the full solution of each model.

Proposition 2 (Sufficiency). Distributions $\Pr(a^t_j \mid s^t)$ across actions $a^t_j \in A_j$ for each state $s^t$ are
sufficient predictive information about other agent j to obtain the most likely policy of i.

In the context of Proposition 2, we seek to infer $\Pr(a^t_j \mid m^t_{j,0})$ for each (updated) model of j at
all time steps, which is denoted as $\phi_{j,0}$. Other terms in the computation of $\Pr(a^t_j \mid s^t)$ are known
parameters of the level 0 DBN. The likelihood maximization for the level 0 DBN is:

$$\hat\phi_{j,0} = \arg\max_{\phi_{j,0}} \; (1-\gamma) \sum_{T=0}^{\infty} \sum_{m_{j,0} \in M_{j,0}} \gamma^T \Pr(r_j^T = 1 \mid T, m_{j,0}; \phi_{j,0})$$

As the trajectory consisting of states, models, actions and observations of the other agent is hidden
at planning time, we may solve the above likelihood maximization using EM.
E-step. Let $z_j^{0:T} = \{s^t, m^t_{j,0}, a^t_j, o^t_j\}_0^T$, where the observation at t = 0 is null, be the hidden trajectory.
The log likelihood is obtained as an expectation over these hidden trajectories:

$$Q(\hat\phi_{j,0} \mid \phi_{j,0}) = \sum_{T=0}^{\infty} \sum_{z_j^{0:T}} \Pr(r_j^T = 1, z_j^{0:T}, T; \phi_{j,0}) \, \log \Pr(r_j^T = 1, z_j^{0:T}, T; \hat\phi_{j,0}) \quad (2)$$

The "data" in the level 0 DBN consists of the initial belief over the state and models, $b^0_{i,1}$, and the
observed reward at T. Analogously to EM for POMDPs, this motivates forward filtering-backward
smoothing on a network with joint state $(s^t, m^t_{j,0})$ for computing the log likelihood. The transition
function for the forward and backward steps is:

$$\Pr(s^t, m^t_{j,0} \mid s^{t-1}, m^{t-1}_{j,0}) = \sum_{a^{t-1}_j, o^t_j} \phi_{j,0}(m^{t-1}_{j,0}, a^{t-1}_j)\; T_{m_j}(s^{t-1}, a^{t-1}_j, s^t)\; \Pr(m^t_{j,0} \mid m^{t-1}_{j,0}, a^{t-1}_j, o^t_j)\; O_{m_j}(s^t, a^{t-1}_j, o^t_j) \quad (3)$$

where $m_j$ in the subscripts is j's model at t-1. Here, $\Pr(m^t_{j,0} \mid m^{t-1}_{j,0}, a^{t-1}_j, o^t_j)$ is the Kronecker-delta function that is 1 when j's belief in $m^{t-1}_{j,0}$ updated using $a^{t-1}_j$ and $o^t_j$ equals the belief in $m^t_{j,0}$,
and 0 otherwise.
Forward filtering gives the probability of the next state as follows:

$$\alpha^t(s^t, m^t_{j,0}) = \sum_{s^{t-1}, m^{t-1}_{j,0}} \Pr(s^t, m^t_{j,0} \mid s^{t-1}, m^{t-1}_{j,0})\; \alpha^{t-1}(s^{t-1}, m^{t-1}_{j,0})$$

where $\alpha^0(s^0, m^0_{j,0})$ is the initial belief of agent i. The smoothing by which we obtain the joint
probability of the state and model at t-1 from the distribution at t is:

$$\beta^h(s^{t-1}, m^{t-1}_{j,0}) = \sum_{s^t, m^t_{j,0}} \Pr(s^t, m^t_{j,0} \mid s^{t-1}, m^{t-1}_{j,0})\; \beta^{h-1}(s^t, m^t_{j,0})$$

where h denotes the horizon to T and $\beta^0(s^T, m^T_{j,0}) = E_{a^T_j \mid m^T_{j,0}}\big[\Pr(r_j^T = 1 \mid s^T, a^T_j)\big]$. Messages
$\alpha^t$ and $\beta^h$ give the probability of a state at some time slice in the DBN. As we consider a mixture of
BNs, we seek probabilities for all states in the mixture model. Subsequently, we may compute the
forward and backward messages at all states for the entire mixture model in one sweep:

$$\hat\alpha(s, m_{j,0}) = \sum_{t=0}^{\infty} \Pr(T = t)\, \alpha^t(s, m_{j,0}), \qquad \hat\beta(s, m_{j,0}) = \sum_{h=0}^{\infty} \Pr(T = h)\, \beta^h(s, m_{j,0}) \quad (4)$$
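Once the joint state $(s, m_{j,0})$ is flattened, the recursions above and the mixture-weighted messages of Eq. 4 reduce to matrix-vector products. A minimal NumPy sketch, with the flattening and the geometric prior over T as stated assumptions:

```python
import numpy as np

def level0_messages(P, alpha0, beta0, gamma, T_max):
    """Forward/backward messages for the level-0 DBN (sketch of Eqs. 3-4).

    P      : (X, X) transition matrix over flattened joint states x = (s, m_j0),
             with P[x, x'] = Pr(x' | x) from Eq. 3
    alpha0 : (X,) initial belief; beta0 : (X,) terminal reward message
    Assumes geometric mixture weights Pr(T = t) = (1 - gamma) * gamma**t.
    """
    alphas, betas = [alpha0], [beta0]
    for _ in range(T_max):
        alphas.append(alphas[-1] @ P)   # alpha^t(x') = sum_x alpha^{t-1}(x) P[x, x']
        betas.append(P @ betas[-1])     # beta^h(x)  = sum_x' P[x, x'] beta^{h-1}(x')
    w = (1 - gamma) * gamma ** np.arange(T_max + 1)
    alpha_hat = np.tensordot(w, np.stack(alphas), axes=1)  # Eq. 4, truncated at T_max
    beta_hat = np.tensordot(w, np.stack(betas), axes=1)
    return alpha_hat, beta_hat
```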
Model growth. As the other agent performs its actions and makes observations, the space
of j's models grows exponentially: starting from a finite set of $|M^0_{j,0}|$ models, we obtain
$O(|M^0_{j,0}|\,(|A_j||\Omega_j|)^t)$ models at time t. This greatly increases the number of trajectories in $Z_j^{0:T}$.
We limit the growth in the model space by sampling models at the next time step from the distribution $\alpha^t(s^t, m^t_{j,0})$ as we perform each step of forward filtering. This limits the growth by exploiting
the structure present in $\phi_{j,0}$ and $O_j$, which guide how the models grow.
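A sketch of this sampling step; the dictionary encoding of $\alpha^t$ is an assumption for illustration only:

```python
import numpy as np

def sample_models(alpha_t, n_samples, rng):
    """Limit model-space growth by sampling from alpha^t (Section 3.1 sketch).

    alpha_t : dict mapping (state, model) pairs to forward probabilities
    Returns a multiset of (state, model) pairs to expand at time t+1.
    """
    keys = list(alpha_t)
    p = np.array([alpha_t[k] for k in keys], dtype=float)
    p /= p.sum()                                   # renormalize the forward message
    idx = rng.choice(len(keys), size=n_samples, p=p)
    return [keys[i] for i in idx]
```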
M-step. We obtain the updated $\hat\phi_{j,0}$ from the full log likelihood in Eq. 2 by separating the terms:

$$Q(\hat\phi_{j,0} \mid \phi_{j,0}) = \langle \text{terms independent of } \hat\phi_{j,0} \rangle + \sum_{T=0}^{\infty} \sum_{z_j^{0:T}} \Pr(r_j^T = 1, z_j^{0:T}, T; \phi_{j,0}) \sum_{t=0}^{T} \log \hat\phi_{j,0}(a^t_j \mid m^t_{j,0})$$

and maximizing it w.r.t. $\hat\phi_{j,0}$:

$$\hat\phi_{j,0}(a^t_j, m^t_{j,0}) \propto \phi_{j,0}(a^t_j, m^t_j) \Bigg[ \sum_{s^t} R_{m_j}(s^t, a^t_j)\, \hat\alpha(s^t, m^t_{j,0}) + \frac{\gamma}{1-\gamma} \sum_{s^t, s^{t+1}, m^{t+1}_{j,0}, o^{t+1}_j} \hat\beta(s^{t+1}, m^{t+1}_{j,0})\, \hat\alpha(s^t, m^t_{j,0})\, T_{m_j}(s^t, a^t_j, s^{t+1})\, \Pr(m^{t+1}_{j,0} \mid m^t_{j,0}, a^t_j, o^{t+1}_j)\, O_{m_j}(s^{t+1}, a^t_j, o^{t+1}_j) \Bigg]$$
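The update above is a reweighting of the current $\phi$ followed by normalization over actions. A minimal sketch, assuming the two bracketed terms have already been accumulated from the $\hat\alpha$ and $\hat\beta$ messages:

```python
import numpy as np

def m_step_level0(phi, immediate, future):
    """Level-0 M-step sketch: reweight and renormalize phi_{j,0}.

    phi       : (M, A) current Pr(a_j | m_j0)
    immediate : (M, A) accumulated reward term sum_s R_mj(s, a) * alpha_hat(s, m)
    future    : (M, A) accumulated discounted second term of the update
    Both inputs are assumed precomputed from the messages of Eq. 4.
    """
    new_phi = phi * (immediate + future)
    new_phi /= new_phi.sum(axis=1, keepdims=True)  # normalize over actions
    return new_phi
```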
3.2 Improved EM for Level l I-POMDP

At strategy levels $l \ge 1$, Eq. 1 defines the likelihood maximization problem, which is iteratively
solved using EM. We show the E- and M-steps next, beginning with l = 1.
E-step. In a multiagent setting, the hidden variables additionally include what the other agent
may observe and how it acts over time. However, a key insight is that Prop. 2 allows us to limit
attention to the marginal distribution over other agents' actions given the state. Thus, let $z_i^{0:T} =
\{s^t, o^t_i, n^t_{i,l}, a^t_i, a^t_j, \ldots, a^t_k\}_0^T$, where the observation at t = 0 is null, and other agents are labeled j
to k; this group is denoted -i. The full log likelihood involves an expectation over hidden variables:

$$Q(\hat\pi_{i,l} \mid \pi_{i,l}) = \sum_{T=0}^{\infty} \sum_{z_i^{0:T}} \Pr(r_i^T = 1, z_i^{0:T}, T; \pi_{i,l}) \, \log \Pr(r_i^T = 1, z_i^{0:T}, T; \hat\pi_{i,l}) \quad (5)$$

Due to the subjective perspective in I-POMDPs, Q computes the likelihood of agent i's FSC only
(and not of joint FSCs as in team planning [9]).
In the T-step DBN of Fig. 1, observed evidence includes the reward, $r_i^T$, at the end and the initial
belief. We seek the likely distributions, $V_i$, $T_i$, and $L_i$, across time slices. We may again realize the
full joint in the expectation using a forward-backward algorithm on a hidden Markov model whose
state is $(s^t, n^t_{i,l})$. The transition function of this model is,

$$\Pr(s^t, n^t_{i,l} \mid s^{t-1}, n^{t-1}_{i,l}) = \sum_{a^{t-1}_i, a^{t-1}_{-i}, o^t_i} L_i(n^{t-1}_{i,l}, a^{t-1}_i) \prod_{-i} \Pr(a^{t-1}_{-i} \mid s^{t-1})\; T_i(n^{t-1}_{i,l}, a^{t-1}_i, o^t_i, n^t_{i,l})\; T_i(s^{t-1}, a^{t-1}_i, a^{t-1}_{-i}, s^t)\; O_i(s^t, a^{t-1}_i, a^{t-1}_{-i}, o^t_i) \quad (6)$$

In addition to parameters of I-POMDP$_{i,l}$, which are given, parameters of agent i's controller and
those relating to other agents' predicted actions, $\phi_{-i,0}$, are present in Eq. 6. Notice that in consequence of Proposition 2, Eq. 6 precludes j's observation and node transition functions.
The forward message, $\alpha^t = \Pr(s^t, n^t_{i,l})$, represents the probability of being at some state of the
DBN at time t:

$$\alpha^t(s^t, n^t_{i,l}) = \sum_{s^{t-1}, n^{t-1}_{i,l}} \Pr(s^t, n^t_{i,l} \mid s^{t-1}, n^{t-1}_{i,l})\; \alpha^{t-1}(s^{t-1}, n^{t-1}_{i,l}) \quad (7)$$

where $\alpha^0(s^0, n^0_{i,l}) = V_i(n^0_{i,l})\, b^0_{i,l}(s^0)$. The backward message gives the probability of observing the
reward in the final T-th time step given a state of the Markov model, $\beta^t(s^t, n^t_{i,l}) = \Pr(r_i^T = 1 \mid s^t, n^t_{i,l})$:

$$\beta^h(s^t, n^t_{i,l}) = \sum_{s^{t+1}, n^{t+1}_{i,l}} \Pr(s^{t+1}, n^{t+1}_{i,l} \mid s^t, n^t_{i,l})\; \beta^{h-1}(s^{t+1}, n^{t+1}_{i,l}) \quad (8)$$

where $\beta^0(s^T, n^T_{i,l}) = \sum_{a^T_i, a^T_{-i}} \Pr(r_i^T = 1 \mid s^T, a^T_i, a^T_{-i})\; L_i(n^T_{i,l}, a^T_i) \prod_{-i} \Pr(a^T_{-i} \mid s^T)$, and
$1 \le h \le T$ is the horizon. Here, $\Pr(r_i^T = 1 \mid s^T, a^T_i, a^T_{-i}) \propto R_i(s^T, a^T_i, a^T_{-i})$.

A side effect of $\Pr(a^t_{-i} \mid s^t)$ being dependent on t is that we can no longer conveniently define $\hat\alpha$ and
$\hat\beta$ for use in the M-step at level 1. Instead, the computations are folded into the M-step.
M-step We update the parameters, Li , Ti and Vi , of ?i,l to obtain ?i,l
based on the expectation
T
0:T
in the E-step. Speci?cally, take log of the likelihood P r(r = 1, zi , T ; ?i,l ) with ?i,l substituted
?
?
and focus on terms involving the parameters of ?i,l
:
with ?i,l
?T
?
?
log P r(rT = 1, zi0:T , T ; ?i,l
) =?terms independent of ?i,l
?+
log L?i (nti,l , ati )+
t=0
?T ?1
?
log Ti? (nti,l , ati , ot+1
, nt+1
i
i,l ) + log Vi (ni,l )
t=0
In order to update, Li , we partially differentiate the Q-function of Eq. 5 with respect to L?i . To
facilitate differentiation, we focus on the terms involving Li , as shown below.
?
Q(?i,l
|?i,l ) = ?terms indep. of L?i ? +
??
T =0
Pr(T )
L?i on maximizing the above equation is:
L?i (nti,l , ati ) ? Li (nti,l , ati )
??
T =0
?
?i
?
sT ,aT
?i
?T
t=0
?
zi0:T
Pr(riT = 1, zi0:t |T ; ?i,l ) log L?i (nti,l , ati )
?T
P r(riT = 1|sT , aTi , aT?i ) P r(aT?i |sT ) ?T (sT , nTi,l )
1??
?
Node transition probabilities Ti and node distribution Vi for ?i,l
, is updated analogously to Li .
Because a FSC is inferred at level 1, at strategy levels l = 2 and greater, lower-level candidate
models are FSCs. EM at these higher levels proceeds by replacing the state of the DBN, (st , nti,l )
with (st , nti,l , ntj,l?1 , . . . , ntk,l?1 ).
3.3 Block-Coordinate Descent for Non-Asymptotic Speed Up
Block-coordinate descent (BCD) [4, 5, 10] is an iterative scheme to gain a faster non-asymptotic rate
of convergence in the context of large-scale N-dimensional optimization problems. In this scheme,
within each iteration, a set of variables referred to as coordinates is chosen and the objective function, Q, is optimized with respect to one of the coordinate blocks while the other coordinates are
held fixed. BCD may speed up the non-asymptotic rate of convergence of EM for both I-POMDPs
and POMDPs. The specific challenge here is to determine which of the many variables should be
grouped into blocks and how.

We empirically show in Section 5 that grouping the number of time slices, t, and horizon, h, in
Eqs. 7 and 8, respectively, at each level, into coordinate blocks of equal size is beneficial. In other
words, we decompose the mixture model into blocks containing equal numbers of BNs. Alternately,
grouping controller nodes is ineffective because the distribution $V_i$ cannot be optimized for subsets of
nodes. Formally, let $\tau_{t_1}$ be a subset of $\{T = 1, T = 2, \ldots, T = T_{\max}\}$. Then, the set of blocks is
$B_t = \{\tau_{t_1}, \tau_{t_2}, \tau_{t_3}, \ldots\}$. In practice, because both t and h are finite (say, $T_{\max}$), the cardinality of
$B_t$ is bounded by some $C \ge 1$. Analogously, we define the set of blocks of h, denoted by $B_h$.

In the M-step now, we compute $\alpha^t$ for the time steps in a single coordinate block $\tau_{t_c}$ only, while
using the values of $\alpha^t$ from the previous iteration for the complementary coordinate blocks, $\bar\tau_{t_c}$.
Analogously, we compute $\beta^h$ for the horizons in $\tau_{h_c}$ only, while using $\beta$ values from the previous
iteration for the remaining horizons. We cyclically choose a block, $\tau_{t_c}$, at iterations $c + qC$ where
$q \in \{0, 1, 2, \ldots\}$.
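The cyclic schedule described here can be sketched as follows; the callback interface is our assumption, with the actual message updates and the M-step left abstract:

```python
def bcd_em(update_alpha, update_beta, m_step, T_max, C, iters):
    """Cyclic block-coordinate EM sketch (Section 3.3).

    update_alpha(t) and update_beta(h) recompute single messages in place;
    m_step() reuses stale messages outside the active block.
    Assumes C divides T_max + 1 so the blocks have equal size.
    """
    size = (T_max + 1) // C
    blocks = [range(c * size, (c + 1) * size) for c in range(C)]
    for it in range(iters):
        active = blocks[it % C]          # block chosen at iterations c + qC
        for t in active:
            update_alpha(t)
        for h in active:
            update_beta(h)
        m_step()
```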
3.4 Forward Filtering - Backward Sampling

An approach for exploiting embedded structure in the transition and observation functions is to
replace the exact forward-backward message computations with exact forward filtering and backward sampling (FFBS) [11] to obtain a sampled reverse trajectory consisting of $\langle s^T, n^T_{i,l}, a^T_i \rangle$,
$\langle n^{T-1}_{i,l}, a^{T-1}_i, o^T_i, n^T_{i,l} \rangle$, and so on from T to 0. Here, $\Pr(r_i^T = 1 \mid s^T, a^T_i, a^T_{-i})$ is the likelihood
weight of this trajectory sample. Parameters of the updated FSC, $\hat\pi_{i,l}$, are obtained by summing and
normalizing the weights.

Each trajectory is obtained by first sampling $\hat T \sim \Pr(T)$, which becomes the length of i's DBN for
this sample. The forward message, $\alpha^t(s^t, n^t_{i,l})$, $t = 0 \ldots \hat T$, is computed exactly (Eq. 7), followed by the
backward message, $\beta^h(s^t, n^t_{i,l})$, $h = 0 \ldots \hat T$ and $t = \hat T - h$. Computing $\beta^h$ differs from Eq. 8 by
utilizing the forward message:

$$\beta^h(s^t, n^t_{i,l} \mid s^{t+1}, n^{t+1}_{i,l}) = \sum_{a^t_i, a^t_{-i}, o^{t+1}_i} \alpha^t(s^t, n^t_{i,l})\, L_i(n^t_{i,l}, a^t_i) \prod_{-i} \Pr(a^t_{-i} \mid s^t)\, T_i(s^t, a^t_i, a^t_{-i}, s^{t+1})\, T_i(n^t_{i,l}, a^t_i, o^{t+1}_i, n^{t+1}_{i,l})\, O_i(s^{t+1}, a^t_i, a^t_{-i}, o^{t+1}_i) \quad (9)$$

where $\beta^0(s^T, n^T_{i,l}, r_i^T) = \sum_{a^T_i, a^T_{-i}} \alpha^T(s^T, n^T_{i,l}) \prod_{-i} \Pr(a^T_{-i} \mid s^T)\, L(n^T_{i,l}, a^T_i)\, \Pr(r_i^T \mid s^T, a^T_i, a^T_{-i})$.

Subsequently, we may easily sample $\langle s^T, n^T_{i,l}, r_i^T \rangle$, followed by sampling $s^{T-1}, n^{T-1}_{i,l}$ from Eq. 9.
We sample $a^{T-1}_i, o^T_i \sim \Pr(a^t_i, o^{t+1}_i \mid s^t, n^t_{i,l}, s^{t+1}, n^{t+1}_{i,l})$, where:

$$\Pr(a^t_i, o^{t+1}_i \mid s^t, n^t_{i,l}, s^{t+1}, n^{t+1}_{i,l}) \propto \sum_{a^t_{-i}} \prod_{-i} \Pr(a^t_{-i} \mid s^t)\, L_i(n^t_{i,l}, a^t_i)\, T_i(n^t_{i,l}, a^t_i, o^{t+1}_i, n^{t+1}_{i,l})\, T_i(s^t, a^t_i, a^t_{-i}, s^{t+1})\, O_i(s^{t+1}, a^t_i, a^t_{-i}, o^{t+1}_i)$$
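A sketch of one backward-sampling pass over the flattened joint state x = (s, n), assuming the forward messages have been computed exactly; sampling a predecessor $x_t$ given $x_{t+1}$ with probability proportional to $\alpha^t(x_t)\,P(x_t, x_{t+1})$ is the alpha-weighted kernel of Eq. 9:

```python
import numpy as np

def ffbs_trajectory(alpha, P, reward_weight, rng):
    """One FFBS backward sample (Section 3.4 sketch).

    alpha         : list of (X,) exact forward messages, t = 0..T,
                    where T is assumed already drawn from Pr(T)
    P             : (X, X) one-step transition over joint states x = (s, n)
    reward_weight : (X,) terminal reward term from beta^0
    """
    T = len(alpha) - 1
    p = alpha[T] * reward_weight               # terminal draw <s^T, n^T, r^T>
    x = rng.choice(len(p), p=p / p.sum())
    traj = [x]
    for t in range(T - 1, -1, -1):             # sample predecessors backward
        p = alpha[t] * P[:, x]                 # alpha-weighted kernel
        x = rng.choice(len(p), p=p / p.sum())
        traj.append(x)
    return traj[::-1]                          # states from t = 0 to T
```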
4 Computational Complexity
Our EM at level 1 is significantly quicker compared to ascribing FSCs to other agents. In the latter,
nodes of others' controllers must be included alongside s and $n_{i,l}$.

Proposition 3 (E-step speed up). Each E-step at level 1 using the forward-backward pass as shown
previously results in a net speed up of $O((|M|\,|N_{-i,0}|)^{2K}\,|\Omega_{-i}|^K)$ over the formulation that ascribes
$|M|$ FSCs each to K other agents with each having $|N_{-i,0}|$ nodes.

Analogously, updating the parameters $L_i$ and $T_i$ in the M-step exhibits a speedup of
$O((|M|\,|N_{-i,0}|)^{2K}\,|\Omega_{-i}|^K)$, while $V_i$ leads to $O((|M|\,|N_{-i,0}|)^K)$. This improvement is exponential
in the number of other agents. On the other hand, the E-step at level 0 exhibits complexity that is
typically greater compared to the total complexity of the E-steps for $|M|$ FSCs.

Proposition 4 (E-step ratio at level 0). E-steps when $|M|$ FSCs are inferred for K agents exhibit a
ratio of complexity, $O(|N_{-i,0}|^2 / |M|)$, compared to the E-step for obtaining $\phi_{-i,0}$.

The ratio in Prop. 4 is < 1 when smaller-sized controllers are sought and there are several models.
5 Experiments
Five variants of EM are evaluated as appropriate: the exact EM inference-based planning (labeled
I-EM); replacing the exact M-step with its greedy variant, analogously to the greedy maximization in
EM for POMDPs [12] (I-EM-Greedy); iterating EM based on coordinate blocks (I-EM-BCD), also
coupled with a greedy M-step (I-EM-BCD-Greedy); and lastly, using forward filtering-backward
sampling (I-EM-FFBS).

We use 4 problem domains: the noncooperative multiagent tiger problem [13] ($|S| = 2$, $|A_i| = |A_j| =
3$, $|\Omega_i| = |\Omega_j| = 6$ for level $l \ge 1$, $|\Omega_j| = 3$ at level 0, and $\gamma = 0.9$) with a total of 5 agents and 50
models for each other agent. A larger noncooperative 2-agent money laundering (ML) problem [14]
forms the second domain. It exhibits 99 physical states for the subject agent (blue team), 9 actions
for blue and 4 for the red team, 11 observations for the subject and 4 for the other, with about 100 models
for the red team.
[Figure 2: eight panels of level-1 controller value versus time (log scale where noted) for the 5-agent tiger, 2-agent ML, 3-agent UAV, and 5-agent policing domains, comparing I-EM, I-EM-Greedy, I-EM-BCD, I-EM-BCD-Greedy, I-EM-FFBS, and I-BPI; see caption below.]
Figure 2: FSCs improve with time for I-POMDP$_{i,1}$ in the (I-a) 5-agent tiger, (I-b) 2-agent money laundering,
(I-c) 3-agent UAV, and (I-d) 5-agent policing contexts. Observe that BCD causes substantially larger improvements in the initial iterations until we are close to convergence. I-EM-BCD or its greedy variant converges
significantly quicker than I-BPI to similar-valued FSCs for all four problem domains, as shown in (II-a, b, c and
d), respectively. All experiments were run on Linux with Intel Xeon 2.6GHz CPUs and 32GB RAM.
We also evaluate a 3-agent UAV reconnaissance problem involving a UAV tasked with
intercepting two fugitives in a 3x3 grid before they both reach the safe house [8]. It has 162 states for
the UAV, 5 actions, 4 observations for each agent, and 200,400 models for the two fugitives. Finally,
the recent policing protest problem is used, in which police must maintain order in 3 designated
protest sites populated by 4 groups of protesters who may be peaceful or disruptive [15]. It exhibits
27 states, 9 policing and 4 protesting actions, 8 observations, and 600 models per protesting group.
The latter two domains are historically the largest test problems for self-interested planning.

Comparative performance of all methods. In Figs. 2-I(a-d), we compare the variants on all problems. Each method starts with a random seed, and the converged value is significantly better than
a random FSC for all methods and problems. Increasing the sizes of FSCs gives better values in
general but also increases time; using FSCs of sizes 5, 3, 9 and 5 for the 4 domains, respectively,
demonstrated a good balance. We explored various coordinate block configurations, eventually settling on 3 equal-sized blocks for both the tiger and ML, 5 blocks for UAV and 2 for policing protest.

I-EM and the Greedy and BCD variants clearly exhibit an anytime property on the tiger, UAV and
policing problems. The noncooperative ML shows delayed increases because we show the value of
agent i's controller, and initial improvements in the other agent's controller may maintain or decrease
the value of i's controller. This is not surprising due to competition in the problem. Nevertheless,
after a small delay the values improve steadily, which is desirable.

I-EM-BCD consistently improves on I-EM and is often the fastest: the corresponding value improves
by large steps initially (fast non-asymptotic rate of convergence). In the context of ML and UAV,
I-EM-BCD-Greedy shows substantive improvements leading to controllers with much improved
values compared to other approaches. Despite a low sample size of about 1,000 for the problems,
I-EM-FFBS obtains FSCs whose values improve in general for tiger and ML, though slowly and
not always to the level of others. This is because the EM gets caught in a worse local optimum due
to sampling approximation, which strongly impacts the UAV problem; more samples did not escape
these optima. However, forward filtering only (as used in Wu et al. [6]) required a much larger
sample size to reach these levels. FFBS did not improve the controller in the fourth domain.
Characterization of local optima. While an exact solution for the smaller tiger problem with 5
agents (or the larger problems) could not be obtained for comparison, I-EM climbs to the optimal
value of 8.51 for the downscaled 2-agent version (not shown in Fig. 2). In comparison, BPI does
not get past the local optimum of -10 using an identically-sized controller; the corresponding controller
predominantly contains listening actions, relying on adding nodes to eventually reach the optimum.
While we are unaware of any general technique to escape local convergence in EM, I-EM can reach
the global optimum given an appropriate seed. This may not be a coincidence: the I-POMDP value
function space exhibits a single fixed point (the global optimum) which, in the context of Proposition 1, makes the likelihood function, $Q(\hat\pi_{i,l} \mid \pi_{i,l})$, unimodal (if $\pi_{i,l}$ is appropriately sized, as we
do not have a principled way of adding nodes). If $Q(\hat\pi_{i,l} \mid \pi_{i,l})$ is continuously differentiable for the
domain on hand, Corollary 1 in Wu [16] indicates that $\pi_{i,l}$ will converge to the unique maximizer.

Improvement on I-BPI. We compare the quickest of the I-EM variants with the previous best algorithm, I-BPI (Figs. 2-II(a-d)), allowing the latter to escape local optima as well by adding nodes.
Observe that FSCs improved using I-EM-BCD converge to values similar to those of I-BPI almost
two orders of magnitude faster. Beginning with 5 nodes, I-BPI adds 4 more nodes to obtain the same
level of value as EM for the tiger problem. For money laundering, I-EM-BCD-Greedy converges to
controllers whose value is at least 1.5 times better than I-BPI's given the same amount of allocated
time and fewer nodes. I-BPI failed to improve the seed controller and could not escape for the UAV
and policing protest problems. To summarize, this makes I-EM variants with emphasis on BCD the
fastest iterative approaches for infinite-horizon I-POMDPs currently.
6 Concluding Remarks

The EM formulation of Section 3 builds on the EM for POMDPs and differs drastically from the E- and M-steps for the cooperative DEC-POMDP [9]. The differences reflect how I-POMDPs build on
POMDPs and differ from DEC-POMDPs. These begin with the structure of the DBNs: the
DBN for I-POMDP$_{i,1}$ in Fig. 1 adds to the DBN for a POMDP hexagonal model nodes that contain
candidate models, chance nodes for action, and model update edges for each other agent at each
time step. This differs from the DBN for a DEC-POMDP, which adds controller nodes for all agents
and a joint observation chance node. The I-POMDP DBN contains controller nodes for the subject
agent only, and each model node collapses into an efficient distribution on running EM at level 0.

In domains where the joint reward function may be decomposed into factors encompassing subsets
of agents, ND-POMDPs allow the value function to be factorized as well. Kumar et al. [17] exploit
this structure by simply decomposing the whole DBN mixture into a mixture for each factor and iterating over the factors. Interestingly, the M-step may be performed individually for each agent, and
this approach scales beyond two agents. We exploit both graphical and problem structures to speed
up and scale in a way that is contextual to I-POMDPs. BCD also decomposes the DBN mixture
into equal blocks of horizons. While it has been applied in other areas [18, 19], these applications
do not transfer to planning. Additionally, problem structure is considered by using FFBS, which exploits information in the transition and observation distributions of the subject agent. FFBS could be
viewed as a tenuous example of Monte Carlo EM, which is a broad category that also includes the
forward sampling utilized by Wu et al. [6] for DEC-POMDPs. However, fundamental differences
exist between the two: forward sampling may be run in simulation and does not require the transition
and observation functions. Indeed, Wu et al. utilize it in a model-free setting. FFBS is model based,
utilizing exact forward messages in the backward sampling phase. This reduces the accumulation of
sampling errors over many time steps in extended DBNs, which otherwise afflicts forward sampling.

The advance in this paper for self-interested multiagent planning has wider relevance to areas such
as game play and ad hoc teams where agents model other agents. Developments in online EM for
hidden Markov models [20] provide an interesting avenue to utilize inference for online planning.
Acknowledgments

This research is supported in part by an NSF CAREER grant, IIS-0845036, and a grant from ONR,
N000141310870. We thank Akshat Kumar for feedback that led to improvements in the paper.
References
[1] Ekhlas Sonu and Prashant Doshi. Scalable solutions of interactive POMDPs using generalized and bounded policy iteration. Journal of Autonomous Agents and Multi-Agent Systems, DOI: 10.1007/s10458-014-9261-5, in press, 2014.
[2] Hagai Attias. Planning by probabilistic inference. In Ninth International Workshop on AI and Statistics (AISTATS), 2003.
[3] Marc Toussaint and Amos J. Storkey. Probabilistic inference for solving discrete and continuous state Markov decision processes. In International Conference on Machine Learning (ICML), pages 945–952, 2006.
[4] Jeffrey A. Fessler and Alfred O. Hero. Space-alternating generalized expectation-maximization algorithm. IEEE Transactions on Signal Processing, 42:2664–2677, 1994.
[5] P. Tseng. Convergence of block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109:475–494, 2001.
[6] Feng Wu, Shlomo Zilberstein, and Nicholas R. Jennings. Monte-Carlo expectation maximization for decentralized POMDPs. In Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), pages 397–403, 2013.
[7] Piotr J. Gmytrasiewicz and Prashant Doshi. A framework for sequential planning in multiagent settings. Journal of Artificial Intelligence Research, 24:49–79, 2005.
[8] Yifeng Zeng and Prashant Doshi. Exploiting model equivalences for solving interactive dynamic influence diagrams. Journal of Artificial Intelligence Research, 43:211–255, 2012.
[9] Akshat Kumar and Shlomo Zilberstein. Anytime planning for decentralized POMDPs using expectation maximization. In Conference on Uncertainty in AI (UAI), pages 294–301, 2010.
[10] Ankan Saha and Ambuj Tewari. On the nonasymptotic convergence of cyclic coordinate descent methods. SIAM Journal on Optimization, 23(1):576–601, 2013.
[11] C. K. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996.
[12] Marc Toussaint, Laurent Charlin, and Pascal Poupart. Hierarchical POMDP controller optimization by likelihood maximization. In Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI), pages 562–570, 2008.
[13] Prashant Doshi and Piotr J. Gmytrasiewicz. Monte Carlo sampling methods for approximating interactive POMDPs. Journal of Artificial Intelligence Research, 34:297–337, 2009.
[14] Brenda Ng, Carol Meyers, Kofi Boakye, and John Nitao. Towards applying interactive POMDPs to real-world adversary modeling. In Innovative Applications in Artificial Intelligence (IAAI), pages 1814–1820, 2010.
[15] Ekhlas Sonu, Yingke Chen, and Prashant Doshi. Individual planning in agent populations: Anonymity and frame-action hypergraphs. In International Conference on Automated Planning and Scheduling (ICAPS), pages 202–211, 2015.
[16] C. F. Jeff Wu. On the convergence properties of the EM algorithm. Annals of Statistics, 11(1):95–103, 1983.
[17] Akshat Kumar, Shlomo Zilberstein, and Marc Toussaint. Scalable multiagent planning using probabilistic inference. In International Joint Conference on Artificial Intelligence (IJCAI), pages 2140–2146, 2011.
[18] S. Arimoto. An algorithm for computing the capacity of arbitrary discrete memoryless channels. IEEE Transactions on Information Theory, 18(1):14–20, 1972.
[19] Jeffrey A. Fessler and Donghwan Kim. Axial block coordinate descent (ABCD) algorithm for X-ray CT image reconstruction. In International Meeting on Fully Three-dimensional Image Reconstruction in Radiology and Nuclear Medicine, volume 11, pages 262–265, 2011.
[20] Olivier Cappé and Eric Moulines. Online expectation-maximization algorithm for latent data models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3):593–613, 2009.
5,230 | 5,735 | Randomized Block Krylov Methods for Stronger and
Faster Approximate Singular Value Decomposition
Christopher Musco
Massachusetts Institute of Technology, EECS
Cambridge, MA 02139, USA
cpmusco@mit.edu
Cameron Musco
Massachusetts Institute of Technology, EECS
Cambridge, MA 02139, USA
cnmusco@mit.edu
Abstract
Since being analyzed by Rokhlin, Szlam, and Tygert [1] and popularized by
Halko, Martinsson, and Tropp [2], randomized Simultaneous Power Iteration has
become the method of choice for approximate singular value decomposition. It is
more accurate than simpler sketching algorithms, yet still converges quickly for
any matrix, independently of singular value gaps. After $\tilde{O}(1/\epsilon)$ iterations, it gives
a low-rank approximation within $(1 + \epsilon)$ of optimal for spectral norm error.

We give the first provable runtime improvement on Simultaneous Iteration: a randomized block Krylov method, closely related to the classic Block Lanczos algorithm, gives the same guarantees in just $\tilde{O}(1/\sqrt{\epsilon})$ iterations and performs substantially better experimentally. Our analysis is the first of a Krylov subspace method
that does not depend on singular value gaps, which are unreliable in practice.

Furthermore, while it is a simple accuracy benchmark, even $(1 + \epsilon)$ error for
spectral norm low-rank approximation does not imply that an algorithm returns
high quality principal components, a major issue for data applications. We address
this problem for the first time by showing that both Block Krylov Iteration and
Simultaneous Iteration give nearly optimal PCA for any matrix. This result further
justifies their strength over non-iterative sketching methods.
1 Introduction
Any matrix $A \in \mathbb{R}^{n \times d}$ with rank r can be written using a singular value decomposition (SVD) as
$A = U\Sigma V^T$. $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{d \times r}$ have orthonormal columns (A's left and right singular
vectors) and $\Sigma \in \mathbb{R}^{r \times r}$ is a positive diagonal matrix containing A's singular values: $\sigma_1 \ge \ldots \ge \sigma_r$.
A rank k partial SVD algorithm returns just the top k left or right singular vectors of A. These are
the first k columns of U or V, denoted $U_k$ and $V_k$ respectively.

Among countless applications, the SVD is used for optimal low-rank approximation and principal
component analysis (PCA). Specifically, for k < r, a partial SVD can be used to construct a rank k
approximation $A_k$ such that both $\|A - A_k\|_F$ and $\|A - A_k\|_2$ are as small as possible. We simply
set $A_k = U_k U_k^T A$. That is, $A_k$ is A projected onto the space spanned by its top k singular vectors.

For principal component analysis, A's top singular vector $u_1$ provides a top principal component,
which describes the direction of greatest variance within A. The i-th singular vector $u_i$ provides the
i-th principal component, which is the direction of greatest variance orthogonal to all higher principal
components. Formally, denoting A's i-th singular value as $\sigma_i$,

$$u_i^T A A^T u_i = \sigma_i^2 = \max_{x : \|x\|_2 = 1,\; x \perp u_j \,\forall j < i} x^T A A^T x.$$

Traditional SVD algorithms are expensive, typically running in $O(nd^2)$ time, so there has been substantial research on randomized techniques that seek nearly optimal low-rank approximation and
PCA [3, 4, 1, 2, 5]. These methods are quickly becoming standard tools in practice and implementations are widely available [6, 7, 8, 9], including in popular learning libraries [10].
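These definitions can be checked directly in a few lines of NumPy; the dimensions below are arbitrary, and the asserts restate the Eckart-Young optimality of $A_k$ in both norms:

```python
import numpy as np

A = np.random.randn(50, 30)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 5
Uk = U[:, :k]
Ak = Uk @ (Uk.T @ A)          # A projected onto the span of the top-k u_i

# The residual A - Ak has singular values sigma_{k+1}, ..., sigma_r, so:
assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])
assert np.isclose(np.linalg.norm(A - Ak, 'fro'), np.sqrt((s[k:] ** 2).sum()))
```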
Recent work focuses on algorithms whose runtimes do not depend on properties of A. In contrast,
classical literature typically gives runtime bounds that depend on the gaps between A's singular
values and become useless when these gaps are small (which is often the case in practice; see
Section 6). This limitation is due to a focus on how quickly approximate singular vectors converge
to the actual singular vectors of A. When two singular vectors have nearly identical values they are
difficult to distinguish, so convergence inherently depends on singular value gaps.

Only recently has a shift in approximation goal, along with an improved understanding of randomization, allowed for algorithms that avoid gap dependence and thus run provably fast for any matrix.
For low-rank approximation and PCA, we only need to find a subspace that captures nearly as much
variance as A's top singular vectors; distinguishing between two close singular values is overkill.
Prior Work
The fastest randomized SVD algorithms [3, 5] run in O(nnz(A)) time1 , are based on non-iterative
sketching methods, and return a rank k matrix Z with orthonormal columns z1 , . . . , zk satisfying
Frobenius Norm Error:
kA ? ZZT AkF ? (1 + )kA ? Ak kF .
(1)
Unfortunately, as emphasized in prior work [1, 2, 11, 12], Frobenius norm error is often hopelessly
insufficient, especially for data analysis and learning applications.
P When A has a ?heavy-tail? of
singular values, which is common for noisy data, kA ? Ak k2F = i>k ?i2 can be huge, potentially
much larger than A?s top singular value. This renders (1) meaningless since Z does not need to
align with any large singular vectors to obtain good multiplicative error.
To address this shortcoming, a number of papers target spectral norm low-rank approximation error,
Spectral Norm Error:
kA ? ZZT Ak2 ? (1 + )kA ? Ak k2 ,
(2)
which is intuitively stronger. When looking for a rank k approximation, A?s top k singular vectors
are often considered data and the remaining tail is considered noise. A spectral norm guarantee
roughly ensures that ZZT A recovers A up to this noise threshold.
A series of work [1, 2, 13, 14, 15] shows that the decades old Simultaneous Power Iteration (also
called subspace iteration or orthogonal iteration) implemented with random start vectors, achieves
?
(2) after O(1/)
iterations. Hence, this method, which was popularized by Halko, Martinsson, and
Tropp in [2], has become the randomized SVD algorithm of choice for practitioners [10, 16].
2
Our Results
Algorithm 1 SIMULTANEOUS ITERATION
input: A ∈ R^{n×d}, error ε ∈ (0, 1), rank k ≤ n, d
output: Z ∈ R^{n×k}
1: q := Θ(log d / ε), Π ∼ N(0, 1)^{d×k}
2: K := (AAᵀ)^q AΠ
3: Orthonormalize the columns of K to obtain Q ∈ R^{n×k}.
4: Compute M := QᵀAAᵀQ ∈ R^{k×k}.
5: Set Ū_k to the top k singular vectors of M.
6: return Z = QŪ_k

Algorithm 2 BLOCK KRYLOV ITERATION
input: A ∈ R^{n×d}, error ε ∈ (0, 1), rank k ≤ n, d
output: Z ∈ R^{n×k}
1: q := Θ(log d / √ε), Π ∼ N(0, 1)^{d×k}
2: K := [AΠ, (AAᵀ)AΠ, . . . , (AAᵀ)^q AΠ]
3: Orthonormalize the columns of K to obtain Q ∈ R^{n×qk}.
4: Compute M := QᵀAAᵀQ ∈ R^{qk×qk}.
5: Set Ū_k to the top k singular vectors of M.
6: return Z = QŪ_k
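For concreteness, a minimal NumPy sketch of both procedures follows (illustrative function names and code, not the authors' implementation; per-iteration and per-block reorthonormalization is included as discussed in Section 5):

import numpy as np

def simultaneous_iteration(A, k, q):
    Pi = np.random.randn(A.shape[1], k)       # random start, Pi ~ N(0,1)^{d x k}
    K = A @ Pi
    for _ in range(q):                        # builds (A A^T)^q A Pi iteratively,
        K, _ = np.linalg.qr(K)                # re-orthonormalizing for stability
        K = A @ (A.T @ K)
    Q, _ = np.linalg.qr(K)                    # orthonormal basis of K, n x k
    M = Q.T @ (A @ (A.T @ Q))                 # M = Q^T A A^T Q
    U, _, _ = np.linalg.svd(M)
    return Q @ U[:, :k]                       # Z = Q U_k

def block_krylov_iteration(A, k, q):
    Pi = np.random.randn(A.shape[1], k)
    B, blocks = A @ Pi, []
    for _ in range(q + 1):                    # K = [A Pi, (AA^T) A Pi, ..., (AA^T)^q A Pi]
        B, _ = np.linalg.qr(B)                # per-block orthonormalization
        blocks.append(B)
        B = A @ (A.T @ B)
    Q, _ = np.linalg.qr(np.hstack(blocks))    # n x (q+1)k basis of the Krylov subspace
    M = Q.T @ (A @ (A.T @ Q))
    U, _, _ = np.linalg.svd(M)
    return Q @ U[:, :k]                       # Z = Q U_k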
2.1 Faster Algorithm
We show that Algorithm 2, a randomized relative of the Block Lanczos algorithm [17, 18], which
we call Block Krylov Iteration, gives the same guarantees as Simultaneous Iteration (Algorithm 1)
in just Õ(1/√ε) iterations. This not only gives the fastest known theoretical runtime for achieving (2), but also yields substantially better performance in practice (see Section 6).
¹Here nnz(A) is the number of non-zero entries in A and this runtime hides lower order terms.
Even though the algorithm has been discussed and tested for potential improvement over Simultaneous Iteration [1, 19, 20], theoretical bounds for Krylov subspace and Lanczos methods are much
more limited. As highlighted in [11],
"Despite decades of research on Lanczos methods, the theory for [randomized power iteration] is more complete and provides strong guarantees of excellent accuracy, whether or not there exist any gaps between the singular values."
Our work addresses this issue, giving the first gap independent bound for a Krylov subspace method.
2.2 Stronger Guarantees
In addition to runtime improvements, we target a much stronger notion of approximate SVD that is
needed for many applications, but for which no gap-independent analysis was known.
Specifically, as noted in [21], while intuitively stronger than Frobenius norm error, (1 + ε) spectral norm low-rank approximation error does not guarantee any accuracy in Z for many matrices². Consider A with its top k + 1 squared singular values all equal to 10, followed by a tail of smaller singular values (e.g. 1000k at 1). ‖A − A_k‖₂² = 10, but in fact ‖A − ZZᵀA‖₂² = 10 for any rank k Z, leaving the spectral norm bound useless. At the same time, ‖A − A_k‖_F² is large, so Frobenius error is meaningless as well. For example, any Z obtains ‖A − ZZᵀA‖_F² ≤ (1.01)‖A − A_k‖_F².
With this scenario in mind, it is unsurprising that low-rank approximation guarantees fail as an
accuracy measure in practice. We ran a standard sketch-and-solve approximate SVD algorithm
(see Section 3) on SNAP/AMAZON0302, an Amazon product co-purchasing dataset [22, 23], and
achieved very good low-rank approximation error in both norms for k = 30:
‖A − ZZᵀA‖_F < 1.001 ‖A − A_k‖_F   and   ‖A − ZZᵀA‖₂ < 1.038 ‖A − A_k‖₂.
However, the approximate principal components given by Z are of significantly lower quality than
A's true singular vectors (see Figure 1). We saw similar results for a number of other datasets.
[Figure 1 plot omitted: the true values σ_i² = u_iᵀ(AAᵀ)u_i versus the approximations z_iᵀ(AAᵀ)z_i for indices i = 1, . . . , 30.]
Figure 1: Poor per vector error (3) for SNAP/AMAZON0302 returned by a sketch-and-solve approximate SVD that gives very good low-rank approximation in both spectral and Frobenius norm.
We address this issue by introducing a per vector guarantee that requires each approximate singular vector z_1, . . . , z_k to capture nearly as much variance as the corresponding true singular vector:

Per Vector Error:    ∀i, |u_iᵀAAᵀu_i − z_iᵀAAᵀz_i| ≤ ε σ_{k+1}².    (3)
The error bound (3) is very strong in that it depends on σ_{k+1}², which is better than relative error for A's large singular values. While it is reminiscent of the bounds sought in classical numerical analysis [24], we stress that (3) does not require each z_i to converge to u_i in the presence of small
singular value gaps. In fact, we show that both randomized Block Krylov Iteration and our slightly
modified Simultaneous Iteration algorithm achieve (3) in gap-independent runtimes.
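A small evaluation helper in the same spirit (illustrative; it uses a dense SVD as ground truth, so it is only meant for testing on modest matrices) measures a candidate Z against all three guarantees:

import numpy as np

def approximation_errors(A, Z):
    k = Z.shape[1]
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    R = A - Z @ (Z.T @ A)                                          # residual A - Z Z^T A
    fro = np.linalg.norm(R, 'fro') / np.sqrt((s[k:] ** 2).sum())   # guarantee (1), ~1 + eps
    spec = np.linalg.norm(R, 2) / s[k]                             # guarantee (2); s[k] = sigma_{k+1}
    per_vec = max(abs(U[:, i] @ (A @ (A.T @ U[:, i])) -
                      Z[:, i] @ (A @ (A.T @ Z[:, i])))
                  for i in range(k)) / s[k] ** 2                   # guarantee (3), ~eps
    return fro, spec, per_vec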
2.3 Main Result
Our contributions are summarized in Theorem 1. Its detailed proof is relegated to the full version of
this paper [25]. The runtimes are given in Theorems 6 and 7, and the three error bounds shown in
Theorems 10, 11, and 12. In Section 4 we provide a sketch of the main ideas behind the result.
²In fact, it does not even imply (1 + ε) Frobenius norm error.
Theorem 1 (Main Theorem). With high probability, Algorithms 1 and 2 find approximate singular vectors Z = [z_1, . . . , z_k] satisfying guarantees (1) and (2) for low-rank approximation and (3) for PCA. For error ε, Algorithm 1 requires q = O(log d/ε) iterations while Algorithm 2 requires q = O(log d/√ε) iterations. Excluding lower order terms, both algorithms run in time O(nnz(A)kq).
In the full version of this paper we also use our results to give an alternative analysis that does
depend on singular value gaps and can offer significantly faster convergence when A has decaying
singular values. It is possible to take further advantage of this result by running Algorithms 1 and 2
with a Π that has > k columns, a simple modification for accelerating either method.
In Section 6 we test both algorithms on a number of large datasets. We justify the importance of gap
independent bounds for predicting algorithm convergence and we show that Block Krylov Iteration
in fact significantly outperforms the more popular Simultaneous Iteration.
2.4 Comparison to Classical Bounds
Decades of work has produced a variety of gap dependent bounds for Krylov methods [26]. Most
relevant to our work are bounds for block Krylov methods with block size equal to k [27]. Roughly speaking, with randomized initialization, these results offer guarantees equivalent to our strong equation (3) for the top k singular directions after O(log(d/ε)/√(σ_k/σ_{k+1} − 1)) iterations.
This bound is recovered in Section 7 of this paper's full version [25]. When the target accuracy ε is smaller than the relative singular value gap (σ_k/σ_{k+1} − 1), it is tighter than our gap-independent results. However, as discussed in Section 6, for high dimensional data problems where ε is set far above machine precision, gap-independent bounds more accurately predict required iteration count.
Prior work also attempts to analyze algorithms with block size smaller than k [24]. While "small block" algorithms offer runtime advantages, it is well understood that with b duplicate singular values, it is impossible to recover the top k singular directions with a block of size < b [28]. More generally, large singular value clusters slow convergence, so any small block algorithm must have runtime dependence on the gaps between each adjacent pair of top singular values [29].
3 Analyzing Simultaneous Iteration
Before discussing our proof of Theorem 1, we review prior work on Simultaneous Iteration to
demonstrate how it can achieve the spectral norm guarantee (2).
Algorithms for Frobenius norm error (1) typically work by sketching A into very few dimensions using a Johnson-Lindenstrauss random projection matrix Π with poly(k/ε) columns:

    A^{n×d} · Π^{d×poly(k/ε)} = (AΠ)^{n×poly(k/ε)}.

Π is usually a random Gaussian or (possibly sparse) random sign matrix, and Z is computed using the SVD of AΠ or of A projected onto AΠ [3, 5, 30]. This "sketch-and-solve" approach is very efficient: the computation of AΠ is easily parallelized and, regardless, pass-efficient in a single processor setting. Furthermore, once a small compression of A is obtained, it can be manipulated in fast memory for the final computation of Z.
However, Frobenius norm error seems an inherent limitation of sketch-and-solve methods. The noise from A's lower r − k singular values corrupts AΠ, making it impossible to extract a good partial SVD if the sum of these singular values (equal to ‖A − A_k‖_F²) is too large.
In order to achieve spectral norm error (2), Simultaneous Iteration must reduce this noise down to the scale of σ_{k+1} = ‖A − A_k‖₂. It does this by working with the powered matrix A^q [31].³ By the spectral theorem, A^q has exactly the same singular vectors as A, but its singular values are equal to those of A raised to the qth power. Powering spreads the values apart and accordingly, A^q's lower singular values are relatively much smaller than its top singular values (see example in Figure 2a).
Specifically, q = O(log d/ε) is sufficient to increase any singular value ≥ (1 + ε)σ_{k+1} to be significantly (i.e. poly(d) times) larger than any value ≤ σ_{k+1}. This effectively denoises our problem: if we use a sketching method to find a good Z for approximating A^q up to Frobenius norm error, Z will have to align very well with every singular vector with value ≥ (1 + ε)σ_{k+1}. It thus provides an accurate basis for approximating A up to small spectral norm error.
³For nonsymmetric matrices we work with (AAᵀ)^q A, but present the symmetric case here for simplicity.
[Figure 2 plots omitted. (a) A's singular values compared to those of A^q, rescaled to match on σ_1; notice the significantly reduced tail after σ_8. (b) An O(1/√ε)-degree Chebyshev polynomial, T_{O(1/√ε)}(x), pushes low values nearly as close to zero as x^{O(1/ε)}.]
Figure 2: Replacing A with a matrix polynomial facilitates higher accuracy approximation.
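The separation effect can be checked numerically; in the toy comparison below (synthetic thresholds, illustrative) a degree-4 Chebyshev polynomial already amplifies a value 10% above the noise band more than the degree-16 monomial x^16 does:

import numpy as np
from numpy.polynomial import chebyshev as C

q, deg = 16, 4                       # deg ~ sqrt(q)
alpha, beta = 0.5, 0.55              # alpha plays sigma_{k+1}; beta = (1 + eps) alpha, eps = 0.1
T = C.Chebyshev.basis(deg)           # Chebyshev polynomial T_deg on [-1, 1]
p = lambda x: T(2 * x / alpha - 1)   # shifted so that |p| <= 1 on the "noise" band [0, alpha]
print(abs(p(0.3)), abs(p(alpha)))    # both bounded by 1
print(p(beta), (beta / alpha) ** q)  # ~6.07 vs ~4.59: more separation at far lower degree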
Computing A^q directly is costly, so A^qΠ is computed iteratively: start with a random Π and repeatedly multiply by A on the left. Since even a rough Frobenius norm approximation for A^q suffices, Π can be chosen to have just k columns. Each iteration thus takes O(nnz(A)k) time.
When analyzing Simultaneous Iteration, [15] uses the following randomized sketch-and-solve result
to find a Z that gives a coarse Frobenius norm approximation to B = Aq and therefore a good
spectral norm approximation to A. The lemma is numbered for consistency with our full paper.
Lemma 4 (Frobenius Norm Low-Rank Approximation). For any B ∈ R^{n×d} and Π ∈ R^{d×k} where the entries of Π are independent Gaussians drawn from N(0, 1), if we let Z be an orthonormal basis for span(BΠ), then with probability at least 99/100, for some fixed constant c,

    ‖B − ZZᵀB‖_F² ≤ c · dk · ‖B − B_k‖_F².
For analyzing block methods, results like Lemma 4 can effectively serve as a replacement for earlier
random initialization analysis that applies to single vector power and Krylov methods [32].
Recall from above that powering guarantees σ_{k+1}(A^q) ≤ (1/poly(d)) · σ_m(A^q) for any m with σ_m(A) ≥ (1 + ε)σ_{k+1}(A). Plugging into Lemma 4:

    ‖A^q − ZZᵀA^q‖_F² ≤ cdk · Σ_{i=k+1}^{r} σ_i²(A^q) ≤ cdk · d · σ_{k+1}²(A^q) ≤ σ_m²(A^q)/poly(d).

Rearranging using the Pythagorean theorem, we have ‖ZZᵀA^q‖_F² ≥ ‖A^q‖_F² − σ_m²(A^q)/poly(d). That is, A^q's projection onto Z captures nearly all of its Frobenius norm. This is only possible if Z aligns very well with the top singular vectors of A^q and hence gives a good spectral norm approximation for A.
4 Proof Sketch for Theorem 1
The intuition for beating Simultaneous Iteration with Block Krylov Iteration matches that of many accelerated iterative methods. Simply put, there are better polynomials than A^q for denoising tail singular values. In particular, we can use a lower degree polynomial, allowing us to compute fewer powers of A and thus leading to an algorithm with fewer iterations. For example, an appropriately shifted degree q = O(log(d)/√ε) Chebyshev polynomial can push the tail of A nearly as close to zero as A^{O(log d/ε)}, even if the long run growth of the polynomial is much lower (see Figure 2b).
Specifically, we prove the following scalar polynomial lemma in the full version of our paper [25], which can then be applied to effectively denoising A's singular value tail.
Lemma 5 (Chebyshev Minimizing Polynomial). For ε ∈ (0, 1] and q = O(log d/√ε), there exists a degree q polynomial p(x) such that p((1 + ε)σ_{k+1}) = (1 + ε)σ_{k+1} and,
1) p(x) ≥ x for x ≥ (1 + ε)σ_{k+1},
2) |p(x)| ≤ σ_{k+1}/poly(d) for x ≤ σ_{k+1}.
Furthermore, we can choose the polynomial to only contain monomials with odd powers.
Block Krylov Iteration takes advantage of such polynomials by working with the Krylov subspace

    K = [Π, AΠ, A²Π, A³Π, . . . , A^qΠ],

from which we can construct p_q(A)Π for any polynomial p_q(·) of degree q.⁴ Since the polynomial from Lemma 5 must be scaled and shifted based on the value of σ_{k+1}, we cannot easily compute it directly. Instead, we argue that the very best rank k approximation to A lying in the span of K at least matches the approximation achieved by projecting onto the span of p_q(A)Π. Finding this best approximation will therefore give a nearly optimal low-rank approximation to A.
Unfortunately, there's a catch. Surprisingly, it is not clear how to efficiently compute the best spectral norm error low-rank approximation to A lying in a given subspace (e.g. K's span) [14, 33]. This
challenge precludes an analysis of Krylov methods parallel to recent work on Simultaneous Iteration.
Nevertheless, since our analysis shows that projecting to Z captures nearly all the Frobenius norm
of p_q(A), we can show that the best Frobenius norm low-rank approximation to A in the span of K
gives good enough spectral norm approximation. By the following lemma, this optimal Frobenius
norm low-rank approximation is given by ZZT A, where Z is exactly the output of Algorithm 2.
Lemma 6 (Lemma 4.1 of [15]). Given A ∈ R^{n×d} and Q ∈ R^{n×m} with orthonormal columns,

    ‖A − (QQᵀA)_k‖_F = ‖A − Q(QᵀA)_k‖_F = min_{C : rank(C)=k} ‖A − QC‖_F.

Q(QᵀA)_k can be obtained using an SVD of the m × m matrix M = Qᵀ(AAᵀ)Q. Specifically, letting M = ŪΣ̄²Ūᵀ be the SVD of M and Z = QŪ_k, then Q(QᵀA)_k = ZZᵀA.
4.1 Stronger Per Vector Error Guarantees
Achieving the per vector guarantee of (3) requires a more nuanced understanding of how Simultaneous Iteration and Block Krylov Iteration denoise the spectrum of A. The analysis for spectral norm low-rank approximation relies on the fact that A^q (or p_q(A) for Block Krylov Iteration) blows up any singular value ≥ (1 + ε)σ_{k+1} to much larger than any singular value ≤ σ_{k+1}. This ensures that our output Z aligns very well with the singular vectors corresponding to these large singular values.
If σ_k ≥ (1 + ε)σ_{k+1}, then Z aligns well with all top k singular vectors of A and we get good Frobenius norm error and the per vector guarantee (3). Unfortunately, when there is a small gap between σ_k and σ_{k+1}, Z could miss intermediate singular vectors whose values lie between σ_{k+1} and (1 + ε)σ_{k+1}. This is the case where gap dependent guarantees of classical analysis break down.
However, A^q or, for Block Krylov Iteration, some q-degree polynomial in our Krylov subspace, also significantly separates singular values > σ_{k+1} from those < (1 − ε)σ_{k+1}. Thus, each column of Z at least aligns with A nearly as well as u_{k+1}. So, even if we miss singular values between σ_{k+1} and (1 + ε)σ_{k+1}, they will be replaced with approximate singular values > (1 − ε)σ_{k+1}, enough for (3).
For Frobenius norm low-rank approximation (1), we prove that the degree to which Z falls outside of the span of A's top k singular vectors depends on the number of singular values between σ_{k+1} and (1 − ε)σ_{k+1}. These are the values that could be "swapped in" for the true top k singular values. Since their weight counts towards A's tail, our total loss compared to optimal is at worst ε‖A − A_k‖_F².
5 Implementation and Runtimes
For both Algorithm 1 and 2, Π can be replaced by a random sign matrix, or any matrix achieving the guarantee of Lemma 4. Π may also be chosen with p > k columns. In our full paper [25], we discuss in detail how this approach can give improved accuracy.
5.1 Simultaneous Iteration
In our implementation we set Z = QŪ_k, which is necessary for achieving per vector guarantees for approximate PCA. However, for near optimal low-rank approximation, we can simply set Z = Q. Projecting A to QŪ_k is equivalent to projecting to Q as these matrices have the same column spans.
Since powering A spreads its singular values, K = (AAᵀ)^q AΠ could be poorly conditioned. To improve stability we orthonormalize K after every iteration (or every few iterations). This does not change K's column span, so it gives an equivalent algorithm in exact arithmetic.
⁴Algorithm 2 in fact only constructs odd powered terms in K, which is sufficient for our choice of p_q(x).
Theorem 7 (Simultaneous Iteration Runtime). Algorithm 1 runs in time

    O(nnz(A)k log(d)/ε + nk² log(d)/ε).

Proof. Computing K requires first multiplying A by Π, which takes O(nnz(A)k) time. Computing (AAᵀ)^i AΠ given (AAᵀ)^{i−1} AΠ then takes O(nnz(A)k) time: first multiply our (n × k) matrix by Aᵀ and then by A. Reorthogonalizing after each iteration takes O(nk²) time via Gram-Schmidt. This gives a total runtime of O(nnz(A)kq + nk²q) for computing K. Finding Q takes O(nk²) time. Computing M by multiplying from left to right requires O(nnz(A)k + nk²) time. M's SVD then requires O(k³) time using classical techniques. Finally, multiplying Ū_k by Q takes time O(nk²). Setting q = Θ(log d/ε) gives the claimed runtime.
5.2 Block Krylov Iteration
In the traditional Block Lanczos algorithm, one starts by computing an orthonormal basis for AΠ,
the first block in K. Bases for subsequent blocks are computed from previous blocks using a three
term recurrence that ensures QᵀAAᵀQ is block tridiagonal, with k × k sized blocks [18]. This
technique can be useful if qk is large, since it is faster to compute the top singular vectors of a block
tridiagonal matrix. However, computing Q using a recurrence can introduce a number of stability
issues, and additional steps may be required to ensure that the matrix remains orthogonal [28].
An alternative, used in [1], [19], and our Algorithm 2, is to compute K explicitly and then find Q using a QR decomposition. This method does not guarantee that QᵀAAᵀQ is block tridiagonal,
but avoids stability issues. Furthermore, if qk is small, taking the SVD of QᵀAAᵀQ will still be
fast and typically dominated by the cost of computing K.
As with Simultaneous Iteration, we orthonormalize each block of K after it is computed, avoiding
poorly conditioned blocks and giving an equivalent algorithm in exact arithmetic.
Theorem 8 (Block Krylov Iteration Runtime). Algorithm 2 runs in time

    O(nnz(A)k log(d)/√ε + nk² log²(d)/ε + k³ log³(d)/ε^{3/2}).

Proof. Computing K, including reorthogonalization, requires O(nnz(A)kq + nk²q) time. The remaining steps are analogous to those in Simultaneous Iteration except somewhat more costly, as we work with a kq rather than k dimensional subspace. Finding Q takes O(n(kq)²) time. Computing M takes O(nnz(A)(kq) + n(kq)²) time and its SVD then requires O((kq)³) time. Finally, multiplying Ū_k by Q takes time O(nk(kq)). Setting q = Θ(log d/√ε) gives the claimed runtime.
6 Experiments
We close with several experimental results. A variety of empirical papers, not to mention widespread
adoption, already justify the use of randomized SVD algorithms. Prior work focuses in particular on
benchmarking Simultaneous Iteration [19, 11] and, due to its improved accuracy over sketch-and-solve approaches, this algorithm is popular in practice [10, 16]. As such, we focus on demonstrating
that for many data problems Block Krylov Iteration can offer significantly better convergence.
We implement both algorithms in MATLAB using Gaussian random starting matrices with exactly
k columns. We explicitly compute K for both algorithms, as described in Section 5, and use reorthonormalization at each iteration to improve stability [34]. We test the algorithms with varying
iteration count q on three common datasets, SNAP/AMAZON0302 [22, 23], SNAP/EMAIL-ENRON [22, 35], and 20NEWSGROUPS [36], computing column principal components in all cases. We plot error vs. iteration count for metrics (1), (2), and (3) in Figure 3. For per vector error (3), we plot the maximum deviation amongst all top k approximate principal components (relative to σ_{k+1}²).
Unsurprisingly, both algorithms obtain very accurate Frobenius norm error, ‖A − ZZᵀA‖_F/‖A − A_k‖_F, with very few iterations. This is our intuitively weakest guarantee and, in the presence of a
heavy singular value tail, both iterative algorithms will outperform the worst case analysis.
On the other hand, for spectral norm low-rank approximation and per vector error, we confirm that
Block Krylov Iteration converges much more rapidly than Simultaneous Iteration, as predicted by
our theoretical analysis. It is often possible to achieve nearly optimal error with < 8 iterations, whereas getting to within say 1% error with Simultaneous Iteration can take much longer.

Figure 3: Low-rank approximation and per vector error convergence rates for Algorithms 1 and 2. [Plots omitted: error versus iteration count q for Block Krylov Iteration and Simultaneous Iteration under the Frobenius, spectral, and per vector error metrics on (a) SNAP/AMAZON0302, k = 30; (b) SNAP/EMAIL-ENRON, k = 10; (c) 20NEWSGROUPS, k = 20; and (d) error versus runtime in seconds on 20NEWSGROUPS, k = 20.]
The final plot in Figure 3 shows error versus runtime for the 11269 × 15088 dimensional 20NEWSGROUPS dataset. We averaged over 7 trials and ran the experiments on a commodity laptop with 16GB of memory. As predicted, because its additional memory overhead and post-processing costs are small compared to the cost of the large matrix multiplication required for each iteration, Block Krylov Iteration outperforms Simultaneous Iteration for small ε.
More generally, these results justify the importance of convergence bounds that are independent of
singular value gaps. Our analysis in Section 6 of the full paper predicts that, once ε is small in comparison to the gap σ_k/σ_{k+1} − 1, we should see much more rapid convergence, since q will depend on log(1/ε) instead of 1/ε. However, for Simultaneous Iteration, we do not see this behavior with SNAP/AMAZON0302 and it only just begins to emerge for 20NEWSGROUPS.
While all three datasets have rapid singular value decay, a careful look confirms that their singular value gaps are actually quite small! For example, σ_k/σ_{k+1} − 1 is .004 for SNAP/AMAZON0302 and .011 for 20NEWSGROUPS, in comparison to .042 for SNAP/EMAIL-ENRON. Accordingly, the frequent claim that singular value gaps can be taken as constant is insufficient, even for small ε.
References
[1] Vladimir Rokhlin, Arthur Szlam, and Mark Tygert. A randomized algorithm for principal component analysis. SIAM Journal on Matrix Analysis and Applications, 31(3):1100–1124, 2009.
[2] Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[3] Tamás Sarlós. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006.
[4] Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert. A randomized algorithm for the approximation of matrices. Technical Report 1361, Yale University, 2006.
[5] Kenneth Clarkson and David Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), pages 81–90, 2013.
[6] Antoine Liutkus. Randomized SVD, 2014. MATLAB Central File Exchange.
[7] Daisuke Okanohara. redsvd: RandomizED SVD. https://code.google.com/p/redsvd/, 2010.
[8] David Hall et al. ScalaNLP: Breeze. http://www.scalanlp.org/, 2009.
[9] IBM Research Division, Skylark Team. libskylark: Sketching-based Distributed Matrix Computations for Machine Learning. IBM Corporation, Armonk, NY, 2014.
[10] F. Pedregosa et al. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830, 2011.
[11] Arthur Szlam, Yuval Kluger, and Mark Tygert. An implementation of a randomized algorithm for principal component analysis. arXiv:1412.3510, 2014.
[12] Zohar Karnin and Edo Liberty. Online PCA with spectral bounds. In Proceedings of the 28th Annual Conference on Computational Learning Theory (COLT), pages 505–509, 2015.
[13] Rafi Witten and Emmanuel J. Candès. Randomized algorithms for low-rank matrix factorizations: Sharp performance bounds. Algorithmica, 31(3):1–18, 2014.
[14] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. SIAM Journal on Computing, 43(2):687–717, 2014.
[15] David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1–157, 2014.
[16] Andrew Tulloch. Fast randomized singular value decomposition. http://research.facebook.com/blog/294071574113354/fast-randomized-svd/, 2014.
[17] Jane Cullum and W.E. Donath. A block Lanczos algorithm for computing the q algebraically largest eigenvalues and a corresponding eigenspace of large, sparse, real symmetric matrices. In IEEE Conference on Decision and Control including the 13th Symposium on Adaptive Processes, pages 505–509, 1974.
[18] Gene Golub and Richard Underwood. The block Lanczos method for computing eigenvalues. Mathematical Software, (3):361–377, 1977.
[19] Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, and Mark Tygert. An algorithm for the principal component analysis of large data sets. SIAM Journal on Scientific Computing, 33(5):2580–2594, 2011.
[20] Nathan Halko. Randomized methods for computing low-rank approximations of matrices. PhD thesis, University of Colorado, 2012.
[21] Ming Gu. Subspace iteration randomization and singular value problems. arXiv:1408.2208, 2014.
[22] Timothy A. Davis and Yifan Hu. The University of Florida sparse matrix collection. ACM Transactions on Mathematical Software, 38(1):1:1–1:25, December 2011.
[23] Jure Leskovec, Lada A. Adamic, and Bernardo A. Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007.
[24] Y. Saad. On the rates of convergence of the Lanczos and the block-Lanczos methods. SIAM Journal on Numerical Analysis, 17(5):687–706, 1980.
[25] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. arXiv:1504.05477, 2015.
[26] Yousef Saad. Numerical Methods for Large Eigenvalue Problems: Revised Edition, volume 66. 2011.
[27] Gene Golub, Franklin Luk, and Michael Overton. A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix. ACM Transactions on Mathematical Software, 7(2):149–169, 1981.
[28] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[29] Ren-Cang Li and Lei-Hong Zhang. Convergence of the block Lanczos method for eigenvalue clusters. Numerische Mathematik, 131(1):83–113, 2015.
[30] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC), 2015.
[31] Friedrich L. Bauer. Das Verfahren der Treppeniteration und verwandte Verfahren zur Lösung algebraischer Eigenwertprobleme. Zeitschrift für angewandte Mathematik und Physik ZAMP, 8(3):214–235, 1957.
[32] J. Kuczyński and H. Woźniakowski. Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start. SIAM Journal on Matrix Analysis and Applications, 13(4):1094–1122, 1992.
[33] Kin Cheong Sou and Anders Rantzer. On the minimum rank of a generalized matrix approximation problem in the maximum singular value norm. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS), 2010.
[34] Per-Gunnar Martinsson, Arthur Szlam, and Mark Tygert. Normalized power iterations for the computation of SVD, 2010. NIPS Workshop on Low-rank Methods for Large-scale Machine Learning.
[35] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 177–187, 2005.
[36] Jason Rennie. 20 Newsgroups. http://qwone.com/~jason/20Newsgroups/, May 2015.
Minimum Weight Perfect Matching
via Blossom Belief Propagation
Sungsoo Ahn*   Sejun Park*   Michael Chertkov†   Jinwoo Shin*
*School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
†Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, USA
{sungsoo.ahn, sejun.park, jinwoos}@kaist.ac.kr   chertkov@lanl.gov
Abstract
Max-product Belief Propagation (BP) is a popular message-passing algorithm for
computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM). It has been shown that BP can solve a number of combinatorial optimization problems including minimum weight matching,
shortest path, network flow and vertex cover under the following common assumption: the respective Linear Programming (LP) relaxation is tight, i.e., no integrality
gap is present. However, when LP shows an integrality gap, no model has been
known which can be solved systematically via sequential applications of BP. In
this paper, we develop the first such algorithm, coined Blossom-BP, for solving
the minimum weight matching problem over arbitrary graphs. Each step of the
sequential algorithm requires applying BP over a modified graph constructed by
contractions and expansions of blossoms, i.e., odd sets of vertices. Our scheme
guarantees termination in O(n²) of BP runs, where n is the number of vertices in
the original graph. In essence, the Blossom-BP offers a distributed version of the
celebrated Edmonds? Blossom algorithm by jumping at once over many sub-steps
with a single BP. Moreover, our result provides an interpretation of the Edmonds' algorithm as a sequence of LPs.
1 Introduction
Graphical Models (GMs) provide a useful representation for reasoning in a number of scientific disciplines [1, 2, 3, 4]. Such models use a graph structure to encode the joint probability distribution,
where vertices correspond to random variables and edges specify conditional dependencies. An
important inference task in many applications involving GMs is to find the most-likely assignment
to the variables in a GM, i.e., Maximum-A-Posteriori (MAP). Belief Propagation (BP) is a popular algorithm for approximately solving the MAP inference problem and it is an iterative, message
passing one that is exact on tree structured GMs. BP often shows remarkably strong heuristic performance beyond trees, i.e., over loopy GMs. Furthermore, BP is of a particular relevance to large-scale
problems due to its potential for parallelization [5] and its ease of programming within the modern
programming models for parallel computing, e.g., GraphLab [6], GraphChi [7] and OpenMP [8].
The convergence and correctness of BP was recently established for a certain class of loopy GM formulations of several classical combinatorial optimization problems, including matching [9, 10, 11],
perfect matching [12], shortest path [13], independent set [14], network flow [15] and vertex cover
[16]. The important common feature of these models is that BP converges to a correct assignment
when the Linear Programming (LP) relaxation of the combinatorial optimization is tight, i.e., when
it shows no integrality gap. The LP tightness is an inevitable condition to guarantee the performance
of BP and no combinatorial optimization instance has been known where BP would be used to solve
problems without the LP tightness. On the other hand, in the LP literature, it has been extensively
studied how to enforce the LP tightness via solving multiple intermediate LPs that are systematically
designed, e.g., via the cutting-plane method [21]. Motivated by these studies, we pose a similar question for BP, ?how to enforce correctness of BP, possibly by solving multiple intermediate BPs?. In
this paper, we show how to resolve this question for the minimum weight (or cost) perfect matching
problem over arbitrary graphs.
Contribution. We develop an algorithm, coined Blossom-BP, for solving the minimum weight
matching problem over an arbitrary graph. Our algorithm solves multiple intermediate BPs until the
final BP outputs the solution. The algorithm is sequential, where each step includes running BP over
a "contracted" graph derived from the original graph by contractions and infrequent expansions of
blossoms, i.e., odd sets of vertices. To build such a scheme, we first design an algorithm, coined
Blossom-LP, solving multiple intermediate LPs. Second, we show that each LP is solvable by
BP using the recent framework [16] that establishes a generic connection between BP and LP. For
the first part, cutting-plane methods solving multiple intermediate LPs for the minimum weight
matching problem have been discussed by several authors over the past decades [17, 18, 19, 20] and
a provably polynomial-time scheme was recently suggested [21]. However, LPs in [21] were quite
complex to solve by BP. To address the issue, we design much simpler intermediate LPs that allow
utilizing the framework of [16].
We prove that Blossom-BP and Blossom-LP guarantee to terminate in O(n²) of BP and LP runs,
respectively, where n is the number of vertices in the graph. To establish the polynomial complexity,
we show that intermediate outputs of Blossom-BP and Blossom-LP are equivalent to those of a variation of the Blossom-V algorithm [22] which is the latest implementation of the Blossom algorithm
due to Kolmogorov. The main difference is that Blossom-V updates parameters by maintaining disjoint tree graphs, while Blossom-BP and Blossom-LP implicitly achieve this by maintaining disjoint
cycles, claws and tree graphs. Notice, however, that these combinatorial structures are auxiliary, as
required for proofs, and they do not appear explicitly in the algorithm descriptions. Therefore, they
are much easier to implement than Blossom-V that maintains complex data structures, e.g., priority
queues. To the best of our knowledge, Blossom-BP and Blossom-LP are the simplest possible algorithms available for solving the problem in polynomial time. Our proof implies that in essence,
Blossom-BP offers a distributed version of the Edmonds' Blossom algorithm [23], jumping at once
over many sub-steps of Blossom-V with a single BP.
The subject of solving convex optimizations (other than LP) via BP was discussed in the literature
[24, 25, 26]. However, we are not aware of any similar attempts to solve Integer Programming, via
sequential application of BP. We believe that the approach developed in this paper is of a broader
interest, as it promises to advance the challenge of designing BP-based MAP solvers for a broader
class of GMs. Furthermore, Blossom-LP stands alone as providing an interpretation for the Edmonds' algorithm in terms of a sequence of tractable LPs. The Edmonds' original LP formulation contains exponentially many constraints, thus naturally suggesting to seek a sequence of LPs, each with a subset of constraints, gradually reducing the integrality gap to zero in a polynomial number of steps. However, this remained elusive for decades: even when the bipartite LP relaxation of the problem has an integral optimal solution, the standard Edmonds' algorithm keeps contracting and expanding a sequence of blossoms. As we mentioned earlier, we resolve the challenge by showing that Blossom-LP is (implicitly) equivalent to a variant of the Edmonds' algorithm with three major modifications: (a) parameter-update via maintaining cycles, claws and trees, (b) addition of small random corrections to weights, and (c) initialization using the bipartite LP relaxation.
Organization. In Section 2, we provide backgrounds on the minimum weight perfect matching
problem and the BP algorithm. Section 3 describes our main result, the Blossom-LP and Blossom-BP
algorithms, where the proof is given in Section 4.
2 Preliminaries
2.1 Minimum weight perfect matching
Given an (undirected) graph G = (V, E), a matching of G is a set of vertex-disjoint edges, where a perfect matching additionally requires to cover every vertex of G. Given integer edge weights (or costs) w = [w_e] ∈ Z^{|E|}, the minimum weight (or cost) perfect matching problem consists in computing a perfect matching which minimizes the summation of its associated edge weights. The problem is formulated as the following IP (Integer Programming):

    minimize   w · x
    subject to   Σ_{e∈δ(v)} x_e = 1,  ∀v ∈ V,     x = [x_e] ∈ {0, 1}^{|E|}.    (1)
Without loss of generality, one can assume that weights are strictly positive.¹ Furthermore, we assume that IP (1) is feasible, i.e., there exists at least one perfect matching in G. One can naturally relax the above integer constraints to x = [x_e] ∈ [0, 1]^{|E|} to obtain an LP (Linear Programming), which is called the bipartite relaxation. The integrality of the bipartite LP relaxation is not guaranteed; however, it can be enforced by adding the so-called blossom inequalities [22]:
    minimize   w · x
    subject to   Σ_{e∈δ(v)} x_e = 1,  ∀v ∈ V,
                 Σ_{e∈δ(S)} x_e ≥ 1,  ∀S ∈ L,
                 x = [x_e] ∈ [0, 1]^{|E|},    (2)

where L ⊂ 2^V is a collection of odd cycles in G, called blossoms, and δ(S) is the set of edges between S and V \ S. It is known that if L is the collection of all the odd cycles in G, then LP (2) always has an integral solution. However, notice that the number of odd cycles is exponential in |V|, thus
solving LP (2) is computationally intractable. To overcome this complication we are looking for a
tractable subset of L of a polynomial size which guarantees the integrality. Our algorithm, searching
for such a tractable subset of L is iterative: at each iteration it adds or subtracts a blossom.
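The need for blossom inequalities is already visible on a triangle. A minimal sketch using scipy.optimize.linprog (illustrative, not part of the algorithm) shows the bipartite relaxation returning the fractional point x_e = 1/2 on all three edges, even though the triangle has no perfect matching at all:

import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (0, 2)]          # the triangle K3, unit weights
A_eq = np.zeros((3, 3))                   # one degree constraint per vertex
for j, (u, v) in enumerate(edges):
    A_eq[u, j] = A_eq[v, j] = 1
res = linprog(np.ones(3), A_eq=A_eq, b_eq=np.ones(3), bounds=[(0, 1)] * 3)
print(res.x)                              # [0.5, 0.5, 0.5]: fractional optimum

Adding the single blossom inequality for S = {0, 1, 2} would make this relaxation infeasible, correctly reflecting that no perfect matching exists.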
2.2 Belief propagation for linear programming
A joint distribution of n (binary) random variables Z = [Z_i] ∈ {0, 1}^n is called a Graphical Model (GM) if it factorizes as follows: for z = [z_i] ∈ {0, 1}^n,

    Pr[Z = z] ∝ Π_{i∈{1,...,n}} ψ_i(z_i) · Π_{α∈F} ψ_α(z_α),

where {ψ_i, ψ_α} are (given) non-negative functions, the so-called factors; F is a collection of subsets

    F = {α_1, α_2, . . . , α_k} ⊂ 2^{{1,2,...,n}}

(each α_j is a subset of {1, 2, . . . , n} with |α_j| ≥ 2); z_α is the projection of z onto the dimensions included in α.² In particular, ψ_i is called a variable factor. An assignment z* is called a maximum-a-posteriori (MAP) solution if z* = arg max_{z∈{0,1}^n} Pr[z]. Computing a MAP solution is typically computationally intractable (i.e., NP-hard) unless the induced bipartite graph of factors F and variables z, the so-called factor graph, has a bounded treewidth. The max-product Belief Propagation (BP) algorithm is a popular simple heuristic for approximating the MAP solution in a GM, where it iterates messages over a factor graph. BP computes a MAP solution exactly after a sufficient number of iterations, if the factor graph is a tree and the MAP solution is unique. However, if the graph contains loops, BP is not guaranteed to converge to a MAP solution in general. Due to the space limitation, we provide detailed background on BP in the supplemental material.
Consider the following GM: for x = [x_i] ∈ {0, 1}^n and w = [w_i] ∈ R^n,

    Pr[X = x] ∝ Π_i e^{−w_i x_i} · Π_{α∈F} ψ_α(x_α),    (3)

where F is the set of non-variable factors and the factor function ψ_α for α ∈ F is defined as

    ψ_α(x_α) = 1 if A_α x_α ≥ b_α and C_α x_α = d_α, and ψ_α(x_α) = 0 otherwise,

for some matrices A_α, C_α and vectors b_α, d_α. Now we consider the Linear Programming (LP) corresponding to this GM:

    minimize   w · x
    subject to   ψ_α(x_α) = 1,  ∀α ∈ F,     x = [x_i] ∈ [0, 1]^n.    (4)

¹If some edges have negative weights, one can add the same positive constant to all edge weights, and this does not alter the solution of IP (1).
²For example, if z = [0, 1, 0] and α = {1, 3}, then z_α = [0, 0].
One observes that the MAP solution for GM (3) corresponds to the (optimal) solution of LP (4) if the LP has an integral solution x* ∈ {0, 1}^n. Furthermore, the following sufficient conditions relating max-product BP to LP are known [16]:

Theorem 1 The max-product BP applied to GM (3) converges to the solution of LP (4) if the following conditions hold:
C1. LP (4) has a unique integral solution x* ∈ {0, 1}^n, i.e., it is tight.
C2. For every i ∈ {1, 2, . . . , n}, the number of factors associated with x_i is at most two, i.e., |F_i| ≤ 2.
C3. For every factor ψ_α, every x_α ∈ {0, 1}^{|α|} with ψ_α(x_α) = 1, and every i ∈ α with x_i ≠ x*_i, there exists γ ⊂ α such that

    |{j ∈ {i} ∪ γ : |F_j| = 2}| ≤ 2,
    ψ_α(x′_α) = 1,   where x′_k = x_k if k ∉ {i} ∪ γ and x′_k = x*_k otherwise,
    ψ_α(x″_α) = 1,   where x″_k = x_k if k ∈ {i} ∪ γ and x″_k = x*_k otherwise.
3 Main result: Blossom belief propagation
In this section, we introduce our main result, an iterative algorithm, coined Blossom-BP, for solving
the minimum weight perfect matching problem over an arbitrary graph, where the algorithm uses the
max-product BP as a subroutine. We first describe the algorithm using LP instead of BP in Section
3.1, where we call it Blossom-LP. Its BP implementation is explained in Section 3.2.
3.1 Blossom-LP algorithm
Let us modify the edge weights: w_e ← w_e + n_e, where n_e is an i.i.d. random number chosen in the interval [0, 1/|V|]. Note that the solution of the minimum weight perfect matching problem (1) remains the same after this modification because the overall noise does not exceed 1. The Blossom-LP algorithm updates the following parameters iteratively.
• L ⊂ 2^V : a laminar collection of odd cycles in G.
• y_v, y_S : v ∈ V and S ∈ L.
In the above, L is called laminar if for every S, T ∈ L, either S ∩ T = ∅, S ⊂ T, or T ⊂ S. We call S ∈ L an outer blossom if there exists no T ∈ L such that S ⊂ T. Initially, L = ∅ and y_v = 0 for all v ∈ V. The algorithm iterates between Step A and Step B and terminates at Step C.
Blossom-LP algorithm
A. Solving LP on a contracted graph. First construct an auxiliary (contracted) graph G′ = (V′, E′) by contracting every outer blossom in L to a single vertex, where the weights w′ = [w′_e : e ∈ E′] are defined as

    w′_e = w_e − Σ_{v∈V : v∉V′, e∈δ(v)} y_v − Σ_{S∈L : v(S)∉V′, e∈δ(S)} y_S,    ∀e ∈ E′.

We let v(S) denote the blossom vertex of G′, coined the contracted graph, corresponding to S, and solve the following LP:

    minimize   w′ · x
    subject to   Σ_{e∈δ(v)} x_e = 1,   ∀v ∈ V′, v a non-blossom vertex,
                 Σ_{e∈δ(v)} x_e ≥ 1,   ∀v ∈ V′, v a blossom vertex,
                 x = [x_e] ∈ [0, 1]^{|E′|}.    (5)
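As an illustration of the weight construction above, a minimal sketch follows; the dictionaries absorbed (original endpoints of e hidden inside outer blossoms) and crossing (blossoms S with v(S) ∉ V′ and e ∈ δ(S)) are hypothetical bookkeeping, not structures from the paper:

def contracted_weight(e, w, y_vertex, y_blossom, absorbed, crossing):
    # w'_e = w_e - sum of y_v over contracted endpoints - sum of y_S over inner blossoms
    return (w[e]
            - sum(y_vertex[v] for v in absorbed.get(e, ()))
            - sum(y_blossom[S] for S in crossing.get(e, ())))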
B. Updating parameters. After we obtain a solution x = [x_e : e ∈ E′] of LP (5), the parameters are updated as follows:
(a) If x is integral, i.e., x ∈ {0, 1}^{|E′|}, and Σ_{e∈δ(v)} x_e = 1 for all v ∈ V′, then proceed to the termination step C.
(b) Else if there exists a blossom S such that Σ_{e∈δ(v(S))} x_e > 1, then we choose one such blossom and update

    L ← L \ {S}   and   y_v ← 0, ∀v ∈ S.

Call this step 'blossom S expansion'.
(c) Else if there exists an odd cycle C in G′ such that x_e = 1/2 for every edge e in it, we choose one of them and update

    L ← L ∪ {V(C)}   and   y_v ← (1/2) Σ_{e∈E(C)} (−1)^{d(e,v)} w′_e,   ∀v ∈ V(C),

where V(C), E(C) are the sets of vertices and edges of C, respectively, and d(e, v) is the graph distance from vertex v to edge e in the odd cycle C (a sketch of this update follows below). The algorithm also remembers the odd cycle C = C(S) corresponding to every blossom S ∈ L.
If (b) or (c) occur, go to Step A.
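As promised, here is a sketch of the y-update in step (c); the alternating sum is taken around the odd cycle so that y_u + y_v = w′_{(u,v)} holds on every cycle edge (dictionary layout is illustrative):

def cycle_y_values(cycle, w):
    # `cycle`: vertices of the odd cycle C in cyclic order; `w`: contracted edge weights
    k = len(cycle)
    pos = {v: i for i, v in enumerate(cycle)}
    dist = lambda u, v: min(abs(pos[u] - pos[v]), k - abs(pos[u] - pos[v]))
    cyc_edges = [(cycle[i], cycle[(i + 1) % k]) for i in range(k)]
    weight = lambda a, b: w[(a, b)] if (a, b) in w else w[(b, a)]
    # y_v = (1/2) * sum over edges e of (-1)^{d(e, v)} w'_e
    return {v: 0.5 * sum((-1) ** min(dist(v, a), dist(v, b)) * weight(a, b)
                         for (a, b) in cyc_edges)
            for v in cycle}

On a triangle with weights w_12, w_23, w_31, this yields y_1 = (w_12 − w_23 + w_31)/2, so indeed y_1 + y_2 = w_12, and similarly for the other two edges.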
C. Termination. The algorithm iteratively expands blossoms in L to obtain the minimum weight perfect matching M* as follows:
(i) Let M* be the set of edges in the original G whose corresponding edge e in the contracted graph G′ has x_e = 1, where x = [x_e] is the (last) solution of LP (5).
(ii) If L = ∅, output M*.
(iii) Otherwise, choose an outer blossom S ∈ L, then update G′ by expanding S, i.e. L ← L \ {S}.
(iv) Let v be the vertex in S covered by M* and let M_S be a matching covering S \ {v} using the edges of the odd cycle C(S).
(v) Update M* ← M* ∪ M_S and go to Step (ii).
An example of the evolution of L is described in the supplementary material. We provide the
following running time guarantee for this algorithm, which is proven in Section 4.
Theorem 2 Blossom-LP outputs the minimum weight perfect matching in O(|V|²) iterations.
3.2 Blossom-BP algorithm
In this section, we show that the algorithm can be implemented using BP. The result is derived in
two steps, where the first one consists in the following theorem proven in the supplementary material
due to the space limitation.
Theorem 3 LP (5) always has a half-integral solution x* ∈ {0, 1/2, 1}^{|E′|} such that the collection of its half-integral edges forms disjoint odd cycles.
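A sketch of how Theorem 3 is used downstream (assuming the networkx package; the helper name is illustrative): split an LP (5) solution into its integral matching edges and the disjoint odd cycles formed by the half-valued edges.

import networkx as nx

def split_half_integral(edges, x, tol=1e-6):
    matched = [e for e, xe in zip(edges, x) if abs(xe - 1.0) < tol]
    half = [e for e, xe in zip(edges, x) if abs(xe - 0.5) < tol]
    # by Theorem 3, the half-valued edges form vertex-disjoint odd cycles
    cycles = nx.cycle_basis(nx.Graph(half))
    return matched, cycles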
Next let us design BP for obtaining the half-integral solution of LP (5). First, we duplicate each edge e ∈ E′ into e_1, e_2 and define a new graph G″ = (V′, E″) where E″ = {e_1, e_2 : e ∈ E′}. Then, we build the following equivalent LP:

    minimize   w″ · x
    subject to   Σ_{e∈δ(v)} x_e = 2,   ∀v ∈ V′, v a non-blossom vertex,
                 Σ_{e∈δ(v)} x_e ≥ 2,   ∀v ∈ V′, v a blossom vertex,
                 x = [x_e] ∈ [0, 1]^{|E″|},    (6)

where w″_{e_1} = w″_{e_2} = w′_e. One can easily observe that solving LP (6) is equivalent to solving LP (5) due to our construction of G″, w″, and LP (6) always has an integral solution due to Theorem 3.
Now, construct the following GM for LP (6):

    Pr[X = x] ∝ Π_{e∈E″} e^{−w″_e x_e} · Π_{v∈V′} ψ_v(x_{δ(v)}),    (7)

where the factor function ψ_v is defined as

    ψ_v(x_{δ(v)}) = 1 if v is a non-blossom vertex and Σ_{e∈δ(v)} x_e = 2,
                  = 1 else if v is a blossom vertex and Σ_{e∈δ(v)} x_e ≥ 2,
                  = 0 otherwise.
For this GM, we derive the following corollary of Theorem 1 proven in the supplementary material
due to the space limitation.
Corollary 4 If LP (6) has a unique solution, then the max-product BP applied to GM (7) converges
to it.
The uniqueness condition stated in the corollary above is easy to guarantee by adding small random
noises to edge weights. Corollary 4 shows that BP can compute the half-integral solution of LP (5).
4 Proof of Theorem 2
First, it is relatively easy to prove the correctness of Blossom-BP, as stated in the following lemma.
Lemma 5 If Blossom-LP terminates, it outputs the minimum weight perfect matching.
Proof. We let x′ = [x′_e] and y′ = [y′_v, y′_S : v ∉ V′, v(S) ∉ V′] denote the parameter values at the termination of Blossom-BP, and let y* be a dual solution for x′. Then, the strong duality theorem and the complementary slackness condition imply that

    x′_e (w′_e − y*_u − y*_v) = 0,   ∀e = (u, v) ∈ E′,    (8)

Here, observe that y* and y′ cover y-variables inside and outside of V′, respectively. Hence, one can naturally define y = [y*, y′] to cover all y-variables, i.e., y_v, y_S for all v ∈ V, S ∈ L. If we define x* for the output matching M* of Blossom-LP as x*_e = 1 if e ∈ M* and x*_e = 0 otherwise, then x* and y satisfy the following complementary slackness conditions:

    x*_e (w_e − y_u − y_v − Σ_{S∈L : e∈δ(S)} y_S) = 0,   ∀e = (u, v) ∈ E,
    y_S (Σ_{e∈δ(S)} x*_e − 1) = 0,   ∀S ∈ L,

where L is the last set of blossoms at the termination of Blossom-BP. In the above, the first equality is from (8) and the definition of w′, and the second equality is because the construction of M* in Blossom-BP is designed to enforce Σ_{e∈δ(S)} x*_e = 1. This proves that x* is the optimal solution of LP (2) and M* is the minimum weight perfect matching, thus completing the proof of Lemma 5.
To guarantee the termination of Blossom-LP in polynomial time, we use the following notions.
Definition 1 Claw is a subset of edges such that every edge in it shares a common vertex, called
center, with all other edges, i.e., the claw forms a star graph.
Definition 2 Given a graph G = (V, E), a set of odd cycles O ⊂ 2^E, a set of claws W ⊂ 2^E and a matching M ⊂ E, (O, W, M) is called a cycle-claw-matching decomposition of G if all sets in O ∪ W ∪ {M} are disjoint and each vertex v ∈ V is covered by exactly one set among them.
To analyze the running time of Blossom-BP, we construct an iterative auxiliary algorithm that outputs the minimum weight perfect matching in a bounded number of iterations. The auxiliary algorithm outputs a cycle-claw-matching decomposition at each iteration, and it terminates when the
cycle-claw-matching decomposition corresponds to a perfect matching. We will prove later that
the auxiliary algorithm and Blossom-LP are equivalent and, therefore, conclude that the iteration of
Blossom-LP is also bounded.
To design the auxiliary algorithm, we consider the following dual of LP (5):

    maximize    \sum_{v ∈ V′} y_v
    subject to  w′_e − y_v − y_u ≥ 0,   ∀ e = (u, v) ∈ E′,                            (9)
                y_{v(S)} ≥ 0,           ∀ S ∈ L.
Next we introduce an auxiliary iterative algorithm which updates iteratively the blossom set L and
also the set of variables y_v, y_S for v ∈ V, S ∈ L. We call edge e = (u, v) "tight" if
w_e − y_u − y_v − \sum_{S ∈ L: e ∈ δ(S)} y_S = 0. Now, we are ready to describe the auxiliary
algorithm having the following parameters.

• G′ = (V′, E′), L ⊂ 2^V, and y_v, y_S for v ∈ V, S ∈ L.
• (O, W, M): a cycle-claw-matching decomposition of G′.
• T ⊂ G′: a tree graph consisting of + and − vertices.

Initially, set G′ = G and L, T = ∅. In addition, set y_v, y_S by an optimal solution of LP (9) with
w′ = w and (O, W, M) by the cycle-claw-matching decomposition of G′ consisting of tight edges
with respect to [y_v, y_S]. The parameters are updated iteratively as follows.
The auxiliary algorithm
Iterate the following steps until M becomes a perfect matching:
1. Choose a vertex r ∈ V′ from the following rule.
   Expansion. If W ≠ ∅, choose a claw W ∈ W of center blossom vertex c and choose
   a non-center vertex r in W. Remove the blossom S(c) corresponding to c from L and
   update G′ by expanding it. Find a matching M′ covering all vertices in W and S(c)
   except for r and update M ← M ∪ M′.
   Contraction. Otherwise, choose a cycle C ∈ O, add and remove it from L and
   O, respectively. In addition, G′ is also updated by contracting C; choose the
   contracted vertex r in G′ and set y_r = 0.
   Set the tree graph T having r as a + vertex and no edge.
2. Continuously increase y_v of every + vertex v in T and decrease y_v of every − vertex v in T by
   the same amount until one of the following events occurs:
   Grow. If a tight edge (u, v) exists where u is a + vertex of T and v is covered by M,
   find a tight edge (v, w) ∈ M. Add edges (u, v), (v, w) to T and remove (v, w) from
   M, where v, w become −, + vertices of T, respectively.
   Matching. If a tight edge (u, v) exists where u is a + vertex of T and v is covered by
   C ∈ O, find a matching M′ that covers T ∪ C. Update M ← M ∪ M′ and remove
   C from O.
   Cycle. If a tight edge (u, v) exists where u, v are + vertices of T, find a cycle C and
   a matching M′ that covers T \ C. Update M ← M ∪ M′ and add C to O.
   Claw. If a blossom vertex v(S) with y_{v(S)} = 0 exists, find a claw W (of center v(S))
   and a matching M′ covering T \ W. Update M ← M ∪ M′ and add W to W.
If Grow occurs, resume step 2. Otherwise, go to step 1.
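The dual adjustments in step 2 repeatedly test whether an edge has become tight. A minimal sketch of that test, under an assumed data layout (blossoms indexed by hypothetical ids standing in for v(S)):

```python
def is_tight(e, w, y_v, y_S, blossoms, tol=1e-9):
    """Edge e = (u, v) is "tight" iff w_e - y_u - y_v minus the sum of y_S over
    {S in L : e in delta(S)} equals zero (up to tolerance)."""
    u, v = e
    slack = w[e] - y_v[u] - y_v[v]
    for sid, S in blossoms.items():
        if (u in S) != (v in S):        # e crosses the cut delta(S)
            slack -= y_S[sid]
    return abs(slack) <= tol
```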
Note that the auxiliary algorithm updates parameters in such a way that the number of vertices in
every claw in the cycle-claw-matching decomposition is 3, since every − vertex has degree 2. Hence,
there exists a unique matching M′ in the expansion step. Furthermore, the existence of a cycle-claw-matching
decomposition at the initialization can be guaranteed using the complementary slackness
condition and the half-integrality of LP (5). We establish the following lemma for the running time
of the auxiliary algorithm, where its proof is given in the supplemental material due to the space
limitation.
Lemma 6 The auxiliary algorithm terminates in O(|V|^2) iterations.
Now we are ready to prove the equivalence between the auxiliary algorithm and Blossom-LP,
i.e., prove that the numbers of iterations of Blossom-LP and the auxiliary algorithm are equal. To
this end, given a cycle-claw-matching decomposition (O, W, M), observe that one can choose the
corresponding x = [x_e] ∈ {0, 1/2, 1}^{|E′|} that satisfies the constraints of LP (5):

    x_e = 1    if e is an edge in W or M,
    x_e = 1/2  if e is an edge in O,
    x_e = 0    otherwise.
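The map from a decomposition to this half-integral point is immediate; a sketch under the same assumed representation as before:

```python
def decomposition_to_x(edges, cycles, claws, matching):
    """Half-integral point of LP (5): x_e = 1 on claw or matching edges,
    x_e = 1/2 on cycle edges, and x_e = 0 otherwise."""
    cycle_edges = {e for C in cycles for e in C}
    claw_edges = {e for W in claws for e in W}
    x = []
    for e in edges:
        if e in claw_edges or e in matching:
            x.append(1.0)
        elif e in cycle_edges:
            x.append(0.5)
        else:
            x.append(0.0)
    return x
```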
Similarly, given a half-integral x = [x_e] ∈ {0, 1/2, 1}^{|E′|} that satisfies the constraints of LP (5),
one can find the corresponding cycle-claw-matching decomposition. Furthermore, one can also define
the weight w′ in G′ for the auxiliary algorithm as Blossom-LP does:

    w′_e = w_e − \sum_{v ∈ V: v ∉ V′, e ∈ δ(v)} y_v − \sum_{S ∈ L: v(S) ∉ V′, e ∈ δ(S)} y_S,   ∀ e ∈ E′.      (10)
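A sketch of the reduced-weight computation (10); since we do not model the contraction history here, membership of vertices and contracted blossoms in V′ is passed in explicitly (our own simplification):

```python
def reduced_weight(e, w, y_v, y_S, blossoms, V_prime):
    """Reduced weight (10): subtract from w_e the duals y_v of original
    endpoints outside V', and the duals y_S of blossoms S whose contracted
    vertex v(S) is outside V' and whose cut delta(S) contains e."""
    u, v = e
    wp = w[e]
    for endpoint in (u, v):
        if endpoint not in V_prime:
            wp -= y_v[endpoint]
    for sid, S in blossoms.items():     # sid stands in for v(S)
        if sid not in V_prime and (u in S) != (v in S):
            wp -= y_S[sid]
    return wp
```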
In the auxiliary algorithm, e = (u, v) ∈ E′ is tight if and only if w′_e − y_u − y_v = 0. Under
these equivalences in parameters between Blossom-LP and the auxiliary algorithm, we will use
induction to show that the cycle-claw-matching decompositions maintained by both algorithms are
equal at every iteration, as stated in the following lemma, whose proof is given in the supplemental
material due to the space limitation.
Lemma 7 Define the following notation:

    y′ = [y_v : v ∈ V′]   and   y* = [y_v, y_S : v ∈ V, v ∉ V′, S ∈ L, v(S) ∉ V′],

i.e., y′ and y* are the parts of y which do and do not involve V′, respectively. Then, Blossom-LP
and the auxiliary algorithm update the parameters L, y* equivalently and output the same
cycle-claw-matching decomposition of G′ at each iteration.
The above lemma implies that Blossom-LP also terminates in O(|V|^2) iterations due to Lemma 6.
This completes the proof of Theorem 2. The equivalence between the half-integral solution of LP
(5) in Blossom-LP and the cycle-claw-matching decomposition in the auxiliary algorithm implies
that LP (5) always has a half-integral solution, and hence, one of Steps B.(a), B.(b) or B.(c) always
occurs.
5 Conclusion
The BP algorithm has been popular for approximating inference solutions arising in graphical models, where its distributed implementation, associated ease of programming and strong parallelization potential are the main reasons for its growing popularity. This paper aims at designing a
polynomial-time BP-based scheme solving the minimum weight perfect matching problem. We believe that our approach is of broader interest for advancing the challenge of designing BP-based MAP
solvers in more general GMs as well as distributed (and parallel) solvers for large-scale IPs.
Acknowledgement. This work was supported by Institute for Information & communications
Technology Promotion(IITP) grant funded by the Korea government(MSIP) (No.R0132-15-1005),
Content visual browsing technology in the online and offline environments. The work at LANL was
carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.
| 5736 |@word version:2 polynomial:8 termination:6 closure:1 seek:1 contraction:3 decomposition:12 celebrated:1 contains:2 past:1 remove:4 designed:2 update:15 bickson:1 alone:1 half:8 intelligence:5 yr:1 plane:3 xk:3 provides:1 iterates:2 complication:1 math:1 allerton:2 simpler:1 mathematical:3 constructed:1 c2:1 focs:1 prove:5 consists:2 ewe:1 inside:1 introduce:2 growing:1 freeman:1 gov:1 resolve:2 solver:3 chv:1 becomes:2 moreover:1 bounded:3 notation:1 minimizes:1 developed:1 supplemental:3 guarantee:7 zecchina:1 every:14 expands:1 exactly:2 ser:1 control:2 grant:1 appear:1 positive:2 engineering:1 modify:1 mach:1 oxford:3 path:4 approximately:1 initialization:2 studied:1 equivalence:3 ease:2 graduate:1 unique:4 implement:1 shin:2 matching:48 projection:1 onto:1 applying:1 equivalent:5 map:13 maxz:1 center:5 latest:1 go:3 convex:3 rule:1 utilizing:1 nuclear:1 searching:1 notion:1 variation:1 updated:3 construction:2 gm:18 infrequent:1 exact:1 programming:18 us:1 designing:3 trick:1 trend:1 roy:1 updating:1 cut:1 electrical:1 solved:1 cycle:27 iitp:1 decrease:1 observes:1 mentioned:1 weigh:1 environment:1 complexity:1 tight:10 solving:16 division:1 bipartite:6 matchings:3 easily:1 joint:2 represented:1 kolmogorov:2 univ:1 describe:2 artificial:5 outside:1 quite:1 heuristic:2 supplementary:3 kaist:1 solve:5 whose:1 tightness:3 relax:1 otherwise:9 statistic:2 richardson:1 final:1 ip:4 online:1 chayes:1 sequence:4 isbn:1 product:8 loop:1 achieve:1 description:1 los:2 convergence:4 perfect:21 converges:3 derive:1 develop:2 ac:1 pose:1 school:1 odd:13 solves:1 strong:3 auxiliary:19 implemented:1 involves:1 implies:3 treewidth:1 correct:1 material:6 government:1 preliminary:1 blelloch:1 summation:1 strictly:1 correction:1 hold:1 major:1 uniqueness:1 estimation:1 dagum:1 combinatorial:6 tatikonda:1 correctness:5 establishes:1 weighted:3 promotion:1 illusive:1 always:5 gaussian:1 aim:1 modified:1 factorizes:1 broader:3 corollary:4 encode:1 derived:2 kyrola:2 posteriori:2 inference:4 osdi:1 typically:1 initially:2 relation:1 subroutine:1 provably:1 issue:1 arg:1 overall:1 dual:2 among:1 equal:2 once:2 aware:1 construct:3 having:2 park:3 yu:6 inevitable:1 alter:1 np:1 sanghavi:2 duplicate:1 modern:2 national:2 consisting:2 attempt:1 organization:1 interest:2 message:5 pc:1 edge:30 integral:13 moallemi:1 korea:3 respective:1 jumping:2 unless:1 tree:9 iv:1 urbanke:1 walk:1 re:1 theoretical:1 instance:1 earlier:1 rao:1 cover:7 sungsoo:2 assignment:4 loopy:4 cost:5 vertex:38 subset:6 alamo:2 johnson:1 gr:1 optimally:1 dependency:1 ac52:1 st:1 international:1 siam:1 contract:1 physic:1 discipline:1 michael:1 continuously:1 thesis:1 choose:9 possibly:1 huang:1 priority:1 suggesting:1 potential:2 de:1 star:1 coding:1 includes:1 satisfy:1 explicitly:1 msip:1 later:1 analyze:1 yv:25 maintains:1 parallel:4 contribution:1 minimize:6 kaufmann:1 correspond:1 resume:1 malioutov:2 definition:3 energy:3 pp:11 e2:2 naturally:3 proof:9 associated:3 popular:4 knowledge:1 specify:1 wei:3 formulation:2 generality:1 furthermore:6 just:1 until:3 hand:1 nonlinear:1 propagation:15 slackness:3 menon:1 scientific:1 believe:2 usa:1 evolution:1 hence:3 equality:2 laboratory:1 iteratively:4 essence:2 covering:3 maintained:1 m:2 generalized:1 mcdonald:1 fj:1 reasoning:1 variational:1 gamarnik:1 recently:2 fi:1 common:3 exponentially:1 discussed:2 interpretation:2 relating:1 cambridge:1 mathematics:2 similarly:1 funded:1 ahn:2 operating:1 add:6 recent:1 optimizing:1 certain:1 inequality:1 binary:1 xe:24 guestrin:3 minimum:18 
morgan:1 additional:1 converge:1 shortest:2 sharma:1 ii:2 multiple:5 graphchi:2 offer:2 e1:2 y:13 involving:1 variant:1 chandra:1 iteration:11 c1:1 addition:3 remarkably:1 background:2 interval:1 else:3 grow:2 completes:1 parallelization:2 subject:7 induced:1 undirected:1 flow:3 jordan:1 integer:5 call:4 intermediate:7 exceed:1 iii:1 easy:2 canadian:1 iterate:1 meltzer:1 zi:3 administration:1 motivated:1 queue:1 passing:4 proceed:1 useful:1 detailed:1 covered:4 involve:1 amount:1 extensively:1 simplest:1 notice:2 disjoint:5 arising:1 popularity:1 ruozzi:1 edmonds:8 discrete:1 promise:1 vol:10 integrality:7 graph:30 relaxation:6 sum:3 enforced:1 run:2 vegh:1 uncertainty:3 soda:1 family:1 chandrasekaran:1 padberg:1 gonzalez:2 completing:1 guaranteed:3 laminar:2 annual:1 occur:2 constraint:6 bp:58 auspex:1 min:3 claw:20 vempala:1 relatively:1 structured:2 department:1 describes:1 terminates:5 wi:2 lp:79 modification:2 explained:1 gradually:1 pr:4 computationally:2 remains:1 tractable:3 end:1 available:1 operation:1 yedidia:1 observe:3 hellerstein:1 enforce:3 generic:1 nicholas:1 shah:3 existence:1 original:4 running:4 graphical:6 maintaining:3 coined:5 build:2 establish:2 approximating:2 classical:1 prof:1 question:2 occurs:2 distance:1 otschel:1 outer:3 reason:1 induction:1 willsky:3 providing:1 equivalently:1 negative:2 stated:3 design:5 implementation:5 looking:1 communication:3 rn:1 arbitrary:5 jebara:1 parallelizing:1 required:1 lanl:2 c3:1 connection:1 security:1 established:1 nip:2 address:1 beyond:1 suggested:1 flower:1 challenge:3 program:1 max:9 including:2 belief:15 wainwright:1 event:1 solvable:1 residual:1 advanced:1 yanover:1 scheme:4 technology:4 imply:1 ne:2 ready:2 carried:1 remembers:1 text:1 literature:2 acknowledgement:1 contracting:3 loss:1 limitation:5 proven:3 aposteriori:1 bayati:2 foundation:2 degree:1 sufficient:2 systematically:2 share:1 supported:1 last:2 free:2 offline:1 blossom:76 allow:1 institute:3 distributed:4 overcome:1 dimension:1 stand:1 computes:1 author:1 collection:5 subtracts:1 transaction:2 implicitly:2 cutting:3 keep:1 graphlab:2 uai:3 conclude:1 xi:5 x00:1 iterative:5 decade:2 additionally:1 terminate:1 learn:1 expanding:3 obtaining:1 expansion:5 complex:2 constructing:1 aistats:1 main:5 montanari:1 noise:2 n2:2 complementary:3 contracted:6 georgia:1 sub:2 mezard:1 exponential:2 pe:1 chertkov:2 theorem:9 remained:1 atal:1 borgs:1 showing:1 exists:10 intractable:2 sequential:4 adding:2 kr:1 phd:1 jinwoos:1 splash:1 browsing:1 gap:4 easier:1 jinwoo:1 likely:1 visual:1 v6:2 holland:1 corresponds:2 satisfies:2 conditional:1 formulated:1 feasible:1 hard:1 content:1 daejeon:1 openmp:2 included:1 reducing:1 except:1 lemma:9 called:11 duality:2 bps:2 relevance:1 |
5,232 | 5,737 | Super-Resolution Off the Grid
Qingqing Huang
MIT,
EECS,
LIDS,
qqh@mit.edu
Sham M. Kakade
University of Washington,
Department of Statistics,
Computer Science & Engineering,
sham@cs.washington.edu
Abstract
Super-resolution is the problem of recovering a superposition of point sources using bandlimited measurements, which may be corrupted with noise. This signal
processing problem arises in numerous imaging problems, ranging from astronomy to biology to spectroscopy, where it is common to take (coarse) Fourier measurements of an object. Of particular interest is obtaining estimation procedures
which are robust to noise, with the following desirable statistical and computational properties: we seek to use coarse Fourier measurements (bounded by some
cutoff frequency); we hope to take a (quantifiably) small number of measurements;
we desire our algorithm to run quickly.
Suppose we have k point sources in d dimensions, where the points are separated
by at least Δ from each other (in Euclidean distance). This work provides an
algorithm with the following favorable guarantees:
• The algorithm uses Fourier measurements, whose frequencies are bounded
by O(1/Δ) (up to log factors). Previous algorithms require a cutoff frequency
which may be as large as Ω(√d/Δ).
• The number of measurements taken by, and the computational complexity of,
our algorithm are bounded by a polynomial in both the number of points k
and the dimension d, with no dependence on the separation Δ. In contrast,
previous algorithms depended inverse polynomially on the minimal separation and exponentially on the dimension for both of these quantities.
Our estimation procedure itself is simple: we take random bandlimited measurements (as opposed to taking an exponential number of measurements on the hypergrid). Furthermore, our analysis and algorithm are elementary (based on concentration bounds for sampling and the singular value decomposition).
1 Introduction
We follow the standard mathematical abstraction of this problem (Candès & Fernandez-Granda
[4, 3]): consider a d-dimensional signal x(t) modeled as a weighted sum of k Dirac measures in R^d:

    x(t) = \sum_{j=1}^{k} w_j δ_{μ^(j)},                                              (1)

where the point sources, the μ^(j)'s, are in R^d. Assume that the weights w_j are complex valued,
whose absolute values are lower and upper bounded by some positive constant. Assume that we are
given k, the number of point sources (an upper bound on the number of point sources suffices).
Define the measurement function f(s) : R^d → C to be the convolution of the point source x(t) with
a low-pass point spread function e^{iπ⟨s,t⟩} as below:

    f(s) = \int_{t ∈ R^d} e^{iπ⟨t,s⟩} x(dt) = \sum_{j=1}^{k} w_j e^{iπ⟨μ^(j), s⟩}.        (2)
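For intuition, the noiseless measurement map (2) is a one-liner to simulate; the toy parameters below are our own, not from the paper:

```python
import numpy as np

def f(s, mu, w):
    """Noiseless measurement (2): f(s) = sum_j w_j exp(i*pi*<mu^(j), s>).

    s  -- (d,) frequency vector
    mu -- (k, d) array of point sources
    w  -- (k,) complex weights
    """
    return np.sum(w * np.exp(1j * np.pi * (mu @ s)))

rng = np.random.default_rng(0)
mu = rng.uniform(-1, 1, size=(2, 3))        # k = 2 sources in d = 3 dimensions
w = np.array([0.7 + 0.2j, -0.5 + 0.4j])
print(f(rng.standard_normal(3), mu, w))
```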
In the noisy setting, the measurements are corrupted by a uniformly bounded perturbation z:

    f̃(s) = f(s) + z(s),   |z(s)| ≤ ε_z,  ∀ s.                                        (3)
Suppose that we are only allowed to measure the signal x(t) by evaluating the measurement function
f̃(s) at any s ∈ R^d, and we want to recover the parameters of the point source signal, i.e., {w_j, μ^(j) :
j ∈ [k]}. We follow the standard normalization and assume that:

    μ^(j) ∈ [−1, +1]^d,   |w_j| ∈ [0, 1],   ∀ j ∈ [k].

Let w_min = min_j |w_j| denote the minimal weight, and let Δ be the minimal separation of the point
sources, defined as follows:

    Δ = \min_{j ≠ j′} ‖μ^(j) − μ^(j′)‖_2,                                             (4)

where we use the Euclidean distance between the point sources for ease of exposition.² These
quantities are key parameters in our algorithm and analysis. Intuitively, the recovery problem is
harder if the minimal separation is small and the minimal weight wmin is small.
The first question is, given exact measurements, namely ε_z = 0, where and how many measurements
should we take so that the original signal x(t) can be exactly recovered.
Definition 1.1 (Exact recovery). In the exact case, i.e. ε_z = 0, we say that an algorithm achieves
exact recovery with m measurements of the signal x(t) if, upon input of these m measurements, the
algorithm returns the exact set of parameters {w_j, μ^(j) : j ∈ [k]}.
Moreover, we want the algorithm to be measurement noise tolerant, in the sense that in the presence
of measurement noise we can still recover good estimates of the point sources.
Definition 1.2 (Stable recovery). In the noisy case, i.e., ε_z ≥ 0, we say that an algorithm achieves
stable recovery with m measurements of the signal x(t) if, upon input of these m measurements, the
algorithm returns estimates {ŵ_j, μ̂^(j) : j ∈ [k]} such that

    \min_π \max { ‖μ̂^(j) − μ^(π(j))‖_2 : j ∈ [k] } ≤ poly(d, k) ε_z,

where the min is over permutations π on [k] and poly(d, k) is a polynomial function in d and k.
By definition, if an algorithm achieves stable recovery with m measurements, it also achieves exact
recovery with these m measurements.
The terminology of "super-resolution" is appropriate due to the following remarkable result (in the
noiseless case) of Donoho [9]: suppose we want to accurately recover the point sources to an error
of ε, where ε ≪ Δ. Naively, we may expect to require measurements whose frequency depends
inversely on the desired accuracy ε. Donoho [9] showed that it suffices to obtain a finite number
of measurements, whose frequencies are bounded by O(1/Δ), in order to achieve exact recovery;
thus resolving the point sources far more accurately than that which is naively implied by using
frequencies of O(1/Δ). Furthermore, the work of Candès & Fernandez-Granda [4, 3] showed that
stable recovery, in the univariate case (d = 1), is achievable with a cutoff frequency of O(1/Δ)
using a convex program and a number of measurements whose size is polynomial in the relevant
quantities.
² Our claims hold without using the "wrap around metric", as in [4, 3], due to our random sampling. Also, it
is possible to extend these results to the ℓ_p-norm case.
        |                   d = 1                          |                     d
        | cutoff freq | measurements      | runtime        | cutoff freq   | measurements     | runtime
SDP     | 1/Δ         | k log(k) log(1/Δ) | poly(1/Δ, k)   | C_d/Δ_∞       | (1/Δ_∞)^d        | poly((1/Δ_∞)^d, k)
MP      | 1/Δ_∞       | 1/Δ_∞             | (1/Δ_∞)^3      | -             | -                | -
Ours    | 1/Δ         | (k log(k))^2      | (k log(k))^2   | √(log(kd))/Δ  | (k log(k) + d)^2 | (k log(k) + d)^2

Table 1: See Section 1.2 for description. See Lemma 2.3 for details about the cutoff frequency.
Here, we are implicitly using O(·) notation.
1.1 This work
We are interested in stable recovery procedures with the following desirable statistical and computational properties: we seek to use coarse (low frequency) measurements; we hope to take a
(quantifiably) small number of measurements; we desire that our algorithm run quickly. Informally, our
main result is as follows:
Theorem 1.3 (Informal statement of Theorem 2.2). For a fixed probability of error, the proposed
algorithm achieves stable recovery with a number of measurements and with a computational runtime
that are both on the order of O((k log(k) + d)^2). Furthermore, the algorithm makes measurements
which are bounded in frequency by O(1/Δ) (ignoring log factors).
Notably, our algorithm and analysis directly deal with the multivariate case, with the univariate case
as a special case. Importantly, the number of measurements and the computational runtime do not
depend on the minimal separation of the point sources. This may be important even in certain low
dimensional imaging applications where taking physical measurements is costly (indeed, super-resolution
is important in settings where Δ is small). Furthermore, our technical contribution of how
to decompose a certain tensor constructed with Fourier measurements may be of broader interest to
related questions in statistics, signal processing, and machine learning.
1.2 Comparison to related work
Table 1 summarizes the comparison between our algorithm and the existing results. The multidimensional cutoff frequency we refer to in the table is the maximal coordinate-wise entry of any
measurement frequency s (i.e. ‖s‖_∞). "SDP" refers to the semidefinite programming (SDP) based
algorithms of Candès & Fernandez-Granda [3, 4]; in the univariate case, the number of measurements can be reduced by the method in Tang et al. [23] (this is reflected in the table). "MP" refers
to the matrix pencil type of methods, studied in [14] and [15] for the univariate case. Here, we are
defining the infinity norm separation as Δ_∞ = \min_{j ≠ j′} ‖μ^(j) − μ^(j′)‖_∞, which is understood as the
wrap around distance on the unit circle. C_d is a problem dependent constant (discussed below).
Observe the following differences between our algorithm and prior work:
1) Our minimal separation is measured under the ℓ_2-norm instead of the infinity norm, as in the
SDP based algorithm. Note that Δ_∞ depends on the coordinate system; in the worst case, it can
underestimate the separation by a 1/√d factor, namely Δ_∞ ≥ Δ/√d.
2) The computational complexity and number of measurements are polynomial in the dimension d and
the number of point sources k, and surprisingly do not depend on the minimal separation of the
point sources! Intuitively, when the minimal separation between the point sources is small, the
problem should be harder; this is only reflected in the sampling range and the cutoff frequency
of the measurements in our algorithm.
3) Furthermore, one could project the multivariate signal onto the coordinates and solve multiple univariate problems (such as in [19, 17], which provided only exact recovery results). Naive random
projections would lead to a cutoff frequency of O(√d/Δ).
SDP approaches: The work in [3, 4, 10] formulates the recovery problem as a total-variation minimization problem; they then show the dual problem can be formulated as an SDP. They focused
on the analysis of d = 1 and only explicitly extend the proofs for d = 2. For d ≥ 1, Ingham-type
theorems (see [20, 12]) suggest that C_d = O(√d).
The number of measurements can be reduced by the method in [23] for the d = 1 case, which is
noted in the table. Their method uses sampling "off the grid"; technically, their sampling scheme is
actually sampling random points from the grid, though with far fewer measurements.
Matrix pencil approaches: The matrix pencil method, MUSIC and Prony's method are essentially
the same underlying idea, executed in different ways. The original Prony's method directly attempts
to find roots of a high degree polynomial, where the root stability has few guarantees. Other methods
aim to robustify the algorithm.
Recently, for the univariate matrix pencil method, Liao & Fannjiang [14] and Moitra [15] provided a
stability analysis of the MUSIC algorithm. Moitra [15] studied the optimal relationship between the
cutoff frequency and Δ, showing that if the cutoff frequency is less than 1/Δ, then stable recovery
is not possible with the matrix pencil method (with high probability).
1.3 Notation
Let R, C, and Z denote the real, complex, and natural numbers. For d ∈ Z, [d] denotes the set
[d] = {1, . . . , d}. For a set S, |S| denotes its cardinality. We use ⊕ to denote the direct sum of sets,
namely S_1 ⊕ S_2 = {(a + b) : a ∈ S_1, b ∈ S_2}.
Let e_n denote the n-th standard basis vector in R^d, for n ∈ [d]. Let P^d_{R,2} = {x ∈ R^d : ‖x‖_2 = R}
denote the d-sphere of radius R in the d-dimensional standard Euclidean space.
Denote the condition number of a matrix X ∈ R^{m×n} as cond_2(X) = σ_max(X)/σ_min(X), where
σ_max(X) and σ_min(X) are the maximal and minimal singular values of X.
We use ⊗ to denote the tensor product. Given matrices A, B, C ∈ C^{m×k}, the tensor product V =
A ⊗ B ⊗ C ∈ C^{m×m×m} is equivalent to V_{i_1,i_2,i_3} = \sum_{n=1}^{k} A_{i_1,n} B_{i_2,n} C_{i_3,n}. Another view of
a tensor is that it defines a multi-linear mapping. For given dimensions m_A, m_B, m_C the mapping
V(·, ·, ·) : C^{m×m_A} × C^{m×m_B} × C^{m×m_C} → C^{m_A×m_B×m_C} is defined as:

    [V(X_A, X_B, X_C)]_{i_1,i_2,i_3} = \sum_{j_1,j_2,j_3 ∈ [m]} V_{j_1,j_2,j_3} [X_A]_{j_1,i_1} [X_B]_{j_2,i_2} [X_C]_{j_3,i_3}.

In particular, for a ∈ C^m, we use V(I, I, a) to denote the projection of the tensor V along the 3rd
dimension. Note that if the tensor admits a decomposition V = A ⊗ B ⊗ C, it is straightforward to
verify that

    V(I, I, a) = A Diag(C^⊤ a) B^⊤.

It is well-known that if the factors A, B, C have full column rank then the rank k decomposition
is unique up to re-scaling and common column permutation. Moreover, if the condition number
of the factors is upper bounded by a positive constant, then one can compute the unique tensor
decomposition of V with stability guarantees (see [1] for a review; Lemma 2.5 herein provides an
explicit statement).
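The identity V(I, I, a) = A Diag(C^⊤ a) B^⊤ is easy to confirm numerically; a short self-contained check on a random instance (the toy sizes are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 5, 3
A, B, C = [rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
           for _ in range(3)]

# V = A (x) B (x) C, i.e. V[i1, i2, i3] = sum_n A[i1, n] B[i2, n] C[i3, n]
V = np.einsum('in,jn,kn->ijk', A, B, C)

# projection along the third dimension: V(I, I, a) = A Diag(C^T a) B^T
a = rng.standard_normal(m) + 1j * rng.standard_normal(m)
lhs = np.einsum('ijk,k->ij', V, a)
rhs = A @ np.diag(C.T @ a) @ B.T
print(np.allclose(lhs, rhs))   # True
```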
2 Main Results
2.1 The algorithm
We briefly describe the steps of Algorithm 1 below:
(Take measurements) Given positive numbers m and R, randomly draw a sampling set S =
s(1) , . . . s(m) of m i.i.d. samples of the Gaussian distribution N (0, R2 Id?d ). Form the set
S 0 = S [ {s(m+1) = e1 , . . . , s(m+d) = ed , s(m+d+1) = 0} ? Rd . Denote m0 = m + d + 1.
Take another independent random sample v from the unit sphere, and define v (1) = v, v (2) = 2v.
4
Input: R, m, noisy measurement function f̃(·).
Output: Estimates {ŵ_j, μ̂^(j) : j ∈ [k]}.
1. Take measurements:
   Let S = {s^(1), . . . , s^(m)} be m i.i.d. samples from the Gaussian distribution N(0, R^2 I_{d×d}).
   Set s^(m+n) = e_n for all n ∈ [d] and s^(m+d+1) = 0. Denote m′ = m + d + 1.
   Take another random sample v from the unit sphere, and set v^(1) = v, v^(2) = 2v and v^(3) = 0.
   Construct a tensor F̃ ∈ C^{m′×m′×3}: F̃_{n_1,n_2,n_3} = f̃(s)|_{s = s^(n_1) + s^(n_2) + v^(n_3)}.
2. Tensor Decomposition: Set (V̂_{S′}, D̂_w) = TensorDecomp(F̃).
   For j = 1, . . . , k, set [V̂_{S′}]_j = [V̂_{S′}]_j / [V̂_{S′}]_{m′,j}.
3. Read off estimates: For j = 1, . . . , k, set μ̂^(j) = Real(log([V̂_{S′}]_{[m+1:m+d], j})/(iπ)).
4. Set Ŵ = argmin_{W ∈ C^k} ‖F̃ − V̂_{S′} ⊗ V̂_{S′} ⊗ (V̂_2 D_W)‖_F.
Algorithm 1: General algorithm
Construct the 3rd order tensor F̃ ∈ C^{m′×m′×3} with the noise corrupted measurements f̃(s) evaluated
at the points in S′ ⊕ S′ ⊕ {v^(1), v^(2), v^(3)}, arranged in the following way:

    F̃_{n_1,n_2,n_3} = f̃(s)|_{s = s^(n_1) + s^(n_2) + v^(n_3)},   ∀ n_1, n_2 ∈ [m′], n_3 ∈ [3].        (5)
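A sketch of the construction in (5). We spell out the third slice v^(3) = 0, which is our reading of why the tensor has a third mode of size 3 and why the last row of V_2 below is all ones:

```python
import numpy as np

def build_tensor(f, S_prime, v):
    """Ftilde[n1, n2, n3] = f(s^(n1) + s^(n2) + v^(n3)), as in (5).

    f       -- callable mapping a (d,) frequency to a complex measurement
    S_prime -- (m', d) array of sampling frequencies
    v       -- (d,) random unit vector; the three slices use v, 2v and 0
    """
    mp = S_prime.shape[0]
    vs = [v, 2 * v, np.zeros_like(v)]
    F = np.empty((mp, mp, 3), dtype=complex)
    for n1 in range(mp):
        for n2 in range(mp):
            for n3 in range(3):
                F[n1, n2, n3] = f(S_prime[n1] + S_prime[n2] + vs[n3])
    return F
```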
(Tensor decomposition) Define the characteristic matrix V_S to be:

    V_S = [ e^{iπ⟨μ^(j), s^(i)⟩} ]_{i ∈ [m], j ∈ [k]} ∈ C^{m×k},                      (6)

i.e., row i holds (e^{iπ⟨μ^(1), s^(i)⟩}, . . . , e^{iπ⟨μ^(k), s^(i)⟩}), and define the matrix V_{S′} ∈ C^{m′×k} to be

    V_{S′} = [ V_S ; V_d ; 1, . . . , 1 ],                                            (7)

where V_d ∈ C^{d×k} is defined in (17). Define

    V_2 = [ e^{iπ⟨μ^(j), v^(1)⟩} ; e^{iπ⟨μ^(j), v^(2)⟩} ; 1 ]_{j ∈ [k]} ∈ C^{3×k}.
Note that in the exact case (ε_z = 0) the tensor F constructed in (5) admits a rank-k decomposition:

    F = V_{S′} ⊗ V_{S′} ⊗ (V_2 D_w),                                                  (8)

Assume that V_{S′} has full column rank; then this tensor decomposition is unique up to column
permutation and rescaling with very high probability over the randomness of the random unit vector
v. Since each element of V_{S′} has unit norm, and we know that the last row of V_{S′} and the last row
of V_2 are all ones, there exists a proper scaling so that we can uniquely recover the w_j's and the columns
of V_{S′} up to a common permutation.
In this paper, we adopt Jennrich's algorithm (see Algorithm 2) for tensor decomposition. Other
algorithms, for example the tensor power method ([1]) and recursive projection ([24]), which are possibly
more stable than Jennrich's algorithm, can also be applied here.
(Read off estimates) Let log(V_d) denote the element-wise logarithm of V_d. The estimates of the
point sources are given by:

    [ μ^(1), μ^(2), . . . , μ^(k) ] = log(V_d) / (iπ).
Input: Tensor F̃ ∈ C^{m×m×3}, rank k.
Output: Factor V̂ ∈ C^{m×k}.
1. Compute the truncated SVD of F̃(I, I, e_1) = P̂ Σ̂ P̂^⊤ with the k leading singular values.
2. Set Ê = F̃(P̂, P̂, I). Set Ê_1 = Ê(I, I, e_1) and Ê_2 = Ê(I, I, e_2).
3. Let the columns of Û be the eigenvectors of Ê_1 Ê_2^{-1} corresponding to the k eigenvalues
   with the largest absolute value.
4. Set V̂ = √m P̂ Û.
Algorithm 2: TensorDecomp
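A compact NumPy rendering of Algorithm 2, as a sketch rather than a line-by-line transcription: for complex tensors we contract with the conjugate of P̂ so that the whitening cancels, which is our adaptation of step 2. The output recovers the factor only up to column scaling and permutation; Algorithm 1's normalization by the last row then fixes the scale.

```python
import numpy as np

def tensor_decomp(F, k):
    """Jennrich-type decomposition of F in C^{m x m x 3} with a shared factor V:
    F = sum_n V[:, n] (x) V[:, n] (x) C[:, n]. Returns Vhat in C^{m x k}."""
    m = F.shape[0]
    U, _, _ = np.linalg.svd(F[:, :, 0])         # truncated SVD of F(I, I, e_1)
    P = U[:, :k]                                # k leading singular directions
    E = np.einsum('abc,ai,bj->ijc', F, P.conj(), P.conj())   # whitened slices
    vals, W = np.linalg.eig(E[:, :, 0] @ np.linalg.inv(E[:, :, 1]))
    idx = np.argsort(-np.abs(vals))[:k]         # k largest |eigenvalues|
    return np.sqrt(m) * P @ W[:, idx]
```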
Remark 2.1. In the toy example, the simple algorithm corresponds to using the sampling set S′ =
{e_1, . . . , e_d}. The conventional univariate matrix pencil method corresponds to using the sampling
set S′ = {0, 1, . . . , m}, and the set of measurements S′ ⊕ S′ ⊕ S′ corresponds to the grid [m]^3.
2.2 Guarantees
In this section, we discuss how to pick the two parameters m and R and prove that the proposed
algorithm indeed achieves stable recovery in the presence of measurement noise.
Theorem 2.2 (Stable recovery). There exists a universal constant C such that the following holds.
Fix ε_x, δ_s, δ_v ∈ (0, 1/2); pick m such that m ≥ (k/ε_x) √(8 log(k/δ_s)); for d = 1, pick
R ≥ max{ √(2 log(1 + 2/ε_x))/Δ, d }; for d ≥ 2, pick R ≥ √(2 log(k/ε_x))/Δ.
Assume the bounded measurement noise model as in (3) and that

    ε_z ≤ (δ_v w_min^2)/(100 √d k^5) · ((1 − 2ε_x)/(1 + 2ε_x))^{2.5}.

With probability at least (1 − δ_s) over the random sampling of S, and with probability at least (1 − δ_v)
over the random projections in Algorithm 2, the proposed Algorithm 1 returns an estimate of the
point source signal x̂(t) = \sum_{j=1}^{k} ŵ_j δ_{μ̂^(j)} with accuracy

    \min_π \max{ ‖μ̂^(j) − μ^(π(j))‖_2 : j ∈ [k] } ≤ C (√d k^5 w_max)/(δ_v w_min^2) · ((1 + 2ε_x)/(1 − 2ε_x))^{2.5} ε_z,

where the min is over permutations π on [k]. Moreover, the proposed algorithm has time complexity
on the order of O((m′)^3).
The next lemma shows that, essentially, with overwhelming probability, all the frequencies taken
concentrate within the hyper-cube with cutoff frequency R_0 on each coordinate, where R_0 is comparable to R.
Lemma 2.3 (The cutoff frequency). For d > 1, with high probability, all of the 2(m′)^2 sampling
frequencies in S′ ⊕ S′ ⊕ {v^(1), v^(2)} satisfy ‖s^(j_1) + s^(j_2) + v^(j_3)‖_∞ ≤ R_0, ∀ j_1, j_2 ∈ [m′], j_3 ∈
[2], where the per-coordinate cutoff frequency is given by R_0 = O(R √(log md)).
For the d = 1 case, the cutoff frequency R_0 can be made to be on the order of R_0 = O(1/Δ).
Remark 2.4 (Failure probability). Overall, the failure probability consists of two pieces: δ_v for the
random projections of v, and δ_s for the random sampling to ensure the bounded condition number of V_S.
This may be boosted to arbitrarily high probability through repetition.
2.3 Key Lemmas
Stability of tensor decomposition: In this paragraph, we give a brief description and the stability
guarantee of the well-known Jennrich's algorithm ([11, 13]) for low rank 3rd order tensor decomposition. We only state it for the symmetric tensors as they appear in the proposed algorithm.
Consider a tensor F = V ⊗ V ⊗ (V_2 D_w) ∈ C^{m×m×3} where the factor V has full column rank k.
Then the decomposition is unique up to column permutation and rescaling, and Algorithm 2 finds the
factors efficiently. Moreover, the eigen-decomposition is stable if the factor V is well-conditioned
and the eigenvalues of F_a F_b^† are well separated.
Lemma 2.5 (Stability of Jennrich's algorithm). Consider the 3rd order tensor F = V ⊗ V ⊗
(V_2 D_w) ∈ C^{m×m×3} of rank k ≤ m, constructed as in Step 1 in Algorithm 1.
Given a tensor F̃ that is element-wise close to F, namely for all n_1, n_2, n_3 ∈ [m],
|F̃_{n_1,n_2,n_3} − F_{n_1,n_2,n_3}| ≤ ε_z, and assume that the noise is small:

    ε_z ≤ (δ_v w_min^2)/(100 √d k w_max cond_2(V)^5).

Use F̃ as the input to Algorithm 2. With probability at least (1 − δ_v) over the random projections v^(1)
and v^(2), we can bound the distance between the columns of the output V̂ and those of V by:

    \min_π \max{ ‖V̂_j − V_{π(j)}‖_2 : j ∈ [k] } ≤ C (√d k^2 w_max)/(δ_v w_min^2) cond_2(V)^5 ε_z,        (9)

where C is a universal constant.
Condition number of V_{S′}: The following lemma is helpful:
Lemma 2.6. Let V_{S′} ∈ C^{(m+d+1)×k} be the factor as defined in (7). Recall that V_{S′} = [V_S; V_d; 1],
where V_d is defined in (17), and V_S is the characteristic matrix defined in (6).
We can bound the condition number of V_{S′} by

    cond_2(V_{S′}) ≤ √(1 + √k) cond_2(V_S).                                          (10)
Condition number of the characteristic matrix VS : Therefore, the stability analysis of the proposed algorithm boils down to understanding the relation between the random sampling set S and
the condition number of the characteristic matrix VS . This is analyzed in Lemma 2.8 (main technical
lemma).
Lemma 2.7. Fix any number ε_x ∈ (0, 1/2). Consider a Gaussian vector s with distribution
N(0, R^2 I_{d×d}), where R ≥ √(2 log(k/ε_x))/Δ for d ≥ 2, and R ≥ √(2 log(1 + 2/ε_x))/Δ for d = 1.
Define the Hermitian random matrix X_s ∈ C^{k×k}_{herm} to be

    X_s = [ e^{−iπ⟨μ^(1), s⟩}, . . . , e^{−iπ⟨μ^(k), s⟩} ]^⊤ [ e^{iπ⟨μ^(1), s⟩}, . . . , e^{iπ⟨μ^(k), s⟩} ].        (11)

We can bound the spectrum of E_s[X_s] by:

    (1 − ε_x) I_{k×k} ⪯ E_s[X_s] ⪯ (1 + ε_x) I_{k×k}.                                 (12)
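Bound (12) is easy to probe by Monte Carlo; the small simulation below (our own toy instance) estimates E_s[X_s] and prints its spectrum, which concentrates around 1 once R is large relative to 1/Δ:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, R = 4, 3, 3.0
mu = rng.uniform(-1, 1, size=(k, d))            # point sources

N = 20000                                       # Monte Carlo samples of s
S = R * rng.standard_normal((N, d))             # s ~ N(0, R^2 I)
Z = np.exp(-1j * np.pi * (S @ mu.T))            # rows are z(s); X_s = z z^H
EX = (Z[:, :, None] * Z[:, None, :].conj()).mean(axis=0)

print(np.linalg.eigvalsh(EX))                   # all close to 1 for large R
```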
Lemma 2.8 (Main technical lemma). In the same setting as Lemma 2.7, let S = {s^(1), . . . , s^(m)}
be m independent samples of the Gaussian vector s. For m ≥ (k/ε_x) √(8 log(k/δ_s)), with probability at
least 1 − δ_s over the random sampling, the condition number of the factor V_S is bounded by:

    cond_2(V_S) ≤ √((1 + 2ε_x)/(1 − 2ε_x)).                                           (13)
Acknowledgments
The authors thank Rong Ge and Ankur Moitra for very helpful discussions. Sham Kakade acknowledges funding from the Washington Research Foundation for innovation in Data-intensive
Discovery.
References
[1] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773-2832, 2014.
[2] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. arXiv preprint arXiv:1203.0683, 2012.
[3] E. J. Candès and C. Fernandez-Granda. Super-resolution from noisy data. Journal of Fourier Analysis and Applications, 19(6):1229-1254, 2013.
[4] E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics, 67(6):906-956, 2014.
[5] Y. Chen and Y. Chi. Robust spectral compressed sensing via structured matrix completion. Information Theory, IEEE Transactions on, 60(10):6576-6601, 2014.
[6] S. Dasgupta. Learning mixtures of Gaussians. In Foundations of Computer Science, 1999. 40th Annual Symposium on, pages 634-644. IEEE, 1999.
[7] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and Algorithms, 22(1):60-65, 2003.
[8] S. Dasgupta and L. J. Schulman. A two-round variant of EM for Gaussian mixtures. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 152-159. Morgan Kaufmann Publishers Inc., 2000.
[9] D. L. Donoho. Superresolution via sparsity constraints. SIAM Journal on Mathematical Analysis, 23(5):1309-1331, 1992.
[10] C. Fernandez-Granda. A Convex-programming Framework for Super-resolution. PhD thesis, Stanford University, 2014.
[11] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis. 1970.
[12] V. Komornik and P. Loreti. Fourier Series in Control Theory. Springer Science & Business Media, 2005.
[13] S. Leurgans, R. Ross, and R. Abel. A decomposition for three-way arrays. SIAM Journal on Matrix Analysis and Applications, 14(4):1064-1083, 1993.
[14] W. Liao and A. Fannjiang. MUSIC for single-snapshot spectral estimation: Stability and super-resolution. Applied and Computational Harmonic Analysis, 2014.
[15] A. Moitra. The threshold for super-resolution via extremal functions. arXiv preprint arXiv:1408.1681, 2014.
[16] E. Mossel and S. Roch. Learning nonsingular phylogenies and hidden Markov models. In Proceedings of the Thirty-seventh Annual ACM Symposium on Theory of Computing, pages 366-375. ACM, 2005.
[17] S. Nandi, D. Kundu, and R. K. Srivastava. Noise space decomposition method for two-dimensional sinusoidal model. Computational Statistics & Data Analysis, 58:147-161, 2013.
[18] K. Pearson. Contributions to the mathematical theory of evolution. Philosophical Transactions of the Royal Society of London. A, pages 71-110, 1894.
[19] D. Potts and M. Tasche. Parameter estimation for nonincreasing exponential sums by Prony-like methods. Linear Algebra and its Applications, 439(4):1024-1039, 2013.
[20] D. L. Russell. Controllability and stabilizability theory for linear partial differential equations: recent progress and open questions. SIAM Review, 20(4):639-739, 1978.
[21] A. Sanjeev and R. Kannan. Learning mixtures of arbitrary Gaussians. In Proceedings of the Thirty-third Annual ACM Symposium on Theory of Computing, pages 247-257. ACM, 2001.
[22] G. Schiebinger, E. Robeva, and B. Recht. Superresolution without separation. arXiv preprint arXiv:1506.03144, 2015.
[23] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht. Compressed sensing off the grid. Information Theory, IEEE Transactions on, 59(11):7465-7490, 2013.
[24] S. S. Vempala and Y. F. Xiao. Max vs min: Independent component analysis with nearly linear sample complexity. arXiv preprint arXiv:1412.2954, 2014.
| 5737 |@word briefly:1 achievable:1 polynomial:5 norm:5 vi1:1 open:1 seek:2 decomposition:16 pick:4 harder:2 moment:1 series:1 ours:1 existing:1 ksk1:1 recovered:1 fn:1 j1:4 v:24 intelligence:1 fewer:1 coarse:3 provides:2 mathematical:4 along:1 constructed:3 direct:1 differential:1 symposium:3 ik:2 prove:1 consists:1 paragraph:1 hermitian:1 notably:1 indeed:2 cand:2 sdp:6 multi:2 chi:1 overwhelming:1 cardinality:1 project:1 provided:1 bounded:11 moreover:4 granda:6 notation:2 underlying:1 superresolution:4 medium:1 cm:13 astronomy:1 guarantee:5 multidimensional:1 runtime:4 exactly:1 k2:4 rm:1 control:1 unit:5 harshman:1 positive:3 engineering:1 understood:1 depended:1 id:3 studied:2 k:2 ankur:1 ease:1 range:1 unique:4 acknowledgment:1 thirty:2 recursive:1 procedure:4 universal:2 projection:6 refers:2 suggest:1 close:1 minj6:1 twodimensional:1 equivalent:1 conventional:1 straightforward:1 convex:2 focused:1 resolution:7 recovery:16 pure:1 array:1 importantly:1 dw:4 stability:8 coordinate:5 variation:1 suppose:3 exact:9 programming:2 us:2 element:3 preprint:4 worst:1 wj:8 russell:1 complexity:4 vbs:5 abel:1 depend:2 algebra:1 technically:1 upon:2 basis:1 separated:2 describe:1 london:1 kp:1 artificial:1 hyper:1 pearson:1 whose:5 stanford:1 valued:1 solve:1 say:2 compressed:2 statistic:3 cma:1 itself:1 noisy:4 eigenvalue:2 maximal:2 product:2 mb:3 j2:5 relevant:1 achieve:1 sixteenth:1 description:2 dirac:1 telgarsky:1 object:1 completion:1 measured:1 progress:1 recovering:1 c:1 concentrate:1 radius:1 min0:1 kb:2 require:2 suffices:2 fix:1 decompose:1 elementary:2 rong:1 hold:2 around:2 mapping:2 bj:3 claim:1 m0:6 achieves:6 adopt:1 estimation:5 favorable:1 superposition:1 ross:1 extremal:1 largest:1 repetition:1 weighted:1 hope:2 minimization:1 mit:2 gaussian:4 super:7 aim:1 i3:3 ck:1 broader:1 parafac:1 adiag:1 potts:1 rank:8 contrast:1 sense:1 helpful:2 abstraction:1 dependent:1 explanatory:1 hidden:2 relation:1 i1:2 jennrich:4 interested:1 arg:1 dual:1 fannjiang:2 overall:1 special:1 cube:1 construct:2 washington:3 sampling:14 biology:1 nearly:1 few:1 randomly:1 n1:3 attempt:1 interest:2 ai1:1 analyzed:1 mixture:4 semidefinite:1 xb:2 nonincreasing:1 partial:1 minw:1 euclidean:3 logarithm:1 desired:1 circle:1 re:1 minimal:10 column:9 formulates:1 entry:1 johnson:1 seventh:1 eec:1 corrupted:3 recht:2 siam:3 off:4 quickly:2 kvj:1 sanjeev:1 thesis:1 moitra:4 opposed:1 huang:1 vbd:1 possibly:1 leading:1 return:3 rescaling:2 toy:1 sinusoidal:1 b2:1 inc:1 satisfy:1 mp:2 fernandez:6 depends:2 explicitly:1 piece:1 root:2 view:1 recover:4 candes:3 contribution:2 accuracy:2 kaufmann:1 characteristic:4 efficiently:1 nonsingular:1 accurately:2 mc:3 j6:1 randomness:1 minj:1 ed:2 definition:3 failure:2 underestimate:1 frequency:22 e2:1 proof:2 boil:1 hsu:2 nandi:1 recall:1 actually:1 dt:1 follow:2 reflected:2 modal:1 arranged:1 evaluated:1 though:1 furthermore:5 xa:2 robustify:1 ei:12 wmax:2 defines:1 verify:1 evolution:1 pencil:6 read:2 symmetric:1 freq:2 i2:3 deal:1 round:1 uniquely:1 noted:1 ranging:1 wise:3 harmonic:1 recently:1 funding:1 common:3 quantifiably:2 physical:1 exponentially:1 extend:2 discussed:1 measurement:46 refer:1 leurgans:1 rd:11 grid:5 mathematics:1 stable:11 multivariate:2 showed:2 recent:1 certain:2 arbitrarily:1 fen:1 morgan:1 r0:5 signal:10 resolving:1 multiple:1 desirable:2 sham:3 full:3 technical:3 sphere:3 e1:4 j3:5 variant:1 liao:2 noiseless:1 metric:1 essentially:2 arxiv:8 normalization:1 want:3 singular:3 source:17 publisher:1 bhaskar:1 anandkumar:2 presence:2 idea:1 
intensive:1 remark:2 informally:1 eigenvectors:1 reduced:2 per:1 dasgupta:3 key:2 terminology:1 threshold:1 pb:4 cutoff:15 imaging:2 sum:3 run:2 inverse:1 uncertainty:1 kfb:1 separation:11 draw:1 summarizes:1 scaling:2 vb:3 comparable:1 bound:5 annual:3 infinity:2 constraint:1 n3:5 fourier:6 min:8 vempala:1 department:1 structured:1 kd:1 em:1 kakade:4 lid:1 s1:2 intuitively:2 pr:1 taken:2 equation:1 discus:1 know:1 ge:2 informal:1 gaussians:2 observe:1 v2:5 appropriate:1 spectral:2 shah:1 eigen:1 original:2 denotes:2 ensure:1 xc:2 music:3 k1:1 society:1 implied:1 tensor:23 question:3 quantity:3 fa:1 concentration:1 dependence:1 costly:1 md:1 wrap:2 distance:4 thank:1 vd:6 kannan:1 modeled:1 relationship:1 innovation:1 executed:1 fe:14 statement:2 proper:1 upper:3 convolution:1 snapshot:1 markov:2 finite:1 controllability:1 truncated:1 defining:1 communication:1 perturbation:1 vj1:1 arbitrary:1 namely:4 philosophical:1 herein:1 roch:1 below:3 appeared:1 sparsity:1 program:1 max:7 prony:2 royal:1 bandlimited:2 bi2:1 power:1 natural:1 business:1 kundu:1 scheme:1 brief:1 inversely:1 numerous:1 mossel:1 acknowledges:1 naive:1 prior:1 review:2 understanding:1 discovery:1 kf:1 schulman:1 expect:1 permutation:6 remarkable:1 foundation:3 degree:1 s0:2 xiao:1 cd:4 row:2 surprisingly:1 last:2 taking:2 wmin:6 absolute:2 dimension:6 evaluating:1 lindenstrauss:1 fb:1 author:1 made:1 far:2 polynomially:1 transaction:3 implicitly:1 tolerant:1 b1:2 spectrum:1 latent:1 table:5 robust:2 ignoring:1 obtaining:1 spectroscopy:1 complex:2 poly:4 pk:2 spread:1 main:4 s2:2 noise:9 n2:5 allowed:1 en:2 explicit:1 exponential:2 kxk2:1 third:1 tang:2 theorem:5 down:1 showing:1 sensing:2 r2:3 dk:2 admits:2 x:4 gupta:1 naively:2 exists:2 phd:1 conditioned:1 chen:1 univariate:7 desire:2 springer:1 corresponds:3 acm:4 ma:2 formulated:1 donoho:3 towards:1 uniformly:1 lemma:14 total:1 pas:1 svd:1 e:4 phylogeny:1 arises:1 srivastava:1 |
5,233 | 5,738 | b-bit Marginal Regression
Ping Li
Department of Statistics and Biostatistics
Department of Computer Science
Rutgers University
pingli@stat.rutgers.edu
Martin Slawski
Department of Statistics and Biostatistics
Department of Computer Science
Rutgers University
martin.slawski@rutgers.edu
Abstract
We consider the problem of sparse signal recovery from m linear measurements
quantized to b bits. b-bit Marginal Regression is proposed as the recovery algorithm.
We study the question of choosing b in the setting of a given budget of bits B =
m · b and derive a single easy-to-compute expression characterizing the trade-off
between m and b. The choice b = 1 turns out to be optimal for estimating the unit
vector corresponding to the signal for any level of additive Gaussian noise before
quantization as well as for adversarial noise. For b ≥ 2, we show that Lloyd-Max
quantization constitutes an optimal quantization scheme and that the norm of the
signal can be estimated consistently by maximum likelihood by extending [15].
1 Introduction
Consider the common compressed sensing (CS) model

    y_i = ⟨a_i, x*⟩ + σε_i,  i = 1, . . . , m,  or equivalently
    y = Ax* + σε,   y = (y_i)_{i=1}^{m},  A = (A_{ij})_{i,j=1}^{m,n},  {a_i = (A_{ij})_{j=1}^{n}}_{i=1}^{m},  ε = (ε_i)_{i=1}^{m},        (1)

where the {A_{ij}} and the {ε_i} are i.i.d. N(0, 1) (i.e. standard Gaussian) random variables, the latter
of which will be referred to by the term "additive noise" and accordingly σ > 0 as the "noise level", and
x* ∈ R^n is the signal of interest to be recovered given (A, y). Let s = ‖x*‖_0 := |S(x*)|, where
S(x*) = {j : |x*_j| > 0}, be the ℓ_0-norm of x* (i.e. the cardinality of its support S(x*)). One of the
celebrated results in CS is that accurate recovery of x* is possible as long as m ≳ s log n, and can
be carried out by several computationally tractable algorithms, e.g. [3, 5, 21, 26, 29].
Subsequently, the concept of signal recovery from an incomplete set (m < n) of linear measurements was developed further to settings in which only coarsely quantized versions of such linear
measurements are available, with the extreme case of single-bit measurements [2, 8, 11, 22, 23, 28,
16]. More generally, one can think of b-bit measurements, b ∈ {1, 2, . . .}. Assuming that one is free
to choose b given a fixed budget of bits B = m · b gives rise to a trade-off between m and b. An
optimal balance of these two quantities minimizes the error in recovering the signal. Such an optimal
trade-off depends on the quantization scheme, the noise level, and the recovery algorithm. This
trade-off has been considered in previous CS literature [13]. However, the analysis therein concerns
an oracle-assisted recovery algorithm equipped with knowledge of S(x*), which is not fully realistic.
In [9] a specific variant of Iterative Hard Thresholding [1] for b-bit measurements is considered. It is
shown via numerical experiments that choosing b ≥ 2 can in fact achieve improvements over b = 1
at the level of the total number of bits required for approximate signal recovery. On the other hand,
there is no analysis supporting this observation. Moreover, the experiments in [9] only concern a
noiseless setting. Another approach is to treat quantization as additive error and to perform signal
recovery by means of variations of recovery algorithms for infinite-precision CS [10, 14, 18]. In this
line of research, b is assumed to be fixed and a discussion of the aforementioned trade-off is missing.
In the present paper we provide an analysis of compressed sensing from b-bit measurements using a
specific approach to signal recovery which we term b-bit Marginal Regression. This approach builds
on a method for one-bit compressed sensing proposed in an influential paper by Plan and Vershynin
[23] which has subsequently been refined in several recent works [4, 24, 28]. As indicated by the
name, b-bit Marginal Regression can be seen as a quantized version of Marginal Regression, a simple
yet surprisingly effective approach to support recovery that stands out due to its low computational
cost, requiring only a single matrix-vector multiplication and a sorting operation [7]. Our analysis
yields a precise characterization of the above trade-off involving m and b in various settings. It
turns out that the choice b = 1 is optimal for recovering the normalized signal x*_u = x*/‖x*‖_2,
under additive Gaussian noise as well as under adversarial noise. It is shown that the choice b =
2 additionally enables one to estimate ‖x*‖_2, while being optimal for recovering x*_u for b ≥ 2.
Hence for the specific recovery algorithm under consideration, it does not pay off to take b > 2.
Furthermore, once the noise level is significantly high, b-bit Marginal Regression is empirically
shown to perform roughly as well as several alternative recovery algorithms, a finding suggesting
that in high-noise settings taking b > 2 does not pay off in general. As an intermediate step in our
analysis, we prove that Lloyd-Max quantization [19, 20] constitutes an optimal b-bit quantization
scheme in the sense that it leads to a minimization of an upper bound on the reconstruction error.
Notation: We use [d] = {1, . . . , d} and S(x) for the support of x ∈ R^n. x ⊙ x′ = (x_j x′_j)_{j=1}^{n} denotes
the entry-wise product. I(P) is the indicator function of expression P. The symbol ∝ means "up to a positive
universal constant". Supplement: Proofs and additional experiments can be found in the supplement.
2 From Marginal Regression to b-bit Marginal Regression
Some background on Marginal Regression. It is common to perform sparse signal recovery by
solving an optimization problem of the form

    \min_x (1/(2m)) ‖y − Ax‖_2^2 + λ P(x),   λ ≥ 0,                                   (2)

where P is a penalty term encouraging sparse solutions. Standard choices for P are P(x) = ‖x‖_0,
which is computationally not feasible in general, its convex relaxation P(x) = ‖x‖_1, or non-convex
penalty terms like SCAD or MCP that are more amenable to optimization than the ℓ_0-norm [27].
Alternatively, P can as well be used to enforce a constraint by setting P(x) = δ_C(x), where δ_C(x) =
0 if x ∈ C and +∞ otherwise, with C = {x ∈ R^n : ‖x‖_0 ≤ s} or C = {x ∈ R^n : ‖x‖_1 ≤ r} being
standard choices. Note that (2) is equivalent to the optimization problem

    \min_x −⟨η, x⟩ + (1/2) x^⊤ (A^⊤A/m) x + λ P(x),   where η = A^⊤y/m.

Replacing A^⊤A/m by E[A^⊤A/m] = I (recall that the entries of A are i.i.d. N(0, 1)), we obtain

    \min_x −⟨η, x⟩ + (1/2) ‖x‖_2^2 + λ P(x),   η = A^⊤y/m,                            (3)

which tends to be much simpler to solve than (2) as the first two terms are separable in the components of x. For the choices of P mentioned above, we obtain closed form solutions:

    P(x) = ‖x‖_0:            x̂_j = η_j I(|η_j| ≥ (2λ)^{1/2})       P(x) = ‖x‖_1:            x̂_j = (|η_j| − λ)_+ sign(η_j),
    P(x) = δ_{x:‖x‖_0 ≤ s}:  x̂_j = η_j I(|η_j| ≥ |η_(s)|)          P(x) = δ_{x:‖x‖_1 ≤ r}:  x̂_j = (|η_j| − λ*)_+ sign(η_j)        (4)
for j ∈ [n], where (·)_+ denotes the positive part and |η_(s)| is the s-th largest entry of η in absolute
magnitude, and λ* = min{λ ≥ 0 : \sum_{j=1}^{n} (|η_j| − λ)_+ ≤ r}. In other words, the estimators are hard- respectively soft-thresholded versions of η_j = A_j^⊤ y/m, which are essentially equal to the univariate
(or marginal) regression coefficients η̃_j = A_j^⊤ y/‖A_j‖_2^2 in the sense that η_j = η̃_j (1 + O_P(m^{−1})),
j ∈ [n]; hence the term "marginal regression". In the literature, it is the estimator in the left half of
(4) that is popular [7], albeit as a means to infer the support of x* rather than x* itself. Under (2) the
performance with respect to signal recovery can still be reasonable in view of the statement below.
performance with respect to signal recovery can still be reasonable in view of the statement below.
Proposition 1. Consider model (1) with x? 6= 0 and the Marginal Regression estimator x
b defined
component-wise by x
bj = ?j I(|?j | ? |?(s) |), j ? [n], where ? = A? y/m. Then there exists positive
constants c, C > 0 such that with probability at least 1 ? cn?1
r
kx? k2 + ? s log n
kb
x ? x? k2
?C
.
(5)
kx? k2
kx? k2
m
In comparison, the relative ℓ_2-error of more sophisticated methods like the lasso scales as
O({σ/‖x*‖_2} √(s log(n)/m)), which is comparable to (5) once σ is of the same order of magnitude as
‖x*‖_2. Marginal Regression can also be interpreted as a single projected gradient iteration
from 0 for problem (2) with P = δ_{x:‖x‖_0 ≤ s}. Taking more than one projected gradient iteration gives
rise to a popular recovery algorithm known as Iterative Hard Thresholding (IHT, [1]).
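For concreteness, a sketch of the hard-thresholded Marginal Regression estimator from Proposition 1 on simulated data (the toy sizes are our own choice):

```python
import numpy as np

def marginal_regression(A, y, s):
    """eta = A^T y / m; keep the s largest |eta_j|, zero out the rest."""
    eta = A.T @ y / A.shape[0]
    xhat = np.zeros_like(eta)
    keep = np.argsort(-np.abs(eta))[:s]
    xhat[keep] = eta[keep]
    return xhat

rng = np.random.default_rng(3)
m, n, s, sigma = 500, 2000, 5, 0.1
x_star = np.zeros(n); x_star[:s] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
y = A @ x_star + sigma * rng.standard_normal(m)
print(np.linalg.norm(marginal_regression(A, y, s) - x_star))
```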
Compressed sensing with non-linear observations and the method of Plan & Vershynin. As a
generalization of (1) one can consider measurements of the form

    y_i = Q(⟨a_i, x*⟩ + σε_i),   i ∈ [m],                                             (6)

for some map Q. Without loss of generality, one may assume that ‖x*‖_2 = 1 as long as x* ≠ 0 (which
is assumed in the sequel) by defining Q accordingly. Plan and Vershynin [23] consider the following
optimization problem for recovering x*, and develop a framework for analysis that covers even more
general measurement models than (6). The proposed estimator minimizes

    \min_{x: ‖x‖_2 ≤ 1, ‖x‖_1 ≤ √s} −⟨η, x⟩,   η = A^⊤y/m.                            (7)

Note that the constraint set {x : ‖x‖_2 ≤ 1, ‖x‖_1 ≤ √s} contains {x : ‖x‖_2 ≤ 1, ‖x‖_0 ≤ s}. The
authors prefer the former, first because it is suited for approximately sparse signals as well and second
because it is convex. However, the optimization problem with the sparsity constraint is easy to solve:

    \min_{x: ‖x‖_2 ≤ 1, ‖x‖_0 ≤ s} −⟨η, x⟩,   η = A^⊤y/m.                             (8)
Lemma 1. The solution of problem (8) is given by x̂ = x̃/‖x̃‖_2, x̃_j = η_j I(|η_j| ≥ |η_(s)|), j ∈ [n].
While this is elementary we state it as a separate lemma as there has been some confusion in the existing literature. In [4] the same solution is obtained after (unnecessarily) convexifying the constraint
set, which yields the unit ball of the so-called s-support norm. In [24] a family of concave penalty
terms including the SCAD and MCP is proposed in place of the cardinality constraint. However, in
light of Lemma 1, the use of such penalty terms lacks motivation.
The minimization problem (8) is essentially that of Marginal Regression (3) with P = δ_{x:‖x‖₀≤s}, the only difference being that the norm of the solution is fixed to one. Note that the Marginal Regression estimator is equivariant w.r.t. re-scaling of y, i.e. for a·y with a > 0, x̂ changes to a·x̂. In addition, let λ, Λ > 0 and define x̂(λ) and x̂[Λ] as the minimizers of the optimization problems
$$\min_{x:\ \|x\|_0 \le s}\ -\langle\beta, x\rangle + \frac{\lambda}{2}\|x\|_2^2, \qquad \min_{x:\ \|x\|_2 \le \Lambda,\ \|x\|_0 \le s}\ -\langle\beta, x\rangle. \qquad (9)$$
It is not hard to verify that x̂(λ)/‖x̂(λ)‖₂ = x̂[Λ]/‖x̂[Λ]‖₂ = x̂[1]. In summary, for estimating the direction x*_u = x*/‖x*‖₂ it does not matter whether a quadratic term in the objective or an ℓ₂-norm constraint is used. Moreover, estimation of the "scale" ψ* = ‖x*‖₂ and of the direction can be separated.
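In code, the solution of (8) is obtained from the top-s estimator by a single normalization, reflecting this separation of direction and scale. A hypothetical sketch, building on the function above:

```python
import numpy as np

def marginal_regression_unit(A, y, s):
    """Solution of (8) by Lemma 1: top-s hard thresholding of beta = A^T y / m,
    followed by normalization to the unit sphere (direction estimate only)."""
    xt = marginal_regression_top_s(A, y, s)
    nrm = np.linalg.norm(xt)
    return xt / nrm if nrm > 0 else xt
```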
Adopting the framework in [23], we provide a straightforward bound on the ℓ₂-error of x̂ minimizing (8). To this end we define two quantities which will be of central interest in the subsequent analysis:
$$\lambda = \mathbb{E}[g\,\theta(g)],\ \ g \sim N(0,1),\ \text{where } \theta \text{ is defined by } \mathbb{E}[y_1 \mid a_1] = \theta(\langle a_1, x^*\rangle),$$
$$\Psi = \inf\Big\{C > 0 :\ \mathrm{P}\Big\{\max_{1\le j\le n}|\beta_j - \mathbb{E}[\beta_j]| \le C\sqrt{\log(n)/m}\Big\} \ge 1 - 1/n\Big\}. \qquad (10)$$
The quantity λ concerns the deterministic part of the analysis, as it quantifies the distortion of the linear measurements under the map Q, while Ψ is used to deal with the stochastic part. The definition of Ψ is based on the usual tail bound for the maximum of centered sub-Gaussian random variables. In fact, as long as Q has bounded range, Gaussianity of the {A_ij} implies that the {β_j − E[β_j]}_{j=1}^n are zero-mean sub-Gaussian. Accordingly, the constant Ψ is proportional to the sub-Gaussian norm of the {β_j − E[β_j]}_{j=1}^n, cf. [25].
Proposition 2. Consider model (6) s.t. ‖x*‖₂ = 1 and (10). Suppose that λ > 0 and denote by x̂ the minimizer of (8). Then with probability at least 1 − 1/n, it holds that
$$\|x^* - \hat{x}\|_2 \;\le\; 2\sqrt{2}\,\frac{\Psi}{\lambda}\,\sqrt{\frac{s\log n}{m}}. \qquad (11)$$
So far s has been assumed to be known. If that is not the case, s can be estimated as follows.
Proposition 3. In the setting of Proposition 2, consider ŝ = |{j : |β_j| > Ψ√(log(n)/m)}| and x̂ as the minimizer of (8) with s replaced by ŝ. Then with probability at least 1 − 1/n, S(x̂) ⊆ S(x*) (i.e. no false positive selection). Moreover, if
$$\min_{j \in S(x^*)} |x_j^*| > (2\Psi/\lambda)\sqrt{\log(n)/m}, \ \text{ one has } S(\hat{x}) = S(x^*). \qquad (12)$$
b-bit Marginal Regression. b-bit quantized measurements directly fit into the non-linear observation model (6). Here the map Q represents a quantizer that partitions ℝ₊ into K = 2^{b−1} bins {R_k}_{k=1}^K given by distinct thresholds t = (t₁, ..., t_{K−1}) (in increasing order) and t₀ = 0, t_K = +∞, such that R₁ = [t₀, t₁), ..., R_K = [t_{K−1}, t_K). Each bin is assigned a distinct representative from M = {μ₁, ..., μ_K} (in increasing order), so that Q : ℝ → −M ∪ M is defined by z ↦ Q(z) = sign(z) Σ_{k=1}^K μ_k 𝕀(|z| ∈ R_k). Expanding model (6) accordingly, we obtain
$$y_i = \operatorname{sign}(\langle a_i, x^*\rangle + \sigma\varepsilon_i)\sum_{k=1}^{K}\mu_k\,\mathbb{I}\big(|\langle a_i, x^*\rangle + \sigma\varepsilon_i| \in R_k\big) = \operatorname{sign}(\langle a_i, x_u^*\rangle + \bar{\sigma}\varepsilon_i)\sum_{k=1}^{K}\mu_k\,\mathbb{I}\big(|\langle a_i, x_u^*\rangle + \bar{\sigma}\varepsilon_i| \in R_k/\psi^*\big),\ \ i \in [m],$$
where ψ* = ‖x*‖₂, x*_u = x*/ψ* and σ̄ = σ/ψ*. Thus the scale ψ* of the signal can be absorbed into the definition of the bins, respectively the thresholds, which should be proportional to ψ*. We may thus again fix ψ* = 1 and in turn x* = x*_u, σ̄ = σ w.l.o.g. for the analysis below. Estimation of ψ* separately from x*_u will be discussed in a separate section.
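For concreteness, the quantizer Q can be implemented as follows. This is a sketch, not from the paper; `t` holds the interior thresholds (t₁, ..., t_{K−1}) and `mu` the representatives (μ₁, ..., μ_K):

```python
import numpy as np

def quantize(z, t, mu):
    """b-bit quantizer Q: maps z to sign(z) * mu_k, where |z| lies in the bin
    R_k = [t_{k-1}, t_k), with t_0 = 0 and t_K = +inf implied."""
    edges = np.concatenate(([0.0], np.asarray(t, dtype=float), [np.inf]))
    k = np.searchsorted(edges, np.abs(z), side="right") - 1  # bin index 0..K-1
    return np.sign(z) * np.asarray(mu, dtype=float)[k]
```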
3 Analysis
In this section we study in detail the central question of the introduction. Suppose we have a fixed budget B of bits available and are free to choose the number of measurements m and the number of bits per measurement b, subject to B = m · b, such that the ℓ₂-error ‖x̂ − x*‖₂ of b-bit Marginal Regression is as small as possible. What is the optimal choice of (m, b)? In order to answer this question, let us go back to the error bound (11). That bound applies to b-bit Marginal Regression for any choice of b and varies with λ = λ_b and Ψ = Ψ_b, both of which additionally depend on σ, the choice of the thresholds t, and the representatives μ. It can be shown that the dependence of (11) on the ratio Ψ/λ is tight asymptotically as m → ∞. Hence it makes sense to compare two different choices b and b' in terms of the ratio of Ω_b = λ_b/Ψ_b and Ω_{b'} = λ_{b'}/Ψ_{b'}. Since the bound (11) decays with m, for b'-bit measurements, b' > b, to improve over b-bit measurements with respect to the total #bits used, it is then required that Ω_{b'}/Ω_b > √(b'/b). The route to be taken is thus as follows: we first derive expressions for λ_b and Ψ_b, then minimize the resulting expression for Ψ_b/λ_b w.r.t. the free parameters t and μ. We are then in position to compare Ω_b/Ω_{b'} for b ≠ b'.
Evaluating λ_b = λ_b(t, μ). Below, ⊙ denotes the entry-wise multiplication between vectors.
Lemma 2. We have λ_b(t, μ) = ⟨π(t), E(t) ⊙ μ⟩/(1 + σ²), where
$$\pi(t) = (\pi_1(t), \ldots, \pi_K(t))^\top,\ \ \pi_k(t) = \mathrm{P}\{|\tilde{g}| \in R_k(t)\},\ \ \tilde{g} \sim N(0, 1+\sigma^2),\ k \in [K],$$
$$E(t) = (E_1(t), \ldots, E_K(t))^\top,\ \ E_k(t) = \mathbb{E}[\tilde{g} \mid \tilde{g} \in R_k(t)],\ \ \tilde{g} \sim N(0, 1+\sigma^2),\ k \in [K].$$
Evaluating Ψ_b = Ψ_b(t, μ). Exact evaluation proves to be difficult. We hence resort to an analytically more tractable approximation which is still sufficiently accurate, as confirmed by experiments.
Lemma 3. As |x*_j| → 0, j = 1, ..., n, and as m → ∞, we have Ψ_b(t, μ) ∝ √⟨π(t), μ ⊙ μ⟩.
Note that the proportionality constant (not depending on b) in front of the given expression does not need to be known, as it cancels out when computing ratios Ω_b/Ω_{b'}. The asymptotics |x*_j| → 0, j ∈ [n], is limiting but still makes sense for s growing with n (recall that we fix ‖x*‖₂ = 1 w.l.o.g.).
Optimal choice of t and μ. It turns out that the optimal choice of (t, μ) minimizing Ψ_b/λ_b coincides with the solution of an instance of the classical Lloyd-Max quantization problem [19, 20] stated below. Let h be a random variable with finite variance and Q the quantization map from above:
$$\min_{t,\mu}\ \mathbb{E}\big[\{h - Q(h;\,t,\mu)\}^2\big] = \min_{t,\mu}\ \mathbb{E}\Big[\Big\{h - \operatorname{sign}(h)\sum_{k=1}^{K}\mu_k\,\mathbb{I}(|h| \in R_k(t))\Big\}^2\Big]. \qquad (13)$$
Problem (13) can be seen as a one-dimensional k-means problem at the population level, and it is solved in practice by an alternating scheme similar to that used for k-means. For h from a log-concave distribution (e.g. Gaussian) that scheme can be shown to deliver the global optimum [12].
Theorem 1. Consider the minimization problem min_{t,μ} Ψ_b(t, μ)/λ_b(t, μ). Its minimizer (t*, μ*) equals that of the Lloyd-Max problem (13) for h ∼ N(0, 1 + σ²). Moreover,
$$\Omega_b(t^*, \mu^*) = \lambda_b(t^*, \mu^*)/\Psi_b(t^*, \mu^*) \;\propto\; \Omega_{b,0}(t_0^*, \mu_0^*)\big/\sqrt{\sigma^2 + 1},$$
where Ω_{b,0}(t₀*, μ₀*) denotes the value of Ω_b for σ = 0 evaluated at (t₀*, μ₀*), the choice of (t, μ) minimizing Ψ_b/λ_b for σ = 0.
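The alternating scheme for (13) is just a one-dimensional Lloyd (k-means) iteration. A sample-based sketch for the symmetric quantizer, operating on |h| with h ∼ N(0, 1 + σ²); the initialization, sample size and iteration count are arbitrary choices, and empty bins are not handled:

```python
import numpy as np

def lloyd_max(K, sigma, n_samples=200_000, iters=100, seed=0):
    """Alternating scheme for the Lloyd-Max problem (13): levels are set to bin
    means, thresholds to midpoints of adjacent levels, repeated until (near)
    convergence. Returns interior thresholds t (length K-1) and levels mu (K)."""
    rng = np.random.default_rng(seed)
    h = np.abs(rng.standard_normal(n_samples) * np.sqrt(1.0 + sigma**2))
    mu = np.quantile(h, (np.arange(K) + 0.5) / K)        # crude initialization
    t = 0.5 * (mu[:-1] + mu[1:])
    for _ in range(iters):
        t = 0.5 * (mu[:-1] + mu[1:])                     # thresholds = midpoints
        edges = np.concatenate(([0.0], t, [np.inf]))
        k = np.searchsorted(edges, h, side="right") - 1
        mu = np.array([h[k == j].mean() for j in range(K)])  # levels = bin means
    return t, mu
```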
Regarding the choice of (t, μ), the result of Theorem 1 may not come as a surprise, as the entries of y are i.i.d. N(0, 1 + σ²). It is less immediate, though, that this specific choice can also be motivated as the one leading to the minimization of the error bound (11). Furthermore, Theorem 1 implies that the relative performance of b- and b'-bit measurements does not depend on σ as long as the respective optimal choice of (t, μ) is used, which requires σ to be known. Theorem 1 provides an explicit expression for Ω_b that is straightforward to compute. The following table lists ratios Ω_b/Ω_{b'} for selected values of b and b'.

                           b = 1, b' = 2     b = 2, b' = 3     b = 3, b' = 4
  Ω_b/Ω_{b'}:                  1.178             1.046             1.013
  required for b' ≻ b:     √2 ≈ 1.414       √(3/2) ≈ 1.225    √(4/3) ≈ 1.155

These figures suggest that the smaller b, the better the performance for a given budget of bits B.
Beyond additive noise. Additive Gaussian noise is perhaps the most studied form of perturbation, but one can of course think of numerous other mechanisms whose effect can be analyzed on the basis of the same scheme used for additive noise, as long as it is feasible to obtain the corresponding expressions for λ and Ψ. We here do so for the following mechanisms acting after quantization.
(I) Random bin flip. For i ∈ [m]: with probability 1 − p, y_i remains unchanged. With probability p, y_i is changed to an element from (−M ∪ M) \ {y_i} uniformly at random.
(II) Adversarial bin flip. For i ∈ [m]: write y_i = q μ_k for q ∈ {−1, 1} and μ_k ∈ M. With probability 1 − p, y_i remains unchanged. With probability p, y_i is changed to −q μ_K.
Note that for b = 1, (I) and (II) coincide (sign flip with probability p). Depending on the magnitude of p, the corresponding value λ = λ_{b,p} may even be negative, which is unlike the case of additive noise. Recall that the error bound (11) requires λ > 0. Borrowing terminology from robust statistics, we consider p̄_b = min{p : λ_{b,p} ≤ 0} as the breakdown point, i.e. the (expected) proportion of contaminated observations that can still be tolerated so that (11) continues to hold. Mechanism (II) produces a natural counterpart of gross corruptions in the standard setting (1). It can be shown that among all maps −M ∪ M → −M ∪ M applied randomly to the observations with a fixed probability, (II) maximizes the ratio Ψ/λ, hence the attribute "adversarial". In Figure 1 we display Ψ_{b,p}/λ_{b,p} for b ∈ {1, 2, 3, 4} for both (I) and (II). The table below lists the corresponding breakdown points. For simplicity, (t, μ) are not optimized but set to the optimal (in the sense of Lloyd-Max) choice (t₀*, μ₀*) in the noiseless case. The underlying derivations can be found in the supplement.
          b = 1    b = 2    b = 3    b = 4
  (I)  p̄_b:   1/2      3/4      7/8     15/16
  (II) p̄_b:   1/2      0.42     0.36     0.31
Figure 1 and the table provide one more argument in favour of one-bit measurements, as they offer better robustness vis-a-vis adversarial corruptions. In fact, once the fraction of such corruptions reaches 0.2, b = 1 performs best, even on the per-measurement scale. For the milder corruption scheme (I), b = 2 turns out to be the best choice for significant but moderate p.
[Figure 1: two panels plotting Ψ_{b,p}/λ_{b,p} (log₁₀-scale) for b ∈ {1, 2, 3, 4} against the fraction of bin flips (L) and the fraction of gross corruptions (R); the b = 3 and b = 4 curves roughly overlap in the left panel.]
Figure 1: Ψ_{b,p}/λ_{b,p} (log₁₀-scale), b ∈ {1, 2, 3, 4}, p ∈ [0, 0.5] for mechanisms (I, L) and (II, R).
4 Scale estimation
In Section 2, we have decomposed x* = x*_u ψ* into a product of a unit vector x*_u and a scale parameter ψ* > 0. We have pointed out that x*_u can be estimated by b-bit Marginal Regression separately from ψ*, since the latter can be absorbed into the definition of the bins {R_k}. Accordingly, we may estimate x* as x̂ = x̂_u ψ̂, with x̂_u and ψ̂ estimating x*_u and ψ*, respectively. We here consider the maximum likelihood estimator (MLE) for ψ*, following [15], which studied the estimation of the scale parameter for the entire α-stable family of distributions. The work of [15] was motivated by a different line of work, a one-scan 1-bit CS algorithm [16] based on α-stable designs [17].
First, we consider the case σ = 0, so that the {y_i} are i.i.d. N(0, (ψ*)²). The likelihood function is
$$L(\psi) = \prod_{i=1}^{m}\prod_{k=1}^{K} \mathrm{P}(|y_i| \in R_k)^{\,\mathbb{I}(|y_i| \in R_k)} = \prod_{k=1}^{K}\big\{2(\Phi(t_k/\psi) - \Phi(t_{k-1}/\psi))\big\}^{m_k}, \qquad (14)$$
where m_k = |{i : |y_i| ∈ R_k}|, k ∈ [K], and Φ denotes the standard Gaussian cdf. Note that for K = 1, L(ψ) is constant (i.e. does not depend on ψ), which confirms that for b = 1 it is impossible to recover ψ*. For K = 2 (i.e. b = 2), the MLE has a simple closed-form expression given by ψ̂ = t₁/Φ⁻¹(0.5(1 + m₁/m)). The following tail bound establishes fast convergence of ψ̂ to ψ*.
Proposition 4. Let ε ∈ (0, 1) and c = 2{φ'(t₁/ψ*)}², where φ' denotes the derivative of the standard Gaussian pdf. With probability at least 1 − 2 exp(−c m ε²), we have |ψ̂/ψ* − 1| ≤ ε.
The exponent c is maximized for t₁ = ψ* and becomes smaller as t₁/ψ* moves away from 1. While scale estimation from 2-bit measurements is possible, convergence can be slow if t₁ is not well chosen. For b ≥ 3, convergence can be faster but the MLE is not available in closed form [15].
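The 2-bit MLE is immediate to compute. A sketch, under the assumption (ours, for illustration) that the two magnitude levels are distinct and the smaller one actually occurs among the observations:

```python
import numpy as np
from scipy.stats import norm

def scale_mle_2bit(y, t1):
    """Closed-form noiseless-case MLE of psi* from 2-bit measurements:
    psihat = t1 / Phi^{-1}(0.5 * (1 + m1/m)), m1 = #{i : |y_i| in R_1}.
    Identifies R_1 as the observations at the smaller magnitude level."""
    m = len(y)
    m1 = np.sum(np.isclose(np.abs(y), np.min(np.abs(y))))
    return t1 / norm.ppf(0.5 * (1.0 + m1 / m))
```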
We now turn to the case σ > 0. The MLE based on (14) is no longer consistent. If x*_u is known, then the joint likelihood for (ψ*, σ) is given by
$$L(\psi, \tilde{\sigma}) = \prod_{i=1}^{m}\left\{\Phi\!\Big(\frac{u_i - \psi\langle a_i, x_u^*\rangle}{\tilde{\sigma}}\Big) - \Phi\!\Big(\frac{l_i - \psi\langle a_i, x_u^*\rangle}{\tilde{\sigma}}\Big)\right\}, \qquad (15)$$
where [l_i, u_i] denotes the interval the i-th observation is contained in before quantization, i ∈ [m]. It is not clear to us whether the likelihood is log-concave, which would ensure that the global optimum can be obtained by convex programming. Empirically, we have not encountered any issue with spurious local minima when using σ = 0 and ψ̃ as the MLE from the noiseless case as starting point. The only issue with (15) we are aware of concerns the case in which there exists ψ so that ψ⟨a_i, x*_u⟩ ∈ [l_i, u_i], i ∈ [m]. In this situation, the MLE for σ equals zero and the MLE for ψ may not be unique. However, this is a rather unlikely scenario as long as there is a noticeable noise level. As x*_u is typically unknown, we may follow the plug-in principle, replacing x*_u by an estimator x̂_u.
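A minimal sketch of the plug-in MLE based on (15), using a generic optimizer; the solver choice and the warm start follow the discussion above, but nothing here is prescribed by the paper (in particular, no positivity constraints are enforced in this sketch):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_lik(params, A, lo, hi, xu):
    """Negative log of the interval likelihood (15); params = (psi, sigma~).
    lo, hi hold the per-observation interval endpoints [l_i, u_i]."""
    psi, sig = params
    z = psi * (A @ xu)
    p = norm.cdf((hi - z) / sig) - norm.cdf((lo - z) / sig)
    return -np.sum(np.log(np.maximum(p, 1e-300)))

def fit_scale_noise(A, lo, hi, xu, psi0, sig0=1e-2):
    """Plug-in MLE for (psi*, sigma), warm-started at the noiseless-case psi."""
    res = minimize(neg_log_lik, x0=np.array([psi0, sig0]),
                   args=(A, lo, hi, xu), method="Nelder-Mead")
    return res.x  # (psihat, sigmahat)
```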
5 Experiments
We here provide numerical results supporting and illustrating some of the key points made in the previous sections. We also compare b-bit Marginal Regression to alternative recovery algorithms.
Setup. Our simulations follow model (1) with n = 500, s ∈ {10, 20, ..., 50}, σ ∈ {0, 1, 2} and b ∈ {1, 2}. Regarding x*, the support and its signs are selected uniformly at random, while the absolute magnitudes of the entries corresponding to the support are drawn from the uniform distribution on [β̄, 2β̄], where β̄ = f · (1/Ω_{1,σ})√(log(n)/m) and m = f²(1/Ω_{1,σ})² s log n, with f ∈ {1.5, 3, 4.5, ..., 12} controlling the signal strength. The resulting signal is then normalized to unit 2-norm. Before normalization, the norm of the signal lies in [1, √2] by construction, which ensures that as f increases the signal strength condition (12) is satisfied with increasing probability. For b = 2, we use Lloyd-Max quantization for a N(0, 1)-random variable, which is optimal for σ = 0, but not for σ > 0. Each possible configuration for s, f and σ is replicated 20 times. Due to space limits, a representative subset of the results is shown; the rest can be found in the supplement.
Empirical verification of the analysis in Section 3. The experiments reveal that the predicted relative performance of 1-bit and 2-bit measurements for estimating x* closely agrees with what is observed empirically, as can be seen in Figure 2.
Estimation of the scale and the noise level. Figure 3 suggests that the plug-in MLE for (ψ* = ‖x*‖₂, σ) is a suitable approach, at least as long as ψ*/σ is not too small. For σ = 2, the plug-in MLE for ψ* appears to have a noticeable bias, as it tends to 0.92 instead of 1 for increasing f (and thus increasing m). Observe that for σ = 0, convergence to the true value 1 is slower than for σ = 1,
[Figure 2: four panels (σ = 0 with s = 10 and s = 50; σ = 1, s = 50; σ = 2, s = 50) plotting log₂(error) against f for b = 1 and b = 2, together with the 'required improvement' and 'predicted improvement' curves.]
Figure 2: Average ℓ₂-estimation errors ‖x* − x̂‖₂ for b = 1 and b = 2 on the log₂-scale in dependence of the signal strength f. The curve 'predicted improvement' (of b = 2 vs. b = 1) is obtained by scaling the ℓ₂-estimation error by the factor predicted by the theory of Section 3. Likewise, the curve 'required improvement' results by scaling the error of b = 1 by 1/√2 and indicates what would be required of b = 2 to improve over b = 1 at the level of the total #bits.
[Figure 3: two panels plotting the estimated norm of x* and the estimated noise level against f, for σ ∈ {0, 1, 2} and s = 50.]
Figure 3: Estimation of ψ* = ‖x*‖₂ (here 1) and σ. The curves depict the average of the plug-in MLE discussed in Section 4, while the bars indicate ±1 standard deviation.
while σ is over-estimated (at about 0.2) for small f. The above two issues are presumably a plug-in effect, i.e. a consequence of using x̂_u in place of x*_u.
b-bit Marginal Regression and alternative recovery algorithms. We compare the ℓ₂-estimation error of b-bit Marginal Regression to that of several common recovery algorithms. Compared to apparently more principled methods, which try to enforce agreement of Q(y) and Q(Ax̂) w.r.t. the Hamming distance (or a surrogate thereof), b-bit Marginal Regression can be seen as a crude approach, as it is based on maximizing the inner product between y and Ax. One may thus expect that its performance is inferior. In summary, our experiments confirm that this is true in low-noise settings, but not so if the noise level is substantial. Below we briefly present the alternatives that we consider.
Plan-Vershynin: The approach in [23] based on (7), which only differs in that the constraint set results from a relaxation. As shown in Figure 4, the performance is similar though slightly inferior.
IHT-quadratic: Standard Iterative Hard Thresholding based on quadratic loss [1]. As pointed out above, b-bit Marginal Regression can be seen as a one-step version of Iterative Hard Thresholding.
IHT-hinge (b = 1): The variant of Iterative Hard Thresholding for binary observations using a hinge-loss-type loss function, as proposed in [11].
SVM (b = 1): Linear SVM with squared hinge loss and an ℓ₁-penalty, implemented in LIBLINEAR [6]. The cost parameter is chosen from (1/√(m log m))·{2⁻³, 2⁻², ..., 2³} by 5-fold cross-validation.
IHT-Jacques (b = 2): A variant of Iterative Hard Thresholding for quantized observations based on a specific piecewise linear loss function [9].
SVM-type (b = 2): This approach is based on solving the following convex optimization problem:
$$\min_{x,\{\xi_i\}}\ \lambda\|x\|_1 + \sum_{i=1}^{m}\xi_i \quad \text{subject to} \quad l_i - \xi_i \le \langle a_i, x\rangle \le u_i + \xi_i,\ \ \xi_i \ge 0,\ i \in [m],$$
where [l_i, u_i] is the bin observation i is assigned to. The essential idea is to enforce consistency of the observed and predicted bin assignments up to slacks {ξ_i}, while promoting sparsity of the solution via an ℓ₁-penalty. The parameter λ is chosen from √(m log m)·{2⁻¹⁰, 2⁻⁹, ..., 2³} by 5-fold cross-validation; a linear-programming sketch of this formulation is given below.
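The SVM-type program becomes a linear program after the standard split x = x⁺ − x⁻. A sketch using scipy's linprog; the variable ordering and solver defaults are arbitrary choices of ours:

```python
import numpy as np
from scipy.optimize import linprog

def svm_type_b2(A, lo, hi, lam):
    """LP: variables (x+, x-, xi) >= 0, minimize lam*1'(x+ + x-) + 1'xi
    subject to lo - xi <= A(x+ - x-) <= hi + xi."""
    m, n = A.shape
    c = np.concatenate([lam * np.ones(2 * n), np.ones(m)])
    # Two one-sided constraints:  A(x+ - x-) - xi <= hi,  -A(x+ - x-) - xi <= -lo
    A_ub = np.block([[A, -A, -np.eye(m)],
                     [-A, A, -np.eye(m)]])
    b_ub = np.concatenate([hi, -lo])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n + m))
    xpm = res.x
    return xpm[:n] - xpm[n:2 * n]
```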
Turning to the results as depicted by Figure 4, the difference between a noiseless (σ = 0) and a heavily noisy setting (σ = 2) is perhaps most striking.
σ = 0: both IHT variants significantly outperform b-bit Marginal Regression. By comparing errors for IHT, b = 2 can be seen to improve over b = 1 at the level of the total #bits.
σ = 2: b-bit Marginal Regression is on par with the best performing methods. IHT-quadratic for b = 2 only achieves a moderate reduction in error over b = 1, while IHT-hinge is supposedly affected by convergence issues. Overall, the results suggest that a setting with substantial noise favours a crude approach (low-bit measurements and conceptually simple recovery algorithms).
[Figure 4: four panels plotting log₂(error) against f for Marginal Regression, Plan-Vershynin, IHT-quadratic, IHT-hinge/IHT-Jacques, and the SVM/SVM-type approaches.]
Figure 4: Average ℓ₂-estimation errors for several recovery algorithms on the log₂-scale in dependence of the signal strength f. We contrast σ = 0 (L) vs. σ = 2 (R), and b = 1 (T) vs. b = 2 (B).
6 Conclusion
Bridging Marginal Regression and a popular approach to 1-bit CS due to Plan & Vershynin, we have considered signal recovery from b-bit quantized measurements. The main finding is that for b-bit Marginal Regression it is not beneficial to increase b beyond 2. A compelling argument for b = 2 is the fact that the norm of the signal can be estimated, unlike in the case b = 1. Compared to high-precision measurements, 2-bit measurements also exhibit strong robustness properties. It is of interest whether, and under what circumstances, the conclusion may differ for other recovery algorithms.
Acknowledgement. This work is partially supported by NSF-Bigdata-1419210, NSF-III-1360971, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137.
References
[1] T. Blumensath and M. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27:265–274, 2009.
[2] P. Boufounos and R. Baraniuk. 1-bit compressive sensing. In Information Science and Systems, 2008.
[3] E. Candes and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, 35:2313–2351, 2007.
[4] S. Chen and A. Banerjee. One-bit Compressed Sensing with the k-Support Norm. In AISTATS, 2015.
[5] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52:1289–1306, 2006.
[6] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[7] C. Genovese, J. Jin, L. Wasserman, and Z. Yao. A Comparison of the Lasso and Marginal Regression. Journal of Machine Learning Research, 13:2107–2143, 2012.
[8] S. Gopi, P. Netrapalli, P. Jain, and A. Nori. One-bit Compressed Sensing: Provable Support and Vector Recovery. In ICML, 2013.
[9] L. Jacques, K. Degraux, and C. De Vleeschouwer. Quantized iterative hard thresholding: Bridging 1-bit and high-resolution quantized compressed sensing. arXiv:1305.1786, 2013.
[10] L. Jacques, D. Hammond, and M. Fadili. Dequantizing compressed sensing: When oversampling and non-gaussian constraints combine. IEEE Transactions on Information Theory, 57:559–571, 2011.
[11] L. Jacques, J. Laska, P. Boufounos, and R. Baraniuk. Robust 1-bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors. IEEE Transactions on Information Theory, 59:2082–2102, 2013.
[12] J. Kieffer. Uniqueness of locally optimal quantizer for log-concave density and convex error weighting function. IEEE Transactions on Information Theory, 29:42–47, 1983.
[13] J. Laska and R. Baraniuk. Regime change: Bit-depth versus measurement-rate in compressive sensing. arXiv:1110.3450, 2011.
[14] J. Laska, P. Boufounos, M. Davenport, and R. Baraniuk. Democracy in action: Quantization, saturation, and compressive sensing. Applied and Computational Harmonic Analysis, 31:429–443, 2011.
[15] P. Li. Binary and Multi-Bit Coding for Stable Random Projections. arXiv:1503.06876, 2015.
[16] P. Li. One scan 1-bit compressed sensing. Technical report, arXiv:1503.02346, 2015.
[17] P. Li, C.-H. Zhang, and T. Zhang. Compressed counting meets compressed sensing. In COLT, 2014.
[18] J. Liu and S. Wright. Robust dequantized compressive sensing. Applied and Computational Harmonic Analysis, 37:325–346, 2014.
[19] S. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28:129–137, 1982.
[20] J. Max. Quantizing for Minimum Distortion. IRE Transactions on Information Theory, 6:7–12, 1960.
[21] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26:301–321, 2008.
[22] Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66:1275–1297, 2013.
[23] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Transactions on Information Theory, 59:482–494, 2013.
[24] R. Zhu and Q. Gu. Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing. In ICML, 2015.
[25] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In: Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[26] M. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ₁-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55:2183–2202, 2009.
[27] C.-H. Zhang and T. Zhang. A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27:576–593, 2013.
[28] L. Zhang, J. Yi, and R. Jin. Efficient algorithms for robust one-bit compressive sensing. In ICML, 2014.
[29] T. Zhang. Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations. IEEE Transactions on Information Theory, 57:4689–4708, 2011.
LASSO with Non-linear Measurements is Equivalent
to One With Linear Measurements
Ehsan Abbasi
Department of Electrical Engineering
Caltech
eabbasi@caltech.edu
Christos Thrampoulidis
Department of Electrical Engineering
Caltech
cthrampo@caltech.edu
Babak Hassibi
Department of Electrical Engineering
Caltech
hassibi@caltech.edu *
Abstract
Consider estimating an unknown, but structured (e.g. sparse, low-rank, etc.) signal x₀ ∈ ℝⁿ from a vector y ∈ ℝᵐ of measurements of the form y_i = g_i(a_i^⊤ x₀), where the a_i's are the rows of a known measurement matrix A, and g(·) is a (potentially unknown) nonlinear and random link-function. Such measurement functions could arise in applications where the measurement device has nonlinearities and uncertainties. It could also arise by design, e.g., g_i(x) = sign(x + z_i) corresponds to noisy 1-bit quantized measurements. Motivated by the classical work of Brillinger, and more recent work of Plan and Vershynin, we estimate x₀ via solving the Generalized LASSO, i.e., x̂ := arg min_x ‖y − Ax‖₂ + λf(x), for some regularization parameter λ > 0 and some (typically non-smooth) convex regularizer f(·) that promotes the structure of x₀, e.g. the ℓ₁-norm, the nuclear norm, etc. While this approach seems to naively ignore the nonlinear function g(·), both Brillinger (in the non-constrained case) and Plan and Vershynin have shown that, when the entries of A are iid standard normal, this is a good estimator of x₀ up to a constant of proportionality μ, which only depends on g(·). In this work, we considerably strengthen these results by obtaining explicit expressions for ‖x̂ − μx₀‖₂, for the regularized Generalized LASSO, that are asymptotically precise when m and n grow large. A main result is that the estimation performance of the Generalized LASSO with non-linear measurements is asymptotically the same as one whose measurements are linear y_i = μa_i^⊤ x₀ + σz_i, with μ = E[γ g(γ)] and σ² = E[(g(γ) − μγ)²], and γ standard normal. To the best of our knowledge, the derived expressions on the estimation performance are the first-known precise results in this context. One interesting consequence of our result is that the optimal quantizer of the measurements that minimizes the estimation error of the Generalized LASSO is the celebrated Lloyd-Max quantizer.
1 Introduction
Non-linear Measurements. Consider the problem of estimating an unknown signal vector x0 ? Rn
from a vector y = (y1 , y2 , . . . , ym )T of m measurements taking the following form:
$$y_i = g_i(a_i^\top x_0),\ \ i = 1, 2, \ldots, m. \qquad (1)$$
Here, each ai represents a (known) measurement vector. The gi ?s are independent copies of a
(generically random) link function g. For instance, gi (x) = x + zi , with say zi being normally
* This work was supported in part by the National Science Foundation under grants CNS-0932428, CCF-1018927, CCF-1423663 and
CCF-1409204, by a grant from Qualcomm Inc., by NASA?s Jet Propulsion Laboratory through the President and Directors Fund, by King
Abdulaziz University, and by King Abdullah University of Science and Technology.
distributed, recovers the standard linear regression setup with gaussian noise. In this paper, we are
particularly interested in scenarios where g is non-linear. Notable examples include g(x) = sign(x)
(or gi (x) = sign(x+zi )) and g(x) = (x)+ , corresponding to 1-bit quantized (noisy) measurements,
and, to the censored Tobit model, respectively. Depending on the situation, g might be known or
unspecified. In the statistics and econometrics literature, the measurement model in (1) is popular
under the name single-index model and several aspects of it have been well-studied, e.g. [4,5,14,15]1 .
Structured Signals. It is typical that the unknown signal x₀ obeys some sort of structure. For instance, it might be sparse, i.e., only a few, k ≪ n, of its entries are non-zero; or, it might be that x₀ = vec(X₀), where X₀ ∈ ℝ^{√n×√n} is a matrix of low rank r ≪ n. To exploit this information it is typical to associate with the structure of x₀ a properly chosen function f : ℝⁿ → ℝ, which we
refer to as the regularizer. Of particular interest are convex and non-smooth such regularizers, e.g.
the `1 -norm for sparse signals, the nuclear-norm for low-rank ones, etc. Please refer to [1, 6, 13] for
further discussions.
An Algorithm for Linear Measurements: The Generalized LASSO. When the link function is linear, i.e. g_i(x) = x + z_i, perhaps the most popular way of estimating x₀ is via solving the Generalized LASSO algorithm:
$$\hat{x} := \arg\min_x\ \|y - Ax\|_2 + \lambda f(x). \qquad (2)$$
Here, A = [a₁, a₂, ..., a_m]^⊤ ∈ ℝ^{m×n} is the known measurement matrix and λ > 0 is a regularizer parameter. This is often referred to as the ℓ₂-LASSO or the square-root LASSO [3], to distinguish it from the one solving min_x ½‖y − Ax‖₂² + λf(x) instead. Our results can be adapted to this latter version, but for concreteness, we restrict attention to (2) throughout. The acronym LASSO for (2) was introduced in [22] for the special case of ℓ₁-regularization; (2) is a natural generalization to other kinds of structures and includes the group-LASSO [25] and the fused-LASSO [23] as special cases. We often drop the term "Generalized" and refer to (2) simply as the LASSO.
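For illustration, (2) with f = ℓ₁ can be solved directly with a generic convex solver. A minimal sketch using cvxpy; the library choice is ours, not the paper's:

```python
import cvxpy as cp

def generalized_lasso(A, y, lam):
    """l2-LASSO (2): minimize ||y - A x||_2 + lam * f(x), here with f = l1."""
    n = A.shape[1]
    x = cp.Variable(n)
    objective = cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm1(x))
    cp.Problem(objective).solve()
    return x.value
```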
One popular measure of estimation performance of (2) is the squared error ‖x̂ − x₀‖₂². Recently, there have been significant advances on establishing tight bounds and even precise characterizations of this quantity in the presence of linear measurements [2, 10, 16, 18, 19, 21]. Such precise results have been core to building a better understanding of the behavior of the LASSO, and, in particular, of the exact role played by the choice of the regularizer f (in accordance with the structure of x₀), by the number of measurements m, by the value of λ, etc. In certain cases, they even provide us with useful insights into practical matters such as the tuning of the regularizer parameter.
Using the LASSO for Non-linear Measurements? The LASSO is by nature tailored to a linear model for the measurements. Indeed, the first term of the objective function in (2) tries to fit Ax to the observed vector y, presuming that this is of the form y_i = a_i^⊤ x₀ + noise. Of course, no one stops us from continuing to use it even in cases where y_i = g(a_i^⊤ x₀) with g being non-linear². But the question then becomes: can there be any guarantees that the solution x̂ of the Generalized LASSO is still a good estimate of x₀?
The question just posed was first studied back in the early 80's by Brillinger [5], who provided answers in the case of solving (2) without a regularizer term. This, of course, corresponds to standard Least Squares (LS). Interestingly, he showed that when the measurement vectors are Gaussian, then the LS solution is a consistent estimate of x₀, up to a constant of proportionality μ, which only depends on the link-function g. The result is sharp, but only under the assumption that the number of measurements m grows large, while the signal dimension n stays fixed, which was the typical setting of interest at the time. In the world of structured signals and high-dimensional measurements, the problem was only very recently revisited by Plan and Vershynin [17]. They consider a constrained version of the Generalized LASSO, in which the regularizer is essentially replaced by a constraint, and derive upper bounds on its performance. The bounds are not tight (they involve absolute constants), but they demonstrate some key features: i) the solution x̂ of the constrained LASSO is a good estimate of x₀ up to the same constant of proportionality μ that appears in Brillinger's result; ii) thus, ‖x̂ − μx₀‖₂² is a natural measure of performance; iii) estimation is possible even with m < n measurements by taking advantage of the structure of x₀.
¹ The single-index model is a classical topic and can also be regarded as a special case of what is known as the sufficient dimension reduction problem. There is extensive literature on both subjects; unavoidably, we only refer to the directly relevant works here.
² Note that the Generalized LASSO in (2) does not assume knowledge of g. All that is assumed is the availability of the measurements y_i. Thus, the link-function might as well be unknown or unspecified.
[Figure 1: plot of ‖μ⁻¹x̂ − x₀‖₂² against λ, with curves for the non-linear measurements, the corresponding linear ones, and the prediction, in the two regimes m < n and m > n.]
Figure 1: Squared error of the ℓ₁-regularized LASSO with non-linear measurements and with corresponding linear ones, as a function of the regularizer parameter λ; both compared to the asymptotic prediction. Here, g_i(x) = sign(x + 0.3z_i) with z_i ∼ N(0, 1). The unknown signal x₀ is of dimension n = 768 and has ⌈0.15n⌉ non-zero entries (see Sec. 2.2.2 for details). The different curves correspond to ⌈0.75n⌉ and ⌈1.2n⌉ measurements, respectively. Simulation points are averages over 20 problem realizations.
1.1 Summary of Contributions
Inspired by the work of Plan and Vershynin [17], and, motivated by recent advances on the precise
analysis of the Generalized LASSO with linear measurements, this paper extends these latter results
to the case of non-linear mesaurements. When the measurement matrix A has entries i.i.d. Gaussian
(henceforth, we assume this to be the case without further reference), and the estimation performance
is measured in a mean-squared-error sense, we are able to precisely predict the asymptotic behavior
of the error. The derived expression accurately captures the role of the link function g, the particular
structure of x0 , the role of the regularizer f , and, the value of the regularizer parameter ?. Further,
it holds for all values of ?, and for a wide class of functions f and g.
Interestingly, our result shows in a very precise manner that in large dimensions, modulo the information about the magnitude of x₀, the LASSO treats non-linear measurements exactly as if they were scaled and noisy linear measurements with scaling factor μ and noise variance σ², defined as
$$\mu := \mathbb{E}[\gamma\, g(\gamma)], \quad \text{and} \quad \sigma^2 := \mathbb{E}[(g(\gamma) - \mu\gamma)^2], \qquad \text{for } \gamma \sim \mathcal{N}(0,1), \qquad (3)$$
where the expectation is with respect to both γ and g. In particular, when g is such that μ ≠ 0³, then, the estimation performance of the Generalized LASSO with measurements of the form y_i = g_i(a_i^⊤ x₀) is asymptotically the same as if the measurements were rather of the form y_i = μa_i^⊤ x₀ + σz_i, with μ, σ² as in (3) and z_i standard gaussian noise.
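The parameters μ and σ² in (3) are easy to evaluate by Monte Carlo for any link function; for g = sign one should recover μ ≈ √(2/π). A sketch:

```python
import numpy as np

def mu_sigma(g, n_samples=1_000_000, seed=0):
    """Monte Carlo estimates of mu = E[gamma * g(gamma)] and
    sigma^2 = E[(g(gamma) - mu * gamma)^2] from (3)."""
    rng = np.random.default_rng(seed)
    gam = rng.standard_normal(n_samples)
    vals = g(gam)
    mu = np.mean(gam * vals)
    sigma2 = np.mean((vals - mu * gam) ** 2)
    return mu, sigma2

print(mu_sigma(np.sign))  # mu close to sqrt(2/pi) ~ 0.7979
```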
Recent analysis of the squared-error of the LASSO, when used to recover structured signals from
noisy linear observations, provides us with either precise predictions (e.g. [2, 20]), or in other cases,
with tight upper bounds (e.g. [10, 16]). Owing to the established relation between non-linear and
(corresponding) linear measurements, such results also characterize the performance of the LASSO
in the presence of nonlinearities. We remark that some of the error formulae derived here in the
general context of non-linear measurements, have not been previously known even under the prism
of linear measurements. Figure 1 serves as an illustration; the error with non-linear measurements
matches well with the error of the corresponding linear ones and both are accurately predicted by
our analytic expression.
Under the generic model in (1), which allows for g to even be unspecified, x₀ can, in principle, be estimated only up to a constant of proportionality [5, 15, 17]. For example, if g is unknown then any information about the norm ‖x₀‖₂ could be absorbed in the definition of g. The same is true when g(x) = sign(x), even though g might be known here. In these cases, what becomes important is the direction of x₀. Motivated by this, and in order to simplify the presentation, we have assumed throughout that x₀ has unit Euclidean norm⁴, i.e. ‖x₀‖₂ = 1.
³ This excludes for example link functions g that are even, but also some other not so obvious cases [11, Sec. 2.2]. For a few special cases, e.g. sparse recovery with binary measurements y_i [24], different methodologies than the LASSO have been recently proposed that do not require μ ≠ 0.
⁴ In [17, Remark 1.8], they note that their results can be easily generalized to the case when ‖x₀‖₂ ≠ 1 by simply redefining g̃(x) = g(‖x₀‖₂ x) and accordingly adjusting the values of the parameters μ and σ² in (3). The very same argument is also true in our case.
1.2 Discussion of Relevant Literature
Extending an Old Result. Brillinger [5] identified the asymptotic behavior of the estimation error of the LS solution x̂_LS = (A^⊤A)⁻¹A^⊤y by showing that, when n (the dimension of x₀) is fixed,
$$\lim_{m\to\infty}\ \sqrt{m}\,\|\hat{x}_{LS} - \mu x_0\|_2 = \sigma\sqrt{n}, \qquad (4)$$
where μ and σ are the same as in (3). Our result can be viewed as a generalization of the above in several directions. First, we extend (4) to the regime where m/n = δ ∈ (1, ∞) and both grow large, by showing that
$$\lim_{n\to\infty}\ \|\hat{x}_{LS} - \mu x_0\|_2 = \frac{\sigma}{\sqrt{\delta - 1}}. \qquad (5)$$
Second, and most importantly, we consider solving the Generalized LASSO instead, of which LS is only a very special case. This allows versions of (5) where the error is finite even when δ < 1 (e.g., see (8)). Note the additional challenges faced when considering the LASSO: i) x̂ no longer has a closed-form expression, ii) the result needs to additionally capture the roles of x₀, f, and λ.
Motivated by Recent Work. Plan and Vershynin consider a constrained Generalized LASSO:
$$\hat{x}_{\text{C-LASSO}} = \arg\min_{x\in\mathcal{K}}\ \|y - Ax\|_2, \qquad (6)$$
with y as in (1) and K ⊆ ℝⁿ some known set (not necessarily convex). In its simplest form, their result shows that when m ≳ D_K(μx₀), then with high probability,
$$\|\hat{x}_{\text{C-LASSO}} - \mu x_0\|_2 \;\lesssim\; \frac{\sqrt{D_{\mathcal{K}}(\mu x_0)} + \eta}{\sqrt{m}}. \qquad (7)$$
Here, D_K(μx₀) is the Gaussian width, a specific measure of complexity of the constrained set K when viewed from μx₀. For our purposes, it suffices to remark that if K is properly chosen, and if μx₀ is on the boundary of K, then D_K(μx₀) is less than n. Thus, estimation is in principle possible with m < n measurements. The parameters μ and η that appear in (7) are the same as in (3) and η² := E[(g(γ) − μγ)²γ²]. Observe that, in contrast to (4) and to the setting of this paper, the result in (7) is non-asymptotic. Also, it suggests the critical role played by μ and η. On the other hand, (7) is only an upper bound on the error, and also, it suffers from unknown absolute proportionality constants (hidden in ≲).
Moving the analysis into an asymptotic setting, our work expands upon the result of [17]. First, we consider the regularized LASSO instead, which is more commonly used in practice. Most importantly, we improve the loose upper bounds into precise expressions. In turn, this proves in an exact manner the role played by μ and σ², of which (7) is only indicative. For a direct comparison with (7), we mention the following result, which follows from our analysis (we omit the proof for brevity). Assume K is convex, m/n = δ ∈ (0, ∞), D_K(μx₀)/n = ε ∈ (0, 1] and n → ∞. Also, δ > ε. Then, (7) yields an upper bound Cη√(ε/δ) to the error, for some constant C > 0. Instead, we show
$$\|\hat{x}_{\text{C-LASSO}} - \mu x_0\|_2 \;\approx\; \sigma\sqrt{\frac{\varepsilon}{\delta - \varepsilon}}. \qquad (8)$$
Precise Analysis of the LASSO With Linear Measurements. The first precise error formulae were established in [2, 10] for the ℓ₂²-LASSO with ℓ₁-regularization. The analysis was based on the Approximate Message Passing (AMP) framework [9]. A more general line of work studies the problem using a recently developed framework termed the Convex Gaussian Min-max Theorem (CGMT) [19], which is a tight version of a classical Gaussian comparison inequality by Gordon [12]. The CGMT framework was initially used by Stojnic [18] to derive tight upper bounds on the constrained LASSO with ℓ₁-regularization; [16] generalized those to general convex regularizers and also to the ℓ₂-LASSO; the ℓ₂²-LASSO was studied in [21]. Those bounds hold for all values of SNR, but they become tight only in the high-SNR regime. A precise error expression for all values of SNR was derived in [20] for the ℓ₂-LASSO with ℓ₁-regularization under a gaussianity assumption on the distribution of the non-zero entries of x₀. When measurements are linear, our Theorem 2.3 generalizes this assumption. Moreover, our Theorem 2.2 provides error predictions for regularizers going beyond the ℓ₁-norm, e.g. the ℓ₁,₂-norm and the nuclear norm, which appear to be novel. When it comes to non-linear measurements, to the best of our knowledge, this paper is the first to derive asymptotically precise results on the performance of any LASSO-type program.
2 Results
2.1 Modeling Assumptions
Unknown structured signal. We let x₀ ∈ ℝⁿ represent the unknown signal vector. We assume that x₀ = x̄₀/‖x̄₀‖₂, with x̄₀ sampled from a probability density p_x̄₀ in ℝⁿ. Thus, x₀ is deterministically of unit Euclidean norm (this is mostly to simplify the presentation, see Footnote 4). Information about the structure of x₀ (and correspondingly of x̄₀) is encoded in p_x̄₀. E.g., to study an x₀ which is sparse, it is typical to assume that its entries are i.i.d. x̄₀,ᵢ ∼ (1 − ρ)δ₀ + ρ q_X̄₀, where ρ ∈ (0, 1) becomes the normalized sparsity level, q_X̄₀ is a scalar p.d.f. and δ₀ is the Dirac delta function⁵.
Regularizer. We consider convex regularizers f : ℝⁿ → ℝ.
Measurement matrix. The entries of A ∈ ℝ^{m×n} are i.i.d. N(0, 1).
Measurements and Link-function. We observe y = g⃗(Ax₀), where g⃗ is a (possibly random) map from ℝᵐ to ℝᵐ and g⃗(u) = [g₁(u₁), ..., g_m(u_m)]^⊤. Each g_i is i.i.d. from a real-valued random function g for which μ and σ² are defined in (3). We assume that μ and σ² are nonzero and bounded.
Asymptotics. We study a linear asymptotic regime. In particular, we consider a sequence of problem instances {x̄₀⁽ⁿ⁾, A⁽ⁿ⁾, f⁽ⁿ⁾, m⁽ⁿ⁾}_{n∈ℕ} indexed by n such that A⁽ⁿ⁾ ∈ ℝ^{m×n} has entries i.i.d. N(0, 1), f⁽ⁿ⁾ : ℝⁿ → ℝ is proper convex, and m := m(n) with m = δn, δ ∈ (0, ∞). We further require that the following conditions hold:
(a) x̄₀⁽ⁿ⁾ is sampled from a probability density p_x̄₀⁽ⁿ⁾ in ℝⁿ with one-dimensional marginals that are independent of n and have bounded second moments. Furthermore, n⁻¹‖x̄₀⁽ⁿ⁾‖₂² → σ_x̄² = 1 in probability.
(b) For any n ∈ ℕ and any ‖x‖₂ ≤ C, it holds that n^{−1/2} f(x) ≤ c₁ and n^{−1/2} max_{s∈∂f⁽ⁿ⁾(x)} ‖s‖₂ ≤ c₂, for constants c₁, c₂, C ≥ 0 independent of n.
In (a), convergence is in probability as n → ∞. The assumption σ_x̄² = 1 holds without loss of generality and is only necessary to simplify the presentation. In (b), ∂f(x) denotes the subdifferential of f at x. The condition itself is no more than a normalization condition on f.
Every such sequence {x̄₀⁽ⁿ⁾, A⁽ⁿ⁾, f⁽ⁿ⁾}_{n∈ℕ} generates a sequence {x₀⁽ⁿ⁾, y⁽ⁿ⁾}_{n∈ℕ}, where x₀⁽ⁿ⁾ := x̄₀⁽ⁿ⁾/‖x̄₀⁽ⁿ⁾‖₂ and y⁽ⁿ⁾ := g⃗(Ax₀⁽ⁿ⁾). When clear from the context, we drop the superscript (n).
2.2 Precise Error Prediction
Let {x̄₀⁽ⁿ⁾, A⁽ⁿ⁾, f⁽ⁿ⁾, y⁽ⁿ⁾}_{n∈ℕ} be a sequence of problem instances satisfying all the conditions above. With these, define the sequence {x̂⁽ⁿ⁾}_{n∈ℕ} of solutions to the corresponding LASSO problems for fixed λ > 0:
$$\hat{x}^{(n)} := \arg\min_x\ \frac{1}{\sqrt{n}}\left\{\|y^{(n)} - A^{(n)}x\|_2 + \lambda f^{(n)}(x)\right\}. \qquad (9)$$
The main contribution of this paper is a precise evaluation of lim_{n→∞} ‖μ⁻¹x̂⁽ⁿ⁾ − x₀⁽ⁿ⁾‖₂² with high probability over the randomness of A, of x̄₀, and of g.
2.2.1 General Result
To state the result in a general framework, we require a further assumption on p_x̄₀⁽ⁿ⁾ and f⁽ⁿ⁾. Later in this section we illustrate how this assumption can be naturally met. We write f* for the Fenchel conjugate of f, i.e., f*(v) := sup_x x^⊤v − f(x); also, we denote the Moreau envelope of f at v with index τ by e_{f,τ}(v) := min_x {½‖v − x‖₂² + τf(x)}.
Assumption 1. We say Assumption 1 holds if for all non-negative constants c₁, c₂, c₃ ∈ ℝ the point-wise limit of (1/n) e_{√n(f*)⁽ⁿ⁾, c₃}(c₁h + c₂x̄₀) exists with probability one over h ∼ N(0, I_n) and x̄₀ ∼ p_x̄₀⁽ⁿ⁾. Then, we denote the limiting value by F(c₁, c₂, c₃).
Theorem 2.1 (Non-linear = Linear). Consider the asymptotic setup of Section 2.1 and let Assumption 1 hold. Recall μ and σ² as in (3) and let x̂ be the minimizer of the Generalized LASSO in (9) for fixed λ > 0 and for measurements given by (1). Further let x̂_lin be the solution to the Generalized LASSO when used with linear measurements of the form y_lin = A(μx₀) + σz, where z has entries i.i.d. standard normal. Then, in the limit of n → ∞, with probability one,
$$\|\hat{x} - \mu x_0\|_2^2 = \|\hat{x}_{\text{lin}} - \mu x_0\|_2^2.$$
⁵ Such models have been widely used in the relevant literature, e.g. [7, 8, 10]. In fact, the results here continue to hold as long as the marginal distribution of x̄₀ converges to a given distribution (as in [2]).
Theorem 2.1 relates in a very precise manner the error of the Generalized LASSO under non-linear measurements to the error of the same algorithm when used under appropriately scaled noisy linear measurements. Theorem 2.2 below derives an asymptotically exact expression for the error.
Theorem 2.2 (Precise Error Formula). Under the same assumptions as Theorem 2.1 and with δ := m/n, it holds, with probability one,
$$\lim_{n\to\infty}\|\hat{x} - \mu x_0\|_2^2 = \alpha_*^2,$$
where α_* is the unique optimal solution to the convex program
$$\max_{0\le\beta\le 1}\ \min_{\tau\ge 0,\ \alpha\ge 0}\ \ \beta\sqrt{\alpha^{2}+\sigma^{2}}\,\sqrt{\delta}\ -\ \frac{\beta\sqrt{\delta}\,(\alpha^{2}-\mu^{2})}{2\tau}\ -\ \frac{\lambda^{2}\tau}{\beta\sqrt{\delta}}\,F\!\left(\frac{\beta}{\lambda},\ \frac{\mu\beta\sqrt{\delta}}{\lambda\tau},\ \frac{\beta\sqrt{\delta}}{\lambda\tau}\right). \qquad (10)$$
Also, the optimal cost of the LASSO in (9) converges to the optimal cost of the program in (10).
Under the stated conditions, Theorem 2.2 proves that the limit of ‖x̂ − μx₀‖₂ exists and is equal to the unique solution of the optimization program in (10). Notice that this is a deterministic and convex optimization, which only involves three scalar optimization variables. Thus, the optimal α_* can, in principle, be efficiently computed numerically. In many specific cases of interest, with some extra effort, it is possible to yield simpler expressions for α_*, e.g. see Theorem 2.3 below. The roles of the normalized number of measurements δ = m/n, of the regularizer parameter λ, and of g, through μ and σ², are explicit in (10); the structure of x₀ and the choice of the regularizer f are implicit in F. Figures 1 and 2 illustrate the accuracy of the prediction of the theorem in a number of different settings. The proofs of both theorems are deferred to Appendix A. In the next sections, we specialize Theorem 2.2 to the cases of sparse, group-sparse and low-rank signal recovery.
2.2.2 Sparse Recovery
Assume each entry x̄₀,ᵢ, i = 1, ..., n, is sampled i.i.d. from a distribution
$$p_{\bar{X}_0}(x) = (1-\rho)\cdot\delta_0(x) + \rho\cdot q_{\bar{X}_0}(x), \qquad (11)$$
where δ₀ is the Dirac delta function, ρ ∈ (0, 1) and q_X̄₀ is a probability density function with second moment normalized to 1/ρ, so that condition (a) of Section 2.1 is satisfied. Then, x₀ = x̄₀/‖x̄₀‖₂ is ρn-sparse on average and has unit Euclidean norm. Letting f(x) = ‖x‖₁ also satisfies condition (b). Let us now check Assumption 1. The Fenchel conjugate of the ℓ₁-norm is simply the indicator function of the ℓ∞ unit ball. Hence, without much effort,
$$\frac{1}{n}\,e_{\sqrt{n}(f^*)^{(n)},\,c_3}(c_1 h + c_2\bar{x}_0) = \frac{1}{2n}\sum_{i=1}^{n}\min_{|v_i|\le 1}\big(v_i - (c_1 h_i + c_2\bar{x}_{0,i})\big)^2 = \frac{1}{2n}\sum_{i=1}^{n}\eta^2(c_1 h_i + c_2\bar{x}_{0,i};\,1), \qquad (12)$$
where we have denoted
$$\eta(x;\,t) := \frac{x}{|x|}\,(|x| - t)_+ \qquad (13)$$
for the soft thresholding operator. An application of the weak law of large numbers shows that the limit of the expression in (12) equals F(c₁, c₂, c₃) := ½ E[η²(c₁h + c₂X̄₀; 1)], where the expectation is over h ∼ N(0, 1) and X̄₀ ∼ p_X̄₀. With all these, Theorem 2.2 is applicable. We have put in extra effort in order to obtain the following equivalent but more insightful characterization of the error, as stated below and proved in Appendix B.
Theorem 2.3 (Sparse Recovery). If δ > 1, then define λ_crit = 0. Otherwise, let τ_crit, λ_crit be the unique pair of solutions to the following set of equations:
$$\tau^2\delta = \sigma^2 + \mathbb{E}\big[(\eta(\tau h + \mu\bar{X}_0;\,\lambda\tau) - \mu\bar{X}_0)^2\big], \qquad (14)$$
$$\tau\delta = \mathbb{E}\big[\eta(\tau h + \mu\bar{X}_0;\,\lambda\tau)\,h\big], \qquad (15)$$
where h ∼ N(0, 1) and is independent of X̄₀ ∼ p_X̄₀. Then, for any λ > 0, with probability one,
$$\lim_{n\to\infty}\|\hat{x} - \mu x_0\|_2^2 = \begin{cases}\delta\tau_{crit}^2 - \sigma^2, & \lambda \le \lambda_{crit},\\ \delta\tau_*^2(\lambda) - \sigma^2, & \lambda \ge \lambda_{crit},\end{cases}$$
where τ_*²(λ) is the unique solution to (14).
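Numerically, τ_*(λ) can be obtained from (14) by scalar root-finding, with the expectation approximated by Monte Carlo. A sketch; sampling sizes, the bracketing interval and the sampler for X̄₀ are arbitrary choices, and a sign change in the bracket is assumed (which holds, e.g., for λ ≥ λ_crit):

```python
import numpy as np
from scipy.optimize import brentq

def solve_tau(lam, delta, sigma, mu, Xbar_samples, n_h=200_000, seed=0):
    """Solve (14) for tau_*(lam) by root-finding; the expectation over (h, Xbar_0)
    is estimated from i.i.d. samples."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(n_h)
    X = rng.choice(Xbar_samples, size=n_h)

    def eta(x, thr):  # soft thresholding operator (13)
        return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

    def gap(tau):
        err = np.mean((eta(tau * h + mu * X, lam * tau) - mu * X) ** 2)
        return tau**2 * delta - sigma**2 - err

    return brentq(gap, 1e-6, 1e3)
```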
[Figure 2: two panels, 'Sparse signal recovery' (L) and 'Group-sparse signal recovery' (R), plotting the squared error against λ together with the predictions of Thms. 2.3 and 2.2.]
Figure 2: Squared error of the LASSO as a function of the regularizer parameter, compared to the asymptotic predictions. Simulation points represent averages over 20 realizations. (a) Illustration of Thm. 2.3 for g(x) = sign(x), n = 512, p_X̄₀(+1) = p_X̄₀(−1) = 0.05, p_X̄₀(0) = 0.9, and two values of δ, namely 0.75 and 1.2. (b) Illustration of Thm. 2.2 for x₀ being group-sparse as in Section 2.2.3 and g_i(x) = sign(x + 0.3z_i). In particular, x₀ is composed of t = 512 blocks of block size b = 3. Each block is zero with probability 0.95, otherwise its entries are i.i.d. N(0, 1). Finally, δ = 0.75.
Figures 1 and 2(a) validate the prediction of the theorem for different signal distributions, namely
q_{X₀} being Gaussian and Bernoulli, respectively. For the case of compressed (δ < 1) measurements,
observe the two different regimes of operation, one for λ ≤ λ_crit and the other for λ ≥ λ_crit, precisely
as they are predicted by the theorem (see also [16, Sec. 8]). The special case of Theorem 2.3 in
which q_{X₀} is Gaussian has been previously studied in [20]. Otherwise, to the best of our knowledge,
this is the first precise analysis result for the ℓ₂-LASSO stated in that generality. An analogous result,
but via different analysis tools, has only been known for the ℓ₂²-LASSO, as appears in [2].
2.2.3 Group-Sparse Recovery
Let x₀ ∈ ℝⁿ be composed of t non-overlapping blocks of constant size b each, such that n = t·b.
Each block [x₀]ᵢ, i = 1, …, t, is sampled i.i.d. from a probability density in ℝᵇ: p_{X₀}(x) = (1 −
ρ)·δ₀(x) + ρ·q_{X₀}(x), x ∈ ℝᵇ, where ρ ∈ (0, 1). Thus, x₀ is ρt-block-sparse on average. We
operate in the regime of linear measurements m/n = δ ∈ (0, ∞). As is common, we use the
ℓ₁,₂-norm to induce block-sparsity, i.e., f(x) = Σ_{i=1}^{t} ‖[x]ᵢ‖₂; with this, (9) is often referred
to as the group-LASSO in the literature [25]. It is not hard to show that Assumption 1 holds with
F(c₁, c₂, c₃) := (1/(2b)) E‖η⃗(c₁h + c₂X₀; 1)‖₂², where η⃗(x; τ) = (x/‖x‖₂)·(‖x‖₂ − τ)₊, x ∈ ℝᵇ, is the
vector soft-thresholding operator, and h ∼ N(0, I_b), X₀ ∼ p_{X₀} are independent. Thus Theorem
2.2 is applicable in this setting; Figure 2(b) illustrates the accuracy of the prediction.
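The block analogue of the earlier Monte-Carlo sketch is immediate; again the Gaussian choice of
q_{X₀} on ℝᵇ is our illustrative assumption.

import numpy as np

def vec_soft_threshold(X, tau):
    # Row-wise vector soft-thresholding: x -> (x / ||x||_2) * (||x||_2 - tau)_+ .
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)  # guard 0/0
    return X * scale

def F_group(c1, c2, b=3, rho=0.05, n_blocks=100_000, seed=2):
    # (1/2b) * E ||vec_eta(c1*h + c2*X0; 1)||_2^2, with h ~ N(0, I_b) and
    # X0 zero w.p. 1 - rho, otherwise N(0, I_b / rho) (illustrative q_{X0}).
    rng = np.random.default_rng(seed)
    h = rng.standard_normal((n_blocks, b))
    active = rng.random((n_blocks, 1)) < rho
    x0 = np.where(active, rng.standard_normal((n_blocks, b)) / np.sqrt(rho), 0.0)
    out = vec_soft_threshold(c1 * h + c2 * x0, 1.0)
    return 0.5 / b * np.mean(np.sum(out ** 2, axis=1))

print(F_group(1.0, 0.5))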
2.2.4 Low-rank Matrix Recovery
Let X₀ ∈ ℝ^{d×d} be an unknown matrix of rank r, in which case x₀ = vec(X₀) with n = d².
Assume m/d² = δ ∈ (0, ∞) and r/d = ρ ∈ (0, 1). As usual in this setting, we consider nuclear-norm
regularization; in particular, we choose f(x) = √d·‖X‖∗. Each subgradient S ∈ ∂f(X) then
satisfies ‖S‖_F ≤ d, in agreement with assumption (b) of Section 2.1. Furthermore, for this choice of
regularizer, we have

(1/n) e_{n(f*)(n·), c₃}(c₁H + c₂X₀) = (1/(2d²)) min_{‖V‖₂ ≤ √d} ‖V − (c₁H + c₂X₀)‖_F²
= (1/(2d)) min_{‖V‖₂ ≤ 1} ‖V − d^{−1/2}(c₁H + c₂X₀)‖_F² = (1/(2d)) Σ_{i=1}^{d} η²( sᵢ(d^{−1/2}(c₁H + c₂X₀)); 1 ),

where η(·; ·) is as in (13), sᵢ(·) denotes the i-th singular value of its argument, and H ∈ ℝ^{d×d} has entries N(0, 1). If conditions are met such that the empirical distribution of the singular values of (the
sequence of random matrices) c₁H + c₂X₀ converges asymptotically to a limiting distribution, say
q(c₁, c₂), then F(c₁, c₂, c₃) := (1/2) E_{x∼q(c₁,c₂)}[η²(x; 1)], and Theorems 2.1-2.2 apply. For instance,
this will be the case if d^{−1/2}X₀ = USVᵀ, where U, V are unitary matrices and S is a diagonal matrix
whose entries have a given marginal distribution with bounded moments (in particular, independent
of d). We leave the details and the problem of (numerically) evaluating F for future work.
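While the paper leaves the numerical evaluation of F in this setting for future work, a naive
Monte-Carlo sketch based on the singular-value expression above is easy to write down; the flat,
rank-restricted spectrum for X₀ below is a placeholder of our own choosing.

import numpy as np

def F_lowrank_mc(c1, c2, d=200, rank=20, n_trials=20, seed=3):
    # Averages (1/2d) * sum_i eta^2(s_i(d^{-1/2}(c1*H + c2*X0)); 1) over random draws.
    # X0 has rank `rank` with a flat sqrt(d)-scaled spectrum (illustrative choice).
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_trials):
        U, _ = np.linalg.qr(rng.standard_normal((d, d)))
        V, _ = np.linalg.qr(rng.standard_normal((d, d)))
        spec = np.zeros(d)
        spec[:rank] = np.sqrt(d)                       # placeholder spectrum
        X0 = U @ np.diag(spec) @ V.T
        H = rng.standard_normal((d, d))
        s = np.linalg.svd((c1 * H + c2 * X0) / np.sqrt(d), compute_uv=False)
        vals.append(0.5 / d * np.sum(np.maximum(s - 1.0, 0.0) ** 2))
    return float(np.mean(vals))

print(F_lowrank_mc(1.0, 0.5))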
2.3 An Application to q-bit Compressive Sensing
2.3.1 Setup
Consider recovering a sparse unknown signal x₀ ∈ ℝⁿ from scalar q-bit quantized linear measurements. Let t := {t₀ = 0, t₁, …, t_{L−1}, t_L = +∞} represent a (symmetric with respect to 0) set of
decision thresholds and ℓ := {±ℓ₁, ±ℓ₂, …, ±ℓ_L} the corresponding representation points, such
that L = 2^{q−1}. Then, quantization of a real number x into q bits can be represented as

Q_q(x, ℓ, t) = sign(x) Σ_{i=1}^{L} ℓᵢ 1{t_{i−1} ≤ |x| ≤ tᵢ},

where 1_S is the indicator function of a set S. For example, 1-bit quantization with level ℓ corresponds to Q₁(x, ℓ) = ℓ·sign(x). The measurement vector y = [y₁, y₂, …, y_m]ᵀ takes the form

yᵢ = Q_q(aᵢᵀx₀, ℓ, t),   i = 1, 2, …, m,   (16)
where the aᵢᵀ's are the rows of a measurement matrix A ∈ ℝ^{m×n}, which is henceforth assumed i.i.d.
standard Gaussian. We use the LASSO to obtain an estimate x̂ of x₀ as

x̂ := arg min_x ‖y − Ax‖₂ + λ‖x‖₁.   (17)
Henceforth, we assume for simplicity that ‖x₀‖₂ = 1. Also, in our case, μ is known since g = Q_q
is known; thus, it is reasonable to scale the solution of (17) as μ⁻¹x̂ and consider the error quantity
‖μ⁻¹x̂ − x₀‖₂ as a measure of estimation performance. Clearly, the error depends (besides others)
on the number of bits q, on the choice of the decision thresholds t and on the quantization levels ℓ.
An interesting question of practical importance becomes how to optimally choose these to achieve
less error. As a running example for this section, we seek optimal quantization thresholds and
corresponding levels

(t*, ℓ*) = arg min_{t,ℓ} ‖μ⁻¹x̂ − x₀‖₂,   (18)

while keeping all other parameters such as the number of bits q and of measurements m fixed.
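To make the setup concrete, here is a small end-to-end sketch: quantize Gaussian measurements
with Q_q and solve the ℓ₂-LASSO in (17) with a generic convex solver. The particular thresholds and
levels, and the use of the cvxpy package, are our own illustrative choices.

import numpy as np
import cvxpy as cp

def quantize(x, levels, thresholds):
    # Q_q(x, l, t) = sign(x) * l_i whenever t_{i-1} <= |x| < t_i (t_0 = 0, t_L = +inf).
    idx = np.searchsorted(thresholds[1:-1], np.abs(x), side='right')
    return np.sign(x) * levels[idx]

rng = np.random.default_rng(4)
n, m, rho = 128, 96, 0.1                               # delta = m/n = 0.75
x0 = np.where(rng.random(n) < rho, rng.standard_normal(n), 0.0)
x0 /= np.linalg.norm(x0)                               # unit-norm signal, as assumed above
A = rng.standard_normal((m, n))

levels = np.array([0.5, 1.5])                          # q = 2 bits -> L = 2 (illustrative)
thresholds = np.array([0.0, 1.0, np.inf])
y = quantize(A @ x0, levels, thresholds)

lam = 1.0
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm1(x))).solve()
x_hat = x.value              # with mu from (20), the error of interest is ||x_hat/mu - x0||_2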
2.3.2 Consequences of Precise Error Prediction
Theorem 2.1 shows that ‖μ⁻¹x̂ − x₀‖₂ equals the corresponding error of x̂_lin, where x̂_lin is the solution to (17), only this time with a measurement vector y_lin = μAx₀ + σz, where μ, σ are as in (20) and z has entries i.i.d.
standard normal. Thus, lower values of the ratio σ²/μ² correspond to lower values of the error, and
the design problem posed in (18) is equivalent to the following simplified one:

(t*, ℓ*) = arg min_{t,ℓ} σ²(t, ℓ)/μ²(t, ℓ).   (19)
To be explicit, μ and σ² above can be easily expressed from (3) after setting g = Q_q, as follows:

μ := μ(ℓ, t) = √(2/π) Σ_{i=1}^{L} ℓᵢ (e^{−t²_{i−1}/2} − e^{−t²ᵢ/2})   and   σ² := σ²(ℓ, t) := ν² − μ²,   (20)

where ν² := ν²(ℓ, t) = 2 Σ_{i=1}^{L} ℓᵢ² (Q(t_{i−1}) − Q(tᵢ)) and Q(x) = (1/√(2π)) ∫_x^∞ exp(−u²/2) du.
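The closed forms in (20) are elementary to evaluate; here is a small helper of ours for given
levels and thresholds.

import numpy as np
from scipy.stats import norm

def mu_sigma2(levels, thresholds):
    # mu(l, t) and sigma^2(l, t) from (20); thresholds = [t_0 = 0, ..., t_{L-1}, inf].
    t_lo, t_hi = thresholds[:-1], thresholds[1:]
    mu = np.sqrt(2.0 / np.pi) * np.sum(levels * (np.exp(-t_lo ** 2 / 2) - np.exp(-t_hi ** 2 / 2)))
    nu2 = 2.0 * np.sum(levels ** 2 * (norm.sf(t_lo) - norm.sf(t_hi)))   # Q(x) = norm.sf(x)
    return mu, nu2 - mu ** 2

print(mu_sigma2(np.array([0.5, 1.5]), np.array([0.0, 1.0, np.inf])))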
2.3.3 An Algorithm for Finding Optimal Quantization Levels and Thresholds
In contrast to the initial problem in (18), the optimization involved in (19) is explicit in terms of
the variables ℓ and t, but is still hard to solve in general. Interestingly, we show in Appendix C
that the popular Lloyd-Max (LM) algorithm can be an effective algorithm for solving (19), since
the values to which it converges are stationary points of the objective in (19). Note that this is not a
directly obvious result, since the classical objective of the LM algorithm is minimizing the quantity
E[‖y − Ax₀‖₂²] rather than E[‖μ⁻¹x̂ − x₀‖₂²].
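For reference, one classical Lloyd-Max iteration for a standard normal source alternates the
centroid and nearest-neighbor conditions. The rendering below is the textbook variant, written by
us; Appendix C is what connects its fixed points to stationary points of (19).

import numpy as np
from scipy.stats import norm

def lloyd_max(L, iters=200):
    # Classical Lloyd-Max on [0, inf) for a standard normal source (symmetric design).
    t = np.linspace(0.0, 3.0, L + 1)
    t[-1] = np.inf                                     # t_0 = 0, t_L = +inf
    for _ in range(iters):
        p = norm.sf(t[:-1]) - norm.sf(t[1:])           # cell probabilities
        mass = norm.pdf(t[:-1]) - norm.pdf(t[1:])      # integral of u*phi(u) over each cell
        levels = mass / p                              # centroid condition
        t[1:-1] = 0.5 * (levels[:-1] + levels[1:])     # nearest-neighbor condition
    return levels, t

levels, t = lloyd_max(L=2)
print(levels, t)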
References
[1] Francis R. Bach. Structured sparsity-inducing norms through submodular functions. In Advances in Neural Information Processing Systems, pages 118-126, 2010.
[2] Mohsen Bayati and Andrea Montanari. The LASSO risk for Gaussian matrices. IEEE Transactions on Information Theory, 58(4):1997-2017, 2012.
[3] Alexandre Belloni, Victor Chernozhukov, and Lie Wang. Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4):791-806, 2011.
[4] David R. Brillinger. The identification of a particular nonlinear time series system. Biometrika, 64(3):509-515, 1977.
[5] David R. Brillinger. A generalized linear model with "Gaussian" regressor variables. A Festschrift for Erich L. Lehmann, page 97, 1982.
[6] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805-849, 2012.
[7] David L. Donoho and Iain M. Johnstone. Minimax risk over l_p-balls for l_p-error. Probability Theory and Related Fields, 99(2):277-303, 1994.
[8] David L. Donoho, Iain Johnstone, and Andrea Montanari. Accurate prediction of phase transitions in compressed sensing via a connection to minimax denoising. IEEE Transactions on Information Theory, 59(6):3396-3433, 2013.
[9] David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914-18919, 2009.
[10] David L. Donoho, Arian Maleki, and Andrea Montanari. The noise-sensitivity phase transition in compressed sensing. IEEE Transactions on Information Theory, 57(10):6920-6941, 2011.
[11] Alexandra L. Garnham and Luke A. Prendergast. A note on least squares sensitivity in single-index model estimation and the benefits of response transformations. Electronic Journal of Statistics, 7:1983-2004, 2013.
[12] Yehoram Gordon. On Milman's inequality and random subspaces which escape through a mesh in R^n. Springer, 1988.
[13] Marwa El Halabi and Volkan Cevher. A totally unimodular view of structured sparsity. arXiv preprint arXiv:1411.1990, 2014.
[14] Hidehiko Ichimura. Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. Journal of Econometrics, 58(1):71-120, 1993.
[15] Ker-Chau Li and Naihua Duan. Regression analysis under link violation. The Annals of Statistics, pages 1009-1052, 1989.
[16] Samet Oymak, Christos Thrampoulidis, and Babak Hassibi. The squared-error of generalized LASSO: A precise analysis. arXiv preprint arXiv:1311.0830, 2013.
[17] Yaniv Plan and Roman Vershynin. The generalized lasso with non-linear observations. arXiv preprint arXiv:1502.04071, 2015.
[18] Mihailo Stojnic. A framework to characterize performance of LASSO algorithms. arXiv preprint arXiv:1303.7291, 2013.
[19] Christos Thrampoulidis, Samet Oymak, and Babak Hassibi. Regularized linear regression: A precise analysis of the estimation error. In Proceedings of the 28th Conference on Learning Theory, pages 1683-1709, 2015.
[20] Christos Thrampoulidis, Ashkan Panahi, Daniel Guo, and Babak Hassibi. Precise error analysis of the LASSO. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 3467-3471.
[21] Christos Thrampoulidis, Ashkan Panahi, and Babak Hassibi. Asymptotically exact error analysis for the generalized ℓ₂²-LASSO. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 2021-2025. IEEE, 2015.
[22] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267-288, 1996.
[23] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91-108, 2005.
[24] Xinyang Yi, Zhaoran Wang, Constantine Caramanis, and Han Liu. Optimal linear estimation under unknown nonlinear transform. arXiv preprint arXiv:1505.03257, 2015.
[25] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67, 2006.
| 5739 |@word version:4 norm:13 seems:1 proportionality:5 d2:2 simulation:4 seek:1 q1:1 mention:1 moment:3 reduction:1 celebrated:1 series:4 liu:1 initial:1 daniel:1 interestingly:3 amp:1 ati:7 xinyang:1 kx0:9 si:2 axk22:1 mesh:1 analytic:1 drop:2 fund:1 stationary:1 device:1 accordingly:1 indicative:1 ith:1 core:1 volkan:1 quantizer:2 quantized:3 characterization:2 revisited:1 provides:2 simpler:1 c2:20 direct:1 become:1 kvk2:2 symposium:1 director:1 yuan:1 specialize:1 manner:3 x0:105 indeed:1 andrea:4 behavior:3 usvt:1 inspired:1 ming:1 duan:1 considering:1 totally:1 becomes:4 provided:1 estimating:3 moreover:1 bounded:3 what:2 kind:1 unspecified:3 minimizes:1 developed:1 compressive:1 finding:1 brillinger:7 transformation:1 guarantee:1 every:1 expands:1 ti:6 unimodular:1 exactly:1 um:1 rm:7 k2:21 scaled:2 biometrika:2 normally:1 grant:2 unit:4 appear:2 omit:1 t1:1 engineering:3 accordance:1 treat:1 limit:4 consequence:2 establishing:1 might:5 studied:4 suggests:1 luke:1 marwa:1 hidehiko:1 obeys:1 presuming:1 practical:2 unique:4 practice:1 block:7 ker:1 asymptotics:1 empirical:1 induce:1 selection:2 operator:2 put:1 context:3 risk:2 equivalent:3 map:1 deterministic:1 attention:1 l:5 convex:11 simplicity:1 recovery:9 estimator:1 insight:1 iain:1 regarded:1 nuclear:3 importantly:2 president:1 limiting:2 analogous:1 pt:1 gm:1 qq:4 strengthen:1 exact:4 modulo:1 programming:1 annals:1 agreement:1 associate:1 function5:1 satisfying:1 particularly:1 econometrics:2 observed:1 fork:1 role:7 kxk1:2 preprint:5 electrical:3 capture:2 wang:2 knight:1 benjamin:1 babak:5 solving:6 tight:6 mohsen:1 crit:9 upon:1 easily:2 icassp:1 represented:1 caramanis:1 regularizer:15 effective:1 saunders:1 whose:2 encoded:1 posed:2 valued:1 widely:1 say:3 solve:1 otherwise:3 compressed:4 qualcomm:1 gi:11 statistic:3 g1:1 transform:1 noisy:5 itself:1 superscript:1 advantage:1 sequence:6 relevant:3 unavoidably:1 realization:2 achieve:1 academy:1 inducing:1 kv:3 dirac:2 ky:7 validate:1 convergence:1 yaniv:1 extending:1 overl:1 converges:4 leave:1 depending:1 derive:3 illustrate:2 measured:1 keith:1 recovering:1 predicted:2 involves:1 come:1 met:2 direction:2 owing:1 linear2:1 require:3 suffices:1 generalization:2 samet:2 isit:1 hold:10 normal:4 exp:1 predict:1 lm:2 early:1 a2:1 xk2:1 purpose:1 estimation:15 chernozhukov:1 applicable:2 grouped:1 tool:1 weighted:1 clearly:1 gaussian:12 rather:2 shrinkage:1 derived:4 ax:1 properly:2 methodological:1 rank:6 check:1 bernoulli:1 panahi:2 contrast:2 am:1 sense:1 el:1 typically:1 initially:1 hidden:1 relation:1 going:1 interested:1 arg:6 denoted:1 chau:1 plan:6 constrained:6 special:6 marginal:2 equal:2 field:1 represents:1 k2f:2 future:1 others:1 simplify:3 roman:1 few:2 escape:1 gordon:2 composed:2 national:2 festschrift:1 replaced:1 geometry:1 phase:2 cns:1 n1:1 interest:3 message:2 evaluation:1 deferred:1 generically:1 violation:1 regularizers:4 accurate:1 necessary:1 censored:1 arian:2 indexed:1 continuing:1 euclidean:3 old:1 mk:1 cevher:1 instance:5 fenchel:2 modeling:1 soft:2 ax0:5 cost:2 entry:14 snr:3 characterize:2 optimally:1 answer:1 supx:1 considerably:1 vershynin:6 rosset:1 recht:1 density:4 international:2 sensitivity:2 oymak:2 stay:1 regressor:1 michael:1 ym:2 fused:2 squared:6 abbasi:1 satisfied:1 choose:2 possibly:1 henceforth:3 li:1 nonlinearities:2 parrilo:1 lloyd:2 sec:3 includes:1 availability:1 inc:1 matter:1 gaussianity:1 notable:1 zhaoran:1 depends:3 vi:2 later:1 root:2 try:1 closed:1 view:1 px0:5 francis:1 sort:1 recover:1 contribution:2 square:5 
accuracy:2 variance:1 who:1 efficiently:1 correspond:2 yield:2 weak:1 identification:1 accurately:2 iid:1 randomness:1 footnote:1 suffers:1 ashkan:2 definition:1 involved:1 obvious:2 naturally:1 proof:2 recovers:1 stop:1 sampled:4 proved:1 adjusting:1 popular:4 recall:1 knowledge:4 lim:4 back:1 nasa:1 appears:2 alexandre:1 ichimura:1 methodology:3 response:1 rand:1 generality:2 furthermore:2 just:1 implicit:1 eventhough:1 hand:1 stojnic:2 nonlinear:4 overlapping:1 perhaps:1 grows:1 alexandra:1 building:1 name:1 k22:13 normalized:3 y2:2 true:2 ccf:3 maleki:2 regularization:6 hence:1 symmetric:1 laboratory:1 nonzero:1 width:1 please:1 generalized:23 demonstrate:1 saharon:1 wise:1 novel:1 recently:4 ef:1 common:1 ji:1 extend:1 he:1 marginals:1 numerically:2 measurement:57 refer:4 significant:1 vec:2 ai:4 smoothness:1 tuning:1 rd:2 erich:1 mathematics:1 submodular:1 moving:1 han:1 longer:1 etc:4 recent:4 showed:1 constantine:1 scenario:1 termed:1 certain:1 inequality:2 prism:1 binary:1 continue:1 yi:12 caltech:6 victor:1 additional:1 signal:18 ii:2 relates:1 d0:2 smooth:2 alan:1 jet:1 match:1 bach:1 long:1 lin:5 promotes:1 a1:1 prediction:11 regression:5 essentially:1 expectation:1 arxiv:10 represent:3 tailored:1 normalization:1 c1:20 subdifferential:1 semiparametric:1 grow:2 singular:2 limn:1 appropriately:1 envelope:1 extra:2 operate:1 subject:1 unitary:1 presence:2 iii:1 fit:1 zi:11 lasso:60 restrict:1 identified:1 t0:1 motivated:4 expression:10 effort:3 speech:1 passing:2 remark:3 kskf:1 useful:1 clear:1 involve:1 simplest:1 sl:2 notice:1 sign:9 estimated:1 delta:2 tibshirani:2 rb:3 write:1 group:6 key:1 threshold:4 asymptotically:7 excludes:1 concreteness:1 subgradient:1 lain:1 prob:1 inverse:1 uncertainty:1 lehmann:1 extends:1 throughout:2 reasonable:1 chandrasekaran:1 electronic:1 decision:2 appendix:3 scaling:1 bit:8 bound:9 hi:2 distinguish:1 abdullah:1 played:3 milman:1 constraint:1 precisely:2 belloni:1 x2:2 generates:1 aspect:1 u1:1 argument:2 min:11 px:8 department:3 structured:7 yehoram:1 ball:2 conjugate:2 lem:1 axk2:3 xk22:1 equation:1 previously:2 turn:1 loose:1 letting:1 halabi:1 serf:1 acronym:1 generalizes:1 ksk2:1 operation:1 apply:1 observe:3 generic:1 denotes:2 running:1 include:1 xc:2 exploit:1 prof:2 classical:4 society:3 objective:3 question:3 quantity:3 usual:1 diagonal:1 minx:3 subspace:1 link:9 propulsion:1 topic:1 willsky:1 besides:1 index:5 illustration:3 ratio:1 minimizing:1 setup:3 mostly:1 robert:2 potentially:1 negative:1 stated:3 design:2 proper:1 unknown:12 upper:6 observation:2 finite:1 situation:1 precise:21 y1:2 rn:11 sharp:1 thm:4 thrampoulidis:5 introduced:1 david:6 pair:1 namely:2 pablo:1 extensive:1 c3:8 connection:1 acoustic:1 established:2 able:1 beyond:1 below:3 regime:5 sparsity:5 challenge:1 program:4 max:5 royal:3 critical:1 natural:2 regularized:4 indicator:2 zhu:1 minimax:2 improve:1 technology:1 ne:3 conic:1 faced:1 literature:5 understanding:1 asymptotic:8 law:1 loss:1 interesting:2 bayati:1 foundation:2 sufficient:1 consistent:1 forl:1 principle:3 thresholding:2 row:2 course:2 summary:1 supported:1 copy:1 keeping:1 johnstone:2 wide:1 taking:2 correspondingly:1 absolute:2 sparse:17 moreau:1 distributed:1 benefit:1 curve:1 dimension:5 boundary:1 world:1 evaluating:1 transition:2 commonly:1 simplified:1 qx:7 transaction:3 approximate:1 ignore:1 assumed:3 additionally:1 nature:1 obtaining:1 du:1 ehsan:1 necessarily:1 main:2 montanari:4 accustomed:1 noise:5 arise:2 pivotal:1 referred:2 tl:2 venkat:1 christos:5 hassibi:6 explicit:4 
deterministically:1 xl:2 lie:1 kxk2:2 ib:1 formula:3 theorem:21 specific:2 xt:1 showing:2 insightful:1 sensing:4 dk:4 derives:1 naively:1 exists:2 quantization:5 importance:1 magnitude:1 illustrates:1 kx:1 simply:3 absorbed:1 kxk:1 expressed:1 scalar:3 u2:1 springer:1 corresponds:3 minimizer:1 satisfies:2 viewed:2 presentation:3 king:2 donoho:4 hard:2 typical:4 denoising:1 guo:1 latter:2 brevity:1 d1:1 ex:1 |
5,235 | 574 | 3D Object Recognition Using Unsupervised
Feature Extraction
Nathan Intrator
Center for Neural Science,
Brown University
Providence, RI 02912, USA
Heinrich H. Biilthoff
Dept. of Cognitive Science,
Brown University,
and Center for
Biological Information Processing,
MIT, Cambridge, MA 02139 USA
Josh I. Gold
Center for Neural Science,
Brown University
Providence, RI 02912, USA
Shimon Edelman
Dept. of Applied Mathematics
and Computer Science,
Weizmann Institute of Science,
Rehovot 76100, Israel
Abstract
Intrator (1990) proposed a feature extraction method that is related to
recent statistical theory (Huber, 1985; Friedman, 1987), and is based on
a biologically motivated model of neuronal plasticity (Bienenstock et al.,
1982). This method has been recently applied to feature extraction in the
context of recognizing 3D objects from single 2D views (Intrator and Gold,
1991). Here we describe experiments designed to analyze the nature of the
extracted features, and their relevance to the theory and psychophysics of
object recognition.
1 Introduction
Results of recent computational studies of visual recognition (e.g., Poggio and Edelman, 1990) indicate that the problem of recognition of 3D objects can be effectively
reformulated in terms of standard pattern classification theory. According to this
approach, an object is represented by a few of its 2D views, encoded as clusters in
multidimensional space. Recognition of a novel view is then carried out by interpolating among the stored views in the representation space. A major characteristic
of the view interpolation scheme is its sensitivity to viewpoint: the farther the novel
view is from the stored views, the lower the expected recognition rate.
This characteristic performance in the recognition of novel views of synthetic 3D
stimuli was indeed found in human subjects by Biilthoff and Edelman (1991), who
also replicated it in simulated psychophysical experiments that involved a computer
implementation of the view interpolation model. Because of the high dimensionality
of the raster images seen by the human subjects, it was impossible to use them directly for classification in the simulated experiments. Consequently, the simulations
were simplified, in that the views presented to the model were encoded as lists of
vertex locations of the objects (which resembled 3D wire frames).
This simplification amounts to what is referred to in the psychology of recognition
as the feature extraction step (LaBerge, 1976). The discussion of the issue of features of recognition in recent psychological literature is relatively scarce, probably
because of the abandonment of invariant feature theories (which postulate that objects are represented by clusters of points in multidimensional feature spaces (Duda
and Hart, 1973)) in favor of structural models (see review in (Edelman, 1991)). Although some attempts have been made to generate and verify specific psychophysical
predictions based on the feature space approach (see especially (Shepard, 1987)),
current feature-based theories of perception seem to be more readily applicable to
lower-level visual tasks than to the problem of object recognition.
In the present work, our aim was to explore a computationally tractable model
of feature extraction conceived as dimensionality reduction, and to test its psychophysical validity. This work was guided by previous successful applications in
pattern recognition of dimensionality reduction by a network model implementing Exploratory Projection Pursuit (Intrator, 1990; Intrator and Gold, 1991). We
were also motivated by results of recent psychophysical experiments (Edelman and
Biilthoff, 1990; Edelman et al., 1991) that found improvement in subjects' performance with increasing stimulus familiarity. These results are compatible with a
feature-based recognition model which extracts problem-specific features in addition to universal ones. Specifically, the subjects' ability to discern key elements
of the solution appears to increase as the problem becomes more familiar. This
finding suggests that some of the features used by the visual system are based on
the task-specific data, and therefore raises the question of how can such features be
extracted. It was our conjecture that features found by the EPP model would turn
out to be similar to the task-specific features in human vision.
1.1 Unsupervised Feature Extraction - The BCM Model
The feature extraction method briefly described below emphasizes dimensionality
reduction, while seeking features of a set of objects that would best distinguish
among the members of the set. This method does not rely on a general pre-defined
set of features. This is not to imply, however, that the features are useful only in
recognition of the original set of images from which they were extracted. In fact, the
potential importance of these features is related to their invariance properties, or
their ability to generalize. Invariance properties of features extracted by this method
have been demonstrated previously in speech recognition (Intrator and Tajchman,
1991; Intrator, 1992).
From a mathematical viewpoint, extracting features from gray level images is related
to dimensionality reduction in a high dimensional vector space, in which an n x k
pixel image is considered to be a vector of length n × k. The dimensionality reduction
is achieved by replacing each image (or its high dimensional equivalent vector) by a
low dimensional vector in which each element represents a projection of the image
onto a vector of synaptic weights (constructed by a BCM neuron).
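In matrix form this is nothing more than projecting the flattened images onto the learned weight
vectors; schematically (our notation):

import numpy as np

def extract_features(images, W):
    # images: (num_images, n, k) gray-level stack; W: (n*k, num_features),
    # whose columns are synaptic weight vectors learned by the BCM network.
    flat = images.reshape(len(images), -1)    # each image becomes a vector of length n*k
    return flat @ W                           # low-dimensional feature projections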
Figure 1: The stable solutions for a two-dimensional two-input problem are m1 and m2 (left), and similarly with two-cluster data (right).
The feature extraction method we used (Intrator and Cooper, 1991) seeks multimodality in the projected distribution of these high dimensional vectors. A simple
example is illustrated in Figure 1. For a two-input problem in two dimensions, the
stable solutions (projection directions) are m1 and m2; each has the property of
being orthogonal to one of the inputs. In a higher dimensional space, for n linearly
independent inputs, a stable solution is one that is orthogonal to all but one of
the inputs. In case of noisy but clustered inputs, a stable solution will be orthogonal
to all but one of the cluster centers. As is seen in Figure 1 (right), this leads to
a bimodal, or, in general, multi-modal, projected distribution. Further details are
given in (Intrator and Cooper, 1991). In the present study, the features extracted
by the above approach were used for classification as described in (Intrator and
Gold, 1991; Intrator, 1992).
1.2 Experimental paradigm
We have studied the features extracted by the BCM model by replicating the experiments of Biilthoff and Edelman (1991), designed to test generalization from familiar
to novel views of 3D objects. As in the psychophysical experiments, images of novel
wire-like computer-generated objects (Biilthoff and Edelman, 1991; Edelman and
Biilthoff, 1990) were used as stimuli. These objects proved to be easily manipulated,
and yet complex enough to yield interesting results. Using wires also simplified the
problem for the feature extractor, as they provided little or no occlusion of the
key features from any viewpoint. The objects were generated by the Symbolics
S-Geometry modeling package, and rendered with a visualization graphics tool
(AVS, Stardent, Inc.). Each object consisted of seven connected equal-length segments, pointing in random directions and distributed equally around the origin (for
further details, see Edelman and Biilthoff, 1990).
In the psychophysical experiments of Biilthoff and Edelman (1991), subjects were
shown a target wire from two standard views, located 75° apart along the equator of the viewing sphere. The target oscillated around each of the two standard
orientations with an amplitude of ±15° about a fixed vertical axis, with views
spaced at 3° increments. Test views were located either along the equator - on
the minor arc bounded by the two standard views (INTER condition) or on the
corresponding major arc (EXTRA condition) - or on the meridian passing through
one of the standard views (ORTHO condition). Testing was conducted according to
a two-alternative forced choice (2AFC) paradigm, in which subjects were asked to
indicate whether the displayed image constituted a view of the target object shown
during the preceding training session. Test images were either unfamiliar views of
the training object, or random views of a distractor (one of a distinct set of objects
generated by the same procedure).
To apply the above paradigm to the BCM network, the objects were rendered in
a 63 x 63 array, at 8 bits/pixel, under simulated illumination that combined ambient lighting of relative strength 0.3 with a point source of strength 1.0 at infinity.
The study described below involved six-way classification, which is more difficult
than the 2AFC task used in the psychophysical experiments. The six wires used
Figure 2: The six wires used in the computational experiments, as seen from a
single view point.
in the experiments are depicted in Figure 2. Given the task of recognizing the six
wires, the network extracted features that corresponded to small patches of the
different images, namely areas that either remained relatively invariant under the
rotation performed during training, or represented distinctive features of specific
wires (Intrator and Gold, 1991). The classification results were in good agreement
with the psychophysical data of Biilthoff and Edelman (1991): (1) the error rate
was the lowest in the INTER condition, (2) recognition deteriorated to chance level
with increased misorientation in the EXTRA and ORTHO conditions, and (3) horizontal training led to a better performance in the INTER condition than did vertical
training.¹ The first two points were interpreted as resulting from the ability of the
BCM network to extract rotation-invariant features. Indeed, features appearing in
all the training views would be expected to correspond to the INTER condition.
EXTRA and ORTHO views, on the other hand, are less familiar and therefore yield
worse performance, and also may require features other than the rotation-invariant
ones extracted by the model.
¹The horizontal-vertical asymmetry might be related to an asymmetric structure of the
visual field in humans (Hughes, 1977). This asymmetry was modeled by increasing the
resolution along the horizontal axis.
2 Examining the Features of Recognition
To understand the meaning of the features extracted by the BCM network under
the various conditions, and to establish a basis for further comparison between the
psychophysical experiments and computational models, we developed a method for
occluding key features from the images and examining the subsequent effects on the
various recognition tasks.
2.1 The Occlusion Experiment
In this experiment, some of the features previously extracted by the network could
be occluded during training and/or testing. Each input to a BCM neuron in our
model corresponds to a particular point in the 2D input image, while "features"
correspond to combinations of excitatory and inhibitory inputs. Assuming that inputs with strong positive weights constitute a significant proportion of the features,
we occluded (set to 0) input pixels whose previously computed synaptic weight exceeded a preset threshold. Figure 3 shows a synaptic weight matrix defining a set
of features, and the set of wires with the corresponding features occluded.
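The occlusion rule just described amounts to zeroing the pixels selected by thresholding a weight
map; a minimal sketch (the threshold value is whatever was preset):

import numpy as np

def occlude(image, weights, threshold):
    # Set to 0 every pixel whose synaptic weight exceeds `threshold`.
    # `image` and `weights` are arrays of the same 2D shape.
    occluded = image.copy()
    occluded[weights > threshold] = 0
    return occluded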
The main hypothesis we tested concerns the general utility of the extracted features for recognition. If the features extracted by the BCM network do capture
rotation-invariant aspects of the object and can support recognition across a variety of rotations, then occluding those features during training should lead to a
pronounced and general decline in recognition performance of the model. In particular, recognition should deteriorate most significantly in the INTER and EXTRA
cases, since they lie in the plane of rotation during training and therefore can be
expected to rely to a larger extent on rotation-invariant features. Little change
should be seen in the ORTHO condition, on the other hand, because recognition of
ORTHO views, situated outside the plane of rotation defined by the training phase,
does not benefit from rotation-invariant features.
2.2 Results and Discussion
When there was no occlusion, the pattern of the model's performance replicated
the results of the psychophysical experiments of (Biilthoff and Edelman, 1991).
Specifically, the best performance was achieved for INTER views, with progressive
deterioration under EXTRA and ORTHO conditions (Intrator and Gold, 1991; see
Figure 4). The results of simulations involving occlusion of key features during
training and no occlusion during testing are illustrated in Figure 5. Essentially
the same results were obtained when occlusion was done during either training or
testing.
Occlusion of the key features led to a number of interesting results. First, when
features in the training image were occluded, occluding the same features during
testing made little difference. This is not unexpected, since these features were not
used to build the internal representation of the objects. Second, there was a general
decline in performance within the plane of rotation used during training (especially
in the INTER condition) when the extracted features were occluded. This is a
strong indication that the features initially chosen by the network were in fact those
features which best described the object acroSs a range of rotations. Third, there
Figure 3: Wires occluded with a feature extracted by the BCM network (left).
Figure 4: Misclassification performance, regular training.
Figure 5: Misclassification performance, training on occluded images.
[Both figures plot misclassification rate against distance (in degrees) from the trained views, with separate curves for the INTER, EXTRA and ORTHO conditions.]
was little degradation of performance in the ORTHO condition when features were
occluded during training. This result lends further support to the notion that the
extracted features emphasized rotation-invariant characteristics of the objects, as
abstracted in the training phase. Finally, we mention that the occlusion of the same
features in a new psychophysical experiment caused the same selective deterioration
found in the simulations to appear in the human subjects' performance. Specifically,
the subjects' error rate was elevated in the INTER condition more than in the other
conditions, and this effect was significantly stronger for occlusion masks obtained
from the extracted features than for other, randomized, masks (Sklar et al., 1991).
To summarize, this work was undertaken to elucidate the nature of the features of
recognition of 3D objects. We were especially interested in the features extracted
by an unsupervised BCM network, and in their relation to computational and psychophysical findings concerning object recognition. We compared recognition performance of our model following training that involved features extracted by the
BCM network with performance in the absence of these features. We found that
the model's performance was affected by the occlusion of key features in a manner
consistent with their predicted computational role. This method of testing the relative importance of features has also been applied in psychophysical experiments.
Preliminary results of those experiments show that feature-derived masks have a
stronger effect on human performance compared to other masks that occlude the
same proportion of the image, but are not obtained via the BCM model. Taken
together, these results demonstrate the strength of the dimensionality reduction
approach to feature extraction, and provide a foundation for examining the link
between computational and psychophysical studies of the features of recognition.
Acknowledgements
Research was supported by the National Science Foundation, the Army Research
Office, and the Office of Naval Research.
References
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction
in visual cortex. Journal Neuroscience, 2:32-48.
Biilthoff, H. H. and Edelman, S. (1991). Psychophysical support for a 2D interpolation theory of object recognition. Proceedings of the National Academy of
Sciences. To appear.
Duda, R. O. and Hart, P. E. (1973). Pattern Classification and Scene Analysis.
John Wiley, New York.
Edelman, S. (1991). Features of recognition. CS-TR 10, Weizmann Institute of
Science.
Edelman, S. and Biilthoff, H. H. (1990). Viewpoint-specific representations in threedimensional object recognition. A.I. Memo No. 1239, Artificial Intelligence
Laboratory, Massachusetts Institute of Technology.
Edelman, S., Biilthoff, H. H., and Sklar, E. (1991). Task and object learning in visual
recognition. CBIP Memo No. 63, Center for Biological Information Processing,
Massachusetts Institute of Technology.
Friedman, J. H. (1987). Exploratory projection pursuit. Journal of the American
Statistical Association, 82:249-266.
Huber, P. J. (1985). Projection pursuit. (with discussion). The Annals of Statistics,
13:435-475.
Hughes, A. (1977). The topography of vision in mammals of contrasting live style:
Comparative optics and retinal organisation. In Crescitelli, F., editor, The
Visual System in Vertebrates, Handbook of Sensory Physiology VII/5, pages
613-756. Springer Verlag, Berlin.
Intrator, N. (1990). Feature extraction using an unsupervised neural network. In
Touretzky, D. S., Ellman, J. L., Sejnowski, T. J., and Hinton, G. E., editors,
Proceedings of the 1990 Connectionist Models Summer School, pages 310-318.
Morgan Kaufmann, San Mateo, CA.
Intrator, N. (1992). Feature extraction using an unsupervised neural network. Neural Computation, 4:98-107.
Intrator, N. and Cooper, L. N. (1991). Objective function formulation of the BCM
theory of visual cortical plasticity: Statistical connections, stability conditions.
Neural Networks. To appear.
Intrator, N. and Gold, J. I. (1991). Three-dimensional object recognition of gray
level images: The usefulness of distinguishing features. Submitted.
Intrator, N. and Tajchman, G. (1991). Supervised and unsupervised feature extraction from a cochlear model for speech recognition. In Juang, B. H., Kung,
S. Y., and Kamm, C. A., editors, Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop, pages 460-469.
LaBerge, D. (1976). Perceptual learning and attention. In Estes, W. K., editor, Handbook of learning and cognitive processes, volume 4, pages 237-273.
Lawrence Erlbaum, Hillsdale, New Jersey.
Poggio, T. and Edelman, S. (1990). A network that learns to recognize threedimensional objects. Nature, 343:263-266.
Shepard, R. N. (1987). Toward a universal law of generalization for psychological
science. Science, 237:1317-1323.
Sklar, E., Intrator, N., Gold, J. J., Edelman, S. Y., and Biilthoff, H. H. (1991). A
hierarchical model for 3D object recognition based on 2D visual representation.
In Neurosci. Soc. Abs.
| 574 |@word briefly:1 stronger:2 duda:2 proportion:2 simulation:3 seek:1 mammal:1 mention:1 tr:1 reduction:6 current:1 yet:1 readily:1 john:1 subsequent:1 plasticity:2 designed:2 occlude:1 intelligence:1 plane:3 farther:1 location:1 mathematical:1 along:3 constructed:1 edelman:22 multimodality:1 manner:1 deteriorate:1 inter:10 mask:4 indeed:2 huber:2 expected:3 distractor:1 multi:1 kamm:1 little:4 increasing:2 becomes:1 provided:1 vertebrate:1 bounded:1 lowest:1 israel:1 what:1 interpreted:1 developed:1 contrasting:1 finding:2 multidimensional:1 appear:3 positive:1 interpolation:3 might:1 studied:1 mateo:1 suggests:1 range:1 weizmann:2 testing:6 hughes:2 procedure:1 area:1 universal:2 significantly:2 physiology:1 projection:6 pre:1 regular:1 specificity:1 onto:1 context:1 live:1 impossible:1 equivalent:1 demonstrated:1 center:5 attention:1 oscillated:1 resolution:1 m2:2 array:1 stability:1 exploratory:2 ortho:8 increment:1 notion:1 deteriorated:1 target:3 elucidate:1 annals:1 distinguishing:1 hypothesis:1 origin:1 agreement:1 element:2 recognition:38 located:2 asymmetric:1 role:1 capture:1 connected:1 asked:1 heinrich:1 occluded:8 raise:1 segment:1 distinctive:1 basis:1 easily:1 represented:3 various:2 jersey:1 sklar:3 forced:1 distinct:1 describe:1 sejnowski:1 artificial:1 corresponded:1 outside:1 whose:1 encoded:2 larger:1 favor:1 ability:3 statistic:1 noisy:1 indication:1 interaction:1 gold:11 academy:1 pronounced:1 juang:1 cluster:4 asymmetry:2 comparative:1 object:34 school:1 minor:1 strong:2 soc:1 predicted:1 c:1 indicate:2 direction:2 guided:1 human:6 viewing:1 implementing:1 hillsdale:1 require:1 clustered:1 generalization:2 preliminary:1 biological:2 around:2 considered:1 lawrence:1 pointing:1 major:2 applicable:1 ellman:1 tool:1 mit:1 aim:1 office:2 derived:1 naval:1 improvement:1 abandonment:1 initially:1 bienenstock:2 relation:1 selective:1 interested:1 pixel:3 issue:1 classification:6 among:2 orientation:2 development:1 psychophysics:1 equal:1 field:1 extraction:16 represents:1 progressive:1 unsupervised:10 afc:2 sachusetts:1 connectionist:1 stimulus:3 few:1 manipulated:1 national:2 recognize:1 familiar:3 geometry:1 occlusion:10 phase:2 friedman:2 attempt:1 ab:1 ambient:1 poggio:2 orthogonal:3 psychological:2 increased:1 modeling:1 vertex:1 usefulness:1 recognizing:2 successful:1 examining:3 conducted:1 erlbaum:1 graphic:1 meridian:1 stored:2 providence:2 synthetic:1 combined:1 sensitivity:1 randomized:1 together:1 postulate:1 worse:1 cognitive:2 american:1 style:1 potential:1 retinal:1 inc:1 caused:1 performed:1 view:24 analyze:1 kaufmann:1 characteristic:3 who:1 yield:2 spaced:1 correspond:2 generalize:1 misorientation:1 emphasizes:1 lighting:1 submitted:1 touretzky:1 synaptic:3 raster:1 involved:3 proved:1 massachusetts:1 dimensionality:7 amplitude:1 appears:1 exceeded:1 higher:1 supervised:1 modal:1 formulation:1 done:1 binocular:1 hand:2 bulthoff:2 horizontal:3 replacing:1 gray:2 usa:3 effect:3 validity:1 brown:3 verify:1 consisted:1 laboratory:1 illustrated:2 during:11 demonstrate:1 image:16 meaning:1 bem:1 novel:5 recently:1 rotation:12 shepard:2 volume:1 association:1 elevated:1 m1:1 oflength:1 unfamiliar:1 significant:1 cambridge:1 mathematics:1 similarly:1 session:1 replicating:1 stable:4 cortex:1 recent:4 apart:1 selectivity:1 verlag:1 seen:4 morgan:1 preceding:1 paradigm:3 signal:1 sphere:1 hart:2 concerning:1 equally:1 prediction:1 involving:1 vision:2 essentially:1 bimodal:1 achieved:2 equator:2 deterioration:2 addition:1 source:1 extra:6 probably:1 subject:8 
member:1 seem:1 extracting:1 structural:1 enough:1 variety:1 psychology:1 decline:2 whether:1 motivated:2 six:4 munro:1 utility:1 reformulated:1 speech:2 passing:1 york:1 constitute:1 useful:1 amount:1 situated:1 generate:1 inhibitory:1 neuroscience:1 conceived:1 rehovot:1 affected:1 key:6 threshold:1 undertaken:1 package:1 discern:1 patch:1 bit:1 summer:1 simplification:1 distinguish:1 strength:3 optic:1 infinity:1 ri:2 scene:1 nathan:1 aspect:1 degj:1 optical:1 rendered:2 relatively:2 conjecture:1 according:2 combination:1 across:2 character:1 biologically:1 invariant:8 taken:1 computationally:1 visualization:1 previously:3 turn:1 tractable:1 pursuit:3 apply:1 hierarchical:1 intrator:20 appearing:1 alternative:1 original:1 symbolics:1 estes:1 especially:3 establish:1 build:1 threedimensional:2 psychophysical:15 seeking:1 objective:1 question:1 lends:1 distance:2 link:1 simulated:3 tajchman:2 berlin:1 seven:1 cochlear:1 extent:1 lthe:1 toward:1 viii:1 assuming:1 length:1 modeled:1 difficult:1 memo:2 implementation:1 av:1 wire:10 neuron:3 vertical:3 arc:2 displayed:1 defining:1 hinton:1 frame:1 namely:1 connection:1 bcm:12 below:2 pattern:4 perception:1 summarize:1 misclassification:2 rely:2 scarce:1 scheme:1 technology:2 imply:1 axis:2 carried:1 extract:2 review:1 literature:1 acknowledgement:1 relative:2 law:1 topography:1 interesting:2 foundation:2 consistent:1 viewpoint:4 editor:4 compatible:1 excitatory:1 supported:1 understand:1 institute:4 distributed:1 benefit:1 dimension:1 cortical:1 sensory:1 made:2 projected:2 replicated:2 simplified:2 san:1 biilthoff:15 ml:2 abstracted:1 handbook:2 nature:3 ca:1 complex:1 did:1 main:1 constituted:1 linearly:1 neurosci:1 neuronal:1 referred:1 cooper:4 wiley:1 lie:1 perceptual:1 third:1 extractor:1 learns:1 shimon:1 remained:1 familiarity:1 resembled:1 specific:6 emphasized:1 list:1 concern:1 organisation:1 workshop:1 effectively:1 importance:2 illumination:1 vii:1 depicted:1 led:2 explore:1 army:1 josh:1 visual:9 unexpected:1 epp:1 springer:1 corresponds:1 chance:1 extracted:18 ma:2 consequently:1 absence:1 change:1 specifically:3 preset:1 degradation:1 invariance:2 experimental:1 occluding:3 internal:1 support:3 kung:1 relevance:1 dept:2 tested:1 |
5,236 | 5,740 | Optimal Rates for Random Fourier Features
Bharath K. Sriperumbudur*
Department of Statistics
Pennsylvania State University
University Park, PA 16802, USA
bks18@psu.edu
Zoltán Szabó*
Gatsby Unit, CSML, UCL
Sainsbury Wellcome Centre, 25 Howland Street
London - W1T 4JG, UK
zoltan.szabo@gatsby.ucl.ac.uk
Abstract
Kernel methods represent one of the most powerful tools in machine learning to tackle
problems expressed in terms of function values and derivatives due to their capability to
represent and model complex relations. While these methods show good versatility, they
are computationally intensive and have poor scalability to large data as they require operations on Gram matrices. In order to mitigate this serious computational limitation, recently
randomized constructions have been proposed in the literature, which allow the application of fast linear algorithms. Random Fourier features (RFF) are among the most popular
and widely applied constructions: they provide an easily computable, low-dimensional
feature representation for shift-invariant kernels. Despite the popularity of RFFs, very little is understood theoretically about their approximation quality. In this paper, we provide
a detailed finite-sample theoretical analysis about the approximation quality of RFFs by (i)
establishing optimal (in terms of the RFF dimension, and growing set size) performance
guarantees in uniform norm, and (ii) presenting guarantees in L^r (1 ≤ r < ∞) norms.
We also propose an RFF approximation to derivatives of a kernel with a theoretical study
on its approximation quality.
1 Introduction
Kernel methods [17] have enjoyed tremendous success in solving several fundamental problems of
machine learning ranging from classification, regression, feature extraction, dependency estimation,
causal discovery, Bayesian inference and hypothesis testing. Such a success owes to their capability
to represent and model complex relations by mapping points into high (possibly infinite) dimensional
feature spaces. At the heart of all these techniques is the kernel trick, which allows to implicitly
compute inner products between these high-dimensional feature maps φ via a kernel function k:
k(x, y) = ⟨φ(x), φ(y)⟩. However, this flexibility and richness of kernels has a price: by resorting
to implicit computations these methods operate on the Gram matrix of the data, which raises serious
computational challenges while dealing with large-scale data. In order to resolve this bottleneck,
numerous solutions have been proposed, such as low-rank matrix approximations [25, 6, 1], explicit
feature maps designed for additive kernels [23, 11], hashing [19, 9], and random Fourier features
(RFF) [13] constructed for shift-invariant kernels, the focus of the current paper.
RFFs implement an extremely simple, yet efficient idea: instead of relying on the implicit feature
map φ associated with the kernel, by appealing to Bochner's theorem [24] (any bounded, continuous, shift-invariant kernel is the Fourier transform of a probability measure), [13] proposed an
explicit low-dimensional random Fourier feature map Φ obtained by empirically approximating the
Fourier integral so that k(x, y) ≈ ⟨Φ(x), Φ(y)⟩. The advantage of this explicit low-dimensional
feature representation is that the kernel machine can be efficiently solved in the primal form through
fast linear solvers, thereby enabling to handle large-scale data. Through numerical experiments, it
has also been demonstrated that kernel algorithms constructed using the approximate kernel do not
*Contributed equally.
suffer from significant performance degradation [13]. Another advantage with the RFF approach is
that unlike low rank matrix approximation approach [25, 6] which also speeds up kernel machines,
it approximates the entire kernel function and not just the kernel matrix. This property is particularly useful while dealing with out-of-sample data and also in online learning applications. The RFF
technique has found wide applicability in several areas such as fast function-to-function regression
[12], differential privacy preserving [2] and causal discovery [10].
Despite the success of the RFF method, surprisingly, very little is known about its performance guarantees. To the best of our knowledge, the only paper in the machine learning literature providing
certain theoretical insight into the accuracy of kernel approximationpvia RFF is [13, 22]:1 it shows
that Am := sup{|k(x, y) ? h?(x), ?(y)iR2m | : x, y ? S} = Op ( log(m)/m) for any compact
set S ? Rd , where m is the number of random Fourier features. However, since the approximation
proposed by the RFF method involves empirically approximating the Fourier integral, the RFF estimator can be thought of as an empirical characteristic function (ECF). In the probability literature,
the systematic study of ECF-s was initiated by [7] and followed up by [5, 4, 27]. While [7] shows
the almost sure (a.s.) convergence of Am to zero, [5, Theorems 1 and 2] and [27, Theorems 6.2 and
6.3] show that the optimal rate is m^{−1/2}. In addition, [7] shows that almost sure convergence cannot
be attained over the entire space (i.e., Rd ) if the characteristic function decays to zero at infinity.
Due to this, [5, 27] study the convergence behavior of Am when the diameter of S grows with m
and show that almost sure convergence of A_m is guaranteed as long as the diameter of S is e^{o(m)}.
Unfortunately, all these results (to the best of our knowledge) are asymptotic in nature and the only
known finite-sample guarantee by [13, 22] is non-optimal. In this paper (see Section 3), we present
a finite-sample probabilistic bound for Am that holds for any m and provides the optimal rate of
m^{−1/2} for any compact set S, along with guaranteeing the almost sure convergence of A_m as long
as the diameter of S is e^{o(m)}. Since convergence in uniform norm might sometimes be too strong a
requirement and may not be suitable to attain correct rates in the generalization bounds associated
with learning algorithms involving RFF,² we also study the behavior of k(x, y) − ⟨Φ(x), Φ(y)⟩_{ℝ^{2m}}
in the L^r-norm (1 ≤ r < ∞) and obtain an optimal rate of m^{−1/2}. The RFF approach to approximate
a translation-invariant kernel can be seen as a special case of the problem of approximating a function in
the barycenter of a family (say F) of functions, which was considered in [14]. However, the approximation guarantees in [14, Theorem 3.2] do not directly apply to RFF as the assumptions on F are
not satisfied by the cosine function, which is the family of functions that is used to approximate the
kernel in the RFF approach. While a careful modification of the proof of [14, Theorem 3.2] could
yield m?1/2 rate of approximation for any compact set S, this result would still be sub-optimal by
providing a linear dependence on |S| similar to the theorems in [13, 22], in contrast to the optimal
logarithmic dependence on |S| that is guaranteed by our results.
Traditionally, kernel based algorithms involve computing the value of the kernel. Recently, kernel algorithms involving the derivatives of the kernel (i.e., the Gram matrix consists of derivatives
of the kernel computed at training samples) have been used to address numerous machine learning tasks, e.g., semi-supervised or Hermite learning with gradient information [28, 18], nonlinear variable selection [15, 16], (multi-task) gradient learning [26] and fitting of distributions in an
infinite-dimensional exponential family [20]. Given the importance of these derivative based kernel algorithms, similar to [13], in Section 4, we propose a finite dimensional random feature map
approximation to kernel derivatives, which can be used to speed up the above mentioned derivative
based kernel algorithms. We present a finite-sample bound that quantifies the quality of approximation in uniform and Lr -norms and show the rate of convergence to be m?1/2 in both these cases.
A summary of our contributions is as follows. We
1. provide the first detailed finite-sample performance analysis of RFFs for approximating kernels
and their derivatives.
2. prove uniform and L^r convergence on fixed compact sets with optimal rate in terms of the RFF
dimension (m);
3. give sufficient conditions for the growth rate of compact sets while preserving a.s. convergence
uniformly and in Lr ; specializing our result we match the best attainable asymptotic growth rate.
¹ [22] derived tighter constants compared to [13] and also considered different RFF implementations.
² For example, in applications like kernel ridge regression based on RFF, it is more appropriate to consider the approximation guarantee in the L² norm than in the uniform norm.
Various notations and definitions that are used throughout the paper are provided in Section 2 along
with a brief review of RFF approximation proposed by [13]. The missing proofs of the results in
Sections 3 and 4 are provided in the supplementary material.
2 Notations & preliminaries
In this section, we introduce notations that are used throughout the paper and then present preliminaries on kernel approximation through random feature maps as introduced by [13].
Definitions & Notation: For a topological space X, C(X) (resp. C_b(X)) denotes the space of all
continuous (resp. bounded continuous) functions on X. For f ∈ C_b(X), ‖f‖_X := sup_{x∈X} |f(x)|
is the supremum norm of f. M_b(X) and M_+^1(X) denote the sets of all finite Borel measures and of
probability measures on X, respectively. For μ ∈ M_b(X), L^r(X, μ) denotes the Banach space of
r-power (r ≥ 1) μ-integrable functions. For X ⊂ ℝ^d, we will use L^r(X) for L^r(X, μ) if μ is the
Lebesgue measure on X. For f ∈ L^r(X, μ), ‖f‖_{L^r(X,μ)} := (∫_X |f|^r dμ)^{1/r} denotes the L^r-norm
of f for 1 ≤ r < ∞, and we write it as ‖·‖_{L^r(X)} if X ⊂ ℝ^d and μ is the Lebesgue measure. For any
f ∈ L^1(X, P) where P ∈ M_+^1(X), we define Pf := ∫_X f(x) dP(x) and P_m f := (1/m) Σ_{i=1}^m f(Xᵢ),
where (Xᵢ)_{i=1}^m ~ i.i.d. P, P_m := (1/m) Σ_{i=1}^m δ_{Xᵢ} is the empirical measure, and δ_x is a Dirac
measure supported on x ∈ X. supp(P) denotes the support of P. P^m := ⊗_{j=1}^m P denotes the
m-fold product measure.

For v := (v₁, …, v_d) ∈ ℝ^d, ‖v‖₂ := (Σ_{i=1}^d vᵢ²)^{1/2}. The diameter of A ⊂ Y, where (Y, ρ) is a metric
space, is defined as |A|_ρ := sup{ρ(x, y) : x, y ∈ A}. If Y = ℝ^d with ρ = ‖·‖₂, we denote the
diameter of A as |A|; |A| < ∞ if A is compact. The volume of A ⊂ ℝ^d is defined as vol(A) = ∫_A 1 dx.
For A ⊂ ℝ^d, we define A△ := A − A = {x − y : x, y ∈ A}. conv(A) is the convex hull of A. For
a function g defined on an open set B ⊂ ℝ^d × ℝ^d,

∂^{p,q} g(x, y) := ∂^{|p|+|q|} g(x, y) / (∂x₁^{p₁} ⋯ ∂x_d^{p_d} ∂y₁^{q₁} ⋯ ∂y_d^{q_d}),   (x, y) ∈ B,

where p, q ∈ ℕ^d are multi-indices, |p| = Σ_{j=1}^d p_j and ℕ := {0, 1, 2, …}. Define v^p := Π_{j=1}^d v_j^{p_j}.
For positive sequences (a_n)_{n∈ℕ}, (b_n)_{n∈ℕ}, a_n = o(b_n) if lim_{n→∞} a_n/b_n = 0. X_n = O_p(r_n) (resp.
O_{a.s.}(r_n)) denotes that X_n/r_n is bounded in probability (resp. almost surely). Γ(t) = ∫₀^∞ x^{t−1} e^{−x} dx
is the Gamma function, Γ(1/2) = √π and Γ(t + 1) = tΓ(t).
Random feature maps: Let k : ℝ^d × ℝ^d → ℝ be a bounded, continuous, positive definite,
translation-invariant kernel, i.e., there exists a positive definite function ψ : ℝ^d → ℝ such that
k(x, y) = ψ(x − y), x, y ∈ ℝ^d, where ψ ∈ C_b(ℝ^d). By Bochner's theorem [24, Theorem 6.6], ψ
can be represented as the Fourier transform of a finite non-negative Borel measure Λ on ℝ^d, i.e.,

k(x, y) = ψ(x − y) = ∫_{ℝ^d} e^{√(−1) ω^⊤(x−y)} dΛ(ω) = ∫_{ℝ^d} cos(ω^⊤(x − y)) dΛ(ω),   (1)

where the second equality follows from the fact that ψ is real-valued and symmetric. Since Λ(ℝ^d) = ψ(0),
k(x, y) = ψ(0) ∫ e^{√(−1) ω^⊤(x−y)} dP(ω), where P := Λ/ψ(0) ∈ M_+^1(ℝ^d). Therefore, w.l.o.g., we
assume throughout the paper that ψ(0) = 1, and so Λ ∈ M_+^1(ℝ^d). Based on (1), [13] proposed an
approximation to k by replacing Λ with its empirical measure Λ_m constructed from (ωᵢ)_{i=1}^m ~ i.i.d. Λ,
so that the resultant approximation can be written as the Euclidean inner product of finite-dimensional
random feature maps, i.e.,

k̂(x, y) = (1/m) Σ_{i=1}^m cos(ωᵢ^⊤(x − y)) = ⟨Φ(x), Φ(y)⟩_{ℝ^{2m}},   (2)

where Φ(x) = (1/√m)(cos(ω₁^⊤x), …, cos(ω_m^⊤x), sin(ω₁^⊤x), …, sin(ω_m^⊤x))^⊤ and the second equality
holds by the basic trigonometric identity cos(a − b) = cos a cos b + sin a sin b. This elegant approximation to
k is particularly useful in speeding up kernel-based algorithms, as the finite-dimensional random feature map Φ can be used to solve these algorithms in the primal, thereby offering better computational
complexity (than by solving them in the dual) while at the same time not lacking in performance.
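For concreteness, here is a minimal implementation of (2) for the Gaussian kernel k(x, y) =
exp(−‖x − y‖²/(2γ²)), whose spectral measure is Λ = N(0, γ⁻²I_d); the code is our own illustration
of the construction in [13].

import numpy as np

def rff_map(X, m, gamma, seed=5):
    # Random Fourier feature map Phi of (2): X is (num_points, d); output is (num_points, 2m).
    # Targets k(x, y) = exp(-||x - y||^2 / (2 * gamma^2)), i.e. Lambda = N(0, gamma^{-2} I_d).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, m)) / gamma   # omega_1, ..., omega_m drawn i.i.d. from Lambda
    Z = X @ W
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(m)

X = np.random.default_rng(0).standard_normal((5, 3))
Phi = rff_map(X, m=2000, gamma=1.0)
K_hat = Phi @ Phi.T                           # approximates the Gram matrix of k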
Apart from these practical advantages, [13, Claim 1] (and similarly, [22, Prop. 1]) provides a theoretical guarantee that ‖k̂ − k‖_{S×S} → 0 as m → ∞ for any compact set S ⊂ ℝ^d. Formally, [13, Claim
1] showed that (note that (3) is slightly different but more precise than the one in the statement of
Claim 1 in [13]) for any ε > 0,

Λ^m{(ωᵢ)_{i=1}^m : ‖k̂ − k‖_{S×S} ≥ ε} ≤ C_d (|S|σε⁻¹)^{2d/(d+2)} e^{−mε²/(4(d+2))},   (3)

where σ² := ∫_{ℝ^d} ‖ω‖₂² dΛ(ω) and C_d := 2^{(6d+2)/(d+2)} ((d/2)^{d/(d+2)} + (d/2)^{−2/(d+2)}) ≤ 2⁷ d^{d/(d+2)} when d ≥ 2. The
condition σ² < ∞ implies that ψ (and therefore k) is twice differentiable. From (3) it is clear that
the probability has polynomial tails if ε < |S|σ (i.e., small ε) and Gaussian tails if ε ≥ |S|σ (i.e.,
large ε), and can be equivalently written as
o
n
p
d+2
?
d
? ? kkS?S ? C 2d |S|? m?1 log m
? m 4(d+2) (log m)? d+2 ,
(4)
:
k
k
?m (?i )m
i=1
d
d+2
where ? := 4d ? Cd d |S|2 ? 2 . For |S| sufficiently large (i.e., ? < 0), it follows from (4) that
p
kk? ? kkS?S = Op |S| m?1 log m .
(5)
While (5) shows that k? is a consistent estimator of k in the topology of compact
pconvergence (i.e.,
k? convergences to k uniformly over compact sets), the rate of convergence of (log m)/m is not
optimal. In addition, the order of dependence on |S| is not optimal. While a faster rate (in fact,
an optimal rate) of convergence is desired?better rates in (5) can lead to better convergence rates
?
for the excess error of the kernel machine constructed using k?,
the order of dependence on |S| is
also important as it determines the the number of RFF features (i.e., m) that are needed to achieve
a given approximation accuracy. In fact, the order of dependence on |S| controls the rate at which
|S| can be grown as a function of m when m ? ? (see Remark 1(ii) for a detailed discussion
about the significance of growing |S|). In the following section, we present an analogue of (4)?see
Theorem 1?that provides optimal rates and has correct dependence on |S|.
3
Main results: approximation of k
As discussed in Sections 1 and 2, while the random feature map approximation of k introduced by
[13] has many practical advantages, it does not seem to be theoretically well-understood. The existing theoretical results on the quality of approximation do not provide a complete picture owing to
their non-optimality. In this section, we first present our main result (see Theorem 1) that improves
upon (4) and provides a rate of m?1/2 with logarithm dependence on |S|. We then discuss the consequences of Theorem 1 along with its optimality in Remark 1. Next, in Corollary 2 and Theorem 3,
we discuss the Lr -convergence (1 ? r < ?) of k? to k over compact subsets of Rd .
d
d
Theorem
definite and
R 1. 2Suppose k(x, y) = ?(x ? y), x, y ? R where ? ? Cb (R ) is positive
2
? := k?k d?(?) < ?. Then for any ? > 0 and non-empty compact set S ? Rd ,
(
? )!
h(d, |S|, ?) + 2?
m
m
?
?
?
(?i )i=1 : kk ? kkS?S ?
? e?? ,
m
p
p
p
where h(d, |S|, ?) := 32 2d log(2|S| + 1) + 32 2d log(? + 1) + 16 2d[log(2|S| + 1)]?1 .
? y) ? k(x, y)| = sup
Proof (sketch). Note that kk? ? kkS?S = supx,y?S |k(x,
g?G |?m g ? ?g|,
T
where G := {gx,y (?) = cos(? (x ? y)) : x, y ? S}, which means the object of interest is the
suprema of an empirical process indexed by G. Instead of bounding supg?G |?m g ? ?g| by using
Hoeffding?s inequality on a cover of G and then applying union bound as carried out in [13, 22],
we use the refined technique of applying concentration via McDiarmid?s inequality, followed by
symmetrization and bound the Rademacher average by Dudley entropy bound. The result is obtained
by carefully bounding the L2 (?m )-covering number of G. The details are provided in Section B.1
of the supplementary material.
Remark 1. (i) Theorem 1 shows that k? is a consistent estimator
p of k in the topology of compact convergence as m ? ? with the rate of a.s. convergence being m?1 log |S| (almost sure convergence
is guaranteed by the first Borel-Cantelli lemma). In comparison to (4), it is clear that Theorem 1
4
provides improved rates with better constants and logarithmic dependence on |S| instead of a linear
dependence. The logarithmic dependence on |S| ensures that we need m = O(??2 log |S|) random features instead of O(??2 |S|2 log(|S|/?)) random features, i.e., significantly fewer features to
achieve the same approximation accuracy of ?.
(ii) Growing diameter: While Theorem 1 provides almost sure convergence uniformly over compact sets, one might wonder whether it is possible to achieve uniform convergence over Rd . [7,
Section 2] showed that such a result is possible if ? is a discrete measure but not possible for ?
that is absolutely continuous w.r.t. the Lebesgue measure (i.e., if ? has a density). Since uniform
convergence of k? to k over Rd is not possible for many interesting k (e.g., Gaussian kernel), it is
of interest to study the convergence on S whose diameter grows with m. Therefore, as mentioned
in Section 2, the order of dependence of rates on |S| is critical. Suppose |Sm | ? ? as m ? ?
(we write |Sm | instead of |S| to show the explicit dependence on m). Then Theorem 1 shows that
k? is a consistent estimator of k in the topology of compact convergence if m?1 log p
|Sm | ? 0 as
m ? ? (i.e., |Sm | = eo(m) ) in contrast to the result in (4) which requires |Sm | = o( m/ log m).
In other words, Theorem 1 ensures consistency even when |Smp
| grows exponentially in m whereas
(4) ensures consistency only if |Sm | does not grow faster than m/ log m.
1
(iii) Optimality: Note that ? is the characteristic function of ? ? M+
(Rd ) since ? is the Fourier
transform of ? (by Bochner?s theorem). Therefore, the object of interest kk? ? kkS?S = k?? ? ?kS? ,
is the
function ?? =
Pmuniform norm of the difference between ? and the empirical characteristic
1
d
i=1 cos(h?i , ?i), when both are restricted to a compact set S? ? R . The question of the conm
?
vergence behavior of k???k
S? is not new and has been studied in great detail in the probability and
statistics literature (e.g., see [7, 27] for d = 1 and [4, 5] for d > 1) where the characteristic function
is not just a real-valued symmetric function (like ?) but is Hermitian. [27, Theorems 6.2 and 6.3]
show that the optimal rate of convergence of k?? ? ?kS? is m?1/2 when d = 1, which matches
with our result in Theorem 1. Also Theorems 1 and 2 in [5] show that the logarithmic dependence
on |Sm | is optimal asymptotically. In particular, [5, Theorem 1] matches with the growing diameter result in Remark 1(ii), while [5, Theorem 2] shows that if ? is absolutely continuous w.r.t. the
Lebesgue measure and if lim supm?? m?1 log |Sm | > 0, then there exists a positive ? such that
lim supm?? ?m (k?? ? ?kSm,? ? ?) > 0. This means the rate |Sm | = eo(m) is not only the best
possible in general for almost sure convergence, but if faster sequence |Sm | is considered then even
stochastic convergence cannot be retained for any characteristic function vanishing at infinity along
at least one path. While these previous results match with that of Theorem 1 (and its consequences),
we would like to highlight the fact that all these previous results are asymptotic in nature whereas
Theorem 1 provides a finite-sample probabilistic inequality that holds for any m. We are not aware
of any such finite-sample result except for the one in [13, 22].
Using Theorem 1, one can obtain a probabilistic inequality for the Lr -norm of k? ? k over any
compact set S ? Rd , as given by the following result.
Corollary 2. Suppose k satisfies the assumptions in Theorem 1. Then for any 1 ? r < ?, ? > 0
and non-empty compact set S ? Rd ,
??
??
!2/r
? ?
?
d/2
d
?
|S|
2?
h(d,
|S|,
?)
+
?
? ? e?? ,
?
?m ? (?i )m
i=1 : kk ? kkLr (S) ?
?
?
m
2d ?( d2 + 1)
where kk? ? kkLr (S) := kk? ? kkLr (S?S) =
Proof. Note that
R R
S S
? y) ? k(x, y)|r dx dy
|k(x,
r1
.
kk? ? kkLr (S) ? kk? ? kkS?S vol2/r (S).
The result follows byocombining Theorem 1 and the fact that vol(S) ? vol(A) where A :=
n
d/2
d
(which follows from [8, Corollary 2.55]).
and vol(A) = 2?d ? d|S|
x ? Rd : kxk2 ? |S|
2
( 2 +1)
p
Corollary 2 shows that kk? ? kkLr (S) = Oa.s. (m?1/2 |S|2d/r log |S|) and therefore if |Sm | ? ? as
p
m ? ?, then consistency of k? in Lr (Sm )-norm is achieved as long as m?1/2 |Sm |2d/r log |Sm | ?
5
0 as m ? ?. This means, in comparison to the uniform normr in Theoremr 1 where |Sm | can grow
exponential in m? (? < 1), |Sm | cannot grow faster than m 4d (log m)? 4d ?? (? > 0) to achieve
consistency in Lr -norm.
Instead of using Theorem 1 to obtain a bound on kk? ? kkLr (S) (this bound may be weak as kk? ?
kkLr (S) ? kk? ? kkS?S vol2/r (S) for any 1 ? r < ?), a better bound (for 2 ? r < ?) can be
obtained by directly bounding kk? ? kkLr (S) , as shown in the following result.
Theorem 3. Suppose k(x, y) = ?(x ? y), x, y ? Rd where ? ? Cb (Rd ) is positive definite. Then
for any 1 < r < ?, ? > 0 and non-empty compact set S ? Rd ,
??
??
!2/r
? !?
?
?
d/2
d
2?
Cr
? |S|
?
? ? e?? ,
?m ? (?i )m
+ ?
1 1
i=1 : kk ? kkLr (S) ?
?
m ?
2d ?( d2 + 1)
m1?max{ 2 , r }
where Cr? is the Khintchine constant given by Cr? = 1 for r ? (1, 2] and Cr? =
for r ? [2, ?).
?
2 ?
r+1
2
? r1
/ ?
? Lr (S) satisfies the bounded difference
Proof (sketch). As in Theorem 1, we show that kk ? kk
property, hence by the McDiarmid?s inequality, it concentrates around its expectation Ekk ?
? Lr (S) . By symmetrization, we then show that Ekk ? kk
? Lr (S) is upper bounded in terms of
kk
Pm
m
E? k i=1 ?i cos(h?i , ? ? ?i)kLr (S) , where ? := (?i )i=1 are Rademacher random variables. By
exploiting the fact that Lr (S) is a Banach space of type min{r, 2}, the result follows. The details
are provided in Section B.2 of the supplementary material.
p
Remark 2. Theorem 3 shows an improved dependence on |S| without the extra log |S| factor given
in Corollary 2 and therefore provides a better rate for 2 ? r < ? when the diameter of S grows, i.e.,
r
a.s.
kk? ? kkLr (Sm ) ? 0 if |Sm | = o(m 4d ) as m ? ?. However, for 1 < r < 2, Theorem 3 provides
a slower rate than Corollary 2 and therefore it is appropriate to use the bound in Corollary 2. While
one might wonder why we only considered the convergence of kk? ? kkLr (S) and not kk? ? kkLr (Rd ) ,
it is important to note that the latter is not well-defined because k? ?
/ Lr (Rd ) even if k ? Lr (Rd ).
4
Approximation of kernel derivatives
In the previous section we focused on the approximation of the kernel function where we presented
uniform and Lr convergence guarantees on compact sets for the random Fourier feature approximation, and discussed how fast the diameter of these sets can grow to preserve uniform and Lr
convergence almost surely. In this section, we propose an approximation to derivatives of the kernel
and analyze the uniform and Lr convergence behavior of the proposed approximation. As motivated
in Section 1, the question of approximating the derivatives of the kernel through finite dimensional
random feature map is also important as it enables to speed up several interesting machine learning
tasks that involve the derivatives of the kernel [28, 18, 15, 16, 26, 20], see for example the recent
infinite dimensional exponential family fitting technique [21], which implements this idea.
To this end, we consider k as in (1) and define ha := cos( ?a
2 + ?), a ? N (in other words
d
h
=
cos,
h
=
?
sin,
h
=
?
cos,
h
=
sin
and
h
=
h
1
2
3
a
a mod 4 ). For p, q ? N , assuming
R 0 p+q
|?
| d?(?) < ?, it follows from the dominated convergence theorem that
Z
? p,q k(x, y) =
? p (??)q h|p+q| ? T (x ? y) d?(?)
d
ZR
=
? p+q h|p| (? T x)h|q| (? T y) + h3+|p| (? T x)h3+|q| (? T y) d?(?),
Rd
so that ? p,q k(x, y) can be approximated by replacing ? with ?m , resulting in
m
1 X p
p,q k(x, y) := sp,q (x, y) =
?j (??j )q h|p+q| ?jT (x ? y) = h?p (x), ?q (y)iR2m , (6)
?\
m j=1
6
?1
m
i.i.d.
where ?p (u) :=
p
T
p
T
h3+|p| (?m
u)
h|p| (?m
u), ?1p h3+|p| (?1T u), ? ? ? , ?m
?1p h|p| (?1T u), ? ? ? , ?m
and (?j )m
? ?. Now the goal is to understand the behavior of ksp,q ? ? p,q kkS?S and
j=1
p,q
p,q
ks ? ? kkLr (S) for r ? [1, ?), i.e., obtain analogues of Theorems 1 and 3.
As in the proof sketch of Theorem 1, while ksp,q ?? p,q kkS?S can be analyzed as the suprema of an
empirical process indexed by a suitable function class (say G), some technical issues arise because
G is not uniformly bounded. This means McDiarmid or Talagrand?s inequality cannot be applied
to achieve concentration and bounding Rademacher average by Dudley entropy bound may not be
reasonable. While these issues can be tackled by resorting to more technical and refined methods,
in this paper, we generalize (see Theorem 4 which is proved in Section B.1 of the supplement)
Theorem 1 to derivatives under the restrictive assumption that supp(?) is bounded (note that many
popular kernels including the Gaussian do not satisfy this assumption). We also present another
result (see Theorem 5) by generalizing the proof technique3 of [13] to unbounded functions where
the boundedness assumption of supp(?) is relaxed but at the expense of a worse rate (compared to
Theorem 4).
i
h
2
Theorem 4. Let p, q ? Nd , Tp,q := sup??supp(?) |? p+q |, Cp,q := E??? |? p+q | k?k2 , and
assume that C2p,2q < ?. Suppose supp(?) is bounded if p 6= 0 and q 6= 0. Then for any ? > 0
and non-empty compact set S ? Rd ,
(
? )!
H(d, p, q, |S|) + Tp,q 2?
m
m
p,q
p,q
?
?
(?i )i=1 : k? k ? s kS?S ?
? e?? ,
m
where
"
#
q
p
p
p
1
+ log( C2p,2q + 1) ,
U (p, q, |S|) + p
H(d, p, q, |S|) = 32 2d T2p,2q
2 U (p, q, |S|)
?1/2
U (p, q, |S|) = log 2|S|T2p,2q + 1 .
Remark 3. (i) Note that Theorem 4 reduces to Theorem 1 if p = q = 0, in which case
Tp,q = T2p,2q = 1. If p 6= 0 or q 6= 0, then the boundedness of supp(?) implies that Tp,q < ?
and T2p,2q < ?.
(ii) Growth of |Sm |: By the same reasoning as in Remark 1(ii) and Corollary 2, it follows
a.s.
a.s.
that k? p,q k ? sp,q kSm ?Sm ?? 0 if |Sm | = eo(m) and k? p,q k ? sp,q kLr (Sm ) ?? 0 if
p
m?1/2 |Sm |2d/r log |Sm | ? 0 (for 1 ? r < ?) as m ? ?. An exact analogue of Theorem 3 can
be obtained (but with different constants) under the assumption that supp(?) is bounded and it can
r
a.s.
be shown that for r ? [2, ?), k? p,q k ? sp,q kLr (Sm ) ?? 0 if |Sm | = o(m 4d ).
The following result relaxes the boundedness of supp(?) by imposing certain moment conditions on
? but at the expense of a worse rate. The proof relies on applying Bernstein inequality at the elements
of a net (which exists by the compactness of S) combined with a union bound, and extending the
approximation error from the anchors by a probabilistic Lipschitz argument.
Theorem 5. Let p, q ? Nd , ? be continuously differentiable, z 7? ?z [? p,q k(z)] be continuous,
S ? Rd be any non-empty compact set, Dp,q,S := supz?conv(S? ) k?z [? p,q k(z)]k2 and Ep,q :=
E??? [|? p+q | k?k2 ]. Assume that Ep,q < ?. Suppose ?L > 0, ? > 0 such that
3
M ! ? 2 LM ?2
E??? |f (z; ?)|M ?
2
(?M ? 2, ?z ? S? ),
(7)
We also correct some technical issues in the proof of [13, Claim 1], where (i) a shift-invariant argument was
Pm
T
T
1
?
applied to the non-shift invariant kernel estimator k(x,
y) = m
j=1 2 cos(?j x + bj ) cos(?j y + bj ) =
P
m
T
T
1
leading to
j=1 cos(?j (x ? y)) + cos(?j (x + y) + 2bj ) , (ii) the convexity of S was not imposed
m
?
possibly undefined Lipschitz constant (L) and (iii) the randomness of ? = arg max??S? ?[k(?) ?
?
was not taken into account, thus the upper bound on the expectation of the squared Lipschitz constant
k(?)]
2
2
(E[L ]) does not hold.
7
1
d
where f (z; ?) = ? p,q k(z) ? ? p (??)q h|p+q| ? T z . Define Fd := d? d+1 + d d+1 .4 Then
p,q
?m ({(?i )m
k ? sp,q kS?S ? ?}) ?
i=1 : k?
d
m?2
m?2
?
4d?1
?L
|S|(Dp,q,S + Ep,q ) d+1 ? 8(d+1)?2 (1+ ?L2 )
2
2?
? 2d?1 e 8? (1+ 2?2 ) + Fd 2 d+1
.
e
?
(8)
Remark 4. (i) The compactness of S implies that of S? . Hence, by the continuity
of z 7?
?z [? p,q k(z)], one gets Dp,q,S < ?. (7) holds if |f (z; ?)| ? L2 and E??? |f (z; ?)|2 ? ? 2
(?z ? S? ). If supp(?) is bounded, then the boundedness of f is guaranteed (see Section B.4 in the
supplement).
(ii) In the special case when p = q = 0, our requirement boils down to the continuously differentiability of ?, E0,0 = E??? k?k2 < ?, and (7).
(iii) Note that (8) is similar to p
(3) and therefore based on the discussion in Section 2, one has
k? p,q k ? sp,q kS?S = Oa.s. (|S| m?1 log m). But the advantage with Theorem 5 over [13, Claim
1] and [22, Prop. 1] is that it can handle unbounded functions. In comparison to Theorem 4, we
obtain worse rates and it will be of interest to improve the rates of Theorem 5 while handling unbounded functions.
5
Discussion
In this paper, we presented the first detailed theoretical analysis about the approximation quality of
random Fourier features (RFF) that was proposed by [13] in the context of improving the computational complexity of kernel machines. While [13, 22] provided a probabilistic bound on the uniform
approximation (over compact subsets of Rd ) of a kernel by random features, the result is not optimal. We improved this result by providing a finite-sample bound with optimal rate of convergence
and also analyzed the quality of approximation in Lr -norm (1 ? r < ?). We also proposed an
RFF approximation for derivatives of a kernel and provided theoretical guarantees on the quality of
approximation in uniform and Lr -norms over compact subsets of Rd .
While all the results in this paper (and also in the literature) dealt with the approximation quality
of RFF over only compact subsets of Rd , it is of interest to understand its behavior over entire Rd .
However, as discussed in Remark 1(ii) and in the paragraph following Theorem 3, RFF cannot approximate the kernel uniformly or in Lr -norm over Rd . By truncating the Taylor series expansion
of the exponential function, [3] proposed a non-random finite dimensional representation to approximate the Gaussian kernel which also enjoys the computational advantages of RFF. However, this
representation also does not approximate the Gaussian kernel uniformly over Rd . Therefore, the
question remains whether it is possible to approximate a kernel uniformly or in Lr -norm over Rd
but still retaining the computational advantages associated with RFF.
Acknowledgments
Z. Szab?o wishes to thank the Gatsby Charitable Foundation for its generous support.
References
[1] A. E. Alaoui and M. Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In
NIPS, 2015.
[2] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization.
Journal of Machine Learning Research, 12:1069?1109, 2011.
[3] A. Cotter, J. Keshet, and N. Srebro. Explicit approximations of the Gaussian kernel. Technical report,
2011. http://arxiv.org/pdf/1109.4603.pdf.
[4] S. Cs?org?o. Multivariate empirical characteristic functions. Zeitschrift f?ur Wahrscheinlichkeitstheorie und
Verwandte Gebiete, 55:203?229, 1981.
[5] S. Cs?org?o and V. Totik. On how long interval is the empirical characteristic function uniformly consistent?
Acta Scientiarum Mathematicarum, 45:141?149, 1983.
4
Fd is monotonically decreasing in d, F1 = 2.
8
[6] P. Drineas and M. W. Mahoney. On the Nystr?om method for approximating a Gram matrix for improved
kernel-based learning. Journal of Machine Learning Research, 6:2153?2175, 2005.
[7] A. Feuerverger and R. A. Mureika. The empirical characteristic function and its applications. Annals of
Statistics, 5(1):88?98, 1977.
[8] G. B. Folland. Real Analysis: Modern Techniques and Their Applications. Wiley-Interscience, 1999.
[9] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 34:1092?1104, 2012.
[10] D. Lopez-Paz, K. Muandet, B. Sch?olkopf, and I. Tolstikhin. Towards a learning theory of cause-effect
inference. JMLR W&CP ? ICML, pages 1452?1461, 2015.
[11] S. Maji, A. C. Berg, and J. Malik. Efficient classification for additive kernel SVMs. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 35:66?77, 2013.
[12] J. Oliva, W. Neiswanger, B. P?oczos, E. Xing, and J. Schneider. Fast function to function regression. JMLR
W&CP ? AISTATS, pages 717?725, 2015.
[13] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177?1184,
2007.
[14] A. Rahimi and B. Recht. Uniform approximation of functions with random bases. In Allerton, pages
555?561, 2008.
[15] L. Rosasco, M. Santoro, S. Mosci, A. Verri, and S. Villa. A regularization approach to nonlinear variable
selection. JMLR W&CP ? AISTATS, 9:653?660, 2010.
[16] L. Rosasco, S. Villa, S. Mosci, M. Santoro, and A. Verri. Nonparametric sparsity and regularization.
Journal of Machine Learning Research, 14:1665?1714, 2013.
[17] B. Sch?olkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[18] L. Shi, X. Guo, and D.-X. Zhou. Hermite learning with gradient data. Journal of Computational and
Applied Mathematics, 233:3046?3059, 2010.
[19] Q. Shi, J. Petterson, G. Dror, J. Langford, A. Smola, A. Strehl, and V. Vishwanathan. Hash kernels.
AISTATS, 5:496?503, 2009.
[20] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, A. Hyv?arinen, and R. Kumar.
sity estimation in infinite dimensional exponential families.
Technical report,
http://arxiv.org/pdf/1312.3516.pdf.
Den2014.
[21] H. Strathmann, D. Sejdinovic, S. Livingstone, Z. Szab?o, and A. Gretton. Gradient-free Hamiltonian
Monte Carlo with efficient kernel exponential families. In NIPS, 2015.
[22] D. J. Sutherland and J. Schneider. On the error of random Fourier features. In UAI, pages 862?871, 2015.
[23] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 34:480?492, 2012.
[24] H. Wendland. Scattered Data Approximation. Cambridge University Press, 2005.
[25] C. K. I. Williams and M. Seeger. Using the Nystr?om method to speed up kernel machines. In NIPS, pages
682?688, 2001.
[26] Y. Ying, Q. Wu, and C. Campbell. Learning the coordinate gradients. Advances in Computational Mathematics, 37:355?378, 2012.
[27] J. E. Yukich. Some limit theorems for the empirical process indexed by functions. Probability Theory
and Related Fields, 74:71?90, 1987.
[28] D.-X. Zhou. Derivative reproducing properties for kernel methods in learning theory. Journal of Computational and Applied Mathematics, 220:456?463, 2008.
9
| 5740 |@word kulis:1 private:1 polynomial:1 norm:17 nd:3 open:1 d2:3 hyv:1 bn:3 zolt:1 attainable:1 q1:1 thereby:2 nystr:2 boundedness:4 moment:1 series:1 offering:1 existing:1 current:1 yet:1 dx:3 written:2 additive:3 numerical:1 enables:1 designed:1 hash:1 intelligence:3 fewer:1 ksm:2 vanishing:1 hamiltonian:1 lr:27 provides:9 gx:1 allerton:1 mcdiarmid:3 org:4 hermite:2 unbounded:3 along:4 constructed:4 kvk2:1 differential:1 lopez:1 consists:1 prove:1 fitting:2 interscience:1 paragraph:1 hermitian:1 introduce:1 privacy:1 theoretically:2 mosci:2 behavior:6 growing:4 multi:2 relying:1 decreasing:1 resolve:1 little:2 pf:1 solver:1 conv:2 provided:6 bounded:11 notation:4 dror:1 guarantee:10 mitigate:1 tackle:1 growth:3 grauman:1 k2:6 uk:2 control:1 unit:1 positive:6 sutherland:1 understood:2 limit:1 consequence:2 zeitschrift:1 despite:2 initiated:1 establishing:1 path:1 yd:1 might:3 twice:1 acta:1 k:6 studied:1 co:17 practical:2 acknowledgment:1 testing:1 union:2 implement:2 definite:4 area:1 empirical:11 suprema:2 thought:1 attain:1 significantly:1 vedaldi:1 word:2 get:1 cannot:5 selection:2 context:1 applying:3 risk:1 map:12 demonstrated:1 missing:1 imposed:1 folland:1 shi:2 williams:1 truncating:1 convex:1 ekk:2 focused:1 insight:1 estimator:5 supz:1 handle:2 traditionally:1 coordinate:1 resp:4 construction:2 suppose:6 annals:1 exact:1 hypothesis:1 pa:1 trick:1 element:1 approximated:1 particularly:2 ep:3 solved:1 ensures:3 richness:1 mentioned:2 und:1 convexity:1 complexity:2 raise:1 solving:2 upon:1 drineas:1 easily:1 various:1 represented:1 maji:1 grown:1 fast:6 london:1 monte:1 refined:2 whose:1 widely:1 supplementary:3 valued:2 say:2 solve:1 statistic:3 transform:3 online:1 advantage:7 sequence:2 differentiable:2 net:1 ucl:2 propose:3 product:3 mb:2 trigonometric:1 flexibility:1 achieve:5 chaudhuri:1 dirac:1 scalability:1 differentially:1 olkopf:2 exploiting:1 rff:25 convergence:31 requirement:2 empty:5 rademacher:3 r1:2 extending:1 guaranteeing:1 strathmann:1 sity:1 object:2 ac:1 h3:4 op:3 strong:1 c:2 involves:1 implies:3 qd:2 concentrate:1 correct:3 owing:1 hull:1 stochastic:1 material:3 bks18:1 require:1 arinen:1 f1:1 generalization:1 preliminary:2 tighter:1 zoltan:1 mathematicarum:1 hold:5 sufficiently:1 considered:4 around:1 great:1 cb:5 mapping:1 bj:3 claim:5 lm:1 generous:1 estimation:2 symmetrization:2 sensitive:1 tool:1 cotter:1 minimization:1 fukumizu:1 mit:1 gaussian:6 zhou:2 cr:4 verwandte:1 corollary:8 derived:1 focus:1 rank:2 cantelli:1 contrast:2 seeger:1 am:6 inference:2 entire:3 santoro:2 compactness:2 kernelized:1 relation:2 ksp:2 among:1 classification:2 dual:1 issue:3 arg:1 retaining:1 special:2 field:1 aware:1 extraction:1 psu:1 park:1 icml:1 report:2 serious:2 modern:1 gamma:1 preserve:1 petterson:1 szabo:1 lebesgue:4 yukich:1 versatility:1 interest:5 fd:3 tolstikhin:1 mahoney:2 analyzed:2 undefined:1 primal:2 integral:2 owes:1 indexed:3 euclidean:1 logarithm:1 taylor:1 desired:1 causal:2 e0:1 theoretical:7 cover:1 tp:4 rffs:4 applicability:1 conm:1 subset:4 uniform:14 wonder:2 paz:1 too:1 dependency:1 supx:2 combined:1 muandet:1 recht:2 density:1 fundamental:1 randomized:2 systematic:1 probabilistic:5 continuously:2 squared:1 satisfied:1 rosasco:2 possibly:2 hoeffding:1 wahrscheinlichkeitstheorie:1 worse:3 derivative:15 leading:1 supp:9 account:1 satisfy:1 vi:1 supg:1 analyze:1 sup:4 xing:1 capability:2 contribution:1 supm:2 om:2 accuracy:3 characteristic:9 efficiently:1 yield:1 vp:1 generalize:1 weak:1 bayesian:1 dealt:1 carlo:1 randomness:1 bharath:1 
monteleoni:1 definition:2 sriperumbudur:2 resultant:1 associated:3 proof:9 boil:1 proved:1 popular:2 knowledge:2 lim:2 improves:1 carefully:1 campbell:1 hashing:2 attained:1 supervised:1 zisserman:1 improved:4 verri:2 just:2 implicit:2 smola:2 langford:1 talagrand:1 sketch:3 replacing:2 nonlinear:2 scientiarum:1 continuity:1 quality:9 grows:4 usa:1 effect:1 hence:2 regularization:3 symmetric:2 sin:6 covering:1 cosine:1 pdf:4 presenting:1 ridge:2 complete:1 l1:1 cp:4 reasoning:1 ranging:1 recently:2 smp:1 empirically:2 qp:1 exponentially:1 volume:1 banach:2 tail:2 discussed:3 approximates:1 m1:1 sarwate:1 significant:1 cambridge:1 imposing:1 enjoyed:1 rd:45 resorting:2 pm:6 similarly:1 consistency:4 mathematics:3 centre:1 jg:1 base:1 multivariate:1 showed:2 recent:1 apart:1 certain:2 inequality:7 oczos:1 success:3 integrable:1 preserving:2 seen:1 relaxed:1 schneider:2 eo:5 surely:2 bochner:3 monotonically:1 ii:9 semi:1 reduces:1 rahimi:2 gretton:2 technical:5 match:4 faster:4 long:4 equally:1 specializing:1 involving:2 regression:5 basic:1 oliva:1 metric:1 expectation:2 arxiv:2 kernel:60 represent:3 sometimes:1 sejdinovic:1 achieved:1 addition:2 whereas:2 interval:1 grow:4 limn:1 sch:2 extra:1 operate:1 unlike:1 sure:7 elegant:1 alaoui:1 mod:1 seem:1 t2p:4 xp1:1 bernstein:1 iii:3 relaxes:1 pennsylvania:1 topology:3 inner:2 idea:2 computable:1 intensive:1 csml:1 shift:5 bottleneck:1 whether:2 motivated:1 suffer:1 cause:1 remark:9 useful:2 detailed:4 involve:2 clear:2 nonparametric:1 svms:1 differentiability:1 diameter:10 http:2 popularity:1 write:2 discrete:1 vol:4 pj:1 v1:1 asymptotically:1 powerful:1 khintchine:1 almost:9 family:6 throughout:3 reasonable:1 wu:1 dy:1 bound:15 followed:2 guaranteed:4 tackled:1 fold:1 topological:1 infinity:2 vishwanathan:1 dominated:1 fourier:13 speed:4 argument:2 extremely:1 optimality:3 min:1 kumar:1 department:1 poor:1 slightly:1 ur:1 appealing:1 modification:1 invariant:7 restricted:1 wellcome:1 heart:1 computationally:1 taken:1 remains:1 discus:2 needed:1 neiswanger:1 end:1 operation:1 apply:1 appropriate:2 dudley:2 slower:1 denotes:6 restrictive:1 approximating:6 malik:1 question:3 barycenter:1 concentration:2 dependence:14 villa:2 gradient:5 dp:5 thank:1 oa:3 street:1 vd:1 assuming:1 index:1 retained:1 kk:24 providing:3 ying:1 equivalently:1 unfortunately:1 statement:1 expense:2 negative:1 implementation:1 contributed:1 upper:2 sm:26 finite:15 enabling:1 precise:1 rn:3 reproducing:1 introduced:2 tremendous:1 nip:4 address:1 beyond:1 pattern:3 sparsity:1 challenge:1 max:2 including:1 analogue:3 power:1 suitable:2 critical:1 zr:1 improve:1 brief:1 numerous:2 picture:1 carried:1 speeding:1 review:1 literature:5 discovery:2 l2:4 kf:2 asymptotic:3 lacking:1 highlight:1 interesting:2 limitation:1 srebro:1 foundation:1 sufficient:1 consistent:4 charitable:1 strehl:1 klr:5 translation:2 cd:3 summary:1 surprisingly:1 supported:1 free:1 enjoys:1 allow:1 understand:2 wide:1 dimension:2 xn:1 gram:4 c2p:2 transaction:3 excess:1 approximate:7 compact:24 implicitly:1 supremum:1 dealing:2 uai:1 anchor:1 xi:3 continuous:7 quantifies:1 vergence:1 why:1 ir2m:4 nature:2 improving:1 expansion:1 complex:2 vj:1 sp:6 significance:1 main:2 aistats:3 bounding:4 arise:1 w1t:1 borel:3 scattered:1 gatsby:3 wiley:1 sub:1 explicit:6 wish:1 exponential:6 kxk2:1 jmlr:3 theorem:51 down:1 xt:1 jt:1 decay:1 exists:3 importance:1 supplement:2 keshet:1 kx:1 locality:1 entropy:2 generalizing:1 logarithmic:4 expressed:1 wendland:1 determines:1 satisfies:2 relies:1 prop:2 identity:1 goal:1 
careful:1 towards:1 price:1 lipschitz:3 infinite:4 except:1 szab:3 uniformly:8 degradation:1 lemma:1 livingstone:1 formally:1 berg:1 support:3 gebiete:1 latter:1 guo:1 absolutely:2 handling:1 |
5,237 | 5,741 | Submodular Hamming Metrics
Jennifer Gillenwater? , Rishabh Iyer? , Bethany Lusch? , Rahul Kidambi? , Jeff Bilmes?
?
University of Washington, Dept. of EE, Seattle, U.S.A.
?
University of Washington, Dept. of Applied Math, Seattle, U.S.A.
{jengi, rkiyer, herwaldt, rkidambi, bilmes}@uw.edu
Abstract
We show that there is a largely unexplored class of functions (positive polymatroids) that can define proper discrete metrics over pairs of binary vectors and
that are fairly tractable to optimize over. By exploiting submodularity, we are
able to give hardness results and approximation algorithms for optimizing over
such metrics. Additionally, we demonstrate empirically the effectiveness of these
metrics and associated algorithms on both a metric minimization task (a form of
clustering) and also a metric maximization task (generating diverse k-best lists).
1
Introduction
A good distance metric is often the key to an effective machine learning algorithm. For instance,
when clustering, the distance metric largely defines which points end up in which clusters. Similarly,
in large-margin learning, the distance between different labelings can contribute as much to the
definition of the margin as the objective function itself. Likewise, when constructing diverse k-best
lists, the measure of diversity is key to ensuring meaningful differences between list elements.
We consider distance metrics d : {0, 1}n ? {0, 1}n ? R+ over binary vectors, x ? {0, 1}n . If
we define the set V = {1, . . . , n}, then each x = 1A can seen as the characteristic vector of a
set A ? V , where 1A (v) = 1 if v ? A, and 1A (v) = 0 otherwise. For sets A, B ? V , with
4 representing the symmetricP
difference, A4B P
, (A \ B) ? (B \ A), the Hamming distance is
n
n
then: dH (A, B) = |A4B| = i=1 1A4B (i) = i=1 1(1A (i) 6= 1B (i)). A Hamming distance
between two vectors assumes that each entry difference contributes value one. Weighted Hamming
distance generalizes this slightly, allowing each entry a unique weight. The Mahalanobis distance
further extends this. For many practical applications, however, it is desirable to have entries interact
with each other in more complex and higher-order ways than Hamming or Mahalanobis allow. Yet,
arbitrary interactions would result in non-metric functions whose optimization would be intractable.
In this work, therefore, we consider an alternative class of functions that goes beyond pairwise
interactions, yet is computationally feasible, is natural for many applications, and preserves metricity.
Given a set function f : 2V ? R, we can define a distortion between two binary vectors as
follows: df (A, B) = f (A4B). By asking f to satisfy certain properties, we will arrive at a class
of discrete metrics that is feasible to optimize and preserves metricity. We say that f is positive
if f (A) > 0 whenever A 6= ?; f is normalized if f (?) = 0; f is monotone if f (A) ? f (B)
for all A ? B ? V ; f is subadditive if f (A) + f (B) ? f (A ? B) for all A, B ? V ; f is
modular if f (A) + f (B) = f (A ? B) + f (B ? A) for all A, B ? V ; and f is submodular
if f (A) + f (B) ? f (A ? B) + f (B ? A) for all A, B ? V . If we assume that f is positive,
normalized, monotone, and subadditive then df (A, B) is a metric (see Theorem 3.1), but without
useful computational properties. If f is positive, normalized, monotone, and modular, then we recover
the weighted Hamming distance. In this paper, we assume that f is positive, normalized, monotone,
and submodular (and hence also subadditive). These conditions are sufficient to ensure the metricity
of df , but allow for a significant generalization over the weighted Hamming distance. Also, thanks to
the properties of submodularity, this class yields efficient optimization algorithms with guarantees
1
Table 1: Hardness for SH-min and SH-max. UC stands for unconstrained, and Card stands for
cardinality-constrained. The entry ?open? implies that the problem is potentially poly-time solvable.
UC
Card
SH-min
homogeneous
heterogeneous
Open
4/3
?
?
n
n
? 1+(?n?1)(1??f )
? 1+(?n?1)(1??
f)
SH-max
homogeneous heterogeneous
3/4
3/4
1 ? 1/e
1 ? 1/e
Table 2: Approximation guarantees of algorithms for SH-min and SH-max. ?-? implies that no
guarantee holds for the corresponding pair. B EST-B only works for the homogeneous case, while all
other algorithms work in both cases.
SH-min
SH-max
U NION -S PLIT
UC
Card
2
1/4
1/2e
B EST-B
UC
2 ? 2/m
-
M AJOR -M IN
Card
n
1+(n?1)(1??f )
-
R AND -S ET
UC
1/8
for practical machine learning problems. In what follows, we will refer to normalized monotone
submodular functions as polymatroid functions; all of our results will be concerned with positive
polymatroids. We note here that despite the restrictions described above, the polymatroid class is in
fact quite broad; it contains a number of natural choices of diversity and coverage functions, such as
set cover, facility location, saturated coverage, and concave-over-modular functions.
Given a positive polymatroid function f , we refer to df (A, B) = f (A4B) as a submodular
Hamming (SH) distance. We study two optimization problems involving these metrics (each fi is a
positive polymatroid, each Bi ? V , and C denotes a combinatorial constraint):
m
m
X
X
SH-min: min
fi (A4Bi ),
and
SH-max: max
fi (A4Bi ).
(1)
A?C
A?C
i=1
i=1
We will use F as shorthand for the
(f1 , . . . , fm ), B for the sequence (B1 , . . . , Bm ), and
Psequence
m
F (A) for the objective function i=1 fi (A4Bi ). We will also make a distinction between the
homogeneous case where all fi are the same function, and the more general heterogeneous case
where each fi may be distinct. In terms of constraints, in this paper?s theory we consider only the
unconstrained (C = 2V ) and the cardinality-constrained (e.g., |A| ? k, |A| ? k) settings. In general
though, C could express more complex concepts such as knapsack constraints, or that solutions must
be an independent set of a matroid, or a cut (or spanning tree, path, or matching) in a graph.
Intuitively, the SH-min problem can be thought of as a centroid-finding problem; the minimizing A
should be as similar to the Bi ?s as possible, since a penalty of fi (A4Bi ) is paid for each difference.
Analogously, the SH-max problem can be thought of as a diversification problem; the maximizing A
should be as distinct from all Bi ?s as possible, as fi (A4B) is awarded for each difference. Given
modular fi (the weighted Hamming distance case), these optimization problems can be solved exactly
and efficiently for many constraint types. For the more general case of submodular fi , we establish
several hardness results and offer new approximation algorithms, as summarized in Tables 1 and 2.
Our main contribution is to provide (to our knowledge), the first systematic study of the properties of
submodular Hamming (SH) metrics, by showing metricity, describing potential machine learning
applications, and providing optimization algorithms for SH-min and SH-max.
The outline of this paper is as follows. In Section 2, we offer further motivation by describing several
applications of SH-min and SH-max to machine learning. In Section 3, we prove that for a positive
polymatroid function f , the distance df (A, B) = f (A4B) is a metric. Then, in Sections 4 and 5 we
give hardness results and approximation algorithms, and in Section 6 we demonstrate the practical
advantage that submodular metrics have over modular metrics for several real-world applications.
2
Applications
We motivate SH-min and SH-max by showing how they occur naturally in several applications.
2
Clustering: Many clustering algorithms, including for example k-means [1], use distance functions
in their optimization. If each item i to be clustered is represented by a binary feature vector
bi ? {0, 1}n , then counting the disagreements between bi and bj is one natural distance function.
Defining sets Bi = {v : bi (v) = 1}, this count is equivalent to the Hamming distance |Bi 4Bj |.
Consider a document clustering application where V is the set of all features (e.g., n-grams) and
Bi is the set of features for document i. Hamming distance has value 2 both when Bi 4Bj =
{?submodular?, ?synapse?} and when Bi 4Bj = {?submodular?, ?modular?}. Intuitively, however,
a smaller distance seems warranted in the latter case since the difference is only in one rather than
two distinct concepts. The submodular Hamming distances we propose in this work can easily
capture this type of
pbehavior. Given feature clusters W, one can define a submodular function as:
P
f (Y ) = W ?W |Y ? W |. Applying this with Y = Bi 4Bj , if the documents? differences are
confined to one cluster, the distance is smaller than if the differences
occur across several word
?
clusters. In the case discussed above, the distances are 2 and 2. If this submodular Hamming
distance is used for k-means clustering, then the mean-finding step becomes an instance of the SHmin problem. That is, if cluster j P
contains documents Cj , then its mean takes exactly the following
SH-min form: ?j ? argminA?V i?Cj f (A4Bi ).
Structured prediction: Structured support vector machines (SVMs) typically rely on Hamming
distance to compare candidate structures to the true one. The margin required between the correct
structure score and a candidate score is then proportional to their Hamming distance. Consider
the problem of segmenting an image into foreground and background. Let Bi be image i?s true
set of foreground pixels. Then Hamming distance between Bi and a candidate segmentation with
foreground pixels A counts the number of mis-labeled pixels. However, both [2] and [3] observe
poor performance with Hamming distance and recent work by [4] shows improved performance
with richer distances that are supermodular functions of A. One potential direction for further
enriching image segmentation distance functions is thus to consider non-modular functions from
within our submodular Hamming metrics class. These functions have the ability to correct for
the over-penalization that the current distance functions may suffer from when the same kind of
difference happens repeatedly. For instance, if Bi differs from A only in the pixels local to a particular
block of the image, then current distance functions could be seen as over-estimating the difference.
Using a submodular Hamming function, the ?loss-augmented inference? step in SVM optimization
becomes an SH-max problem. More concretely, if the segmentation model is defined by a submodular
graph cut g(A), then we have: maxA?V g(A) + f (A4Bi ). (Note that g(A) = g(A4?).) In fact,
[5] observes superior results with this type of loss-augmented inference using a special case of a
submodular Hamming metric for the task of multi-label image classification.
Diverse k-best: For some machine learning tasks, rather than finding a model?s single highestscoring prediction, it is helpful to find a diverse set of high-quality predictions. For instance, [6]
showed that for image segmentation and pose tracking a diverse set of k solutions tended to contain
a better predictor than the top k highest-scoring solutions. Additionally, finding diverse solutions
can be beneficial for accommodating user interaction. For example, consider the task of selecting
10 photos to summarize the 100 photos that a person took while on vacation. If the model?s best
prediction (a set of 10 images) is rejected by the user, then the system should probably present a
substantially different prediction on its second try. Submodular functions are a natural model for
several summarization problems [7, 8]. Thus, given a submodular summarization model g, and a
set of existing diverse summaries A1 , A2 , . . . , Ak?1 , one could find a kth summary to present to
Pk?1
the user by solving: Ak = argmaxA?V,|A|=` g(A) + i=1 f (A4Ai ). If f and g are both positive
polymatroids, then this constitutes an instance of the SH-max problem.
3
Properties of the submodular Hamming metric
We next show several interesting properties of the submodular Hamming distance. Proofs for all
theorems and lemmas can be found in the supplementary material. We begin by showing that any
positive polymatroid function of A4B is a metric. In fact, we show the more general result that any
positive normalized monotone subadditive function of A4B is a metric. This result is known (see for
instance Chapter 8 of [9]), but we provide a proof (in the supplementary material) for completeness.
Theorem 3.1. Let f : 2V ? R be a positive normalized monotone subadditive function. Then
df (A, B) = f (A4B) is a metric on A, B ? V .
3
While these subadditive functions are metrics, their optimization is known to be very difficult. The
simple subadditive function example in the introduction of [10] shows that subadditive minimization is
inapproximable, and Theorem 17 of [11] states that no algorithm exists for subadditive maximization
? ?n). By contrast, submodular minimization is
that has an approximation factor better than O(
poly-time in the unconstrained setting [12], and a simple greedy algorithm from [13] gives a 1 ? 1/eapproximation for maximization of positive polymatroids subject to a cardinality constraint. Many
other approximation results are also known for submodular function optimization subject to various
other types of constraints. Thus, in this work we restrict ourselves to positive polymatroids.
Corollary 3.1.1. Let f : 2V ? R+ be a positive polymatroid function. Then df (A, B) = f (A4B)
is a metric on A, B ? V .
This restriction does not entirely resolve the question of optimization hardness though. Recall that
the optimization in SH-min and SH-max is with respect to A, but that the fi are applied to the sets
A4Bi . Unfortunately, the function gB (A) = f (A4B), for a fixed set B, is neither necessarily
submodular nor supermodular in A. The next example demonstrates this violation of submodularity.
Example 3.1.1. To be submodular, the function gB (A) = f (A4B) must satisfy the following
+ gB (A2 ) ? gB (A1 ? A2 ) + gB (A1 ? A2 ). Consider
condition for all sets A1 , A2 ? V : gB (A1 )p
the positive polymatroid function f (Y ) = |Y | and let B consist of two elements:
1 , b2 }.
?
?B = {b?
Then for A1 = {b1 } and A2 = {c} (with c ?
/ B): gB (A1 ) + gB (A2 ) = 1 + 3 < 2 2 =
gB (A1 ? A2 ) + gB (A1 ? A2 ).
Although gB (A) = f (A4B) can be non-submodular, we are interestingly still able to make use of
the fact that f is submodular in A4B to develop approximation algorithms for SH-min and SH-max.
4
Minimization of the submodular Hamming metric
In this section, we focus on SH-min (the centroid-finding problem). We consider the four cases
from Table 1: the constrained (A ? C ? 2V ) and unconstrained (A ? C = 2V ) settings, as well
as the homogeneous case (where all fi are the same function) and the heterogeneous case. Before
diving in, we note P
that in all cases we assume not only the natural oracle access to the objective
m
function F (A) = i=1 fi (A4Bi ) (i.e., the ability to evaluate F (A) for any A ? V ), but also
knowledge of the Bi (the B sequence). Theorem 4.1 shows that without knowledge of B, SH-min is
inapproximable. In practice, requiring knowledge of B is not a significant limitation; for all of the
applications described in Section 2, B is naturally known.
Theorem 4.1. Let f be a positive polymatroid function. Suppose that the subset B ? V is fixed
but unknown and gB (A) = f (A4B). If we only have an oracle for gB , then there is no poly-time
approximation algorithm for minimizing gB , up to any polynomial approximation factor.
4.1
Unconstrained setting
Submodular minimization is poly-time in the unconstrained setting [12]. Since a sum of submodular
functions is itself submodular, at first glance it might then seem that the sum of fi in SH-min can
be minimized in poly-time. However, recall from Example 3.1.1 that the fi ?s are not necessarily
submodular in the optimization variable, A. This means that the question of SH-min?s hardness,
even in the unconstrained setting, is an open question. Theorem 4.2 resolves this question for
the heterogeneous case, showing that it is NP-hard and that no algorithm can do better than a
4/3-approximation guarantee. The question of hardness in the homogeneous case remains open.
Theorem 4.2. The unconstrained and heterogeneous version of SH-min is NP-hard. Moreover, no
poly-time algorithm can achieve an approximation factor better than 4/3.
Since unconstrained SH-min is NP-hard, it makes sense to consider approximation algorithms for
this problem. We first provide a simple 2-approximation, U NION -S PLIT (see Algorithm 1). This
algorithm splits f (A4B) = f ((A \ B) ? (B \ A)) into f (A \ B) + f (B \ A), then applies standard
submodular minimization (see e.g. [14]) to the split function. Theorem 4.3 shows that this algorithm
is a 2-approximation for SH-min. It relies on Lemma 4.2.1, which we state first.
Lemma 4.2.1. Let f be a positive monotone subadditive function. Then, for any A, B ? V :
f (A4B) ? f (A \ B) + f (B \ A) ? 2f (A4B).
4
(2)
Algorithm 1 U NION -S PLIT
Algorithm 3 M AJOR -M IN
Input: F, B
Define fi0 (Y ) = fP
i (Y \ Bi ) + fi (Bi \ Y )
m
Define F 0 (Y ) = i=1 fi0 (Y )
Output: S UBMODULAR -O PT (F 0 )
Input: F, B, C
A??
repeat
c ? F (A)
Set wF? as in Equation 3
A ? M ODULAR -M IN (wF? , C)
until F (A) = c
Output: A
Algorithm 2 B EST-B
Input: F , B
A ? B1
for i = 2, . . . , m do
if F (Bi ) < F (A): A ? Bi
Output: A
Theorem 4.3. U NION -S PLIT is a 2-approximation for unconstrained SH-min.
Restricting to the homogeneous setting, we can provide a different algorithm that has a better approximation
guarantee than U NION -S PLIT. This algorithm simply checks the value of
Pm
F (A) =
i=1 f (A4Bi ) for each Bi and returns the minimizing Bi . We call this algorithm
B EST-B (Algorithm 2). Theorem 4.4 gives the approximation guarantee for B EST-B. This result
is known [15], as the proof of the guarantee only makes use of metricity and homogeneity (not
submodularity), and these properties are common to much other work. We provide the proof in our
notation for completeness though.
Theorem
4.4. For m = 1, B EST-B exactly solves unconstrained SH-min. For m > 1, B EST-B is a
2
2? m
-approximation for unconstrained homogeneous SH-min.
4.2
Constrained setting
In the constrained setting, the SH-min problem becomes more difficult. Essentially, all of the
hardness results established in existing work on constrained submodular minimization applies to
the constrained SH-min problem as well. Theorem 4.5 shows that, even for a simple cardinality
constraint and identical fi (homogeneous?setting), not only is SH-min NP-hard, but also it is hard to
approximate with a factor better than ?( n).
Theorem 4.5. Homogeneous SH-min is NP-hard under cardinality
constraints.
Moreover, no
?
n
?
algorithm can achieve an approximation factor better than ? 1+( n?1)(1??f ) , where ?f =
1 ? minj?V
f (j|V \j)
f (j)
denotes the curvature of f . This holds even when m = 1.
We can also show similar hardness results for several other combinatorial constraints including matroid
constraints, shortest paths, spanning trees, cuts, etc. [16, 17]. Note that the hardness established
in Theorem 4.5 depends on a quantity ?f , which is also called the curvature of a submodular
function [18, 16]. Intuitively, this factor measures how close a submodular function is to a modular
function. The result suggests that the closer the function is being modular, the easier it is to optimize.
This makes sense, since with a modular function, SH-min can be exactly minimized under several
combinatorial constraints. To see this for the cardinality-constrained case, first note that for modular
fi , the corresponding F -function is also modular. Lemma 4.5.1 formalizes this.
Pm
Lemma 4.5.1. If the fi in SH-min are modular, then F (A) = i=1 fi (A4Bi ) is also modular.
Given Lemma 4.5.1, from the definition of modularity
we know that there exists some constant C and
P
vector wF ? Rn , such that F (A) = C + j?A wF (j). From this representation it is clear that F
can be minimized subject to the constraint |A| ? k by choosing as the set A the items corresponding
to the k smallest entries in wF . Thus, for modular fi , or fi with small curvature ?fi , such constrained
minimization is relatively easy.
Having established the hardness of constrained SH-min, we now turn to considering approximation
algorithms for this problem. Unfortunately, the U NION -S PLIT algorithm from the previous section
5
requires an efficient algorithm for submodular function minimization, and no such algorithm exists
in the constrained setting; submodular minimization is NP-hard even under simple cardinality constraints [19]. Similarly, the B EST-B algorithm breaks down in the constrained setting; its guarantees
carry over only if all the Bi are within the constraint set C. Thus, for the constrained SH-min problem
we instead propose a majorization-minimization algorithm. Theorem 4.6 shows that this algorithm
has an O(n) approximation guarantee, and Algorithm 3 formally defines the algorithm.
Essentially, M AJOR -M IN proceeds by iterating the following two steps: constructing F? , a modular
upper bound for F at the current solution A, then minimizing F? to get a new A. F? consists of
superdifferentials [20, 21] of F ?s component submodular functions. We use the superdifferentials
defined as ?grow? and ?shrink? in [22]. Defining sets S, T as S = V \ j, T = A4Bi for ?grow?, and
S = (A4Bi ) \ j, T = ? for ?shrink?, the wF? vector that represents the modular F? can be written:
m
X
fi (j | S) if j ? A4Bi
wF? (j) =
(3)
fi (j | T ) otherwise,
i=1
where f (Y | X) = f (Y ? X) ? f (X) is the gain in f -value when adding Y to X. We now state the
main theorem characterizing algorithm M AJOR -M IN?s performance on SH-min.
Pm
Theorem 4.6. M AJOR -M IN is guaranteed to improve the objective value, F (A) = i=1 fi (A4Bi ),
at every iteration. Moreover, for any constraint over which
a modular function can be exactly
|A? 4Bi |
optimized, it has a maxi 1+(|A? 4Bi |?1)(1??f (A? 4Bi )) approximation guarantee, where A? is
i
the optimal solution of SH-min.
While M AJOR -M IN does not have a constant-factor guarantee (which is possible only in the unconstrained setting), the bounds are not too far from the hardness of the constrained setting. For example,
n
in the cardinality case, the guarantee of M AJOR -M IN is 1+(n?1)(1??
, while the hardness shown in
f)
?
n
Theorem 4.5 is ? 1+(n?1)(1??f ) .
5
Maximization of the submodular Hamming metric
We next characterize the hardness of SH-max (the diversification problem) and describe approximation
algorithms for it. We first show that all versions of SH-max, even the unconstrained homogeneous
one, are NP-hard. Note that this is a non-trivial result. Maximization of a monotone function such
as a polymatroid is not NP-hard; the maximizer is always the full set V . But, for SH-max, despite
the fact that the fi are monotone with respect to their argument A4Bi , they are not monotone with
respect to A itself. This makes SH-max significantly harder. After establishing that SH-max is
NP-hard, we show that no poly-time algorithm can obtain an approximation factor better 3/4 in the
unconstrained setting, and a factor of (1 ? 1/e) in the constrained setting. Finally, we provide a
simple approximation algorithm which achieves a factor of 1/4 for all settings.
Theorem 5.1. All versions of SH-max (constrained or unconstrained, heterogeneous or homogeneous) are NP-hard. Moreover, no poly-time algorithm can obtain a factor better than 3/4 for the
unconstrained versions, or better than 1 ? 1/e for the cardinality-constrained versions.
We turn now to approximation algorithms. For the unconstrained setting, Lemma 5.1.1 shows that
simply choosing a random subset, A ? V provides a 1/8-approximation in expectation.
Lemma 5.1.1. A random subset is a 1/8-approximation for SH-max in the unconstrained (homogeneous or heterogeneous) setting.
An improved approximation guarantee of 1/4 can be shown for a variant of U NION -S PLIT (Algorithm 1), if the call to S UBMODULAR -O PT is a call to a S UBMODULAR -M AX algorithm. Theorem 5.2
makes this precise for both the unconstrained case and a cardinality-constrained case. It might also be
of interest to consider more complex constraints, such as matroid independence and base constraints,
but we leave the investigation of such settings to future work.
Pm
Theorem 5.2. Maximizing F? (A) = i=1 (fi (A \ Bi ) + fi (Bi \ A)) with a bi-directional greedy
algorithm [23, Algorithm 2] is a linear-time 1/4-approximation for maximizing F (A) =
P
m
i=1 fi (A4Bi ), in the unconstrained setting. Under the cardinality constraint |A| ? k, using the
1
randomized greedy algorithm [24, Algorithm 1] provides a 2e
-approximation.
6
Table 3: mV-ROUGE averaged over the 14 datasets (?
standard deviation).
HM
0.38 ? 0.14
6
SP
0.43 ? 0.20
Table 4: # of wins (out of 14 datasets).
TP
0.50 ? 0.26
HM
3
SP
1
TP
10
Experiments
To demonstrate the effectiveness of the submodular Hamming metrics proposed here, we apply them
to a metric minimization task (clustering) and a metric maximization task (diverse k-best).
6.1
SH-min application: clustering
We explore the document clustering problem described in Section 2, where the groundset V is all
unigram features and Bi contains the unigrams of document i. We run k-means
P clustering and
each iteration find the mean for cluster Cj by solving: ?j ? argminA:|A|?` i?Cj f (A4Bi ).
The constraint |A| ? ` requires the mean to contain at least ` unigrams, which helps k-means to
create richer and p
more meaningful cluster centers. We compare using the submodular function
P
f (Y ) = W ?W |Y ? W | (SM), to using Hamming distance (HM). The problem of finding ?j
above can be solved exactly for HM, since it is a modular function. In the SM case, we apply M AJOR M IN (Algorithm 3). As an initial test, we generate synthetic data consisting of 100 ?documents?
assigned to 10 ?true? clusters. We set the number of ?word? features to n = 1000, and partition the
features into 100 word classes (the W in the submodular function). Ten word classes are associated
with each true document cluster, and each document contains one word from each of these word
classes. That is, each word is contained in only one document, but documents in the same true cluster
have words from the same word classes. We set the minimum cluster center size to ` = 100. We use
k-means++ initialization [25] and average over 10 trials. Within the k-means optimization, we enforce
that all clusters are of equal size by assigning a document to the closest center whose current size
is < 10. With this setup, the average accuracy of HM is 28.4% (?2.4), while SM is 69.4% (?10.5).
The HM accuracy is essentially the accuracy of a random assignment of documents to clusters; this
makes sense, as no documents share words, rendering the Hamming distance useless. In real-world
data there would likely be some word overlap though; to better model this, we let each document
contain a random sampling of 10 words from the word clusters associated with its document cluster.
In this case, the average accuracy of HM is 57.0% (?6.8), while SM is 88.5% (?8.4). The results
for SM are even better if randomization is removed from the initialization (we simply choose the next
center to be one with greatest distance from the current centers). In this case, the average accuracy
of HM is 56.7% (?7.1), while SM is 100% (?0.0). This indicates that as long as the starting point
for SM contains one document from each cluster, the SM optimization will recover the true clusters.
Moving beyond synthetic data, we applied the same method to the problem of clustering NIPS papers.
The initial set of documents that we consider consists of all NIPS papers¹ from 1987 to 2014. We filter the words of a given paper by first removing stopwords and any words that don't appear at least 3 times
in the paper. We further filter by removing words that have small tf-idf value (< 0.001) and words that
occur in only one paper or in more than 10% of papers. We then filter the papers themselves, discarding
any that have fewer than 25 remaining words and for each other paper retaining only its top (by tf-idf
score) 25 words. Each of the 5,522 remaining papers defines a B_i set. Among the B_i there are 12,262 unique words. To get the word clusters 𝒲, we first run the WORD2VEC code of [26], which generates
a 100-dimensional real-valued vector of features for each word, and then run k-means clustering with
Euclidean distance on these vectors to define 100 word clusters. We set the center size cardinality constraint to ℓ = 100 and set the number of document clusters to k = 10. To initialize, we again use
k-means++ [25], with k = 10. Results are averaged over 10 trials. While we do not have groundtruth
labels for NIPS paper clusters, we can use within-cluster distances as a proxy for cluster goodness
(lower values, indicating tighter clusters, are better). Specifically, we compute: k-means-score = Σ_{j=1}^k Σ_{i∈C_j} g(μ_j △ B_i). With Hamming for g, the average ratio of HM's k-means-score to SM's
is 0.916 ± 0.003. This indicates that, as expected, HM does a better job of optimizing the Hamming loss. However, with the submodular function for g, the average ratio of HM's k-means-score to SM's is 1.635 ± 0.038. Thus, SM does a significantly better job optimizing the submodular loss.
¹ Papers were downloaded from http://papers.nips.cc/.

6.2 SH-max application: diverse k-best
In this section, we explore a diverse k-best image collection summarization problem, as described in Section 2. For this problem, our goal is to obtain k summaries, each of size ℓ, by selecting from a set consisting of n ≫ ℓ images. The idea is that either: (a) the user could choose from among these k summaries the one that they find most appealing, or (b) a (more computationally expensive) model could be applied to re-rank these k summaries and choose the best. As is described in Section 2, we obtain the kth summary A_k, given the first k − 1 summaries A_{1:k−1}, via: A_k = argmax_{A⊆V,|A|=ℓ} g(A) + Σ_{i=1}^{k−1} f(A △ A_i). For g we use the facility location function: g(A) = Σ_{i∈V} max_{j∈A} S_ij, where S_ij is a similarity score for images i and j. We compute S_ij by taking the dot product of the ith and jth feature vectors, which are the same as those used by [8]. For f we compare two different functions: (1) f(A △ A_i) = |A △ A_i|, the Hamming distance (HM), and (2) f(A △ A_i) = g(A △ A_i), the submodular facility location distance (SM). For HM we optimize via the standard greedy algorithm [13]; since the facility location function g is monotone submodular, this implies an approximation guarantee of (1 − 1/e). For SM, we experiment with two algorithms: (1) standard greedy [13], and (2) UNION-SPLIT (Algorithm 1) with standard greedy as the SUBMODULAR-OPT function. We will refer to these two cases as "single part" (SP) and "two part" (TP). Note that neither of these optimization techniques has a formal approximation guarantee, though the latter would if instead of standard greedy we used the bi-directional greedy algorithm of [23]. We opt to use standard greedy though, as it typically performs much better in practice.

Figure 1: An example photo montage (zoom in to see detail) showing 15 summaries of size 10 (one per row) from the HM approach (left) and the TP approach (right), for image collection #6.

We employ the image summarization dataset from [8], which consists of 14 image collections,
each of which contains n = 100 images. For each image collection, we seek k = 15 summaries of
size ℓ = 10. For evaluation, we employ the V-ROUGE score developed by [8]; the mean V-ROUGE
(mV-ROUGE) of the k summaries provides a quantitative measure of their goodness. V-ROUGE
scores are normalized such that a score of 0 corresponds to randomly generated summaries, while a
score of 1 is on par with human-generated summaries.
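A compact sketch of the sequential selection above follows (hypothetical helper names; plain greedy for each inner argmax, which, as noted, carries no formal guarantee):

    import numpy as np

    def facility_location(A, S):
        # g(A) = sum_i max_{j in A} S_ij
        return S[:, sorted(A)].max(axis=1).sum() if A else 0.0

    def diverse_k_best(S, k, ell, f):
        # Greedily builds the k-th summary A_k maximizing
        # g(A) + sum_i f(A, A_i), where f(A, P) stands for f(A ^ P):
        # |A ^ P| for HM, facility_location(A ^ P, S) for SM.
        n = S.shape[0]
        summaries = []
        for _ in range(k):
            A = set()
            while len(A) < ell:
                def score(e):
                    Ae = A | {e}
                    return (facility_location(Ae, S)
                            + sum(f(Ae, P) for P in summaries))
                A.add(max((e for e in range(n) if e not in A), key=score))
            summaries.append(A)
        return summaries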
Table 3 shows that SP and TP outperform HM in terms of mean mV-ROUGE, providing support for
the idea of using submodular Hamming distances in place of (modular) Hamming for diverse k-best
applications. TP also outperforms SP, suggesting that the objective-splitting used in UNION-SPLIT is of practical significance. Table 4 provides additional evidence of TP's superiority, indicating that for 10 out of the 14 image collections, TP has the best mV-ROUGE score of the three approaches. Figure 1 provides some qualitative evidence of TP's goodness. Notice that the images in the green
rectangle tend to be more redundant with images from the previous summaries in the HM case than
in the TP case; the HM solution contains many images with a "sky" theme, while TP contains more
images with other themes. This shows that the HM solution lacks diversity across summaries. The
quality of the individual summaries also tends to become poorer for the later HM sets; considering
the images in the red rectangles overlaid on the montage, the HM sets contain many images of tree
branches here. By contrast, the TP summary quality remains good even for the last few summaries.
7 Conclusion
In this work we defined a new class of distance functions: submodular Hamming metrics. We
established hardness results for the associated SH-min and SH-max problems, and provided approximation algorithms. Further, we demonstrated the practicality of these metrics for several applications.
There remain several open theoretical questions (e.g., the tightness of the hardness results and the
NP-hardness of SH-min), as well as many opportunities for applying submodular Hamming metrics
to other machine learning problems (e.g., the prediction application from Section 2).
References
[1] S. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137,
1982.
[2] T. Hazan, S. Maji, J. Keshet, and T. Jaakkola. Learning Efficient Random Maximum A-Posteriori Predictors
with Non-Decomposable Loss Functions. In NIPS, 2013.
[3] M. Szummer, P. Kohli, and D. Hoiem. Learning CRFs Using Graph Cuts. In ECCV, 2008.
[4] A. Osokin and P. Kohli. Perceptually Inspired Layout-Aware Losses for Image Segmentation. In ECCV,
2014.
[5] J. Yu and M. Blaschko. Learning Submodular Losses with the Lovasz Hinge. In ICML, 2015.
[6] D. Batra, P. Yadollahpour, A. Guzman, and G. Shakhnarovich. Diverse M-Best Solutions in Markov
Random Fields. In ECCV, 2012.
[7] H. Lin and J. Bilmes. A Class of Submodular Functions for Document Summarization. In ACL, 2011.
[8] S. Tschiatschek, R. Iyer, H. Wei, and J. Bilmes. Learning Mixtures of Submodular Functions for Image
Collection Summarization. In NIPS, 2014.
[9] P. Halmos. Measure Theory. Springer, 1974.
[10] S. Jegelka and J. Bilmes. Approximation Bounds for Inference using Cooperative Cuts. In ICML, 2011.
[11] M. Bateni, M. Hajiaghayi, and M. Zadimoghaddam. Submodular Secretary Problem and Extensions.
Technical report, MIT, 2010.
[12] W. H. Cunningham. On Submodular Function Minimization. Combinatorica, 3:185–192, 1985.
[13] G. Nemhauser, L. Wolsey, and M. Fisher. An Analysis of Approximations for Maximizing Submodular Set Functions I. Mathematical Programming, 14(1), 1978.
[14] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2 edition, 2005.
[15] D. Gusfield. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology.
Cambridge University Press, 1997.
[16] R. Iyer, S. Jegelka, and J. Bilmes. Curvature and Efficient Approximation Algorithms for Approximation
and Minimization of Submodular Functions. In NIPS, 2013.
[17] G. Goel, C. Karande, P. Tripathi, and L. Wang. Approximability of combinatorial problems with multi-agent
submodular cost functions. In FOCS, 2009.
[18] J. Vondrák. Submodularity and Curvature: The Optimal Algorithm. RIMS Kokyuroku Bessatsu, 23, 2010.
[19] Z. Svitkina and L. Fleischer. Submodular Approximation: Sampling-Based Algorithms and Lower Bounds.
In FOCS, 2008.
[20] S. Jegelka and J. Bilmes. Submodularity Beyond Submodular Energies: Coupling Edges in Graph Cuts. In
CVPR, 2011.
[21] R. Iyer and J. Bilmes. The Submodular Bregman and Lovász-Bregman Divergences with Applications. In
NIPS, 2012.
[22] R. Iyer, S. Jegelka, and J. Bilmes. Fast Semidifferential-Based Submodular Function Optimization. In
ICML, 2013.
[23] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A Tight Linear Time (1/2)-Approximation for
Unconstrained Submodular Maximization. In FOCS, 2012.
[24] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. Submodular maximization with cardinality
constraints. In SODA, 2014.
[25] D. Arthur and S. Vassilvitskii. k-means++: The Advantages of Careful Seeding. In SODA, 2007.
[26] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed Representations of Words and
Phrases and their Compositionality. In NIPS, 2013.
Top-k Multiclass SVM
Maksim Lapin,¹ Matthias Hein² and Bernt Schiele¹
¹ Max Planck Institute for Informatics, Saarbrücken, Germany
² Saarland University, Saarbrücken, Germany
Abstract
Class ambiguity is typical in image classification problems with a large number
of classes. When classes are difficult to discriminate, it makes sense to allow k
guesses and evaluate classifiers based on the top-k error instead of the standard
zero-one loss. We propose top-k multiclass SVM as a direct method to optimize
for top-k performance. Our generalization of the well-known multiclass SVM is
based on a tight convex upper bound of the top-k error. We propose a fast optimization scheme based on an efficient projection onto the top-k simplex, which is
of its own interest. Experiments on five datasets show consistent improvements in
top-k accuracy compared to various baselines.
1 Introduction
As the number of classes increases, two important issues emerge: class overlap and the multilabel nature of examples [9]. This phenomenon asks for adjustments of both the evaluation metrics as well as the loss functions employed. When a predictor is allowed k guesses and is not penalized for k − 1 mistakes, such an evaluation measure is known as top-k error. We argue that this is an important metric that will inevitably receive more attention in the future, as the illustration in Figure 1 indicates.

Figure 1: Images from SUN 397 [29] illustrating class ambiguity. Top: (left to right) Park, River, Pond. Bottom: Park, Campus, Picnic area.
How obvious is it that each row of Figure 1 shows examples of different classes? Can we imagine
a human to predict correctly on the first attempt? Does it even make sense to penalize a learning
system for such "mistakes"? While the problem of class ambiguity is apparent in computer vision,
similar problems arise in other domains when the number of classes becomes large.
We propose top-k multiclass SVM as a generalization of the well-known multiclass SVM [5]. It
is based on a tight convex upper bound of the top-k zero-one loss which we call top-k hinge loss.
While it turns out to be similar to a top-k version of the ranking based loss proposed by [27], we
show that the top-k hinge loss is a lower bound on their version and is thus a tighter bound on the
top-k zero-one loss. We propose an efficient implementation based on stochastic dual coordinate
ascent (SDCA) [24]. A key ingredient in the optimization is the (biased) projection onto the top-k
simplex. This projection turns out to be a tricky generalization of the continuous quadratic knapsack
problem, respectively the projection onto the standard simplex. The proposed algorithm for solving
it has complexity O(m log m) for x ? Rm . Our implementation of the top-k multiclass SVM scales
to large datasets like Places 205 with about 2.5 million examples and 205 classes [30]. Finally,
extensive experiments on several challenging computer vision problems show that top-k multiclass
SVM consistently improves in top-k error over the multiclass SVM (equivalent to our top-1 multiclass SVM), one-vs-all SVM and other methods based on different ranking losses [11, 16].
1
2 Top-k Loss in Multiclass Classification
In multiclass classification, one is given a set S = {(x_i, y_i) | i = 1, . . . , n} of n training examples x_i ∈ 𝒳 along with the corresponding labels y_i ∈ 𝒴. Let 𝒳 = R^d be the feature space and 𝒴 = {1, . . . , m} the set of labels. The task is to learn a set of m linear predictors w_y ∈ R^d such that the risk of the classifier arg max_{y∈𝒴} ⟨w_y, x⟩ is minimized for a given loss function, which is usually
chosen to be a convex upper bound of the zero-one loss. The generalization to nonlinear predictors
using kernels is discussed below.
The classification problem becomes extremely challenging in the presence of a large number of
ambiguous classes. It is natural in that case to extend the evaluation protocol to allow k guesses,
which leads to the popular top-k error and top-k accuracy performance measures. Formally, we
consider a ranking of labels induced by the prediction scores ⟨w_y, x⟩. Let the bracket [·] denote a permutation of labels such that [j] is the index of the j-th largest score, i.e.

⟨w_[1], x⟩ ≥ ⟨w_[2], x⟩ ≥ . . . ≥ ⟨w_[m], x⟩.

The top-k zero-one loss err_k is defined as

err_k(f(x), y) = 1_{⟨w_[k], x⟩ > ⟨w_y, x⟩},

where f(x) = (⟨w_1, x⟩, . . . , ⟨w_m, x⟩)^⊤ and 1_P = 1 if P is true and 0 otherwise. Note that the standard zero-one loss is recovered when k = 1, and err_k(f(x), y) is always 0 for k = m. Therefore, we are interested in the regime 1 ≤ k < m.
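Concretely, the top-k error of a batch of score vectors can be computed in a few lines (a numpy sketch, with ties broken arbitrarily):

    import numpy as np

    def topk_error(scores, y, k):
        # scores: n x m matrix with entries <w_y, x_i>; y: true labels.
        # The loss on example i is 1 iff the true class is not among the
        # k highest-scoring classes.
        order = np.argsort(-scores, axis=1)         # [1], ..., [m] per row
        hit = (order[:, :k] == y[:, None]).any(axis=1)
        return 1.0 - hit.mean()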
2.1 Multiclass Support Vector Machine
In this section we review the multiclass SVM of Crammer and Singer [5] which will be extended to
the top-k multiclass SVM in the following. We mainly follow the notation of [24].
Given a training pair (x_i, y_i), the multiclass SVM loss on example x_i is defined as

max_{y∈𝒴} {1_{y≠y_i} + ⟨w_y, x_i⟩ − ⟨w_{y_i}, x_i⟩}.   (1)

Since our optimization scheme is based on Fenchel duality, we also require a convex conjugate of the primal loss function (1). Let c ≜ 1 − e_{y_i}, where 1 is the all ones vector and e_j is the j-th standard basis vector in R^m, let a ∈ R^m be defined componentwise as a_j ≜ ⟨w_j, x_i⟩ − ⟨w_{y_i}, x_i⟩, and let

Δ ≜ {x ∈ R^m | ⟨1, x⟩ ≤ 1, 0 ≤ x_i, i = 1, . . . , m}.

Proposition 1 ([24], § 5.1). A primal-conjugate pair for the multiclass SVM loss (1) is

φ(a) = max{0, (a + c)_[1]},   φ*(b) = −⟨c, b⟩ if b ∈ Δ, and +∞ otherwise.   (2)

Note that thresholding with 0 in φ(a) is actually redundant, as (a + c)_[1] ≥ (a + c)_{y_i} = 0, and is only given to enhance similarity to the top-k version defined later.
2.2 Top-k Support Vector Machine
The main motivation for the top-k loss is to relax the penalty for making an error in the top-k
predictions. Looking at φ in (2), a direct extension to the top-k setting would be a function

ψ_k(a) = max{0, (a + c)_[k]},

which incurs a loss iff (a + c)_[k] > 0. Since the ground truth score (a + c)_{y_i} = 0, we conclude that

ψ_k(a) > 0 ⟺ ⟨w_[1], x_i⟩ ≥ . . . ≥ ⟨w_[k], x_i⟩ > ⟨w_{y_i}, x_i⟩ − 1,

which directly corresponds to the top-k zero-one loss err_k with margin 1.

Note that the function ψ_k ignores the values of the first (k − 1) scores, which could be quite large if there are highly similar classes. That would be fine in this model as long as the correct prediction is within the first k guesses. However, the function ψ_k is unfortunately nonconvex, since the function f_k(x) = x_[k] returning the k-th largest coordinate is nonconvex for k ≥ 2. Therefore, finding a globally optimal solution is computationally intractable.
Instead, we propose the following convex upper bound on ψ_k, which we call the top-k hinge loss,

φ_k(a) = max{0, (1/k) Σ_{j=1}^k (a + c)_[j]},   (3)

where the sum of the k largest components is known to be convex [3]. We have that

ψ_k(a) ≤ φ_k(a) ≤ φ_1(a) = φ(a),

for any k ≥ 1 and a ∈ R^m. Moreover, φ_k(a) < φ(a) unless all k largest scores are the same. This extra slack can be used to increase the margin between the current and the (m − k) remaining least similar classes, which should then lead to an improvement in the top-k metric.
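For a single example, the loss (3) is straightforward to evaluate; a small numpy sketch:

    import numpy as np

    def topk_hinge(scores, y, k):
        # phi_k(a) = max{0, (1/k) * (sum of the k largest entries of a + c)},
        # with a_j = scores[j] - scores[y] and c = 1 - e_y.
        a = scores - scores[y]
        c = np.ones_like(scores)
        c[y] = 0.0
        return max(0.0, np.sort(a + c)[-k:].sum() / k)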
2.2.1 Top-k Simplex and Convex Conjugate of the Top-k Hinge Loss
In this section we derive the conjugate of the proposed loss (3). We begin with a well known result
that is used later in the proof. All proofs can be found in the supplement. Let [a]_+ = max{0, a}.

Lemma 1 ([17], Lemma 1). Σ_{j=1}^k h_[j] = min_t { kt + Σ_{j=1}^m [h_j − t]_+ }.
We also define a set Δ_k which arises naturally as the effective domain¹ of the conjugate of (3). By analogy, we call it the top-k simplex, as for k = 1 it reduces to the standard simplex with the inequality constraint (i.e. Δ ≡ Δ_1). Let [m] ≜ {1, . . . , m}.

Definition 1. The top-k simplex is a convex polytope defined as

Δ_k(r) ≜ { x | ⟨1, x⟩ ≤ r, 0 ≤ x_i ≤ (1/k)⟨1, x⟩, i ∈ [m] },

where r ≥ 0 is the bound on the sum ⟨1, x⟩. We let Δ_k ≜ Δ_k(1).

The crucial difference to the standard simplex is the upper bound on the x_i's, which limits their maximal contribution to the total sum ⟨1, x⟩. See Figure 2 for an illustration.

Figure 2: Top-k simplex Δ_k(1) for m = 3. Unlike the standard simplex, it has (m choose k) + 1 vertices.

The first technical contribution of this work is as follows.

Proposition 2. A primal-conjugate pair for the top-k hinge loss (3) is given as follows:

φ_k(a) = max{0, (1/k) Σ_{j=1}^k (a + c)_[j]},   φ_k*(b) = −⟨c, b⟩ if b ∈ Δ_k, and +∞ otherwise.   (4)

Moreover, φ_k(a) = max{⟨a + c, λ⟩ | λ ∈ Δ_k}.
Therefore, we see that the proposed formulation (3) naturally extends the multiclass SVM of Crammer and Singer [5], which is recovered when k = 1. We have also obtained an interesting extension
(or rather contraction, since Δ_k ⊆ Δ) of the standard simplex.
2.3 Relation of the Top-k Hinge Loss to Ranking Based Losses
Usunier et al. [27] have recently formulated a very general family of convex losses for ranking and
multiclass classification. In their framework, the hinge loss on example x_i can be written as

L_α(a) = Σ_{y=1}^m α_y max{0, (a + c)_[y]},

¹ A convex function f : X → R ∪ {±∞} has an effective domain dom f = {x ∈ X | f(x) < +∞}.
where α_1 ≥ . . . ≥ α_m ≥ 0 is a non-increasing sequence of non-negative numbers which act as weights for the ordered losses.

The relation to the top-k hinge loss becomes apparent if we choose α_j = 1/k if j ≤ k, and 0 otherwise. In that case, we obtain another version of the top-k hinge loss

φ̃_k(a) = (1/k) Σ_{j=1}^k max{0, (a + c)_[j]}.   (5)

It is straightforward to check that

ψ_k(a) ≤ φ_k(a) ≤ φ̃_k(a) ≤ φ_1(a) = φ̃_1(a) = φ(a).

The bound φ_k(a) ≤ φ̃_k(a) holds with equality if (a + c)_[1] ≤ 0 or (a + c)_[k] ≥ 0. Otherwise, there is a gap and our top-k loss is a strictly better upper bound on the actual top-k zero-one loss. We perform extensive evaluation and comparison of both versions of the top-k hinge loss in § 5.
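A quick numeric check of this chain of inequalities on hypothetical values of a + c:

    import numpy as np

    def phi_k(v, k):        # loss (3): average the top k, then clip at 0
        return max(0.0, np.sort(v)[-k:].sum() / k)

    def phi_tilde_k(v, k):  # loss (5): clip each term at 0, then average
        return np.maximum(np.sort(v)[-k:], 0.0).sum() / k

    v = np.array([1.3, 0.4, -0.2, -1.0])  # entries of a + c
    for k in (1, 2, 3):
        assert phi_k(v, k) <= phi_tilde_k(v, k) + 1e-12
    # k = 2: both equal (1.3 + 0.4)/2 = 0.85 since (a + c)_[2] >= 0
    # k = 3: phi_k = 1.5/3 = 0.5 < phi_tilde_k = 1.7/3, a strict gap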
While [27] employed LaRank [1], and [9], [28] optimized an approximation of L_α(a), we show in the supplement how the loss function (5) can be optimized exactly and efficiently within the Prox-SDCA framework.
Multiclass to binary reduction. It is also possible to compare directly to ranking based methods
that solve a binary problem using the following reduction. We employ it in our experiments to
evaluate the ranking based methods SVMPerf [11] and TopPush [16]. The trick is to augment the
training set by embedding each x_i ∈ R^d into R^{md} using a feature map φ_y for each y ∈ 𝒴. The mapping φ_y places x_i at the y-th position in R^{md} and puts zeros everywhere else. The example φ_{y_i}(x_i) is labeled +1 and all φ_y(x_i) for y ≠ y_i are labeled −1. Therefore, we have a new training set with mn examples and md-dimensional (sparse) features. Moreover, ⟨w, φ_y(x_i)⟩ = ⟨w_y, x_i⟩, which establishes the relation to the original multiclass problem.
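This reduction is mechanical to implement; a sparse sketch (memory-naive, for illustration only):

    import numpy as np
    from scipy.sparse import lil_matrix

    def binary_reduction(X, y, m):
        # Embeds each x_i in R^{md} via phi_y: row (i, c) holds x_i in
        # block c, labeled +1 if c == y_i and -1 otherwise.
        n, d = X.shape
        Phi = lil_matrix((n * m, m * d))
        labels = np.empty(n * m, dtype=int)
        for i in range(n):
            for c in range(m):
                Phi[i * m + c, c * d:(c + 1) * d] = X[i]
                labels[i * m + c] = 1 if c == y[i] else -1
        return Phi.tocsr(), labels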
Another approach to general performance measures is given in [11]. It turns out that using the above
reduction, one can show that under certain constraints on the classifier, the recall@k is equivalent to
the top-k error. A convex upper bound on recall@k is then optimized in [11] via structured SVM.
As their convex upper bound on the recall@k is not decomposable in an instance based loss, it is not
directly comparable to our loss. While being theoretically very elegant, the approach of [11] does
not scale to very large datasets.
3 Optimization Framework
We begin with a general ℓ2-regularized multiclass classification problem, where for notational convenience we keep the loss function unspecified. The multiclass SVM or the top-k multiclass SVM are obtained by plugging in the corresponding loss function from § 2.

3.1 Fenchel Duality for ℓ2-Regularized Multiclass Classification Problems
Let X ∈ R^{d×n} be the matrix of training examples x_i ∈ R^d, let W ∈ R^{d×m} be the matrix of primal variables obtained by stacking the vectors w_y ∈ R^d, and A ∈ R^{m×n} the matrix of dual variables. Before we prove our main result of this section (Theorem 1), we first impose a technical constraint on a loss function to be compatible with the choice of the ground truth coordinate. The top-k hinge loss from Section 2 satisfies this requirement, as we show in Proposition 3. We also prove an auxiliary Lemma 2, which is then used in Theorem 1.

Definition 2. A convex function φ is j-compatible if for any y ∈ R^m with y_j = 0 we have that

sup{⟨y, x⟩ − φ(x) | x_j = 0} = φ*(y).

This constraint is needed to prove equality in the following Lemma.

Lemma 2. Let φ be j-compatible, let H_j = I − 1e_j^⊤, and let φ̂(x) = φ(H_j x); then

φ̂*(y) = φ*(y − y_j e_j) if ⟨1, y⟩ = 0, and +∞ otherwise.
We can now use Lemma 2 to compute convex conjugates of the loss functions.

Theorem 1. Let φ_i be y_i-compatible for each i ∈ [n], let λ > 0 be a regularization parameter, and let K = X^⊤X be the Gram matrix. The primal and Fenchel dual objective functions are given as:

P(W) = (1/n) Σ_{i=1}^n φ_i(W^⊤ x_i − ⟨w_{y_i}, x_i⟩ 1) + (λ/2) tr(W^⊤ W),

D(A) = −(1/n) Σ_{i=1}^n φ_i*(−λn(a_i − a_{y_i,i} e_{y_i})) − (λ/2) tr(A K A^⊤), if ⟨1, a_i⟩ = 0 ∀i, and −∞ otherwise.

Moreover, we have that W = XA^⊤ and W^⊤ x_i = AK_i, where K_i is the i-th column of K.
Finally, we show that Theorem 1 applies to the loss functions that we consider.
Proposition 3. The top-k hinge loss function from Section 2 is y_i-compatible.
We have repeated the derivation from Section 5.7 in [24], as there is a typo in the optimization problem (20) leading to the conclusion that a_{y_i,i} must be 0 at the optimum. Lemma 2 fixes this by making the requirement a_{y_i,i} = −Σ_{j≠y_i} a_{j,i} explicit. Note that this modification is already mentioned in their pseudo-code for Prox-SDCA.
3.2 Optimization of Top-k Multiclass SVM via Prox-SDCA
As an optimization scheme, we employ the proximal stochastic dual coordinate ascent (Prox-SDCA) framework of Shalev-Shwartz and Zhang [24], which has strong convergence guarantees and is easy to adapt to our problem. In particular, we iteratively update a batch a_i ∈ R^m of dual variables corresponding to the training pair (x_i, y_i), so as to maximize the dual objective D(A) from Theorem 1. We also maintain the primal variables W = XA^⊤ and stop when the relative duality gap is below ε. This procedure is summarized in Algorithm 1.

Algorithm 1 Top-k Multiclass SVM
1: Input: training data {(x_i, y_i)}_{i=1}^n, parameters k (loss), λ (regularization), ε (stopping cond.)
2: Output: W ∈ R^{d×m}, A ∈ R^{m×n}
3: Initialize: W ← 0, A ← 0
4: repeat
5:   randomly permute training data
6:   for i = 1 to n do
7:     s_i ← W^⊤ x_i {prediction scores}
8:     a_i^old ← a_i {cache previous values}
9:     a_i ← update(k, λ, ‖x_i‖², y_i, s_i, a_i) {see § 3.2.1 for details}
10:    W ← W + x_i (a_i − a_i^old)^⊤ {rank-1 update}
11:  end for
12: until relative duality gap is below ε

Let us make a few comments on the advantages of the proposed method. First, apart from the update step which we discuss below, all main operations can be computed using a BLAS library, which makes the overall implementation efficient. Second, the update step in Line 9 is optimal in the sense that it yields maximal dual objective increase jointly over m variables. This is opposed to SGD updates with data-independent step sizes, as well as to maximal but scalar updates in other SDCA variants. Finally, we have a well-defined stopping criterion, as we can compute the duality gap (see discussion in [2]). The latter is especially attractive if there is a time budget for learning. The algorithm can also be easily kernelized since W^⊤ x_i = AK_i (cf. Theorem 1).
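The outer loop of Algorithm 1 translates almost line for line into code; in the sketch below, update and duality_gap are stand-ins for the batch dual update of § 3.2.1 and the gap computation, both assumed given:

    import numpy as np

    def prox_sdca(X, y, k, lam, update, duality_gap, eps=1e-3, max_epochs=100):
        d, n = X.shape
        m = int(y.max()) + 1
        W, A = np.zeros((d, m)), np.zeros((m, n))
        for _ in range(max_epochs):
            for i in np.random.permutation(n):
                s = W.T @ X[:, i]                       # prediction scores
                a_old = A[:, i].copy()                  # cache previous values
                A[:, i] = update(k, lam, X[:, i] @ X[:, i], y[i], s, A[:, i])
                W += np.outer(X[:, i], A[:, i] - a_old) # rank-1 update
            if duality_gap(W, A, X, y, k, lam) < eps:   # stopping criterion
                break
        return W, A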
3.2.1 Dual Variables Update
For the proposed top-k hinge loss from Section 2, optimization of the dual objective D(A) over a_i ∈ R^m, with the other variables fixed, is an instance of a regularized (biased) projection problem onto the top-k simplex Δ_k(1/(λn)). Let a^{∖j} be obtained by removing the j-th coordinate from vector a.

Proposition 4. The following two problems are equivalent, with a_i^{∖y_i} = −x and a_{y_i,i} = ⟨1, x⟩:

max_{a_i} {D(A) | ⟨1, a_i⟩ = 0}  ⟺  min_x {‖b − x‖² + ρ⟨1, x⟩² | x ∈ Δ_k(1/(λn))},

where b = (1/⟨x_i, x_i⟩) (q^{∖y_i} + (1 − q_{y_i}) 1), q = W^⊤ x_i − ⟨x_i, x_i⟩ a_i, and ρ = 1.

We discuss in the following section how to project onto the set Δ_k(1/(λn)) efficiently.
4 Efficient Projection onto the Top-k Simplex
One of our main technical results is an algorithm for efficiently computing projections onto Δ_k(r), respectively the biased projection introduced in Proposition 4. The optimization problem in Proposition 4 reduces to the Euclidean projection onto Δ_k(r) for ρ = 0, and for ρ > 0 it biases the solution to be orthogonal to 1. Let us highlight that Δ_k(r) is substantially different from the standard simplex, and none of the existing methods can be used, as we discuss below.
4.1 Continuous Quadratic Knapsack Problem
Finding the Euclidean projection onto the simplex is an instance of the general optimization problem min_x {‖a − x‖² | ⟨b, x⟩ ≤ r, l ≤ x_i ≤ u}, known as the continuous quadratic knapsack problem (CQKP). For example, to project onto the simplex we set b = 1, l = 0 and r = u = 1. This is a well examined problem and several highly efficient algorithms are available (see the surveys [18, 19]). The first main difference to our set is the upper bound on the x_i's. All existing algorithms expect that u is fixed, which allows them to consider decompositions min_{x_i} {(a_i − x_i)² | l ≤ x_i ≤ u} which can be solved in closed form. In our case, the upper bound (1/k)⟨1, x⟩ introduces coupling across all variables, which makes the existing algorithms not applicable. A second main difference is the bias term ρ⟨1, x⟩² added to the objective. The additional difficulty introduced by this term is relatively minor. Thus we solve the problem for general ρ (including ρ = 0 for the Euclidean projection onto Δ_k(r)), even though we need only ρ = 1 in Proposition 4. The only case when our problem reduces to CQKP is when the constraint ⟨1, x⟩ ≤ r is satisfied with equality. In that case we can let u = r/k and use any algorithm for the knapsack problem. We choose [13] since it is easy to implement, does not require sorting, and scales linearly in practice. The bias in the projection problem reduces to a constant ρr² in this case and has, therefore, no effect.
4.2 Projection onto the Top-k Cone
When the constraint ⟨1, x⟩ ≤ r is not satisfied with equality at the optimum, it has essentially no influence on the projection problem and can be removed. In that case we are left with the problem of the (biased) projection onto the top-k cone, which we address with the following lemma.

Lemma 3. Let x* ∈ R^d be the solution to the following optimization problem

min_x {‖a − x‖² + ρ⟨1, x⟩² | 0 ≤ x_i ≤ (1/k)⟨1, x⟩, i ∈ [d]},

and let U ≜ {i | x*_i = (1/k)⟨1, x*⟩}, M ≜ {i | 0 < x*_i < (1/k)⟨1, x*⟩}, L ≜ {i | x*_i = 0}.

1. If U = ∅ and M = ∅, then x* = 0.

2. If U ≠ ∅ and M = ∅, then U = {[1], . . . , [k]} and x*_i = (1/(k + ρk²)) Σ_{j=1}^k a_[j] for i ∈ U, where [i] is the index of the i-th largest component in a.

3. Otherwise (M ≠ ∅), the following system of linear equations holds:

u  = ( |M| Σ_{i∈U} a_i + (k − |U|) Σ_{i∈M} a_i ) / D,
t0 = ( |U| (1 + ρk) Σ_{i∈M} a_i − (k − |U| + ρk|M|) Σ_{i∈U} a_i ) / D,   (6)
D  = (k − |U|)² + (|U| + ρk²) |M|,

together with the feasibility constraints on t ≜ t0 + ρuk:

max_{i∈L} a_i ≤ t ≤ min_{i∈M} a_i,   max_{i∈M} a_i ≤ t + u ≤ min_{i∈U} a_i,   (7)

and we have x* = min{max{0, a − t}, u}.

We now show how to check if the (biased) projection is 0. For the standard simplex, where the cone is the positive orthant R^d_+, the projection is 0 when all a_i ≤ 0. It is slightly more involved for Δ_k.

Lemma 4. The biased projection x* onto the top-k cone is zero if Σ_{i=1}^k a_[i] ≤ 0 (sufficient condition). If ρ = 0 this is also necessary.
Projection. Lemmas 3 and 4 suggest a simple algorithm for the (biased) projection onto the top-k cone. First, we check if the projection is constant (cases 1 and 2 in Lemma 3). In case 2, we compute x and check if it is compatible with the corresponding sets U, M, L. In the general case 3, we suggest a simple exhaustive search strategy. We sort a and loop over the feasible partitions U, M, L until we find a solution to (6) that satisfies (7). Since we know that 0 ≤ |U| < k and k ≤ |U| + |M| ≤ d, we can limit the search to (k − 1)(d − k + 1) iterations in the worst case, where each iteration requires a constant number of operations. For the biased projection, we leave x = 0 as the fallback case, as Lemma 4 gives only a sufficient condition. This yields a runtime complexity of O(d log(d) + kd), which is comparable to simplex projection algorithms based on sorting.
4.3 Projection onto the Top-k Simplex
As we argued in § 4.1, the (biased) projection onto the top-k simplex becomes either the knapsack problem or the (biased) projection onto the top-k cone, depending on the constraint ⟨1, x⟩ ≤ r at the optimum. The following Lemma provides a way to check which of the two cases applies.

Lemma 5. Let x* ∈ R^d be the solution to the following optimization problem

min_x {‖a − x‖² + ρ⟨1, x⟩² | ⟨1, x⟩ ≤ r, 0 ≤ x_i ≤ (1/k)⟨1, x⟩, i ∈ [d]},

let (t, u) be the optimal thresholds such that x* = min{max{0, a − t}, u}, and let U be defined as in Lemma 3. Then it must hold that ν ≜ t + kp − ρr ≥ 0, where p = Σ_{i∈U} a_i − |U|(t + u).

Projection. We can now use Lemma 5 to compute the (biased) projection onto Δ_k(r) as follows. First, we check the special cases of zero and constant projections, as we did before. If that fails, we proceed with the knapsack problem, since it is faster to solve. Having the thresholds (t, u) and the partitioning into the sets U, M, L, we compute the value of ν as given in Lemma 5. If ν ≥ 0, we are done. Otherwise, we know that ⟨1, x⟩ < r and go directly to the general case 3 in Lemma 3.
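As a reference point for testing a fast implementation of this procedure, the biased projection can also be computed by a generic convex solver; a CVXPY sketch (illustrative only, not the O(d log(d) + kd) algorithm above):

    import cvxpy as cp

    def project_topk_simplex(a, k, r, rho=1.0):
        # min ||a - x||^2 + rho * <1, x>^2
        # s.t. <1, x> <= r,  0 <= x_i <= <1, x> / k
        x = cp.Variable(len(a))
        objective = cp.Minimize(cp.sum_squares(a - x)
                                + rho * cp.square(cp.sum(x)))
        constraints = [cp.sum(x) <= r, x >= 0, x <= cp.sum(x) / k]
        cp.Problem(objective, constraints).solve()
        return x.value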
5 Experimental Results
We have two main goals in the experiments. First, we show that the (biased) projection onto the
top-k simplex is scalable and comparable to an efficient algorithm [13] for the simplex projection
(see the supplement). Second, we show that the top-k multiclass SVM using both versions of the
top-k hinge loss (3) and (5), denoted top-k SVM^α and top-k SVM^β respectively, leads to improvements in top-k accuracy consistently over all datasets and choices of k. In particular, we note improvements compared to the multiclass SVM of Crammer and Singer [5], which corresponds to top-1 SVM^α/top-1 SVM^β. We release our implementation of the projection procedures and both SDCA solvers as a C++ library² with a Matlab interface.
5.1 Image Classification Experiments
We evaluate our method on five image classification datasets of different scale and complexity:
Caltech 101 Silhouettes [26] (m = 101, n = 4100), MIT Indoor 67 [20] (m = 67, n = 5354), SUN
397 [29] (m = 397, n = 19850), Places 205 [30] (m = 205, n = 2448873), and ImageNet 2012
[22] (m = 1000, n = 1281167). For Caltech, d = 784, and for the others d = 4096. The results on
the two large scale datasets are in the supplement.
We cross-validate hyper-parameters in the range 10^{−5} to 10^3, extending it when the optimal value is at the boundary. We use LibLinear [7] for SVM^OVA, SVMPerf [11] with the corresponding loss function for Recall@k, and the code provided by [16] for TopPush. When a ranking method like Recall@k or TopPush does not scale to a particular dataset using the reduction of the multiclass to a binary problem discussed in § 2.3, we use the one-vs-all version of the corresponding method. We implemented Wsabie++ (denoted W++, Q/m) based on the pseudo-code from Table 3 in [9].
On Caltech 101, we use features provided by [26]. For the other datasets, we extract CNN features
of a pre-trained CNN (fc7 layer after ReLU). For the scene recognition datasets, we use the Places
205 CNN [30] and for ILSVRC 2012 we use the Caffe reference model [10].
² https://github.com/mlapin/libsdca
Table 1: Top-k accuracy (%). Top section: state of the art. Middle section: baseline methods. Bottom section: top-k SVMs (top-k SVM^α with the loss (3); top-k SVM^β with the loss (5)).

Caltech 101 Silhouettes
Method          Top-1   Top-2   Top-3   Top-4   Top-5   Top-10
Top-1 [26]      62.1    -       79.6    -       83.1    -
Top-2 [26]      61.4    -       79.2    -       83.4    -
Top-5 [26]      60.2    -       78.7    -       83.4    -
SVM^OVA         61.81   73.13   76.25   77.76   78.89   83.57
TopPush         63.11   75.16   78.46   80.19   81.97   86.95
Recall@1        61.55   73.13   77.03   79.41   80.97   85.18
Recall@5        61.60   72.87   76.51   78.76   80.54   84.74
Recall@10       61.51   72.95   76.46   78.72   80.54   84.92
W++, 0/256      62.68   76.33   79.41   81.71   83.18   88.95
W++, 1/256      59.25   65.63   69.22   71.09   72.95   79.71
W++, 2/256      55.09   61.81   66.02   68.88   70.61   76.59
top-1 SVM^α     62.81   74.60   77.76   80.02   81.97   86.91
top-10 SVM^α    62.98   77.33   80.49   82.66   84.57   89.55
top-20 SVM^α    59.21   75.64   80.88   83.49   85.39   90.33
top-1 SVM^β     62.81   74.60   77.76   80.02   81.97   86.91
top-10 SVM^β    64.02   77.11   80.49   83.01   84.87   89.42
top-20 SVM^β    63.37   77.24   81.06   83.31   85.18   90.03
State of the art (Top-1): BLH [4] 48.3, SP [25] 51.4, JVJ [12] 63.10.

MIT Indoor 67
Method          Top-1   Top-2   Top-3   Top-4   Top-5   Top-10
SVM^OVA         71.72   81.49   84.93   86.49   87.39   90.45
TopPush         70.52   83.13   86.94   90.00   91.64   95.90
Recall@1        71.57   83.06   87.69   90.45   92.24   96.19
Recall@5        71.49   81.49   85.45   87.24   88.21   92.01
Recall@10       71.42   81.49   85.52   87.24   88.28   92.16
W++, 0/256      70.07   84.10   89.48   92.46   94.48   97.91
W++, 1/256      68.13   81.49   86.64   89.63   91.42   95.45
W++, 2/256      64.63   78.43   84.18   88.13   89.93   94.55
top-1 SVM^α     73.96   85.22   89.25   91.94   93.43   96.94
top-10 SVM^α    70.00   85.45   90.00   93.13   94.63   97.76
top-20 SVM^α    65.90   84.10   89.93   92.69   94.25   97.54
top-1 SVM^β     73.96   85.22   89.25   91.94   93.43   96.94
top-10 SVM^β    71.87   85.30   90.45   93.36   94.40   97.76
top-20 SVM^β    71.94   85.30   90.07   92.46   94.33   97.39
State of the art (Top-1): DGE [6] 66.87, ZLX [30] 68.24, GWG [8] 68.88, RAS [21] 69.0, KL [14] 70.1.

SUN 397 (10 splits)
State of the art (Top-1 accuracy): XHE [29] 38.0, SPM [23] 47.2 ± 0.2, LSH [15] 49.48 ± 0.3, GWG [8] 51.98, ZLX [30] 54.32 ± 0.1, KL [14] 54.65 ± 0.2.
Method          Top-1         Top-2         Top-3         Top-4         Top-5         Top-10
SVM^OVA         55.23 ± 0.6   66.23 ± 0.6   70.81 ± 0.4   73.30 ± 0.2   74.93 ± 0.2   79.00 ± 0.3
TopPush^OVA     53.53 ± 0.3   65.39 ± 0.3   71.46 ± 0.2   75.25 ± 0.1   77.95 ± 0.2   85.15 ± 0.3
Recall@1^OVA    52.95 ± 0.2   65.49 ± 0.2   71.86 ± 0.2   75.88 ± 0.2   78.72 ± 0.2   86.03 ± 0.2
Recall@5^OVA    50.72 ± 0.2   64.74 ± 0.3   70.75 ± 0.3   74.02 ± 0.3   76.06 ± 0.3   80.66 ± 0.2
Recall@10^OVA   50.92 ± 0.2   64.94 ± 0.2   70.95 ± 0.2   74.14 ± 0.2   76.21 ± 0.2   80.68 ± 0.2
top-1 SVM^α     58.16 ± 0.2   71.66 ± 0.2   78.22 ± 0.1   82.29 ± 0.2   84.98 ± 0.2   91.48 ± 0.2
top-10 SVM^α    58.00 ± 0.2   73.65 ± 0.1   80.80 ± 0.1   84.81 ± 0.2   87.45 ± 0.2   93.40 ± 0.2
top-20 SVM^α    55.98 ± 0.3   72.51 ± 0.2   80.22 ± 0.2   84.54 ± 0.2   87.37 ± 0.2   93.62 ± 0.2
top-1 SVM^β     58.16 ± 0.2   71.66 ± 0.2   78.22 ± 0.1   82.29 ± 0.2   84.98 ± 0.2   91.48 ± 0.2
top-10 SVM^β    59.32 ± 0.1   74.13 ± 0.2   80.91 ± 0.2   84.92 ± 0.2   87.49 ± 0.2   93.36 ± 0.2
top-20 SVM^β    58.65 ± 0.2   73.96 ± 0.2   80.95 ± 0.2   85.05 ± 0.2   87.70 ± 0.2   93.64 ± 0.2
Experimental results are given in Table 1. First, we note that our method is scalable to large datasets
with millions of training examples, such as Places 205 and ILSVRC 2012 (results in the supplement).
Second, we observe that optimizing the top-k hinge loss (both versions) yields consistently better
top-k performance. This might come at the cost of a decreased top-1 accuracy (e.g. on MIT Indoor
67), but, interestingly, may also result in a noticeable increase in the top-1 accuracy on larger datasets
like Caltech 101 Silhouettes and SUN 397. This resonates with our argumentation that optimizing
for top-k is often more appropriate for datasets with a large number of classes.
Overall, we get systematic increase in top-k accuracy over all datasets that we examined. For example, we get the following improvements in top-5 accuracy with our top-10 SVM? compared to
top-1 SVM? : +2.6% on Caltech 101, +1.2% on MIT Indoor 67, and +2.5% on SUN 397.
6 Conclusion
We demonstrated scalability and effectiveness of the proposed top-k multiclass SVM on five image
recognition datasets leading to consistent improvements in top-k performance. In the future, one
could study if the top-k hinge loss (3) can be generalized to the family of ranking losses [27]. Similar
to the top-k loss, this could lead to tighter convex upper bounds on the corresponding discrete losses.
References
[1] A. Bordes, L. Bottou, P. Gallinari, and J. Weston. Solving multiclass support vector machines with
LaRank. In ICML, pages 89–96, 2007.
[2] O. Bousquet and L. Bottou. The tradeoffs of large scale learning. In NIPS, pages 161–168, 2008.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] S. Bu, Z. Liu, J. Han, and J. Wu. Superpixel segmentation based structural scene recognition. In MM,
pages 681–684. ACM, 2013.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292, 2001.
[6] C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode
seeking. In NIPS, pages 494?502, 2013.
[7] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear
classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[8] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In ECCV, 2014.
[9] M. R. Gupta, S. Bengio, and J. Weston. Training highly multiclass classifiers. JMLR, 15:1461–1492,
2014.
[10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[11] T. Joachims. A support vector method for multivariate performance measures. In ICML, pages 377–384,
2005.
[12] M. Juneja, A. Vedaldi, C. Jawahar, and A. Zisserman. Blocks that shout: distinctive parts for scene
classification. In CVPR, 2013.
[13] K. Kiwiel. Variable fixing algorithms for the continuous quadratic knapsack problem. Journal of Optimization Theory and Applications, 136(3):445–458, 2008.
[14] M. Koskela and J. Laaksonen. Convolutional network features for scene recognition. In Proceedings of
the ACM International Conference on Multimedia, pages 1169–1172. ACM, 2014.
[15] M. Lapin, B. Schiele, and M. Hein. Scalable multitask representation learning for scene classification. In
CVPR, 2014.
[16] N. Li, R. Jin, and Z.-H. Zhou. Top rank optimization in linear time. In NIPS, pages 1502–1510, 2014.
[17] W. Ogryczak and A. Tamir. Minimizing the sum of the k largest functions in linear time. Information
Processing Letters, 85(3):117–122, 2003.
[18] M. Patriksson. A survey on the continuous nonlinear resource allocation problem. European Journal of
Operational Research, 185(1):1–46, 2008.
[19] M. Patriksson and C. Str?mberg. Algorithms for the continuous nonlinear resource allocation problem
– new implementations and numerical studies. European Journal of Operational Research, 243(3):703–722, 2015.
[20] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[21] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: an astounding
baseline for recognition. arXiv preprint arXiv:1403.6382, 2014.
[22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge, 2014.
[23] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the Fisher vector: theory and practice. IJCV, pages 1–24, 2013.
[24] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized
loss minimization. Mathematical Programming, pages 1–41, 2014.
[25] J. Sun and J. Ponce. Learning discriminative part detectors for image classification and cosegmentation.
In ICCV, pages 3400–3407, 2013.
[26] K. Swersky, B. J. Frey, D. Tarlow, R. S. Zemel, and R. P. Adams. Probabilistic n-choose-k models for
classification and ranking. In NIPS, pages 3050–3058, 2012.
[27] N. Usunier, D. Buffoni, and P. Gallinari. Ranking with ordered weighted pairwise classification. In ICML,
pages 1057–1064, 2009.
[28] J. Weston, S. Bengio, and N. Usunier. Wsabie: scaling up to large vocabulary image annotation. IJCAI,
pages 2764–2770, 2011.
[29] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition
from abbey to zoo. In CVPR, 2010.
[30] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition
using places database. In NIPS, 2014.
Solving Random Quadratic Systems of Equations
Is Nearly as Easy as Solving Linear Systems
Yuxin Chen
Department of Statistics
Stanford University
Stanford, CA 94305
yxchen@stanford.edu
Emmanuel J. Candès
Department of Mathematics and Department of Statistics
Stanford University
Stanford, CA 94305
candes@stanford.edu
Abstract
This paper is concerned with finding a solution x to a quadratic system of equations y_i = |⟨a_i, x⟩|², i = 1, . . . , m. We demonstrate that it is possible to solve
unstructured random quadratic systems in n variables exactly from O(n) equations in linear time, that is, in time proportional to reading the data {ai } and {yi }.
This is accomplished by a novel procedure, which starting from an initial guess
given by a spectral initialization procedure, attempts to minimize a nonconvex
objective. The proposed algorithm distinguishes from prior approaches by regularizing the initialization and descent procedures in an adaptive fashion, which
discard terms bearing too much influence on the initial estimate or search directions. These careful selection rules?which effectively serve as a variance reduction scheme?provide a tighter initial guess, more robust descent directions, and
thus enhanced practical performance. Further, this procedure also achieves a nearoptimal statistical accuracy in the presence of noise. Empirically, we demonstrate
that the computational cost of our algorithm is about four times that of solving a
least-squares problem of the same size.
1 Introduction
Suppose we are given a response vector y = [y_i]_{1≤i≤m} generated from a quadratic transformation of an unknown object x ∈ ℝⁿ/ℂⁿ, i.e.
y_i = |⟨a_i, x⟩|²,  i = 1, …, m,  (1)
where the feature/design vectors a_i ∈ ℝⁿ/ℂⁿ are known. In other words, we acquire measurements about the linear product ⟨a_i, x⟩ with all signs/phases missing. Can we hope to recover x from this nonlinear system of equations?
This problem can be recast as a quadratically constrained quadratic program (QCQP), which subsumes as special cases various classical combinatorial problems with Boolean variables (e.g. the NP-complete stone problem [1, Section 3.4.1]). In the physical sciences, this problem is commonly referred to as phase retrieval [2]; the origin is that in many imaging applications (e.g. X-ray crystallography, diffraction imaging, microscopy) it is infeasible to record the phases of the diffraction patterns, so that we can only record |Ax|², where x is the electrical field of interest. Moreover, this problem finds applications in estimating mixtures of linear regressions, since one can transform the latent membership variables into missing phases [3]. Despite its importance across various fields, solving the quadratic system (1) is combinatorial in nature and, in general, NP-complete.
To be more realistic albeit more challenging, the acquired samples are almost always corrupted by some amount of noise, namely,
y_i ≈ |⟨a_i, x⟩|²,  i = 1, …, m.  (2)
For instance, in imaging applications the data are best modeled by Poisson random variables
y_i ~ind. Poisson(|⟨a_i, x⟩|²),  i = 1, …, m,  (3)
which captures the variation in the number of photons detected by a sensor. While we shall pay special attention to the Poisson noise model due to its practical relevance, the current work aims to accommodate general, or even deterministic, noise structures.
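To make the three measurement models above concrete, here is a small NumPy sketch of our own (the variable names and problem sizes are illustrative choices, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 100, 800                    # signal dimension and number of equations
    x = rng.standard_normal(n)         # unknown object
    A = rng.standard_normal((m, n))    # rows are the design vectors a_i

    y_clean = (A @ x) ** 2             # noiseless model (1)
    y_poisson = rng.poisson(y_clean)   # Poisson model (3), a special case of (2)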
1.1 Nonconvex optimization
Assuming independent samples, the first attempt is to seek the maximum likelihood estimate (MLE):
minimize_z  −∑_{i=1}^m ℓ(z; y_i),  (4)
where ℓ(z; y_i) represents the log-likelihood of a candidate z given the outcome y_i. As an example, under the Poisson data model (3), one has (up to some constant offset)
ℓ(z; y_i) = y_i log(|a_i^* z|²) − |a_i^* z|².  (5)
Computing the MLE, however, is in general intractable, since ℓ(z; y_i) is not concave in z.
Fortunately, under unstructured random systems, the problem is not as ill-posed as it might seem, and is solvable via convenient convex programs with optimal statistical guarantees [4-12]. The basic paradigm is to lift the quadratically constrained problem into a linearly constrained problem by introducing a matrix variable X = xx^* and relaxing the rank-one constraint. Nevertheless, working with the auxiliary matrix variable significantly increases the computational complexity, which exceeds the order of n³ and is prohibitively expensive for large-scale data.
This paper follows a different route, which attempts recovery by minimizing the nonconvex objective (4) or (5) directly (e.g. [2, 13-19]). The main incentive is the potential computational benefit, since this strategy operates directly upon vectors instead of lifting decision variables to higher dimension. Among this class of procedures, one natural candidate is the family of gradient-descent type algorithms developed with respect to the objective (4). This paradigm can be regarded as performing some variant of stochastic gradient descent over the random samples {(y_i, a_i)}_{1≤i≤m} as an approximation to maximizing the population likelihood L(z) := E_{(y,a)}[ℓ(z; y)]. While in general nonconvex optimization falls short of performance guarantees, a recently proposed approach called
Wirtinger Flow (WF) [13] promises efficiency under random features. In a nutshell, WF initializes the iterate via a spectral method, and then successively refines the estimate via the following update rule:
z^(t+1) = z^(t) + (μ_t/m) ∑_{i=1}^m ∇ℓ(z^(t); y_i),
where z^(t) denotes the t-th iterate of the algorithm, and μ_t is the learning rate. Here, ∇ℓ(z; y_i) represents the Wirtinger derivative with respect to z, which reduces to the ordinary gradient in the real setting. Under Gaussian designs, WF (i) allows exact recovery from O(n log n) noise-free quadratic equations [13] (see footnote 1); (ii) recovers x up to ε-accuracy within O(mn² log(1/ε)) time (or flops) [13]; and (iii) is stable and converges to the MLE under Gaussian noise [20]. Despite these intriguing guarantees, the computational complexity of WF still far exceeds the best that one can hope for. Moreover, its sample complexity is a logarithmic factor away from the information-theoretic limit.
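For intuition, a plain WF gradient step under the real-valued Poisson objective (5) can be sketched as follows (a rough illustration of ours, reusing A and y from the earlier snippet, e.g. y = y_poisson; the step size is not tuned and a_i^⊤z = 0 is not guarded against):

    def wf_step(z, A, y, mu):
        Az = A @ z
        grad = A.T @ (2 * (y - Az**2) / Az)   # sum_i of the gradients of l(z; y_i)
        return z + (mu / len(y)) * grad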
1.2 This paper: Truncated Wirtinger Flow
This paper develops a novel linear-time algorithm, called Truncated Wirtinger Flow (TWF), that achieves a near-optimal statistical accuracy. The distinguishing features include a careful initialization procedure and a more adaptive gradient flow. Informally, TWF entails two stages:
1. Initialization: compute an initial guess z^(0) by means of a spectral method applied to a subset T_0 of the data {y_i} that do not bear too much influence on the spectral estimates;
2. Loop: for 0 ≤ t < T,
z^(t+1) = z^(t) + (μ_t/m) ∑_{i∈T_{t+1}} ∇ℓ(z^(t); y_i)  (6)
for some index set T_{t+1} ⊆ {1, ⋯, m} over which the ∇ℓ(z^(t); y_i) are well-controlled.
1 f(n) = O(g(n)) or f(n) ≲ g(n) (resp. f(n) ≳ g(n)) means there exists a constant c > 0 such that |f(n)| ≤ c|g(n)| (resp. |f(n)| ≥ c|g(n)|). f(n) ≍ g(n) means f(n) and g(n) are orderwise equivalent.
Figure 1: (a) Relative errors of CG and TWF vs. iteration count, where n = 1000 and m = 8n.
(b) Relative MSE vs. SNR in dB, where n = 100. The curves are shown for two settings: TWF for
solving quadratic equations (blue), and MLE had we observed additional phase information (green).
We highlight three aspects of the proposed algorithm, with details deferred to Section 2.
(a) In contrast to WF and other gradient descent variants, we regularize both the initialization and
the gradient flow in a more cautious manner by operating only upon some iteration-varying
index sets T_t. The main point is that enforcing such careful selection rules leads to tighter
initialization and more robust descent directions.
(b) TWF sets the learning rate μ_t in a far more liberal fashion (e.g. μ_t ≡ 0.2 under suitable conditions), as opposed to the situation in WF that recommends μ_t = O(1/n).
(c) Computationally, each iterative step mainly consists in calculating {∇ℓ(z; y_i)}, which is inexpensive and can often be performed in linear time, that is, in time proportional to evaluating the data and the constraints. Take the real-valued Poisson likelihood (5) for example:
∇ℓ(z; y_i) = 2 [(y_i/|a_i^⊤z|²) a_i a_i^⊤ z − a_i a_i^⊤ z] = 2 [(y_i − |a_i^⊤z|²)/(a_i^⊤z)] a_i,  1 ≤ i ≤ m,
which essentially amounts to two matrix-vector products. To see this, rewrite
∑_{i∈T_{t+1}} ∇ℓ(z^(t); y_i) = A^⊤ v,  with  v_i = 2 (y_i − |a_i^⊤z^(t)|²)/(a_i^⊤z^(t)) if i ∈ T_{t+1}, and v_i = 0 otherwise,
where A := [a_1, ⋯, a_m]^⊤. Hence, Az^(t) gives v and A^⊤v the desired truncated gradient.
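In code, the two matrix-vector products look as follows (our own sketch; `mask` encodes membership in T_{t+1}, whose construction is detailed in Section 2, and division warnings on masked-out entries can be ignored or pre-filtered):

    def truncated_gradient(z, A, y, mask):
        Az = A @ z                                     # first product: A z
        v = np.where(mask, 2 * (y - Az**2) / Az, 0.0)  # truncated weights
        return A.T @ v                                 # second product: A^T v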
1.3 Numerical surprises
The power of TWF is best illustrated by numerical examples. Since x and e^{jφ}x are indistinguishable given y, we evaluate the solution based on a metric that disregards the global phase [13]:
dist(z, x) := min_{φ∈[0,2π)} ‖e^{−jφ} z − x‖.  (7)
In the sequel, TWF operates according to the Poisson log-likelihood (5), and takes μ_t ≡ 0.2.
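The metric (7) admits a closed form; a small sketch of ours, valid for both real and complex signals:

    def dist(z, x):
        c = np.vdot(z, x)                     # inner product used to align the global phase
        phase = c / abs(c) if abs(c) > 0 else 1.0
        return np.linalg.norm(z * phase - x)  # || e^{-j phi*} z - x || at the optimal phase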
We first compare the computational efficiency of TWF for solving quadratic systems with that of
conjugate gradient (CG) for solving least square problems. As is well known, CG is among the
most popular methods for solving large-scale least square problems, and hence offers a desired
benchmark. We run TWF and CG respectively over the following two problems:
(a) find x ∈ ℝⁿ s.t. b_i = a_i^⊤ x, 1 ≤ i ≤ m;
(b) find x ∈ ℝⁿ s.t. b_i = |a_i^⊤ x|, 1 ≤ i ≤ m,
where m = 8n, x ~ N(0, I), and a_i ~ind. N(0, I). This yields a well-conditioned design matrix A, for which CG converges extremely fast [21]. The relative estimation errors of both methods are reported in Fig. 1(a), where TWF is seeded by 10 power iterations. The iteration counts are plotted in different scales so that 4 TWF iterations are tantamount to 1 CG iteration. Since each iteration of CG and TWF involves two matrix-vector products Az and A^⊤v, the numerical plots lead to a surprisingly positive observation for such an unstructured design:
Figure 2: Recovery after (top) truncated spectral initialization, and (bottom) 50 TWF iterations.
Even when all phase information is missing, TWF is capable of solving a quadratic system of equations only about 4 times slower (see footnote 2) than solving a least squares problem of the same size!
The numerical surprise extends to noisy quadratic systems. Under the Poisson data model, Fig. 1(b) displays the relative mean-square error (MSE) of TWF when the signal-to-noise ratio (SNR) varies; here, the relative MSE and the SNR are defined as follows (see footnote 3):
MSE := dist²(x̂, x)/‖x‖²  and  SNR := 3‖x‖²,  (8)
where x̂ is an estimate. Both SNR and MSE are displayed on a dB scale (i.e. the values of 10 log₁₀(SNR) and 10 log₁₀(MSE) are plotted). To evaluate the quality of the TWF solution, we compare it with the MLE applied to an ideal problem where the phases (i.e. {φ_i = sign(a_i^⊤x)}) are revealed a priori. The presence of this precious side information gives away the phase retrieval problem and allows us to compute the MLE via convex programming. As illustrated in Fig. 1(b), TWF solves the quadratic system with nearly the best possible accuracy, since it only incurs an extra 1.5 dB loss compared to the ideal MLE with all true phases revealed.
To demonstrate the scalability of TWF on real data, we apply TWF to a 320×1280 image. Consider a type of physically realizable measurements called coded diffraction patterns (CDP) [22], where
y^(l) = |F D^(l) x|²,  1 ≤ l ≤ L,  (9)
where m = nL, |z|² denotes the vector of entrywise squared magnitudes, and F is the DFT matrix. Here, D^(l) is a diagonal matrix whose diagonal entries are randomly drawn from {1, −1, j, −j}, which models signal modulation before diffraction. We generate L = 12 masks for measurements, and run TWF on a MacBook Pro with a 3 GHz Intel Core i7. We run 50 truncated power iterations and 50 TWF iterations, which in total cost 43.9 seconds for each color channel. The relative errors after initialization and TWF iterations are 0.4773 and 2.2 × 10⁻⁵, respectively; see Fig. 2.
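A hypothetical sketch of the CDP measurement model (9), using numpy.fft.fft in place of F (the normalization convention is our assumption, not the authors' code; rng and n are reused from the earlier snippet):

    L = 12
    masks = rng.choice(np.array([1, -1, 1j, -1j]), size=(L, n))  # diagonals of D^(l)

    def cdp_measure(x):
        return np.abs(np.fft.fft(masks * x, axis=1)) ** 2        # y^(l) = |F D^(l) x|^2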
1.4 Main results
We corroborate the preceding numerical findings with theoretical support. For concreteness, we assume TWF proceeds according to the Poisson log-likelihood (5). We suppose the samples (y_i, a_i) are independently and randomly drawn from the population, and model the random features a_i as
a_i ~ N(0, I_n).  (10)
To start with, the following theorem confirms the performance of TWF under noiseless data.
2 Similar phenomena arise in many other experiments we've conducted (e.g. when the sample size m ranges from 6n to 20n). In fact, this factor seems to improve slightly as m/n increases.
3 To justify the definition of SNR, note that the signals and noise are captured by μ_i = (a_i^⊤x)² and y_i − μ_i, respectively. The SNR is thus given by
∑_{i=1}^m μ_i² / ∑_{i=1}^m Var[y_i] = ∑_{i=1}^m |a_i^⊤x|⁴ / ∑_{i=1}^m |a_i^⊤x|² ≈ 3m‖x‖⁴ / (m‖x‖²) = 3‖x‖².
Theorem 1 (Exact recovery). Consider the noiseless case (1) with an arbitrary x ∈ ℝⁿ. Suppose that the learning rate μ_t is either taken to be a constant μ_t ≡ μ > 0 or chosen via a backtracking line search. Then there exist some constants 0 < ρ, ν < 1 and μ_0, c_0, c_1, c_2 > 0 such that with probability exceeding 1 − c_1 exp(−c_2 m), the TWF estimates (Algorithm 1) obey
dist(z^(t), x) ≤ ν(1 − ρ)^t ‖x‖,  ∀t ∈ ℕ,  (11)
provided that m ≥ c_0 n and μ ≤ μ_0. As discussed below, we can take μ_0 ≈ 0.3.
Theorem 1 justifies two intriguing properties of TWF. To begin with, TWF recovers the ground truth exactly as soon as the number of equations is on the same order as the number of unknowns, which is information-theoretically optimal. More surprisingly, TWF converges at a geometric rate, i.e. it achieves ε-accuracy (i.e. dist(z^(t), x) ≤ ε‖x‖) within at most O(log(1/ε)) iterations. As a result, the time taken for TWF to solve the quadratic system is proportional to the time taken to read the data, which confirms the linear-time complexity of TWF. These outperform the theoretical guarantees of WF [13], which requires O(mn² log(1/ε)) runtime and O(n log n) sample complexity.
Notably, the performance gain of TWF is the result of the key algorithmic changes. Rather than maximizing the data usage at each step, TWF exploits the samples at hand in a more selective manner, which effectively trims away those components that are too influential on either the initial guess or the search directions, thus reducing the volatility of each movement. With a tighter initial guess and better-controlled search directions in place, TWF is able to proceed with a more aggressive learning rate. Taken collectively, these efforts enable the appealing convergence property of TWF.
Next, we turn to more realistic noisy data by accounting for a general additive noise model:
y_i = |⟨a_i, x⟩|² + η_i,  1 ≤ i ≤ m,  (12)
where η_i represents a noise term. The stability of TWF is demonstrated in the theorem below.
Theorem 2 (Stability). Consider the noisy case (12). Suppose that the learning rate μ_t is either taken to be a positive constant μ_t ≡ μ or chosen via a backtracking line search. If
m ≥ c_0 n,  μ ≤ μ_0,  and  ‖η‖_∞ ≤ c_1 ‖x‖²,  (13)
then with probability at least 1 − c_2 exp(−c_3 m), the TWF estimates (Algorithm 1) satisfy
dist(z^(t), x) ≲ ‖η‖/(√m ‖x‖) + (1 − ρ)^t ‖x‖,  ∀t ∈ ℕ  (14)
for all x ∈ ℝⁿ. Here, 0 < ρ < 1 and μ_0, c_0, c_1, c_2, c_3 > 0 are some universal constants.
Alternatively, if one regards the SNR for the model (12) as
SNR := ∑_{i=1}^m |⟨a_i, x⟩|⁴ / ‖η‖² ≈ 3m‖x‖⁴ / ‖η‖²,  (15)
then we immediately arrive at another form of performance guarantee stated in terms of SNR:
dist(z^(t), x) ≲ (1/√SNR) ‖x‖ + (1 − ρ)^t ‖x‖,  ∀t ∈ ℕ.  (16)
As a consequence, the relative error of TWF reaches O(SNR^{−1/2}) within a logarithmic number of iterations. It is worth emphasizing that the above stability guarantee is deterministic, in that it holds for any noise structure obeying (13). Encouragingly, this statistical accuracy is nearly un-improvable, as revealed by a minimax lower bound that we provide in the supplemental materials.
We pause to remark that several other nonconvex methods have been proposed for solving quadratic equations, which exhibit intriguing empirical performance. A partial list includes the error reduction schemes by Fienup [2], alternating minimization [14], the Kaczmarz method [17], and generalized approximate message passing [15]. However, most of them fall short of theoretical support. The analytical difficulty arises since these methods employ the same samples in each iteration, which introduces complicated dependencies across all iterates. To circumvent this issue, [14] proposes a sample-splitting version of the alternating minimization method that employs fresh samples in each iteration. Despite the mathematical convenience, the sample complexity of this approach is O(n log³n + n log²n log(1/ε)), which is a factor of O(log³n) from optimal and is empirically largely outperformed by the variant that reuses all samples. In contrast, our algorithm uses the same pool of samples all the time and is therefore practically appealing. Besides, the approach in [14] does not come with provable stability guarantees. Numerically, each iteration of Fienup's algorithm (or alternating minimization) involves solving a least squares problem, and the algorithm converges in tens or hundreds of iterations. This is computationally more expensive than TWF, whose computational complexity is merely about 4 times that of solving a least squares problem.
2 Algorithm: Truncated Wirtinger Flow
This section explains the basic principles of truncated Wirtinger flow. For notational convenience, we denote A := [a_1, ⋯, a_m]^⊤ and A(M) := {a_i^⊤ M a_i}_{1≤i≤m} for any M ∈ ℝ^{n×n}.
2.1 Truncated gradient stage
In the case of independent real-valued data, the descent direction of the WF updates, which is the gradient of the Poisson log-likelihood, can be expressed as follows:
∑_{i=1}^m ∇ℓ(z; y_i) = ∑_{i=1}^m ν_i a_i,  with  ν_i := 2 (y_i − |a_i^⊤z|²)/(a_i^⊤z),  (17)
where ν_i represents the weight assigned to each feature a_i.
Figure 3: The locus of −(1/2)∇ℓ_i(z) for all unit vectors a_i. The red arrows depict those directions with large weights.
Unfortunately, the gradient of this form is non-integrable and hence uncontrollable. To see this, consider any fixed z ∈ ℝⁿ. The typical value of min_{1≤i≤m} |a_i^⊤z| is on the order of (1/m)‖z‖, leading to some excessively large weights ν_i. Notably, an underlying premise for a nonconvex procedure to succeed is to ensure all iterates reside within a basin of attraction, that is, a neighborhood surrounding x within which x is the unique stationary point of the objective. When a gradient is unreasonably large, the iterative step might overshoot and end up leaving this basin of attraction. Consequently, WF moving along the preceding direction might not come close to the truth unless z is already very close to x. This is observed in numerical simulations (see footnote 4).
TWF addresses this challenge by discarding terms having too high of a leverage on the search direction; this is achieved by regularizing the weights ν_i via appropriate truncation. Specifically,
z^(t+1) = z^(t) + (μ_t/m) ∇ℓ_tr(z^(t)),  ∀t ∈ ℕ,  (18)
where ∇ℓ_tr(·) denotes the truncated gradient given by
∇ℓ_tr(z) := ∑_{i=1}^m [2 (y_i − |a_i^⊤z|²)/(a_i^⊤z)] a_i 1_{E_1^i(z) ∩ E_2^i(z)}  (19)
for some appropriate truncation criteria specified by E_1^i(·) and E_2^i(·). In our algorithm, we take E_1^i(z) and E_2^i(z) to be two collections of events given by
E_1^i(z) := {α_z^lb ‖z‖ ≤ |a_i^⊤z| ≤ α_z^ub ‖z‖};  (20)
E_2^i(z) := {|y_i − |a_i^⊤z|²| ≤ (α_h/m) ‖y − A(zz^⊤)‖_1 · |a_i^⊤z|/‖z‖},  (21)
where α_z^lb, α_z^ub, α_h are predetermined truncation thresholds. In words, we drop components whose size falls outside some confidence range, a range where the magnitudes of both the numerator and denominator of ν_i are comparable to their respective mean values.
This paradigm could be counter-intuitive at first glance, since one might expect the larger terms
to be better aligned with the desired search direction. The issue, however, is that the large terms
are extremely volatile and could dominate all other components in an undesired way. In contrast,
TWF makes use of only gradient components of typical sizes, which slightly increases the bias but
remarkably reduces the variance of the descent direction. We expect such gradient regularization
and variance reduction schemes to be beneficial for solving a broad family of nonconvex problems.
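For the real Gaussian case, the events (20)-(21) translate directly into a boolean mask (a sketch under our own conventions, using the default thresholds quoted later in (25)); it produces the `mask` consumed by the truncated-gradient snippet of Section 1:

    def truncation_mask(z, A, y, a_lb=0.3, a_ub=5.0, a_h=5.0):
        Az = A @ z
        nz = np.linalg.norm(z)
        E1 = (a_lb * nz <= np.abs(Az)) & (np.abs(Az) <= a_ub * nz)     # event (20)
        resid = np.abs(y - Az**2)
        E2 = resid <= (a_h / len(y)) * resid.sum() * np.abs(Az) / nz   # event (21)
        return E1 & E2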
2.2 Truncated spectral initialization
A key step to ensure meaningful convergence is to seed TWF with some point inside the basin of attraction, which proves crucial for other nonconvex procedures as well.
4 For complex-valued data, WF converges empirically, as min_i |a_i^⊤z| is much larger than in the real case.
Algorithm 1 Truncated Wirtinger Flow.
Input: Measurements {y_i | 1 ≤ i ≤ m} and feature vectors {a_i | 1 ≤ i ≤ m}; truncation thresholds α_z^lb, α_z^ub, α_h, and α_y satisfying (by default, α_z^lb = 0.3, α_z^ub = α_h = 5, and α_y = 3)
0 < α_z^lb ≤ 0.5,  α_z^ub ≥ 5,  α_h ≥ 5,  and  α_y ≥ 3.  (25)
Initialize z^(0) to be √(mn/∑_{i=1}^m ‖a_i‖²) λ_0 z̃, where λ_0 = √((1/m) ∑_{i=1}^m y_i) and z̃ is the leading eigenvector of
Y = (1/m) ∑_{i=1}^m y_i a_i a_i^* 1_{{|y_i| ≤ α_y² λ_0²}}.  (22)
Loop: for t = 0 : T do
z^(t+1) = z^(t) + (2μ_t/m) ∑_{i=1}^m [(y_i − |a_i^* z^(t)|²)/(z^(t)* a_i)] a_i 1_{E_1^i ∩ E_2^i},  (23)
where
E_1^i := {α_z^lb ≤ √n |a_i^* z^(t)|/(‖a_i‖ ‖z^(t)‖) ≤ α_z^ub},  E_2^i := {|y_i − |a_i^* z^(t)|²| ≤ α_h K_t √n |a_i^* z^(t)|/(‖a_i‖ ‖z^(t)‖)},  (24)
and K_t := (1/m) ∑_{l=1}^m |y_l − |a_l^* z^(t)|²|.
Output z^(T).
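Stitching the earlier sketches together gives a compact rendition of the loop in Algorithm 1 for real-valued data (an illustration under our simplifying assumptions, e.g. it uses the events (20)-(21) rather than the ‖a_i‖-normalized variants of (24); z0 is assumed to come from the truncated spectral initialization of Section 2.2):

    def twf(z0, A, y, mu=0.2, T=1000):
        z = z0.copy()
        for _ in range(T):
            mask = truncation_mask(z, A, y)
            z = z + (mu / len(y)) * truncated_gradient(z, A, y, mask)
        return z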
An appealing initialization procedure is the spectral method [14, 13], which initializes z^(0) as the leading eigenvector of Ỹ := (1/m) ∑_{i=1}^m y_i a_i a_i^⊤. This is based on the observation that, for any fixed unit vector x,
E[Ỹ] = I + 2xx^⊤,
whose principal component is exactly x with an eigenvalue of 3. Unfortunately, the success of this method requires a sample complexity exceeding n log n. To see this, recall that max_i y_i ≈ 2 log m. Letting k = arg max_i y_i and ã_k := a_k/‖a_k‖, one can derive
ã_k^⊤ Ỹ ã_k ≥ ã_k^⊤ (m^{−1} y_k a_k a_k^⊤) ã_k ≳ (2n log m)/m,
which dominates x^⊤ Ỹ x ≈ 3 unless m ≳ n log m. As a result, ã_k is closer to the principal component of Ỹ than x when m ≍ n. This drawback turns out to be a substantial practical issue.
This issue can be remedied if we preclude those data y_i with large magnitudes when running the spectral method. Specifically, we propose to initialize z^(0) as the leading eigenvector of
Y := (1/m) ∑_{i=1}^m y_i a_i a_i^⊤ 1_{{|y_i| ≤ α_y² ((1/m) ∑_{l=1}^m y_l)}},  (26)
followed by proper scaling so as to ensure ‖z^(0)‖ ≈ ‖x‖. As illustrated in Fig. 4, the empirical advantage of the truncated spectral method is increasingly more remarkable as n grows.
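A sketch of the truncated spectral initialization (26) via plain power iteration (our own; 50 iterations mirrors the experimental setup described later, and the final scaling uses the heuristic ‖x‖² ≈ mean(y)):

    def truncated_spectral_init(A, y, a_y=3.0, iters=50, seed=1):
        m, n = A.shape
        keep = np.abs(y) <= a_y**2 * np.mean(y)         # truncation indicator in (26)
        z = np.random.default_rng(seed).standard_normal(n)
        for _ in range(iters):
            z = A.T @ (keep * y * (A @ z)) / m          # power step: z <- Y z
            z /= np.linalg.norm(z)
        return np.sqrt(np.mean(y)) * z                  # scale so that ||z|| ~ ||x||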
Figure 4: Relative initialization error vs. the signal dimension n (m = 6n), for a_i ~ N(0, I).
2.3 Choice of algorithmic parameters
One important implementation detail is the learning rate μ_t. There are two alternatives that work well in both theory and practice:
1. Fixed size. Take μ_t ≡ μ for some constant μ > 0. As long as μ is not too large, this strategy always works. Under the condition (25), our theorems hold for any positive constant μ < 0.28.
2. Backtracking line search with truncated objective. This strategy performs a line search along the descent direction and determines an appropriate learning rate that guarantees a sufficient improvement with respect to the truncated objective. Details are deferred to the supplement.
Another set of algorithmic details to specify is the truncation thresholds α_h, α_z^lb, α_z^ub, and α_y. The present paper isolates a concrete set of combinations, as given in (25). In all theory and numerical experiments presented in this work, we assume that the parameters fall within this range.
Figure 5: (a) Empirical success rates for real Gaussian design; (b) empirical success rates for complex Gaussian design; (c) relative MSE (averaged over 100 runs) vs. SNR for Poisson data.
3 More numerical experiments and discussion
We conduct more extensive numerical experiments to corroborate our main results and verify the applicability of TWF to practical problems. For all experiments conducted herein, we take a fixed step size μ_t ≡ 0.2, employ 50 power iterations for initialization, and T = 1000 gradient iterations. The truncation levels are taken to be the default values α_z^lb = 0.3, α_z^ub = α_h = 5, and α_y = 3.
We first apply TWF to a sequence of noiseless problems with n = 1000 and varying m. Generate the object x at random, and produce the feature vectors a_i in two different ways: (1) a_i ~ind. N(0, I); (2) a_i ~ind. N(0, I) + jN(0, I). A Monte Carlo trial is declared a success if the estimate x̂ obeys dist(x̂, x)/‖x‖ ≤ 10⁻⁵. Fig. 5(a) and 5(b) illustrate the empirical success rates of TWF (averaged over 100 runs for each m) for noiseless data, indicating that m ≈ 5n and m ≈ 4.5n are often sufficient under real and complex Gaussian designs, respectively. For the sake of comparison, we simulate the empirical success rates of WF, with the step size μ_t = min{1 − e^{−t/330}, 0.2} as recommended by [13]. As shown in Fig. 5, TWF outperforms WF under random Gaussian features, implying that TWF exhibits either a better convergence rate or enhanced phase transition behavior.
Next, we empirically evaluate the stability of TWF under noisy data. Set n = 1000, produce a_i ~ind. N(0, I), and generate y_i according to the Poisson model (3). Fig. 5(c) shows the relative mean square error, on the dB scale, with varying SNR (cf. (8)). As can be seen, the empirical relative MSE scales inversely proportionally to SNR, which matches our stability guarantees in Theorem 2 (since on the dB scale, the slope is about −1 as predicted by the theory (16)).
While this work focuses on the Poisson-type objective for concreteness, the proposed paradigm carries over to a variety of nonconvex objectives, and might have implications in solving other problems that involve latent variables, e.g. matrix completion [23-25], sparse coding [26], dictionary learning [27], and mixture problems (e.g. [28, 29]). We conclude this paper with an example on estimating mixtures of linear regression. Imagine
y_i = a_i^⊤ β_1 with probability p, and y_i = a_i^⊤ β_2 otherwise,  1 ≤ i ≤ m,  (27)
where β_1, β_2 are unknown. It has been shown in [3] that in the noiseless case, the ground truth satisfies
f_i(β_1, β_2) := y_i² + 0.5 a_i^⊤ (β_1 β_2^⊤ + β_2 β_1^⊤) a_i − a_i^⊤ (β_1 + β_2) y_i = 0,  1 ≤ i ≤ m,
which forms a set of quadratic constraints (in particular, if one further knows β_1 = −β_2, then this reduces to the form (1)). Running TWF with the nonconvex objective ∑_{i=1}^m f_i²(z_1, z_2) (with the assistance of a 1-D grid search proposed in [29] applied right after truncated initialization) yields accurate estimation of β_1, β_2 under minimal sample complexity, as illustrated in Fig. 6.
Figure 6: Empirical success rate for mixed regression (p = 0.5) vs. the number of measurements m (n = 1000).
Acknowledgments
E. C. is partially supported by NSF under grant CCF-0963835 and by the Math + X Award from the
Simons Foundation. Y. C. is supported by the same NSF grant.
References
[1] A. Ben-Tal and A. Nemirovski. Lectures on modern convex optimization, volume 2. 2001.
[2] J. R. Fienup. Phase retrieval algorithms: a comparison. Applied Optics, 21:2758-2769, 1982.
[3] Y. Chen, X. Yi, and C. Caramanis. A convex formulation for mixed regression with two
components: Minimax optimal rates. In Conference on Learning Theory (COLT), 2014.
[4] E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1017-1026, 2013.
[5] I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1-2):47-81, 2015.
[6] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging
with partially incoherent light via quadratic compressed sensing. Optics express, 19(16), 2011.
[7] E. J. Candès and X. Li. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. Foundations of Computational Math., 14(5):1017-1026, 2014.
[8] H. Ohlsson, A. Yang, R. Dong, and S. Sastry. CPRL - an extension of compressive sensing to the phase retrieval problem. In Advances in Neural Information Processing Systems (NIPS), 2012.
[9] Y. Chen, Y. Chi, and A. J. Goldsmith. Exact and stable covariance estimation from quadratic sampling via convex programming. IEEE Trans. on Inf. Theory, 61(7):4034-4059, 2015.
[10] T. Cai and A. Zhang. ROP: Matrix recovery via rank-one projections. Annals of Stats.
[11] K. Jaganathan, S. Oymak, and B. Hassibi. Recovery of sparse 1-D signals from the magnitudes of their Fourier transform. In IEEE ISIT, pages 1473-1477, 2012.
[12] D. Gross, F. Krahmer, and R. Kueng. A partial derandomization of PhaseLift using spherical designs. Journal of Fourier Analysis and Applications, 21(2):229-266, 2015.
[13] E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985-2007, April 2015.
[14] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. NIPS,
2013.
[15] P. Schniter and S. Rangan. Compressive phase retrieval via generalized approximate message passing. IEEE Transactions on Signal Processing, 63(4):1043-1055, Feb 2015.
[16] A. Repetti, E. Chouzenoux, and J.-C. Pesquet. A nonconvex regularized approach for phase retrieval. International Conference on Image Processing, pages 1753-1757, 2014.
[17] K. Wei. Phase retrieval via Kaczmarz methods. arXiv:1502.01822, 2015.
[18] C. White, R. Ward, and S. Sanghavi. The local convexity of solving quadratic equations.
arXiv:1506.07868, 2015.
[19] Y. Shechtman, A. Beck, and Y. C. Eldar. GESPAR: Efficient phase retrieval of sparse signals. IEEE Transactions on Signal Processing, 62(4):928-938, 2014.
[20] M. Soltanolkotabi. Algorithms and Theory for Clustering and Nonconvex Quadratic Programming. PhD thesis, Stanford University, 2014.
[21] L. N. Trefethen and D. Bau III. Numerical linear algebra, volume 50. SIAM, 1997.
[22] E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval from coded diffraction patterns. To appear in Applied and Computational Harmonic Analysis, 2014.
[23] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057-2078, 2010.
[24] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In ACM Symposium on Theory of Computing, pages 665-674, 2013.
[25] R. Sun and Z. Luo. Guaranteed matrix completion via nonconvex factorization. FOCS, 2015.
[26] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse
coding. Conference on Learning Theory (COLT), 2015.
[27] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere. ICML, 2015.
[28] S. Balakrishnan, M. J. Wainwright, and B. Yu. Statistical guarantees for the EM algorithm:
From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[29] X. Yi, C. Caramanis, and S. Sanghavi. Alternating minimization for mixed linear regression.
International Conference on Machine Learning, June 2014.
5,240 | 5,744 | Sampling from Probabilistic Submodular Models
Alkis Gotovos
ETH Zurich
S. Hamed Hassani
ETH Zurich
Andreas Krause
ETH Zurich
alkisg@inf.ethz.ch
hamed@inf.ethz.ch
krausea@ethz.ch
Abstract
Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively.
These notions have deep consequences for optimization, and the problem of (approximately) optimizing submodular functions has received much attention. However, beyond optimization, these notions allow specifying expressive probabilistic models that can be used to quantify predictive uncertainty via marginal inference. Prominent, well-studied special cases include Ising models and determinantal point processes, but the general class of log-submodular and log-supermodular
models is much richer and little studied. In this paper, we investigate the use of
Markov chain Monte Carlo sampling to perform approximate inference in general log-submodular and log-supermodular models. In particular, we consider a
simple Gibbs sampling procedure, and establish two sufficient conditions, the first
guaranteeing polynomial-time, and the second fast (O(n log n)) mixing. We also
evaluate the efficiency of the Gibbs sampler on three examples of such models,
and compare against a recently proposed variational approach.
1 Introduction
Modeling notions such as coverage, representativeness, or diversity is an important challenge in
many machine learning problems. These notions are well captured by submodular set functions.
Analogously, supermodular functions capture notions of smoothness, regularity, or cooperation. As
a result, submodularity and supermodularity, akin to concavity and convexity, have found numerous
applications in machine learning. The majority of previous work has focused on optimizing such
functions, including the development and analysis of algorithms for minimization [10] and maximization [9,26], as well as the investigation of practical applications, such as sensor placement [21],
active learning [12], influence maximization [19], and document summarization [25].
Beyond optimization, though, it is of interest to consider probabilistic models defined via submodular functions, that is, distributions over finite sets (or, equivalently, binary random vectors) defined
as p(S) ∝ exp(βF(S)), where F : 2^V → ℝ is a submodular or supermodular function (equivalently, either F or −F is submodular), and β ≥ 0 is a scaling parameter. Finding most likely sets in
such models captures classical submodular optimization. However, going beyond point estimates,
that is, performing general probabilistic (e.g., marginal) inference in them, allows us to quantify
uncertainty given some observations, as well as learn such models from data. Only few special
cases belonging to this class of models have been extensively studied in the past; most notably,
Ising models [20], which are log-supermodular in the usual case of attractive (ferromagnetic) potentials, or log-submodular under repulsive (anti-ferromagnetic) potentials, and determinantal point
processes [23], which are log-submodular.
Recently, Djolonga and Krause [6] considered a more general treatment of such models, and proposed a variational approach for performing approximate probabilistic inference for them. It is
natural to ask to what degree the usual alternative to variational methods, namely Monte Carlo sampling, is applicable to these models, and how it performs in comparison. To this end, in this paper we consider a simple Markov chain Monte Carlo (MCMC) algorithm on log-submodular and log-supermodular models, and provide a first analysis of its performance. We present two theoretical
conditions that respectively guarantee polynomial-time and fast (O(n log n)) mixing in such models,
and experimentally compare against the variational approximations on three examples.
2 Problem Setup
We start by considering set functions F : 2^V → ℝ, where V is a finite ground set of size |V| = n. Without loss of generality, if not otherwise stated, we will hereafter assume that V = [n] := {1, 2, …, n}. The marginal gain obtained by adding element v ∈ V to set S ⊆ V is defined as F(v|S) := F(S ∪ {v}) − F(S). Intuitively, submodularity expresses a notion of diminishing returns; that is, adding an element to a larger set provides less benefit than adding it to a smaller one. More formally, F is submodular if, for any S ⊆ T ⊆ V, and any v ∈ V \ T, it holds that F(v|T) ≤ F(v|S). Supermodularity is defined analogously by reversing the sign of this inequality. In particular, if a function F is submodular, then the function −F is supermodular. If a function m is both submodular and supermodular, then it is called modular, and may be written in the form m(S) = c + ∑_{v∈S} m_v, where c ∈ ℝ, and m_v ∈ ℝ, for all v ∈ V.
Our main focus in this paper are distributions over the powerset of V of the form
p(S) = exp(βF(S))/Z,  (1)
for all S ⊆ V, where F is submodular or supermodular. The scaling parameter β is referred to as the inverse temperature, and distributions of the above form are called log-submodular or log-supermodular, respectively. The constant denominator Z := Z(β) := ∑_{S⊆V} exp(βF(S)) serves the purpose of normalizing the distribution and is called the partition function of p. An alternative and equivalent way of defining distributions of the above form is via binary random vectors X ∈ {0, 1}ⁿ. If we define V(X) := {v ∈ V | X_v = 1}, it is easy to see that the distribution p_X(X) ∝ exp(βF(V(X))) over binary vectors is isomorphic to the distribution over sets of (1). With a slight abuse of notation, we will use F(X) to denote F(V(X)), and use p to refer to both distributions.
Example models The (ferromagnetic) Ising model is an example of a log-supermodular model. In its simplest form, it is defined through an undirected graph (V, E) and a set of pairwise potentials σ_{v,w}(S) := 4(1_{{v∈S}} − 0.5)(1_{{w∈S}} − 0.5). Its distribution has the form p(S) ∝ exp(β ∑_{{v,w}∈E} σ_{v,w}(S)), and is log-supermodular, because F(S) = ∑_{{v,w}∈E} σ_{v,w}(S) is supermodular. (Each σ_{v,w} is supermodular, and supermodular functions are closed under addition.) Determinantal point processes (DPPs) are examples of log-submodular models. A DPP is defined via a positive semidefinite matrix K ∈ ℝ^{n×n}, and has a distribution of the form p(S) ∝ det(K_S), where K_S denotes the square submatrix indexed by S. Since F(S) = ln det(K_S) is a submodular function, p is log-submodular. Another example of log-submodular models are those defined through facility location functions, which have the form F(S) = ∑_{ℓ∈[L]} max_{v∈S} w_{v,ℓ}, where w_{v,ℓ} ≥ 0, and are submodular. If w_{v,ℓ} ∈ {0, 1}, then F represents a set cover function.
Note that, both the facility location model and the Ising model use decomposable functions, that is, functions that can be written as a sum of simpler submodular (resp. supermodular) functions F_ℓ:
F(S) = ∑_{ℓ∈[L]} F_ℓ(S).  (2)
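As a concrete illustration (our own sketch, not code from the paper), the Ising potentials and facility location objectives above can be written in a few NumPy lines, with a set S encoded as a boolean indicator vector over V:

    import numpy as np

    def ising_F(x, edges):
        # sum of pairwise potentials sigma_{v,w}(S) = 4(1{v in S} - 0.5)(1{w in S} - 0.5)
        return sum(4 * (x[v] - 0.5) * (x[w] - 0.5) for v, w in edges)

    def facility_location_F(x, W):
        # F(S) = sum_l max_{v in S} w_{v,l}; W is the n x L nonnegative weight matrix
        return float(np.max(W[x], axis=0).sum()) if x.any() else 0.0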
Marginal inference Our goal is to perform marginal inference for the distributions described above. Concretely, for some fixed A ⊆ B ⊆ V, we would like to compute the probability of sets S that contain all elements of A, but no elements outside of B, that is, p(A ⊆ S ⊆ B). More generally, we are interested in computing conditional probabilities of the form p(A ⊆ S ⊆ B | C ⊆ S ⊆ D). This computation can be reduced to computing unconditional marginals as follows. For any C ⊆ V, define the contraction of F on C, F_C : 2^{V\C} → ℝ, by F_C(S) = F(S ∪ C) − F(C), for all S ⊆ V \ C. Also, for any D ⊆ V, define the restriction of F to D, F^D : 2^D → ℝ, by F^D(S) = F(S), for all S ⊆ D. If F is submodular, then its contractions and restrictions are also submodular, and, thus, (F_C)^D is submodular. Finally, it is easy to see that p(S | C ⊆ S ⊆ D) ∝ exp(β(F_C)^D(S)).
Algorithm 1 Gibbs sampler
Input: Ground set V, distribution p(S) ∝ exp(βF(S))
1: X_0 ← random subset of V
2: for t = 0 to N_iter do
3:   v ~ Unif(V)
4:   ΔF(v|X_t) ← F(X_t ∪ {v}) − F(X_t \ {v})
5:   p_add ← exp(βΔF(v|X_t)) / (1 + exp(βΔF(v|X_t)))
6:   z ~ Unif([0, 1])
7:   if z ≤ p_add then X_{t+1} ← X_t ∪ {v} else X_{t+1} ← X_t \ {v}
8: end for
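A direct Python transcription of Algorithm 1 might look as follows (a sketch under our own conventions: F is any callable set function on boolean indicator vectors, beta is the inverse temperature, rng is a NumPy generator; for large beta*dF a numerically stable logistic would be preferable):

    def gibbs_sampler(F, n, beta, n_iter, rng):
        X = rng.random(n) < 0.5                      # X_0: a random subset of V
        for _ in range(n_iter):
            v = rng.integers(n)
            X[v] = True;  F_in = F(X)                # F(X_t with v added)
            X[v] = False; F_out = F(X)               # F(X_t with v removed)
            dF = F_in - F_out                        # Delta F(v | X_t), line 4
            p_add = np.exp(beta * dF) / (1 + np.exp(beta * dF))
            X[v] = rng.random() <= p_add             # lines 6-7
            yield X.copy()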
In our experiments, we consider computing marginals of the form p(v ∈ S | C ⊆ S ⊆ D), for some v ∈ V, which corresponds to A = {v} and B = V.
3 Sampling and Mixing Times
Performing exact inference in models defined by (1) boils down to computing the partition function
Z. Unfortunately, this is generally a #P-hard problem, which was shown to be the case even for Ising
models by Jerrum and Sinclair [17]. However, they also proposed a sampling-based FPRAS for a
class of ferromagnetic models, which gives us hope that it may be possible to efficiently perform
approximate inference in more general models under suitable conditions.
MCMC sampling [24] approaches are based on performing randomly selected local moves in a state space E to approximately compute quantities of interest. The visited states (X_0, X_1, …) form a Markov chain, which under mild conditions converges to a stationary distribution π. Crucially, the probabilities of transitioning from one state to another are carefully chosen to ensure that the stationary distribution is identical to the distribution of interest. In our case, the state space is the powerset of V (equivalently, the space of all binary vectors of length n), and to approximate the marginal probabilities of p we construct a chain over subsets of V that has stationary distribution p.
The Gibbs sampler In this paper, we focus on one of the simplest and most commonly used chains, namely the Gibbs sampler, also known as the Glauber chain. We denote by P the transition matrix of the chain; each element P(x, y) corresponds to the conditional probability of transitioning from state x to state y, that is, P(x, y) := P[X_{t+1} = y | X_t = x], for any x, y ∈ E, and any t ≥ 0. We also define an adjacency relation x ∼ y on the elements of the state space, which denotes that x and y differ by exactly one element. It follows that each x ∈ E has exactly n neighbors.
The Gibbs sampler is defined by an iterative two-step procedure, as shown in Algorithm 1. First, it selects an element v ∈ V uniformly at random; then, it adds v to or removes v from the current state X_t according to the conditional probability of the resulting state. Importantly, the conditional probabilities that need to be computed do not depend on the partition function Z, thus the chain can be simulated efficiently, even though Z is unknown and hard to compute. Moreover, it is easy to see that ΔF(v|X_t) = 1_{{v∉X_t}} F(v|X_t) + 1_{{v∈X_t}} F(v|X_t \ {v}); thus, the sampler only requires a black box for the marginal gains of F, which are often faster to compute than the values of F itself. Finally, it is easy to show that the stationary distribution of the chain constructed this way is p.
Mixing times Approximating quantities of interest using MCMC methods is based on using time averages to estimate expectations over the desired distribution. In particular, we estimate the expected value of a function f : E → ℝ by E_p[f(X)] ≈ (1/T) ∑_{r=1}^T f(X_{s+r}). For example, to estimate the marginal p(v ∈ S), for some v ∈ V, we would define f(x) = 1_{{x_v=1}}, for all x ∈ E. The choice of burn-in time s and number of samples T in the above expression presents a tradeoff between computational efficiency and approximation accuracy. It turns out that the effect of both s and T is largely dependent on a fundamental quantity of the chain called the mixing time [24].
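Concretely, the marginal estimates can be read off the chain produced by the sampler sketched above (again our own illustration; the values of s and T are arbitrary here):

    def estimate_marginals(F, n, beta, s=1000, T=10000, seed=0):
        rng = np.random.default_rng(seed)
        counts = np.zeros(n)
        for t, X in enumerate(gibbs_sampler(F, n, beta, s + T, rng)):
            if t >= s:                       # discard the burn-in samples
                counts += X
        return counts / T                    # estimates of p(v in S) for each v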
The mixing time of a chain quantifies the number of iterations t required for the distribution of X_t to be close to the stationary distribution π. More formally, it is defined as t_mix(ε) := min{t | d(t) ≤ ε}, where d(t) denotes the worst-case (over the starting state X_0 of the chain) total variation distance between the distribution of X_t and π. Establishing upper bounds on the mixing time of our Gibbs sampler is, therefore, sufficient to guarantee efficient approximate marginal inference (e.g., see [24, Theorem 12.19]).
4 Theoretical Results
In the previous section we mentioned that exact computation of the partition function for the class
of models we consider here is, in general, infeasible. Only for very few exceptions, such as DPPs,
is exact inference possible in polynomial time [23]. Even worse, it has been shown that the partition
function of general Ising models is hard to approximate; in particular, there is no FPRAS for these
models, unless RP = NP [17]. This implies that the mixing time of any Markov chain with such
a stationary distribution will, in general, be exponential in n. It is, therefore, our aim to derive
sufficient conditions that guarantee sub-exponential mixing times for the general class of models.
In some of our results we will use the fact that any submodular function F can be written as
F = c + m + f,  (3)
where c ∈ ℝ is a constant that has no effect on distributions defined by (1); m is a normalized (m(∅) = 0) modular function; and f is a normalized (f(∅) = 0) monotone submodular function, that is, it additionally satisfies the monotonicity property f(v|S) ≥ 0, for all v ∈ V, and all S ⊆ V. A similar decomposition is possible for any supermodular function as well.
4.1 Polynomial-time mixing
Our guarantee for mixing times that are polynomial in n depends crucially on the following quantity, which is defined for any set function F : 2^V → ℝ:
ζ_F := max_{A,B⊆V} |F(A) + F(B) − F(A ∪ B) − F(A ∩ B)|.
Intuitively, ζ_F quantifies a notion of distance to modularity. To see this, note that a function F is modular if and only if F(A) + F(B) = F(A ∪ B) + F(A ∩ B), for all A, B ⊆ V. For modular functions, therefore, we have ζ_F = 0. Furthermore, a function F is submodular if and only if F(A) + F(B) ≥ F(A ∪ B) + F(A ∩ B), for all A, B ⊆ V. Similarly, F is supermodular if the above holds with the sign reversed. It follows that for submodular and supermodular functions, ζ_F represents the worst-case amount by which F violates the modular equality. It is also important to note that, for submodular and supermodular functions, ζ_F depends only on the monotone part of F; if we decompose F according to (3), then it is easy to see that ζ_F = ζ_f. A trivial upper bound on ζ_F, therefore, is ζ_F ≤ f(V). Another quantity that has been used in the past to quantify the deviation of a submodular function from modularity is the curvature [4], defined as κ_F := 1 − min_{v∈V} (F(v|V \ {v})/F(v)). Although of similar intuitive meaning, the multiplicative nature of its definition makes it significantly different from ζ_F, which is defined additively.
As an example of a function class with ζ_F that does not depend on n, assume a ground set V = ∪_{ℓ=1}^L V_ℓ, and consider functions F(S) = ∑_{ℓ=1}^L ψ(|S ∩ V_ℓ|), where ψ : ℝ → ℝ is a bounded concave function, for example, ψ(x) = min{ψ_max, x}. Functions of this form are submodular, and have been used in applications such as document summarization to encourage diversity [25]. It is easy to see that, for such functions, ζ_F ≤ Lψ_max, that is, ζ_F is independent of n.
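For small ground sets, ζ_F can be checked directly from its definition by enumerating all pairs of subsets (an exponential-time sanity check of our own, for any single-argument set function F on boolean indicator vectors, such as the examples of Section 2):

    from itertools import product

    def zeta_F(F, n):
        subsets = [np.array(bits, dtype=bool) for bits in product([0, 1], repeat=n)]
        return max(abs(F(A) + F(B) - F(A | B) - F(A & B))   # | is union, & is intersection
                   for A in subsets for B in subsets)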
The following theorem establishes a bound on the mixing time of the Gibbs sampler run on models of the form (1). The bound is exponential in ζ_F, but polynomial in n.
Theorem 1. For any function F : 2^V → ℝ, the mixing time of the Gibbs sampler is bounded by
t_mix(ε) ≤ 2n² exp(2βζ_F) log(1/(ε p_min)),
where p_min := min_{S∈E} p(S). If F is submodular or supermodular, then the bound is improved to
t_mix(ε) ≤ 2n² exp(βζ_f) log(1/(ε p_min)).
Note that, since the factor of two that constitutes the difference between the two statements of the theorem lies in the exponent, it can have a significant impact on the above bounds. The dependence on p_min is related to the (worst-case) starting state of the chain, and can be eliminated if we have a way to guarantee a high-probability starting state. If F is submodular or supermodular, this is usually straightforward to accomplish by using one of the standard constant-factor optimization algorithms [10, 26] as a preprocessing step. More generally, if F is bounded by 0 ≤ F(S) ≤ F_max, for all S ⊆ V, then log(1/p_min) = O(n + βF_max).
Canonical paths Our proof of Theorem 1 is based on the method of canonical paths [5,15,16,28].
The high-level idea of this method is to view the state space as a graph, and try to construct a
path between each pair of states that carries a certain amount of flow specified by the stationary
distribution under consideration. Depending on the choice of these paths and the resulting load on
the edges of the graph, we can derive bounds on the mixing time of the Markov chain.
More concretely, let us assume that for some set function $F$ and corresponding distribution $p$ as in (1), we construct the Gibbs chain on state space $E = 2^V$ with transition matrix $P$. We can view the state space as a directed graph that has vertex set $E$ and, for any $S, S' \in E$, contains edge $(S, S')$ if and only if $S \sim S'$, that is, if and only if $S$ and $S'$ differ by exactly one element. Now, assume that, for any pair of states $A, B \in E$, we define what is called a canonical path $\gamma_{AB} := (A = S_0, S_1, \ldots, S_\ell = B)$, such that all $(S_i, S_{i+1})$ are edges in the above graph. We denote the length of path $\gamma_{AB}$ by $|\gamma_{AB}|$, and define $Q(S, S') := p(S) P(S, S')$. We also denote the set of all pairs of states whose canonical path goes through $(S, S')$ by $C_{SS'} := \{(A, B) \in E \times E \mid (S, S') \in \gamma_{AB}\}$.
The following quantity, referred to as the congestion of an edge, uses a collection of canonical paths to quantify to what extent that edge is overloaded:
$$\rho(S, S') := \frac{1}{Q(S, S')} \sum_{(A, B) \in C_{SS'}} p(A)\, p(B)\, |\gamma_{AB}|. \qquad (4)$$
The denominator $Q(S, S')$ quantifies the capacity of edge $(S, S')$, while the sum represents the total flow through that edge according to the choice of canonical paths. The congestion of the whole graph is then defined as $\rho := \max_{S \sim S'} \rho(S, S')$. Low congestion implies that there are no bottlenecks in the state space, and the chain can move around fast, which also suggests rapid mixing. The following theorem makes this concrete.
Theorem 2 ([15, 28]). For any collection of canonical paths with congestion $\rho$, the mixing time of the chain is bounded by
$$t_{\mathrm{mix}}(\epsilon) \le \rho\, \log\!\Big(\frac{1}{\epsilon\, p_{\min}}\Big).$$
Proof outline of Theorem 1 To apply Theorem 2 to our class of distributions, we need to construct a set of canonical paths in the corresponding state space $2^V$, and upper bound the resulting congestion. First, note that, to transition from state $A \in E$ to state $B \in E$, in our case, it is enough to remove the elements of $A \setminus B$ and add the elements of $B \setminus A$. Each removal and addition corresponds to an edge in the state space graph, and the order of these operations identifies a canonical path in this graph that connects $A$ to $B$. For our analysis, we assume a fixed order on $V$ (e.g., the natural order of the elements themselves), and perform the operations according to this order.
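The construction can be made explicit. The sketch below follows one natural reading of the fixed-order rule, processing removals and additions as the elements are encountered in the global order; this is an illustrative choice, since the analysis only requires that the order be fixed.

```python
def canonical_path(A, B, order):
    """Canonical path from subset A to subset B in the state-space graph.

    Elements of A \\ B are removed and elements of B \\ A are added, each
    handled when reached in the fixed global `order` on V, so consecutive
    states differ by exactly one element.
    """
    S = set(A)
    path = [frozenset(S)]
    for v in order:
        if v in S and v not in B:
            S.discard(v)          # remove an element of A \ B
            path.append(frozenset(S))
        elif v not in S and v in B:
            S.add(v)              # add an element of B \ A
            path.append(frozenset(S))
    return path

# The path has at most n edges, as used in the congestion bound.
print(canonical_path({0, 2}, {1, 2, 3}, order=range(4)))
```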
Having defined the set of canonical paths, we proceed to bounding the congestion $\rho(S, S')$ for any edge $(S, S')$. The main difficulty in bounding $\rho(S, S')$ is due to the sum in (4) over all pairs in $C_{SS'}$. To simplify this sum we construct for each edge $(S, S')$ an injective map $\eta_{SS'} : C_{SS'} \to E$; this is a combinatorial encoding technique that has been previously used in proofs similar to ours [15]. We then prove the following key lemma about these maps.

Lemma 1. For any $S \sim S'$, and any $A, B \in E$, it holds that
$$p(A)\, p(B) \le 2n \exp(2\beta\Delta_F)\, Q(S, S')\, p(\eta_{SS'}(A, B)).$$

Since $\eta_{SS'}$ is injective, it follows that $\sum_{(A, B) \in C_{SS'}} p(\eta_{SS'}(A, B)) \le 1$. Furthermore, it is clear that each canonical path $\gamma_{AB}$ has length $|\gamma_{AB}| \le n$, since we need to add and/or remove at most $n$ elements to get from state $A$ to state $B$. Combining these two facts with the above lemma, we get $\rho(S, S') \le 2n^2 \exp(2\beta\Delta_F)$.
If $F$ is submodular or supermodular, we show that the dependence on $\Delta_F$ in Lemma 1 is improved to $\exp(\beta\Delta_f)$. More details can be found in the longer version of the paper.
4.2 Fast mixing
We now proceed to show that, under some stronger conditions, we are able to establish even faster, $O(n \log n)$, mixing. For any function $F$, we denote $\Delta_F(v|S) := F(S \cup \{v\}) - F(S \setminus \{v\})$, and define the following quantity,
$$\gamma_{F,\beta} := \max_{S \subseteq V,\, r \in V}\; \sum_{v \in V} \tanh\!\Big( \frac{\beta}{2}\, \big| \Delta_F(v|S) - \Delta_F(v|S \cup \{r\}) \big| \Big),$$
which quantifies the (maximum) total influence of an element $r \in V$ on the values of $F$. For example, if the inclusion of $r$ makes no difference with respect to other elements of the ground set, we will have $\gamma_{F,\beta} = 0$. The following theorem establishes conditions for fast mixing of the Gibbs sampler when run on models of the form (1).
Theorem 3. For any set function $F : 2^V \to \mathbb{R}$, if $\gamma_{F,\beta} < 1$, then the mixing time of the Gibbs sampler is bounded by
$$t_{\mathrm{mix}}(\epsilon) \le \frac{1}{1 - \gamma_{F,\beta}}\, n \Big( \log n + \log\frac{1}{\epsilon} \Big).$$
If $F$ is additionally submodular or supermodular, and is decomposed according to (3), then
$$t_{\mathrm{mix}}(\epsilon) \le \frac{1}{1 - \gamma_{f,\beta}}\, n \Big( \log n + \log\frac{1}{\epsilon} \Big).$$
Note that, in the second part of the theorem, $\gamma_{f,\beta}$ depends only on the monotone part of $F$. We have seen in Section 2 that some commonly used models are based on decomposable functions that can be written in the form (2). We prove the following corollary that provides an easy-to-check condition for fast mixing of the Gibbs sampler when $F$ is a decomposable submodular function.
Corollary 1. For any submodular function $F$ that can be written in the form of (2), with $f$ being its monotone (also decomposable) part according to (3), if we define
$$\zeta_f := \max_{v \in V} \sum_{\ell \in [L]} \sqrt{f_\ell(v)} \qquad \text{and} \qquad \hat\zeta_f := \max_{\ell \in [L]} \sum_{v \in V} \sqrt{f_\ell(v)},$$
then it holds that
$$\gamma_{f,\beta} \le \frac{\beta}{2}\, \zeta_f\, \hat\zeta_f.$$
For example, applying this to the facility location model defined in Section 2, we get $\zeta_f = \max_{v} \sum_{\ell=1}^{L} \sqrt{w_{v,\ell}}$ and $\hat\zeta_f = \max_{\ell} \sum_{v \in V} \sqrt{w_{v,\ell}}$, and obtain fast mixing if $\zeta_f \hat\zeta_f \le 2/\beta$. As a special case, if we consider the class of set cover functions ($w_{v,\ell} \in \{0, 1\}$), such that each $v \in V$ covers at most $k$ sets, and each set $\ell \in [L]$ is covered by at most $k$ elements, then $\zeta_f, \hat\zeta_f \le k$, and we obtain fast mixing if $k^2 \le 2/\beta$. Note that the corollary can be trivially applied to any submodular function by taking $L = 1$, but may, in general, result in a loose bound if used that way.
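As a hypothetical numeric illustration of Corollary 1 (using the placeholder symbols $\zeta_f$, $\hat\zeta_f$ from the statement above and randomly drawn facility-location weights), the fast-mixing condition can be checked as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, beta = 50, 8, 0.5
W = rng.uniform(0.0, 1.0, size=(n, L))   # hypothetical weights w_{v,l} >= 0

# Corollary 1 quantities for a decomposable monotone part f.
zeta = np.sqrt(W).sum(axis=1).max()      # max_v sum_l sqrt(w_{v,l})
zeta_hat = np.sqrt(W).sum(axis=0).max()  # max_l sum_v sqrt(w_{v,l})

gamma_bound = 0.5 * beta * zeta * zeta_hat
print(f"bound on gamma_{{f,beta}}: {gamma_bound:.2f}",
      "-> fast mixing certified" if gamma_bound < 1 else "-> condition not met")
```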
Coupling Our proof of Theorem 3 is based on the coupling technique [1]; more specifically, we use the path coupling method [2, 15, 24]. Given a Markov chain $(Z_t)$ on state space $E$ with transition matrix $P$, a coupling for $(Z_t)$ is a new Markov chain $(X_t, Y_t)$ on state space $E \times E$, such that both $(X_t)$ and $(Y_t)$ are by themselves Markov chains with transition matrix $P$. The idea is to construct the coupling in such a way that, even when the starting points $X_0$ and $Y_0$ are different, the chains $(X_t)$ and $(Y_t)$ tend to coalesce. Then, it can be shown that the coupling time $t_{\mathrm{couple}} := \min\{t \ge 0 \mid X_t = Y_t\}$ is closely related to the mixing time of the original chain $(Z_t)$ [24].
The main difficulty in applying the coupling approach lies in the construction of the coupling itself, for which one needs to consider every possible pair of states $(X_t, Y_t)$. The path coupling technique makes this construction easier by utilizing the same state-space graph that we used to define canonical paths in Section 4.1. The core idea is to first define a coupling only over adjacent states, and then extend it to any pair of states by using a metric on the graph. More concretely, let us denote by $d : E \times E \to \mathbb{R}$ the path metric on state space $E$; that is, for any $x, y \in E$, $d(x, y)$ is the minimum length of any path from $x$ to $y$ in the state space graph. The following theorem establishes fast mixing using this metric, as well as the diameter of the state space, $\mathrm{diam}(E) := \max_{x, y \in E} d(x, y)$.
Theorem 4 ([2, 24]). For any Markov chain $(Z_t)$, if $(X_t, Y_t)$ is a coupling such that, for some $\alpha \ge 0$ and any $x, y \in E$ with $x \sim y$, it holds that
$$\mathbb{E}[d(X_{t+1}, Y_{t+1}) \mid X_t = x, Y_t = y] \le e^{-\alpha}\, d(x, y),$$
then the mixing time of the original chain is bounded by
$$t_{\mathrm{mix}}(\epsilon) \le \frac{1}{\alpha} \Big( \log(\mathrm{diam}(E)) + \log\frac{1}{\epsilon} \Big).$$
Proof outline of Theorem 3 In our case, the path metric $d$ is the Hamming distance between the binary vectors representing the states (equivalently, the number of elements by which two sets differ). We need to construct a suitable coupling $(X_t, Y_t)$ for any pair of states $x \sim y$. Consider the two corresponding sets $S, R \subseteq V$ that differ by exactly one element, and assume that $R = S \cup \{r\}$, for some $r \in V$. (The case $S = R \cup \{s\}$ for some $s \in V$ is completely analogous.) Remember that the Gibbs sampler first chooses an element $v \in V$ uniformly at random, and then adds or removes it according to the conditional probabilities. Our goal is to make the same updates happen to both $S$ and $R$ as frequently as possible. As a first step, we couple the candidate element for update $v \in V$ to always be the same in both chains. Then, we have to distinguish between the following cases.

If $v = r$, then the conditionals for both chains are identical, therefore we can couple both chains to add $r$ with probability $p_{\mathrm{add}} := p(S \cup \{r\}) / (p(S) + p(S \cup \{r\}))$, which will result in new sets $S' = R' = S \cup \{r\}$, or remove $r$ with probability $1 - p_{\mathrm{add}}$, which will result in new sets $S' = R' = S$. Either way, we will have $d(S', R') = 0$.

If $v \ne r$, we cannot always couple the updates of the chains, because the conditional probabilities of the updates are different. In fact, we are forced to have different updates (one chain adding $v$, the other chain removing $v$) with probability equal to the difference of the corresponding conditionals, which we denote here by $p_{\mathrm{dif}}(v)$. If this is the case, we will have $d(S', R') = 2$; otherwise the chains will make the same update and will still differ only by element $r$, that is, $d(S', R') = 1$.
Putting together all the above, we get the following expected distance after one step:
$$\mathbb{E}[d(S', R')] = 1 - \frac{1}{n} + \frac{1}{n} \sum_{v \ne r} p_{\mathrm{dif}}(v) \;\le\; 1 - \frac{1}{n}\,(1 - \gamma_{F,\beta}) \;\le\; \exp\!\Big( -\frac{1 - \gamma_{F,\beta}}{n} \Big).$$
Our result follows from applying Theorem 4 with $\alpha = (1 - \gamma_{F,\beta})/n$, noting that $\mathrm{diam}(E) = n$.
5 Experiments
We compare the Gibbs sampler against the variational approach proposed by Djolonga and Krause [6] for performing inference in models of the form (1), and use the same three models as in their experiments. We briefly review the experimental setup here and refer to their paper for more details. The first is a (log-submodular) facility location model with an added modular term that penalizes the number of selected elements, that is, $p(S) \propto \exp(F(S) - 2|S|)$, where $F$ is a submodular facility location function. The model is constructed by randomly subsampling real data from a problem of sensor placement in a water distribution network [22]. In the experiments, we iteratively condition on random observations for each variable in the ground set. The second is a (log-supermodular) pairwise Markov random field (MRF; a generalized Ising model with varying weights), constructed by first randomly sampling points from a 2-D two-cluster Gaussian mixture model, and then introducing a pairwise potential for each pair of points with weight decreasing exponentially in the distance of the pair. In the experiments, we iteratively condition on pairs of observations, one from each cluster. The third is a (log-supermodular) higher-order MRF, which is constructed by first generating a random Watts-Strogatz graph, and then creating one higher-order potential per node, which contains that node and all of its neighbors in the graph. The strength of the potentials is controlled by a parameter $\kappa$, which is closely related to the curvature of the functions that define them. In the experiments, we vary this parameter from 0 (modular model) to 1 ("strongly" supermodular model). For all three models, we constrain the size of the ground set to $n = 20$, so that we are able to compute, and compare against, the exact marginals. Furthermore, we run multiple repetitions for each model to account for the randomness of the model instance, and the random initialization of the Gibbs sampler.
Figure 1: Absolute error of the marginals computed by the Gibbs sampler (100, 500, and 2000 iterations) compared to the variational upper and lower bounds of [6], on (a) the facility location model (versus the number of conditioned elements), (b) the pairwise MRF (versus the number of conditioned pairs), and (c) the higher-order MRF (versus $\kappa$). A modest 500 Gibbs iterations outperform the variational method for the most part.
The marginals we compute are of the form $p(v \in S \mid C \subseteq S \subseteq D)$, for all $v \in V$. We run the Gibbs sampler for 100, 500, and 2000 iterations on each problem instance. In compliance with recommended MCMC practice [11], we discard the first half of the obtained samples as burn-in, and only use the second half for estimating the marginals.
Figure 1 compares the average absolute error of the approximate marginals with respect to the exact ones. The averaging is performed over $v \in V$, and over the different repetitions of each experiment; errorbars depict two standard errors. The two variational approximations are obtained from factorized distributions associated with modular lower and upper bounds respectively [6]. We notice a similar trend on all three models. For the regimes that correspond to less "peaked" posterior distributions (small number of conditioned variables, small $\kappa$), even 100 Gibbs iterations outperform both variational approximations. The latter gain an advantage when the posterior is concentrated around only a few states, which happens after having conditioned on almost all variables in the first two models, or for $\kappa$ close to 1 in the third model.
6 Further Related Work
In work contemporary to ours, Rebeschini and Karbasi [27] analyzed the mixing times of log-submodular models. Using a method based on matrix norms, which was previously introduced by Dyer et al. [7] and is closely related to path coupling, they arrive at a similar, though not directly comparable, condition to the one we presented in Theorem 3.
Iyer and Bilmes [13] recently considered a different class of probabilistic models, called submodular point processes (SPPs), which are also defined through submodular functions, and have the form $p(S) \propto F(S)$. They showed that inference in SPPs is, in general, also a hard problem, and provided approximations and closed-form solutions for some subclasses.
The canonical path method for bounding mixing times has been previously used in applications, such
as approximating the partition function of ferromagnetic Ising models [17], approximating matrix
permanents [16, 18], and counting matchings in graphs [15]. The most prominent application of
coupling-based methods is counting k-colorings in low-degree graphs [3,14,15]. Other applications
include counting independent sets in graphs [8], and approximating the partition function of various
subclasses of Ising models at high temperatures [24].
7 Conclusion
We considered the problem of performing marginal inference using MCMC sampling techniques in
probabilistic models defined through submodular functions. In particular, we presented for the first
time sufficient conditions to obtain upper bounds on the mixing time of the Gibbs sampler in general
log-submodular and log-supermodular models. Furthermore, we demonstrated that, in practice, the
Gibbs sampler compares favorably to previously proposed variational approximations, at least in
regimes of high uncertainty. We believe that this is an important step towards a unified framework
for further analysis and practical application of this rich class of probabilistic submodular models.
Acknowledgments This work was partially supported by ERC Starting Grant 307036.
References
[1] David Aldous. Random walks on finite groups and rapidly mixing Markov chains. In Séminaire de Probabilités XVII. Springer, 1983.
[2] Russ Bubley and Martin Dyer. Path coupling: A technique for proving rapid mixing in Markov chains. In Symposium on Foundations of Computer Science, 1997.
[3] Russ Bubley, Martin Dyer, and Catherine Greenhill. Beating the 2Δ bound for approximately counting colourings: A computer-assisted proof of rapid mixing. In Symposium on Discrete Algorithms, 1998.
[4] Michele Conforti and Gérard Cornuéjols. Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 1984.
[5] Persi Diaconis and Daniel Stroock. Geometric bounds for eigenvalues of Markov chains. The Annals of Applied Probability, 1991.
[6] Josip Djolonga and Andreas Krause. From MAP to marginals: Variational inference in Bayesian submodular models. In Neural Information Processing Systems, 2014.
[7] Martin Dyer, Leslie Ann Goldberg, and Mark Jerrum. Matrix norms and rapid mixing for spin systems. Annals of Applied Probability, 2009.
[8] Martin Dyer and Catherine Greenhill. On Markov chains for independent sets. Journal of Algorithms, 2000.
[9] Uriel Feige, Vahab S. Mirrokni, and Jan Vondrák. Maximizing non-monotone submodular functions. In Symposium on Foundations of Computer Science, 2007.
[10] Satoru Fujishige. Submodular Functions and Optimization. Elsevier Science, 2005.
[11] Andrew Gelman and Kenneth Shirley. Inference from simulations and monitoring convergence. In Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[12] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 2011.
[13] Rishabh Iyer and Jeff Bilmes. Submodular point processes with applications in machine learning. In International Conference on Artificial Intelligence and Statistics, 2015.
[14] Mark Jerrum. A very simple algorithm for estimating the number of k-colorings of a low-degree graph. Random Structures and Algorithms, 1995.
[15] Mark Jerrum. Counting, Sampling and Integrating: Algorithms and Complexity. Birkhäuser, 2003.
[16] Mark Jerrum and Alistair Sinclair. Approximating the permanent. SIAM Journal on Computing, 1989.
[17] Mark Jerrum and Alistair Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 1993.
[18] Mark Jerrum, Alistair Sinclair, and Eric Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with non-negative entries. Journal of the ACM, 2004.
[19] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Conference on Knowledge Discovery and Data Mining, 2003.
[20] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[21] Andreas Krause, Carlos Guestrin, Anupam Gupta, and Jon Kleinberg. Near-optimal sensor placements: Maximizing information while minimizing communication cost. In Information Processing in Sensor Networks, 2006.
[22] Andreas Krause, Jure Leskovec, Carlos Guestrin, Jeanne VanBriesen, and Christos Faloutsos. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management, 2008.
[23] Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 2012.
[24] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008.
[25] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In Human Language Technologies, 2011.
[26] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 1978.
[27] Patrick Rebeschini and Amin Karbasi. Fast mixing for discrete point processes. In Conference on Learning Theory, 2015.
[28] Alistair Sinclair. Improved bounds for mixing rates of Markov chains and multicommodity flow. Combinatorics, Probability and Computing, 1992.
Distributionally Robust Logistic Regression
Soroosh Shafieezadeh-Abadeh
Peyman Mohajerin Esfahani
Daniel Kuhn
École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
{soroosh.shafiee,peyman.mohajerin,daniel.kuhn}@epfl.ch
Abstract
This paper proposes a distributionally robust approach to logistic regression. We
use the Wasserstein distance to construct a ball in the space of probability distributions centered at the uniform distribution on the training samples. If the radius of
this ball is chosen judiciously, we can guarantee that it contains the unknown datagenerating distribution with high confidence. We then formulate a distributionally robust logistic regression model that minimizes a worst-case expected logloss
function, where the worst case is taken over all distributions in the Wasserstein
ball. We prove that this optimization problem admits a tractable reformulation
and encapsulates the classical as well as the popular regularized logistic regression
problems as special cases. We further propose a distributionally robust approach
based on Wasserstein balls to compute upper and lower confidence bounds on the
misclassification probability of the resulting classifier. These bounds are given by
the optimal values of two highly tractable linear programs. We validate our theoretical out-of-sample guarantees through simulated and empirical experiments.
1 Introduction
Logistic regression is one of the most frequently used classification methods [1]. Its objective is to
establish a probabilistic relationship between a continuous feature vector and a binary explanatory
variable. However, in spite of its overwhelming success in machine learning, data analytics and
medicine etc., logistic regression models can display a poor out-of-sample performance if training
data is sparse. In this case modelers often resort to ad hoc regularization techniques in order to
combat overfitting effects. This paper aims to develop new regularization techniques for logistic
regression?and to provide intuitive probabilistic interpretations for existing ones?by using tools
from modern distributionally robust optimization.
Logistic Regression: Let $x \in \mathbb{R}^n$ denote a feature vector and $y \in \{-1, +1\}$ the associated binary label to be predicted. In logistic regression, the conditional distribution of $y$ given $x$ is modeled as
$$\mathrm{Prob}(y \mid x) = \big[ 1 + \exp(-y \langle \beta, x \rangle) \big]^{-1}, \qquad (1)$$
where the weight vector $\beta \in \mathbb{R}^n$ constitutes an unknown regression parameter. Suppose that $N$ training samples $\{(\hat{x}_i, \hat{y}_i)\}_{i=1}^{N}$ have been observed. Then, the maximum likelihood estimator of
classical logistic regression is found by solving the geometric program
$$\min_{\beta}\; \frac{1}{N} \sum_{i=1}^{N} l_\beta(\hat{x}_i, \hat{y}_i), \qquad (2)$$
whose objective function is given by the sample average of the logloss function $l_\beta(x, y) = \log(1 + \exp(-y \langle \beta, x \rangle))$. It has been observed, however, that the resulting maximum likelihood estimator may display a poor out-of-sample performance. Indeed, it is well documented that minimizing the average logloss function leads to overfitting and weak classification performance [2, 3]. In order
to overcome this deficiency, it has been proposed to modify the objective function of problem (2) [4, 5, 6]. An alternative approach is to add a regularization term to the logloss function in order to mitigate overfitting. These regularization techniques lead to a modified optimization problem
$$\min_{\beta}\; \frac{1}{N} \sum_{i=1}^{N} l_\beta(\hat{x}_i, \hat{y}_i) + \lambda R(\beta), \qquad (3)$$
where $R(\beta)$ and $\lambda$ denote the regularization function and the associated coefficient, respectively. A popular choice for the regularization term is $R(\beta) = \|\beta\|$, where $\|\cdot\|$ denotes a generic norm such as the $\ell_1$- or the $\ell_2$-norm. The use of $\ell_1$-regularization tends to induce sparsity in $\beta$, which in turn helps to combat overfitting effects [7]. Moreover, $\ell_1$-regularized logistic regression serves as an effective means for feature selection. It is further shown in [8] that $\ell_1$-regularization outperforms $\ell_2$-regularization when the number of training samples is smaller than the number of features. On the downside, $\ell_1$-regularization leads to non-smooth optimization problems, which are more challenging. Algorithms for large scale regularized logistic regression are discussed in [9, 10, 11, 12].
Distributionally Robust Optimization: Regression and classification problems are typically
modeled as optimization problems under uncertainty. To date, optimization under uncertainty has
been addressed by several complementary modeling paradigms that differ mainly in the representation of uncertainty. For instance, stochastic programming assumes that the uncertainty is governed
by a known probability distribution and aims to minimize a probability functional such as the expected cost or a quantile of the cost distribution [13, 14]. In contrast, robust optimization ignores
all distributional information and aims to minimize the worst-case cost under all possible uncertainty realizations [15, 16, 17]. While stochastic programs may rely on distributional information
that is not available or hard to acquire in practice, robust optimization models may adopt an overly
pessimistic view of the uncertainty and thereby promote over-conservative decisions.
The emerging field of distributionally robust optimization aims to bridge the gap between the conservatism of robust optimization and the specificity of stochastic programming: it seeks to minimize
a worst-case probability functional (e.g., the worst-case expectation), where the worst case is taken
with respect to an ambiguity set, that is, a family of distributions consistent with the given prior
information on the uncertainty. The vast majority of the existing literature focuses on ambiguity sets
characterized through moment and support information, see e.g. [18, 19, 20]. However, ambiguity
sets can also be constructed via distance measures in the space of probability distributions such as
the Prohorov metric [21] or the Kullback-Leibler divergence [22]. Due to its attractive measure
concentration properties, we use here the Wasserstein metric to construct ambiguity sets.
Contribution: In this paper we propose a distributionally robust perspective on logistic regression. Our research is motivated by the well-known observation that regularization techniques can
improve the out-of-sample performance of many classifiers. In the context of support vector machines and Lasso, there have been several recent attempts to give ad hoc regularization techniques a
robustness interpretation [23, 24]. However, to the best of our knowledge, no such connection has
been established for logistic regression. In this paper we aim to close this gap by adopting a new distributionally robust optimization paradigm based on Wasserstein ambiguity sets [25]. Starting from
a data-driven distributionally robust statistical learning setup, we will derive a family of regularized
logistic regression models that admit an intuitive probabilistic interpretation and encapsulate the
classical regularized logistic regression (3) as a special case. Moreover, by invoking recent measure
concentration results, our proposed approach provides a probabilistic guarantee for the emerging
regularized classifiers, which seems to be the first result of this type. All proofs are relegated to the
technical appendix. We summarize our main contributions as follows:
• Distributionally robust logistic regression model and tractable reformulation: We propose a data-driven distributionally robust logistic regression model based on an ambiguity set induced by the Wasserstein distance. We prove that the resulting semi-infinite optimization problem admits an equivalent reformulation as a tractable convex program.
• Risk estimation: Using similar distributionally robust optimization techniques based on the Wasserstein ambiguity set, we develop two highly tractable linear programs whose optimal values provide confidence bounds on the misclassification probability or risk of the emerging classifiers.
• Out-of-sample performance guarantees: Adopting a distributionally robust framework allows us to invoke results from the measure concentration literature to derive finite-sample probabilistic guarantees. Specifically, we establish out-of-sample performance guarantees for the classifiers obtained from the proposed distributionally robust optimization model.
• Probabilistic interpretation of existing regularization techniques: We show that the standard regularized logistic regression is a special case of our framework. In particular, we show that the regularization coefficient $\lambda$ in (3) can be interpreted as the size of the ambiguity set underlying our distributionally robust optimization model.
2 A distributionally robust perspective on statistical learning
In the standard statistical learning setting all training and test samples are drawn independently from some distribution $\mathbb{P}$ supported on $\Xi = \mathbb{R}^n \times \{-1, +1\}$. If the distribution $\mathbb{P}$ was known, the best weight parameter $\beta$ could be found by solving the stochastic optimization problem
$$\inf_{\beta}\; \mathbb{E}^{\mathbb{P}}[l_\beta(x, y)] = \int_{\mathbb{R}^n \times \{-1, +1\}} l_\beta(x, y)\, \mathbb{P}(\mathrm{d}(x, y)). \qquad (4)$$
In practice, however, $\mathbb{P}$ is only indirectly observable through $N$ independent training samples. Thus, the distribution $\mathbb{P}$ is itself uncertain, which motivates us to address problem (4) from a distributionally robust perspective. This means that we use the training samples to construct an ambiguity set $\mathcal{P}$, that is, a family of distributions that contains the unknown distribution $\mathbb{P}$ with high confidence. Then we solve the distributionally robust optimization problem
$$\inf_{\beta}\; \sup_{\mathbb{Q} \in \mathcal{P}}\; \mathbb{E}^{\mathbb{Q}}[l_\beta(x, y)], \qquad (5)$$
which minimizes the worst-case expected logloss function. The construction of the ambiguity set $\mathcal{P}$ should be guided by the following principles. (i) Tractability: It must be possible to solve the distributionally robust optimization problem (5) efficiently. (ii) Reliability: The optimizer of (5) should be near-optimal in (4), thus facilitating attractive out-of-sample guarantees. (iii) Asymptotic consistency: For large training data sets, the solution of (5) should converge to the one of (4). In this paper we propose to use the Wasserstein metric to construct $\mathcal{P}$ as a ball in the space of probability distributions that satisfies (i)-(iii).
Definition 1 (Wasserstein Distance). Let $M(\Xi^2)$ denote the set of probability distributions on $\Xi \times \Xi$. The Wasserstein distance between two distributions $\mathbb{P}$ and $\mathbb{Q}$ supported on $\Xi$ is defined as
$$W(\mathbb{Q}, \mathbb{P}) := \inf_{\Pi \in M(\Xi^2)} \left\{ \int_{\Xi^2} d(\xi, \xi')\, \Pi(\mathrm{d}\xi, \mathrm{d}\xi') \;:\; \Pi(\mathrm{d}\xi, \Xi) = \mathbb{Q}(\mathrm{d}\xi),\; \Pi(\Xi, \mathrm{d}\xi') = \mathbb{P}(\mathrm{d}\xi') \right\},$$
where $\xi = (x, y)$ and $d(\xi, \xi')$ is a metric on $\Xi$.

The Wasserstein distance represents the minimum cost of moving the distribution $\mathbb{P}$ to the distribution $\mathbb{Q}$, where the cost of moving a unit mass from $\xi$ to $\xi'$ amounts to $d(\xi, \xi')$.
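For discrete distributions, Definition 1 reduces to a finite transportation linear program. The following sketch (with illustrative helper names) computes $W(\mathbb{Q}, \mathbb{P})$ via scipy; it is meant to convey the definition, not to be an efficient implementation.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein(q, p, D):
    """Wasserstein distance between discrete distributions q and p.

    q, p: probability vectors over m and n support points.
    D: m-by-n matrix of ground-metric costs d(xi_i, xi_j').
    Solves min_Pi <D, Pi> s.t. Pi 1 = q, Pi^T 1 = p, Pi >= 0.
    """
    m, n = D.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):          # row sums of Pi equal marginal q
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):          # column sums of Pi equal marginal p
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([q, p])
    res = linprog(D.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

q = np.array([0.5, 0.5]); p = np.array([0.25, 0.75])
D = np.array([[0.0, 1.0], [1.0, 0.0]])
print(wasserstein(q, p, D))  # 0.25: move 0.25 mass across distance 1
```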
In the remainder, we denote by $B_\epsilon(\mathbb{P}) := \{\mathbb{Q} : W(\mathbb{Q}, \mathbb{P}) \le \epsilon\}$ the ball of radius $\epsilon$ centered at $\mathbb{P}$ with respect to the Wasserstein distance. In this paper we propose to use Wasserstein balls as ambiguity sets. Given the training data points $\{(\hat{x}_i, \hat{y}_i)\}_{i=1}^{N}$, a natural candidate for the center of the Wasserstein ball is the empirical distribution $\hat{\mathbb{P}}_N = \frac{1}{N} \sum_{i=1}^{N} \delta_{(\hat{x}_i, \hat{y}_i)}$, where $\delta_{(\hat{x}_i, \hat{y}_i)}$ denotes the Dirac point measure at $(\hat{x}_i, \hat{y}_i)$. Thus, we henceforth examine the distributionally robust optimization problem
$$\inf_{\beta}\; \sup_{\mathbb{Q} \in B_\epsilon(\hat{\mathbb{P}}_N)}\; \mathbb{E}^{\mathbb{Q}}[l_\beta(x, y)] \qquad (6)$$
equipped with a Wasserstein ambiguity set. Note that (6) reduces to the average logloss minimization problem (2) associated with classical logistic regression if we set $\epsilon = 0$.
3 Tractable reformulation and probabilistic guarantees

In this section we demonstrate that (6) can be reformulated as a tractable convex program and establish probabilistic guarantees for its optimal solutions.

3.1 Tractable reformulation
We first define a metric on the feature-label space, which will be used in the remainder.

Definition 2 (Metric on the Feature-Label Space). The distance between two data points $(x, y), (x', y') \in \Xi$ is defined as $d\big((x, y), (x', y')\big) = \|x - x'\| + \kappa\, |y - y'| / 2$, where $\|\cdot\|$ is any norm on $\mathbb{R}^n$, and $\kappa$ is a positive weight.
The parameter $\kappa$ in Definition 2 represents the relative emphasis between feature mismatch and label uncertainty. The following theorem presents a tractable reformulation of the distributionally robust optimization problem (6) and thus constitutes the first main result of this paper.

Theorem 1 (Tractable Reformulation). The optimization problem (6) is equivalent to
$$\hat{J} := \inf_{\beta}\; \sup_{\mathbb{Q} \in B_\epsilon(\hat{\mathbb{P}}_N)}\; \mathbb{E}^{\mathbb{Q}}[l_\beta(x, y)] = \left\{
\begin{array}{cl}
\min\limits_{\beta, \lambda, s_i} & \lambda \epsilon + \dfrac{1}{N} \sum\limits_{i=1}^{N} s_i \\[1ex]
\mathrm{s.t.} & l_\beta(\hat{x}_i, \hat{y}_i) \le s_i \quad \forall i \le N \\
& l_\beta(\hat{x}_i, -\hat{y}_i) - \lambda \kappa \le s_i \quad \forall i \le N \\
& \|\beta\|_* \le \lambda.
\end{array}
\right. \qquad (7)$$
Note that (7) constitutes a tractable convex program for most commonly used norms $\|\cdot\|$.
Remark 1 (Regularized Logistic Regression). As the parameter $\kappa > 0$ characterizing the metric $d(\cdot, \cdot)$ tends to infinity, the second constraint group in the convex program (7) becomes redundant. Hence, (7) reduces to the celebrated regularized logistic regression problem
$$\inf_{\beta}\; \epsilon \|\beta\|_* + \frac{1}{N} \sum_{i=1}^{N} l_\beta(\hat{x}_i, \hat{y}_i),$$
where the regularization function is determined by the dual norm on the feature space, while the regularization coefficient coincides with the radius of the Wasserstein ball. Note that for $\kappa = \infty$ the Wasserstein distance between two distributions is infinite if they assign different labels to a fixed feature vector with positive probability. Any distribution in $B_\epsilon(\hat{\mathbb{P}}_N)$ must then have non-overlapping conditional supports for $y = +1$ and $y = -1$. Thus, setting $\kappa = \infty$ reflects the belief that the label is a (deterministic) function of the feature and that label measurements are exact. As this belief is not tenable in most applications, an approach with $\kappa < \infty$ may be more satisfying.
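A minimal cvxpy sketch of the convex program (7) is given below, assuming the $\ell_1$-norm on the feature space so that the dual-norm constraint becomes $\|\beta\|_\infty \le \lambda$; the function and variable names are illustrative and this is not the authors' code.

```python
import cvxpy as cp
import numpy as np

def drlr(X, y, eps, kappa):
    """Distributionally robust logistic regression, program (7).

    Feature metric: l1-norm, so the dual-norm constraint is ||beta||_inf <= lam.
    """
    N, n = X.shape
    beta, lam, s = cp.Variable(n), cp.Variable(), cp.Variable(N)
    margins = cp.multiply(y, X @ beta)            # y_i <beta, x_i>
    cons = [
        cp.logistic(-margins) <= s,               # l_beta(x_i, y_i) <= s_i
        cp.logistic(margins) - lam * kappa <= s,  # l_beta(x_i, -y_i) - lam*kappa <= s_i
        cp.norm(beta, "inf") <= lam,              # dual-norm constraint
    ]
    prob = cp.Problem(cp.Minimize(lam * eps + cp.sum(s) / N), cons)
    prob.solve()
    return beta.value, prob.value

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sign(X @ np.array([1.0, -1.0, 0.0]) + 0.1 * rng.normal(size=50))
beta_hat, J_hat = drlr(X, y, eps=0.01, kappa=1.0)
```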
3.2 Out-of-sample performance guarantees
We now exploit a recent measure concentration result characterizing the speed at which $\hat{\mathbb{P}}_N$ converges to $\mathbb{P}$ with respect to the Wasserstein distance [26] in order to derive out-of-sample performance guarantees for distributionally robust logistic regression.

In the following, we let $\hat{\Xi}_N := \{(\hat{x}_i, \hat{y}_i)\}_{i=1}^{N}$ be a set of $N$ independent training samples from $\mathbb{P}$, and we denote by $\hat{\beta}$, $\hat{\lambda}$, and $\hat{s}_i$ the optimal solutions and $\hat{J}$ the corresponding optimal value of (7). Note that these values are random objects as they depend on the random training data $\hat{\Xi}_N$.
Theorem 2 (Out-of-Sample Performance). Assume that the distribution $\mathbb{P}$ is light-tailed, i.e., there is $a > 1$ with $A := \mathbb{E}^{\mathbb{P}}[\exp(\|x\|^a)] < +\infty$. If the radius $\epsilon$ of the Wasserstein ball is set to
$$\epsilon_N(\eta) = \left( \frac{\log(c_1 \eta^{-1})}{c_2 N} \right)^{\!1/a} \mathbf{1}_{\left\{ N < \frac{\log(c_1 \eta^{-1})}{c_2 c_3} \right\}} + \left( \frac{\log(c_1 \eta^{-1})}{c_2 N} \right)^{\!1/n} \mathbf{1}_{\left\{ N \ge \frac{\log(c_1 \eta^{-1})}{c_2 c_3} \right\}}, \qquad (8)$$
then we have $\mathbb{P}^N\big( \mathbb{P} \in B_\epsilon(\hat{\mathbb{P}}_N) \big) \ge 1 - \eta$, implying that $\mathbb{P}^N\big\{ \hat{\Xi}_N : \mathbb{E}^{\mathbb{P}}[l_{\hat{\beta}}(x, y)] \le \hat{J} \big\} \ge 1 - \eta$ for all sample sizes $N \ge 1$ and confidence levels $\eta \in (0, 1]$. Moreover, the positive constants $c_1$, $c_2$, and $c_3$ appearing in (8) depend only on the light-tail parameters $a$ and $A$, the dimension $n$ of the feature space, and the metric on the feature-label space.
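The radius (8) is straightforward to evaluate once the constants are known. In the sketch below the constants $c_1, c_2, c_3$ are placeholders, since their true values depend on the light-tail parameters $a$ and $A$ and are not spelled out here.

```python
import numpy as np

def eps_N(N, eta, n, a, c1=1.0, c2=1.0, c3=1.0):
    """Wasserstein radius from (8); c1, c2, c3 are placeholder constants."""
    L = np.log(c1 / eta)
    if N < L / (c2 * c3):
        return (L / (c2 * N)) ** (1.0 / a)   # small-sample regime
    return (L / (c2 * N)) ** (1.0 / n)       # large-sample regime

# The radius shrinks with N for a fixed confidence level 1 - eta.
print([round(eps_N(N, eta=0.05, n=10, a=1.1), 3) for N in (10, 100, 1000)])
```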
Remark 2 (Worst-Case Loss). Denoting the empirical logloss function on the training set $\hat{\Xi}_N$ by $\mathbb{E}^{\hat{\mathbb{P}}_N}[l_{\hat{\beta}}(x, y)]$, the worst-case loss $\hat{J}$ can be expressed as
$$\hat{J} = \epsilon \hat{\lambda} + \mathbb{E}^{\hat{\mathbb{P}}_N}[l_{\hat{\beta}}(x, y)] + \frac{1}{N} \sum_{i=1}^{N} \max\big\{ 0,\; \hat{y}_i \langle \hat{\beta}, \hat{x}_i \rangle - \kappa \hat{\lambda} \big\}. \qquad (9)$$
Note that the last term in (9) can be viewed as a complementary regularization term that does not appear in standard regularized logistic regression. This term accounts for label uncertainty and decreases with $\kappa$. Thus, $\kappa$ can be interpreted as our trust in the labels of the training samples. Note that this regularization term vanishes for $\kappa \to \infty$. One can further prove that $\hat{\lambda}$ converges to $\|\hat{\beta}\|_*$ for $\kappa \to \infty$, implying that (9) reduces to the standard regularized logistic regression in this limit.
Remark 3 (Performance Guarantees). The following comments are in order:

I. Light-Tail Assumption: The light-tail assumption of Theorem 2 is restrictive but seems to be unavoidable for any a priori guarantees of the type described in Theorem 2. Note that this assumption is automatically satisfied if the features have bounded support or if they are known to follow, for instance, a Gaussian or exponential distribution.

II. Asymptotic Consistency: For any fixed confidence level $\eta$, the radius $\epsilon_N(\eta)$ defined in (8) drops to zero as the sample size $N$ increases, and thus the ambiguity set shrinks to a singleton. To be more precise, with probability 1 across all training datasets, a sequence of distributions in the ambiguity set (8) converges in the Wasserstein metric, and thus weakly, to the unknown data-generating distribution $\mathbb{P}$; see [25, Corollary 3.4] for a formal proof. Consequently, the solution of (2) can be shown to converge to the solution of (4) as $N$ increases.

III. Finite Sample Behavior: The a priori bound (8) on the size of the Wasserstein ball has two growth regimes. For small $N$, the radius decreases as $N^{-1/a}$, and for large $N$ it scales with $N^{-1/n}$, where $n$ is the dimension of the feature space. We refer to [26, Section 1.3] for further details on the optimality of these rates and potential improvements for special cases. Note that when the support of the underlying distribution $\mathbb{P}$ is bounded or $\mathbb{P}$ has a Gaussian distribution, the parameter $a$ can be effectively set to 1.
3.3 Risk Estimation: Worst- and Best-Cases

One of the main objectives in logistic regression is to control the classification performance. Specifically, we are interested in predicting labels from features. This can be achieved via a classifier function $f_\beta : \mathbb{R}^n \to \{+1, -1\}$, whose risk $R(\beta) := \mathbb{P}\big( y \ne f_\beta(x) \big)$ represents the misclassification probability. In logistic regression, a natural choice for the classifier is $f_\beta(x) = +1$ if $\mathrm{Prob}(+1 \mid x) > 0.5$, and $f_\beta(x) = -1$ otherwise, where the conditional probability $\mathrm{Prob}(y \mid x)$ is defined in (1). The risk associated with this classifier can be expressed as $R(\beta) = \mathbb{E}^{\mathbb{P}}[\mathbf{1}_{\{y \langle \beta, x \rangle \le 0\}}]$. As in Section 3.1, we can use worst- and best-case expectations over Wasserstein balls to construct confidence bounds on the risk.
Theorem 3 (Risk Estimation). For any $\hat{\beta}$ depending on the training dataset $\{(\hat{x}_i, \hat{y}_i)\}_{i=1}^{N}$ we have:

(i) The worst-case risk $R_{\max}(\hat{\beta}) := \sup_{\mathbb{Q} \in B_\epsilon(\hat{\mathbb{P}}_N)} \mathbb{E}^{\mathbb{Q}}[\mathbf{1}_{\{y \langle \hat{\beta}, x \rangle \le 0\}}]$ is given by
$$R_{\max}(\hat{\beta}) = \left\{
\begin{array}{cl}
\min\limits_{\lambda, s_i, r_i, t_i} & \lambda \epsilon + \dfrac{1}{N} \sum\limits_{i=1}^{N} s_i \\[1ex]
\mathrm{s.t.} & 1 - r_i \hat{y}_i \langle \hat{\beta}, \hat{x}_i \rangle \le s_i \quad \forall i \le N \\
& 1 + t_i \hat{y}_i \langle \hat{\beta}, \hat{x}_i \rangle - \kappa \lambda \le s_i \quad \forall i \le N \\
& r_i \|\hat{\beta}\|_* \le \lambda, \;\; t_i \|\hat{\beta}\|_* \le \lambda \quad \forall i \le N \\
& r_i, t_i, s_i \ge 0 \quad \forall i \le N.
\end{array}
\right. \qquad (10\mathrm{a})$$
If the Wasserstein radius $\epsilon$ is set to $\epsilon_N(\eta)$ as defined in (8), then $R_{\max}(\hat{\beta}) \ge R(\hat{\beta})$ with probability $1 - \eta$ across all training sets $\{(x_i, y_i)\}_{i=1}^{N}$.

(ii) Similarly, the best-case risk $R_{\min}(\hat{\beta}) := \inf_{\mathbb{Q} \in B_\epsilon(\hat{\mathbb{P}}_N)} \mathbb{E}^{\mathbb{Q}}[\mathbf{1}_{\{y \langle \hat{\beta}, x \rangle < 0\}}]$ is given by
$$R_{\min}(\hat{\beta}) = 1 - \left\{
\begin{array}{cl}
\min\limits_{\lambda, s_i, r_i, t_i} & \lambda \epsilon + \dfrac{1}{N} \sum\limits_{i=1}^{N} s_i \\[1ex]
\mathrm{s.t.} & 1 + r_i \hat{y}_i \langle \hat{\beta}, \hat{x}_i \rangle \le s_i \quad \forall i \le N \\
& 1 - t_i \hat{y}_i \langle \hat{\beta}, \hat{x}_i \rangle - \kappa \lambda \le s_i \quad \forall i \le N \\
& r_i \|\hat{\beta}\|_* \le \lambda, \;\; t_i \|\hat{\beta}\|_* \le \lambda \quad \forall i \le N \\
& r_i, t_i, s_i \ge 0 \quad \forall i \le N.
\end{array}
\right. \qquad (10\mathrm{b})$$
Figure 1: Out-of-sample performance $1 - \hat{\beta}_N(\epsilon)$ (solid blue line) and the average CCR in % (dashed red line) as a function of the Wasserstein radius $\epsilon$, for (a) $N = 10$, (b) $N = 100$, and (c) $N = 1000$ training samples.
If the Wasserstein radius $\epsilon$ is set to $\epsilon_N(\eta)$ as defined in (8), then $R_{\min}(\hat{\beta}) \le R(\hat{\beta})$ with probability $1 - \eta$ across all training sets $\{(x_i, y_i)\}_{i=1}^{N}$.
We emphasize that (10a) and (10b) constitute highly tractable linear programs. Moreover, we have $R_{\min}(\hat{\beta}) \le R(\hat{\beta}) \le R_{\max}(\hat{\beta})$ with probability $1 - 2\eta$.
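The worst-case risk program (10a) is a linear program once $\hat{\beta}$ is fixed. Below is a minimal sketch, again assuming the $\ell_1$ feature norm (so $\|\hat{\beta}\|_*$ is the $\ell_\infty$-norm); names are illustrative.

```python
import cvxpy as cp
import numpy as np

def worst_case_risk(X, y, beta_hat, eps, kappa):
    """Worst-case misclassification risk R_max via the linear program (10a)."""
    N = X.shape[0]
    dual = np.linalg.norm(beta_hat, np.inf)      # ||beta_hat||_* for the l1 metric
    margins = y * (X @ beta_hat)                 # y_i <beta_hat, x_i>
    lam = cp.Variable()
    s, r, t = cp.Variable(N), cp.Variable(N), cp.Variable(N)
    cons = [
        1 - cp.multiply(r, margins) <= s,
        1 + cp.multiply(t, margins) - kappa * lam <= s,
        r * dual <= lam, t * dual <= lam,
        r >= 0, t >= 0, s >= 0,
    ]
    prob = cp.Problem(cp.Minimize(lam * eps + cp.sum(s) / N), cons)
    prob.solve()
    return prob.value
```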
4 Numerical Results
We now showcase the power of distributionally robust logistic regression in simulated and empirical
experiments. All optimization problems are implemented in MATLAB via the modeling language
YALMIP [27] and solved with the state-of-the-art nonlinear programming solver IPOPT [28]. All
experiments were run on an Intel XEON CPU (3.40GHz). For the largest instance studied (N =
1000), the problems (2), (3), (7) and (10) were solved in 2.1, 4.2, 9.2 and 0.05 seconds, respectively.
4.1 Experiment 1: Out-of-Sample Performance
We use a simulation experiment to study the out-of-sample performance guarantees offered by distributionally robust logistic regression. As in [8], we assume that the features $x \in \mathbb{R}^{10}$ follow a multivariate standard normal distribution and that the conditional distribution of the labels $y \in \{+1, -1\}$ is of the form (1) with $\beta = (10, 0, \ldots, 0)$. The true distribution $\mathbb{P}$ is uniquely determined by this information. If we use the $\ell_\infty$-norm to measure distances in the feature space, then $\mathbb{P}$ satisfies the light-tail assumption of Theorem 2 for any $a$ with $1 < a < 2$. Finally, we set $\kappa = 1$.
Our experiment comprises 100 simulation runs. In each run we generate $N \in \{10, 10^2, 10^3\}$ training samples and $10^4$ test samples from $\mathbb{P}$. We calibrate the distributionally robust logistic regression model (6) to the training data and use the test data to evaluate the average logloss as well as the correct classification rate (CCR) of the classifier associated with $\hat{\beta}$. We then record the percentage $\hat{\beta}_N(\epsilon)$ of simulation runs in which the average logloss exceeds $\hat{J}$. Moreover, we calculate the average CCR across all simulation runs. Figure 1 displays both $1 - \hat{\beta}_N(\epsilon)$ and the average CCR as a function of $\epsilon$ for different values of $N$. Note that $1 - \hat{\beta}_N(\epsilon)$ quantifies the probability (with respect to the training data) that $\mathbb{P}$ belongs to the Wasserstein ball of radius $\epsilon$ around the empirical distribution $\hat{\mathbb{P}}_N$. Thus, $1 - \hat{\beta}_N(\epsilon)$ increases with $\epsilon$. The average CCR benefits from the regularization induced by the distributional robustness and increases with $\epsilon$ as long as the empirical confidence $1 - \hat{\beta}_N(\epsilon)$ is smaller than 1. As soon as the Wasserstein ball is large enough to contain the distribution $\mathbb{P}$ with high confidence ($1 - \hat{\beta}_N(\epsilon) \approx 1$), however, any further increase of $\epsilon$ is detrimental to the average CCR.

Figure 1 also indicates that the radius $\epsilon$ implied by a fixed empirical confidence level scales inversely with the number of training samples $N$. Specifically, for $N = 10, 10^2, 10^3$, the Wasserstein radius implied by the confidence level $1 - \hat{\beta} = 95\%$ is given by $\epsilon \approx 0.2, 0.02, 0.003$, respectively. This observation is consistent with the a priori estimate (8) of the Wasserstein radius $\epsilon_N(\eta)$ associated with a given $\eta$. Indeed, as $a \approx 1$, Theorem 2 implies that $\epsilon_N(\eta)$ scales with $N^{-1/a} \approx N^{-1}$.
4.2 Experiment 2: The Effect of the Wasserstein Ball
In the second simulation experiment we study the statistical properties of the out-of-sample logloss. As in [2], we set $n = 10$ and assume that the features follow a multivariate standard normal distribution, while the conditional distribution of the labels is of the form (1) with $\beta$ sampled uniformly from the unit sphere. We use the $\ell_2$-norm in the feature space, and we set $\kappa = 1$. All results reported here are averaged over 100 simulation runs. In each trial, we use $N = 10^2$ training samples to calibrate problem (6) and $10^4$ test samples to estimate the logloss distribution of the resulting classifier.
Figure 2(a) visualizes the conditional value-at-risk (CVaR) of the out-of-sample logloss distribution for various confidence levels and for different values of $\epsilon$. The CVaR of the logloss at level $\alpha$ is defined as the conditional expectation of the logloss above its $(1 - \alpha)$-quantile, see [29]. In other words, the CVaR at level $\alpha$ quantifies the average of the $\alpha \cdot 100\%$ worst logloss realizations. As expected, using a distributionally robust approach renders the logistic regression problem more "risk-averse", which results in uniformly lower CVaR values of the logloss, particularly for smaller confidence levels. Thus, increasing the radius of the Wasserstein ball reduces the right tail of the logloss distribution. Figure 2(c) confirms this observation by showing that the cumulative distribution function (CDF) of the logloss converges to a step function for large $\epsilon$. Moreover, one can prove that the weight vector $\hat{\beta}$ tends to zero as $\epsilon$ grows. Specifically, for $\epsilon \ge 0.1$ we have $\hat{\beta} \approx 0$, in which case the logloss approximates the deterministic value $\log(2) \approx 0.69$. Zooming into the CVaR graph of Figure 2(a) at the end of the high confidence levels, we observe that the 100%-CVaR, which coincides in fact with the expected logloss, increases at every quantile level; see Figure 2(b).
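For completeness, the empirical CVaR used in Figure 2 is simply a tail average; a minimal sketch:

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical CVaR at level alpha: mean of the alpha*100% largest losses."""
    losses = np.sort(losses)
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[-k:].mean()

losses = np.random.default_rng(0).exponential(size=10_000)
print(cvar(losses, alpha=0.05))  # average of the 5% worst realizations
```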
4.3 Experiment 3: Real World Case Studies and Risk Estimation
Next, we validate the performance of the proposed distributionally robust logistic regression method on the MNIST dataset [30] and three popular datasets from the UCI repository: Ionosphere, Thoracic Surgery, and Breast Cancer [31]. In this experiment, we use the distance function of Definition 2 with the $\ell_1$-norm. We examine three different models: logistic regression (LR), regularized logistic regression (RLR), and distributionally robust logistic regression with $\kappa = 1$ (DRLR). All results reported here are averaged over 100 independent trials. In each trial related to a UCI dataset, we randomly select 60% of the data to train the models and use the rest to test the performance. Similarly, in each trial related to the MNIST dataset, we randomly select $10^3$ samples from the training dataset, and test the performance on the complete test dataset. The results in Table 1 (top) indicate that DRLR outperforms RLR in terms of CCR by about the same amount by which RLR outperforms classical LR (0.3%-1%), consistently across all experiments. We also evaluated the out-of-sample CVaR of the logloss, which is a natural performance indicator for robust methods. Table 1 (bottom) shows that DRLR wins by a large margin (outperforming RLR by 4%-43%).
In the remainder we focus on the Ionosphere case study (the results of which are representative for the other case studies). Figures 3(a) and 3(b) depict the logloss and the CCR for different Wasserstein radii $\epsilon$. DRLR ($\kappa = 1$) outperforms RLR ($\kappa = \infty$) consistently for all sufficiently small values of $\epsilon$. This observation can be explained by the fact that DRLR accounts for uncertainty in the label, whereas RLR does not. Thus, there is a wider range of Wasserstein radii that result in an attractive out-of-sample logloss and CCR. This effect facilitates the choice of $\epsilon$ and could be a significant advantage in situations where it is difficult to determine $\epsilon$ a priori.
Figure 2: CVaR and CDF of the logloss function for different Wasserstein radii $\epsilon \in \{0, 0.005, 0.01, 0.05, 0.1, 0.5\}$: (a) CVaR versus quantile of the logloss function, (b) CVaR versus quantile of the logloss function (zoomed), (c) cumulative distribution of the logloss function.
Table 1: The average and standard deviation of CCR and CVaR evaluated on the test dataset.

CCR:
                    LR              RLR             DRLR
Ionosphere          84.8 ± 4.3%     86.1 ± 3.1%     87.0 ± 2.6%
Thoracic Surgery    82.7 ± 2.0%     83.1 ± 2.0%     83.8 ± 2.0%
Breast Cancer       94.4 ± 1.8%     95.5 ± 1.2%     95.8 ± 1.2%
MNIST 1 vs 7        97.8 ± 0.6%     98.0 ± 0.3%     98.6 ± 0.2%
MNIST 4 vs 9        93.7 ± 1.1%     94.6 ± 0.5%     95.1 ± 0.4%
MNIST 5 vs 6        94.9 ± 1.6%     95.7 ± 0.5%     96.7 ± 0.4%

CVaR:
                    LR              RLR             DRLR
Ionosphere          10.5 ± 6.9      4.2 ± 1.5       3.5 ± 2.0
Thoracic Surgery    3.0 ± 1.9       2.3 ± 0.3       2.2 ± 0.2
Breast Cancer       20.3 ± 15.1     1.3 ± 0.4       0.9 ± 0.2
MNIST 1 vs 7        3.9 ± 2.8       0.67 ± 0.13     0.38 ± 0.06
MNIST 4 vs 9        8.7 ± 6.5       1.45 ± 0.20     1.09 ± 0.08
MNIST 5 vs 6        14.1 ± 9.5      1.35 ± 0.20     0.84 ± 0.08
Figure 3: Average logloss, CCR, and risk for different Wasserstein radii $\epsilon$ (Ionosphere dataset), comparing RLR ($\kappa = +\infty$) and DRLR ($\kappa = 1$): (a) the average logloss for different $\epsilon$, (b) the average correct classification rate for different $\epsilon$, (c) risk estimation (true risk, upper and lower bounds) and its confidence level.
In the experiment underlying Figure 3(c), we first fix $\hat{\beta}$ to the optimal solution of (7) for $\epsilon = 0.003$ and $\kappa = 1$. Figure 3(c) shows the true risk $R(\hat{\beta})$ and its confidence bounds. As expected, for $\epsilon = 0$ the upper and lower bounds coincide with the empirical risk on the training data, which is a lower bound for the true risk on the test data due to over-fitting effects. As $\epsilon$ increases, the confidence interval between the bounds widens and eventually covers the true risk. For instance, at $\epsilon \approx 0.05$ the confidence interval is given by $[0, 0.19]$ and contains the true risk with probability $1 - 2\hat{\eta} = 95\%$.
Acknowledgments: This research was supported by the Swiss National Science Foundation under
grant BSCGI0 157733.
5,242 | 5,746 | On some provably correct cases of variational
inference for topic models
Andrej Risteski
Department of Computer Science
Princeton University
Princeton, NJ 08540
risteski@cs.princeton.edu
Pranjal Awasthi
Department of Computer Science
Rutgers University
New Brunswick, NJ 08901
pranjal.awasthi@rutgers.edu
Abstract
Variational inference is an efficient, popular heuristic used in the context of latent
variable models. We provide the first analysis of instances where variational inference algorithms converge to the global optimum, in the setting of topic models.
Our initializations are natural, one of them being used in LDA-c, the most popular
implementation of variational inference. In addition to providing intuition into
why this heuristic might work in practice, the multiplicative, rather than additive
nature of the variational inference updates forces us to use non-standard proof
arguments, which we believe might be of general theoretical interest.
1 Introduction
Over the last few years, heuristics for non-convex optimization have emerged as one of the most
fascinating phenomena for theoretical study in machine learning. Methods like alternating minimization, EM, variational inference and the like enjoy immense popularity among ML practitioners,
and with good reason: they're vastly more efficient than alternate available methods like convex
relaxations, and are usually easily modified to suit different applications.
Theoretical understanding however is sparse and we know of very few instances where these methods come with formal guarantees. Among the more classical results in this direction are the analyses of
Lloyd's algorithm for K-means, which is very closely related to the EM algorithm for mixtures of
Gaussians [20], [13], [14]. The recent work of [9] also characterizes global convergence properties
of the EM algorithm for more general settings. Another line of recent work has focused on a different heuristic called alternating minimization in the context of dictionary learning. [1], [6] prove that
with appropriate initialization, alternating minimization can provably recover the ground truth. [22]
have proven similar results in the context of phase retrieval.
Another popular heuristic which has so far eluded such attempts is known as variational inference [19]. We provide the first characterization of global convergence of variational inference based
algorithms for topic models [12]. We show that under natural assumptions on the topic-word matrix
and the topic priors, along with natural initialization, variational inference converges to the parameters of the underlying ground truth model. To prove our result we need to overcome a number
of technical hurdles which are unique to the nature of variational inference. Firstly, the difficulty
in analyzing alternating minimization methods for dictionary learning is alleviated by the fact that
one can come up with closed form expressions for the updates of the dictionary matrix. We do
not have this luxury. Second, the 'norm' in which variational inference naturally operates is KL
divergence, which can be difficult to work with. We stress that the focus of this work is not to identify new instances of topic modeling that were previously not known to be efficiently solvable, but
rather to provide understanding about the behaviour of variational inference, the de facto method for
learning and inference in the context of topic models.
2 Latent variable models and EM
We briefly review EM and variational methods. The setting is latent variable models, where
the observations X_i are generated according to a distribution P(X_i | θ) = Σ_{Z_i} P(Z_i | θ) P(X_i | Z_i, θ),
where θ are the parameters of the model, and Z_i is a latent variable. Given the observations
X_i, a common task is to find the max-likelihood value of the parameter θ: argmax_θ Σ_i log(P(X_i | θ)).
The EM algorithm is an iterative method to achieve this, dating all the way back to [15] and
[24] in the 70s. In the above framework it can be formulated as the following procedure, maintaining estimates θ^t, P̃^t(Z) of the model parameters and the posterior distribution over the hidden
variables. In the E-step, we compute the distribution P̃^t(Z) = P(Z | X, θ^t). In the M-step,
we set θ^{t+1} = argmax_θ Σ_i E_{P̃^t}[log P(X_i, Z_i | θ)]. Sometimes even the above two steps
may not be computationally feasible, in which case they can be relaxed by choosing a family of simple distributions F, and performing the following updates. In the variational E-step,
we compute the distribution P̃^t(Z) = min_{P^t ∈ F} KL(P^t(Z) || P(Z | X, θ^t)). In the M-step, we set
θ^{t+1} = argmax_θ Σ_i E_{P̃^t}[log P(X_i, Z_i | θ)]. By picking the family F appropriately, it's often possible
to make both steps above run in polynomial time. As expected, neither of the above two families
of approximations comes with any provable global convergence guarantees. With EM, the problem
is ensuring that one does not get stuck in a local optimum. With variational EM, additionally, we
are faced with the issue of in principle not even exploring the entire space of solutions.
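As a concrete instance of the E/M template above, a minimal EM loop for a two-component 1-D Gaussian mixture (the example model, unit variances, and the crude initialization are all illustrative assumptions, not from the paper):

    import numpy as np

    def em_gmm_1d(x, iters=100):
        """EM for a mixture pi*N(mu1, 1) + (1-pi)*N(mu0, 1).
        E-step: posterior responsibilities; M-step: closed-form argmax."""
        mu = np.array([x.min(), x.max()])  # crude initialization
        pi = 0.5
        for _ in range(iters):
            # E-step: r_i = P(Z_i = 1 | x_i); normalizing constants cancel
            p1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2)
            p0 = (1 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2)
            r = p1 / (p0 + p1)
            # M-step: maximize the expected complete-data log-likelihood
            pi = r.mean()
            mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                           np.sum(r * x) / np.sum(r)])
        return pi, mu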
3 Topic models and prior work
We focus on a particular, popular latent variable model - topic models [12]. The generative model
over word documents is the following. For each document in the corpus, a proportion of topics
γ_1, γ_2, ..., γ_K is sampled according to a prior distribution τ. Then, for each position p in the document, we pick a topic Z_p according to a multinomial with parameters γ_1, ..., γ_K. Conditioned on
Z_p = i, we pick a word j from a multinomial with parameters (β_{i,1}, β_{i,2}, ..., β_{i,N}) to put in position
p. The matrix of values {β_{i,j}} is known as the topic-word matrix.
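For concreteness, the generative process just described can be sampled as follows (a sketch; the Dirichlet choice for the prior τ in the usage comment is an illustrative assumption):

    import numpy as np

    def sample_corpus(beta, tau_sampler, n_docs, doc_len, rng=np.random):
        """Sample documents from the topic model above.
        beta: K x N topic-word matrix (rows sum to 1);
        tau_sampler: draws topic proportions gamma_1..gamma_K."""
        K, N = beta.shape
        docs = []
        for _ in range(n_docs):
            gamma = tau_sampler()                      # topic proportions
            z = rng.choice(K, size=doc_len, p=gamma)   # topic Z_p per position
            words = [rng.choice(N, p=beta[zp]) for zp in z]
            docs.append(words)
        return docs

    # e.g. tau_sampler = lambda: np.random.dirichlet(np.full(K, 0.1))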
The body of work on topic models is vast [11]. Prior theoretical work relevant in the context of
this paper includes the sequence of works by [7], [4], as well as [2], [16], [17] and [10]. [7] and
[4] assume that the topic-word matrix contains 'anchor words'. This means that each topic has a
word which appears in that topic, and no other. [2] on the other hand work with a certain expansion
assumption on the word-topic graph, which says that if one takes a subset S of topics, the number
of words in the support of these topics should be at least |S| + s_max, where s_max is the maximum
support size of any topic. Neither paper needs any assumption on the topic priors, and both can handle
(almost) arbitrarily short documents.
The assumptions we make on the word-topic matrix will be related to the ones in the above works,
but our documents will need to be long, so that the empirical counts of the words are close to their
expected counts. Our priors will also be more structured. This is expected since we are trying to
analyze an existing heuristic rather than develop a new algorithmic strategy. The case where the
documents are short seems significantly more difficult. Namely, in that case there are two issues to
consider. One is proving that the variational approximation to the posterior distribution over topics is not
too bad. The second is proving that the updates do actually reach the global optimum. Assuming
long documents allows us to focus on the second issue alone, which is already challenging. On a
high level, the instances we consider will have the following structure:
• The topics will satisfy a weighted expansion property: for any set S of topics of constant size,
for any topic i in this set, the probability mass on words which belong to i, and no other topic in
S, will be large. (Similar to the expansion in [2], but only over constant-sized subsets.)
• The number of topics per document will be small. Further, the probability of including a given
topic in a document is almost independent of any other topics that might be included in the
document already. Similar properties are satisfied by the Dirichlet prior, one of the most popular
priors in topic modeling. (Originally introduced by [12].) The documents will also have a
'dominating topic', similarly as in [10].
• For each word j, and a topic i it appears in, there will be a decent proportion of documents that
contain topic i and no other topic containing j. These can be viewed as 'local anchor documents'
for that word-topic pair.
We state below, informally, our main result. See Sections 6 and 7 for more details.
Theorem. Under the above mentioned assumptions, popular variants of variational inference for
topic models, with suitable initializations, provably recover the ground truth model in polynomial
time.
4 Variational relaxation for learning topic models
In this section we briefly review the variational relaxation for topic models, following closely [12].
Throughout the paper, we will denote by N the total number of words and K the number of topics.
We will assume that we are working with a sample set of D documents. We will also denote by
f̃_{d,j} the fractional count of word j in document d (i.e. f̃_{d,j} = Count(j)/N_d, where Count(j) is the
number of times word j appears in the document, and N_d is the number of words in the document).
For topic models, variational updates are a way to approximate the computationally intractable
E-step [23] as described in Section 2. Recall that the model parameters for topic models are the
topic prior parameters α and the topic-word matrix β. The observable X is the list of words
in the document. The latent variables are the topic assignments Z_j at each position j in the
document and the topic proportions θ. The variational E-step hence becomes P̃^t(Z, θ) =
min_{P^t ∈ F} KL(P^t(Z, θ) || P(Z, θ | X, α^t, β^t)) for some family F of distributions. The family F one
usually considers is P^t(θ, Z) = q(θ) ∏_{j=1}^{N_d} q'_j(Z_j), i.e. a mean field family. In [12] it's shown
that for Dirichlet priors α the optimal distributions q, q'_j are a Dirichlet distribution for q, with some
parameter γ̃, and multinomials for the q'_j, with some parameters φ_j. The variational EM updates are
shown to have the following form. In the E-step, one runs to convergence the following updates on
the φ and γ̃ parameters:
    φ_{d,j,i} ∝ β^t_{i,w_{d,j}} e^{E_q[log(θ_{d,i}) | γ̃_d]},    γ̃_{d,i} = α^t_i + Σ_{j=1}^{N_d} φ_{d,j,i}.
In the M-step, one updates the β and α parameters by setting
    β^{t+1}_{i,j} ∝ Σ_{d=1}^{D} Σ_{j'=1}^{N_d} φ^t_{d,j',i} w_{d,j',j},
where φ^t_{d,j',i} is the converged value of φ_{d,j',i}; w_{d,j} is the word in document d, position j; and w_{d,j',j} is an indicator variable which is 1
if the word in position j' in document d is word j. The α Dirichlet parameters do not have a closed
form expression and are updated via gradient descent.
4.1 Simplified updates in the long document limit
From the above updates it is difficult to assign an intuitive meaning to the γ̃ and φ parameters.
(Indeed, it's not even clear what one would like them to be ideally at the global optimum.) We will
however be working in the large document limit - and this will simplify the updates. In particular,
in the E-step, in the large document limit, the first term in the update equation for γ̃ has a vanishing
contribution. In this case, we can simplify the E-update as:
    φ_{d,j,i} ∝ β^t_{i,j} γ_{d,i},    γ_{d,i} ∝ Σ_{j=1}^{N_d} φ_{d,j,i}.
Notice, importantly, in the second update we now use variables γ_{d,i} instead of γ̃_{d,i}, which are normalized
such that Σ_{i=1}^K γ_{d,i} = 1. These correspond to the max-likelihood topic proportions, given
our current estimates β^t_{i,j} for the model parameters. The M-step will remain as is - but we will
focus on the β only, and ignore the α updates, as the α estimates disappeared from the E updates:
    β^{t+1}_{i,j} ∝ Σ_{d=1}^{D} f̃_{d,j} γ^t_{d,i},
where γ^t_{d,i} is the converged value of γ_{d,i}. In this case, the intuitive meaning
of the β^t and γ^t variables is clear: they are estimates of the model parameters, and the
max-likelihood topic proportions given an estimate of the model parameters, respectively.
The way we derived them, these updates appear to be an approximate form of the variational updates
in [12]. However it is possible to also view them in a more principled manner. These updates
approximate the posterior distribution P(Z, θ | X, α^t, β^t) by first approximating this posterior by
P(Z | X, θ*, α^t, β^t), where θ* is the max-likelihood value for θ, given our current estimates of
α, β, and then setting P(Z | X, θ*, α^t, β^t) to be a product distribution. It is intuitively clear that
in the large document limit, this approximation should not be much worse than the one in [12],
as the posterior concentrates around the maximum likelihood value. (And in fact, our proofs will
work for finite, but long documents.) Finally, we will rewrite the above equations in a slightly
more convenient form. Denoting f_{d,j} = Σ_{i=1}^K γ_{d,i} β^t_{i,j}, the E-step can be written as: iterate until
convergence
    γ_{d,i} = γ_{d,i} Σ_{j=1}^N (f̃_{d,j} / f_{d,j}) β^t_{i,j}.
The M-step becomes:
    β^{t+1}_{i,j} = β^t_{i,j} · ( Σ_{d=1}^D (f̃_{d,j} / f^t_{d,j}) γ^t_{d,i} ) / ( Σ_{d=1}^D γ^t_{d,i} ),
where f^t_{d,j} = Σ_{i=1}^K γ^t_{d,i} β^t_{i,j} and γ^t_{d,i} is the converged value of γ_{d,i}.
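In code, these rewritten E- and M-steps are the multiplicative updates below (a direct sketch of the equations above; the small constant added to f to guard against division by zero and the iteration count are our choices):

    import numpy as np

    def e_step(F, beta, n_iters=200):
        """F: D x N empirical frequencies f~_{d,j} (rows sum to 1);
        beta: K x N current topic-word estimates. Returns gamma: D x K.
        The update preserves sum_i gamma_{d,i} = 1 automatically."""
        D, K = F.shape[0], beta.shape[0]
        gamma = np.full((D, K), 1.0 / K)
        for _ in range(n_iters):
            f = gamma @ beta + 1e-12              # f_{d,j} = sum_i gamma beta
            gamma = gamma * ((F / f) @ beta.T)    # gamma *= sum_j (f~/f) beta
        return gamma

    def m_step(F, beta, gamma):
        """beta_{i,j} *= [sum_d (f~/f) gamma_{d,i}] / [sum_d gamma_{d,i}]."""
        f = gamma @ beta + 1e-12
        num = (F / f).T @ gamma                   # N x K
        return beta * (num.T / gamma.sum(axis=0)[:, None])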
4.2 Alternating KL minimization and thresholded updates
We will further modify the E and M-step update equations we derived above. In a slightly modified
form, these updates were used in a paper by [21] in the context of non-negative matrix factorization.
There the authors proved that under these updates Σ_{d=1}^D KL(f̃_d || f^t_d) is non-increasing. One can
easily modify their arguments to show that the same property is preserved if the E-step is replaced
by the step γ^t_d = argmin_{γ^t_d ∈ Δ^K} KL(f̃_d || f_d), where Δ^K is the K-dimensional simplex - i.e. minimizing
the KL divergence between the counts and the 'predicted counts' with respect to the γ variables. (In
fact, iterating the γ updates above is a way to solve this convex minimization problem via a version
of gradient descent which makes multiplicative, rather than additive, updates.)
Thus the updates are performing alternating minimization using the KL divergence as the distance
measure (with the difference that for the β variables one essentially just performs a single gradient
step). In this paper, we will make a modification of the M-step which is very natural. Intuitively, the
update for β^t_{i,j} goes over all appearances of the word j and adds the 'fractional assignment' of the
word j to topic i under our current estimates of the variables γ, β. In the modified version we will
only average over those documents d where γ^t_{d,i} > γ^t_{d,i'}, ∀i' ≠ i. The intuitive reason behind this
modification is the following. The EM updates we are studying work with the KL divergence, which
puts more weight on the larger entries. Thus, for the documents in D_i, the estimates for γ^t_{d,i} should
be better than they might be in the documents D \ D_i. (Of course, since the terms f^t_{d,j} involve all
the variables γ^t_{d,i}, it is not a priori clear that this modification will gain us much, but we will prove
that it in fact does.) Formally, we discuss the three modifications of variational inference specified
as Algorithms 1, 2 and 3 (we call them tEM, for thresholded EM):
Algorithm 1 KL-tEM
(E-step) Solve the following convex program for each document d:
    min_{γ^t_{d,·}} Σ_j f̃_{d,j} log(f̃_{d,j} / f^t_{d,j}),  s.t.  γ^t_{d,i} ≥ 0, Σ_i γ^t_{d,i} = 1, and γ^t_{d,i} = 0 if i is not in the support of document d.
(M-step) Let D_i be the set of documents d s.t. γ^t_{d,i} > γ^t_{d,i'}, ∀i' ≠ i. Set
    β^{t+1}_{i,j} = β^t_{i,j} · ( Σ_{d∈D_i} (f̃_{d,j} / f^t_{d,j}) γ^t_{d,i} ) / ( Σ_{d∈D_i} γ^t_{d,i} ).
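The E-step above is a convex KL projection over (a face of) the simplex, so it can also be handed to an off-the-shelf constrained optimizer. A hedged sketch follows; the SLSQP solver and the small stabilizing constant are our choices, not the paper's:

    import numpy as np
    from scipy.optimize import minimize

    def kl_e_step(f_tilde, beta, support):
        """min_gamma sum_j f~_j log(f~_j / (gamma beta)_j) over the simplex,
        with gamma_i = 0 outside `support` (a list of topic indices)."""
        B = beta[support]                     # restrict to allowed topics
        k = len(support)

        def obj(g):
            f = g @ B + 1e-12
            mask = f_tilde > 0                # 0*log(0) terms contribute 0
            return np.sum(f_tilde[mask] * np.log(f_tilde[mask] / f[mask]))

        res = minimize(obj, np.full(k, 1.0 / k), method="SLSQP",
                       bounds=[(0.0, 1.0)] * k,
                       constraints={"type": "eq",
                                    "fun": lambda g: g.sum() - 1.0})
        gamma = np.zeros(beta.shape[0])
        gamma[support] = res.x
        return gamma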
5 Initializations
We will consider two different strategies for initialization. First, we will consider the case where
we initialize the topic-word matrix and the document priors with the correct supports. The
analysis of tEM in this case will be the cleanest. While the main focus of the paper is tEM, we'll
show that this initialization can actually be done for our case efficiently. Second, we will consider
an initialization that is inspired by what the current LDA-c implementation uses.
Algorithm 2 Iterative tEM
(E-step) Initialize γ_{d,i} uniformly among the topics in the support of document d.
Repeat
    γ_{d,i} = γ_{d,i} Σ_{j=1}^N (f̃_{d,j} / f_{d,j}) β^t_{i,j}    (4.1)
until convergence.
(M-step) Same as above.

Algorithm 3 Incomplete tEM
(E-step) Initialize γ_{d,i} with the values gotten in the previous iteration, then perform just one step
of (4.1).
(M-step) Same as before.
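Putting the pieces together, one full round of iterative tEM with the thresholded M-step might look as follows (a sketch; `e_step` is the multiplicative E-step from Section 4.1, and breaking ties by `argmax` is an implementation choice of ours):

    import numpy as np

    def tem_round(F, beta, e_step, n_e_iters=200):
        """One E/M round of thresholded EM. F: D x N frequencies,
        beta: K x N current estimates. The M-step update for row i uses
        only the documents D_i where topic i is the largest."""
        gamma = e_step(F, beta, n_e_iters)          # D x K
        f = gamma @ beta + 1e-12
        ratio = F / f                               # (f~ / f^t), D x N
        dominant = gamma.argmax(axis=1)             # dominating topic per doc
        new_beta = beta.copy()
        for i in range(beta.shape[0]):
            Di = np.where(dominant == i)[0]         # documents in D_i
            if Di.size == 0:
                continue
            num = ratio[Di].T @ gamma[Di, i]        # sum over d in D_i
            new_beta[i] = beta[i] * num / gamma[Di, i].sum()
        return new_beta, gamma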
Concretely, we'll assume that the user has some way of finding, for each topic i, a seed document in which the
proportion of topic i is at least C_l. Then, when initializing, one treats this document as if it were
pure: namely, one sets β^0_{i,j} to be the fractional count of word j in this document. We do not attempt
to design an algorithm to find these documents.
6 Case study 1: Sparse topic priors, support initialization
We start with a simple case. As mentioned, all of our results only hold in the long documents
regime: we will assume for each document d, the number of sampled words is large enough, so that
one can approximate the expected frequencies of the words, i.e., one can find values f̃_{d,j} such that
f̃_{d,j} = (1 ± ε) Σ_{i=1}^K γ*_{d,i} β*_{i,j}. We'll split the rest of the assumptions into those that apply to the topic-word
matrix, and those that apply to the topic priors. Let's first consider the assumptions on the topic-word matrix. We
will impose conditions that ensure the topics don't overlap too much. Namely, we assume:
• Words are discriminative: Each word appears in o(K) topics.
• Almost disjoint supports: ∀i, i', if the intersection of the supports of i and i' is S, then Σ_{j∈S} β*_{i,j} ≤ o(1) · Σ_j β*_{i,j}.
We also need assumptions on the topic priors. The documents will be sparse, and all topics will
be roughly equally likely to appear. There will be virtually no dependence between the topics:
conditioning on the size or presence of a certain topic will not influence much the probability of
another topic being included. These are analogues of distributions that have been analyzed for
dictionary learning [6]. Formally:
• Sparse and gapped documents: Each of the documents in our samples has at most T = O(1)
topics. Furthermore, for each document d, the largest topic i_0 = argmax_i γ*_{d,i} is such that for any
other topic i', γ*_{d,i_0} − γ*_{d,i'} > Γ for some (arbitrarily small) constant Γ.
• Dominant topic equidistribution: The probability that topic i is such that γ*_{d,i} > γ*_{d,i'}, ∀i' ≠ i, is
Ω(1/K).
• Weak topic correlations and independent topic distribution: For all sets S with o(K) topics, it
must be the case that E[γ*_{d,i} | γ*_{d,i} is dominating] = (1 ± o(1)) E[γ*_{d,i} | γ*_{d,i} is dominating, γ*_{d,i'} =
0, ∀i' ∈ S]. Furthermore, for any set S of topics s.t. |S| ≤ T − 1, Pr[γ*_{d,i} > 0 | γ*_{d,i'} > 0, ∀i' ∈ S] =
Θ(1/K).
These assumptions are a less smooth version of properties of the Dirichlet prior. Namely, it's a
folklore result that Dirichlet draws are sparse with high probability, for a certain reasonable range of
parameters. This was formally proven by [25] - though sparsity there means a small number of large
coordinates. It's also well known that the Dirichlet essentially cannot enforce any correlation between
different topics.¹
¹ We show analogues of the weak topic correlations property and equidistribution in the supplementary
material for completeness sake.
The above assumptions can be viewed as a local notion of separability of the model, in the following
sense. First, consider a particular document d. For each topic i that participates in that document,
consider the words j which only appear in the support of topic i in the document. In some sense,
these words are local anchor words for that document: these words appear only in one topic of that
document. Because of the 'almost disjoint supports' property, there will be a decent mass on these
words in each document. Similarly, consider a particular non-zero element β*_{i,j} of the topic-word
matrix. Let's call D_l the set of documents where β*_{i',j} = 0 for all other topics i' ≠ i appearing in
that document. These documents are like local anchor documents for that word-topic pair: in those
documents, the word appears as part of only topic i. It turns out the above properties imply there is
a decent number of these for any word-topic pair.
Finally, a technical condition: we will also assume that all nonzero γ*_{d,i}, β*_{i,j} are at least 1/poly(N).
Intuitively, this means if a topic is present, it needs to be reasonably large, and similarly for words
in topics. Such assumptions also appear in the context of dictionary learning [6].
We will prove the following.
Theorem 1. Given an instance of topic modelling satisfying the properties specified above, where
the number of documents is Ω(K² log N / ε²), if we initialize the supports of the β^t_{i,j} and γ^t_{d,i} variables
correctly, then after O(log(1/ε') + log N) KL-tEM, iterative-tEM or incomplete-tEM updates,
we recover the topic-word matrix and topic proportions to multiplicative accuracy 1 + ε', for any ε'
s.t. 1 + ε' ≥ 1/(1 − ε)^7.
Theorem 2. If the number of documents is Ω(K⁴ log² K), there is a polynomial-time procedure
which with probability 1 − O(1/K) correctly identifies the supports of the β*_{i,j} and γ*_{d,i} variables.
Provable convergence of tEM: The correctness of the tEM updates is proven in 3 steps:
• Identifying the dominating topic: First, we prove that if γ^t_{d,i} is the largest one among all topics in the
document, topic i is actually the largest topic.
• Phase I: Getting constant multiplicative factor estimates: After initialization, after O(log N)
rounds, we will get to variables β^t_{i,j}, γ^t_{d,i} which are within a constant multiplicative factor from
β*_{i,j}, γ*_{d,i}.
• Phase II (Alternating minimization - lower and upper bound evolution): Once the β and γ estimates are within a constant factor of their true values, we show that the lone words and documents have a boosting effect: they cause the multiplicative upper and lower bounds to improve
at each round.
The updates we are studying are multiplicative, not additive in nature, and the objective they are
optimizing is non-convex, so the standard techniques do not work. The intuition behind our proof in
Phase II can be described as follows. Consider one update for one of the variables, say β^t_{i,j}. We show
that β^{t+1}_{i,j} ≈ λ β*_{i,j} + (1 − λ) C^t β*_{i,j} for some constant C^t at time step t. Here λ is something fairly large
(one should think of it as 1 − o(1)), and comes from the existence of the local anchor documents.
A similar equation holds for the γ variables, in which case the 'good' term comes from the local
anchor words. Furthermore, we show that the error in the λ term decreases over time, as does the value
of C^t, so that eventually we can reach β*_{i,j}. The analysis bears a resemblance to the state evolution
and density evolution methods in the analysis of error-decoding algorithms - in the sense that we maintain
a quantity about the evolving system, and analyze how it evolves under the specified iterations. The
quantities we maintain are quite simple - upper and lower multiplicative bounds on our estimates at
any round t.
Initialization: Recall the goal of this phase is to recover the supports - i.e. to find out which topics
are present in a document, and identify the support of each topic. We will find the topic supports
first. This uses an idea inspired by [8] in the setting of dictionary learning. Roughly, we devise a
test, which will take as input two documents d, d', and will try to determine if the two documents
have a topic in common or not. The test will have no false positives, i.e., will never say YES if the
documents don't have a topic in common, but might say NO even if they do. We then ensure that
with high probability, for each topic we find a pair of documents intersecting in that topic, such that
the test says YES.²
² The detailed initialization algorithm is included in the supplementary material.
7 Case study 2: Dominating topics, seeded initialization
Next, we'll consider an initialization which is essentially what the current implementation of LDA-c
uses. Namely, we will call the following initialization a seeded initialization (a minimal code sketch of it follows the list):
• For each topic i, the user supplies a document d, in which γ*_{d,i} ≥ C_l.
• We treat the document as if it only contains topic i and initialize with β^0_{i,j} = f̃_{d,j}.
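A hedged sketch of this seeded initialization, assuming F holds the empirical frequencies f̃ and `seeds` maps each topic to its seed document:

    import numpy as np

    def seeded_init(F, seeds):
        """seeds[i] is the index of a document believed to contain topic i
        with proportion at least C_l; pretend the document is pure and copy
        its word frequencies f~_{d,j} into row i of beta^0."""
        return F[np.asarray(seeds)].copy()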
We show how to modify the previous analysis to show that with a few more assumptions, this
strategy works as well. Firstly, we will have to assume anchor words, which make up a decent fraction
of the mass of each topic. Second, we also assume that the words have a bounded dynamic range, i.e.
the values of a word in two different topics are within a constant B of each other. The documents
are still gapped, but the gap now must be larger. Finally, in roughly a 1/B fraction of the documents
where topic i is dominant, that topic has proportion 1 − δ, for some small (but still constant) δ. A
similar assumption (a small fraction of almost pure documents) appeared in a recent paper by [10].
Formally, we have:
• Small dynamic range and large fraction of anchors: For each discriminative word j, if β*_{i,j} ≠ 0
and β*_{i',j} ≠ 0, then β*_{i,j} ≤ B β*_{i',j}. Furthermore, each topic i has anchor words, such that their total
weight is at least p.
• Gapped documents: In each document, the largest topic has proportion at least C_l, and all the
other topics are at most C_s, s.t.
    C_l − C_s ≥ (1/p) √( 2 [ p log(1/C_l) + (1 − p) log(B C_l) + log(1 + ε) ] ) + ε.
• Small fraction of 1 − δ dominant documents: Among all the documents where topic i is dominating, in an 8/B fraction of them, γ*_{d,i} ≥ 1 − δ, where
    δ := min( C_l² / (2B³) − (1/p) √( 2 [ p log(1/C_l) + (1 − p) log(B C_l) + log(1 + ε) ] ) − ε, 1 − C_l ).
The dependency between the parameters B, p, C_l is a little difficult to parse, but if one thinks of C_l
as 1 − δ for δ small, and p close to 1, since log(1/C_l) ≈ δ, roughly we want that C_l − C_s ≥ (1/p)√(2δ) or so.
(In other words, the weight we require to have on the anchors depends only logarithmically on the
range B.) In the documents where the dominant topic has proportion 1 − δ, a similar reasoning as
above gives an analogous approximate lower bound on γ*_{d,i}. The precise statement is as
follows:
Theorem 3. Given an instance of topic modelling satisfying the properties specified above,
where the number of documents is Ω(K² log N / ε²), if we initialize with seeded initialization, then after
O(log(1/ε') + log N) KL-tEM updates, we recover the topic-word matrix and topic proportions
to multiplicative accuracy 1 + ε', if 1 + ε' ≥ 1/(1 − ε)^7.
The proof is carried out in a few phases:
• Phase I: Anchor identification: We show that as long as we can identify the dominating topic in
each of the documents, anchor words will make progress: after O(log N) rounds, the
values of the topic-word estimates will be almost zero for the topics for which word w is not an
anchor. For a topic for which a word is an anchor, we'll have a good estimate.
• Phase II: Discriminative word identification: After the anchor words are properly identified in
the previous phase, if β*_{i,j} = 0, then β^t_{i,j} will keep dropping and quickly reach almost zero. The
values corresponding to β*_{i,j} ≠ 0 will be decently estimated.
• Phase III: Alternating minimization: After Phases I and II above, we are back to the scenario of
the previous section: namely, there is improvement in each next round.
During Phases I and II the intuition is the following: due to our initialization, even in the beginning,
each topic is 'correlated' with the correct values. In a γ update, we are minimizing KL(f̃_d || f_d)
with respect to the γ_d variables, so we need a way to argue that whenever the β estimates are not too
bad, minimizing this quantity provides an estimate of how far the optimal γ_d variables are from
γ*_d. We show the following useful claim:
Lemma 4. If, for all topics i, KL(β*_i || β^t_i) ≤ R_β, and min_{γ_d ∈ Δ^K} KL(f̃_d || f_d) ≤ R_f, then after
running a KL divergence minimization step with respect to the γ_d variables, we get that
    ||γ*_d − γ_d||_1 ≤ (1/p) ( √(R_β / 2) + 2 √(R_f / 2) ) + ε.
This lemma critically uses the existence of anchor words - namely, we show ||β^⊤ v||_1 ≥ p ||v||_1.
Intuitively, if one thinks of v as γ* − γ^t, then ||β^⊤ v||_1 will be large if ||v||_1 is large. Hence, if ||β* − β^t||_1
is not too large, whenever ||f̃ − f^t||_1 is small, so is ||γ* − γ^t||_1. We will be able to maintain R_β
and R_f small enough throughout the iterations, so that we can identify the largest topic in each of
the documents.
8 On common words
We briefly remark on common words: words j that appear in the support of every topic, with all
values β*_{i,j} within the dynamic range B of each other. In this case, the
proofs above, as they are, will not work, since common words do not have any lone documents.³
However, if in a 1 − δ₀ fraction of the documents where topic i is dominant, topic i has
proportion at least 1 − δ₀, and furthermore, in each topic, the weight on these words is no more than
δ₀, for a suitably small constant δ₀, then our proofs still work with either initialization.⁴ The idea for the argument is simple: when
the dominating topic is very large, we show that f̃_{d,j}/f^t_{d,j} is very highly correlated with β*_{i,j}/β^t_{i,j}, so these
documents behave like anchor documents. One can show:
Theorem 5. If we additionally have common words satisfying the properties specified above, then after
O(log(1/ε') + log N) KL-tEM updates in Case Study 2, or any of the tEM variants in Case Study 1,
and using the same initializations as before, we recover the topic-word matrix and topic proportions
to multiplicative accuracy 1 + ε', if 1 + ε' ≥ 1/(1 − ε)^7.
³ We stress we want to analyze whether variational inference will work or not. Handling common words
algorithmically is easy: they can be detected and 'filtered out' initially. Then we can perform the variational
inference updates over the rest of the words only. This is in fact often done in practice.
⁴ See supplementary material.
9 Discussion and open problems
In this work we provide the first characterization of sufficient conditions under which variational inference
leads to optimal parameter estimates for topic models. Our proofs also suggest possible hard cases
for variational inference, namely instances with large dynamic range compared to the proportion of
anchor words and/or correlated topic priors. It's not hard to hand-craft such instances where support
initialization performs very badly, even with only anchor and common words. We made no effort to
explore the optimal relationship between the dynamic range and the proportion of anchor words, as
it's not clear what the 'worst case' instances for this trade-off are.
Seeded initialization, on the other hand, empirically works much better. We found that when C_l ≈
0.6, and when the proportion of anchor words is as low as 0.2, variational inference recovers the
ground truth, even on instances with fairly large dynamic range. Our current proof methods are too
weak to capture this observation. (In fact, even the largest topic is sometimes misidentified in the
initial stages, so one cannot even run tEM, only the vanilla variational inference updates.) Analyzing
the dynamics of variational inference in this regime seems like a challenging problem which would
require significantly new ideas.
References
[1] A. Agarwal, A. Anandkumar, P. Jain, and P. Netrapalli. Learning sparsely used overcomplete dictionaries via alternating minimization. In Proceedings of The 27th Conference on Learning Theory (COLT), 2013.
[2] A. Anandkumar, D. Hsu, A. Javanmard, and S. Kakade. Learning latent Bayesian networks and topic models under expansion constraints. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[3] A. Anandkumar, S. Kakade, D. Foster, Y. Liu, and D. Hsu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. Technical report, 2012.
[4] S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. A practical algorithm for topic modeling with provable guarantees. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[5] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization - provably. In Proceedings of the forty-fourth annual ACM symposium on Theory of Computing, pages 145-162. ACM, 2012.
[6] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse coding. In Proceedings of The 28th Conference on Learning Theory (COLT), 2015.
[7] S. Arora, R. Ge, and A. Moitra. Learning topic models - going beyond SVD. In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2012.
[8] S. Arora, R. Ge, and A. Moitra. New algorithms for learning incoherent and overcomplete dictionaries. In Proceedings of The 27th Conference on Learning Theory (COLT), 2014.
[9] S. Balakrishnan, M.J. Wainwright, and B. Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[10] T. Bansal, C. Bhattacharyya, and R. Kannan. A provable SVD-based algorithm for learning topics in dominant admixture corpus. In Advances in Neural Information Processing Systems (NIPS), 2014.
[11] D. Blei and J.D. Lafferty. Topic models. Text mining: classification, clustering, and applications, 10:71, 2009.
[12] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[13] S. Dasgupta and L. Schulman. A two-round variant of EM for Gaussian mixtures. In Proceedings of Uncertainty in Artificial Intelligence (UAI), 2000.
[14] S. Dasgupta and L. Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research, 8:203-226, 2007.
[15] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.
[16] W. Ding, M.H. Rohban, P. Ishwar, and V. Saligrama. Topic discovery through data dependent and random projections. arXiv preprint arXiv:1303.3664, 2013.
[17] W. Ding, M.H. Rohban, P. Ishwar, and V. Saligrama. Efficient distributed topic modeling with provable guarantees. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, pages 167-175, 2014.
[18] M. Hoffman, D. Blei, J. Paisley, and C. Wang. Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347, 2013.
[19] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[20] A. Kumar and R. Kannan. Clustering with spectral norm and the k-means algorithm. In Proceedings of Foundations of Computer Science (FOCS), 2010.
[21] D. Lee and S. Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems (NIPS), 2000.
[22] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems (NIPS), 2013.
[23] D. Sontag and D. Roy. Complexity of inference in latent Dirichlet allocation. In Advances in Neural Information Processing Systems (NIPS), 2000.
[24] R. Sundberg. Maximum likelihood from incomplete data via the EM algorithm. Scandinavian Journal of Statistics, 1:49-58, 1974.
[25] M. Telgarsky. Dirichlet draws are sparse with high probability. Manuscript, 2013.
5,243 | 5,747 | Extending Gossip Algorithms to
Distributed Estimation of U-Statistics
Igor Colin, Joseph Salmon, Stéphan Clémençon
LTCI, CNRS, Télécom ParisTech
Université Paris-Saclay
75013 Paris, France
first.last@telecom-paristech.fr
Aurélien Bellet
Magnet Team
INRIA Lille - Nord Europe
59650 Villeneuve d'Ascq, France
aurelien.bellet@inria.fr
Abstract
Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample
mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less
investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under
the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip
algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate
bounds of O(1/t) and O(log t/t) for the synchronous and asynchronous cases
respectively, where t is the number of iterations, with explicit data- and network-dependent terms. Beyond favorable comparisons in terms of rate analysis, numerical experiments provide empirical evidence that the proposed algorithms surpass
the previously introduced approach.
1 Introduction
Decentralized computation and estimation have many applications in sensor and peer-to-peer networks as well as for extracting knowledge from massive information graphs such as interlinked Web
documents and on-line social media. Algorithms running on such networks must often operate under
tight constraints: the nodes forming the network cannot rely on a centralized entity for communication and synchronization, are typically unaware of the global network topology, and/or have limited
resources (computational power, memory, energy). Gossip algorithms [19, 18, 5], where each node
exchanges information with at most one of its neighbors at a time, have emerged as a simple yet powerful technique for distributed computation in such settings. Given a data observation on each node,
gossip algorithms can be used to compute averages or sums of functions of the data that are separable across observations (see for example [10, 2, 15, 11, 9] and references therein). Unfortunately,
these algorithms cannot be used to efficiently compute quantities that take the form of an average
over pairs of observations, also known as U -statistics [12]. Among classical U -statistics used in
machine learning and data mining, one can mention, among others: the sample variance, the Area
Under the Curve (AUC) of a classifier on distributed data, the Gini mean difference, the Kendall
tau rank correlation coefficient, the within-cluster point scatter and several statistical hypothesis test
statistics such as Wilcoxon Mann-Whitney [14].
In this paper, we propose randomized synchronous and asynchronous gossip algorithms to efficiently
compute a U-statistic, in which each node maintains a local estimate of the quantity of interest
throughout the execution of the algorithm. Our methods rely on two types of iterative information
exchange in the network: propagation of local observations across the network, and averaging of lo1
cal estimates. We show that the local estimates generated by our approach converge in expectation to
the value of the U -statistic at rates of O(1/t) and O(log t/t) for the synchronous and asynchronous
versions respectively, where t is the number of iterations. These convergence bounds feature datadependent terms that reflect the hardness of the estimation problem, and network-dependent terms
related to the spectral gap of the network graph [3], showing that our algorithms are faster on wellconnected networks. The proofs rely on an original reformulation of the problem using ?phantom
nodes?, i.e., on additional nodes that account for data propagation in the network. Our results largely
improve upon those presented in [17]: in particular, we achieve faster convergence together with
lower memory and communication costs. Experiments conducted on AUC and within-cluster point
scatter estimation using real data confirm the superiority of our approach.
The rest of this paper is organized as follows. Section 2 introduces the problem of interest as well as
relevant notation. Section 3 provides a brief review of the related work in gossip algorithms. We then
describe our approach along with the convergence analysis in Section 4, both in the synchronous and
asynchronous settings. Section 5 presents our numerical results.
2 Background
2.1 Definitions and Notations
For any integer p > 0, we denote by [p] the set {1, . . . , p} and by |F| the cardinality of any finite set
F. We represent a network of size n > 0 as an undirected graph G = (V, E), where V = [n] is the
set of vertices and E ⊆ V × V the set of edges. We denote by A(G) the adjacency matrix of
the graph G, that is, for all (i, j) ∈ V², [A(G)]_{ij} = 1 if and only if (i, j) ∈ E. For any node
i ∈ V, we denote its degree by d_i = |{j : (i, j) ∈ E}|. We denote by L(G) the graph Laplacian of
G, defined by L(G) = D(G) − A(G), where D(G) = diag(d_1, . . . , d_n) is the matrix of degrees. A
graph G = (V, E) is said to be connected if for all (i, j) ∈ V² there exists a path connecting i and j;
it is bipartite if there exist S, T ⊆ V such that S ∪ T = V, S ∩ T = ∅ and E ⊆ (S × T) ∪ (T × S).
A matrix M ∈ ℝ^{n×n} is nonnegative (resp. positive) if and only if for all (i, j) ∈ [n]², [M]_{ij} ≥ 0
(resp. [M]_{ij} > 0). We write M ≥ 0 (resp. M > 0) when this holds. The transpose of M is
denoted by Mᵀ. A matrix P ∈ ℝ^{n×n} is stochastic if and only if P ≥ 0 and P 1_n = 1_n, where
1_n = (1, . . . , 1)ᵀ ∈ ℝⁿ. The matrix P ∈ ℝ^{n×n} is bi-stochastic if and only if P and Pᵀ are
stochastic. We denote by I_n the identity matrix in ℝ^{n×n}, by (e_1, . . . , e_n) the standard basis in ℝⁿ, by 1{E}
the indicator function of an event E and by ‖·‖ the usual ℓ2 norm.
2.2 Problem Statement
Let X be an input space and (X_1, . . . , X_n) ∈ Xⁿ a sample of n ≥ 2 points in that space. We assume
X ⊆ ℝᵈ for some d > 0 throughout the paper, but our results straightforwardly extend to the more
general setting. We denote by X = (X_1, . . . , X_n)ᵀ the design matrix. Let H : X × X → ℝ be
a measurable function, symmetric in its two arguments and with H(X, X) = 0 for all X ∈ X. We
consider the problem of estimating the following quantity, known as a degree two U-statistic [12]:¹
$$\hat{U}_n(H) = \frac{1}{n^2} \sum_{i,j=1}^{n} H(X_i, X_j). \qquad (1)$$
In this paper, we illustrate the interest of U-statistics on two applications, among many others. The
first one is the within-cluster point scatter [4], which measures the clustering quality of a partition
P of X as the average distance between points in each cell C ∈ P. It is of the form (1) with
$$H_{\mathcal{P}}(X, X') = \|X - X'\| \sum_{C \in \mathcal{P}} \mathbb{I}\{(X, X') \in C^2\}. \qquad (2)$$
We also study the AUC measure [8]. For a given sample (X_1, ℓ_1), . . . , (X_n, ℓ_n) on X × {−1, +1},
the AUC measure of a linear classifier θ ∈ ℝ^{d−1} is given by:
$$\mathrm{AUC}(\theta) = \frac{\sum_{1 \le i,j \le n} (1 - \ell_i \ell_j)\, \mathbb{I}\{\ell_i (\theta^\top X_i) > \ell_j (\theta^\top X_j)\}}{4 \sum_{1 \le i \le n} \mathbb{I}\{\ell_i = 1\} \sum_{1 \le i \le n} \mathbb{I}\{\ell_i = -1\}}. \qquad (3)$$
¹ We point out that the usual definition of a U-statistic differs slightly from (1) by a factor of n/(n − 1).
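When the data sits on a single machine, the quantities above can be computed directly, which is useful for checking the gossip estimates against ground truth. The following is a minimal sketch of our own (not code from the paper); the helper names, the toy data and the two-cell partition are illustrative assumptions.

```python
import numpy as np

def u_statistic(X, H):
    """U_n(H) = (1/n^2) * sum over all ordered pairs (i, j) of H(X_i, X_j)."""
    n = len(X)
    return sum(H(X[i], X[j]) for i in range(n) for j in range(n)) / n**2

def make_point_scatter_kernel(cell_of):
    """H_P(x, x') = ||x - x'|| when x and x' fall in the same cell of the partition."""
    def H(x, xp):
        return float(np.linalg.norm(x - xp)) if cell_of(x) == cell_of(xp) else 0.0
    return H

def auc(X, labels, theta):
    """Fraction of (positive, negative) pairs ranked correctly by the scorer theta."""
    scores = X @ theta
    pos, neg = scores[labels == 1], scores[labels == -1]
    return float(np.mean(pos[:, None] > neg[None, :]))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
H = make_point_scatter_kernel(lambda x: int(x[0] > 0))  # two cells, split at x[0] = 0
print(u_statistic(X, H))
print(auc(X, np.sign(X[:, 0]).astype(int), theta=np.array([1.0, 0.0])))
```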
Algorithm 1 GoSta-sync: a synchronous gossip algorithm for computing a U-statistic
Require: Each node k holds observation X_k
1: Each node k initializes its auxiliary observation Y_k = X_k and its estimate Z_k = 0
2: for t = 1, 2, . . . do
3:    for p = 1, . . . , n do
4:        Set Z_p ← ((t − 1)/t) Z_p + (1/t) H(X_p, Y_p)
5:    end for
6:    Draw (i, j) uniformly at random from E
7:    Set Z_i, Z_j ← (Z_i + Z_j)/2
8:    Swap the auxiliary observations of nodes i and j: Y_i ↔ Y_j
9: end for
This score is the probability for a classifier to rank a positive observation higher than a negative one.
We focus here on the decentralized setting, where the data sample is partitioned across a set of nodes
in a network. For simplicity, we assume V = [n] and each node i ∈ V only has access to a single
data observation X_i.² We are interested in estimating (1) efficiently using a gossip algorithm.
3 Related Work
Gossip algorithms have been extensively studied in the context of decentralized averaging in networks, where the goal is to compute the average of n real numbers (X = ℝ):
$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i = \frac{1}{n} \mathbf{X}^\top 1_n. \qquad (4)$$
One of the earliest works on this canonical problem is due to [19], but more efficient algorithms have
recently been proposed, see for instance [10, 2]. Of particular interest to us is the work of [2], which
introduces a randomized gossip algorithm for computing the empirical mean (4) in a context where
nodes wake up asynchronously and simply average their local estimate with that of a randomly
chosen neighbor. The communication probabilities are given by a stochastic matrix P, where p_{ij}
is the probability that a node i selects neighbor j at a given iteration. As long as the network
graph is connected and non-bipartite, the local estimates converge to (4) at a rate O(e^{−ct}), where
the constant c can be tied to the spectral gap of the network graph [3], showing faster convergence
for well-connected networks.³ Such algorithms can be extended to compute other functions such
as maxima and minima, or sums of the form Σ_{i=1}^{n} f(X_i) for some function f : X → ℝ (as done
for instance in [15]). Some work has also gone into developing faster gossip algorithms for poorly
connected networks, assuming that nodes know their (partial) geographic location [6, 13]. For a
detailed account of the literature on gossip algorithms, we refer the reader to [18, 5].
However, existing gossip algorithms cannot be used to efficiently compute (1) as it depends on pairs
of observations. To the best of our knowledge, this problem has only been investigated in [17].
Their algorithm, coined U2-gossip, achieves O(1/t) convergence rate but has several drawbacks.
First, each node must store two auxiliary observations, and two pairs of nodes must exchange an
observation at each iteration. For high-dimensional problems (large d), this leads to a significant
memory and communication load. Second, the algorithm is not asynchronous as every node must
update its estimate at each iteration. Consequently, nodes must have access to a global clock, which
is often unrealistic in practice. In the next section, we introduce new synchronous and asynchronous
algorithms with faster convergence as well as smaller memory and communication cost per iteration.
4 GoSta Algorithms
In this section, we introduce gossip algorithms for computing (1). Our approach is based on the
observation that Û_n(H) = (1/n) Σ_{i=1}^{n} h_i, with h_i = (1/n) Σ_{j=1}^{n} H(X_i, X_j), and we write
h = (h_1, . . . , h_n)ᵀ. The goal is thus similar to the usual distributed averaging problem (4), with the
key difference that each local value h_i is itself an average depending on the entire data sample.
Consequently, our algorithms will combine two steps at each iteration: a data propagation step to
allow each node i to estimate h_i, and an averaging step to ensure convergence to the desired value
Û_n(H). We first present the algorithm and its analysis for the (simpler) synchronous setting in
Section 4.1, before introducing an asynchronous version (Section 4.2).
² Our results generalize to the case where each node holds a subset of the observations (see Section 4).
³ For the sake of completeness, we provide an analysis of this algorithm in the supplementary material.
[Figure 1: Comparison of the original network G (a) and the "phantom network" G̃ (b).]
4.1 Synchronous Setting
In the synchronous setting, we assume that the nodes have access to a global clock so that they can
all update their estimate at each time instance. We stress that the nodes need not be aware of the
global network topology, as they will only interact with their direct neighbors in the graph.
Let us denote by Z_k(t) the (local) estimate of Û_n(H) by node k at iteration t. In order to propagate
data across the network, each node k maintains an auxiliary observation Y_k, initialized to X_k. Our
algorithm, coined GoSta, goes as follows. At each iteration, each node k updates its local estimate
by taking the running average of Z_k(t) and H(X_k, Y_k). Then, an edge of the network is drawn uniformly at random, and the corresponding pair of nodes average their local estimates and swap their
auxiliary observations. The observations are thus each performing a random walk (albeit coupled)
on the network graph. The full procedure is described in Algorithm 1.
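To make the procedure concrete, here is a small simulation of Algorithm 1. This is a sketch of our own rather than the authors' reference implementation; the edge list, the kernel and the toy cycle graph are assumptions of this sketch.

```python
import random
import numpy as np

def gosta_sync(X, edges, H, n_iter, seed=0):
    rng = random.Random(seed)
    n = len(X)
    Y = list(range(n))        # Y[k]: index of the auxiliary observation held by node k
    Z = np.zeros(n)           # Z[k]: local estimate of U_n(H) at node k
    for t in range(1, n_iter + 1):
        for k in range(n):    # step 4: running average of H(X_k, Y_k)
            Z[k] = (t - 1) / t * Z[k] + H(X[k], X[Y[k]]) / t
        i, j = rng.choice(edges)             # step 6: draw a random edge
        Z[i] = Z[j] = (Z[i] + Z[j]) / 2      # step 7: average the two local estimates
        Y[i], Y[j] = Y[j], Y[i]              # step 8: swap auxiliary observations
    return Z

X = np.random.default_rng(0).normal(size=50)
edges = [(i, (i + 1) % 50) for i in range(50)]   # a cycle graph on 50 nodes
H = lambda a, b: (a - b) ** 2                    # kernel of (twice) the empirical variance
print(gosta_sync(X, edges, H, n_iter=20000).mean())
```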
In order to prove the convergence of Algorithm 1, we consider an equivalent reformulation of the
problem which allows us to model the data propagation and the averaging steps separately. Specifically, for each k ∈ V, we define a phantom G_k = (V_k, E_k) of the original network G, with
V_k = {v_i^k ; 1 ≤ i ≤ n} and E_k = {(v_i^k, v_j^k) ; (i, j) ∈ E}. We then create a new graph
G̃ = (Ṽ, Ẽ) where each node k ∈ V is connected to its counterpart v_k^k ∈ V_k:
$$\tilde{V} = V \cup \Big(\bigcup_{k=1}^{n} V_k\Big), \qquad \tilde{E} = E \cup \Big(\bigcup_{k=1}^{n} E_k\Big) \cup \{(k, v_k^k) ; k \in V\}.$$
The construction of G̃ is illustrated in Figure 1. In this new graph, the nodes V from the original
network will hold the estimates Z_1(t), . . . , Z_n(t) as described above. The role of each G_k is to
simulate the data propagation in the original graph G. For i ∈ [n], v_i^k ∈ V_k initially holds the value
H(X_k, X_i). At each iteration, we draw a random edge (i, j) of G and nodes v_i^k and v_j^k swap their
value for all k ∈ [n]. To update its estimate, each node k will use the current value at v_k^k.
We can now represent the system state at iteration t by a vector S(t) = (S_1(t)ᵀ, S_2(t)ᵀ)ᵀ ∈ ℝ^{n+n²}.
The first n coefficients, S_1(t), are associated with nodes in V and correspond to the estimate
vector Z(t) = [Z_1(t), . . . , Z_n(t)]ᵀ. The last n² coefficients, S_2(t), are associated with nodes in
(V_k)_{1≤k≤n} and represent the data propagation in the network. Their initial value is set to
S_2(0) = (e_1ᵀ H, . . . , e_nᵀ H)ᵀ so that for any (k, l) ∈ [n]², node v_l^k initially stores the value H(X_k, X_l).
Remark 1. The "phantom network" G̃ is of size O(n²), but we stress the fact that it is used solely
as a tool for the convergence analysis: Algorithm 1 operates on the original graph G.
The transition matrix of this system accounts for three events: the averaging step (the action of G
on itself), the data propagation (the action of G_k on itself for all k ∈ V) and the estimate update
(the action of G_k on node k for all k ∈ V). At a given step t > 0, we are interested in characterizing
the transition matrix M(t) such that E[S(t + 1)] = M(t) E[S(t)]. For the sake of clarity, we write
M(t) as an upper block-triangular (n + n²) × (n + n²) matrix:
$$M(t) = \begin{pmatrix} M_1(t) & M_2(t) \\ 0 & M_3(t) \end{pmatrix}, \qquad (5)$$
with M_1(t) ∈ ℝ^{n×n}, M_2(t) ∈ ℝ^{n×n²} and M_3(t) ∈ ℝ^{n²×n²}. The bottom left part is necessarily
0, because G does not influence any G_k. The upper left M_1(t) block corresponds to the averaging
step; therefore, for any t > 0, we have:
$$M_1(t) = \frac{t-1}{t} \cdot \frac{1}{|E|} \sum_{(i,j) \in E} \Big( I_n - \frac{1}{2}(e_i - e_j)(e_i - e_j)^\top \Big) = \frac{t-1}{t} W_2(G),$$
where for any β ≥ 1, W_β(G) is defined by:
$$W_\beta(G) = \frac{1}{|E|} \sum_{(i,j) \in E} \Big( I_n - \frac{1}{\beta}(e_i - e_j)(e_i - e_j)^\top \Big) = I_n - \frac{2}{\beta |E|} L(G). \qquad (6)$$
Furthermore, M_2(t) and M_3(t) are defined as follows:
$$M_2(t) = \frac{1}{t} \underbrace{\begin{pmatrix} e_1^\top & 0 & \cdots & 0 \\ 0 & e_2^\top & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & e_n^\top \end{pmatrix}}_{B} \quad\text{and}\quad M_3(t) = \underbrace{\begin{pmatrix} W_1(G) & 0 & \cdots & 0 \\ 0 & W_1(G) & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & W_1(G) \end{pmatrix}}_{C},$$
where M_2(t) is a block diagonal matrix corresponding to the observations being propagated, and
M_3(t) represents the estimate update for each node k. Note that M_3(t) = W_1(G) ⊗ I_n, where ⊗ is
the Kronecker product.
We can now describe the expected state evolution. At iteration t = 0, one has:
$$E[S(1)] = M(1)E[S(0)] = M(1)S(0) = \begin{pmatrix} 0 & B \\ 0 & C \end{pmatrix} \begin{pmatrix} 0 \\ S_2(0) \end{pmatrix} = \begin{pmatrix} B S_2(0) \\ C S_2(0) \end{pmatrix}. \qquad (7)$$
Using recursion, we can write:
$$E[S(t)] = M(t) M(t-1) \cdots M(1) S(0) = \begin{pmatrix} \frac{1}{t} \sum_{s=1}^{t} W_2(G)^{t-s} B\, C^{s-1} S_2(0) \\ C^{t} S_2(0) \end{pmatrix}. \qquad (8)$$
Therefore, in order to prove the convergence of Algorithm 1, one needs to show that
$$\lim_{t \to +\infty} \frac{1}{t} \sum_{s=1}^{t} W_2(G)^{t-s} B\, C^{s-1} S_2(0) = \hat{U}_n(H)\, 1_n.$$
We state this precisely in the next theorem.
Theorem 1. Let G be a connected and non-bipartite graph with n nodes, X ∈ ℝ^{n×d} a design
matrix and (Z(t)) the sequence of estimates generated by Algorithm 1. For all k ∈ [n], we have:
$$\lim_{t \to +\infty} E[Z_k(t)] = \frac{1}{n^2} \sum_{1 \le i,j \le n} H(X_i, X_j) = \hat{U}_n(H). \qquad (9)$$
Moreover, for any t > 0,
$$\big\| E[Z(t)] - \hat{U}_n(H) 1_n \big\| \le \frac{2}{ct} \big\| H - h 1_n^\top \big\| + \Big( \frac{1}{ct} + e^{-ct} \Big) \big\| h - \hat{U}_n(H) 1_n \big\|,$$
where c = c(G) := 1 − λ₂(2) and λ₂(2) is the second largest eigenvalue of W_2(G).
Proof. See supplementary material.
Theorem 1 shows that the local estimates generated by Algorithm 1 converge to Û_n(H) at a rate
O(1/t). Furthermore, the constants reveal the rate dependency on the particular problem instance.
Indeed, the two norm terms are data-dependent and quantify the difficulty of the estimation problem
itself through a dispersion measure. In contrast, c(G) is a network-dependent term, since 1 − λ₂(2) =
λ_{n−1}/|E|, where λ_{n−1} is the second smallest eigenvalue of the graph Laplacian L(G) (see Lemma 1
in the supplementary material). The value λ_{n−1} is also known as the spectral gap of G, and graphs
with a larger spectral gap typically have better connectivity [3]. This will be illustrated in Section 5.
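This constant is easy to check numerically. The sketch below is our own; it assumes networkx is available and that |E| counts each undirected edge in both directions, a convention under which the complete graph gives 1/(n − 1), matching the 1/n value quoted in Section 5 up to lower-order terms.

```python
import networkx as nx
import numpy as np

def one_minus_lambda2(G):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(L))       # ascending; lam[0] = 0, lam[1] = spectral gap
    return lam[1] / (2 * G.number_of_edges())  # |E| counted as ordered pairs

n = 100
print(one_minus_lambda2(nx.complete_graph(n)))                        # ~ 1/n
print(one_minus_lambda2(nx.grid_2d_graph(10, 10)))                    # much smaller gap
print(one_minus_lambda2(nx.watts_strogatz_graph(n, 5, 0.3, seed=0)))  # in between
```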
Algorithm 2 GoSta-async: an asynchronous gossip algorithm for computing a U-statistic
Require: Each node k holds observation X_k and p_k = 2 d_k / |E|
1: Each node k initializes Y_k = X_k, Z_k = 0 and m_k = 0
2: for t = 1, 2, . . . do
3:    Draw (i, j) uniformly at random from E
4:    Set m_i ← m_i + 1/p_i and m_j ← m_j + 1/p_j
5:    Set Z_i, Z_j ← (Z_i + Z_j)/2
6:    Set Z_i ← (1 − 1/(p_i m_i)) Z_i + (1/(p_i m_i)) H(X_i, Y_i)
7:    Set Z_j ← (1 − 1/(p_j m_j)) Z_j + (1/(p_j m_j)) H(X_j, Y_j)
8:    Swap the auxiliary observations of nodes i and j: Y_i ↔ Y_j
9: end for
Comparison to U2-gossip. To estimate Û_n(H), U2-gossip [17] does not use averaging. Instead,
each node k requires two auxiliary observations Y_k^{(1)} and Y_k^{(2)}, which are both initialized to X_k.
At each iteration, each node k updates its local estimate by taking the running average of Z_k and
H(Y_k^{(1)}, Y_k^{(2)}). Then, two random edges are selected: the nodes connected by the first (resp. second) edge swap their first (resp. second) auxiliary observations. A precise statement of the algorithm
is provided in the supplementary material. U2-gossip has several drawbacks compared to GoSta: it
requires initiating communication between two pairs of nodes at each iteration, and the amount of
communication and memory required is higher (especially when data is high-dimensional). Furthermore, applying our convergence analysis to U2-gossip, we obtain the following refined rate:⁴
$$\big\| E[Z(t)] - \hat{U}_n(H) 1_n \big\| \le \frac{\sqrt{n}}{t} \Big( \frac{2}{1 - \lambda_2(1)} \big\| H - h 1_n^\top \big\| + \frac{1}{1 - \lambda_2(1)^2} \big\| h - \hat{U}_n(H) 1_n \big\| \Big), \qquad (10)$$
where 1 − λ₂(1) = 2(1 − λ₂(2)) = 2c(G) and λ₂(1) is the second largest eigenvalue of W_1(G).
The advantage of propagating two observations in U2-gossip is seen in the 1/(1 − λ₂(1)²) term;
however, the absence of averaging leads to an overall √n factor. Intuitively, this is because nodes do
not benefit from each other's estimates. In practice, λ₂(2) and λ₂(1) are close to 1 for reasonably-sized networks (for instance, λ₂(2) = 1 − 1/n for the complete graph), so the square term does
not provide much gain and the √n factor dominates in (10). We thus expect U2-gossip to converge
slower than GoSta, which is confirmed by the numerical results presented in Section 5.
4.2 Asynchronous Setting
In practical settings, nodes may not have access to a global clock to synchronize the updates. In this
section, we remove the global clock assumption and propose a fully asynchronous algorithm where
each node has a local clock, ticking according to a rate-1 Poisson process. Since the local clocks are i.i.d., one
can use an equivalent model with a global clock ticking according to a rate-n Poisson process and a random
edge draw at each iteration, as in the synchronous setting (one may refer to [2] for more details on clock
modeling). However, at a given iteration, the estimate update step now only involves the selected
pair of nodes. Therefore, the nodes need to maintain an estimate of the current iteration number to
ensure convergence to an unbiased estimate of Û_n(H). Hence, for all k ∈ [n], let p_k ∈ [0, 1] denote
the probability of node k being picked at any iteration. With our assumption that nodes activate with
a uniform distribution over E, p_k = 2 d_k / |E|. Moreover, the number of times a node k has been
selected at a given iteration t > 0 follows a binomial distribution with parameters t and p_k. Let us
define m_k(t) such that m_k(0) = 0 and, for t > 0:
$$m_k(t) = \begin{cases} m_k(t-1) + \frac{1}{p_k} & \text{if } k \text{ is picked at iteration } t, \\ m_k(t-1) & \text{otherwise.} \end{cases} \qquad (11)$$
For any k ∈ [n] and any t > 0, one has E[m_k(t)] = t · p_k · (1/p_k) = t. Therefore, given that
every node knows its degree and the total number of edges in the network, the iteration estimates are
unbiased. We can now give an asynchronous version of GoSta, as stated in Algorithm 2.
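For illustration, one asynchronous wake-up of the pair (i, j) can be sketched as follows (our own sketch, not the authors' code); here `p` holds the activation probabilities p_k = 2 d_k / |E| and `m` the counters from (11), both assumed to be indexable by node.

```python
def async_step(i, j, Z, Y, m, p, X, H):
    """One wake-up of the pair (i, j) drawn from E (lists/arrays indexed by node)."""
    m[i] += 1 / p[i]                 # E[m_k(t)] = t, so m_k is an unbiased iteration clock
    m[j] += 1 / p[j]
    Z[i] = Z[j] = (Z[i] + Z[j]) / 2  # average the two local estimates
    for k in (i, j):                 # running average with data-dependent step 1/(p_k m_k)
        step = 1 / (p[k] * m[k])
        Z[k] = (1 - step) * Z[k] + step * H(X[k], X[Y[k]])
    Y[i], Y[j] = Y[j], Y[i]          # swap auxiliary observations
```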
To show that local estimates converge to Û_n(H), we use a similar model as in the synchronous
setting. The time dependency of the transition matrix is more complex; so is the upper bound.
⁴ The proof can be found in the supplementary material.
Table 1: Value of 1 − λ₂(2) for each network.

Dataset                    Complete graph   Watts-Strogatz   2d-grid graph
Wine Quality (n = 1599)    6.26 × 10⁻⁴      2.72 × 10⁻⁵      3.66 × 10⁻⁶
SVMguide3 (n = 1260)       7.94 × 10⁻⁴      5.49 × 10⁻⁵      6.03 × 10⁻⁶
Theorem 2. Let G be a connected and non-bipartite graph with n nodes, X ∈ ℝ^{n×d} a design
matrix and (Z(t)) the sequence of estimates generated by Algorithm 2. For all k ∈ [n], we have:
$$\lim_{t \to +\infty} E[Z_k(t)] = \frac{1}{n^2} \sum_{1 \le i,j \le n} H(X_i, X_j) = \hat{U}_n(H). \qquad (12)$$
Moreover, there exists a constant c′(G) > 0 such that, for any t > 1,
$$\big\| E[Z(t)] - \hat{U}_n(H) 1_n \big\| \le c'(G)\, \frac{\log t}{t}\, \|H\|. \qquad (13)$$
Proof. See supplementary material.
Remark 2. Our methods can be extended to the situation where nodes contain multiple observations: when drawn, a node will pick a random auxiliary observation to swap. Similar convergence
results are achieved by splitting each node into a set of nodes, each containing only one observation
and new edges weighted judiciously.
5 Experiments
In this section, we present two applications on real datasets: the decentralized estimation of the Area
Under the ROC Curve (AUC) and of the within-cluster point scatter. We compare the performance of
our algorithms to that of U2-gossip [17] (see the supplementary material for additional comparisons
to some baseline methods). We perform our simulations on the three types of network described
below (corresponding values of 1 − λ₂(2) are shown in Table 1).
• Complete graph: This is the case where all nodes are connected to each other. It is the ideal
situation in our framework, since any pair of nodes can communicate directly. For a complete graph
G of size n > 0, 1 − λ₂(2) = 1/n; see [1, Ch. 9] or [3, Ch. 1] for details.
• Two-dimensional grid: Here, nodes are located on a 2D grid, and each node is connected to its
four neighbors on the grid. This network offers a regular graph with isotropic communication, but
its diameter (√n) is quite high, especially in comparison to usual scale-free networks.
• Watts-Strogatz: This random network generation technique is introduced in [20] and allows us to
create networks with various communication properties. It relies on two parameters: the average
degree of the network k and a rewiring probability p. In expectation, the higher the rewiring probability, the better the connectivity of the network. Here, we use k = 5 and p = 0.3 to achieve a
connectivity compromise between the complete graph and the two-dimensional grid.
AUC measure. We first focus on the AUC measure of a linear classifier θ as defined in (3). We use
the SVMguide3 binary classification dataset, which contains n = 1260 points in d = 23 dimensions.⁵
We set θ to the difference between the class means. For each generated network, we perform 50 runs
of GoSta-sync (Algorithm 1) and U2-gossip. The top row of Figure 2 shows the evolution over time
of the average relative error and the associated standard deviation across nodes for both algorithms
on each type of network. On average, GoSta-sync outperforms U2-gossip on every network. The
variance of the estimates across nodes is also lower due to the averaging step. Interestingly, the
performance gap between the two algorithms increases greatly early on, presumably because the
exponential term in the convergence bound of GoSta-sync is significant in the first steps.
Within-cluster point scatter. We then turn to the within-cluster point scatter defined in (2). We use
the Wine Quality dataset, which contains n = 1599 points in d = 12 dimensions, with a total of K =
11 classes.⁶ We focus on the partition P associated with the class centroids and run the aforementioned
⁵ This dataset is available at http://mldata.org/repository/data/viewslug/svmguide3/
⁶ This dataset is available at https://archive.ics.uci.edu/ml/datasets/Wine
Figure 2: Evolution of the average relative error (solid line) and its standard deviation (filled area)
with the number of iterations for U2-gossip (red) and Algorithm 1 (blue) on the SVMguide3 dataset
(top row) and the Wine Quality dataset (bottom row).
Figure 3: Panel (a) shows the average number of iterations needed to reach a relative error below
0.2, for several network sizes n ∈ [50, 1599]. Panel (b) compares the relative error (solid line) and
its standard deviation (filled area) of the synchronous (blue) and asynchronous (red) versions of GoSta.
methods 50 times. The results are shown in the bottom row of Figure 2. As in the case of AUC,
GoSta-sync achieves better performance on all types of networks, both in terms of average error and
variance. In Figure 3a, we show the average time needed to reach a 0.2 relative error on a complete
graph ranging from n = 50 to n = 1599. As predicted by our analysis, the performance gap
widens in favor of GoSta as the size of the graph increases. Finally, we compare the performance
of GoSta-sync and GoSta-async (Algorithm 2) in Figure 3b. Despite the slightly worse theoretical
convergence rate for GoSta-async, both algorithms have comparable performance in practice.
6 Conclusion
We have introduced new synchronous and asynchronous randomized gossip algorithms to compute
statistics that depend on pairs of observations (U -statistics). We have proved the convergence rate in
both settings, and numerical experiments confirm the practical interest of the proposed algorithms.
In future work, we plan to investigate whether adaptive communication schemes (such as those of
[6, 13]) can be used to speed-up our algorithms. Our contribution could also be used as a building
block for decentralized optimization of U -statistics, extending for instance the approaches of [7, 16].
Acknowledgements. This work was supported by the chair Machine Learning for Big Data of
Télécom ParisTech, and was conducted when A. Bellet was affiliated with Télécom ParisTech.
References
[1] Béla Bollobás. Modern Graph Theory, volume 184. Springer, 1998.
[2] Stephen P. Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms. IEEE Transactions on Information Theory, 52(6):2508-2530, 2006.
[3] Fan R. K. Chung. Spectral Graph Theory, volume 92. American Mathematical Society, 1997.
[4] Stéphan Clémençon. On U-processes and clustering performance. In Advances in Neural Information Processing Systems 24, pages 37-45, 2011.
[5] Alexandros G. Dimakis, Soummya Kar, José M. F. Moura, Michael G. Rabbat, and Anna Scaglione. Gossip algorithms for distributed signal processing. Proceedings of the IEEE, 98(11):1847-1864, 2010.
[6] Alexandros G. Dimakis, Anand D. Sarwate, and Martin J. Wainwright. Geographic gossip: Efficient averaging for sensor networks. IEEE Transactions on Signal Processing, 56(3):1205-1216, 2008.
[7] John C. Duchi, Alekh Agarwal, and Martin J. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):592-606, 2012.
[8] James A. Hanley and Barbara J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29-36, 1982.
[9] Richard Karp, Christian Schindelhauer, Scott Shenker, and Berthold Vöcking. Randomized rumor spreading. In Symposium on Foundations of Computer Science, pages 565-574. IEEE, 2000.
[10] David Kempe, Alin Dobra, and Johannes Gehrke. Gossip-based computation of aggregate information. In Symposium on Foundations of Computer Science, pages 482-491. IEEE, 2003.
[11] Wojtek Kowalczyk and Nikos A. Vlassis. Newscast EM. In Advances in Neural Information Processing Systems, pages 713-720, 2004.
[12] Alan J. Lee. U-Statistics: Theory and Practice. Marcel Dekker, New York, 1990.
[13] Wenjun Li, Huaiyu Dai, and Yanbing Zhang. Location-aided fast distributed consensus in wireless networks. IEEE Transactions on Information Theory, 56(12):6208-6227, 2010.
[14] Henry B. Mann and Donald R. Whitney. On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18(1):50-60, 1947.
[15] Damon Mosk-Aoyama and Devavrat Shah. Fast distributed algorithms for computing separable functions. IEEE Transactions on Information Theory, 54(7):2997-3007, 2008.
[16] Angelia Nedic and Asuman Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48-61, 2009.
[17] Kristiaan Pelckmans and Johan Suykens. Gossip algorithms for computing U-statistics. In IFAC Workshop on Estimation and Control of Networked Systems, pages 48-53, 2009.
[18] Devavrat Shah. Gossip algorithms. Foundations and Trends in Networking, 3(1):1-125, 2009.
[19] John N. Tsitsiklis. Problems in decentralized decision making and computation. PhD thesis, Massachusetts Institute of Technology, 1984.
[20] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440-442, 1998.
5,244 | 5,748 | The Self-Normalized Estimator for Counterfactual Learning
Thorsten Joachims, Department of Computer Science, Cornell University, tj@cs.cornell.edu
Adith Swaminathan, Department of Computer Science, Cornell University, adith@cs.cornell.edu
Abstract
This paper identifies a severe problem of the counterfactual risk estimator typically used in batch learning from logged bandit feedback (BLBF), and proposes
the use of an alternative estimator that avoids this problem. In the BLBF setting,
the learner does not receive full-information feedback like in supervised learning, but observes feedback only for the actions taken by a historical policy. This
makes BLBF algorithms particularly attractive for training online systems (e.g., ad
placement, web search, recommendation) using their historical logs. The Counterfactual Risk Minimization (CRM) principle [1] offers a general recipe for designing BLBF algorithms. It requires a counterfactual risk estimator, and virtually
all existing works on BLBF have focused on a particular unbiased estimator. We
show that this conventional estimator suffers from a propensity overfitting problem
when used for learning over complex hypothesis spaces. We propose to replace
the risk estimator with a self-normalized estimator, showing that it neatly avoids
this problem. This naturally gives rise to a new learning algorithm, Normalized
Policy Optimizer for Exponential Models (Norm-POEM), for structured output
prediction using linear rules. We evaluate the empirical effectiveness of Norm-POEM on several multi-label classification problems, finding that it consistently
outperforms the conventional estimator.
1 Introduction
Most interactive systems (e.g. search engines, recommender systems, ad platforms) record large
quantities of log data which contain valuable information about the system?s performance and user
experience. For example, the logs of an ad-placement system record which ad was presented in a
given context and whether the user clicked on it. While these logs contain information that should
inform the design of future systems, the log entries do not provide supervised training data in the
conventional sense. This prevents us from directly employing supervised learning algorithms to
improve these systems. In particular, each entry only provides bandit feedback since the loss/reward
is only observed for the particular action chosen by the system (e.g. the presented ad) but not for
all the other actions the system could have taken. Moreover, the log entries are biased since actions
that are systematically favored by the system will be over-represented in the logs.
Learning from historical logs data can be formalized as batch learning from logged bandit feedback
(BLBF) [2, 1]. Unlike the well-studied problem of online learning from bandit feedback [3], this
setting does not require the learner to have interactive control over the system. Learning in such
a setting is closely related to the problem of off-policy evaluation in reinforcement learning [4] ?
we would like to know how well a new system (policy) would perform if it had been used in the
past. This motivates the use of counterfactual estimators [5]. Following an approach analogous
to Empirical Risk Minimization (ERM), it was shown that such estimators can be used to design
learning algorithms for batch learning from logged bandit feedback [6, 5, 1].
1
However the conventional counterfactual risk estimator used in prior works on BLBF exhibits severe
anomalies that can lead to degeneracies when used in ERM. In particular, the estimator exhibits a
new form of Propensity Overfitting that causes severely biased risk estimates for the ERM minimizer. By introducing multiplicative control variates, we propose to replace this risk estimator with
a Self-Normalized Risk Estimator that provably avoids these degeneracies. An extensive empirical
evaluation confirms that the desirable theoretical properties of the Self-Normalized Risk Estimator
translate into improved generalization performance and robustness.
2 Related work
Batch learning from logged bandit feedback is an instance of causal inference. Classic inference
techniques like propensity score matching [7] are, hence, immediately relevant. BLBF is closely
related to the problem of learning under covariate shift (also called domain adaptation or sample
bias correction) [8] as well as off-policy evaluation in reinforcement learning [4]. Lower bounds for
domain adaptation [8] and impossibility results for off-policy evaluation [9], hence, also apply to
propensity score matching [7], costing [10] and other importance sampling approaches to BLBF.
Several counterfactual estimators have been developed for off-policy evaluation [11, 6, 5]. All these
estimators are instances of importance sampling for Monte Carlo approximation and can be traced
back to What-If simulations [12]. Learning (upper) bounds have been developed recently [13, 1, 14]
that show that these estimators can work for BLBF. We additionally show that importance sampling
can overfit in hitherto unforeseen ways with the capacity of the hypothesis space during learning.
We call this new kind of overfitting Propensity Overfitting.
Classic variance reduction techniques for importance sampling are also useful for counterfactual
evaluation and learning. For instance, importance weights can be ?clipped? [15] to trade-off bias
against variance in the estimators [5]. Additive control variates give rise to regression estimators
[16] and doubly robust estimators [6]. Our proposal uses multiplicative control variates. These
are widely used in financial applications (see [17] and references therein) and policy iteration for
reinforcement learning (e.g. [18]). In particular, we study the self-normalized estimator [12] which
is superior to the vanilla estimator when fluctuations in the weights dominate the variance [19]. We
additionally show that the self-normalized estimator neatly addresses propensity overfitting.
3 Batch learning from logged bandit feedback
Following [1], we focus on the stochastic, cardinal, contextual bandit setting and recap the essence
of the CRM principle. The inputs of a structured prediction problem x ∈ X are drawn i.i.d. from a
fixed but unknown distribution Pr(X). The outputs are denoted by y ∈ Y. The hypothesis space H
contains stochastic hypotheses h(Y | x) that define a probability distribution over Y. A hypothesis
h ∈ H makes predictions by sampling from the conditional distribution y ∼ h(Y | x). This definition
of H also captures deterministic hypotheses. For notational convenience, we denote the probability
distribution h(Y | x) by h(x), and the probability assigned by h(x) to y as h(y | x). We use (x, y) ∼ h
to refer to samples of x ∼ Pr(X), y ∼ h(x), and when clear from the context, we will drop (x, y).
Bandit feedback means we only observe the feedback δ(x, y) for the specific y that was predicted,
but not for any of the other possible predictions Y \ {y}. The feedback is just a number, called the
loss δ : X × Y → ℝ. Smaller numbers are desirable. In general, the loss is the (noisy) realization of
a stochastic random variable. The following exposition can be readily extended to the general case
by setting δ(x, y) = E[δ | x, y]. The expected loss, called the risk, of a hypothesis, R(h), is
$$R(h) = E_{x \sim \Pr(X)}\, E_{y \sim h(x)}\, [\delta(x, y)] = E_h[\delta(x, y)]. \qquad (1)$$
The aim of learning is to find a hypothesis h ∈ H that has minimum risk.
Counterfactual estimators. We wish to use the logs of a historical system to perform learning. To
ensure that learning will not be impossible [9], we assume the historical algorithm whose predictions
we record in our logged data is a stationary policy h₀(x) with full support over Y. For a new
hypothesis h ≠ h₀, we cannot use the empirical risk estimator used in supervised learning [20] to
directly approximate R(h), because the data contains samples drawn from h₀ while the risk from
Equation (1) requires samples from h.
Importance sampling fixes this distribution mismatch:
$$R(h) = E_h[\delta(x, y)] = E_{h_0}\!\left[ \delta(x, y)\, \frac{h(y \mid x)}{h_0(y \mid x)} \right].$$
So, with data collected from the historical system
$$\mathcal{D} = \{(x_1, y_1, \delta_1, p_1), \dots, (x_n, y_n, \delta_n, p_n)\},$$
where (x_i, y_i) ∼ h₀, δ_i ≡ δ(x_i, y_i) and p_i ≡ h₀(y_i | x_i), we can derive an unbiased estimate of
R(h) via Monte Carlo approximation:
$$\hat{R}(h) = \frac{1}{n} \sum_{i=1}^{n} \delta_i\, \frac{h(y_i \mid x_i)}{p_i}. \qquad (2)$$
This classic inverse propensity estimator [7] has unbounded variance: p_i ≈ 0 in D can cause R̂(h)
to be arbitrarily far away from the true risk R(h). To remedy this problem, several thresholding
schemes have been proposed and studied in the literature [15, 8, 5, 11]. The straightforward option
is to cap the propensity weights [15, 1], i.e. pick M > 1 and set
$$\hat{R}^M(h) = \frac{1}{n} \sum_{i=1}^{n} \delta_i \min\!\left\{ M,\, \frac{h(y_i \mid x_i)}{p_i} \right\}.$$
Smaller values of M reduce the variance of R̂^M(h) but induce a larger bias.
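For concreteness, both estimators reduce to one-liners on the logged data; the sketch below is our own, with hypothetical array names for the logged losses, logged propensities and new-policy probabilities.

```python
import numpy as np

def ips_risk(deltas, h_probs, props):
    """Unbiased estimator (2): deltas[i] = loss, props[i] = h0(y_i|x_i), h_probs[i] = h(y_i|x_i)."""
    return np.mean(deltas * h_probs / props)

def clipped_ips_risk(deltas, h_probs, props, M):
    """Clipped variant: importance weights capped at M, trading variance for bias."""
    return np.mean(deltas * np.minimum(M, h_probs / props))
```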
Counterfactual Risk Minimization. Importance sampling also introduces variance in R̂^M(h)
through the variability of h(y_i | x_i)/p_i. This variance can be drastically different for different h ∈ H. The
CRM principle is derived from a generalization error bound that reasons about this variance using
an empirical Bernstein argument [1, 13]. Let δ(·, ·) ∈ [−1, 0] and consider the random variable
u_h = δ(x, y) min{M, h(y|x)/h₀(y|x)}. Note that D contains n i.i.d. observations u_h^i.
Theorem 1. Denote the empirical variance of u_h by V̂ar(u_h). With probability at least 1 − γ in the
random vector (x_i, y_i) ∼ h₀, for a stochastic hypothesis space H with capacity C(H) and n ≥ 16,
$$\forall h \in \mathcal{H}: \quad R(h) \le \hat{R}^M(h) + \sqrt{\frac{18\, \hat{\mathrm{Var}}(u_h) \log\!\big(\tfrac{10\, C(\mathcal{H})}{\gamma}\big)}{n}} + M\, \frac{15 \log\!\big(\tfrac{10\, C(\mathcal{H})}{\gamma}\big)}{n - 1}.$$
Proof. Refer to Theorem 1 of [1] and the proof of Theorem 6 of [13].
Following Structural Risk Minimization [20], this bound motivates the CRM principle for designing
algorithms for BLBF. A learning algorithm should jointly optimize the estimate R̂^M(h) as well as
its empirical standard deviation, where the latter serves as a data-dependent regularizer:
$$\hat{h}^{CRM} = \underset{h \in \mathcal{H}}{\operatorname{argmin}} \left\{ \hat{R}^M(h) + \lambda \sqrt{\frac{\hat{\mathrm{Var}}(u_h)}{n}} \right\}. \qquad (3)$$
M > 1 and λ ≥ 0 are regularization hyper-parameters.
4 The Propensity Overfitting problem
The CRM objective in Equation (3) penalizes those h ∈ H that are "far" from the logging policy
h₀ (as measured by their empirical variance V̂ar(u_h)). This can be intuitively understood as a
safeguard against overfitting. However, overfitting in BLBF is more nuanced than in conventional
supervised learning. In particular, the unbiased risk estimator of Equation (2) has two anomalies.
Even if δ(·, ·) ∈ [−5, −4], the value of R̂(h) estimated on a finite sample need not lie in that range.
Furthermore, if δ(·, ·) is translated by a constant, δ(·, ·) + C, then R(h) becomes R(h) + C by linearity of
expectation, but the unbiased estimator on a finite sample need not equal R̂(h) + C. In short, this
risk estimator is not equivariant [19]. The various thresholding schemes for importance sampling
only exacerbate this effect. These anomalies leave us vulnerable to a peculiar kind of overfitting, as
we see in the following example.
Example 1. For the input space of integers X = {1..k} and the output space Y = {1..k}, define
$$\delta(x, y) = \begin{cases} -2 & \text{if } y = x, \\ -1 & \text{otherwise.} \end{cases}$$
The hypothesis space H is the set of all deterministic functions f : X → Y:
$$h_f(y \mid x) = \begin{cases} 1 & \text{if } f(x) = y, \\ 0 & \text{otherwise.} \end{cases}$$
Data is drawn uniformly, x ∼ U(X), and h₀(Y | x) = U(Y) for all x. The hypothesis h* with
minimum true risk is h_{f*} with f*(x) = x, which has risk R(h*) = −2.
the following empirical risk as estimated by Equation (2):
n
n
n
1 X hoverf it (yi | xi )
1
1
1X
1X
?
R(hoverf it ) =
?i
?i
?1
=
?
= ?k.
n i=1
pi
n i=1 1/k
n i=1
1/k
Clearly this risk estimate shows severe overfitting, since it can be arbitrarily lower than the true risk
R(h? ) = ?2 of the best hypothesis h? with appropriately chosen k (or, more generally, the choice
of h0 ). This is in stark contrast to overfitting in full-information supervised learning, where at least
the overfitted risk is bounded by the lower range of the loss function. Note that the empirical risk
? ? ) of h? concentrates around ?2. ERM will, hence, almost always select hoverf it over h? .
R(h
Even if we are not in the special case of having a sample with all distinct xi , this type of overfitting
still exists. In particular, if there are only l distinct xi in D, then there still exists a hoverf it with
? overf it ) ? ?k l . Finally, note that this type of overfitting behavior is not an artifact of this
R(h
n
example. Section 7 shows that this is ubiquitous in all the datasets we explored.
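The effect is easy to reproduce numerically. The following small simulation of Example 1 is our own sketch (contexts are drawn without replacement so that the special case of distinct x_i holds).

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 1000, 50
x = rng.choice(k, size=n, replace=False)   # distinct contexts, as in the special case
y = rng.integers(0, k, size=n)             # h0 samples actions uniformly: p_i = 1/k
delta = np.where(y == x, -2.0, -1.0)

# h_overfit memorizes the log: h(y_i | x_i) = 1 for every logged pair, so every
# importance weight equals k and the estimate plunges far below the true optimum.
ips_overfit = np.mean(delta / (1.0 / k))
print(ips_overfit)   # about -(k + 1), far below R(h*) = -2, the best achievable true risk
```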
Maybe this problem could be avoided by transforming the loss? For example, let us translate the
loss by adding 2 to δ so that now all loss values become non-negative. This results in the new loss
function δ′(x, y) taking values 0 and 1. In conventional supervised learning an additive translation
of the loss does not change the empirical risk minimizer. Suppose we draw a sample D in which not
all possible values y for x_i are observed for all x_i in the sample (again, such a sample is likely for
sufficiently large k). Now there are many hypotheses h_overfit′ that predict one of the unobserved y
for each x_i, basically avoiding the training data:
$$\hat{R}(h_{overfit'}) = \frac{1}{n} \sum_{i=1}^{n} \delta_i'\, \frac{h_{overfit'}(y_i \mid x_i)}{p_i} = \frac{1}{n} \sum_{i=1}^{n} \delta_i'\, \frac{0}{1/k} = 0.$$
Again we are faced with overfitting, since many overfit hypotheses are indistinguishable from the
true risk minimizer h* with true risk R(h*) = 0 and empirical risk R̂(h*) = 0.
These examples indicate that this overfitting occurs regardless of how the loss is transformed. Intuitively, this type of overfitting occurs since the risk estimate according to Equation (2) can be minimized not only by putting large probability mass h(y | x) on the examples with low loss δ(x, y),
but also by maximizing (for negative losses) or minimizing (for positive losses) the sum of the weights
$$\hat{S}(h) = \frac{1}{n} \sum_{i=1}^{n} \frac{h(y_i \mid x_i)}{p_i}. \qquad (4)$$
For this reason, we call this type of overfitting Propensity Overfitting. This is in stark contrast to
overfitting in supervised learning, which we call Loss Overfitting. Intuitively, Loss Overfitting occurs because the capacity of H fits spurious patterns of low δ(x, y) in the data. In Propensity Overfitting, the capacity in H allows overfitting of the propensity weights p_i: for positive δ, hypotheses
that avoid D are selected; for negative δ, hypotheses that over-represent D are selected.
The variance regularization of CRM combats both Loss Overfitting and Propensity Overfitting by
optimizing a more informed generalization error bound. However, the empirical variance estimate
is also affected by Propensity Overfitting, especially for positive losses. Can we avoid Propensity
Overfitting more directly?
5 Control variates and the Self-Normalized estimator
To avoid Propensity Overfitting, we must first detect when and where it is occurring. For this,
we draw on diagnostic tools used in importance sampling. Note that for any h ∈ H, the sum
of propensity weights Ŝ(h) from Equation (4) always has expected value 1 under the conditions
required for the unbiased estimator of Equation (2):
$$E\big[\hat{S}(h)\big] = \frac{1}{n} \sum_{i=1}^{n} \int\!\!\int \frac{h(y_i \mid x_i)}{h_0(y_i \mid x_i)}\, h_0(y_i \mid x_i) \Pr(x_i)\, dy_i\, dx_i = \frac{1}{n} \sum_{i=1}^{n} \int 1 \cdot \Pr(x_i)\, dx_i = 1. \qquad (5)$$
This means that we can identify hypotheses that suffer from Propensity Overfitting based on how far
Ŝ(h) deviates from its expected value of 1. Since h(y|x)/h₀(y|x) is likely correlated with δ(x, y) h(y|x)/h₀(y|x), a
large deviation in Ŝ(h) suggests a large deviation in R̂(h) and consequently a bad risk estimate.
How can we use the knowledge that ∀h ∈ H : E[Ŝ(h)] = 1 to avoid degenerate risk estimates in
a principled way? While one could use concentration inequalities to explicitly detect and eliminate
overfit hypotheses based on Ŝ(h), we use control variates to derive an improved risk estimator that
directly incorporates this knowledge.
Control variates. Control variates, random variables whose expectation is known, are a classic
tool used to reduce the variance of Monte Carlo approximations [21]. Let V(X) be a control variate
with known expectation E_X[V(X)] = v ≠ 0, and let E_X[W(X)] be an expectation that we would
like to estimate based on independent samples of X. Employing V(X) as a multiplicative control
variate, we can write E_X[W(X)] = (E[W(X)] / E[V(X)]) · v. This motivates the ratio estimator
$$\hat{W}^{SN} = \frac{\sum_{i=1}^{n} W(X_i)}{\sum_{i=1}^{n} V(X_i)}\, v, \qquad (6)$$
which is called the Self-Normalized estimator in the importance sampling literature [12, 22, 23].
This estimator has substantially lower variance if W(X) and V(X) are correlated.
Self-Normalized risk estimator. Let us use Ŝ(h) as a control variate for R̂(h), yielding
$$\hat{R}^{SN}(h) = \frac{\sum_{i=1}^{n} \delta_i\, \frac{h(y_i \mid x_i)}{p_i}}{\sum_{i=1}^{n} \frac{h(y_i \mid x_i)}{p_i}}. \qquad (7)$$
Hesterberg reports that this estimator tends to be more accurate than the unbiased estimator of Equation (2) when fluctuations in the sampling weights dominate the fluctuations in δ(x, y) [19].
Observe that the estimate is just a convex combination of the δ_i observed in the sample. If δ(·, ·)
is now translated by a constant, δ(·, ·) + C, both the true risk R(h) and the finite sample estimate
R̂^{SN}(h) get shifted by C. Hence R̂^{SN}(h) is equivariant, unlike R̂(h) [19]. Moreover, R̂^{SN}(h) is
always bounded within the range of δ. So, the overfitted risk due to ERM will now be bounded by
the lower range of the loss, analogous to full-information supervised learning.
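On the same logged arrays as the earlier sketch, the self-normalized estimator and the diagnostic Ŝ(h) from Equation (4) look as follows; again a sketch of our own, not the paper's code.

```python
import numpy as np

def snips_risk(deltas, h_probs, props):
    """Self-normalized estimator (7): a convex combination of the observed losses."""
    w = h_probs / props
    return np.sum(deltas * w) / np.sum(w)

def weight_sum(h_probs, props):
    """S(h) from Equation (4); values far from 1 flag propensity overfitting."""
    return np.mean(h_probs / props)
```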
Finally, while the self-normalized risk estimator is not unbiased (E[R̂(h)/Ŝ(h)] ≠ E[R̂(h)]/E[Ŝ(h)]
in general), it is strongly consistent and approaches the desired expectation when n is large.
Theorem 2. Let D be drawn (x_i, y_i) ∼ h₀ i.i.d., from a h₀ that has full support over Y. Then,
$$\forall h \in \mathcal{H}: \quad \Pr\!\Big( \lim_{n \to \infty} \hat{R}^{SN}(h) = R(h) \Big) = 1.$$
Proof. The numerator of R̂^{SN}(h) in (7) consists of i.i.d. observations with mean R(h). The strong law
of large numbers gives Pr(lim_{n→∞} (1/n) Σ_i δ_i h(y_i|x_i)/p_i = R(h)) = 1. Similarly, the denominator consists of i.i.d. observations with mean 1, so the strong law of large numbers implies
Pr(lim_{n→∞} (1/n) Σ_i h(y_i|x_i)/p_i = 1) = 1. Hence, Pr(lim_{n→∞} R̂^{SN}(h) = R(h)) = 1.
In summary, the self-normalized risk estimator R̂^{SN}(h) in Equation (7) resolves all the problems of
the unbiased estimator R̂(h) from Equation (2) identified in Section 4.
6 Learning method: Norm-POEM
We now derive a learning algorithm, called Norm-POEM, for structured output prediction. The
algorithm is analogous to POEM [1] in its choice of hypothesis space and its application of the
CRM principle, but it replaces the conventional estimator (2) with the self-normalized estimator (7).
Hypothesis space. Following [1, 24], Norm-POEM learns stochastic linear rules h_w ∈ H_lin
parametrized by w that operate on a d-dimensional joint feature map φ(x, y):
$$h_w(y \mid x) = \exp(w \cdot \phi(x, y)) / Z(x),$$
where Z(x) = Σ_{y′∈Y} exp(w · φ(x, y′)) is the partition function.
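For a single context x with a finite candidate set Y, evaluating h_w is a softmax over the joint features. The sketch below is our own and assumes the features are stacked into a matrix, one row per candidate y.

```python
import numpy as np

def h_w(w, phi_xy):
    """phi_xy: |Y| x d matrix whose row r is phi(x, y_r) for candidate y_r."""
    scores = phi_xy @ w
    scores -= scores.max()            # subtract the max for numerical stability
    p = np.exp(scores)
    return p / p.sum()                # h_w(y | x) = exp(w . phi(x, y)) / Z(x)
```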
Variance estimator. In order to instantiate the CRM objective from Equation (3), we need an
empirical variance estimate V̂ar(R̂^{SN}(h)) for the self-normalized risk estimator. Following [23,
Section 4.3], we use an approximate variance estimate for the ratio estimator of Equation (6). Using
the Normal approximation argument [21, Equation 9.9],
$$\hat{\mathrm{Var}}(\hat{R}^{SN}(h)) = \frac{\sum_{i=1}^{n} (\delta_i - \hat{R}^{SN}(h))^2 \big(\frac{h(y_i \mid x_i)}{p_i}\big)^2}{\big(\sum_{i=1}^{n} \frac{h(y_i \mid x_i)}{p_i}\big)^2}. \qquad (8)$$
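A direct transcription of (8) on the logged arrays, matching the self-normalized risk sketch above (our own illustration):

```python
import numpy as np

def snips_variance(deltas, h_probs, props):
    """Approximate variance (8) of the self-normalized risk estimate."""
    w = h_probs / props
    r_sn = np.sum(deltas * w) / np.sum(w)
    return np.sum((deltas - r_sn) ** 2 * w ** 2) / np.sum(w) ** 2
```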
Using the delta method to approximate the variance [22] yields the same formula. To invoke asymptotic normality of the estimator (and indeed, for reliable importance sampling estimates), we require
the true variance of the self-normalized estimator, Var(R̂^{SN}(h)), to exist. We can guarantee this by
thresholding the importance weights, analogous to R̂^M(h).
The benefits of the self-normalized estimator come at a computational cost. The risk estimator
of POEM had a simpler variance estimate which could be approximated by Taylor expansion and
optimized using stochastic gradient descent. The variance of Equation (8) does not admit stochastic
optimization. Surprisingly, in our experiments in Section 7 we find that the improved robustness of
Norm-POEM permits fast convergence during training even without stochastic optimization.
Training objective of Norm-POEM. The objective is now derived by substituting the self-normalized risk estimator of Equation (7) and its sample variance estimate from Equation (8) into
the CRM objective (3) for the hypothesis space H_lin. By design, h_w lies in the exponential family
of distributions. So, the gradient of the resulting objective can be tractably computed whenever the
partition functions Z(x_i) are tractable. Doing so yields a non-convex objective in the parameters
w, which we optimize using L-BFGS. The choice of L-BFGS for non-convex and non-smooth optimization is well supported [25, 26]. Analogous to POEM, the hyper-parameters M (clipping to
prevent unbounded variance) and λ (strength of variance regularization) can be calibrated via counterfactual evaluation on a held-out validation set. In summary, the per-iteration cost of optimizing the
Norm-POEM objective has the same complexity as the per-iteration cost of POEM with L-BFGS. It
requires the same set of hyper-parameters. And it can be done tractably whenever the corresponding supervised CRF can be learnt efficiently. Software implementing Norm-POEM is available at
http://www.cs.cornell.edu/~adith/POEM.
7 Experiments
We will now empirically verify whether the self-normalized estimator as used in Norm-POEM can indeed
guard against propensity overfitting and attain robust generalization performance. We follow the
Supervised → Bandit methodology [2, 1] to test the limits of counterfactual learning in a well-controlled environment. As in prior work [1], the experiment setup uses supervised datasets for
multi-label classification from the LibSVM repository. In these datasets, the inputs are x ∈ ℝᵖ. The
predictions y ∈ {0, 1}^q are bitvectors indicating the labels assigned to x. The datasets have a range
of features p, labels q and instances n:

Name    p (# features)   q (# labels)   n_train   n_test
Scene   294              6              1211      1196
Yeast   103              14             1500      917
TMC     30438            22             21519     7077
LYRL    47236            4              23149     781265
POEM uses the CRM principle instantiated with the unbiased estimator while Norm-POEM uses the self-normalized estimator. Both use a hypothesis space isomorphic to a Conditional Random Field (CRF) [24]. We therefore report the performance of a full-information CRF (essentially, logistic regression for each of the q labels independently) as a "skyline" for what we can possibly hope to reach by partial-information batch learning from logged bandit feedback. The joint feature map is φ(x, y) = x ⊗ y for all approaches. To simulate a bandit feedback dataset D, we use a CRF with default hyper-parameters trained on 5% of the supervised dataset as h0, and replay the training data 4 times to collect sampled labels from h0. This is inspired by the observation that supervised labels are typically hard to collect relative to bandit feedback. The BLBF algorithms only have access to the Hamming loss δ(y*, y) between the supervised label y* and the sampled label y for input x. Generalization performance R is measured by the expected Hamming loss on the held-out supervised test set; lower is better. Hyper-parameters λ, M were calibrated as recommended and validated on a 25% hold-out of D; in summary, our experimental setup is identical to POEM [1]. We report performance of BLBF approaches without ℓ2-regularization here; we observed that Norm-POEM dominated POEM even after ℓ2-regularization. Since the choice of optimization method could be a confounder, we use L-BFGS for all methods and experiments.
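As an illustration of this protocol, here is a sketch of the Supervised → Bandit conversion; sample_fn is a hypothetical stand-in for sampling a label bitvector (with its propensity) from the logging CRF h0.

import numpy as np

def hamming_loss(y_star, y):
    # Number of label positions on which the two bitvectors disagree.
    return int(np.sum(y_star != y))

def simulate_bandit_feedback(X, Y, sample_fn, replay_count=4, seed=0):
    """Supervised -> Bandit conversion (a sketch with illustrative names).

    sample_fn(x, rng) must return (y, p0): a label bitvector sampled from
    the logging policy h0 together with its propensity h0(y | x).
    """
    rng = np.random.default_rng(seed)
    log = []
    for _ in range(replay_count):
        for x, y_star in zip(X, Y):
            y, p0 = sample_fn(x, rng)
            log.append((x, y, hamming_loss(y_star, y), p0))
    return log  # tuples (x, y, delta, propensity): the BLBF training data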
What is the generalization performance of Norm-POEM? The key question is whether the appealing theoretical properties of the self-normalized estimator actually lead to better generalization
performance. In Table 1, we report the test set loss for Norm-POEM and POEM averaged over 10
runs. On each run, h0 has varying performance (trained on random 5% subsets) but Norm-POEM
consistently beats POEM.
Table 1: Test set Hamming loss averaged over 10 runs. Norm-POEM significantly outperforms
POEM on all four datasets (one-tailed paired difference t-test at significance level of 0.05).
R            Scene   Yeast   TMC     LYRL
h0           1.511   5.577   3.442   1.459
POEM         1.200   4.520   2.152   0.914
Norm-POEM    1.045   3.876   2.072   0.799
CRF          0.657   2.830   1.187   0.222
The plot below (Figure 1) shows how generalization performance improves with more training data
for a single run of the experiment on the Yeast dataset. We achieve this by varying the number of
times we replay the training set to collect samples from h0 (ReplayCount). Norm-POEM consistently outperforms POEM for all training sample sizes.
[Figure 1 plot: test-set Hamming loss R (y-axis, approximately 3 to 4) against ReplayCount 2^0 to 2^8 (x-axis), with curves for h0, CRF, POEM and Norm-POEM.]
Figure 1: Test set Hamming loss as n → ∞ on the Yeast dataset. All approaches will converge to
CRF performance in the limit, but the rate of convergence is slow since h0 is thin-tailed.
Does Norm-POEM avoid Propensity Overfitting? While the previous results indicate that Norm-POEM achieves better performance, it remains to be verified that this improved performance is indeed due to improved control over propensity overfitting. Table 2 (left) shows the average Ŝ(ĥ) for the hypothesis ĥ selected by each approach. Indeed, Ŝ(ĥ) is close to its known expectation of 1 for Norm-POEM, while it is severely biased for POEM. Furthermore, the value of Ŝ(ĥ) depends heavily on how the losses δ are translated for POEM, as predicted by theory. As anticipated by our earlier observation that the self-normalized estimator is equivariant, Norm-POEM is unaffected by translations of δ. Table 2 (right) shows that the same is true for the prediction error on the test set. Norm-POEM is consistently good while POEM fails catastrophically (for instance, on the TMC dataset, POEM is worse than random guessing).
Table 2: Mean of the unclipped weights Ŝ(ĥ) (left) and test set Hamming loss R(ĥ) (right), averaged over 10 runs. δ > 0 and δ < 0 indicate whether the loss was translated to be positive or negative.

Ŝ(ĥ):
                     Scene   Yeast   TMC     LYRL
POEM (δ > 0)         0.274   0.028   0.000   0.175
POEM (δ < 0)         1.782   5.352   2.802   1.230
Norm-POEM (δ > 0)    0.981   0.840   0.941   0.945
Norm-POEM (δ < 0)    0.981   0.821   0.938   0.945

R(ĥ):
                     Scene   Yeast   TMC     LYRL
POEM (δ > 0)         2.059   5.441   17.305  2.399
POEM (δ < 0)         1.200   4.520   2.152   0.914
Norm-POEM (δ > 0)    1.058   3.881   2.079   0.799
Norm-POEM (δ < 0)    1.045   3.876   2.072   0.799
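The Ŝ(ĥ) diagnostic reported in Table 2 is simply the empirical mean of the unclipped importance weights under the selected hypothesis; a two-line sketch (helper name ours):

import numpy as np

def propensity_diagnostic(new_probs, log_probs):
    # Mean unclipped importance weight; values far from the known
    # expectation of 1 signal propensity overfitting.
    return float(np.mean(new_probs / log_probs))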
Is CRM variance regularization still necessary? It may be possible that the improved self-normalized estimator no longer requires variance regularization. The loss of the unregularized estimator is reported (Norm-IPS) in Table 3. We see that variance regularization still helps.
Table 3: Test set Hamming loss for Norm-POEM and the variance agnostic Norm-IPS averaged over
the same 10 runs as Table 1. On Scene, TMC and LYRL, Norm-POEM is significantly better than
Norm-IPS (one-tailed paired difference t-test at significance level of 0.05).
R           Scene   Yeast   TMC     LYRL
Norm-IPS    1.072   3.905   3.609   0.806
Norm-POEM   1.045   3.876   2.072   0.799
How computationally efficient is Norm-POEM? The runtime of Norm-POEM is, surprisingly, faster than that of POEM. Even though normalization increases the per-iteration computation cost, optimization tends to converge in fewer iterations than for POEM. We find that POEM picks a hypothesis with large ‖w‖, attempting to assign a probability of 1 to all training points with negative losses. However, Norm-POEM converges to a much shorter ‖w‖. The loss of an instance relative to others in a sample D governs how Norm-POEM tries to fit to it. This is another nice consequence of the fact that the overfitted risk of R̂_SN(h) is bounded and small. Overall, the runtime of Norm-POEM is on the same order of magnitude as that of a full-information CRF, and is competitive with the runtimes reported for POEM with stochastic optimization and early stopping [1], while providing substantially better generalization performance.
Table 4: Time in seconds, averaged across validation runs. CRF is implemented by scikit-learn [27].
Time(s)     Scene   Yeast   TMC      LYRL
POEM        78.69   98.65   716.51   617.30
Norm-POEM   7.28    10.15   227.88   142.50
CRF         4.94    3.43    89.24    72.34
We observe the same trends for Norm-POEM when different properties of h0 are varied (e.g.
stochasticity and quality), as reported for POEM [1].
8 Conclusions
We identify the problem of propensity overfitting when using the conventional unbiased risk estimator for ERM in batch learning from bandit feedback. To remedy this problem, we propose the use of
a multiplicative control variate that leads to the self-normalized risk estimator. This provably avoids
the anomalies of the conventional estimator. Deriving a new learning algorithm called Norm-POEM
based on the CRM principle using the new estimator, we show that the improved estimator leads to
significantly improved generalization performance.
Acknowledgement
This research was funded in part through NSF Awards IIS-1247637, IIS-1217686, IIS-1513692, the
JTCII Cornell-Technion Research Fund, and a gift from Bloomberg.
References
[1] Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In ICML, 2015.
[2] Alina Beygelzimer and John Langford. The offset tree for learning with partial labels. In KDD, pages 129–138, 2009.
[3] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[4] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[5] Léon Bottou, Jonas Peters, Joaquin Q. Candela, Denis X. Charles, Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Y. Simard, and Ed Snelson. Counterfactual reasoning and learning systems: the example of computational advertising. Journal of Machine Learning Research, 14(1):3207–3260, 2013.
[6] Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. In ICML, pages 1097–1104, 2011.
[7] P. Rosenbaum and D. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
[8] C. Cortes, Y. Mansour, and M. Mohri. Learning bounds for importance weighting. In NIPS, pages 442–450, 2010.
[9] John Langford, Alexander Strehl, and Jennifer Wortman. Exploration scavenging. In ICML, pages 528–535, 2008.
[10] Bianca Zadrozny, John Langford, and Naoki Abe. Cost-sensitive learning by cost-proportionate example weighting. In ICDM, pages 435–, 2003.
[11] Alexander L. Strehl, John Langford, Lihong Li, and Sham Kakade. Learning from logged implicit exploration data. In NIPS, pages 2217–2225, 2010.
[12] H. F. Trotter and J. W. Tukey. Conditional Monte Carlo for normal samples. In Symposium on Monte Carlo Methods, pages 64–79, 1956.
[13] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, 2009.
[14] Philip S. Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. High-confidence off-policy evaluation. In AAAI, pages 3000–3006, 2015.
[15] Edward L. Ionides. Truncated importance sampling. Journal of Computational and Graphical Statistics, 17(2):295–311, 2008.
[16] Lihong Li, R. Munos, and C. Szepesvari. Toward minimax off-policy value estimation. In AISTATS, 2015.
[17] Phelim Boyle, Mark Broadie, and Paul Glasserman. Monte Carlo methods for security pricing. Journal of Economic Dynamics and Control, 21(89):1267–1321, 1997.
[18] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
[19] Tim Hesterberg. Weighted average importance sampling and defensive mixture distributions. Technometrics, 37:185–194, 1995.
[20] V. Vapnik. Statistical Learning Theory. Wiley, Chichester, GB, 1998.
[21] Art B. Owen. Monte Carlo theory, methods and examples. 2013.
[22] Augustine Kong. A note on importance sampling using standardized weights. Technical Report 348, Department of Statistics, University of Chicago, 1992.
[23] R. Rubinstein and D. Kroese. Simulation and the Monte Carlo Method. Wiley, 2nd edition, 2008.
[24] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289, 2001.
[25] Adrian S. Lewis and Michael L. Overton. Nonsmooth optimization via quasi-Newton methods. Mathematical Programming, 141(1-2):135–163, 2013.
[26] Jin Yu, S. V. N. Vishwanathan, S. Günter, and N. Schraudolph. A quasi-Newton approach to nonsmooth convex optimization problems in machine learning. JMLR, 11:1145–1200, 2010.
[27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
Frank-Wolfe Bayesian Quadrature: Probabilistic Integration with Theoretical Guarantees
Chris J. Oates
School of Mathematical and Physical Sciences
University of Technology, Sydney
christopher.oates@uts.edu.au
François-Xavier Briol
Department of Statistics
University of Warwick
f-x.briol@warwick.ac.uk
Mark Girolami
Department of Statistics
University of Warwick
m.girolami@warwick.ac.uk
Michael A. Osborne
Department of Engineering Science
University of Oxford
mosb@robots.ox.ac.uk
Abstract
There is renewed interest in formulating integration as a statistical inference problem, motivated by obtaining a full distribution over numerical error that can be
propagated through subsequent computation. Current methods, such as Bayesian
Quadrature, demonstrate impressive empirical performance but lack theoretical
analysis. An important challenge is therefore to reconcile these probabilistic integrators with rigorous convergence guarantees. In this paper, we present the first
probabilistic integrator that admits such theoretical treatment, called Frank-Wolfe
Bayesian Quadrature (FWBQ). Under FWBQ, convergence to the true value of
the integral is shown to be up to exponential and posterior contraction rates are
proven to be up to super-exponential. In simulations, FWBQ is competitive with
state-of-the-art methods and out-performs alternatives based on Frank-Wolfe optimisation. Our approach is applied to successfully quantify numerical error in the
solution to a challenging Bayesian model choice problem in cellular biology.
1 Introduction
Computing integrals is a core challenge in machine learning and numerical methods play a central
role in this area. This can be problematic when a numerical integration routine is repeatedly called,
maybe millions of times, within a larger computational pipeline. In such situations, the cumulative
impact of numerical errors can be unclear, especially in cases where the error has a non-trivial
structural component. One solution is to model the numerical error statistically and to propagate
this source of uncertainty through subsequent computations. Conversely, an understanding of how
errors arise and propagate can enable the efficient focusing of computational resources upon the
most challenging numerical integrals in a pipeline.
Classical numerical integration schemes do not account for prior information on the integrand and,
as a consequence, can require an excessive number of function evaluations to obtain a prescribed
level of accuracy [21]. Alternatives such as Quasi-Monte Carlo (QMC) can exploit knowledge on
the smoothness of the integrand to obtain optimal convergence rates [7]. However these optimal
rates can only hold on sub-sequences of sample sizes n, a consequence of the fact that all function
evaluations are weighted equally in the estimator [24]. A modern approach that avoids this problem
is to consider arbitrarily weighted combinations of function values; the so-called quadrature rules
(also called cubature rules). Whilst quadrature rules with non-equal weights have received comparatively little theoretical attention, it is known that the extra flexibility given by arbitrary weights can
lead to extremely accurate approximations in many settings (see applications to image de-noising
[3] and mental simulation in psychology [13]).
Probabilistic numerics, introduced in the seminal paper of [6], aims at re-interpreting numerical
tasks as inference tasks that are amenable to statistical analysis.1 Recent developments include
probabilistic solvers for linear systems [14] and differential equations [5, 26]. For the task of computing integrals, Bayesian Quadrature (BQ) [22] and more recent work by [20] provide probabilistic
numerics methods that produce a full posterior distribution on the output of numerical schemes. One
advantage of this approach is that we can propagate uncertainty through all subsequent computations
to explicitly model the impact of numerical error [15]. Contrast this with chaining together classical
error bounds; the result in such cases will typically be a weak bound that provides no insight into the
error structure. At present, a significant shortcoming of these methods is the absence of theoretical
results relating to rates of posterior contraction. This is unsatisfying and has likely hindered the
adoption of probabilistic approaches to integration, since it is not clear that the induced posteriors
represent a sensible quantification of the numerical error (by classical, frequentist standards).
This paper establishes convergence rates for a new probabilistic approach to integration. Our results thus overcome a key perceived weakness associated with probabilistic numerics in the quadrature setting. Our starting point is recent work by [2], who cast the design of quadrature rules as
a problem in convex optimisation that can be solved using the Frank-Wolfe (FW) algorithm. We
propose a hybrid approach of [2] with BQ, taking the form of a quadrature rule, that (i) carries a
full probabilistic interpretation, (ii) is amenable to rigorous theoretical analysis, and (iii) converges
orders-of-magnitude faster, empirically, compared with the original approaches in [2]. In particular,
we prove that super-exponential rates hold for posterior contraction (concentration of the posterior
probability mass on the true value of the integral), showing that the posterior distribution provides
a sensible and effective quantification of the uncertainty arising from numerical error. The methodology is explored in simulations and also applied to a challenging model selection problem from
cellular biology, where numerical error could lead to mis-allocation of expensive resources.
2 Background
2.1 Quadrature and Cubature Methods
Let X ⊆ R^d be a measurable space, with d ∈ N+, and consider a probability density p(x) defined with respect to the Lebesgue measure on X. This paper focuses on computing integrals of the form ∫_X f(x)p(x)dx for a test function f : X → R where, for simplicity, we assume f is square-integrable with respect to p(x). A quadrature rule approximates such integrals as a weighted sum of function values at some design points {x_i}_{i=1}^n ⊂ X:

∫_X f(x)p(x)dx ≈ Σ_{i=1}^n w_i f(x_i).    (1)

Viewing integrals as projections, we write p[f] for the left-hand side and p̂[f] for the right-hand side, where p̂ = Σ_{i=1}^n w_i δ(x_i) and δ(x_i) is a Dirac measure at x_i. Note that p̂ may not be a probability distribution; in fact, the weights {w_i}_{i=1}^n do not have to sum to one or be non-negative. Quadrature rules can be extended to multivariate functions f : X → R^d by taking each component in turn.

There are many ways of choosing the combinations {x_i, w_i}_{i=1}^n in the literature. For example, taking weights to be w_i = 1/n with points {x_i}_{i=1}^n drawn independently from the probability distribution p(x) recovers basic Monte Carlo integration. The case with weights w_i = 1/n, but with points chosen with respect to some specific (possibly deterministic) schemes, includes kernel herding [4] and Quasi-Monte Carlo (QMC) [7]. In Bayesian Quadrature, the points {x_i}_{i=1}^n are chosen to minimise a posterior variance, with weights {w_i}_{i=1}^n arising from a posterior probability distribution.
¹A detailed discussion on probabilistic numerics and an extensive up-to-date bibliography can be found at http://www.probabilistic-numerics.org.

Classical error analysis for quadrature rules is naturally couched in terms of minimising the worst-case estimation error. Let H be a Hilbert space of functions f : X → R, equipped with the inner product ⟨·,·⟩_H and associated norm ‖·‖_H. We define the maximum mean discrepancy (MMD) as:

MMD({x_i, w_i}_{i=1}^n) := sup_{f∈H : ‖f‖_H = 1} (p[f] − p̂[f]).    (2)

The reader can refer to [27] for conditions on H that are needed for the existence of the MMD. The rate at which the MMD decreases with the number of samples n is referred to as the "convergence rate" of the quadrature rule. For Monte Carlo, the MMD decreases with the slow rate of O_P(n^{-1/2}) (where the subscript P specifies that the convergence is in probability). Let H be an RKHS with reproducing kernel k : X × X → R and denote the corresponding canonical feature map by Φ(x) = k(·, x), so that the mean element is given by μ_p(x) = p[Φ(x)] ∈ H. Then, following [27],

MMD({x_i, w_i}_{i=1}^n) = ‖μ_p − μ_p̂‖_H.    (3)

This shows that to obtain low integration error in the RKHS H, one only needs to obtain a good approximation of its mean element μ_p (as ∀f ∈ H : p[f] = ⟨f, μ_p⟩_H). Establishing theoretical results for such quadrature rules is an active area of research [1].
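For concreteness, expanding ‖μ_p − μ_p̂‖²_H gives a closed form for the squared MMD in terms of the Gram matrix, the mean element at the design points, and the constant p[μ_p]. The sketch below assumes these quantities are available in closed form (as they are for the kernels used in Section 4); the helper names are our own.

import numpy as np

def mmd_squared(kernel, mean_embedding, ppk, points, weights):
    """Squared worst-case error of a quadrature rule (Equations 2-3).

    kernel(A, B):      Gram matrix of k between point sets A and B
    mean_embedding(A): vector of mu_p(x) = integral of k(., x) against p
    ppk:               the scalar p[mu_p] (double integral of k against p)
    """
    z = mean_embedding(points)
    K = kernel(points, points)
    w = np.asarray(weights)
    # ||mu_p - sum_i w_i Phi(x_i)||_H^2 expanded via the reproducing property
    return ppk - 2.0 * (w @ z) + w @ K @ w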
2.2 Bayesian Quadrature
Bayesian Quadrature (BQ) was originally introduced in [22] and later revisited by [11, 12] and [23]. The main idea is to place a functional prior on the integrand f, then update this prior through Bayes' theorem by conditioning on both samples {x_i}_{i=1}^n and function evaluations at those sample points {f_i}_{i=1}^n where f_i = f(x_i). This induces a full posterior distribution over functions f and hence over the value of the integral p[f]. The most common implementation assumes a Gaussian Process (GP) prior f ∼ GP(0, k). A useful property motivating the use of GPs is that linear projection preserves normality, so that the posterior distribution for the integral p[f] is also a Gaussian, characterised by its mean and covariance. A natural estimate of the integral p[f] is given by the mean of this posterior distribution, which can be compactly written as

p̂_BQ[f] = z^T K^{-1} f,    (4)

where z_i = μ_p(x_i) and K_ij = k(x_i, x_j). Notice that this estimator takes the form of a quadrature rule with weights w^BQ = z^T K^{-1}. Recently, [25] showed how specific choices of kernel and design points for BQ can recover classical quadrature rules. This begs the question of how to select design points {x_i}_{i=1}^n. A particularly natural approach aims to minimise the posterior uncertainty over the integral p[f], which was shown in [16, Prop. 1] to equal:

v_BQ({x_i}_{i=1}^n) = p[μ_p] − z^T K^{-1} z = MMD²({x_i, w_i^BQ}_{i=1}^n).    (5)

Thus, in the RKHS setting, minimising the posterior variance corresponds to minimising the worst-case error of the quadrature rule. Below we refer to Optimal BQ (OBQ) as BQ coupled with design points {x_i^OBQ}_{i=1}^n chosen to globally minimise (5). We also call Sequential BQ (SBQ) the algorithm that greedily selects design points to give the greatest decrease in posterior variance at each iteration. OBQ will give improved results over SBQ, but cannot be implemented in general, whereas SBQ is comparatively straightforward to implement. There are currently no theoretical results establishing the convergence of either BQ, OBQ or SBQ.

Remark: (5) is independent of the observed function values f. As such, no active learning is possible in SBQ (i.e. surprising function values never cause a revision of a planned sampling schedule). This is not always the case: for example, [12] approximately encodes non-negativity of f into BQ, which leads to a dependence on f in the posterior variance. In this case sequential selection becomes an active strategy that outperforms batch selection in general.
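Under the same assumptions, the BQ weights of Equation (4) and the posterior variance of Equation (5) amount to a single linear solve; the jitter term below is a standard numerical-stability device, not part of the model.

import numpy as np

def bq_weights_and_variance(kernel, mean_embedding, ppk, points, jitter=1e-10):
    """Bayesian Quadrature weights (Eq. 4) and posterior variance (Eq. 5)."""
    z = mean_embedding(points)                        # z_i = mu_p(x_i)
    K = kernel(points, points) + jitter * np.eye(len(points))
    w_bq = np.linalg.solve(K, z)                      # w_BQ = K^{-1} z
    v_bq = ppk - z @ w_bq                             # p[mu_p] - z^T K^{-1} z
    return w_bq, v_bq

The BQ estimate of p[f] is then w_bq @ f_vals, where f_vals collects the function evaluations at the design points, and v_bq equals the squared MMD of this rule.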
2.3 Deriving Quadrature Rules via the Frank-Wolfe Algorithm
Despite the elegance of BQ, its convergence rates have not yet been rigorously established. In brief, this is because p̂_BQ[f] is an orthogonal projection of f onto the affine hull of {Φ(x_i)}_{i=1}^n, rather than e.g. the convex hull. Standard results from the optimisation literature apply to bounded domains, but the affine hull is not bounded (i.e. the BQ weights can be arbitrarily large and possibly negative). Below we describe a solution to the problem of computing integrals recently proposed by [2], based on the FW algorithm, that restricts attention to the (bounded) convex hull of {Φ(x_i)}_{i=1}^n.
Algorithm 1 The Frank-Wolfe (FW) and Frank-Wolfe with Line-Search (FWLS) Algorithms.
Require: function J, initial state g_1 = ḡ_1 ∈ G (and, for FW only: step-size sequence {ρ_i}_{i=1}^n).
1: for i = 2, . . . , n do
2:   Compute ḡ_i = argmin_{g∈G} ⟨g, (DJ)(g_{i−1})⟩_*
3:   [For FWLS only, line search: ρ_i = argmin_{ρ∈[0,1]} J((1 − ρ)g_{i−1} + ρḡ_i)]
4:   Update g_i = (1 − ρ_i)g_{i−1} + ρ_i ḡ_i
5: end for
The Frank-Wolfe (FW) algorithm (Alg. 1), also called the conditional gradient algorithm, is a convex optimization method introduced in [9]. It considers problems of the form min_{g∈G} J(g) where the function J : G → R is convex and continuously differentiable. A particular case of interest in this paper will be when the domain G is a compact and convex space of functions, as recently investigated in [17]. These assumptions imply the existence of a solution to the optimization problem.

In order to define the algorithm rigorously in this case, we introduce the Fréchet derivative of J, denoted DJ, such that for H* the dual space of H, we have the unique map DJ : H → H* such that for each g ∈ H, (DJ)(g) is the function mapping h ∈ H to (DJ)(g)(h) = ⟨g − μ_p, h⟩_H. We also introduce the bilinear map ⟨·,·⟩_* : H × H* → R which, for F ∈ H* given by F(g) = ⟨g, f⟩_H, is the rule giving ⟨h, F⟩_* = ⟨h, f⟩_H.

At each iteration i, the FW algorithm computes a linearisation of the objective function J at the previous state g_{i−1} ∈ G along its gradient (DJ)(g_{i−1}) and selects an "atom" ḡ_i ∈ G that minimises the inner product between a state g and (DJ)(g_{i−1}). The new state g_i ∈ G is then a convex combination of the previous state g_{i−1} and of the atom ḡ_i. This convex combination depends on a step-size ρ_i, which is pre-determined, and different versions of the algorithm may have different step-size sequences.

Our goal in quadrature is to approximate the mean element μ_p. Recently, [2] proposed to frame integration as a FW optimisation problem. Here, the domain G ⊂ H is a space of functions, and the objective function is taken to be:

J(g) = (1/2) ‖g − μ_p‖²_H.    (6)

This gives an approximation of the mean element, and J takes the form of half the posterior variance (or the MMD²). In this functional approximation setting, minimisation of J is carried out over G = M, the marginal polytope of the RKHS H. The marginal polytope M is defined as the closure of the convex hull of Φ(X), so that in particular μ_p ∈ M. Assuming as in [18] that Φ(x) is uniformly bounded in feature space (i.e. ∃R > 0 : ∀x ∈ X, ‖Φ(x)‖_H ≤ R), then M is a closed and bounded set and can be optimised over.
A particular advantage of this method is that it leads to "sparse" solutions which are linear combinations of the atoms {ḡ_i}_{i=1}^n [2]. In particular this provides a weighted estimate for the mean element:

μ̂_FW := g_n = Σ_{i=1}^n [ ∏_{j=i+1}^n (1 − ρ_{j−1}) ] ρ_{i−1} ḡ_i := Σ_{i=1}^n w_i^FW ḡ_i,    (7)

where by default ρ_0 = 1, which leads to all w_i^FW ∈ [0, 1] when ρ_i = 1/(i + 1). A typical sequence of approximations to the mean element is shown in Fig. 1 (left), demonstrating that the approximation quickly converges to the ground truth (in black). Since minimisation of a linear function can be restricted to extreme points of the domain, the atoms will be of the form ḡ_i = Φ(x_i^FW) = k(·, x_i^FW) for some x_i^FW ∈ X. The minimisation in g over G from step 2 in Algorithm 1 therefore becomes a minimisation in x over X, and this algorithm therefore provides us with design points. In practice, at each iteration i, the FW algorithm hence selects a design point x_i^FW ∈ X which induces an atom ḡ_i and gives us an approximation of the mean element μ_p. We denote by μ̂_FW this approximation after n iterations. Using the reproducing property, we can show that the FW estimate is a quadrature rule:

p̂_FW[f] := ⟨f, μ̂_FW⟩_H = ⟨f, Σ_{i=1}^n w_i^FW ḡ_i⟩_H = Σ_{i=1}^n w_i^FW ⟨f, k(·, x_i^FW)⟩_H = Σ_{i=1}^n w_i^FW f(x_i^FW).    (8)

The total computational cost for FW is O(n²). An extension known as FW with Line Search (FWLS) uses a line-search method to find the optimal step size ρ_i at each iteration (see Alg. 1).
[Figure 1 plots: left panel, successive approximations of the mean element; right panel, the mixture density with the first design points marked (asterisks) over the domain x1, x2 ∈ [−10, 10].]
Figure 1: Left: Approximations of the mean element μ_p using the FWLS algorithm, based on n =
1, 2, 5, 10, 50 design points (purple, blue, green, red and orange respectively). It is not possible to
distinguish between approximation and ground truth when n = 50. Right: Density of a mixture
of 20 Gaussian distributions, displaying the first n = 25 design points chosen by FW (red), FWLS
(orange) and SBQ (green). Each method provides well-spaced design points in high-density regions.
Most FW and FWLS design points overlap, partly explaining their similar performance in this case.
Once again, the approximation obtained by FWLS has a sparse expression as a convex combination of all the previously visited states, and we obtain an associated quadrature rule. FWLS has theoretical convergence rates that can be stronger than those of standard versions of FW, but has computational cost of O(n³). The authors in [10] provide a survey of FW-based algorithms and their convergence rates under different regularity conditions on the objective function and domain of optimisation.

Remark: The FW design points {x_i^FW}_{i=1}^n are generally not available in closed form. We follow mainstream literature by selecting, at each iteration, the point that minimises the MMD over a finite collection of M points, drawn i.i.d. from p(x). The authors in [18] proved that this approximation adds an O(M^{-1/4}) term to the MMD, so that theoretical results on FW convergence continue to apply provided that M(n) → ∞ sufficiently quickly. Appendix A provides full details. In practice, one may also make use of a numerical optimisation scheme in order to select the points.
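The following sketch implements FW point selection over a finite candidate pool, as in the Remark above; candidates is an (M, d) array, the state g is tracked through its expansion in the selected atoms, and the step sizes ρ_i = 1/(i + 1) reproduce the weights of Equation (7) (which are uniform for this choice). Names are ours.

import numpy as np

def frank_wolfe_points(kernel, mean_embedding, candidates, n):
    """FW design points and weights over a finite candidate pool (a sketch).

    The linearised objective at state g scores each candidate x by
    <Phi(x), g - mu_p>_H = sum_j c_j k(x_j, x) - mu_p(x).
    """
    z = mean_embedding(candidates)      # mu_p evaluated at all candidates
    K = kernel(candidates, candidates)  # Gram matrix over the pool
    idx, c = [], np.zeros(0)
    for i in range(n):
        scores = K[:, idx] @ c - z      # with empty idx this is just -z
        j = int(np.argmin(scores))      # new atom Phi(x_j)
        rho = 1.0 / (i + 1.0)           # rho_0 = 1: first atom takes all mass
        c = np.append((1.0 - rho) * c, rho)
        idx.append(j)
    return candidates[idx], c           # design points and FW weights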
3 A Hybrid Approach: Frank-Wolfe Bayesian Quadrature
To combine the advantages of a probabilistic integrator with a formal convergence theory, we propose Frank-Wolfe Bayesian Quadrature (FWBQ). In FWBQ, we first select design points {x_i^FW}_{i=1}^n using the FW algorithm. However, when computing the quadrature approximation, instead of using the usual FW weights {w_i^FW}_{i=1}^n we use the weights {w_i^BQ}_{i=1}^n provided by BQ. We denote this quadrature rule by p̂_FWBQ and also consider p̂_FWLSBQ, which uses FWLS in place of FW. As we show below, these hybrid estimators (i) carry the Bayesian interpretation of Sec. 2.2, (ii) permit a rigorous theoretical analysis, and (iii) out-perform existing FW quadrature rules by orders of magnitude in simulations. FWBQ is hence ideally suited to probabilistic numerics applications.
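Composing the helpers sketched in Sections 2.1-2.3, FWBQ itself is a two-step procedure: FW chooses the points, then BQ re-weights them and supplies the posterior variance. As before, the names are ours.

import numpy as np

def fwbq(kernel, mean_embedding, ppk, candidates, n, f):
    # 1. Select design points with Frank-Wolfe (the FW weights are discarded).
    points, _ = frank_wolfe_points(kernel, mean_embedding, candidates, n)
    # 2. Re-weight the same points with the Bayesian Quadrature weights.
    w_bq, v_bq = bq_weights_and_variance(kernel, mean_embedding, ppk, points)
    estimate = w_bq @ np.array([f(x) for x in points])
    return estimate, v_bq  # posterior mean and variance for p[f]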
For these theoretical results we assume that f belongs to a finite-dimensional RKHS H, in line with recent literature [2, 10, 17, 18]. We further assume that X is a compact subset of R^d, that p(x) > 0 ∀x ∈ X and that k is continuous on X × X. Under these hypotheses, Theorem 1 establishes consistency of the posterior mean, while Theorem 2 establishes contraction for the posterior distribution.

Theorem 1 (Consistency). The posterior mean p̂_FWBQ[f] converges to the true integral p[f] at the following rates:

|p[f] − p̂_FWBQ[f]| ≤ MMD({x_i, w_i}_{i=1}^n) ≤ (2D²/R) n^{-1} for FWBQ, and ≤ 2D exp(−(R²/(2D²)) n) for FWLSBQ,    (9)

where FWBQ uses step-size ρ_i = 1/(i + 1), D ∈ (0, ∞) is the diameter of the marginal polytope M and R ∈ (0, ∞) gives the radius of the smallest ball of center μ_p included in M.
Note that all the proofs of this paper can be found in Appendix B. An immediate corollary of Theorem 1 is that FWLSBQ has an asymptotic error which is exponential in n and is therefore superior to that of any QMC estimator [7]. This is not a contradiction: recall that QMC restricts attention to uniform weights, while FWLSBQ is able to propose arbitrary weightings. In addition we highlight a robustness property: even when the assumptions of this section do not hold, one still obtains at least a rate O_P(n^{-1/2}) for the posterior mean using either FWBQ or FWLSBQ [8].
Remark: The choice of kernel affects the convergence of the FWBQ method [15]. Clearly, we expect faster convergence if the function we are integrating is "close" to the space of functions induced by our kernel. Indeed, the kernel specifies the geometry of the marginal polytope M, which in turn directly influences the rate constants R and D associated with FW convex optimisation.
Consistency is only a stepping stone towards our main contribution, which establishes posterior contraction rates for FWBQ. Posterior contraction is important as these results justify, for the first time, the probabilistic numerics approach to integration; that is, we show that the full posterior distribution is a sensible quantification (at least asymptotically) of numerical error in the integration routine:

Theorem 2 (Contraction). Let S ⊂ R be an open neighbourhood of the true integral p[f] and let γ = inf_{r∈S^c} |r − p[f]| > 0. Then the posterior probability mass on S^c = R \ S vanishes at the rate:

prob(S^c) ≤ (2√2 D² / (√π γR)) n^{-1} exp(−(γ²R²/(8D⁴)) n²) for FWBQ, and
prob(S^c) ≤ (2√2 D / (√π γ)) exp(−(R²/(2D²)) n) exp(−(γ²/(8D²)) exp((R²/D²) n)) for FWLSBQ,    (10)

where FWBQ uses step-size ρ_i = 1/(i + 1), D ∈ (0, ∞) is the diameter of the marginal polytope M and R ∈ (0, ∞) gives the radius of the smallest ball of center μ_p included in M.
The contraction rates are exponential for FWBQ and super-exponential for FWLSBQ, and thus the two algorithms enjoy both a probabilistic interpretation and rigorous theoretical guarantees. A notable corollary is that OBQ enjoys the same rates as FWLSBQ, resolving a conjecture by Tony O'Hagan that OBQ converges exponentially [personal communication]:
Corollary. The consistency and contraction rates obtained for FWLSBQ apply also to OBQ.
4 Experimental Results
4.1 Simulation Study
To facilitate the experiments in this paper we followed [1, 2, 11, 18] and employed an exponentiated-quadratic (EQ) kernel k(x, x′) := λ² exp(−‖x − x′‖²₂/(2σ²)). This corresponds to an infinite-dimensional RKHS, not covered by our theory; nevertheless, we note that all simulations are practically finite-dimensional due to rounding at machine precision. See Appendix E for a finite-dimensional approximation using random Fourier features. EQ kernels are popular in the BQ literature as, when p is a mixture of Gaussians, the mean element μ_p is analytically tractable (see Appendix C). Some other (p, k) pairs that produce analytic mean elements are discussed in [1].
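Since the EQ kernel is a scaled Gaussian density, each mixture component integrates against k(·, x) in closed form; a sketch of this mean element (helper name ours, with hyper-parameters defaulting to the values used below):

import numpy as np
from scipy.stats import multivariate_normal as mvn

def mean_embedding_gmm(x, mix_weights, means, covs, lam=1.0, sigma=0.8):
    """Closed-form mu_p(x) for the EQ kernel and a Gaussian mixture p.

    Uses k(x, x') = lam^2 (2 pi sigma^2)^{d/2} N(x'; x, sigma^2 I), so each
    component contributes a Gaussian with covariance inflated by sigma^2 I.
    """
    x = np.atleast_2d(x)
    d = x.shape[1]
    scale = lam**2 * (2.0 * np.pi * sigma**2) ** (d / 2.0)
    return scale * sum(
        pi_m * mvn.pdf(x, mean=mu_m, cov=cov_m + sigma**2 * np.eye(d))
        for pi_m, mu_m, cov_m in zip(mix_weights, means, covs)
    )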
For this simulation study, we took p(x) to be a 20-component mixture of 2D-Gaussian distributions. Monte Carlo (MC) is often used for such distributions but has a slow convergence rate of O_P(n^{-1/2}). FW and FWLS are known to converge more quickly and are in this sense preferable to MC [2]. In our simulations (Fig. 2, left), both our novel methods FWBQ and FWLSBQ decreased the MMD much faster than the FW/FWLS methods of [2]. Here, the same kernel hyper-parameters (λ, σ) = (1, 0.8) were employed for all methods to ensure a fair comparison. This suggests that the best quadrature rules correspond to elements outside the convex hull of {Φ(x_i)}_{i=1}^n. Examples of those, including BQ, often assign negative weights to features (Fig. S1 right, Appendix D).

The principal advantage of our proposed methods is that they reconcile theoretical tractability with a fully probabilistic interpretation. For illustration, Fig. 2 (right) plots the posterior uncertainty due to numerical error for a typical integration problem based on this p(x). In-depth empirical studies of such posteriors exist already in the literature and the reader is referred to [3, 13, 22] for details. Beyond these theoretically tractable integrators, SBQ seems to give even better performance as n increases. An intuitive explanation is that SBQ picks {x_i}_{i=1}^n to minimise the MMD whereas
[Figure 2 plots: left panel, MMD² against number of design points; right panel, integral estimates (y-axis, −0.1 to 0.1) against number of design points (100 to 300) for FWLS and FWLSBQ.]
Figure 2: Simulation study. Left: Plot of the worst-case integration error squared (MMD²). Both FWBQ and FWLSBQ are seen to outperform FW and FWLS, with SBQ performing best overall. Right: Integral estimates for FWLS and FWLSBQ for a function f ∈ H. FWLS converges more slowly and provides only a point estimate for a given number of design points. In contrast, FWLSBQ converges faster and provides a full probability distribution over numerical error, shown shaded in orange (68% and 95% credible intervals). Ground truth corresponds to the dotted black line.
FWBQ and FWLSBQ only minimise an approximation of the MMD (its linearisation along DJ). In addition, the SBQ weights are optimal at each iteration, which is not true for FWBQ and FWLSBQ. We conjecture that Theorems 1 and 2 provide upper bounds on the rates of SBQ. This conjecture is partly supported by Fig. 1 (right), which shows that SBQ selects similar design points to FW/FWLS (but weights them optimally). Note also that both FWBQ and FWLSBQ give very similar results. This is not surprising, as FWLS has no guarantees over FW in infinite-dimensional RKHSs [17].
4.2 Quantifying Numerical Error in a Proteomic Model Selection Problem
A topical bioinformatics application that extends recent work by [19] is presented. The objective is to select among a set of candidate models {M_i}_{i=1}^m for protein regulation. This choice is based on a dataset D of protein expression levels, in order to determine a "most plausible" biological hypothesis for further experimental investigation. Each M_i is specified by a vector of kinetic parameters θ_i (full details in Appendix D). Bayesian model selection requires that these parameters are integrated out against a prior p(θ_i) to obtain marginal likelihood terms L(M_i) = ∫ p(D|θ_i)p(θ_i)dθ_i. Our focus here is on obtaining the maximum a posteriori (MAP) model M_j, defined as the maximiser of the posterior model probability L(M_j)/Σ_{i=1}^m L(M_i) (where we have assumed a uniform prior over model space). Numerical error in the computation of each term L(M_i), if unaccounted for, could cause us to return a model M_k that is different from the true MAP estimate M_j and lead to the mis-allocation of valuable experimental resources.
The problem is quickly exaggerated when the number m of models increases, as there are more opportunities for one of the L(M_i) terms to be "too large" due to numerical error. In [19], the number m of models was combinatorial in the number of protein kinases measured in a high-throughput assay (currently ∼10² but in principle up to ∼10⁴). This led [19] to deploy substantial computing resources to ensure that numerical error in each estimate of L(M_i) was individually controlled. Probabilistic numerics provides a more elegant and efficient solution: at any given stage, we have a fully probabilistic quantification of our uncertainty in each of the integrals L(M_i), shown to be sensible both theoretically and empirically. This induces a full posterior distribution over numerical uncertainty in the location of the MAP estimate (i.e. "Bayes all the way down"). As such we can determine, on-line, the precise point in the computational pipeline when numerical uncertainty near the MAP estimate becomes acceptably small, and cease further computation.
The FWBQ methodology was applied to one of the model selection tasks in [19]. In Fig. 3 (left) we
display posterior model probabilities for each of m = 352 candidate models, where a low number
(n = 10) of samples were used for each integral. (For display clarity only the first 50 models
are shown.) In this low-n regime, numerical error introduces a second level of uncertainty that we
quantify by combining the FWBQ error models for all integrals in the computational pipeline; this is
summarised by a box plot (rather than a single point) for each of the models (obtained by sampling
- details in Appendix D). These box plots reveal that our estimated posterior model probabilities are
[Figure 3 plots: two panels (n = 10, left; n = 100, right) showing posterior probability (y-axis) against candidate models 1-50 (x-axis), with box plots quantifying the numerical uncertainty.]
Figure 3: Quantifying numerical error in a model selection problem. FWBQ was used to model the numerical error of each integral L(M_i) explicitly. For integration based on n = 10 design
points, FWBQ tells us that the computational estimate of the model posterior will be dominated by
numerical error (left). When instead n = 100 design points are used (right), uncertainty due to
numerical error becomes much smaller (but not yet small enough to determine the MAP estimate).
completely dominated by numerical error. In contrast, when n is increased through 50, 100 and 200
(Fig. 3, right and Fig. S2), the uncertainty due to numerical error becomes negligible. At n = 200
we can conclude that model 26 is the true MAP estimate and further computations can be halted.
Correctness of this result was confirmed using the more computationally intensive methods in [19].
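One plausible sampling scheme for the box plots above (ours; the exact construction is given in Appendix D and may differ, e.g. in how the Gaussians are truncated) is to draw each L(M_i) from its FWBQ posterior and record how often each model attains the MAP:

import numpy as np

def map_model_posterior(means, variances, n_draws=10000, seed=0):
    """Distribution over 'which model is the MAP' induced by numerical error.

    means, variances: FWBQ posterior mean and variance for each L(M_i).
    """
    rng = np.random.default_rng(seed)
    L = rng.normal(means, np.sqrt(variances), size=(n_draws, len(means)))
    L = np.clip(L, 0.0, None)    # assumption: truncate at zero, since
                                 # marginal likelihoods are non-negative
    winners = L.argmax(axis=1)   # MAP model under the uniform model prior
    return np.bincount(winners, minlength=len(means)) / n_draws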
In Appendix D we compared the relative performance of FWBQ, FWLSBQ and SBQ on this problem. Fig. S1 shows that the BQ weights reduced the MMD by orders of magnitude relative to FW
and FWLS and that SBQ converged more quickly than both FWBQ and FWLSBQ.
5 Conclusions
This paper provides the first theoretical results for probabilistic integration, in the form of posterior contraction rates for FWBQ and FWLSBQ. This is an important step in the probabilistic
numerics research programme [15] as it establishes a theoretical justification for using the posterior distribution as a model for the numerical integration error (which was previously assumed [e.g.
11, 12, 20, 23, 25]). The practical advantages conferred by a fully probabilistic error model were
demonstrated on a model selection problem from proteomics, where sensitivity of an evaluation of
the MAP estimate was modelled in terms of the error arising from repeated numerical integration.
The strengths and weaknesses of BQ (notably, including scalability in the dimension d of X ) are
well-known and are inherited by our FWBQ methodology. We do not review these here but refer
the reader to [22] for an extended discussion. Convergence, in the classical sense, was proven here
to occur exponentially quickly for FWLSBQ, which partially explains the excellent performance
of BQ and related methods seen in applications [12, 23], as well as resolving an open conjecture.
As a bonus, the hybrid quadrature rules that we developed turned out to converge much faster in
simulations than those in [2], which originally motivated our work.
A key open problem for kernel methods in probabilistic numerics is to establish protocols for the
practical elicitation of kernel hyper-parameters. This is important as hyper-parameters directly affect
the scale of the posterior over numerical error that we ultimately aim to interpret. Note that this problem applies equally to BQ, as well as related quadrature methods [2, 11, 12, 20] and more generally
in probabilistic numerics [26]. Previous work, such as [13], optimised hyper-parameters on a per-application basis. Our ongoing research seeks automatic and general methods for hyper-parameter
elicitation that provide good frequentist coverage properties for posterior credible intervals, but we
reserve the details for a future publication.
Acknowledgments
The authors are grateful for discussions with Simon Lacoste-Julien, Simo Särkkä, Arno Solin, Dino Sejdinovic, Tom Gunter and Mathias Cronjäger. FXB was supported by EPSRC [EP/L016710/1].
CJO was supported by EPSRC [EP/D002060/1]. MG was supported by EPSRC [EP/J016934/1],
an EPSRC Established Career Fellowship, the EU grant [EU/259348] and a Royal Society Wolfson
Research Merit Award.
References
[1] F. Bach. On the Equivalence between Quadrature Rules and Random Features. arXiv:1502.06800, 2015.
[2] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the Equivalence between Herding and Conditional Gradient Algorithms. In Proceedings of the 29th International Conference on Machine Learning, pages 1359–1366, 2012.
[3] Y. Chen, L. Bornn, N. de Freitas, M. Eskelin, J. Fang, and M. Welling. Herded Gibbs Sampling. Journal of Machine Learning Research, 2015. To appear.
[4] Y. Chen, M. Welling, and A. Smola. Super-Samples from Kernel Herding. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 109–116, 2010.
[5] P. Conrad, M. Girolami, S. Särkkä, A. Stuart, and K. Zygalakis. Probability Measures for Numerical Solutions of Differential Equations. arXiv:1506.04592, 2015.
[6] P. Diaconis. Bayesian Numerical Analysis. Statistical Decision Theory and Related Topics IV, pages 163–175, 1988.
[7] J. Dick and F. Pillichshammer. Digital Nets and Sequences - Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, 2010.
[8] J. C. Dunn. Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules. SIAM Journal on Control and Optimization, 18(5):473–487, 1980.
[9] M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. Naval Research Logistics Quarterly, 3:95–110, 1956.
[10] D. Garber and E. Hazan. Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets. In Proceedings of the 32nd International Conference on Machine Learning, pages 541–549, 2015.
[11] Z. Ghahramani and C. Rasmussen. Bayesian Monte Carlo. In Advances in Neural Information Processing Systems, pages 489–496, 2003.
[12] T. Gunter, R. Garnett, M. Osborne, P. Hennig, and S. Roberts. Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature. In Advances in Neural Information Processing Systems, 2014.
[13] J. B. Hamrick and T. L. Griffiths. Mental Rotation as Bayesian Quadrature. In NIPS 2013 Workshop on Bayesian Optimization in Theory and Practice, 2013.
[14] P. Hennig. Probabilistic Interpretation of Linear Solvers. SIAM Journal on Optimization, 25:234–260, 2015.
[15] P. Hennig, M. Osborne, and M. Girolami. Probabilistic Numerics and Uncertainty in Computations. Proceedings of the Royal Society A, 471(2179), 2015.
[16] F. Huszar and D. Duvenaud. Optimally-Weighted Herding is Bayesian Quadrature. In Uncertainty in Artificial Intelligence, pages 377–385, 2012.
[17] M. Jaggi. Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization. In Proceedings of the 30th International Conference on Machine Learning, volume 28, pages 427–435, 2013.
[18] S. Lacoste-Julien, F. Lindsten, and F. Bach. Sequential Kernel Herding: Frank-Wolfe Optimization for Particle Filtering. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pages 544–552, 2015.
[19] C. J. Oates, F. Dondelinger, N. Bayani, J. Korkola, J. W. Gray, and S. Mukherjee. Causal Network Inference using Biochemical Kinetics. Bioinformatics, 30(17):i468–i474, 2014.
[20] C. J. Oates, M. Girolami, and N. Chopin. Control Functionals for Monte Carlo Integration. arXiv:1410.2392, 2015.
[21] A. O'Hagan. Monte Carlo is Fundamentally Unsound. Journal of the Royal Statistical Society, Series D, 36(2):247–249, 1984.
[22] A. O'Hagan. Bayes-Hermite Quadrature. Journal of Statistical Planning and Inference, 29:245–260, 1991.
[23] M. Osborne, R. Garnett, S. Roberts, C. Hart, S. Aigrain, and N. Gibson. Bayesian Quadrature for Ratios. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, pages 832–840, 2012.
[24] A. B. Owen. A Constraint on Extensible Quadrature Rules. Numerische Mathematik, pages 1–8, 2015.
[25] S. Särkkä, J. Hartikainen, L. Svensson, and F. Sandblom. On the Relation between Gaussian Process Quadratures and Sigma-Point Methods. arXiv:1504.05994, 2015.
[26] M. Schober, D. Duvenaud, and P. Hennig. Probabilistic ODE solvers with Runge-Kutta means. In Advances in Neural Information Processing Systems 27, pages 739–747, 2014.
[27] B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert Space Embeddings and Metrics on Probability Measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
5,246 | 575 | Networks with Learned Unit Response Functions
John Moody and Norman Yarvin
Yale Computer Science, 51 Prospect St.
P.O. Box 2158 Yale Station, New Haven, CT 06520-2158
Abstract
Feedforward networks composed of units which compute a sigmoidal function of a weighted sum of their inputs have been much investigated. We
tested the approximation and estimation capabilities of networks using
functions more complex than sigmoids. Three classes of functions were
tested: polynomials, rational functions, and flexible Fourier series. Unlike sigmoids, these classes can fit non-monotonic functions. They were
compared on three problems: prediction of Boston housing prices, the
sunspot count, and robot arm inverse dynamics. The complex units attained clearly superior performance on the robot arm problem, which is
a highly non-monotonic, pure approximation problem. On the noisy and
only mildly nonlinear Boston housing and sunspot problems, differences
among the complex units were revealed; polynomials did poorly, whereas
rationals and flexible Fourier series were comparable to sigmoids.
1
Introduction
A commonly studied neural architecture is the feedforward network in which each
unit of the network computes a nonlinear function g(x) of a weighted sum of its
inputs x = w^T u. Generally this function is a sigmoid, such as g(x) = tanh x or
g(x) = 1/(1 + e^(x−θ)). To these we compared units of a substantially different
type: they also compute a nonlinear function of a weighted sum of their inputs,
but the unit response function is able to fit a much higher degree of nonlinearity
than can a sigmoid. The nonlinearities we considered were polynomials, rational
functions (ratios of polynomials), and flexible Fourier series (sums of cosines.) Our
comparisons were done in the context of two-layer networks consisting of one hidden
layer of complex units and an output layer of a single linear unit.
This network architecture is similar to that built by projection pursuit regression
(PPR) [1, 2], another technique for function approximation. The one difference is
that in PPR the nonlinear function of the units of the hidden layer is a nonparametric smooth. This nonparametric smooth has two disadvantages for neural modeling:
it has many parameters, and, as a smooth, it is easily trained only if desired output
values are available for that particular unit. The latter property makes the use of
smooths in multilayer networks inconvenient. If a parametrized function of a type
suitable for one-dimensional function approximation is used instead of the nonparametric smooth, then these disadvantages do not apply. The functions we used are
all suitable for one-dimensional function approximation.
2
Representation
A few details of the representation of the unit response functions are worth noting.
Polynomials: Each polynomial unit computed the function
g(x) = a_1 x + a_2 x^2 + ... + a_n x^n
with x = w^T u being the weighted sum of the input. A zero'th order term was not
included in the above formula, since it would have been redundant among all the
units. The zero'th order term was dealt with separately and only stored in one
location.
Rationals: A rational function representation was adopted which could not have
zeros in the denominator. This representation used a sum of squares of polynomials,
as follows:
g(x) = (a_0 + a_1 x + ... + a_n x^n) / (1 + (b_0 + b_1 x)^2 + (b_2 x + b_3 x^2)^2 + (b_4 x + b_5 x^2 + b_6 x^3 + b_7 x^4)^2 + ...)
This representation has the qualities that the denominator is never less than 1,
and that n parameters are used to produce a denominator of degree n. If the above
formula were continued the next terms in the denominator would be of degrees eight,
sixteen, and thirty-two. This powers-of-two sequence was used for the following
reason: of the 2(n − m) terms in the square of a polynomial p = a_m x^m + ... + a_n x^n,
it is possible by manipulating a_m ... a_n to determine the n − m highest coefficients,
with the exception that the very highest coefficient must be non-negative. Thus
if we consider the coefficients of the polynomial that results from squaring and
adding together the terms of the denominator of the above formula, the highest
degree squared polynomial may be regarded as determining the highest half of the
coefficients, the second highest degree polynomial may be regarded as determining
the highest half of the rest of the coefficients, and so forth. This process cannot set
all the coefficients arbitrarily; some must be non-negative.
Flexible Fourier series: The flexible Fourier series units computed
g(x) = Σ_{i=0}^{n} a_i cos(b_i x + c_i)
where the amplitudes ai, frequencies bi and phases Ci were unconstrained and could
assume any value.
Sigmoids: We used the standard logistic function:
g(x) = 1/(1 + e^(x−θ))
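Since each unit type is just a scalar function of the projection x = w^T u, the four response functions above are easy to put side by side in code. The following NumPy sketch is ours, not the authors' implementation; the names are hypothetical, and the rational unit hard-codes only the first three squared-polynomial blocks of the powers-of-two sequence (eight denominator parameters).

import numpy as np

def poly_unit(x, a):
    """g(x) = a_1 x + a_2 x^2 + ... + a_n x^n (no constant term)."""
    return sum(ak * x ** (k + 1) for k, ak in enumerate(a))

def rational_unit(x, a, b):
    """Polynomial numerator over a sum-of-squares denominator >= 1.
    Expects len(b) == 8, matching the first three squared blocks."""
    num = sum(ak * x ** k for k, ak in enumerate(a))
    den = (1.0
           + (b[0] + b[1] * x) ** 2
           + (b[2] * x + b[3] * x ** 2) ** 2
           + (b[4] * x + b[5] * x ** 2 + b[6] * x ** 3 + b[7] * x ** 4) ** 2)
    return num / den

def fourier_unit(x, a, b, c):
    """g(x) = sum_i a_i cos(b_i x + c_i); amplitudes, frequencies and
    phases are all free parameters."""
    return sum(ai * np.cos(bi * x + ci) for ai, bi, ci in zip(a, b, c))

def sigmoid_unit(x, theta=0.0):
    """Standard logistic unit g(x) = 1 / (1 + e^(x - theta))."""
    return 1.0 / (1.0 + np.exp(x - theta))

# Each unit acts on the projection of an input u onto its weight vector w,
# i.e. g(w @ u); a two-layer network is a linear combination of such units.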
3
Training Method
All the results presented here were trained with the Levenberg-Marquardt modification of the Gauss-Newton nonlinear least squares algorithm. Stochastic gradient
descent was also tried at first, but on the problems where the two were compared,
Levenberg- Marquardt was much superior both in convergence time and in quality of
result. Levenberg-Marquardt required substantially fewer iterations than stochastic gradient descent to converge. However, it needs O(p2) space and O(p 2n) time
per iteration in a network with p parameters and n input examples, as compared
to O(p) space and O(pn) time per epoch for stochastic gradient descent. Further
details of the training method will be discussed in a longer paper.
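To make the quoted costs concrete, one damped Gauss-Newton (Levenberg-Marquardt) step on the network residuals looks roughly as follows. This is only a sketch under our own naming, not the authors' code: residuals and jacobian are assumed to be callables returning the n-vector of errors and its n x p Jacobian.

import numpy as np

def levenberg_marquardt_step(params, residuals, jacobian, lam):
    """Solve (J^T J + lam * I) d = -J^T r for the update d.
    Forming J^T J takes O(p^2 n) time and O(p^2) space per iteration,
    versus O(p) per example for stochastic gradient descent."""
    r = residuals(params)                      # shape (n,)
    J = jacobian(params)                       # shape (n, p)
    A = J.T @ J + lam * np.eye(J.shape[1])
    d = np.linalg.solve(A, -J.T @ r)
    return params + d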
With some data sets, a weight decay term was added to the energy function to be
optimized. The added term was of the form λ Σ_i w_i^2. When weight decay was
used, a range of values of λ was tried for every network trained.
Before training, all the data was normalized: each input variable was scaled so that
its range was (-1,1), then scaled so that the maximum sum of squares of input
variables for any example was 1. The output variable was scaled to have mean zero
and mean absolute value 1. This helped the training algorithm, especially in the
case of stochastic gradient descent.
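The preprocessing just described is a few lines of NumPy; this sketch (our code, with hypothetical names) assumes X is an n x p input matrix with no constant columns and y the output vector.

import numpy as np

def normalize_data(X, y):
    # Scale each input variable to the range (-1, 1) ...
    lo, hi = X.min(axis=0), X.max(axis=0)
    Xs = 2.0 * (X - lo) / (hi - lo) - 1.0
    # ... then rescale so the largest per-example sum of squares is 1.
    Xs /= np.sqrt((Xs ** 2).sum(axis=1).max())
    # Output: zero mean, then mean absolute value 1.
    ys = y - y.mean()
    ys /= np.abs(ys).mean()
    return Xs, ys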
4
Results
We present results of training our networks on three data sets: robot arm inverse
dynamics, Boston housing data, and sunspot count prediction. The Boston and
sunspot data sets are noisy, but have only mild nonlinearity. The robot arm inverse
dynamics data has no noise, but a high degree of nonlinearity. Noise-free problems
have low estimation error. Models for linear or mildly nonlinear problems typically
have low approximation error. The robot arm inverse dynamics problem is thus a
pure approximation problem, while performance on the noisy Boston and sunspots
problems is limited more by estimation error than by approximation error.
Figure 1a is a graph, as those used in PPR, of the unit response function of a one-unit network trained on the Boston housing data. The x axis is a projection (a weighted sum of inputs w^T u) of the 13-dimensional input space onto 1 dimension,
using those weights chosen by the unit in training. The y axis is the fit to data. The
response function of the unit is a sum of three cosines. Figure 1b is the superposition
of five graphs of the five unit response functions used in a five-unit rational function
solution (RMS error less than 2%) of the robot arm inverse dynamics problem. The
domain for each curve lies along a different direction in the six-dimensional input
space. Four of the five fits along the projection directions are non-monotonic, and
thus can be fit only poorly by a sigmoid.
Two different error measures are used in the following. The first is the RMS error,
normalized so that error of 1 corresponds to no training. The second measure is the
[Figure 1: (a) the fitted response function of the one-unit Boston housing network; (b) "Robot arm fit to data": the five unit response functions of the five-unit rational-function network.]
square of the normalized RMS error, otherwise known as the fraction of explained
variance. We used whichever error measure was used in earlier work on that data
set.
4.1
Robot arm inverse dynamics
This problem is the determination of the torque necessary at the joints of a two-joint robot arm required to achieve a given acceleration of each segment of the arm, given each segment's velocity and position. There are six input variables to
the network, and two output variables. This problem was treated as two separate
estimation problems, one for the shoulder torque and one for the elbow torque. The
shoulder torque was a slightly more difficult problem, for almost all networks. The
1000 points in the training set covered the input space relatively thoroughly. This,
together with the fact that the problem had no noise, meant that there was little
difference between training set error and test set error.
Polynomial networks of limited degree are not universal approximators, and that
is quite evident on this data set; polynomial networks of low degree reached their
minimum error after a few units. Figure 2a shows this. If polynomial, cosine, rational, and sigmoid networks are compared as in Figure 2b, leaving out low degree
polynomials, the sigmoids have relatively high approximation error even for networks with 20 units. As shown in the following table, the complex units have more
parameters each, but still get better performance with fewer parameters total.
Type                  Units   Parameters   Error
degree 7 polynomial     5        65        .024
degree 6 rational       5        95        .027
2 term cosine           6        73        .020
sigmoid                10        81        .139
sigmoid                20       161        .119
Since the training set is noise-free, these errors represent pure approximation error .
[Figure 2: (a) error of polynomial networks of limited degree on the robot arm problem; (b) comparison of polynomial, rational, cosine, and sigmoid networks as the number of units grows.]
The superior performance of the complex units on this problem is probably due to
their ability to approximate non-monotonic functions.
4.2
Boston housing
The second data set is a benchmark for statistical algorithms: the prediction of
Boston housing prices from 13 factors [3]. This data set contains 506 exemplars and
is relatively simple; it can be approximated well with only a single unit. Networks
of between one and six units were trained on this problem. Figure 3a is a graph
of training set performance from networks trained on the entire data set; the error
measure used was the fraction of explained variance. From this graph it is apparent
[Figure 3: (a) Boston housing training set error; (b) test set error.]
that training set performance does not vary greatly between different types of units,
though networks with more units do better.
On the test set there is a large difference. This is shown in Figure 3b. Each point
on the graph is the average performance of ten networks of that type. Each network
was trained using a different permutation of the data into test and training sets, the
test set being 1/3 of the examples and the training set 2/3. It can be seen that the
cosine nets perform the best, the sigmoid nets a close second, the rationals third,
and the polynomials worst (with the error increasing quite a bit with increasing polynomial degree).
It should be noted that the distribution of errors is far from a normal distribution,
and that the training set error gives little clue as to the test set error. The following
table of errors, for nine networks of four units using a degree 5 polynomial, is
somewhat typical:
Set        Error
training   0.091
test       0.395
Our speculation on the cause of these extremely high errors is that polynomial approximations do not extrapolate well; if the prediction of some data point results in
a polynomial being evaluated slightly outside the region on which the polynomial
was trained, the error may be extremely high. Rational functions where the numerator and denominator have equal degree have less of a problem with this, since
asymptotically they are constant. However, over small intervals they can have the
extrapolation characteristics of polynomials. Cosines are bounded, and so, though
they may not extrapolate well if the function is not somewhat periodic, at least do
not reach large values like polynomials.
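The extrapolation argument is easy to reproduce numerically. In this small sketch (ours, with an arbitrary smooth target, not the paper's data), a degree-9 polynomial fit on [-1, 1] is evaluated just outside the training interval; a fitted sum of cosines, by contrast, can never exceed the sum of its amplitude magnitudes no matter how far the projection moves.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = np.tanh(2.0 * x) + 0.05 * rng.normal(size=x.size)   # noisy smooth target

poly = np.polynomial.Polynomial.fit(x, y, deg=9)
for xq in (1.0, 1.1, 1.3):
    print(f"poly({xq}) = {poly(xq):+.3f}   target = {np.tanh(2*xq):+.3f}")
# Slightly outside [-1, 1] the polynomial value typically drifts far from
# the target, illustrating the extrapolation failure described above.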
4.3
Sunspots
The third problem was the prediction of the average monthly sunspot count in a
given year from the values of the previous twelve years. We followed previous work
in using as our error measure the fraction of variance explained, and in using as
the training set the years 1700 through 1920 and as the test set the years 1921
through 1955. This was a relatively easy test set - every network of one unit which
we trained (whether sigmoid, polynomial, rational, or cosine) had, in each of ten
runs, a training set error between .147 and .153 and a test set error between .105
and .111. For comparison, the best test set error achieved by us or previous testers
was about .085. A similar set of runs was done as those for the Boston housing
data, but using at most four units; similar results were obtained. Figure 4a shows
training set error and Figure 4b shows test set error on this problem.
4.4
Weight Decay
The performance of almost all networks was improved by some amount of weight
decay. Figure 5 contains graphs of test set error for sigmoidal and polynomial units,
[Figure 4: (a) sunspot training set error; (b) test set error.]
using various values of the weight decay parameter λ. For the sigmoids, very little
weight decay seems to be needed to give good results, and there is an order of
magnitude range (between .001 and .01) which produces close to optimal results.
For polynomials of degree 5, more weight decay seems to be necessary for good
results; in fact, the highest value of weight decay is the best. Since very high values
of weight decay are needed, and at those values there is little improvement over
using a single unit, it may be supposed that using those values of weight decay
restricts the multiple units to producing a very similar solution to the one-unit
solution. Figure 6 contains the corresponding graphs for sunspots. Weight decay
seems to help less here for the sigmoids, but for the polynomials, moderate amounts
of weight decay produce an improvement over the one-unit solution.
Acknowledgements
The authors would like to acknowledge support from ONR grant N00014-89-J1228, AFOSR grant 89-0478, and a fellowship from the John and Fannie Hertz
Foundation. The robot arm data set was provided by Chris Atkeson.
References
[1] J. H. Friedman, W. Stuetzle, "Projection Pursuit Regression", Journal of the
American Statistical Association, December 1981, Volume 76, Number 376,
817-823
[2] P. J. Huber, "Projection Pursuit", The Annals of Statistics, 1985, Vol. 13, No. 2, 435-475
[3] L. Breiman et aI, Classification and Regression Trees, Wadsworth and Brooks,
1984, pp217-220
Figure 5: Boston housing test error with various amounts of weight decay
Figure 6: Sunspot test error with various amounts of weight decay
Perturbing Hebbian Rules
Peter Dayan
CNL, The Salk Institute
PO Box 85800
San Diego CA 92186-5800, USA
Geoffrey Goodhill
COGS
University of Sussex, Falmer
Brighton BNl 9QN, UK
dayan@helmholtz.sdsc.edu
geoffg@cogs.susx.ac.uk
Abstract
Recently Linsker [2] and MacKay and Miller [3,4] have analysed Hebbian
correlational rules for synaptic development in the visual system, and
Miller [5,8] has studied such rules in the case of two populations of fibres
(particularly two eyes). Miller's analysis has so far assumed that each of
the two populations has exactly the same correlational structure. Relaxing
this constraint by considering the effects of small perturbative correlations
within and between eyes permits study of the stability of the solutions.
We predict circumstances in which qualitative changes are seen, including
the production of binocularly rather than monocularly driven units.
1 INTRODUCTION
Linsker [2] studied how a Hebbian correlational rule could predict the development
of certain receptive field structures seen in the visual system. MacKay and Miller
[3,4] pointed out that the form of this learning rule meant that it could be analysed
in terms of the eigenvectors of the matrix of time-averaged presynaptic correlations.
Miller [5,8, 7] independently studied a similar correlational rule for the case of two
eyes (or more generally two populations), explaining how cells develop in V1
that are ultimately responsive to only one eye, despite starting off as responsive
to both. This process is again driven by the eigenvectors and eigenvalues of
the developmental equation, and Miller [7] relates Linsker's model to the two
population case.
Miller's analysis so far assumes that the correlations of activity within each population are identical. This special case simplifies the analysis enabling the projections
from the two eyes to be separated out into sum and difference variables. In general,
one would expect the correlations to differ slightly, and for correlations between the
eyes to be not exactly zero. We analyse how such perturbations affect the eigenvectors and eigenvalues of the developmental equation, and are able to explain some
of the results found empirically by Miller [6].
Further details on this analysis and on the relationship between Hebbian and
non-Hebbian models of the development of ocular dominance and orientation
selectivity can be found in Goodhill (1991).
2 THE EQUATION
MacKay and Miller [3,4] study Linsker's [2] developmental equation in the form:
ẇ = (Q + k₂J)w + k₁n
where w = [wᵢ], i ∈ [1, n], are the weights from the units in one layer R to a particular unit in the next layer S, Q is the covariance matrix of the activities of the units in layer R, J is the matrix Jᵢⱼ = 1 ∀i, j, and n is the 'DC' vector nᵢ = 1 ∀i.
The equivalent for two populations of cells is:
( ẇ₁ )   ( Q₁ + k₂J   Q_c + k₂J ) ( w₁ )        ( n )
( ẇ₂ ) = ( Q_c + k₂J   Q₂ + k₂J ) ( w₂ ) + k₁ ( n )
where Q₁ gives the covariance between cells within the first population, Q₂ gives that between cells within the second, and Q_c (assumed symmetric) gives the covariance between cells in the two populations. Define Q* as this full, two-population development matrix.
Miller studies the case in which Q₁ = Q₂ = Q and Q_c is generally zero or slightly negative. Then the development of w₁ − w₂ (which Miller calls Sᴰ) and w₁ + w₂ (Sˢ) separate; for Q_c = 0, these go like:
∂Sᴰ/∂t = Q Sᴰ   and   ∂Sˢ/∂t = (Q + 2k₂J) Sˢ + 2k₁n,
and, up to various forms of normalisation and/or weight saturation, the patterns of dominance between the two populations are determined by the initial value and the fastest growing components of Sᴰ. If upper and lower weight saturation limits are reached at roughly the same time (Berns, personal communication), the conventional assumption that the fastest growing eigenvectors of Sᴰ dominate the terminal state is borne out.
The starting condition Miller adopts has w₁ − w₂ = ε′a and w₁ + w₂ = b, where ε′ is small, and a and b are O(1). Weights are constrained to be positive, and
saturate at some upper limit. Also, additive normalisation is applied throughout
development, which affects the growth of the Sˢ (but not the Sᴰ) modes. As discussed by MacKay and Miller [3,4], this is approximately accommodated in the k₂J component.
MacKay and Miller analyse the eigendecomposition of Q + k₂J for general and radially symmetric covariance matrices Q and all values of k₂. It turns out that the eigendecomposition of Q* for the case Q₁ = Q₂ = Q and Q_c = 0 (that studied by Miller) is given in table form by:
E-vector       E-value   Conditions
(xᵢ,  xᵢ)      λᵢ        Q xᵢ = λᵢ xᵢ,            n·xᵢ = 0
(xᵢ, −xᵢ)      λᵢ        Q xᵢ = λᵢ xᵢ,            n·xᵢ = 0
(yᵢ, −yᵢ)      μᵢ        Q yᵢ = μᵢ yᵢ,            n·yᵢ ≠ 0
(zᵢ,  zᵢ)      νᵢ        (Q + 2k₂J) zᵢ = νᵢ zᵢ,   n·zᵢ ≠ 0
Figure 1 shows the matrix and the two key (y, -y) and (x, -x) eigenvectors.
The details of the decomposition of Q* in this table are slightly obscured by degeneracy in the eigendecomposition of Q + k₂J. Also, for clarity, we write (xᵢ, xᵢ) for (xᵢ, xᵢ)ᵀ. A consequence of the first two rows in the table is that (ηxᵢ, αxᵢ) is an eigenvector for any η and α; this becomes important later.
That the development of Sᴰ and Sˢ separates can be seen in the (u, u) and (u, −u) forms of the eigenvectors. In Miller's terms the onset of dominance of one of the two populations is seen in the (u, −u) eigenvectors - dominance requires that μⱼ for the eigenvector whose elements are all of the same sign (one such exists for Miller's Q) is larger than the μᵢ and the λᵢ for all the other such eigenvectors. In
particular, on pages 296-300 of [6], he shows various cases for which this does and
one in which it does not happen. To understand how this comes about, we can
treat the latter as a perturbed version of the former.
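The separation into (u, u) and (u, −u) modes is easy to verify numerically. The toy construction below is ours (an arbitrary symmetric Q, not Miller's arbor-function matrices): it builds Q* for Q₁ = Q₂ = Q, Q_c = 0 and checks the form of every eigenvector.

import numpy as np

rng = np.random.default_rng(0)
n, k2 = 6, -0.5

B = rng.normal(size=(n, n))
Q = B @ B.T / n                     # toy symmetric covariance matrix
J = np.ones((n, n))

# Full two-population matrix for Q1 = Q2 = Q and Qc = 0.
Qstar = np.block([[Q + k2 * J, k2 * J],
                  [k2 * J, Q + k2 * J]])

vals, vecs = np.linalg.eigh(Qstar)
for lam, v in zip(vals, vecs.T):
    u, w = v[:n], v[n:]
    form = "(u,  u)" if np.allclose(u, w) else \
           "(u, -u)" if np.allclose(u, -w) else "mixed"
    print(f"eigenvalue {lam:+7.3f}   form {form}")
# The (u, -u) modes carry eigenvalues of Q itself (the S^D dynamics);
# the (u, u) modes carry eigenvalues of Q + 2*k2*J (the S^S dynamics).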
3 PERTURBATIONS
Consider the case in which there are small correlations between the projections and/or small differences between the correlations within each projection. For instance, one of Miller's examples indicates that small within-eye anti-correlations can prevent the onset of dominance. This can be perturbatively analysed by setting Q₁ = Q + εE₁, Q₂ = Q + εE₂ and Q_c = εE_c. Call the resulting matrix Q*_ε.
Two questions are relevant. Firstly, are the eigenvectors stable to this perturbation, ie are there vectors a₁ and a₂ such that (u₁ + εa₁, u₂ + εa₂) is an eigenvector of Q*_ε if (u₁, u₂) is an eigenvector of Q* with eigenvalue φ? Secondly, how do the eigenvalues change?
One way to calculate this is to consider the equation the perturbed eigenvector must satisfy:¹
Q*_ε (u₁ + εa₁, u₂ + εa₂) = (φ + εψ) (u₁ + εa₁, u₂ + εa₂)
and look for conditions on u₁ and u₂ and the values of a₁, a₂ and ψ by equating the O(ε) terms. We now consider a specific example. Using the notation of the table above, (yᵢ + εa₁, −yᵢ + εa₂) is an eigenvector with eigenvalue μᵢ + εψᵢ if
(Q − μᵢI) a₁ + k₂J(a₁ + a₂) = −(E₁ − E_c − ψᵢI) yᵢ, and
(Q − μᵢI) a₂ + k₂J(a₁ + a₂) = −(E_c − E₂ + ψᵢI) yᵢ.
Subtracting these two implies that
(Q − μᵢI)(a₁ − a₂) = −(E₁ − 2E_c + E₂ − 2ψᵢI) yᵢ.
¹This is a standard method for such linear systems, eg in quantum mechanics.
However, yᵢᵀ(Q − μᵢI) = 0, since Q is symmetric and yᵢ is an eigenvector with eigenvalue μᵢ, so multiplying on the left by yᵢᵀ, we require that
2ψᵢ yᵢᵀyᵢ = yᵢᵀ(E₁ − 2E_c + E₂) yᵢ,
which sets the value of ψᵢ. Therefore (yᵢ, −yᵢ) is stable in the required manner.
Similarly (zᵢ, zᵢ) is stable too, with an equivalent perturbation to its eigenvalue. However the pair (xᵢ, xᵢ) and (xᵢ, −xᵢ) are not stable - the degeneracy from their having the same eigenvalue is broken, and two specific eigenvectors, (αᵢxᵢ, βᵢxᵢ) and (−βᵢxᵢ, αᵢxᵢ), are stable, for particular values αᵢ and βᵢ. This means that to first order, Sᴰ and Sˢ no longer separate, and the full, two-population matrix must be solved.
To model Miller's results, call Q*_{ε,m} the special case of Q*_ε for which E₁ = E₂ = E and E_c = 0. Also, assume that the xᵢ, yᵢ and zᵢ are normalised, let e₁(u) = uᵀE₁u etc, and define γ(u) = (e₁(u) − e₂(u))/2e_c(u), for e_c(u) ≠ 0, and γᵢ = γ(xᵢ). Then we have
(1)
and the eigenvalues are:
                      Eigenvalue for case:
E-vector              Q*     Q*_{ε,m}         Q*_ε
(αᵢxᵢ,  βᵢxᵢ)         λᵢ     λᵢ + εe₁(xᵢ)     λᵢ + ε[e₁(xᵢ) + e₂(xᵢ) + Ξᵢ]/2
(−βᵢxᵢ, αᵢxᵢ)         λᵢ     λᵢ + εe₁(xᵢ)     λᵢ + ε[e₁(xᵢ) + e₂(xᵢ) − Ξᵢ]/2
(yᵢ, −yᵢ)             μᵢ     μᵢ + εe₁(yᵢ)     μᵢ + ε[e₁(yᵢ) + e₂(yᵢ) − 2e_c(yᵢ)]/2
(zᵢ,  zᵢ)             νᵢ     νᵢ + εe₁(zᵢ)     νᵢ + ε[e₁(zᵢ) + e₂(zᵢ) + 2e_c(zᵢ)]/2
where Ξᵢ = √([e₁(xᵢ) − e₂(xᵢ)]² + 4e_c(xᵢ)²). For the case Miller treats, since E₁ = E₂, the degeneracy in the original solution is preserved, ie the perturbed versions of (xᵢ, xᵢ) and (xᵢ, −xᵢ) have the same eigenvalues. Therefore the Sᴰ and Sˢ modes still separate.
This perturbed eigendecomposition suffices to show how small additional correlations affect the solutions. We will give three examples. The case mentioned above on page 299 of [6] shows how small same-eye anti-correlations within the radius of the arbor function cause a particular (yᵢ, −yᵢ) eigenvector (ie one for which all the components of yᵢ have the same sign) to change from growing faster than a (xᵢ, −xᵢ) (for which some components of xᵢ are positive and some negative to ensure that n·xᵢ = 0) to growing slower than it, converting a monocular solution to a binocular one.
In our terms, this is the Q*_{ε,m} case, with E₁ a negative matrix. Given the conditions on signs of their components, e₁(yᵢ) is more negative than e₁(xᵢ), and so the eigenvalue for the perturbed (yᵢ, −yᵢ) would be expected to decrease more than that for the perturbed (xᵢ, −xᵢ). This is exactly what is found. Different binocular eigensolutions are affected by different amounts, and it is typically a delicate issue as to which will ultimately prevail. Figure 2 shows a sample perturbed matrix for which dominance will not develop. If the change in the correlations is large (O(1)), then the eigenfunctions can change shape (eg 1s becomes 2s in the notation of [4]). We do not address this here, since we are considering only changes of O(ε).
Figure 1: Unperturbed two-eye correlation matrix and (y, -y), (x, -x) eigenvectors. Eigenvalues are 7.1 and 6.4 respectively.
Figure 2: Same-eye anti-correlation matrix and eigenvectors. (y, −y), (x, −x) eigenvalues are 4.8 and 5.4 respectively, and so the order has swapped.
Positive opposite-eye correlations can have exactly the same effect. This time e_c(yᵢ) is greater than e_c(xᵢ), and so, again, the eigenvalue for the perturbed (yᵢ, −yᵢ) would be expected to decrease more than that for the perturbed (xᵢ, −xᵢ). Figure 3 shows an example which is infelicitous for dominance.
The third case is for general perturbations in Q*_ε. Now the mere signs of the components of the eigenvectors are not enough to predict which will be affected more. Figure 4 gives an example for which ocular dominance will still occur. Note that the (xᵢ, −xᵢ) eigenvector is no longer stable, and has been replaced by one of the form (αᵢxᵢ, βᵢxᵢ).
If general perturbations of the same order of magnitude as the difference between w₁ and w₂ (ie ε′ ≈ ε) are applied, the αᵢ and βᵢ terms complicate Miller's Sᴰ analysis to first order. Let w₁(0) − w₂(0) = ε′a and apply Q*_ε as an iteration matrix. w₁(n) − w₂(n), the difference between the projections after n iterations, has no O(1) component, but two sets of O(ε) components; {2μᵢⁿ ε′ (a·yᵢ) yᵢ}, and
{ λᵢⁿ [1 + ε(τᵢ + Ξᵢ)/2λᵢ]ⁿ (αᵢ xᵢ·w₁(0) + βᵢ xᵢ·w₂(0)) (αᵢ − βᵢ) xᵢ −
  λᵢⁿ [1 + ε(τᵢ − Ξᵢ)/2λᵢ]ⁿ (αᵢ xᵢ·w₂(0) − βᵢ xᵢ·w₁(0)) (αᵢ + βᵢ) xᵢ }
where τᵢ = e₁(xᵢ) + e₂(xᵢ). Collecting the terms in this expression, and using equation 1, we derive
{ λᵢⁿ [ (αᵢ² + βᵢ²) xᵢ·a + 2n (εΞᵢ/2λᵢ) αᵢβᵢ xᵢ·b ] xᵢ }
where b = w₁(0) + w₂(0). The second part of this expression depends on n, and is substantial because w₁(0) + w₂(0) is O(1). Such a term does not appear in the unperturbed system, and can bias the competition between the yᵢ and the xᵢ eigenvectors, in particular towards the binocular solutions. Again, its precise
effects will be sensitive to the unperturbed eigenvalues.
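The first-order shifts derived above can also be checked directly. In the sketch below (our toy Q with positive, smoothly decaying correlations - not the paper's figures), the leading eigenvector y of Q is all of one sign and the next one x changes sign, so a uniform within-eye anti-correlation E penalises the (y, −y) mode far more, exactly as in the first example.

import numpy as np

n = 12
idx = np.arange(n)
Q = np.exp(-0.25 * (idx[:, None] - idx[None, :]) ** 2)  # positive, smooth correlations
E = -np.ones((n, n)) / n                                # uniform anti-correlation, E1 = E2 = E

vals, vecs = np.linalg.eigh(Q)
y = vecs[:, -1] * np.sign(vecs[:, -1].sum())   # leading eigenvector: one sign (Perron)
x = vecs[:, -2]                                # next eigenvector: mixed signs

# Standard first-order perturbation theory: under Q -> Q + eps * E the
# eigenvalue of mode u shifts by roughly eps * (u @ E @ u) = eps * e1(u).
print("unperturbed eigenvalues:", vals[-1], vals[-2])
print("e1(y) =", y @ E @ y, "  e1(x) =", x @ E @ x)
# e1(y) = -(sum(y))^2 / n is strongly negative, while e1(x) is nearly zero,
# so the dominance mode loses its growth-rate advantage first.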
4 CONCLUSIONS
Perturbation analysis applied to simple Hebbian correlational learning rules reveals
the following:
• Introducing small anti-correlations within each eye causes a tendency toward binocularity. This agrees with the results of Miller.
• Introducing small positive correlations between the eyes (as will inevitably occur once they experience a natural environment) has the same effect.
• The overall eigensolution is not stable to small perturbations that make the correlational structure of the two eyes unequal. This also produces interesting effects on the growth rates of the eigenvectors concerned, given the initial conditions of approximately equivalent projections from both eyes.
Acknowledgements
We are very grateful to Ken Miller for helpful discussions, and to Christopher
Longuet-Higgins for pointing us in the direction of perturbation analysis. Support
Figure 3: Opposite-eye positive correlation matrix and eigenvectors. Eigenvalues of (y, −y), (x, −x) are 4.8 and 5.4, so ocular dominance is again inhibited.
Figure 4: The effect of random perturbations to the matrix. Although the order is restored (eigenvalues are 7.1 and 6.4), note the (αx, βx) eigenvector.
was from the SERC and a Nuffield Foundation Science travel grant to GG. GG
is grateful to David Willshaw and the Centre for Cognitive Science for their hospitality. GG's current address is The Centre for Cognitive Science, University of
Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, Scotland, and correspondence
should be directed to him there.
References
[1] Goodhill, GJ (1991). Correlations, Competition and Optimality: Modelling the Development of Topography and Ocular Dominance. PhD Thesis, Sussex University.
[2] Linsker, R (1986). From basic network principles to neural architecture (series).
Proc. Nat. Acad. Sci., USA, 83, pp 7508-7512,8390-8394,8779-8783.
[3] MacKay, DJC & Miller, KD (1990). Analysis of Linsker's simulations of Hebbian rules. Neural Computation, 2, pp 169-182.
[4] MacKay, DJC & Miller, KD (1990). Analysis of Linsker's application of Hebbian rules to linear networks. Network, 1, pp 257-297.
[5] Miller, KD (1989). Correlation-based Mechanisms in Visual Cortex: Theoretical and
Empirical Studies. PhD Thesis, Stanford University Medical School.
[6] Miller, KD (1990). Correlation-based mechanisms of neural development. In
MA Gluck & DE Rumelhart, editors, Neuroscience and Connectionist Theory.
Hillsborough, NJ: Lawrence Erlbaum.
[7] Miller, KD (1990). Derivation of linear Hebbian equations from a nonlinear
Hebbian model of synaptic plasticity. Neural Computation, 2, pp 321-333.
[8] Miller, KD, Keller, JB & Stryker, MP (1989). Ocular dominance column development: Analysis and simulation. Science, 245, pp 605-615.
5,247 | 5,750 | Newton-Stein Method:
A Second Order Method for GLMs via Stein?s Lemma
Murat A. Erdogdu
Department of Statistics
Stanford University
erdogdu@stanford.edu
Abstract
We consider the problem of efficiently computing the maximum likelihood estimator in Generalized Linear Models (GLMs) when the number of observations
is much larger than the number of coefficients (n p 1). In this regime, optimization algorithms can immensely benefit from approximate second order information. We propose an alternative way of constructing the curvature information by formulating it as an estimation problem and applying a Stein-type lemma,
which allows further improvements through sub-sampling and eigenvalue thresholding. Our algorithm enjoys fast convergence rates, resembling that of second
order methods, with modest per-iteration cost. We provide its convergence analysis for the case where the rows of the design matrix are i.i.d. samples with bounded
support. We show that the convergence has two phases, a quadratic phase followed
by a linear phase. Finally, we empirically demonstrate that our algorithm achieves
the highest performance compared to various algorithms on several datasets.
1
Introduction
Generalized Linear Models (GLMs) play a crucial role in numerous statistical and machine learning problems. GLMs formulate the natural parameter in exponential families as a linear model
and provide a miscellaneous framework for statistical methodology and supervised learning tasks.
Celebrated examples include linear, logistic, multinomial regressions and applications to graphical
models [MN89, KF09].
In this paper, we focus on how to solve the maximum likelihood problem efficiently in the GLM
setting when the number of observations n is much larger than the dimension of the coefficient
vector p, i.e., n
p. GLM optimization task is typically expressed as a minimization problem
where the objective function is the negative log-likelihood that is denoted by `( ) where 2 Rp is
the coefficient vector. Many optimization algorithms are available for such minimization problems
[Bis95, BV04, Nes04]. However, only a few uses the special structure of GLMs. In this paper, we
consider updates that are specifically designed for GLMs, which are of the from
Qr `( ) ,
where
(1.1)
is the step size and Q is a scaling matrix which provides curvature information.
For the updates of the form Eq. (1.1), the performance of the algorithm is mainly determined by the
scaling matrix Q. Classical Newton?s Method (NM) and Natural Gradient Descent (NG) are recovered by simply taking Q to be the inverse Hessian and the inverse Fisher?s information at the current
iterate, respectively [Ama98, Nes04]. Second order methods may achieve quadratic convergence
rate, yet they suffer from excessive cost of computing the scaling matrix at every iteration. On the
other hand, if we take Q to be the identity matrix, we recover the simple Gradient Descent (GD)
method which has a linear convergence rate. Although GD?s convergence rate is slow compared to
that of second order methods, modest per-iteration cost makes it practical for large-scale problems.
The trade-off between the convergence rate and per-iteration cost has been extensively studied
[BV04, Nes04]. In n
p regime, the main objective is to construct a scaling matrix Q that
1
is computational feasible and provides sufficient curvature information. For this purpose, several
Quasi-Newton methods have been proposed [Bis95, Nes04]. Updates given by Quasi-Newton methods satisfy an equation which is often referred as the Quasi-Newton relation. A well-known member
of this class of algorithms is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [Nes04].
In this paper, we propose an algorithm that utilizes the structure of GLMs by relying on a Stein-type
lemma [Ste81]. It attains fast convergence rate with low per-iteration cost. We call our algorithm
Newton-Stein Method which we abbreviate as NewSt. Our contributions are summarized as follows:
• We recast the problem of constructing a scaling matrix as an estimation problem and apply a Stein-type lemma along with sub-sampling to form a computationally feasible Q.
• Newton method's O(np² + p³) per-iteration cost is replaced by O(np + p²) per-iteration cost and a one-time O(|S|p²) cost, where |S| is the sub-sample size.
• Assuming that the rows of the design matrix are i.i.d. and have bounded support, and
denoting the iterates of the Newton-Stein method by {β̂^t}_{t≥0}, we prove a bound of the form
‖β̂^{t+1} − β*‖₂ ≤ τ₁ ‖β̂^t − β*‖₂ + τ₂ ‖β̂^t − β*‖₂²,   (1.2)
where β* is the minimizer and τ₁, τ₂ are the convergence coefficients. The above bound
implies that the convergence starts with a quadratic phase and transitions into linear later.
• We demonstrate its performance on four datasets by comparing it to several algorithms.
The rest of the paper is organized as follows: Section 1.1 surveys the related work and Section 1.2
introduces the notations used throughout the paper. Section 2 briefly discusses the GLM framework
and its relevant properties. In Section 3, we introduce Newton-Stein method, develop its intuition,
and discuss the computational aspects. Section 4 covers the theoretical results and in Section 4.3
we discuss how to choose the algorithm parameters. Finally, in Section 5, we provide the empirical
results where we compare the proposed algorithm with several other methods on four datasets.
1.1 Related work
There are numerous optimization techniques that can be used to find the maximum likelihood estimator in GLMs. For moderate values of n and p, classical second order methods such as NM, NG
are commonly used. In large-scale problems, data dimensionality is the main factor while choosing the right optimization method. Large-scale optimization tasks have been extensively studied
through online and batch methods. Online methods use a gradient (or sub-gradient) of a single,
randomly selected observation to update the current iterate [Bot10]. Their per-iteration cost is independent of n, but the convergence rate might be extremely slow. There are several extensions of the
classical stochastic descent algorithms (SGD), providing significant improvement and/or stability
[Bot10, DHS11, SRB13].
On the other hand, batch algorithms enjoy faster convergence rates, though their per-iteration cost
may be prohibitive. In particular, second order methods attain quadratic rate, but constructing the
Hessian matrix requires excessive computation. Many algorithms aim at forming an approximate,
cost-efficient scaling matrix. This idea lies at the core of Quasi-Newton methods [Bis95].
Another approach to construct an approximate Hessian makes use of sub-sampling techniques
[Mar10, BCNN11, VP12, EM15]. Many contemporary learning methods rely on sub-sampling as
it is simple and it provides significant boost over the first order methods. Further improvements
through conjugate gradient methods and Krylov sub-spaces are available.
Many hybrid variants of the aforementioned methods are proposed. Examples include the combinations of sub-sampling and Quasi-Newton methods [BHNS14], SGD and GD [FS12], NG and NM
[LRF10], NG and low-rank approximation [LRMB08]. Lastly, algorithms that specialize on certain types of GLMs include coordinate descent methods for the penalized GLMs [FHT10] and trust
region Newton methods [LWK08].
1.2
Notation
Let [n] = {1, 2, ..., n}, and denote the size of a set S by |S|. The gradient and the Hessian of f
with respect to are denoted by r f and r2 f , respectively. The j-th derivative of a function g
is denoted by g (j) . For vector x 2 Rp and matrix X 2 Rp?p , kxk2 and kXk2 denote the `2 and
spectral norms, respectively. PC is the Euclidean projection onto set C, and Bp (R) ? Rp is the
ball of radius R. For random variables x, y, d(x, y) and D(x, y) denote probability metrics (to be
explicitly defined later), measuring the distance between the distributions of x and y.
2
2 Generalized Linear Models
R if its density can be written of the form f (y|?) = exp ?y
(?) h(y), where is the cumulant
generating function and h is the carrier density. Let y1 , y2 , ..., yn be independent observations such
that 8i 2 [n], yi ? f (yi |?i ). For ? = (?1 , ..., ?n ), the joint likelihood is
f(y₁, y₂, ..., yₙ | θ) = exp{ Σᵢ₌₁ⁿ [yᵢθᵢ − ψ(θᵢ)] } Πᵢ₌₁ⁿ h(yᵢ).
We consider the problem of learning the maximum likelihood estimator in the above exponential
family framework, where the vector θ ∈ Rⁿ is modeled through the linear relation,
θ = Xβ,
for some design matrix X ∈ Rⁿˣᵖ with rows xᵢ ∈ Rᵖ, and a coefficient vector β ∈ Rᵖ. This formulation is known as Generalized Linear Models (GLMs) in canonical form. The cumulant generating function ψ determines the class of GLMs, i.e., for ordinary least squares (OLS) ψ(z) = z², and for logistic regression (LR) ψ(z) = log(1 + eᶻ).
Maximum likelihood estimation in the above formulation is equivalent to minimizing the negative
log-likelihood function ℓ(β),
ℓ(β) = (1/n) Σᵢ₌₁ⁿ [ψ(⟨xᵢ, β⟩) − yᵢ⟨xᵢ, β⟩],   (2.1)
where ⟨x, β⟩ is the inner product between the vectors x and β. The relation to OLS and LR can be seen much more easily by plugging the corresponding ψ(z) into Eq. (2.1). The gradient and the Hessian
of ℓ(β) can be written as:
∇ℓ(β) = (1/n) Σᵢ₌₁ⁿ [ψ⁽¹⁾(⟨xᵢ, β⟩) − yᵢ] xᵢ,   ∇²ℓ(β) = (1/n) Σᵢ₌₁ⁿ ψ⁽²⁾(⟨xᵢ, β⟩) xᵢxᵢᵀ.   (2.2)
For a sequence of scaling matrices {Q^t}_{t>0} ⊂ Rᵖˣᵖ, we consider iterations of the form
β̂^{t+1} = β̂^t − γ_t Q^t ∇ℓ(β̂^t),
where γ_t is the step size. The above iteration is our main focus, but with a new approach on how to
compute the sequence of matrices {Qt }t>0 . We formulate the problem of finding a scalable Qt as
an estimation problem and use a Stein-type lemma that provides a computationally efficient update.
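For concreteness, here is a minimal logistic-regression instance of Eqs. (2.1)-(2.2) in NumPy (a sketch of ours, not the paper's code): with ψ(z) = log(1 + e^z) we have ψ⁽¹⁾(z) = s(z) and ψ⁽²⁾(z) = s(z)(1 − s(z)) for the sigmoid s.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glm_loss(beta, X, y):
    z = X @ beta
    return np.mean(np.logaddexp(0.0, z) - y * z)        # psi(z) - y*z, averaged

def glm_grad(beta, X, y):
    z = X @ beta
    return X.T @ (sigmoid(z) - y) / len(y)

def glm_hessian(beta, X, y):
    z = X @ beta
    w = sigmoid(z) * (1.0 - sigmoid(z))                 # psi''(<x_i, beta>)
    return (X * w[:, None]).T @ X / len(y)              # the O(n p^2) bottleneck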
3 Newton-Stein Method
Classical Newton-Raphson update is generally used for training GLMs. However, its per-iteration
cost makes it impractical for large-scale optimization. The main bottleneck is the computation of
the Hessian matrix that requires O(np²) flops, which is prohibitive when n ≫ p ≫ 1. Numerous
methods have been proposed to achieve NM's fast convergence rate while keeping the per-iteration
cost manageable.
The task of constructing an approximate Hessian can be viewed as an estimation problem. Assuming
that the rows of X are i.i.d. random vectors, the Hessian of GLMs with cumulant generating function
has the following form
[Q^t]⁻¹ = (1/n) Σᵢ₌₁ⁿ xᵢxᵢᵀ ψ⁽²⁾(⟨xᵢ, β⟩) ≈ E[xxᵀ ψ⁽²⁾(⟨x, β⟩)].
We observe that [Q^t]⁻¹ is just a sum of i.i.d. matrices. Hence, the true Hessian is nothing but a sample mean estimator of its expectation. Another natural estimator would be the sub-sampled Hessian
method suggested by [Mar10, BCNN11, EM15]. Similarly, our goal is to propose an appropriate
estimator that is also computationally efficient.
We use the following Stein-type lemma to derive an efficient estimator to the expectation of Hessian.
Lemma 3.1 (Stein-type lemma). Assume that x ∼ N_p(0, Σ) and β ∈ Rᵖ is a constant vector. Then for any function f : R → R that is twice "weakly" differentiable, we have
E[xxᵀ f(⟨x, β⟩)] = E[f(⟨x, β⟩)] Σ + E[f⁽²⁾(⟨x, β⟩)] Σβ βᵀΣ.   (3.1)
Algorithm 1 Newton-Stein method
Input: β̂⁰, r, ε, γ.
1. Set t = 0 and sub-sample a set of indices S ⊂ [n] uniformly at random.
2. Compute: σ² = λ_{r+1}(Σ̂_S), and Σ̂_r(Σ̂_S) = σ²I + argmin_{rank(M)=r} ‖Σ̂_S − σ²I − M‖_F.
3. while ‖β̂^{t+1} − β̂^t‖₂ ≥ ε do
     μ̂₂(β̂^t) = (1/n) Σᵢ₌₁ⁿ ψ⁽²⁾(⟨xᵢ, β̂^t⟩),   μ̂₄(β̂^t) = (1/n) Σᵢ₌₁ⁿ ψ⁽⁴⁾(⟨xᵢ, β̂^t⟩),
     Q^t = (1/μ̂₂(β̂^t)) [ Σ̂_r(Σ̂_S)⁻¹ − β̂^t (β̂^t)ᵀ / ( μ̂₂(β̂^t)/μ̂₄(β̂^t) + ⟨Σ̂_r(Σ̂_S) β̂^t, β̂^t⟩ ) ],
     β̂^{t+1} = P_{B_p(R)}( β̂^t − γ Q^t ∇ℓ(β̂^t) ),
     t ← t + 1.
4. end while
Output: β̂^t.
The proof of Lemma 3.1 is given in Appendix. The right hand side of Eq.(3.1) is a rank-1 update to
the first term. Hence, its inverse can be computed with O(p2 ) cost. Quantities that change at each
iteration are the ones that depend on β, i.e.,
μ₂(β) = E[ψ⁽²⁾(⟨x, β⟩)]   and   μ₄(β) = E[ψ⁽⁴⁾(⟨x, β⟩)].
μ₂(β) and μ₄(β) are scalar quantities and can be estimated by their corresponding sample means μ̂₂(β) and μ̂₄(β) (explicitly defined at Step 3 of Algorithm 1), with only O(np) computation.
To complete the estimation task suggested by Eq. (3.1), we need an estimator for the covariance
matrix Σ. A natural estimator is the sample mean, where we only use a sub-sample S ⊂ [n] so that the cost is reduced to O(|S|p²) from O(np²). The sub-sampling based sample mean estimator is denoted by Σ̂_S = Σ_{i∈S} xᵢxᵢᵀ/|S|, which is widely used in large-scale problems [Ver10]. We
highlight the fact that Lemma 3.1 replaces NM's O(np²) per-iteration cost with a one-time cost of O(np²). We further use sub-sampling to reduce this one-time cost to O(|S|p²).
In general, important curvature information is contained in the largest few spectral features. Following [EM15], we take the largest r eigenvalues of the sub-sampled covariance estimator and set the rest
of them to the (r + 1)-th eigenvalue. This operation helps denoising and requires only O(rp²)
computation. Step 2 of Algorithm 1 performs this procedure.
Inverting the constructed Hessian estimator can make use of the low-rank structure several times.
First, notice that the updates in Eq. (3.1) are based on rank-1 matrix additions. Hence, we can simply use a matrix inversion formula to derive an explicit equation (See Qt in Step 3 of Algorithm
1). This formulation would impose another inverse operation on the covariance estimator. Since
the covariance estimator is also based on rank-r approximation, one can utilize the low-rank inversion formula again. We emphasize that this operation is performed once. Therefore, instead of
NM's per-iteration cost of O(p³) due to inversion, the Newton-Stein method (NewSt) requires O(p²)
per-iteration and a one-time cost of O(rp²). Assuming that NewSt and NM converge in T₁ and
T₂ iterations respectively, the overall complexity of NewSt is O(npT₁ + p²T₁ + (|S| + r)p²) ≈
O(npT₁ + p²T₁ + |S|p²), whereas that of NM is O(np²T₂ + p³T₂).
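The O(p²) inversion claim rests on the Sherman–Morrison identity: if A^{-1} is available, then (A + uvᵀ)^{-1} = A^{-1} − A^{-1}u vᵀA^{-1} / (1 + vᵀA^{-1}u), which needs only matrix–vector products. A quick NumPy sanity check (our own illustration, not code from the paper):

import numpy as np

rng = np.random.default_rng(0)
p = 50
A = np.eye(p) * 2.0                          # easy-to-invert base matrix
A_inv = np.eye(p) / 2.0
u, v = rng.standard_normal(p), rng.standard_normal(p)

# Sherman-Morrison: rank-1 update of the inverse in O(p^2).
Au = A_inv @ u
vA = v @ A_inv
sm_inv = A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

direct = np.linalg.inv(A + np.outer(u, v))   # O(p^3) reference computation
assert np.allclose(sm_inv, direct)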
Even though Lemma 3.1 assumes that the covariates are multivariate Gaussian random vectors,
in Section 4, the only assumption we make on the covariates is that they have bounded support,
which covers a wide class of random variables. The left plot of Figure 1 shows that the estimation
is accurate for various distributions. This is a consequence of the fact that the proposed estimator in
Eq. (3.1) relies on the distribution of x only through inner products of the form hx, vi, which in turn
results in approximate normal distribution due to the central limit theorem when p is sufficiently
large. We will discuss this phenomenon in detail in Section 4.
The convergence rate of Newton-Stein method has two phases. Convergence starts quadratically and
transitions into a linear rate when it gets close to the true minimizer. The phase transition behavior
can be observed through the right plot in Figure 1. This is a consequence of the bound provided in
Eq. (1.2), which is the main result of our theorems stated in Section 4.
[Figure 1 appears here: left panel "Difference between estimated and true Hessian" plots log10(estimation error) versus dimension p for Bernoulli, Gaussian, Poisson, and Uniform covariates; right panel "Convergence Rate" plots log10(error) versus iterations for sub-sample sizes |S| = 1000 and |S| = 10000.]
Figure 1: The left plot demonstrates the accuracy of the proposed Hessian estimation over different distributions.
The number of observations is set to n = O(p log(p)). The right plot shows the phase transition in the convergence rate of the Newton-Stein method (NewSt): convergence starts with a quadratic rate and transitions into
linear. Plots are obtained using the Covertype dataset.
4 Theoretical results
We start this section by introducing the terms that will appear in the theorems. Then, we provide our
technical results on uniformly bounded covariates. The proofs are provided in Appendix.
4.1 Preliminaries
The Hessian estimation described in the previous section relies on a Gaussian approximation. For theoretical purposes, we use the following probability metric to quantify the gap between the distribution
of the x_i's and that of a normal vector.
Definition 1. Given a family of functions H, random vectors x, y ∈ R^p, and any h ∈ H, define

    d_H(x, y) = sup_{h∈H} d_h(x, y),   where   d_h(x, y) = |E[h(x)] − E[h(y)]|.
Many probability metrics can be expressed as above by choosing a suitable function class H. Examples include the Total Variation (TV), Kolmogorov and Wasserstein metrics [GS02, CGS10]. Based on
the second and fourth derivatives of the cumulant generating function, we define the following classes:

    H₁ = { h(x) = ψ^{(2)}(⟨x, β⟩) : β ∈ B_p(R) },    H₂ = { h(x) = ψ^{(4)}(⟨x, β⟩) : β ∈ B_p(R) },
    H₃ = { h(x) = ⟨v, x⟩² ψ^{(2)}(⟨x, β⟩) : β ∈ B_p(R), ‖v‖₂ = 1 },
where B_p(R) ⊂ R^p is the ball of radius R. Exact calculation of such probability metrics is often
difficult. The general approach is to upper bound the distance by a more intuitive metric. In our
case, we observe that d_{H_j}(x, y) for j = 1, 2, 3 can be easily upper bounded by d_{TV}(x, y) up to a
scaling constant, when the covariates have bounded support.
We will further assume that the covariance matrix follows the r-spiked model, i.e.,

    Σ = σ² I + Σ_{i=1}^r θ_i u_i u_iᵀ,

which is commonly encountered in practice [BS06]. This simply means that the first
r eigenvalues of the covariance matrix are large and the rest are small and equal to each other. The large
eigenvalues of Σ correspond to the signal part and the small ones (denoted by σ²) can be considered as
the noise component.
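As a toy illustration of this structure (our own construction, with arbitrary values for the spikes):

import numpy as np

rng = np.random.default_rng(1)
p, r, sigma2 = 100, 3, 1.0
U, _ = np.linalg.qr(rng.standard_normal((p, r)))   # r orthonormal directions
theta = np.array([50.0, 30.0, 20.0])               # large "signal" eigenvalues
Sigma = sigma2 * np.eye(p) + U @ np.diag(theta) @ U.T

w = np.linalg.eigvalsh(Sigma)
print(w[-4:])   # three spiked eigenvalues (theta_i + sigma2) above the noise level sigma2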
4.2 Composite convergence rate
We have the following per-step bound for the iterates generated by the Newton-Stein method, when
the covariates are supported on a p-dimensional ball.
Theorem 4.1. Assume that the covariates x₁, x₂, ..., xₙ are i.i.d. random vectors supported on a
ball of radius √K with

    E[x_i] = 0   and   E[x_i x_iᵀ] = Σ,

where Σ follows the r-spiked model. Further assume that the cumulant generating function ψ has
bounded 2nd–5th derivatives and that R is the radius of the projection P_{B_p(R)}. For {β̂^t}_{t>0} given
by the Newton-Stein method with γ = 1, define the event

    E = { µ₂(β̂^t) + µ₄(β̂^t) ⟨Σ β̂^t, β̂^t⟩ > κ,  β̂^t ∈ B_p(R) }   (4.1)

for some positive constant κ, and the optimal value β_*. If n, |S| and p are sufficiently large, then
there exist constants c, c₁, c₂ and η depending on the radii K, R, on P(E) and on the bounds on |ψ^{(2)}| and
|ψ^{(4)}| such that, conditioned on the event E, with probability at least 1 − c/p², we have

    ‖β̂^{t+1} − β_*‖₂ ≤ τ₁ ‖β̂^t − β_*‖₂ + τ₂ ‖β̂^t − β_*‖₂²,   (4.2)

where the coefficients τ₁ and τ₂ are deterministic constants defined as

    τ₁ = η D(x, z) + c₁ η √( p / min{ |S| p / log(p), n / log(n) } ),    τ₂ = c₂ η,

and D(x, z) is defined as

    D(x, z) = ‖Σ‖₂ d_{H₁}(x, z) + ‖Σ‖₂² R² d_{H₂}(x, z) + d_{H₃}(x, z),   (4.3)

for a multivariate Gaussian random variable z with the same mean and covariance as the x_i's.
The bound in Eq. (4.2) holds with high probability, and the coefficients τ₁ and τ₂ are deterministic
constants which describe the convergence behavior of the Newton-Stein method. Observe that
the coefficient τ₁ is the sum of two terms: D(x, z) measures how accurate the Hessian estimation is,
and the second term depends on the sub-sample size and the data dimensions.
Theorem 4.1 shows that the convergence of the Newton-Stein method can be upper bounded by a compositely converging sequence, that is, the squared term will dominate at first, giving a quadratic
rate, and then the convergence will transition into a linear phase as the iterate gets close to the optimal
value. The coefficients τ₁ and τ₂ govern the linear and quadratic terms, respectively. The effect of
sub-sampling appears in the coefficient of the linear term. In theory, there is a threshold for the sub-sampling size |S|, namely O(n/log(n)), beyond which further sub-sampling has no effect. The
transition point between the quadratic and the linear phases is determined by the sub-sampling size
and the properties of the data. The phase transition can be observed through the right plot in Figure
1. Using the above theorem, we state the following corollary.
Corollary 4.2. Assume that the assumptions of Theorem 4.1 hold. For a constant δ ≥ P(E^C), a
tolerance ε satisfying

    ε ≥ 20R ( c/p² + δ ),

and for an iterate satisfying E[‖β̂^t − β_*‖₂] > ε, the iterates of the Newton-Stein method will satisfy

    E[‖β̂^{t+1} − β_*‖₂] ≤ τ̂₁ E[‖β̂^t − β_*‖₂] + τ₂ E[‖β̂^t − β_*‖₂²],

where τ̂₁ = τ₁ + 0.1, and τ₁, τ₂ are as in Theorem 4.1.
The bound stated in the above corollary is an analogue of the composite convergence (given in Eq. (4.2))
in expectation. Note that our results make strong assumptions on the derivatives of the cumulant generating function ψ. We emphasize that these assumptions are valid for linear and logistic regressions.
An example that does not fit in our scheme is Poisson regression with ψ(z) = e^z. However, we observed empirically that the algorithm still provides significant improvement. The following theorem
states a sufficient condition for the convergence of a composite sequence.
Theorem 4.3. Let {β̂^t}_{t≥0} be a compositely converging sequence with convergence coefficients
τ₁ and τ₂ as in Eq. (4.2) to the minimizer β_*. Let the starting point satisfy ‖β̂^0 − β_*‖₂ = ϑ <
(1 − τ₁)/τ₂ and define the interval Ξ = ( τ₁ϑ/(1 − τ₂ϑ), ϑ ). Then the sequence of ℓ₂-distances converges to 0. Further,
the number of iterations to reach a tolerance of ε can be upper bounded by inf_{ξ∈Ξ} J(ξ), where

    J(ξ) = log₂( log(ξ (τ₁/ξ + τ₂)) / log((τ₁/ξ + τ₂) ϑ) ) + log(ε/ξ) / log(τ₁ + τ₂ ξ).   (4.4)
The above theorem gives an upper bound on the number of iterations until reaching a tolerance of ε. The
first and second terms on the right hand side of Eq. (4.4) stem from the quadratic and linear phases,
respectively.
4.3 Algorithm parameters
NewSt takes three input parameters and for those, we suggest near-optimal choices based on our
theoretical results.
• Sub-sample size: NewSt uses a subset of indices to approximate the covariance matrix Σ.
Corollary 5.50 of [Ver10] proves that a sample size of O(p) is sufficient for sub-gaussian
covariates and that of O(p log(p)) is sufficient for arbitrary distributions supported in some
ball to estimate a covariance matrix by its sample mean estimator. In the regime we consider,
n ≫ p, we suggest to use a sample size of |S| = O(p log(p)).
• Rank: Many methods have been suggested to improve the estimation of the covariance matrix, and almost all of them rely on the concept of shrinkage [CCS10, DGJ13]. Eigenvalue
thresholding can be considered as a shrinkage operation which will retain only the important second order information [EM15]. Choosing the rank threshold r can simply be done
on the sample mean estimator of Σ. After obtaining the sub-sampled estimate of the mean,
one can either plot the spectrum and choose manually or use a technique from [DG13].
• Step size: Step size choices of NewSt are quite similar to those of Newton's method (see, e.g.,
[BV04]). The main difference comes from the eigenvalue thresholding. If the data follows
the r-spiked model, the optimal step size will be close to 1 if there is no sub-sampling.
However, due to fluctuations resulting from sub-sampling, we suggest the following step
size choice for NewSt:
    γ = 2 / ( 1 + (σ̂² − O(√(p/|S|))) / σ̂² ).   (4.5)
In general, this formula yields a step size greater than 1, which is due to rank thresholding,
providing faster convergence. See [EM15] for a detailed discussion.
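Putting the three guidelines together, the parameters can be set mechanically as in the sketch below. This is our own illustration: the helper name and the O(·) constants are arbitrary choices, and the step-size line assumes the reconstruction of Eq. (4.5) above.

import numpy as np

def newst_defaults(n, p, cov_eigvals_desc, r):
    # Sub-sample size |S| = O(p log p), capped by the threshold O(n / log n).
    S_size = int(min(p * np.log(p), n / np.log(n)))
    sigma2 = cov_eigvals_desc[r]          # (r+1)-th largest eigenvalue
    # Step size of Eq. (4.5); slightly above 1 because of rank thresholding.
    gamma = 2.0 / (1.0 + (sigma2 - np.sqrt(p / S_size)) / sigma2)
    return S_size, gamma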
5 Experiments
In this section, we validate the performance of NewSt through extensive numerical studies. We
experimented on two commonly used GLM optimization problems, namely, Logistic Regression
(LR) and Linear Regression (OLS). LR minimizes Eq. (2.1) for the logistic function ψ(z) = log(1 +
e^z), whereas OLS minimizes the same equation for ψ(z) = z². In the following, we briefly describe
the algorithms that are used in the experiments:
the algorithms that are used in the experiments:
? Newton?s Method (NM) uses the inverse Hessian evaluated at the current iterate, and may
achieve quadratic convergence. NM steps require O(np2 + p3 ) computation which makes
it impractical for large-scale datasets.
? Broyden-Fletcher-Goldfarb-Shanno (BFGS) forms a curvature matrix by cultivating the
information from the iterates and the gradients at each iteration. Under certain assumptions,
the convergence rate is locally super-linear and the per-iteration cost is comparable to that
of first order methods.
? Limited Memory BFGS (L-BFGS) is similar to BFGS, and uses only the recent few iterates to construct the curvature matrix, gaining significant performance in terms of memory.
? Gradient Descent (GD) update is proportional to the negative of the full gradient evaluated
at the current iterate. Under smoothness assumptions, GD achieves a linear convergence
rate, with O(np) per-iteration cost.
? Accelerated Gradient Descent (AGD) is proposed by Nesterov [Nes83], which improves
over the gradient descent by using a momentum term. Performance of AGD strongly depends of the smoothness of the function.
For all the algorithms, we use a constant step size that provides the fastest convergence. Sub-sample
size, rank and the constant step size for NewSt is selected by following the guidelines in Section 4.3.
We experimented over two real and two synthetic datasets, which are summarized in Table 1. The synthetic
data are generated through a multivariate Gaussian distribution, and the data dimensions are chosen so
that Newton's method still does well. The experimental results are summarized in Figure 2. We
observe that NewSt provides a significant improvement over the classical techniques. The methods
that come closest to NewSt are Newton's method for moderate n and p, and BFGS when n is large.
Observe that the convergence rate of NewSt has a clear phase transition point. As argued earlier,
this point depends on various factors including the sub-sampling size |S|, the data dimensions n and p, the rank threshold r, and the structure of the covariance matrix.
[Figure 2 appears here: eight panels plotting log(Error) versus Time (sec) for the methods NewSt, BFGS, LBFGS, Newton, GD, and AGD; Logistic Regression (top row) and Linear Regression (bottom row) on the datasets S3, S20, Covertype, and CT Slices, with rank thresholds r ∈ {2, 3, 20, 40} chosen per dataset.]
Figure 2: Performance of various optimization methods on different datasets. The red straight line represents
the proposed method NewSt. Algorithm parameters including the rank threshold are selected by the guidelines
described in Section 4.3.
The prediction of the phase transition point
is an interesting line of research, which would allow further tuning of algorithm parameters.
The optimal step-size for NewSt will typically be larger than 1, which is mainly due to the eigenvalue
thresholding operation. This feature is desirable if one is able to obtain a large step-size that provides
convergence. In such cases, the convergence is likely to be faster, yet more unstable compared to
smaller step size choices. We observed that, similar to other second order algorithms, NewSt is
susceptible to the step size selection. If the data is not well-conditioned and the sub-sample size
is not sufficiently large, the algorithm might have poor performance. This is mainly because the sub-sampling operation is performed only once at the beginning. Therefore, it might be good in practice
to sub-sample once in every few iterations.
Dataset      n        p    Reference
CT slices    53500    386  UCI repo [Lic13], [GKS+11]
Covertype    581012   54   UCI repo [Lic13], [BD99]
S3           500000   300  3-spiked model, [DGJ13]
S20          500000   300  20-spiked model, [DGJ13]

Table 1: Datasets used in the experiments.
6 Discussion
In this paper, we proposed an efficient algorithm for training GLMs. We call our algorithm
the Newton-Stein method (NewSt) as it takes a Newton update at each iteration, relying on a Stein-type
lemma. The algorithm requires a one-time O(|S|p²) cost to estimate the covariance structure and
O(np) per-iteration cost to form the update equations. We observe that the convergence of NewSt
has a phase transition from a quadratic rate to a linear one. This observation is justified theoretically along
with several other guarantees for covariates with bounded support, such as per-step bounds, conditions for convergence, etc. Parameter selection guidelines for NewSt are based on our theoretical
results. Our experiments show that NewSt provides high performance in GLM optimization.
Relaxing some of the theoretical constraints is an interesting line of research. In particular, bounded
support assumption as well as strong constraints on the cumulant generating functions might be
loosened. Another interesting direction is to determine when the phase transition point occurs,
which would provide a better understanding of the effects of sub-sampling and rank thresholding.
Acknowledgements
The author is grateful to Mohsen Bayati and Andrea Montanari for stimulating conversations on the
topic of this work. The author would like to thank Bhaswar B. Bhattacharya and Qingyuan Zhao for
carefully reading this article and providing valuable feedback.
References
[Ama98] Shun-Ichi Amari, Natural gradient works efficiently in learning, Neural computation 10 (1998).
[BCNN11] Richard H Byrd, Gillian M Chin, Will Neveitt, and Jorge Nocedal, On the use of stochastic hessian
information in optimization methods for machine learning, SIAM Journal on Optimization (2011).
[BD99]
Jock A Blackard and Denis J Dean, Comparative accuracies of artificial neural networks and
discriminant analysis in predicting forest cover types from cartographic variables, Computers and
electronics in agriculture (1999), 131–151.
[BHNS14] Richard H Byrd, SL Hansen, Jorge Nocedal, and Yoram Singer, A stochastic quasi-newton method
for large-scale optimization, arXiv preprint arXiv:1401.7020 (2014).
[Bis95]
Christopher M. Bishop, Neural networks for pattern recognition, Oxford University Press, 1995.
[Bot10]
L?on Bottou, Large-scale machine learning with stochastic gradient descent, COMPSTAT, 2010.
[BS06]
Jinho Baik and Jack W Silverstein, Eigenvalues of large sample covariance matrices of spiked
population models, Journal of Multivariate Analysis 97 (2006), no. 6, 1382–1408.
[BV04]
Stephen Boyd and Lieven Vandenberghe, Convex optimization, Cambridge University Press, 2004.
[CCS10] Jian-Feng Cai, Emmanuel J Cand?s, and Zuowei Shen, A singular value thresholding algorithm
for matrix completion, SIAM Journal on Optimization 20 (2010), no. 4, 1956–1982.
[CGS10] Louis HY Chen, Larry Goldstein, and Qi-Man Shao, Normal approximation by Stein's method,
Springer Science, 2010.
[DE15]
Lee H Dicker and Murat A Erdogdu, Flexible results for quadratic forms with applications to
variance components estimation, arXiv preprint arXiv:1509.04388 (2015).
[DG13]
David L Donoho and Matan Gavish, The optimal hard threshold for singular values is 4/√3,
arXiv:1305.5870 (2013).
[DGJ13]
David L Donoho, Matan Gavish, and Iain M Johnstone, Optimal shrinkage of eigenvalues in the
spiked covariance model, arXiv preprint arXiv:1311.0851 (2013).
[DHS11] John Duchi, Elad Hazan, and Yoram Singer, Adaptive subgradient methods for online learning
and stochastic optimization, J. Mach. Learn. Res. 12 (2011), 2121–2159.
[EM15]
Murat A Erdogdu and Andrea Montanari, Convergence rates of sub-sampled Newton methods,
arXiv preprint arXiv:1508.02810 (2015).
[FHT10]
Jerome Friedman, Trevor Hastie, and Rob Tibshirani, Regularization paths for generalized linear
models via coordinate descent, Journal of statistical software 33 (2010), no. 1, 1.
[FS12]
Michael P Friedlander and Mark Schmidt, Hybrid deterministic-stochastic methods for data fitting,
SIAM Journal on Scientific Computing 34 (2012), no. 3, A1380?A1405.
[GKS+ 11] Franz Graf, Hans-Peter Kriegel, Matthias Schubert, Sebastian P?lsterl, and Alexander Cavallaro,
2d image registration in ct images using radial image descriptors, MICCAI 2011, Springer, 2011.
[GS02]
Alison L Gibbs and Francis E Su, On choosing and bounding probability metrics, ISR 70 (2002).
[KF09]
Daphne Koller and Nir Friedman, Probabilistic graphical models, MIT press, 2009.
[Lic13]
M. Lichman, UCI machine learning repository, 2013.
[LRF10]
Nicolas Le Roux and Andrew W Fitzgibbon, A fast natural newton method, ICML, 2010.
[LRMB08] Nicolas Le Roux, Pierre-A Manzagol, and Yoshua Bengio, Topmoumoute online natural gradient
algorithm, NIPS, 2008.
[LWK08] Chih-J Lin, Ruby C Weng, and Sathiya Keerthi, Trust region newton method for logistic regression,
JMLR (2008).
[Mar10]
James Martens, Deep learning via hessian-free optimization, ICML, 2010, pp. 735–742.
[MN89]
Peter McCullagh and John A Nelder, Generalized linear models, vol. 2, Chapman and Hall, 1989.
[Nes83]
Yurii Nesterov, A method for unconstrained convex minimization problem with the rate of convergence O(1/k²), Doklady AN SSSR, vol. 269, 1983, pp. 543–547.
[Nes04] Yurii Nesterov, Introductory lectures on convex optimization: A basic course, vol. 87, Springer, 2004.
[SRB13] Mark Schmidt, Nicolas Le Roux, and Francis Bach, Minimizing finite sums with the stochastic
average gradient, arXiv preprint arXiv:1309.2388 (2013).
[Ste81]
Charles M Stein, Estimation of the mean of a multivariate normal distribution, Annals of Statistics
(1981), 1135–1151.
[Ver10]
Roman Vershynin, Introduction to the non-asymptotic analysis of random matrices,
arXiv:1011.3027 (2010).
[VP12]
Oriol Vinyals and Daniel Povey, Krylov Subspace Descent for Deep Learning, AISTATS, 2012.
5,248 | 5,751 | Asynchronous Parallel Stochastic Gradient for
Nonconvex Optimization
Xiangru Lian, Yijun Huang, Yuncheng Li, and Ji Liu
Department of Computer Science, University of Rochester
{lianxiangru,huangyj0,raingomm,ji.liu.uwisc}@gmail.com
Abstract
Asynchronous parallel implementations of stochastic gradient (SG) have been
broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and
speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and
provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is over a computer network and the other
? is on a shared
memory system. We establish an ergodic convergence rate O(1/ K) for both algorithms and prove
? that the linear speedup is achievable if the number of workers
is bounded by K (K is the total number of iterations). Our results generalize
and improve existing analysis for convex minimization.
1
Introduction
The asynchronous parallel optimization recently received many successes and broad attention in
machine learning and optimization [Niu et al., 2011, Li et al., 2013, 2014b, Yun et al., 2013, Fercoq
and Richt?arik, 2013, Zhang and Kwok, 2014, Marecek et al., 2014, Tappenden et al., 2015, Hong,
2014]. It is mainly due to that the asynchronous parallelism largely reduces the system overhead
comparing to the synchronous parallelism. The key idea of the asynchronous parallelism is to allow
all workers work independently and have no need of synchronization or coordination. The asynchronous parallelism has been successfully applied to speedup many state-of-the-art optimization
algorithms including stochastic gradient [Niu et al., 2011, Agarwal and Duchi, 2011, Zhang et al.,
2014, Feyzmahdavian et al., 2015, Paine et al., 2013, Mania et al., 2015], stochastic coordinate descent [Avron et al., 2014, Liu et al., 2014a, Sridhar et al., 2013], dual stochastic coordinate ascent
[Tran et al., 2015], and randomized Kaczmarz algorithm [Liu et al., 2014b].
In this paper, we are particularly interested in the asynchronous parallel stochastic gradient algorithm (A SY SG) for nonconvex optimization mainly due to its recent successes and popularity in
deep neural network [Bengio et al., 2003, Dean et al., 2012, Paine et al., 2013, Zhang et al., 2014,
Li et al., 2014a] and matrix completion [Niu et al., 2011, Petroni and Querzoni, 2014, Yun et al.,
2013]. While some research efforts have been made to study the convergence and speedup properties
of A SY SG for convex optimization, people still know very little about its properties in nonconvex
optimization. Existing theories cannot explain its convergence and excellent speedup property in
practice, mainly due to the nonconvexity of most deep learning formulations and the asynchronous
parallel mechanism. People even have no idea if its convergence is certified for nonconvex optimization, although it has been used widely in solving deep neural network and implemented on different
platforms such as computer network and shared memory (for example, multicore and multiGPU)
system.
To fill these gaps in theory, this paper tries to make the first attempt to study A SY SG for the following
nonconvex optimization problem
    min_{x∈R^n} f(x) := E_ξ [F(x; ξ)]   (1)
where ξ ∈ Ξ is a random variable and f(x) is a smooth (but not necessarily convex) function. The
most common specification is that Ξ is an index set of all training samples, Ξ = {1, 2, · · ·, N}, and
F(x; ξ) is the loss function with respect to the training sample indexed by ξ.
We consider two popular asynchronous parallel implementations of SG: one is for the computer
network originally proposed in [Agarwal and Duchi, 2011] and the other one is for the shared memory (including multicore/multiGPU) system originally proposed in [Niu et al., 2011]. Note that due
to the architecture diversity, it leads to two different algorithms. The key difference lies in that
the computer network can naturally (and efficiently) ensure the atomicity of reading and writing
the whole vector x, while the shared memory system is unable to do that efficiently and usually
only ensures efficiency for atomic reading and writing on a single coordinate of parameter x. The
implementation on a computer cluster is described by the "consistent asynchronous parallel SG" algorithm (A SY SG- CON), because the value of parameter x used for stochastic gradient evaluation is
consistent – an existing value of parameter x at some time point. Contrarily, we use the "inconsistent asynchronous parallel SG" algorithm (A SY SG- INCON) to describe the implementation on the
shared memory platform, because the value of parameter x used is inconsistent, that is, it might
not be the real state of x at any time point.
This paper studies the theoretical convergence and speedup properties for both algorithms. We establish an asymptotic convergence rate of O(1/√(KM)) for A SY SG- CON, where K is the total iteration
number and M is the size of the minibatch.¹ The linear speedup is proved to be achievable while the
number of workers is bounded by O(√K). For A SY SG- INCON, we establish asymptotic convergence and speedup properties similar to those of A SY SG- CON. The intuition for the linear speedup of
asynchronous parallelism for SG can be explained as follows. Recall that the serial SG essentially uses the "stochastic" gradient to surrogate the accurate gradient. A SY SG brings additional
deviation from the accurate gradient due to using "stale" (or delayed) information. If this additional
deviation is relatively minor compared to the deviation caused by the "stochastic" part in SG, the total iteration
complexity (or convergence rate) of A SY SG would be comparable to the serial SG, which implies a
nearly linear speedup. This is the key reason why A SY SG works.
The main contributions of this paper are highlighted as follows:
• Our result for A SY SG- CON generalizes and improves earlier analysis of A SY SG- CON for convex
optimization in [Agarwal and Duchi, 2011]. Particularly, we improve the upper bound on the maximal number of workers that ensures the linear speedup from O(K^{1/4}M^{−3/4}) to O(K^{1/2}M^{−1/2}),
i.e., by a factor K^{1/4}M^{1/4};
• The proposed A SY SG- INCON algorithm provides a more accurate description than H OGWILD !
[Niu et al., 2011] for the lock-free implementation of A SY SG on the shared memory system.
Although our result does not strictly dominate the result for H OGWILD ! due to different problem
settings, our result can be applied to more scenarios (e.g., nonconvex optimization);
• Our analysis provides theoretical (convergence and speedup) guarantees for many recent successes of A SY SG in deep learning. To the best of our knowledge, this is the first work that offers
such theoretical support.
Notation: x* denotes the global optimal solution to (1). ‖x‖₀ denotes the ℓ₀ norm of vector x, that
is, the number of nonzeros in x; e_i ∈ R^n denotes the ith natural unit basis vector. We use E_{ξ_{k,·}}(·)
to denote the expectation with respect to a set of variables {ξ_{k,1}, · · ·, ξ_{k,M}}. E(·) means taking the
expectation in terms of all random variables. G(x; ξ) is used to denote ∇F(x; ξ) for short. We use
∇_i f(x) and (G(x; ξ))_i to denote the ith element of ∇f(x) and G(x; ξ), respectively.
Assumption: Throughout this paper, we make the following assumption for the objective function.
All of its parts are quite common in the analysis of stochastic gradient algorithms.
Assumption 1. We assume that the following holds:
• (Unbiased Gradient): The stochastic gradient G(x; ξ) is unbiased, that is to say,

    ∇f(x) = E_ξ[G(x; ξ)].   (2)
¹ The speedup for T workers is defined as the ratio between the total work load using one worker and the
average work load using T workers to obtain a solution at the same precision. "The linear speedup is achieved"
means that the speedup with T workers is greater than cT for any value of T (c ∈ (0, 1] is a constant independent
of T).
• (Bounded Variance): The variance of the stochastic gradient is bounded:

    E_ξ(‖G(x; ξ) − ∇f(x)‖²) ≤ σ²,  ∀x.   (3)

• (Lipschitzian Gradient): The gradient function ∇f(·) is Lipschitzian, that is to say,

    ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖,  ∀x, ∀y.   (4)
Under the Lipschitzian gradient assumption, we can define two more constants L_s and L_max. Let
s be any positive integer. Define L_s to be the minimal constant satisfying the following inequality:

    ‖∇f(x) − ∇f(x + Σ_{i∈S} α_i e_i)‖ ≤ L_s ‖Σ_{i∈S} α_i e_i‖,  ∀S ⊂ {1, 2, ..., n} with |S| ≤ s.   (5)

Define L_max as the minimum constant that satisfies:

    |∇_i f(x) − ∇_i f(x + α e_i)| ≤ L_max |α|,  ∀i ∈ {1, 2, ..., n}.   (6)

It can be seen that L_max ≤ L_s ≤ L.
2 Related Work
This section mainly reviews asynchronous parallel gradient algorithms and asynchronous parallel
stochastic gradient algorithms, and refers readers to the long version of this paper² for a review of
stochastic gradient algorithms and synchronous parallel stochastic gradient algorithms.
The asynchronous parallel algorithms received broad attention in optimization recently, although
pioneering studies started in the 1980s [Bertsekas and Tsitsiklis, 1989]. Due to the rapid development
of hardware resources, the asynchronous parallelism recently received many successes when applied to parallel stochastic gradient [Niu et al., 2011, Agarwal and Duchi, 2011, Zhang et al., 2014,
Feyzmahdavian et al., 2015, Paine et al., 2013], stochastic coordinate descent [Avron et al., 2014, Liu
et al., 2014a], dual stochastic coordinate ascent [Tran et al., 2015], randomized Kaczmarz algorithm
[Liu et al., 2014b], and ADMM [Zhang and Kwok, 2014]. Liu et al. [2014a] and Liu and Wright
[2014] studied the asynchronous parallel stochastic coordinate descent algorithm with consistent
read and inconsistent read respectively, and proved that the linear speedup is achievable if T ≤ O(n^{1/2})
for smooth convex functions and T ≤ O(n^{1/4}) for functions with "smooth convex loss + nonsmooth
convex separable regularization". Avron et al. [2014] studied this asynchronous parallel stochastic
coordinate descent algorithm in solving Ax = b where A is a symmetric positive definite matrix,
and showed that the linear speedup is achievable if T ≤ O(n) for consistent read and T ≤ O(n^{1/2})
for inconsistent read. Tran et al. [2015] studied a semi-asynchronous parallel version of the Stochastic Dual Coordinate Ascent algorithm which periodically enforces primal-dual synchronization in a
separate thread.
We review the asynchronous parallel stochastic gradient algorithms last. Agarwal and Duchi
[2011] analyzed the A SY SG- CON algorithm (on a computer cluster) for convex smooth optimization
and proved a convergence rate of O(1/√(MK) + MT²/K), which implies that linear speedup is
achieved when T is bounded by O(K^{1/4}/M^{3/4}). In comparison, our analysis for the more general
nonconvex smooth optimization improves the upper bound by a factor K^{1/4}M^{1/4}. A very recent
work [Feyzmahdavian et al., 2015] extended the analysis in Agarwal and Duchi [2011] to minimize functions of the form "smooth convex loss + nonsmooth convex regularization" and obtained
similar results. Niu et al. [2011] proposed a lock-free asynchronous parallel implementation of SG
on the shared memory system and described this implementation as the H OGWILD ! algorithm. They
proved a sublinear convergence rate O(1/K) for strongly convex smooth objectives. Another recent work, Mania et al. [2015], analyzed asynchronous stochastic optimization algorithms for convex
functions by viewing them as serial algorithms with the input perturbed by bounded noise, and proved
convergence rates no worse than those obtained from the traditional point of view for several algorithms.
3 Asynchronous parallel stochastic gradient for computer network
This section considers the asynchronous parallel implementation of SG on a computer network proposed by Agarwal and Duchi [2011]. It has been successfully applied to the distributed neural
network [Dean et al., 2012] and the parameter server [Li et al., 2014a] to solve deep neural networks.
² http://arxiv.org/abs/1506.08272

3.1 Algorithm Description: A SY SG- CON
Algorithm 1 A SY SG- CON
Require: x_0, K, {γ_k}_{k=0,···,K−1}
Ensure: x_K
1: for k = 0, · · ·, K − 1 do
2:   Randomly select M training samples indexed by ξ_{k,1}, ξ_{k,2}, ..., ξ_{k,M};
3:   x_{k+1} = x_k − γ_k Σ_{m=1}^M G(x_{k−τ_{k,m}}, ξ_{k,m});
4: end for
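The delayed update in line 3 is easy to simulate in a single process by keeping the iterate history and evaluating each stochastic gradient at a stale iterate. The sketch below (our own toy, on a least-squares loss with arbitrary constants) does exactly that.

import numpy as np

rng = np.random.default_rng(0)
N, n = 1000, 20
A = rng.standard_normal((N, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(N)

def stoch_grad(x, xi):
    # gradient of the per-sample loss (a_xi^T x - b_xi)^2
    return 2.0 * A[xi] * (A[xi] @ x - b[xi])

K, M, T, gamma = 2000, 4, 8, 1e-3
history = [np.zeros(n)]                      # history[k] holds x_k
for k in range(K):
    x_next = history[-1].copy()
    for _ in range(M):
        tau = rng.integers(0, min(T, k) + 1)  # bounded delay: tau_{k,m} <= T
        xi = rng.integers(N)
        x_next -= gamma * stoch_grad(history[-1 - tau], xi)  # uses stale x_{k - tau}
    history.append(x_next)

print(np.linalg.norm(A.T @ (A @ history[-1] - b)) / N)  # full-gradient norm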
The "star" in the star-shaped network is a master machine³ which maintains the parameter x.
Other machines in the computer network serve as workers which only communicate with the
master. All workers exchange information with the master independently and simultaneously,
basically repeating the following steps:
• (Select): randomly select a subset of training samples S ⊂ Ξ;
• (Pull): pull parameter x from the master;
• (Compute): compute the stochastic gradient g ← Σ_{ξ∈S} G(x; ξ);
• (Push): push g to the master.
The master basically repeats the following steps:
• (Aggregate): aggregate a certain amount of stochastic gradients "g" from workers;
• (Sum): summarize all "g"s into a vector Δ;
• (Update): update parameter x by x ← x − γΔ.
While the master is aggregating stochastic gradients from workers, it does not care about the sources
of the collected stochastic gradients. As long as the total amount achieves the predefined quantity,
the master will compute Δ and perform the update on x. The "update" step is performed as an atomic
operation – workers cannot read the value of x during this step, which can be efficiently implemented
in the network (especially in the parameter server [Li et al., 2014a]). The key difference between this
asynchronous parallel implementation of SG and the serial (or synchronous parallel) SG algorithm
lies in the "update" step: some stochastic gradients "g" in "Δ" might be computed from
some early value of x instead of the current one, while in the serial SG, all g's are guaranteed to use
the current value of x.
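In code, the master's aggregate–sum–update cycle is just draining a queue of gradients, indifferent to which worker produced each one. The following is a toy sketch (all names are ours):

import queue
import numpy as np

grad_q = queue.Queue()              # workers push stochastic gradients here

def master_step(x, gamma, M):
    delta = np.zeros_like(x)
    for _ in range(M):              # (Aggregate)/(Sum): collect M gradients
        delta += grad_q.get()       # blocks until some worker pushes one
    return x - gamma * delta        # (Update): one atomic write of x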
The asynchronous parallel implementation substantially reduces the system overhead and overcomes
the possible large network delay, but the cost is to use old values of x in the stochastic gradient
evaluation. We will show in Section 3.2 that the negative effect of this cost vanishes asymptotically.
To mathematically characterize this asynchronous parallel implementation, we monitor parameter x
in the master. We use the subscript k to indicate the kth iteration on the master. For example, x_k
denotes the value of parameter x after k updates, and so on. We introduce a variable τ_{k,m}
to denote the delay (in number of updates) of the value of x used in evaluating the mth stochastic gradient at the kth iteration.
This asynchronous parallel implementation of SG on the "star-shaped" network is summarized by
the A SY SG- CON algorithm, see Algorithm 1. The suffix "CON" is short for "consistent read".
"Consistent read" means that the value of x used to compute the stochastic gradient is a real state
of x, no matter at which time point. "Consistent read" is ensured by the atomicity of the "update"
step. When the atomicity fails, it leads to "inconsistent read", which will be discussed in Section 4.
It is worth noting that on some "non-star" structures the asynchronous implementation can also
be described as A SY SG- CON in Algorithm 1, for example, the cyclic delayed architecture and the
locally averaged delayed architecture [Agarwal and Duchi, 2011, Figure 2].
3.2 Analysis for A SY SG- CON
To analyze Algorithm 1, besides Assumption 1 we make the following additional assumptions.
Assumption 2. We assume that the following holds:
• (Independence): All random variables in {ξ_{k,m}}_{k=0,1,···,K; m=1,···,M} in Algorithm 1 are independent of each other;
• (Bounded Age): All delay variables τ_{k,m} are bounded: max_{k,m} τ_{k,m} ≤ T.
The independence assumption strictly holds if all workers select samples with replacement. Although it might not be satisfied strictly in practice, it is a common assumption made for the analysis
³ There could be more than one machine in some networks, but all of them serve the same purpose and
can be treated as a single machine.
purpose. The bounded delay assumption is much more important. As pointed out before, the asynchronous implementation may use some old value of parameter x to evaluate the stochastic gradient.
Intuitively, the age (or "oldness") should not be too large to ensure the convergence. Therefore, it
is a natural and reasonable idea to assume an upper bound for ages. This assumption is commonly
used in the analysis for asynchronous algorithms, for example, [Niu et al., 2011, Avron et al., 2014,
Liu and Wright, 2014, Liu et al., 2014a, Feyzmahdavian et al., 2015, Liu et al., 2014b]. It is worth
noting that the upper bound T is roughly proportional to the number of workers.
Under Assumptions 1 and 2, we have the following convergence rate for nonconvex optimization.
Theorem 1. Assume that Assumptions 1 and 2 hold and that the steplength sequence {γ_k}_{k=1,···,K} in
Algorithm 1 satisfies

    L M γ_k + 2 L² M² T γ_k Σ_{τ=1}^T γ_{k+τ} ≤ 1   for all k = 1, 2, ....   (7)

We have the following ergodic convergence rate for the iteration of Algorithm 1:

    (1 / Σ_{k=1}^K γ_k) Σ_{k=1}^K γ_k E(‖∇f(x_k)‖²) ≤ [ 2(f(x_1) − f(x*)) + Σ_{k=1}^K ( γ_k² M L + 2 L² M² γ_k Σ_{j=k−T}^{k−1} γ_j² ) σ² ] / ( M Σ_{k=1}^K γ_k ),   (8)

where E(·) denotes taking expectation in terms of all random variables in Algorithm 1.
To evaluate the convergence rate, the metrics commonly used in convex optimization are not eligible, for example, f(x_k) − f* and ‖x_k − x*‖². For nonconvex optimization, we use the ergodic
convergence as the metric, that is, the weighted average of the ℓ₂ norm of all gradients ‖∇f(x_k)‖²,
which is used in the analysis for nonconvex optimization [Ghadimi and Lan, 2013]. Although the
metric used in nonconvex optimization is not exactly comparable to f(x_k) − f* or ‖x_k − x*‖² used
in the analysis for convex optimization, it is not totally unreasonable to think that they are roughly
of the same order. The ergodic convergence directly indicates the following convergence: if we randomly select an index K̃ from {1, 2, · · ·, K} with probability {γ_k / Σ_{k=1}^K γ_k}, then E(‖∇f(x_K̃)‖²)
is bounded by the right hand side of (8) and all the bounds we show in the following.
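In code, this output rule is just a weighted draw of an iterate index (a sketch, with an arbitrary steplength sequence):

import numpy as np

gammas = np.full(1000, 1e-3)        # any steplength sequence gamma_1..gamma_K
k_tilde = np.random.choice(len(gammas), p=gammas / gammas.sum())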
Taking a close look at Theorem 1, we can properly choose the steplength γ_k as a constant value and
obtain the following convergence rate:

Corollary 2. Assume that Assumptions 1 and 2 hold. Set the steplength γ_k to be a constant γ,

    γ := √( (f(x_1) − f(x*)) / (M L K σ²) ).   (9)

If the delay parameter T is bounded by

    K ≥ 4 M L (f(x_1) − f(x*)) (T + 1)² / σ²,   (10)

then the output of Algorithm 1 satisfies the following ergodic convergence rate:

    min_{k∈{1,···,K}} E‖∇f(x_k)‖² ≤ (1/K) Σ_{k=1}^K E‖∇f(x_k)‖² ≤ 4 √( (f(x_1) − f(x*)) L / (M K) ) σ.   (11)

This corollary basically claims that when the total iteration number K is greater than O(T²), the
convergence rate achieves O(1/√(MK)). Since this rate does not depend on the delay parameter
T after a sufficient number of iterations, the negative effect of using old values of x for stochastic
gradient evaluation vanishes asymptotically. In other words, if the total number of workers is bounded
by O(√(K/M)), the linear speedup is achieved.

Note that our convergence rate O(1/√(MK)) is consistent with the serial SG (with M = 1) for
convex optimization [Nemirovski et al., 2009], the synchronous parallel (or mini-batch) SG for
convex optimization [Dekel et al., 2012], and nonconvex smooth optimization [Ghadimi and Lan,
2013]. Therefore, an important observation is that as long as the number of workers (which is
proportional to T) is bounded by O(√(K/M)), the iteration complexity to achieve the same accuracy
level will be roughly the same. In other words, the average work load for each worker is reduced
by the factor T compared to the serial SG. Therefore, the linear speedup is achievable if T ≤
O(√(K/M)). Since our convergence rate meets several special cases, it is tight.
Next we compare with the analysis of A SY SG- CON for convex smooth optimization in Agarwal
and Duchi [2011, Corollary 2]. They proved an asymptotic convergence rate of O(1/√(MK)), which
is consistent with ours. But their results require T ≤ O(K^{1/4}M^{−3/4}) to guarantee linear speedup.
Our result improves it by a factor O(K^{1/4}M^{1/4}).
4 Asynchronous parallel stochastic gradient for shared memory architecture
This section considers a widely used lock-free asynchronous implementation of SG on the shared
memory system proposed in Niu et al. [2011]. Its advantages have been witnessed in solving SVM,
graph cuts [Niu et al., 2011], linear equations [Liu et al., 2014b], and matrix completion [Petroni
and Querzoni, 2014]. While the computer network always involves multiple machines, the shared
memory platform usually only includes a single machine with multiple cores / GPUs sharing the
same memory.
4.1 Algorithm Description: A SY SG- INCON

For the shared memory platform, one can exactly follow A SY SG- CON on the computer network using software locks, which is expensive⁴. Therefore, in practice the lock-free asynchronous parallel implementation of SG is preferred. This section considers the same implementation as Niu et al. [2011], but provides a more precise algorithm description A SY SG- INCON than H OGWILD ! proposed in Niu et al. [2011].

Algorithm 2 A SY SG- INCON
Require: x_0, K, γ
Ensure: x_K
1: for k = 0, · · ·, K − 1 do
2:   Randomly select M training samples indexed by ξ_{k,1}, ξ_{k,2}, ..., ξ_{k,M};
3:   Randomly select i_k ∈ {1, 2, ..., n} with uniform distribution;
4:   (x_{k+1})_{i_k} = (x_k)_{i_k} − γ Σ_{m=1}^M (G(x̂_{k,m}; ξ_{k,m}))_{i_k};
5: end for
In this lock-free implementation, the shared memory stores the parameter x and allows all workers
to read and modify parameter x simultaneously without using locks. All workers repeat the
following steps independently, concurrently, and simultaneously:
• (Read): read the parameter from the shared memory to the local memory without software locks
(we use x̂ to denote its value);
• (Compute): sample a training datum ξ and use x̂ to compute the stochastic gradient G(x̂; ξ) locally;
• (Update): update parameter x in the shared memory without software locks, x ← x − γG(x̂; ξ).
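A toy version of this loop can be written with Python threads on a shared NumPy array (our own sketch, not the authors' code); note that CPython's GIL serializes bytecode, so this only mimics the lock-free access pattern rather than producing true hardware-level races.

import threading
import numpy as np

rng0 = np.random.default_rng(0)
N, n = 500, 10
A = rng0.standard_normal((N, n))
b = A @ rng0.standard_normal(n)
x = np.zeros(n)                        # shared parameter, no locks
gamma = 1e-3

def worker(seed, num_updates):
    rng = np.random.default_rng(seed)  # per-thread RNG
    for _ in range(num_updates):
        x_hat = x.copy()               # "read": may interleave with writers
        i = rng.integers(N)
        g = 2.0 * A[i] * (A[i] @ x_hat - b[i])
        for j in range(n):             # "update": per-coordinate, lock-free
            x[j] -= gamma * g[j]

threads = [threading.Thread(target=worker, args=(s, 2000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))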
Since we do not use locks in either the "read" or the "update" step, multiple workers may
manipulate the shared memory simultaneously. This causes the "inconsistent read" at the "read" step,
that is, the value of x̂ read from the shared memory might not be any state of x in the shared
memory at any time point. For example, at time 0, the original value of x in the shared memory is a
two dimensional vector [a, b]; at time 1, worker W is running the "read" step and first reads a from
the shared memory; at time 2, worker W′ updates the first component of x in the shared memory
from a to a′ and then updates the second component from b to b′; at time 3, worker W reads the value
of the second component of x in the shared memory as b′. In this case, worker W eventually obtains
the value of x̂ as [a, b′], which is not a real state of x in the shared memory at any time point. Recall
that in A SY SG- CON the parameter value obtained by any worker is guaranteed to be some real value
of parameter x at some time point.
To precisely characterize this implementation and especially to represent x̂, we monitor the value of
parameter x in the shared memory. We define one iteration as a modification on any single component of x in the shared memory, since the update on a single component can be considered to be
atomic on GPUs and DSPs [Niu et al., 2011]. We use x_k to denote the value of parameter x in the
shared memory after k iterations and x̂_k to denote the value read from the shared memory and used
for computing the stochastic gradient at the kth iteration. x̂_k can be represented by x_k with a few earlier
updates missing:

    x̂_k = x_k − Σ_{j∈J(k)} (x_{j+1} − x_j),   (12)
where J(k) ⊂ {k − 1, k − 2, · · ·, 0} is a subset of index numbers of previous iterations. This way is
also used in analyzing asynchronous parallel coordinate descent algorithms in [Avron et al., 2014,
Liu and Wright, 2014]. The kth update that happens in the shared memory can be described as

    (x_{k+1})_{i_k} = (x_k)_{i_k} − γ (G(x̂_k; ξ_k))_{i_k},

where ξ_k denotes the index of the selected data and i_k denotes the index of the component being
updated at the kth iteration. In the original analysis for the H OGWILD ! implementation [Niu et al.,
2011], x̂_k is assumed to be some earlier state of x in the shared memory (that is, the consistent read)
for simpler analysis, although this is not true in practice.

⁴ The time consumed by locks is roughly equal to the time of 10⁴ floating-point computations. The additional
cost of using locks is the waiting time during which multiple workers access the same memory address.
One more complication is to apply the mini-batch strategy as before. Since the "update" step
needs a physical modification in the shared memory, it is usually much more time consuming than
both the "read" and "compute" steps. If many workers run the "update" step simultaneously, the
memory contention will seriously harm the performance. To reduce the risk of memory contention,
a common trick is to ask each worker to gather multiple (say M) stochastic gradients and write the
shared memory only once. That is, in each cycle, run both the "read" and "compute" steps M
times before running the "update" step. Thus, the mini-batch updates that happen in the shared memory
can be written as

    (x_{k+1})_{i_k} = (x_k)_{i_k} − γ Σ_{m=1}^M (G(x̂_{k,m}; ξ_{k,m}))_{i_k},   (13)

where i_k denotes the coordinate index updated at the kth iteration, and G(x̂_{k,m}; ξ_{k,m}) is the mth
stochastic gradient computed from the data sample indexed by ξ_{k,m} and the parameter value denoted
by x̂_{k,m} at the kth iteration. x̂_{k,m} can be expressed by:

    x̂_{k,m} = x_k − Σ_{j∈J(k,m)} (x_{j+1} − x_j),   (14)

where J(k, m) ⊂ {k − 1, k − 2, · · ·, 0} is a subset of index numbers of previous iterations. The algorithm is summarized in Algorithm 2 from the view of the shared memory.
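Representation (14) can be mimicked directly: the value a worker read equals the current iterate minus whichever of the last T single-coordinate updates it missed. A toy illustration with made-up values:

import numpy as np

rng = np.random.default_rng(2)
n, T = 6, 4
hist = [rng.standard_normal(n)]            # hist[j] holds x_j
for _ in range(10):                        # ten single-coordinate updates
    x_new = hist[-1].copy()
    x_new[rng.integers(n)] -= 0.1
    hist.append(x_new)

k = len(hist) - 1
recent = [hist[j + 1] - hist[j] for j in range(k - T, k)]   # last T updates
missed = [d for d in recent if rng.random() < 0.5]          # a random J(k, m)
x_hat = hist[k] - sum(missed, np.zeros(n))                  # Eq. (14)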
4.2 Analysis for A SY SG- INCON
To analyze the A SY SG- INCON, we need to make a few assumptions similar to Niu et al. [2011], Liu
et al. [2014b], Avron et al. [2014], Liu and Wright [2014].
Assumption 3. We assume that the following holds for Algorithm 2:
• (Independence): All groups of variables {i_k, {ξ_{k,m}}_{m=1}^M} at different iterations from k = 1 to
K are independent of each other.
• (Bounded Age): Let T be the global bound for the delay: J(k, m) ⊂ {k − 1, ..., k − T}, ∀k, ∀m,
so |J(k, m)| ≤ T.
The independence assumption might not be true in practice, but it is probably the best assumption
one can make in order to analyze the asynchronous parallel SG algorithm. This assumption was also
used in the analysis for H OGWILD ! [Niu et al., 2011] and the asynchronous randomized Kaczmarz algorithm [Liu et al., 2014b]. The bounded delay assumption basically restricts the age of all missing
components in x̂_{k,m} (∀m, ∀k). The upper bound "T" here serves a similar purpose as in Assumption 2, so we abuse this notation in this section. The value of T is proportional to the number of
workers and does not depend on the size of the mini-batch M. The bounded age assumption is used in
the analysis for asynchronous stochastic coordinate descent with "inconsistent read" [Avron et al.,
2014, Liu and Wright, 2014]. Under Assumptions 1 and 3, we have the following results:
Theorem 3. Assume that Assumptions 1 and 3 hold and that the constant steplength γ satisfies

    2 M² T L_T² (√n + T − 1) γ² / n^{3/2} + 2 M L_max γ ≤ 1.   (15)

We have the following ergodic convergence rate for Algorithm 2:

    (1/K) Σ_{t=1}^K E(‖∇f(x_t)‖²) ≤ (2n / (K M γ)) (f(x_1) − f(x*)) + (L_T² T M γ² σ²) / (2n) + L_max γ σ².   (16)
Taking a close look at Theorem 3, we can choose the steplength ? properly and obtain the following
error bound:
Corollary 4. Assume that Assumptions 1 and 3 hold. Set the steplength to be a constant ?
p
p
? := 2(f (x1 ) ? f (x? ))n/( KLT M ?).
(17)
If the total iterations $K$ is greater than
$$K \;\ge\; \frac{16\left(f(x_1) - f(x^*)\right) L_T M \left(n^{3/2} + 4T^2\right)}{\sqrt{n}\,\sigma^2}, \qquad (18)$$
then the output of Algorithm 2 satisfies the following ergodic convergence rate:
$$\frac{1}{K}\sum_{k=1}^{K} \mathbb{E}\left(\|\nabla f(x_k)\|^2\right) \;\le\; \sqrt{\frac{72\left(f(x_1) - f(x^*)\right) L_T\, n}{K M}}\;\sigma. \qquad (19)$$
This corollary indicates that the asymptotic convergence rate achieves $O(1/\sqrt{MK})$ when the total iteration number $K$ exceeds a threshold in the order of $O(T^2)$ (if $n$ is considered as a constant). We can see that this rate and the threshold are consistent with the result in Corollary 2 for AsySG-con. One may ask why there is an additional factor $\sqrt{n}$ in the numerator of (19). That is due to the way we count iterations: one iteration is defined as updating a single component of $x$. If we take this factor into account in the comparison to AsySG-con, the convergence rates for AsySG-con and AsySG-incon are essentially consistent. This comparison implies that the "inconsistent read" would not make a big difference from the "consistent read".
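For illustration, the prescribed steplength (17) and the resulting bound (19) are easy to evaluate numerically. The sketch below is ours; all problem constants (the gap $f(x_1)-f(x^*)$, $n$, $L_T$, $M$, $\sigma$) are placeholder values, not numbers from the paper:

    import math

    def steplength(f_gap, n, K, L_T, M, sigma):
        # constant steplength from (17)
        return math.sqrt(2.0 * f_gap * n) / (math.sqrt(K * L_T * M) * sigma)

    def rate_bound(f_gap, n, K, L_T, M, sigma):
        # right-hand side of (19), which scales as O(1/sqrt(MK))
        return math.sqrt(72.0 * f_gap * L_T * n / (K * M)) * sigma

    for K in (10**4, 10**5, 10**6):
        print(K, steplength(10.0, 10**4, K, 1.0, 8, 0.1),
                 rate_bound(10.0, 10**4, K, 1.0, 8, 0.1))

As expected from the corollary, quadrupling K roughly halves the bound once K is past the threshold.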
Next we compare our result with the analysis of Hogwild! by Niu et al. [2011]. In principle, our analysis and their analysis consider the same implementation of asynchronous parallel SG, but differ in the following aspects: 1) our analysis considers smooth nonconvex optimization, which includes the smooth strongly convex optimization considered in their analysis; 2) our analysis considers the "inconsistent read" model, which matches practice, while their analysis assumes the impractical "consistent read" model. Although the two results are not directly comparable, it is still interesting to see the difference. Niu et al. [2011] proved that the linear speedup is achievable if the maximal number of nonzeros in the stochastic gradients is bounded by $O(1)$ and the number of workers is bounded by $O(n^{1/4})$. Our analysis does not need this prerequisite and guarantees the linear speedup as long as the number of workers is bounded by $O(\sqrt{K})$. Although it is hard to say that our result strictly dominates Hogwild! [Niu et al., 2011], our asymptotic result applies to more scenarios.
5 Experiments
The successes of AsySG-con and AsySG-incon and their advantages over synchronous parallel algorithms have been widely witnessed in many applications, such as deep neural networks [Dean et al., 2012, Paine et al., 2013, Zhang et al., 2014, Li et al., 2014a], matrix completion [Niu et al., 2011, Petroni and Querzoni, 2014, Yun et al., 2013], SVM [Niu et al., 2011], and linear equations [Liu et al., 2014b]. We refer readers to these works for more comprehensive comparisons and empirical studies. For completeness, this section mainly provides an empirical study to validate the speedup properties. Due to the space limit, please find it in the Supplemental Materials.
6 Conclusion
This paper studied two popular asynchronous parallel implementations of SG, on a computer cluster and on a shared memory system, respectively. Two algorithms (AsySG-con and AsySG-incon) are used to describe the two implementations. An asymptotic sublinear convergence rate is proven for both algorithms on nonconvex smooth optimization. This rate is consistent with the result of SG for convex optimization. The linear speedup is proven to be achievable when the number of workers is bounded by $\sqrt{K}$, which improves the earlier analysis of AsySG-con for convex optimization in [Agarwal and Duchi, 2011]. The proposed AsySG-incon algorithm provides a more precise description of lock-free implementations on shared memory systems than Hogwild! [Niu et al., 2011]. Our result for AsySG-incon can be applied to more scenarios.
Acknowledgements
This project is supported by the NSF grant CNS-1548078, the NEC fellowship, and the startup funding at the University of Rochester. We thank Professor Daniel Gildea and Professor Sandhya Dwarkadas at the University of Rochester, Professor Stephen J. Wright at the University of Wisconsin-Madison, and the anonymous (meta-)reviewers for their constructive comments and helpful advice.
References
A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. NIPS, 2011.
H. Avron, A. Druinsky, and A. Gupta. Revisiting asynchronous linear solvers: Provable convergence rate
through randomization. IPDPS, 2014.
Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. The Journal of
Machine Learning Research, 3:1137–1155, 2003.
D. P. Bertsekas and J. N. Tsitsiklis. Parallel and distributed computation: numerical methods, volume 23.
Prentice hall Englewood Cliffs, NJ, 1989.
J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al.
Large scale distributed deep networks. NIPS, 2012.
O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches.
Journal of Machine Learning Research, 13(1):165–202, 2012.
O. Fercoq and P. Richtárik. Accelerated, parallel and proximal coordinate descent. arXiv preprint arXiv:1312.5799, 2013.
H. R. Feyzmahdavian, A. Aytekin, and M. Johansson. An asynchronous mini-batch algorithm for regularized
stochastic optimization. ArXiv e-prints, May 18 2015.
S. Ghadimi and G. Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming.
SIAM Journal on Optimization, 23(4):2341–2368, 2013.
M. Hong. A distributed, asynchronous and incremental algorithm for nonconvex optimization: An ADMM
based approach. arXiv preprint arXiv:1412.6058, 2014.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Computer Science
Department, University of Toronto, Tech. Rep, 1(4):7, 2009.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks.
NIPS, pages 1097–1105, 2012.
M. Li, L. Zhou, Z. Yang, A. Li, F. Xia, D. G. Andersen, and A. Smola. Parameter server for distributed machine
learning. Big Learning NIPS Workshop, 2013.
M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su.
Scaling distributed machine learning with the parameter server. OSDI, 2014a.
M. Li, D. G. Andersen, A. J. Smola, and K. Yu. Communication efficient distributed machine learning with the
parameter server. NIPS, 2014b.
J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties.
arXiv preprint arXiv:1403.3862, 2014.
J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent
algorithm. ICML, 2014a.
J. Liu, S. J. Wright, and S. Sridhar. An asynchronous parallel randomized kaczmarz algorithm. arXiv preprint
arXiv:1401.4780, 2014b.
H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed iterate analysis
for asynchronous stochastic optimization. arXiv preprint arXiv:1507.06970, 2015.
J. Mareček, P. Richtárik, and M. Takáč. Distributed block coordinate descent for minimizing partially separable
functions. arXiv preprint arXiv:1406.0238, 2014.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic
programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
F. Niu, B. Recht, C. Re, and S. Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient
descent. NIPS, 2011.
T. Paine, H. Jin, J. Yang, Z. Lin, and T. Huang. Gpu asynchronous stochastic gradient descent to speed up
neural network training. NIPS, 2013.
F. Petroni and L. Querzoni. Gasgd: stochastic gradient descent for distributed asynchronous matrix completion
via graph partitioning. ACM Conference on Recommender systems, 2014.
S. Sridhar, S. Wright, C. Re, J. Liu, V. Bittorf, and C. Zhang. An approximate, efficient LP solver for lp
rounding. NIPS, 2013.
R. Tappenden, M. Takáč, and P. Richtárik. On the complexity of parallel coordinate descent. arXiv preprint
arXiv:1503.03033, 2015.
K. Tran, S. Hosseini, L. Xiao, T. Finley, and M. Bilenko. Scaling up stochastic dual coordinate ascent. ICML,
2015.
H. Yun, H.-F. Yu, C.-J. Hsieh, S. Vishwanathan, and I. Dhillon. Nomad: Non-locking, stochastic multi-machine
algorithm for asynchronous and decentralized matrix completion. arXiv preprint arXiv:1312.0193, 2013.
R. Zhang and J. Kwok. Asynchronous distributed ADMM for consensus optimization. ICML, 2014.
S. Zhang, A. Choromanska, and Y. LeCun. Deep learning with elastic averaging SGD. CoRR, abs/1412.6651,
2014.
5,249 | 5,752 | Distributed Submodular Cover:
Succinctly Summarizing Massive Data
Baharan Mirzasoleiman (ETH Zurich), Amin Karbasi (Yale University), Ashwinkumar Badanidiyuru (Google), Andreas Krause (ETH Zurich)
Abstract
How can one find a subset, ideally as small as possible, that well represents a
massive dataset? I.e., its corresponding utility, measured according to a suitable
utility function, should be comparable to that of the whole dataset. In this paper,
we formalize this challenge as a submodular cover problem. Here, the utility is
assumed to exhibit submodularity, a natural diminishing returns condition prevalent in many data summarization applications. The classical greedy algorithm is
known to provide solutions with logarithmic approximation guarantees compared
to the optimum solution. However, this sequential, centralized approach is impractical for truly large-scale problems. In this work, we develop the first distributed algorithm, DisCover, for submodular set cover that is easily implementable using MapReduce-style computations. We theoretically analyze our approach, and present approximation guarantees for the solutions returned by DisCover.
We also study a natural trade-off between the communication cost and the number of rounds required to obtain such a solution. In our extensive experiments,
we demonstrate the effectiveness of our approach on several applications, including active set selection, exemplar based clustering, and vertex cover on tens of
millions of data points using Spark.
1 Introduction
A central challenge in machine learning is to extract useful information from massive data. Concretely, we are often interested in selecting a small subset of data points such that they maximize a
particular quality criterion. For example, in nonparametric learning, we often seek to select a small
subset of points along with associated basis functions that well approximate the hypothesis space
[1]. More abstractly, in data summarization problems, we often seek a small subset of images [2],
news articles [3], scientific papers [4], etc., that are representative w.r.t. an entire corpus. In many
such applications, the utility function that measures the quality of the selected data points satisfies
submodularity, i.e., adding an element from the dataset helps more in the context of few selected
elements than if we have already selected many elements (c.f., [5]).
Our focus in this paper is to find a succinct summary of the data, i.e., a subset, ideally as small as
possible, which achieves a desired (large) fraction of the utility provided by the full dataset. Hereby,
utility is measured according to an appropriate submodular function. We formalize this problem as a
submodular cover problem, and seek efficient algorithms for solving it in face of massive data. The
celebrated result of Wolsey [6] shows that a greedy approach that selects elements sequentially in
order to maximize the gain over the items selected so far, yields a logarithmic factor approximation.
It is also known that improving upon this approximation ratio is hard under natural complexity
theoretic assumptions [7]. Even though such a greedy algorithm produces near-optimal solutions,
it is impractical for massive datasets, as sequential procedures that require centralized access to the
full data are highly constrained in terms of speed and memory.
In this paper, we develop the first distributed algorithm, DisCover, for solving the submodular
cover problem. It can be easily implemented in MapReduce-style parallel computation models [8]
and provides a solution that is competitive with the (impractical) centralized solution. We also study
a natural trade-off between the communication cost (for each round of MapReduce) and the number
of rounds. The trade-off lets us choose between a small communication cost between machines
while having more rounds to perform or a large communication cost with the benefit of running
fewer rounds. Our experimental results demonstrate the effectiveness of our approach on a variety
of submodular cover instances: vertex cover, exemplar-based clustering, and active set selection in
non-parametric learning. We also implemented DisCover on Spark [9] and approximately solved
vertex cover on a social graph containing more than 65 million nodes and 1.8 billion edges.
2 Background and Related Work
Recently, submodular optimization has attracted a lot of interest in machine learning and data mining where it has been applied to a variety of problems including viral marketing [10], information
gathering [11], and active learning [12], to name a few. Like convexity in continuous optimization,
submodularity allows many discrete problems to become efficiently approximable (e.g., constrained
submodular maximization).
In the submodular cover problem, the main objective is to find the smallest subset of data points
such that its utility reaches a desirable fraction of the entire dataset. As stated earlier, the sequential,
centralized greedy method fails to appropriately scale. Once faced with massive data, MapReduce
[8] (and modern implementations like Spark [9]) offer arguably one of the most successful programming models for reliable parallel computing. Distributed solutions for some special cases of
the submodular cover problem have been recently proposed. In particular, for the set cover problem (i.e., find the smallest subcollection of sets that covers all the data points), Berger et al. [13]
provided the first distributed solution with an approximation guarantee similar to that of the greedy
procedure. Blelloch et al. [14] improved their result in terms of the number of rounds required
by a MapReduce-based implementation. Very recently, Stergiou et al. [15] introduced an efficient
distributed algorithm for set cover instances of massive size. Another variant of the set cover problem that has received some attention is maximum k-cover (i.e., cover as many elements as possible
from the ground set by choosing at most k subsets), for which Chierichetti et al. [16] introduced a distributed solution with a $(1 - 1/e - \epsilon)$ approximation guarantee.
Going beyond the special case of coverage functions, distributed constrained submodular maximization has also been the subject of recent research in the machine learning and data mining communities. In particular, Mirzasoleiman et al. [17] provided a simple two-round distributed algorithm
called GreeDi for submodular maximization under cardinality constraints. Contemporarily, Kumar et al. [18] developed a multi-round algorithm for submodular maximization subject to cardinality and
matroid constraints. There have also been very recent efforts to either make use of randomization
methods or treat data in a streaming fashion [19, 20]. To the best of our knowledge, we are the first
to address the general distributed submodular cover problem and propose an algorithm, DisCover,
for approximately solving it.
3 The Distributed Submodular Cover Problem
The goal of data summarization is to select a small subset $A$ out of a large dataset indexed by $V$ (called the ground set) such that $A$ achieves a certain quality. To this end, we first need to define a utility function $f : 2^V \to \mathbb{R}_+$ that measures the quality of any subset $A \subseteq V$, i.e., $f(A)$ quantifies how well $A$ represents $V$ according to some objective. In many data summarization applications, the utility function $f$ satisfies submodularity, stating that the gain in utility of an element $e$ in the context of a summary $A$ decreases as $A$ grows. Formally, $f$ is submodular if
$$f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B),$$
for any $A \subseteq B \subseteq V$ and $e \in V \setminus B$. Note that the meaning of utility is application specific, and submodular functions provide a wide range of possibilities to define appropriate utility functions. In Section 3.2 we discuss concrete instances of functions $f$ that we consider in our experiments. Let us denote the marginal utility of an element $e$ w.r.t. a subset $A$ as $\Delta(e|A) = f(A \cup \{e\}) - f(A)$. The utility function $f$ is called monotone if $\Delta(e|A) \ge 0$ for any $e \in V \setminus A$ and $A \subseteq V$. Throughout this paper we assume that the utility function is monotone submodular.
The focus of this paper is on the submodular cover problem, i.e., finding the smallest set $A^c$ such that it achieves a utility $Q = (1-\epsilon)f(V)$ for some $0 \le \epsilon \le 1$. More precisely,
$$A^c = \arg\min_{A \subseteq V} |A| \quad \text{such that} \quad f(A) \ge Q. \qquad (1)$$
We call $A^c$ the optimum centralized solution, with size $k = |A^c|$. Unfortunately, finding $A^c$ is NP-hard for many classes of submodular functions [7]. However, a simple greedy algorithm is known to be very effective. This greedy algorithm starts with the empty set $A_0$, and at each iteration $i$, it chooses an element $e \in V$ that maximizes $\Delta(e|A_{i-1})$, i.e., $A_i = A_{i-1} \cup \{\arg\max_{e \in V} \Delta(e|A_{i-1})\}$. Let us denote this (centralized) greedy solution by $A^g$. When $f$ is integral (i.e., $f : 2^V \to \mathbb{N}$) it is known that the size of the solution returned by the greedy algorithm, $|A^g|$, is at most $H(\max_e f(\{e\}))\,|A^c|$, where $H(z)$ is the $z$-th harmonic number and is bounded by $H(z) \le 1 + \ln z$ [6]. Thus, we have $|A^g| \le (1 + \ln(\max_e f(\{e\})))\,|A^c|$, and obtaining a better solution is hard under natural complexity theoretic assumptions [7]. As is standard practice, for our theoretical analysis to hold, we assume that $f$ is an integral, monotone submodular function.
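For concreteness, a minimal Python rendering of this sequential greedy baseline (our own sketch, not from the paper; `f` is any monotone submodular set function passed in as a callable, `V` is a set of hashable elements, and we assume Q ≤ f(V)):

    def greedy_cover(V, f, Q):
        # Wolsey's greedy: repeatedly add the element with the largest marginal
        # gain Delta(e|A) = f(A + {e}) - f(A) until the target utility Q is met.
        A, fA = set(), f(set())
        while fA < Q:
            e_best, gain = None, 0.0
            for e in V:
                if e not in A:
                    g = f(A | {e}) - fA
                    if g > gain:
                        e_best, gain = e, g
            if e_best is None:        # no positive gain left: Q > f(V)
                break
            A.add(e_best)
            fA += gain
        return A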
Scaling up: Distributed computation in MapReduce. In many data summarization applications
where the ground set V is large, the sequential greedy algorithm is impractical: either the data cannot
be stored on a single computer or the centralized solution is too expensive in terms of computation
time. Instead, we seek an algorithm for solving the submodular cover problem in a distributed
manner, preferably amenable to MapReduce implementations. In this model, at a high level, the
data is first distributed to m machines in a cluster, then each part is processed by the corresponding
machine (in parallel, without communication), and finally the outputs are either merged or used
for the next round of MapReduce computation. While in principle multiple rounds of computation
can be realized, in practice, expensive synchronization is required after each round. Hence, we are
interested in distributed algorithms that require few rounds of computation.
3.1 Naive Approaches Towards Distributed Submodular Cover
One way of solving the distributed submodular cover problem in multiple rounds is as follows. In
each round, all machines, in parallel, compute the marginal gains for the data points assigned
to them. Then, they communicate their best candidate to a central processor, who then identifies
the globally best element, and sends it back to all the m machines. This element is then taken
into account when selecting the next element with highest marginal gain, and so on. Unfortunately,
this approach requires synchronization after each round, and we have exactly $|A^g|$ many rounds. In many applications, $k$ and hence $|A^g|$ is quite large, which renders this approach impractical for
MapReduce style computations.
An alternative approach would be for each machine $i$ to greedily select enough elements from its partition $V_i$ until it reaches at least $Q/m$ utility. Then, all machines merge their solutions. This
approach is much more communication efficient, and can be easily implemented, e.g., using a single
MapReduce round. Unfortunately, many machines may select redundant elements, and the merged
solution may suffer from diminishing returns and never reach Q. Instead of aiming for Q/m, one
could aim for a larger fraction, but it is not clear how to select this target value.
In Section 4, we introduce our solution, DisCover, which requires few rounds of communication,
while at the same time yielding a solution competitive with the centralized one. Before that, let us
briefly discuss the specific utility functions that we use in our experiments (described in Section 5).
3.2 Example Applications of the Distributed Submodular Cover Problem
In this part, we briefly discuss three concrete utility functions that have been extensively used in previous work for finding a diverse subset of data points and ultimately leading to good data summaries
[1, 17, 21, 22, 23].
Truncated Vertex Cover: Let $G = (V, E)$ be a graph with vertex set $V$ and edge set $E$. Let $\mathcal{N}(C)$ denote the neighbours of $C \subseteq V$ in the graph $G$. One way to measure the influence of a set $C$ is to look at its cover $f(C) = |\mathcal{N}(C) \cup C|$. It is easy to see that $f$ is a monotone submodular function. Truncated vertex cover is the problem of choosing a small subset of nodes $C$ such that it covers a desired fraction of $|V|$ [21].
Active Set Selection in Kernel Machines: In many applications, such as feature selection [22], determinantal point processes [24], and GP regression [23], where the data is described in terms of a kernel matrix $K$, we want to select a small subset of elements while maintaining a certain diversity. Very often, the utility function boils down to $f(S) = \log\det(I + \sigma K_{S,S})$, where $\sigma > 0$ and $K_{S,S}$ is the principal sub-matrix of $K$ indexed by $S$. It is known that $f$ is monotone submodular [5].
Exemplar-Based Clustering: Another natural application is to select a small number of exemplars from the data representing the clusters present in it. A natural utility function (see [1] and [17]) is $f(S) = L(\{e_0\}) - L(S \cup \{e_0\})$, where $L(S) = \frac{1}{|V|}\sum_{e \in V}\min_{v \in S} d(e, v)$ is the $k$-medoid loss function and $e_0$ is an appropriately chosen reference element. The utility function $f$ is monotone submodular [1]. The goal of distributed submodular cover here is to select the smallest set of exemplars that satisfies a specified bound on the loss.
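The first two utilities above are straightforward to spell out in code. The sketch below is ours (numpy is assumed available, and the graph is represented as a dict mapping each node to its neighbour set):

    import numpy as np

    def vertex_cover_utility(C, neighbours):
        # f(C) = |N(C) ∪ C| for a dict-of-sets adjacency structure
        covered = set(C)
        for v in C:
            covered |= neighbours[v]
        return len(covered)

    def active_set_utility(S, K, sigma=1.0):
        # f(S) = log det(I + sigma * K_{S,S}) for a kernel matrix K (numpy array)
        idx = sorted(S)
        if not idx:
            return 0.0
        sign, logdet = np.linalg.slogdet(np.eye(len(idx)) + sigma * K[np.ix_(idx, idx)])
        return logdet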
4 The DisCover Algorithm for Distributed Submodular Cover
On a high level, our main approach is to reduce submodular cover to a sequence of cardinality-constrained submodular maximization problems¹, a problem for which good distributed algorithms (e.g., GreeDi [17, 25, 26]) are known. Concretely, our reduction is based on a combination of the following three ideas.
To get an intuition, we will first assume that we have access to an optimum algorithm which can solve cardinality-constrained submodular maximization exactly, i.e., solve, for some specified $\ell$,
$$A^{oc}[\ell] = \arg\max_{|S| \le \ell} f(S). \qquad (2)$$
We will then consider how to solve the problem when, instead of $A^{oc}[\ell]$, we only have access to an approximation algorithm for cardinality-constrained maximization. Lastly, we will illustrate how we can parametrize our algorithm to trade off the number of rounds of the distributed algorithm versus the communication cost per round.
4.1 Estimating Size of the Optimal Solution
Momentarily, assume that we have access to an optimum algorithm OptCard(V, ℓ) for computing $A^{oc}[\ell]$ on the ground set $V$. Then one simple way to solve the submodular cover problem would be to incrementally check for each $\ell = 1, 2, 3, \ldots$ whether $f(A^{oc}[\ell]) \ge Q$. But this is very inefficient, since it will take $k = |A^c|$ rounds of running the distributed algorithm for computing $A^{oc}[\ell]$. A simple fix that we will follow is to instead start with $\ell = 1$ and double it until we find an $\ell$ such that $f(A^{oc}[\ell]) \ge Q$. This way we are guaranteed to find a solution of size at most $2k$ in at most $\lceil \log_2(k) \rceil$ rounds of running OptCard. The pseudocode is given in Algorithm 1. However, in practice, we cannot run Algorithm 1. In particular, there is no efficient way to identify the optimum subset $A^{oc}[\ell]$ in set $V$, unless P=NP. Hence, we need to rely on approximation algorithms.
4.2 Handling Approximation Algorithms for Submodular Maximization
Assume that there is a distributed algorithm DisCard(V, m, ℓ) for cardinality-constrained submodular maximization that runs on the dataset $V$ with $m$ machines and provides a set $A^{gd}[m,\ell]$ with an $\alpha$-approximation guarantee to the optimal solution $A^{oc}[\ell]$, i.e., $f(A^{gd}[m,\ell]) \ge \alpha f(A^{oc}[\ell])$. Let us assume that we could run DisCard with the unknown value $\ell = k$. Then the solution we get satisfies $f(A^{gd}[m,k]) \ge \alpha Q$. Thus, we are not guaranteed to get $Q$ anymore. Now, what we can do (still under the assumption that we know $k$) is to repeatedly run DisCard in order to augment our solution set until we get the desired value $Q$. Note that for each invocation of DisCard, to find a set of size $\ell = k$, we have to take into account the solutions $A$ that we have accumulated so far. So,
¹ Note that while the reduction from submodular coverage to submodular maximization has been used (e.g., [27]), the straightforward application to the distributed setting incurs a large communication cost.
Algorithm 1 Approximate Submodular Cover
Input: Set V, constraint Q.
Output: Set A.
1: ℓ = 1.
2: A^{oc}[ℓ] = OptCard(V, ℓ).
3: while f(A^{oc}[ℓ]) < Q do
4:    ℓ = ℓ × 2.
5:    A^{oc}[ℓ] = OptCard(V, ℓ).
6: A = A^{oc}[ℓ].
7: Return A.

Algorithm 2 Approximate OptCard
Input: Set V, # of partitions m, constraint Q, ℓ.
Output: Set A^{dc}[m].
1: r = 0, A^{gd}[m, ℓ] = ∅.
2: while f(A^{gd}[m, ℓ]) < Q do
3:    A = A^{gd}[m, ℓ].
4:    r = r + 1.
5:    A^{gd}[m, ℓ] = DisCard(V, m, ℓ, A).
6:    if f(A^{gd}[m, ℓ]) − f(A) ≥ α(Q − f(A)) then
7:       A^{dc}[m] = {A^{gd}[m, ℓ] ∪ A}.
8:    else
9:       break
10: Return A^{dc}[m].
by overloading the notation, DisCard(V, m, ℓ, A) returns a set of size $\ell$ given that $A$ has already been selected in previous rounds (i.e., DisCard computes the marginal gains w.r.t. $A$). Note that at every invocation, thanks to submodularity, DisCard increases the value of the solution by at least $\alpha(Q - f(A))$. Therefore, by running DisCard at most $\lceil \log(Q)/\alpha \rceil$ times we get $Q$.
Unfortunately, we do not know the optimum value $k$. So, we can feed an estimate $\ell$ of the size of the optimum solution $k$ to DisCard. Now, again thanks to submodularity, DisCard can check whether this $\ell$ is good enough or not: if the improvement in the value of the solution is not at least $\alpha(Q - f(A))$ during the augmentation process, we can infer that $\ell$ is too small an estimate of $k$ and we cannot get the desired value $Q$ by using $\ell$, so we apply the doubling strategy again.
Theorem 4.1. Let DisCard be a distributed algorithm for cardinality-constrained submodular maximization with an $\alpha$ approximation guarantee. Then, Algorithm 1 (where OptCard is replaced with Approximate OptCard, Algorithm 2) runs in at most $\lceil \log(k) + \log(Q)/\alpha + 1 \rceil$ rounds and produces a solution of size at most $\lceil 2k + 2\log(Q)k/\alpha \rceil$.
4.3 Trading Off Communication Cost and Number of Rounds
While Algorithm 1 successfully finds a distributed solution $A^{dc}[m]$ with $f(A^{dc}[m]) \ge Q$ (cf. Section 4.1), the intermediate problem instances (i.e., invocations of DisCard) are required to select sets of size up to twice the size of the optimal solution $k$, and these solutions are communicated between all machines. Oftentimes, $k$ is quite large and we do not want to have such a large communication cost per round. Now, instead of finding an $\ell \approx k$, what we can do is to find a smaller $\ell \approx \lambda k$, for $0 < \lambda \le 1$, and augment these smaller sets in each round of Algorithm 2. This way, the communication cost reduces to a $\lambda$ fraction (per round), while the improvement in the value of the solution is at least $\lambda\alpha(Q - f(A^{gd}[m,\ell]))$. Consequently, we can trade off the communication cost per round against the total number of rounds. As a positive side effect, for $\lambda < 1$, since each invocation of DisCard returns smaller sets, the final solution set size can potentially get closer to the optimum solution size $k$. For instance, in the extreme case of $\lambda = 1/k$ we recover the solution of the sequential greedy algorithm (up to $O(1/\alpha)$).
4.4 DisCover
The DisCover algorithm is shown in Algorithm 3. The algorithm proceeds in rounds, with communication between machines taking place only between successive rounds. In particular, DisCover takes the ground set $V$, the number of partitions $m$, and the trade-off parameter $\lambda$. It starts with $\ell = 1$ and $A^{dc}[m] = \emptyset$. It then augments the set $A^{dc}[m]$ with a set $A^{gd}[m,\ell]$ of at most $\ell$ new elements, using an arbitrary distributed algorithm for submodular maximization under a cardinality constraint, DisCard. If the gain from adding $A^{gd}[m,\ell]$ to $A^{dc}[m]$ is at least $\lambda\alpha(Q - f(A^{dc}[m]))$, then we continue augmenting with another set of at most $\ell$ elements. Otherwise, we double $\ell$ and restart the process with $2\ell$. We repeat this process until we get $Q$.
Theorem 4.2. Let DisCard be a distributed algorithm for cardinality-constrained submodular maximization with an $\alpha$ approximation guarantee. Then, DisCover runs in at most $\lceil \log(\lambda k) + \log(Q)/(\lambda\alpha) + 1 \rceil$ rounds and produces a solution of size $\lceil 2\lambda k + 2\log(Q)k/\alpha \rceil$.
Algorithm 3 DisCover
Input: Set V, # of partitions m, constraint Q, trade-off parameter λ.
Output: Set A^{dc}[m].
1: A^{dc}[m] = ∅, r = 0, ℓ = 1.
2: while f(A^{dc}[m]) < Q do
3:    r = r + 1.
4:    A^{gd}[m, ℓ] = DisCard(V, m, ℓ, A^{dc}[m]).
5:    if f(A^{dc}[m] ∪ A^{gd}[m, ℓ]) − f(A^{dc}[m]) ≥ λα(Q − f(A^{dc}[m])) then
6:       A^{dc}[m] = {A^{dc}[m] ∪ A^{gd}[m, ℓ]}.
7:    else
8:       ℓ = ℓ × 2.
9: Return A^{dc}[m].
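A compact serial mock-up of Algorithm 3 in Python (our own sketch, not the distributed implementation; `dis_card` is a hypothetical stand-in for any α-approximate DisCard subroutine, simulated here by greedy selection of at most ℓ new elements, `V` is a set, and we assume Q ≤ f(V)):

    def dis_card(V, ell, f, A):
        # stand-in for DisCard(V, m, ell, A): greedily pick up to ell elements
        # with the largest marginal gain with respect to the current solution A
        S = set()
        for _ in range(ell):
            base = f(A | S)
            e_best, gain = None, 0.0
            for e in V - A - S:
                g = f(A | S | {e}) - base
                if g > gain:
                    e_best, gain = e, g
            if e_best is None:
                break
            S.add(e_best)
        return S

    def dis_cover(V, f, Q, alpha, lam=1.0):
        A, ell = set(), 1
        while f(A) < Q:
            S = dis_card(V, min(ell, len(V)), f, A)
            if f(A | S) - f(A) >= lam * alpha * (Q - f(A)):
                A |= S              # keep augmenting with sets of size <= ell
            else:
                ell *= 2            # doubling search for a large enough ell
        return A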
GreeDi as Subroutine: So far, we have assumed that a distributed algorithm DisCard that runs on $m$ machines is given to us as a black box, which can be used to find sets of cardinality $\ell$ and obtain an $\alpha$-factor of the optimal solution. More concretely, we can use GreeDi, a recently proposed distributed algorithm for maximizing submodular functions under a cardinality constraint [17] (outlined in Algorithm 4). It first distributes the ground set $V$ to $m$ machines. Then each machine $i$ separately runs the standard greedy algorithm to produce a set $A^{gc}_i[\ell]$ of size $\ell$. Finally, the solutions are merged, and another round of greedy selection is performed (over the merged results) in order to return the solution $A^{gd}[m,\ell]$ of size $\ell$. It was proven that GreeDi provides a $(1 - e^{-1})^2/\min(m,\ell)$-approximation to the optimal solution [17]. Here, we prove a (tight) improved bound on the performance of GreeDi. More formally, we have the following theorem.
Theorem 4.3. Let $f$ be a monotone submodular function and let $\ell > 0$. Then, GreeDi produces a solution $A^{gd}[m,\ell]$ where $f(A^{gd}[m,\ell]) \ge \frac{1}{36\sqrt{\min(m,\ell)}}\, f(A^c[\ell])$.
Algorithm 4 Greedy Distributed Submodular Maximization (GreeDi)
Input: Set V, # of partitions m, constraint ℓ.
Output: Set A^{gd}[m, ℓ].
1: Partition V into m sets V₁, V₂, ..., V_m.
2: Run the standard greedy algorithm on each set V_i. Find a solution A^{gc}_i[ℓ].
3: Merge the resulting sets: B = ∪_{i=1}^{m} A^{gc}_i[ℓ].
4: Run the standard greedy algorithm on B until ℓ elements are selected. Return A^{gd}[m, ℓ].
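Similarly, a serial mock-up of Algorithm 4 (our own sketch; the m machines are simulated by a round-robin partition of V, which is an assumption of this illustration, not a requirement of the algorithm):

    def greedy_max(V, f, ell):
        # standard greedy for max f(S) subject to |S| <= ell
        S = set()
        for _ in range(ell):
            base = f(S)
            e_best, gain = None, float("-inf")
            for e in V - S:
                g = f(S | {e}) - base
                if g > gain:
                    e_best, gain = e, g
            if e_best is None:
                break
            S.add(e_best)
        return S

    def greedi(V, f, m, ell):
        items = list(V)
        parts = [set(items[i::m]) for i in range(m)]              # step 1: partition
        B = set().union(*(greedy_max(P, f, ell) for P in parts))  # steps 2-3
        return greedy_max(B, f, ell)                              # step 4: final pass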
We illustrate the resulting algorithm, DisCover using GreeDi as a subroutine, in Figure 1. By combining Theorems 4.2 and 4.3, we have the following.
Corollary 4.4. By using GreeDi, we get that DisCover produces a solution of size $\lceil 2\lambda k + 72\log(Q)\,k\sqrt{\min(m,\lambda k)} \rceil$ and runs in at most $\lceil \log(\lambda k) + 36\sqrt{\min(m,\lambda k)}\,\log(Q)/\lambda + 1 \rceil$ rounds.
Note that for a constant number of machines $m$, $\lambda = 1$, and a large solution size $\lambda k \ge m$, the above result simply implies that in at most $O(\log(kQ))$ rounds, DisCover produces a solution of size $O(k\log Q)$. In contrast, the greedy solution with $O(k\log Q)$ rounds (which is much larger than $O(\log(kQ))$) produces a solution of the same quality.
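To get a feeling for the magnitudes in Corollary 4.4, the bounds are easy to tabulate. The sketch below is ours, and the input values are illustrative placeholders, not numbers from the paper:

    import math

    def discover_bounds(k, Q, m, lam):
        # size and round bounds from Corollary 4.4 (GreeDi as the DisCard subroutine)
        root = math.sqrt(min(m, lam * k))
        size = math.ceil(2 * lam * k + 72 * math.log(Q) * k * root)
        rounds = math.ceil(math.log(lam * k) + 36 * root * math.log(Q) / lam + 1)
        return size, rounds

    for lam in (1.0, 0.4, 0.1):
        print(lam, discover_bounds(k=100, Q=10**4, m=16, lam=lam))

Smaller λ shrinks the per-round communication (and the size bound's first term) while inflating the round bound, mirroring the trade-off of Section 4.3.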
Very recently, a $(1 - e^{-1})/2$-approximation guarantee was proven for the randomized version of GreeDi [26, 25]. This suggests that, if it is possible to reshuffle (i.e., randomly re-distribute $V$ among the $m$ machines) the ground set each time we invoke GreeDi, we can benefit from these stronger approximation guarantees (which are independent of $m$ and $k$). Note that Theorem 4.2 does not directly apply here, since it requires a deterministic subroutine for constrained submodular maximization. We defer the analysis to a longer version of this paper.
As a final technical remark, for our theoretical results to hold we have assumed that the utility function $f$ is integral. In some applications (like active set selection) this assumption may not hold. In these cases, either we can appropriately discretize and rescale the function, or instead of achieving
the utility $Q$, try to reach $(1-\epsilon)Q$, for some $0 < \epsilon < 1$. In the latter case, we can simply replace $Q$ with $Q/\epsilon$ in Theorem 4.2.
[Figure 1: Illustration of our multi-round algorithm DisCover, assuming it terminates in two rounds (without the doubling search for ℓ). The panels show the data distributed over cluster nodes and two GreeDi rounds (r = 1, r = 2) producing the cover.]
5 Experiments
In our experiments we wish to address the following questions: 1) How well does DisCover perform compared to the centralized greedy solution; 2) How is the trade-off between the solution size and the number of rounds affected by the parameter $\lambda$; and 3) How well does DisCover scale to massive data sets. To this end, we run DisCover in three scenarios: exemplar based clustering, active set selection in GPs, and the vertex cover problem. For vertex cover, we report experiments on a large social graph with more than 65.6 million vertices and 1.8 billion edges. Since the constant in Theorem 4.3 is not optimized, we used $\alpha = 1/\sqrt{\min(m,k)}$ in all the experiments.
Exemplar based Clustering. Our exemplar based clustering experiments involve DisCover applied to the clustering utility $f(S)$ described in Section 3.2 with $d(x, x') = \|x - x'\|^2$. We perform our experiments on a set of 10,000 Tiny Images [28]. Each 32 by 32 RGB pixel image is represented as a 3,072-dimensional vector. We subtract from each vector the mean value, then normalize it to have unit norm. We use the origin as the auxiliary exemplar for this experiment. Fig. 2a compares the performance of our approach to the centralized benchmark with the number of machines set to $m = 10$ and varying coverage percentage $Q = (1-\epsilon)f(V)$. It can be seen that DisCover provides a solution which is very close to the centralized solution, with a number of rounds much smaller than the solution size. Varying $\lambda$ results in a trade-off between solution size and number of rounds.
Active Set Selection. Our active set selection experiments involve DisCover applied to the log-determinant function $f(S)$ described in Section 3.2, using an exponential kernel $K(e_i, e_j) = \exp(-|e_i - e_j|^2/0.75)$. We use the Parkinsons Telemonitoring dataset [29], comprised of 5,875 biomedical voice measurements with 22 attributes from people in early-stage Parkinson's disease. Fig. 2b compares the performance of our approach to the benchmark with the number of machines set to $m = 6$ and varying coverage percentage $Q = (1-\epsilon)f(V)$. Again, DisCover performs close to the centralized greedy solution, even with very few rounds. Again we see a trade-off by varying $\lambda$.
Large Scale Vertex Cover with Spark. As our large scale experiment, we applied DisCover to the Friendster network, which consists of 65,608,366 nodes and 1,806,067,135 edges [30]. The average out-degree is 55.056, while the maximum out-degree is 5,214. The disk footprint of the graph is 30.7GB, stored in 246 part files on HDFS. Our experimental infrastructure was a cluster of 8 quad-core machines with 32GB of memory each, running Spark. We set the number of reducers to $m = 64$. Each machine carried out a set of map/reduce tasks in sequence, where each map/reduce stage corresponds to running GreeDi with a specific value of $\ell$ on the whole data set. We first distributed the data uniformly at random to the machines, where each machine received ~1,025,130 vertices (~12.5GB RAM). Then we start with $\ell = 1$ and perform a map/reduce task to extract one element. We then communicate back the results to each machine and, based on the improvement in the value of the solution, we perform another round of map/reduce calculation with either the same value of $\ell$ or $2\ell$. We continue performing map/reduce tasks until we get the desired value $Q$.
We examine the performance of DisCover by obtaining covers for 50%, 30%, 20% and 10% of the whole graph. The total running time of the algorithm for the above coverage percentages with $\lambda = 1$ was about 5.5, 1.5, 0.6 and 0.1 hours, respectively. For comparison, we ran the centralized
[Figure 2: Performance of DisCover compared to the centralized solution. Panels (a) Images 10K and (b) Parkinsons Telemonitoring show the solution set size vs. the number of rounds for various λ, for the set of 10,000 Tiny Images and the Parkinsons Telemonitoring data; panel (c) shows the same quantities for the Friendster network with 65,608,366 vertices.]
greedy on a computer with 24 cores and 256GB memory. Note that loading the entire data set into memory requires 200GB of RAM, and running the centralized greedy algorithm for a 50% cover requires at least another 15GB of RAM. This highlights the challenges in applying the centralized greedy algorithm to larger scale data sets. Fig. 2c shows the solution set size versus the number of rounds for various $\lambda$ and different coverage constraints. We find that by decreasing $\lambda$, DisCover's solutions quickly converge (in size) to those obtained by the centralized solution.
6 Conclusion
We have developed the first efficient distributed algorithm, DisCover, for the submodular cover problem. We have theoretically analyzed its performance and showed that it can perform arbitrarily close to the centralized (albeit impractical in the context of large data sets) greedy solution. We also demonstrated the effectiveness of our approach through extensive experiments, including vertex cover on a graph with 65.6 million vertices using Spark. We believe our results provide an important step towards solving submodular optimization problems in very large scale, real applications.
Acknowledgments. This research was supported by ERC StG 307036, a Microsoft Faculty
Fellowship and an ETH Fellowship.
References
[1] Ryan Gomes and Andreas Krause. Budgeted nonparametric learning from data streams. In ICML, 2010.
[2] Sebastian Tschiatschek, Rishabh Iyer, Haochen Wei, and Jeff Bilmes. Learning Mixtures of Submodular
Functions for Image Collection Summarization. In NIPS, 2014.
[3] Khalid El-Arini, Gaurav Veda, Dafna Shahaf, and Carlos Guestrin. Turning down the noise in the blogosphere. In KDD, 2009.
[4] Khalid El-Arini and Carlos Guestrin. Beyond keyword search: Discovering relevant scientific literature.
In KDD, 2011.
[5] Andreas Krause and Daniel Golovin. Submodular function maximization. In Tractability: Practical
Approaches to Hard Problems. Cambridge University Press, 2013.
[6] Laurence A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem.
Combinatorica, 1982.
[7] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 1998.
[8] J. Dean and S. Ghemawat. Mapreduce: Simplified data processing on large clusters. In OSDI, 2004.
[9] Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, and Ion Stoica. Spark: cluster computing with working sets, pages 181–213. Springer, 2010.
[10] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD, 2003.
[11] Andreas Krause and Carlos Guestrin. Intelligent information gathering and submodular function optimization. Tutorial at the International Joint Conference in Artificial Intelligence, 2009.
[12] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning
and stochastic optimization. Journal of Artificial Intelligence Research, 2011.
[13] Bonnie Berger, John Rompel, and Peter W Shor. Efficient nc algorithms for set cover with applications
to learning and geometry. Journal of Computer and System Sciences, 1994.
[14] Guy E. Blelloch, Richard Peng, and Kanat Tangwongsan. Linear-work greedy parallel approximate set
cover and variants. In SPAA, 2011.
[15] Stergios Stergiou and Kostas Tsioutsiouliklis. Set cover at web scale. In SIGKDD. ACM, 2015.
[16] Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In WWW, 2010.
[17] Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization: Identifying representative elements in massive data. In NIPS, 2013.
[18] Ravi Kumar, Benjamin Moseley, Sergei Vassilvitskii, and Andrea Vattani. Fast greedy algorithms in
mapreduce and streaming. In SPAA, 2013.
[19] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, and Andreas Krause.
Lazier than lazy greedy. In AAAI, 2015.
[20] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming
submodular maximization: Massive data summarization on the fly. In SIGKDD. ACM, 2014.
[21] Silvio Lattanzi, Benjamin Moseley, Siddharth Suri, and Sergei Vassilvitskii. Filtering: a method for
solving graph problems in mapreduce. In SPAA, 2011.
[22] Roberto Battiti. Using mutual information for selecting features in supervised neural net learning. Neural
Networks, IEEE Transactions on, 5(4):537–550, 1994.
[23] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning
(Adaptive Computation and Machine Learning). 2006.
[24] Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Mach. Learn, 2012.
[25] Rafael Barbosa, Alina Ene, Huy L. Nguyen, and Justin Ward. The power of randomization: Distributed
submodular maximization on massive datasets. In arXiv, 2015.
[26] Vahab Mirrokni and Morteza Zadimoghaddam. Randomized composable core-sets for distributed submodular maximization. In STOC, 2015.
[27] Rishabh K Iyer and Jeff A Bilmes. Submodular optimization with submodular cover and submodular
knapsack constraints. In NIPS, pages 2436–2444, 2013.
[28] Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for
nonparametric object and scene recognition. TPAMI, 2008.
[29] Athanasios Tsanas, Max Little, Patrick McSharry, and Lorraine Ramig. Enhanced classical dysphonia
measures and sparse regression for telemonitoring of Parkinson's disease progression. In ICASSP, 2010.
[30] Jaewon Yang and Jure Leskovec. Defining and evaluating network communities based on ground-truth.
Knowledge and Information Systems, 42(1):181–213, 2015.
5,250 | 5,753 | Probabilistic Line Searches
for Stochastic Optimization
Maren Mahsereci and Philipp Hennig
Max Planck Institute for Intelligent Systems
Spemannstraße 38, 72076 Tübingen, Germany
[mmahsereci|phennig]@tue.mpg.de
Abstract
In deterministic optimization, line searches are a standard tool ensuring stability
and efficiency. Where only stochastic gradients are available, no direct equivalent
has so far been formulated, because uncertain gradients do not allow for a strict
sequence of decisions collapsing the search space. We construct a probabilistic line
search by combining the structure of existing deterministic methods with notions
from Bayesian optimization. Our method retains a Gaussian process surrogate of
the univariate optimization objective, and uses a probabilistic belief over the Wolfe
conditions to monitor the descent. The algorithm has very low computational cost,
and no user-controlled parameters. Experiments show that it effectively removes
the need to define a learning rate for stochastic gradient descent.
1 Introduction
Stochastic gradient descent (SGD) [1] is currently the standard in machine learning for the optimization
of highly multivariate functions if their gradient is corrupted by noise. This includes the online or
batch training of neural networks, logistic regression [2, 3] and variational models [e.g. 4, 5, 6]. In all
these cases, noisy gradients arise because an exchangeable loss-function $\mathcal{L}(x)$ of the optimization parameters x ∈ ℝ^D, across a large dataset {d_i}_{i=1,...,M}, is evaluated only on a subset {d_j}_{j=1,...,m}:

$$\mathcal{L}(x) := \frac{1}{M}\sum_{i=1}^{M}\ell(x, d_i) \;\approx\; \frac{1}{m}\sum_{j=1}^{m}\ell(x, d_j) =: \hat{\mathcal{L}}(x), \qquad m \ll M. \quad (1)$$
If the indices j are i.i.d. draws from [1, M], by the Central Limit Theorem, the error $\hat{\mathcal{L}}(x) - \mathcal{L}(x)$ is unbiased and approximately normally distributed. Despite its popularity and its low cost per step,
SGD has well-known deficiencies that can make it inefficient, or at least tedious to use in practice.
Two main issues are that, first, the gradient itself, even without noise, is not the optimal search
direction; and second, SGD requires a step size (learning rate) that has a drastic effect on the algorithm's
efficiency, is often difficult to choose well, and virtually never optimal for each individual descent
step. The former issue, adapting the search direction, has been addressed by many authors [see 7, for
an overview]. Existing approaches range from lightweight 'diagonal preconditioning' approaches like ADAGRAD [8] and 'stochastic meta-descent' [9], to empirical estimates for the natural gradient
[10] or the Newton direction [11], to problem-specific algorithms [12], and more elaborate estimates
of the Newton direction [13]. Most of these algorithms also include an auxiliary adaptive effect on
the learning rate. And Schaul et al. [14] recently provided an estimation method to explicitly adapt
the learning rate from one gradient descent step to another. None of these algorithms change the
size of the current descent step. Accumulating statistics across steps in this fashion requires some
conservatism: If the step size is initially too large, or grows too fast, SGD can become unstable and
'explode', because individual steps are not checked for robustness at the time they are taken.
[Figure 1 plot: function value f(t) against distance t in the line search direction.]
Figure 1: Sketch: The task of a classic line search is to tune the step taken by an optimization algorithm along a univariate search direction. The search starts at the endpoint of the previous line search, at t = 0. A sequence of exponentially growing extrapolation steps finds a point of positive gradient; it is followed by interpolation steps until an acceptable point is found. Points of insufficient decrease, above the line f(0) + c₁ t f'(0) (gray area), are excluded by the Armijo condition W-I, while points of steep gradient (orange areas) are excluded by the curvature condition W-II (weak Wolfe conditions in solid orange, strong extension in lighter tone). The first point to fulfil both conditions is accepted.
Principally the same problem exists in deterministic (noise-free) optimization problems. There,
providing stability is one of several tasks of the line search subroutine. It is a standard constituent of
algorithms like the classic nonlinear conjugate gradient [15] and BFGS [16, 17, 18, 19] methods [20, §3].¹ In the noise-free case, line searches are considered a solved problem [20, §3]. But the methods
used in deterministic optimization are not stable to noise. They are easily fooled by even small
disturbances, either becoming overly conservative or failing altogether. The reason for this brittleness
is that existing line searches take a sequence of hard decisions to shrink or shift the search space.
This yields efficiency, but breaks hard in the presence of noise. Section 3 constructs a probabilistic
line search for noisy objectives, stabilizing optimization methods like the works cited above. As
line searches only change the length, not the direction of a step, they could be used in combination
with the algorithms adapting SGD's direction, cited above. The algorithm presented below is thus a
complement, not a competitor, to these methods.
2 Connections
2.1 Deterministic Line Searches
There is a host of existing line search variants [20, §3]. In essence, though, these methods explore a univariate domain 'to the right' of a starting point, until an 'acceptable' point is reached (Figure 1).
More precisely, consider the problem of minimizing $\mathcal{L}(x) : \mathbb{R}^D \to \mathbb{R}$, with access to $\nabla\mathcal{L}(x) : \mathbb{R}^D \to \mathbb{R}^D$. At iteration i, some 'outer loop' chooses, at location x_i, a search direction s_i ∈ ℝ^D (e.g. by the BFGS rule, or simply $s_i = -\nabla\mathcal{L}(x_i)$ for gradient descent). It will not be assumed that s_i has unit norm. The line search operates along the univariate domain x(t) = x_i + t s_i for t ∈ ℝ₊. Along this direction it collects scalar function values and projected gradients that will be denoted $f(t) = \mathcal{L}(x(t))$ and $f'(t) = s_i^\top \nabla\mathcal{L}(x(t)) \in \mathbb{R}$. Most line searches involve an initial extrapolation phase to find a point t_r with f'(t_r) > 0. This is followed by a search in [0, t_r], by interval nesting or by interpolation of the collected function and gradient values, e.g. with cubic splines.²
2.1.1 The Wolfe Conditions for Termination
As the line search is only an auxiliary step within a larger iteration, it need not find an exact root of f'; it suffices to find a point 'sufficiently' close to a minimum. The Wolfe [21] conditions are a widely accepted formalization of this notion; they consider t acceptable if it fulfills

$$f(t) \le f(0) + c_1 t f'(0) \quad \text{(W-I)} \qquad \text{and} \qquad f'(t) \ge c_2 f'(0) \quad \text{(W-II)}, \quad (2)$$

using two constants 0 ≤ c₁ < c₂ ≤ 1 chosen by the designer of the line search, not the user. W-I is the Armijo [22], or sufficient decrease condition. It encodes that acceptable function values should lie below a linear extrapolation line of slope c₁ f'(0). W-II is the curvature condition, demanding
¹ In these algorithms, another task of the line search is to guarantee certain properties of the surrounding estimation rule. In BFGS, e.g., it ensures positive definiteness of the estimate. This aspect will not feature here.
² This is the strategy in minimize.m by C. Rasmussen, which provided a model for our implementation. At the time of writing, it can be found at http://learning.eng.cam.ac.uk/carl/code/minimize/minimize.m
[Figure 2 plot: panels showing f(t), ρ(t), p_a(t), p_b(t), and p^Wolfe(t) against distance t in the line search direction; the bottom panel shows 'weak' and 'strong' variants.]
Figure 2: Sketch of a probabilistic line search. As in Fig. 1, the algorithm performs extrapolation and interpolation, but receives unreliable, noisy function and gradient values. These are used to construct a GP posterior (top: solid posterior mean, thin lines at 2 standard deviations, local pdf marginal as shading, three dashed sample paths). This implies a bivariate Gaussian belief (§3.3) over the validity of the weak Wolfe conditions (middle three plots: p_a(t) is the marginal for W-I, p_b(t) for W-II, ρ(t) their correlation). Points are considered acceptable if their joint probability p^Wolfe(t) (bottom) is above a threshold (gray). An approximation (§3.3.1) to the strong Wolfe conditions is shown dashed.
a decrease in slope. The choice c₁ = 0 accepts any value below f(0), while c₁ = 1 rejects all points for convex functions. For the curvature condition, c₂ = 0 only accepts points with f'(t) ≥ 0; while c₂ = 1 accepts any point of greater slope than f'(0). W-I and W-II are known as the weak form of the Wolfe conditions. The strong form replaces W-II with |f'(t)| ≤ c₂|f'(0)| (W-IIa). This guards against accepting points of low function value but large positive gradient. Figure 1 shows a conceptual sketch illustrating the typical process of a line search, and the weak and strong Wolfe conditions. The exposition in §3.3 will initially focus on the weak conditions, which can be precisely modeled probabilistically. Section 3.3.1 then adds an approximate treatment of the strong form.
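As a concrete illustration, the following minimal Python sketch (ours, not part of the paper) checks the deterministic Wolfe conditions of Eq. (2); the defaults c1 = 0.05 and c2 = 0.8 are the values fixed later in §3.4, and the strong flag switches to the W-IIa variant.

def wolfe_conditions(f0, df0, ft, dft, t, c1=0.05, c2=0.8, strong=False):
    # f0, df0: function value and projected gradient at t = 0
    # ft, dft: function value and projected gradient at the candidate t
    armijo = ft <= f0 + c1 * t * df0            # W-I, sufficient decrease
    if strong:
        curvature = abs(dft) <= c2 * abs(df0)   # W-IIa, strong form
    else:
        curvature = dft >= c2 * df0             # W-II, weak form
    return armijo and curvature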
2.2 Bayesian Optimization
A recently blossoming sample-efficient approach to global optimization revolves around modeling
the objective f with a probability measure p(f ); usually a Gaussian process (GP). Searching for
extrema, evaluation points are then chosen by a utility functional u[p(f )]. Our line search borrows
the idea of a Gaussian process surrogate, and a popular utility, expected improvement [23]. Bayesian
optimization methods are often computationally expensive, thus ill-suited for a cost-sensitive task
like a line search. But since line searches are governors more than information extractors, the kind of
sample-efficiency expected of a Bayesian optimizer is not needed. The following sections develop a
lightweight algorithm which adds only minor computational overhead to stochastic optimization.
3 A Probabilistic Line Search
We now consider minimizing $y(t) = \hat{\mathcal{L}}(x(t))$ from Eq. (1). That is, the algorithm can access only
noisy function values and gradients y_t, y'_t at location t, with Gaussian likelihood

$$p(y_t, y'_t \mid f) = \mathcal{N}\!\left(\begin{bmatrix} y_t \\ y'_t \end{bmatrix};\ \begin{bmatrix} f(t) \\ f'(t) \end{bmatrix},\ \begin{bmatrix} \sigma_f^2 & 0 \\ 0 & \sigma_{f'}^2 \end{bmatrix}\right). \quad (3)$$
The Gaussian form is supported by the Central Limit argument at Eq. (1); see §3.4 regarding estimation of the variances σ_f², σ_{f'}². Our algorithm has three main ingredients: a robust yet lightweight Gaussian process surrogate on f(t) facilitating analytic optimization; a simple Bayesian optimization objective for exploration; and a probabilistic formulation of the Wolfe conditions as a termination criterion.
3.1 Lightweight Gaussian Process Surrogate
We model information about the objective in a probability measure p(f ). There are two requirements
on such a measure: First, it must be robust to irregularity of the objective. And second, it must allow
analytic computation of discrete candidate points for evaluation, because a line search should not call
yet another optimization subroutine itself. Both requirements are fulfilled by a once-integrated Wiener
process, i.e. a zero-mean Gaussian process prior $p(f) = \mathcal{GP}(f; 0, k)$ with covariance function

$$k(t, t') = \theta^2\left[\tfrac{1}{3}\min{}^3(\tilde{t}, \tilde{t}') + \tfrac{1}{2}\,|t - t'|\,\min{}^2(\tilde{t}, \tilde{t}')\right]. \quad (4)$$
Here t̃ := t + τ and t̃' := t' + τ denote a shift by a constant τ > 0. This ensures the kernel is positive semi-definite; the precise value of τ is irrelevant, as the algorithm only considers positive values of t (our implementation uses τ = 10). See §3.4 regarding the scale θ². With the likelihood of Eq. (3), this prior gives rise to a GP posterior whose mean function is a cubic spline³ [25]. We note in passing that regression on f and f' from N observations of pairs (y_t, y'_t) can be formulated as a filter [26] and thus performed in O(N) time. However, since a line search typically collects < 10 data points, generic GP inference, using a Gram matrix, has virtually the same, low cost.
Because Gaussian measures are closed under linear maps [27, §10], Eq. (4) implies a Wiener process (linear spline) model on f':

$$p(f, f') = \mathcal{GP}\!\left(\begin{bmatrix} f \\ f' \end{bmatrix};\ 0,\ \begin{bmatrix} k & k^\partial \\ {}^\partial k & {}^\partial k^\partial \end{bmatrix}\right), \quad (5)$$

where ${}^i k^j := \partial^{i+j} k(t, t') / (\partial t^i\, \partial t'^j)$, and thus (using the indicator function 𝕀(x) = 1 if x, else 0)

$$k^\partial(t, t') = \theta^2\left[\mathbb{I}(t < t')\, \tilde{t}^2/2 + \mathbb{I}(t \ge t')\,(\tilde{t}\tilde{t}' - \tilde{t}'^2/2)\right],$$
$${}^\partial k(t, t') = \theta^2\left[\mathbb{I}(t' < t)\, \tilde{t}'^2/2 + \mathbb{I}(t' \ge t)\,(\tilde{t}\tilde{t}' - \tilde{t}^2/2)\right],$$
$${}^\partial k^\partial(t, t') = \theta^2 \min(\tilde{t}, \tilde{t}'). \quad (6)$$
Given a set of evaluations (t, y, y') (vectors, with elements t_i, y_{t_i}, y'_{t_i}) with independent likelihood (3), the posterior p(f | y, y') is a GP with posterior mean μ and covariance k̃ as follows:

$$\mu(t) = \underbrace{\begin{bmatrix} k_{t\mathbf{t}} & k^\partial_{t\mathbf{t}} \end{bmatrix} \begin{bmatrix} k_{\mathbf{tt}} + \sigma_f^2 I & k^\partial_{\mathbf{tt}} \\ {}^\partial k_{\mathbf{tt}} & {}^\partial k^\partial_{\mathbf{tt}} + \sigma_{f'}^2 I \end{bmatrix}^{-1}}_{=:\ g^\top(t)} \begin{bmatrix} y \\ y' \end{bmatrix}, \qquad \tilde{k}(t, t') = k_{tt'} - g^\top(t) \begin{bmatrix} k_{\mathbf{t}t'} \\ {}^\partial k_{\mathbf{t}t'} \end{bmatrix}. \quad (7)$$
The posterior marginal variance will be denoted by 𝕍(t) = k̃(t, t). To see that μ is indeed piecewise cubic (i.e. a cubic spline), we note that it has at most three non-vanishing derivatives⁴, because

$${}^{\partial^2} k(t, t') = \theta^2\, \mathbb{I}(t \le t')\,(t' - t), \quad {}^{\partial^2} k^\partial(t, t') = \theta^2\, \mathbb{I}(t \le t'), \quad {}^{\partial^3} k(t, t') = -\theta^2\, \mathbb{I}(t \le t'), \quad {}^{\partial^3} k^\partial(t, t') = 0. \quad (8)$$
This piecewise cubic form of μ is crucial for our purposes: having collected N values of f and f', respectively, all local minima of μ can be found analytically in O(N) time in a single sweep through the 'cells' t_{i-1} < t < t_i, i = 1, ..., N (here t_0 = 0 denotes the start location, where (y_0, y'_0) are 'inherited' from the preceding line search; for typical line searches N < 10, c.f. §4). In each cell, μ(t) is a cubic polynomial with at most one minimum in the cell, found by a trivial quadratic computation from the three scalars μ'(t_i), μ''(t_i), μ'''(t_i). This is in contrast to other GP regression models, for example the one arising from a Gaussian kernel, which give more involved posterior means whose local minima can be found only approximately. Another advantage of the cubic spline interpolant is that it does not assume the existence of higher derivatives (in contrast to the Gaussian kernel, for example), and thus reacts robustly to irregularities in the objective.
In our algorithm, after each evaluation of (y_N, y'_N), we use this property to compute a short list of candidates for the next evaluation, consisting of the ≤ N local minimizers of μ(t) and one additional extrapolation node at t_max + α, where t_max is the currently largest evaluated t, and α is an extrapolation step size starting at α = 1 and doubled after each extrapolation step.
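The surrogate itself can be sketched in a few lines of Python; this is an illustrative reimplementation (ours, not the authors' MATLAB code), assuming θ = 1 and τ = 10 as stated in the text, with helper names of our choosing.

import numpy as np

tau = 10.0  # constant shift from Eq. (4); theta = 1 after rescaling (Sec. 3.4)

def k(a, b):    # cov(f(a), f(b)), Eq. (4)
    at, bt = a + tau, b + tau
    mn = np.minimum(at, bt)
    return mn**3 / 3.0 + 0.5 * np.abs(a - b) * mn**2

def kd(a, b):   # cov(f(a), f'(b)), first line of Eq. (6)
    at, bt = a + tau, b + tau
    return np.where(a < b, at**2 / 2.0, at * bt - bt**2 / 2.0)

def dk(a, b):   # cov(f'(a), f(b)), by symmetry of k
    return kd(b, a)

def dkd(a, b):  # cov(f'(a), f'(b)), Wiener process covariance
    return np.minimum(a + tau, b + tau)

def gp_posterior(ts, ys, dys, sf2, sdf2):
    """Posterior mean mu(t) and variance V(t) of Eq. (7).
    ts, ys, dys: evaluated locations, noisy values, noisy gradients;
    sf2, sdf2: noise variances sigma_f^2 and sigma_f'^2."""
    T = np.asarray(ts, dtype=float)[:, None]
    n = len(ts)
    G = np.block([[k(T, T.T) + sf2 * np.eye(n), kd(T, T.T)],
                  [dk(T, T.T), dkd(T, T.T) + sdf2 * np.eye(n)]])
    w = np.linalg.solve(G, np.concatenate([ys, dys]))
    def mu(t):
        kt = np.concatenate([k(t, T.ravel()), kd(t, T.ravel())])
        return kt @ w
    def V(t):
        kt = np.concatenate([k(t, T.ravel()), kd(t, T.ravel())])
        return k(t, t) - kt @ np.linalg.solve(G, kt)
    return mu, V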
3.2 Choosing Among Candidates
The previous section described the construction of < N + 1 discrete candidate points for the next evaluation. To decide at which of the candidate points to actually call f and f', we make use of a popular utility from Bayesian optimization. Expected improvement [23] is the expected amount,
³ Eq. (4) can be generalized to the 'natural spline', removing the need for the constant τ [24, §6.3.1]. However, this notion is ill-defined in the case of a single observation, which is crucial for the line search.
⁴ There is no well-defined probabilistic belief over f'' and higher derivatives: sample paths of the Wiener process are almost surely non-differentiable almost everywhere [28, §2.2]. But μ(t) is always a member of the reproducing kernel Hilbert space induced by k, thus piecewise cubic [24, §6.1].
[Figure 3 plots: five line-search snapshots, each panel annotated with its noise levels (σ_f, σ_{f'}) and labeled t — constraining, t — extrapolation, t — interpolation, t — immediate accept, and t — high noise interpolation; top row shows f(t), bottom row p^Wolfe(t).]
Figure 3: Curated snapshots of line searches (from MNIST experiment, §4), showing variability of the objective's shape and the decision process. Top row: GP posterior and evaluations; bottom row: approximate p^Wolfe over strong Wolfe conditions. Accepted point marked red.
under the GP surrogate, by which the function f(t) might be smaller than a 'current best' value η (we set η = min_{i=0,...,N} {μ(t_i)}, where t_i are observed locations),

$$u_{\mathrm{EI}}(t) = \mathbb{E}_{p(f_t \mid y, y')}[\min\{0, \eta - f(t)\}] = \frac{\eta - \mu(t)}{2}\left[1 + \operatorname{erf}\!\left(\frac{\eta - \mu(t)}{\sqrt{2\mathbb{V}(t)}}\right)\right] + \sqrt{\frac{\mathbb{V}(t)}{2\pi}}\, \exp\!\left(-\frac{(\eta - \mu(t))^2}{2\mathbb{V}(t)}\right). \quad (9)$$
The next evaluation point is chosen as the candidate maximizing this utility, multiplied by the
probability for the Wolfe conditions to be fulfilled, which is derived in the following section.
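For illustration, Eq. (9) translates directly into a few lines of Python (a sketch of ours, with mu_t, V_t, and eta as defined above):

from math import erf, exp, pi, sqrt

def expected_improvement(mu_t, V_t, eta):
    # Eq. (9): expected amount by which f(t) lies below the current best eta
    d = eta - mu_t
    return 0.5 * d * (1.0 + erf(d / sqrt(2.0 * V_t))) \
        + sqrt(V_t / (2.0 * pi)) * exp(-d * d / (2.0 * V_t))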
3.3 Probabilistic Wolfe Conditions for Termination
The key observation for a probabilistic extension of W-I and W-II is that they are positivity constraints on two variables a_t, b_t that are both linear projections of the (jointly Gaussian) variables f and f':

$$\begin{bmatrix} a_t \\ b_t \end{bmatrix} = \begin{bmatrix} 1 & c_1 t & -1 & 0 \\ 0 & -c_2 & 0 & 1 \end{bmatrix} \begin{bmatrix} f(0) \\ f'(0) \\ f(t) \\ f'(t) \end{bmatrix} \ge 0. \quad (10)$$
The GP of Eq. (5) on f thus implies, at each value of t, a bivariate Gaussian distribution

$$p(a_t, b_t) = \mathcal{N}\!\left(\begin{bmatrix} a_t \\ b_t \end{bmatrix};\ \begin{bmatrix} m_t^a \\ m_t^b \end{bmatrix},\ \begin{bmatrix} C_t^{aa} & C_t^{ab} \\ C_t^{ba} & C_t^{bb} \end{bmatrix}\right), \quad (11)$$

with

$$m_t^a = \mu(0) - \mu(t) + c_1 t\, \mu'(0) \qquad \text{and} \qquad m_t^b = \mu'(t) - c_2\, \mu'(0), \quad (12)$$

and covariances

$$C_t^{aa} = \tilde{k}_{00} + (c_1 t)^2\, {}^\partial\tilde{k}^\partial_{00} + \tilde{k}_{tt} + 2\left[c_1 t\,(\tilde{k}^\partial_{00} - {}^\partial\tilde{k}_{0t}) - \tilde{k}_{0t}\right],$$
$$C_t^{bb} = c_2^2\, {}^\partial\tilde{k}^\partial_{00} - 2 c_2\, {}^\partial\tilde{k}^\partial_{0t} + {}^\partial\tilde{k}^\partial_{tt}, \quad (13)$$
$$C_t^{ab} = C_t^{ba} = -c_2\,(\tilde{k}^\partial_{00} + c_1 t\, {}^\partial\tilde{k}^\partial_{00}) + c_2\, {}^\partial\tilde{k}_{0t} + \tilde{k}^\partial_{0t} + c_1 t\, {}^\partial\tilde{k}^\partial_{0t} - \tilde{k}^\partial_{tt}.$$
The quadrant probability p_t^Wolfe = p(a_t > 0 ∧ b_t > 0) for the Wolfe conditions to hold is an integral over a bivariate normal probability,

$$p_t^{\mathrm{Wolfe}} = \int_{-m_t^a/\sqrt{C_t^{aa}}}^{\infty} \int_{-m_t^b/\sqrt{C_t^{bb}}}^{\infty} \mathcal{N}\!\left(\begin{bmatrix} a \\ b \end{bmatrix};\ \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & \rho_t \\ \rho_t & 1 \end{bmatrix}\right) da\, db, \quad (14)$$

with correlation coefficient ρ_t = C_t^{ab} / √(C_t^{aa} C_t^{bb}). It can be computed efficiently [29], using readily available code⁵ (on a laptop, one evaluation of p_t^Wolfe costs about 100 microseconds; each line search requires < 50 such calls). The line search computes this probability for all evaluation nodes, after each evaluation. If any of the nodes fulfills the Wolfe conditions with p_t^Wolfe > c_W, for some threshold 0 < c_W ≤ 1, it is accepted and returned. If several nodes simultaneously fulfill this requirement, the t of the lowest μ(t) is returned. Section 3.4 below motivates fixing c_W = 0.3.
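A sketch of this computation in Python, assuming SciPy's bivariate normal CDF rather than the MATLAB routine referenced in the footnote below; the optional b_upper argument anticipates the strong-condition truncation of §3.3.1.

import numpy as np
from scipy.stats import multivariate_normal

def p_wolfe(ma, mb, Caa, Cbb, Cab, b_upper=np.inf):
    # Quadrant probability of Eq. (14) for the standardized variables
    rho = Cab / np.sqrt(Caa * Cbb)
    rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    la = -ma / np.sqrt(Caa)   # lower integration limit for a
    lb = -mb / np.sqrt(Cbb)   # lower integration limit for b
    big = 20.0                # effectively +infinity for a standard normal
    ub = min(b_upper, big)
    # rectangle probability via four CDF evaluations
    p = (rv.cdf([big, ub]) - rv.cdf([la, ub])
         - rv.cdf([big, lb]) + rv.cdf([la, lb]))
    return max(p, 0.0)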
⁵ e.g. http://www.math.wsu.edu/faculty/genz/software/matlab/bvn.m
3.3.1 Approximation for strong conditions
As noted in Section 2.1.1, deterministic optimizers tend to use the strong Wolfe conditions, which use |f'(0)| and |f'(t)|. A precise extension of these conditions to the probabilistic setting is numerically taxing, because the distribution over |f'| is a non-central χ-distribution, requiring customized computations. However, a straightforward variation to (14) captures the spirit of the strong Wolfe conditions, that large positive derivatives should not be accepted: assuming f'(0) < 0 (i.e. that the search direction is a descent direction), the strong second Wolfe condition can be written exactly as

$$0 \le b_t = f'(t) - c_2 f'(0) \le -2 c_2 f'(0). \quad (15)$$

The value -2c₂ f'(0) is bounded to 95% confidence by

$$-2 c_2 f'(0) \lesssim 2 c_2 \left(|\mu'(0)| + 2\sqrt{\mathbb{V}^\partial(0)}\right) =: \bar{b}, \quad (16)$$

where 𝕍∂(0) denotes the posterior marginal variance of f'(0). Hence, an approximation to the strong Wolfe conditions can be reached by replacing the infinite upper integration limit on b in Eq. (14) with (b̄ - m_t^b)/√(C_t^{bb}). The effect of this adaptation, which adds no overhead to the computation, is shown in Figure 2 as a dashed line.
3.4 Eliminating Hyper-parameters
As a black-box inner loop, the line search should not require any tuning by the user. The preceding section introduced six so-far undefined parameters: c₁, c₂, c_W, θ, σ_f, σ_{f'}. We will now show that c₁, c₂, c_W can be fixed by hard design decisions; θ can be eliminated by standardizing the optimization objective within the line search; and the noise levels can be estimated at runtime with low overhead for batch objectives of the form in Eq. (1). The result is a parameter-free algorithm that effectively removes the one most problematic parameter from SGD: the learning rate.
Design parameters c₁, c₂, c_W. Our algorithm inherits the Wolfe thresholds c₁ and c₂ from its deterministic ancestors. We set c₁ = 0.05 and c₂ = 0.8. This is a standard setting that yields a 'lenient' line search, i.e. one that accepts most descent points. The rationale is that the stochastic aspect of SGD is not always problematic, but can also be helpful through a kind of 'annealing' effect.
The acceptance threshold c_W is a new design parameter arising only in the probabilistic setting. We fix it to c_W = 0.3. To motivate this value, first note that in the noise-free limit, all values 0 < c_W < 1 are equivalent, because p^Wolfe then switches discretely between 0 and 1 upon observation of the function. A back-of-the-envelope computation (left out for space), assuming only two evaluations at t = 0 and t = t₁ and the same fixed noise level on f and f' (which then cancels out), shows that function values barely fulfilling the conditions, i.e. a_{t₁} = b_{t₁} = 0, can have p^Wolfe ≈ 0.2, while function values at a_{t₁} = b_{t₁} = ε for ε → 0 with 'unlucky' evaluations (both function and gradient values one standard deviation from the true value) can achieve p^Wolfe ≈ 0.4. The choice c_W = 0.3 balances the two competing desiderata of precision and recall. Empirically (Fig. 3), we rarely observed values of p^Wolfe close to this threshold. Even at high evaluation noise, a function evaluation typically either clearly rules out the Wolfe conditions, or lifts p^Wolfe well above the threshold.
Scale θ. The parameter θ of Eq. (4) simply scales the prior variance. It can be eliminated by scaling the optimization objective: we set θ = 1 and scale y_i ← (y_i − y_0)/|y'_0| and y'_i ← y'_i/|y'_0| within the code of the line search. This gives y(0) = 0 and y'(0) = −1, and typically ensures the objective ranges in the single digits across 0 < t < 10, where most line searches take place. The division by |y'_0| causes a non-Gaussian disturbance, but this does not seem to have notable empirical effect.
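In code, this standardization is a two-line transformation; a hedged Python sketch (helper name ours):

def standardize(ys, dys):
    # Shift and scale raw observations so that y(0) = 0 and y'(0) = -1
    y0, scale = ys[0], abs(dys[0])
    return [(y - y0) / scale for y in ys], [dy / scale for dy in dys]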
Noise scales σ_f, σ_{f'}. The likelihood (3) requires standard deviations for the noise on both function values (σ_f) and gradients (σ_{f'}). One could attempt to learn these across several line searches. However, in exchangeable models, as captured by Eq. (1), the variance of the loss and its gradient can be estimated directly within the batch, at low computational overhead, an approach already advocated by Schaul et al. [14]. We collect the empirical statistics

$$\hat{S}(x) := \frac{1}{m}\sum_{j=1}^{m} \ell^2(x, y_j) \qquad \text{and} \qquad \widehat{\nabla S}(x) := \frac{1}{m}\sum_{j=1}^{m} \nabla\ell(x, y_j)^{.2} \quad (17)$$
(where .² denotes the element-wise square) and estimate, at the beginning of a line search from x_k,

$$\sigma_f^2 = \frac{1}{m-1}\left[\hat{S}(x_k) - \hat{\mathcal{L}}(x_k)^2\right] \qquad \text{and} \qquad \sigma_{f'}^2 = \frac{1}{m-1}\, s_i^{.2\,\top}\left[\widehat{\nabla S}(x_k) - (\nabla\hat{\mathcal{L}})^{.2}\right]. \quad (18)$$
This amounts to the cautious assumption that noise on the gradient is independent. We finally scale the two empirical estimates as described in §3.4: σ_f ← σ_f/|y'(0)|, and ditto for σ_{f'}. The overhead of this estimation is small if the computation of ℓ(x, y_j) itself is more expensive than the summation over j (in the neural network examples of §4, with their comparably simple ℓ, the additional steps added only about 1% cost overhead to the evaluation of the loss). Of course, this approach requires a batch size m > 1. For single-sample batches, a running average could be used instead (single-sample batches are not necessarily a good choice; in our experiments, for example, vanilla SGD with batch size 10 converged faster in wall-clock time than unit-batch SGD). Estimating noise separately for each input dimension captures the often inhomogeneous structure among gradient elements, and its effect on the noise along the projected direction. For example, in deep models, gradient noise is typically higher on weights between the input and first hidden layer, hence line searches along the corresponding directions are noisier than those along directions affecting higher-level weights.
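A sketch of these within-batch estimators in Python (ours; it assumes per-sample losses and gradients are available, with array shapes of our convention):

import numpy as np

def noise_estimates(losses, grads, s):
    """Eqs. (17)-(18). losses: (m,) per-sample losses; grads: (m, D)
    per-sample gradients; s: search direction. Returns the estimated
    variances of the batch mean and of the projected batch gradient."""
    m = losses.shape[0]
    var_f = (np.mean(losses**2) - np.mean(losses)**2) / (m - 1)
    var_df = (s**2) @ (np.mean(grads**2, axis=0)
                       - np.mean(grads, axis=0)**2) / (m - 1)
    return var_f, var_df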
3.4.1 Propagating Step Sizes Between Line Searches
As will be demonstrated in §4, the line search can find good step sizes even if the length of the direction s_i (which is proportional to the learning rate α in SGD) is mis-scaled. Since such scale issues typically persist over time, it would be wasteful to have the algorithm re-fit a good scale in each line search. Instead, we propagate step lengths from one iteration of the search to another: we set the initial search direction to $s_0 = -\alpha_0 \nabla\hat{\mathcal{L}}(x_0)$ with some initial learning rate α₀. Then, after each line search ending at $x_i = x_{i-1} + t^* s_i$, the next search direction is set to $s_{i+1} = -1.3\, t^* \alpha_0 \nabla\hat{\mathcal{L}}(x_i)$. Thus, the next line search starts its extrapolation at 1.3 times the step size of its predecessor.
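Schematically, the outer loop then looks as follows; a Python sketch of ours, where stochastic_gradient and line_search are hypothetical helpers standing in for the batch gradient of Eq. (1) and the procedure of §3.

def sgd_with_line_search(x0, stochastic_gradient, line_search,
                         alpha0=1.0, num_steps=100):
    # stochastic_gradient(x): noisy batch gradient (user-supplied)
    # line_search(x, s): returns (accepted step t*, new location)
    x, t_star = x0, 1.0
    for _ in range(num_steps):
        g = stochastic_gradient(x)
        s = -1.3 * t_star * alpha0 * g   # propagation rule of Sec. 3.4.1
        t_star, x = line_search(x, s)
    return x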
Remark on convergence of SGD with line searches: We note in passing that it is straightforward to ensure that SGD instances using the line search inherit the convergence guarantees of SGD: putting even an extremely loose bound ᾱ_i on the step sizes taken by the i-th line search, such that Σ_i ᾱ_i = ∞ and Σ_i ᾱ_i² < ∞, ensures that the line search-controlled SGD converges in probability [1].
4 Experiments
Our experiments were performed on the well-worn problems of training a 2-layer neural net with logistic nonlinearity on the MNIST and CIFAR-10 datasets.⁶ In both cases, the network had 800 hidden units, giving optimization problems with 636 010 and 2 466 410 parameters, respectively. While this may be 'low-dimensional' by contemporary standards, it exhibits the stereotypical challenges of stochastic optimization for machine learning. Since the line search deals with only univariate subproblems, the extrinsic dimensionality of the optimization task is not particularly relevant for an empirical evaluation. Leaving aside the cost of the function evaluations themselves, computation cost associated with the line search is independent of the extrinsic dimensionality.
The central nuisance of SGD is having to choose the learning rate α, and potentially also a schedule for its decrease. Theoretically, a decaying learning rate is necessary to guarantee convergence of SGD [1], but empirically, keeping the rate constant, or only decaying it cautiously, often works better (Fig. 4). In a practical setting, a user would perform exploratory experiments (say, for 10³ steps) to determine a good learning rate and decay schedule, then run a longer experiment in the best found setting. In our networks, constant learning rates of α = 0.75 and α = 0.08 for MNIST and CIFAR-10, respectively, achieved the lowest test error after the first 10³ steps of SGD. We then trained networks with vanilla SGD with and without α-decay (using the schedule α(i) = α₀/i), and SGD using the probabilistic line search, with α₀ ranging across five orders of magnitude, on batches of size m = 10.
Fig. 4, top, shows test errors after 10 epochs as a function of the initial learning rate α₀ (error bars based on 20 random re-starts). Across the broad range of α₀ values, the line search quickly identified good step sizes, stabilized the training, and progressed efficiently, reaching test errors similar
⁶ http://yann.lecun.com/exdb/mnist/ and http://www.cs.toronto.edu/~kriz/cifar.html. Like other authors, we only used the 'batch 1' sub-set of CIFAR-10.
[Figure 4 plots: panels 'MNIST 2-layer neural net' and 'CIFAR-10 2-layer neural net'; legend: SGD fixed α, SGD decaying α, Line Search; top row: test error vs. initial learning rate; bottom row: test error vs. epoch.]
Figure 4: Top row: test error after 10 epochs as a function of the initial learning rate (note logarithmic ordinate for MNIST). Bottom row: test error as a function of training epoch (same color and symbol scheme as in the top row). No matter the initial learning rate, the line search-controlled SGD performs close to the (in practice unknown) optimal SGD instance, effectively removing the need for exploratory experiments and learning-rate tuning. All plots show means and 2 std. deviations over 20 repetitions.
to those reported in the literature for tuned versions of this kind of architecture on these datasets.
While in both datasets the best SGD instance without rate-decay just barely outperformed the line searches, the optimal α value was not the one that performed best after 10³ steps. So this kind of exploratory experiment (which comes with its own cost of human designer time) would have led to worse performance than simply starting a single instance of SGD with the line search and α₀ = 1, letting the algorithm do the rest.
Average time overhead (i.e. excluding evaluation time for the objective) was about 48 ms per line search. This is independent of the problem dimensionality, and expected to drop significantly with optimized code. Analysing one of the MNIST instances more closely, we found that the average length of a line search was about 1.4 function evaluations, and 80%-90% of line searches terminated after the first evaluation. This suggests good scale adaptation and thus efficient search (note that an 'optimally tuned' algorithm would always lead to accepts).
The supplements provide additional plots of raw objective values, chosen step sizes, encountered gradient norms and gradient noise during the optimization, as well as test-vs-train error plots, for each of the two datasets. These provide a richer picture of the step-size control performed by
the line search. In particular, they show that the line search chooses step sizes that follow a nontrivial
dynamic over time. This is in line with the empirical truism that SGD requires tuning of the step size
during its progress, a nuisance taken care of by the line search. Using this structured information for
more elaborate analytical purposes, in particular for convergence estimation, is an enticing prospect,
but beyond the scope of this paper.
5 Conclusion
The line search paradigm widely accepted in deterministic optimization can be extended to noisy
settings. Our design combines existing principles from the noise-free case with ideas from Bayesian
optimization, adapted for efficiency. We arrived at a lightweight 'black-box' algorithm that exposes
no parameters to the user. Our method is complementary to, and can in principle be combined with,
virtually all existing methods for stochastic optimization that adapt a step direction of fixed length.
Empirical evaluations suggest the line search effectively frees users from worries about the choice of
a learning rate: Any reasonable initial choice will be quickly adapted and lead to close to optimal
performance. Our MATLAB implementation will be made available at the time of publication of this article.
References
[1] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, Sep. 1951.
[2] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Twenty-first International Conference on Machine Learning (ICML 2004), 2004.
[3] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of the 19th Int. Conf. on Computational Statistics (COMPSTAT), pages 177-186. Springer, 2010.
[4] M.D. Hoffman, D.M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[5] J. Hensman, M. Rattray, and N.D. Lawrence. Fast variational inference in the conjugate exponential family. In Advances in Neural Information Processing Systems (NIPS 25), pages 2888-2896, 2012.
[6] T. Broderick, N. Boyd, A. Wibisono, A.C. Wilson, and M.I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems (NIPS 26), pages 1727-1735, 2013.
[7] A.P. George and W.B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167-198, 2006.
[8] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
[9] N.N. Schraudolph. Local gain adaptation in stochastic gradient descent. In Ninth International Conference on Artificial Neural Networks (ICANN) 99, volume 2, pages 569-574, 1999.
[10] S.-I. Amari, H. Park, and K. Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399-1409, 2000.
[11] N.L. Roux and A.W. Fitzgibbon. A fast natural Newton method. In 27th International Conference on Machine Learning (ICML), pages 623-630, 2010.
[12] R. Rajesh, W. Chong, D. Blei, and E. Xing. An adaptive learning rate for stochastic variational inference. In 30th International Conference on Machine Learning (ICML), pages 298-306, 2013.
[13] P. Hennig. Fast probabilistic optimization from noisy gradients. In 30th International Conference on Machine Learning (ICML), 2013.
[14] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In 30th International Conference on Machine Learning (ICML-13), pages 343-351, 2013.
[15] R. Fletcher and C.M. Reeves. Function minimization by conjugate gradients. The Computer Journal, 7(2):149-154, 1964.
[16] C.G. Broyden. A new double-rank minimization algorithm. Notices of the AMS, 16:670, 1969.
[17] R. Fletcher. A new approach to variable metric algorithms. The Computer Journal, 13(3):317, 1970.
[18] D. Goldfarb. A family of variable metric updates derived by variational means. Math. Comp., 24(109):23-26, 1970.
[19] D.F. Shanno. Conditioning of quasi-Newton methods for function minimization. Math. Comp., 24(111):647-656, 1970.
[20] J. Nocedal and S.J. Wright. Numerical Optimization. Springer Verlag, 1999.
[21] P. Wolfe. Convergence conditions for ascent methods. SIAM Review, pages 226-235, 1969.
[22] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1-3, 1966.
[23] D.R. Jones, M. Schonlau, and W.J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455-492, 1998.
[24] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[25] G. Wahba. Spline Models for Observational Data. Number 59 in CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, 1990.
[26] S. Särkkä. Bayesian Filtering and Smoothing. Cambridge University Press, 2013.
[27] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 3rd edition, 1991.
[28] R.J. Adler. The Geometry of Random Fields. Wiley, 1981.
[29] Z. Drezner and G.O. Wesolowsky. On the computation of the bivariate normal integral. Journal of Statistical Computation and Simulation, 35(1-2):101-107, 1990.
5,251 | 5,754 | COEVOLVE: A Joint Point Process Model for
Information Diffusion and Network Co-evolution
Mehrdad Farajtabar†, Yichen Wang†, Manuel Gomez-Rodriguez‡, Shuang Li†, Hongyuan Zha†, Le Song†
†Georgia Institute of Technology, ‡MPI for Software Systems
{mehrdad,yichen.wang,sli370}@gatech.edu, manuelgr@mpi-sws.org, {zha,lsong}@cc.gatech.edu
{zha,lsong}@cc.gatech.edu
Abstract
Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly
creating new links when exposed to new information sources, and in turn these
links are altering the way information spreads. However, these two highly intertwined stochastic processes, information diffusion and network evolution, have
been predominantly studied separately, ignoring their co-evolutionary dynamics.
We propose a temporal point process model, COEVOLVE, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other.
This model allows us to efficiently simulate interleaved diffusion and network
events, and generate traces obeying common diffusion and network patterns observed in real-world networks. Furthermore, we also develop a convex optimization framework to learn the parameters of the model from historical diffusion and
network evolution traces. We experimented with both synthetic data and data gathered from Twitter, and show that our model provides a good fit to the data as
well as more accurate predictions than alternatives.
1 Introduction
Online social networks, such as Twitter or Weibo, have become large information networks where
people share, discuss and search for information of personal interest as well as breaking news [1].
In this context, users often forward to their followers information they are exposed to via their
followees, triggering the emergence of information cascades that travel through the network [2],
and constantly create new links to information sources, triggering changes in the network itself
over time. Importantly, recent empirical studies with Twitter data have shown that both information
diffusion and network evolution are coupled and network changes are often triggered by information
diffusion [3, 4, 5].
While there have been many recent works on modeling information diffusion [2, 6, 7, 8] and network
evolution [9, 10, 11], most of them treat these two stochastic processes independently and separately,
ignoring the influence one may have on the other over time. Thus, to better understand information
diffusion and network evolution, there is an urgent need for joint probabilistic models of the two
processes, which are largely inexistent to date.
In this paper, we propose a probabilistic generative model, COEVOLVE, for the joint dynamics of
information diffusion and network evolution. Our model is based on the framework of temporal
point processes, which explicitly characterize the continuous time interval between events, and it
consists of two interwoven and interdependent components (refer to Appendix B for an illustration):
I. Information diffusion process. We design an 'identity revealing' multivariate Hawkes process [12] to capture the mutual excitation behavior of retweeting events, where the intensity of such events in a user is boosted by previous events from her time-varying set of followees. Although Hawkes processes have been used for information diffusion before [13, 14, 15, 16, 17, 18,
19], the key innovation of our approach is to explicitly model the excitation due to a particular
source node, hence revealing the identity of the source. Such a design reflects the reality that information sources are explicitly acknowledged, and it also allows a particular information source to acquire new links at a rate according to her 'informativeness'.
II. Network evolution process. We model link creation as an 'information driven' survival process,
and couple the intensity of this process with retweeting events. Although survival processes have
been used for link creation before [20, 21], the key innovation in our model is to incorporate retweeting events as the driving force for such processes. Since our model has captured the source
identity of each retweeting event, new links will be targeted toward the information sources, with
an intensity proportional to their degree of excitation and each source's influence.
Our model is designed in such a way that it allows the two processes, information diffusion and network evolution, to unfold simultaneously on the same time scale and exercise bidirectional influence on each other, allowing sophisticated coevolutionary dynamics to be generated (e.g., see Figure 5).
Importantly, the flexibility of our model does not prevent us from efficiently simulating diffusion
and link events from the model and learning its parameters from real world data:
• Efficient simulation. We design a scalable sampling procedure that exploits the sparsity of the
generated networks. Its complexity is O(nd log m), where n is the number of samples, m is the
number of nodes and d is the maximum number of followees per user.
• Convex parameter learning. We show that the model parameters that maximize the joint likelihood of observed diffusion and link creation events can be found via convex optimization.
Finally, we experimentally verify that our model can produce coevolutionary dynamics of information diffusion and network evolution, and generate retweet and link events that obey common
information diffusion patterns (e.g., cascade structure, size and depth), static network patterns (e.g.,
node degree) and temporal network patterns (e.g., shrinking diameter) described in related literature [22, 10, 23]. Furthermore, we show that, by modeling the coevolutionary dynamics, our model
provides significantly more accurate link and diffusion event predictions than alternatives on a large-scale Twitter dataset [3].
2 Background on Temporal Point Processes
A temporal point process is a random process whose realization consists of a list of discrete events localized in time, {t_i} with t_i ∈ ℝ₊ and i ∈ ℤ₊. Many different types of data produced in online social networks can be represented as temporal point processes, such as the times of retweets and link creations. A temporal point process can be equivalently represented as a counting process, N(t), which records the number of events before time t. Let the history 𝓗(t) be the list of times of events {t₁, t₂, ..., t_n} up to but not including time t. Then, the number of observed events in a small time window dt between [t, t+dt) is $dN(t) = \sum_{t_i \in \mathcal{H}(t)} \delta(t - t_i)\, dt$, and hence $N(t) = \int_0^t dN(s)$, where δ(t) is a Dirac delta function. More generally, given a function f(t), we can define the convolution with respect to dN(t) as

$$f(t) \star dN(t) := \int_0^t f(t - \tau)\, dN(\tau) = \sum_{t_i \in \mathcal{H}(t)} f(t - t_i). \quad (1)$$
The point process representation of temporal data is fundamentally different from the discrete-time representation typically used in social network analysis. It directly models the time interval between events as random variables, and avoids the need to pick a time window to aggregate events. It allows temporal events to be modeled in a more fine-grained fashion, and has a remarkably rich theoretical support [24].
An important way to characterize temporal point processes is via the conditional intensity function: a stochastic model for the time of the next event given all the times of previous events. Formally, the conditional intensity function λ*(t) (intensity, for short) is the conditional probability of observing an event in a small window [t, t+dt) given the history 𝓗(t), i.e.,

$$\lambda^*(t)\, dt := \mathbb{P}\{\text{event in } [t, t+dt) \mid \mathcal{H}(t)\} = \mathbb{E}[dN(t) \mid \mathcal{H}(t)], \quad (2)$$

where one typically assumes that only one event can happen in a small window of size dt, i.e., dN(t) ∈ {0, 1}. Then, given a time t' ≥ t, we can also characterize the conditional probability that no event happens during [t, t') and the conditional density that an event occurs at time t' as $S^*(t') = \exp(-\int_t^{t'} \lambda^*(\tau)\, d\tau)$ and $f^*(t') = \lambda^*(t')\, S^*(t')$ respectively [24]. Furthermore, we can express the log-likelihood of a list of events {t₁, t₂, ..., t_n} in an observation window [0, T) as

$$\mathcal{L} = \sum_{i=1}^{n} \log \lambda^*(t_i) - \int_0^T \lambda^*(\tau)\, d\tau, \qquad T \ge t_n. \quad (3)$$
This simple log-likelihood will later enable us to learn the parameters of our model from observed
data.
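For concreteness, Eq. (3) can be evaluated numerically as follows; this Python sketch (ours, not the authors' code) accepts an arbitrary intensity function and approximates the integral term by quadrature.

import numpy as np
from scipy.integrate import quad

def log_likelihood(times, intensity, T):
    """Eq. (3). times: sorted event times in [0, T); intensity(t, history)
    evaluates the conditional intensity given events strictly before t."""
    ll = sum(np.log(intensity(t, times[:i])) for i, t in enumerate(times))
    compensator, _ = quad(
        lambda tau: intensity(tau, [t for t in times if t < tau]),
        0.0, T, limit=200)
    return ll - compensator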
Finally, the functional form of the intensity λ*(t) is often designed to capture the phenomena of interest. Some useful functional forms we will use later are [24]:
(i) Poisson process. The intensity is assumed to be independent of the history 𝓗(t), but it can be a time-varying function, i.e., λ*(t) = g(t) ≥ 0;
(ii) Hawkes process. The intensity models a mutual excitation between events (see the code sketch after this list), i.e.,

$$\lambda^*(t) = \mu + \alpha\, \kappa_\omega(t) \star dN(t) = \mu + \alpha \sum_{t_i \in \mathcal{H}(t)} \kappa_\omega(t - t_i), \quad (4)$$

where $\kappa_\omega(t) := \exp(-\omega t)\, \mathbb{I}[t \ge 0]$ is an exponential triggering kernel and μ ≥ 0 is a baseline intensity independent of the history. Here, the occurrence of each historical event increases the intensity by a certain amount determined by the kernel and the weight α ≥ 0, making the intensity history dependent and a stochastic process by itself. We will focus on the exponential kernel in this paper. However, other functional forms for the triggering kernel, such as the log-logistic function, are possible, and our model does not depend on this particular choice; and,
(iii) Survival process. There is only one event for an instantiation of the process, i.e.,

$$\lambda^*(t) = g^*(t)\,(1 - N(t)), \quad (5)$$

where λ*(t) becomes 0 if an event already happened before t.
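The code sketch referenced in item (ii): a direct Python transcription (ours, for illustration) of the Hawkes intensity of Eq. (4) with the exponential kernel.

import numpy as np

def hawkes_intensity(t, history, mu, alpha, omega):
    # Eq. (4): baseline mu plus exponentially decaying excitation from
    # every past event t_i < t
    h = np.asarray(history, dtype=float)
    h = h[h < t]
    return mu + alpha * np.exp(-omega * (t - h)).sum()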
3 Generative Model of Information Diffusion and Network Co-evolution
In this section, we use the above background on temporal point processes to formulate our probabilistic generative model for the joint dynamics of information diffusion and network evolution.
3.1 Event Representation
We model the generation of two types of events: tweet/retweet events, e^r, and link creation events, e^l. Instead of just the time t, we record each event as a triplet

$$e^r \text{ or } e^l := (\,\underbrace{u}_{\text{destination}},\ \underbrace{s}_{\text{source}},\ \underbrace{t}_{\text{time}}\,). \quad (6)$$
For a retweet event, the triplet means that the destination node u retweets at time t a tweet originally posted by source node s. Recording the source node s reflects the real-world scenario that information sources are explicitly acknowledged. Note that the occurrence of event e^r does not mean that u is directly retweeting from or is connected to s. This event can happen when u is retweeting a message by another node u' where the original information source s is acknowledged. Node u will pass on the same source acknowledgement to its followers (e.g., 'I agree @a @b @c @s'). Original tweets posted by node u are allowed in this notation; in this case, the event will simply be e^r = (u, u, t). Given a list of retweet events up to but not including time t, the history 𝓗^r_{us}(t) of retweets by u due to source s is $\mathcal{H}^r_{us}(t) = \{e^r_i = (u_i, s_i, t_i) \mid u_i = u \text{ and } s_i = s\}$. The entire history of retweet events is denoted as $\mathcal{H}^r(t) := \cup_{u,s \in [m]} \mathcal{H}^r_{us}(t)$.
For a link creation event, the triplet means that destination node u creates at time t a link to source node s, i.e., from time t on, node u starts following node s. To ease the exposition, we restrict ourselves to the case where links cannot be deleted and thus each (directed) link is created only once. However, our model can be easily augmented to consider multiple link creations and deletions per node pair, as discussed in Section 8. We denote the link creation history as 𝓗^l(t).
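In an implementation, the event triplets of Eq. (6) map naturally onto a small record type; a minimal Python sketch (field names ours):

from dataclasses import dataclass

@dataclass
class Event:
    u: int       # destination node
    s: int       # source node
    t: float     # event time
    kind: str    # 'r' for retweet, 'l' for link creation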
3.2 Joint Model with Two Interwoven Components
Given m users, we use two sets of counting processes to record the generated events, one for information diffusion and the other for network evolution. More specifically,
3
I. Retweet events are recorded using a matrix $N(t)$ of size $m \times m$ for each fixed time point $t$. The
$(u, s)$-th entry in the matrix, $N_{us}(t) \in \{0\} \cup \mathbb{Z}^+$, counts the number of retweets of $u$ due to
source $s$ up to time $t$. These counting processes are "identity revealing", since they keep track of
the source node that triggers each retweet. This matrix $N(t)$ can be dense, since $N_{us}(t)$ can be
nonzero even when node $u$ does not directly follow $s$. We also let $dN(t) := (dN_{us}(t))_{u,s \in [m]}$.
II. Link events are recorded using an adjacency matrix $A(t)$ of size $m \times m$ for each fixed time point
$t$. The $(u, s)$-th entry in the matrix, $A_{us}(t) \in \{0, 1\}$, indicates whether $u$ is directly following $s$.
That is, $A_{us}(t) = 1$ means the directed link has been created before $t$. For simplicity of exposition,
we do not allow self-links. The matrix $A(t)$ is typically sparse, but the number of nonzero entries
can change over time. We also define $dA(t) := (dA_{us}(t))_{u,s \in [m]}$.
Then the interwoven information diffusion and network evolution processes can be characterized
using their respective intensities $E[dN(t)\,|\,\mathcal{H}^r(t) \cup \mathcal{H}^l(t)] = \Gamma^*(t)\,dt$ and
$E[dA(t)\,|\,\mathcal{H}^r(t) \cup \mathcal{H}^l(t)] = \Lambda^*(t)\,dt$, where $\Gamma^*(t) = (\gamma^*_{us}(t))_{u,s \in [m]}$ and $\Lambda^*(t) = (\lambda^*_{us}(t))_{u,s \in [m]}$. The sign
$*$ means that the intensity matrices will depend on the joint history, $\mathcal{H}^r(t) \cup \mathcal{H}^l(t)$, and hence their
evolution will be coupled. By this coupling, we make: (i) the counting processes for link creation to
be "information driven" and (ii) the evolution of the linking structure to change the information diffusion process. Refer to Appendix B for an illustration of our joint model. In the next two sections,
we will specify the details of these two intensity matrices.
3.3 Information Diffusion Process
We model the intensity, $\Gamma^*(t)$, for retweeting events using a multivariate Hawkes process [12]:
$$\gamma^*_{us}(t) = I[u = s]\,\eta_u + I[u \neq s]\,\beta_s \sum_{v \in F_u(t)} \kappa_{\omega_1}(t) \star (A_{uv}(t)\,dN_{vs}(t)), \qquad (7)$$
where $I[\cdot]$ is the indicator function and $F_u(t) := \{v \in [m] : A_{uv}(t) = 1\}$ is the current set of followees of $u$. The term $\eta_u \geq 0$ is the intensity of original
tweets by a user $u$ on his own initiative,
becoming the source of a cascade, and the term $\beta_s \sum_{v \in F_u(t)} \kappa_{\omega_1}(t) \star (A_{uv}(t)\,dN_{vs}(t))$ models the
propagation of peer influence over the network, where the triggering kernel $\kappa_{\omega_1}(t)$ models the decay
of peer influence over time.
Note that the retweet intensity matrix $\Gamma^*(t)$ is by itself a stochastic process that depends on the time-varying network topology, the non-zero entries in $A(t)$, whose growth is controlled by the network
evolution process in Section 3.4. Hence the model design captures the influence of the network
topology and each source's influence, $\beta_s$, on the information diffusion process. More specifically,
to compute $\gamma^*_{us}(t)$, one first finds the current set $F_u(t)$ of followees of $u$, and then aggregates
the retweets of these followees that are due to source s. Note that these followees may or may
not directly follow source s. Then, the more frequently node u is exposed to retweets of tweets
originated from source s via her followees, the more likely she will also retweet a tweet originated
from source s. Once node u retweets due to source s, the corresponding Nus (t) will be incremented,
and this in turn will increase the likelihood of triggering retweets due to source s among the followers
of u. Thus, the source does not simply broadcast the message to nodes directly following her but
her influence propagates through the network even to those nodes that do not directly follow her.
Finally, this information diffusion model allows a node to repeatedly generate events in a cascade,
and is very different from the independent cascade or linear threshold models [25] which allow at
most one event per node per cascade.
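As a concrete illustration, the following sketch (our own, with hypothetical data structures: a dense adjacency matrix `A` and a list of past retweet triplets) evaluates $\gamma^*_{us}(t)$ from Eq. (7) by direct summation under the exponential kernel; the recursive O(1) update used in Section 4 is far more efficient in practice:

```python
import numpy as np

def retweet_intensity(u, s, t, A, retweets, eta, beta, omega1=1.0):
    """Direct evaluation of Eq. (7). A[u, v] = 1 if u follows v; `retweets` is a
    list of past triplets (v, source, t_i); eta, beta are per-node parameters."""
    if u == s:
        return eta[u]                      # original tweets by u itself
    excitation = sum(
        np.exp(-omega1 * (t - ti))         # exponential kernel kappa_{omega_1}
        for (v, src, ti) in retweets
        if src == s and ti < t and A[u, v] == 1   # retweets of s by u's followees
    )
    return beta[s] * excitation
```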
3.4 Network Evolution Process
We model the intensity, $\Lambda^*(t)$, for link creation using a combination of survival and Hawkes processes:
$$\lambda^*_{us}(t) = (1 - A_{us}(t))\,(\mu_u + \alpha_u\,\kappa_{\omega_2}(t) \star dN_{us}(t)), \qquad (8)$$
where the term $1 - A_{us}(t)$ effectively ensures a link is created only once, and after that, the corresponding intensity is set to zero. The term $\mu_u \geq 0$ denotes a baseline intensity, which models when a
node $u$ decides to follow a source $s$ spontaneously at her own initiative. The term $\alpha_u\,\kappa_{\omega_2}(t) \star dN_{us}(t)$
corresponds to the retweets of node $u$ due to tweets originally published by source $s$, where the triggering kernel $\kappa_{\omega_2}(t)$ models the decay of interests over time. Here, the higher the corresponding
retweet intensity, the more likely $u$ will find information by source $s$ useful and will create a direct
link to $s$.
The link creation intensity $\Lambda^*(t)$ is also a stochastic process by itself, which depends on the retweet
events, and is driven by the retweet count increments $dN_{us}(t)$. It captures the influence of retweets
on the link creation, and closes the loop of mutual influence between information diffusion and
network topology.
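A companion sketch for Eq. (8), under the same hypothetical data structures as above (adjacency matrix and retweet triplets), makes the survival factor explicit:

```python
import numpy as np

def link_intensity(u, s, t, A, retweets, mu, alpha, omega2=0.5):
    """Direct evaluation of Eq. (8): the factor (1 - A_us) clamps the intensity
    to zero once the link exists; otherwise the baseline mu_u is excited by
    u's own past retweets that trace back to source s."""
    if A[u, s] == 1:
        return 0.0
    excitation = sum(
        np.exp(-omega2 * (t - ti))
        for (v, src, ti) in retweets
        if v == u and src == s and ti < t
    )
    return mu[u] + alpha[u] * excitation
```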
Note that creating a link is more than just adding a path or allowing information sources to take
shortcuts during diffusion. The network evolution makes fundamental changes to the diffusion
dynamics and stationary distribution of the diffusion process in Section 3.3. As shown in [14],
given a fixed network structure $A$, the expected retweet intensity $\mu_s(t)$ at time $t$ due to source
$s$ will depend on the network structure in a highly nonlinear fashion, i.e., $\mu_s(t) := E[\Gamma^*_{\cdot s}(t)] =
\left(e^{(A - \omega_1 I)t} + \omega_1 (A - \omega_1 I)^{-1}\left(e^{(A - \omega_1 I)t} - I\right)\right)\eta_s$, where $\eta_s \in \mathbb{R}^m$ has a single nonzero entry
with value $\eta_s$ and $e^{(A - \omega_1 I)t}$ is the matrix exponential. When $t \to \infty$, the stationary intensity
$\bar{\mu}_s = (I - A/\omega_1)^{-1}\eta_s$ is also nonlinearly related to the network structure. Thus given two network
structures $A(t)$ and $A(t')$ at two points in time, which differ by a few edges, the effect of
these edges on the information diffusion is not just an additive relation. Depending on how
these newly created edges modify the eigen-structure of the sparse matrix $A(t)$, their effect can be
drastic to the information diffusion.
Remark 1. In our model, each user is exposed to information through a time-varying set of neighbors. By doing so, we couple information diffusion with the network evolution, increasing the
practical application of our model to real-network datasets. The particular definition of exposure
(e.g., a retweet's neighbor) will depend on the type of historical information that is available. Remarkably, the flexibility of our model allows for different types of diffusion events, which we can
broadly classify into two categories. In the first category, events correspond to the times when an
information cascade hits a person, for example, through a retweet from one of her neighbors, but
she does not explicitly like or forward the associated post. In the second category, the person decides
to explicitly like or forward the associated post, and events correspond to the times when she does
so. Intuitively, events in the latter category are more prone to trigger new connections but are also
less frequent; they are therefore mostly suited to large event datasets, for example those generated
synthetically. In contrast, events in the former category are less likely to inspire new links
but are found in abundance, and are therefore well suited to real-world sparse data. Consequently, in
the synthetic experiments we used the latter and in the real one we used the former. It is noteworthy that
Eq. (8) is written based on the latter category, but Fig. 7 in the appendix is drawn based on the former.
4 Efficient Simulation of Coevolutionary Dynamics
We can simulate samples (link creations, tweets and retweets) from our model by adapting Ogata's
thinning algorithm [26], originally designed for multidimensional Hawkes processes. However, a
naive implementation of Ogata's algorithm would scale poorly, i.e., for each sample, we would
need to re-evaluate $\Gamma^*(t)$ and $\Lambda^*(t)$; thus, to draw $n$ samples, we would need to perform $O(m^2 n^2)$
operations, where $m$ is the number of nodes.
We designed a sampling procedure that is especially well fitted to the structure of our model. The
algorithm is based on the following key idea: if we consider each intensity function in $\Gamma^*(t)$ and
$\Lambda^*(t)$ as a separate Hawkes process and draw a sample from each, it is easy to show that the minimum among all these samples is a valid sample from the model [12]. However, by drawing samples
from all intensities, the computational complexity would not improve. When the network
is sparse, whenever we sample a new node (or link) event from the model, only a small number
of intensity functions, in the local neighborhood of the node (or the link), will change. As a consequence, we can reuse most of the samples from the intensity functions for the next new sample
and find which intensity functions we need to change in $O(\log m)$ operations, using a heap. Finally, we exploit the properties of the exponential function to update individual intensities for each
new sample in $O(1)$: let $t_i$ and $t_{i+1}$ be two consecutive events; then, we can compute $\lambda^*(t_{i+1})$ as
$(\lambda^*(t_i) - \mu)\exp(-\omega(t_{i+1} - t_i)) + \mu$ without the need to compare all previous events.
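This O(1) update is the core of the speedup. A minimal sketch of the recursion (ours; the jump size `alpha` and the usage values are illustrative):

```python
import numpy as np

def decay_intensity(lam_prev, t_prev, t_next, mu, omega):
    # lambda*(t_{i+1}) = (lambda*(t_i) - mu) * exp(-omega * (t_{i+1} - t_i)) + mu
    return (lam_prev - mu) * np.exp(-omega * (t_next - t_prev)) + mu

# Usage within a thinning loop: after accepting an event at t_prev, add its
# excitation jump alpha to the intensity, then decay to the next time in O(1).
lam = 0.5                                                       # baseline mu
lam = decay_intensity(lam + 0.8, 1.0, 2.5, mu=0.5, omega=1.0)   # event at t = 1.0
print(lam)
```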
The complete simulation algorithm is summarized in Algorithm 2 in Appendix C. By using Algorithm 2, we reduce the complexity from $O(n^2 m^2)$ to $O(nd\log m)$, where $d$ is the maximum number
of followees per node. That means our algorithm scales logarithmically with the number of nodes
and linearly with the number of edges at any point in time during the simulation. We also note that
the events for link creations, tweets and retweets are generated in a temporally intertwined and interleaving fashion by Algorithm 2. This is because every new retweet event will modify the intensity
for link creation, and after each link creation we also need to update the retweet intensities.

Figure 1: Coevolutionary dynamics for synthetic data. a) Spike trains of link and retweet events. b)
Link and retweet intensities. c) Cross covariance of link and retweet intensities.

Figure 2: Degree distributions when network sparsity level reaches 0.001 for fixed $\beta = 0.1$. Panels
(a)-(d) show the data with power-law and Poisson fits for $\alpha = 0$, $0.001$, $0.1$, and $0.8$.
5 Efficient Parameter Estimation from Coevolutionary Events
Given a collection of retweet events $\mathcal{E} = \{e^r_i\}$ and link creation events $\mathcal{A} = \{e^l_i\}$ recorded within
a time window $[0, T)$, we can easily estimate the parameters needed in our model using maximum
likelihood estimation. Here, we compute the joint log-likelihood $\mathcal{L}(\{\eta_u\}, \{\mu_u\}, \{\alpha_u\}, \{\beta_s\})$ of
these events using Eq. (3), i.e.,
$$\underbrace{\sum_{e^r_i \in \mathcal{E}} \log \gamma^*_{u_i s_i}(t_i) - \sum_{u,s \in [m]} \int_0^T \gamma^*_{us}(\tau)\,d\tau}_{\text{tweet / retweet}} \;+\; \underbrace{\sum_{e^l_i \in \mathcal{A}} \log \lambda^*_{u_i s_i}(t_i) - \sum_{u,s \in [m]} \int_0^T \lambda^*_{us}(\tau)\,d\tau}_{\text{links}}. \qquad (9)$$
For the terms corresponding to retweets, the log term only sums over the actual observed events,
but the integral term sums over all possible combinations of destination and source pairs,
even if there is no event between a particular pair of destination and source. For such pairs with
no observed events, the corresponding counting processes have essentially survived the observation
window $[0, T)$, and the term $-\int_0^T \gamma^*_{us}(\tau)\,d\tau$ simply corresponds to the log survival probability.
Terms corresponding to links have a similar structure to those for retweets.
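For intuition, here is a self-contained sketch of such a likelihood for a single univariate Hawkes intensity with exponential kernel and constant baseline (an assumption for readability; the paper's objective sums analogous terms, with the structured intensities of Eqs. (7)-(8), over all (u, s) pairs). It uses the standard recursive decay trick and the closed-form compensator:

```python
import numpy as np

def hawkes_loglik(events, T, mu, alpha, omega):
    """sum_i log lambda*(t_i) - int_0^T lambda*(tau) dtau for
    lambda*(t) = mu + alpha * sum_{t_i < t} exp(-omega * (t - t_i))."""
    events = np.sort(np.asarray(events, dtype=float))
    loglik, excite, t_prev = 0.0, 0.0, 0.0
    for t in events:
        excite *= np.exp(-omega * (t - t_prev))  # decay accumulated excitation
        loglik += np.log(mu + excite)            # log-intensity at the event
        excite += alpha                          # jump from the new event
        t_prev = t
    # Closed-form integral (compensator) for the exponential kernel:
    compensator = mu * T + (alpha / omega) * np.sum(1 - np.exp(-omega * (T - events)))
    return loglik - compensator

print(hawkes_loglik([0.5, 1.2, 1.3, 4.0], T=5.0, mu=0.4, alpha=0.6, omega=1.0))
```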
Since $\gamma^*_{us}(t)$ and $\lambda^*_{us}(t)$ are linear in the parameters $(\eta_u, \beta_s)$ and $(\mu_u, \alpha_u)$ respectively, $\log(\gamma^*_{us}(t))$
and $\log(\lambda^*_{us}(t))$ are concave functions in these parameters. Integration of $\gamma^*_{us}(t)$ and $\lambda^*_{us}(t)$ still results
in linear functions of the parameters. Thus the overall objective in Eq. (9) is concave, and the global
optimum can be found by many algorithms. In our experiments, we adapt the efficient algorithm
developed in previous work [18, 19]. Furthermore, the optimization problem decomposes into $m$
independent problems, one per node $u$, and can be readily parallelized.
6 Properties of Simulated Co-evolution, Networks and Cascades*
In this section, we perform an empirical investigation of the properties of the networks and information cascades generated by our model. In particular, we show that our model can generate coevolutionary retweet and link dynamics and a wide spectrum of static and temporal network patterns
and information cascades. Appendix D contains additional simulation results and visualizations.
Appendix E contains an evaluation of our model estimation method in synthetic data.
Retweet and link coevolution. Figures 1(a,b) visualize the retweet and link events, aggregated
across different sources, and the corresponding intensities for one node and one realization, picked
at random. Here, it is already apparent that retweets and link creations are clustered in time and often
follow each other. Further, Figure 1(c) shows the cross-covariance of the retweet and link creation
intensity, computed across multiple realizations, for the same node, i.e., if $f(t)$ and $g(t)$ are two
intensities, the cross-covariance is a function of the time lag $\tau$ defined as $h(\tau) = \int f(t + \tau)\,g(t)\,dt$.
It can be seen that the cross-covariance has its peak around 0, i.e., retweets and link creations are
highly correlated and co-evolve over time. For ease of exposition, we illustrated co-evolution using
one node; however, we found consistent results across nodes.

Figure 3: Diameter for network sparsity 0.001. Panels (a) and (b) show the diameter against sparsity
over time for fixed $\alpha = 0.1$ (varying $\beta$), and for fixed $\beta = 0.1$ (varying $\alpha$), respectively.

Figure 4: Distribution of cascade structure, size and depth for different $\alpha$ values and fixed $\beta = 0.2$.
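An empirical version of the cross-covariance $h(\tau)$ defined above, for two intensities sampled on a shared regular time grid, might look like the following sketch (demeaning the series is our choice here, not specified in the text):

```python
import numpy as np

def cross_covariance(f, g, lags):
    """h(k) ~ sum_t f(t + k) * g(t) on a shared regular grid, after demeaning."""
    f = np.asarray(f, dtype=float) - np.mean(f)
    g = np.asarray(g, dtype=float) - np.mean(g)
    n = len(f)
    out = []
    for k in lags:
        if k >= 0:
            out.append(float(np.dot(f[k:], g[:n - k])))
        else:
            out.append(float(np.dot(g[-k:], f[:n + k])))
    return np.array(out)

h = cross_covariance(np.random.rand(500), np.random.rand(500), range(-50, 51))
```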
Degree distribution. Empirical studies have shown that the degree distribution of online social
networks and microblogging sites follow a power law [9, 1], and argued that it is a consequence of
the rich get richer phenomenon. The degree distribution of a network is a power law if the expected
number of nodes $m_d$ with degree $d$ is given by $m_d \propto d^{-\gamma}$, where $\gamma > 0$. Intuitively, the higher the
values of the parameters $\alpha$ and $\beta$, the more closely the resulting degree distribution follows a power law;
the lower their values, the closer the distribution is to an Erdos-Renyi random graph [27]. Figure 2
confirms this intuition by showing the degree distribution for different values of $\alpha$.
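A crude way to check the behavior $m_d \propto d^{-\gamma}$ on simulated degrees is a least-squares fit on the log-log histogram (a rough diagnostic only; rigorous power-law estimation would use maximum likelihood):

```python
import numpy as np

def powerlaw_exponent(degrees):
    # Least-squares slope of log(count) vs. log(degree); returns gamma.
    degrees = np.asarray(degrees)
    d, counts = np.unique(degrees[degrees > 0], return_counts=True)
    slope, _ = np.polyfit(np.log(d), np.log(counts), 1)
    return -slope

# Synthetic check: zipf(2.5) degrees should give an estimate near 2.5 (crudely).
print(powerlaw_exponent(np.random.zipf(2.5, size=10000)))
```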
Small (shrinking) diameter. There is empirical evidence that the diameter of online social networks
and microblogging sites exhibit relatively small diameter and shrinks (or flattens) as the network
grows [28, 9, 22]. Figures 3(a-b) show the diameter on the largest connected component (LCC)
against the sparsity of the network over time for different values of $\alpha$ and $\beta$. Although at the
beginning, there is a short increase in the diameter due to the merge of small connected components,
the diameter decreases as the network evolves. Here, nodes arrive to the network when they follow
(or are followed by) a node in the largest connected component.
Cascade patterns. Our model can produce the most commonly occurring cascades structures as
well as heavy-tailed cascade size and depth distributions, as observed in historical Twitter data [23].
Figure 4 summarizes the results. The higher the $\alpha$ value, the shallower and wider the cascades.
7 Experiments on Real Dataset
In this section, we validate our model using a large Twitter dataset containing nearly 550,000 tweet,
retweet and link events from more than 280,000 users [3]. We will show that our model can capture
the co-evolutionary dynamics and, by doing so, it predicts retweet and link creation events more
accurately than several alternatives. Appendix F contains detailed information about the dataset and
additional experiments.
Retweet and link coevolution. Figures 5(a, b) visualize the retweet and link events, aggregated
across different sources, and the corresponding intensities given by our trained model for one node,
picked at random. Here, it is already apparent that retweets and link creations are clustered in time
and often follow each other, and our fitted model intensities successfully track such behavior. Further, Figure 5(c) compares the cross-covariance between the empirical retweet and link creation
intensities and between the retweet and link creation intensities given by our trained model, computed across multiple realizations, for the same node. The similarity between both cross-covariances
is striking and both have their peak around 0, i.e., retweets and link creations are highly correlated and
co-evolve over time. For ease of exposition, as in Section 6, we illustrated co-evolution using one
node, however, we found consistent results across nodes (see Appendix F).
Link prediction. We use our model to predict the identity of the source for each test link event,
given the historical (link and retweet) events before the time of the prediction, and compare its
performance with two state-of-the-art methods, denoted as TRF [3] and WENG [5]. TRF measures

* Implementation codes are available at https://github.com/farajtabar/Coevolution
Figure 5: Coevolutionary dynamics for real data. a) Spike trains of link and retweet events. b)
Estimated link and retweet intensities. c) Empirical and estimated cross covariance of link and
retweet intensities.
Figure 6: Prediction performance in the Twitter dataset by means of average rank (AR) and success
probability that the true (test) events rank among the top-1 events (Top-1). Panels: (a) Links: AR;
(b) Links: Top-1; (c) Activity: AR; (d) Activity: Top-1.
the probability of creating a link from a source at a given time by simply computing the proportion
of new links created from the source with respect to the total number of links created up to the given
time. WENG considers different link creation strategies and makes a prediction by combining them.
We evaluate the performance by computing the probability of all potential links using different
methods, and then compute (i) the average rank of all true (test) events (AvgRank) and, (ii) the
success probability (SP) that the true (test) events rank among the top-1 potential events at each test
time (Top-1). We summarize the results in Fig. 6(a-b), where we consider an increasing number of
training retweet/tweet events. Our model outperforms TRF and WENG consistently. For example,
for $8 \times 10^4$ training events, our model achieves an SP 2.5 times larger than that of TRF and WENG.
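The two evaluation measures can be computed as in the following sketch, assuming (as an illustration) that for each test event the model can score every candidate source:

```python
import numpy as np

def avg_rank_and_top1(score_lists, true_indices):
    """AvgRank and Top-1 success probability: for each test event, rank all
    candidates by model score (rank 1 = highest) and locate the true one."""
    ranks = []
    for scores, true_idx in zip(score_lists, true_indices):
        order = np.argsort(-np.asarray(scores, dtype=float))
        ranks.append(int(np.where(order == true_idx)[0][0]) + 1)
    ranks = np.array(ranks)
    return ranks.mean(), float(np.mean(ranks == 1))

avg_rank, top1 = avg_rank_and_top1([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]], [1, 2])
print(avg_rank, top1)   # 2.0 0.5 -> the true events rank 1st and 3rd
```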
Activity prediction. We use our model to predict the identity of the node that is going to generate
each test diffusion event, given the historical events before the time of the prediction, and compare
its performance with a baseline consisting of a Hawkes process without network evolution. For
the Hawkes baseline, we take a snapshot of the network right before the prediction time, and use
all historical retweeting events to fit the model. Here, we evaluate the performance via the
same two measures as in the link prediction task and summarize the results in Figure 6(c-d) against
an increasing number of training events. The results show that, by modeling the co-evolutionary
dynamics, our model performs significantly better than the baseline.
8 Discussion
We proposed a joint continuous-time model of information diffusion and network evolution, which
can capture the coevolutionary dynamics, mimic the most common static and temporal network
patterns observed in real-world networks and information diffusion data, and predict the network
evolution and information diffusion more accurately than previous state-of-the-art methods. Using point
processes to model intertwined events in information networks opens up many interesting future
modeling work. Our current model is just a show-case of a rich set of possibilities offered by a point
process framework, which has been rarely explored before in large-scale social network modeling. For example, we can generalize our model to support link deletion by introducing an intensity
matrix $\tilde{\Lambda}^*(t)$ modeling link deletions as survival processes, i.e., $\tilde{\Lambda}^*(t) = (\tilde{g}^*_{us}(t)\,A_{us}(t))_{u,s \in [m]}$,
and then consider the counting process $A(t)$ associated with the adjacency matrix to evolve as
$E[dA(t)\,|\,\mathcal{H}^r(t) \cup \mathcal{H}^l(t)] = \Lambda^*(t)\,dt - \tilde{\Lambda}^*(t)\,dt$. We can also consider the number of nodes varying over time. Furthermore, a large and diverse range of point processes can also be used in the
framework without changing the efficiency of the simulation and the convexity of the parameter
estimation, e.g., condition the intensity on additional external features, such as node attributes.
Acknowledgments
The authors would like to thank Demetris Antoniades and Constantine Dovrolis for providing them
with the dataset. The research was supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR
N00014-15-1-2340, NSF IIS-1218749, NSF CAREER IIS-1350983.
References
[1] H. Kwak, C. Lee, H. Park, and others. What is Twitter, a social network or a news media? WWW, 2010.
[2] J. Cheng, L. Adamic, P. A. Dow, and others. Can cascades be predicted? WWW, 2014.
[3] D. Antoniades and C. Dovrolis. Co-evolutionary dynamics in social networks: A case study of twitter.
arXiv:1309.6001, 2013.
[4] S. Myers and J. Leskovec. The bursty dynamics of the twitter information network. WWW, 2014.
[5] L. Weng, J. Ratkiewicz, N. Perra, B. Goncalves, C. Castillo, F. Bonchi, R. Schifanella, F. Menczer, and
A. Flammini. The role of information diffusion in the evolution of social networks. KDD, 2013.
[6] N. Du, L. Song, M. Gomez-Rodriguez, and H. Zha. Scalable influence estimation in continuous-time
diffusion networks. NIPS, 2013.
[7] M. Gomez-Rodriguez, D. Balduzzi, and B. Sch?olkopf. Uncovering the temporal dynamics of diffusion
networks. ICML, 2011.
[8] M. Gomez-Rodriguez, J. Leskovec, A. Krause. Inferring networks of diffusion and influence. KDD, 2010.
[9] D. Chakrabarti, Y. Zhan, and C. Faloutsos. R-mat: A recursive model for graph mining. Computer Science
Department, page 541, 2004.
[10] J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and J. Leskovec. Kronecker graphs: An approach
to modeling networks. JMLR, 2010.
[11] J. Leskovec, L. Backstrom, R. Kumar, and others. Microscopic evolution of social networks. KDD, 2008.
[12] T.J. Liniger. Multivariate Hawkes Processes. PhD thesis, ETHZ, 2009.
[13] C. Blundell, J. Beck, K. Heller. Modelling reciprocating relationships with hawkes processes. NIPS, 2012.
[14] M. Farajtabar, N. Du, M. Gomez-Rodriguez, I. Valera, H. Zha, and L. Song. Shaping social activity by
incentivizing users. NIPS, 2014.
[15] T. Iwata, A. Shah, and Z. Ghahramani. Discovering latent influence in online social activities via shared
cascade poisson processes. KDD, 2013.
[16] S. Linderman and R. Adams. Discovering latent network structure in point process data. ICML, 2014.
[17] I. Valera, M. Gomez-Rodriguez, Modeling adoption of competing products and conventions in social
media. ICDM, 2015.
[18] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multidimensional hawkes processes. AISTATS, 2013.
[19] K. Zhou, H. Zha, and L. Song. Learning triggering kernels for multi-dimensional hawkes processes.
ICML, 2013.
[20] D. Hunter, P. Smyth, D. Q. Vu, and others. Dynamic egocentric models for citation networks. ICML, 2011.
[21] D. Q. Vu, D. Hunter, P. Smyth, and A. Asuncion. Continuous-time regression models for longitudinal
networks. NIPS, 2011.
[22] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over time: densification laws, shrinking diameters
and possible explanations. KDD, 2005.
[23] S. Goel, D. J. Watts, and D. G. Goldstein. The structure of online diffusion networks. EC, 2012.
[24] O. Aalen, O. Borgan, and H. Gjessing. Survival and event history analysis: a process point of view, 2008.
[25] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network.
KDD, 2003.
[26] Y. Ogata. On Lewis' simulation method for point processes. IEEE TIT, 27(1):23-31, 1981.
[27] P. Erdos and A. Rényi. On the evolution of random graphs. Hungar. Acad. Sci, 5:17-61, 1960.
[28] L. Backstrom, P. Boldi, M. Rosa, J. Ugander, and S. Vigna. Four degrees of separation. WebSci, 2012.
[29] M. Granovetter. The strength of weak ties. American Journal of Sociology, pages 1360-1380, 1973.
[30] D. Romero and J. Kleinberg. The directed closure process in hybrid social-information networks, with an
analysis of link formation on twitter. ICWSM, 2010.
[31] J. Ugander, L. Backstrom, and J. Kleinberg. Subgraph frequencies: Mapping the empirical and extremal
geography of large graph collections. WWW, 2013.
[32] D.J. Watts and S.H. Strogatz. Collective dynamics of small-world networks. Nature, 1998.
[33] T. Gross and B. Blasius. Adaptive coevolutionary networks: a review. Royal Society Interface, 2008.
[34] P. Singer, C. Wagner, and M. Strohmaier. Factors influencing the co-evolution of social and content
networks in online social media. Modeling and Mining Ubiquitous Social Media, pages 40-59. Springer,
2012.
5,252 | 5,755 | Linear Response Methods for Accurate Covariance
Estimates from Mean Field Variational Bayes
Ryan Giordano
UC Berkeley
rgiordano@berkeley.edu
Tamara Broderick
MIT
tbroderick@csail.mit.edu
Michael Jordan
UC Berkeley
jordan@cs.berkeley.edu
Abstract
Mean field variational Bayes (MFVB) is a popular posterior approximation
method due to its fast runtime on large-scale data sets. However, a well-known major failing of MFVB is that it underestimates the uncertainty of model variables
(sometimes severely) and provides no information about model variable covariance. We generalize linear response methods from statistical physics to deliver
accurate uncertainty estimates for model variables, both for individual variables
and coherently across variables. We call our method linear response variational
Bayes (LRVB). When the MFVB posterior approximation is in the exponential
family, LRVB has a simple, analytic form, even for non-conjugate models. Indeed, we make no assumptions about the form of the true posterior. We demonstrate the accuracy and scalability of our method on a range of models for both
simulated and real data.
1 Introduction
With increasingly efficient data collection methods, scientists are interested in quickly analyzing
ever larger data sets. In particular, the promise of these large data sets is not simply to fit old models
but instead to learn more nuanced patterns from data than has been possible in the past. In theory,
the Bayesian paradigm yields exactly these desiderata. Hierarchical modeling allows practitioners
to capture complex relationships between variables of interest. Moreover, Bayesian analysis allows
practitioners to quantify the uncertainty in any model estimates, and to do so coherently across all
of the model variables.
Mean field variational Bayes (MFVB), a method for approximating a Bayesian posterior distribution, has grown in popularity due to its fast runtime on large-scale data sets [1-3]. But a well-known
major failing of MFVB is that it gives underestimates of the uncertainty of model variables that
can be arbitrarily bad, even when approximating a simple multivariate Gaussian distribution [4-6]. Also, MFVB provides no information about how the uncertainties in different model variables
interact [5-8].
By generalizing linear response methods from statistical physics [9-12] to exponential family variational posteriors, we develop a methodology that augments MFVB to deliver accurate uncertainty
estimates for model variables, both for individual variables and coherently across variables. In
particular, as we elaborate in Section 2, when the approximating posterior in MFVB is in the exponential family, MFVB defines a fixed-point equation in the means of the approximating posterior,
and our approach yields a covariance estimate by perturbing this fixed point. We call our method
linear response variational Bayes (LRVB).
We provide a simple, intuitive formula for calculating the linear response correction by solving a
linear system based on the MFVB solution (Section 2.2). We show how the sparsity of this system
for many common statistical models may be exploited for scalable computation (Section 2.3). We
demonstrate the wide applicability of LRVB by working through a diverse set of models to show that
the LRVB covariance estimates are nearly identical to those produced by a Markov Chain Monte
Carlo (MCMC) sampler, even when MFVB variance is dramatically underestimated (Section 3).
Finally, we focus in more depth on models for finite mixtures of multivariate Gaussians (Section 3.3),
which have historically been a sticking point for MFVB covariance estimates [5, 6]. We show that
LRVB can give accurate covariance estimates orders of magnitude faster than MCMC (Section 3.3).
We demonstrate both theoretically and empirically that, for this Gaussian mixture model, LRVB
scales linearly in the number of data points and approximately cubically in the dimension of the
parameter space (Section 3.4).
Previous Work. Linear response methods originated in the statistical physics literature [10-13].
These methods have been applied to find new learning algorithms for Boltzmann machines [13],
covariance estimates for discrete factor graphs [14], and independent component analysis [15]. [16]
states that linear response methods could be applied to general exponential family models but works
out details only for Boltzmann machines. [10], which is closest in spirit to the present work, derives
general linear response corrections to variational approximations; indeed, the authors go further to
formulate linear response as the first term in a functional Taylor expansion to calculate full pairwise
joint marginals. However, it may not be obvious to the practitioner how to apply the general formulas
of [10]. Our contributions in the present work are (1) the provision of concrete, straightforward
formulas for covariance correction that are fast and easy to compute, (2) demonstrations of the
success of our method on a wide range of new models, and (3) an accompanying suite of code.
2 Linear response covariance estimation

2.1 Variational Inference
Suppose we observe $N$ data points, denoted by the $N$-long column vector $x$, and denote our unobserved model parameters by $\theta$. Here, $\theta$ is a column vector residing in some space $\Theta$; it has $J$
subgroups and total dimension $D$. Our model is specified by a distribution of the observed data
given the model parameters (the likelihood $p(x|\theta)$) and a prior distributional belief on the model
parameters $p(\theta)$. Bayes' Theorem yields the posterior $p(\theta|x)$.
Mean-field variational Bayes (MFVB) approximates $p(\theta|x)$ by a factorized distribution of the form
$q(\theta) = \prod_{j=1}^{J} q(\theta_j)$. $q$ is chosen so that the Kullback-Leibler divergence $KL(q||p)$ between $q$ and $p$
is minimized. Equivalently, $q$ is chosen so that $E := L + S$, for $L := \mathbb{E}_q[\log p(\theta|x)]$ (the expected
log posterior) and $S := -\mathbb{E}_q[\log q(\theta)]$ (the entropy of the variational distribution), is maximized:
$$q^* := \arg\min_q KL(q||p) = \arg\min_q \mathbb{E}_q[\log q(\theta) - \log p(\theta|x)] = \arg\max_q E. \qquad (1)$$
Up to a constant in $\theta$, the objective $E$ is sometimes called the "evidence lower bound", or the ELBO
[5]. In what follows, we further assume that our variational distribution, $q(\theta)$, is in the exponential
family with natural parameter $\eta$ and log partition function $A$: $\log q(\theta|\eta) = \eta^T\theta - A(\eta)$ (expressed
with respect to some base measure in $\Theta$). We assume that $p(\theta|x)$ is expressed with respect to the
same base measure in $\Theta$ as for $q$. Below, we will make only mild regularity assumptions about the
true posterior $p(\theta|x)$ and no assumptions about its form.

If we assume additionally that the parameters $\eta^*$ at the optimum $q^*(\theta) = q(\theta|\eta^*)$ are in the interior
of the feasible space, then $q(\theta|\eta)$ may instead be described by the mean parameterization: $m := \mathbb{E}_q\theta$
with $m^* := \mathbb{E}_{q^*}\theta$. Thus, the objective $E$ can be expressed as a function of $m$, and the first-order
condition for the optimality of $q^*$ becomes the fixed point equation
$$\left.\frac{\partial E}{\partial m}\right|_{m=m^*} = 0 \iff M(m^*) = m^*, \quad \text{for } M(m) := \frac{\partial E}{\partial m} + m. \qquad (2)$$
2.2 Linear Response
Let $V$ denote the covariance matrix of $\theta$ under the variational distribution $q^*(\theta)$, and let $\Sigma$ denote
the covariance matrix of $\theta$ under the true posterior, $p(\theta|x)$:
$$V := \mathrm{Cov}_{q^*}\theta, \qquad \Sigma := \mathrm{Cov}_p\theta.$$
In MFVB, $V$ may be a poor estimator of $\Sigma$, even when $m^* \approx \mathbb{E}_p\theta$, i.e., when the marginal estimated
means match well [5-7]. Our goal is to use the MFVB solution and linear response methods to
construct an improved estimator for $\Sigma$. We will focus on the covariance of the natural sufficient
statistic $\theta$, though the covariance of functions of $\theta$ can be estimated similarly (see Appendix A).
The essential idea of linear response is to perturb the first-order condition $M(m^*) = m^*$ around its
optimum. In particular, define the distribution $p_t(\theta|x)$ as a log-linear perturbation of the posterior:
$$\log p_t(\theta|x) := \log p(\theta|x) + t^T\theta - C(t), \qquad (3)$$
where $C(t)$ is a constant in $\theta$. We assume that $p_t(\theta|x)$ is a well-defined distribution for any $t$ in an
open ball around 0. Since $C(t)$ normalizes $p_t(\theta|x)$, it is in fact the cumulant-generating function
of $p(\theta|x)$, so the derivatives of $C(t)$ evaluated at $t = 0$ give the cumulants of $\theta$. To see why this
perturbation may be useful, recall that the second cumulant of a distribution is the covariance matrix,
our desired estimand:
$$\Sigma = \mathrm{Cov}_p(\theta) = \left.\frac{d^2 C(t)}{dt\,dt^T}\right|_{t=0} = \left.\frac{d}{dt^T}\,\mathbb{E}_{p_t}\theta\right|_{t=0}.$$
The practical success of MFVB relies on the fact that its estimates of the mean are often good in
practice. So we assume that $m^*_t \approx \mathbb{E}_{p_t}\theta$, where $m^*_t$ is the mean parameter characterizing $q^*_t$ and
$q^*_t$ is the MFVB approximation to $p_t$. (We examine this assumption further in Section 3.) Taking
derivatives with respect to $t$ on both sides of this mean approximation and setting $t = 0$ yields
$$\Sigma = \mathrm{Cov}_p(\theta) \approx \left.\frac{dm^*_t}{dt^T}\right|_{t=0} =: \hat{\Sigma}, \qquad (4)$$
where we call $\hat{\Sigma}$ the linear response variational Bayes (LRVB) estimate of the posterior covariance
of $\theta$.
We next show that there exists a simple formula for $\hat{\Sigma}$. Recalling the form of the KL divergence
(see Eq. (1)), we have that $-KL(q||p_t) = E + t^T m =: E_t$. Then by Eq. (2), we have $m^*_t = M_t(m^*_t)$
for $M_t(m) := M(m) + t$. It follows from the chain rule that
$$\frac{dm^*_t}{dt} = \left.\frac{\partial M_t}{\partial m^T}\right|_{m=m^*_t}\frac{dm^*_t}{dt} + \frac{\partial M_t}{\partial t} = \left.\frac{\partial M}{\partial m^T}\right|_{m=m^*_t}\frac{dm^*_t}{dt} + I, \qquad (5)$$
where $I$ is the identity matrix. If we assume that we are at a strict local optimum and so can invert
the Hessian of $E$, then evaluating at $t = 0$ yields
$$\hat{\Sigma} = \left.\frac{dm^*_t}{dt^T}\right|_{t=0} = \left(I - \frac{\partial M}{\partial m}\right)^{-1} = \left(I - \left(\frac{\partial^2 E}{\partial m\,\partial m^T} + I\right)\right)^{-1} = -\left(\frac{\partial^2 E}{\partial m\,\partial m^T}\right)^{-1}, \qquad (6)$$
where we have used the form for $M$ in Eq. (2). So the LRVB estimator $\hat{\Sigma}$ is the negative inverse
Hessian of the optimization objective, $E$, as a function of the mean parameters. It follows from
Eq. (6) that $\hat{\Sigma}$ is both symmetric and positive definite when the variational distribution $q^*$ is at least
a local maximum of $E$.
We can further simplify Eq. (6) by using the exponential family form of the variational approximating distribution $q$. For $q$ in exponential family form as above, the negative entropy $-S$ is dual to the
log partition function $A$ [17], so $S = -\eta^T m + A(\eta)$; hence,
$$\frac{dS}{dm} = \frac{\partial S}{\partial \eta^T}\frac{d\eta}{dm} + \frac{\partial S}{\partial m} = \left(\frac{\partial A}{\partial \eta} - m\right)^T\frac{d\eta}{dm} - \eta(m) = -\eta(m).$$
Recall that for exponential families, $\partial\eta(m)/\partial m = V^{-1}$. So Eq. (6) becomes^1
$$\hat{\Sigma} = -\left(\frac{\partial^2 L}{\partial m\,\partial m^T} + \frac{\partial^2 S}{\partial m\,\partial m^T}\right)^{-1} = -\left(H - V^{-1}\right)^{-1}, \quad \text{for } H := \frac{\partial^2 L}{\partial m\,\partial m^T} \;\Rightarrow\; \hat{\Sigma} = (I - VH)^{-1}V. \qquad (7)$$
When the true posterior $p(\theta|x)$ is in the exponential family and contains no products of the variational moment parameters, then $H = 0$ and $\hat{\Sigma} = V$. In this case, the mean field assumption is
correct, and the LRVB and MFVB covariances coincide at the true posterior covariance. Furthermore, even when the variational assumptions fail, as long as certain mean parameters are estimated
exactly, then this formula is also exact for covariances. E.g., notably, MFVB is well known to provide arbitrarily bad estimates of the covariance of a multivariate normal posterior [4-7], but since
MFVB estimates the means exactly, LRVB estimates the covariance exactly (see Appendix B).
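Eq. (7) is straightforward to apply once $V$ and $H$ are available; a minimal sketch (ours), using a linear solve rather than an explicit matrix inverse for numerical stability:

```python
import numpy as np

def lrvb_covariance(V, H):
    """Eq. (7): Sigma_hat = (I - V H)^{-1} V, where V is the MFVB covariance of
    the sufficient statistics and H is the Hessian of the expected log posterior
    L with respect to the mean parameters. If H = 0, this returns V itself."""
    D = V.shape[0]
    return np.linalg.solve(np.eye(D) - V @ H, V)
```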
2.3 Scaling the matrix inverse
Eq. (7) requires the inverse of a matrix as large as the parameter dimension of the posterior $p(\theta|x)$,
which may be computationally prohibitive. Suppose we are interested in the covariance of the parameter
sub-vector $\alpha$, and let $z$ denote the remaining parameters: $\theta = (\alpha, z)^T$. We can partition $\hat{\Sigma} =
(\hat{\Sigma}_\alpha, \hat{\Sigma}_{\alpha z}; \hat{\Sigma}_{z\alpha}, \hat{\Sigma}_z)$. Similar partitions exist for $V$ and $H$. If we assume a mean-field factorization
$q(\alpha, z) = q(\alpha)q(z)$, then $V_{\alpha z} = 0$. (The variational distributions may factor further as well.) We
calculate the Schur complement of $\hat{\Sigma}$ in Eq. (7) with respect to its $z$th component to find that
$$\hat{\Sigma}_\alpha = \left(I_\alpha - V_\alpha H_\alpha - V_\alpha H_{\alpha z}\left(I_z - V_z H_z\right)^{-1} V_z H_{z\alpha}\right)^{-1} V_\alpha. \qquad (8)$$
Here, $I_\alpha$ and $I_z$ refer to $\alpha$- and $z$-sized identity matrices, respectively. In cases where
$(I_z - V_z H_z)^{-1}$ can be efficiently calculated (e.g., all the experiments in Section 3; see Fig. (5)
in Appendix D), Eq. (8) requires only an $\alpha$-sized inverse.
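A direct sketch of Eq. (8) follows. For clarity it uses a dense solve for the $z$ block, whereas in practice one would exploit the structure of $(I_z - V_z H_z)$ (e.g., block-diagonal over data points) to make that step cheap:

```python
import numpy as np

def lrvb_covariance_alpha(Va, Vz, Ha, Haz, Hza, Hz):
    # Sigma_hat_alpha = (I_a - Va Ha - Va Haz (I_z - Vz Hz)^{-1} Vz Hza)^{-1} Va
    Iz = np.eye(Vz.shape[0])
    Ia = np.eye(Va.shape[0])
    inner = Haz @ np.linalg.solve(Iz - Vz @ Hz, Vz @ Hza)   # alpha x alpha block
    return np.linalg.solve(Ia - Va @ Ha - Va @ inner, Va)
```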
3 Experiments
We compare the covariance estimates from LRVB and MFVB in a range of models, including models
both with and without conjugacy.^2 We demonstrate the superiority of the LRVB estimate to MFVB
in all models before focusing in on Gaussian mixture models for a more detailed scalability analysis.
For each model, we simulate datasets with a range of parameters. In the graphs, each point represents
the outcome from a single simulation. The horizontal axis is always the result from an MCMC
^1 For a comparison of this formula with the frequentist "supplemented expectation-maximization" procedure,
see Appendix C.
^2 All the code is available on our Github repository, rgiordan/LinearResponseVariationalBayesNIPS2015.
procedure, which we take as the ground truth. As discussed in Section 2.2, the accuracy of the
LRVB covariance for a sufficient statistic depends on the approximation $m^*_t \approx \mathbb{E}_{p_t}\theta$. In the models
to follow, we focus on regimes of moderate dependence where this is a reasonable assumption for
most of the parameters (see Section 3.2 for an exception). Except where explicitly mentioned,
the MFVB means of the parameters of interest coincided well with the MCMC means, so our key
assumption in the LRVB derivations of Section 2 appears to hold.
3.1 Normal-Poisson model
Model. First consider a Poisson generalized linear mixed model, exhibiting non-conjugacy. We
observe Poisson draws $y_n$ and a design vector $x_n$, for $n = 1, \ldots, N$. Implicitly below, we will
everywhere condition on the $x_n$, which we consider to be a fixed design matrix. The generative
model is:
$$z_n | \beta, \tau \overset{\text{indep}}{\sim} \mathcal{N}\left(z_n\,|\,\beta x_n, \tau^{-1}\right), \quad y_n | z_n \overset{\text{indep}}{\sim} \text{Poisson}\left(y_n\,|\,\exp(z_n)\right), \qquad (9)$$
$$\beta \sim \mathcal{N}(\beta\,|\,0, \sigma_\beta^2), \qquad \tau \sim \Gamma(\tau\,|\,\alpha_\tau, \beta_\tau).$$
For MFVB, we factorize $q(\beta, \tau, z) = q(\beta)\,q(\tau)\prod_{n=1}^{N} q(z_n)$. Inspection reveals that the optimal
$q(\beta)$ will be Gaussian, and the optimal $q(\tau)$ will be gamma (see Appendix D). Since the optimal
$q(z_n)$ does not take a standard exponential family form, we restrict further to Gaussian $q(z_n)$. There
are product terms in $L$ (for example, the term $\mathbb{E}_q[\tau]\,\mathbb{E}_q[\beta]\,\mathbb{E}_q[z_n]$), so $H \neq 0$, and the mean field
approximation does not hold; we expect LRVB to improve on the MFVB covariance estimate. A
detailed description of how to calculate the LRVB estimate can be found in Appendix D.
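For concreteness, data from Eq. (9) can be simulated as in the following sketch (ours; the parameter values are illustrative):

```python
import numpy as np

def simulate_normal_poisson(n=500, beta=1.0, tau=4.0, seed=0):
    # z_n ~ N(beta * x_n, 1/tau); y_n ~ Poisson(exp(z_n)); x is a fixed design.
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    z = rng.normal(loc=beta * x, scale=1.0 / np.sqrt(tau))
    y = rng.poisson(np.exp(z))
    return x, y

x, y = simulate_normal_poisson()
```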
Results. We simulated 100 datasets, each with 500 data points and a randomly chosen value for
$\beta$ and $\tau$. We drew the design matrix $x$ from a normal distribution and held it fixed throughout. We
set prior hyperparameters $\sigma_\beta^2 = 10$, $\alpha_\tau = 1$, and $\beta_\tau = 1$. To get the "ground truth" covariance
matrix, we took 20000 draws from the posterior with the R MCMCglmm package [18], which
used a combination of Gibbs and Metropolis-Hastings sampling. Our LRVB estimates used the
autodifferentiation software JuMP [19].

Results are shown in Fig. (1). Since $\tau$ is high in many of the simulations, $z$ and $\beta$ are correlated,
and MFVB underestimates the standard deviation of $\beta$ and $\tau$. LRVB matches the MCMC standard
deviation for all $\beta$, and matches for $\tau$ in all but the most correlated simulations. When $\tau$ gets very
high, the MFVB assumption starts to bias the point estimates of $\tau$, and the LRVB standard deviations
start to differ from MCMC. Even in that case, however, the LRVB standard deviations are much more
accurate than the MFVB estimates, which underestimate the uncertainty dramatically. The final plot
shows that LRVB estimates the covariances of $z$ with $\beta$, $\tau$, and $\log\tau$ reasonably well, while MFVB
considers them independent.
Figure 1: Posterior mean and covariance estimates on normal-Poisson simulation data.
3.2 Linear random effects
Model. Next, we consider a simple random slope linear model, with full details in Appendix E. We
observe scalars $y_n$ and $r_n$ and a vector $x_n$, for $n = 1, \ldots, N$. Implicitly below, we will everywhere
condition on all the $x_n$ and $r_n$, which we consider to be fixed design matrices. In general, each
random effect may appear in multiple observations, and the index $k(n)$ indicates which random
effect, $z_k$, affects which observation, $y_n$. The full generative model is:
$$y_n | \beta, z, \tau \overset{\text{indep}}{\sim} \mathcal{N}\left(y_n\,|\,\beta^T x_n + r_n z_{k(n)}, \tau^{-1}\right), \qquad z_k | \nu \overset{\text{iid}}{\sim} \mathcal{N}\left(z_k\,|\,0, \nu^{-1}\right),$$
$$\beta \sim \mathcal{N}(\beta\,|\,0, \Sigma_\beta), \qquad \nu \sim \Gamma(\nu\,|\,\alpha_\nu, \beta_\nu), \qquad \tau \sim \Gamma(\tau\,|\,\alpha_\tau, \beta_\tau).$$
We assume the mean-field factorization $q(\beta, \nu, \tau, z) = q(\beta)\,q(\nu)\,q(\tau)\prod_{k=1}^{K} q(z_k)$. Since this is
a conjugate model, the optimal $q$ will be in the exponential family with no additional assumptions.
Results. We simulated 100 datasets of 300 datapoints each and 30 distinct random effects. We
set prior hyperparameters to $\alpha_\nu = 2$, $\beta_\nu = 2$, $\alpha_\tau = 2$, $\beta_\tau = 2$, and $\Sigma_\beta = 0.1^{-1} I$. Our $x_n$ was
2-dimensional. As in Section 3.1, we implemented the variational solution using the autodifferentiation software JuMP [19]. The MCMC fit was performed using MCMCglmm [18].

Intuitively, when the random effect explanatory variables $r_n$ are highly correlated with the fixed
effects $x_n$, then the posteriors for $z$ and $\beta$ will also be correlated, leading to a violation of the
mean field assumption and an underestimated MFVB covariance. In our simulation, we used $r_n =
x_{1n} + \mathcal{N}(0, 0.4)$, so that $r_n$ is correlated with $x_{1n}$ but not $x_{2n}$. The result, as seen in Fig. (2),
is that $\beta_1$ is underestimated by MFVB, but $\beta_2$ is not. The $\nu$ parameter, in contrast, is not well
estimated by the MFVB approximation in many of the simulations. Since LRVB depends on the
approximation $m^*_t \approx \mathbb{E}_{p_t}\theta$, its covariance estimate is not accurate either (Fig. (2)). However, LRVB
still improves on the MFVB standard deviation.
Figure 2: Posterior mean and covariance estimates on linear random effects simulation data.
3.3 Mixture of normals
Model. Mixture models constitute some of the most popular models for MFVB application [1, 2]
and are often used as an example of where MFVB covariance estimates may go awry [5, 6]. Thus, we
will consider in detail a Gaussian mixture model (GMM) consisting of a K-component mixture of
$P$-dimensional multivariate normals with unknown component means, covariances, and weights. In
what follows, the weight $\pi_k$ is the probability of the $k$th component, $\mu_k$ is the $P$-dimensional mean
of the $k$th component, and $\Lambda_k$ is the $P \times P$ precision matrix of the $k$th component (so $\Lambda_k^{-1}$ is the
covariance parameter). $N$ is the number of data points, and $x_n$ is the $n$th observed $P$-dimensional
data point. We employ the standard trick of augmenting the data generating process with the latent
indicator variables $z_{nk}$, for $n = 1, \ldots, N$ and $k = 1, \ldots, K$, such that $z_{nk} = 1$ implies $x_n \sim
\mathcal{N}(\mu_k, \Lambda_k^{-1})$. So the generative model is:
$$P(z_{nk} = 1) = \pi_k, \qquad p(x|\pi, \mu, \Lambda, z) = \prod_{n=1}^{N}\prod_{k=1}^{K} \mathcal{N}(x_n\,|\,\mu_k, \Lambda_k^{-1})^{z_{nk}}. \qquad (10)$$
We used diffuse conditionally conjugate priors (see Appendix F for details). We make the variational
assumption $q(\pi, \mu, \Lambda, z) = \prod_{k=1}^{K} q(\mu_k)\,q(\Lambda_k)\,q(\pi_k)\prod_{n=1}^{N} q(z_n)$. We compare the accuracy and
speed of our estimates to Gibbs sampling on the augmented model (Eq. (10)) using the function
rnmixGibbs from the R package bayesm. We implemented LRVB in C++, making extensive use
of RcppEigen [20]. We evaluate our results both on simulated data and on the MNIST data set [21].
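For intuition about the variational fit itself, the mean-field update for the indicators $q(z_n)$ in Eq. (10) has the familiar softmax form. The sketch below (ours) plugs point values in for the required expectations; the exact coordinate-ascent update would use $\mathbb{E}_q[\log\pi_k]$, $\mathbb{E}_q[\log|\Lambda_k|]$, and the expected quadratic form instead:

```python
import numpy as np

def responsibilities(X, log_pi, mu, Lambda):
    """Softmax-form update for q(z_n) in the GMM of Eq. (10), with expectations
    replaced by point values (a simplification of the exact CAVI update)."""
    N, _ = X.shape
    K = len(log_pi)
    log_r = np.empty((N, K))
    for k in range(K):
        diff = X - mu[k]
        _, logdet = np.linalg.slogdet(Lambda[k])
        quad = np.einsum('np,pq,nq->n', diff, Lambda[k], diff)
        log_r[:, k] = log_pi[k] + 0.5 * logdet - 0.5 * quad
    log_r -= log_r.max(axis=1, keepdims=True)    # stabilize the softmax
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)
```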
Results. For simulations, we generated $N = 10000$ data points from $K = 2$ multivariate normal
components in $P = 2$ dimensions. MFVB is expected to underestimate the marginal variance of $\mu$,
$\pi$, and $\log(\Lambda)$ when the components overlap, since that induces correlation in the posteriors due to
the uncertain classification of points between the clusters. We check the covariances estimated with
Eq. (7) against a Gibbs sampler, which we treat as the ground truth.^3

We performed 198 simulations, each of which had at least 500 effective Gibbs samples in each
variable, calculated with the R tool effectiveSize from the coda package [22]. The first three plots
show the diagonal standard deviations, and the fourth plot shows the off-diagonal covariances. Note
that the off-diagonal covariance plot excludes the MFVB estimates since most of the values are
zero. Fig. (3) shows that the raw MFVB covariance estimates are often quite different from the
Gibbs sampler results, while the LRVB estimates match the Gibbs sampler closely.

For a real-world example, we fit a $K = 2$ GMM to the $N = 12665$ instances of handwritten 0s
and 1s in the MNIST data set. We used PCA to reduce the pixel intensities to $P = 25$ dimensions.
Full details are provided in Appendix G. In this MNIST analysis, the $\mu$ standard deviations were
under-estimated by MFVB but correctly estimated by LRVB (Fig. (3)); the other parameter standard
deviations were estimated correctly by both and are not shown.
Figure 3: Posterior mean and covariance estimates on GMM simulation and MNIST data.
3.4 Scaling experiments
We here explore the computational scaling of LRVB in more depth for the finite Gaussian mixture
model (Section 3.3). In the terms of Section 2.3, $\alpha$ includes the sufficient statistics from $\pi$, $\mu$, and $\Lambda$,
and grows as $O(KP^2)$. The sufficient statistics for the variational posterior of $\mu$ contain the $P$-length
vectors $\mu_k$, for each $k$, and the $(P+1)P/2$ second-order products in the covariance matrix $\mu_k\mu_k^T$.
Similarly, for each $k$, the variational posterior of $\Lambda$ involves the $(P+1)P/2$ sufficient statistics
in the symmetric matrix $\Lambda_k$ as well as the term $\log|\Lambda_k|$. The sufficient statistics for the posterior
of $\pi$ are the $K$ terms $\log\pi_k$.^4 So, minimally, Eq. (7) will require the inverse of a matrix of size
$O(KP^2)$. The sufficient statistics for $z$ have dimension $K \times N$. Though the number of parameters
thus grows with the number of data points, $H_z = 0$ for the multivariate normal (see Appendix F),
so we can apply Eq. (8) to replace the inverse of an $O(KN)$-sized matrix with multiplication by
the same matrix. Since a matrix inverse is cubic in the size of the matrix, the worst-case scaling for
LRVB is then $O(K^2)$ in $K$, $O(P^6)$ in $P$, and $O(N)$ in $N$.

^3 The likelihood described in Section 3.3 is symmetric under relabeling. When the component locations
and shapes have a real-life interpretation, the researcher is generally interested in the uncertainty of $\mu$, $\Lambda$, and
$\pi$ for a particular labeling, not the marginal uncertainty over all possible re-labelings. This poses a problem
for standard MCMC methods, and we restrict our simulations to regimes where label switching did not occur
in our Gibbs sampler. The MFVB solution conveniently avoids this problem since the mean field assumption
prevents it from representing more than one mode of the joint posterior.
^4 Since $\sum_{k=1}^{K}\pi_k = 1$, using $K$ sufficient statistics involves one redundant parameter. However, this does
not violate any of the necessary assumptions for Eq. (7), and it considerably simplifies the calculations. Note
that though the perturbation argument of Section 2 requires the parameters of $p(\theta|x)$ to be in the interior of the
feasible space, it does not require that the parameters of $p(x|\theta)$ be interior.
In our simulations (Fig. (4)) we can see that, in practice, LRVB scales linearly5 in N and approximately cubically in P across the dimensions considered.6 The P scaling is presumably better than
the theoretical worst case of O(P 6 ) due to extra ef?ciency in the numerical linear algebra. Note that
the vertical axis of the leftmost plot is on the log scale. At all the values of N , K and P considered
here, LRVB was at least as fast as Gibbs sampling and often orders of magnitude faster.
Figure 4: Scaling of LRVB and Gibbs on simulation data in both log and linear scales. Before taking
logs, the line in the two lefthand (N) graphs is y ∝ x, and in the righthand (P) graph, it is y ∝ x³.
4 Conclusion
The lack of accurate covariance estimates from the widely used mean-field variational Bayes
(MFVB) methodology has been a longstanding shortcoming of MFVB. We have demonstrated that
in sparse models, our method, linear response variational Bayes (LRVB), can correct MFVB to deliver these covariance estimates in time that scales linearly with the number of data points. Furthermore, we provide an easy-to-use formula for applying LRVB to a wide range of inference problems.
Our experiments on a diverse set of models have demonstrated the efficacy of LRVB, and our detailed study of scaling of mixtures of multivariate Gaussians shows that LRVB can be considerably
faster than traditional MCMC methods. We hope that in future work our results can be extended
to more complex models, including Bayesian nonparametric models, where MFVB has proven its
practical success.
Acknowledgments. The authors thank Alex Blocker for helpful comments. R. Giordano and
T. Broderick were funded by Berkeley Fellowships.
5
The Gibbs sampling time was linearly rescaled to the amount of time necessary to achieve 1000 effective
samples in the slowest-mixing component of any parameter. Interestingly, this rescaling leads to increasing
efficiency in the Gibbs sampling at low P due to improved mixing, though the benefits cease to accrue at
moderate dimensions.
6
For numeric stability we started the optimization procedures for MFVB at the true values, so the time to
compute the optimum in our simulations was very fast and not representative of practice. On real data, the
optimization time will depend on the quality of the starting point. Consequently, the times shown for LRVB
are only the times to compute the LRVB estimate. The optimization times were on the same order.
Latent Bayesian melding for integrating individual
and population models
Mingjun Zhong, Nigel Goddard, Charles Sutton
School of Informatics
University of Edinburgh
United Kingdom
{mzhong,nigel.goddard,csutton}@inf.ed.ac.uk
Abstract
In many statistical problems, a more coarse-grained model may be suitable for
population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow
the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently
expressed as latent variable models. We propose latent Bayesian melding, which
is motivated by averaging the distributions over population statistics of both the
individual-level and the population-level models under a logarithmic opinion pool
framework. In a case study on electricity disaggregation, which is a type of single-channel blind source separation problem, we show that latent Bayesian melding
leads to significantly more accurate predictions than an approach based solely on
generalized moment matching.
1 Introduction
Good statistical models of populations are often very different from good models of individuals.
As an illustration, the population distribution over human height might be approximately normal,
but to model an individual's height, we might use a more detailed discriminative model based on
many features of the individual's genotype. As another example, in social network analysis, simple
models like the preferential attachment model [3] replicate aggregate network statistics such as
degree distributions, whereas to predict whether two individuals have a link, a social networking
web site might well use a classifier with many features of each person's previous history. Of course
every model of an individual implies a model of the population, but models whose goal is to model
individuals tend to be necessarily more detailed.
These two styles of modelling represent different types of information, so it is natural to want to
combine them. A recent line of research in machine learning has explored the idea of incorporating
constraints into Bayesian models that are difficult to encode in standard prior distributions. These
methods, which include posterior regularization [9], learning with measurements [16], and the generalized expectation criterion [18], tend to follow a moment matching idea, in which expectations of
the distribution of one model are encouraged to match values based on prior information.
Interestingly, these ideas have precursors in the statistical literature on simulation models. In particular, Bayesian melding [21] considers applications in which there is a computer simulation M that
maps from model parameters θ to a quantity φ = M(θ). For example, M might summarize the
output of a deterministic simulation of population dynamics or some other physical phenomenon.
Bayesian melding considers the case in which we can build meaningful prior distributions over both
θ and φ. These two prior distributions need to be merged because of the deterministic relationship;
this is done using a logarithmic opinion pool [5]. We show that there is a close connection between
Bayesian melding and the later work on posterior regularization, which does not seem to have been
recognized in the machine learning literature. We also show that Bayesian melding has the additional advantage that it can be conveniently applied when both individual-level and population-level
models contain latent variables, as would commonly be the case, e.g., if they were mixture models
or hierarchical Bayesian models. We call this approach latent Bayesian melding.
We present a detailed case study of latent Bayesian melding in the domain of energy disaggregation
[11, 20], which is a particular type of blind source separation (BSS) problem. The goal of the
electricity disaggregation problem is to separate the total electricity usage of a building into a sum of
source signals that describe the energy usage of individual appliances. This problem is hard because
the source signals are not identifiable, which motivates work that adds additional prior information
into the model [14, 15, 20, 25, 26, 8]. We show that the latent Bayesian melding approach allows
incorporation of new types of constraints into standard models for this problem, yielding a strong
improvement in performance, in some cases amounting to a 50% error reduction over a moment
matching approach.
2 The Bayesian melding approach
We briefly describe the Bayesian melding approach to integrating prior information in deterministic
simulation models [21], which has seen wide application [1, 6, 23]. In the Bayesian modelling
context, denote Y as the observation data, and suppose that the model includes unknown variables
S, which could include model parameters and latent variables. We are then interested in the posterior
p(S|Y) = p(Y)⁻¹ p(Y|S) p_S(S).     (1)
However, in some situations, the variables S may be related to a new random variable τ by a deterministic simulation function f(·) such that τ = f(S). We call S and τ input and output variables. For example, in the energy disaggregation problem, the total energy consumption variable
τ = Σ_{t=1}^T S_t^T μ, where S_t are the state variables of a hidden Markov model (one-hot encoding) and
μ is a vector containing the mean energy consumption of each state (see Section 5.2). Both τ and
S are random variables, and so in the Bayesian context, the modellers usually choose appropriate
priors p_τ(τ) and p_S(S) based on prior knowledge. However, given p_S(S), the map f naturally
introduces another prior for τ, which is an induced prior denoted by p*_τ(τ). Therefore, there are
two different priors for the same variable τ from different sources, which might not be consistent.
In the energy disaggregation example, p*_τ is induced by the state variables S_t of the hidden Markov
model, which is the individual model of a specific household, and p_τ could be modelled by using
population information, e.g. from a national survey; we can think of this as a population model
since it combines information from many households. The Bayesian melding approach combines
the two priors into one by using the logarithmic pooling method, so that the logarithmically pooled
prior is p̃_τ(τ) ∝ p*_τ(τ)^α p_τ(τ)^{1−α}, where 0 ≤ α ≤ 1. The prior p̃_τ melds the prior information
of both S and τ. In the model (1), the prior p_S does not include information about τ. Thus it is
required to derive a melded prior for S. If f is invertible, the prior for S can be obtained by using
the change-of-variable technique. If f is not invertible, Poole and Raftery [21] heuristically derived
a melded prior

p̃_S(S) = c_α p_S(S) ( p_τ(f(S)) / p*_τ(f(S)) )^{1−α},     (2)

where c_α is a constant given α such that ∫ p̃_S(S) dS = 1. This gives a new posterior p̃(S|Y) =
p̃(Y)⁻¹ p(Y|S) p̃_S(S). Note that it is interesting to infer α [22, 7]; however, we use a fixed value in
this paper. So far we have been assuming there are no latent variables in p_τ. We now consider the
situation when τ is generated by some latent variables.
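A minimal numerical sketch of the logarithmic pooling step on a grid may help fix ideas; the two densities below are placeholders, and α = 0.5 is an arbitrary choice.

# Minimal sketch: logarithmic pooling of two priors on a grid.
# pooled(tau) is proportional to p_star(tau)**alpha * p_tau(tau)**(1 - alpha).
import numpy as np

tau = np.linspace(-10.0, 10.0, 2001)
dtau = tau[1] - tau[0]

p_star = np.exp(-0.5 * (tau - 1.0) ** 2)            # induced prior (placeholder)
p_tau  = np.exp(-0.5 * ((tau + 2.0) / 2.0) ** 2)    # population prior (placeholder)

alpha = 0.5
pooled = p_star ** alpha * p_tau ** (1.0 - alpha)
pooled /= pooled.sum() * dtau                        # normalize numerically
print(pooled.sum() * dtau)                           # ~1.0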
3 The latent Bayesian melding approach
It is common that the variable τ is modelled by a latent variable ψ; see the examples in Section 5.2.
So we could assume that we have a conditional distribution p(τ|ψ) and a prior distribution p_ψ(ψ).
This defines a marginal distribution p_τ(τ) = ∫ p_ψ(ψ) p(τ|ψ) dψ. This could be used to produce the
melded prior (2) of the Bayesian melding approach:

p̃_S(S) = c_α p_S(S) ( ∫ p_τ(f(S)|ψ) p_ψ(ψ) dψ / p*_τ(f(S)) )^{1−α}.     (3)
The integration in (3) is generally intractable. We could employ the Monte Carlo method to approximate it for a fixed τ. However, importantly, we are also interested in inferring the latent variable ψ,
which is meaningful, for example, in the energy disaggregation problem. When we are interested in
finding the maximum a posteriori (MAP) value of the posterior where p̃_S(S) was used as the prior,
we propose to use a rough approximation ∫ p_ψ(ψ) p_τ(τ|ψ) dψ ≈ max_ψ p_ψ(ψ) p_τ(τ|ψ). This leads to an
approximate prior

p̃_S(S) ≈ max_ψ p̃_{S,ψ}(S, ψ) = max_ψ c_α p_S(S) ( p_τ(f(S)|ψ) p_ψ(ψ) / p*_τ(f(S)) )^{1−α}.     (4)
To obtain this approximate prior for S, the joint prior p̃_{S,ψ}(S, ψ) has to exist, and so we show that it
does exist under certain conditions by the following theorem. We assume that S and ψ are continuous
random variables, and that both p*_τ and p_τ are positive and share the same support. Also, E_{p_S(S)}[·]
denotes the expectation with respect to p_S.

Theorem 1. If E_{p_S(S)}[ p_τ(f(S)) / p*_τ(f(S)) ] < ∞, then a constant c_α < ∞ exists such that
∫∫ p̃_{S,ψ}(S, ψ) dψ dS = 1, for any fixed α ∈ [0, 1].
The proof can be found in the supplementary materials. In (4) we heuristically derived an approximate joint prior p̃_{S,ψ}. Interestingly, if ψ and S are independent conditional on τ, we can show as
follows that p̃_{S,ψ} is a limit distribution derived from a joint distribution of ψ and S induced by τ. To
see this, we derive a joint prior for S and ψ:

p_{S,ψ}(S, ψ) = ∫ p(S, ψ|τ) p_τ(τ) dτ = ∫ p(S|τ) p(ψ|τ) p_τ(τ) dτ
            = ∫ [ p(τ|S) p_S(S) / p*_τ(τ) ] [ p(τ|ψ) p_ψ(ψ) / p_τ(τ) ] p_τ(τ) dτ
            = p_S(S) p_ψ(ψ) ∫ p(τ|S) [ p(τ|ψ) / p*_τ(τ) ] dτ.
For a deterministic simulation τ = f(S), the distribution p(τ|S) = p(τ|S, τ = f(S)) is ill-defined
due to Borel's paradox [24]. The distribution p(τ|S) depends on the parameterization. We
assume that τ is uniform on [f(S) − ε, f(S) + ε] conditional on S and ε > 0, and the distribution
is then denoted by p_ε(τ|S). The marginal distribution is p_ε(τ) = ∫ p_ε(τ|S) p_S(S) dS. Denote
g(τ) = p(τ|ψ) / p*_τ(τ) and g_ε(τ) = p(τ|ψ) / p_ε(τ). Then we have the following theorem.

Theorem 2. If lim_{ε→0} p_ε(τ) = p*_τ(τ), and g_ε(τ) has bounded derivatives in any order, then
lim_{ε→0} ∫ p_ε(τ|S) g_ε(τ) dτ = g(f(S)).
See the supplementary materials for the proof. Under this parameterization, we denote p̄_{S,ψ}(S, ψ) =
p_S(S) p_ψ(ψ) lim_{ε→0} ∫ p_ε(τ|S) g_ε(τ) dτ = p_S(S) p_ψ(ψ) p(f(S)|ψ) / p*_τ(f(S)). By applying the logarithmic pooling method, we have a joint prior

p̃_{S,ψ}(S, ψ) = c_α (p_S(S))^α (p̄_{S,ψ}(S, ψ))^{1−α} = c_α p_S(S) ( p_τ(f(S)|ψ) p_ψ(ψ) / p*_τ(f(S)) )^{1−α}.
Since the joint prior blends the variable S and the latent variable ψ, we call this approximation the latent Bayesian melding (LBM) approach, which gives the posterior p̃(S, ψ|Y) =
p̃(Y)⁻¹ p(Y|S) p̃_{S,ψ}(S, ψ). Note that if there are no latent variables, then latent Bayesian melding collapses to the Bayesian melding approach. In Section 6 we will apply this method to an energy
disaggregation problem for integrating population information with an individual model.
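Since MAP inference only ever needs the logarithm of (4) up to the constant log c_α, the melded prior can be evaluated generically. Below is a sketch of that evaluation, with every model-specific density passed in as a callable; all names and the Gaussian placeholders in the toy call are illustrative, not the paper's code.

# Sketch: unnormalized log of the latent-Bayesian-melding joint prior (4).
import numpy as np

def lbm_log_joint_prior(S, psi, f, log_pS, log_p_tau_given_psi,
                        log_p_psi, log_p_tau_star, alpha):
    # log of (4) up to the additive constant log c_alpha
    tau = f(S)
    pooled = (log_p_tau_given_psi(tau, psi) + log_p_psi(psi)
              - log_p_tau_star(tau))
    return log_pS(S) + (1.0 - alpha) * pooled

# toy invocation with Gaussian placeholders
logN = lambda x, m, s: -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
val = lbm_log_joint_prior(
    S=np.array([1.0, 2.0]), psi=0.0, f=np.sum,
    log_pS=lambda S: logN(S, 0.0, 1.0).sum(),
    log_p_tau_given_psi=lambda t, p: logN(t, p, 1.0),
    log_p_psi=lambda p: logN(p, 0.0, 1.0),
    log_p_tau_star=lambda t: logN(t, 0.0, 3.0),
    alpha=0.5)
print(val)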
4 Related methods
We now discuss possible connections between Bayesian melding (BM) and other related methods.
Recently in machine learning, moment matching methods have been proposed, e.g., posterior regularization (PR) [9], learning with measurements [16] and the generalized expectation criterion [18].
These methods share the common idea that the Bayesian models (or posterior distributions) are constrained by some observations or measurements to obtain a least-biased distribution. The idea is
that the system we are modelling is too complex and unobservable, and thus we have limited prior
information. To alleviate this problem, we assume we can obtain some observations of the system
in some way, e.g., by experiments; for example, the observations could be the mean values of
the functions of the variables. Those observations could then guide the modelling of the system.
Interestingly, a very similar idea has been employed in the bias correction method in information
theory and statistics [12, 10, 19], where the least-biased distribution is obtained by optimizing the
Kullback-Leibler divergence subject to the moment constraints. Note that the bias correction method
in [17] is different to others where the bias of a consistent estimator was corrected when the bias
function could be estimated.
We now consider the posteriors derived by PR and BM. In general, given a function f(S) and values
b_i, PR solves the constrained problem

minimize_{p̃}  KL( p̃(S) || p(S|Y) )
subject to  E_{p̃}(m_i(f(S))) − b_i ≤ ξ_i,  ||ξ_i|| ≤ ε;  i = 1, 2, ..., I,

where m_i could be any function, such as a power function. This gives an optimal posterior
p̃_{PR}(S) = Z(λ)⁻¹ p(Y|S) p(S) ∏_{i=1}^I exp(−λ_i m_i(f(S))), where Z(λ) is the normalizing constant. BM has a deterministic simulation f(S) = τ where τ ∼ p_τ. The posterior is then
p̃_{BM}(S) = Z(α)⁻¹ p(Y|S) p(S) ( p_τ(f(S)) / p*_τ(f(S)) )^{1−α}. They have a similar form, and the key difference is
the last factor, which is derived from the constraints or the deterministic simulation. p̃_{PR} and p̃_{BM}
are identical if −Σ_{i=1}^I λ_i m_i(f(S)) = (1 − α) log( p_τ(f(S)) / p*_τ(f(S)) ).
The difference between BM and LBM is the latent variable ψ. We could perform BM by integrating
out ψ in (3), but this is computationally expensive. Instead, LBM jointly models S and ψ, allowing
possibly joint inference, which is an advantage over BM.
5 The energy disaggregation problem
In energy disaggregation, we are given a time series of energy consumption readings from a sensor.
We consider the energy measured in watt-hours as read from a household's electricity meter, which is
denoted by Y = (Y_1, Y_2, ..., Y_T) where Y_t ∈ ℝ₊. The recorded energy signal Y is assumed to be
the aggregation of the consumption of individual appliances in the household. Suppose there are I
appliances, and the energy consumption of each appliance is denoted by X_i = (X_{i1}, X_{i2}, ..., X_{iT})
where X_{it} ∈ ℝ₊. The observed aggregate signal is assumed to be the sum of the component
signals, so that Y_t = Σ_{i=1}^I X_{it} + ε_t where ε_t ∼ N(0, σ²). Given Y, the task is to infer the
unknown component signals X_i. This is essentially the single-channel BSS problem, for which
there is no unique solution. It can also be useful to add an extra component U = (U_1, U_2, ..., U_T)
to model the unknown appliances to make the model more robust, as proposed in [15]. The prior
of U_t is defined as p(U) = (2v)^{−(T−1)} exp{ −(1/(2v²)) Σ_{t=1}^{T−1} |U_{t+1} − U_t| }. The model then has the new
form Y_t = Σ_{i=1}^I X_{it} + U_t + ε_t. A natural way to represent this model is as an additive factorial
hidden Markov model (AFHMM), where the appliances are treated as HMMs [15, 20, 26]; this is
now described.
5.1 The additive factorial hidden Markov model
In the AFHMM, each component signal X_i is represented by an HMM. We suppose there are K_i
states for each X_{it}, and so the state variable is denoted by Z_{it} ∈ {1, 2, ..., K_i}. Since X_i is an
HMM, the initial probabilities are π_{ik} = P(Z_{i1} = k) (k = 1, 2, ..., K_i), where Σ_{k=1}^{K_i} π_{ik} = 1;
the mean values are μ_i = {μ_1, μ_2, ..., μ_{K_i}}, such that X_{it} ∈ μ_i; the transition probabilities are
P^{(i)} = (p^{(i)}_{jk}), where p^{(i)}_{jk} = P(Z_{it} = j | Z_{i,t−1} = k) and Σ_{j=1}^{K_i} p^{(i)}_{jk} = 1. We denote all these
parameters {π_i, μ_i, P^{(i)}} by θ. We assume they are known and can be learned from the training
data. Instead of using Z, we could use a binary vector S_{it} = (S_{it1}, S_{it2}, ..., S_{itK_i})^T to represent
the variable Z, such that S_{itk} = 1 when Z_{it} = k and S_{itj} = 0 for all j ≠ k. Then we are
interested in inferring the states S_{it} instead of inferring X_{it} directly, since X_{it} = S_{it}^T μ_i. Therefore
we want to make inference over the posterior distribution

P(S, U, σ²|Y, θ) ∝ p(Y|S, U, σ²) P(S|θ) p(U) p(σ²),

where the HMM defines the prior of the states

P(S|θ) ∝ ∏_{i=1}^I ∏_{k=1}^{K_i} π_{ik}^{S_{i1k}} ∏_{t=2}^T ∏_{i=1}^I ∏_{k,j} ( p^{(i)}_{kj} )^{S_{itk} S_{i,t−1,j}},

the inverse noise variance is assumed to be Gamma distributed, p(σ⁻²) ∝ (σ⁻²)^{a−1} exp(−b σ⁻²) for some shape a and rate b, and the data likelihood has the Gaussian form

p(Y|S, U, σ², θ) = |2πσ²|^{−T/2} exp{ −(1/(2σ²)) Σ_{t=1}^T ( Y_t − Σ_{i=1}^I S_{it}^T μ_i − U_t )² }.

To make the MAP inference over S, we relax the binary variable S_{itk} to be continuous in the range [0, 1] as in [15, 26].
It has been shown that incorporating domain knowledge into the AFHMM can help to reduce the identifiability problem [15, 20, 26]. The domain knowledge we will incorporate using LBM is the
summary statistics.
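Before turning to the statistics, a small forward simulation of the AFHMM may make the notation concrete. This is our sketch, not the paper's code, and every numeric value in it is a placeholder.

# Sketch: sample an additive factorial HMM with I appliances and
# K-state chains, then form the aggregate reading Y.
import numpy as np

rng = np.random.default_rng(0)
I, T, K = 3, 200, 3
mu = rng.uniform(0.0, 100.0, size=(I, K))           # per-state mean power
mu[:, 0] = 0.0                                       # state 1 = OFF
P = np.full((K, K), 0.05) + 0.85 * np.eye(K)         # sticky transitions, rows sum to 1
pi0 = np.full(K, 1.0 / K)

Z = np.zeros((I, T), dtype=int)
for i in range(I):
    Z[i, 0] = rng.choice(K, p=pi0)
    for t in range(1, T):
        Z[i, t] = rng.choice(K, p=P[Z[i, t - 1]])

X = mu[np.arange(I)[:, None], Z]                     # appliance signals X_it
Y = X.sum(axis=0) + rng.normal(0.0, 5.0, size=T)     # aggregate with noise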
5.2 Population modelling of summary statistics
In energy disaggregation, it is useful to provide summaries of energy consumption to the users.
For example, it would be useful to show the householders the total energy they had consumed in
one day for their appliances, the duration that each appliance was in use, and the number of times
that they had used these appliances. Since there already exists data about typical usage of different
appliances [4], we can employ these data to model the distributions of those summary statistics.
We denote those desired statistics by τ = {τ_i}_{i=1}^I, where i indexes the appliances. For appliance
i, we assume we have measured some time series from different houses for many days. This is
always possible because we can collect them from public data sets, e.g., the data reviewed in [4].
We can then empirically obtain the distributions of those statistics. The distribution is represented by
p_m(τ_{im} | θ_{im}, ψ_{im}), where θ_{im} represents the empirical quantities of the statistic m of the appliance
i, which can be obtained from data, and ψ_{im} are the latent variables, which might not be known. Since
ψ_{im} are variables, we can employ a prior distribution p(ψ_{im}).
We now give some examples of those statistics. Total energy consumption: The total energy
consumption of an appliance can be represented as a function of the states of the HMM, such that
τ_i = Σ_{t=1}^T S_{it}^T μ_i. Duration of appliance usage: The duration of using the appliance i can also be
represented as a function of states, τ_i = Δt Σ_{t=1}^T Σ_{k=2}^{K_i} S_{itk}, where Δt represents the sampling
duration for a data point of the appliances, and we assume that S_{it1} represents the off state, which
means the appliance was turned off. Number of cycles: The number of cycles (the number of times
an appliance is used) can be counted by computing the number of alterations from the OFF state to ON,
such that τ_i = Σ_{t=2}^T Σ_{k=2}^{K_i} I(S_{itk} = 1, S_{i,t−1,1} = 1).
Let the binary vector ζ_i = (ζ_{i1}, ζ_{i2}, ..., ζ_{ic}, ..., ζ_{iC_i}) represent the number of cycles, where ζ_{ic} =
1 means that the appliance i had been used c cycles, and Σ_{c=1}^{C_i} ζ_{ic} = 1. (Note that ζ_i is an example of ψ_i
in this case.) To model these statistics in our LBM framework, the latent variable that we use is the
number of cycles ζ. The distributions of τ_i could be empirically modelled by using the observation
data. One approach is to assume a Gaussian mixture density such that p(τ_i|ζ_i) = Σ_{c=1}^{C_i} p(ζ_{ic} =
1) p_c(τ_i|ζ_i), where Σ_{c=1}^{C_i} p(ζ_{ic} = 1) = 1 and p_c is the Gaussian component density. Using the
mixture Gaussian, we basically assume that, for an appliance, given the number of cycles, the total
energy consumption is modelled by a Gaussian with mean θ_{ic} and variance σ²_{ic}. A simpler model
would be a linear regression model, such that τ_i = Σ_{c=1}^{C_i} ζ_{ic} θ_{ic} + ε_i where ε_i ∼ N(0, σ_i²). This
model assumes that given the number of cycles, the total energy consumption is close to the mean
θ_{ic}. The mixture model is more appropriate than the regression model, but the inference is more
difficult.
difficult.
PCi
When ?i represents the number of cycles for appliance i, we can use ?i = c=1
cic ?ic where cic
represents the number of cycles. When the state variables Si are relaxed to [0, 1], we can then
PCi
employ a noise model such that ?i = c=1
c ? + i where ? N (0, ?i2 ). We model ?i with a
QCi ic?icic
discrete distribution such that P (?i ) = c=1 pic where pic represents the prior probability of the
number of cycles for the appliance i, which can be obtained from the training data. We now show
that how to use the LBM to integrate the AFHMM with these population distributions.
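All three statistics are simple functions of the (possibly relaxed) state array. Here is a sketch for a single appliance; we take state index 0 as OFF, matching S_{it1} above, and everything else is generic.

# Sketch: the three summary statistics of this section for one appliance.
# S: (T, K) rows one-hot (or relaxed); mu: (K,) state means; dt: sample spacing.
import numpy as np

def summary_stats(S, mu, dt):
    total_energy = float((S @ mu).sum())                  # sum_t S_t^T mu
    duration = float(dt * S[:, 1:].sum())                 # time spent in ON states
    on = S[:, 1:].sum(axis=1)                             # ON indicator per step
    cycles = float(((on[1:] > 0.5) & (on[:-1] <= 0.5)).sum())   # OFF -> ON flips
    return total_energy, duration, cycles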
6 The latent Bayesian melding approach to energy disaggregation
We have shown that the summary statistics τ can be represented as a deterministic function of the
state variables of the HMMs, S, such that τ = f(S), which means that τ itself can be represented as
a latent variable model. We could then straightforwardly employ the LBM to produce a joint prior
over S and ζ such that p̃_{S,ζ}(S, ζ) = c_α p_S(S) ( p_τ(f(S)|ζ) p(ζ) / p*_τ(f(S)) )^{1−α}. Since in our model f is not
invertible, we need to generate a proper density for p*_τ. One possible way is to generate N random
samples {S^{(n)}}_{n=1}^N from the prior p_S(S), which is an HMM, and then p*_τ can be modelled by using
kernel density estimation. However, this will make the inference difficult. Instead, we employ a
Gaussian density p*_{τ_{im}}(τ_{im}) = N(τ̄_{im}, σ̄²_{im}), where τ̄_{im} and σ̄²_{im} are computed from {S^{(n)}}_{n=1}^N.
The new posterior distribution of LBM thus has the form

p(S, U, ζ, σ|Y, θ) ∝ p(σ) p(U) p̃_{S,ζ}(S, ζ) p(Y|S, U, σ²)
                  = p(σ) p(U) c_α p_S(S) ( p_τ(f(S)|ζ) p(ζ) / p*_τ(f(S)) )^{1−α} p(Y|S, U, σ²),

where σ represents the collection of all the noise variances. All the inverse noise variances employ
the Gamma distribution as the prior. We are interested in inferring the MAP values. Since the variables S and ζ are binary, we have to solve a combinatorial optimization problem, which is intractable,
so we solve a relaxed problem as in [15, 26]. Since log p_S(S) is not convex, we employ the relaxation method of [15]. So a new K_i × K_i variable matrix H^{it} = (h^{it}_{jk}) is introduced, such that h^{it}_{jk} = 1
when S_{i,t−1,k} = 1 and S_{itj} = 1, and otherwise h^{it}_{jk} = 0. Under these constraints, we then obtain
log p_S(S) = log p(S, H) = Σ_{i=1}^I S_{i1}^T log π_i + Σ_{i,t,k,j} h^{it}_{jk} log p^{(i)}_{jk}; this is now linear. We optimize
the log-posterior, which is denoted by L(S, H, U, ζ, σ). The constraints for those variables are represented as the sets

Q_S = { Σ_{k=1}^{K_i} S_{itk} = 1, S_{itk} ∈ [0, 1], ∀i, t },
Q_ζ = { Σ_{c=1}^{C_i} ζ_{ic} = 1, ζ_{ic} ∈ [0, 1], ∀i },
Q_{H,S} = { Σ_{l=1}^{K_i} H^{it}_{l·} = S_{i,t−1}, Σ_{l=1}^{K_i} H^{it}_{·l} = S_{it}, h^{it}_{jk} ∈ [0, 1], ∀i, t },
Q_{U,σ} = { U ≥ 0, σ ≥ 0, σ²_{im} < σ̄²_{im}, ∀i, m }.

Denote Q = Q_S ∩ Q_ζ ∩ Q_{H,S} ∩ Q_{U,σ}. The relaxed optimization problem is then

maximize_{S, H, U, ζ, σ}  L(S, H, U, ζ, σ)  subject to  Q.
We observed that every term in L is either quadratic or linear when ζ are fixed, and the solutions
for ζ are deterministic when the other variables are fixed. The constraints are all linear. Therefore,
we optimize ζ while fixing all the other variables, and then optimize all the other variables simultaneously while fixing ζ. This optimization problem is then a convex quadratic program (CQP), for
which we use MOSEK [2]. We denote this method by AFHMM+LBM.
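As a toy stand-in for this relaxed program, the sketch below keeps one appliance, drops H, U, ζ and σ, replaces the linearized HMM term by a generic smoothness penalty, and uses cvxpy's default solver in place of the MOSEK interface; it is meant only to show the shape of the relaxation, not the full objective.

# Toy sketch (not the full model): relax one appliance's one-hot states
# to the simplex and fit the aggregate signal with a quadratic objective.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
T, K = 50, 3
mu = np.array([0.0, 30.0, 80.0])                   # state means (placeholder)
Y = rng.choice(mu, size=T) + rng.normal(0, 2, T)   # synthetic aggregate

S = cp.Variable((T, K), nonneg=True)
data_term = cp.sum_squares(Y - S @ mu)
smooth = cp.sum_squares(S[1:] - S[:-1])            # stand-in for the HMM term
prob = cp.Problem(cp.Minimize(data_term + 10.0 * smooth),
                  [cp.sum(S, axis=1) == 1])
prob.solve()
print(prob.status, np.round(S.value[:5], 2))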
7 Experimental results
We have incorporated population information into the AFHMM by employing the latent Bayesian
melding approach. In this section, we apply the proposed model to the disaggregation problem. We
will compare the new approach with AFHMM+PR [26] using the set of statistics τ described
in Section 5.2. The key difference between our method AFHMM+LBM and AFHMM+PR is that
AFHMM+LBM models the statistics τ conditional on the number of cycles ζ.
7.1 The HES data
We apply AFHMM, AFHMM+PR and AFHMM+LBM to the Household Electricity Survey (HES)
data.¹ This data set was gathered in a recent study commissioned by the UK Department of Food and
Rural Affairs. The study monitored 251 households, selected to be representative of the population,
across England from May 2010 to July 2011 [27]. Individual appliances were monitored, and in
some households the overall electricity consumption was also monitored. The data were monitored
1
The HES dataset and information on how the raw data was cleaned can be found at
https://www.gov.uk/government/publications/household-electricity-survey.
Table 1: Normalized disaggregation error (NDE), signal aggregate error (SAE), duration aggregate
error (DAE), and cycle aggregate error (CAE) by AFHMM+PR and AFHMM+LBM on synthetic
mains in HES data.

Methods     NDE          SAE          DAE          CAE          Time (s)
AFHMM       1.45±0.88    1.42±0.39    1.56±0.23    1.41±0.31    179.3±1.9
AFHMM+PR    0.87±0.21    0.86±0.39    0.83±0.53    1.57±0.66    195.4±3.2
AFHMM+LBM   0.89±0.49    0.87±0.37    0.76±0.32    0.79±0.35    198.1±3.1
Table 2: Normalized disaggregation error (NDE), signal aggregate error (SAE), duration aggregate
error (DAE), and cycle aggregate error (CAE) by AFHMM+PR and AFHMM+LBM on mains in
HES data.

Methods     NDE          SAE          DAE          CAE          Time (s)
AFHMM       1.90±1.16    2.26±0.86    1.91±0.67    1.12±0.17    170.8±33.3
AFHMM+PR    0.91±0.11    0.67±0.07    0.68±0.18    1.65±0.49    214.2±38.1
AFHMM+LBM   0.77±0.23    0.68±0.19    0.61±0.22    0.98±0.32    224.8±34.8
every 2 or 10 minutes for different houses. We used only the 2-minute data. We then used the
individual appliances to train the model parameters θ of the AFHMM, which will be used as the
input to the models for disaggregation. Note that we assumed the HMMs have 3 states for all the
appliances. This number of states is widely applied in energy disaggregation problems, though our
method could easily be applied to larger state spaces. In the HES data, in some houses the overall
electricity consumption (the mains) was monitored. However, in most houses, only a subset of
individual appliances were monitored, and the total electricity readings were not recorded.
Generating the population information: Most of the houses in HES did not monitor the mains
readings. They all recorded the individual appliances' consumption. We used a subset of the houses
to generate the population information of the individual appliances. We used the population information of total energy consumption, duration of appliance usage and the number of cycles in a time
period. In our experiments, the time period was one day. We modelled the distributions of these
summary statistics by using the methods described in Section 5.2, where the distributions were
Gaussian. All the required quantities for modelling these distributions were generated by using the
samples of the individual appliances.
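For concreteness, here is a sketch of this step for the total-energy statistic; the traces below are random placeholders standing in for per-day HES appliance readings.

# Sketch: build a per-appliance Gaussian population model of daily
# total energy from a stack of per-day traces (placeholder data).
import numpy as np

rng = np.random.default_rng(0)
days, T = 200, 720                               # 720 two-minute samples per day
traces = rng.gamma(2.0, 10.0, size=(days, T))    # watt readings, fake

dt_hours = 2.0 / 60.0
daily_energy = traces.sum(axis=1) * dt_hours     # watt-hours per day
mean, std = daily_energy.mean(), daily_energy.std(ddof=1)
print(f"population model: N({mean:.1f}, {std:.1f}^2)")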
Houses without mains readings: In this experiment, we randomly selected one hundred households, and one day's usage was used as test data for each household. Since no mains readings were
monitored in these houses, we added up the appliance readings to generate synthetic mains readings. We then applied the AFHMM, AFHMM+PR and AFHMM+LBM to these synthetic mains to
predict the individual appliance usage. To compare these three methods, we employed four error
measures. Denote x̂_i as the inferred signal for the appliance usage x_i. One measure is the normalized disaggregation error (NDE): Σ_{it} (x_{it} − x̂_{it})² / Σ_{it} x_{it}². This measures how well the method predicts the
energy consumption at every time point. However, the householders might be more interested in the
summaries of the appliance usage. For example, in a particular time period, e.g., one day, people
are interested in the total energy consumption of the appliances, the total time they have been using
those appliances, and how many times they have used them. We thus employ (1/I) Σ_{i=1}^I |r̂_i − r_i| / r_i as the
signal aggregate error (SAE), the duration aggregate error (DAE) or the cycle aggregate error (CAE),
where r_i represents the total energy consumption, the duration or the number of cycles, respectively,
and r̂_i represents the predicted summary statistics.
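In code, the four measures reduce to a few lines (a sketch; it assumes the true summaries r_i are strictly positive).

# Sketch: NDE and the relative aggregate errors (SAE/DAE/CAE).
import numpy as np

def nde(x, x_hat):
    # x, x_hat: (I, T) true and inferred appliance signals
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)

def aggregate_error(r, r_hat):
    # r, r_hat: length-I true and predicted summaries (energy, duration
    # or cycle counts, depending on what r holds); assumes r > 0
    return np.mean(np.abs(r_hat - r) / r)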
All the methods were applied to the synthetic data. Table 1 shows the overall error computed by
these methods. We see that both the methods using prior information improved over the base line
method AFHMM. The AFHMM+PR and AFHMM+LBM performed similarly in terms of NDE and
SAE, but AFHMM+LBM improved over AFHMM+PR in terms of DAE (8%) and CAE (50%).
Houses with mains readings: We also applied those methods to 6 houses which have mains readings. We used 10 days of data for each house, and the recorded mains readings were used as the input
to the models. All the methods were used to predict the appliance consumption. Table 2 shows the
Table 3: Normalized disaggregation error (NDE), signal aggregate error (SAE), duration aggregate
error (DAE), and cycle aggregate error (CAE) by AFHMM+PR and AFHMM+LBM on UK-DALE
data.

Methods     NDE          SAE          DAE          CAE          Time (s)
AFHMM       1.57±1.16    1.99±0.52    2.81±0.79    1.37±0.28    118.6±23.1
AFHMM+PR    0.83±0.27    0.82±0.38    1.68±1.21    1.90±0.52    120.4±25.3
AFHMM+LBM   0.84±0.25    0.89±0.38    0.49±0.33    0.59±0.21    123.1±25.8
error of each house and also the overall errors. This experiment is more realistic than the synthetic
mains readings, since the real mains readings were used as the input. We see that both the methods incorporating prior information have improved over the AFHMM in terms of NDE, SAE and
DAE. The AFHMM+PR and AFHMM+LBM have similar results for SAE. AFHMM+LBM
improved over AFHMM+PR for NDE (15%), DAE (10%) and CAE (40%).
7.2 UK-DALE data
In the previous section we have trained the model using the HES data, and applied the models to
different houses of the same data set. A more realistic situation is to train the model in one data set,
and apply the model to a different data set, because it is unrealistic to expect to obtain appliance-level data from every household on which the system will be deployed. In this section, we use the
HES data to train the model parameters of the AFHMM, and model the distribution of the summary
statistics. We then apply the models to the UK-DALE dataset [13], which was also gathered from
UK households, to make the predictions. There are five houses in UK-DALE, and all of them have
mains readings as well as the individual appliance readings. All the mains meters were sampled
every 6 seconds, and some of them were also sampled at a higher rate; details of the data and how to access
it can be found in [13]. We employ three of the houses for analysis in our experiments (houses 1, 2
& 5 in the data). The other two houses were excluded because the correlation between the sum of
submeters and mains is very low, which suggests that there might be recording errors in the meters.
We selected 7 appliances for disaggregation, based on those that typically use the most energy. Since
the sample rate of the submeters in the HES data is 2 minutes, we downsampled the signal from 6
seconds to 2 minutes for the UK-DALE data. For each house, we randomly selected a month for
analysis. All three methods were applied to the mains readings. For comparison purposes, we
computed the NDE, SAE, DAE and CAE errors of all three methods, averaged over 30 days. Table 3
shows the results. The results are consistent with the results of the HES data. Both the AFHMM+PR
and AFHMM+LBM improve over the basic AFHMM, except that AFHMM+PR did not improve the
CAE. As for HES testing data, AFHMM+PR and AFHMM+LBM have similar results on NDE and
SAE. And AFHMM+LBM again improved over AFHMM+PR in DAE (70%) and CAE (68%).
These results are consistent in suggesting that incorporating population information into the model
can help to reduce the identifiability problem in single-channel BSS problems.
8 Conclusions
We have proposed a latent Bayesian melding approach for incorporating population information
with latent variables into individual models, and have applied the approach to energy disaggregation
problems. The new approach has been evaluated by applying it to two real-world electricity data sets.
The latent Bayesian melding approach has been compared to the posterior regularization approach
(a case of the Bayesian melding approach) and AFHMM. Both the LBM and PR have significantly
lower error than the base line method. LBM improves over PR in predicting the duration and the
number of cycles. Both methods were similar in NDE and the SAE errors.
Acknowledgments
This work is supported by the Engineering and Physical Sciences Research Council, UK (grant
numbers EP/K002732/1 and EP/M008223/1).
References
[1] Leontine Alkema, Adrian E. Raftery, and Samuel J. Clark. Probabilistic projections of HIV prevalence using Bayesian melding. The Annals of Applied Statistics, pages 229-248, 2007.
[2] MOSEK ApS. The MOSEK optimization toolbox for Python manual. Version 7.1 (Revision 28), 2015.
[3] Albert-Laszlo Barabasi and Reka Albert. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.
[4] N. Batra et al. NILMTK: An open source toolkit for non-intrusive load monitoring. In Proceedings of the 5th International Conference on Future Energy Systems, pages 265-276, New York, NY, USA, 2014.
[5] Robert F. Bordley. A multiplicative formula for aggregating probability assessments. Management Science, 28(10):1137-1148, 1982.
[6] Grace S. Chiu and Joshua M. Gould. Statistical inference for food webs with emphasis on ecological networks via Bayesian melding. Environmetrics, 21(7-8):728-740, 2010.
[7] Luiz Max F. de Carvalho, Daniel A. M. Villela, Flavio Coelho, and Leonardo S. Bastos. On the choice of the weights for the logarithmic pooling of probability distributions. September 24, 2015.
[8] E. Elhamifar and S. Sastry. Energy disaggregation via learning powerlets and sparse coding. In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence (AAAI), pages 629-635, 2015.
[9] K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001-2049, 2010.
[10] A. Giffin and A. Caticha. Updating probabilities with data and moments. The 27th Int. Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, NY, July 8-13, 2007.
[11] G. W. Hart. Nonintrusive appliance load monitoring. Proceedings of the IEEE, 80(12):1870-1891, 1992.
[12] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
[13] Jack Kelly and William Knottenbelt. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes. 2(150007), 2015.
[14] H. Kim, M. Marwah, M. Arlitt, G. Lyon, and J. Han. Unsupervised disaggregation of low frequency power measurements. In Proceedings of the SIAM Conference on Data Mining, pages 747-758, 2011.
[15] J. Z. Kolter and T. Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In Proceedings of AISTATS, volume 22, pages 1472-1482, 2012.
[16] P. Liang, M. I. Jordan, and D. Klein. Learning from measurements in exponential families. In The 26th Annual International Conference on Machine Learning, pages 641-648, 2009.
[17] James G. MacKinnon and Anthony A. Smith. Approximate bias correction in econometrics. Journal of Econometrics, 85(2):205-230, 1998.
[18] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proceedings of ACL, pages 870-878, Columbus, Ohio, June 2008.
[19] Keith Myerscough, Jason Frank, and Benedict Leimkuhler. Least-biased correction of extended dynamical systems using observational data. arXiv preprint arXiv:1411.6011, 2014.
[20] O. Parson, S. Ghosh, M. Weal, and A. Rogers. Non-intrusive load monitoring using prior models of general appliance types. In Proceedings of AAAI, pages 356-362, July 2012.
[21] David Poole and Adrian E. Raftery. Inference for deterministic simulation models: The Bayesian melding approach. Journal of the American Statistical Association, pages 1244-1255, 2000.
[22] M. J. Rufo, J. Martín, C. J. Pérez, et al. Log-linear pool to combine prior distributions: A suggestion for a calibration-based approach. Bayesian Analysis, 7(2):411-438, 2012.
[23] H. Ševčíková, A. Raftery, and P. Waddell. Uncertain benefits: Application of Bayesian melding to the Alaskan way viaduct in Seattle. Transportation Research Part A: Policy and Practice, 45:540-553, 2011.
[24] Robert L. Wolpert. Comment on "Inference from a deterministic population dynamics model for bowhead whales". Journal of the American Statistical Association, 90(430):426-427, 1995.
[25] M. Wytock and J. Zico Kolter. Contextually supervised source separation with application to energy disaggregation. In Proceedings of AAAI, pages 486-492, 2014.
[26] M. Zhong, N. Goddard, and C. Sutton. Signal aggregate constraints in additive factorial HMMs, with application to energy disaggregation. In NIPS, pages 3590-3598, 2014.
[27] J.-P. Zimmermann, M. Evans, J. Griggs, N. King, L. Harding, P. Roberts, and C. Evans. Household electricity survey, 2012.
Rapidly Mixing Gibbs Sampling for a Class of Factor
Graphs Using Hierarchy Width
Christopher De Sa, Ce Zhang, Kunle Olukotun, and Christopher Ré
cdesa@stanford.edu, czhang@cs.wisc.edu,
kunle@stanford.edu, chrismre@stanford.edu
Departments of Electrical Engineering and Computer Science
Stanford University, Stanford, CA 94309
Abstract
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance
are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show
that under suitable conditions on the weights, bounded hierarchy width ensures
polynomial mixing time. Our study of hierarchy width is in part motivated by a
class of factor graph templates, hierarchical templates, which have bounded hierarchy width, regardless of the data used to instantiate them. We demonstrate a
rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
1 Introduction
We study inference on factor graphs using Gibbs sampling, the de facto Markov Chain Monte Carlo
(MCMC) method [8, p. 505]. Specifically, our goal is to compute the marginal distribution of some
query variables using Gibbs sampling, given evidence about some other variables and a set of factor
weights. We focus on the case where all variables are discrete. In this situation, a Gibbs sampler
randomly updates a single variable at each iteration by sampling from its conditional distribution
given the values of all the other variables in the model. Many systems, such as Factorie [14],
OpenBugs [12], PGibbs [5], DimmWitted [28], and others [15, 22, 25], use Gibbs sampling for
inference because it is fast to run, simple to implement, and often produces high quality empirical
results. However, theoretical guarantees about Gibbs are lacking. The aim of the technical result of
this paper is to provide new cases in which one can guarantee that Gibbs gives accurate results.
For an MCMC sampler like Gibbs sampling, the standard measure of efficiency is the mixing time
of the underlying Markov chain. We say that a Gibbs sampler mixes rapidly over a class of models
if its mixing time is at most polynomial in the number of variables in the model. Gibbs sampling
is known to mix rapidly for some models. For example, Gibbs sampling on the Ising model on a
graph with bounded degree is known to mix in quasilinear time for high temperatures [10, p. 201].
Recent work has outlined conditions under which Gibbs sampling of Markov Random Fields mixes
rapidly [11]. Continuous-valued Gibbs sampling over models with exponential-family distributions
is also known to mix rapidly [2, 3]. Each of these celebrated results still leaves a gap: there are
many classes of factor graphs on which Gibbs sampling seems to work very well (including as part
of systems that have won quality competitions [24]) for which there are no theoretical guarantees
of rapid mixing.
Many graph algorithms that take exponential time in general can be shown to run in polynomial
time as long as some graph property is bounded. For inference on factor graphs, the most commonly
1
used property is hypertree width, which bounds the complexity of dynamic programming algorithms
on the graph. Many problems, including variable elimination for exact inference, can be solved in
polynomial time on graphs with bounded hypertree width [8, p. 1000]. In some sense, bounded hypertree width is a necessary and sufficient condition for tractability of inference in graphical models
[1, 9]. Unfortunately, it is not hard to construct examples of factor graphs with bounded weights and
hypertree width 1 for which Gibbs sampling takes exponential time to mix. Therefore, bounding
hypertree width is insufficient to ensure rapid mixing of Gibbs sampling. To analyze the behavior
of Gibbs sampling, we define a new graph property, called the hierarchy width. This is a stronger
condition than hypertree width; the hierarchy width of a graph will always be larger than its hypertree width. We show that for graphs with bounded hierarchy width and bounded weights, Gibbs
sampling mixes rapidly.
Our interest in hierarchy width is motivated by so-called factor graph templates, which are common
in practice [8, p. 213]. Several types of models, such as Markov Logic Networks (MLN) and Relational Markov Networks (RMN) can be represented as factor graph templates. Many state-of-the-art
systems use Gibbs sampling on factor graph templates and achieve better results than competitors
using other algorithms [14, 27]. We exhibit a class of factor graph templates, called hierarchical
templates, which, when instantiated, have a hierarchy width that is bounded independently of the
dataset used; Gibbs sampling on models instantiated from these factor graph templates will mix in
polynomial time. This is a kind of sampling analog to tractable Markov logic [4] or so-called 'safe
plans' in probabilistic databases [23]. We exhibit a real-world templated program that outperforms
human annotators at a complex text extraction task, and provably mixes in polynomial time.
In summary, this work makes the following contributions:
• We introduce a new notion of width, hierarchy width, and show that Gibbs sampling mixes
in polynomial time for all factor graphs with bounded hierarchy width and factor weight.
• We describe a new class of factor graph templates, hierarchical factor graph templates,
such that Gibbs sampling on instantiations of these templates mixes in polynomial time.
• We validate our results experimentally and exhibit factor graph templates that achieve high
quality on tasks but for which our new theory is able to provide mixing time guarantees.
1.1 Related Work
Gibbs sampling is just one of several algorithms proposed for use in factor graph inference. The
variable elimination algorithm [8] is an exact inference method that runs in polynomial time for
graphs of bounded hypertree width. Belief propagation is another widely-used inference algorithm
that produces an exact result for trees and, although it does not converge in all cases, converges to a
good approximation under known conditions [7]. Lifted inference [18] is one way to take advantage
of the structural symmetry of factor graphs that are instantiated from a template; there are lifted
versions of many common algorithms, such as variable elimination [16], belief propagation [21], and
Gibbs sampling [26]. It is also possible to leverage a template for fast computation: Venugopal et al.
[27] achieve orders of magnitude of speedup of Gibbs sampling on MLNs. Compared with Gibbs
sampling, these inference algorithms typically have better theoretical results; despite this, Gibbs
sampling is a ubiquitous algorithm that performs practically well, far outstripping its guarantees.
Our approach of characterizing runtime in terms of a graph property is typical for the analysis of
graph algorithms. Many algorithms are known to run in polynomial time on graphs of bounded
treewidth [19], despite being otherwise NP-hard. Sometimes, using a stronger or weaker property
than treewidth will produce a better result; for example, the submodular width used for constraint
satisfaction problems [13].
2 Main Result
In this section, we describe our main contribution. We analyze some simple example graphs, and
use them to show that bounded hypertree width is not sufficient to guarantee rapid mixing of Gibbs
sampling. Drawing intuition from this, we define the hierarchy width graph property, and prove that
Gibbs sampling mixes in polynomial time for graphs with bounded hierarchy width.
[Figure 1 (diagrams omitted): (a) linear semantics: the query Q is connected to each voter T1, …, Tn and F1, …, Fn by its own factor; (b) logical/ratio semantics: one factor φ_T connects Q to all true-voters and one factor φ_F connects Q to all false-voters.]
Figure 1: Factor graph diagrams for the voting model; single-variable prior factors are omitted.
First, we state some basic definitions. A factor graph G is a graphical model that consists of a set of
variables V and factors Φ, and determines a distribution over those variables. If I is a world for G
(an assignment of a value to each variable in V), then ε(I), the energy of the world, is defined as
ε(I) = ∑_{φ∈Φ} φ(I).   (1)
The probability of world I is π(I) = (1/Z) exp(ε(I)), where Z is the normalization constant necessary
for this to be a distribution. Typically, each φ depends only on a subset of the variables; we can draw
G as a bipartite graph where a variable v ∈ V is connected to a factor φ ∈ Φ if φ depends on v.
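Concretely, the single-variable update a Gibbs sampler performs can be read off from (1): hold all other variables fixed, score each candidate value of the chosen variable by its energy, and sample proportionally to exp(ε(I)). A minimal NumPy sketch of one such update follows; the data structures are illustrative choices of ours, not those of any system cited above.

import numpy as np

def gibbs_update(world, v, domain, factors, rng):
    """Resample variable v from its conditional given all other variables.

    world   -- dict: variable name -> current value
    domain  -- list of candidate values for v
    factors -- list of (vars, fn) pairs; fn maps a tuple of values to phi(I)
    """
    # Factors not touching v contribute a constant, so they cancel
    # in the conditional distribution and can be ignored.
    touching = [(vs, fn) for (vs, fn) in factors if v in vs]
    energies = []
    for val in domain:
        world[v] = val
        energies.append(sum(fn(tuple(world[u] for u in vs))
                            for vs, fn in touching))
    e = np.array(energies, dtype=float)
    p = np.exp(e - e.max())          # pi(I) is proportional to exp(energy)
    p /= p.sum()
    world[v] = domain[rng.choice(len(domain), p=p)]

A full sweep applies this update to every variable in turn; the mixing time counts how many such updates are needed before the estimated marginals are close to π.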
Definition 1 (Mixing Time). The mixing time of a Markov chain is the first time t at which the
estimated distribution π_t is within statistical distance 1/4 of the true distribution [10, p. 55]. That is,
t_mix = min { t : max_{A⊆Ω} |π_t(A) − π(A)| ≤ 1/4 }.
2.1 Voting Example
We start by considering a simple example model [20], called the voting model, that models the sign
of a particular 'query' variable Q ∈ {−1, 1} in the presence of other 'voter' variables T_i ∈ {0, 1}
and F_i ∈ {0, 1}, for i ∈ {1, . . . , n}, that suggest that Q is positive and negative (true and false),
respectively. We consider three versions of this model. The first, the voting model with linear
semantics, has energy function
ε(Q, T, F) = wQ ∑_{i=1}^{n} T_i − wQ ∑_{i=1}^{n} F_i + ∑_{i=1}^{n} w_{T_i} T_i + ∑_{i=1}^{n} w_{F_i} F_i,
where w_{T_i}, w_{F_i}, and w > 0 are constant weights. This model has a factor connecting each voter
variable to the query, which represents the value of that vote, and an additional factor that gives a
prior for each voter. It corresponds to the factor graph in Figure 1(a). The second version, the voting
model with logical semantics, has energy function
ε(Q, T, F) = wQ max_i T_i − wQ max_i F_i + ∑_{i=1}^{n} w_{T_i} T_i + ∑_{i=1}^{n} w_{F_i} F_i.
Here, in addition to the prior factors, there are only two other factors, one of which (which we call
φ_T) connects all the true-voters to the query, and the other of which (φ_F) connects all the false-voters
to the query. The third version, the voting model with ratio semantics, is an intermediate between
these two models, and has energy function
ε(Q, T, F) = wQ log(1 + ∑_{i=1}^{n} T_i) − wQ log(1 + ∑_{i=1}^{n} F_i) + ∑_{i=1}^{n} w_{T_i} T_i + ∑_{i=1}^{n} w_{F_i} F_i.
With either logical or ratio semantics, this model can be drawn as the factor graph in Figure 1(b).
These three cases model different distributions and therefore different ways of representing the
power of a vote; the choice of names is motivated by considering the marginal odds of Q given
the other variables. For linear semantics, the odds of Q depend linearly on the difference between
the number of nonzero positive-voters Ti and nonzero negative-voters Fi . For ratio semantics, the
odds of Q depend roughly on their ratio. For logical semantics, only the presence of nonzero voters
matters, not the number of voters.
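To make the three semantics concrete, the sketch below spells out the three energy functions and a single Gibbs sweep over (Q, T, F). The representation is an illustrative choice of ours, not the exact experimental setup behind Figure 2.

import numpy as np

def energy(Q, T, F, wT, wF, w, semantics):
    """Energy of the voting model; Q in {-1, 1}, T and F are 0/1 int arrays."""
    if semantics == "linear":
        vote = w * Q * (T.sum() - F.sum())
    elif semantics == "logical":
        vote = w * Q * (T.max() - F.max())
    else:  # "ratio"
        vote = w * Q * (np.log1p(T.sum()) - np.log1p(F.sum()))
    return vote + wT @ T + wF @ F

def gibbs_sweep(Q, T, F, wT, wF, w, semantics, rng):
    """One full Gibbs sweep over Q and every voter variable."""
    n = len(T)
    sites = [("Q", 0)] + [("T", i) for i in range(n)] + [("F", i) for i in range(n)]
    for name, i in sites:
        vals = [-1, 1] if name == "Q" else [0, 1]
        es = []
        for v in vals:
            if name == "Q":
                es.append(energy(v, T, F, wT, wF, w, semantics))
            elif name == "T":
                T[i] = v
                es.append(energy(Q, T, F, wT, wF, w, semantics))
            else:
                F[i] = v
                es.append(energy(Q, T, F, wT, wF, w, semantics))
        es = np.array(es)
        p = np.exp(es - es.max())
        p /= p.sum()
        v = vals[rng.choice(2, p=p)]
        if name == "Q":
            Q = v
        elif name == "T":
            T[i] = v
        else:
            F[i] = v
    return Q, T, F

Averaging the sampled values of Q over repeated sweeps estimates its marginal; with linear semantics the conditional odds of Q scale like exp(2w(∑T_i − ∑F_i)), which is why flips of Q become exponentially rare, while under logical or ratio semantics the exponent stays O(w) or O(w log n).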
We instantiated this model with random weights w_{T_i} and w_{F_i}, ran Gibbs sampling on it, and computed the variance of the estimated marginal probability of Q for the different models (Figure 2).
The results show that the models with logical and ratio semantics produce much lower-variance estimates than the model with linear semantics. This experiment motivates us to try to prove a bound
on the mixing time of Gibbs sampling on this model.
Theorem 1. Fix any constant β > 0, and run Gibbs sampling on the voting model with bounded
factor weights {w_{T_i}, w_{F_i}, w} ⊂ [−β, β]. For the voting model with linear semantics, the largest
[Figure 2 (plots omitted): variance of the marginal estimate for Q (log scale) vs. iterations, for linear, ratio, and logical semantics; panels "Convergence of Voting Model" for n = 50 (iterations in thousands) and n = 500 (iterations in millions).]
Figure 2: Convergence for the voting model with w = 0.5, and random prior weights in (−1, 0).
possible mixing time t_mix of any such model is t_mix = 2^{Ω(n)}. For the voting model with either
logical or ratio semantics, the largest possible mixing time is t_mix = O(n log n).
This result validates our observation that linear semantics mix poorly compared to logical and ratio
semantics. Intuitively, the reason why linear semantics performs worse is that the Gibbs sampler will
switch the state of Q only very infrequently, in fact exponentially so. This is because the energy
roughly depends linearly on the number of voters n, and therefore the probability of switching Q
depends exponentially on n. This does not happen in either the logical or ratio models.
2.2 Hypertree Width
In this section, we describe the commonly-used graph property of hypertree width, and show using
the voting example that bounding it is insufficient to ensure rapid Gibbs sampling. Hypertree width
is typically used to bound the complexity of dynamic programming algorithms on a graph; in particular, variable elimination for exact inference runs in polynomial time on factor graphs with bounded
hypertree width [8, p. 1000]. The hypertree width of a hypergraph, which we denote tw(G), is a
generalization of the notion of acyclicity; since the definition of hypertree width is technical, we
instead state the definition of an acyclic hypergraph, which is sufficient for our analysis. In order to
apply these notions to factor graphs, we can represent a factor graph as a hypergraph that has one
vertex for each node of the factor graph, and one hyperedge for each factor, where that hyperedge
contains all variables the factor depends on.
Definition 2 (Acyclic Factor Graph [6]). A join tree, also called a junction tree, of a factor graph G
is a tree T such that the nodes of T are the factors of G and, if two factors φ and ψ both depend on
the same variable x in G, then every factor on the unique path between φ and ψ in T also depends on
x. A factor graph is acyclic if it has a join tree. All acyclic graphs have hypertree width tw(G) = 1.
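Viewing a factor graph as a hypergraph with one hyperedge per factor, acyclicity can be tested with the classical GYO reduction: repeatedly delete vertices that occur in at most one hyperedge and hyperedges contained in another, and the hypergraph is acyclic exactly when everything is eliminated. A small Python sketch of ours (not from [6]):

from collections import Counter

def is_acyclic(hyperedges):
    """GYO reduction on a hypergraph given as a list of sets of variables.

    Returns True iff the hypergraph is (alpha-)acyclic, i.e. has a join tree.
    """
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        counts = Counter(v for e in edges for v in e)
        # Rule 1: drop vertices that occur in at most one hyperedge.
        for e in edges:
            lone = {v for v in e if counts[v] <= 1}
            if lone:
                e -= lone
                changed = True
        # Rule 2: drop empty hyperedges and those contained in another.
        kept = []
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

For example, is_acyclic([{"v1","v2"}, {"v2","v3"}, {"v3","v4"}]) returns True for the path graph, while the triangle [{"a","b"}, {"b","c"}, {"a","c"}] is rejected.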
Note that all trees are acyclic; in particular the voting model (with any semantics) has hypertree
width 1. Since the voting model with linear semantics and bounded weights mixes in exponential
time (Theorem 1), this means that bounding the hypertree width and the factor weights is insufficient
to ensure rapid mixing of Gibbs sampling.
2.3 Hierarchy Width
Since the hypertree width is insufficient, we define a new graph property, the hierarchy width, which,
when bounded, ensures rapid mixing of Gibbs sampling. This result is our main contribution.
Definition 3 (Hierarchy Width). The hierarchy width hw(G) of a factor graph G is defined recursively such that, for any connected factor graph G = ⟨V, Φ⟩,
hw(G) = 1 + min_{φ*∈Φ} hw(⟨V, Φ − {φ*}⟩),   (2)
and for any disconnected factor graph G with connected components G1 , G2 , . . .,
hw(G) = max_i hw(G_i).   (3)
As a base case, all factor graphs G with no factors have
hw(⟨V, ∅⟩) = 0.   (4)
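Definition 3 translates directly into a recursion. The sketch below is a brute-force reference implementation of ours, exponential in the worst case and meant only for small graphs; it is not the polynomial-time test promised by Statement 2 below.

from functools import lru_cache

def hierarchy_width(factors):
    """Brute-force hierarchy width of a factor graph.

    factors -- tuple of frozensets, one per factor, each holding its variables.
    """

    def components(fs):
        # Connected components of the factor graph induced by the factors fs.
        comps, seen = [], set()
        for i in range(len(fs)):
            if i in seen:
                continue
            stack, comp = [i], []
            seen.add(i)
            while stack:
                j = stack.pop()
                comp.append(fs[j])
                for k in range(len(fs)):
                    if k not in seen and fs[j] & fs[k]:
                        seen.add(k)
                        stack.append(k)
            comps.append(tuple(comp))
        return comps

    @lru_cache(maxsize=None)
    def hw(fs):
        if not fs:
            return 0                              # base case, Eq. (4)
        comps = components(fs)
        if len(comps) > 1:
            return max(hw(c) for c in comps)      # disconnected case, Eq. (3)
        return 1 + min(                           # connected case, Eq. (2)
            hw(tuple(f for j, f in enumerate(fs) if j != i))
            for i in range(len(fs)))

    return hw(tuple(factors))

For instance, hierarchy_width((frozenset("ab"), frozenset("bc"), frozenset("cd"))) returns 2, matching Lemma 1 below for a four-variable path.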
To develop some intuition about how to use the definition of hierarchy width, we derive the hierarchy
width of the path graph drawn in Figure 3.
[Figure 3 (diagram omitted): a path of variables v1, v2, …, vn connected by pairwise factors φ1, φ2, ….]
Figure 3: Factor graph diagram for an n-variable path graph.
Lemma 1. The path graph model has hierarchy width hw(G) = ⌈log₂ n⌉.
Proof. Let Gn denote the path graph with n variables. For n = 1, the lemma follows from (4). For
n > 1, Gn is connected, so we must compute its hierarchy width by applying (2). It turns out that
the factor that minimizes this expression is the factor in the middle, and so applying (2) followed by
(3) shows that hw(G_n) = 1 + hw(G_{⌈n/2⌉}). Applying this inductively proves the lemma.
Similarly, we are able to compute the hierarchy width of the voting model factor graphs.
Lemma 2. The voting model with logical or ratio semantics has hierarchy width hw(G) = 3.
Lemma 3. The voting model with linear semantics has hierarchy width hw(G) = 2n + 1.
These results are promising, since they separate our polynomially-mixing examples from our
exponentially-mixing examples. However, the hierarchy width of a factor graph says nothing about
the factors themselves and the functions they compute. This means that it, alone, tells us nothing
about the model; for example, any distribution can be represented by a trivial factor graph with a
single factor that contains all the variables. Therefore, in order to use hierarchy width to produce a
result about the mixing time of Gibbs sampling, we constrain the maximum weight of the factors.
Definition 4 (Maximum Factor Weight). A factor graph has maximum factor weight M , where
M = max_{φ∈Φ} ( max_I φ(I) − min_I φ(I) ).
For example, the maximum factor weight of the voting example with linear semantics is M = 2w;
with logical semantics, it is M = 2w; and with ratio semantics, it is M = 2w log(n + 1). We now
show that graphs with bounded hierarchy width and maximum factor weight mix rapidly.
Theorem 2 (Polynomial Mixing Time). If G is a factor graph with n variables, at most s states per
variable, e factors, maximum factor weight M , and hierarchy width h, then
t_mix ≤ (log(4) + n log(s) + eM) · n · exp(3hM).
In particular, if e is polynomial in n, the number of values for each variable is bounded, and hM =
O(log n), then t_mix = O(n^{O(1)}).
To show why bounding the hierarchy width is necessary for this result, we outline the proof of
Theorem 2. Our technique involves bounding the absolute spectral gap γ(G) of the transition matrix
of Gibbs sampling on graph G; there are standard results that use the absolute spectral gap to bound
the mixing time of a process [10, p. 155]. Our proof proceeds via induction using the definition of
hierarchy width and the following three lemmas.
Lemma 4 (Connected Case). Let G and Ḡ be two factor graphs with maximum factor weight M,
which differ only inasmuch as G contains a single additional factor φ*. Then,
γ(G) ≥ γ(Ḡ) exp(−3M).
Lemma 5 (Disconnected Case). Let G be a disconnected factor graph with n variables and m
connected components G1, G2, . . . , Gm with n1, n2, . . . , nm variables, respectively. Then,
γ(G) ≥ min_{i≤m} (n_i / n) γ(G_i).
Lemma 6 (Base Case). Let G be a factor graph with one variable and no factors. The absolute
spectral gap of Gibbs sampling running on G will be γ(G) = 1.
Using these Lemmas inductively, it is not hard to show that, under the conditions of Theorem 2,
γ(G) ≥ (1/n) exp(−3hM);
converting this to a bound on the mixing time produces the result of Theorem 2.
To gain more intuition about the hierarchy width, we compare its properties to those of the hypertree
width. First, we note that, when the hierarchy width is bounded, the hypertree width is also bounded.
Statement 1. For any factor graph G, tw(G) ≤ hw(G).
One of the useful properties of the hypertree width is that, for any fixed k, computing whether a
graph G has hypertree width tw(G) ≤ k can be done in polynomial time in the size of G. We show
the same is true for the hierarchy width.
Statement 2. For any fixed k, computing whether hw(G) ≤ k can be done in time polynomial in
the number of factors of G.
Finally, we note that we can also bound the hierarchy width using the degree of the factor graph.
Notice that a graph with unbounded node degree contains the voting program with linear semantics
as a subgraph. This statement shows that bounding the hierarchy width disallows such graphs.
Statement 3. Let d be the maximum degree of a variable in factor graph G. Then, hw(G) ≤ d.
3 Factor Graph Templates
Our study of hierarchy width is in part motivated by the desire to analyze the behavior of Gibbs
sampling on factor graph templates, which are common in practice and used by many state-of-the-art systems. A factor graph template is an abstract model that can be instantiated on a dataset to
produce a factor graph. The dataset consists of objects, each of which represents a thing we want to
reason about, which are divided into classes. For example, the object Bart could have class Person
and the object Twilight could have class Movie. (There are many ways to define templates; here, we
follow the formulation in Koller and Friedman [8, p. 213].)
A factor graph template consists of a set of template variables and template factors. A template
variable represents a property of a tuple of zero or more objects of particular classes. For example, we could have an IsPopular(x) template, which takes a single argument of class Movie. In
the instantiated graph, this would take the form of multiple variables like IsPopular(Twilight) or
IsPopular(Avengers). Template factors are replicated similarly to produce multiple factors in the
instantiated graph. For example, we can have a template factor
φ(TweetedAbout(x, y), IsPopular(x))
for some factor function φ. This would be instantiated to factors like
φ(TweetedAbout(Avengers, Bart), IsPopular(Avengers)).
We call the x and y in a template factor object symbols. For an instantiated factor graph with template
factors Φ, if we let A_φ denote the set of possible assignments to the object symbols in a template
factor φ, and let φ(a, I) denote the value of its factor function in world I under the object symbol
assignment a, then the standard way to define the energy function is with
ε(I) = ∑_{φ∈Φ} ∑_{a∈A_φ} w_φ φ(a, I),   (5)
where w_φ is the weight of template factor φ. This energy function results from the creation of
a single factor φ_a(I) = φ(a, I) for each object symbol assignment a of φ. Unfortunately, this
standard energy definition is not suitable for all applications. To deal with this, Shin et al. [20]
introduce the notion of a semantic function g, which counts the energy of instances of the factor
template in a non-standard way. In order to do this, they first divide the object symbols of each
template factor into two groups, the head symbols and the body symbols. When writing out factor
templates, we distinguish head symbols by writing them with a hat (like x̂). If we let H_φ denote
the set of possible assignments to the head symbols, let B_φ denote the set of possible assignments
[Figure 4 (diagram omitted): nested classes of factor graphs: bounded factor weight, bounded hypertree width, polynomial mixing time, bounded hierarchy width, hierarchical templates; the voting models (linear, logical, ratio) are located among them.]
Figure 4: Subset relationships among classes of factor graphs, and locations of examples.
to the body symbols, and let φ(h, b, I) denote the value of its factor function in world I under the
assignment (h, b), then the energy of a world is defined as
ε(I) = ∑_{φ∈Φ} ∑_{h∈H_φ} w_φ(h) g( ∑_{b∈B_φ} φ(h, b, I) ).   (6)
This results in the creation of a single factor φ_h(I) = g(∑_b φ(h, b, I)) for each assignment of the
template's head symbols. We focus on three semantic functions in particular [20]. For the first,
linear semantics, g(x) = x. This is identical to the standard semantics in (5). For the second,
logical semantics, g(x) = sgn(x). For the third, ratio semantics, g(x) = sgn(x) log(1 + |x|). These
semantics are analogous to the different semantics used in our voting example. Shin et al. [20]
exhibit several classification problems where using logical or ratio semantics gives better F1 scores.
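In code, the three semantic functions are one-liners; a NumPy sketch (names ours):

import numpy as np

def g_linear(x):
    return x

def g_logical(x):
    return np.sign(x)

def g_ratio(x):
    return np.sign(x) * np.log1p(np.abs(x))

Each template factor then contributes w_φ(h) · g(∑_b φ(h, b, I)) to the energy, as in Eq. (6).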
3.1 Hierarchical Factor Graphs
In this section, we outline a class of templates, hierarchical templates, that have bounded hierarchy
width. We focus on models that have hierarchical structure in their template factors; for example,
φ(A(x̂, ŷ, z), B(x̂, ŷ), Q(x̂, ŷ))   (7)
should have hierarchical structure, while
φ(A(z), B(x̂), Q(x̂, y))   (8)
should not. Armed with this intuition, we give the following definitions.
Definition 5 (Hierarchy Depth). A template factor ? has hierarchy depth d if the first d object
symbols that appear in each of its terms are the same. We call these symbols hierarchical symbols.
For example, (7) has hierarchy depth 2, and x̂ and ŷ are hierarchical symbols; also, (8) has hierarchy
depth 0, and no hierarchical symbols.
Definition 6 (Hierarchical). We say that a template factor is hierarchical if all of its head symbols
are hierarchical symbols. For example, (7) is hierarchical, while (8) is not. We say that a factor
graph template is hierarchical if all its template factors are hierarchical.
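Definitions 5 and 6 admit a direct syntactic check. In the sketch below, a template factor is a list of argument tuples, one per term, and head_symbols lists its head symbols; this representation is ours, chosen for illustration.

def hierarchy_depth(terms):
    """Largest d such that all terms share the same symbols in positions 0..d-1.

    terms: list of argument tuples, e.g. [("x", "y", "z"), ("x", "y")].
    """
    d = 0
    while all(len(t) > d for t in terms) and len({t[d] for t in terms}) == 1:
        d += 1
    return d

def is_hierarchical(terms, head_symbols):
    """True iff every head symbol is one of the shared hierarchical symbols."""
    d = hierarchy_depth(terms)
    hierarchical = set(terms[0][:d]) if terms else set()
    return set(head_symbols) <= hierarchical

On example (7), terms [("x","y","z"), ("x","y"), ("x","y")] with heads {"x","y"} give depth 2 and True; on example (8), terms [("z",), ("x",), ("x","y")] with head {"x"} give depth 0 and False.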
We can explicitly bound the hierarchy width of instances of hierarchical factor graphs.
Lemma 7. If G is an instance of a hierarchical template with E template factors, then hw(G) ≤ E.
We would now like to use Theorem 2 to prove a bound on the mixing time; this requires us to
bound the maximum factor weight of the graph. Unfortunately, for linear semantics, the maximum
factor weight of a graph is potentially O(n), so applying Theorem 2 won't get us useful results.
Fortunately, for logical or ratio semantics, hierarchical factor graphs do mix in polynomial time.
Statement 4. For any fixed hierarchical factor graph template 𝒢, if G is an instance of 𝒢 with
bounded weights using either logical or ratio semantics, then the mixing time of Gibbs sampling on
G is polynomial in the number of objects n in its dataset. That is, t_mix = O(n^{O(1)}).
So, if we want to construct models with Gibbs samplers that mix rapidly, one way to do it is with
hierarchical factor graph templates using logical or ratio semantics.
4 Experiments
Synthetic Data We constructed a synthetic dataset by using an ensemble of Ising model graphs
each with 360 nodes, 359 edges, and treewidth 1, but with different hierarchy widths. These graphs
[Figure 5 (plots omitted): (a) "Errors of Marginal Estimates for Synthetic Ising Model": square error (log scale) vs. hierarchy width, for w = 0.5, 0.7, 0.9; (b) "Max Error of Marginal Estimate for KBP Dataset": mean square error vs. iterations per variable, for linear, ratio, and logical semantics.]
(a) Error of marginal estimates for synthetic Ising model after 10^5 samples. (b) Maximum error of marginal estimates for KBP dataset after some number of samples.
Figure 5: Experiments illustrate how convergence is affected by hierarchy width and semantics.
ranged from the star graph (like in Figure 1(a)) to the path graph; and each had different hierarchy
width. For each graph, we were able to calculate the exact true marginal of each variable because
of the small tree-width. We then ran Gibbs sampling on each graph, and calculated the error of the
marginal estimate of a single arbitrarily-chosen query variable. Figure 5(a) shows the result with
different weights and hierarchy width. It shows that, even for tree graphs with the same number of
nodes and edges, the mixing time can still vary depending on the hierarchy width of the model.
Real-World Applications We observed that the hierarchical templates that we focus on in this
work appear frequently in real applications. For example, all five knowledge base population (KBP)
systems illustrated by Shin et al. [20] contain subgraphs that are grounded by hierarchical templates.
Moreover, sometimes a factor graph is solely grounded by hierarchical templates, and thus provably
mixes rapidly by our theorem while achieving high quality. To validate this, we constructed a hierarchical template for the Paleontology application used by Peters et al. [17]. We found that when
using the ratio semantic, we were able to get an F1 score of 0.86 with precision of 0.96. On the
same task, this quality is actually higher than professional human volunteers [17]. For comparison,
the linear semantic achieved an F1 score of 0.76 and the logical achieved 0.73.
The factor graph we used in this Paleontology application is large enough that it is intractable, using
exact inference, to estimate the true marginal to investigate the mixing behavior. Therefore, we
chose a subgraph of a KBP system used by Shin et al. [20] that can be grounded by a hierarchical
template and chose a setting of the weight such that the true marginal was 0.5 for all variables. We
then ran Gibbs sampling on this subgraph and report the average error of the marginal estimation in
Figure 5(b). Our results illustrate the effect of changing the semantic on a more complicated model
from a real application, and show similar behavior to our simple voting example.
5 Conclusion
This paper showed that for a class of factor graph templates, hierarchical templates, Gibbs sampling
mixes in polynomial time. It also introduced the graph property hierarchy width, and showed that
for graphs of bounded factor weight and hierarchy width, Gibbs sampling converges rapidly. These
results may aid in better understanding the behavior of Gibbs sampling for both template and general
factor graphs.
Acknowledgments
Thanks to Stefano Ermon and Percy Liang for helpful conversations.
The authors acknowledge the support of: DARPA FA8750-12-2-0335; NSF IIS-1247701; NSF CCF-1111943;
DOE 108845; NSF CCF-1337375; DARPA FA8750-13-2-0039; NSF IIS-1353606; ONR N000141210041
and N000141310129; NIH U54EB020405; Oracle; NVIDIA; Huawei; SAP Labs; Sloan Research Fellowship;
Moore Foundation; American Family Insurance; Google; and Toshiba.
References
[1] Venkat Chandrasekaran, Nathan Srebro, and Prahladh Harsha. Complexity of inference in graphical models. arXiv preprint arXiv:1206.3240, 2012.
[2] Persi Diaconis, Kshitij Khare, and Laurent Saloff-Coste. Gibbs sampling, exponential families and orthogonal polynomials. Statist. Sci., 23(2):151–178, May 2008.
[3] Persi Diaconis, Kshitij Khare, and Laurent Saloff-Coste. Gibbs sampling, conjugate priors and coupling. Sankhya A, (1):136–169, 2010.
[4] Pedro Domingos and William Austin Webb. A tractable first-order probabilistic logic. In AAAI, 2012.
[5] Joseph Gonzalez, Yucheng Low, Arthur Gretton, and Carlos Guestrin. Parallel Gibbs sampling: From colored fields to thin junction trees. In AISTATS, pages 324–332, 2011.
[6] Georg Gottlob, Gianluigi Greco, and Francesco Scarcello. Treewidth and hypertree width. Tractability: Practical Approaches to Hard Problems, page 1, 2014.
[7] Alexander T Ihler, John Iii, and Alan S Willsky. Loopy belief propagation: Convergence and effects of message errors. In Journal of Machine Learning Research, pages 905–936, 2005.
[8] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[9] Johan Kwisthout, Hans L Bodlaender, and Linda C van der Gaag. The necessity of bounded treewidth for efficient inference in Bayesian networks. In ECAI, pages 237–242, 2010.
[10] David Asher Levin, Yuval Peres, and Elizabeth Lee Wilmer. Markov chains and mixing times. American Mathematical Soc., 2009.
[11] Xianghang Liu and Justin Domke. Projecting Markov random field parameters for fast mixing. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1377–1385. Curran Associates, Inc., 2014.
[12] David Lunn, David Spiegelhalter, Andrew Thomas, and Nicky Best. The BUGS project: evolution, critique and future directions. Statistics in Medicine, (25):3049–3067, 2009.
[13] Dániel Marx. Tractable hypergraph properties for constraint satisfaction and conjunctive queries. Journal of the ACM (JACM), (6):42, 2013.
[14] Andrew McCallum, Karl Schultz, and Sameer Singh. Factorie: Probabilistic programming via imperatively defined factor graphs. In NIPS, pages 1249–1257, 2009.
[15] David Newman, Padhraic Smyth, Max Welling, and Arthur U Asuncion. Distributed inference for latent Dirichlet allocation. In NIPS, pages 1081–1088, 2007.
[16] Kee Siong Ng, John W Lloyd, and William TB Uther. Probabilistic modelling, inference and learning using logical theories. Annals of Mathematics and Artificial Intelligence, (1-3):159–205, 2008.
[17] Shanan E Peters, Ce Zhang, Miron Livny, and Christopher Ré. A machine reading system for assembling synthetic Paleontological databases. PLoS ONE, 2014.
[18] David Poole. First-order probabilistic inference. In IJCAI, pages 985–991. Citeseer, 2003.
[19] Neil Robertson and Paul D. Seymour. Graph minors. II. Algorithmic aspects of tree-width. Journal of Algorithms, (3):309–322, 1986.
[20] Jaeho Shin, Sen Wu, Feiran Wang, Christopher De Sa, Ce Zhang, Feiran Wang, and Christopher Ré. Incremental knowledge base construction using DeepDive. PVLDB, 2015.
[21] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In AAAI, pages 1094–1099, 2008.
[22] Alexander Smola and Shravan Narayanamurthy. An architecture for parallel topic models. PVLDB, 2010.
[23] Dan Suciu, Dan Olteanu, Christopher Ré, and Christoph Koch. Probabilistic databases. Synthesis Lectures on Data Management, (2):1–180, 2011.
[24] Mihai Surdeanu and Heng Ji. Overview of the English slot filling track at the TAC2014 knowledge base population evaluation.
[25] Lucas Theis, Jascha Sohl-Dickstein, and Matthias Bethge. Training sparse natural image models with a fast Gibbs sampler of an extended state space. In NIPS, pages 1124–1132, 2012.
[26] Deepak Venugopal and Vibhav Gogate. On lifting the Gibbs sampling algorithm. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, NIPS, pages 1655–1663. Curran Associates, Inc., 2012.
[27] Deepak Venugopal, Somdeb Sarkhel, and Vibhav Gogate. Just count the satisfied groundings: Scalable local-search and sampling based inference in MLNs. In AAAI Conference on Artificial Intelligence, 2015.
[28] Ce Zhang and Christopher Ré. DimmWitted: A study of main-memory statistical analytics. PVLDB, 2014.
Automatic Variational Inference in Stan
Rajesh Ranganath
Princeton University
rajeshr@cs.princeton.edu
Alp Kucukelbir
Columbia University
alp@cs.columbia.edu
David M. Blei
Columbia University
david.blei@columbia.edu
Andrew Gelman
Columbia University
gelman@stat.columbia.edu
Abstract
Variational inference is a scalable technique for approximate Bayesian inference.
Deriving variational inference algorithms requires tedious model-specific calculations; this makes it difficult for non-experts to use. We propose an automatic variational inference algorithm, automatic differentiation variational inference (advi);
we implement it in Stan (code available), a probabilistic programming system. In
advi the user provides a Bayesian model and a dataset, nothing else. We make
no conjugacy assumptions and support a broad class of models. The algorithm
automatically determines an appropriate variational family and optimizes the variational objective. We compare advi to mcmc sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model.
We train the mixture model on a quarter million images. With advi we can use
variational inference on any model we write in Stan.
1 Introduction
Bayesian inference is a powerful framework for analyzing data. We design a model for data using
latent variables; we then analyze data by calculating the posterior density of the latent variables. For
machine learning models, calculating the posterior is often difficult; we resort to approximation.
Variational inference (vi) approximates the posterior with a simpler distribution [1, 2]. We search
over a family of simple distributions and find the member closest to the posterior. This turns approximate inference into optimization. vi has had a tremendous impact on machine learning; it is
typically faster than Markov chain Monte Carlo (mcmc) sampling (as we show here too) and has
recently scaled up to massive data [3].
Unfortunately, vi algorithms are difficult to derive. We must first define the family of approximating
distributions, and then calculate model-specific quantities relative to that family to solve the variational optimization problem. Both steps require expert knowledge. The resulting algorithm is tied to
both the model and the chosen approximation.
In this paper we develop a method for automating variational inference, automatic differentiation
variational inference (advi). Given any model from a wide class (specifically, probability models
differentiable with respect to their latent variables), advi determines an appropriate variational family and an algorithm for optimizing the corresponding variational objective. We implement advi in
Stan [4], a flexible probabilistic programming system. Stan describes a high-level language to define
probabilistic models (e.g., Figure 2) as well as a model compiler, a library of transformations, and an
efficient automatic differentiation toolbox. With advi we can now use variational inference on any
model we write in Stan.1 (See Appendices F to J.)
1 advi is available in Stan 2.8. See Appendix C.
[Figure 1 (plots omitted): average log predictive vs. seconds (log scale); (a) advi vs. nuts [5] on a subset of 1000 images; (b) minibatch sizes B = 50, 100, 500, 1000 on the full dataset of 250 000 images.]
Figure 1: Held-out predictive accuracy results for a Gaussian mixture model (gmm) of the imageCLEF image histogram dataset. (a) advi outperforms the no-U-turn sampler (nuts), the default sampling method in Stan [5]. (b) advi scales to large datasets by subsampling minibatches of size B from the dataset at each iteration [3]. We present more details in Section 3.3 and Appendix J.
Figure 1 illustrates the advantages of our method. Consider a nonconjugate Gaussian mixture model
for analyzing natural images; this is 40 lines in Stan (Figure 10). Figure 1a illustrates Bayesian
inference on 1000 images. The y-axis is held-out likelihood, a measure of model fitness; the x-axis is time on a log scale. advi is orders of magnitude faster than nuts, a state-of-the-art mcmc
algorithm (and Stan's default inference technique) [5]. We also study nonconjugate factorization
models and hierarchical generalized linear models in Section 3.
Figure 1b illustrates Bayesian inference on 250 000 images, the size of data we more commonly find in
machine learning. Here we use advi with stochastic variational inference [3], giving an approximate
posterior in under two hours. For data like these, mcmc techniques cannot complete the analysis.
Related work. advi automates variational inference within the Stan probabilistic programming
system [4]. This draws on two major themes.
The first is a body of work that aims to generalize vi. Kingma and Welling [6] and Rezende et al.
[7] describe a reparameterization of the variational problem that simplifies optimization. Ranganath
et al. [8] and Salimans and Knowles [9] propose a black-box technique, one that only requires the
model and the gradient of the approximating family. Titsias and Lázaro-Gredilla [10] leverage the
gradient of the joint density for a small class of models. Here we build on and extend these ideas to
automate variational inference; we highlight technical connections as we develop the method.
The second theme is probabilistic programming. Wingate and Weber [11] study vi in general probabilistic programs, as supported by languages like Church [12], Venture [13], and Anglican [14]. Another probabilistic programming system is infer.NET, which implements variational message passing
[15], an efficient algorithm for conditionally conjugate graphical models. Stan supports a more comprehensive class of nonconjugate models with differentiable latent variables; see Section 2.1.
2 Automatic Differentiation Variational Inference
Automatic differentiation variational inference (advi) follows a straightforward recipe. First we
transform the support of the latent variables to the real coordinate space. For example, the logarithm
transforms a positive variable, such as a standard deviation, to the real line. Then we posit a Gaussian
variational distribution to approximate the posterior. This induces a non-Gaussian approximation in
the original variable space. Last we combine automatic differentiation with stochastic optimization
to maximize the variational objective. We begin by defining the class of models we support.
2.1 Differentiable Probability Models
Consider a dataset X = x_{1:N} with N observations. Each x_n is a discrete or continuous random vector. The likelihood p(X | θ) relates the observations to a set of latent random variables θ. Bayesian
[Figure 2, left (graphic omitted): graphical model with the latent rate θ (Weibull prior, parameters 1.5 and 1) and observed counts x_n, n = 1, …, N. Right: the Stan program:]
data {
  int N;          // number of observations
  int x[N];       // discrete-valued observations
}
parameters {
  // latent variable, must be positive
  real<lower=0> theta;
}
model {
  // non-conjugate prior for latent variable
  theta ~ weibull(1.5, 1);
  // likelihood
  for (n in 1:N)
    x[n] ~ poisson(theta);
}
Figure 2: Specifying a simple nonconjugate probability model in Stan.
analysis posits a prior density p(θ) on the latent variables. Combining the likelihood with the prior
gives the joint density p(X, θ) = p(X | θ) p(θ).
We focus on approximate inference for differentiable probability models. These models have continuous latent variables θ. They also have a gradient of the log-joint with respect to the latent variables,
∇_θ log p(X, θ). The gradient is valid within the support of the prior, supp(p(θ)) = {θ | θ ∈
R^K and p(θ) > 0} ⊆ R^K, where K is the dimension of the latent variable space. This support set
is important: it determines the support of the posterior density and plays a key role later in the paper.
We make no assumptions about conjugacy, either full or conditional.2
For example, consider a model that contains a Poisson likelihood with unknown rate, p(x | θ). The
observed variable x is discrete; the latent rate θ is continuous and positive. Place a Weibull prior
on θ, defined over the positive real numbers. The resulting joint density describes a nonconjugate
differentiable probability model. (See Figure 2.) Its partial derivative ∂/∂θ p(x, θ) is valid within the
support of the Weibull distribution, supp(p(θ)) = R⁺ ⊂ R. Because this model is nonconjugate, the
posterior is not a Weibull distribution. This presents a challenge for classical variational inference.
In Section 2.3, we will see how advi handles this model.
Many machine learning models are differentiable. For example: linear and logistic regression, matrix
factorization with continuous or discrete measurements, linear dynamical systems, and Gaussian processes. Mixture models, hidden Markov models, and topic models have discrete random variables.
Marginalizing out these discrete variables renders these models differentiable. (We show an example
in Section 3.3.) However, marginalization is not tractable for all models, such as the Ising model,
sigmoid belief networks, and (untruncated) Bayesian nonparametric models.
2.2 Variational Inference
Bayesian inference requires the posterior density p(θ | X), which describes how the latent variables
vary when conditioned on a set of observations X. Many posterior densities are intractable because
their normalization constants lack closed forms. Thus, we seek to approximate the posterior.
Consider an approximating density q(θ; φ) parameterized by φ. We make no assumptions about its
shape or support. We want to find the parameters of q(θ; φ) to best match the posterior according to
some loss function. Variational inference (vi) minimizes the Kullback-Leibler (kl) divergence from
the approximation to the posterior [2],
φ* = arg min_φ KL( q(θ; φ) ‖ p(θ | X) ).   (1)
Typically the kl divergence also lacks a closed form. Instead we maximize the evidence lower bound
(elbo), a proxy to the kl divergence,
L(φ) = E_{q(θ)}[ log p(X, θ) ] − E_{q(θ)}[ log q(θ; φ) ].
The first term is an expectation of the joint density under the approximation, and the second is the
entropy of the variational density. Maximizing the elbo minimizes the kl divergence [1, 16].
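When both densities can be evaluated pointwise, the elbo has a direct Monte Carlo estimate; a minimal sketch, where log_joint, sample_q, and log_q are placeholder callables supplied by the model (not part of any Stan interface):

import numpy as np

def elbo_estimate(log_joint, sample_q, log_q, num_samples, rng):
    """Monte Carlo estimate of E_q[log p(X, theta)] - E_q[log q(theta)]."""
    draws = [sample_q(rng) for _ in range(num_samples)]
    return np.mean([log_joint(th) - log_q(th) for th in draws])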
2 The posterior of a fully conjugate model is in the same family as the prior; a conditionally conjugate model
has this property within the complete conditionals of the model [3].
The minimization problem from Eq. (1) becomes
φ* = arg max_φ L(φ) such that supp(q(θ; φ)) ⊆ supp(p(θ | X)).   (2)
We explicitly specify the support-matching constraint implied in the kl divergence.3 We highlight
this constraint, as we do not specify the form of the variational approximation; thus we must ensure
that q(θ; φ) stays within the support of the posterior, which is defined by the support of the prior.
Why is vi difficult to automate? In classical variational inference, we typically design a conditionally conjugate model. Then the optimal approximating family matches the prior. This satisfies the
support constraint by definition [16]. When we want to approximate models that are not conditionally conjugate, we carefully study the model and design custom approximations. These depend on
the model and on the choice of the approximating density.
One way to automate vi is to use black-box variational inference [8, 9]. If we select a density whose
support matches the posterior, then we can directly maximize the elbo using Monte Carlo (mc)
integration and stochastic optimization. Another strategy is to restrict the class of models and use a
fixed variational approximation [10]. For instance, we may use a Gaussian density for inference in
unrestrained differentiable probability models, i.e., where supp(p(θ)) = R^K.
We adopt a transformation-based approach. First we automatically transform the support of the latent
variables in our model to the real coordinate space. Then we posit a Gaussian variational density. The
transformation induces a non-Gaussian approximation in the original variable space and guarantees
that it stays within the support of the posterior. Here is how it works.
2.3 Automatic Transformation of Constrained Variables
Begin by transforming the support of the latent variables such that they live in the real coordinate
space R^K. Define a one-to-one differentiable function T : supp(p(θ)) → R^K and identify the
transformed variables as ζ = T(θ). The transformed joint density g(X, ζ) is
g(X, ζ) = p(X, T⁻¹(ζ)) |det J_{T⁻¹}(ζ)|,
where p is the joint density in the original latent variable space, and J_{T⁻¹} is the Jacobian of the
inverse of T . Transformations of continuous probability densities require a Jacobian; it accounts for
how the transformation warps unit volumes [17]. (See Appendix D.)
Consider again our running example. The rate θ lives in R⁺. The logarithm ζ = T(θ) = log(θ)
transforms R⁺ to the real line R. Its Jacobian adjustment is the derivative of the inverse of the
logarithm, |det J_{T⁻¹}(ζ)| = exp(ζ). The transformed density is
g(x, ζ) = Poisson(x | exp(ζ)) Weibull(exp(ζ); 1.5, 1) exp(ζ).
Figures 3a and 3b depict this transformation.
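This transformed log density is easy to evaluate directly; a sketch with SciPy (the helper name log_g is ours):

import numpy as np
from scipy import stats

def log_g(x, zeta):
    """log g(x, zeta) for the running example, zeta = log(theta)."""
    theta = np.exp(zeta)
    return (stats.poisson.logpmf(x, theta)
            + stats.weibull_min.logpdf(theta, c=1.5, scale=1.0)
            + zeta)  # log |det J_{T^{-1}}(zeta)| = zeta for T = log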
As we describe in the introduction, we implement our algorithm in Stan to enable generic inference.
Stan implements a model compiler that automatically handles transformations. It works by applying
a library of transformations and their corresponding Jacobians to the joint model density.4 This
transforms the joint density of any differentiable probability model to the real coordinate space. Now
we can choose a variational distribution independent from the model.
2.4 Implicit Non-Gaussian Variational Approximation
After the transformation, the latent variables have support on R^K. We posit a diagonal (mean-field)
Gaussian variational approximation
q(ζ; φ) = N(ζ; μ, σ²) = ∏_{k=1}^{K} N(ζ_k; μ_k, σ_k²).
3 If supp(q) ⊄ supp(p), then outside the support of p we have KL(q ‖ p) = E_q[log q] − E_q[log p] = ∞.
4 Stan provides transformations for upper and lower bounds, simplex and ordered vectors, and structured
matrices such as covariance matrices and Cholesky factors [4].
[Figure 3 (plots omitted): three density panels showing the prior, posterior, and approximation in (a) the latent variable space, (b) the real coordinate space, and (c) the standardized space.]
Figure 3: Transformations for advi. The purple line is the posterior. The green line is the approximation. (a) The latent variable space is R⁺. (a→b) T transforms the latent variable space to R. (b) The variational approximation is a Gaussian. (b→c) S_{μ,ω} absorbs the parameters of the Gaussian. (c) We maximize the elbo in the standardized space, with a fixed standard Gaussian approximation.
The vector φ = (μ_1, …, μ_K, σ_1, …, σ_K) contains the mean and standard deviation of each Gaussian factor. This defines our variational approximation in the real coordinate space. (Figure 3b.)
The transformation T maps the support of the latent variables to the real coordinate space; its inverse
T⁻¹ maps back to the support of the latent variables. This implicitly defines the variational approximation in the original latent variable space as q(T(θ); φ) |det J_T(θ)|. The transformation ensures
that the support of this approximation is always bounded by that of the true posterior in the original
latent variable space (Figure 3a). Thus we can freely optimize the elbo in the real coordinate space
(Figure 3b) without worrying about the support matching constraint.
The elbo in the real coordinate space is
L(μ, σ) = E_{q(ζ)}[ log p(X, T⁻¹(ζ)) + log |det J_{T⁻¹}(ζ)| ] + (K/2)(1 + log(2π)) + ∑_{k=1}^{K} log σ_k,
where we plug in the analytic form of the Gaussian entropy. (The derivation is in Appendix A.)
We choose a diagonal Gaussian for efficiency. This choice may call to mind the Laplace approximation technique, where a second-order Taylor expansion around the maximum-a-posteriori estimate
gives a Gaussian approximation to the posterior. However, using a Gaussian variational approximation is not equivalent to the Laplace approximation [18]. The Laplace approximation relies on maximizing the probability density; it fails with densities that have discontinuities on its boundary. The
Gaussian approximation considers probability mass; it does not suffer this degeneracy. Furthermore,
our approach is distinct in another way: because of the transformation, the posterior approximation
in the original latent variable space (Figure 3a) is non-Gaussian.
2.5 Automatic Differentiation for Stochastic Optimization
We now maximize the elbo in real coordinate space,
μ*, σ* = arg max_{μ,σ} L(μ, σ) such that σ ≻ 0.   (3)
We use gradient ascent to reach a local maximum of the elbo. Unfortunately, we cannot apply automatic differentiation to the elbo in this form. This is because the expectation defines an intractable
integral that depends on μ and σ; we cannot directly represent it as a computer program. Moreover, the standard deviations in σ must remain positive. Thus, we employ one final transformation:
elliptical standardization5 [19], shown in Figures 3b and 3c.
First re-parameterize the Gaussian distribution with the log of the standard deviation, ω = log(σ),
applied element-wise. The support of ω is now the real coordinate space and σ is always positive.
Then define the standardization η = S_{μ,ω}(ζ) = diag(exp(ω))⁻¹ (ζ − μ). The standardization
5 Also known as a 'co-ordinate transformation' [7], an 'invertible transformation' [10], and the 'reparameterization trick' [6].
Algorithm 1: Automatic differentiation variational inference (advi)
Input: Dataset X = x_{1:N}, model p(X, θ).
Set iteration counter i = 0 and choose a stepsize sequence ρ^{(i)}.
Initialize μ^{(0)} = 0 and ω^{(0)} = 0.
while change in elbo is above some threshold do
    Draw M samples η_m ∼ N(0, I) from the standard multivariate Gaussian.
    Invert the standardization: ζ_m = diag(exp(ω^{(i)})) η_m + μ^{(i)}.
    Approximate ∇_μ L and ∇_ω L using mc integration (Eqs. (4) and (5)).
    Update μ^{(i+1)} ← μ^{(i)} + ρ^{(i)} ∇_μ L and ω^{(i+1)} ← ω^{(i)} + ρ^{(i)} ∇_ω L.
    Increment iteration counter.
end
Return μ* ← μ^{(i)} and ω* ← ω^{(i)}.
The standardization encapsulates the variational parameters and gives the fixed density
$$
q(\zeta;\,0,\,I) = \mathcal{N}(\zeta;\,0,\,I) = \prod_{k=1}^{K} \mathcal{N}(\zeta_k;\,0,\,1).
$$
The standardization transforms the variational problem from Eq. (3) into
$$
\mu^{*},\,\omega^{*} = \arg\max_{\mu,\omega}\; \mathbb{E}_{\mathcal{N}(\zeta;\,0,\,I)}\!\left[\log p\!\left(X,\,T^{-1}\!\left(S^{-1}_{\mu,\omega}(\zeta)\right)\right) + \log\left|\det J_{T^{-1}}\!\left(S^{-1}_{\mu,\omega}(\zeta)\right)\right|\right] + \sum_{k=1}^{K}\omega_k,
$$
where we drop constant terms from the calculation. This expectation is with respect to a standard Gaussian and the parameters μ and ω are both unconstrained (Figure 3c). We push the gradient inside the expectations and apply the chain rule to get
$$
\nabla_{\mu}\mathcal{L} = \mathbb{E}_{\mathcal{N}(\zeta)}\!\left[\nabla_{\theta}\log p(X,\theta)\,\nabla_{\zeta}T^{-1}(\zeta) + \nabla_{\zeta}\log\left|\det J_{T^{-1}}(\zeta)\right|\right], \tag{4}
$$
$$
\nabla_{\omega_k}\mathcal{L} = \mathbb{E}_{\mathcal{N}(\zeta_k)}\!\left[\left(\nabla_{\theta_k}\log p(X,\theta)\,\nabla_{\zeta_k}T^{-1}(\zeta) + \nabla_{\zeta_k}\log\left|\det J_{T^{-1}}(\zeta)\right|\right)\zeta_k\exp(\omega_k)\right] + 1. \tag{5}
$$
(The derivations are in Appendix B.)
We can now compute the gradients inside the expectation with automatic differentiation. The only
thing left is the expectation. mc integration provides a simple approximation: draw M samples from
the standard Gaussian and evaluate the empirical mean of the gradients within the expectation [20].
This gives unbiased noisy gradients of the elbo for any differentiable probability model. We can
now use these gradients in a stochastic optimization routine to automate variational inference.
2.6 Automatic Variational Inference
Equipped with unbiased noisy gradients of the elbo, advi implements stochastic gradient ascent
(Algorithm 1). We ensure convergence by choosing a decreasing step-size sequence. In practice, we
use an adaptive sequence [21] with finite memory. (See Appendix E for details.)
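As a rough, self-contained sketch of how Algorithm 1 and the estimators in Eqs. (4) and (5) fit together, the following Python loop assumes the user supplies the gradient of the transformed log joint (Stan obtains it by automatic differentiation; everything else here is an illustrative simplification):

    import numpy as np

    def advi(grad_log_joint, K, iters=1000, M=10, rho=0.05, seed=0):
        # grad_log_joint(eta) returns the gradient w.r.t. eta of
        # log p(X, T^{-1}(eta)) + log|det J_{T^{-1}}(eta)|.
        rng = np.random.default_rng(seed)
        mu, omega = np.zeros(K), np.zeros(K)
        for _ in range(iters):
            zeta = rng.standard_normal((M, K))      # zeta_m ~ N(0, I)
            eta = np.exp(omega) * zeta + mu         # invert standardization
            g = np.array([grad_log_joint(e) for e in eta])
            grad_mu = g.mean(axis=0)                # mc estimate of Eq. (4)
            grad_omega = (g * zeta).mean(axis=0) * np.exp(omega) + 1.0  # Eq. (5)
            mu += rho * grad_mu                     # fixed stepsize for brevity
            omega += rho * grad_omega
        return mu, omega

A fixed stepsize rho stands in for the adaptive sequence of [21]; swapping it in does not change the structure of the loop.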
advi has complexity O(2NMK) per iteration, where M is the number of mc samples (typically between 1 and 10). Coordinate ascent vi has complexity O(2NK) per pass over the dataset. We scale advi to large datasets using stochastic optimization [3, 10]. The adjustment to Algorithm 1 is simple: sample a minibatch of size B ≪ N from the dataset and scale the likelihood of the sampled minibatch by N/B [3]. The stochastic extension of advi has per-iteration complexity O(2BMK).
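The scaling step is small enough to show directly; this sketch assumes generic log_lik and log_prior callables (hypothetical names, not Stan's API):

    import numpy as np

    def stochastic_log_joint(theta, X, B, rng, log_lik, log_prior):
        # Scale the minibatch log-likelihood by N/B so its expectation
        # matches the full-data log-likelihood, then add the prior once.
        N = len(X)
        batch = X[rng.choice(N, size=B, replace=False)]
        return (N / B) * sum(log_lik(theta, x) for x in batch) + log_prior(theta)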
[Figure 4 plots: (a) Linear Regression with ard; (b) Hierarchical Logistic Regression. Each panel shows average log predictive vs. seconds for advi (M=1), advi (M=10), nuts, and hmc.]
Figure 4: Hierarchical generalized linear models. Comparison of advi to mcmc: held-out predictive likelihood as a function of wall time.
3 Empirical Study
We now study advi across a variety of models. We compare its speed and accuracy to two Markov chain Monte Carlo (mcmc) sampling algorithms: Hamiltonian Monte Carlo (hmc) [22] and the no-U-turn sampler (nuts)⁶ [5]. We assess advi convergence by tracking the elbo. To place advi and mcmc on a common scale, we report predictive likelihood on held-out data as a function of time. We approximate the posterior predictive likelihood using a mc estimate. For mcmc, we plug in posterior samples. For advi, we draw samples from the posterior approximation during the optimization. We initialize advi with a draw from a standard Gaussian.
We explore two hierarchical regression models, two matrix factorization models, and a mixture
model. All of these models have nonconjugate prior structures. We conclude by analyzing a dataset
of 250 000 images, where we report results across a range of minibatch sizes B.
3.1 A Comparison to Sampling: Hierarchical Regression Models
We begin with two nonconjugate regression models: linear regression with automatic relevance determination (ard) [16] and hierarchical logistic regression [23].
Linear Regression with ard. This is a sparse linear regression model with a hierarchical prior
structure. (Details in Appendix F.) We simulate a dataset with 250 regressors such that half of the
regressors have no predictive power. We use 10 000 training samples and hold out 1000 for testing.
Logistic Regression with Spatial Hierarchical Prior. This is a hierarchical logistic regression
model from political science. The prior captures dependencies, such as states and regions, in a
polling dataset from the United States 1988 presidential election [23]. (Details in Appendix G.)
We train using 10 000 data points and withhold 1536 for evaluation. The regressors contain age,
education, state, and region indicators. The dimension of the regression problem is 145.
Results. Figure 4 plots average log predictive accuracy as a function of time. For these simple
models, all methods reach the same predictive accuracy. We study advi with two settings of M , the
number of mc samples used to estimate gradients. A single sample per iteration is sufficient; it is
also the fastest. (We set M = 1 from here on.)
3.2 Exploring Nonconjugacy: Matrix Factorization Models
We continue by exploring two nonconjugate non-negative matrix factorization models: a constrained
Gamma Poisson model [24] and a Dirichlet Exponential model. Here, we show how easy it is to
explore new models using advi. In both models, we use the Frey Face dataset, which contains 1956
frames (28 × 20 pixels) of facial expressions extracted from a video sequence.
Constrained Gamma Poisson. This is a Gamma Poisson factorization model with an ordering
constraint: each row of the Gamma matrix goes from small to large values. (Details in Appendix H.)
⁶ nuts is an adaptive extension of hmc. It is the default sampler in Stan.
[Figure 5 plots: (a) Gamma Poisson predictive likelihood; (b) Dirichlet Exponential predictive likelihood; (c) Gamma Poisson factors; (d) Dirichlet Exponential factors. Panels (a) and (b) show average log predictive vs. seconds for advi and nuts.]
Figure 5: Non-negative matrix factorization of the Frey Faces dataset. Comparison of advi to
mcmc: held-out predictive likelihood as a function of wall time.
Dirichlet Exponential. This is a nonconjugate Dirichlet Exponential factorization model with a
Poisson likelihood. (Details in Appendix I.)
Results. Figure 5 shows average log predictive accuracy as well as ten factors recovered from both
models. advi provides an order of magnitude speed improvement over nuts (Figure 5a). nuts
struggles with the Dirichlet Exponential model (Figure 5b). In both cases, hmc does not produce
any useful samples within a budget of one hour; we omit hmc from the plots.
3.3 Scaling to Large Datasets: Gaussian Mixture Model
We conclude with the Gaussian mixture model (gmm) example we highlighted earlier. This is a
nonconjugate gmm applied to color image histograms. We place a Dirichlet prior on the mixture
proportions, a Gaussian prior on the component means, and a lognormal prior on the standard deviations. (Details in Appendix J.) We explore the imageclef dataset, which has 250 000 images [25].
We withhold 10 000 images for evaluation.
In Figure 1a we randomly select 1000 images and train a model with 10 mixture components. nuts
struggles to find an adequate solution and hmc fails altogether. This is likely due to label switching,
which can affect hmc-based techniques in mixture models [4].
Figure 1b shows advi results on the full dataset. Here we use advi with stochastic subsampling
of minibatches from the dataset [3]. We increase the number of mixture components to 30. With a
minibatch size of 500 or larger, advi reaches high predictive accuracy. Smaller minibatch sizes lead
to suboptimal solutions, an effect also observed in [3]. advi converges in about two hours.
4 Conclusion
We develop automatic differentiation variational inference (advi) in Stan. advi leverages automatic
transformations, an implicit non-Gaussian variational approximation, and automatic differentiation.
This is a valuable tool. We can explore many models and analyze large datasets with ease. We
emphasize that advi is currently available as part of Stan; it is ready for anyone to use.
Acknowledgments
We thank Dustin Tran, Bruno Jacobs, and the reviewers for their comments. This work is supported
by NSF IIS-0745520, IIS-1247664, IIS-1009542, SES-1424962, ONR N00014-11-1-0651, DARPA
FA8750-14-2-0009, N66001-15-C-4032, Sloan G-2015-13987, IES DE R305D140059, NDSEG,
Facebook, Adobe, Amazon, and the Siebel Scholar and John Templeton Foundations.
References
[1] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[2] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[3] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[4] Stan Development Team. Stan Modeling Language Users Guide and Reference Manual, 2015.
[5] Matthew D Hoffman and Andrew Gelman. The No-U-Turn sampler. The Journal of Machine Learning Research, 15(1):1593–1623, 2014.
[6] Diederik Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
[7] Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[8] Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In AISTATS, pages 814–822, 2014.
[9] Tim Salimans and David Knowles. On using control variates with stochastic approximation for variational Bayes. arXiv preprint arXiv:1401.1022, 2014.
[10] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for nonconjugate inference. In ICML, pages 1971–1979, 2014.
[11] David Wingate and Theophane Weber. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013.
[12] Noah D Goodman, Vikash K Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua B Tenenbaum. Church: A language for generative models. In UAI, pages 220–229, 2008.
[13] Vikash Mansinghka, Daniel Selsam, and Yura Perov. Venture: a higher-order probabilistic programming platform with programmable inference. arXiv:1404.0099, 2014.
[14] Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In AISTATS, pages 2–46, 2014.
[15] John M Winn and Christopher M Bishop. Variational message passing. In Journal of Machine Learning Research, pages 661–694, 2005.
[16] Christopher M Bishop. Pattern Recognition and Machine Learning. Springer New York, 2006.
[17] David J Olive. Statistical Theory and Inference. Springer, 2014.
[18] Manfred Opper and Cédric Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
[19] Wolfgang Härdle and Léopold Simar. Applied Multivariate Statistical Analysis. Springer, 2012.
[20] Christian P Robert and George Casella. Monte Carlo Statistical Methods. Springer, 1999.
[21] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[22] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 73(2):123–214, 2011.
[23] Andrew Gelman and Jennifer Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006.
[24] John Canny. GaP: a factor model for discrete data. In ACM SIGIR, pages 122–129. ACM, 2004.
[25] Mauricio Villegas, Roberto Paredes, and Bart Thomee. Overview of the ImageCLEF 2013 Scalable Concept Image Annotation Subtask. In CLEF Evaluation Labs and Workshop, 2013.
Data Generation as Sequential Decision Making
Philip Bachman
Doina Precup
McGill University, School of Computer Science
phil.bachman@gmail.com
McGill University, School of Computer Science
dprecup@cs.mcgill.ca
Abstract
We connect a broad class of generative models through their shared reliance on
sequential decision making. Motivated by this view, we develop extensions to an
existing model, and then explore the idea further in the context of data imputation
– perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as
an MDP and develop models capable of representing effective policies for it. We
construct the models using neural networks and train them using a form of guided
policy search [9]. Our models generate predictions through an iterative process of
feedback and refinement. We show that this approach can learn effective policies
for imputation problems of varying difficulty and across multiple datasets.
1 Introduction
Directed generative models are naturally interpreted as specifying sequential procedures for generating data. We traditionally think of this process as sampling, but one could also view it as making
sequences of decisions for how to set the variables at each node in a model, conditioned on the
settings of its parents, thereby generating data from the model. The large body of existing work
on reinforcement learning provides powerful tools for addressing such sequential decision making
problems. We encourage the use of these tools to understand and improve the extended processes
currently driving advances in generative modelling. We show how sequential decision making can be
applied to general prediction tasks by developing models which construct predictions by iteratively
refining a working hypothesis under guidance from exogenous input and endogenous feedback.
We begin this paper by reinterpreting several recent generative models as sequential decision making
processes, and then show how changes inspired by this point of view can improve the performance
of the LSTM-based model introduced in [3]. Next, we explore the connections between directed
generative models and reinforcement learning more fully by developing an approach to training
policies for sequential data imputation. We base our approach on formulating imputation as a finitehorizon Markov Decision Process which one can also interpret as a deep, directed graphical model.
We propose two policy representations for the imputation MDP. One extends the model in [3] by
inserting an explicit feedback loop into the generative process, and the other addresses the MDP
more directly. We train our models/policies using techniques motivated by guided policy search
[9, 10, 11, 8]. We examine their qualitative and quantitative performance across imputation problems
covering a range of difficulties (i.e. different amounts of data to impute and different "missingness mechanisms"), and across multiple datasets. Given the relative paucity of existing approaches to the
general imputation problem, we compare our models to each other and to two simple baselines. We
also test how our policies perform when they use fewer/more steps to refine their predictions.
As imputation encompasses both classification and standard (i.e. unconditional) generative modelling, our work suggests that further study of models for the general imputation problem is worthwhile. The performance of our models suggests that sequential stochastic construction of predictions, guided by both input and feedback, should prove useful for a wide range of problems. Training
these models can be challenging, but lessons from reinforcement learning may bring some relief.
2 Directed Generative Models as Sequential Decision Processes
Directed generative models have grown in popularity relative to their undirected counterparts [6, 14, 12, 4, 5, 16, 15] (etc.). Reasons include: the development of efficient methods for training them,
the ease of sampling from them, and the tractability of bounds on their log-likelihoods. Growth in
available computing power compounds these benefits. One can interpret the (ancestral) sampling
process in a directed model as repeatedly setting subsets of the latent variables to particular values,
in a sequence of decisions conditioned on preceding decisions. Each subsequent decision restricts
the set of potential outcomes for the overall sequence. Intuitively, these models encode stochastic
procedures for constructing plausible observations. This section formally explores this perspective.
2.1 Deep AutoRegressive Networks
The deep autoregressive networks investigated in [4] define distributions of the following form:
$$
p(x) = \sum_{z} p(x|z)\,p(z), \quad \text{with} \quad p(z) = p_0(z_0)\prod_{t=1}^{T} p_t(z_t \mid z_0, \ldots, z_{t-1}) \tag{1}
$$
in which x indicates a generated observation and z_0, ..., z_T represent latent variables in the model. The distribution p(x|z) may be factored similarly to p(z). The form of p(z) in Eqn. 1 can represent arbitrary distributions over the latent variables, and the work in [4] mainly concerned approaches to parameterizing the conditionals p_t(z_t | z_0, ..., z_{t-1}) that restricted representational power in exchange for computational tractability. To appreciate the generality of Eqn. 1, consider using z_t that are univariate, multivariate, structured, etc. One can interpret any model based on this sequential factorization of p(z) as a non-stationary policy p_t(z_t | s_t) for selecting each action z_t in a state s_t, with each s_t determined by all z_{t'} for t' < t, and train it using some form of policy search.
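To make the policy view concrete, here is a toy Python sketch of ancestral sampling from the factorization in Eqn. 1, with made-up Gaussian conditionals standing in for the learned networks of [4]:

    import numpy as np

    def sample_trajectory(p0_sample, pt_sample, T):
        # p0_sample() draws z0; pt_sample(history) draws zt given z0..zt-1,
        # i.e. it plays the role of the non-stationary policy pt(zt | st).
        history = [p0_sample()]
        for _ in range(T):
            history.append(pt_sample(history))
        return history

    rng = np.random.default_rng(0)
    traj = sample_trajectory(
        p0_sample=lambda: rng.normal(0.0, 1.0),
        pt_sample=lambda h: rng.normal(0.1 * sum(h), 1.0),  # toy dependence
        T=5,
    )

Each call to pt_sample restricts the set of possible continuations, which is exactly the sequential-decision reading described above.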
2.2 Generalized Guided Policy Search
We adopt a broader interpretation of guided policy search than one might initially take from, e.g., [9, 10, 11, 8]. We provide a review of guided policy search in the supplementary material. Our expanded definition of guided policy search includes any optimization of the general form:
$$
\underset{p,\,q}{\text{minimize}}\;\; \mathbb{E}_{i_q \sim \mathcal{I}_q}\, \mathbb{E}_{i_p \sim \mathcal{I}_p(\cdot|i_q)}\, \mathbb{E}_{\tau \sim q(\tau|i_q, i_p)}\big[\ell(\tau, i_q, i_p)\big] + \lambda\, \mathrm{div}\big(q(\tau|i_q, i_p),\, p(\tau|i_p)\big) \tag{2}
$$
in which p indicates the primary policy, q indicates the guide policy, I_q indicates a distribution over information available only to q, I_p indicates a distribution over information available to both p and q, ℓ(τ, i_q, i_p) computes the cost of trajectory τ in the context of i_q/i_p, and div(q(τ|i_q, i_p), p(τ|i_p)) measures dissimilarity between the trajectory distributions generated by p/q. As λ > 0 goes to infinity, Eqn. 2 enforces the constraint p(τ|i_p) = q(τ|i_q, i_p), ∀τ, i_p, i_q. Terms for controlling, e.g., the entropy of p/q can also be added. The power of the objective in Eq. 2 stems from two main points: the guide policy q can use information i_q that is unavailable to the primary policy p, and the primary policy need only be trained to minimize the dissimilarity term div(q(τ|i_q, i_p), p(τ|i_p)).
For example, a directed model structured as in Eqn. 1 can be interpreted as specifying a policy for a finite-horizon MDP whose terminal state distribution encodes p(x). In this MDP, the state at time 1 ≤ t ≤ T + 1 is determined by {z_0, ..., z_{t-1}}. The policy picks an action z_t ∈ Z_t at time 1 ≤ t ≤ T, and picks an action x ∈ X at time t = T + 1. I.e., the policy can be written as p_t(z_t | z_0, ..., z_{t-1}) for 1 ≤ t ≤ T, and as p(x | z_0, ..., z_T) for t = T + 1. The initial state z_0 ∈ Z_0 is drawn from p_0(z_0). Executing the policy for a single trial produces a trajectory τ ≜ {z_0, ..., z_T, x}, and the distribution over xs from these trajectories is just p(x) in the corresponding directed generative model.
The authors of [4] train deep autoregressive networks by maximizing a variational lower bound on the training set log-likelihood. To do this, they introduce a variational distribution q which provides q_0(z_0 | x*) and q_t(z_t | z_0, ..., z_{t-1}, x*) for 1 ≤ t ≤ T, with the final step q(x | z_0, ..., z_T, x*) given by a Dirac-delta at x*. Given these definitions, the training in [4] can be interpreted as guided policy search for the MDP described in the previous paragraph. Specifically, the variational distribution q provides a guide policy q(τ | x*) over trajectories τ ≜ {z_0, ..., z_T, x*}:
$$
q(\tau | x^{*}) \triangleq q(x | z_0, \ldots, z_T, x^{*})\, q_0(z_0 | x^{*}) \prod_{t=1}^{T} q_t(z_t | z_0, \ldots, z_{t-1}, x^{*}) \tag{3}
$$
The primary policy p generates trajectories distributed according to:
$$
p(\tau) \triangleq p(x \mid z_0, \ldots, z_T)\, p_0(z_0) \prod_{t=1}^{T} p_t(z_t \mid z_0, \ldots, z_{t-1}) \tag{4}
$$
which does not depend on x*. In this case, x* corresponds to the guide-only information i_q ∼ I_q in Eqn. 2. We now rewrite the variational optimization as:
$$
\underset{p,\,q}{\text{minimize}}\;\; \mathbb{E}_{x^{*} \sim \mathcal{D}_X}\; \mathbb{E}_{\tau \sim q(\tau|x^{*})}\big[\ell(\tau, x^{*})\big] + \mathrm{KL}\big(q(\tau|x^{*})\,\|\,p(\tau)\big) \tag{5}
$$
where ℓ(τ, x*) ≜ 0 and D_X indicates the target distribution for the terminal state of the primary policy p.¹ When expanded, the KL term in Eqn. 5 becomes:
$$
\mathrm{KL}\!\left(q(\tau|x^{*})\,\|\,p(\tau)\right) = \mathbb{E}_{\tau \sim q(\tau|x^{*})}\!\left[\log\frac{q_0(z_0|x^{*})}{p_0(z_0)} + \sum_{t=1}^{T}\log\frac{q_t(z_t|z_0,\ldots,z_{t-1},x^{*})}{p_t(z_t|z_0,\ldots,z_{t-1})} - \log p(x^{*}|z_0,\ldots,z_T)\right] \tag{6}
$$
Thus, the variational approach used in [4] for training directed generative models can be interpreted
as a form of generalized guided policy search. As the form in Eqn. 1 can represent any finite directed
generative model, the preceding derivation extends to all models we discuss in this paper.²
2.3 Time-reversible Stochastic Processes
One can simplify Eqn. 1 by assuming suitable forms for X and Z_0, ..., Z_T. E.g., the authors of [16] proposed a model in which Z_t ≡ X for all t and p_0(x_0) was Gaussian. We can write their model as:
$$
p(x_T) = \sum_{x_0, \ldots, x_{T-1}} p_T(x_T | x_{T-1})\, p_0(x_0) \prod_{t=1}^{T-1} p_t(x_t | x_{t-1}) \tag{7}
$$
where p(x_T) indicates the terminal state distribution of the non-stationary, finite-horizon Markov process determined by {p_0(x_0), p_1(x_1|x_0), ..., p_T(x_T|x_{T-1})}. Note that, throughout this paper, we (ab)use sums over latent variables and trajectories which could/should be written as integrals.
The authors of [16] observed that, for any reasonably smooth target distribution D_X and sufficiently large T, one can define a "reverse-time" stochastic process q_t(x_{t-1} | x_t) with simple, time-invariant dynamics that transforms q(x_T) ≜ D_X into the Gaussian distribution p_0(x_0). This q is given by:
$$
q_0(x_0) = \sum_{x_1, \ldots, x_T} q_1(x_0 | x_1)\, \mathcal{D}_X(x_T) \prod_{t=2}^{T} q_t(x_{t-1} | x_t) \approx p_0(x_0) \tag{8}
$$
Next, we define q(τ) as the distribution over trajectories τ ≜ {x_0, ..., x_T} generated by the reverse-time process determined by {q_1(x_0|x_1), ..., q_T(x_{T-1}|x_T), D_X(x_T)}. We define p(τ) as the distribution over trajectories generated by the "forward-time" process in Eqn. 7. The training in [16] is equivalent to guided policy search using guide trajectories sampled from q, i.e. it uses the objective:
$$
\underset{p,\,q}{\text{minimize}}\;\; \mathbb{E}_{\tau \sim q(\tau)}\!\left[\log\frac{q_1(x_0|x_1)}{p_0(x_0)} + \sum_{t=1}^{T-1}\log\frac{q_{t+1}(x_t|x_{t+1})}{p_t(x_t|x_{t-1})} + \log\frac{\mathcal{D}_X(x_T)}{p_T(x_T|x_{T-1})}\right] \tag{9}
$$
which corresponds to minimizing KL(q || p). If the log-densities in Eqn. 9 are tractable, then this minimization can be done using basic Monte-Carlo. If, as in [16], the reverse-time process q is not trained, then Eqn. 9 simplifies to:
$$
\underset{p}{\text{minimize}}\;\; \mathbb{E}_{q(\tau)}\!\left[-\log p_0(x_0) - \sum_{t=1}^{T}\log p_t(x_t|x_{t-1})\right].
$$
This trick for generating guide trajectories exhibiting a particular distribution over terminal states x_T (i.e. running dynamics backwards in time starting from x_T ∼ D_X) may prove useful in settings other than those considered in [16]. E.g., the LapGAN model in [1] learns to approximately invert a fixed (and information destroying) reverse-time process. The supplementary material expands on the content of this subsection, including a derivation of Eqn. 9 as a bound on E_{x∼D_X}[−log p(x)].
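As an illustration of the trick, the following Python sketch samples a guide trajectory by running hypothetical reverse-time dynamics from a data point and then scores it under the forward-time process; the q_step, log_p0, and log_pt callables are placeholders, not the model of [16]:

    import numpy as np

    def reverse_trajectory(x_T, q_step, T, rng):
        # Run x_{t-1} ~ q(x_{t-1} | x_t) backwards from a data point x_T.
        traj = [x_T]
        for _ in range(T):
            traj.append(q_step(traj[-1], rng))
        return traj[::-1]              # reorder as x_0, ..., x_T

    def forward_nll(traj, log_p0, log_pt):
        # The simplified objective: -log p0(x0) - sum_t log pt(xt | xt-1).
        nll = -log_p0(traj[0])
        for t in range(1, len(traj)):
            nll -= log_pt(traj[t], traj[t - 1])
        return nll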
¹ We could pull the −log p(x*|z_0, ..., z_T) term from the KL and put it in the cost ℓ(τ, x*), but we prefer the "path-wise KL" formulation for its elegance. We abuse notation using KL(δ(x = x*) || p(x)) ≜ −log p(x*).
² This also includes all generative models implemented and executed on an actual computer.
2.4 Learning Generative Stochastic Processes with LSTMs
The authors of [3] introduced a model for sequentially-deep generative processes. We interpret their model as a primary policy p which generates trajectories τ ≜ {z_0, ..., z_T, x} with distribution:
$$
p(\tau) \triangleq p(x \mid s_\theta(\tau_{<x}))\, p_0(z_0) \prod_{t=1}^{T} p_t(z_t), \quad \text{with } \tau_{<x} \triangleq \{z_0, \ldots, z_T\} \tag{10}
$$
in which τ_{<x} indicates a latent trajectory and s_θ(τ_{<x}) indicates a state trajectory {s_0, ..., s_T} computed recursively from τ_{<x} using the update s_t ← f_θ(s_{t-1}, z_t) for t ≥ 1. The initial state s_0 is given by a trainable constant. Each state s_t ≜ [h_t; v_t] represents the joint hidden/visible state h_t/v_t of an LSTM and f_θ(state, input) computes a standard LSTM update.³ The authors of [3] defined all p_t(z_t) as isotropic Gaussians and defined the output distribution p(x | s_θ(τ_{<x})) as p(x | c_T), where c_T ≜ c_0 + Σ_{t=1}^{T} ψ_θ(v_t). Here, c_0 is a trainable constant and ψ_θ(v_t) is, e.g., an affine transform of v_t. Intuitively, ψ_θ(v_t) transforms v_t into a refinement of the "working hypothesis" c_{t-1}, which gets updated to c_t = c_{t-1} + ψ_θ(v_t). p is governed by parameters θ which affect f_θ, ψ_θ, s_0, and c_0. The supplementary material provides pseudo-code and an illustration for this model.
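A toy Python version of the generative rollout in Eqn. 10 may help; f_theta and psi_theta below are hypothetical stand-ins for the trained LSTM update and write function:

    import numpy as np

    def rollout(f_theta, psi_theta, s0, c0, T, z_dim, rng):
        # Open-loop generation: sample z_t, update the LSTM state, and
        # accumulate refinements of the working hypothesis c_t. In the
        # paper psi_theta reads only the visible part v_t of the state;
        # here the whole state is passed for brevity.
        s, c = s0, c0
        for _ in range(T):
            z = rng.standard_normal(z_dim)   # z_t ~ isotropic Gaussian p_t
            s = f_theta(s, z)                # s_t <- f_theta(s_{t-1}, z_t)
            c = c + psi_theta(s)             # c_t = c_{t-1} + psi_theta(.)
        return c                             # parameterizes p(x | c_T)

    rng = np.random.default_rng(0)
    c_final = rollout(
        f_theta=lambda s, z: 0.9 * s + z,    # toy "LSTM" update
        psi_theta=lambda s: 0.1 * s,         # toy write function
        s0=np.zeros(4), c0=np.zeros(4), T=16, z_dim=4, rng=rng,
    )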
To train p, the authors of [3] introduced a guide policy q with trajectory distribution:
$$
q(\tau | x^{*}) \triangleq q(x \mid \tilde{s}(\tau_{<x}), x^{*})\, q_0(z_0 | x^{*}) \prod_{t=1}^{T} q_t(z_t \mid \tilde{s}_t, x^{*}), \quad \text{with } \tau_{<x} \triangleq \{z_0, \ldots, z_T\} \tag{11}
$$
in which s̃(τ_{<x}) indicates a state trajectory {s̃_0, ..., s̃_T} computed recursively from τ_{<x} using the guide policy's state update s̃_t ← f_φ(s̃_{t-1}, g_φ(s_θ(τ_{<t}), x*)). In this update s̃_{t-1} is the previous guide state and g_φ(s_θ(τ_{<t}), x*) is a deterministic function of x* and the partial (primary) state trajectory s_θ(τ_{<t}) ≜ {s_0, ..., s_{t-1}}, which is computed recursively from τ_{<t} ≜ {z_0, ..., z_{t-1}} using the state update s_t ← f_θ(s_{t-1}, z_t). The output distribution q(x | s̃(τ_{<x}), x*) is defined as a Dirac-delta at x*.⁴ Each q_t(z_t | s̃_t, x*) is a diagonal Gaussian distribution with means and log-variances given by an affine function L_φ(ṽ_t) of ṽ_t. q_0(z_0) is defined as identical to p_0(z_0). q is governed by parameters φ which affect the state updates f_φ(s̃_{t-1}, g_φ(s_θ(τ_{<t}), x*)) and the step distributions q_t(z_t | s̃_t, x*). g_φ(s_θ(τ_{<t}), x*) corresponds to the "read" operation of the encoder network in [3].
Using our definitions for p/q, the training objective in [3] is given by:
$$
\underset{p,\,q}{\text{minimize}}\;\; \mathbb{E}_{x^{*} \sim \mathcal{D}_X}\; \mathbb{E}_{\tau \sim q(\tau|x^{*})}\!\left[\sum_{t=1}^{T}\log\frac{q_t(z_t|\tilde{s}_t, x^{*})}{p_t(z_t)} - \log p(x^{*}|s(\tau_{<x}))\right] \tag{12}
$$
which can be written more succinctly as E_{x*∼D_X} KL(q(τ|x*) || p(τ)). This objective upper-bounds E_{x*∼D_X}[−log p(x*)], where p(x) ≜ Σ_{τ_{<x}} p(x | s_θ(τ_{<x})) p(τ_{<x}).
2.5 Extending the LSTM-based Generative Model
We propose changing p in Eqn. 10 to: p(τ) ≜ p(x | s_θ(τ_{<x})) p_0(z_0) ∏_{t=1}^{T} p_t(z_t | s_{t-1}). We define p_t(z_t | s_{t-1}) as a diagonal Gaussian distribution with means and log-variances given by an affine function L_θ(v_{t-1}) of v_{t-1} (remember that s_t ≜ [h_t; v_t]), and we define p_0(z_0) as an isotropic Gaussian. We set s_0 using s_0 ← f_θ(z_0), where f_θ is a trainable function (e.g. a neural network). Intuitively, our changes make the model more like a typical policy by conditioning its "action" z_t on its state s_{t-1}, and upgrade the model to an infinite mixture by placing a distribution over its initial state s_0. We also consider using c_t ≜ L_θ(h_t), which transforms the hidden part of the LSTM state s_t directly into an observation. This makes h_t a working memory in which to construct an observation. The supplementary material provides pseudo-code and an illustration for this model.
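The key difference from Eqn. 10 is visible in a few lines; this sketch (hypothetical L_theta and f_theta) conditions each step on the previous state:

    import numpy as np

    def step(s_prev, f_theta, L_theta, rng):
        # p_t(z_t | s_{t-1}): a diagonal Gaussian read off the visible
        # part v_{t-1} of the previous state s_{t-1} = [h_{t-1}; v_{t-1}].
        _h_prev, v_prev = s_prev
        mean, log_var = L_theta(v_prev)
        z = mean + np.exp(0.5 * log_var) * rng.standard_normal(mean.shape)
        return f_theta(s_prev, z)            # s_t <- f_theta(s_{t-1}, z_t)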
We train this model by optimizing the objective:
$$
\underset{p,\,q}{\text{minimize}}\;\; \mathbb{E}_{x^{*} \sim \mathcal{D}_X}\; \mathbb{E}_{\tau \sim q(\tau|x^{*})}\!\left[\log\frac{q_0(z_0|x^{*})}{p_0(z_0)} + \sum_{t=1}^{T}\log\frac{q_t(z_t|\tilde{s}_t, x^{*})}{p_t(z_t|s_{t-1})} - \log p(x^{*}|s(\tau_{<x}))\right] \tag{13}
$$
³ For those unfamiliar with LSTMs, a good introduction can be found in [2]. We use LSTMs including input gates, forget gates, output gates, and peephole connections for all tests presented in this paper.
⁴ It may be useful to relax this assumption.
where we now have to deal with p_t(z_t|s_{t-1}), p_0(z_0), and q_0(z_0|x*), which could be treated as constants in the model from [3]. We define q_0(z_0|x*) as a diagonal Gaussian distribution whose means and log-variances are given by a trainable function g_φ(x*).
When trained for the binarized MNIST benchmark used in [3], our extended model scored a negative log-likelihood of 85.5 on the test set.⁵ For comparison, the score reported in [3] was 87.4.⁶ After fine-tuning the variational distribution (i.e. q) on the test set, our model's score improved to 84.8, which is quite strong considering it is an upper bound. For comparison, see the best upper bound reported for this benchmark in [15], which was 85.1. When the model used the alternate c_T ≜ L_θ(h_T), the raw/fine-tuned test scores were 85.9/85.3. Fig. 1 shows samples from the model. Model/test code is available at http://github.com/Philip-Bachman/Sequential-Generation.
Figure 1: The left block shows σ(c_t) for t ∈ {1, 3, 5, 9, 16}, for a policy p with c_t ≜ c_0 + Σ_{t'=1}^{t} L_θ(v_{t'}). The right block is analogous, for a model using c_t ≜ L_θ(h_t).

3 Developing Models for Sequential Imputation
The goal of imputation is to estimate p(x^u | x^k), where x ≜ [x^u; x^k] indicates a complete observation with known values x^k and missing values x^u. We define a mask m ∈ M as a (disjoint) partition of x into x^u/x^k. By expanding x^u to include all of x, one recovers standard generative modelling. By shrinking x^u to include a single element of x, one recovers standard classification/regression. Given distribution D_M over m ∈ M and distribution D_X over x ∈ X, the objective for imputation is:
$$
\underset{p}{\text{minimize}}\;\; \mathbb{E}_{x \sim \mathcal{D}_X}\; \mathbb{E}_{m \sim \mathcal{D}_M}\;\big[-\log p(x^{u} \mid x^{k})\big] \tag{14}
$$
We now describe a finite-horizon MDP for which guided policy search minimizes a bound on the objective in Eqn. 14. The MDP is defined by mask distribution D_M, complete observation distribution D_X, and the state spaces {Z_0, ..., Z_T} associated with each of T steps. Together, D_M and D_X define a joint distribution over initial states and rewards in the MDP. For the trial determined by x ∼ D_X and m ∼ D_M, the initial state z_0 ∼ p(z_0 | x^k) is selected by the policy p based on the known values x^k. The cost ℓ(τ, x^u, x^k) suffered by trajectory τ ≜ {z_0, ..., z_T} in the context (x, m) is given by −log p(x^u | τ, x^k), i.e. the negative log-likelihood of p guessing the missing values x^u after following trajectory τ, while seeing the known values x^k.
We consider a policy p with trajectory distribution p(τ | x^k) ≜ p(z_0 | x^k) ∏_{t=1}^{T} p(z_t | z_0, ..., z_{t-1}, x^k), where x^k is determined by x/m for the current trial and p can't observe the missing values x^u. With these definitions, we can find an approximately optimal imputation policy by solving:
$$
\underset{p}{\text{minimize}}\;\; \mathbb{E}_{x \sim \mathcal{D}_X}\; \mathbb{E}_{m \sim \mathcal{D}_M}\; \mathbb{E}_{\tau \sim p(\tau|x^{k})}\big[-\log p(x^{u} \mid \tau, x^{k})\big] \tag{15}
$$
I.e. the expected negative log-likelihood of making a correct imputation on any given trial. This is a valid, but loose, upper bound on the imputation objective in Eq. 14 (from Jensen's inequality). We can tighten the bound by introducing a guide policy (i.e. a variational distribution).
As with the unconditional generative models in Sec. 2, we train p to imitate a guide policy q shaped by additional information (here it's x^u). This q generates trajectories with distribution q(τ | x^u, x^k) ≜ q(z_0 | x^u, x^k) ∏_{t=1}^{T} q(z_t | z_0, ..., z_{t-1}, x^u, x^k). Given this p and q, guided policy search solves:
$$
\underset{p,\,q}{\text{minimize}}\;\; \mathbb{E}_{x \sim \mathcal{D}_X}\; \mathbb{E}_{m \sim \mathcal{D}_M}\; \mathbb{E}_{\tau \sim q(\tau|i_q, i_p)}\big[-\log q(x^{u} \mid \tau, i_q, i_p)\big] + \mathrm{KL}\big(q(\tau | i_q, i_p)\,\|\, p(\tau | i_p)\big) \tag{16}
$$
where we define i_q ≜ x^u, i_p ≜ x^k, and q(x^u | τ, i_q, i_p) ≜ p(x^u | τ, i_p).
⁵ Data splits from: http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist
⁶ The model in [3] significantly improves its score to 80.97 when using an image-specific architecture.
3.1 A Direct Representation for Sequential Imputation Policies
We define an imputation trajectory as c_τ ≜ {c_0, ..., c_T}, where each partial imputation c_t ∈ X is computed from a partial step trajectory τ_{<t} ≜ {z_1, ..., z_t}. A partial imputation c_{t-1} encodes the policy's guess for the missing values x^u immediately prior to selecting step z_t, and c_T gives the policy's final guess. At each step of iterative refinement, the policy selects a z_t based on c_{t-1} and the known values x^k, and then updates its guesses to c_t based on c_{t-1} and z_t. By iteratively refining its guesses based on feedback from earlier guesses and the known values, the policy can construct complexly structured distributions over its final guess c_T after just a few steps. This happens naturally, without any post-hoc MRFs/CRFs (as in many approaches to structured prediction), and without sampling values in c_T one at a time (as required by existing NADE-type models [7]). This property of our approach should prove useful for many tasks.
We consider two ways of updating the guesses in c_t, mirroring those described in Sec. 2. The first way sets c_t ← c_{t-1} + ψ_θ(z_t), where ψ_θ(z_t) is a trainable function. We set c_0 ≜ [c_0^u; c_0^k] using a trainable bias. The second way sets c_t ← ψ_θ(z_t). We indicate models using the first type of update with the suffix -add, and models using the second type of update with -jump. Our primary policy p selects z_t at each step 1 ≤ t ≤ T using p_θ(z_t | c_{t-1}, x^k), which we restrict to be a diagonal Gaussian. This is a simple, stationary policy. Together, the step selector p_θ(z_t | c_{t-1}, x^k) and the imputation constructor ψ_θ(z_t) fully determine the behaviour of the primary policy. The supplementary material provides pseudo-code and an illustration for this model.
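One refinement step of this policy is simple enough to sketch directly; p_theta and psi_theta below are hypothetical stand-ins for the trained networks, and mode selects between the -add and -jump guess updates:

    import numpy as np

    def refinement_step(c_prev, x_known, p_theta, psi_theta, rng, mode="add"):
        # Select z_t ~ p(z_t | c_{t-1}, x^k) from a diagonal Gaussian,
        # then update the guess c_t via the imputation constructor.
        mean, log_var = p_theta(c_prev, x_known)
        z = mean + np.exp(0.5 * log_var) * rng.standard_normal(mean.shape)
        update = psi_theta(z)
        return c_prev + update if mode == "add" else update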
We construct a guide policy q similarly to p. The guide policy shares the imputation constructor ψ_θ(z_t) with the primary policy. The guide policy incorporates additional information x ≜ [x^u; x^k], i.e. the complete observation for which the primary policy must reconstruct some missing values. The guide policy chooses steps using q_φ(z_t | c_{t-1}, x), which we restrict to be a diagonal Gaussian.
We train the primary/guide policy components ψ_θ, p_θ, and q_φ simultaneously on the objective:
$$
\underset{\theta,\,\phi}{\text{minimize}}\;\; \mathbb{E}_{x \sim \mathcal{D}_X}\; \mathbb{E}_{m \sim \mathcal{D}_M}\; \mathbb{E}_{\tau \sim q_\phi(\tau|x^{u}, x^{k})}\big[-\log q(x^{u} \mid c_T^{u})\big] + \mathrm{KL}\big(q(\tau | x^{u}, x^{k})\,\|\, p(\tau | x^{k})\big) \tag{17}
$$
where q(x^u | c_T^u) ≜ p(x^u | c_T^u). We train our models using Monte-Carlo roll-outs of q, and stochastic backpropagation as in [6, 14]. Full implementations and test code are available from http://github.com/Philip-Bachman/Sequential-Generation.
3.2 Representing Sequential Imputation Policies using LSTMs
To make it useful for imputation, which requires conditioning on the exogenous information x^k, we modify the LSTM-based model from Sec. 2.5 to include a "read" operation in its primary policy p. We incorporate a read operation by spreading p over two LSTMs, p^r and p^w, which respectively "read" and "write" an imputation trajectory c_τ ≜ {c_0, ..., c_T}. Conveniently, the guide policy q for this model takes the same form as the primary policy's reader p^r. This model also includes an "infinite mixture" initialization step, as used in Sec. 2.5, but modified to incorporate conditioning on x and m. The supplementary material provides pseudo-code and an illustration for this model.
Following the infinite mixture initialization step, a single full step of execution for p involves several substeps: first p updates the reader state using s_t^r ← f_θ^r(s_{t-1}^r, ψ_θ^r(c_{t-1}, s_{t-1}^w, x^k)), then p selects a step z_t ∼ p_θ(z_t | v_t^r), then p updates the writer state using s_t^w ← f_θ^w(s_{t-1}^w, z_t), and finally p updates its guesses by setting c_t ← c_{t-1} + ψ_θ^w(v_t^w) (or c_t ← ψ_θ^w(h_t^w)). In these updates, s_t^{r,w} ≜ [h_t^{r,w}; v_t^{r,w}] refer to the states of the (r)eader and (w)riter LSTMs. The LSTM updates f_θ^{r,w} and the read/write operations ψ_θ^{r,w} are governed by the policy parameters θ.
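One full step of the reader/writer pair can be sketched as follows; all module names are hypothetical placeholders for the trained LSTMs and read/write operations:

    def full_step(s_r, s_w, c_prev, x_known, nets, rng):
        f_r, f_w, read, write, p_step = nets
        s_r = f_r(s_r, read(c_prev, s_w, x_known))  # update reader state
        z = p_step(s_r, rng)                        # z_t ~ p(z_t | v_t^r)
        s_w = f_w(s_w, z)                           # update writer state
        c = c_prev + write(s_w)                     # "-add" guess update
        return s_r, s_w, c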
We train p to imitate trajectories sampled from a guide policy q. The guide policy shares the primary policy's writer updates f_θ^w and write operation ψ_θ^w, but has its own reader updates f_φ^q and read operation ψ_φ^q. At each step, the guide policy: updates the guide state s_t^q ← f_φ^q(s_{t-1}^q, ψ_φ^q(c_{t-1}, s_{t-1}^w, x)), then selects z_t ∼ q_φ(z_t | v_t^q), then updates the writer state s_t^w ← f_θ^w(s_{t-1}^w, z_t), and finally updates its guesses c_t ← c_{t-1} + ψ_θ^w(v_t^w) (or c_t ← ψ_θ^w(h_t^w)). As in Sec. 3.1, the guide policy's read operation ψ_φ^q gets to see the complete observation x, while the primary policy only gets to see the known values x^k. We restrict the step distributions p_θ/q_φ to be diagonal Gaussians whose means and log-variances are affine functions of v_t^r/v_t^q. The training objective has the same form as Eq. 17.
[Figure 2 plots: panels (a) and (b) show "Imputation NLL vs. Available Information" (imputation NLL vs. mask probability; legend: TM-orc, TM-hon, VAE-imp, GPSI-add, GPSI-jump, LSTM-add, LSTM-jump); panel (c) shows "The Effect of Increased Refinement Steps" (imputation NLL vs. number of refinement steps for the GPSI models).]
Figure 2: (a) Comparing the performance of our imputation models against several baselines, using
MNIST digits. The x-axis indicates the % of pixels which were dropped completely at random, and
the scores are normalized by the number of imputed pixels. (b) A closer view of results from (a),
just for our models. (c) The effect of increased iterative refinement steps for our GPSI models.
4 Experiments
We tested the performance of our sequential imputation models on three datasets: MNIST (28×28), SVHN (cropped, 32×32) [13], and TFD (48×48) [17]. We converted images to grayscale and shift/scaled them to be in the range [0...1] prior to training/testing. We measured the imputation log-likelihood log q(x^u | c_T^u) using the true missing values x^u and the models' guesses given by σ(c_T^u). We report negative log-likelihoods, so lower scores are better in all of our tests. We refer to variants of the model from Sec. 3.1 as GPSI-add and GPSI-jump, and to variants of the model from Sec. 3.2 as LSTM-add and LSTM-jump. Except where noted, the GPSI models used 6 refinement steps and the LSTM models used 16.⁷
We tested imputation under two types of data masking: missing completely at random (MCAR) and missing at random (MAR). In MCAR, we masked pixels uniformly at random from the source images, and indicate removal of d% of the pixels by MCAR-d. In MAR, we masked square regions, with the occlusions located uniformly at random within the borders of the source image. We indicate occlusion of a d × d square by MAR-d.
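The two masking schemes are easy to state in code; this sketch reflects our reading of the setup, with True marking a missing pixel:

    import numpy as np

    def mcar_mask(shape, drop_frac, rng):
        # MCAR-d: drop d% of pixels uniformly at random.
        return rng.random(shape) < drop_frac

    def mar_mask(shape, d, rng):
        # MAR-d: occlude a d x d square placed uniformly within the image.
        H, W = shape
        top = rng.integers(0, H - d + 1)
        left = rng.integers(0, W - d + 1)
        mask = np.zeros(shape, dtype=bool)
        mask[top:top + d, left:left + d] = True
        return mask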
On MNIST, we tested MCAR-d for d ∈ {50, 60, 70, 80, 90}. MCAR-100 corresponds to unconditional generation. On TFD and SVHN we tested MCAR-80. On MNIST, we tested MAR-d for d ∈ {14, 16}. On TFD we tested MAR-25 and on SVHN we tested MAR-17. For test trials we sampled masks from the same distribution used in training, and we sampled complete observations from a held-out test set. Fig. 2 and Tab. 1 present quantitative results from these tests. Fig. 2(c) shows the behavior of our GPSI models when we allowed them fewer/more refinement steps.
            MNIST              TFD                 SVHN
            MAR-14  MAR-16    MCAR-80  MAR-25     MCAR-80  MAR-17
LSTM-add      170     167       1381    1377         525     568
LSTM-jump     172     169          –       –           –       –
GPSI-add      177     175       1390    1380         531     569
GPSI-jump     183     177       1394    1384         540     572
VAE-imp       374     394       1416    1399         567     624
Table 1: Imputation performance in various settings. Details of the tests are provided in the main
text. Lower scores are better. Due to time constraints, we did not test LSTM-jump on TFD or
SVHN. These scores are normalized for the number of imputed pixels.
We tested our models against three baselines. The baselines were "variational auto-encoder imputation", honest template matching, and oracular template matching. VAE imputation ran multiple steps of VAE reconstruction, with the known values held fixed and the missing values re-estimated with each reconstruction step.⁸ After 16 refinement steps, we scored the VAE based on its best
⁷ GPSI stands for "Guided Policy Search Imputer". The tag "-add" refers to additive guess updates, and "-jump" refers to updates that fully replace the guesses.
⁸ We discuss some deficiencies of VAE imputation in the supplementary material.
Figure 3: This figure illustrates the policies learned by our models. (a): models trained for (MNIST, MAR-16). From top to bottom the models are: GPSI-add, GPSI-jump, LSTM-add, LSTM-jump. (b): models trained for (TFD, MAR-25), with models in the same order as (a) but without LSTM-jump. (c): models trained for (SVHN, MAR-17), with models arranged as for (b).
guesses. Honest template matching guessed the missing values based on the training image which
best matched the test image?s known values. Oracular template matching was like honest template
matching, but matched directly on the missing values.
Our models significantly outperformed the baselines. In general, the LSTM-based models outperformed the more direct GPSI models. We evaluated the log-likelihood of imputations produced by
our models using the lower bounds provided by the variational objectives with respect to which they
were trained. Evaluating the template-based imputations was straightforward. For VAE imputation,
we used the expected log-likelihood of the imputations sampled from multiple runs of the 16-step
imputation process. This provides a valid, but loose, lower bound on their log-likelihood.
As shown in Fig. 3, the imputations produced by our models appear promising. The imputations are
generally of high quality, and the models are capable of capturing strongly multi-modal reconstruction distributions (see subfigure (a)). The behavior of GPSI models changed intriguingly when we
swapped the imputation constructor. Using the -jump imputation constructor, the imputation policy learned by the direct model was rather inscrutable. Fig. 2(c) shows that additive guess updates
extracted more value from using more refinement steps. When trained on the binarized MNIST
benchmark discussed in Sec. 2.5, i.e. with binarized images and subject to MCAR-100, the LSTM-add model produced raw/fine-tuned scores of 86.2/85.7. The LSTM-jump model scored 87.1/86.3. Anecdotally, on this task, these "closed-loop" models seemed more prone to overfitting than the "open-loop" models in Sec. 2.5. The supplementary material provides further qualitative results.
5 Discussion
We presented a point of view which links methods for training directed generative models with
policy search in reinforcement learning. We showed how our perspective can guide improvements
to existing models. The importance of these connections will only grow as generative models rapidly
increase in structural complexity and effective decision depth.
We introduced the notion of imputation as a natural generalization of standard, unconditional generative modelling. Depending on the relation between the data-to-generate and the available information, imputation spans from full unconditional generative modelling to classification/regression. We
showed how to successfully train sequential imputation policies comprising millions of parameters
using an approach based on guided policy search [9]. Our approach outperforms the baselines quantitatively and appears qualitatively promising. Incorporating, e.g., the local read/write mechanisms
from [3] should provide further improvements.
References
[1] Emily L Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative models
using a laplacian pyramid of adversarial networks. arXiv:1506.05751 [cs.CV], 2015.
[2] Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850 [cs.NE],
2013.
[3] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. Draw: A recurrent neural
network for image generation. In International Conference on Machine Learning (ICML),
2015.
[4] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning (ICML), 2014.
[5] Diederik P Kingma, Danilo J Rezende, Shakir Mohamed, and Max Welling. Semi-supervised
learning with deep generative models. In Advances in Neural Information Processing Systems
(NIPS), 2014.
[6] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International
Conference on Learning Representations (ICLR), 2014.
[7] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In International Conference on Machine Learning (ICML), 2011.
[8] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search
under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS),
2014.
[9] Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on
Machine Learning (ICML), 2013.
[10] Sergey Levine and Vladlen Koltun. Variational policy search via trajectory optimization. In
Advances in Neural Information Processing Systems (NIPS), 2013.
[11] Sergey Levine and Vladlen Koltun. Learning complex neural network policies with trajectory
optimization. In International Conference on Machine Learning (ICML), 2014.
[12] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks.
In International Conference on Machine Learning (ICML), 2014.
[13] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng.
Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep
Learning and Unsupervised Feature Learning, 2011.
[14] Danilo Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine
Learning (ICML), 2014.
[15] Danilo J Rezende and Shakir Mohamed. Variational inference with normalizing flows. In
International Conference on Machine Learning (ICML), 2015.
[16] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on
Machine Learning (ICML), 2015.
[17] Joshua Susskind, Adam Anderson, and Geoffrey E Hinton. The toronto face database. 2010.
5,257 | 576 | HARMONET: A Neural Net for Harmonizing
Chorales in the Style of J.S. Bach
Hermann Hild
Johannes Feulner
Wolfram Menzel
hhild@ira.uka.de
johannes@ira.uka.de
menzel@ira.uka.de
Institut für Logik, Komplexität und Deduktionssysteme
Am Fasanengarten 5
Universitat Karlsruhe
W-7500 Karlsruhe 1, Germany
Abstract
HARMONET, a system employing connectionist networks for music processing, is presented. After being trained on some dozen Bach chorales
using error backpropagation, the system is capable of producing four-part
chorales in the style of J.S. Bach, given a one-part melody. Our system
solves a musical real-world problem on a performance level appropriate
for musical practice. HARMONET's power is based on (a) a new coding
scheme capturing musically relevant information and (b) the integration of
backpropagation and symbolic algorithms in a hierarchical system, combining the advantages of both.
1 INTRODUCTION
Neural approaches to music processing have been previously proposed (Lischka, 1989) and implemented (Mozer, 1991; Todd, 1989). The promise neural networks
offer is that they may shed some light on an aspect of human creativity that doesn't
seem to be describable in terms of symbols and rules. Ultimately what music is (or
isn't) lies in the eye (or ear) of the beholder. The great composers, such as Bach
or Mozart, learned and obeyed quite a number of rules, e.g. the famous prohibition
of parallel fifths. But these rules alone do not suffice to characterize a personal
or even historic style. An easy test is to generate music at random, using only
[Figure 1 panels: "A Chorale Melody" and "Bach's Chorale Harmonization".]
Figure 1: The beginning of the chorale melody "Jesu, meine Zuversicht" and its harmonization by J.S. Bach.
schoolbook rules as constraints. The result is "error free" but aesthetically offensive.
To overcome this gap between obeying rules and producing music adhering to an
accepted aesthetic standard, we propose HARMONET, which integrates symbolic
algorithms and neural networks to compose four-part chorales in the style of J.S. Bach (1685-1750), given the one-part melody. The neural nets concentrate on
the creative part of the task, being responsible for aesthetic conformance to the
standard set by Bach in nearly 400 examples. Original Bach Chorales are used
as training data. Conventional algorithms do the bookkeeping tasks like observing
pitch ranges, or preventing parallel fifths. HARMONET's level of performance
approaches that of improvising church organists, making it applicable to musical
practice.
2 TASK DEFINITION
The process of composing an accompaniment for a given chorale melody is called
chorale harmonization. Typically, a chorale melody is a plain melody often
harmonized to be sung by a choir. Correspondingly, the four voices of a chorale
harmonization are called soprano (the melody part), alto, tenor and bass. Figure 1
depicts an example of a chorale melody and its harmonization by J.S. Bach. For
centuries, music students have been routinely taught to solve the task of chorale
harmonization. Many theories and rules about "dos" and "don'ts" have been developed. However, the task of HARMONET is to learn to harmonize chorales from
example. Neural nets are used to find stylistically characteristic harmonic sequences
and ornamentations.
HARMONET: A Neural Net for Harmonizing Chorales
3 SYSTEM OVERVIEW
Given a set of Bach chorales, our goal is to find an approximation f̂ of the quite complex function¹ f which maps chorale melodies into their harmonizations, as demonstrated by J.S. Bach on almost 400 examples. In the following sections we propose a decomposition of f into manageable subfunctions.
3.1 TASK DECOMPOSITION
The learning task is decomposed along two dimensions:
Different levels of abstraction. The chord skeleton is obtained if eighth
and sixteenth notes are viewed as omitable ornamentations. Furthermore, if the
chords are conceived as harmonies with certain attributes such as "inversion" or
"characteristic dissonances", the chorale is reducible to its harmonic skeleton, a
thoroughbass-like representation (Figure 2).
Locality in time. The accompaniment is divided into smaller parts, each of which
is learned independently by looking at some local context, a window. Treating
small parts independently certainly hurts global consistency. Some of the dependencies lost can be regained if the current decision window additionally considers
the outcome of its predecessors (external feedback). Figure 3 shows two consecutive
windows cut out from the harmonic skeleton.
To harmonize a chorale, HARMONET starts by learning the harmonic skeleton,
which then is refined to the chord skeleton and finally augmented with ornamenting
quavers (Figure 4, left side).
3.2 THE HARMONIC SKELETON
Chorales have a rich harmonic structure, which is mainly responsible for their "musical appearance". Thus generating a good harmonic skeleton is the most important
of HARMONET's subtasks. HARMONET creates a harmonic sequence by sweeping through the chorale melody and determining a harmony for each quarter note,
considering its local context and the previously found harmonies as input.
At each quarterbeat position t, the following information is extracted to form one
training example:
[Diagram: the training window at quarterbeat position t, containing the harmonies H_{t-3}, H_{t-2}, H_{t-1} and the melody pitches s_{t-1}, s_t, s_{t+1}.]
The target to be learned (the harmony H_t at position t) is marked by the box. The input consists of the harmonic context to the left (the external feedback H_{t-3}, H_{t-2} and H_{t-1}) and the melodic context (pitches s_{t-1}, s_t and s_{t+1}). phr_t contains
¹ To be sure, f is not a function but a relation, since there are many "legal" accompaniments for one melody. For simplicity, we view f as a function.
Figure 2: The chord and the harmonic skeleton of the chorale from Figure 1.
information about the relative position of t to the beginning or end of a musical phrase. str_t is a boolean value indicating whether s_t is a stressed quarter. A harmony H_t has three components: most importantly, the harmonic function relates the key of the harmony to the key of the piece. The inversion indicates the bass note of the harmony. The characteristic dissonances are notes which do not directly belong to the harmony, thus giving it additional tension.
The coding of pitch is decisive for recognizing musically relevant regularities in
the training examples. This problem is discussed in many places (Shepard, 1982)
(Mozer, 1991). We developed a new coding scheme guided by the harmonic necessities of homophonic music pieces: A note s is represented as the set of harmonic
functions that contain s, as shown below:
Fct. | T | D | S | Tp | Sp | Dp | DD | DP | TP | d | Vtp | SS
[Table: for each pitch (rows for C, D, E, ...) a 0/1 membership vector over the twelve harmonic functions; e.g. the row for C begins 1 0 0 1 1 0.]
T, D, S, Tp etc. are standard musical abbreviations to denote harmonic functions.
The resulting representation is distributed with respect to pitch. However, it is local
with respect to harmonic functions. This allows the network to anticipate future
harmonic developments even though there cannot be a lookahead for harmonies yet
uncomposed.
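A minimal sketch of this pitch coding, assuming illustrative membership sets (the paper's table fixes the actual twelve-column entries, which are only partly legible here):

    # Sketch of the harmonic pitch coding. Each pitch class is coded by
    # which harmonic functions contain it; the membership sets below are
    # hypothetical stand-ins, not the paper's exact table.
    FUNCTIONS = ["T", "D", "S", "Tp", "Sp", "Dp", "DD", "DP", "TP", "d", "Vtp", "SS"]

    CONTAINS = {
        "C": {"T", "S", "Tp", "Sp"},   # illustrative: C occurs in these functions
        "D": {"D", "Sp", "DD"},
        "E": {"T", "Dp", "TP"},
    }

    def encode_pitch(pitch):
        """Return the 12-dimensional 0/1 harmonic-function code of a pitch."""
        members = CONTAINS[pitch]
        return [1 if f in members else 0 for f in FUNCTIONS]

    print(encode_pitch("C"))  # [1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

The real system uses one such 12-unit code for each melody pitch in the window.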
Figure 3: The harmonic skeleton broken into local windows. The harmony H_t, determined at quarterbeat position t, becomes part of the input of the window at position t + 1.

Besides the 12 input units for each of the pitches s_{t-1}, s_t and s_{t+1}, we need 12+5+3 = 20 input units for the 3 components of each of the harmonies H_{t-3}, H_{t-2} and H_{t-1}, 9 units to code the phrase information phr_t and 1 unit for the stress str_t. Thus our net has a total of 3 * 12 + 3 * 20 + 9 + 1 = 106 input units and 20 output units. We used one hidden layer with 70 units.
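A small sketch of how such an input vector could be assembled (the function and argument names are our own, not from the paper):

    # Assemble the 106-dimensional net input described above.
    def build_input(pitch_codes, harmony_codes, phrase_code, stressed):
        """pitch_codes:   3 vectors of length 12 (s_{t-1}, s_t, s_{t+1})
           harmony_codes: 3 vectors of length 20 (H_{t-3}, H_{t-2}, H_{t-1})
           phrase_code:   vector of length 9
           stressed:      0 or 1"""
        x = []
        for p in pitch_codes:
            x.extend(p)          # 3 * 12 = 36 units
        for h in harmony_codes:
            x.extend(h)          # 3 * 20 = 60 units
        x.extend(phrase_code)    # 9 units
        x.append(stressed)       # 1 unit
        assert len(x) == 106
        return x

    x = build_input([[0] * 12] * 3, [[0] * 20] * 3, [0] * 9, 1)  # dummy example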
In a more advanced version (Figure 4, right side), we use three nets (N1, N2, N3) in parallel, each of which was trained on windows of different size. The harmonic function for which the majority of these three nets votes is passed to two subsequent nets (N4, N5) determining the chord inversion and characteristic dissonances of the harmony. Using windows of different sizes in parallel employs statistical information to solve the problem of choosing an appropriate window size.
3.3 THE CHORD SKELETON
The task on this level is to find the two middle parts (alto and tenor) given the
soprano S of the chorale melody and the harmony H determined by the neural
nets. Since H includes information about the chord inversion, the pitch of the bass
(modulo its octave) is already given. The problem is tackled with a "generate and
test" approach: Symbolic algorithms select a "best" chord out of the set of all
chords consistent with the given harmony H and common chorale constraints.
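A self-contained sketch of this generate-and-test step; the voice ranges, rule set and cost below are simplified placeholders for the paper's fuller symbolic machinery:

    # Pitches are MIDI numbers; the harmony is a set of allowed pitch classes.
    def candidates(pcs, lo, hi):
        """All pitches in [lo, hi] whose pitch class lies in the harmony."""
        return [p for p in range(lo, hi + 1) if p % 12 in pcs]

    def voice_leading_cost(chord, prev):
        """Sum of absolute melodic steps between consecutive chords."""
        return sum(abs(a - b) for a, b in zip(chord, prev))

    def no_parallel_fifths(chord, prev):
        for i in range(4):
            for j in range(i + 1, 4):
                if (chord[i] - chord[j]) % 12 == 7 and \
                   (prev[i] - prev[j]) % 12 == 7 and chord[i] != prev[i]:
                    return False
        return True

    def harmonize(soprano, bass, pcs, prev):
        """Pick alto/tenor by generating all legal chords and scoring them."""
        best = None
        for alto in candidates(pcs, 55, 74):        # alto range (assumed)
            for tenor in candidates(pcs, 48, 67):   # tenor range (assumed)
                chord = (soprano, alto, tenor, bass)
                if soprano >= alto >= tenor >= bass and no_parallel_fifths(chord, prev):
                    if best is None or voice_leading_cost(chord, prev) < voice_leading_cost(best, prev):
                        best = chord
        return best

    prev = (72, 67, 64, 48)                      # previous C-major chord
    print(harmonize(71, 43, {7, 11, 2}, prev))   # G-major chord under soprano B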
3.4 QUAVER ORNAMENTATIONS
In the last subtask, another net is taught how to add ornamenting eighths to the chord skeleton. The output of this network is the set of eighth notes (if any) by which a particular chord C_t can be augmented. The network's input describes the local context of C_t in terms of attributes such as the intervals between C_t and C_{t+1}, voice leading characteristics, or the presence of eighths in previous chords.
[Figure 4 diagram. Left side: Chorale Melody → Determine Harmonies → Expand Harmonies to Chords → Insert Eighth Notes → Harmonized Chorale. Right side: three parallel nets vote on the harmonic function, followed by nets for the inversion and the characteristic dissonances.]

Figure 4: Left side: Overall structure of HARMONET. Right side: A more specialized architecture with parallel and sequential nets (see text).
4 PERFORMANCE
HARMONET was trained separately on two sets of Bach chorales, each containing
20 chorales in major and minor keys, respectively. By passing the chorales through
a window as explained above, each set amounted to approx. 1000 training examples.
All nets were trained with the error backpropagation algorithm, needing 50 to 100
epochs to achieve reasonable convergence.
Figures 5 and 6 show two harmonizations produced by HARMONET, given melodies
which were not in the training set. An audience of music professionals judged the
quality of these and other chorales produced by HARMONET to be on the level
of an improvising organist. HARMONET also compares well to non-neural approaches. In Figure 6, HARMONET's accompaniment is shown on a chorale melody
also used in the Ph.D. thesis of (Ebcioglu, 1986) to demonstrate the expert system
"CHORAL" .
Figure 5: A chorale in a major key ("Christus, der ist mein Leben") harmonized by HARMONET.
Figure 6: "Happy Birthday" harmonized by HARMONET.
5 CONCLUSIONS
The music processing system HARMONET presented in this paper clearly shows
that musical real-world applications are well within the reach of connectionist approaches. We believe that HARMONET owes much of its success to a clean task
decomposition and a meaningful selection and representation of musically relevant
features. By using a hybrid approach we allow the networks to concentrate on musical essentials instead of on structural constraints which may be hard for a network to
learn but easy to code symbolically. The abstraction of chords to harmonies reduces
the problem space and resembles a musician's problem approach. The "harmonic
representation" of pitch shows the harmonic character of the given melody more
explicitly.
We have also experimented to replace the neural nets in HARMONET by other
learning techniques such as decision trees (ID3) or nearest neighbor classification.
However, as also reported on other tasks (Dietterich et al., 1990), they were outperformed by the neural nets.
HARMONET is not a general music processing system, its architecture is designed
to solve a quite difficult but also quite specific task. However, due to HARMONET's
neural learning component, only a comparatively small amount of musical expert
knowledge was necessary to design the system, making it easier to build and more
flexible than a pure rule based system.
Acknowledgements
We thank Heinz Braun, Heiko Harms and Gudrun Socher for many fruitful discussions and contributions to this research and our music lab.
References
J.S. Bach (Ed.: Bernhard Friedrich Fischer). 389 Choralgesänge für vierstimmigen Chor. Edition Breitkopf, Nr. 3765.
Dietterich, T.G., Hild, H., & Bakiri, G. A comparative study of ID3 and Backpropagation for English Text-to-Speech Mapping. Proc. of the Seventh International Conference on Machine Learning (pp. 24-31). Kaufmann, 1990.
Ebcioglu, K. An Expert System for Harmonization of Chorales in the Style of J.S. Bach. Ph.D. Dissertation, Department of C.S., State University of New York at Buffalo, New York, 1986.
Lischka, C. Understanding Music Cognition. GMD St. Augustin, FRG, 1989.
Mozer, M.C., Soukup, T. Connectionist Music Composition Based on Melodic and Stylistic Constraints. Advances in Neural Information Processing 3 (NIPS 3), R.P. Lippmann, J.E. Moody, D.S. Touretzky (eds.), Kaufmann, 1991.
Shepard, Roger N. Geometrical Approximations to the Structure of Musical Pitch. Psychological Review, Vol. 89, Nr. 4, July 1982.
Todd, Peter M. A Connectionist Approach To Algorithmic Composition. Computer Music Journal, Vol. 13, No. 4, Winter 1989.
| 576 |@word middle:1 version:1 manageable:1 inversion:5 decomposition:3 necessity:1 contains:1 accompaniment:4 feulner:5 current:1 yet:1 subsequent:1 treating:1 designed:1 alone:1 vtp:1 beginning:2 dissertation:1 wolfram:1 harmonize:2 along:1 predecessor:1 consists:1 compose:1 heinz:1 decomposed:1 window:9 considering:1 becomes:1 suffice:1 alto:2 what:1 developed:2 sung:1 shed:1 braun:1 ro:2 unit:7 producing:2 local:5 todd:2 birthday:2 resembles:1 range:1 responsible:2 practice:2 lost:1 backpropagation:4 melodic:2 symbolic:3 cannot:1 selection:1 judged:1 context:5 conventional:1 map:1 demonstrated:1 fruitful:1 musician:1 independently:2 adhering:1 simplicity:1 pure:1 rule:7 importantly:1 century:1 hurt:1 target:1 modulo:1 cut:1 ft:1 reducible:1 bass:3 chord:15 mozer:3 und:1 broken:1 subtask:1 skeleton:13 ui:1 personal:1 ultimately:1 trained:4 creates:1 routinely:1 soprano:2 represented:1 beholder:1 outcome:1 refined:1 quite:4 solve:3 s:1 fischer:1 id3:2 advantage:1 sequence:2 net:16 propose:2 relevant:3 combining:1 achieve:1 lookahead:1 sixteenth:1 convergence:1 regularity:1 r1:1 generating:1 comparative:1 organist:2 nearest:1 minor:1 solves:1 implemented:1 aesthetically:1 concentrate:2 guided:1 hermann:1 attribute:2 human:1 melody:17 frg:1 musically:3 f1:1 creativity:1 harmonization:9 anticipate:1 insert:1 hild:6 great:1 mapping:1 cognition:1 algorithmic:1 major:2 consecutive:1 proc:1 integrates:1 applicable:1 harmony:17 outperformed:1 augustin:1 ofc:1 clearly:1 heiko:1 harmonizing:4 ira:3 fur:2 indicates:1 mainly:1 am:1 abstraction:2 typically:1 lj:1 hidden:1 relation:1 expand:1 germany:1 overall:1 classification:1 ill:1 deduktionssysteme:1 flexible:1 development:1 integration:1 harmonet:25 dissonance:3 nearly:1 future:1 connectionist:4 employ:1 winter:1 certainly:1 nl:1 ornamentation:3 light:1 capable:1 necessary:1 institut:1 owes:1 tree:1 psychological:1 boolean:1 tp:8 phrase:2 recognizing:1 menzel:6 seventh:1 universitat:1 characterize:1 obeyed:1 dependency:1 reported:1 st:8 international:1 moody:1 thesis:1 ear:1 containing:1 external:2 expert:3 style:5 leading:1 de:3 coding:3 student:1 includes:1 explicitly:1 decisive:1 piece:2 view:1 lab:1 observing:1 start:1 parallel:5 ftt:1 contribution:1 musical:10 characteristic:6 kaufmann:2 t3:1 famous:1 produced:2 reach:1 touretzky:1 ed:2 definition:1 pp:1 knowledge:1 tension:1 box:1 though:1 furthermore:1 roger:1 quality:1 believe:1 karlsruhe:2 dietterich:2 contain:1 mozart:1 octave:1 stress:1 demonstrate:1 geometrical:1 harmonic:22 common:1 bookkeeping:1 specialized:1 quarter:2 ji:1 overview:1 shepard:2 jl:1 belong:1 discussed:1 tpr:1 composition:2 approx:1 consistency:1 etc:1 add:1 certain:1 success:1 der:1 regained:1 additional:1 determine:1 july:1 relates:1 d7:1 needing:1 reduces:1 bach:15 offer:1 divided:1 pitch:8 n5:1 audience:1 separately:1 interval:1 uka:3 sure:1 tri:1 seem:1 structural:1 presence:1 aesthetic:2 easy:2 offensive:1 architecture:2 whether:1 passed:1 peter:1 speech:1 passing:1 york:2 jj:8 johannes:2 amount:1 ph:2 gmd:1 generate:2 lsi:1 conceived:1 wr:1 logik:1 promise:1 vol:2 taught:2 ist:1 key:4 four:3 clean:1 ht:2 symbolically:1 you:1 place:1 almost:1 reasonable:1 stylistic:1 improvising:2 decision:2 capturing:1 layer:1 ct:1 tackled:1 constraint:4 n3:1 ri:4 aspect:1 fct:1 uf:1 department:1 creative:1 smaller:1 describes:1 character:1 ur:2 chor:1 describable:1 n4:1 making:2 explained:1 legal:1 subfunctions:1 previously:2 end:1 hierarchical:1 appropriate:2 voice:2 professional:1 original:1 music:14 soukup:1 giving:1 
build:1 bakiri:1 comparatively:1 already:1 nr:2 dp:5 thank:1 majority:1 considers:1 besides:1 code:2 happy:2 difficult:1 design:1 t:1 buffalo:1 looking:1 sweeping:1 subtasks:1 friedrich:1 learned:3 nip:1 below:1 eighth:5 komplexitat:1 power:1 hybrid:1 advanced:1 chorale:36 scheme:2 eye:1 church:1 isn:1 text:2 epoch:1 understanding:1 acknowledgement:1 review:1 determining:2 relative:1 historic:1 consistent:1 dd:2 last:1 free:1 english:1 side:4 allow:1 neighbor:1 correspondingly:1 fifth:2 distributed:1 overcome:1 plain:1 dimension:1 world:2 feedback:2 rich:1 doesn:1 preventing:1 employing:1 lippmann:1 bernhard:1 global:1 harm:1 don:1 tenor:2 additionally:1 learn:2 itt:1 composing:1 composer:1 complex:1 sp:1 edition:1 n2:1 chosing:1 augmented:2 depicts:1 position:5 obeying:1 lie:1 ito:1 dozen:1 specific:1 symbol:1 experimented:1 dl:1 essential:1 socher:1 sequential:1 gap:1 easier:1 locality:1 appearance:1 extracted:1 abbreviation:1 goal:1 viewed:1 marked:1 harmonized:4 replace:1 hard:1 determined:2 wt:1 called:2 total:1 accepted:1 amounted:1 vote:1 meaningful:1 indicating:1 select:1 stressed:1 |
5,258 | 5,760 | Stochastic Expectation Propagation
Yingzhen Li
University of Cambridge
Cambridge, CB2 1PZ, UK
yl494@cam.ac.uk
José Miguel Hernández-Lobato
Harvard University
Cambridge, MA 02138 USA
jmh@seas.harvard.edu
Richard E. Turner
University of Cambridge
Cambridge, CB2 1PZ, UK
ret26@cam.ac.uk
Abstract
Expectation propagation (EP) is a deterministic approximation algorithm that is
often used to perform approximate Bayesian parameter learning. EP approximates
the full intractable posterior distribution through a set of local approximations that
are iteratively refined for each datapoint. EP can offer analytic and computational
advantages over other approximations, such as Variational Inference (VI), and is
the method of choice for a number of models. The local nature of EP appears to
make it an ideal candidate for performing Bayesian learning on large models in
large-scale dataset settings. However, EP has a crucial limitation in this context:
the number of approximating factors needs to increase with the number of datapoints, N, which often entails a prohibitively large memory overhead. This paper
presents an extension to EP, called stochastic expectation propagation (SEP), that
maintains a global posterior approximation (like VI) but updates it in a local way
(like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full
EP, but reduces the memory consumption by a factor of N. SEP is therefore ideally suited to performing approximate Bayesian learning in the large model, large
dataset setting.
1 Introduction
Recently a number of methods have been developed for applying Bayesian learning to large datasets.
Examples include sampling approximations [1, 2], distributional approximations including stochastic variational inference (SVI) [3] and assumed density filtering (ADF) [4], and approaches that mix
distributional and sampling approximations [5, 6]. One family of approximation methods has garnered less attention in this regard: Expectation Propagation (EP) [7, 8]. EP constructs a posterior approximation by iterating simple local computations that refine factors which approximate the posterior contribution from each datapoint. At first sight, it therefore appears well suited to large-data problems: the locality of computation makes the algorithm simple to parallelise and distribute, and good practical performance on a range of small-data applications suggests that it will be accurate [9, 10, 11]. However, the elegance of local computation has been bought at the price of a prohibitive memory overhead that grows with the number of datapoints N, since local approximating factors need to be maintained for every datapoint, each of which typically incurs the same memory overhead as the global approximation. The same pathology exists for the broader class of power EP (PEP) algorithms [12] that includes variational message passing [13]. In contrast, variational inference (VI) methods [14, 15] utilise global approximations that are refined directly, which prevents memory overheads from scaling with N.
Is there ever a case for preferring EP (or PEP) to VI methods for large data? We believe that there
certainly is. First, EP can provide significantly more accurate approximations. It is well known
that variational free-energy approaches are biased, and often severely so [16], and for particular models, such as those with non-smooth likelihood functions [11, 17], the variational free-energy objective is pathologically ill-suited. Second, the fact that EP is truly local (to factors in the posterior distribution and not just likelihoods) means that it affords different opportunities for tractable algorithm
design, as the updates can be simpler to approximate.
As EP appears to be the method of choice for some applications, researchers have attempted to
push it to scale. One approach is to swallow the large computational burden and simply use large
data structures to store the approximating factors (e.g. TrueSkill [18]). This approach can only
be pushed so far. A second approach is to use ADF, a simple variant of EP that only requires a
global approximation to be maintained in memory [19]. ADF, however, provides poorly calibrated
uncertainty estimates [7] which was one of the main motivating reasons for developing EP in the first
place. A third idea, complementary to the one described here, is to use approximating factors that
have simpler structure (e.g. low rank, [20]). This reduces memory consumption (e.g. for Gaussian
factors from O(N D²) to O(N D)), but does not stop the scaling with N. Another idea uses EP to
carve up the dataset [5, 6] using approximating factors for collections of datapoints. This results in
coarse-grained, rather than local, updates and other methods must be used to compute them. (Indeed,
the spirit of [5, 6] is to extend sampling methods to large datasets, not EP itself.)
Can we have the best of both worlds? That is, accurate global approximations that are derived from
truly local computation. To address this question we develop an algorithm based upon the standard
EP and ADF algorithms that maintains a global approximation which is updated in a local way. We
call this class of algorithms Stochastic Expectation Propagation (SEP) since it updates the global
approximation with (damped) stochastic estimates on data sub-samples in an analogous way to SVI.
Indeed, the generalisation of the algorithm to the PEP setting directly relates to SVI. Importantly,
SEP reduces the memory footprint by a factor of N when compared to EP. We further extend the
method to control the granularity of the approximation, and to treat models with latent variables
without compromising on accuracy or unnecessary memory demands. Finally, we demonstrate the
scalability and accuracy of the method on a number of real world and synthetic datasets.
2 Expectation Propagation and Assumed Density Filtering
We begin by briefly reviewing the EP and ADF algorithms upon which our new method is based.
Consider for simplicity observing a dataset comprising N i.i.d. samples D = {x_n}_{n=1}^{N} from a probabilistic model p(x|θ) parametrised by an unknown D-dimensional vector θ that is drawn from a prior p_0(θ). Exact Bayesian inference involves computing the (typically intractable) posterior distribution of the parameters given the data,

    p(θ|D) ∝ p_0(θ) ∏_{n=1}^{N} p(x_n|θ) ≈ q(θ) ∝ p_0(θ) ∏_{n=1}^{N} f_n(θ).    (1)
Here q(θ) is a simpler tractable approximating distribution that will be refined by EP. The goal of EP is to refine the approximate factors so that they capture the contribution of each of the likelihood terms to the posterior, i.e. f_n(θ) ≈ p(x_n|θ). In this spirit, one approach would be to find each approximating factor f_n(θ) by minimising the Kullback-Leibler (KL) divergence between the posterior and the distribution formed by replacing one of the likelihoods by its corresponding approximating factor, KL[p(θ|D) || p(θ|D)f_n(θ)/p(x_n|θ)]. Unfortunately, such an update is still intractable as it involves computing the full posterior. Instead, EP approximates this procedure by replacing the exact leave-one-out posterior p_{-n}(θ) ∝ p(θ|D)/p(x_n|θ) on both sides of the KL by the approximate leave-one-out posterior (called the cavity distribution) q_{-n}(θ) ∝ q(θ)/f_n(θ). Since this couples the updates for the approximating factors, the updates must now be iterated.

In more detail, EP iterates four simple steps. First, the factor selected for update is removed from the approximation to produce the cavity distribution. Second, the corresponding likelihood is included to produce the tilted distribution p̃_n(θ) ∝ q_{-n}(θ)p(x_n|θ). Third, EP updates the approximating factor by minimising KL[p̃_n(θ) || q_{-n}(θ)f_n(θ)]. The hope is that the contribution the true likelihood makes to the posterior is similar to the effect the same likelihood has on the tilted distribution. If the approximating distribution is in the exponential family, as is often the case, then the KL minimisation reduces to a moment matching step [21] that we denote f_n(θ) ← proj[p̃_n(θ)]/q_{-n}(θ). Finally, having updated the factor, it is included into the approximating distribution.
We summarise the update procedure for a single factor in Algorithm 1. Critically, the approximation
step of EP involves local computations since one likelihood term is treated at a time. The assumption
Algorithm 1 EP
1: choose a factor f_n to refine
2: compute cavity distribution q_{-n}(θ) ∝ q(θ)/f_n(θ)
3: compute tilted distribution p̃_n(θ) ∝ p(x_n|θ)q_{-n}(θ)
4: moment matching: f_n(θ) ← proj[p̃_n(θ)]/q_{-n}(θ)
5: inclusion: q(θ) ← q_{-n}(θ)f_n(θ)

Algorithm 2 ADF
1: choose a datapoint x_n ∈ D
2: compute cavity distribution q_{-n}(θ) = q(θ)
3: compute tilted distribution p̃_n(θ) ∝ p(x_n|θ)q_{-n}(θ)
4: moment matching: f_n(θ) ← proj[p̃_n(θ)]/q_{-n}(θ)
5: inclusion: q(θ) ← q_{-n}(θ)f_n(θ)

Algorithm 3 SEP
1: choose a datapoint x_n ∈ D
2: compute cavity distribution q_{-1}(θ) ∝ q(θ)/f(θ)
3: compute tilted distribution p̃_n(θ) ∝ p(x_n|θ)q_{-1}(θ)
4: moment matching: f_n(θ) ← proj[p̃_n(θ)]/q_{-1}(θ)
5: inclusion: q(θ) ← q_{-1}(θ)f_n(θ)
6: implicit update: f(θ) ← f(θ)^{1-1/N} f_n(θ)^{1/N}

Figure 1: Comparing the Expectation Propagation (EP), Assumed Density Filtering (ADF), and Stochastic Expectation Propagation (SEP) update steps. Typically, the algorithms will be initialised using q(θ) = p_0(θ) and, where appropriate, f_n(θ) = 1 or f(θ) = 1.
is that these local computations, although possibly requiring further approximation, are far simpler to handle compared to the full posterior p(θ|D). In practice, EP often performs well when the updates are parallelised. Moreover, by using approximating factors for groups of datapoints, and then running additional approximate inference algorithms to perform the EP updates (which could include nesting EP), EP carves up the data, making it suitable for distributed approximate inference.

There is, however, one wrinkle that complicates deployment of EP at scale. Computation of the cavity distribution requires removal of the current approximating factor, which means any implementation of EP must store them explicitly, necessitating an O(N) memory footprint. One option is to simply ignore the removal step, replacing the cavity distribution with the full approximation, resulting in the ADF algorithm (Algorithm 2) that needs only maintain a global approximation in memory. But as the moment matching step now over-counts the underlying approximating factor (consider the new form of the objective KL[q(θ)p(x_n|θ) || q(θ)]), the variance of the approximation shrinks to zero as multiple passes are made through the dataset. Early stopping is therefore required to prevent overfitting and, generally speaking, ADF does not return uncertainties that are well-calibrated to the posterior. In the next section we introduce a new algorithm that sidesteps EP's large memory demands whilst avoiding the pathological behaviour of ADF.
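To make the memory issue concrete, here is a minimal sketch of EP, assuming a toy model in which each likelihood term is itself Gaussian so that the moment-matching projection is exact; the point is the bookkeeping, namely that EP stores one approximating factor per datapoint:

    import numpy as np

    # Toy EP: posterior over a scalar mean theta with unit-variance Gaussian
    # likelihoods. Natural parameters: (precision r, precision-times-mean mu).
    np.random.seed(0)
    x = np.random.randn(100) + 2.0           # N = 100 observations
    r_q, mu_q = 1.0, 0.0                     # global q, initialised to the N(0, 1) prior

    # EP must store one approximating factor per datapoint: O(N) memory.
    r_f = np.zeros_like(x)
    mu_f = np.zeros_like(x)

    for sweep in range(5):
        for n in range(len(x)):
            # cavity: remove factor n from q
            r_cav, mu_cav = r_q - r_f[n], mu_q - mu_f[n]
            # tilted distribution and moment matching; the likelihood
            # N(x_n | theta, 1) is Gaussian here, so the projection is exact
            r_new, mu_new = r_cav + 1.0, mu_cav + x[n]
            # inclusion: update the stored factor and the global q
            r_f[n], mu_f[n] = r_new - r_cav, mu_new - mu_cav
            r_q, mu_q = r_new, mu_new

    print(mu_q / r_q, 1.0 / r_q)             # posterior mean and variance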
3 Stochastic Expectation Propagation
In this section we introduce a new algorithm, inspired by EP, called Stochastic Expectation Propagation (SEP) that combines the benefits of local approximation (tractability of updates, distributability, and parallelisability) with global approximation (reduced memory demands). The algorithm can be interpreted as a version of EP in which the approximating factors are tied, or alternatively as a corrected version of ADF that prevents overfitting. The key idea is that, at convergence, the approximating factors in EP can be interpreted as parameterising a global factor, f(θ), that captures the average effect of a likelihood on the posterior, f(θ)^N = ∏_{n=1}^{N} f_n(θ) ≈ ∏_{n=1}^{N} p(x_n|θ). In this spirit, the new algorithm employs direct iterative refinement of a global approximation comprising the prior and N copies of a single approximating factor, f(θ), that is q(θ) ∝ f(θ)^N p_0(θ).
SEP uses updates that are analogous to EP's in order to refine f(θ) in such a way that it captures the average effect a likelihood function has on the posterior. First the cavity distribution is formed by removing one of the copies of the factor, q_{-1}(θ) ∝ q(θ)/f(θ). Second, the corresponding likelihood is included to produce the tilted distribution p̃_n(θ) ∝ q_{-1}(θ)p(x_n|θ) and, third, SEP finds an intermediate factor approximation by moment matching, f_n(θ) ← proj[p̃_n(θ)]/q_{-1}(θ). Finally, having updated the factor, it is included into the approximating distribution. It is important here not to make a full update, since f_n(θ) captures the effect of just a single likelihood function p(x_n|θ). Instead, damping should be employed to make a partial update f(θ) ← f(θ)^{1-ε} f_n(θ)^{ε}. A natural choice uses ε = 1/N, which can be interpreted as minimising KL[p̃_n(θ) || p_0(θ)f(θ)^N] in the moment update, but other choices of ε may be more appropriate, including decreasing ε according to the Robbins-Monro condition [22].
SEP is summarised in Algorithm 3. Unlike ADF, the cavity is formed by dividing out f(θ), which captures the average effect of the likelihood and prevents the posterior from collapsing. Like ADF, however, SEP only maintains the global approximation q(θ), since f(θ) ∝ (q(θ)/p_0(θ))^{1/N} and q_{-1}(θ) ∝ q(θ)^{1-1/N} p_0(θ)^{1/N}. When Gaussian approximating factors are used, for example, SEP reduces the storage requirement of EP from O(N D²) to O(D²), which is a substantial saving that enables models with many parameters to be applied to large datasets.
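For contrast with the EP sketch above, a minimal sketch of SEP on the same toy Gaussian model: only the global natural parameters and one tied factor are kept, and the factor is refined with the damped implicit update from Algorithm 3.

    import numpy as np

    # Toy SEP: only the global q and ONE tied factor f are kept, so memory
    # is O(1) in N. Damping with eps = 1/N.
    np.random.seed(0)
    x = np.random.randn(100) + 2.0
    N = len(x)
    r_q, mu_q = 1.0, 0.0                     # q = N(0, 1) prior initially
    r_f, mu_f = 0.0, 0.0                     # tied factor f (natural params)
    eps = 1.0 / N

    for sweep in range(20):
        for xn in np.random.permutation(x):
            # cavity: remove ONE copy of the tied factor
            r_cav, mu_cav = r_q - r_f, mu_q - mu_f
            # moment matching against the tilted distribution (exact here);
            # the intermediate factor f_n has natural parameters (1, x_n)
            r_n, mu_n = 1.0, xn
            # damped implicit update of the tied factor: f <- f^{1-eps} f_n^{eps}
            r_f = (1 - eps) * r_f + eps * r_n
            mu_f = (1 - eps) * mu_f + eps * mu_n
            # global approximation q = p0 * f^N
            r_q, mu_q = 1.0 + N * r_f, 0.0 + N * mu_f

    print(mu_q / r_q, 1.0 / r_q)             # close to the EP/exact posterior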
4 Algorithmic extensions to SEP and theoretical results
SEP has been motivated from a practical perspective by the limitations inherent in EP and ADF. In this section we extend SEP in four orthogonal directions and relate SEP to SVI. Many of the algorithms described here are summarised in Figure 2 and they are detailed in the supplementary material.
4.1 Parallel SEP: relating the EP fixed points to SEP
The SEP algorithm outlined above approximates one likelihood at a time, which can be computationally slow. However, it is simple to parallelise the SEP updates by following the same recipe by which EP is parallelised. Consider a minibatch comprising M datapoints (for a full parallel batch update use M = N). First we form the cavity distribution for each likelihood. Unlike EP, these are all identical. Next, in parallel, compute M intermediate factors f_m(θ) ← proj[p̃_m(θ)]/q_{-1}(θ). In EP these intermediate factors become the new likelihood approximations and the approximation is updated to q(θ) = p_0(θ) ∏_{n≠m} f_n(θ) ∏_m f_m(θ). In SEP, the same update is used for the approximating distribution, which becomes q(θ) ∝ p_0(θ) f_old(θ)^{N-M} ∏_m f_m(θ) and, by implication, the approximating factor is f_new(θ) = f_old(θ)^{1-M/N} ∏_{m=1}^{M} f_m(θ)^{1/N}. One way of understanding parallel SEP is as a double loop algorithm. The inner loop produces intermediate approximations q_m(θ) ← argmin_q KL[p̃_m(θ) || q(θ)]; these are then combined in the outer loop: q(θ) ← argmin_q Σ_{m=1}^{M} KL[q(θ) || q_m(θ)] + (N - M) KL[q(θ) || q_old(θ)].

For M = 1 parallel SEP reduces to the original SEP algorithm. For M = N parallel SEP is equivalent to the so-called Averaged EP algorithm proposed in [23] as a theoretical tool to study the convergence properties of normal EP. This work showed that, under fairly restrictive conditions (likelihood functions that are log-concave and varying slowly as a function of the parameters), AEP converges to the same fixed points as EP in the large data limit (N → ∞).
There is another illuminating connection between SEP and AEP. Since SEP's approximating factor f(θ) converges to the geometric average of the intermediate factors, f̃(θ) ∝ [∏_{n=1}^{N} f_n(θ)]^{1/N}, SEP converges to the same fixed points as AEP if the learning rates satisfy the Robbins-Monro condition [22], and therefore, under certain conditions [23], to the same fixed points as EP. But it is still an open question whether there are more direct relationships between EP and SEP.
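In natural parameters the parallel SEP combination rule is a simple convex update. A small illustrative helper, assuming Gaussian factors stored as (precision, precision-times-mean) pairs:

    # f_new = f_old^{1 - M/N} * prod_m f_m^{1/N}; in natural parameters this
    # is just a weighted average (illustrative helper, not library code).
    def parallel_sep_update(r_f, mu_f, factors, N):
        M = len(factors)
        r_new = (1 - M / N) * r_f + sum(r for r, _ in factors) / N
        mu_new = (1 - M / N) * mu_f + sum(m for _, m in factors) / N
        return r_new, mu_new

    # e.g. parallel_sep_update(0.5, 1.0, [(1.0, 2.0), (1.0, 1.8)], N=100)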
4.2 Stochastic power EP: relationships to variational methods
The relationship between variational inference and stochastic variational inference [3] mirrors the
relationship between EP and SEP. Can these relationships be made more formal? If the moment
projection step in EP is replaced by a natural parameter matching step then the resulting algorithm
is equivalent to the Variational Message Passing (VMP) algorithm [24] (and see supplementary
material). Moreover, VMP has the same fixed points as variational inference [13] (since minimising
the local variational KL divergences is equivalent to minimising the global variational KL).
These results carry over to the new algorithms with minor modifications. Specifically, VMP can be transformed into SVMP by replacing VMP's local approximations with the global form employed by SEP. In the supplementary material we show that this algorithm is an instance of standard SVI and that it therefore has the same fixed points as VI when ε satisfies the Robbins-Monro condition [22]. More generally, the procedure can be applied to any member of the power EP (PEP) [12] family of algorithms which replace the moment projection step in EP with alpha-divergence minimization
[Figure 2 diagram. Panel (A), "Relationships between algorithms", connects VI, VMP, PEP, EP, SEP, AEP and AVMP and their parallel variants through three choices: the alpha-divergence used in the updates (a = -1 recovers the variational methods, a = 1 the EP family), the number of approximating factors (K = 1 up to K = N), and parallel minibatch updates (M = 1 up to M = N). Panel (B), "Relationships between fixed points", marks which algorithms share fixed points exactly, which stochastic methods share them, and which agree only in the large data limit (conditions apply).]

AEP: Averaged EP; AVMP: Averaged VMP; EP: Expectation Propagation; PEP: Power EP; SEP: Stochastic EP; SVMP: Stochastic VMP; par-EP: EP with parallel updates; par-SEP: SEP with parallel updates; par-VMP: VMP with parallel updates; VI: Variational Inference; VMP: Variational Message Passing.

Figure 2: Relationships between algorithms. Note that care needs to be taken when interpreting the alpha-divergence as a → -1 (see supplementary material).
[21], but care has to be taken when taking the limiting cases (see supplementary). These results lend
weight to the view that SEP is a natural stochastic generalisation of EP.
4.3 Distributed SEP: controlling granularity of the approximation
EP uses a fine-grained approximation comprising a single factor for each likelihood. SEP, on the other hand, uses a coarse-grained approximation comprising a single global factor to approximate the average effect of all likelihood terms. One might worry that SEP's approximation is too severe if the dataset contains sets of datapoints that have very different likelihood contributions (e.g. for odd-vs-even handwritten digits classification, consider the effect of a 5 and a 9 on the posterior). It might be more sensible in such cases to partition the dataset into K disjoint pieces {D_k = {x_n}_{n=N_{k-1}+1}^{N_k}}_{k=1}^{K} with N = Σ_{k=1}^{K} N_k and use an approximating factor for each partition. If normal EP updates are performed on the subsets, i.e. treating p(D_k|θ) as a single true factor to be approximated, we arrive at the Distributed EP algorithm [5, 6]. But such updates are challenging as multiple likelihood terms must be included during each update, necessitating additional approximations (e.g. MCMC). A simpler alternative uses SEP/AEP inside each partition, implying a posterior approximation of the form q(θ) ∝ p_0(θ) ∏_{k=1}^{K} f_k(θ)^{N_k} with f_k(θ)^{N_k} approximating p(D_k|θ). The limiting cases of this algorithm, when K = 1 and K = N, recover SEP and EP respectively.
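A sketch of the DSEP bookkeeping under the same Gaussian natural-parameter convention as the earlier sketches (the class and variable names are illustrative):

    # q = p0 * prod_k f_k^{N_k}; memory is O(K), between SEP (K=1) and EP (K=N).
    class DSEPState:
        def __init__(self, r_prior, mu_prior, sizes):
            self.r_prior, self.mu_prior = r_prior, mu_prior
            self.sizes = sizes                   # N_k for each partition
            self.r_f = [0.0] * len(sizes)        # one tied factor per partition
            self.mu_f = [0.0] * len(sizes)

        def global_q(self):
            r = self.r_prior + sum(n * r for n, r in zip(self.sizes, self.r_f))
            mu = self.mu_prior + sum(n * m for n, m in zip(self.sizes, self.mu_f))
            return r, mu

    state = DSEPState(1.0, 0.0, sizes=[100, 100])
    print(state.global_q())                      # (1.0, 0.0) before refinement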
4.4 SEP with latent variables
Many applications of EP involve latent variable models. Although this is not the main focus of the paper, we show that SEP is applicable in this case without scaling the memory footprint with N. Consider a model containing hidden variables, h_n, associated with each observation p(x_n, h_n|θ), that are drawn i.i.d. from a prior p_0(h_n). The goal is to approximate the true posterior over parameters and hidden variables, p(θ, {h_n}|D) ∝ p_0(θ) ∏_n p_0(h_n)p(x_n|h_n, θ). Typically, EP would approximate the effect of each intractable term as p(x_n|h_n, θ)p_0(h_n) ≈ f_n(θ)g_n(h_n). Instead, SEP ties the approximate parameter factors, p(x_n|h_n, θ)p_0(h_n) ≈ f(θ)g_n(h_n), yielding:

    q(θ, {h_n}) ∝ p_0(θ) f(θ)^N ∏_{n=1}^{N} g_n(h_n).    (2)

Critically, as proved in the supplementary material, the local factors g_n(h_n) do not need to be maintained in memory. This means that all of the advantages of SEP carry over to more complex models involving latent variables, although this can potentially increase computation time in cases where updates for g_n(h_n) are not analytic, since then they will be initialised from scratch at each update.
5 Experiments
The purpose of the experiments was to evaluate SEP on a number of datasets (synthetic and real-world, small and large) and on a number of models (probit regression, mixture of Gaussians and Bayesian neural networks).
5.1 Bayesian probit regression
The first experiments considered a simple Bayesian classification problem and investigated the stability and quality of SEP in relation to EP and ADF, as well as the effect of using minibatches and varying the granularity of the approximation. The model comprised a probit likelihood function P(y_n = 1|θ) = Φ(θᵀx_n) and a Gaussian prior over the hyper-plane parameter, p(θ) = N(θ; 0, γI). The synthetic data comprised N = 5,000 datapoints {(x_n, y_n)}, where the x_n were D = 4 dimensional and were either sampled from a single Gaussian distribution (Fig. 3(a)) or from a mixture of Gaussians (MoGs) with J = 5 components (Fig. 3(b)), to investigate the sensitivity of the methods to the homogeneity of the dataset. The labels were produced by sampling from the generative model. We followed [6], measuring the performance by computing an approximation of KL[p(θ|D) || q(θ)], where p(θ|D) was replaced by a Gaussian that had the same mean and covariance as samples drawn from the posterior using the No-U-Turn sampler (NUTS) [25], to quantify the calibration of the uncertainty estimates.
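For this probit model the moments of the tilted distribution are available in closed form; these are the standard probit-EP moment formulas (see e.g. Rasmussen and Williams, ch. 3), not expressions printed in this paper. A sketch:

    import numpy as np
    from scipy.stats import norm

    # Mean/covariance of the tilted distribution q_cav(theta) * Phi(y * theta^T x).
    # Numerical safeguards (e.g. for norm.cdf(z) near 0) are omitted.
    def tilted_moments(m, V, x, y):
        """m, V: cavity mean/covariance; x: input; y: label in {-1, +1}."""
        Vx = V @ x
        s2 = 1.0 + x @ Vx                      # var of theta^T x, plus probit noise
        z = y * (m @ x) / np.sqrt(s2)
        ratio = norm.pdf(z) / norm.cdf(z)
        m_new = m + Vx * (y * ratio / np.sqrt(s2))
        V_new = V - np.outer(Vx, Vx) * (ratio * (z + ratio) / s2)
        return m_new, V_new

    m0, V0 = np.zeros(4), np.eye(4)
    print(tilted_moments(m0, V0, np.ones(4), +1))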
Results in Fig. 3(a) indicate that EP is the best performing method and that ADF collapses towards a
delta function. SEP converges to a solution which appears to be of similar quality to that obtained by
EP for the dataset containing Gaussian inputs, but slightly worse when the MoGs was used. Variants
of SEP that used larger mini-batches fluctuated less, but typically took longer to converge (although
for the small minibatches shown this effect is not clear). The utility of finer grained approximations
depended on the homogeneity of the data. For the second dataset containing MoG inputs (shown in
Fig. 3(b)), finer-grained approximations were found to be advantageous if the datapoints from each
mixture component are assigned to the same approximating factor. Generally it was found that there
is no advantage to retaining more approximating factors than there were clusters in the dataset.
To verify whether these conclusions about the granularity of the approximation hold in real datasets,
we sampled N = 1,000 datapoints for each of the digits in MNIST and performed odd-vs-even
classification. Each digit class was assigned its own global approximating factor, K = 10. We
compare the log-likelihood of a test set using ADF, SEP (K = 1), full EP and DSEP (K = 10)
in Figure 3(c). EP and DSEP significantly outperform ADF. DSEP is slightly worse than full EP
initially; however, it reduces the memory to 0.001% of full EP without losing accuracy substantially. SEP's accuracy was still increasing at the end of learning and was slightly better than ADF's. Further empirical comparisons are reported in the supplementary material, and in summary the three EP methods are
indistinguishable when likelihood functions have similar contributions to the posterior.
Finally, we tested SEP's performance on six small binary classification datasets from the UCI machine learning repository.¹ We did not consider the effect of mini-batches or the granularity of the
approximation, using K = M = 1. We ran the tests with damping and stopped learning after
convergence (by monitoring the updates of approximating factors). The classification results are
summarised in Table 1. ADF performs reasonably well on the mean classification error metric,
presumably because it tends to learn a good approximation to the posterior mode. However, the posterior variance is poorly approximated and therefore ADF returns poor test log-likelihood scores. EP
achieves significantly higher test log-likelihood than ADF, indicating that a superior approximation
to the posterior variance is attained. Crucially, SEP performs very similarly to EP, implying that SEP
is an accurate alternative to EP even though it is refining a cheaper global posterior approximation.
5.2 Mixture of Gaussians for clustering
The small scale experiments on probit regression indicate that SEP performs well for fully-observed
probabilistic models. Although it is not the main focus of the paper, we sought to test the flexibility
of the method by applying it to a latent variable model, specifically a mixture of Gaussians. A synthetic MoGs dataset containing N = 200 datapoints was constructed comprising J = 4 Gaussians.
¹ https://archive.ics.uci.edu/ml/index.html
Figure 3: Bayesian logistic regression experiments. Panels (a) and (b) show synthetic data experiments. Panel (c) shows the results on MNIST (see text for full details).
Table 1: Average test results for all methods on probit regression. All methods appear to capture the posterior's mode; however, EP outperforms ADF in terms of test log-likelihood on almost all of the datasets, with SEP performing similarly to EP.

Dataset    | mean error: ADF | mean error: SEP | mean error: EP | test LL: ADF | test LL: SEP | test LL: EP
Australian | 0.328±0.0127    | 0.325±0.0135    | 0.330±0.0133   | -0.634±0.010 | -0.631±0.009 | -0.631±0.009
Breast     | 0.037±0.0045    | 0.034±0.0034    | 0.034±0.0039   | -0.100±0.015 | -0.094±0.011 | -0.093±0.011
Crabs      | 0.056±0.0133    | 0.033±0.0099    | 0.036±0.0113   | -0.242±0.012 | -0.125±0.013 | -0.110±0.013
Ionos      | 0.126±0.0166    | 0.130±0.0147    | 0.131±0.0149   | -0.373±0.047 | -0.336±0.029 | -0.324±0.028
Pima       | 0.242±0.0093    | 0.244±0.0098    | 0.241±0.0093   | -0.516±0.013 | -0.514±0.012 | -0.513±0.012
Sonar      | 0.198±0.0208    | 0.198±0.0217    | 0.198±0.0243   | -0.461±0.053 | -0.418±0.021 | -0.415±0.021
The means were sampled from a Gaussian distribution, p(μ_j) = N(μ; m, I), the cluster identity variables were sampled from a uniform categorical distribution, p(h_n = j) = 1/4, and each mixture component was isotropic, p(x_n|h_n) = N(x_n; μ_{h_n}, 0.5²I). EP, ADF and SEP were performed to approximate the joint posterior over the cluster means {μ_j} and cluster identity variables {h_n} (the other parameters were assumed known).
Figure 4(a) visualises the approximate posteriors after 200 iterations. All methods return good
estimates for the means, but ADF collapses towards a point estimate as expected. SEP, in contrast,
captures the uncertainty and returns nearly identical approximations to EP. The accuracy of the
methods is quantified in Fig. 4(b) by comparing the approximate posteriors to those obtained from
NUTS. In this case the approximate KL-divergence measure is analytically intractable; instead, we
used the averaged F-norm of the difference of the Gaussian parameters fitted by NUTS and EP
methods. These measures confirm that SEP approximates EP well in this case.
5.3 Probabilistic backpropagation
The final set of tests considers more complicated models and large datasets. Specifically, we evaluate the methods for probabilistic backpropagation (PBP) [4], a recent state-of-the-art method for scalable Bayesian learning in neural network models. Previous implementations of PBP perform
several iterations of ADF over the training data. The moment matching operations required by ADF
are themselves intractable and they are approximated by first propagating the uncertainty on the
synaptic weights forward through the network in a sequential way, and then computing the gradient
of the marginal likelihood by backpropagation. ADF is used to reduce the large memory cost that
would be required by EP when the amount of available data is very large.
We performed several experiments to assess the accuracy of different implementations of PBP based
on ADF, SEP and EP on regression datasets following the same experimental protocol as in [4] (see
supplementary material). We considered neural networks with 50 hidden units (except for Year and
Protein which we used 100). Table 2 shows the average test RMSE and test log-likelihood for each
method. Interestingly, SEP can outperform EP in this setting (possibly because the stochasticity
enabled it to find better solutions), and typically it performed similarly. Memory reductions using
Figure 4: Posterior approximation for the mean of the Gaussian components. (a) visualises posterior approximations over the cluster means (98% confidence level). The coloured dots indicate the true label (top-left) or the inferred cluster assignments (the rest). In (b) we show the error (in F-norm) of the approximate Gaussians' means (top) and covariances (bottom).
Table 2: Average test results for all methods. Datasets are also from the UCI machine learning repository.

Dataset | RMSE: ADF    | RMSE: SEP    | RMSE: EP     | test LL: ADF | test LL: SEP | test LL: EP
Kin8nm  | 0.098±0.0007 | 0.088±0.0009 | 0.089±0.0006 | 0.896±0.006  | 1.013±0.011  | 1.005±0.007
Naval   | 0.006±0.0000 | 0.002±0.0000 | 0.004±0.0000 | 3.731±0.006  | 4.590±0.014  | 4.207±0.011
Power   | 4.124±0.0345 | 4.165±0.0336 | 4.191±0.0349 | -2.837±0.009 | -2.846±0.008 | -2.852±0.008
Protein | 4.727±0.0112 | 4.670±0.0109 | 4.748±0.0137 | -2.973±0.003 | -2.961±0.003 | -2.979±0.003
Wine    | 0.635±0.0079 | 0.650±0.0082 | 0.637±0.0076 | -0.968±0.014 | -0.976±0.013 | -0.958±0.011
Year    | 8.879±NA     | 8.922±NA     | 8.914±NA     | -3.603±NA    | -3.924±NA    | -3.929±NA
SEP instead of EP were large, e.g. 694 MB for the Protein dataset and 65,107 MB for the Year dataset (see supplementary material). Surprisingly, ADF often outperformed EP, although the results presented for ADF use a near-optimal number of sweeps and further iterations generally degraded performance. ADF's good performance is most likely due to an interaction with the additional moment approximation required in PBP, which is more accurate as the number of factors increases.
6 Conclusions and future work
This paper has presented the stochastic expectation propagation method for reducing EP's large
memory consumption which is prohibitive for large datasets. We have connected the new algorithm
to a number of existing methods including assumed density filtering, variational message passing,
variational inference, stochastic variational inference and averaged EP. Experiments on Bayesian
logistic regression (both synthetic and real world) and mixture of Gaussians clustering indicated
that the new method had an accuracy that was competitive with EP. Experiments on the probabilistic
back-propagation on large real world regression datasets again showed that SEP comparably to
EP with a vastly reduced memory footprint. Future experimental work will focus on developing
data-partitioning methods to leverage finer-grained approximations (DESP) that showed promising
experimental performance and also mini-batch updates. There is also a need for further theoretical
understanding of these algorithms, and indeed EP itself. Theoretical work will study the convergence
properties of the new algorithms for which we only have limited results at present. Systematic
comparisons of EP-like algorithms and variational methods will guide practitioners to choosing the
appropriate scheme for their application.
Acknowledgements
We thank the reviewers for valuable comments. YL thanks the Schlumberger Foundation Faculty for the Future fellowship for supporting her PhD study. JMHL acknowledges support from the Rafael del Pino Foundation. RET thanks EPSRC grants EP/G050821/1 and EP/L000776/1.
References
[1] Sungjin Ahn, Babak Shahbaba, and Max Welling. Distributed stochastic gradient MCMC. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1044-1052, 2014.
[2] Rémi Bardenet, Arnaud Doucet, and Chris Holmes. Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 405-413, 2014.
[3] Matthew D. Hoffman, David M. Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[4] José Miguel Hernández-Lobato and Ryan P. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. arXiv:1502.05336, 2015.
[5] Andrew Gelman, Aki Vehtari, Pasi Jylänki, Christian Robert, Nicolas Chopin, and John P. Cunningham. Expectation propagation as a way of life. arXiv:1412.4869, 2014.
[6] Minjie Xu, Balaji Lakshminarayanan, Yee Whye Teh, Jun Zhu, and Bo Zhang. Distributed Bayesian posterior sampling via moment sharing. In NIPS, 2014.
[7] Thomas P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, volume 17, pages 362-369, 2001.
[8] Manfred Opper and Ole Winther. Expectation consistent approximate inference. The Journal of Machine Learning Research, 6:2177-2204, 2005.
[9] Malte Kuss and Carl Edward Rasmussen. Assessing approximate inference for binary Gaussian process classification. The Journal of Machine Learning Research, 6:1679-1704, 2005.
[10] Simon Barthelmé and Nicolas Chopin. Expectation propagation for likelihood-free inference. Journal of the American Statistical Association, 109(505):315-333, 2014.
[11] John P. Cunningham, Philipp Hennig, and Simon Lacoste-Julien. Gaussian probabilities and expectation propagation. arXiv preprint arXiv:1111.6832, 2011.
[12] Thomas P. Minka. Power EP. Technical Report MSR-TR-2004-149, Microsoft Research, Cambridge, 2004.
[13] John M. Winn and Christopher M. Bishop. Variational message passing. In Journal of Machine Learning Research, pages 661-694, 2005.
[14] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[15] Matthew James Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, University of London, 2003.
[16] Richard E. Turner and Maneesh Sahani. Two problems with variational expectation maximisation for time-series models. In D. Barber, T. Cemgil, and S. Chiappa, editors, Bayesian Time Series Models, chapter 5, pages 109-130. Cambridge University Press, 2011.
[17] Richard E. Turner and Maneesh Sahani. Probabilistic amplitude and frequency demodulation. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 981-989. 2011.
[18] Ralf Herbrich, Tom Minka, and Thore Graepel. TrueSkill: A Bayesian skill rating system. In Advances in Neural Information Processing Systems, pages 569-576, 2006.
[19] Peter S. Maybeck. Stochastic Models, Estimation and Control. Academic Press, 1982.
[20] Yuan Qi, Ahmed H. Abdel-Gawad, and Thomas P. Minka. Sparse-posterior Gaussian processes for general likelihoods. In Uncertainty in Artificial Intelligence (UAI), 2010.
[21] Shun-ichi Amari and Hiroshi Nagaoka. Methods of Information Geometry, volume 191. Oxford University Press, 2000.
[22] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[23] Guillaume Dehaene and Simon Barthelmé. Expectation propagation in the large-data limit. arXiv:1503.08060, 2015.
[24] Thomas Minka. Divergence measures and message passing. Technical Report MSR-TR-2005-173, Microsoft Research, Cambridge, 2005.
[25] Matthew D. Hoffman and Andrew Gelman. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 15(1):1593-1623, 2014.
| 5760 |@word repository:2 version:2 faculty:1 briefly:1 advantageous:1 norm:2 msr:2 open:1 plication:1 d2:3 crucially:1 covariance:2 p0:17 g050821:1 tr:2 carry:2 reduction:1 moment:12 contains:1 score:1 series:2 interestingly:1 outperforms:1 existing:1 trueskill:2 current:1 comparing:2 must:4 john:4 fn:24 tilted:6 partition:3 analytic:2 enables:1 christian:1 treating:1 update:34 v:2 implying:2 generative:1 prohibitive:2 selected:1 intelligence:2 plane:1 isotropic:1 hamiltonian:1 manfred:1 blei:1 provides:1 coarse:2 iterates:1 philipp:1 herbrich:1 simpler:5 zhang:1 mathematical:1 constructed:1 direct:2 become:1 yuan:1 overhead:4 combine:1 inside:1 introduce:2 expected:1 indeed:3 themselves:1 inspired:1 decreasing:1 increasing:1 becomes:1 begin:1 moreover:2 underlying:1 panel:2 xed:1 interpreted:3 substantially:1 developed:1 whilst:1 ret:1 every:1 concave:1 tie:1 prohibitively:1 qm:2 uk:4 control:2 unit:1 partitioning:1 grant:1 yn:2 appear:1 local:16 treat:1 tends:1 limit:3 severely:1 depended:1 cemgil:1 sutton:1 oxford:1 path:1 might:2 quantified:1 challenging:1 deployment:1 collapse:2 limited:1 range:1 averaged:5 practical:2 practice:1 maximisation:1 backpropagation:4 cb2:2 svi:5 footprint:4 procedure:3 digit:3 empirical:1 maneesh:2 significantly:3 wrinkle:1 matching:8 projection:2 confidence:1 suggest:1 protein:3 zoubin:1 gelman:2 storage:1 context:1 applying:2 yee:1 equivalent:3 deterministic:1 reviewer:1 gawad:1 lobato:2 attention:1 minq:2 simplicity:1 holmes:1 nesting:1 importantly:1 datapoints:10 enabled:1 ralf:1 stability:1 handle:1 analogous:2 updated:4 limiting:2 controlling:1 annals:1 exact:2 losing:1 carl:1 us:6 jmhl:1 harvard:2 approximated:3 swallow:1 balaji:1 distributional:2 ep:107 observed:1 bottom:1 epsrc:1 preprint:1 wang:1 capture:7 connected:1 removed:1 valuable:1 ran:1 substantial:1 vehtari:1 ideally:1 cam:2 babak:1 reviewing:1 incur:1 upon:2 sep:70 joint:1 chapter:1 london:1 monte:2 ole:1 artificial:2 zemel:1 hiroshi:1 hyper:1 choosing:1 refined:3 supplementary:9 larger:1 amari:1 statistic:1 nagaoka:1 itself:2 final:1 beal:1 advantage:3 took:1 interaction:1 mb:2 uci:3 loop:3 poorly:2 flexibility:1 dsep:3 scalability:1 recipe:1 convergence:4 double:1 requirement:1 cluster:6 sea:1 produce:4 assessing:1 adam:1 leave:2 converges:4 develop:1 ac:2 propagating:1 chiappa:1 miguel:2 andrew:2 odd:2 minor:1 edward:1 dividing:1 involves:3 indicate:4 australian:1 quantify:1 tommi:1 direction:1 compromising:1 stochastic:20 material:5 qold:1 shun:1 behaviour:1 andez:2 ryan:1 fnew:1 extension:2 hold:1 crab:1 considered:2 ic:1 normal:2 presumably:1 lawrence:1 algorithmic:1 matthew:3 achieves:1 early:1 sought:1 wine:1 purpose:1 estimation:2 outperformed:1 applicable:1 label:2 pasi:1 robbins:4 tool:1 hoffman:2 hope:1 minimization:1 gaussian:12 sight:1 rather:1 pn:7 varying:2 ret26:1 broader:1 jaakkola:1 minimisation:1 derived:1 focus:3 refining:1 naval:1 rank:1 likelihood:33 contrast:2 inference:18 stopping:1 typically:6 initially:1 hidden:3 relation:1 her:1 proj:6 transformed:1 chopin:2 comprising:6 cunningham:2 arg:2 classification:7 ill:1 html:1 retaining:1 art:1 fairly:1 marginal:1 construct:1 saving:1 having:2 sampling:5 identical:2 ionos:1 vmp:12 nearly:1 icml:2 future:3 summarise:1 report:2 richard:3 employ:1 inherent:1 pathological:1 divergence:7 homogeneity:2 cheaper:1 replaced:2 geometry:1 maintain:1 william:1 schlumberger:1 microsoft:2 message:6 investigate:1 certainly:1 severe:1 chong:1 truly:2 mixture:7 yielding:1 parametrised:1 damped:1 parameterising:1 visualises:2 
chain:1 accurate:5 partial:1 orthogonal:1 damping:2 taylor:1 theoretical:4 complicates:1 stopped:1 instance:1 fitted:1 gn:5 measuring:1 assignment:1 tractability:1 cost:1 subset:1 uniform:1 comprised:2 too:1 motivating:1 reported:1 barthelm:2 synthetic:7 calibrated:2 combined:1 thanks:2 density:4 st:2 sensitivity:1 international:2 winther:1 preferring:1 adaptively:1 probabilistic:7 systematic:1 yl:1 jos:2 michael:1 na:6 thesis:1 again:1 jmh:1 vastly:1 containing:4 choose:3 possibly:2 slowly:1 hn:20 collapsing:1 worse:2 american:1 sidestep:1 return:4 li:1 distribute:1 includes:1 lakshminarayanan:1 satisfy:1 explicitly:1 vi:7 piece:1 tion:1 view:1 performed:5 shahbaba:1 observing:1 bution:1 competitive:1 recover:1 maintains:3 option:1 parallel:10 complicated:1 simon:3 rmse:2 monro:4 contribution:5 ass:1 formed:3 accuracy:7 degraded:1 variance:3 qk:1 bayesian:17 handwritten:1 iterated:1 critically:2 produced:1 comparably:1 carlo:2 monitoring:1 researcher:1 finer:3 kuss:1 datapoint:5 sharing:1 parallelised:2 synaptic:1 energy:2 initialised:2 frequency:1 minka:5 james:1 elegance:1 associated:1 couple:1 stop:1 sampled:4 dataset:16 proved:1 graepel:1 amplitude:1 back:1 appears:4 adf:36 worry:1 higher:1 attained:1 tom:1 shrink:1 though:1 just:2 implicit:1 hand:1 replacing:4 christopher:1 propagation:18 del:1 minibatch:2 mode:2 logistic:2 quality:2 indicated:1 believe:1 grows:1 thore:1 usa:1 effect:9 requiring:1 true:4 verify:1 analytically:1 assigned:2 arnaud:1 iteratively:1 leibler:1 nut:3 indistinguishable:1 during:1 aki:1 maintained:3 whye:1 svmp:2 demonstrate:1 necessitating:2 performs:5 interpreting:1 variational:24 recently:1 pbp:4 kin8nm:1 superior:1 garnered:1 volume:2 extend:3 association:1 approximates:4 relating:1 pep:6 pino:1 cambridge:8 paisley:1 outlined:1 pm:3 fk:2 inclusion:3 similarly:3 stochasticity:1 pathology:1 had:2 dot:1 shawe:1 calibration:1 entail:1 longer:1 ahn:1 posterior:34 own:1 showed:3 recent:1 perspective:1 store:2 certain:1 binary:2 life:1 herbert:1 additional:3 care:2 employed:2 converge:1 signal:1 relates:1 full:11 mix:1 multiple:3 reduces:7 smooth:1 technical:2 academic:1 ahmed:1 offer:1 minimising:5 l000776:1 demodulation:1 qi:1 variant:2 involving:1 regression:8 breast:1 scalable:2 expectation:19 mog:1 metric:1 arxiv:5 iteration:3 fellowship:1 fine:1 winn:1 crucial:1 biased:1 rest:1 unlike:2 archive:1 pass:1 comment:1 dehaene:1 member:1 spirit:3 jordan:1 practitioner:1 bought:1 call:1 near:1 leverage:1 ideal:1 granularity:5 intermediate:5 affect:2 fm:4 inner:1 idea:3 reduce:1 whether:2 motivated:1 six:1 utility:1 bartlett:1 peter:1 passing:6 speaking:1 generally:4 iterating:1 detailed:1 involve:1 clear:1 amount:1 maybeck:1 reduced:2 http:1 outperform:2 affords:1 canonical:1 delta:1 disjoint:1 summarised:3 hennig:1 group:1 key:1 four:2 ichi:1 drawn:3 prevent:1 bardenet:1 lacoste:1 year:3 realworld:1 uncertainty:7 place:1 almost:2 family:3 arrive:1 scaling:4 pushed:1 followed:1 fold:2 refine:4 carve:1 emi:1 performing:4 developing:2 according:1 poor:1 slightly:3 making:1 modification:1 taken:2 computationally:1 hern:2 turn:2 count:1 tractable:2 end:1 available:1 gaussians:7 operation:1 apply:1 appropriate:3 batch:4 alternative:2 mogs:3 weinberger:1 original:1 thomas:4 top:2 running:1 include:2 clustering:2 subsampling:1 graphical:1 opportunity:1 carves:1 restrictive:1 ghahramani:1 approximating:30 sweep:1 objective:2 question:2 pathologically:1 gradient:2 thank:1 outer:1 consumption:3 sensible:1 chris:1 barber:1 reason:1 length:1 index:1 relationship:8 mini:3 
minjie:1 unfortunately:1 yingzhen:1 robert:1 potentially:1 relate:1 pima:1 design:1 implementation:3 unknown:1 perform:3 teh:1 observation:1 datasets:14 markov:1 supporting:1 ever:1 inferred:1 rating:1 david:1 required:4 kl:14 connection:1 nip:1 address:1 including:3 memory:20 lend:1 max:1 power:6 suitable:1 malte:1 treated:1 natural:3 turner:3 zhu:1 scheme:1 julien:1 acknowledges:1 categorical:1 jun:1 n6:1 sahani:2 text:1 prior:4 understanding:2 geometric:1 removal:2 coloured:1 acknowledgement:1 probit:5 par:5 fully:1 limitation:2 filtering:4 abdel:1 foundation:2 illuminating:1 consistent:1 editor:2 summary:1 surprisingly:1 free:3 copy:2 rasmussen:1 side:1 formal:1 guide:1 saul:1 taking:1 sparse:1 distributed:5 regard:1 benefit:1 opper:1 xn:25 world:5 qn:3 forward:1 collection:1 made:2 refinement:1 sungjin:1 adaptive:1 far:2 welling:1 approximate:21 alpha:3 ignore:1 rafael:1 kullback:1 skill:1 cavity:10 confirm:1 ml:1 global:17 overfitting:2 doucet:1 uai:1 assumed:5 unnecessary:1 alternatively:1 latent:5 iterative:1 sonar:1 table:4 promising:1 nature:1 reasonably:1 learn:1 nicolas:2 aep:7 complex:1 investigated:1 protocol:1 did:1 pk:1 main:3 complementary:1 xu:1 fig:5 slow:1 sub:1 pereira:1 exponential:1 candidate:1 tied:1 third:3 grained:6 removing:1 bishop:1 pz:2 dk:3 parallelise:2 intractable:6 exists:1 burden:1 mnist:2 sequential:1 mirror:1 phd:2 push:1 demand:3 nk:4 suited:3 locality:1 simply:2 likely:1 prevents:3 bo:1 utilise:1 satisfies:1 fluctuated:1 ma:1 minibatches:2 goal:2 identity:2 towards:3 price:1 replace:1 included:5 generalisation:2 specifically:3 corrected:1 except:1 sampler:2 reducing:1 called:4 experimental:3 attempted:1 indicating:1 guillaume:1 support:1 avoiding:1 evaluate:2 mcmc:2 tested:1 scratch:1 |
5,259 | 5,761 | Deep learning with Elastic Averaging SGD
Anna Choromanska
Courant Institute, NYU
achoroma@cims.nyu.edu
Sixin Zhang
Courant Institute, NYU
zsx@cims.nyu.edu
Yann LeCun
Center for Data Science, NYU & Facebook AI Research
yann@cims.nyu.edu
Abstract
We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. A new algorithm
is proposed in this setting where the communication and coordination of work
among concurrent processes (local workers), is based on an elastic force which
links the parameters they compute with a center variable stored by the parameter
server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the
center variable by reducing the amount of communication between local workers
and the master. We empirically demonstrate that in the deep learning setting, due
to the existence of many local optima, allowing more exploration can lead to the
improved performance. We propose synchronous and asynchronous variants of
the new algorithm. We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized
method ADMM. We show that the stability of EASGD is guaranteed when a simple
stability condition is satisfied, which is not the case for ADMM. We additionally
propose the momentum-based version of our algorithm that can be applied in both
synchronous and asynchronous settings. Asynchronous variant of the algorithm
is applied to train convolutional neural networks for image classification on the
CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm
accelerates the training of deep architectures compared to DOWNPOUR and other
common baseline approaches and furthermore is very communication efficient.
1 Introduction
One of the most challenging problems in large-scale machine learning is how to parallelize the
training of large models that use a form of stochastic gradient descent (SGD) [1]. There have been
attempts to parallelize SGD-based training for large-scale deep learning models on large number
of CPUs, including Google's DistBelief system [2]. But practical image recognition systems
consist of large-scale convolutional neural networks trained on few GPU cards sitting in a single
computer [3, 4]. The main challenge is to devise parallel SGD algorithms to train large-scale deep
learning models that yield a significant speedup when run on multiple GPU cards.
In this paper we introduce the Elastic Averaging SGD method (EASGD) and its variants. EASGD
is motivated by quadratic penalty method [5], but is re-interpreted as a parallelized extension of the
averaging SGD algorithm [6]. The basic idea is to let each worker maintain its own local parameter,
and the communication and coordination of work among the local workers is based on an elastic
force which links the parameters they compute with a center variable stored by the master. The center
variable is updated as a moving average where the average is taken in time and also in space over
the parameters computed by local workers. The main contribution of this paper is a new algorithm
that provides fast convergent minimization while outperforming DOWNPOUR method [2] and other
baseline approaches in practice. Simultaneously it reduces the communication overhead between the
master and the local workers while at the same time it maintains high-quality performance measured
by the test error. The new algorithm applies to deep learning settings such as parallelized training of
convolutional neural networks.
The article is organized as follows. Section 2 explains the problem setting, Section 3 presents
the synchronous EASGD algorithm and its asynchronous and momentum-based variants, Section 4
provides stability analysis of EASGD and ADMM in the round-robin scheme, Section 5 shows experimental results and Section 6 concludes. The Supplement contains additional material including
additional theoretical analysis.
2 Problem setting
Consider minimizing a function F(x) in a parallel computing environment [7] with p ∈ N workers
and a master. In this paper we focus on the stochastic optimization problem of the following form

    \min_x F(x) := \mathbb{E}[f(x, \xi)],    (1)

where x is the model parameter to be estimated and ξ is a random variable that follows the
probability distribution P over Ξ such that F(x) = ∫_Ξ f(x, ξ) P(dξ). The optimization problem in
Equation 1 can be reformulated as follows
    \min_{x^1, \ldots, x^p, \tilde{x}} \sum_{i=1}^p \mathbb{E}[f(x^i, \xi^i)] + \frac{\rho}{2}\|x^i - \tilde{x}\|^2,    (2)
where each ξ^i follows the same distribution P (thus we assume each worker can sample the entire
dataset). In the paper we refer to the x^i's as local variables and to x̃ as the center variable. The
problem of the equivalence of these two objectives is studied in the literature and is known as the
augmentability or the global variable consensus problem [8, 9]. The quadratic penalty term ρ in
Equation 2 is expected to ensure that local workers will not fall into different attractors that are far
away from the center variable. This paper focuses on the problem of reducing the parameter communication overhead between the master and local workers [10, 2, 11, 12, 13]. The problem of data
communication when the data is distributed among the workers [7, 14] is a more general problem
and is not addressed in this work. We however emphasize that our problem setting is still highly
non-trivial under the communication constraints due to the existence of many local optima [15].
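The penalized objective in Equation 2 is straightforward to evaluate; the following is a minimal sketch, where `loss(x)` stands in for an estimate of E[f(x, ξ)] (the name `loss` and the toy usage are our own illustration, not from the paper).

```python
# Sketch of the consensus objective in Equation 2 for local variables xs
# and a center variable; `loss` approximates E[f(x, xi)].
import numpy as np

def consensus_objective(xs, center, loss, rho):
    return sum(loss(x) + 0.5 * rho * np.sum((x - center) ** 2) for x in xs)

# Toy usage with a quadratic loss whose minimum is at the all-ones vector:
xs = [np.array([0.9, 1.1]), np.array([1.2, 0.8])]
quad = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
print(consensus_objective(xs, np.ones(2), quad, rho=0.1))
```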
3 EASGD update rule
The EASGD updates captured in resp. Equations 3 and 4 are obtained by taking the gradient descent
step on the objective in Equation 2 with respect to resp. variables x^i and x̃:

    x^i_{t+1} = x^i_t - \eta \left( g^i_t(x^i_t) + \rho (x^i_t - \tilde{x}_t) \right)    (3)
    \tilde{x}_{t+1} = \tilde{x}_t + \eta \sum_{i=1}^p \rho (x^i_t - \tilde{x}_t),    (4)
where g^i_t(x^i_t) denotes the stochastic gradient of F with respect to x^i evaluated at iteration t,
x^i_t and x̃_t denote respectively the values of the variables x^i and x̃ at iteration t, and η is the
learning rate. The update rule for the center variable x̃ takes the form of a moving average where
the average is taken over both space and time. Denote α = ηρ and β = pα; then Equations 3 and 4
become
    x^i_{t+1} = x^i_t - \eta g^i_t(x^i_t) - \alpha (x^i_t - \tilde{x}_t)    (5)
    \tilde{x}_{t+1} = (1 - \beta)\,\tilde{x}_t + \beta \left( \frac{1}{p} \sum_{i=1}^p x^i_t \right).    (6)
Note that choosing β = pα leads to an elastic symmetry in the update rule, i.e. there exists a
symmetric force equal to α(x^i_t − x̃_t) between the update of each x^i and x̃. It has a crucial
influence on the algorithm's stability, as will be explained in Section 4. Also, in order to minimize
the staleness [16] of the difference x^i_t − x̃_t between the center and the local variable, the update
for the master in Equation 4 involves x^i_t instead of x^i_{t+1}.
Note also that α = ηρ, where the magnitude of ρ represents the amount of exploration we allow in
the model. In particular, small ρ allows for more exploration as it allows the x^i's to fluctuate further
from the center x̃. The distinctive idea of EASGD is to allow the local workers to perform more
exploration (small ρ) and the master to perform exploitation. This approach differs from other
settings explored in the literature [2, 17, 18, 19, 20, 21, 22, 23], which focus on how fast the center
variable converges. In this paper we show the merits of our approach in the deep learning setting.
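A minimal sketch of the synchronous updates in Equations 5-6 is given below; the quadratic toy gradient and all hyper-parameter values are our own stand-ins, not the paper's.

```python
# Synchronous EASGD (Equations 5-6): p local variables pulled elastically
# toward a center variable that moves toward their spatial average.
import numpy as np

def easgd_sync(grad, x0, p=4, eta=0.01, beta=0.9, steps=1000, seed=0):
    rng = np.random.RandomState(seed)
    alpha = beta / p                       # beta = p * alpha (elastic symmetry)
    xs = [x0.copy() for _ in range(p)]     # local variables x^i
    center = x0.copy()                     # center variable x~
    for _ in range(steps):
        new_center = (1 - beta) * center + beta * np.mean(xs, axis=0)  # Eq. 6
        for i in range(p):
            g = grad(xs[i], rng)                                       # g_t^i(x_t^i)
            xs[i] = xs[i] - eta * g - alpha * (xs[i] - center)         # Eq. 5
        center = new_center
    return center

# Toy usage: minimize E[(x - 1)^2 / 2] from noisy gradients.
grad = lambda x, rng: (x - 1.0) + 0.1 * rng.randn(*x.shape)
print(easgd_sync(grad, np.zeros(3)))       # converges near [1, 1, 1]
```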
3.1 Asynchronous EASGD
We discussed the synchronous update of the EASGD algorithm in the previous section. In this section
we propose its asynchronous variant. The local workers are still responsible for updating the local
variables x^i, whereas the master is updating the center variable x̃. Each worker maintains its own
clock t^i, which starts from 0 and is incremented by 1 after each stochastic gradient update of x^i
as shown in Algorithm 1. The master performs an update whenever the local workers finished τ
steps of their gradient updates, where we refer to τ as the communication period. As can be seen
in Algorithm 1, whenever τ divides the local clock of the i-th worker, the i-th worker communicates
with the master and requests the current value of the center variable x̃. The worker then waits until
the master sends back the requested parameter value, and computes the elastic difference α(x − x̃)
(this entire procedure is captured in step a) in Algorithm 1). The elastic difference is then sent back
to the master (step b) in Algorithm 1) who then updates x̃.
The communication period τ controls the frequency of the communication between every local
worker and the master, and thus the trade-off between exploration and exploitation.
Algorithm 1: Asynchronous EASGD: Processing by worker i and the master
Input: learning rate η, moving rate α, communication period τ ∈ N
Initialize: x̃ is initialized randomly, x^i = x̃, t^i = 0
Repeat
    x ← x^i
    if (τ divides t^i) then
        a) x^i ← x^i − α(x − x̃)
        b) x̃ ← x̃ + α(x − x̃)
    end
    x^i ← x^i − η g^i_{t^i}(x)
    t^i ← t^i + 1
Until forever

Algorithm 2: Asynchronous EAMSGD: Processing by worker i and the master
Input: learning rate η, moving rate α, communication period τ ∈ N, momentum term δ
Initialize: x̃ is initialized randomly, x^i = x̃, v^i = 0, t^i = 0
Repeat
    x ← x^i
    if (τ divides t^i) then
        a) x^i ← x^i − α(x − x̃)
        b) x̃ ← x̃ + α(x − x̃)
    end
    v^i ← δ v^i − η g^i_{t^i}(x + δ v^i)
    x^i ← x^i + v^i
    t^i ← t^i + 1
Until forever
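The following is a single-process simulation of Algorithm 1; round-robin worker activation is our simplification (in a real deployment steps a) and b) are message exchanges with the parameter server holding x̃, and workers run concurrently).

```python
# Simulated asynchronous EASGD (Algorithm 1) with per-worker clocks.
import numpy as np

def easgd_async(grad, x0, p=4, eta=0.01, alpha=0.05, tau=10, steps=5000, seed=0):
    rng = np.random.RandomState(seed)
    xs = [x0.copy() for _ in range(p)]
    clocks = [0] * p
    center = x0.copy()
    for step in range(steps):
        i = step % p                        # which worker acts (simulation only)
        x = xs[i].copy()                    # snapshot of the local variable
        if clocks[i] % tau == 0:            # communication step
            xs[i] = xs[i] - alpha * (x - center)     # a) pull x^i toward x~
            center = center + alpha * (x - center)   # b) pull x~ toward x^i
        xs[i] = xs[i] - eta * grad(x, rng)  # local SGD step on the snapshot
        clocks[i] += 1
    return center

grad = lambda x, rng: (x - 1.0) + 0.1 * rng.randn(*x.shape)
print(easgd_async(grad, np.zeros(3)))
```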
3.2 Momentum EASGD
The momentum EASGD (EAMSGD) is a variant of our Algorithm 1 and is captured in Algorithm 2.
It is based on the Nesterov's momentum scheme [24, 25, 26], where the update of the local worker
of the form captured in Equation 3 is replaced by the following update
    v^i_{t+1} = \delta v^i_t - \eta g^i_t(x^i_t + \delta v^i_t)
    x^i_{t+1} = x^i_t + v^i_{t+1} - \alpha (x^i_t - \tilde{x}_t),    (7)
where δ is the momentum term. Note that when δ = 0 we recover the original EASGD algorithm.
As we are interested in reducing the communication overhead in the parallel computing environment where the parameter vector is very large, we will be exploring in the experimental section the
asynchronous EASGD algorithm and its momentum-based variant in the relatively large τ regime
(less frequent communication).
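As a sketch, the EAMSGD local step in Equation 7 can replace the plain gradient step in the simulation above; `grad` maps a point to a stochastic gradient and `v` is the per-worker velocity v^i, initialized to zero (this helper and its signature are our own).

```python
# One EAMSGD local update (Equation 7).
def eamsgd_step(x, v, center, grad, eta, alpha, delta):
    v_next = delta * v - eta * grad(x + delta * v)   # Nesterov look-ahead step
    x_next = x + v_next - alpha * (x - center)       # elastic pull toward x~
    return x_next, v_next
```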
4 Stability analysis of EASGD and ADMM in the round-robin scheme
In this section we study the stability of the asynchronous EASGD and ADMM methods in the
round-robin scheme [20]. We first state the updates of both algorithms in this setting, and then we study
their stability. We will show that in the one-dimensional quadratic case, the ADMM algorithm can
exhibit chaotic behavior, leading to exponential divergence. The analytic condition for the ADMM
algorithm to be stable is still unknown, while for the EASGD algorithm it is very simple¹.
The analysis of the synchronous EASGD algorithm, including its convergence rate and its averaging
property, in the quadratic and strongly convex case, is deferred to the Supplement.
In our setting, the ADMM method [9, 27, 28] involves solving the following minimax problem²,

    \max_{\lambda^1, \ldots, \lambda^p} \; \min_{x^1, \ldots, x^p, \tilde{x}} \; \sum_{i=1}^p F(x^i) - \lambda^i (x^i - \tilde{x}) + \frac{\rho}{2}\|x^i - \tilde{x}\|^2,    (8)
where the λ^i's are the Lagrangian multipliers. The resulting updates of the ADMM algorithm in the
round-robin scheme are given next. Let t ≥ 0 be a global clock. At each t, we linearize the function
F(x^i) with F(x^i_t) + ⟨∇F(x^i_t), x^i − x^i_t⟩ + (1/(2η))‖x^i − x^i_t‖² as in [28]. The updates become

    \lambda^i_{t+1} = \begin{cases} \lambda^i_t - (x^i_t - \tilde{x}_t) & \text{if } \operatorname{mod}(t, p) = i - 1; \\ \lambda^i_t & \text{if } \operatorname{mod}(t, p) \neq i - 1. \end{cases}    (9)

    x^i_{t+1} = \begin{cases} \dfrac{x^i_t - \eta \nabla F(x^i_t) + \eta\rho(\lambda^i_{t+1} + \tilde{x}_t)}{1 + \eta\rho} & \text{if } \operatorname{mod}(t, p) = i - 1; \\ x^i_t & \text{if } \operatorname{mod}(t, p) \neq i - 1. \end{cases}    (10)

    \tilde{x}_{t+1} = \frac{1}{p} \sum_{i=1}^p (x^i_{t+1} - \lambda^i_{t+1}).    (11)
Each local variable x^i is periodically updated (with period p). First, the Lagrangian multiplier λ^i is
updated with the dual ascent update as in Equation 9. It is followed by the gradient descent update
of the local variable as given in Equation 10. Then the center variable x̃ is updated with the most
recent values of all the local variables and Lagrangian multipliers as in Equation 11. Note that
since the step size for the dual ascent update is chosen to be ρ by convention [9, 27, 28], we have
re-parametrized the Lagrangian multiplier to be λ^i_t ← λ^i_t / ρ in the above updates.
The EASGD algorithm in the round-robin scheme is defined similarly and is given below:

    x^i_{t+1} = \begin{cases} x^i_t - \eta \nabla F(x^i_t) - \alpha (x^i_t - \tilde{x}_t) & \text{if } \operatorname{mod}(t, p) = i - 1; \\ x^i_t & \text{if } \operatorname{mod}(t, p) \neq i - 1. \end{cases}    (12)

    \tilde{x}_{t+1} = \tilde{x}_t + \sum_{i:\, \operatorname{mod}(t, p) = i - 1} \alpha (x^i_t - \tilde{x}_t).    (13)
At time t, only the i-th local worker (whose index i − 1 equals t modulo p) is activated, and performs
the update in Equation 12, which is followed by the master update given in Equation 13.
We will now focus on the one-dimensional quadratic case without noise, i.e. F(x) = x²/2, x ∈ R.
For the ADMM algorithm, let the state of the (dynamical) system at time t be
s_t = (λ¹_t, x¹_t, …, λ^p_t, x^p_t, x̃_t) ∈ R^{2p+1}. The local worker i's updates in Equations 9, 10, and 11 are
composed of three linear maps which can be written as s_{t+1} = (F^i_3 ∘ F^i_2 ∘ F^i_1)(s_t). For simplicity,
we will only write them out below for the case when i = 1 and p = 2:
    F_1^1 = \begin{pmatrix} 1 & -1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad
    F_2^1 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ \frac{\eta\rho}{1+\eta\rho} & \frac{1-\eta}{1+\eta\rho} & 0 & 0 & \frac{\eta\rho}{1+\eta\rho} \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad
    F_3^1 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ -\frac{1}{p} & \frac{1}{p} & -\frac{1}{p} & \frac{1}{p} & 0 \end{pmatrix}.
For each of the p linear maps, it is possible to find a simple condition such that each map, where the
i-th map has the form F^i_3 ∘ F^i_2 ∘ F^i_1, is stable (the absolute values of the eigenvalues of the map are
smaller or equal to one). However, when these non-symmetric maps are composed one after another
as F = F^p_3 ∘ F^p_2 ∘ F^p_1 ∘ … ∘ F^1_3 ∘ F^1_2 ∘ F^1_1, the resulting map F can become unstable! (More
precisely, some eigenvalues of the map can sit outside the unit circle in the complex plane.)

¹ This condition resembles the stability condition for the synchronous EASGD algorithm (Condition 17 for p = 1) in the analysis in the Supplement.
² The convergence analysis in [27] is based on the assumption that "At any master iteration, updates from the workers have the same probability of arriving at the master.", which is not satisfied in the round-robin scheme.
We now present the numerical conditions for which the ADMM algorithm becomes unstable in the
round-robin scheme for p = 3 and p = 8, by computing the largest absolute eigenvalue of the map
F. Figure 1 summarizes the obtained result.
[Figure 1 (plot residue removed): two heat-map panels, p = 3 and p = 8, showing the largest absolute eigenvalue over the (ρ, η) plane.]
Figure 1: The largest absolute eigenvalue of the linear map F = F^p_3 ∘ F^p_2 ∘ F^p_1 ∘ … ∘ F^1_3 ∘ F^1_2 ∘ F^1_1
as a function of η ∈ (0, 10⁻²) and ρ ∈ (0, 10) when p = 3 and p = 8. To simulate the chaotic
behavior of the ADMM algorithm, one may pick η = 0.001 and ρ = 2.5 and initialize the state s₀
either randomly or with λ^i_0 = 0, x^i_0 = x̃_0 = 1000, ∀i. Figure should be read in color.
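The instability check behind Figure 1 can be reproduced numerically; the sketch below builds the per-worker linear maps for F(x) = x²/2 directly from Equations 9-11 (our own transcription) and computes the spectral radius of their round-robin composition.

```python
# Largest absolute eigenvalue of the composed ADMM round-robin map F.
import numpy as np

def admm_spectral_radius(p, eta, rho):
    d = 2 * p + 1                       # state (l^1, x^1, ..., l^p, x^p, x~)
    F = np.eye(d)
    for i in range(p):
        li, xi, c = 2 * i, 2 * i + 1, 2 * p
        F1 = np.eye(d); F1[li, xi] = -1.0; F1[li, c] = 1.0   # dual ascent (Eq. 9)
        F2 = np.eye(d)                                       # primal step (Eq. 10)
        F2[xi, xi] = (1 - eta) / (1 + eta * rho)
        F2[xi, li] = eta * rho / (1 + eta * rho)
        F2[xi, c] = eta * rho / (1 + eta * rho)
        F3 = np.eye(d); F3[c, :] = 0.0                       # master step (Eq. 11)
        for j in range(p):
            F3[c, 2 * j + 1] = 1.0 / p                       # + x^j / p
            F3[c, 2 * j] = -1.0 / p                          # - l^j / p
        F = F3 @ F2 @ F1 @ F              # worker 1 is applied first
    return max(abs(np.linalg.eigvals(F)))

# Per Figure 1, eta = 0.001, rho = 2.5 should give a radius above 1 (divergence).
print(admm_spectral_radius(p=3, eta=0.001, rho=2.5))
```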
On the other hand, the EASGD algorithm involves composing only symmetric linear maps due to
the elasticity. Let the state of the (dynamical) system at time t be s_t = (x¹_t, …, x^p_t, x̃_t) ∈ R^{p+1}.
The activated local worker i's update in Equation 12 and the master update in Equation 13 can be
written as s_{t+1} = F^i(s_t). In the case of p = 2, the maps F¹ and F² are defined as follows:

    F^1 = \begin{pmatrix} 1-\eta-\alpha & 0 & \alpha \\ 0 & 1 & 0 \\ \alpha & 0 & 1-\alpha \end{pmatrix}, \quad
    F^2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1-\eta-\alpha & \alpha \\ 0 & \alpha & 1-\alpha \end{pmatrix}.

For the composite map F^p ∘ … ∘ F^1 to be stable, the condition that needs to be satisfied is actually
the same for each i, and is furthermore independent of p (since each linear map F^i is symmetric).
It essentially involves the stability of the 2 × 2 matrix \begin{pmatrix} 1-\eta-\alpha & \alpha \\ \alpha & 1-\alpha \end{pmatrix}, whose two (real)
eigenvalues λ satisfy (1 − η − α − λ)(1 − α − λ) = α². The resulting stability condition (|λ| ≤ 1)
is simple and given as 0 ≤ η ≤ 2, 0 ≤ α ≤ (4 − 2η)/(4 − η).
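The closed-form condition can be checked against a direct eigenvalue computation; grid ranges and tolerance below are our own choices.

```python
# Numerical check of the EASGD stability condition: eigenvalues of
# [[1-eta-alpha, alpha], [alpha, 1-alpha]] lie in [-1, 1] exactly when
# 0 <= eta <= 2 and 0 <= alpha <= (4 - 2*eta)/(4 - eta).
import numpy as np

def stable(eta, alpha, tol=1e-9):
    M = np.array([[1 - eta - alpha, alpha], [alpha, 1 - alpha]])
    return bool(np.all(np.abs(np.linalg.eigvals(M)) <= 1 + tol))

for eta in np.linspace(0.05, 2.0, 40):
    for alpha in np.linspace(0.01, 1.0, 100):
        predicted = alpha <= (4 - 2 * eta) / (4 - eta) + 1e-9
        assert stable(eta, alpha) == predicted, (eta, alpha)
print("closed-form condition matches the eigenvalue check")
```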
5 Experiments
In this section we compare the performance of EASGD and EAMSGD with the parallel method
DOWNPOUR and the sequential method SGD, as well as their averaging and momentum variants.
All the parallel comparator methods are listed below³:
- DOWNPOUR [2]; the pseudo-code of the implementation of DOWNPOUR used in this
  paper is enclosed in the Supplement.
- Momentum DOWNPOUR (MDOWNPOUR), where the Nesterov's momentum scheme is
  applied to the master's update (note it is unclear how to apply it to the local workers or for
  the case when τ > 1). The pseudo-code is in the Supplement.
- A method that we call ADOWNPOUR, where we compute the average over time of the
  center variable x̃ as follows: z_{t+1} = (1 − α_{t+1}) z_t + α_{t+1} x̃_t, where α_{t+1} = 1/(t+1) is a
  moving rate and z_0 = x̃_0; t denotes the master clock, which is initialized to 0 and incremented
  every time the center variable x̃ is updated (a sketch of this averaging is given after this list).
- A method that we call MVADOWNPOUR, where we compute the moving average of the
  center variable x̃ as follows: z_{t+1} = (1 − α) z_t + α x̃_t, where the moving rate α was chosen
  to be constant and z_0 = x̃_0; t denotes the master clock and is defined in the same way as
  for the ADOWNPOUR method.
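```python
# Sketch of the two center-variable averaging schemes above, assuming
# `centers` is the recorded sequence x~_0, x~_1, ... (function names ours).
import numpy as np

def adownpour_average(centers):
    z = np.asarray(centers[0], float)
    for t, c in enumerate(centers[1:], start=1):
        a = 1.0 / (t + 1)                    # alpha_{t+1} = 1/(t+1)
        z = (1 - a) * z + a * np.asarray(c, float)
    return z                                 # running average of the iterates

def mvadownpour_average(centers, a=0.001):
    z = np.asarray(centers[0], float)
    for c in centers[1:]:
        z = (1 - a) * z + a * np.asarray(c, float)   # constant moving rate
    return z
```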
³ We have compared asynchronous ADMM [27] with EASGD in our setting as well; the performance is nearly the same. However, ADMM's momentum variant is not as stable for large communication periods.
All the sequential comparator methods (p = 1) are listed below:
- SGD [1] with constant learning rate η.
- Momentum SGD (MSGD) [26] with constant momentum δ.
- ASGD [6] with moving rate α_{t+1} = 1/(t+1).
- MVASGD [6] with moving rate α set to a constant.
We perform experiments in a deep learning setting on two benchmark datasets: CIFAR-10 (we refer
to it as CIFAR)⁴ and ImageNet ILSVRC 2013 (we refer to it as ImageNet)⁵. We focus on the image
classification task with deep convolutional neural networks. We next explain the experimental setup.
The details of the data preprocessing and prefetching are deferred to the Supplement.
5.1 Experimental setup
For all our experiments we use a GPU-cluster interconnected with InfiniBand. Each node has 4 Titan
GPU processors where each local worker corresponds to one GPU processor. The center variable of
the master is stored and updated on the centralized parameter server [2]⁶.
To describe the architecture of the convolutional neural network, we first introduce some notation.
Let (c, y) denote the size of the input image to each layer, where c is the number of color
channels and y is both the horizontal and the vertical dimension of the input. Let C denote
the fully-connected convolutional operator, let P denote the max pooling operator, let D denote
the linear operator with dropout rate equal to 0.5, and let S denote the linear operator with
softmax output non-linearity. We use the cross-entropy loss and all inner layers use rectified
linear units. For the ImageNet experiment we use the similar approach to [4] with the following
11-layer convolutional neural network
(3,221)C(96,108)P(96,36)C(256,32)P(256,16)C(384,14)C(384,13)C(256,12)P(256,6)D(4096,1)D(4096,1)S(1000,1).
For the CIFAR experiment we use the similar approach to [29] with the following 7-layer
convolutional neural network (3,28)C(64,24)P(64,12)C(128,8)P(128,4)C(64,2)D(256,1)S(10,1).
In our experiments all the methods we run use the same initial parameter chosen randomly, except
that we set all the biases to zero for the CIFAR case and to 0.1 for the ImageNet case. This parameter is
used to initialize the master and all the local workers⁷. We add the l2-regularization (λ/2)‖x‖² to the loss
function F(x). For ImageNet we use λ = 10⁻⁵ and for CIFAR we use λ = 10⁻⁴. We also compute
the stochastic gradient using mini-batches of sample size 128.
5.2 Experimental results
For all experiments in this section we use EASGD with β = 0.9⁸; for all momentum-based methods
we set the momentum term δ = 0.99, and finally for MVADOWNPOUR we set the moving rate to
α = 0.001. We start with the experiment on the CIFAR dataset with p = 4 local workers running on
a single computing node. For all the methods, we examined the communication periods from the
following set τ = {1, 4, 16, 64}. For comparison we also report the performance of MSGD, which
outperformed SGD, ASGD and MVASGD as shown in Figure 6 in the Supplement. For each method
we examined a wide range of learning rates (the learning rates explored in all experiments are summarized in Tables 1, 2, 3 in the Supplement). The CIFAR experiment was run 3 times independently
from the same initialization and for each method we report its best performance measured by the
smallest achievable test error. From the results in Figure 2, we conclude that all DOWNPOUR-based
methods achieve their best performance (test error) for small τ (τ ∈ {1, 4}), and become
highly unstable for τ ∈ {16, 64}. Meanwhile EAMSGD significantly outperforms comparator methods
for all values of τ by having faster convergence. It also finds better-quality solutions measured by the
test error, and this advantage becomes more significant for τ ∈ {16, 64}. Note that the tendency to
achieve better test performance with larger τ is also characteristic of the EASGD algorithm.
⁴ Downloaded from http://www.cs.toronto.edu/~kriz/cifar.html.
⁵ Downloaded from http://image-net.org/challenges/LSVRC/2013.
⁶ Our implementation is available at https://github.com/sixin-zh/mpiT.
⁷ On the contrary, initializing the local workers and the master with different random seeds "traps" the algorithm in the symmetry-breaking phase.
⁸ Intuitively the "effective β" is β/τ = pα = pηρ (thus η = β/(τpρ)) in the asynchronous setting.
[Figure 2 (plot residue removed): a 4 × 3 grid of panels, one row per communication period τ ∈ {1, 4, 16, 64}; columns show training loss (nll), test loss (nll), and test error (%) versus wallclock time (min) for MSGD, DOWNPOUR, ADOWNPOUR, MVADOWNPOUR, MDOWNPOUR, EASGD, and EAMSGD.]
Figure 2: Training and test loss and the test error for the center variable versus a wallclock time for
different communication periods τ on the CIFAR dataset with the 7-layer convolutional neural network.
We next explore different numbers of local workers p from the set p = {4, 8, 16} for the CIFAR
experiment, and p = {4, 8} for the ImageNet experiment⁹. For the ImageNet experiment we report
the results of one run with the best setting we have found. EASGD and EAMSGD were run with
τ = 10 whereas DOWNPOUR and MDOWNPOUR were run with τ = 1. The results are in Figures 3
and 4. For the CIFAR experiment, it is noticeable that the lowest achievable test error by either
EASGD or EAMSGD decreases with larger p. This can potentially be explained by the fact that
larger p allows for more exploration of the parameter space. In the Supplement, we discuss further
the trade-off between exploration and exploitation as a function of the learning rate (Section 9.5) and
the communication period (Section 9.6). Finally, the results obtained for the ImageNet experiment
also show the advantage of EAMSGD over the competitor methods.
6 Conclusion
In this paper we describe a new algorithm called EASGD and its variants for training deep neural networks in the stochastic setting when the computations are parallelized over multiple GPUs.
Experiments demonstrate that this new algorithm quickly achieves improvement in test error compared to more common baseline approaches such as DOWNPOUR and its variants. We show that
our approach is very stable and plausible under communication constraints. We provide the stability
analysis of the asynchronous EASGD in the round-robin scheme, and show the theoretical advantage
of the method over ADMM. The different behavior of the EASGD algorithm from its momentumbased variant EAMSGD is intriguing and will be studied in future works.
⁹ For the ImageNet experiment, the training loss is measured on a subset of the training data of size 50,000.
[Figure 3 (plot residue removed): a 3 × 3 grid of panels, one row per number of local workers p ∈ {4, 8, 16}; columns show training loss (nll), test loss (nll), and test error (%) versus wallclock time (min) for MSGD, DOWNPOUR, MDOWNPOUR, EASGD, and EAMSGD.]
Figure 3: Training and test loss and the test error for the center variable versus a wallclock time
for different numbers of local workers p for parallel methods (MSGD uses p = 1) on CIFAR with
the 7-layer convolutional neural network. EAMSGD achieves significant accelerations compared to
other methods, e.g. the relative speed-up for p = 16 (the best comparator method is then MSGD) to
achieve the test error 21% equals 11.1.
[Figure 4 (plot residue removed): a 2 × 3 grid of panels, one row per number of local workers p ∈ {4, 8}; columns show training loss (nll), test loss (nll), and test error (%) versus wallclock time (hour) for MSGD, DOWNPOUR, EASGD, and EAMSGD.]
Figure 4: Training and test loss and the test error for the center variable versus a wallclock time for
different numbers of local workers p (MSGD uses p = 1) on ImageNet with the 11-layer convolutional neural network. Initial learning rate is decreased twice, by a factor of 5 and then 2, when we
observe that the online predictive loss [30] stagnates. EAMSGD achieves significant accelerations
compared to other methods, e.g. the relative speed-up for p = 8 (the best comparator method is then
DOWNPOUR) to achieve the test error 49% equals 1.8, and simultaneously it reduces the communication overhead (DOWNPOUR uses communication period τ = 1 and EAMSGD uses τ = 10).
Acknowledgments
The authors thank R. Power, J. Li for implementation guidance, J. Bruna, O. Henaff, C. Farabet, A.
Szlam, Y. Bakhtin for helpful discussion, P. L. Combettes, S. Bengio and the referees for valuable
feedback.
8
References
[1] Bottou, L. Online algorithms and stochastic approximations. In Online Learning and Neural Networks.
Cambridge University Press, 1998.
[2] Dean, J, Corrado, G, Monga, R, Chen, K, Devin, M, Le, Q, Mao, M, Ranzato, M, Senior, A, Tucker, P,
Yang, K, and Ng, A. Large scale distributed deep networks. In NIPS. 2012.
[3] Krizhevsky, A, Sutskever, I, and Hinton, G. E. Imagenet classification with deep convolutional neural
networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
[4] Sermanet, P, Eigen, D, Zhang, X, Mathieu, M, Fergus, R, and LeCun, Y. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. ArXiv, 2013.
[5] Nocedal, J and Wright, S. Numerical Optimization, Second Edition. Springer New York, 2006.
[6] Polyak, B. T and Juditsky, A. B. Acceleration of stochastic approximation by averaging. SIAM Journal
on Control and Optimization, 30(4):838–855, 1992.
[7] Bertsekas, D. P and Tsitsiklis, J. N. Parallel and Distributed Computation. Prentice Hall, 1989.
[8] Hestenes, M. R. Optimization theory: the finite dimensional case. Wiley, 1975.
[9] Boyd, S, Parikh, N, Chu, E, Peleato, B, and Eckstein, J. Distributed optimization and statistical learning
via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1–122, 2011.
[10] Shamir, O. Fundamental limits of online and distributed algorithms for statistical learning and estimation.
In NIPS. 2014.
[11] Yadan, O, Adams, K, Taigman, Y, and Ranzato, M. Multi-gpu training of convnets. In Arxiv. 2013.
[12] Paine, T, Jin, H, Yang, J, Lin, Z, and Huang, T. Gpu asynchronous stochastic gradient descent to speed
up neural network training. In Arxiv. 2013.
[13] Seide, F, Fu, H, Droppo, J, Li, G, and Yu, D. 1-bit stochastic gradient descent and application to dataparallel distributed training of speech dnns. In Interspeech 2014, September 2014.
[14] Bekkerman, R, Bilenko, M, and Langford, J. Scaling up machine learning: Parallel and distributed
approaches. Camridge Universityy Press, 2011.
[15] Choromanska, A, Henaff, M. B, Mathieu, M, Arous, G. B, and LeCun, Y. The loss surfaces of multilayer
networks. In AISTATS, 2015.
[16] Ho, Q, Cipar, J, Cui, H, Lee, S, Kim, J. K, Gibbons, P. B, Gibson, G. A, Ganger, G, and Xing, E. P. More
effective distributed ml via a stale synchronous parallel parameter server. In NIPS. 2013.
[17] Azadi, S and Sra, S. Towards an optimal stochastic alternating direction method of multipliers. In ICML,
2014.
[18] Borkar, V. Asynchronous stochastic approximations. SIAM Journal on Control and Optimization,
36(3):840–851, 1998.
[19] Nedić, A, Bertsekas, D, and Borkar, V. Distributed asynchronous incremental subgradient methods. In
Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, volume 8 of Studies
in Computational Mathematics, pages 381–407. 2001.
[20] Langford, J, Smola, A, and Zinkevich, M. Slow learners are fast. In NIPS, 2009.
[21] Agarwal, A and Duchi, J. Distributed delayed stochastic optimization. In NIPS. 2011.
[22] Recht, B, Re, C, Wright, S. J, and Niu, F. Hogwild: A Lock-Free Approach to Parallelizing Stochastic
Gradient Descent. In NIPS, 2011.
[23] Zinkevich, M, Weimer, M, Smola, A, and Li, L. Parallelized stochastic gradient descent. In NIPS, 2010.
[24] Nesterov, Y. Smooth minimization of non-smooth functions. Math. Program., 103(1):127–152, 2005.
[25] Lan, G. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365–397, 2012.
[26] Sutskever, I, Martens, J, Dahl, G, and Hinton, G. On the importance of initialization and momentum in
deep learning. In ICML, 2013.
[27] Zhang, R and Kwok, J. Asynchronous distributed admm for consensus optimization. In ICML, 2014.
[28] Ouyang, H, He, N, Tran, L, and Gray, A. Stochastic alternating direction method of multipliers. In
Proceedings of the 30th International Conference on Machine Learning, pages 80–88, 2013.
[29] Wan, L, Zeiler, M. D, Zhang, S, LeCun, Y, and Fergus, R. Regularization of neural networks using
dropconnect. In ICML, 2013.
[30] Cesa-Bianchi, N, Conconi, A, and Gentile, C. On the generalization ability of on-line learning algorithms.
IEEE Transactions on Information Theory, 50(9):2050–2057, 2004.
[31] Nesterov, Y. Introductory lectures on convex optimization, volume 87. Springer Science & Business
Media, 2004.
5,260 | 5,762 | Competitive Distribution Estimation:
Why is Good-Turing Good
Ananda Theertha Suresh
UC San Diego
asuresh@ucsd.edu
Alon Orlitsky
UC San Diego
alon@ucsd.edu
Abstract
Estimating distributions over large alphabets is a fundamental machine-learning
tenet. Yet no method is known to estimate all distributions well. For example,
add-constant estimators are nearly min-max optimal but often perform poorly in
practice, and practical estimators such as absolute discounting, Jelinek-Mercer,
and Good-Turing are not known to be near optimal for essentially any distribution.
We describe the first universally near-optimal probability estimators. For every
discrete distribution, they are provably nearly the best in the following two competitive ways. First they estimate every distribution nearly as well as the best
estimator designed with prior knowledge of the distribution up to a permutation.
Second, they estimate every distribution nearly as well as the best estimator designed with prior knowledge of the exact distribution, but as all natural estimators,
restricted to assign the same probability to all symbols appearing the same number
of times.
Specifically, for distributions over k symbols and n samples, we show that for
both comparisons, a simple variant of the Good-Turing estimator is always within KL
divergence of (3 + o_n(1))/n^{1/3} from the best estimator, and that a more involved
estimator is within \tilde{O}_n(\min(k/n, 1/\sqrt{n})). Conversely, we show that any estimator must have a KL divergence at least \tilde{\Omega}_n(\min(k/n, 1/n^{2/3})) over the best
estimator for the first comparison, and at least \tilde{\Omega}_n(\min(k/n, 1/\sqrt{n})) for the second.
1 Introduction
1.1 Background
Many learning applications, ranging from language-processing staples such as speech recognition
and machine translation to biological studies in virology and bioinformatics, call for estimating large
discrete distributions from their samples. Probability estimation over large alphabets has therefore
long been the subject of extensive research, both by practitioners deriving practical estimators [1, 2],
and by theorists searching for optimal estimators [3].
Yet even after all this work, provably-optimal estimators remain elusive. The add-constant estimators frequently analyzed by theoreticians are nearly min-max optimal, yet perform poorly for
many practical distributions, while common practical estimators, such as absolute discounting [4],
Jelinek-Mercer [5], and Good-Turing [6], are not well understood and lack provable performance
guarantees.
To understand the terminology and approach a solution we need a few definitions. The performance
of an estimator q for an underlying distribution p is typically evaluated in terms of the Kullback-Leibler (KL) divergence [7],

    D(p \| q) := \sum_x p_x \log \frac{p_x}{q_x},
reflecting the expected increase in the ambiguity about the outcome of p when it is approximated by
q. KL divergence is also the increase in the number of bits over the entropy that q uses to compress
the output of p, and is also the log-loss of estimating p by q. It is therefore of interest to construct
estimators that approximate a large class of distributions to within small KL divergence. We now
describe one of the problem's simplest formulations.
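The KL divergence and the expected KL loss defined below translate directly to code; the Monte Carlo estimate of the expectation is our own illustration, not a construction from the paper.

```python
# KL divergence D(p||q) and the expected KL loss r_n(q, p).
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def expected_kl_loss(p, estimator, n, trials=2000, seed=0):
    rng = np.random.RandomState(seed)
    k = len(p)
    losses = []
    for _ in range(trials):
        sample = rng.choice(k, size=n, p=p)          # X^n ~ p^n
        losses.append(kl(p, estimator(sample, k)))   # D(p || q(X^n))
    return float(np.mean(losses))
```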
1.2 Min-max loss
A distribution estimator over a support set X associates with any observed sample sequence
\bar{x} ∈ X^* a distribution q(\bar{x}) over X. Given n samples X^n = X_1, X_2, \ldots, X_n, generated independently
according to a distribution p over X, the expected KL loss of q is

    r_n(q, p) = \mathbb{E}_{X^n \sim p^n}[D(p \| q(X^n))].
Let P be a known collection of distributions over a discrete set X. The worst-case loss of an
estimator q over all distributions in P is

    r_n(q, P) := \max_{p \in P} r_n(q, p),    (1)

and the lowest worst-case loss for P, achieved by the best estimator, is the min-max loss

    r_n(P) := \min_q r_n(q, P) = \min_q \max_{p \in P} r_n(q, p).    (2)
Min-max performance can be viewed as regret relative to an oracle that knows the underlying distribution. Hence from here on we refer to it as regret.
The most natural and important collection of distributions, and the one we study here, is the set
of all discrete distributions over an alphabet of some size k, which without loss of generality we
assume to be [k] = {1, 2, \ldots, k}. Hence the set of all distributions is the simplex in k dimensions,

    \Delta_k := \{(p_1, \ldots, p_k) : p_i \ge 0 \text{ and } \textstyle\sum_i p_i = 1\}.

Following [8], researchers have studied r_n(\Delta_k) and
related quantities, for example see [9]. We outline some of the results derived.
1.3 Add-constant estimators
The add-β estimator assigns to a symbol that appeared t times a probability proportional to t + β. For
example, if three coin tosses yield one heads and two tails, the add-1/2 estimator assigns probability
1.5/(1.5 + 2.5) = 3/8 to heads, and 2.5/(1.5 + 2.5) = 5/8 to tails. [10] showed that for every
k, as n → ∞, an estimator related to add-3/4 is near optimal and achieves

    r_n(\Delta_k) = \frac{k-1}{2n} \cdot (1 + o(1)).    (3)
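A sketch of the add-β estimator, reproducing the coin example above (one heads, two tails, β = 1/2 gives 3/8 and 5/8):

```python
# Add-beta estimator: probability proportional to count + beta.
import numpy as np

def add_beta(sample, k, beta=0.5):
    counts = np.bincount(sample, minlength=k).astype(float)
    return (counts + beta) / (counts.sum() + k * beta)

print(add_beta(np.array([0, 1, 1]), k=2))   # [0.375, 0.625]
```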
The more challenging, and practical, regime is where the sample size n is not overwhelmingly larger
than the alphabet size k. For example in English text processing, we need to estimate the distribution
of words following a context. But the number of times a context appears in a corpus may not be
much larger than the vocabulary size. Several results are known for other regimes as well. When the
sample size n is linear in the alphabet size k, r_n(\Delta_k) can be shown to be a constant, and [3] showed
that as k/n → ∞, add-constant estimators achieve the optimal

    r_n(\Delta_k) = \log \frac{k}{n} \cdot (1 + o(1)).    (4)
While add-constant estimators are nearly min-max optimal, the distributions attaining the min-max
regret are near uniform. In practice, large-alphabet distributions are rarely uniform, and instead, tend
to follow a power-law. For these distributions, add-constant estimators under-perform the estimators
described in the next subsection.
1.4 Practical estimators
For real applications, practitioners tend to use more sophisticated estimators, with better empirical
performance. These include the Jelinek-Mercer estimator that cross-validates the sample to find the
best fit for the observed data, or the absolute-discounting estimators that, rather than adding a positive
constant to each count, do the opposite and subtract a positive constant.
Perhaps the most popular and enduring have been the Good-Turing estimator [6] and some of its
variations. Let n_x := n_x(x^n) be the number of times a symbol x appears in x^n and let
\varphi_t := \varphi_t(x^n) be the number of symbols appearing t times in x^n. The basic Good-Turing
estimator posits that if n_x = t,

    q_x(x^n) = \frac{\varphi_{t+1}}{\varphi_t} \cdot \frac{t+1}{n},

surprisingly relating the probability of an element not just to the number of times it was observed,
but also to the number of other elements appearing as many, and one more, times. It is easy to see
that this basic version of the estimator may not work well, as for example it assigns any element
appearing ≥ n/2 times 0 probability. Hence in practice the estimator is modified, for example,
using the empirical frequency for elements appearing many times.
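A sketch of the basic rule with one simple modification (an empirical-frequency fallback whenever φ_t or φ_{t+1} vanishes) is given below; this fallback is just one choice among the modifications mentioned above, and the paper's variants q' and q'' are more careful.

```python
# Basic Good-Turing with an empirical-frequency fallback and normalization.
import numpy as np

def good_turing(sample, k):
    n = len(sample)
    counts = np.bincount(sample, minlength=k)
    phi = np.bincount(counts, minlength=counts.max() + 2)  # phi[t] = #symbols seen t times
    q = np.empty(k, float)
    for x in range(k):
        t = counts[x]
        if phi[t] > 0 and phi[t + 1] > 0:
            q[x] = (phi[t + 1] / phi[t]) * (t + 1) / n     # Good-Turing rule
        else:
            q[x] = t / n                                   # empirical fallback
    return q / q.sum()                                     # normalize

sample = np.array([0, 1, 2, 0, 1, 3, 4])   # a, b, c, a, b, d, e
print(good_turing(sample, k=6))            # symbol 5 is unseen
```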
The Good-Turing Estimator was published in 1953, and quickly adapted for language-modeling
use, but for half a century no proofs of its performance were known. Following [11], several papers,
e.g., [12, 13], showed that Good-Turing variants estimate the combined probability of symbols
appearing any given number of times with accuracy that does not depend on the alphabet size, and
[14] showed that a different variation of Good-Turing similarly estimates the probabilities of each
previously-observed symbol, and all unseen symbols combined.
However, these results do not explain why Good-Turing estimators work well for the actual probability estimation problem, that of estimating the probability of each element, not of the combination
of elements appearing a certain number of times. To define and derive uniformly-optimal estimators,
we take a different, competitive, approach.
2 Competitive optimality
2.1 Overview
To evaluate an estimator, we compare its performance to the best possible performance of two estimators designed with some prior knowledge of the underlying distribution. The first estimator is
designed with knowledge of the underlying distribution up to a permutation of the probabilities,
namely knowledge of the probability multiset, e.g., {.5, .3, .2}, but not of the association between
probabilities and symbols. The second estimator is designed with exact knowledge of the distribution, but like all natural estimators, forced to assign the same probabilities to symbols appearing the
same number of times. For example, upon observing the sample a, b, c, a, b, d, e, the estimator must
assign the same probability to a and b, and the same probability to c, d, and e.
These estimators cannot be implemented in practice as in reality we do not have prior knowledge
of the estimated distribution. But the prior information is chosen to allow us to determine the best
performance of any estimator designed with that information, which in turn is better than the performance of any data-driven estimator designed without prior information. We then show that certain
variations of the Good-Turing estimators, designed without any prior knowledge, approach the performance of both prior-knowledge estimators for every underlying distribution.
2.2 Competing with near full information
We first define the performance of an oracle-aided estimator, designed with some knowledge of the
underlying distribution. Suppose that the estimator is designed with the aid of an oracle that knows
the value of f(p) for some given function f over the class \Delta_k of distributions.
The function f partitions \Delta_k into subsets, each corresponding to one possible value of f. We denote
the subsets by P, and the partition by \mathcal{P}, and as before, denote the individual distributions by p.
Then the oracle knows the unique partition part P such that p ∈ P ∈ \mathcal{P}. For example, if f(p) is
the multiset of p, then each subset P corresponds to the set of distributions with the same probability
multiset, and the oracle knows the multiset of probabilities.
For every partition part P ∈ \mathcal{P}, an estimator q incurs the worst-case regret in (1),

    r_n(q, P) = \max_{p \in P} r_n(q, p).

The oracle, knowing the unique partition part P, incurs the least worst-case regret (2),

    r_n(P) = \min_q r_n(q, P).

The competitive regret of q over the oracle, for all distributions in P, is r_n(q, P) − r_n(P); the
competitive regret over all partition parts and all distributions in each is

    r_n^{\mathcal{P}}(q, \Delta_k) := \max_{P \in \mathcal{P}} \left( r_n(q, P) - r_n(P) \right),

and the best possible competitive regret is r_n^{\mathcal{P}}(\Delta_k) := \min_q r_n^{\mathcal{P}}(q, \Delta_k).
Consolidating the intermediate definitions,

    r_n^{\mathcal{P}}(\Delta_k) = \min_q \max_{P \in \mathcal{P}} \left( \max_{p \in P} r_n(q, p) - r_n(P) \right).
Namely, an oracle-aided estimator who knows the partition part incurs a worst-case regret r_n(P)
over each part P, and the competitive regret r_n^{\mathcal{P}}(\Delta_k) of data-driven estimators is the least overall
increase in the part-wise regret due to not knowing P. In Appendix A.1, we give a few examples of
such partitions.
A partition \mathcal{P}' refines a partition \mathcal{P} if every part in \mathcal{P} is partitioned by some parts in \mathcal{P}'. For example,
{{a, b}, {c}, {d, e}} refines {{a, b, c}, {d, e}}. In Appendix A.2, we show that if \mathcal{P}' refines \mathcal{P} then
for every q,

    r_n^{\mathcal{P}'}(q, \Delta_k) \ge r_n^{\mathcal{P}}(q, \Delta_k).    (5)

Considering the collection \Delta_k of all distributions over [k], it follows that as we start with the single-part
partition {\Delta_k} and keep refining it till the oracle knows p, the competitive regret of estimators will
increase from 0 to r_n(q, \Delta_k). A natural question is therefore how much information can the oracle
have and still keep the competitive regret low? We show that the oracle can know the distribution
exactly up to permutation, and still the regret will be very small.
Two distributions p and p' are permutation equivalent if for some permutation σ of [k],

    p'_{\sigma(i)} = p_i \quad \text{for all } 1 \le i \le k.

For example, (0.5, 0.3, 0.2) and (0.3, 0.5, 0.2) are permutation equivalent.
Permutation equivalence is clearly an equivalence relation, and hence partitions the collection of
distributions over [k] into equivalence classes. Let \mathcal{P}_\sigma be the corresponding partition. We construct
estimators q that uniformly bound r_n^{\mathcal{P}_\sigma}(q, \Delta_k); thus the same estimator uniformly bounds r_n^{\mathcal{P}}(q, \Delta_k)
for any coarser partition of \Delta_k, such as partitions into classes of distributions with the same support
size, or entropy. Note that the partition \mathcal{P}_\sigma corresponds to knowing the underlying distribution up
to permutation, hence r_n^{\mathcal{P}_\sigma}(\Delta_k) is the additional KL loss compared to an estimator designed with
knowledge of the underlying distribution up to permutation.
This notion of competitiveness has appeared in several contexts. In data compression it is called
twice-redundancy [15, 16, 17, 18], while in statistics it is often called adaptive or local minmax [19, 20, 21, 22, 23], and recently in property testing it is referred to as competitive [24, 25, 26]
or instance-by-instance [27]. Subsequent to this work, [28] studied competitive estimation in \ell_1
distance; however their regret is poly(1/\log n), compared to our \tilde{O}(1/\sqrt{n}).
2.3 Competing with natural estimators
Our second comparison is with an estimator designed with exact knowledge of p, but forced to be
natural, namely, to assign the same probability to all symbols appearing the same number of times
in the sample. For example, for the observed sample a, b, c, a, b, d, e, the same probability must be
assigned to a and b, and the same probability to c, d, and e. Since data-driven estimators derive all
their knowledge of the distribution from the data, we expect them to be natural.
We compare the regret of data-driven estimators to that of natural oracle-aided estimators. Let $Q^{\mathrm{nat}}$ be the set of all natural estimators. For a distribution p, the lowest regret of a natural estimator designed with prior knowledge of p is
$$r_n^{\mathrm{nat}}(p) \stackrel{\text{def}}{=} \min_{q \in Q^{\mathrm{nat}}} r_n(q, p),$$
and the regret of an estimator q relative to the least-regret natural estimator is
$$r_n^{\mathrm{nat}}(q, p) = r_n(q, p) - r_n^{\mathrm{nat}}(p).$$
Thus the regret of an estimator q over all distributions in $\Delta_k$ is
$$r_n^{\mathrm{nat}}(q, \Delta_k) = \max_{p \in \Delta_k} r_n^{\mathrm{nat}}(q, p),$$
and the best possible competitive regret is $r_n^{\mathrm{nat}}(\Delta_k) = \min_q r_n^{\mathrm{nat}}(q, \Delta_k)$.
In the next section we state the results, showing in particular that $r_n^{\mathrm{nat}}(\Delta_k)$ is uniformly bounded. In Section 5, we outline the proofs, and in Section 4 we describe experiments comparing the performance of competitive estimators to that of min-max motivated estimators.
3 Results
Good-Turing estimators are often used in conjunction with empirical frequency, where Good-Turing estimates low probabilities and empirical frequency estimates large probabilities. We first show that even this simple Good-Turing version, defined in Appendix C and denoted q′, is uniformly optimal for all distributions. For simplicity we prove the result when the number of samples is $n' \sim \mathrm{poi}(n)$, a Poisson random variable with mean n. Let $r^{\mathcal{P}_\sigma}_{\mathrm{poi}(n)}(q', \Delta_k)$ and $r^{\mathrm{nat}}_{\mathrm{poi}(n)}(q', \Delta_k)$ be the regrets in this sampling process. A similar result holds with exactly n samples, but the proof is more involved as the multiplicities are dependent.
Theorem 1 (Appendix C). For any k and n,
$$r^{\mathcal{P}_\sigma}_{\mathrm{poi}(n)}(q', \Delta_k) \le r^{\mathrm{nat}}_{\mathrm{poi}(n)}(q', \Delta_k) \le \frac{3 + o_n(1)}{n^{1/3}}.$$
Furthermore, a lower bound in [13] shows that this bound is optimal up to logarithmic factors.
A more complex variant of Good-Turing, denoted q″, was proposed in [13]. We show that its regret diminishes uniformly in both the partial-information and natural-estimator formulations.
Theorem 2 (Section 5). For any k and n,
$$r_n^{\mathcal{P}_\sigma}(q'', \Delta_k) \le r_n^{\mathrm{nat}}(q'', \Delta_k) \le \tilde{O}_n\Bigl(\min\Bigl(\frac{1}{\sqrt{n}}, \frac{k}{n}\Bigr)\Bigr).$$
Here $\tilde{O}_n$, and below also $\tilde{\Omega}_n$, hide multiplicative logarithmic factors in n. Lemma 6 in Section 5 and a lower bound in [13] can be combined to prove a matching lower bound on the competitive regret of any estimator for the second formulation,
$$r_n^{\mathrm{nat}}(\Delta_k) \ge \tilde{\Omega}_n\Bigl(\min\Bigl(\frac{1}{\sqrt{n}}, \frac{k}{n}\Bigr)\Bigr).$$
Hence q″ has near-optimal competitive regret relative to natural estimators.
Fano's inequality usually yields lower bounds on KL loss, not regret. By carefully constructing distribution classes, we lower bound the competitive regret relative to the oracle-aided estimators.
Theorem 3 (Appendix D). For any k and n,
$$r_n^{\mathcal{P}_\sigma}(\Delta_k) \ge \tilde{\Omega}_n\Bigl(\min\Bigl(\frac{1}{n^{2/3}}, \frac{k}{n}\Bigr)\Bigr).$$
3.1 Illustration and implications
Figure 1 demonstrates some of the results. The horizontal axis reflects the set $\Delta_k$ of distributions, illustrated on one dimension. The vertical axis indicates the KL loss, or absolute regret; for clarity, it is shown for $k \gg n$. The blue line is the previously-known min-max upper bound on the regret, which by (4) is very high for this regime, $\log(k/n)$. The red line is the regret of the estimator designed with prior knowledge of the probability multiset. Observe that while for some probability multisets the regret approaches the $\log(k/n)$ min-max upper bound, for other probability multisets it is much lower, and for some, such as uniform over 1 or over k symbols, where the probability multiset determines the distribution, it is even 0. For many practically relevant distributions, such as power-law distributions and sparse distributions, the regret is small compared to $\log(k/n)$. The green line is an upper bound on the absolute regret of the data-driven estimator q″. By Theorem 2, it is always at most $1/\sqrt{n}$ larger than the red line. It follows that for many distributions, possibly for distributions with more structure, such as those occurring in nature, the regret of q″ is significantly smaller than the pessimistic min-max bound implies.
[Figure 1 plots KL loss (vertical axis) against distributions in $\Delta_k$ (horizontal axis), with the levels $r_n(\Delta_k) = \log(k/n)$ and $\tilde{O}(\min(1/\sqrt{n}, k/n))$ marked, and the uniform distribution indicated on the horizontal axis.]
Figure 1: Qualitative behavior of the KL loss as a function of distributions in different formulations
We observe a few consequences of these results.
• Theorems 1 and 2 establish two uniformly-optimal estimators q′ and q″. Their relative regrets diminish to zero at least as fast as $1/n^{1/3}$ and $1/\sqrt{n}$ respectively, independent of how large the alphabet size k is.
• Although the results are for relative regret, as shown in Figure 1, they lead to estimators with smaller absolute regret, namely, the expected KL divergence.
• The same regret upper bounds hold for all coarser partitions of $\Delta_k$, i.e., where instead of knowing the multiset, the oracle knows some property of the multiset such as entropy.
4 Experiments
Recall that for a sequence $x^n$, $n_x$ denotes the number of times a symbol x appears and $\varphi_t$ denotes the number of symbols appearing t times. For small values of n and k, the estimator proposed in [13] simplifies to a combination of Good-Turing and empirical estimators. By [13, Lemmas 10 and 11], for symbols appearing t times, if $\varphi_{t+1} \ge \tilde{\Omega}(t)$, then the Good-Turing estimate is close to the underlying total probability mass, otherwise the empirical estimate is closer. Hence, for a symbol appearing t times, if $\varphi_{t+1} \ge t$ we use the Good-Turing estimator, otherwise we use the empirical estimator. If $n_x = t$,
$$q_x = \begin{cases} \dfrac{t}{N} & \text{if } t > \varphi_{t+1}, \\[4pt] \dfrac{\varphi_{t+1}+1}{\varphi_t} \cdot \dfrac{t+1}{N} & \text{else,} \end{cases}$$
where N is a normalization factor. Note that we have replaced $\varphi_{t+1}$ in the Good-Turing estimator by $\varphi_{t+1} + 1$ to ensure that every symbol is assigned a non-zero probability.
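A minimal sketch of this combined estimator (the variable names and the handling of unseen symbols, where we take $\varphi_0$ to be the number of unseen symbols, are our assumptions):

```python
from collections import Counter

def gt_empirical_estimate(sample, alphabet):
    # Combined Good-Turing / empirical estimator described above: for a
    # symbol appearing t times, use the empirical mass t if t > phi_{t+1},
    # else the smoothed Good-Turing mass (phi_{t+1} + 1)/phi_t * (t + 1);
    # finally normalize by N, the sum of the unnormalized masses.
    counts = Counter(sample)
    phi = Counter(counts.values())   # phi[t] = number of symbols seen t times
    phi[0] = sum(1 for x in alphabet if x not in counts)  # unseen symbols
    unnorm = {}
    for x in alphabet:
        t = counts.get(x, 0)
        if t > phi[t + 1]:
            unnorm[x] = t                                  # empirical regime
        else:
            unnorm[x] = (phi[t + 1] + 1) / max(phi[t], 1) * (t + 1)
    N = sum(unnorm.values())
    return {x: m / N for x, m in unnorm.items()}

q = gt_empirical_estimate("abcabde", "abcdefg")
print(sum(q.values()))  # 1.0
```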
[Figure 2 panels: (a) Uniform, (b) Step, (c) Zipf with parameter 1, (d) Zipf with parameter 1.5, (e) Uniform prior (Dirichlet 1), (f) Dirichlet 1/2 prior. Each panel plots expected KL divergence (vertical axis) against the number of samples (horizontal axis, 1000 to 50000) for the Best-natural, Laplace, Braess-Sauer, Krichevsky-Trofimov, and Good-Turing + empirical estimators.]
Figure 2: Simulation results for support 10000, number of samples ranging from 1000 to 50000,
averaged over 200 trials.
We compare the performance of this estimator to four estimators: three popular add-β estimators and the optimal natural estimator. An add-β estimator $S_\beta$ has the form
$$q_x^{S_\beta} = \frac{n_x + \beta^{S_\beta}_{n_x}}{N(S_\beta)},$$
where $N(S_\beta)$ is a normalization factor to ensure that the probabilities add up to 1. The Laplace estimator, $\beta_t^{\mathrm{L}} = 1\ \forall t$, minimizes the expected loss when the underlying distribution is generated by a uniform prior over $\Delta_k$. The Krichevsky-Trofimov estimator, $\beta_t^{\mathrm{KT}} = 1/2\ \forall t$, is asymptotically min-max optimal for the cumulative regret, and minimizes the expected loss when the underlying distribution is generated according to a Dirichlet-1/2 prior. The Braess-Sauer estimator, $\beta_0^{\mathrm{BS}} = 1/2$, $\beta_1^{\mathrm{BS}} = 1$, $\beta_t^{\mathrm{BS}} = 3/4\ \forall t > 1$, is asymptotically min-max optimal for $r_n(\Delta_k)$. Finally, as shown in Lemma 10, the optimal estimator $q_x = S_{n_x}/\varphi_{n_x}$ achieves the lowest loss of any natural estimator designed with knowledge of the underlying distribution.
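A sketch of the add-β family and the oracle natural baseline (the function names and the lambda encodings of the β sequences are ours):

```python
from collections import Counter

def add_beta_estimate(sample, alphabet, beta):
    # Add-beta estimator: q_x proportional to n_x + beta(n_x), where
    # beta maps a count t to the additive constant beta_t.
    counts = Counter(sample)
    unnorm = {x: counts[x] + beta(counts[x]) for x in alphabet}
    N = sum(unnorm.values())
    return {x: m / N for x, m in unnorm.items()}

laplace      = lambda t: 1.0                                  # beta_t = 1
kt           = lambda t: 0.5                                  # beta_t = 1/2
braess_sauer = lambda t: 0.5 if t == 0 else (1.0 if t == 1 else 0.75)

def best_natural_estimate(sample, alphabet, p):
    # Oracle natural estimator q_x = S_{n_x} / phi_{n_x}: the true total
    # probability of symbols appearing n_x times, split equally among them.
    counts = Counter(sample)
    groups = {}
    for x in alphabet:
        groups.setdefault(counts[x], []).append(x)
    q = {}
    for xs in groups.values():
        S_t = sum(p[x] for x in xs)
        for x in xs:
            q[x] = S_t / len(xs)
    return q

p = {x: 1 / 7 for x in "abcdefg"}
print(add_beta_estimate("abcabde", "abcdefg", laplace)["a"])
print(best_natural_estimate("abcabde", "abcdefg", p)["f"])
```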
We compare the performance of the proposed estimator to that of the four estimators above. We consider six distributions: the uniform distribution, a step distribution with half the symbols having probability 1/(2k) and the other half having probability 3/(2k), the Zipf distribution with parameter 1 ($p_i \propto i^{-1}$), the Zipf distribution with parameter 1.5 ($p_i \propto i^{-1.5}$), a distribution generated by the uniform prior on $\Delta_k$, and a distribution generated from a Dirichlet-1/2 prior. All distributions have support size k = 10000. n ranges from 1000 to 50000 and the results are averaged over 200 trials.
Figure 2 shows the results. Observe that the proposed estimator performs similarly to the best
natural estimator for all six distributions. It also significantly outperforms the other estimators for
Zipf, uniform, and step distributions.
The performance of other estimators depends on the underlying distribution. For example, since
Laplace is the optimal estimator when the underlying distribution is generated from the uniform
prior, it performs well in Figure 2(e); however, it performs poorly on other distributions.
Furthermore, even though for distributions generated by Dirichlet priors, all the estimators have
similar looking regrets (Figures 2(e), 2(f)), the proposed estimator performs better than estimators
which are not designed specifically for that prior.
5 Proof sketch of Theorem 2
The proof consists of two parts. We first show that for every estimator q, $r_n^{\mathcal{P}_\sigma}(q, \Delta_k) \le r_n^{\mathrm{nat}}(q, \Delta_k)$, and then upper bound $r_n^{\mathrm{nat}}(q, \Delta_k)$ using results on combined probability mass.
Lemma 4 (Appendix B.1). For every estimator q,
$$r_n^{\mathcal{P}_\sigma}(q, \Delta_k) \le r_n^{\mathrm{nat}}(q, \Delta_k).$$
The proof of the above lemma relies on showing that the optimal estimator for every class $P \in \mathcal{P}_\sigma$ is natural.
5.1 Relation between $r_n^{\mathrm{nat}}(q, \Delta_k)$ and combined probability estimation
We now relate the regret in estimating the distribution to that of estimating the combined, or total, probability mass, defined as follows. Recall that $\varphi_t$ denotes the number of symbols appearing t times. For a sequence $x^n$, let $S_t = S_t(x^n)$ denote the total probability of symbols appearing t times. For notational convenience, we use $S_t$ to denote both $S_t(x^n)$ and $S_t(X^n)$; the usage becomes clear from the context. Similar to the KL divergence between distributions, we define the KL divergence between S and its estimate $\hat{S}$ as
$$D(S\|\hat{S}) = \sum_{t=0}^{n} S_t \log \frac{S_t}{\hat{S}_t}.$$
Since a natural estimator assigns the same probability to symbols that appear the same number of times, estimating probabilities is the same as estimating the total probability of symbols appearing a given number of times. We formalize this in the next lemma.
Lemma 5 (Appendix B.2). For a natural estimator q, let $\hat{S}_t(x^n) = \sum_{x : n_x = t} q_x(x^n)$; then
$$r_n^{\mathrm{nat}}(q, p) = \mathrm{E}[D(S\|\hat{S})].$$
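The quantities in Lemma 5 can be computed directly from a sample; a minimal sketch (function names are ours):

```python
import math
from collections import Counter

def combined_masses(sample, alphabet, p, q):
    # S[t] is the true total probability of symbols appearing t times in
    # the sample; S_hat[t] is the estimator q's total mass on those symbols.
    counts = Counter(sample)
    S, S_hat = Counter(), Counter()
    for x in alphabet:
        t = counts[x]
        S[t] += p[x]
        S_hat[t] += q[x]
    return S, S_hat

def kl_combined(S, S_hat):
    # D(S || S_hat) = sum_t S_t log(S_t / S_hat_t), over occupied counts t.
    return sum(S[t] * math.log(S[t] / S_hat[t]) for t in S if S[t] > 0)
```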
In Lemma 11 (Appendix B.3), we show that there is a natural estimator that achieves $r_n^{\mathrm{nat}}(\Delta_k)$. Taking the maximum over all distributions p and the minimum over all estimators q results in
Lemma 6. For a natural estimator q, let $\hat{S}_t(x^n) = \sum_{x : n_x = t} q_x(x^n)$; then
$$r_n^{\mathrm{nat}}(q, \Delta_k) = \max_{p \in \Delta_k} \mathrm{E}[D(S\|\hat{S})].$$
Furthermore,
$$r_n^{\mathrm{nat}}(\Delta_k) = \min_{\hat{S}} \max_{p \in \Delta_k} \mathrm{E}[D(S\|\hat{S})].$$
Thus finding the best competitive natural estimator is the same as finding the best estimator for the combined probability mass S. [13] proposed an algorithm for estimating S such that for all k and for all $p \in \Delta_k$, with probability $\ge 1 - 1/n$,
$$D(S\|\hat{S}) = \tilde{O}_n\Bigl(\frac{1}{\sqrt{n}}\Bigr).$$
The result is stated in Theorem 2 of [13]. One can convert this result to a result in expectation easily using the property that their estimator is bounded below by 1/(2n), and show that
$$\max_{p \in \Delta_k} \mathrm{E}[D(S\|\hat{S})] = \tilde{O}_n\Bigl(\frac{1}{\sqrt{n}}\Bigr).$$
A slight modification of their proofs for Lemma 17 and Theorem 2 in their paper, using $\sum_{t=1}^{n} \sqrt{\varphi_t} \le \sum_{t=1}^{n} \varphi_t \le k$, shows that their estimator $\hat{S}$ for the combined probability mass S satisfies
$$\max_{p \in \Delta_k} \mathrm{E}[D(S\|\hat{S})] = \tilde{O}_n\Bigl(\min\Bigl(\frac{1}{\sqrt{n}}, \frac{k}{n}\Bigr)\Bigr).$$
The above equation together with Lemmas 4 and 6 results in Theorem 2.
6 Acknowledgements
We thank Jayadev Acharya, Moein Falahatgar, Paul Ginsparg, Ashkan Jafarpour, Mesrob Ohannessian, Venkatadheeraj Pichapati, Yihong Wu, and the anonymous reviewers for helpful comments.
References
[1] William A. Gale and Geoffrey Sampson. Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics, 2(3):217–237, 1995.
[2] S. F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. In ACL, 1996.
[3] Liam Paninski. Variational minimax estimation of discrete distributions under KL loss. In NIPS, 2004.
[4] Hermann Ney, Ute Essen, and Reinhard Kneser. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language, 8(1):1–38, 1994.
[5] Fredrick Jelinek and Robert L. Mercer. Probability distribution estimation from sparse data. IBM Tech. Disclosure Bull., 1984.
[6] Irving J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4):237–264, 1953.
[7] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory (2nd ed.). Wiley, 2006.
[8] R. Krichevsky. Universal Compression and Retrieval. Dordrecht, The Netherlands: Kluwer, 1994.
[9] Sudeep Kamath, Alon Orlitsky, Dheeraj Pichapati, and Ananda Theertha Suresh. On learning distributions from their samples. In COLT, 2015.
[10] Dietrich Braess and Thomas Sauer. Bernstein polynomials and learning theory. Journal of Approximation Theory, 128(2):187–206, 2004.
[11] David A. McAllester and Robert E. Schapire. On the convergence rate of Good-Turing estimators. In COLT, 2000.
[12] Evgeny Drukh and Yishay Mansour. Concentration bounds for unigrams language model. In COLT, 2004.
[13] Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Theertha Suresh. Optimal probability estimation with applications to prediction and classification. In COLT, 2013.
[14] Alon Orlitsky, Narayana P. Santhanam, and Junan Zhang. Always Good Turing: Asymptotically optimal probability estimation. In FOCS, 2003.
[15] Boris Yakovlevich Ryabko. Twice-universal coding. Problemy Peredachi Informatsii, 1984.
[16] Boris Yakovlevich Ryabko. Fast adaptive coding algorithm. Problemy Peredachi Informatsii, 26(4):24–37, 1990.
[17] Dominique Bontemps, Stéphane Boucheron, and Elisabeth Gassiat. About adaptive coding on countable alphabets. IEEE Transactions on Information Theory, 60(2):808–821, 2014.
[18] Stéphane Boucheron, Elisabeth Gassiat, and Mesrob I. Ohannessian. About adaptive coding on countable alphabets: Max-stable envelope classes. CoRR, abs/1402.6305, 2014.
[19] David L. Donoho and Iain M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, 1994.
[20] Felix Abramovich, Yoav Benjamini, David L. Donoho, and Iain M. Johnstone. Adapting to unknown sparsity by controlling the false discovery rate. The Annals of Statistics, 2006.
[21] Peter J. Bickel, Chris A. Klaassen, Ya'acov Ritov, and Jon A. Wellner. Efficient and Adaptive Estimation for Semiparametric Models. Johns Hopkins University Press, Baltimore, 1993.
[22] Andrew Barron, Lucien Birgé, and Pascal Massart. Risk bounds for model selection via penalization. Probability Theory and Related Fields, 113(3):301–413, 1999.
[23] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2004.
[24] Jayadev Acharya, Hirakendu Das, Ashkan Jafarpour, Alon Orlitsky, and Shengjun Pan. Competitive closeness testing. In COLT, 2011.
[25] Jayadev Acharya, Hirakendu Das, Ashkan Jafarpour, Alon Orlitsky, Shengjun Pan, and Ananda Theertha Suresh. Competitive classification and closeness testing. In COLT, 2012.
[26] Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Theertha Suresh. A competitive test for uniformity of monotone distributions. In AISTATS, 2013.
[27] Gregory Valiant and Paul Valiant. An automatic inequality prover and instance optimal identity testing. In FOCS, 2014.
[28] Gregory Valiant and Paul Valiant. Instance optimal learning. CoRR, abs/1504.05321, 2015.
[29] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.
| 5762 |@word trial:2 version:2 polynomial:1 compression:2 trofimov:7 simulation:1 dominique:1 p0:5 incurs:3 jafarpour:5 minmax:1 outperforms:1 comparing:1 yet:3 must:3 john:1 tenet:1 refines:3 subsequent:1 partition:17 designed:16 n0:1 joy:1 half:3 theoretician:1 multiset:8 zhang:1 narayana:1 beta:1 competitiveness:1 prove:2 qualitative:1 consists:1 focs:2 expected:11 behavior:1 p1:1 frequently:1 actual:1 considering:1 becomes:1 estimating:9 underlying:14 bounded:2 mass:5 lowest:3 minimizes:2 moein:1 finding:2 guarantee:1 esti:1 quantitative:1 every:12 orlitsky:7 exactly:2 biometrika:2 demonstrates:1 appear:1 positive:2 before:1 understood:1 local:1 felix:1 consequence:1 ginsparg:1 kneser:1 acl:1 twice:2 studied:2 equivalence:3 conversely:1 challenging:1 liam:1 range:1 averaged:2 practical:6 unique:2 ond:1 testing:4 practice:4 regret:41 suresh:5 universal:2 empirical:14 significantly:2 adapting:1 matching:1 word:1 staple:1 cannot:1 close:1 convenience:1 selection:1 context:4 risk:1 equivalent:2 reviewer:1 elusive:1 independently:1 minq:1 simplicity:1 assigns:4 estimator:117 iain:1 deriving:1 century:1 searching:1 notion:1 variation:3 population:2 laplace:8 annals:1 diego:2 suppose:1 yishay:1 controlling:1 exact:3 us:1 associate:1 element:7 recognition:1 approximated:1 coarser:2 observed:5 worst:5 ryabko:2 depend:1 uniformity:1 upon:1 easily:1 alphabet:10 forced:2 fast:2 describe:3 jain:1 outcome:1 dordrecht:1 larger:3 otherwise:2 statistic:2 unseen:1 validates:1 sequence:3 dietrich:1 adaptation:1 relevant:1 till:1 poorly:3 achieve:1 convergence:1 boris:2 derive:2 alon:8 andrew:1 implemented:1 implies:1 posit:1 hermann:1 stochastic:1 mcallester:1 assign:4 anonymous:1 biological:1 pessimistic:1 hold:2 practically:1 diminish:1 achieves:3 bickel:1 estimation:13 diminishes:1 lucien:1 reflects:1 clearly:1 always:3 modified:1 rather:1 pn:3 poi:1 shrinkage:1 overwhelmingly:1 conjunction:1 structuring:1 derived:1 refining:1 notational:1 modelling:1 indicates:1 tech:1 mesrob:2 problemy:2 helpful:1 dependent:1 typically:1 relation:2 provably:2 overall:1 classification:2 colt:6 pascal:1 denoted:2 smoothing:1 spatial:1 uc:2 field:1 construct:2 having:1 sampling:1 nearly:6 jon:1 simplex:1 ephane:2 acharya:5 few:3 divergence:14 individual:1 replaced:1 n1:2 william:1 ab:2 interest:1 essen:1 analyzed:1 implication:1 closer:1 partial:1 sauer:8 instance:4 modeling:2 cover:1 yoav:1 shengjun:2 bull:1 subset:3 consolidating:1 uniform:11 gregory:2 combined:8 st:9 fundamental:1 randomized:1 probabilistic:2 michael:1 together:1 quickly:1 hopkins:1 ambiguity:1 possibly:1 gale:1 attaining:1 coding:4 abramovich:1 depends:1 multiplicative:1 unigrams:1 observing:1 red:2 competitive:21 start:1 accuracy:1 mator:1 who:1 yield:2 pichapati:2 researcher:1 published:1 explain:1 qxs:1 ashkan:5 ed:1 definition:2 frequency:5 involved:2 elisabeth:2 proof:7 popular:2 recall:2 knowledge:16 subsection:1 formalize:1 sophisticated:1 carefully:1 reflecting:1 appears:3 alexandre:1 follow:1 formulation:4 evaluated:1 though:1 ritov:1 generality:1 furthermore:3 just:1 mitzenmacher:1 sketch:1 horizontal:1 lack:1 perhaps:1 usage:1 discounting:3 hence:7 assigned:2 boucheron:2 leibler:1 illustrated:1 irving:1 outline:2 performs:4 ranging:2 wise:1 variational:1 recently:1 common:1 overview:1 tail:2 association:1 slight:1 relating:1 kluwer:1 refer:1 theorist:1 cambridge:1 zipf:5 automatic:1 similarly:2 fano:1 benjamini:1 language:6 ute:1 stable:1 add:13 hide:1 showed:4 driven:5 certain:2 inequality:2 minimum:1 additional:1 determine:1 full:1 
tear:1 cross:1 long:1 retrieval:1 prediction:1 variant:3 basic:2 essentially:1 expectation:1 poisson:1 normalization:2 achieved:1 background:1 semiparametric:1 baltimore:1 else:1 goodman:1 envelope:1 massart:1 comment:1 subject:1 tend:2 call:1 practitioner:2 near:6 ideal:1 intermediate:1 bernstein:1 easy:1 fit:1 competing:2 opposite:1 simplifies:1 knowing:4 yihong:1 motivated:1 six:2 wellner:1 peter:1 speech:2 clear:1 ohannessian:2 netherlands:1 nonparametric:1 tsybakov:1 reinhard:1 simplest:1 schapire:1 estimated:1 brae:8 blue:1 discrete:5 santhanam:1 redundancy:1 four:2 terminology:1 hirakendu:2 clarity:1 asymptotically:3 monotone:1 convert:1 turing:28 eli:1 klaassen:1 wu:1 appendix:8 bit:1 def:11 bound:16 oracle:14 adapted:1 informatsii:2 x2:1 min:27 optimality:1 px:2 according:2 combination:2 rnp:12 remain:1 smaller:2 pan:2 partitioned:1 b:2 modification:1 tkt:1 restricted:1 multiplicity:1 equation:1 previously:2 turn:1 count:1 know:8 disclosure:1 venkatadheeraj:1 observe:3 barron:1 birg:1 appearing:15 ney:1 coin:1 thomas:3 compress:1 denotes:3 dirichlet:5 include:1 ensure:2 linguistics:1 establish:1 jayadev:5 question:1 quantity:1 prover:1 concentration:1 dependence:1 junan:1 krichevsky:8 distance:1 thank:1 nx:8 chris:1 provable:1 illustration:1 robert:2 kamath:1 relate:1 stated:1 countable:2 unknown:1 perform:3 upper:5 vertical:1 virology:1 looking:1 head:2 rn:29 ucsd:2 mansour:1 david:3 namely:4 kl:21 extensive:1 nip:1 acov:1 below:2 usually:1 appeared:2 regime:3 sparsity:1 tb:1 max:25 green:1 power:2 natural:28 minimax:1 axis:2 multisets:2 text:1 prior:19 acknowledgement:1 discovery:1 relative:6 law:2 loss:16 expect:1 permutation:9 proportional:1 geoffrey:1 penalization:1 upfal:1 mercer:4 pi:5 translation:1 ibm:1 surprisingly:1 english:1 allow:1 understand:1 johnstone:2 taking:1 absolute:6 jelinek:4 sparse:2 peredachi:2 dimension:2 xn:14 vocabulary:1 cumulative:1 collection:4 adaptive:5 san:2 universally:1 qx:6 transaction:1 approximate:1 keep:2 corpus:1 evgeny:1 why:2 reality:1 nature:1 poly:1 complex:1 constructing:1 da:2 aistats:1 pk:1 paul:3 gassiat:2 n2:2 falahatgar:1 x1:1 referred:1 tl:1 aid:1 wiley:1 wavelet:1 theorem:9 showing:2 symbol:22 theertha:5 closeness:2 enduring:1 false:1 corr:2 valiant:4 nat:3 occurring:1 chen:1 subtract:1 entropy:3 logarithmic:2 paninski:1 springer:1 corresponds:2 determines:1 relies:1 satisfies:1 viewed:1 identity:1 donoho:2 sampson:1 toss:1 aided:4 specifically:2 uniformly:7 ananda:5 lemma:11 called:2 total:4 specie:1 ya:1 rarely:1 support:4 bioinformatics:1 evaluate:1 |
5,261 | 5,763 | Fast Convergence of Regularized Learning in Games
Vasilis Syrgkanis
Microsoft Research
New York, NY
vasy@microsoft.com
Alekh Agarwal
Microsoft Research
New York, NY
alekha@microsoft.com
Haipeng Luo
Princeton University
Princeton, NJ
haipengl@cs.princeton.edu
Robert E. Schapire
Microsoft Research
New York, NY
schapire@microsoft.com
Abstract
We show that natural classes of regularized learning algorithms with a form of
recency bias achieve faster convergence rates to approximate efficiency and to
coarse correlated equilibria in multiplayer normal form games. When each player
in a game uses an algorithm from our class, their individual regret decays at
$O(T^{-3/4})$, while the sum of utilities converges to an approximate optimum at
$O(T^{-1})$, an improvement upon the worst-case $O(T^{-1/2})$ rates. We show a black-box
reduction for any algorithm in the class to achieve $\tilde{O}(T^{-1/2})$ rates against an
adversary, while maintaining the faster rates against algorithms in the class. Our
results extend those of Rakhlin and Sridharan [17] and Daskalakis et al. [4], who
only analyzed two-player zero-sum games for specific algorithms.
1 Introduction
What happens when players in a game interact with one another, all of them acting independently
and selfishly to maximize their own utilities? If they are smart, we intuitively expect their utilities
(both individually and as a group) to grow, perhaps even to approach the best possible. We
also expect the dynamics of their behavior to eventually reach some kind of equilibrium. Understanding these dynamics is central to game theory as well as its various application areas, including
economics, network routing, auction design, and evolutionary biology.
It is natural in this setting for the players to each make use of a no-regret learning algorithm for making their decisions, an approach known as decentralized no-regret dynamics. No-regret algorithms
are a strong match for playing games because their regret bounds hold even in adversarial environments. As a benefit, these bounds ensure that each player's utility approaches optimality. When
played against one another, it can also be shown that the sum of utilities approaches an approximate
optimum [2, 18], and the player strategies converge to an equilibrium under appropriate conditions [6, 1, 8], at rates governed by the regret bounds. Well-known families of no-regret algorithms
include multiplicative-weights [13, 7], Mirror Descent [14], and Follow the Regularized/Perturbed
Leader [12]. (See [3, 19] for excellent overviews.) For all of these, the average regret vanishes at the worst-case rate of $O(1/\sqrt{T})$, which is unimprovable in fully adversarial scenarios.
However, the players in our setting are facing other similar, predictable no-regret learning algorithms, a chink that hints at the possibility of improved convergence rates for such dynamics. This
was first observed and exploited by Daskalakis et al. [4]. For two-player zero-sum games, they developed a decentralized variant of Nesterov's accelerated saddle point algorithm [15] and showed
that each player's average regret converges at the remarkable rate of O(1/T). Although the resulting
dynamics are somewhat unnatural, in later work, Rakhlin and Sridharan [17] showed surprisingly
that the same convergence rate holds for a simple variant of Mirror Descent with the seemingly
minor modification that the last utility observation is counted twice.
Although major steps forward, both these works are limited to two-player zero-sum games, the very
simplest case. As such, they do not cover many practically important settings, such as auctions or
routing games, which are decidedly not zero-sum, and which involve many independent actors.
In this paper, we vastly generalize these techniques to the practically important but far more challenging case of arbitrary multi-player normal-form games, giving natural no-regret dynamics whose
convergence rates are much faster than previously possible for this general setting.
Contributions. We show that the average welfare of the game, that is, the sum of player utilities,
converges to approximately optimal welfare at the rate O(1/T), rather than the previously known
rate of $O(1/\sqrt{T})$. Concretely, we show a natural class of regularized no-regret algorithms with recency bias that achieve welfare at least $(\lambda/(1+\mu))\,\mathrm{OPT} - O(1/T)$, where $\lambda$ and $\mu$ are parameters
in a smoothness condition on the game introduced by Roughgarden [18]. For the same class of algorithms, we show that each individual player's average regret converges to zero at the rate $O(T^{-3/4})$.
Thus, our results entail an algorithm for computing coarse correlated equilibria in a decentralized
manner with significantly faster convergence than existing methods.
We additionally give a black-box reduction that preserves the fast rates in favorable environments, while robustly maintaining $\tilde{O}(1/\sqrt{T})$ regret against any opponent in the worst case.
Even for two-person zero-sum games, our results for general games expose a hidden generality and
modularity underlying the previous results [4, 17]. First, our analysis identifies stability and recency
bias as key structural ingredients of an algorithm with fast rates. This covers the Optimistic Mirror
Descent of Rakhlin and Sridharan [17] as an example, but also applies to optimistic variants of Follow the Regularized Leader (FTRL), including dependence on arbitrary weighted windows in the
history as opposed to just the utility from the last round. Recency bias is a behavioral pattern commonly observed in game-theoretic environments [9]; as such, our results can be viewed as a partial
theoretical justification. Second, previous approaches in [4, 17] on achieving both faster convergence against similar algorithms while at the same time $O(1/\sqrt{T})$ regret rates against adversaries
were shown via ad-hoc modifications of specific algorithms. We give a black-box modification
which is not algorithm specific and works for all these optimistic algorithms.
Finally, we simulate a 4-bidder simultaneous auction game, and compare our optimistic algorithms
against Hedge [7] in terms of utilities, regrets and convergence to equilibria.
2 Repeated Game Model and Dynamics
Consider a static game G among a set N of n players. Each player i has a strategy space $S_i$ and a utility function $u_i : S_1 \times \cdots \times S_n \to [0, 1]$ that maps a strategy profile $s = (s_1, \ldots, s_n)$ to a utility $u_i(s)$. We assume that the strategy space of each player is finite and has cardinality d, i.e. $|S_i| = d$. We denote with $w = (w_1, \ldots, w_n)$ a profile of mixed strategies, where $w_i \in \Delta(S_i)$ and $w_{i,x}$ is the probability of strategy $x \in S_i$. Finally, let $U_i(w) = \mathrm{E}_{s \sim w}[u_i(s)]$, the expected utility of player i.
We consider the setting where the game G is played repeatedly for T time steps. At each time step t each player i picks a mixed strategy $w_i^t \in \Delta(S_i)$. At the end of the iteration each player i observes the expected utility he would have received had he played any possible strategy $x \in S_i$. More formally, let $u_{i,x}^t = \mathrm{E}_{s_{-i} \sim w_{-i}^t}[u_i(x, s_{-i})]$, where $s_{-i}$ is the set of strategies of all but the i-th player, and let $u_i^t = (u_{i,x}^t)_{x \in S_i}$. At the end of each iteration each player i observes $u_i^t$. Observe that the expected utility of a player at iteration t is simply the inner product $\langle w_i^t, u_i^t \rangle$.
No-regret dynamics. We assume that the players each decide their strategy $w_i^t$ based on a vanishing-regret algorithm. Formally, for each player i, the regret after T time steps is equal to the maximum gain he could have achieved by switching to any other fixed strategy:
$$r_i(T) = \sup_{w_i^* \in \Delta(S_i)} \sum_{t=1}^{T} \langle w_i^* - w_i^t,\; u_i^t \rangle.$$
The algorithm has vanishing regret if $r_i(T) = o(T)$.
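Since the benchmark is a linear objective over the simplex, the supremum is attained at a pure strategy, so regret can be computed as follows (a sketch, with names of our choosing):

```python
import numpy as np

def regret(ws, us):
    # ws[t] is the mixed strategy played at round t, us[t] the observed
    # expected-utility vector u_i^t; the sup over the simplex is attained
    # at the best single (pure) strategy in hindsight.
    ws, us = np.asarray(ws), np.asarray(us)
    best_fixed = us.sum(axis=0).max()          # best pure strategy in hindsight
    realized = float((ws * us).sum())          # sum_t <w^t, u^t>
    return best_fixed - realized
```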
Approximate Efficiency of No-Regret Dynamics. We are interested in analyzing the average welfare of such vanishing-regret sequences. For a given strategy profile s, the social welfare is defined as the sum of the player utilities: $W(s) = \sum_{i \in N} u_i(s)$. We overload notation to denote $W(w) = \mathrm{E}_{s \sim w}[W(s)]$. We want to lower bound how far the average welfare of the sequence is with respect to the optimal welfare of the static game:
$$\mathrm{OPT} = \max_{s \in S_1 \times \cdots \times S_n} W(s).$$
This is the optimal welfare achievable in the absence of player incentives, if a central coordinator could dictate each player's strategy. We next define a class of games, first identified by Roughgarden [18], on which we can approximate the optimal welfare using decoupled no-regret dynamics.
Definition 1 (Smooth game [18]). A game is $(\lambda, \mu)$-smooth if there exists a strategy profile $s^*$ such that for any strategy profile s: $\sum_{i \in N} u_i(s_i^*, s_{-i}) \ge \lambda\,\mathrm{OPT} - \mu\,W(s)$.
In words, any player using his optimal strategy continues to do well irrespective of other players' strategies. This condition directly implies near-optimality of no-regret dynamics, as we show below.
Proposition 2. In a $(\lambda, \mu)$-smooth game, if each player i suffers regret at most $r_i(T)$, then:
$$\frac{1}{T}\sum_{t=1}^{T} W(w^t) \;\ge\; \frac{\lambda}{1+\mu}\,\mathrm{OPT} - \frac{1}{1+\mu}\,\frac{1}{T}\sum_{i \in N} r_i(T) \;=\; \frac{1}{\rho}\,\mathrm{OPT} - \frac{1}{1+\mu}\,\frac{1}{T}\sum_{i \in N} r_i(T),$$
where the factor $\rho = (1+\mu)/\lambda$ is called the price of anarchy (POA).
This proposition is essentially a more explicit version of Roughgarden's result [18]; we provide a proof in the appendix for completeness. The result shows that the convergence to POA is driven by the quantity $\frac{1}{1+\mu}\,\frac{1}{T}\sum_{i \in N} r_i(T)$. There are many algorithms which achieve a regret rate of $r_i(T) = O(\sqrt{\log(d)\,T})$, in which case the latter theorem would imply that the average welfare converges to POA at a rate of $O(n\sqrt{\log(d)/T})$. As we will show, for some natural classes of no-regret algorithms the average welfare converges at the much faster rate of $O(n^2 \log(d)/T)$.
3 Fast Convergence to Approximate Efficiency
In this section, we present our main theoretical results characterizing a class of no-regret dynamics
which lead to faster convergence in smooth games. We begin by describing this class.
Definition 3 (RVU property). We say that a vanishing-regret algorithm satisfies the Regret bounded by Variation in Utilities (RVU) property with parameters $\alpha > 0$ and $0 < \beta \le \gamma$ and a pair of dual norms $(\|\cdot\|, \|\cdot\|_*)$¹ if its regret on any sequence of utilities $u^1, u^2, \ldots, u^T$ is bounded as
$$\sum_{t=1}^{T} \langle w^* - w^t, u^t \rangle \;\le\; \alpha + \beta \sum_{t=1}^{T} \|u^t - u^{t-1}\|_*^2 \;-\; \gamma \sum_{t=1}^{T} \|w^t - w^{t-1}\|^2. \qquad (1)$$
Typical online learning algorithms such as Mirror Descent and FTRL do not satisfy the RVU property in their vanilla form, as the middle term grows as $\sum_{t=1}^{T} \|u^t\|_*^2$ for these methods. However, Rakhlin and Sridharan [16] give a modification of Mirror Descent with this property, and we will present a similar variant of FTRL in the sequel.
We now present two sets of results when each player uses an algorithm with this property. The first discusses the convergence of social welfare, while the second governs the convergence of the individual players' utilities at a fast rate.
¹ The dual to a norm $\|\cdot\|$ is defined as $\|v\|_* = \sup_{\|u\| \le 1} \langle u, v \rangle$.
3.1 Fast Convergence of Social Welfare
Given Proposition 2, we only need to understand the evolution of the sum of players' regrets $\sum_{i \in N} r_i(T)$ in order to obtain convergence rates of the social welfare. Our main result in this section bounds this sum when each player uses dynamics with the RVU property.
Theorem 4. Suppose that the algorithm of each player i satisfies the RVU property with parameters $\alpha$, $\beta$ and $\gamma$ such that $\beta \le \gamma/(n-1)^2$ and $\|\cdot\| = \|\cdot\|_1$. Then $\sum_{i \in N} r_i(T) \le \alpha n$.
Proof. Since $u_i(s) \le 1$, the definitions imply: $\|u_i^t - u_i^{t-1}\|_* \le \sum_{s_{-i}} \bigl|\prod_{j \ne i} w^t_{j,s_j} - \prod_{j \ne i} w^{t-1}_{j,s_j}\bigr|$. The latter is the total variation distance of two product distributions. By known properties of total variation (see e.g. [11]), this is bounded by the sum of the total variations of each marginal distribution:
$$\sum_{s_{-i}} \Bigl|\prod_{j \ne i} w^t_{j,s_j} - \prod_{j \ne i} w^{t-1}_{j,s_j}\Bigr| \;\le\; \sum_{j \ne i} \|w_j^t - w_j^{t-1}\|. \qquad (2)$$
By Jensen's inequality, $\bigl(\sum_{j \ne i} \|w_j^t - w_j^{t-1}\|\bigr)^2 \le (n-1) \sum_{j \ne i} \|w_j^t - w_j^{t-1}\|^2$, so that
$$\sum_{i \in N} \|u_i^t - u_i^{t-1}\|_*^2 \;\le\; (n-1) \sum_{i \in N} \sum_{j \ne i} \|w_j^t - w_j^{t-1}\|^2 \;=\; (n-1)^2 \sum_{i \in N} \|w_i^t - w_i^{t-1}\|^2.$$
The theorem follows by summing up the RVU property (1) for each player i and observing that the summation of the second terms is smaller than that of the third terms and thereby can be dropped.
Remark: The rates from the theorem depend on $\alpha$, which will be O(1) in the sequel. The above theorem extends to the case where $\|\cdot\|$ is any norm equivalent to the $\ell_1$ norm. The resulting requirement on $\beta$ in terms of $\gamma$ can however be more stringent. Also, the theorem does not require that all players use the same no-regret algorithm, unlike previous results [4, 17], as long as each player's algorithm satisfies the RVU property with a common bound on the constants.
We now instantiate the result with examples that satisfy the RVU property with different constants.
3.1.1 Optimistic Mirror Descent
The optimistic mirror descent (OMD) algorithm of Rakhlin and Sridharan [16] is parameterized by an adaptive predictor sequence $M_i^t$ and a regularizer² R which is 1-strongly convex³ with respect to a norm $\|\cdot\|$. Let $D_R$ denote the Bregman divergence associated with R. Then the update rule is defined as follows: let $g_i^0 = \arg\min_{g \in \Delta(S_i)} R(g)$ and
$$\Pi(u, g) = \arg\max_{w \in \Delta(S_i)} \; \eta\,\langle w, u \rangle - D_R(w, g),$$
then:
$$w_i^t = \Pi(M_i^t, g_i^{t-1}), \quad \text{and} \quad g_i^t = \Pi(u_i^t, g_i^{t-1}).$$
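For instance, with the entropy regularizer the Bregman projection has the closed multiplicative-weights form $w \propto g \cdot e^{\eta u}$, giving the following sketch (our instantiation; the update above is stated for a general regularizer):

```python
import numpy as np

def omd_round(g_prev, M_t, u_t, eta):
    # Optimistic mirror descent on the simplex with the entropy regularizer.
    # The play w_t uses the predictor M_t (e.g. M_t = u_{t-1}); the
    # bookkeeping state g_t uses the realized utility vector u_t.
    def prox(g, u):
        w = g * np.exp(eta * u)
        return w / w.sum()
    return prox(g_prev, M_t), prox(g_prev, u_t)

d, eta = 4, 0.1
g, u_prev = np.full(d, 1.0 / d), np.zeros(d)
for u in np.random.RandomState(0).rand(10, d):
    w, g = omd_round(g, u_prev, u, eta)       # w is the strategy played
    u_prev = u
```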
Then the following proposition can be obtained for this method.
Proposition 5. The OMD algorithm using step size $\eta$ and $M_i^t = u_i^{t-1}$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta$, $\gamma = 1/(8\eta)$, where $R = \max_i \sup_f D_R(f, g_i^0)$.
The proposition follows by further crystallizing the arguments of Rakhlin and Sridharan [17], and we
provide a proof in the appendix for completeness. The above proposition, along with Theorem 4,
immediately yields the following corollary, which had been proved by Rakhlin and Sridharan [17]
for two-person zero-sum games, and which we here extend to general games.
Corollary 6. If each player runs OMD with $M_i^t = u_i^{t-1}$ and step size $\eta = 1/(\sqrt{8}\,(n-1))$, then we have $\sum_{i \in N} r_i(T) \le nR/\eta = \sqrt{8}\,n(n-1)R = O(1)$.
The corollary follows by noting that the condition $\beta \le \gamma/(n-1)^2$ is met with our choice of $\eta$.

² Here and in the sequel, we can use a different regularizer $R_i$ for each player i, without qualitatively affecting any of the results.
³ R is 1-strongly convex if $R\bigl(\frac{u+v}{2}\bigr) \le \frac{R(u)+R(v)}{2} - \frac{\|u-v\|^2}{8}$, $\forall u, v$.
3.1.2 Optimistic Follow the Regularized Leader
We next consider a different class of algorithms denoted as optimistic follow the regularized leader (OFTRL). This algorithm is similar but not equivalent to OMD, and is an analogous extension of standard FTRL [12]. This algorithm takes the same parameters as OMD and is defined as follows: let $w_i^0 = \arg\min_{w \in \Delta(S_i)} R(w)$ and:
$$w_i^T = \arg\max_{w \in \Delta(S_i)} \; \Bigl\langle w,\; \sum_{t=1}^{T-1} u_i^t + M_i^T \Bigr\rangle - \frac{R(w)}{\eta}.$$
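With the entropy regularizer, the OFTRL argmax also has a closed softmax form; a minimal sketch (our instantiation):

```python
import numpy as np

def oftrl_play(cum_u, M_t, eta):
    # argmax over the simplex of <w, cum_u + M_t> - R(w)/eta with entropic
    # R is a softmax of eta * (cum_u + M_t); the max is subtracted for
    # numerical stability.
    z = eta * (cum_u + M_t)
    w = np.exp(z - z.max())
    return w / w.sum()

d, eta = 4, 0.1
cum_u, u_prev = np.zeros(d), np.zeros(d)
for u in np.random.RandomState(1).rand(8, d):
    w = oftrl_play(cum_u, u_prev, eta)   # M_t = u^{t-1}: Optimistic Hedge
    cum_u += u
    u_prev = u
```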
We consider three variants of OFTRL with different choices of the sequence Mti , incorporating the
recency bias in different forms.
One-step recency bias: The simplest form of OFTRL uses $M_i^t = u_i^{t-1}$ and obtains the following result, where $R = \max_i \bigl(\sup_{f \in \Delta(S_i)} R(f) - \inf_{f \in \Delta(S_i)} R(f)\bigr)$.
Proposition 7. The OFTRL algorithm using step size $\eta$ and $M_i^t = u_i^{t-1}$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta$ and $\gamma = 1/(4\eta)$.
Combined with Theorem 4, this yields the following constant bound on the total regret of all players:
Corollary 8. If each player runs OFTRL with $M_i^t = u_i^{t-1}$ and $\eta = 1/(2(n-1))$, then we have $\sum_{i \in N} r_i(T) \le nR/\eta = 2n(n-1)R = O(1)$.
Rakhlin and Sridharan [16] also analyze an FTRL variant, but require a self-concordant barrier for
the constraint set as opposed to an arbitrary strongly convex regularizer, and their bound is missing
the crucial negative terms of the RVU property which are essential for obtaining Theorem 4.
H-step recency bias: More generally, given a window size H, one can define $M_i^t = \frac{1}{H}\sum_{\tau = t-H}^{t-1} u_i^\tau$. We have the following proposition.
Proposition 9. The OFTRL algorithm using step size $\eta$ and $M_i^t = \frac{1}{H}\sum_{\tau = t-H}^{t-1} u_i^\tau$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta H^2$ and $\gamma = 1/(4\eta)$.
Setting $\eta = 1/(2H(n-1))$, we obtain the analogue of Corollary 8, with an extra factor of H.
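The predictors of this section are simple functions of the utility history; minimal sketches (our reading of the formulas above and of Proposition 10 below):

```python
import numpy as np

def h_step_predictor(history, H, d):
    # M^t = average of the last H observed utility vectors; rounds that
    # do not exist yet are treated as zero vectors.
    M = np.zeros(d)
    for u in history[-H:]:
        M += u
    return M / H

def discounted_predictor(history, d, delta):
    # Geometrically discounted predictor: weight delta^(t-1-tau) on u^tau,
    # normalized by sum_{tau=0}^{t-1} delta^tau.
    M, Z = np.zeros(d), 0.0
    for age, u in enumerate(reversed(history)):   # age 0 = most recent
        M += (delta ** age) * u
        Z += delta ** age
    return M / Z if Z > 0 else M
```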
Geometrically discounted recency bias: The next proposition considers an alternative form of recency bias which includes all the previous utilities, but with a geometric discounting.
Proposition 10. The OFTRL algorithm using step size $\eta$ and $M_i^t = \frac{1}{\sum_{\tau=0}^{t-1}\delta^\tau}\sum_{\tau=0}^{t-1}\delta^{t-1-\tau}\,u_i^\tau$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta/(1-\delta)^3$ and $\gamma = 1/(8\eta)$.
Note that these choices for $M_i^t$ can also be used in OMD with qualitatively similar results.
3.2 Fast Convergence of Individual Utilities
The previous section shows implications of the RVU property on the social welfare. This section
complements these with a similar result for each player's individual utility.
Theorem 11. Suppose that the players use algorithms satisfying the RVU property with parameters $\alpha > 0$, $\beta > 0$, $\gamma \ge 0$. If we further have the stability property $\|w_i^t - w_i^{t+1}\| \le \epsilon$, then for any player: $\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \le \alpha + \beta\,\epsilon^2 (n-1)^2\,T$.
Similar reasoning as in Theorem 4 yields: $\|u_i^t - u_i^{t-1}\|_*^2 \le (n-1)\sum_{j \ne i} \|w_j^t - w_j^{t-1}\|^2 \le (n-1)^2\epsilon^2$, and summing the terms gives the theorem.
Noting that OFTRL satisfies the RVU property with constants given in Proposition 7 and the stability property with $\epsilon = 2\eta$ (see Lemma 20 in the appendix), we have the following corollary.
Corollary 12. If all players use the OFTRL algorithm with $M_i^t = u_i^{t-1}$ and $\eta = (n-1)^{-1/2}\,T^{-1/4}$, then we have $\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \le (R+4)\sqrt{n-1}\;T^{1/4}$.
Similar results hold for the other forms of recency bias, as well as for OMD. Corollary 12 gives a fast convergence rate of the players' strategies to the set of coarse correlated equilibria (CCE) of the game. This improves the previously known convergence rate of $1/\sqrt{T}$ (e.g. [10]) to CCE using the natural, decoupled no-regret dynamics defined in [4].
4 Robustness to Adversarial Opponent
So far we have shown simple dynamics with rapid convergence properties in favorable environments when each player in the game uses an algorithm with the RVU property. It is natural to wonder if this comes at the cost of worst-case guarantees when some players do not use algorithms with this property. Rakhlin and Sridharan [17] address this concern by modifying the OMD algorithm with additional smoothing and adaptive step-sizes so as to preserve the fast rates in the favorable case while still guaranteeing $O(1/\sqrt{T})$ regret for each player, no matter how the opponents play. It is not so obvious how this modification might extend to other procedures, and it seems undesirable to abandon the black-box regret transformations we used to obtain Theorem 4. In this section, we present a generic way of transforming an algorithm which satisfies the RVU property so that it retains the fast convergence in favorable settings, but always guarantees a worst-case regret of $\tilde{O}(1/\sqrt{T})$.
In order to present our modification, we need a parametric form of the RVU property which will
also involve a tunable parameter of the algorithm. For most online learning algorithms, this will
correspond to the step-size parameter used by the algorithm.
Definition 13 (RVU(ι) property). We say that a parametric algorithm A(ι) satisfies the Regret bounded by Variation in Utilities(ι) (RVU(ι)) property with parameters $\alpha, \beta, \gamma > 0$ and a pair of dual norms $(\|\cdot\|, \|\cdot\|_*)$ if its regret on any sequence of utilities $u^1, u^2, \ldots, u^T$ is bounded as
$$\sum_{t=1}^{T} \langle w^* - w^t, u^t \rangle \;\le\; \frac{\alpha}{\iota} + \iota\,\beta \sum_{t=1}^{T} \|u^t - u^{t-1}\|_*^2 \;-\; \frac{\gamma}{\iota} \sum_{t=1}^{T} \|w^t - w^{t-1}\|^2. \qquad (3)$$
In both the OMD and OFTRL algorithms from Section 3, the parameter ι is precisely the step size η. We now show an adaptive choice of ι according to an epoch-based doubling schedule.
Black-box reduction. Given a parametric algorithm A(ι) as a black-box, we construct a wrapper A′ based on the doubling trick. The algorithm of each player proceeds in epochs. At each epoch r, player i has an upper bound $B_r$ on the quantity $\sum_{t=1}^{T} \|u_i^t - u_i^{t-1}\|_*^2$. We start with a parameter $\bar\iota$ and $B_1 = 1$, and for $\tau = 1, 2, \ldots, T$ repeat (a code sketch follows below):
1. Play according to A($\iota_r$) and receive $u_i^\tau$.
2. If $\sum_{t=1}^{\tau} \|u_i^t - u_i^{t-1}\|_*^2 \ge B_r$:
   (a) Update $r \leftarrow r + 1$, $B_r \leftarrow 2B_r$, $\iota_r = \min\bigl\{\sqrt{\alpha/B_r},\, \bar\iota\bigr\}$, with $\alpha$ as in Equation (3).
   (b) Start a new run of A with parameter $\iota_r$.
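A sketch of the wrapper (the `play_fn` interface and the exact parameter schedule are our assumptions; the paper specifies the schedule through Equation (3)):

```python
import numpy as np

def doubling_wrapper(play_fn, iota_bar, us, alpha=1.0):
    # Epoch-based doubling wrapper A' around a parametric base algorithm:
    # play_fn(iota, epoch_history) returns a mixed strategy. B doubles
    # whenever the cumulative utility variation reaches it, the parameter
    # shrinks like sqrt(alpha / B), and the base algorithm is restarted.
    B, iota, epoch_start = 1.0, iota_bar, 0
    variation, history, plays = 0.0, [], []
    for u in us:
        plays.append(play_fn(iota, history[epoch_start:]))
        if history:
            variation += float(np.sum((u - history[-1]) ** 2))
        history.append(u)
        if variation >= B:
            B *= 2.0
            iota = min(iota_bar, float(np.sqrt(alpha / B)))
            epoch_start = len(history)        # fresh run of the base algorithm
    return plays

def optimistic_hedge(iota, hist, d=3):
    # Base A(iota): softmax of past utilities with the last one doubled.
    z = iota * (sum(hist[:-1]) + 2 * hist[-1]) if hist else np.zeros(d)
    w = np.exp(z - np.max(z))
    return w / w.sum()

plays = doubling_wrapper(optimistic_hedge, 0.5,
                         list(np.random.RandomState(2).rand(20, 3)))
```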
Theorem 14. Algorithm A′ achieves regret at most the minimum of the following two terms:
$$\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \;\le\; \log(T)\Bigl(2 + \frac{\alpha}{\bar\iota}\Bigr) + (2 + \bar\iota\,\beta) \sum_{t=1}^{T} \|u_i^t - u_i^{t-1}\|_*^2 \;-\; \frac{\gamma}{\bar\iota} \sum_{t=1}^{T} \|w_i^t - w_i^{t-1}\|^2; \qquad (4)$$
$$\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \;\le\; \log(T)\Bigl(1 + \frac{\alpha}{\bar\iota}\Bigr) + (1 + \bar\iota\,\beta) \cdot \sqrt{2 \sum_{t=1}^{T} \|u_i^t - u_i^{t-1}\|_*^2}. \qquad (5)$$
That is, the algorithm satisfies the RVU property, and also has regret that can never exceed $\tilde{O}(\sqrt{T})$.
The theorem thus yields the following corollary, which illustrates the stated robustness of A′.
Corollary 15. Algorithm A′, with $\bar\iota = \frac{\gamma}{(2+\beta)(n-1)^2 \log(T)}$, achieves regret $\tilde{O}(\sqrt{T})$ against any adversarial sequence, while at the same time satisfying the conditions of Theorem 4. Thereby, if all players use such an algorithm, then: $\sum_{i \in N} r_i(T) \le n\log(T)(\alpha/\bar\iota + 2) = \tilde{O}(1)$.
[Figure 1 panels: sum of regrets (left) and maximum of individual regrets (right), plotting cumulative regret against the number of rounds (up to 10000) for Hedge and Optimistic Hedge.]
Figure 1: Maximum and sum of individual regrets over time under the Hedge (blue) and
Optimistic Hedge (red) dynamics.
Proof. Observe that for such $\bar\iota$, we have that: $(2 + \bar\iota\,\beta)\log(T) \le (2+\beta)\log(T) \le \frac{\gamma}{\bar\iota\,(n-1)^2}$. Therefore, algorithm A′ satisfies the sufficient conditions of Theorem 4.
If A(ι) is the OFTRL algorithm, then we know by Proposition 7 that the above result applies with $\alpha = R = \max_w R(w)$, $\beta = 1$, $\gamma = \frac{1}{4}$ and $\iota = \eta$. Setting $\bar\iota = \frac{\gamma}{(2+\beta)(n-1)^2} = \frac{1}{12(n-1)^2}$, the resulting algorithm A′ will have regret at most $\tilde{O}(n^2\sqrt{T})$ against an arbitrary adversary, while if all players use algorithm A′ then $\sum_{i \in N} r_i(T) = O(n^3 \log(T))$.
An analogue of Theorem 11 can also be established for this algorithm:
Corollary 16. If A satisfies the RVU(ι) property, and also $\|w_i^t - w_i^{t-1}\| \le \delta\iota$, then A′ with $\bar\iota = T^{-1/4}$ achieves regret $\tilde{O}(T^{1/4})$ if played against itself, and $\tilde{O}(\sqrt{T})$ against any opponent.
Once again, OFTRL satisfies the above conditions with $\delta = 2$, implying robust convergence.
5 Experimental Evaluation
We analyzed the performance of optimistic follow the regularized leader with the entropy regularizer, which corresponds to the Hedge algorithm [7] modified so that the last iteration's utility for each strategy is double counted; we refer to it as Optimistic Hedge. More formally, the probability of player i playing strategy j at iteration T is proportional to $\exp\bigl(\eta\bigl(\sum_{t=1}^{T-2} u_{ij}^t + 2u_{ij}^{T-1}\bigr)\bigr)$, rather than $\exp\bigl(\eta \sum_{t=1}^{T-1} u_{ij}^t\bigr)$ as is standard for Hedge.
We studied a simple auction where n players are bidding for m items. Each player has a value v
for getting at least one item and no extra value for more items. The utility of a player is the value
for the allocation he derived minus the payment he has to make. The game is defined as follows:
simultaneously each player picks one of the m items and submits a bid on that item (we assume
bids to be discretized). For each item, the highest bidder wins and pays his bid. We let players play
this game repeatedly with each player invoking either Hedge or Optimistic Hedge. This game, and generalizations of it, are known to be $(1 - 1/e, 0)$-smooth [20], if we also view the auctioneer as a
player whose utility is the revenue. The welfare of the game is the value of the resulting allocation,
hence not a constant-sum game. The welfare maximization problem corresponds to the unweighted
bipartite matching problem. The POA captures how far from the optimal matching the average allocation of the dynamics is. By smoothness we know it converges to at least $1 - 1/e$ of the optimal.
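A sketch of how the expected-utility vectors $u_i^t$ for this auction can be estimated by simulation (the function, parameters, and tie-breaking rule are our assumptions, not the paper's code):

```python
import numpy as np

def expected_utilities(i, strategies, v=20, n_items=4, n_bids=20,
                       n_mc=200, rng=None):
    # Monte-Carlo estimate of player i's expected utility for every pure
    # strategy, encoded as action = item * n_bids + (bid - 1). The highest
    # bid on each item wins and pays its bid; ties are lost here for
    # simplicity. Opponents play their current mixed strategies.
    rng = rng or np.random.RandomState(0)
    n, n_actions = len(strategies), n_items * n_bids
    u = np.zeros(n_actions)
    for _ in range(n_mc):
        acts = [int(rng.choice(n_actions, p=strategies[j])) for j in range(n)]
        for a in range(n_actions):
            item, bid = a // n_bids, a % n_bids + 1
            rivals = [acts[j] % n_bids + 1 for j in range(n)
                      if j != i and acts[j] // n_bids == item]
            if not rivals or bid > max(rivals):
                u[a] += v - bid
    return u / n_mc
```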
Fast convergence of individual and average regret. We run the game for n = 4 bidders and
m = 4 items and valuation v = 20. The bids are discretized to be any integer in [1, 20]. We find
that the sum of the regrets and the maximum individual regret of each player are remarkably lower
under Optimistic Hedge as opposed to Hedge. In Figure 1 we plot the maximum individual regret
as well as the sum of the regrets under the two algorithms, using $\eta = 0.1$ for both methods. Thus
convergence to the set of coarse correlated equilibria is substantially faster under Optimistic Hedge,
[Figure 2 panels: expected bid of a player (left, values roughly 0.5 to 3) and per-iteration utility of a player (right, values roughly 4 to 18), plotted against the number of rounds (up to 10000) for Hedge and Optimistic Hedge.]
Figure 2: Expected bid and per-iteration utility of a player on one of the four items over time, under
Hedge (blue) and Optimistic Hedge (red) dynamics.
confirming our results in Section 3.2. We also observe similar behavior when each player only has
value on a randomly picked player-specific subset of items, or uses other step sizes.
More stable dynamics. We observe that the behavior under Optimistic Hedge is more stable than
under Hedge. In Figure 2, we plot the expected bid of a player on one of the items and his expected
utility under the two dynamics. Hedge exhibits the sawtooth behavior that was observed in the generalized first-price auction run by Overture (see [5, p. 21]). In stunning contrast, Optimistic Hedge leads to more stable expected bids over time. This stability property of Optimistic Hedge is one of
the main intuitive reasons for the fast convergence of its regret.
Welfare. In this class of games, we did not observe any significant difference between the average
welfare of the methods. The key reason is the following: the proof that no-regret dynamics are
approximately efficient (Proposition 2) only relies on the fact that each player does not have regret
against the strategy $s_i^*$ used in the definition of a smooth game. In this game, regret against these
strategies is experimentally comparable under both algorithms, even though regret against the best
fixed strategy is remarkably different. This indicates a possibility for faster rates for Hedge in
terms of welfare. In Appendix H, we show fast convergence of the efficiency of Hedge for cost-minimization games, though with a worse POA.
6 Discussion
This work extends and generalizes a growing body of work on decentralized no-regret dynamics in
many ways. We demonstrate a class of no-regret algorithms which enjoy rapid convergence when
played against each other, while being robust to adversarial opponents. This has implications in
computation of correlated equilibria, as well as understanding the behavior of agents in complex
multi-player games. There are a number of interesting questions and directions for future research
which are suggested by our results, including the following:
Convergence rates for vanilla Hedge: The fast rates of our paper do not apply to algorithms
such as Hedge without modification. Is this modification to satisfy RVU only sufficient or also
necessary? If not, are there counterexamples? In the supplement, we include a sketch hinting at such
a counterexample, but also showing fast rates to a worse equilibrium than our optimistic algorithms.
Convergence of players' strategies: The OFTRL algorithm often produces much more stable trajectories empirically, as the players converge to an equilibrium, as compared to, say, Hedge. A precise
quantification of this desirable behavior would be of great interest.
Better rates with partial information: If the players do not observe the expected utility function,
but only the moves of the other players at each round, can we still obtain faster rates?
References
[1] A. Blum and Y. Mansour. Learning, regret minimization, and equilibria. In Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory, chapter 4, pages 4–30. Cambridge University Press, 2007.
[2] Avrim Blum, MohammadTaghi Hajiaghayi, Katrina Ligett, and Aaron Roth. Regret minimization and the price of total anarchy. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 373–382, New York, NY, USA, 2008. ACM.
[3] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[4] Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. Games and Economic Behavior, 92:327–348, 2014.
[5] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. Working Paper 11765, National Bureau of Economic Research, November 2005.
[6] Dean P. Foster and Rakesh V. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 21(1-2):40–55, 1997.
[7] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[8] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1):79–103, 1999.
[9] Drew Fudenberg and Alexander Peysakhovich. Recency, records and recaps: Learning and non-equilibrium behavior in a simple decision problem. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, pages 971–986, New York, NY, USA, 2014. ACM.
[10] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[11] Wassily Hoeffding and J. Wolfowitz. Distinguishability of sets of distributions. Ann. Math. Statist., 29(3):700–718, 1958.
[12] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005. Learning Theory 2003.
[13] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[14] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. 1983.
[15] Yu. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[16] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In COLT 2013, pages 993–1019, 2013.
[17] Alexander Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pages 3066–3074, 2013.
[18] T. Roughgarden. Intrinsic robustness of the price of anarchy. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pages 513–522, New York, NY, USA, 2009. ACM.
[19] Shai Shalev-Shwartz. Online learning and online convex optimization. Found. Trends Mach. Learn., 4(2):107–194, February 2012.
[20] Vasilis Syrgkanis and Éva Tardos. Composable and efficient mechanisms. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 211–220, New York, NY, USA, 2013. ACM.
| 5763 |@word [bag-of-words feature counts omitted] |
5,262 | 5,764 | Interactive Control of Diverse Complex Characters
with Neural Networks
Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, Emanuel Todorov
Department of Computer Science, University of Washington
{mordatch,lowrey,galen,zoran,todorov}@cs.washington.edu
Abstract
We present a method for training recurrent neural networks to act as near-optimal
feedback controllers. It is able to generate stable and realistic behaviors for a
range of dynamical systems and tasks ? swimming, flying, biped and quadruped
walking with different body morphologies. It does not require motion capture or
task-specific features or state machines. The controller is a neural network, having
a large number of feed-forward units that learn elaborate state-action mappings,
and a small number of recurrent units that implement memory states beyond the
physical system state. The action generated by the network is defined as velocity.
Thus the network is not learning a control policy, but rather the dynamics under an
implicit policy. Essential features of the method include interleaving supervised
learning with trajectory optimization, injecting noise during training, training for
unexpected changes in the task specification, and using the trajectory optimizer to
obtain optimal feedback gains in addition to optimal actions.
Figure 1: Illustration of the dynamical systems and tasks we have been able to control using the
same method and architecture. See the video accompanying the submission.
1 Introduction
Interactive real-time controllers that are capable of generating complex, stable and realistic movements have many potential applications including robotic control, animation and gaming. They can
also serve as computational models in biomechanics and neuroscience. Traditional methods for designing such controllers are time-consuming and largely manual, relying on motion capture datasets
or task-specific state machines. Our goal is to automate this process, by developing universal synthesis methods applicable to arbitrary behaviors, body morphologies, online changes in task objectives,
perturbations due to noise and modeling errors. This is also the ambitious goal of much work in
Reinforcement Learning and stochastic optimal control, however the goal has rarely been achieved
in continuous high-dimensional spaces involving complex dynamics.
Deep learning techniques on modern computers have produced remarkable results on a wide range
of tasks, using methods that are not significantly different from what was used decades ago. The
objective of the present paper is to design training methods that scale to larger and harder control
problems, even if most of the components were already known. Specifically, we combine supervised
learning with trajectory optimization, namely Contact-Invariant Optimization (CIO) [12], which has
given rise to some of the most elaborate motor behaviors synthesized automatically. Trajectory
optimization however is an offline method, so the rationale here is to use a neural network to learn
from the optimizer, and eventually generate similar behaviors online. There is closely related recent
work along these lines [9, 11], but the method presented here solves substantially harder problems
: in particular, it yields stable and realistic locomotion in three-dimensional space, where previous
work was applied to only two-dimensional characters. That this is possible is due to a number of
technical improvements whose effects are analyzed below.
Control was historically among the earliest applications of neural networks, but the recent surge in
performance has been in computer vision, speech recognition and other classification problems that
arise in artificial intelligence and machine learning, where large datasets are available. In contrast,
the data needed to learn neural network controllers is much harder to obtain, and in the case of imaginary characters and novel robots we have to synthesize the training data ourselves (via trajectory
optimization). At the same time the learning task for the network is harder. This is because we need
precise real-valued outputs as opposed to categorical outputs, and also because our network must
operate not on i.i.d. samples, but in a closed loop, where errors can amplify over time and cause
instabilities. This necessitates specialized training procedures where the dataset of trajectories and
the network parameters are optimized together. Another challenge caused by limited datasets is the
potential for over-fitting and poor generalization. Our solution is to inject different forms of noise
during training. The scale of our problem requires cloud computing and a GPU implementation, and
training that takes on the order of hours. Interestingly, we invest more computing resources in generating the data than in learning from it. Thus the heavy lifting is done by the trajectory optimizer,
and yet the neural network complements it in a way that yields interactive real-time control.
Neural network controllers can also be trained with more traditional methods which do not involve
trajectory optimization. This has been done in discrete action settings [10] as well as in continuous
control settings [3, 6, 14]. A systematic comparison of these more direct methods with the present
trajectory-optimization-based methods remains to be done. Nevertheless our impression is that networks trained with direct methods give rise to successful yet somewhat chaotic behaviors, while the
present class of methods yield more realistic and purposeful behaviors.
Using physics based controllers allows for interaction, but these controllers need specially designed
architectures for each range of tasks or characters. For example, for biped locomotion, common approaches include state machines and use of simplified models (such as the inverted pendulum) and
concepts (such as zero moment or capture points) [21, 18]. For quadrupedal characters, a different
set of state machines, contact schedules and simplified models is used [13]. For flying and swimming yet another set of control architectures, commonly making use of explicit cyclic encodings,
have been used [8, 7]. It is our aim to unify these disparate approaches.
2 Overview
Let the state of the character be defined as [q f r], where q is the physical pose of the character (root
position, orientation and joint angles), f are the contact forces being applied on the character by the
ground, and r is the recurrent memory state of the character. The motion of the character is a state trajectory of length T defined by $X = \big[\, q^0\, f^0\, r^0 \;\ldots\; q^T\, f^T\, r^T \,\big]$. Let $X^1, \ldots, X^N$ be a collection of
N trajectories, each starting with different initial conditions and executing a different task (such as
moving the character to a particular location).
We introduce a neural network control policy $\pi_\theta : s \mapsto a$, parametrized by neural network weights $\theta$, that maps a sensory state of the character s at each point in time to an optimal action a that
controls the character. In general, the sensory state can be designed by the user to include arbitrary
informative features, but in this preliminary work we use the following simple and general-purpose
representation:
$$s^t = \big[\, q^t \;\; r^t \;\; \dot q^{t-1} \;\; f^{t-1} \,\big], \qquad a^t = \big[\, \dot q^t \;\; \dot r^t \;\; f^t \,\big],$$
where, e.g., $\dot q^t \triangleq q^{t+1} - q^t$ denotes the instantaneous rate of change of q at time t. With this
representation of the action, the policy directly commands the desired velocity of the character and
applied contact forces, and determines the evolution of the recurrent state r. Thus, our network
learns both optimal controls and a model of dynamics simultaneously.
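As a trivial illustration (our own helper names, not the paper's code), the sensory state and action are just concatenations of the quantities above:

```python
import numpy as np

def sense(q, r, qdot_prev, f_prev):
    """Sensory state s^t = [q^t, r^t, qdot^{t-1}, f^{t-1}] as one flat vector."""
    return np.concatenate([q, r, qdot_prev, f_prev])

def act(qdot, rdot, f):
    """Action a^t = [qdot^t, rdot^t, f^t]: desired rates plus contact forces."""
    return np.concatenate([qdot, rdot, f])
```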
Let $C_i(X)$ be the total cost of the trajectory X, which rewards accurate execution of task i and physical realism of the character's motion. We want to jointly find a collection of optimal trajectories that each complete a particular task, along with a policy $\pi_\theta$ that is able to reconstruct the sense and
action pairs $s^t(X)$ and $a^t(X)$ of all trajectories at all timesteps:
$$\operatorname*{minimize}_{\theta,\; X^1 \ldots X^N} \;\; \sum_i C_i(X^i) \quad \text{subject to} \;\; \forall\, i, t:\;\; a^t(X^i) = \pi_\theta\big(s^t(X^i)\big). \qquad (1)$$
The optimized policy parameters $\theta$ can then be used to execute the policy in real-time and interactively control the character by the user.
2.1 Stochastic Policy and Sensory Inputs
Injecting noise has been shown to produce more robust movement strategies in graphics and optimal
control [20, 6], reduce overfitting and prevent feature co-adaptation in neural network training [4],
and stabilize recurrent behaviour of neural networks [5]. We inject noise in a principled way to aid
in learning policies that do not diverge when rolled out at execution time.
In particular, we inject additive Gaussian noise into the sensory inputs s given to the neural network.
Let the sensory noise be denoted $\varepsilon \sim \mathcal{N}(0, \sigma_\varepsilon^2 I)$, so the resulting noisy policy inputs are $s + \varepsilon$.
This is similar to denoising autoencoders [17] with one important difference: the change in input in
our setting also induces a change in the optimal action to output. If the noise is small enough, the
optimal action at nearby noisy states is given by the first order expansion
$$a(s + \varepsilon) = a + a_s\, \varepsilon, \qquad (2)$$
where $a_s$ (alternatively $\frac{da}{ds}$) is the matrix of optimal feedback gains around s. These gains can be
calculated as a byproduct of trajectory optimization as described in section 3.2. Intuitively, such
feedback helps the neural network trainer to learn a policy that can automatically correct for small
deviations from the optimal trajectory and allows us to use much less training data.
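As a concrete illustration (a minimal sketch of ours, not code from the paper; all names are assumptions), the noisy input and first-order-corrected target of equation (2) can be formed as follows:

```python
import numpy as np

def noisy_training_pair(s, a, a_s, sigma_eps, rng):
    """Return a perturbed sensory input and its feedback-corrected target.

    s         -- (ds,) optimal sensory state on the trajectory
    a         -- (da,) optimal action at s
    a_s       -- (da, ds) optimal feedback gain matrix da/ds
    sigma_eps -- standard deviation of the additive sensor noise
    """
    eps = sigma_eps * rng.standard_normal(s.shape)
    return s + eps, a + a_s @ eps   # network input, target output (eq. 2)

# Example usage with hypothetical dimensions:
rng = np.random.default_rng(0)
s_noisy, a_target = noisy_training_pair(np.zeros(4), np.zeros(2),
                                        np.ones((2, 4)), 1e-2, rng)
```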
2.2 Distributed Stochastic Optimization
The resulting constrained optimization problem (1) is nonconvex and too large to solve directly. We
replace the hard equality constraint with a quadratic penalty with weight $\mu$:
$$R(s, a, \theta, \varepsilon) = \frac{\mu}{2}\, \big\| (a + a_s \varepsilon) - \pi_\theta(s + \varepsilon) \big\|^2, \qquad (3)$$
leading to the relaxed, unconstrained objective
$$\operatorname*{minimize}_{\theta,\; X^1 \ldots X^N} \;\; \sum_i C_i(X^i) + \sum_{i,t} R\big(s^t(X^i),\, a^t(X^i),\, \theta,\, \varepsilon^{i,t}\big). \qquad (4)$$
We then proceed to solve the problem in block-alternating optimization fashion, optimizing for one
set of variables while holding the others fixed. In particular, we independently optimize for each $X^i$ (trajectory optimization) and for $\theta$ (neural network regression).
As the target action $a + a_s \varepsilon$ depends on the optimal feedback gains $a_s$, the noise $\varepsilon$ is resampled after
optimizing each policy training sub-problem. In principle the noisy sensory state and corresponding
action could be recomputed within the neural network training procedure, but we found it expedient
to freeze the noise during NN optimization (so that the optimal feedback gains need not be passed
to the NN training process). Similar to recent stochastic optimization approaches, we introduce
quadratic proximal regularization terms (weighted by rate $\rho$) that keep the solution of the current
iteration close to its previous optimal value. The resulting algorithm is
Algorithm 1: Distributed Stochastic Optimization
1: Sample sensor noise $\varepsilon^{i,t}$ for each $t$ and $i$.
2: Optimize N trajectories (sec 3): $\hat X^i = \operatorname{argmin}_X\; C_i(X) + \sum_t R\big(s^{i,t}, a^{i,t}, \theta, \varepsilon^{i,t}\big) + \tfrac{\rho}{2}\, \| X - \hat X^i \|^2$
3: Solve neural network regression (sec 4): $\hat\theta = \operatorname{argmin}_\theta\; \sum_{i,t} R\big(\hat s^{i,t}, \hat a^{i,t}, \theta, \varepsilon^{i,t}\big) + \tfrac{\rho}{2}\, \| \theta - \hat\theta \|^2$
4: Repeat.
Thus we have reduced a complex policy search problem in (1) to an alternating sequence of independent trajectory optimization and neural network regression problems, each of which is well-studied and allows the use of existing implementations. While previous work [9, 11] used ADMM or dual gradient descent to solve similar optimization problems, it is non-trivial to adapt them to the asynchronous and stochastic setting we have. Despite a potentially slower rate, we still observe convergence, as shown in section 8.1.
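To make the alternation concrete, here is a deliberately tiny, self-contained toy instance of Algorithm 1 (our own construction, not the paper's code): each "trajectory" is a single frame, the "network" is one linear gain, and both sub-problems have closed forms, so only the alternation itself is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_eps, mu, rho = 1e-2, 10.0, 1e-2

# Toy 1-D problem: "trajectory optimization" is a per-frame quadratic pulling
# the action a toward the state s, and the policy is a single linear gain w.
S = rng.uniform(-1.0, 1.0, size=200)   # sensory states of the dataset frames
w = 0.0                                # linear policy: a = w * s

for _ in range(100):
    eps = sigma_eps * rng.standard_normal(S.shape)        # step 1: resample noise
    # step 2: "trajectory optimization" of each frame's action:
    #         argmin_a (a - s)^2 + mu/2 * (a - w*(s + eps))^2
    A = (2.0 * S + mu * w * (S + eps)) / (2.0 + mu)
    a_s = (2.0 + mu * w) / (2.0 + mu)                     # feedback gain dA/ds
    # step 3: regress the policy on the frozen noisy pairs (eq. 3),
    #         with a proximal term rho/2 * (w - w_old)^2
    X, Y = S + eps, A + a_s * eps
    w = (mu * (X @ Y) + rho * w) / (mu * (X @ X) + rho)

print(f"learned policy gain: {w:.3f}")  # converges to ~1, i.e. a = s
```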
3 Trajectory Optimization
We wish to find trajectories that start with particular initial conditions and execute the task, while
satisfying physical realism of the character's motion. The existing approach we use is Contact-Invariant Optimization (CIO) [12], which is a direct trajectory optimization method based on inverse
dynamics. Define the total cost for a trajectory X:
$$C(X) = \sum_t c\big(\phi^t(X)\big), \qquad (5)$$
where $\phi^t(X)$ is a function that extracts a vector of features (such as root forces, contact distances, control torques, etc.) from the trajectory at time t and $c(\cdot)$ is the state cost over these features.
Physical realism is achieved by satisfying equations of motion, non-penetration, and force complementarity conditions at every point in the trajectory [12]:
$$H(q)\,\ddot q + C(q, \dot q) = \tau + J^\top(q, \dot q)\, f, \qquad d(q) \ge 0, \qquad d(q)^\top f = 0, \qquad f \in K(q) \qquad (6)$$
where d(q) is the distance of the contact to the ground and K is the contact friction cone. These
constraints are implemented as soft constraints, as in [12] and are included in C(X). Initial conditions are also implemented as soft constraints in C(X). Additionally we want to make sure the
task is satisfied, such as moving to a particular location while minimizing effort. These task costs
are the same for all our experiments and are described in section 8. Importantly, CIO is able to find
solutions with trivial initializations, which makes it possible to have a broad range of characters and
behaviors without requiring hand-designed controllers or motion capture for initialization.
3.1 Optimal Trajectory
The trajectory optimization problem consists of finding the optimal trajectory parameters X that
minimize the total cost (5) with objective (3) now folded into C for simplicity:
$$X^* = \operatorname*{argmin}_X\; C(X). \qquad (7)$$
We solve the above optimization problem using Newton's method, which requires the gradient and Hessian of the total cost function. Using the chain rule, these quantities are
$$C_X = \sum_t c^t_\phi\, \phi^t_X, \qquad C_{XX} = \sum_t (\phi^t_X)^\top c^t_{\phi\phi}\, \phi^t_X + c^t_\phi\, \phi^t_{XX} \;\approx\; \sum_t (\phi^t_X)^\top c^t_{\phi\phi}\, \phi^t_X,$$
where the truncation of the last term in $C_{XX}$ is the common Gauss-Newton Hessian approximation
[1]. We choose cost functions for which $c_\phi$ and $c_{\phi\phi}$ can be calculated analytically. On the other hand, $\phi_X$ is calculated by finite differencing. The optimum can then be found by the following
recursion:
$$X^* = X^* - C_{XX}^{-1}\, C_X. \qquad (8)$$
Because this optimization is only a sub-problem (step 2 in Algorithm 1), we don't run it to convergence, and instead take between one and ten iterations.
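A minimal sketch of one Gauss-Newton iteration (eq. 8) under this setup, with finite-differenced feature Jacobians; the function names and the damping term are our own additions, not the paper's implementation:

```python
import numpy as np

def gauss_newton_step(X, phi, c_grad, c_hess, damping=1e-6):
    """One Gauss-Newton update X <- X - C_XX^{-1} C_X (cf. eq. 8).

    X         -- (n,) flattened trajectory variables
    phi(X)    -- (T, F) feature matrix, one feature vector per timestep
    c_grad(f) -- (F,) gradient of the per-timestep cost at features f
    c_hess(f) -- (F, F) Hessian of the per-timestep cost at features f
    The feature Jacobian is taken by finite differences, as in the paper.
    """
    h, n = 1e-6, X.size
    F0 = phi(X)
    T, F = F0.shape
    J = np.zeros((T, F, n))                       # phi^t_X for every t
    for i in range(n):
        dX = np.zeros(n); dX[i] = h
        J[:, :, i] = (phi(X + dX) - F0) / h
    g = sum(J[t].T @ c_grad(F0[t]) for t in range(T))          # C_X
    H = sum(J[t].T @ c_hess(F0[t]) @ J[t] for t in range(T))   # Gauss-Newton C_XX
    return X - np.linalg.solve(H + damping * np.eye(n), g)
```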
3.2 Optimal Feedback Gains
In addition to the optimal trajectory, we also need to find optimal feedback gains as necessary
to generate optimal actions for noisy inputs in (2). While these feedback gains are a byproduct
of indirect trajectory optimization methods such as LQG, they are not an obvious result of direct
trajectory optimization methods like CIO. While we can use Linear Quadratic Gaussian (LQG)
pass around our optimal solution to compute these gains, this is inefficient as it does not make use
of computation already performed during direct trajectory optimization. Moreover, we found the
resulting process can produce very large and ill-conditioned feedback gains. One could change the
objective function for the LQG pass when calculating feedback gains to make them smoother (for
example, by adding explicit trajectory smoothness cost), but then the optimal actions would be using
feedback gains from a different objective. Instead, we describe a perturbation method that reuses
computation done during direct trajectory optimization, also producing better-conditioned gains.
This is a general method for producing feedback gains that stabilize resulting optimal trajectories
and can be useful for other applications.
Suppose we perturb a certain aspect of optimal trajectory X, such that the sensory state changes:
$s(X) = \bar s$. We wish to find how the optimal action $a(X)$ will change given this perturbation. We can enforce the perturbation with a soft constraint of weight $\beta$, resulting in an augmented total cost:
$$\tilde C(X, \bar s) = C(X) + \frac{\beta}{2}\, \| s(X) - \bar s \|^2. \qquad (9)$$
Let $X(\bar s) = \operatorname{argmin}_{\tilde X}\, \tilde C(\tilde X, \bar s)$ be the optimum of the augmented total cost. For $\bar s$ near $s(X)$ (as is the case with local feedback control), the minimizer of the augmented cost is the minimizer of a quadratic around the optimal trajectory X:
$$X(\bar s) = X - \tilde C_{XX}^{-1}(X, \bar s)\, \tilde C_X(X, \bar s) = X - \big(C_{XX} + \beta\, s_X^\top s_X\big)^{-1} \big(C_X + \beta\, s_X^\top (s(X) - \bar s)\big),$$
where all derivatives are calculated around X. Differentiating the above w.r.t. $\bar s$,
$$X_{\bar s} = \beta\, \big(C_{XX} + \beta\, s_X^\top s_X\big)^{-1} s_X^\top = C_{XX}^{-1} s_X^\top \Big( s_X C_{XX}^{-1} s_X^\top + \tfrac{1}{\beta} I \Big)^{-1},$$
where the last equality follows from the Woodbury identity and has the benefit of reusing $C_{XX}^{-1}$, which is already computed as part of trajectory optimization. The optimal feedback gains for $a$ are $a_{\bar s} = a_X X_{\bar s}$. Note that $s_X$ and $a_X$ are subsets of $\phi_X$, and are already calculated as part of trajectory optimization. Thus, computing optimal feedback gains comes at very little additional cost.
Our approach produces softer feedback gains according to the parameter $\beta$ without modifying the cost function. The intuition is that instead of holding the perturbed initial state fixed (as LQG does, for example), we make matching the initial state a soft constraint. By weakening this constraint, we can modify the initial state to better achieve the master cost function without using very aggressive feedback.
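The closed form above reuses quantities already in hand; a sketch (our naming, not the authors' code) of the computation:

```python
import numpy as np

def feedback_gains(C_XX_inv, s_X, a_X, beta):
    """Optimal feedback gains a_sbar = a_X X_sbar from cached quantities.

    C_XX_inv -- (n, n) inverse Gauss-Newton Hessian from the last solve
    s_X      -- (ds, n) Jacobian of the sensory state w.r.t. X
    a_X      -- (da, n) Jacobian of the action w.r.t. X
    beta     -- softness of the perturbation constraint (larger = stiffer)
    """
    M = s_X @ C_XX_inv @ s_X.T + np.eye(s_X.shape[0]) / beta
    X_sbar = C_XX_inv @ s_X.T @ np.linalg.inv(M)   # Woodbury form, (n, ds)
    return a_X @ X_sbar                            # (da, ds) gain matrix
```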
4 Neural Network Policy Regression
After performing trajectory optimization, we perform standard regression to fit a neural network to the noisy fixed input and output pairs $\{s^{i,t} + \varepsilon,\; a^{i,t} + a_s \varepsilon\}$ for each timestep and trajectory. Our
neural network policy has a total of K layers, hidden layer activation function $\sigma$ (tanh, in the present work) and hidden units $h^k$ for layer k. To learn a model that is robust to small changes in neural state, we add independent Gaussian noise $\nu^k \sim \mathcal{N}(0, \sigma_\nu^2 I)$ with variance $\sigma_\nu^2$ to the neural activations at each layer during training. Wager et al. [19] observed that this noise model makes hidden units tend toward saturated regions and become less sensitive to the precise values of individual units.
As with the trajectory optimization sub-problems, we do not run the neural network trainer to convergence but rather perform only a single pass of batched stochastic gradient descent over the dataset
before updating the parameters ? in step 3 of Algorithm 1.
All our experiments use 3 hidden layer neural networks with 250 hidden units in each layer (other
network sizes are evaluated in section 8.1). The neural network weight matrices are initialized with
a spectral radius of just above 1, similar to [15, 5]. This helps to make sure initial network dynamics
are stable and do not vanish or explode.
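For illustration, a minimal sketch (ours, not the authors' implementation) of a forward pass with the per-layer hidden-unit noise described above:

```python
import numpy as np

def noisy_forward(s, weights, biases, sigma_nu, rng):
    """Forward pass of a tanh MLP with Gaussian noise on each hidden layer.

    weights/biases -- lists defining the layers; noise is added only to hidden
    activations during training (sigma_nu = 0 recovers a clean pass).
    """
    h = s
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(W @ h + b)
        h = h + sigma_nu * rng.standard_normal(h.shape)  # per-layer noise
    return weights[-1] @ h + biases[-1]                  # linear output layer
```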
5 Training Trajectory Generation
To train a neural network for interactive use, we required a data set that includes a dynamically changing task's goal state. The task, in this case, is the locomotion of a character to a movable goal
position controlled by the user. (Our character's goal position was always set to be the origin, which encodes the character's state position in the goal position's coordinate frame. Thus the "origin" may
shift relative to the character, but this keeps behavior invariant to the global frame of reference.)
Our trajectory generation creates a dataset consisting of trials and segments. Each trial k starts with
a reference physical pose and null recurrent memory $[q\ \dot q\ r]^{\text{init}}$ and must reach goal location $g^{k,0}$. After generating an optimal trajectory $X^{k,0}$ according to section 3, a random timestep t is chosen to branch a new segment with $[q\ \dot q\ r]^t$ used as the initial state. A new goal location $g^{k,1}$ is also chosen randomly for optimal trajectory $X^{k,1}$.
This process represents the character changing direction at some point along its original trajectory
plan: "interaction" in this case is simply a new change in goal position. This technique allows our initial states and goals to come from a distribution that reflects the character's typical motion. In all our experiments, we use between 100 and 200 trials, each with 5 branched segments.
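A compact sketch of this trial/segment branching procedure; the `plan` stand-in below replaces full trajectory optimization with straight-line interpolation purely to show the data-generation structure (an assumption of ours):

```python
import numpy as np

def make_dataset(n_trials=100, n_segments=5, T=50, rng=np.random.default_rng(0)):
    """Sketch of the trial/segment branching scheme (section 5)."""
    def plan(x0, goal):
        # Stand-in for trajectory optimization: (T, dim) linear "trajectory".
        return np.linspace(x0, goal, T)

    dataset = []
    for _ in range(n_trials):
        x0 = np.zeros(2)                             # reference initial state
        goal = rng.uniform(-1, 1, size=2)
        X = plan(x0, goal)
        dataset.append(X)
        for _ in range(n_segments):
            t = rng.integers(1, T)                   # random branch timestep
            goal = rng.uniform(-1, 1, size=2)        # new user-chosen goal
            X = plan(X[t], goal)                     # branch from mid-trajectory
            dataset.append(X)
    return dataset
```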
6 Distributed Training Architecture
Our training algorithm was implemented in a asynchronous, distributed architecture, utilizing a
GPU for neural network training. Simple parallelism was achieved by distributing the trajectory
optimization processes to multiple node machines, while the resulting data was used to train the NN
policy on a single GPU node.
Amazon Web Services' EC2 3.8xlarge instances provided the nodes for optimization, while a g2.2xlarge instance provided the GPU. Utilizing a star topology with the GPU instance at the center, a Network File System server distributes the training data X and network parameters $\theta$ to necessary
processes within the cluster. Each optimization node is assigned a subset of the total trials and
segments for the given task. This simple usage of files for data storage meant no supporting infrastructure other than standard file locking for concurrency.
We used a custom GPU implementation of stochastic gradient descent (SGD) to train the neural
network control policy. For the first training epoch, all trajectories and action sequences are loaded
onto the GPU, randomly shuffling the order of the frames. Then the neural network parameters $\theta$
are updated using batched SGD in a single pass over the data to reduce the objective in (4). At the
start of subsequent training epochs, trajectories which have been updated by one of the trajectory
optimization processes (and injected with new sensor noise $\varepsilon$) are reloaded.
Although this architecture is asynchronous, the proximal regularization terms in the objective prevent the training data and policy results from changing too quickly and keep the optimization from
diverging. As a result, we can increase our training performance linearly for the size of cluster we
are using, to about 30 optimization nodes per GPU machine. We run the overall optimization process until the average of 200 trajectory optimization iterations has been reached across all machines.
This usually results in about 10000 neural network training epochs, and takes about 2.5 hours to
complete, depending on task parameters and number of nodes.
7 Policy Execution
Once we find the optimal policy parameters $\theta$ offline, we can execute the resulting policy in real time under user control. Unlike non-parametric methods like motion graphs or Gaussian Processes,
we do not need to keep any trajectory data at execution time. Starting with an initial state $x^0$, we compute the sensory state s and query the policy (without noise) for the desired action $[\dot q^{\text{des}}\; \dot r^{\text{des}}\; f^{\text{des}}]$. To evolve the physical state of the system, we directly optimize the next state $x^1$ to match $\dot q^{\text{des}}$ while satisfying equations of motion:
$$x^1 = \operatorname*{argmin}_x\; \big\| \dot q - \dot q^{\text{des}} \big\|^2 + \big\| \dot r - \dot r^{\text{des}} \big\|^2 + \big\| f - f^{\text{des}} \big\|^2 \quad \text{subject to (6)}$$
Note that this is simply the optimization problem (7) with horizon T = 1, which can be solved at
real-time rates and does not require any additional implementation. This approach is reminiscent of
feature-based control in computer graphics and robotics.
Because our physical state evolution is a result of optimization (similar to an implicit integrator),
it does not suffer from instabilities or divergence as Euler integration would, and allows the use of
larger timesteps (we use $\Delta t$ of 50 ms in all our experiments). In the current work, the dynamics
constraints are enforced softly and thus may include some root forces in simulation.
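Schematically (our sketch; the contact-dynamics constraints of eq. 6 are omitted here, so this is not the full T = 1 solve of eq. 7), one control step looks like:

```python
import numpy as np

def step(q, r, f_prev, qdot_prev, policy, dt=0.05):
    """One execution step (section 7): query the policy, then take the state
    that best matches the desired rates. A real implementation would solve
    the horizon-1 problem subject to the dynamics constraints of eq. (6)."""
    s = np.concatenate([q, r, qdot_prev, f_prev])    # sensory state
    qdot_des, rdot_des, f_des = policy(s)            # network output
    q_next = q + dt * qdot_des   # minimizes ||qdot - qdot_des||^2 exactly
    r_next = r + dt * rdot_des   # recurrent memory evolves the same way
    return q_next, r_next, f_des, qdot_des
```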
8 Results
This algorithm was applied to learn a policy that allows interactive locomotion for a range of very
different three-dimensional characters. We used a single network architecture and parameters to
create all controllers without any specialized initializations. While the task is locomotion, different
character types exhibit very different behaviors. The experiments include three-dimensional swimming and flying characters as well as biped and quadruped walking tasks. Unlike in two-dimensional
scenarios, it is much easier for characters to fall or go into unstable regions, yet our method manages
to learn successful controllers. We strongly suggest viewing the supplementary video for examples
of resulting behaviors.
The swimming creature featured four fins with two degrees of freedom each. It is propelled by lift
and drag forces for a simulated water density of 1000 kg/m^3. To move, orient, or maintain position, the controller learned to sweep down opposite fins in a cyclical pattern, as in treading water. The bird creature was a modification of the swimmer, with opposing two-segment wings and the medium density changed to that of air (1.2 kg/m^3). The learned behavior that emerged is a cyclical
flapping motion (more vigorous now, because of the lower medium density) as well as utilization of
lift forces to coast to distant goal positions and modulation of flapping speed to change altitude.
Three bipedal creatures were created to explore the controller's function with respect to contact
forces. Two creatures were akin to a humanoid - one large and one small, both with arms - while
the other had a very wide torso compared to its height. All characters learned to walk to the target
location and orientation with a regular, cyclic gait. The same algorithm also learned a stereotypical
trot gait for dog-like and spider-like quadrupeds. This alternating left/right footstep cyclic behavior
for bipeds or trot gaits for quadrupeds emerged without any user input or hand-crafting.
The costs in the trajectory optimization were to reach goal position and orientation while minimizing
torque usage and contact force magnitudes. We used the MuJoCo physics simulator [16] engine for
our dynamics calculations. The values of the algorithmic constants used in all experiments are
$\sigma_\varepsilon = 10^{-2}$, $\sigma_\nu = 10^{-2}$, $\mu = 10$, $\beta = 10^{2}$, $\rho = 10^{-2}$.
8.1 Comparative Evaluation
We show the performance of our method on a biped walking task in figure 2 under full method case.
To test the contribution of our proposed joint optimization technique, we compared our algorithm to
naive neural network training on a static optimal trajectory dataset. We disabled the neural network
and generated optimal trajectories as according to 5. Then, we performed our regression on this
static data set with no trajectories being re-optimized. The results are shown in the no joint case. We
see that at test time, our full method performs two orders of magnitude better than static training.
To test the contribution of noise injection, we used our full method, but disabled sensory and hidden
unit noise (sections 2.1 and 4). The results are shown under the no noise case. We observe typical overfitting,
with good training performance, but very poor test performance. In practice, both ablations above
lead to policy rollouts that quickly diverge from expected behavior.
Additionally, we have compared the performance of different policy network architectures on the
biped walking task by varying the number of layers and hidden units. The results are shown in table
1. We see that 3 hidden layers of 250 units gives the best performance/complexity tradeoff.
Model-predictive control (MPC) is another potential choice of a real-time controller for task-driven
character behavior. In fact, the trajectory costs for both MPC and our method are very similar. The
resulting trajectories, however, end up being different: MPC creates effective trajectories that are
not cyclical (both are shown in figure 3 for a bird character). This suggests a significant nullspace
of task solutions, but from all these solutions, our joint optimization - through the cost terms of
matching the neural network output - act to regularize trajectory optimization to predictable and less
chaotic behaviors.
Figure 2: Performance of our full method and two ablated configurations as training progresses over
10000 neural network updates. Mean and variance of the error are over 1000 training and test trials.
(a) Increasing neurons per layer, with 4 layers:
  10 neurons: 0.337 ± 0.06 | 25 neurons: 0.309 ± 0.06 | 100 neurons: 0.186 ± 0.02 | 250 neurons: 0.153 ± 0.02 | 500 neurons: 0.148 ± 0.02

(b) Increasing layers, with 250 neurons per layer:
  1 layer: 0.307 ± 0.06 | 2 layers: 0.253 ± 0.06 | 3 layers: 0.153 ± 0.02 | 4 layers: 0.158 ± 0.02
Table 1: Mean and variance of joint position error on test rollouts with our method after training
with different neural network configurations.
9 Conclusions and Future Work
We have presented an automatic way of generating neural network parameters that represent a control policy for physically consistent interactive character control, only requiring a dynamical character model and task description. Using both trajectory optimization and stochastic neural networks
together combines correct behavior with real-time interactive use. Furthermore, the same algorithm
and controller architecture can provide interactive control for multiple creature morphologies.
While the behavior of the characters reflected efficient task completion in this work, additional
modifications could be made to affect the style of behavior: costs during trajectory optimization
can affect how a task is completed. Incorporation of muscle actuation effects into our character
models may result in more biomechanically plausible actions for that (biologically based) character.
In addition to changing the character's physical characteristics, we could explore different neural
network architectures and how they compare to biological systems. With this work, we have networks that enable diverse physical action, which could be augmented to further reflect biological
sensorimotor systems. This model could be used to experiment with the effects of sensor delays and
the resulting motions, for example [2].
This work focused on locomotion of different creatures with the same algorithm. Previous work
has demonstrated behaviors such as getting up, climbing, and reaching with the same trajectory
optimization method [12]. Real-time policies using this algorithm could allow interactive use of
these behaviors as well. Extending beyond character animation, this work could be used to develop
controllers for robotics applications that are robust to sensor noise and perturbations if the trained
character model accurately reflects the robot?s physical parameters.
Figure 3: Typical joint angle trajectories that result from MPC and our
method. While both trajectories successfully maintain position for a bird
character, our method generates trajectories that are
cyclic and regular.
References
[1] P. Chen. Hessian matrix vs. gauss-newton hessian matrix. SIAM J. Numerical Analysis,
49(4):1417?1435, 2011.
[2] H. Geyer and H. Herr. A muscle-reflex model that encodes principles of legged mechanics
produces human walking dynamics and muscle activities. Neural Systems and Rehabilitation
Engineering, IEEE Transactions on, 18(3):263?273, 2010.
[3] R. Grzeszczuk, D. Terzopoulos, and G. Hinton. Neuroanimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th Annual Conference on
Computer Graphics and Interactive Techniques, SIGGRAPH ?98, pages 9?20, New York, NY,
USA, 1998. ACM.
[4] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint
arXiv:1207.0580, 2012.
[5] G. M. Hoerzer, R. Legenstein, and W. Maass. Emergence of complex computational structures
from chaotic neural networks through reward-modulated hebbian learning. Cerebral Cortex,
2012.
[6] D. Huh and E. Todorov. Real-time motor control using recurrent neural networks. In Adaptive
Dynamic Programming and Reinforcement Learning, 2009. ADPRL ?09. IEEE Symposium on,
pages 42?49, March 2009.
[7] A. J. Ijspeert. Central pattern generators for locomotion control in animals and robots: a review,
2008.
[8] E. Ju, J. Won, J. Lee, B. Choi, J. Noh, and M. G. Choi. Data-driven control of flapping flight.
ACM Trans. Graph., 32(5):151:1?151:12, Oct. 2013.
[9] S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In ICML ?14: Proceedings of the 31st International Conference on Machine Learning,
2014.
[10] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.
[11] I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory
optimization. In Robotics: Science and Systems (RSS), 2014.
[12] I. Mordatch, E. Todorov, and Z. Popović. Discovery of complex behaviors through contact-invariant optimization. ACM Transactions on Graphics (TOG), 31(4):43, 2012.
[13] J. R. Rebula, P. D. Neuhaus, B. V. Bonnlander, M. J. Johnson, and J. E. Pratt. A controller
for the littledog quadruped walking on rough terrain. In Robotics and Automation, 2007 IEEE
International Conference on, pages 1467?1473. IEEE, 2007.
[14] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
[15] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and
momentum in deep learning. In Proceedings of the 30th International Conference on Machine
Learning (ICML-13), volume 28, pages 1139?1147, May 2013.
[16] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In
IROS?12, pages 5026?5033, 2012.
[17] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust
features with denoising autoencoders. pages 1096?1103, 2008.
[18] M. Vukobratovic and B. Borovac. Zero-moment point - thirty five years of its life. I. J.
Humanoid Robotics, 1(1):157?173, 2004.
[19] S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In Advances in
Neural Information Processing Systems (NIPS), 2013.
[20] J. M. Wang, D. J. Fleet, and A. Hertzmann. Optimizing walking controllers for uncertain inputs
and environments. ACM Trans. Graph., 29(4):73:1?73:8, July 2010.
[21] K. Yin, K. Loken, and M. van de Panne. Simbicon: Simple biped locomotion control. ACM
Trans. Graph., 26(3):Article 105, 2007.
| 5764 |@word [bag-of-words feature counts omitted] |
5,263 | 5,765 | The Human Kernel
Andrew Gordon Wilson
CMU
Christoph Dann
CMU
Christopher G. Lucas
University of Edinburgh
Eric P. Xing
CMU
Abstract
Bayesian nonparametric models, such as Gaussian processes, provide a compelling framework for automatic statistical modelling: these models have a high
degree of flexibility, and automatically calibrated complexity. However, automating human expertise remains elusive; for example, Gaussian processes with standard kernels struggle on function extrapolation problems that are trivial for human
learners. In this paper, we create function extrapolation problems and acquire human responses, and then design a kernel learning framework to reverse engineer
the inductive biases of human learners across a set of behavioral experiments. We
use the learned kernels to gain psychological insights and to extrapolate in humanlike ways that go beyond traditional stationary and polynomial kernels. Finally, we
investigate Occam's razor in human and Gaussian process based function learning.
1 Introduction
Truly intelligent systems can learn and make decisions without human intervention. Therefore it
is not surprising that early machine learning efforts, such as the perceptron, have been neurally
inspired [1]. In recent years, probabilistic modelling has become a cornerstone of machine learning
approaches [2, 3, 4], with applications in neural processing [5, 6, 3, 7] and human learning [8, 9].
From a probabilistic perspective, the ability for a model to automatically discover patterns and perform extrapolation is determined by its support (which solutions are a priori possible), and inductive
biases (which solutions are a priori likely). Ideally, we want a model to be able to represent many
possible solutions to a given problem, with inductive biases which can extract intricate structure
from limited data. For example, if we are performing character recognition, we would want our
support to contain a large collection of potential characters, accounting even for rare writing styles,
and our inductive biases to reasonably reflect the probability of encountering each character [10].
The support and inductive biases of a wide range of probabilistic models, and thus the ability for
these models to learn and generalise, is implicitly controlled by a covariance kernel, which determines the similarities between pairs of datapoints. For example, Bayesian basis function regression
(including, e.g., all polynomial models), splines, and infinite neural networks, can all exactly be represented as a Gaussian process with a particular kernel function [11, 10, 12]. Moreover, the Fisher
kernel provides a mechanism to reformulate probabilistic generative models as kernel methods [13].
In this paper, we wish to reverse engineer human-like support and inductive biases for function
learning, using a Gaussian process (GP) based kernel learning formalism. In particular:
• We create new human function learning datasets, including novel function extrapolation
problems and multiple-choice questions that explore human intuitions about simplicity and
explanatory power, available at http://functionlearning.com/.
• We develop a statistical framework for kernel learning from the predictions of a model,
conditioned on the (training) information that model is given. The ability to sample multiple
sets of posterior predictions from a model, at any input locations of our choice, given any
dataset of our choice, provides unprecedented statistical strength for kernel learning. By
contrast, standard kernel learning involves fitting a kernel to a fixed dataset that can only be
viewed as a single realisation from a stochastic process. Our framework leverages spectral
mixture kernels [14] and non-parametric estimates.
• We exploit this framework to directly learn kernels from human responses, which contrasts
with all prior work on human function learning, where one compares a fixed model to human responses. Further, we consider individual rather than averaged human extrapolations.
• We interpret the learned kernels to gain scientific insights into human inductive biases, including the ability to adapt to new information for function learning. We also use the learned "human kernels" to inspire new types of covariance functions which can enable
extrapolation on problems which are difficult for conventional GP models.
• We study Occam's razor in human function learning, and compare to GP marginal likelihood based model selection, which we show is biased towards under-fitting.
• We provide an expressive quantitative means to compare existing machine learning algorithms with human learning, and a mechanism to directly infer human prior representations.
Our work is intended as a preliminary step towards building probabilistic kernel machines that encapsulate human-like support and inductive biases. Since state of the art machine learning methods
perform conspicuously poorly on a number of extrapolation problems which would be easy for
humans [12], such efforts have the potential to help automate machine learning and improve performance on a wide range of tasks ? including settings which are difficult for humans to process (e.g.,
big data and high dimensional problems). Finally, the presented framework can be considered in
a more general context, where one wishes to efficiently reverse engineer interpretable properties of
any model (e.g., a deep neural network) from its predictions.
We further describe related work in section 2. In section 3 we introduce a framework for learning
kernels from human responses, and employ this framework in section 4. In the supplement, we
provide background on Gaussian processes [11], which we recommend as a review.
2 Related Work
Historically, efforts to understand human function learning have focused on rule-based relationships
(e.g., polynomial or power-law functions) [15, 16], or interpolation based on similarity learning
[17, 18]. Griffiths et al. [19] were the first to note that a Gaussian process framework can be used to
unify these two perspectives. They introduced a GP model with a mixture of RBF and polynomial
kernels to reflect the human ability to learn arbitrary smooth functions while still identifying simple
parametric functions. They applied this model to a standard set of evaluation tasks, comparing
predictions on simple functions to averaged human judgments, and interpolation performance to
human error rates. Lucas et al. [20, 21] extended this model to accommodate a wider range of
phenomena, and to shed light on human predictions given sparse data.
Our work complements these pioneering Gaussian process models and prior work on human function learning, but has many features that distinguish it from previous contributions: (1) rather than
iteratively building models and comparing them to human predictions, based on fixed assumptions
about the regularities humans can recognize, we are directly learning the properties of the human
model through advanced kernel learning techniques; (2) essentially all models of function learning, including past GP models, are evaluated on averaged human responses, setting aside individual
differences and erasing critical statistical structure in the data1 . By contrast, our approach uses individual responses; (3) many recent model evaluations rely on relatively small and heterogeneous
sets of experimental data. The evaluation corpora using recent reviews [22, 19] are limited to a
small set of parametric forms, and more detailed analyses tend to involve only linear, quadratic
and logistic functions. Other projects have collected richer data [23, 24], but we are only aware of
coarse-grained, qualitative analyses using these data. Moreover, experiments that depart from simple parametric functions tend to use very noisy data. Thus it is unsurprising that participants tend
to revert to the prior mode that arises in almost all function learning experiments: linear functions,
especially with slope-1 and intercept-0 [23, 24] (but see [25]). In a departure from prior work, we
create original function learning problems with no simple parametric description and no noise ?
where it is obvious that human learners cannot resort to simple rules ? and acquire the human data
ourselves. We hope these novel datasets will inspire more detailed findings on function learning; (4)
we learn kernels from human responses, which (i) provide insights into the biases driving human
function learning and the human ability to progressively adapt to new information, and (ii) enable
human-like extrapolations on problems that are difficult for conventional GP models; and (5) we
investigate Occam's razor in human function learning and nonparametric model selection.
^1 For example, averaging prior draws from a Gaussian process would remove the structure necessary for
kernel learning, leaving us simply with an approximation of the prior mean function.
3 The Human Kernel
The rule-based and associative theories for human function learning can be unified as part of a Gaussian process framework. Indeed, Gaussian processes contain a large array of probabilistic models,
and have the non-parametric flexibility to produce infinitely many consistent (zero training error) fits
to any dataset. Moreover, the support and inductive biases of a GP are encapsulated by a covariance
kernel. Our goal is to learn GP covariance kernels from predictions made by humans on function
learning experiments, to gain a better understanding of human learning, and to inspire new machine
learning models, with improved extrapolation performance, and minimal human intervention.
3.1 Problem Setup
A (human) learner is given access to data y at training inputs X, and makes predictions $y_*$ at testing inputs $X_*$. We assume the predictions $y_*$ are samples from the learner's posterior distribution over possible functions, following results showing that human inferences and judgments resemble posterior samples across a wide range of perceptual and decision-making tasks [26, 27, 28]. We assume we can obtain multiple draws of $y_*$ for a given X and y.
3.2 Kernel Learning
In standard GP applications, one has access to a single realisation of data y, and performs kernel
learning by optimizing the marginal likelihood of the data with respect to covariance function hyperparameters $\theta$ (supplement). However, with only a single realisation of data we are highly constrained in our ability to learn an expressive kernel function, requiring us to make strong assumptions, such
as RBF covariances, to extract useful information from the data. One can see this by simulating
N datapoints from a GP with a known kernel, and then visualising the empirical estimate $yy^\top$ of the known covariance matrix K. The empirical estimate, in most cases, will look nothing like K. However, perhaps surprisingly, if we have even a small number of multiple draws from a GP, we can recover a wide array of covariance matrices K using the empirical estimator $YY^\top/M - \bar{y}\bar{y}^\top$, where Y is an $N \times M$ data matrix, for M draws, and $\bar{y}$ is a vector of empirical means.
The typical goal in choosing kernels is to use training data to find one that minimizes some loss
function, e.g., generalisation error, but here we want to reverse engineer the kernel of a model (here, whatever model human learners are tacitly using) that has been applied to training data, based on both training data and predictions of the model. If we have a single sample extrapolation, $y_*$, at test inputs $X_*$, based on training points $y$, and Gaussian noise, the probability $p(y_* \mid y, k_\theta)$ is given by the posterior predictive distribution of a Gaussian process, with $f_* \rightarrow y_*$. One can use
this probability as a utility function for kernel learning, much like the marginal likelihood. See the
supplement for details of these distributions.
Our problem setup affords unprecedented opportunities for flexible kernel learning. If we have multiple sample extrapolations from a given set of training data, $y_*^{(1)}, y_*^{(2)}, \ldots, y_*^{(W)}$, then the predictive conditional marginal likelihood becomes $\prod_{j=1}^{W} p\big(y_*^{(j)} \mid y, k_\theta\big)$. One could apply this new objective,
for instance, if we were to view different human extrapolations as multiple draws from a common
generative model. Clearly this assumption is not entirely correct, since different people will have different biases, but it naturally suits our purposes: we are not as interested in the differences between
people as in the shared inductive biases, and assuming multiple draws from a common generative
model provides extraordinary statistical strength for learning these shared biases. Ultimately, we
will study both the differences and similarities between the responses.
One option for kernel learning is to specify a flexible parametric form for k and then learn $\theta$ by
optimizing our chosen objective functions. For this approach, we choose the recent spectral mixture
kernels of Wilson and Adams [14], which can model a wide range of stationary covariances, and are
intended to help automate kernel selection. However, we note that our objective function can readily
be applied to other parametric forms. We also consider empirical non-parametric kernel estimation,
since non-parametric kernel estimators can have the flexibility to converge to any positive definite
kernel, and thus become appealing when we have the signal strength provided by multiple draws
from a stochastic process.
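As a sketch of how the multi-draw objective could be evaluated (our own code; the kernel callable, jitter, and names are assumptions), given W posterior draws stacked as the columns of Ys:

```python
import numpy as np
from scipy.stats import multivariate_normal

def predictive_conditional_loglik(kernel, X, y, Xs, Ys, noise=1e-4):
    """sum_j log p(y_*^(j) | y, k_theta): the multi-draw objective.

    kernel(A, B) -> cross-covariance matrix between input sets A and B.
    Ys is (N_*, W), one column per posterior draw at test inputs Xs.
    A jitter `noise` keeps the linear solves well conditioned.
    """
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(Xs, X)
    Kss = kernel(Xs, Xs) + noise * np.eye(len(Xs))
    mean = Ks @ np.linalg.solve(K, y)             # posterior predictive mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)     # posterior predictive cov
    mvn = multivariate_normal(mean, cov, allow_singular=True)
    return sum(mvn.logpdf(Ys[:, j]) for j in range(Ys.shape[1]))
```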
4 Human Experiments
We wish to discover kernels that capture human inductive biases for learning functions and extrapolating from complex or ambiguous training data. We start by testing the consistency of our kernel
learning procedure in section 4.1. In section 4.2, we study progressive function learning. Indeed,
Figure 1: Reconstructing a kernel used for predictions: Training data were generated with an RBF kernel (green), and multiple independent posterior predictions were drawn from a GP with a spectral-mixture prediction kernel (blue, dashed). As the number of posterior draws increases, the learned spectral-mixture kernel (red) converges to the prediction kernel.
human participants will have a different representation (e.g., learned kernel) for different observed
data, and examining how these representations progressively adapt with new information can shed
light on our prior biases. In section 4.3, we learn human kernels to extrapolate on tasks which are
difficult for Gaussian processes with standard kernels. In section 4.4, we study model selection in
human function learning. All human participants were recruited using Amazon's Mechanical Turk
and saw experimental materials provided at http://functionlearning.com. When we are
considering stationary ground truth kernels, we use a spectral mixture for kernel learning; otherwise,
we use a non-parametric empirical estimate.
4.1 Reconstructing Ground Truth Kernels
We use simulations with a known ground truth to test the consistency of our kernel learning procedure, and the effects of multiple posterior draws, in converging to a kernel which has been used to
make predictions.
We sample 20 datapoints y from a GP with RBF kernel (the supplement describes GPs), $k_{\mathrm{RBF}}(x, x') = \exp(-0.5\,\|x - x'\|^2/\ell^2)$, at random input locations. Conditioned on these data, we then sample multiple posterior draws, $y_*^{(1)}, \ldots, y_*^{(W)}$, each containing 20 datapoints, from a GP
with a spectral mixture kernel [14] with two components (the prediction kernel). The prediction
kernel has deliberately not been trained to fit the data kernel. To reconstruct the prediction kernel,
we learn the parameters θ of a randomly initialized spectral mixture kernel with five components, by optimizing the predictive conditional marginal likelihood $\prod_{j=1}^{W} p(y_*^{(j)} \mid y, k_\theta)$ with respect to θ.
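A sketch of this simulation, reusing the helpers from the sketches above; the specific spectral-mixture parameters, noise level, and seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 1, size=20))            # random training inputs
K_data = rbf_kernel(X) + 1e-8 * np.eye(20)         # data kernel (RBF)
y = rng.multivariate_normal(np.zeros(20), K_data)

X_star = np.linspace(0, 1, 20)                     # test inputs
# Prediction kernel: a 2-component spectral mixture (parameters arbitrary).
pred = lambda a, b: spectral_mixture(a, b, [1.0, 0.5], [3.0, 0.2], [1.0, 0.1])
mean, cov = gp_posterior(pred(X, X), pred(X, X_star), pred(X_star, X_star), y)
draws = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(20), size=20)  # W = 20
```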
Figure 1 compares the learned kernels for different numbers of posterior draws W against the data
kernel (RBF) and the prediction kernel (spectral mixture). For a single posterior draw, the learned
kernel captures the high-frequency component of the prediction kernel but fails at reconstructing the
low-frequency component. Only with multiple draws does the learned kernel capture the longer-range dependencies. The fact that the learned kernel converges to the prediction kernel, which is
different from the data kernel, shows the consistency of our procedure, which could be used to infer
aspects of human inductive biases.
4.2 Progressive Function Learning
We asked humans to extrapolate beyond training data in two sets of 5 functions, each drawn from
GPs with known kernels. The learners extrapolated on these problems in sequence, and thus had an
opportunity to progressively learn about the underlying kernel in each set. To further test progressive
function learning, we repeated the first function at the end of the experiment, for six functions in each
set. We asked for extrapolation judgments because they provide more information about inductive
biases than interpolation, and pose difficulties for conventional GP kernels [14, 12, 29].
The observed functions are shown in black in Figure 2, the human responses in blue, and the true
extrapolation in dashed black. In the first two rows, the black functions are drawn from a GP with a
rational quadratic (RQ) kernel [11] (for heavy tailed correlations); there are 20 participants.
We show the learned human kernel, the data generating kernel, the human kernel learned from a
spectral mixture, and an RBF kernel trained only on the data, in Figures 2(g) and 2(h), respectively
corresponding to Figures 2(a) and 2(f). Initially, both the human learners and RQ kernel show heavy
tailed behaviour, and a bias for decreasing correlations with distance in the input space, but the
human learners have a high degree of variance. By the time they have seen Figure 2(h), they are
Figure 2: Progressive Function Learning. Humans are shown functions in sequence and asked to
make extrapolations. Observed data are in black, human predictions in blue, and true extrapolations
in dashed black. (a)-(f): observed data are drawn from a rational quadratic kernel, with identical data
in (a) and (f). (g): Learned human and RBF kernels on (a) alone, and (h): on (f), after seeing the data
in (a)-(e). The true data generating rational quadratic kernel is shown in red. (i)-(n): observed data
are drawn from a product of spectral mixture and linear kernels with identical data in (i) and (n).
(o): the empirical estimate of the human posterior covariance matrix from all responses in (i)-(n).
(p): the true posterior covariance matrix for (i)-(n).
more confident in their predictions, and more accurately able to estimate the true signal variance of
the function. Visually, the extrapolations look more confident and reasonable. Indeed, the human
learners will adapt their representations (e.g., learned kernels) to different datasets. However, although the human learners will adapt their representations to observed data, we can see in Figure 2(f) that the human learners are still over-estimating the tails of the kernel,
perhaps suggesting a strong prior bias for heavy-tailed correlations.
The learned RBF kernel, by contrast, cannot capture the heavy tailed nature of the training data (long
range correlations), due to its Gaussian parametrization. Moreover, the learned RBF kernel underestimates the signal variance of the data, because it overestimates the noise variance (not shown), to
explain away the heavy tailed properties of the data (its model misspecification).
In the second two rows, we consider a problem with highly complex structure, and only 10 participants. Here, the functions are drawn from a product of spectral mixture and linear kernels. As
the participants see more functions, they appear to expect linear trends, and become more similar
in their predictions. In Figures 2(o) and 2(p), we show the learned and true predictive correlation
matrices using empirical estimators which indicate similar correlation structure.
4.3 Discovering Unconventional Kernels
The experiments reported in this section follow the same general procedure described in Section 4.2.
In this case, 40 human participants were asked to extrapolate from two training sets, in counterbalanced order: a sawtooth function (Figure 3(a)) and a step function (Figure 3(b)), with training data shown as dashed black lines.
Figure 3: Learning Unconventional Kernels. (a)-(c): sawtooth function (dashed black), and three
clusters of human extrapolations. (d) empirically estimated human covariance matrix for (a). (e)-(g):
corresponding posterior draws for (a)-(c) from empirically estimated human covariance matrices.
(h): posterior predictive draws from a GP with a spectral mixture kernel learned from the dashed
black data. (i)-(j): step function (dashed black), and two clusters of human extrapolations. (k)
and (l) are the empirically estimated human covariance matrices for (i) and (j), and (m) and (n) are
posterior samples using these matrices. (o) and (p) are respectively spectral mixture and RBF kernel
extrapolations from the data in black.
These types of functions are notoriously difficult for standard Gaussian process kernels [11], due to
sharp discontinuities and non-stationary behaviour. In Figures 3(a), 3(b), 3(c), we used agglomerative clustering to process the human responses into three categories, shown in purple, green, and
blue. The empirical covariance matrix of the first cluster (Figure 3(d)) shows the dependencies of
the sawtooth form that characterize this cluster. In Figures 3(e), 3(f), 3(g), we sample from the
learned human kernels, following the same colour scheme. The samples appear to replicate the human behaviour, and the purple samples provide reasonable extrapolations. By contrast, posterior
samples from a GP with a spectral mixture kernel trained on the black data in this case quickly
revert to a prior mean, as shown in Fig 3(h). The data are sufficiently sparse, non-differentiable, and
non-stationary, that the spectral mixture kernel is less inclined to produce a long range extrapolation
than human learners, who attempt to generalise from a very small amount of information.
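The clustering-and-resampling step might look like the following sketch; the library, linkage criterion, and jitter are our assumptions (the paper does not specify them):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_and_resample(responses, n_clusters=3, n_samples=5, seed=0):
    """responses: R x N array, one row per participant's extrapolation curve.
    Returns posterior-style samples from each cluster's empirical human kernel."""
    rng = np.random.default_rng(seed)
    labels = fcluster(linkage(responses, method="ward"),
                      t=n_clusters, criterion="maxclust")
    out = {}
    for c in np.unique(labels):
        group = responses[labels == c]
        mu = group.mean(axis=0)
        # Empirical "human kernel" for this cluster (jitter enables sampling).
        K = np.cov(group, rowvar=False) + 1e-6 * np.eye(responses.shape[1])
        out[c] = rng.multivariate_normal(mu, K, size=n_samples)
    return out
```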
For the step function, we clustered the human extrapolations based on response time and total variation of the predicted function. Responses that took between 50 and 200 seconds and did not vary
by more than 3 units, shown in Figure 3(i), appeared reasonable. The other responses are shown in
Figure 3(j). The empirical covariance matrices of both sets of predictions in Figures 3(k) and 3(l)
show the characteristics of the responses. While the first matrix exhibits a block structure indicating
step-functions, the second matrix shows fast changes between positive and negative dependencies
characteristic for the high-frequency responses. Posterior sample extrapolations using the empirical
human kernels are shown in Figures 3(m) and 3(n). In Figures 3(o) and 3(p) we show posterior
samples from GPs with spectral mixture and RBF kernels, trained on the black data (e.g., given the
same information as the human learners). The spectral mixture kernel is able to extract some structure (some horizontal and vertical movement), but is overconfident, and unconvincing compared to
the human kernel extrapolations. The RBF kernel is unable to learn much structure in the data.
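The filtering rule just described might be implemented as in this sketch (the array layout and function name are our assumptions):

```python
import numpy as np

def split_by_time_and_variation(responses, times, lo=50, hi=200, tv_max=3.0):
    # Total variation of each predicted curve: sum of absolute increments.
    tv = np.abs(np.diff(responses, axis=1)).sum(axis=1)
    keep = (times >= lo) & (times <= hi) & (tv <= tv_max)
    return responses[keep], responses[~keep]   # "reasonable" vs. the rest
```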
4.4 Human Occam's Razor
If you were asked to predict the next number in the sequence 9, 15, 21, . . . , you are likely more
inclined to guess 27 than 149.5. However, we can produce either answer using different hypotheses
that are entirely consistent with the data. Occam's razor describes our natural tendency to favour the
simplest hypothesis that fits the data, and is of foundational importance in statistical model selection.
For example, MacKay [30] argues that Occam's razor is automatically embodied by the marginal
likelihood in performing Bayesian inference: indeed, in our number sequence example, marginal
likelihood computations show that 27 is millions of times more probable than 149.5, even if the
prior odds are equal.
Occam's razor is vitally important in nonparametric models such as Gaussian processes, which have
the flexibility to represent infinitely many consistent solutions to any given problem, but avoid overfitting through Bayesian inference. For example, the marginal likelihood of a Gaussian process
(supplement) separates into automatically calibrated model fit and model complexity terms, sometimes referred to as automatic Occam's razor [31].
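Concretely, the standard GP log marginal likelihood referred to here (see the supplement) separates into a data-fit term and a complexity penalty; a minimal sketch with the terms labeled:

```python
import numpy as np

def log_marginal_likelihood(K, y, noise=0.1):
    A = K + noise**2 * np.eye(len(y))
    fit = -0.5 * y @ np.linalg.solve(A, y)     # model-fit term
    _, logdet = np.linalg.slogdet(A)
    complexity = -0.5 * logdet                 # automatic Occam penalty
    const = -0.5 * len(y) * np.log(2 * np.pi)
    return fit + complexity + const
```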
[Figure 4: panel (a) plots the evidence p(y|M) over all possible datasets, with curves for simple, complex, and appropriate models; panel (b) plots GP posterior mean functions, output f(x) against input x.]
Figure 4: Bayesian Occam's Razor. a) The marginal likelihood (evidence) vs. all possible datasets. The dashed vertical line corresponds to an example dataset ŷ. b) Posterior mean functions of a GP with RBF kernel and too short, too large, and maximum marginal likelihood length-scales. Data are denoted by crosses.
The marginal likelihood p(y|M) is the probability that, if we were to randomly sample parameters from M, we would create dataset y [e.g., 31]. Simple models can only generate a small number
of datasets, but because the marginal likelihood must normalise, it will generate these datasets with
high probability. Complex models can generate a wide range of datasets, but each with typically low
probability. For a given dataset, the marginal likelihood will favour a model of more appropriate
complexity. This argument is illustrated in Fig 4(a). Fig 4(b) illustrates this principle with GPs.
Here we examine Occam's razor in human learning, and compare the Gaussian process marginal
likelihood ranking of functions, all consistent with the data, to human preferences. We generated a
dataset sampled from a GP with an RBF kernel, and presented users with a subsample of 5 points,
as well as seven possible GP function fits, internally labelled as follows: (1) the predictive mean of
a GP after maximum marginal likelihood hyperparameter estimation; (2) the generating function;
(3-7) the predictive means of GPs with larger to smaller length-scales (simpler to more complex
fits). We repeated this procedure four times, to create four datasets in total, and acquired 50 human
rankings on each, for 200 total rankings. Each participant was shown the same unlabelled functions
but with different random orderings.
Figure 5: Human Occam's Razor. (a) Number of first place (highest ranking) votes for each function. (b) Average human ranking (with standard deviations) of functions compared to first place ranking defined by (a). (c) Average human ranking vs. average GP marginal likelihood ranking of functions. "ML" = marginal likelihood optimum, "Truth" = true extrapolation. Blue numbers are offsets to the log length-scale from the ML optimum. Positive offsets correspond to simpler solutions.
Figure 5(a) shows the number of times each function was voted as the best fit to the data, which
follows the internal (latent) ordering defined above. The maximum marginal likelihood solution
receives the most (37%) first place votes. Functions 2, 3, and 4 received similar numbers (between
15% and 18%) of first place votes. The solutions which have a smaller length-scale (greater complexity) than the marginal likelihood best fit, represented by functions 5, 6, and 7, received a
relatively small number of first place votes. These findings suggest that on average humans prefer
overly simple explanations of the data. Moreover, participants generally agree with the GP marginal
likelihood's first choice preference, even over the true generating function. However, these data
also suggest that participants have a wide array of prior biases, leading to variability in first choice
preferences. Furthermore, 86% (43/50) of participants responded that their first ranked choice was
"likely to have generated the data" and looks "very similar" to what they imagined.
It's possible for highly probable solutions to be underrepresented in Figure 5(a): we might imagine, for example, that a particular solution is never ranked first, but always second. In Figure 5(b), we show the average rankings, with standard deviations (the standard errors are stdev/$\sqrt{200}$), compared to the first choice rankings, for each function. There is a general correspondence between rankings,
to the first choice rankings, for each function. There is a general correspondence between rankings,
suggesting that although human distributions over functions have different modes, these distributions
have a similar allocation of probability mass. The standard deviations suggest that there is relatively
more agreement that the complex small length-scale functions (labels 5, 6, 7) are improbable, than
about specific preferences for functions 1, 2, 3, and 4.
Finally, in Figure 5(c), we compare the average human rankings with the average GP marginal likelihood rankings. There are clear trends: (1) humans agree with the GP marginal likelihood about
the best fit, and that empirically decreasing the length-scale below the best fit value monotonically
decreases a solution's probability; (2) humans penalize simple solutions less than the marginal likelihood, with function 4 receiving a last (7th) place ranking from the marginal likelihood.
Despite the observed human tendency to favour simplicity more than the GP marginal likelihood,
Gaussian process marginal likelihood optimisation is surprisingly biased towards under-fitting in
function space. If we generate data from a GP with a known length-scale, the mode of the marginal
likelihood, on average, will over-estimate the true length-scale (Figures 1 and 2 in the supplement).
If we are unconstrained in estimating the GP covariance matrix, we will converge to the maximum likelihood estimator, $\hat{K} = (y - \bar{y})(y - \bar{y})^\top$, which is degenerate and therefore biased. Parametrizing
a covariance matrix by a length-scale (for example, by using an RBF kernel), restricts this matrix to
a low-dimensional manifold on the full space of covariance matrices. A biased estimator will remain
biased when constrained to a lower dimensional manifold, as long as the manifold allows movement
in the direction of the bias. Increasing a length-scale moves a covariance matrix towards the degeneracy of the unconstrained maximum likelihood estimator. With more data, the low-dimensional
manifold becomes more constrained, and less influenced by this under-fitting bias.
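A sketch of the simulation described here; grid search stands in for gradient-based optimisation, and all sizes, noise levels, and seeds are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(0, 1, 30)
true_ell, noise = 0.1, 0.1
sqdist = (X[:, None] - X[None, :]) ** 2
K_true = np.exp(-0.5 * sqdist / true_ell**2)

grid = np.linspace(0.02, 0.5, 100)
estimates = []
for _ in range(200):                           # repeat over simulated datasets
    y = rng.multivariate_normal(np.zeros(len(X)),
                                K_true + noise**2 * np.eye(len(X)))
    lls = []
    for ell in grid:
        A = np.exp(-0.5 * sqdist / ell**2) + noise**2 * np.eye(len(X))
        _, logdet = np.linalg.slogdet(A)
        lls.append(-0.5 * (y @ np.linalg.solve(A, y) + logdet))
    estimates.append(grid[int(np.argmax(lls))])

# Per the text, the mean ML estimate tends to exceed the true length-scale.
print("true length-scale:", true_ell, " mean ML estimate:", np.mean(estimates))
```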
5 Discussion
We have shown that (1) human learners have systematic expectations about smooth functions that
deviate from the inductive biases inherent in the RBF kernels that have been used in past models of
function learning; (2) it is possible to extract kernels that reproduce qualitative features of human
inductive biases, including the variable sawtooth and step patterns; (3) that human learners favour
smoother or simpler functions, even in comparison to GP models that tend to over-penalize complexity; and (4) that it is possible to build models that extrapolate in human-like ways which go
beyond traditional stationary and polynomial kernels.
We have focused on human extrapolation from noise-free nonparametric relationships. This approach complements past work emphasizing simple parametric functions and the role of noise [e.g.,
24], but kernel learning might also be applied in these other settings. In particular, iterated learning
(IL) experiments [23] provide a way to draw samples that reflect human learners' a priori expectations. Like most function learning experiments, past IL experiments have presented learners with
sequential data. Our approach, following Little and Shiffrin [24], instead presents learners with plots
of functions. This method is useful in reducing the effects of memory limitations and other sources
of noise (e.g., in perception). It is possible that people show different inductive biases across these
two presentation modes. Future work, using multiple presentation formats with the same underlying
relationships, will help resolve these questions.
Finally, the ideas discussed in this paper could be applied more generally, to discover interpretable
properties of unknown models from their predictions. Here one encounters fascinating questions at
the intersection of active learning, experimental design, and information theory.
References
[1] W.S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of mathematical biology, 5(4):115–133, 1943.
[2] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[3] K. Doya, S. Ishii, A. Pouget, and R.P.N. Rao. Bayesian brain: probabilistic approaches to neural coding. MIT Press, 2007.
[4] Zoubin Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, 2015.
[5] Daniel M Wolpert, Zoubin Ghahramani, and Michael I Jordan. An internal model for sensorimotor integration. Science, 269(5232):1880–1882, 1995.
[6] David C Knill and Whitman Richards. Perception as Bayesian inference. Cambridge University Press, 1996.
[7] Sophie Deneve. Bayesian spiking neurons I: inference. Neural computation, 20(1):91–117, 2008.
[8] Thomas L Griffiths and Joshua B Tenenbaum. Optimal predictions in everyday cognition. Psychological Science, 17(9):767–773, 2006.
[9] J.B. Tenenbaum, C. Kemp, T.L. Griffiths, and N.D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279–1285, 2011.
[10] R.M. Neal. Bayesian Learning for Neural Networks. Springer Verlag, 1996. ISBN 0387947248.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for Machine Learning. MIT Press, 2006.
[12] Andrew Gordon Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014. http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf.
[13] Tommi Jaakkola, David Haussler, et al. Exploiting generative models in discriminative classifiers. Advances in neural information processing systems, pages 487–493, 1998.
[14] Andrew Gordon Wilson and Ryan Prescott Adams. Gaussian process kernels for pattern discovery and extrapolation. International Conference on Machine Learning (ICML), 2013.
[15] J Douglas Carroll. Functional learning: The learning of continuous functional mappings relating stimulus and response continua. ETS Research Bulletin Series, 1963(2), 1963.
[16] Kyunghee Koh and David E Meyer. Function learning: Induction of continuous stimulus-response relations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(5):811, 1991.
[17] Edward L DeLosh, Jerome R Busemeyer, and Mark A McDaniel. Extrapolation: The sine qua non for abstraction in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4):968, 1997.
[18] Jerome R Busemeyer, Eunhee Byun, Edward L DeLosh, and Mark A McDaniel. Learning functional relations based on experience with input-output pairs by humans and artificial neural networks. Concepts and Categories, 1997.
[19] Thomas L Griffiths, Chris Lucas, Joseph Williams, and Michael L Kalish. Modeling human function learning with Gaussian processes. In Neural Information Processing Systems, 2009.
[20] Christopher G Lucas, Thomas L Griffiths, Joseph J Williams, and Michael L Kalish. A rational model of function learning. Psychonomic bulletin & review, pages 1–23, 2015.
[21] Christopher G Lucas, Douglas Sterling, and Charles Kemp. Superspace extrapolation reveals inductive biases in function learning. In Cognitive Science Society, 2012.
[22] Mark A McDaniel and Jerome R Busemeyer. The conceptual basis of function learning and extrapolation: Comparison of rule-based and associative-based models. Psychonomic bulletin & review, 12(1):24–42, 2005.
[23] Michael L Kalish, Thomas L Griffiths, and Stephan Lewandowsky. Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review, 14(2):288–294, 2007.
[24] Daniel R Little and Richard M Shiffrin. Simplicity bias in the estimation of causal functions. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 1157–1162, 2009.
[25] Samuel GB Johnson, Andy Jin, and Frank C Keil. Simplicity and goodness-of-fit in explanation: The case of intuitive curve-fitting. In Proceedings of the 36th Annual Conference of the Cognitive Science Society, pages 701–706, 2014.
[26] Samuel J Gershman, Edward Vul, and Joshua B Tenenbaum. Multistability and perceptual inference. Neural computation, 24(1):1–24, 2012.
[27] Thomas L Griffiths, Edward Vul, and Adam N Sanborn. Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21(4):263–268, 2012.
[28] Edward Vul, Noah Goodman, Thomas L Griffiths, and Joshua B Tenenbaum. One and done? Optimal decisions from very few samples. Cognitive science, 38(4):599–637, 2014.
[29] Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, and John P. Cunningham. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, 2014.
[30] David JC MacKay. Information theory, inference, and learning algorithms. Cambridge U. Press, 2003.
[31] Carl Edward Rasmussen and Zoubin Ghahramani. Occam's razor. In Neural Information Processing Systems (NIPS), 2001.
[32] Andrew Gordon Wilson. A process over all stationary kernels. Technical report, University of Cambridge, 2012.
5,264 | 5,766 | The Pseudo-Dimension of Near-Optimal Auctions
Jamie Morgenstern*
Computer and Information Science
University of Pennsylvania
Philadelphia, PA
jamiemor@cis.upenn.edu
Tim Roughgarden
Stanford University
Palo Alto, CA
tim@cs.stanford.edu
Abstract
This paper develops a general approach, rooted in statistical learning theory, to
learning an approximately revenue-maximizing auction from data. We introduce
t-level auctions to interpolate between simple auctions, such as welfare maximization with reserve prices, and optimal auctions, thereby balancing the competing
demands of expressivity and simplicity. We prove that such auctions have small
representation error, in the sense that for every product distribution F over bidders' valuations, there exists a t-level auction with small t and expected revenue
close to optimal. We show that the set of t-level auctions has modest pseudo-dimension (for polynomial t) and therefore leads to small learning error. One
consequence of our results is that, in arbitrary single-parameter settings, one can
learn a mechanism with expected revenue arbitrarily close to optimal from a polynomial number of samples.
1 Introduction
In the traditional economic approach to identifying a revenue-maximizing auction, one first posits
a prior distribution over all unknown information, and then solves for the auction that maximizes
expected revenue with respect to this distribution. The first obstacle to making this approach operational is the difficulty of formulating an appropriate prior. The second obstacle is that, even if an
appropriate prior distribution is available, the corresponding optimal auction can be far too complex
and unintuitive for practical use. This motivates the goal of identifying auctions that are "simple"
and yet nearly-optimal in terms of expected revenue.
In this paper, we apply tools from learning theory to address both of these challenges. In our model,
we assume that bidders' valuations (i.e., "willingness to pay") are drawn from an unknown distribution F. A learning algorithm is given i.i.d. samples from F. For example, these could represent
the outcomes of comparable transactions that were observed in the past. The learning algorithm
suggests an auction to use for future bidders, and its performance is measured by comparing the
expected revenue of its output auction to that earned by the optimal auction for the distribution F .
The possible outputs of the learning algorithm correspond to some set C of auctions. We view C as a
design parameter that can be selected by a seller, along with the learning algorithm. A central goal of
this work is to identify classes C that balance representation error (the amount of revenue sacrificed
by restricting to auctions in C) with learning error (the generalization error incurred by learning over
C from samples). That is, we seek a set C that is rich enough to contain an auction that closely
approximates an optimal auction (whatever F might be), yet simple enough that the best auction
in C can be learned from a small amount of data. Learning theory offers tools both for rigorously
defining the "simplicity" of a set C of auctions, through well-known complexity measures such as the
* Part of this work was done while visiting Stanford University. Partially supported by a Simons Award for Graduate Students in Theoretical Computer Science, as well as NSF grant CCF-1415460.
pseudo-dimension, and for quantifying the amount of data necessary to identify the approximately
best auction from C. Our goal of learning a near-optimal auction also requires understanding the
representation error of different classes C; this task is problem-specific, and we develop the necessary
arguments in this paper.
1.1 Our Contributions
The primary contributions of this paper are the following. First, we show that well-known concepts
from statistical learning theory can be directly applied to reason about learning from data an approximately revenue-maximizing auction. Precisely, for a set C of auctions and an arbitrary unknown distribution F over valuations in [1, H], $O\big(\frac{H^2}{\epsilon^2}\, d_C \log \frac{H}{\epsilon}\big)$ samples from F are enough to learn (up to a $1 - \epsilon$ factor) the best auction in C, where $d_C$ denotes the pseudo-dimension of the set C (defined
in Section 2). Second, we introduce the class of t-level auctions, to interpolate smoothly between
simple auctions, such as welfare maximization subject to individualized reserve prices (when t = 1),
and the complex auctions that can arise as optimal auctions (as t → ∞). Third, we prove that in
quite general auction settings with n bidders, the pseudo-dimension of the set of t-level auctions is
O(nt log nt). Fourth, we quantify the number t of levels required for the set of t-level auctions to
have low representation error, with respect to the optimal auctions that arise from arbitrary product distributions F . For example, for single-item auctions and several generalizations thereof, if
$t = \Omega(H/\epsilon)$, then for every product distribution F there exists a t-level auction with expected revenue at least $1 - \epsilon$ times that of the optimal auction for F.
In the above sense, the "t" in t-level auctions is a tunable "sweet spot", allowing a designer to balance the competing demands of expressivity (to achieve near-optimality) and simplicity (to achieve
learnability). For example, given a fixed amount of past data, our results indicate how much auction
complexity (in the form of the number of levels t) one can employ without risking overfitting the
auction to the data.
Alternatively, given a target approximation factor $1 - \epsilon$, our results give sufficient conditions on t and consequently on the number of samples needed to achieve this approximation factor. The resulting sample complexity upper bound has polynomial dependence on H, $\epsilon^{-1}$, and the number n of bidders. Known results [1, 8] imply that any method of learning a $(1 - \epsilon)$-approximate auction from
samples must have sample complexity with polynomial dependence on all three of these parameters,
even for single-item auctions.
1.2 Related Work
The present work shares much of its spirit and high-level goals with Balcan et al. [4], who proposed
applying statistical learning theory to the design of near-optimal auctions. The first-order difference
between the two works is that our work assumes bidders' valuations are drawn from an unknown distribution, while Balcan et al. [4] study the more demanding "prior-free" setting. Since no auction
can achieve near-optimal revenue ex-post, Balcan et al. [4] define their revenue benchmark with
respect to a set G of auctions on each input v as the maximum revenue obtained by any auction
of G on v. The idea of learning from samples enters the work of Balcan et al. [4] through the
internal randomness of their partitioning of bidders, rather than through an exogenous distribution
over inputs (as in this work). Both our work and theirs require polynomial dependence on H, 1/ε:
ours in terms of a necessary number of samples, and theirs in terms of a necessary number of bidders;
as well as a measure of the complexity of the class G (in our case, the pseudo-dimension, and in
theirs, an analogous measure). The primary improvement of our work over the results in Balcan
et al. [4] is that our results apply for single-item auctions, matroid feasibility, and arbitrary single-parameter settings (see Section 2 for definitions); while their results apply only to single-parameter
settings of unlimited supply.1 We also view as a feature the fact that our sample complexity upper
bounds can be deduced directly from well-known results in learning theory; we can focus instead
on the non-trivial and problem-specific work of bounding the pseudo-dimension and representation
error of well-chosen auction classes.
Elkind [12] also considers a similar model to ours, but only for the special case of single-item auctions. While her proposed auction format is similar to ours, our results cover the far more general
1 See Balcan et al. [3] for an extension to the case of a large finite supply.
case of arbitrary single-parameter settings and non-finite support distributions; our sample complexity bounds are also better even in the case of a single-item auction (linear rather than quadratic
dependence on the number of bidders). On the other hand, the learning algorithm in [12] (for singleitem auctions) is computationally efficient, while ours is not.
Cole and Roughgarden [8] study single-item auctions with n bidders with valuations drawn from
independent (not necessarily identical) "regular" distributions (see Section 2), and prove upper and lower bounds (polynomial in n and $\epsilon^{-1}$) on the sample complexity of learning a $(1 - \epsilon)$-approximate
auction. While the formalism in their work is inspired by learning theory, no formal connections
are offered; in particular, both their upper and lower bounds were proved from scratch. Our positive
results include single-item auctions as a very special case and, for bounded or MHR valuations, our
sample complexity upper bounds are much better than those in Cole and Roughgarden [8].
Huang et al. [15] consider learning the optimal price from samples when there is a single buyer
and a single seller; this problem was also studied implicitly in [10]. Our general positive results
obviously cover the bounded-valuation and MHR settings in [15], though the specialized analysis in
[15] yields better (indeed, almost optimal) sample complexity bounds, as a function of $\epsilon^{-1}$ and/or
H.
Medina and Mohri [17] show how to use a combination of the pseudo-dimension and Rademacher
complexity to measure the sample complexity of selecting a single reserve price for the VCG mechanism to optimize revenue. In our notation, this corresponds to analyzing a single set C of auctions
(VCG with a reserve). Medina and Mohri [17] do not address the expressivity vs. simplicity trade-off
that is central to this paper.
Dughmi et al. [11] also study the sample complexity of learning good auctions, but their main results
are negative (exponential sample complexity), for the difficult scenario of multi-parameter settings.
(All settings in this paper are single-parameter.)
Our work on t-level auctions also contributes to the literature on simple approximately revenuemaximizing auctions (e.g., [6, 14, 7, 9, 21, 24, 2]). Here, one takes the perspective of a seller who
knows the valuation distribution F but is bound by a "simplicity constraint" on the auction deployed,
thereby ruling out the optimal auction. Our results that bound the representation error of t-level auctions (Theorems 3.4, 4.1, 5.4, and 6.2) can be interpreted as a principled way to trade off the simplicity of an auction with its approximation guarantee. While previous work in this literature generally
left the term ?simple? safely undefined, this paper effectively proposes the pseudo-dimension of an
auction class as a rigorous and quantifiable simplicity measure.
2 Preliminaries
This section reviews useful terminology and notation standard in Bayesian auction design and learning theory.
Bayesian Auction Design We consider single-parameter settings with n bidders. This means that
each bidder has a single unknown parameter, its valuation or willingness to pay for "winning." (Every bidder has value 0 for losing.) A setting is specified by a collection 𝒳 of subsets of {1, 2, . . . , n}; each such subset represents a collection of bidders that can simultaneously "win." For example, in a setting with k copies of an item, where no bidder wants more than one copy, 𝒳 would be all subsets of {1, 2, . . . , n} of cardinality at most k.
A generalization of this case, studied in the supplementary materials (Section 5), is matroid settings.
These satisfy: (i) whenever X ∈ 𝒳 and Y ⊆ X, Y ∈ 𝒳; and (ii) for two sets I₁, I₂ ∈ 𝒳 with |I₁| < |I₂|, there is always an augmenting element i₂ ∈ I₂ \ I₁ such that I₁ ∪ {i₂} ∈ 𝒳. The supplementary materials (Section 6) also consider arbitrary single-parameter settings, where the only assumption is that ∅ ∈ 𝒳. To ease comprehension, we often illustrate our main ideas using single-item auctions
(where 𝒳 is the singletons and the empty set).
We assume bidders' valuations are drawn from the continuous joint cumulative distribution F. Except in the extension in Section 4, we assume that the support of F is limited to [1, H]ⁿ. As in most of optimal auction theory [18], we usually assume that F is a product distribution, with F = F₁ × F₂ × · · · × Fₙ and each vᵢ ∼ Fᵢ drawn independently but not identically. The virtual value of bidder i is denoted by $\phi_i(v_i) = v_i - \frac{1 - F_i(v_i)}{f_i(v_i)}$. A distribution satisfies the monotone-hazard rate (MHR) condition if $f_i(v_i)/(1 - F_i(v_i))$ is nondecreasing; intuitively, if its tails are no heavier
than those of an exponential distribution. In a fundamental paper, [18] proved that when every virtual valuation function is nondecreasing (the "regular" case), the auction that maximizes expected revenue for n Bayesian bidders chooses winners in a way which maximizes the sum of the virtual values of the winners. This auction is known as Myerson's auction, which we refer to as M. The result can be extended to the general, "non-regular" case by replacing the virtual valuation functions by "ironed virtual valuation functions." The details are well-understood but technical; see Myerson
[18] and Hartline [13] for details.
Sample Complexity, VC Dimension, and the Pseudo-Dimension This section reviews several
well-known definitions from learning theory. Suppose there is some domain Q, and let c be some unknown target function c : Q → {0, 1}. Let D be an unknown distribution over Q. We wish to understand how many labeled samples (x, c(x)), x ∼ D, are necessary and sufficient to be able to output a ĉ which agrees with c almost everywhere with respect to D. The distribution-independent sample complexity of learning c depends fundamentally on the "complexity" of the set of binary functions C from which we are choosing ĉ. We define the relevant complexity measure next.
Let S be a set of m samples from Q. The set S is said to be shattered by C if, for every subset T ⊆ S, there is some c_T ∈ C such that c_T(x) = 1 if x ∈ T and c_T(y) = 0 if y ∉ T. That is, ranging over all c ∈ C induces all $2^{|S|}$ possible projections onto S. The VC dimension of C, denoted VC(C), is the size of the largest set S that can be shattered by C.
Let $\mathrm{err}_S(\hat{c}) = \big(\sum_{x \in S} |c(x) - \hat{c}(x)|\big)/|S|$ denote the empirical error of ĉ on S, and let $\mathrm{err}(\hat{c}) = \mathbb{E}_{x \sim D}[\,|c(x) - \hat{c}(x)|\,]$ denote the true expected error of ĉ with respect to D. A key result from learning theory [23] is: for every distribution D, a sample S of size $\Theta(\epsilon^{-2}(\mathrm{VC}(C) + \ln \frac{1}{\delta}))$ is sufficient to guarantee that $\mathrm{err}_S(\hat{c}) \in [\mathrm{err}(\hat{c}) - \epsilon, \mathrm{err}(\hat{c}) + \epsilon]$ for every ĉ ∈ C with probability 1 − δ. In this case, the error on the sample is close to the true error, simultaneously for every hypothesis in C. In particular, choosing the hypothesis with the minimum sample error minimizes the true error, up to 2ε. We say C is (ε, δ)-uniformly learnable with sample complexity m if, given a sample S of size m, with probability 1 − δ, for all c ∈ C, $|\mathrm{err}_S(c) - \mathrm{err}(c)| < \epsilon$: thus, any class C is (ε, δ)-uniformly learnable with $m = \Theta\big(\frac{1}{\epsilon^2}\big(\mathrm{VC}(C) + \ln \frac{1}{\delta}\big)\big)$ samples. Conversely, for every learning algorithm A that uses fewer than $\frac{\mathrm{VC}(C)}{\epsilon}$ samples, there exists a distribution D′ and a constant q such that, with probability at least q, A outputs a hypothesis ĉ′ ∈ C with $\mathrm{err}(\hat{c}') > \mathrm{err}(\hat{c}) + 2\epsilon$ for some ĉ ∈ C. That is, the true error of the output hypothesis is more than 2ε larger than that of the best hypothesis in the class.
To learn real-valued functions, we need a generalization of VC dimension (which concerns binary functions). The pseudo-dimension [19] does exactly this.² Formally, let c : Q → [0, H] be a real-valued function over Q, and C the class we are learning over. Let S be a sample drawn from D, |S| = m, labeled according to c. Both the empirical and true error of a hypothesis ĉ are defined as before, though |ĉ(x) − c(x)| can now take on values in [0, H] rather than in {0, 1}. Let (r₁, . . . , r_m) ∈ [0, H]^m be a set of targets for S. We say (r₁, . . . , r_m) witnesses the shattering of S by C if, for each T ⊆ S, there exists some c_T ∈ C such that c_T(x_i) ≥ r_i for all x_i ∈ T and c_T(x_i) < r_i for all x_i ∉ T. If there exists some r⃗ witnessing the shattering of S, we say S is shatterable by C. The pseudo-dimension of C, denoted d_C, is the size of the largest set S which is shatterable by C. The sample complexity upper bounds of this paper are derived from the following theorem, which states that the distribution-independent sample complexity of learning over a class of real-valued functions C is governed by the class's pseudo-dimension.
Theorem 2.1 [E.g. [1]] Suppose C is a class of real-valued functions with range in [0, H] and pseudo-dimension $d_C$. For every ε > 0, δ ∈ [0, 1], the sample complexity of (ε, δ)-uniformly learning f with respect to C is $m = O\big(\big(\tfrac{H}{\epsilon}\big)^2\big(d_C \ln \tfrac{H}{\epsilon} + \ln \tfrac{1}{\delta}\big)\big)$.
Moreover, the guarantee in Theorem 2.1 is realized by the learning algorithm that simply outputs
the function c ∈ C with the smallest empirical error on the sample.
2 The fat-shattering dimension is a weaker condition that is also sufficient for sample complexity bounds. All of our arguments give the same upper bounds on the pseudo-dimension and the fat-shattering dimension of various auction classes, so we present the stronger statements.
Applying Pseudo-Dimension to Auction Classes For the remainder of this paper, we consider
classes of truthful auctions C.³ When we discuss some auction c ∈ C, we treat c : [0, H]ⁿ → ℝ as the function that maps (truthful) bid tuples to the revenue achieved on them by the auction c. Then, rather than minimizing error, we aim to maximize revenue. In our setting, the guarantee of Theorem 2.1 directly implies that, with probability at least 1 − δ (over the m samples), the output of the empirical revenue maximization learning algorithm (which returns the auction c ∈ C with the highest average revenue on the samples) chooses an auction with expected revenue (over the true underlying distribution F) that is within an additive ε of the maximum possible.
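The learning algorithm in question is simply empirical revenue maximization over the class; a minimal sketch, where `revenue(c, v)` is a hypothetical function mapping an auction c and a (truthful) valuation profile v to the revenue c earns on v:

```python
import numpy as np

def empirical_revenue_maximizer(candidates, samples, revenue):
    """Return the auction in `candidates` with the highest average revenue
    over the sampled valuation profiles (one profile per row of `samples`)."""
    avg = [np.mean([revenue(c, v) for v in samples]) for c in candidates]
    return candidates[int(np.argmax(avg))]
```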
3 Single-Item Auctions
To illustrate our ideas, we first focus on single-item auctions. The results of this section are generalized significantly in the supplementary (see Sections 5 and 6).
Section 3.1 defines the class of t-level single-item auctions, gives an example, and interprets the auctions as approximations to virtual welfare maximizers. Section 3.2 proves that the pseudo-dimension
of the set of such auctions is O(nt log nt), which by Theorem 2.1 implies a sample-complexity upper bound. Section 3.3 proves that taking $t = \Omega(H/\epsilon)$ yields low representation error.
3.1 t-Level Auctions: The Single-Item Case
We now introduce t-level auctions, or C_t for short. Intuitively, one can think of each bidder as facing one of t possible prices; the price they face depends upon the values of the other bidders. Consider, for each bidder i, t numbers 0 ≤ ℓ_{i,0} ≤ ℓ_{i,1} ≤ . . . ≤ ℓ_{i,t−1}. We refer to these t numbers as thresholds. This set of tn numbers defines a t-level auction with the following allocation rule. Consider a valuation tuple v:
1. For each bidder i, let t_i(v_i) denote the index τ of the largest threshold ℓ_{i,τ} that lower bounds v_i (or −1 if v_i < ℓ_{i,0}). We call t_i(v_i) the level of bidder i.
2. Sort the bidders from highest level to lowest level and, within a level, use a fixed lexicographical tie-breaking ordering ≻ to pick the winner.⁴
3. Award the item to the first bidder in this sorted order (unless t_i = −1 for every bidder i, in which case there is no sale).
The payment rule is the unique one that renders truthful bidding a dominant strategy and charges 0
to losing bidders: the winning bidder pays the lowest bid at which she would continue to win. It is
important for us to understand this payment rule in detail; there are three interesting cases. Suppose
bidder i is the winner. In the first case, i is the only bidder who might be allocated the item (other
bidders have level -1), in which case her bid must be at least her lowest threshold. In the second
case, there are multiple bidders at her level, so she must bid high enough to be at her level (and,
since ties are broken lexicographically, this is her threshold to win). In the final case, she need not
compete at her level: she can choose to either pay one level above her competition (in which case
her position in the tie-breaking ordering does not matter) or she can bid at the same level as her
highest-level competitors (in which case she only wins if she dominates all of those bidders at the
next-highest level according to ≻). Formally, the payment p of the winner i (if any) is as follows. Let τ̂ denote the highest level τ such that there are at least two bidders at or above level τ, and let I be the set of bidders other than i whose level is at least τ̂.

Monop If τ̂ = −1, then p_i = ℓ_{i,0} (she is the only potential winner, but must have level ≥ 0 to win).

Mult If t_i(v_i) = τ̂, then p_i = ℓ_{i,τ̂} (she needs to be at level τ̂).

Unique If t_i(v_i) > τ̂, then if i ≻ i′ for all i′ ∈ I, she pays p_i = ℓ_{i,τ̂}; otherwise she pays p_i = ℓ_{i,τ̂+1} (she either needs to be at level τ̂ + 1, in which case her position in ≻ does not matter, or at level τ̂, in which case she would need to be the highest according to ≻).

3 An auction is truthful if truthful bidding is a dominant strategy for every bidder. That is: for every bidder i, and all possible bids by the other bidders, i maximizes its expected utility (value minus price paid) by bidding its true value. In the single-parameter settings that we study, the expected revenue of the optimal non-truthful auction (measured at a Bayes-Nash equilibrium with respect to the prior distribution) is no larger than that of the optimal truthful auction.

4 When the valuation distributions are regular, this tie-breaking can be done by value, or randomly; when it is done by value, this equates to a generalization of VCG with nonanonymous reserves (and is IC and has identical representation error as this analysis when bidders are regular).
We now describe a particular t-level auction, and demonstrate each case of the payment rule.
Example 3.1 Consider the following 4-level auction for bidders a, b, c. Let ℓ_{a,·} = [2, 4, 6, 8], ℓ_{b,·} = [1.5, 5, 9, 10], and ℓ_{c,·} = [1.7, 3.9, 6, 7]. For example, if bidder a bids less than 2 she is at level −1, a bid in [2, 4) puts her at level 0, a bid in [4, 6) at level 1, a bid in [6, 8) at level 2, and a bid of at least 8 at level 3. Let a ≻ b ≻ c.
Monop If va = 3, vb < 1.5, vc < 1.7, then b, c are at level 1 (to which the item is never allocated).
So, a wins and pays 2, the minimum she needs to bid to be at level 0.
Mult If va 8, vb 10, vc < 7, then a and b are both at level 3, and a b, so a will win and
pays 8 (the minimum she needs to bid to be at level 3).
Unique If va
8, vb 2 [5, 9], vc 2 [3.9, 6], then a is at level 3, and b and c are at level 1. Since
a
b and a
c, a need only pay 4 (enough to be at level 1). If, on the other hand,
va 2 [4, 6], vb = [5, 9] and vc
6, c has level at least 2 (while a, b have level 1), but c
needs to pay 6 since a, b c.
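The allocation rule and all three payment cases are mechanical enough to code directly. Below is a minimal Python sketch (all function and variable names are ours, not the paper's), assuming lexicographic tie-breaking given by the iteration order of the bids dictionary; it reproduces every case of Example 3.1.

    def level(thresholds, bid):
        # number of thresholds at or below the bid, minus one (-1 if below all)
        return sum(bid >= ell for ell in thresholds) - 1

    def tlevel_auction(thresholds, bids):
        # thresholds: dict bidder -> [ell_{i,0}, ..., ell_{i,t-1}] (sorted);
        # bids: dict bidder -> bid; earlier dict order = higher tie-break priority
        order = list(bids)
        levels = {i: level(thresholds[i], bids[i]) for i in order}
        winner = max(order, key=lambda i: (levels[i], -order.index(i)))
        if levels[winner] == -1:
            return None, 0.0  # every bidder is at level -1: no sale
        # tau_star: highest level with at least two bidders at or above it
        tau_star = max((tau for tau in range(max(levels.values()) + 1)
                        if sum(lv >= tau for lv in levels.values()) >= 2),
                       default=-1)
        rivals = [i for i in order if i != winner and levels[i] >= tau_star]
        if tau_star == -1:                       # Monop: only potential winner
            price = thresholds[winner][0]
        elif levels[winner] == tau_star:         # Mult: tied at the top level
            price = thresholds[winner][tau_star]
        elif all(order.index(winner) < order.index(j) for j in rivals):
            price = thresholds[winner][tau_star]      # Unique, wins the tie-break
        else:
            price = thresholds[winner][tau_star + 1]  # Unique, must clear next level
        return winner, price

    ell = {"a": [2, 4, 6, 8], "b": [1.5, 5, 9, 10], "c": [1.7, 3.9, 6, 7]}
    tlevel_auction(ell, {"a": 3, "b": 1.0, "c": 1.0})  # ("a", 2)   Monop
    tlevel_auction(ell, {"a": 8, "b": 10, "c": 5})     # ("a", 8)   Mult
    tlevel_auction(ell, {"a": 8, "b": 6, "c": 5})      # ("a", 4)   Unique
    tlevel_auction(ell, {"a": 5, "b": 6, "c": 6})      # ("c", 6)   Unique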
Remark 3.2 (Connection to virtual valuation functions) t-level auctions are naturally interpreted as discrete approximations to virtual welfare maximizers, and our representation error bound in Theorem 3.4 makes this precise. Each level corresponds to a constraint of the form "If any bidder has level at least τ, do not sell to any bidder with level less than τ." We can interpret the ℓ_{i,τ}'s (with fixed τ, ranging over bidders i) as the bidder values that map to some common virtual value. For example, 1-level auctions treat all values below the single threshold as having negative virtual value, and above the threshold use values as proxies for virtual values. 2-level auctions use the second threshold to refine the virtual value estimates, and so on. With this interpretation, it is intuitively clear that as t → ∞, it is possible to estimate bidders' virtual valuation functions and thus approximate Myerson's optimal auction to arbitrary accuracy.
3.2 The Pseudo-Dimension of t-Level Auctions
This section shows that the pseudo-dimension of the class of t-level single-item auctions with n
bidders is O(nt log nt). Combining this with Theorem 2.1 immediately yields sample complexity
bounds (parameterized by t) for learning the best such auction from samples.
Theorem 3.3 For a fixed tie-breaking order, the set of n-bidder single-item t-level auctions has
pseudo-dimension O (nt log(nt)).
Proof: Recall from Section 2 that we need to upper bound the size of every set that is shatterable using t-level auctions. Fix a set of samples S = v_1, ..., v_m of size m and a potential witness R = r_1, ..., r_m. Each auction c induces a binary labeling of the samples v_j of S (whether c's revenue on v_j is at least r_j or strictly less than r_j). The set S is shattered with witness R if and only if the number of distinct labelings of S given by t-level auctions is 2^m.
We upper-bound the number of distinct labelings of S given by t-level auctions (for some fixed potential witness R), counting the labelings in two stages. Note that S involves nm numbers, one value v_ij for each bidder for each sample, and a t-level auction involves nt numbers, t thresholds ℓ_{i,τ} for each bidder. Call two t-level auctions with thresholds {ℓ_{i,τ}} and {ℓ′_{i,τ}} equivalent if:
1. The relative order of the ℓ_{i,τ}'s agrees with that of the ℓ′_{i,τ}'s, in that both induce the same permutation of {1, 2, ..., n} × {0, 1, ..., t − 1}.
2. Merging the sorted list of the v_ij's with the sorted list of the ℓ_{i,τ}'s yields the same partition of the v_ij's as does merging it with the sorted list of the ℓ′_{i,τ}'s.
Note that this is an equivalence relation. If two t-level auctions are equivalent, every comparison between a valuation and a threshold or between two valuations is resolved identically by those auctions.
Using the defining properties of equivalence, a crude upper bound on the number of equivalence classes is

    (nt)! · ((nm + nt) choose nt) ≤ (nm + nt)^{2nt}.    (1)
We now upper-bound the number of distinct labelings of S that can be generated by t-level auctions
in a single equivalence class C. First, as all comparisons between two numbers (valuations or
thresholds) are resolved identically for all auctions in C, each bidder i in each sample vj of S
is assigned the same level (across auctions in C), and the winner (if any) in each sample vj is
constant across all of C. By the same reasoning, the identity of the parameter that gives the winner's payment (some ℓ_{i,τ}) is uniquely determined by pairwise comparisons (recall Section 3.1) and hence is common across all auctions in C. The payments ℓ_{i,τ}, however, can vary across auctions in the equivalence class.
For a bidder i and level τ ∈ {0, 1, 2, ..., t − 1}, let S_{i,τ} ⊆ S be the subset of samples in which bidder i wins and pays ℓ_{i,τ}. The revenue obtained by each auction in C on a sample of S_{i,τ} is simply ℓ_{i,τ} (and independent of all other parameters of the auction). Thus, ranging over all t-level auctions in C generates at most |S_{i,τ}| distinct binary labelings of S_{i,τ}; the possible subsets of S_{i,τ} for which an auction meets the corresponding target r_j form a nested collection.
Summarizing, within the equivalence class C of t-level auctions, varying a parameter ℓ_{i,τ} generates at most |S_{i,τ}| different labelings of the samples S_{i,τ} and has no effect on the other samples. Since the subsets {S_{i,τ}}_{i,τ} are disjoint, varying all of the ℓ_{i,τ}'s (i.e., ranging over C) generates at most

    Π_{i=1}^{n} Π_{τ=0}^{t−1} |S_{i,τ}| ≤ m^{nt}    (2)

distinct labelings of S.
Combining (1) and (2), the class of all t-level auctions produces at most (nm + nt)^{3nt} distinct labelings of S. Since shattering S requires 2^m distinct labelings, we conclude that 2^m ≤ (nm + nt)^{3nt}, implying m = O(nt log(nt)) as claimed. ∎
3.3 The Representation Error of Single-Item t-Level Auctions
In this section, we show that for every bounded product distribution, there exists a t-level auction with expected revenue close to that of the optimal single-item auction when bidders are independent and bounded. The analysis "rounds" an optimal auction to a t-level auction without losing much expected revenue. This is done using thresholds to approximate each bidder's virtual value: the lowest threshold at the bidder's monopoly reserve price, the next ⌈1/ε′⌉ thresholds at the values at which bidder i's virtual value surpasses multiples of ε′, and the remaining thresholds at those values where bidder i's virtual value reaches powers of 1 + ε. Theorem 3.4 formalizes this intuition.
Theorem 3.4 Suppose F is a distribution over [1, H]^n. If t = Ω(1/ε + log_{1+ε} H), then C_t contains a single-item auction with expected revenue at least (1 − ε) times the optimal expected revenue.
Theorem 3.4 follows immediately from the following lemma, with ζ = γ = 1. We prove this more general result for later use.
Lemma 3.5 Consider n bidders with valuations in [0, H] and with P[max_i v_i > ζ] ≥ γ. Then, C_t contains a single-item auction with expected revenue at least a (1 − ε) fraction of that of an optimal auction, for t = Ω(1/(εγ) + log_{1+ε}(H/ζ)).
Proof: Consider a fixed bidder i. We define t thresholds for i, bucketing i by her virtual value, and prove that the t-level auction A using these thresholds for each bidder closely approximates the expected revenue of the optimal auction M. Let ε′ be a parameter defined later.
Set ℓ_{i,0} = φ_i^{−1}(0), bidder i's monopoly reserve.5 For τ ∈ [1, ⌈1/ε′⌉], let ℓ_{i,τ} = φ_i^{−1}(τ · ζ · ε′) (a virtual value in [0, ζ]). For τ ∈ [⌈1/ε′⌉, ⌈1/ε′⌉ + ⌈log_{1+ε/2}(H/ζ)⌉], let ℓ_{i,τ} = φ_i^{−1}(ζ(1 + ε/2)^{τ − ⌈1/ε′⌉}) (a virtual value > ζ).
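This construction is easy to implement. The following Python sketch (names are ours) builds the threshold list for one bidder, assuming a strictly increasing ironed virtual valuation with a computable inverse phi_inv; the geometric levels start at exponent 1 to skip the duplicate endpoint near ζ.

    import math

    def representation_thresholds(phi_inv, H, zeta, eps, eps_prime):
        n_low = math.ceil(1.0 / eps_prime)
        n_high = math.ceil(math.log(H / zeta, 1.0 + eps / 2.0))
        ells = [phi_inv(0.0)]  # ell_{i,0}: the monopoly reserve
        # levels whose virtual values cover [0, zeta] in steps of zeta * eps'
        ells += [phi_inv(tau * zeta * eps_prime) for tau in range(1, n_low + 1)]
        # levels whose virtual values cover (zeta, H] geometrically
        ells += [phi_inv(zeta * (1.0 + eps / 2.0) ** j) for j in range(1, n_high + 1)]
        return ells  # t = 1 + n_low + n_high thresholds in total

    # e.g., for v ~ U[0, 1]: phi(v) = 2v - 1, so phi_inv(y) = (y + 1) / 2,
    # and the monopoly reserve is phi_inv(0) = 0.5.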
Consider a fixed valuation profile v. Let i* denote the winner according to A, and i′ the winner according to the optimal auction M. If there is no winner, we interpret φ_{i*}(v_{i*}) and φ_{i′}(v_{i′}) as 0. Recall that M always awards the item to a bidder with the highest positive virtual value (or no one, if no such bidders exist). The definition of the thresholds immediately implies the following.
1. A only allocates to bidders with non-negative ironed virtual values.
2. If there is no tie (that is, there is a unique bidder at the highest level), then i′ = i*.
3. When there is a tie at level τ, the virtual value of the winner of A is close to that of M: if τ ∈ [0, ⌈1/ε′⌉] then φ_{i′}(v_{i′}) − φ_{i*}(v_{i*}) ≤ ζ · ε′; if τ ∈ [⌈1/ε′⌉, ⌈1/ε′⌉ + ⌈log_{1+ε/2}(H/ζ)⌉], then φ_{i*}(v_{i*}) ≥ (1 − ε/2) · φ_{i′}(v_{i′}).
These facts imply that

    E_v[Rev(A)] = E_v[φ_{i*}(v_{i*})] ≥ (1 − ε/2) · E_v[φ_{i′}(v_{i′})] − ζε′ = (1 − ε/2) · E_v[Rev(M)] − ζε′.    (3)

The first and final equalities follow because A's and M's allocations depend only on ironed virtual values, not on the values themselves; thus, the ironed virtual values are equal in expectation to the unironed virtual values, and hence to the expected revenue of the mechanisms (see [13], Chapter 3.5 for discussion).
As P[max_i v_i > ζ] ≥ γ, it must be that E[Rev(M)] ≥ γζ (a posted price of ζ will achieve this revenue). Combining this with (3), and setting ε′ = εγ/2, implies E_v[Rev(A)] ≥ (1 − ε) · E_v[Rev(M)]. ∎
Combining Theorems 2.1 and 3.4 yields the following Corollary 3.6.
Corollary 3.6 Let F be a product distribution with all bidders' valuations in [1, H]. Assume that t = Ω(1/ε + log_{1+ε} H) and

    m = O( (H/ε)^2 · (nt log(nt) log(H/ε) + log(1/δ)) ) = Õ(H^2 n / ε^3).

Then with probability at least 1 − δ, the single-item empirical revenue maximizer of C_t on a set of m samples from F has expected revenue at least (1 − ε) times that of the optimal auction.
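To get a feel for this scaling, here is a back-of-envelope Python sketch (the constants and polylogarithmic factors suppressed by the O(·) and Õ(·) notation are set to 1; the function name and output are ours and purely illustrative):

    import math

    def sample_bound(H, n, eps, delta):
        t = math.ceil(1 / eps + math.log(H, 1 + eps))    # levels per bidder
        d = n * t * math.log(n * t)                       # pseudo-dimension scale
        return (H / eps) ** 2 * (d * math.log(H / eps) + math.log(1 / delta))

    # sample_bound(10, 5, 0.1, 0.05) is roughly 4e7, illustrating the polynomial
    # dependence on H, n, and 1/eps (up to ignored constants).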
Open Questions
There are some significant opportunities for follow-up research. First, there is much to do on the
design of computationally efficient (in addition to sample-efficient) algorithms for learning a nearoptimal auction. The present work focuses on sample complexity, and our learning algorithms are
generally not computationally efficient.6 The general research agenda here is to identify auction
classes C for various settings such that:
1. C has low representation error;
2. C has small pseudo-dimension;
3. There is a polynomial-time algorithm to find an approximately revenue-maximizing auction
from C on a given set of samples.7
There are also interesting open questions on the statistical side, notably for multi-parameter problems. While the negative result in [11] rules out a universally good upper bound on the sample
complexity of learning a near-optimal mechanism in multi-parameter settings, we suspect that positive results are possible for several interesting special cases.
5
Recall from Section 2 that φ_i denotes the virtual valuation function of bidder i. (From here on, we always mean the ironed version of virtual values.) It is convenient to assume that these functions are strictly increasing (not just nondecreasing); this can be enforced at the cost of losing an arbitrarily small amount of revenue.
6
There is a clear parallel with computational learning theory [22]: while the information-theoretic foundations of classification (VC dimension, etc. [23]) have been long understood, this research area strives to understand which low-dimensional concept classes are learnable in polynomial time.
7
The sample-complexity and performance bounds implied by pseudo-dimension analysis, as in Theorem 2.1, hold with such an approximation algorithm, with the algorithm's approximation factor carrying through to the learning algorithm's guarantee. See also [4, 11].
References
[1] Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, NY, NY, USA, 1999.
[2] Moshe Babaioff, Nicole Immorlica, Brendan Lucier, and S. Matthew Weinberg. A simple and approximately optimal mechanism for an additive buyer. SIGecom Exch., 13(2):31–35, January 2015.
[3] Maria-Florina Balcan, Avrim Blum, and Yishay Mansour. Single price mechanisms for revenue maximization in unlimited supply combinatorial auctions. Technical report, Carnegie Mellon University, 2007.
[4] Maria-Florina Balcan, Avrim Blum, Jason D. Hartline, and Yishay Mansour. Reducing mechanism design to algorithm design via machine learning. Jour. of Comp. and System Sciences, 74(8):1245–1270, 2008.
[5] Yang Cai and Constantinos Daskalakis. Extreme-value theorems for optimal multidimensional pricing. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, pages 522–531, Palm Springs, CA, USA, Oct 2011. IEEE.
[6] Shuchi Chawla, Jason Hartline, and Robert Kleinberg. Algorithmic pricing via virtual valuations. In Proceedings of the 8th ACM Conf. on Electronic Commerce, pages 243–251, NY, NY, USA, 2007. ACM.
[7] Shuchi Chawla, Jason D. Hartline, David L. Malec, and Balasubramanian Sivan. Multi-parameter mechanism design and sequential posted pricing. In Proceedings of the Forty-second ACM Symposium on Theory of Computing, pages 311–320, NY, NY, USA, 2010. ACM.
[8] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 243–252, NY, NY, USA, 2014. SIAM.
[9] Nikhil Devanur, Jason Hartline, Anna Karlin, and Thach Nguyen. Prior-independent multi-parameter mechanism design. In Internet and Network Economics, pages 122–133. Springer, Singapore, 2011.
[10] Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. Revenue maximization with a single sample. In Proceedings of the 11th ACM Conf. on Electronic Commerce, pages 129–138, NY, NY, USA, 2010. ACM.
[11] Shaddin Dughmi, Li Han, and Noam Nisan. Sampling and representation complexity of revenue maximization. In Web and Internet Economics, volume 8877 of Lecture Notes in Computer Science, pages 277–291. Springer Intl. Publishing, Beijing, China, 2014.
[12] Edith Elkind. Designing and learning optimal finite support auctions. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 736–745. SIAM, 2007.
[13] Jason Hartline. Mechanism design and approximation. Jason Hartline, Chicago, Illinois, 2015.
[14] Jason D. Hartline and Tim Roughgarden. Simple versus optimal mechanisms. In ACM Conf. on Electronic Commerce, Stanford, CA, USA, 2009. ACM.
[15] Zhiyi Huang, Yishay Mansour, and Tim Roughgarden. Making the most of your samples. abs/1407.2479:1–3, 2014. URL http://arxiv.org/abs/1407.2479.
[16] Michael J. Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994.
[17] Andres Munoz Medina and Mehryar Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of The 31st Intl. Conf. on Machine Learning, pages 262–270, 2014.
[18] Roger B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981.
[19] David Pollard. Convergence of Stochastic Processes. David Pollard, New Haven, Connecticut, 1984.
[20] T. Roughgarden and O. Schrijvers. Ironing in the dark. Submitted, 2015.
[21] Tim Roughgarden, Inbal Talgam-Cohen, and Qiqi Yan. Supply-limiting mechanisms. In Proceedings of the 13th ACM Conf. on Electronic Commerce, pages 844–861, NY, NY, USA, 2012. ACM.
[22] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[23] Vladimir N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264–280, 1971.
[24] Andrew Chi-Chih Yao. An n-to-1 bidder reduction for multi-item auctions and its applications. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 92–109, San Diego, CA, USA, 2015.
generalized count linear dynamical systems
Lars Buesing
Department of Statistics
Columbia University
New York, NY 10027
lars@stat.columbia.edu
Yuanjun Gao
Department of Statistics
Columbia University
New York, NY 10027
yg2312@columbia.edu
Krishna V. Shenoy
Department of Electrical Engineering
Stanford University
Stanford, CA 94305
shenoy@stanford.edu
John P. Cunningham
Department of Statistics
Columbia University
New York, NY 10027
jpc2181@columbia.edu
Abstract
Latent factor models have been widely used to analyze simultaneous recordings of
spike trains from large, heterogeneous neural populations. These models assume
the signal of interest in the population is a low-dimensional latent intensity that
evolves over time, which is observed in high dimension via noisy point-process
observations. These techniques have been well used to capture neural correlations
across a population and to provide a smooth, denoised, and concise representation of high-dimensional spiking data. One limitation of many current models
is that the observation model is assumed to be Poisson, which lacks the flexibility to capture under- and over-dispersion that is common in recorded neural data,
thereby introducing bias into estimates of covariance. Here we develop the generalized count linear dynamical system, which relaxes the Poisson assumption by
using a more general exponential family for count data. In addition to containing Poisson, Bernoulli, negative binomial, and other common count distributions
as special cases, we show that this model can be tractably learned by extending recent advances in variational inference techniques. We apply our model to
data from primate motor cortex and demonstrate performance improvements over
state-of-the-art methods, both in capturing the variance structure of the data and
in held-out prediction.
1
Introduction
Many studies and theories in neuroscience posit that high-dimensional populations of neural spike
trains are a noisy observation of some underlying, low-dimensional, and time-varying signal of
interest. As such, over the last decade researchers have developed and used a number of methods
for jointly analyzing populations of simultaneously recorded spike trains, and these techniques have
become a critical part of the neural data analysis toolkit [1]. In the supervised setting, generalized
linear models (GLM) have used stimuli and spiking history as covariates driving the spiking of the
neural population [2, 3, 4, 5]. In the unsupervised setting, latent variable models have been used
to extract low-dimensional hidden structure that captures the variability of the recorded data, both
temporally and across the population of neurons [6, 7, 8, 9, 10, 11].
In both these settings, however, a limitation is that spike trains are typically assumed to be conditionally Poisson, given the shared signal [8, 10, 11]. The Poisson assumption, while offering algorithmic
conveniences in many cases, implies the property of equal dispersion: the conditional mean and variance are equal. This well-known property is particularly troublesome in the analysis of neural spike
trains, which are commonly observed to be either over- or under-dispersed [12] (variance greater
than or less than the mean). No doubly stochastic process with a Poisson observation can capture
under-dispersion, and while such a model can capture over-dispersion, it must do so at the cost of
erroneously attributing variance to the latent signal, rather than the observation process.
To allow for deviation from the Poisson assumption, some previous work has instead modeled the
data as Gaussian [7] or using more general renewal process models [13, 14, 15]; the former of
which does not match the count nature of the data and has been found inferior [8], and the latter of
which requires costly inference that has not been extended to the population setting. More general
distributions like the negative binomial have been proposed [16, 17, 18], but again these families do
not generalize to cases of under-dispersion. Furthermore, these more general distributions have not
yet been applied to the important setting of latent variable models.
Here we employ a count-valued exponential family distribution that addresses these needs and includes much previous work as special cases. We call this distribution the generalized count (GC)
distribution [19], and we offer here four main contributions: (i) we introduce the GC distribution and
derive a variety of commonly used distributions that are special cases, using the GLM as a motivating example (§2); (ii) we combine this observation likelihood with a latent linear dynamical systems prior to form a GC linear dynamical system (GCLDS; §3); (iii) we develop a variational learning algorithm by extending the current state-of-the-art methods [20] to the GCLDS setting (§3.1); and (iv) we show in data from the primate motor cortex that the GCLDS model provides superior predictive performance and in particular captures data covariance better than Poisson models (§4).
2 Generalized count distributions
We define the generalized count distribution as the family of count-valued probability distributions:
    p_GC(k; θ, g(·)) = exp(θk + g(k)) / (k! · M(θ, g(·))),   k ∈ N,    (1)

where θ ∈ R and the function g : N → R parameterizes the distribution, and M(θ, g(·)) = Σ_{k=0}^{∞} exp(θk + g(k)) / k! is the normalizing constant. The primary virtue of the GC family is that it recovers all common count-valued distributions as special cases and naturally parameterizes many common supervised and unsupervised models (as will be shown); for example, the function g(k) = 0 implies a Poisson distribution with rate parameter λ = exp(θ). Generalizations of the Poisson
distribution have been of interest since at least [21], and the paper [19] introduced the GC family
and proved two additional properties: first, that the expectation of any GC distribution is monotonically increasing in θ, for a fixed g(k); and second (perhaps most relevant to this study), concave (convex) functions g(·) imply under-dispersed (over-dispersed) GC distributions. Furthermore, often-desired features like zero truncation or zero inflation can also be naturally incorporated by modifying the g(0) value [22, 23]. Thus, with θ controlling the (log) rate of the distribution and g(·) controlling the "shape" of the distribution, the GC family provides a rich model class for capturing the spiking statistics of neural data. Other discrete distribution families do exist, such as the Conway-Maxwell-Poisson distribution [24] and ordered logistic/probit regression [25], but the GC family offers a rich exponential family, which makes computation somewhat easier and allows the g(·) functions to be interpreted.
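As a concrete illustration of (1), the following minimal Python sketch evaluates the GC pmf using a truncated normalizer (the truncation level K and the function names are our choices; the paper itself exploits the finite empirical support rather than a fixed K):

    import math

    def gc_pmf(k, theta, g, K=200):
        # unnormalized log-mass at each count j in 0..K: theta*j + g(j) - log j!
        logw = [theta * j + g(j) - math.lgamma(j + 1) for j in range(K + 1)]
        m = max(logw)
        log_M = m + math.log(sum(math.exp(w - m) for w in logw))  # log normalizer
        return math.exp(theta * k + g(k) - math.lgamma(k + 1) - log_M)

    # g(k) = 0 recovers a Poisson with rate exp(theta); a concave choice such as
    # g = lambda k: -0.1 * k**2 yields an under-dispersed count distribution.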
Figure 1 demonstrates the relevance of modeling dispersion in neural data analysis. The left panel
shows a scatterplot where each point is an individual neuron in a recorded population of neurons
from primate motor cortex (experimental details will be described in §4). Plotted are the mean and
variance of spiking activity of each neuron; activity is considered in 20ms bins. For reference, the
equi-dispersion line implied by a homogeneous Poisson process is plotted in red, and note further
that all doubly stochastic Poisson models would have an implied dispersion above this Poisson line.
These data clearly demonstrate meaningful under-dispersion, underscoring the need for the present
advance. The right panel demonstrates the appropriateness of the GC model class, showing that a
convex/linear/concave function g(k) will produce the expected over/equal/under-dispersion. Given
the left panel, we expect under-dispersed GC distributions to be most relevant, but indeed many neural datasets also demonstrate over- and equi-dispersion [12], highlighting the need for a flexible observation family.
[Figure 1 here. Left panel: variance vs. mean firing rate per time bin (20 ms), with neurons 1 and 2 marked. Right panel: variance vs. expectation of the GC distribution for convex, linear, and concave g.]
Figure 1: Left panel: mean firing rate and variance of neurons in primate motor cortex during the peri-movement period of a reaching experiment (see §4). The data exhibit under-dispersion, especially for high firing-rate neurons. The two marked neurons will be analyzed in detail in Figure 2. Right panel: the expectation and variance of the GC distribution with different choices of the function g.
To illustrate the generality of the GC family and to lay the foundation for our unsupervised learning
approach, we consider briefly the case of supervised learning of neural spike train data, where generalized linear models (GLM) have been used extensively [4, 26, 17]. We define GCGLM as that which
models a single neuron with count data y_i ∈ N and associated covariates x_i ∈ R^p (i = 1, ..., n) as

    y_i ∼ GC(θ(x_i), g(·)), where θ(x_i) = x_i β.    (2)

Here GC(θ, g(·)) denotes a random variable distributed according to (1), and β ∈ R^p are the regression coefficients. This GCGLM model is highly general. Table 1 shows that many of the commonly
used count-data models are special cases of GCGLM, by restricting the g(·) function to have a certain parametric form. In addition to this convenient generality, one benefit of our parametrization of the GC model is that the curvature of g(·) directly measures the extent to which the data deviate from the Poisson assumption, allowing us to meaningfully interrogate the form of g(·). Note that (2) has no intercept term because it can be absorbed in the g(·) function as a linear term αk (see Table 1).
Unlike previous GC work [19], our parameterization implies that maximum likelihood parameter estimation (MLE) is a tractable convex program, which can be seen by considering:

    (β̂, ĝ(·)) = argmax_{(β, g(·))} Σ_{i=1}^{n} log p(y_i) = argmax_{(β, g(·))} Σ_{i=1}^{n} [ (x_i β) y_i + g(y_i) − log M(x_i β, g(·)) ].    (3)
First note that, although we have to optimize over a function g(·) that is defined on all non-negative integers, we can exploit the empirical support of the distribution to produce a finite optimization problem. Namely, for any k* that is not achieved by any data point y_i (i.e., the count #{i | y_i = k*} = 0), the MLE for g(k*) must be −∞, and thus we only need to optimize g(k) for k that have empirical support in the data. Thus g(k) is a finite-dimensional vector. To avoid the potential overfitting caused by truncation of g_i(·) beyond the empirical support of the data, we can enforce a large (finite) support and impose a quadratic penalty on the second difference of g(·), to encourage linearity in g(·) (which corresponds to a Poisson distribution). Second, note that we can fix g(0) = 0 without loss of generality, which ensures model identifiability. With these constraints, the remaining g(k) values can be fit as free parameters or as convex-constrained (a set of linear inequalities on g(k); similarly for the concave case). Finally, problem convexity is ensured as all terms are either linear or linear within the log-sum-exp function M(·), leading to fast optimization algorithms [27].
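A hedged sketch of this convex MLE, jointly over β and the g(k) values on {0, ..., K} with g(0) = 0 (we omit the second-difference penalty and shape constraints discussed above; all names are ours, and we rely only on generic numpy/scipy routines):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    def fit_gcglm(X, y, K):
        # X: (n, p) covariates; y: (n,) integer counts with max(y) <= K
        n, p = X.shape
        ks = np.arange(K + 1)
        logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, K + 1)))))

        def nll(params):
            beta = params[:p]
            g = np.concatenate(([0.0], params[p:]))        # fixes g(0) = 0
            theta = X @ beta
            logw = theta[:, None] * ks[None, :] + g[None, :] - logfact[None, :]
            logM = logsumexp(logw, axis=1)                  # log M(theta_i, g)
            return -np.sum(theta * y + g[y] - logfact[y] - logM)

        res = minimize(nll, np.zeros(p + K), method="L-BFGS-B")
        return res.x[:p], np.concatenate(([0.0], res.x[p:]))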
3 Generalized count linear dynamical system model
With the GC distribution in hand, we now turn to the unsupervised setting, namely coupling the GC observation model with a latent, low-dimensional dynamical system. Our model is a generalization of linear dynamical systems with Poisson likelihoods (PLDS), which have been extensively used for analysis of populations of neural spike trains [8, 11, 28, 29].
Table 1: Special cases of GCGLM. For all models, the GCGLM parametrization for θ is only associated with the slope, θ(x) = βx, and the intercept α is absorbed into the g(·) function. In all cases we have g(k) = −∞ outside the stated support of the distribution. Whenever unspecified, the support of the distribution and the domain of the g(·) function are the non-negative integers N.

Logistic regression (e.g., [25]). Typical parameterization: P(y = k) = exp(k(α + xβ)) / (1 + exp(α + xβ)). GCGLM parametrization: g(k) = αk; k = 0, 1.

Poisson regression (e.g., [4, 26]). Typical parameterization: P(y = k) = (λ^k / k!) exp(−λ), with λ = exp(α + xβ). GCGLM parametrization: g(k) = αk.

Adjacent category regression (e.g., [25]). Typical parameterization: P(y = k + 1) / P(y = k) = exp(α_k + xβ). GCGLM parametrization: g(k) = Σ_{i=1}^{k} (α_{i−1} + log i); k = 0, 1, ..., K.

Negative binomial regression (e.g., [17, 18]). Typical parameterization: P(y = k) = ((k + r − 1)! / (k! (r − 1)!)) (1 − p)^r p^k, with p = exp(α + xβ). GCGLM parametrization: g(k) = αk + log (k + r − 1)!.

COM-Poisson regression (e.g., [24]). Typical parameterization: P(y = k) = (λ^k / (k!)^ν) / Σ_{j=0}^{+∞} (λ^j / (j!)^ν), with λ = exp(α + xβ). GCGLM parametrization: g(k) = αk + (1 − ν) log k!.
Denoting y_rti as the observed spike-count of neuron i ∈ {1, ..., N} at time t ∈ {1, ..., T} on experimental trial r ∈ {1, ..., R}, the PLDS assumes that the spike activity of neurons is a noisy Poisson observation of an underlying low-dimensional latent state x_rt ∈ R^p (where p ≪ N), such that:

    y_rti | x_rt ∼ Poisson( exp(c_i^T x_rt + d_i) ).    (4)

Here C = [c_1 ... c_N]^T ∈ R^{N×p} is the factor loading matrix mapping the latent state x_rt to a log rate, with time- and trial-invariant baseline log rate d ∈ R^N. Thus the vector C x_rt + d denotes the vector of log rates for trial r and time t. Critically, the latent state x_rt can be interpreted as the underlying signal of interest that acts as the "common input signal" to all neurons, which is modeled a priori as a linear Gaussian dynamical system (to capture temporal correlations):

    x_r1 ∼ N(μ_1, Q_1),    x_r(t+1) | x_rt ∼ N(A x_rt + b_t, Q),    (5)

where μ_1 ∈ R^p and Q_1 ∈ R^{p×p} parameterize the initial state. The transition matrix A ∈ R^{p×p} and innovations covariance Q ∈ R^{p×p} parameterize the dynamical state update. The optional term b_t ∈ R^p allows the model to capture a time-varying firing rate that is fixed across experimental trials. The PLDS has been widely used and has been shown to outperform other models in terms of predictive performance, including in particular the simpler Gaussian linear dynamical system [8].
The PLDS model is naturally extended to what we term the generalized count linear dynamical
system (GCLDS) by modifying equation (4) using a GC likelihood:
    y_rti | x_rt ∼ GC( c_i^T x_rt, g_i(·) ),    (6)

where g_i(·) is the g(·) function in (1) that models the dispersion for neuron i. Similar to the GLM, for identifiability, the baseline rate parameter d is dropped in (6) and we can fix g_i(0) = 0. As with the GCGLM, one can recover preexisting models, such as an LDS with a Bernoulli observation, as special cases of GCLDS (see Table 1).
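To make the generative model concrete, here is a minimal sampling sketch for (5)-(6). All names are ours; for brevity we omit b_t and share a single vectorized g across neurons (the model allows one g_i per neuron), and gc_sample draws from a truncated GC pmf.

    import numpy as np

    def gc_sample(rng, theta, g, K=50):
        # draw one count from GC(theta, g) truncated at K; g maps array -> array
        ks = np.arange(K + 1)
        logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, K + 1)))))
        logw = theta * ks + g(ks) - logfact
        w = np.exp(logw - logw.max())
        return rng.choice(ks, p=w / w.sum())

    def sample_gclds(rng, A, Q, mu1, Q1, C, g, T):
        p, N = A.shape[0], C.shape[0]
        x = np.zeros((T, p))
        y = np.zeros((T, N), dtype=int)
        x[0] = rng.multivariate_normal(mu1, Q1)
        for t in range(1, T):
            x[t] = rng.multivariate_normal(A @ x[t - 1], Q)   # latent dynamics (5)
        for t in range(T):
            for i in range(N):
                y[t, i] = gc_sample(rng, C[i] @ x[t], g)      # GC observations (6)
        return x, y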
3.1 Inference and learning in GCLDS
As is common in LDS models, we use expectation-maximization to learn the parameters Θ = {A, {b_t}_t, Q, Q_1, μ_1, {g_i(·)}_i, C}. Because the required expectations do not admit a closed form as in previous similar work [8, 30], we required an additional approximation step, which we implemented via a variational lower bound. Here we briefly outline this algorithm and our novel contributions, and we refer the reader to the full details in the supplementary materials.
First, each E-step requires calculating p(x_r | y_r, Θ) for each trial r ∈ {1, ..., R} (the conditional distribution of the latent trajectories x_r = {x_rt}_{1≤t≤T}, given observations y_r = {y_rti}_{1≤t≤T, 1≤i≤N} and parameter Θ). For ease of notation below we drop the trial index r. These posterior distributions are intractable, and in the usual way we make a normal approximation p(x | y, Θ) ≈ q(x) = N(m, V). We identify the optimal (m, V) by maximizing a variational Bayesian lower bound (the so-called evidence lower bound or "ELBO") over the variational parameters m, V as:

    L(m, V) = E_{q(x)}[ log( p(x | Θ) / q(x) ) ] + E_{q(x)}[ log p(y | x, Θ) ]    (7)
            = (1/2) ( log |V| − tr[Σ^{−1} V] − (m − μ)^T Σ^{−1} (m − μ) ) + Σ_{t,i} E_{q(x_t)}[ log p(y_ti | x_t) ] + const,
which is the usual form to be maximized in a variational Bayesian EM (VBEM) algorithm [11]. Here μ ∈ R^{pT} and Σ ∈ R^{pT×pT} are the expectation and covariance of x given by the LDS prior in (5). The first term of (7) is the negative Kullback-Leibler divergence between the variational distribution and the prior distribution, encouraging the variational distribution to be close to the prior. The second term involving the GC likelihood encourages the variational distribution to explain the observations well. The integrations in the second term are intractable (this is in contrast to the PLDS case, where all integrals can be calculated analytically [11]). Below we use the ideas of [20] to derive a tractable, further lower bound. Here the term E_{q(x_t)}[ log p(y_ti | x_t) ] can be reduced to:
    E_{q(x_t)}[ log p(y_ti | x_t) ] = E_{q(η_ti)}[ log p_GC(y_ti | η_ti, g_i(·)) ]
        = E_{q(η_ti)}[ y_ti η_ti + g_i(y_ti) − log y_ti! − log Σ_{k=0}^{K} (1/k!) exp(k η_ti + g_i(k)) ],    (8)
where η_ti = c_i^T x_t. Denoting ψ_tik = k η_ti + g_i(k) − log(k!) = k c_i^T x_t + g_i(k) − log k!, (8) reduces to E_{q(ψ)}[ ψ_{ti,y_ti} − log( Σ_{0≤k≤K} exp(ψ_tik) ) ]. Since ψ_tik is a linear transformation of x_t, under the variational distribution ψ_tik is also normally distributed, ψ_tik ∼ N(h_tik, ρ_tik), with h_tik = k c_i^T m_t + g_i(k) − log k! and ρ_tik = k^2 c_i^T V_t c_i, where (m_t, V_t) are the expectation and covariance matrix of x_t under the variational distribution. Now we can derive a lower bound for the expectation by Jensen's inequality:

    E_{q(η_ti)}[ ψ_{ti,y_ti} − log Σ_k exp(ψ_tik) ] ≥ h_{ti,y_ti} − log Σ_{k=0}^{K} exp(h_tik + ρ_tik / 2) =: f_ti(h_ti, ρ_ti).    (9)
Combining (7) and (9), we get a tractable variational lower bound:

    L(m, V) ≥ L*(m, V) = E_{q(x)}[ log( p(x | Θ) / q(x) ) ] + Σ_{t,i} f_ti(h_ti, ρ_ti).    (10)
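The per-observation bound f_ti in (9) is cheap to evaluate. A minimal sketch (names ours) that first forms the Gaussian moments of ψ_tik under q and then applies the bound:

    import numpy as np

    def psi_moments(m_t, V_t, c_i, g, K):
        # means h_tik and variances rho_tik of psi_tik for k = 0..K
        ks = np.arange(K + 1)
        logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, K + 1)))))
        h = ks * (c_i @ m_t) + g(ks) - logfact
        rho = ks**2 * (c_i @ V_t @ c_i)
        return h, rho

    def f_ti(y, h, rho):
        # Jensen bound (9), computed with a stable log-sum-exp
        a = h + rho / 2.0
        m = a.max()
        return h[y] - (m + np.log(np.exp(a - m).sum()))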
For computational convenience, we complete the E-step by maximizing the new evidence lower bound L* via its dual [20]. Full details are derived in the supplementary materials.
The M-step then requires maximization of L* over Θ. Similar to the PLDS case, the set of parameters involving the latent Gaussian dynamics (A, {b_t}_t, Q, Q_1, μ_1) can be optimized analytically [8]. Then, the parameters involving the GC likelihood (C, {g_i}_i) can be optimized efficiently via convex optimization techniques [27] (full details in the supplementary material).
In practice we initialize our VBEM algorithm with a Laplace-EM algorithm, and we initialize each
E-step in VBEM with a Laplace approximation, which empirically gives substantial runtime advantages, and always produces a sensible optimum. With the above steps, we have a fully specified
learning and inference algorithm, which we now use to analyze real neural data. Code can be found
at https://bitbucket.org/mackelab/pop_spike_dyn.
4 Experimental results
We analyze recordings of populations of neurons in the primate motor cortex during a reaching
experiment (G20040123), details of which have been described previously [7, 8]. In brief, a rhesus
macaque monkey executed 56 cued reaches from a central target to 14 peripheral targets. Before the
subject was cued to move (the go cue), it was given a preparatory period to plan the upcoming reach.
Each trial was thus separated into two temporal epochs, each of which has been suggested to have
their own meaningful dynamical structure [9, 31]. We separately analyze these two periods: the
preparatory period (1200ms period preceding the go cue), and the reaching period (50ms before to
370ms after the movement onset). We analyzed data across all 14 reach targets, and results were
highly similar; in the following for simplicity we show results for a single reaching target (one 56
trial dataset). Spike trains were simultaneously recorded from 96 electrodes (using a Blackrock
multi-electrode array). We bin neural activity at 20ms. To include only units with robust activity, we
remove all units with mean rates less than 1 spike per second on average, resulting in 81 units for the
preparatory period, and 85 units for the reaching period. As we have already shown in Figure 1, the
reaching period data are strongly under-dispersed, even absent conditioning on the latent dynamics
(implying further under-dispersion in the observation noise). Data during the preparatory period are
particularly interesting due to its clear cross-correlation structure.
To fully assess the GCLDS model, we analyze four LDS models: (i) GCLDS-full, where a separate function g_i(·) is fitted for each neuron i ∈ {1, ..., N}; (ii) GCLDS-simple, where a single function g(·) is shared across all neurons (up to a linear term modulating the baseline firing rate); (iii) GCLDS-linear, where a truncated linear function g_i(·) is fitted, which corresponds to truncated-Poisson observations; and (iv) PLDS, the Poisson case, recovered when g_i(·) is a linear function on all nonnegative integers.
In all cases we use the learning and inference of ?3.1. We initialize the PLDS using nuclear norm
minimization [10], and initialize the GCLDS models with the fitted PLDS. For all models we vary
the latent dimension p from 2 to 8.
To demonstrate the generality of the GCLDS and verify our algorithmic implementation, we first
considered extensive simulated data with different GCLDS parameters (not shown). In all cases
GCLDS model outperformed PLDS in terms of negative log-likelihood (NLL) on test data, with
high statistical significance. We also compared the algorithms on PLDS data and found very similar performance between GCLDS and PLDS, implying that GCLDS does not significantly overfit,
despite the additional free parameters and computation due to the g(?) functions.
Analysis of the reaching period. Figure 2 compares the fits of the two neural units highlighted
in Figure 1. These two neurons are particularly high-firing (during the reaching period), and thus
should be most indicative of the differences between the PLDS and GCLDS models. The left column
of Figure 2 shows the fitted g(?) functions the for four LDS models being compared. It is apparent in
both the GCLDS-full and GCLDS-simple cases that the fitted g function is concave (though it was
not constrained to be so), agreeing with the under-dispersion observed in Figure 1.
The middle column of Figure 2 shows that all four cases produce models that fit the mean activity of
these two neurons very well. The black trace shows the empirical mean of the observed data, and all
four lines (highly overlapping and thus not entirely visible) follow that empirical mean closely. This
result is confirmatory that the GCLDS matches the mean and the current state-of-the-art PLDS.
More importantly, we have noted the key feature of the GCLDS is matching the dispersion of the
data, and thus we expect it should outperform the PLDS in fitting variance. The right column of
Figure 2 shows this to be the case: the PLDS significantly overestimates the variance of the data.
The GCLDS-full model tracks the empirical variance quite closely in both neurons. The GCLDSlinear result shows that only adding truncation does not materially improve the estimate of variance
and dispersion: the dotted blue trace is quite far from the true data in black, and indeed it is quite
close to the Poisson case. The GCLDS-simple still outperforms the PLDS case, but it does not
model the dispersion as effectively as the GCLDS-full case, where each neuron has its own dispersion
parameter (as Figure 1 suggests). The natural next question is whether this outperformance is simply
in these two illustrative neurons, or if it is a population effect. Figure 3 shows that indeed the
population is much better modeled by the GCLDS model than by competing alternatives. The left
and middle panels of Figure 3 show leave-one-neuron-out prediction error of the LDS models. For
each reaching target we use 4-fold cross-validation and the results are averaged across all 14 reaching
[Figure 2 here. Two rows (neuron 1, neuron 2) of three panels each. Left column: g(k) vs. k (spikes per bin); middle and right columns: mean and variance vs. time after movement onset (ms). Legend: observed data, PLDS, GCLDS-full, GCLDS-simple, GCLDS-linear.]
Figure 2: Examples of fitting results for selected high-firing neurons. Each row corresponds to one neuron, as marked in the left panel of Figure 1. Left column: fitted g(·) using GCLDS and PLDS; middle and right columns: fitted mean and variance of PLDS and GCLDS. See text for details.
[Figure 3 here. Left: % MSE reduction vs. latent dimension; middle: % NLL reduction vs. latent dimension (curves: PLDS, GCLDS-full, GCLDS-simple, GCLDS-linear); right: fitted vs. observed variance for PLDS and GCLDS-full.]
Figure 3: Goodness-of-fit for monkey data during the reaching period. Left panel: percentage reduction of mean-squared error (MSE) compared to the baseline (homogeneous Poisson process); middle panel: percentage reduction of predictive negative log likelihood (NLL) compared to the baseline; right panel: fitted variance of PLDS and GCLDS for all neurons compared to the observed data. Each point gives the observed and fitted variance of a single neuron, averaged across time.
targets. Critically, these predictions are made for all neurons in the population. To give informative
performance metrics, we defined baseline performance as a straightforward, homogeneous Poisson
process for each neuron, and compare the LDS models with the baseline using percentage reduction
of mean-squared-error and negative log likelihood (thus higher error reduction numbers imply better
performance). The mean-squared-error (MSE; left panel) shows that the GCLDS offers a minor
improvement (reduction in MSE) beyond what is achieved by the PLDS. Though these standard
error bars suggest an insignificant result, a paired t-test is indeed significant (p < 10^-8). Nonetheless
this minor result agrees with the middle column of Figure 2, since predictive MSE is essentially a
measurement of the mean.
In the middle panel of Figure 3, we see that the GCLDS-full significantly outperforms alternatives
in predictive log likelihood across the population (p < 10^-10, paired t-test). Again this largely
agrees with the implication of Figure 2, as negative log likelihood measures both the accuracy of
mean and variance. The right panel of Figure 3 shows that the GCLDS fits the variance of the data
exceptionally well across the population, unlike the PLDS.
Analysis of the preparatory period. To augment the data analysis, we also considered the
preparatory period of neural activity. When we repeated the analyses of Figure 3 on this dataset,
the same results occurred: the GCLDS model produced concave (or close to concave) g functions
and outperformed the PLDS model both in predictive MSE (modestly) and negative log likelihood
(significantly). For brevity we do not show this analysis here. Instead, we here compare the temporal
cross-covariance, which is also a common analysis of interest in neural data analysis [8, 16, 32] and,
as noted, is particularly salient in preparatory activity. Figure 4 shows that the GCLDS model fits both
the temporal cross-covariance (left panel) and variance (right panel) considerably better than PLDS,
which overestimates both quantities.
[Figure 4 here. Left: temporal cross-covariance (×10^-3) vs. time lag (ms), with curves for recorded data, GCLDS-full, and PLDS; right: fitted vs. observed variance for PLDS and GCLDS-full.]
Figure 4: Goodness-of-fit for monkey data during the preparatory period. Left panel: temporal cross-covariance averaged over all 81 units during the preparatory period, compared to the fitted cross-covariance by PLDS and GCLDS-full. Right panel: fitted variance of PLDS and GCLDS-full for all neurons compared to the observed data (averaged across time).
5 Discussion
In this paper we showed that the GC family better captures the conditional variability of neural
spiking data, and further improves inference of key features of interest in the data. We note that
it is straightforward to incorporate external stimuli and spike history in the model as covariates, as
has been done previously in the Poisson case [8]. Beyond the GCGLM and GCLDS, the GC family
is also extensible to other models that have been used in this setting, such as exponential family
PCA [10] and subspace clustering [11]. The cost of this performance, compared to the PLDS, is an
extra parameterization (the g_i(·) functions) and the corresponding algorithmic complexity. While
we showed that there seems to be no empirical sacrifice to doing so, it is likely that data with few
examples and reasonably Poisson dispersion may cause GCLDS to overfit.
Acknowledgments
JPC received funding from a Sloan Research Fellowship, the Simons Foundation (SCGB#325171
and SCGB#325233), the Grossman Center at Columbia University, and the Gatsby Charitable Trust.
Thanks to Byron Yu, Gopal Santhanam and Stephen Ryu for providing the cortical data.
References
[1] J. P. Cunningham and B. M. Yu, "Dimensionality reduction for large-scale neural recordings," Nature Neuroscience, vol. 17, no. 11, pp. 1500–1509, 2014.
[2] L. Paninski, "Maximum likelihood estimation of cascade point-process neural encoding models," Network: Computation in Neural Systems, vol. 15, no. 4, pp. 243–262, 2004.
[3] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown, "A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects," Journal of Neurophysiology, vol. 93, no. 2, pp. 1074–1089, 2005.
[4] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, "Spatio-temporal correlations and visual signalling in a complete neuronal population," Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[5] M. Vidne, Y. Ahmadian, J. Shlens, J. W. Pillow, J. Kulkarni, A. M. Litke, E. Chichilnisky, E. Simoncelli, and L. Paninski, "Modeling the impact of common noise inputs on the network activity of retinal ganglion cells," Journal of Computational Neuroscience, vol. 33, no. 1, pp. 97–121, 2012.
[6] J. E. Kulkarni and L. Paninski, "Common-input models for multiple neural spike-train data," Network: Computation in Neural Systems, vol. 18, no. 4, pp. 375–407, 2007.
[7] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, "Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity," in NIPS, pp. 1881–1888, 2009.
[8] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," in NIPS, pp. 1350–1358, 2011.
[9] B. Petreska, B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, "Dynamical segmentation of single trials from population neural data," in NIPS, pp. 756–764, 2011.
[10] D. Pfau, E. A. Pnevmatikakis, and L. Paninski, "Robust learning of low-dimensional dynamics from large neural ensembles," in NIPS, pp. 2391–2399, 2013.
[11] L. Buesing, T. A. Machado, J. P. Cunningham, and L. Paninski, "Clustered factor analysis of multineuronal spike data," in NIPS, pp. 3500–3508, 2014.
[12] M. M. Churchland, B. M. Yu, J. P. Cunningham, L. P. Sugrue, M. R. Cohen, G. S. Corrado, W. T. Newsome, A. M. Clark, P. Hosseini, B. B. Scott, et al., "Stimulus onset quenches neural variability: a widespread cortical phenomenon," Nature Neuroscience, vol. 13, no. 3, pp. 369–378, 2010.
[13] J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Inferring neural firing rates from spike trains using Gaussian processes," in NIPS, pp. 329–336, 2007.
[14] R. P. Adams, I. Murray, and D. J. MacKay, "Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities," in ICML, pp. 9–16, ACM, 2009.
[15] S. Koyama, "On the spike train variability characterized by variance-to-mean power relationship," Neural Computation, 2015.
[16] R. L. Goris, J. A. Movshon, and E. P. Simoncelli, "Partitioning neuronal variability," Nature Neuroscience, vol. 17, no. 6, pp. 858–865, 2014.
[17] J. Scott and J. W. Pillow, "Fully Bayesian inference for neural models with negative-binomial spiking," in NIPS, pp. 1898–1906, 2012.
[18] S. W. Linderman, R. Adams, and J. Pillow, "Inferring structured connectivity from spike trains under negative-binomial generalized linear models," COSYNE, 2015.
[19] J. del Castillo and M. Pérez-Casany, "Overdispersed and underdispersed Poisson generalizations," Journal of Statistical Planning and Inference, vol. 134, no. 2, pp. 486–500, 2005.
[20] M. Emtiyaz Khan, A. Aravkin, M. Friedlander, and M. Seeger, "Fast dual variational inference for non-conjugate latent Gaussian models," in ICML, pp. 951–959, 2013.
[21] C. R. Rao, "On discrete distributions arising out of methods of ascertainment," Sankhyā: The Indian Journal of Statistics, Series A, pp. 311–324, 1965.
[22] D. Lambert, "Zero-inflated Poisson regression, with an application to defects in manufacturing," Technometrics, vol. 34, no. 1, pp. 1–14, 1992.
[23] J. Singh, "A characterization of positive Poisson distribution and its statistical application," SIAM Journal on Applied Mathematics, vol. 34, no. 3, pp. 545–548, 1978.
[24] K. F. Sellers and G. Shmueli, "A flexible regression model for count data," The Annals of Applied Statistics, pp. 943–961, 2010.
[25] C. V. Ananth and D. G. Kleinbaum, "Regression models for ordinal responses: a review of methods and applications," International Journal of Epidemiology, vol. 26, no. 6, pp. 1323–1333, 1997.
[26] L. Paninski, J. Pillow, and J. Lewi, "Statistical models for neural encoding, decoding, and optimal stimulus design," Progress in Brain Research, vol. 165, pp. 493–507, 2007.
[27] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2009.
[28] L. Buesing, J. H. Macke, and M. Sahani, "Learning stable, regularised latent models of neural population dynamics," Network: Computation in Neural Systems, vol. 23, no. 1-2, pp. 24–47, 2012.
[29] L. Buesing, J. H. Macke, and M. Sahani, "Estimating state and parameters in state-space models of spike trains," in Advanced State Space Methods for Neural and Clinical Data, Cambridge Univ. Press, 2015.
[30] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski, "Population decoding of motor cortical activity using a generalized linear model with hidden states," Journal of Neuroscience Methods, vol. 189, no. 2, pp. 267–280, 2010.
[31] M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, "Neural population dynamics during reaching," Nature, vol. 487, no. 7405, pp. 51–56, 2012.
[32] M. R. Cohen and A. Kohn, "Measuring and interpreting neuronal correlations," Nature Neuroscience, vol. 14, no. 7, pp. 811–819, 2011.
5,266 | 5,768 | Measuring Sample Quality with Stein's Method
Jackson Gorham
Department of Statistics
Stanford University
Lester Mackey
Department of Statistics
Stanford University
Abstract
To improve the efficiency of Monte Carlo estimation, practitioners are turning to
biased Markov chain Monte Carlo procedures that trade off asymptotic exactness
for computational speed. The reasoning is sound: a reduction in variance due to
more rapid sampling can outweigh the bias introduced. However, the inexactness
creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic
bias. To address these challenges, we introduce a new computable quality measure
based on Stein?s method that bounds the discrepancy between sample and target
expectations over a large class of test functions. We use our tool to compare exact,
biased, and deterministic sample sequences and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance
tradeoffs in posterior inference.
1
Introduction
When faced with a complex target distribution, one often turns to Markov chain Monte Carlo
(MCMC) [1] to approximate intractable expectations E_P[h(Z)] = ∫_X p(x)h(x) dx with asymptotically exact sample estimates E_Q[h(X)] = Σ_{i=1}^n q(x_i)h(x_i). These complex targets commonly
arise as posterior distributions in Bayesian inference and as candidate distributions in maximum
likelihood estimation [2]. In recent years, researchers [e.g., 3, 4, 5] have introduced asymptotic bias
into MCMC procedures to trade off asymptotic correctness for improved sampling speed. The rationale is that more rapid sampling can reduce the variance of a Monte Carlo estimate and hence
outweigh the bias introduced. However, the added flexibility introduces new challenges for sampler
and parameter selection, since standard sample quality measures, like effective sample size, asymptotic variance, trace and mean plots, and pooled and within-chain variance diagnostics, presume
eventual convergence to the target [1] and hence do not account for asymptotic bias.
To address this shortcoming, we develop a new measure of sample quality suitable for comparing
asymptotically exact, asymptotically biased, and even deterministic sample sequences. The quality
measure is based on Stein?s method and is attainable by solving a linear program. After outlining
our design criteria in Section 2, we relate the convergence of the quality measure to that of standard
probability metrics in Section 3, develop a streamlined implementation based on geometric spanners
in Section 4, and illustrate applications to hyperparameter selection, convergence rate assessment,
and the quantification of bias-variance tradeoffs in posterior inference in Section 5. We discuss
related work in Section 6 and defer all proofs to the appendix.
Notation We denote the ℓ2, ℓ1, and ℓ∞ norms on ℝ^d by ‖·‖₂, ‖·‖₁, and ‖·‖∞ respectively. We will often refer to a generic norm ‖·‖ on ℝ^d with associated dual norms ‖w‖* ≜ sup_{v∈ℝ^d : ‖v‖=1} ⟨w, v⟩ for vectors w ∈ ℝ^d, ‖M‖* ≜ sup_{v∈ℝ^d : ‖v‖=1} ‖Mv‖* for matrices M ∈ ℝ^{d×d}, and ‖T‖* ≜ sup_{v∈ℝ^d : ‖v‖=1} ‖T[v]‖* for tensors T ∈ ℝ^{d×d×d}. We denote the j-th standard basis vector by e_j, the partial derivative ∂/∂x_k by ∇_k, and the gradient of any ℝ^d-valued function g by ∇g with components (∇g(x))_{jk} ≜ ∇_k g_j(x).
2
Quality Measures for Samples
Consider a target distribution P with open convex support X ⊆ ℝ^d and continuously differentiable density p. We assume that p is known up to its normalizing constant and that exact integration under P is intractable for most functions of interest. We will approximate expectations under P with the aid of a weighted sample, a collection of distinct sample points x₁, ..., x_n ∈ X with weights q(x_i) encoded in a probability mass function q. The probability mass function q induces a discrete distribution Q and an approximation E_Q[h(X)] = Σ_{i=1}^n q(x_i)h(x_i) for any target expectation E_P[h(Z)]. We make no assumption about the provenance of the sample points; they may arise as random draws from a Markov chain or even be deterministically selected.
Our goal is to compare the fidelity of different samples approximating a common target distribution.
That is, we seek to quantify the discrepancy between EQ and EP in a manner that (i) detects when
a sequence of samples is converging to the target, (ii) detects when a sequence of samples is not
converging to the target, and (iii) is computationally feasible. A natural starting point is to consider
the maximum deviation between sample and target expectations over a class of real-valued test
functions H,
    d_H(Q, P) = sup_{h∈H} |E_Q[h(X)] − E_P[h(Z)]|.   (1)
When the class of test functions is sufficiently large, the convergence of d_H(Q_m, P) to zero implies that the sequence of sample measures (Q_m)_{m≥1} converges weakly to P. In this case, the expression (1) is termed an integral probability metric (IPM) [6]. By varying the class of test functions H, we can recover many well-known probability metrics as IPMs, including the total variation distance, generated by H = {h : X → ℝ | sup_{x∈X} |h(x)| ≤ 1}, and the Wasserstein distance (also known as the Kantorovich-Rubenstein or earth mover's distance), d_{W‖·‖}, generated by
    H = W_{‖·‖} ≜ {h : X → ℝ | sup_{x≠y∈X} |h(x) − h(y)| / ‖x − y‖ ≤ 1}.
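As a point of reference for the experiments to come, the Wasserstein distance admits a closed form in one dimension, the integrated CDF difference [22], which makes a quick numerical check easy. Below is a minimal Python sketch, assuming NumPy and SciPy; the quantile grid standing in for P is an illustrative discretization rather than anything used in the paper.

import numpy as np
from scipy.stats import norm, wasserstein_distance

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)               # sample Q: i.i.d. draws from P = N(0, 1)
z = norm.ppf((np.arange(n) + 0.5) / n)   # quantile grid standing in for P itself

# In one dimension, wasserstein_distance integrates |F_Q - F_P| exactly
# between the two point sets.
print("d_W(Q, P) ~", wasserstein_distance(x, z))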
The primary impediment to adopting an IPM as a sample quality measure is that exact computation
is typically infeasible when generic integration under P is intractable. However, we could skirt this
intractability by focusing on classes of test functions with known expectation under P . For example,
if we consider only test functions h for which EP [h(Z)] = 0, then the IPM value dH (Q, P ) is the
solution of an optimization problem depending on Q alone. This, at a high level, is our strategy,
but many questions remain. How do we select the class of test functions h? How do we know that
the resulting IPM will track convergence and non-convergence of a sample sequence (Desiderata
(i) and (ii))? How do we solve the resulting optimization problem in practice (Desideratum (iii))?
To address the first two of these questions, we draw upon tools from Charles Stein's method of
characterizing distributional convergence. We return to the third question in Section 4.
3
Stein's Method
Stein's method [7] for characterizing convergence in distribution classically proceeds in three steps:
1. Identify a real-valued operator T acting on a set G of ℝ^d-valued¹ functions of X for which
    E_P[(T g)(Z)] = 0 for all g ∈ G.   (2)
Together, T and G define the Stein discrepancy,
    S(Q, T, G) ≜ sup_{g∈G} |E_Q[(T g)(X)]| = sup_{g∈G} |E_Q[(T g)(X)] − E_P[(T g)(Z)]| = d_{T G}(Q, P),
an IPM-type quality measure with no explicit integration under P .
2. Lower bound the Stein discrepancy by a familiar convergence-determining IPM dH . This
step can be performed once, in advance, for large classes of target distributions and ensures
that, for any sequence of probability measures (µ_m)_{m≥1}, S(µ_m, T, G) converges to zero
only if d_H(µ_m, P) does (Desideratum (ii)).
1
One commonly considers real-valued functions g when applying Stein's method, but we will find it more
convenient to work with vector-valued g.
3. Upper bound the Stein discrepancy by any means necessary to demonstrate convergence to
zero under suitable conditions (Desideratum (i)). In our case, the universal bound established in Section 3.3 will suffice.
While Stein's method is typically employed as an analytical tool, we view the Stein discrepancy as
a promising candidate for a practical sample quality measure. Indeed, in Section 4, we will adopt an
optimization perspective and develop efficient procedures to compute the Stein discrepancy for any
sample measure Q and appropriate choices of T and G. First, we assess the convergence properties
of an equivalent Stein discrepancy in the subsections to follow.
3.1
Identifying a Stein Operator
The generator method of Barbour [8] provides a convenient and general means of constructing operators T which produce mean-zero functions under P (2). Let (Z_t)_{t≥0} represent a Markov process with unique stationary distribution P. Then the infinitesimal generator A of (Z_t)_{t≥0}, defined by
    (Au)(x) = lim_{t→0} (E[u(Z_t) | Z_0 = x] − u(x))/t   for u : ℝ^d → ℝ,
satisfies E_P[(Au)(Z)] = 0 under mild conditions on A and u. Hence, a candidate operator T can be constructed from any infinitesimal generator.
For example, the overdamped Langevin diffusion, defined by the stochastic differential equation dZ_t = ½ ∇log p(Z_t) dt + dW_t for (W_t)_{t≥0} a Wiener process, gives rise to the generator
    (A_P u)(x) = ½ ⟨∇u(x), ∇log p(x)⟩ + ½ ⟨∇, ∇u(x)⟩.   (3)
After substituting g for ½ ∇u, we obtain the associated Stein operator
    (T_P g)(x) ≜ ⟨g(x), ∇log p(x)⟩ + ⟨∇, g(x)⟩.   (4)
The Stein operator TP is particularly well-suited to our setting as it depends on P only through the
derivative of its log density and hence is computable even when the normalizing constant of p is not.
If we let ∂X denote the boundary of X (an empty set when X = ℝ^d) and n(x) represent the outward unit normal vector to the boundary at x, then we may define the classical Stein set
    G_{‖·‖} ≜ { g : X → ℝ^d | sup_{x≠y∈X} max( ‖g(x)‖*, ‖∇g(x)‖*, ‖∇g(x) − ∇g(y)‖* / ‖x − y‖ ) ≤ 1 and
                ⟨g(x), n(x)⟩ = 0, ∀x ∈ ∂X with n(x) defined }
of sufficiently smooth functions satisfying a Neumann-type boundary condition. The following proposition, a consequence of integration by parts, shows that G_{‖·‖} is a suitable domain for T_P.
Proposition 1. If E_P[‖∇log p(Z)‖] < ∞, then E_P[(T_P g)(Z)] = 0 for all g ∈ G_{‖·‖}.
Together, T_P and G_{‖·‖} form the classical Stein discrepancy S(Q, T_P, G_{‖·‖}), our chief object of study.
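Proposition 1 is easy to sanity-check numerically for a target with a known score function. The following is a minimal Python sketch, assuming NumPy; the Gaussian target, the componentwise tanh test function, and the sample size are illustrative choices, not prescriptions from the paper. For P = N(0, I_d), ∇log p(x) = −x, so the Monte Carlo average of (T_P g)(Z) over Z ∼ P should sit near zero.

import numpy as np

rng = np.random.default_rng(1)
d, m = 2, 200_000

grad_log_p = lambda x: -x                 # score of the standard Gaussian target
g = lambda x: np.tanh(x)                  # a smooth, bounded test function
div_g = lambda x: (1.0 - np.tanh(x) ** 2).sum(axis=1)   # divergence <grad, g(x)>

z = rng.standard_normal((m, d))
stein = (g(z) * grad_log_p(z)).sum(axis=1) + div_g(z)   # (T_P g)(Z) from (4)
print("E_P[(T_P g)(Z)] ~", stein.mean())  # approximately 0, per Proposition 1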
3.2
Lower Bounding the Classical Stein Discrepancy
In the univariate setting (d = 1), it is known for a wide variety of targets P that the classical Stein discrepancy S(µ_m, T_P, G_{‖·‖}) converges to zero only if the Wasserstein distance d_{W‖·‖}(µ_m, P) does [9, 10]. In the multivariate setting, analogous statements are available for multivariate Gaussian targets [11, 12, 13], but few other target distributions have been analyzed. To extend the reach of the multivariate literature, we show in Theorem 2 that the classical Stein discrepancy also determines Wasserstein convergence for a large class of strongly log-concave densities, including the Bayesian logistic regression posterior under Gaussian priors.
Theorem 2 (Stein Discrepancy Lower Bound for Strongly Log-concave Densities). If X = ℝ^d, and log p is strongly concave with third and fourth derivatives bounded and continuous, then, for any probability measures (µ_m)_{m≥1}, S(µ_m, T_P, G_{‖·‖}) → 0 only if d_{W‖·‖}(µ_m, P) → 0.
We emphasize that the sufficient conditions in Theorem 2 are certainly not necessary for lower bounding the classical Stein discrepancy. We hope that the theorem and its proof will provide a template for lower bounding S(Q, T_P, G_{‖·‖}) for other large classes of multivariate target distributions.
3.3
Upper Bounding the Classical Stein Discrepancy
We next establish sufficient conditions for the convergence of the classical Stein discrepancy to zero.
Proposition 3 (Stein Discrepancy Upper Bound). If X ∼ Q and Z ∼ P with ∇log p(Z) integrable,
    S(Q, T_P, G_{‖·‖}) ≤ E[‖X − Z‖] + E[‖∇log p(X) − ∇log p(Z)‖] + E[‖∇log p(Z)(X − Z)^⊤‖]
                       ≤ E[‖X − Z‖] + E[‖∇log p(X) − ∇log p(Z)‖] + √(E[‖∇log p(Z)‖²] E[‖X − Z‖²]).
One implication of Proposition 3 is that S(Q_m, T_P, G_{‖·‖}) converges to zero whenever X_m ∼ Q_m converges in mean-square to Z ∼ P and ∇log p(X_m) converges in mean to ∇log p(Z).
3.4
Extension to Non-uniform Stein Sets
The analyses and algorithms in this paper readily accommodate non-uniform Stein sets of the form
    G^{c_{1:3}}_{‖·‖} ≜ { g : X → ℝ^d | sup_{x≠y∈X} max( ‖g(x)‖*/c₁, ‖∇g(x)‖*/c₂, ‖∇g(x) − ∇g(y)‖*/(c₃ ‖x − y‖) ) ≤ 1 and   (5)
                ⟨g(x), n(x)⟩ = 0, ∀x ∈ ∂X with n(x) defined }
for constants c1 , c2 , c3 > 0 known as Stein factors in the literature. We will exploit this additional
flexibility in Section 5.2 to establish tight lower-bounding relations between the Stein discrepancy
and Wasserstein distance for well-studied target distributions. For general use, however, we advocate
the parameter-free classical Stein set and graph Stein sets to be introduced in the sequel. Indeed, any
non-uniform Stein discrepancy is equivalent to the classical Stein discrepancy in a strong sense:
Proposition 4 (Equivalence of Non-uniform Stein Discrepancies). For any c₁, c₂, c₃ > 0,
    min(c₁, c₂, c₃) S(Q, T_P, G_{‖·‖}) ≤ S(Q, T_P, G^{c_{1:3}}_{‖·‖}) ≤ max(c₁, c₂, c₃) S(Q, T_P, G_{‖·‖}).
4
Computing Stein Discrepancies
In this section, we introduce an efficiently computable Stein discrepancy with convergence properties equivalent to those of the classical discrepancy. We restrict attention to the unconstrained
domain X = Rd in Sections 4.1-4.3 and present extensions for constrained domains in Section 4.4.
4.1
Graph Stein Discrepancies
Evaluating a Stein discrepancy S(Q, T_P, G) for a fixed (Q, P) pair reduces to solving an optimization program over functions g ∈ G. For example, the classical Stein discrepancy is the optimum
    S(Q, T_P, G_{‖·‖}) = sup_g Σ_{i=1}^n q(x_i)(⟨g(x_i), ∇log p(x_i)⟩ + ⟨∇, g(x_i)⟩)   (6)
    s.t. ‖g(x)‖* ≤ 1, ‖∇g(x)‖* ≤ 1, ‖∇g(x) − ∇g(y)‖* ≤ ‖x − y‖, ∀x, y ∈ X.
Note that the objective associated with any Stein discrepancy S(Q, T_P, G) is linear in g and, since Q is discrete, only depends on g and ∇g through their values at each of the n sample points x_i. The primary difficulty in solving the classical Stein program (6) stems from the infinitude of constraints imposed by the classical Stein set G_{‖·‖}. One way to avoid this difficulty is to impose the classical smoothness constraints at only a finite collection of points. To this end, for each finite graph G = (V, E) with vertices V ⊆ X and edges E ⊆ V², we define the graph Stein set,
    G_{‖·‖,Q,G} ≜ { g : X → ℝ^d | ∀x ∈ V, max( ‖g(x)‖*, ‖∇g(x)‖* ) ≤ 1 and, ∀(x, y) ∈ E,
        max( ‖g(x) − g(y)‖*/‖x − y‖, ‖∇g(x) − ∇g(y)‖*/‖x − y‖,
             ‖g(x) − g(y) − ∇g(x)(x − y)‖*/(½ ‖x − y‖²), ‖g(x) − g(y) − ∇g(y)(x − y)‖*/(½ ‖x − y‖²) ) ≤ 1 },
the family of functions which satisfy the classical constraints and certain implied Taylor compatibility constraints at pairs of points in E. Remarkably, if the graph G₁ consists of edges between all distinct sample points x_i, then the associated complete graph Stein discrepancy S(Q, T_P, G_{‖·‖,Q,G₁}) is equivalent to the classical Stein discrepancy in the following strong sense.
Proposition 5 (Equivalence of Classical and Complete Graph Stein Discrepancies). If X = ℝ^d, and G₁ = (supp(Q), E₁) with E₁ = {(x_i, x_l) ∈ supp(Q)² : x_i ≠ x_l}, then
    S(Q, T_P, G_{‖·‖}) ≤ S(Q, T_P, G_{‖·‖,Q,G₁}) ≤ α_d S(Q, T_P, G_{‖·‖}),
where α_d is a constant, independent of (Q, P), depending only on the dimension d and norm ‖·‖.
Proposition 5 follows from the Whitney-Glaeser extension theorem for smooth functions [14, 15]
and implies that the complete graph Stein discrepancy inherits all of the desirable convergence properties of the classical discrepancy. However, the complete graph also introduces order n² constraints,
rendering computation infeasible for large samples. To achieve the same form of equivalence while
enforcing only O(n) constraints, we will make use of sparse geometric spanner subgraphs.
4.2
Geometric Spanners
For a given dilation factor t ≥ 1, a t-spanner [16, 17] is a graph G = (V, E) with weight ‖x − y‖ on each edge (x, y) ∈ E and a path between each pair x′ ≠ y′ ∈ V with total weight no larger than t ‖x′ − y′‖. The next proposition shows that spanner Stein discrepancies enjoy the same convergence properties as the complete graph Stein discrepancy.
Proposition 6 (Equivalence of Spanner and Complete Graph Stein Discrepancies). If X = ℝ^d, G_t = (supp(Q), E) is a t-spanner, and G₁ = (supp(Q), {(x_i, x_l) ∈ supp(Q)² : x_i ≠ x_l}), then
    S(Q, T_P, G_{‖·‖,Q,G₁}) ≤ S(Q, T_P, G_{‖·‖,Q,G_t}) ≤ 2t² S(Q, T_P, G_{‖·‖,Q,G₁}).
Moreover, for any ℓ_p norm, a 2-spanner with O(κ_d n) edges can be computed in O(κ_d n log(n)) expected time for κ_d a constant depending only on d and ‖·‖ [18]. As a result, we will adopt a 2-spanner Stein discrepancy, S(Q, T_P, G_{‖·‖,Q,G₂}), as our standard quality measure.
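To make the construction concrete, here is a minimal Python sketch of the greedy t-spanner, assuming NumPy; it is a quadratic-time reference version rather than the optimized C++ implementation of [19] used later, and the function names are illustrative. Candidate pairs are processed in order of increasing length, and an edge is kept only when the current spanner cannot already connect its endpoints within t times their distance.

import heapq
from itertools import combinations
import numpy as np

def greedy_spanner(points, t=2.0):
    # Returns the edge list of a t-spanner of `points` under the Euclidean norm.
    n = len(points)
    dist = lambda i, j: float(np.linalg.norm(points[i] - points[j]))
    pairs = sorted(combinations(range(n), 2), key=lambda e: dist(*e))
    adj = {i: {} for i in range(n)}

    def spanner_dist(src, dst, cutoff):
        # Dijkstra over current spanner edges, abandoned beyond `cutoff`.
        best, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d0, u = heapq.heappop(heap)
            if u == dst:
                return d0
            if d0 > best.get(u, np.inf) or d0 > cutoff:
                continue
            for v, w in adj[u].items():
                if d0 + w < best.get(v, np.inf) and d0 + w <= cutoff:
                    best[v] = d0 + w
                    heapq.heappush(heap, (d0 + w, v))
        return np.inf

    edges = []
    for i, j in pairs:
        d = dist(i, j)
        if spanner_dist(i, j, t * d) > t * d:   # no short path yet: keep edge
            adj[i][j] = adj[j][i] = d
            edges.append((i, j))
    return edges

pts = np.random.default_rng(2).random((40, 2))
print(len(greedy_spanner(pts)), "spanner edges vs", 40 * 39 // 2, "complete graph edges")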
4.3
Decoupled Linear Programs
The final unspecified component of our Stein discrepancy is the choice of norm ‖·‖. We recommend the ℓ1 norm, as the resulting optimization problem decouples into d independent finite-dimensional linear programs (LPs) that can be solved in parallel. More precisely, S(Q, T_P, G_{‖·‖₁,Q,(V,E)}) equals
    sup_{β_j ∈ ℝ^{|V|}, γ_j ∈ ℝ^{d×|V|}} Σ_{j=1}^d Σ_{i=1}^{|V|} q(v_i)(β_{ji} ∇_j log p(v_i) + γ_{jji})   (7)
    s.t. ‖β_j‖∞ ≤ 1, ‖γ_j‖∞ ≤ 1, and ∀ i ≠ l : (v_i, v_l) ∈ E,
    max( |β_{ji} − β_{jl}| / ‖v_i − v_l‖₁, ‖γ_j(e_i − e_l)‖∞ / ‖v_i − v_l‖₁,
         |β_{ji} − β_{jl} − ⟨γ_j e_i, v_i − v_l⟩| / (½ ‖v_i − v_l‖₁²),
         |β_{ji} − β_{jl} − ⟨γ_j e_l, v_i − v_l⟩| / (½ ‖v_i − v_l‖₁²) ) ≤ 1.
We have arbitrarily numbered the elements v_i of the vertex set V so that β_{ji} represents the function value g_j(v_i), and γ_{jki} represents the gradient value ∇_k g_j(v_i).
4.4
Constrained Domains
A small modification to the unconstrained formulation (7) extends our tractable Stein discrepancy computation to any domain defined by coordinate boundary constraints, that is, to X = (α₁, β₁) × ⋯ × (α_d, β_d) with −∞ ≤ α_j < β_j ≤ ∞ for all j. Specifically, for each dimension j, we augment the j-th coordinate linear program of (7) with the boundary compatibility constraints
    max( |β_{ji}| / |v_{ij} − b_j|, |γ_{jki}| / |v_{ij} − b_j|, |β_{ji} − γ_{jji}(v_{ij} − b_j)| / (½ (v_{ij} − b_j)²) ) ≤ 1,
for each i, b_j ∈ {α_j, β_j} ∩ ℝ, and k ≠ j.   (8)
These additional constraints ensure that our candidate function and gradient values can be extended
to a smooth function satisfying the boundary conditions ⟨g(z), n(z)⟩ = 0 on ∂X. Proposition 15
in the appendix shows that the spanner Stein discrepancy so computed is strongly equivalent to the
classical Stein discrepancy on X .
Algorithm 1 summarizes the complete solution for computing our recommended, parameter-free
spanner Stein discrepancy in the multivariate setting. Notably, the spanner step is unnecessary in the
univariate setting, as the complete graph Stein discrepancy S(Q, T_P, G_{‖·‖₁,Q,G₁}) can be computed
directly by sorting the sample and boundary points and only enforcing constraints between consecutive points in this ordering. Thus, the complete graph Stein discrepancy is our recommended quality
measure when d = 1, and a recipe for its computation is given in Algorithm 2.
5
Algorithm 1 Multivariate Spanner Stein Discrepancy
input: Q, coordinate bounds (α₁, β₁), ..., (α_d, β_d) with −∞ ≤ α_j < β_j ≤ ∞ for all j
G₂ ← Compute sparse 2-spanner of supp(Q)
for j = 1 to d do (in parallel)
    r_j ← Solve j-th coordinate linear program (7) with graph G₂ and boundary constraints (8)
return Σ_{j=1}^d r_j
Algorithm 2 Univariate Complete Graph Stein Discrepancy
input: Q, bounds (α, β) with −∞ ≤ α < β ≤ ∞
(x_(1), ..., x_(n′)) ← SORT({x₁, ..., x_n, α, β} ∩ ℝ)
return sup_{β ∈ ℝ^{n′}, γ ∈ ℝ^{n′}} Σ_{i=1}^{n′} q(x_(i))(β_i (d/dx) log p(x_(i)) + γ_i)
    s.t. ‖γ‖∞ ≤ 1, ∀i ≤ n′, |β_i| ≤ I[α < x_(i) < β], and, ∀i < n′,
    max( |β_{i+1} − β_i| / (x_(i+1) − x_(i)), |γ_{i+1} − γ_i| / (x_(i+1) − x_(i)),
         |β_{i+1} − β_i − γ_i (x_(i+1) − x_(i))| / (½ (x_(i+1) − x_(i))²),
         |β_{i+1} − β_i − γ_{i+1} (x_(i+1) − x_(i))| / (½ (x_(i+1) − x_(i))²) ) ≤ 1
5
Experiments
We now turn to an empirical evaluation of our proposed quality measures. We compute all spanners
using the efficient C++ greedy spanner implementation of Bouts et al. [19] and solve all optimization
programs using Julia for Mathematical Programming [20] with the default Gurobi 6.0.4 solver [21].
All reported timings are obtained using a single core of an Intel Xeon CPU E5-2650 v2 @ 2.60GHz.
5.1
A Simple Example
We begin with a simple example to illuminate a few properties of the Stein diagnostic. For the target P = N(0, 1), we generate a sequence of sample points i.i.d. from the target and a second sequence i.i.d. from a scaled Student's t distribution with matching variance and 10 degrees of freedom. The left panel of Figure 1 shows that the complete graph Stein discrepancy applied to the first n Gaussian sample points decays to zero at an n^{-0.52} rate, while the discrepancy applied to the scaled Student's t sample remains bounded away from zero. The middle panel displays optimal Stein functions g recovered by the Stein program for different sample sizes. Each g yields a test function h ≜ T_P g, featured in the right panel, that best discriminates the sample Q from the target P. Notably, the Student's t test functions exhibit relatively large magnitude values in the tails of the support.
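To give a sense of how this experiment looks in code, here is a minimal Python sketch, assuming SciPy's linprog in place of the Gurobi/JuMP stack used in the paper; it instantiates the unbounded case (α = −∞, β = ∞) of Algorithm 2 with equal weights q(x_i) = 1/n, and the sample sizes and variance-matching constant are illustrative.

import numpy as np
from scipy.optimize import linprog

def stein_discrepancy_1d(x, score):
    # Complete graph Stein discrepancy of an equally weighted 1-D sample on an
    # unbounded domain, via the linear program of Algorithm 2; score = (log p)'.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    q, s, dx = np.full(n, 1.0 / n), score(x), np.diff(x)
    cost = -np.concatenate([q * s, q])        # maximize => minimize the negation
    rows, rhs = [], []

    def add_abs(coef, bound):                 # encode |coef . vars| <= bound
        rows.extend([coef, -coef])
        rhs.extend([bound, bound])

    for i in range(n - 1):
        db = np.zeros(2 * n); db[i + 1], db[i] = 1.0, -1.0           # b_{i+1}-b_i
        dc = np.zeros(2 * n); dc[n + i + 1], dc[n + i] = 1.0, -1.0   # c_{i+1}-c_i
        add_abs(db, dx[i])                    # function values are 1-Lipschitz
        add_abs(dc, dx[i])                    # derivative values are 1-Lipschitz
        t1 = db.copy(); t1[n + i] = -dx[i]
        add_abs(t1, 0.5 * dx[i] ** 2)         # Taylor compatibility, left point
        t2 = db.copy(); t2[n + i + 1] = -dx[i]
        add_abs(t2, 0.5 * dx[i] ** 2)         # Taylor compatibility, right point

    res = linprog(cost, A_ub=np.vstack(rows), b_ub=np.array(rhs),
                  bounds=[(-1.0, 1.0)] * (2 * n))
    return -res.fun

rng = np.random.default_rng(3)
score = lambda t: -t                          # (log p)' for the N(0, 1) target
gauss = rng.standard_normal(300)
student = rng.standard_t(df=10, size=300) * np.sqrt(8.0 / 10.0)   # match variance
print("Gaussian sample:   ", stein_discrepancy_1d(gauss, score))
print("Student's t sample:", stein_discrepancy_1d(student, score))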
5.2
Comparing Discrepancies
We show in Theorem 14 in the appendix that, when d = 1, the classical Stein discrepancy is the optimum of a convex quadratically constrained quadratic program with a linear objective, O(n) variables, and O(n) constraints. This offers the opportunity to directly compare the behavior of the graph and classical Stein discrepancies.
[Figure 1 appears here: panels for sample sizes n = 300, 3000, 30000 comparing the Gaussian and scaled Student's t samples; left, Stein discrepancy against number of sample points n; middle and right, g and h = T_P g against x.]
Figure 1: Left: Complete graph Stein discrepancy for a N(0, 1) target. Middle / right: Optimal Stein functions g and discriminating test functions h = T_P g recovered by the Stein program.
[Figure 2 appears here: panels for seed = 7, 8, 9 and Gaussian / Uniform targets; discrepancy value against number of sample points n; legend: classical Stein, complete graph Stein, Wasserstein.]
Figure 2: Comparison of discrepancy measures for sample sequences drawn i.i.d. from their targets.
We will also compare to the Wasserstein distance d_{W‖·‖},
which is computable for simple univariate target distributions [22] and provably lower bounds the non-uniform Stein discrepancies (5) with c_{1:3} = (0.5, 0.5, 1) for P = Unif(0, 1) and c_{1:3} = (1, 4, 2) for P = N(0, 1) [9, 23]. For N(0, 1) and Unif(0, 1) targets and several random number generator seeds, we generate a sequence of sample points i.i.d. from the target distribution and plot the non-uniform classical and complete graph Stein discrepancies and the Wasserstein distance as functions of the first n sample points in Figure 2. Two apparent trends are that the graph Stein discrepancy very closely approximates the classical and that both Stein discrepancies track the fluctuations in Wasserstein distance even when a magnitude separation exists. In the Unif(0, 1) case, the Wasserstein distance in fact equals the classical Stein discrepancy because T_P g = g′ is a Lipschitz function.
5.3
Selecting Sampler Hyperparameters
Stochastic Gradient Langevin Dynamics (SGLD) [3] with constant step size ε is a biased MCMC procedure designed for scalable inference. It approximates the overdamped Langevin diffusion, but, because no Metropolis-Hastings (MH) correction is used, the stationary distribution of SGLD deviates increasingly from its target as ε grows. If ε is too small, however, SGLD explores the sample space too slowly. Hence, an appropriate choice of ε is critical for accurate posterior inference. To illustrate the value of the Stein diagnostic for this task, we adopt the bimodal Gaussian mixture model (GMM) posterior of [3] as our target. For a range of step sizes ε, we use SGLD with minibatch size 5 to draw 50 independent sequences of length n = 1000, and we select the value of ε with the highest median quality, either the maximum effective sample size (ESS, a standard diagnostic based on autocorrelation [1]) or the minimum spanner Stein discrepancy, across these sequences. The average discrepancy computation consumes 0.4s for spanner construction and 1.4s per coordinate linear program. As seen in Figure 3a, ESS, which does not detect distributional bias, selects the largest step size presented to it, while the Stein discrepancy prefers an intermediate value. The rightmost plot of Figure 3b shows that a representative SGLD sample of size n using the ε selected by ESS is greatly overdispersed; the leftmost is greatly underdispersed due to slow mixing. The middle sample, with ε selected by the Stein diagnostic, most closely resembles the true posterior.
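A minimal Python sketch of constant-step SGLD on a posterior of this flavor follows, assuming NumPy; the mixture form matches the bimodal GMM example of [3], but the variances, data size, and initialization below are illustrative assumptions rather than the paper's exact settings.

import numpy as np

rng = np.random.default_rng(4)

# theta = (t1, t2) with priors N(0, s1) and N(0, s2);
# data x_i ~ 0.5 N(t1, sx) + 0.5 N(t1 + t2, sx).
s1, s2, sx, N = 10.0, 1.0, 2.0, 100           # variances (illustrative values)
comp = rng.integers(0, 2, N)
data = comp * 1.0 + rng.normal(0.0, np.sqrt(sx), N)   # true theta = (0, 1)

def grad_log_lik(theta, x):
    t1, t2 = theta
    r1, r2 = x - t1, x - t1 - t2
    p1, p2 = np.exp(-r1 ** 2 / (2 * sx)), np.exp(-r2 ** 2 / (2 * sx))
    w = p2 / (p1 + p2)                        # responsibility of the shifted mode
    return np.array([(((1 - w) * r1 + w * r2) / sx).sum(),
                     ((w * r2) / sx).sum()])

def sgld(eps, n_iter=1000, batch=5):
    theta, out = np.zeros(2), np.empty((n_iter, 2))
    for k in range(n_iter):
        idx = rng.choice(N, batch, replace=False)
        grad = -theta / np.array([s1, s2]) \
               + (N / batch) * grad_log_lik(theta, data[idx])
        theta = theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps), 2)
        out[k] = theta                        # note: no MH correction anywhere
    return out

for eps in (5e-5, 5e-3, 5e-2):
    print("eps =", eps, "sample mean ~", sgld(eps).mean(axis=0))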
5.4
Quantifying a Bias-Variance Trade-off
The approximate random walk MH (ARWMH) sampler [5] is a second biased MCMC procedure designed for scalable posterior inference. Its tolerance parameter ε controls the number of datapoint likelihood evaluations used to approximate the standard MH correction step. Qualitatively, a larger ε implies fewer likelihood computations, more rapid sampling, and a more rapid reduction of variance. A smaller ε yields a closer approximation to the MH correction and less bias in the sampler stationary distribution. We will use the Stein discrepancy to explicitly quantify this bias-variance trade-off.
We analyze a dataset of 53 prostate cancer patients with six binary predictors and a binary outcome indicating whether cancer has spread to surrounding lymph nodes [24]. Our target is the Bayesian logistic regression posterior [1] under a N(0, I) prior on the parameters. We run RWMH (ε = 0) and ARWMH (ε = 0.1 and batch size = 2) for 10^5 likelihood evaluations, discard the points from the first 10^3 evaluations, and thin the remaining points to sequences of length 1000. The discrepancy computation time for 1000 points averages 1.3s for the spanner and 12s for a coordinate LP. Figure 4 displays the spanner Stein discrepancy applied to the first n points in each sequence as a function of the likelihood evaluation count. We see that the approximate sample is of higher Stein quality for smaller computational budgets but is eventually overtaken by the asymptotically exact sequence.
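For orientation, here is a minimal Python sketch of the exact RWMH baseline (ε = 0) on a Bayesian logistic regression posterior with a N(0, I) prior, assuming NumPy; the synthetic binary design matrix stands in for the prostate data of [24], the proposal scale is an illustrative choice, and the ARWMH variant would replace the full-data acceptance comparison below with a subsampled sequential test governed by ε (omitted here).

import numpy as np

rng = np.random.default_rng(5)
n, d = 53, 7                                   # 6 binary predictors + intercept
X = np.hstack([np.ones((n, 1)), rng.integers(0, 2, (n, d - 1))]).astype(float)
w_true = rng.normal(0.0, 1.0, d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def log_post(w):                               # N(0, I) prior + logistic likelihood
    z = X @ w
    return -0.5 * w @ w + np.sum(y * z - np.logaddexp(0.0, z))

def rwmh(n_steps=20_000, scale=0.1):
    w, lp = np.zeros(d), log_post(np.zeros(d))
    chain = np.empty((n_steps, d))
    for k in range(n_steps):
        prop = w + scale * rng.standard_normal(d)
        lp_prop = log_post(prop)               # full-data evaluation; ARWMH would
        if np.log(rng.random()) < lp_prop - lp:    # approximate this comparison
            w, lp = prop, lp_prop                  # from a subsample of the data
        chain[k] = w
    return chain

print("posterior mean ~", rwmh()[10_000:].mean(axis=0))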
[Figure 3 appears here: (a) step size selection criteria, log median diagnostic (ESS; spanner Stein discrepancy) against step size ε; (b) SGLD samples of size 1000 over (x₁, x₂) for ε = 5e−05, 5e−03, 5e−02.]
Figure 3: (a) ESS maximized at ε = 5 × 10^{-2}; Stein discrepancy minimized at ε = 5 × 10^{-3}. (b) 1000 SGLD sample points with equidensity contours of p overlaid.
[Figure 4 appears here: spanner Stein discrepancy, normalized probability error, mean error, and second moment error against the number of likelihood evaluations (3e+03 to 1e+05), for hyperparameter ε = 0 and ε = 0.1.]
Figure 4: Bias-variance trade-off curves for Bayesian logistic regression with approximate RWMH.
To corroborate our result, we use a Metropolis-adjusted Langevin chain [25] of length 10^7 as a surrogate Q* for the target and compute several error measures for each sample Q: normalized probability error max_l |E[σ(⟨X, w_l⟩) − σ(⟨Z, w_l⟩)]| / ‖w_l‖₁, mean error max_j |E[X_j − Z_j]| / max_j |E_{Q*}[Z_j]|, and second moment error max_{j,k} |E[X_j X_k − Z_j Z_k]| / max_{j,k} |E_{Q*}[Z_j Z_k]| for X ∼ Q, Z ∼ Q*, σ(t) ≜ 1/(1 + e^{-t}), and w_l the l-th datapoint covariate vector. The measures, also found in Figure 4, accord with the Stein discrepancy quantification.
5.5
Assessing Convergence Rates
The Stein discrepancy can also be used to assess the quality of deterministic sample sequences. In Figure 5 in the appendix, for P = Unif(0, 1), we plot the complete graph Stein discrepancies of the first n points of an i.i.d. Unif(0, 1) sample, a deterministic Sobol sequence [26], and a deterministic kernel herding sequence [27] defined by the norm ‖h‖²_H = ∫₀¹ (h′(x))² dx. We use the median value over 50 sequences in the i.i.d. case and estimate the convergence rate for each sampler using the slope of the best least squares affine fit to each log-log plot. The discrepancy computation time averages 0.08s for n = 200 points, and the recovered rates of n^{-0.49} and n^{-1} for the i.i.d. and Sobol sequences accord with expected O(1/√n) and O(log(n)/n) bounds from the literature [28, 26]. As witnessed also in other metrics [29], the herding rate of n^{-0.96} outpaces its best known bound of d_H(Q_n, P) = O(1/√n), suggesting an opportunity for sharper analysis.
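The rate-fitting step is straightforward to reproduce for the one-dimensional Wasserstein metric; below is a minimal Python sketch assuming SciPy (scipy.stats.qmc.Sobol supplies the low-discrepancy sequence), with the least squares slope on the log-log values standing in for the convergence rate. The sizes and seed are illustrative.

import numpy as np
from scipy.stats import qmc, wasserstein_distance

rng = np.random.default_rng(6)
sizes = [2 ** k for k in range(4, 12)]
iid_pts = rng.random(max(sizes))
sob_pts = qmc.Sobol(d=1, scramble=False).random(max(sizes)).ravel()
grid = lambda n: (np.arange(n) + 0.5) / n     # quantile proxy for P = Unif(0, 1)

def rate(points):
    errs = [wasserstein_distance(points[:n], grid(n)) for n in sizes]
    return np.polyfit(np.log(sizes), np.log(errs), 1)[0]   # log-log slope

print("i.i.d. rate ~ n^%.2f" % rate(iid_pts))
print("Sobol rate  ~ n^%.2f" % rate(sob_pts))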
6
Discussion of Related Work
We have developed a quality measure suitable for comparing biased, exact, and deterministic sample
sequences by exploiting an infinite class of known target functionals. The diagnostics of [30, 31]
also account for asymptotic bias but lose discriminating power by considering only a finite collection of functionals. For example, for a N (0, 1) target, the score statistic of [31] cannot distinguish
two samples with equal first and second moments. Maximum mean discrepancy (MMD) on a characteristic Hilbert space [32] takes full distributional bias into account but is only viable when the
expected kernel evaluations are easily computed under the target. One can approximate MMD, but
this requires access to a separate trustworthy ground-truth sample from the target.
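For comparison with the approach above, here is a minimal Python sketch of the two-sample MMD estimate with a Gaussian kernel, assuming NumPy; the kernel, bandwidth, and the fresh ground-truth sample from the target (exactly the extra requirement just noted) are illustrative.

import numpy as np

rng = np.random.default_rng(8)

def mmd2(x, z, h=1.0):
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h * h))
    return k(x, x).mean() + k(z, z).mean() - 2.0 * k(x, z).mean()

truth = rng.standard_normal(500)              # trusted sample from P = N(0, 1)
x = rng.standard_normal(500)                  # exact sample
z = rng.standard_t(10, 500) * np.sqrt(0.8)    # variance-matched Student's t
print("MMD^2 exact sample: ", mmd2(x, truth))
print("MMD^2 biased sample:", mmd2(z, truth))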
Acknowledgments
The authors thank Madeleine Udell, Andreas Eberle, and Jessica Hwang for their pointers and feedback and Quirijn Bouts, Kevin Buchin, and Francis Bach for sharing their code and counsel.
References
[1] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov chain Monte Carlo. CRC Press, 2011.
[2] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. Computer Science and Statistics: Proc. 23rd Symp. Interface, pages 156–163, 1991.
[3] M. Welling and Y.-W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pages 681–688, 2011.
[4] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML '12), 2012.
[5] A. Korattikara, Y. Chen, and M. Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In Proceedings of the 31st International Conference on Machine Learning (ICML '14), 2014.
[6] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
[7] C. Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory, pages 583–602, Berkeley, CA, 1972. University of California Press.
[8] A. D. Barbour. Stein's method and Poisson process convergence. J. Appl. Probab., (Special Vol. 25A):175–184, 1988. A celebration of applied probability.
[9] L. H. Y. Chen, L. Goldstein, and Q.-M. Shao. Normal approximation by Stein's method. Springer Science & Business Media, 2010.
[10] S. Chatterjee and Q.-M. Shao. Nonnormal approximation by Stein's method of exchangeable pairs with application to the Curie-Weiss model. Annals of Applied Probability, 21(2):464–483, 2011.
[11] G. Reinert and A. Röllin. Multivariate normal approximation with Stein's method of exchangeable pairs under a general linearity condition. Annals of Probability, 37(6):2150–2173, 2009.
[12] S. Chatterjee and E. Meckes. Multivariate normal approximation using exchangeable pairs. Alea, 4:257–283, 2008.
[13] E. Meckes. On Stein's method for multivariate normal approximation. In High dimensional probability V: The Luminy volume, pages 153–178. Institute of Mathematical Statistics, 2009.
[14] G. Glaeser. Étude de quelques algèbres tayloriennes. J. Analyse Math., 6:1–124; erratum, insert to 6 (1958), no. 2, 1958.
[15] P. Shvartsman. The Whitney extension problem and Lipschitz selections of set-valued mappings in jet spaces. Transactions of the American Mathematical Society, 360(10):5529–5550, 2008.
[16] P. Chew. There is a planar graph almost as good as the complete graph. In Proceedings of the Second Annual Symposium on Computational Geometry, SCG '86, pages 169–177, New York, NY, 1986. ACM.
[17] D. Peleg and A. A. Schäffer. Graph spanners. Journal of Graph Theory, 13(1):99–116, 1989.
[18] S. Har-Peled and M. Mendel. Fast construction of nets in low-dimensional metrics and their applications.
SIAM Journal on Computing, 35(5):1148–1184, 2006.
[19] Q. W. Bouts, A. P. ten Brink, and K. Buchin. A framework for computing the greedy spanner. In Proceedings of the Thirtieth Annual Symposium on Computational Geometry, SOCG '14, pages 11:11–11:19, New York, NY, 2014. ACM.
[20] M. Lubin and I. Dunning. Computing in operations research using Julia. INFORMS Journal on Computing, 27(2):238–248, 2015.
[21] Gurobi Optimization. Gurobi optimizer reference manual, 2015. URL http://www.gurobi.com.
[22] S. S. Vallender. Calculation of the Wasserstein distance between probability distributions on the line.
Theory of Probability & Its Applications, 18(4):784–786, 1974.
[23] C. Döbler. Stein's method of exchangeable pairs for the Beta distribution and generalizations.
arXiv:1411.4477, 2014.
[24] A. Canty and B. D. Ripley. boot: Bootstrap R (S-Plus) Functions, 2015. R package version 1.3-15.
[25] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their discrete
approximations. Bernoulli, pages 341–363, 1996.
[26] R. E. Caflisch. Monte Carlo and quasi-Monte Carlo methods. Acta Numerica, 7:1–49, 1998.
[27] Y. Chen, M. Welling, and A. Smola. Super-samples from kernel herding. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI '10), 2010.
[28] E. del Barrio, E. Giné, and C. Matrán. Central limit theorems for the Wasserstein distance between the empirical and the true distributions. Ann. Probab., 27(2):1009–1071, 1999.
[29] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the equivalence between herding and conditional gradient algorithms. In Proceedings of the 29th International Conference on Machine Learning (ICML '12), 2012.
[30] A. Zellner and C.-K. Min. Gibbs sampler convergence criteria. Journal of the American Statistical
Association, 90(431):921–927, 1995.
[31] Y. Fan, S. P. Brooks, and A. Gelman. Output assessment for Monte Carlo simulations via the score statistic. Journal of Computational and Graphical Statistics, 15(1), 2006.
[32] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. In Advances in Neural Information Processing Systems, pages 513–520, 2006.
| 5768 |@word mild:1 version:1 middle:3 norm:8 open:1 unif:5 km:2 seek:1 scg:1 simulation:1 attainable:1 accommodate:1 ipm:6 moment:3 reduction:2 score:2 selecting:1 sobol:2 rightmost:1 recovered:3 comparing:3 trustworthy:1 com:1 dx:3 readily:1 plot:5 designed:2 n0:4 mackey:1 alone:1 stationary:3 selected:3 greedy:2 fewer:1 intelligence:1 xk:1 es:5 geyer:1 supx2x:1 core:1 pointer:1 provides:1 math:1 node:1 mendel:1 mathematical:4 constructed:1 c2:4 differential:1 symposium:3 viable:1 beta:1 consists:1 advocate:1 symp:1 autocorrelation:1 manner:1 introduce:2 chew:1 x0:1 notably:2 expected:3 indeed:2 rapid:4 behavior:1 detects:2 cpu:1 solver:1 considering:1 begin:1 linearity:1 notation:1 suffice:1 bounded:2 mass:2 moreover:1 rn0:2 kg:7 panel:3 unspecified:1 medium:1 nonnormal:1 developed:1 berkeley:2 concave:3 decouples:1 k2:2 qm:4 scaled:5 lester:1 unit:1 control:1 enjoy:1 exchangeable:3 timing:1 limit:1 consequence:1 meng:1 path:1 fluctuation:1 ap:1 plus:1 au:2 studied:1 jki:1 equivalence:5 resembles:1 acta:1 appl:1 range:1 practical:1 unique:1 acknowledgment:1 eberle:1 practice:1 bootstrap:1 procedure:5 featured:1 universal:1 empirical:2 convenient:2 matching:1 numbered:1 affer:1 cannot:1 selection:6 operator:6 gelman:2 dzt:1 applying:1 quelques:1 www:1 outweigh:2 deterministic:6 equivalent:5 imposed:1 attention:1 starting:1 convex:2 identifying:1 subgraphs:1 jackson:1 variation:1 coordinate:6 analogous:1 annals:2 target:32 construction:2 exact:7 programming:1 element:1 trend:1 satisfying:2 jk:1 particularly:1 distributional:3 ep:10 solved:1 ensures:1 ordering:1 trade:5 highest:1 consumes:1 yk:9 discriminates:1 pd:2 peled:1 dynamic:2 weakly:1 solving:3 tight:1 creates:1 upon:1 efficiency:1 basis:1 zjj:1 shao:2 easily:1 mh:4 surrounding:1 distinct:2 fast:1 effective:3 shortcoming:1 monte:9 artificial:1 kevin:1 gorham:1 outcome:1 h0:1 apparent:1 encoded:1 stanford:2 valued:6 solve:3 larger:2 statistic:8 g1:8 analyse:1 final:1 sequence:22 differentiable:1 analytical:1 net:1 korattikara:2 mixing:1 flexibility:2 achieve:1 thepprobability:1 kv:2 olkopf:1 recipe:1 exploiting:1 convergence:22 empty:1 optimum:2 neumann:1 y2x:3 produce:1 generating:1 assessing:1 converges:6 r1:1 object:1 zellner:1 illustrate:3 develop:3 depending:3 informs:1 ij:1 eq:8 strong:2 implies:3 peleg:1 quantify:2 rasch:1 closely:2 stochastic:4 crc:1 exchange:1 hx:1 generalization:1 proposition:10 adjusted:1 extension:4 insert:1 correction:3 sufficiently:2 ground:1 normal:6 sgld:6 seed:4 overlaid:1 bj:1 mapping:1 substituting:1 optimizer:1 adopt:3 consecutive:1 earth:1 estimation:2 proc:1 lose:1 largest:1 wl:3 correctness:1 tool:3 weighted:1 hope:1 uller:1 exactness:1 gaussian:8 super:1 pn:1 ej:1 avoid:1 varying:1 thirtieth:1 inherits:1 vk:1 rubenstein:1 bernoulli:1 likelihood:7 greatly:2 sense:2 detect:1 inference:6 underdispersed:1 austerity:1 el:2 dependent:1 vl:5 typically:2 relation:1 quasi:1 selects:1 provably:1 compatibility:2 dual:1 fidelity:1 augment:1 overtaken:1 constrained:3 integration:4 special:1 equal:3 once:1 sampling:5 represents:2 jones:1 icml:3 thin:1 discrepancy:75 minimized:1 t2:1 recommend:1 prostate:1 few:2 supx6:2 mover:1 familiar:1 geometry:2 freedom:1 jessica:1 interest:1 evaluation:7 certainly:1 reinert:1 introduces:2 analyzed:1 mixture:1 kvk:3 diagnostics:2 hg:5 har:1 chain:7 implication:1 kt:2 accurate:1 integral:2 edge:4 partial:1 necessary:2 closer:1 decoupled:1 tweedie:1 taylor:1 walk:1 skirt:1 witnessed:1 xeon:1 corroborate:1 tp:31 measuring:1 whitney:2 deviation:1 vertex:2 uniform:6 
predictor:1 too:2 reported:1 density:4 explores:1 international:4 discriminating:2 siam:1 borgwardt:1 sequel:1 off:5 h2h:1 together:2 continuously:1 central:1 slowly:1 classically:1 american:2 derivative:3 return:3 supp:6 account:4 suggesting:1 de:1 kwl:1 pooled:1 wk:1 student:6 satisfy:1 explicitly:1 vi:9 depends:2 performed:1 view:1 kwk:1 sup:7 bution:1 analyze:1 recover:1 francis:1 parallel:2 defer:1 slope:1 curie:1 ass:2 square:2 wiener:1 variance:11 characteristic:1 efficiently:1 maximized:1 yield:2 identify:1 bayesian:6 carlo:9 researcher:1 presume:1 herding:4 datapoint:2 reach:1 whenever:1 sharing:1 manual:1 sixth:1 streamlined:1 infinitesimal:2 pp:1 celebration:1 proof:2 associated:4 dataset:1 subsection:1 lim:1 hilbert:1 barrio:1 goldstein:1 focusing:1 higher:1 dt:2 follow:1 x6:1 planar:1 improved:1 wei:1 formulation:1 strongly:4 smola:2 hastings:2 ei:2 assessment:3 del:1 minibatch:1 logistic:3 quality:18 hwang:1 grows:1 normalized:2 true:2 hence:5 overdispersed:1 dunning:1 krg:8 criterion:3 leftmost:1 complete:16 demonstrate:1 julia:2 interface:1 reasoning:1 charles:1 common:1 ji:5 volume:2 jl:2 extend:1 tail:1 approximates:2 association:1 refer:1 gibbs:1 smoothness:1 rd:17 unconstrained:2 access:1 ahn:1 gj:3 gt:2 ort:1 posterior:10 multivariate:9 recent:1 perspective:1 discard:1 termed:1 certain:1 binary:2 arbitrarily:1 scoring:1 integrable:1 seen:1 minimum:1 wasserstein:11 additional:2 impose:1 employed:1 recommended:2 ii:3 full:1 sound:1 desirable:1 reduces:1 stem:1 rj:3 smooth:3 gretton:1 calculation:1 offer:1 bach:2 e1:3 converging:2 desideratum:4 regression:3 scalable:2 patient:1 expectation:6 metric:6 poisson:1 arxiv:1 represent:2 adopting:1 accord:2 bimodal:1 kernel:4 mmd:2 c1:8 remarkably:1 median:3 sch:2 biased:6 hz:1 practitioner:1 intermediate:1 iii:2 rendering:1 variety:1 fit:1 restrict:1 impediment:1 reduce:1 andreas:1 computable:4 tradeoff:2 whether:1 expression:1 six:1 url:1 lubin:1 york:2 prefers:1 outward:1 stein:99 ten:1 induces:1 generate:2 http:1 zj:1 diagnostic:7 track:2 per:1 discrete:3 hyperparameter:3 numerica:1 vol:1 drawn:1 gmm:1 diffusion:2 lacoste:1 asymptotically:3 graph:27 year:1 sum:1 run:1 prob:1 package:1 fourth:1 uncertainty:1 extends:1 ipms:1 family:1 almost:1 separation:1 draw:3 appendix:4 summarizes:1 bound:12 barbour:2 distinguish:1 display:2 jji:1 quadratic:1 fan:1 annual:2 constraint:12 precisely:1 x2:1 hy:1 speed:2 min:2 relatively:1 department:2 remain:1 across:1 increasingly:1 smaller:2 lp:2 metropolis:3 modification:1 socg:1 computationally:1 equation:1 remains:1 turn:2 discus:1 count:1 eventually:1 know:1 madeleine:1 tractable:1 end:1 available:1 operation:1 v2:1 generic:2 appropriate:2 away:1 dwt:1 batch:1 remaining:1 ensure:1 graphical:1 opportunity:2 exploit:1 k1:10 establish:2 approximating:1 classical:26 society:1 tensor:1 objective:2 implied:1 added:1 question:3 strategy:1 primary:2 valued1:1 kantorovich:1 illuminate:1 exhibit:1 gradient:7 surrogate:1 gin:1 distance:11 separate:1 thank:1 considers:1 enforcing:2 ru:2 length:3 code:1 robert:1 statement:1 relate:1 sharper:1 gk:27 trace:1 rise:1 design:1 implementation:2 zt:4 teh:1 upper:3 boot:1 etude:1 markov:5 finite:4 langevin:6 extended:1 nonuniform:1 provenance:1 introduced:4 pair:7 gurobi:4 c3:4 lymph:1 california:1 bout:3 quadratically:1 established:1 brook:2 address:3 able:1 proceeds:1 xm:2 khkh:1 challenge:3 spanner:24 program:12 including:2 max:11 power:1 suitable:4 critical:1 natural:1 quantification:2 difficulty:2 business:1 turning:1 hr:3 improve:1 julien:1 faced:1 
prior:2 geometric:3 literature:3 deviate:1 probab:2 shvartsman:1 determining:1 asymptotic:7 rationale:1 outlining:1 generator:5 degree:1 affine:1 sufficient:2 inexactness:1 vij:1 intractability:1 land:1 cancer:2 twosample:1 free:2 infeasible:2 bias:14 institute:1 wide:1 template:1 characterizing:2 sparse:2 ghz:1 tolerance:1 feedback:1 boundary:8 dimension:2 xn:2 evaluating:1 default:1 contour:1 curve:1 maxl:1 qn:1 commonly:2 collection:3 qualitatively:1 author:1 welling:4 transaction:1 functionals:2 approximate:7 emphasize:1 cutting:1 uai:1 handbook:1 unnecessary:1 xi:15 ripley:1 continuous:1 chief:1 dilation:1 promising:1 zk:4 ca:1 e5:1 alg:1 complex:2 constructing:1 domain:5 spread:1 bounding:5 arise:2 hyperparameters:1 n2:1 x1:3 intel:1 representative:1 g2g:2 slow:1 aid:1 ny:2 deterministically:1 explicit:1 exponential:1 xl:4 candidate:4 third:2 hw:1 bij:1 rk:3 z0:1 theorem:7 covariate:1 udell:1 kvi:1 decay:1 normalizing:2 intractable:3 exists:1 kr:4 magnitude:2 budget:2 chatterjee:2 kx:12 sorting:1 chen:3 suited:1 rg:9 univariate:4 erratum:1 g2:3 springer:1 truth:1 satisfies:1 determines:1 dh:6 acm:2 obozinski:1 conditional:1 goal:1 quantifying:2 ann:1 eventual:1 lipschitz:2 fisher:1 feasible:1 specifically:1 infinite:1 sampler:7 acting:1 wt:1 dwk:4 total:2 indicating:1 select:2 support:2 overdamped:2 mcmc:5 |
5,267 | 5,769 | Biologically Inspired Dynamic Textures
for Probing Motion Perception
Andrew Isaac Meso
Institut de Neurosciences de la Timone
UMR 7289 CNRS/Aix-Marseille Université
13385 Marseille Cedex 05, FRANCE
andrew.meso@univ-amu.fr
Jonathan Vacher
CNRS UNIC and Ceremade
Univ. Paris-Dauphine
75775 Paris Cedex 16, FRANCE
vacher@ceremade.dauphine.fr
Laurent Perrinet
Institut de Neurosciences de la Timone
UMR 7289 CNRS/Aix-Marseille Université
13385 Marseille Cedex 05, FRANCE
laurent.perrinet@univ-amu.fr
Gabriel Peyré
CNRS and Ceremade
Univ. Paris-Dauphine
75775 Paris Cedex 16, FRANCE
peyre@ceremade.dauphine.fr
Abstract
Perception is often described as a predictive process based on an optimal inference
with respect to a generative model. We study here the principled construction
of a generative model specifically crafted to probe motion perception. In that
context, we first provide an axiomatic, biologically-driven derivation of the model.
This model synthesizes random dynamic textures which are defined by stationary
Gaussian distributions obtained by the random aggregation of warped patterns.
Importantly, we show that this model can equivalently be described as a stochastic
partial differential equation. Using this characterization of motion in images, it
allows us to recast motion-energy models into a principled Bayesian inference
framework. Finally, we apply these textures in order to psychophysically probe
speed perception in humans. In this framework, while the likelihood is derived
from the generative model, the prior is estimated from the observed results and
accounts for the perceptual bias in a principled fashion.
1
Motivation
A normative explanation for the function of perception is to infer relevant hidden parameters from
the sensory input with respect to a generative model [7]. Equipped with some prior knowledge
about this representation, this corresponds to the Bayesian brain hypothesis, as has been perfectly
illustrated by the particular case of motion perception [19]. However, the Gaussian hypothesis
related to the parameterization of knowledge in these models (for instance in the formalization of the prior and of the likelihood functions) does not always fit with psychophysical results [17].
As such, a major challenge is to refine the definition of generative models so that they conform to
the widest variety of results.
From this observation, the estimation problem inherent to perception is linked to the definition of an
adequate generative model. In particular, the simplest generative model to describe visual motion
is the luminance conservation equation. It states that luminance I(x, t) for (x, t) ∈ ℝ² × ℝ is approximately conserved along trajectories defined as integral lines of a vector field v(x, t) ∈ ℝ² × ℝ. The corresponding generative model defines random fields as solutions to the stochastic partial
differential equation (sPDE),
    ⟨v, ∇I⟩ + ∂I/∂t = W,   (1)
where ⟨·, ·⟩ denotes the Euclidean scalar product in ℝ² and ∇I is the spatial gradient of I. To match the statistics of natural scenes or some category of textures, the driving term W is usually defined as a colored noise corresponding to some average spatio-temporal coupling, and is parameterized by a covariance matrix Σ, while the field is usually a constant vector v(x, t) = v₀ accounting for a full-field translation with constant speed.
Ultimately, the application of this generative model is essential for probing the visual system, for
instance to understand how observers might detect motion in a scene. Indeed, as shown by [9, 19],
the negative log-likelihood corresponding to the luminance conservation model (1) and determined by a hypothesized speed v0 is proportional to the value of the motion-energy model [1]
$$\left\|\langle v_0, \nabla(K \star I)\rangle + \frac{\partial (K \star I)}{\partial t}\right\|^2,$$
where K is the whitening filter corresponding to the inverse of Σ, and ⋆ is the convolution operator. Using some prior knowledge on the distribution of motions, for
instance a preference for slow speeds, this indeed leads to a Bayesian formalization of this inference
problem [18]. This has been successful in accounting for a large class of psychophysical observations [19]. As a consequence, such probabilistic frameworks allow one to connect different models
from computer vision to neuroscience with a unified, principled approach.
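To make the motion-energy computation concrete, here is a minimal numpy sketch (not from the paper): it scores candidate speeds v₀ on a toy translating movie using finite differences, and omits the whitening filter K (taken as the identity). Grid sizes, the low-pass pattern and the candidate speeds are illustrative assumptions.

```python
import numpy as np

def motion_energy(I, v0, dt=1.0):
    """Motion-energy score ||<v0, grad I> + dI/dt||^2 for a movie I of
    shape (T, H, W); the whitening filter K is omitted (identity) here."""
    dIdt = (I[1:] - I[:-1]) / dt                    # temporal derivative
    dIdy, dIdx = np.gradient(I[:-1], axis=(1, 2))   # spatial gradient (y, x)
    residual = v0[0] * dIdx + v0[1] * dIdy + dIdt   # conservation residual
    return float(np.sum(residual ** 2))

# toy movie: a smooth random pattern translating at 1 pixel/frame along x
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
fy, fx = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
frame = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.exp(-(fx**2 + fy**2) / 0.005)))
I = np.stack([np.roll(frame, t, axis=1) for t in range(16)])

for v in [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]:
    print(v, motion_energy(I, v))   # the score should be smallest near v = (1, 0)
```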
However the model defined in (1) is obviously quite simplistic with respect to the complexity of natural scenes. It is therefore useful here to relate this problem to solutions proposed by texture synthesis
methods in the computer vision community. Indeed, the literature on the subject of static textures
synthesis is abundant (see [16] and the references therein for applications in computer graphics).
Of particular interest for us is the work of Galerne et al. [6], which proposes a stationary Gaussian
model restricted to static textures. Realistic dynamic texture models are however less studied, and
the most prominent method is the non-parametric Gaussian auto-regressive (AR) framework of [3],
which has been refined in [20].
Contributions. Here, we seek to engender a better understanding of motion perception by improving generative models for dynamic texture synthesis. From that perspective, we motivate the
generation of optimal stimulation within a stationary Gaussian dynamic texture model. We base our
model on a previously defined heuristic [10, 11] coined ?Motion Clouds?. Our first contribution is
Figure 1: Parameterization of the class of Motion Clouds stimuli. The illustration relates the
parametric changes in MC with real world (top row) and observer (second row) movements.
(A) Orientation changes resulting in scene rotation are parameterized through θ, as shown in
the bottom row, where a horizontal (a) and an obliquely oriented (b) MC are compared. (B) Zoom
movements, either from scene looming or observer movements in depth, are characterised by
scale changes reflected by a scale or frequency term z, shown for a larger or closer object (b)
compared to a more distant one (a). (C) Translational movements in the scene are characterised by V,
using the same formulation for static (a), slow (b) and fast (c) moving MC, with the variability in
these speeds quantified by σ_V. (ξ and τ) in the third row are the spatial and temporal frequency
scale parameters. The development of this formulation is detailed in the text.
an axiomatic derivation of this model, seen as a shot-noise aggregation of dynamically warped "textons". This formulation is important to provide a clear understanding of the effects of the model's
parameters manipulated during psychophysical experiments. Within our generative model, they
correspond to average translation speed and orientation of the "textons" and standard deviations
of random fluctuations around this average. Our second contribution (proved in the supplementary materials) is to demonstrate an explicit equivalence between this model and a class of linear
stochastic partial differential equations (sPDE). This shows that our model is a generalization of the
well-known luminance conservation equation. This sPDE formulation has two chief advantages: it
allows for a real-time synthesis using an AR recurrence and it allows one to recast the log-likelihood
of the model as a generalization of the classical motion energy model, which in turn is crucial to
allow for a Bayesian modeling of perceptual biases. Our last contribution is an illustrative application of this model to the psychophysical study of motion perception in humans. This application
shows how the model allows us to define a likelihood, which enables a simple fitting procedure to
determine the prior driving the perceptual bias.
Notations. In the following, we denote by (x, t) ∈ ℝ² × ℝ the space/time variable, and by (ξ, τ) ∈ ℝ² × ℝ the corresponding frequency variables. If f(x, t) is a function defined on ℝ³, then f̂(ξ, τ) denotes its Fourier transform. For ξ ∈ ℝ², we denote ξ = ‖ξ‖(cos(∠ξ), sin(∠ξ)) ∈ ℝ² its polar coordinates. For a function g on ℝ², we denote ḡ(x) = g(−x). A capital letter such as A denotes a random variable, a denotes a realization of A, and P_A(a) is the corresponding distribution of A.
2 Axiomatic Construction of a Dynamic Texture Stimulation Model
Solving a model-based estimation problem and finding optimal dynamic textures for stimulating an
instance of such a model can be seen as equivalent mathematical problems. In the luminance conservation model (1), the generative model is parameterized by a spatio-temporal coupling function,
which is encoded in the covariance Σ of the driving noise and the motion flow v₀. This coupling
(covariance) is essential as it quantifies the extent of the spatial integration area as well as the integration dynamics, an important issue in neuroscience when considering the implementation of
integration mechanisms from the local to the global scale. In particular, it is important to understand
modular sensitivity in the various lower visual areas with different spatio-temporal selectivities such
as Primary Visual Cortex (V1) or ascending the processing hierarchy, Middle Temple area (MT).
For instance, by varying the frequency bandwidth of such dynamic textures, distinct mechanisms
for perception and action have been identified [11]. However, such textures were based on a heuristic [10], and our goal here is to develop a principled, axiomatic definition.
2.1 From Shot Noise to Motion Clouds
We propose a mathematically-sound derivation of a general parametric model of dynamic textures.
This model is defined by aggregation, through summation, of a basic spatial "texton" template g(x).
The summation reflects a transparency hypothesis, which has been adopted for instance in [6]. While
one could argue that this hypothesis is overly simplistic and does not model occlusions or edges, it
leads to a tractable framework of stationary Gaussian textures, which has proved useful to model
static micro-textures [6] and dynamic natural phenomena [20]. The simplicity of this framework
allows for a fine tuning of frequency-based (Fourier) parameterization, which is desirable for the
interpretation of psychophysical experiments.
We define a random field as
$$I_\lambda(x, t) \stackrel{\text{def.}}{=} \frac{1}{\sqrt{\lambda}} \sum_{p \in \mathbb{N}} g\big(\varphi_{A_p}(x - X_p - V_p t)\big) \qquad (2)$$
where φ_a : ℝ² → ℝ² is a planar warping parameterized by a finite-dimensional vector a. Intuitively,
this model corresponds to a dense mixing of stereotyped, static textons as in [6]. The originality is
two-fold. First, the components of this mixing are derived from the texton by visual transformations
φ_{A_p}, which may correspond to arbitrary transformations such as zooms or rotations, illustrated in
Figure 1. Second, we explicitly model the motion (position Xp and speed Vp ) of each individual
texton. The parameters (X_p, V_p, A_p)_{p∈ℕ} are independent random vectors. They account for the
variability in the position of objects or observers and their speed, thus mimicking natural motions in
an ambient scene. The set of translations (X_p)_{p∈ℕ} is a 2-D Poisson point process of intensity λ > 0.
The following section instantiates this idea and proposes canonical choices for these variabilities.
The warping parameters (Ap )p are distributed according to a distribution PA . The speed parameters
(Vp )p are distributed according to a distribution PV on R2 . The following result shows that the
model (2) converges to a stationary Gaussian field and gives the parameterization of the covariance.
Its proof follows from a specialization of [5, Theorem 3.1] to our setting.
Proposition 1. I_λ is stationary with bounded second-order moments. Its covariance is
γ(x, t, x′, t′) = γ(x − x′, t − t′), where γ satisfies
$$\forall (x, t) \in \mathbb{R}^3, \quad \gamma(x, t) = \int_{\mathbb{R}^2}\!\!\int c_g\big(\varphi_a(x - \nu t)\big)\, P_V(\nu)\, P_A(a)\, d\nu\, da \qquad (3)$$
where c_g = g ⋆ ḡ is the auto-correlation of g. When λ → +∞, it converges (in the sense of finite-dimensional distributions) toward a stationary Gaussian field I of zero mean and covariance γ.
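To make Proposition 1 concrete, the following sketch simulates the shot-noise field (2) directly for a finite λ. The Gabor texton and all distributional choices (uniform rotations, log-normal-like zooms, Gaussian speed jitter) are illustrative stand-ins for the principled choices given in Section 2.3.

```python
import numpy as np

rng = np.random.default_rng(1)

def gabor(x, y, sigma=0.08, f0=12.0):
    """Small oriented Gabor texton g; sigma and f0 are illustrative values."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * x)

def shot_noise_movie(n=64, n_frames=8, lam=100, v0=(0.05, 0.0)):
    """Finite-lambda sample of I_lambda(x, t) of eq. (2) on [0,1]^2."""
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    P = rng.poisson(lam)                                    # number of textons
    pos = rng.uniform(0.0, 1.0, (P, 2))                     # positions X_p
    V = np.array(v0) + 0.01 * rng.standard_normal((P, 2))   # speeds V_p
    theta = rng.uniform(-np.pi, np.pi, P)                   # rotations (A_p)
    z = np.exp(0.2 * rng.standard_normal(P))                # zooms (A_p)
    movie = np.zeros((n_frames, n, n))
    for t in range(n_frames):
        for p in range(P):
            dx = X - pos[p, 0] - V[p, 0] * t
            dy = Y - pos[p, 1] - V[p, 1] * t
            c, s = np.cos(theta[p]), np.sin(theta[p])
            # phi_a(x) = z R_{-theta}(x)
            movie[t] += gabor(z[p] * (c * dx + s * dy), z[p] * (-s * dx + c * dy))
    return movie / np.sqrt(lam)

movie = shot_noise_movie()
print(movie.shape, float(movie.std()))
```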
2.2 Definition of "Motion Clouds"
We detail this model here with warpings as rotations and scalings (see Figure 1). These account for
the characteristic orientations and sizes (or spatial scales) in a scene with respect to the observer
$$\forall\, a = (\theta, z) \in [-\pi, \pi) \times \mathbb{R}^*_+, \qquad \varphi_a(x) \stackrel{\text{def.}}{=} z R_{-\theta}(x),$$
where R_θ is the planar rotation of angle θ. We now give some physical and biological motivation
underlying our particular choice for the distributions of the parameters. We assume that the distributions P_Z and P_Θ of spatial scales z and orientations θ, respectively (see Figure 1), are independent
and have densities; thus, for a = (θ, z) ∈ [−π, π) × ℝ*₊, P_A(a) = P_Z(z) P_Θ(θ). The
speed vector ν is assumed to be randomly fluctuating around a central speed v₀, so that
$$\forall\, \nu \in \mathbb{R}^2, \qquad P_V(\nu) = P_{\|V - v_0\|}(\|\nu - v_0\|). \qquad (4)$$
In order to obtain "optimal" responses to the stimulation (as advocated by [21]), it makes sense to
define the texton g to be equal to an oriented Gabor acting as an atom, based on the structure of
a standard receptive field of V1. Each would have a scale σ and a central frequency ξ₀. Since the
orientation and scale of the texton are handled by the (θ, z) parameters, we can impose without loss of
generality the normalization ξ₀ = (1, 0). In the special case where σ → 0, g is a grating of frequency
ξ₀, and the image I is a dense mixture of drifting gratings, whose power spectrum has a closed-form
expression detailed in Proposition 2. Its proof can be found in the supplementary materials. We call
this Gaussian field a Motion Cloud (MC), and it is parameterized by the envelopes (P_Z, P_Θ, P_V) and
has central frequency and speed (ξ₀, v₀). Note that it is possible to consider arbitrary textons
g, which would give rise to more complicated parameterizations for the power spectrum ĝ, but we
decided here to stick to the simple case of gratings.
Proposition 2. When g(x) = e^{i⟨x, ξ₀⟩}, the image I defined in Proposition 1 is a stationary Gaussian
field of covariance having the power spectrum
$$\forall (\xi, \tau) \in \mathbb{R}^2 \times \mathbb{R}, \quad \hat\gamma(\xi, \tau) = \frac{P_Z(\|\xi\|)}{\|\xi\|^2}\, P_\Theta(\angle\xi)\, \mathcal{L}(P_{\|V - v_0\|})\!\left(\frac{\tau + \langle v_0, \xi\rangle}{\|\xi\|}\right), \qquad (5)$$
where the linear transform $\mathcal{L}$ is such that $\forall u \in \mathbb{R},\ \mathcal{L}(f)(u) = \int_{-\pi}^{\pi} f(-u/\cos(\varphi))\, d\varphi$.
Remark 1. Note that the envelope of γ̂ is shaped along a cone in the spatial and temporal domains.
This is an important and novel contribution when compared to a Gaussian formulation like a classical Gabor. In particular, the bandwidth is then constant around the speed plane or the orientation
line with respect to spatial frequency. Basing the generation of the textures on all possible translations, rotations and zooms, we thus provide a principled approach to show that bandwidth should be
proportional to spatial frequency to provide a better model of moving textures.
2.3 Biologically-inspired Parameter Distributions
We now give meaningful specializations for the probability distributions (P_Z, P_Θ, P_{‖V−v₀‖}), which
are inspired by some known scaling properties of the visual transformations relevant to dynamic
scene perception.
First, small, centered, linear movements of the observer along the axis of view (orthogonal to the
plane of the scene) generate centered planar zooms of the image. From the linear modeling of the
observer?s displacement and the subsequent multiplicative nature of zoom, scaling should follow a
Weber-Fechner law stating that subjective sensation when quantified is proportional to the logarithm
of stimulus intensity. Thus, we choose the scaling z drawn from a log-normal distribution PZ ,
defined in (6). The bandwidth σ_Z quantifies the variance in the amplitude of zooms of individual
textons relative to the set characteristic scale z0 . Similarly, the texture is perturbed by variation in the
global angle θ of the scene: for instance, the head of the observer may roll slightly around its normal
position. The von Mises distribution, as a good approximation of the wrapped Gaussian distribution
around the unit circle, is an adapted choice for the distribution of θ, with mean θ₀ and bandwidth
σ_Θ, see (6). We may similarly consider that the position of the observer is variable in time. On first
order, movements perpendicular to the axis of view dominate, generating random perturbations to
the global translation v₀ of the image at speed ν − v₀ ∈ ℝ². These perturbations are for instance
described by a Gaussian random walk: take for instance tremors, which are constantly jittering,
small (≤ 1 deg) movements of the eye. This justifies the choice of a radial distribution (4) for
P_V. This radial distribution P_{‖V−v₀‖} is thus selected as a bell-shaped function of width σ_V, and we
choose here a Gaussian function for simplicity, see (6). Note that, as detailed in the supplementary
material, a slightly different bell function (with a more complicated expression) should be used to obtain an
exact equivalence with the sPDE discretization mentioned in Section 4.
The distributions of the parameters are thus chosen as
$$P_Z(z) \propto \frac{z_0}{z}\, e^{-\frac{\ln(z/z_0)^2}{2\ln(1+\sigma_Z^2)}}, \qquad P_\Theta(\theta) \propto e^{\frac{\cos(2(\theta-\theta_0))}{4\sigma_\Theta^2}} \qquad \text{and} \qquad P_{\|V-v_0\|}(r) \propto e^{-\frac{r^2}{2\sigma_V^2}}. \qquad (6)$$
Remark 2. Note that in practice we have parametrized P_Z by its mode $m_Z = \operatorname{argmax}_z P_Z(z)$ and standard deviation $d_Z = \sqrt{\int z^2 P_Z(z)\, dz}$; see the supplementary material and [4].
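A hedged sketch of how one might synthesize a Motion Cloud in the Fourier domain from the power spectrum (5) with the envelopes (6): draw complex white noise, scale its FFT by the square root of γ̂, and invert. The transform L(P_{‖V−v₀‖}) is replaced here by a plain Gaussian, and grid sizes and parameter values are arbitrary; this is not the authors' reference implementation (which is available at the URL given in the conclusion).

```python
import numpy as np

def mc_spectrum(n=64, T=64, v0=(0.5, 0.0), z0=0.15, sig_z=0.35,
                theta0=0.0, sig_t=0.4, sig_v=0.05):
    """Unnormalized gamma_hat(xi, tau) of eq. (5) with the envelopes (6);
    L(P_{||V-v0||}) is approximated by a Gaussian of width sig_v here."""
    f, tau = np.fft.fftfreq(n), np.fft.fftfreq(T)
    FX, FY, TAU = np.meshgrid(f, f, tau, indexing="ij")
    r = np.sqrt(FX**2 + FY**2) + 1e-9               # ||xi||
    ang = np.arctan2(FY, FX)                        # angle of xi
    P_Z = (z0 / r) * np.exp(-np.log(r / z0) ** 2 / (2 * np.log(1 + sig_z**2)))
    P_T = np.exp(np.cos(2 * (ang - theta0)) / (4 * sig_t**2))
    u = (TAU + v0[0] * FX + v0[1] * FY) / r         # distance to the speed plane
    spec = P_Z / r**2 * P_T * np.exp(-u**2 / (2 * sig_v**2))
    spec[0, 0, :] = 0.0                             # remove the DC component
    return spec

# synthesis: scale the FFT of complex white noise by sqrt(spec) and invert;
# taking the real part symmetrizes the spectrum, which is fine for a sketch
rng = np.random.default_rng(2)
spec = mc_spectrum()
noise = rng.standard_normal(spec.shape) + 1j * rng.standard_normal(spec.shape)
movie = np.real(np.fft.ifftn(np.sqrt(spec) * noise))
print(movie.shape)  # (x, y, t)
```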
[Figure 2 graphics: two projections of γ̂ in Fourier space (left); MC movies at two different spatial frequencies z (right).]
Figure 2: Graphical representation of the covariance γ (left), note the cone-like shape of the
envelopes, and an example of synthesized dynamics for narrow-band and broad-band Motion
Clouds (right).
Plugging these expressions (6) into the definition (5) of the power spectrum of the motion cloud,
one obtains a parameterization which is very similar to the one originally introduced in [11]. The
following table gives the speed v₀ and frequency (θ₀, z₀) central parameters in terms of amplitude
and orientation, each one being coupled with the relevant dispersion parameters. Figures 1 and 2
show a graphical display of the influence of these parameters.

    Speed: (mean, dispersion) = (v₀, σ_V)
    Freq. orientation: (mean, dispersion) = (θ₀, σ_Θ)
    Freq. amplitude: (mean, dispersion) = (z₀, σ_Z) or (m_Z, d_Z)
Remark 3. Note that the final envelope of γ̂ is in agreement with the formulation that is used in [10].
However, that previous derivation was based on a heuristic which intuitively emerged from a long
interaction between modelers and psychophysicists. Herein, we justified these different points from
first principles.
Remark 4. The MC model can equally be described as a stationary solution of a stochastic partial
differential equation (sPDE). This sPDE formulation is important since we aim to deal with dynamic
stimulation, which should be described by a causal equation which is local in time. This is crucial
for numerical simulations, since it allows us to perform real-time synthesis of stimuli using an
auto-regressive time discretization. This is a significant departure from previous Fourier-based implementation of dynamic stimulation [10, 11]. This is also important to simplify the application
of MC inside a Bayesian model of psychophysical experiments (see Section 3). The derivation of an
equivalent sPDE model exploits a spectral formulation of MCs as Gaussian Random fields. The full
proof along with the synthesis algorithm can be found in the supplementary material.
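The exact sPDE and its AR(2) discretization are given in the supplementary material; the snippet below is only a generic illustration of the idea of causal, recurrent synthesis: it integrates a damped advection equation driven by spatially colored noise with an explicit Euler (AR(1)) step. All constants are placeholder values, not those of the MC model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, v0, damping, dt = 64, (1.0, 0.0), 0.1, 1.0

fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
coloring = np.exp(-(fx**2 + fy**2) / 0.002)        # spatial coloring of W

def colored_noise():
    w = rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(np.fft.fft2(w) * coloring))

I = np.zeros((n, n))
for t in range(100):
    dIdy, dIdx = np.gradient(I)
    # AR(1)-style Euler step for  dI/dt = -<v0, grad I> - damping * I + W
    I = I + dt * (-(v0[0] * dIdx + v0[1] * dIdy) - damping * I + colored_noise())
print(float(I.std()))
```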
3 Psychophysical Study: Speed Discrimination
To exploit the useful features of our MC model and provide a generalizable proof of concept based
on motion perception, we consider here the problem of judging the relative speed of moving dynamical textures and the impact of both average spatial frequency and average duration of temporal
correlations.
3.1 Methods
The task was to discriminate the speed v ∈ ℝ of MC stimuli moving with a horizontal central
speed v = (v, 0). We assign as independent experimental variable the most represented spatial
frequency m_Z, which we denote in the following z for easier reading. The other parameters are
set to the following values: σ_V = 1/(t* z), θ₀ = π/2, σ_Θ = π/12, and d_Z = 1.0 c/°. Note
that σ_V is thus dependent on the value of z (which is computed from m_Z and d_Z, see Remark 2
and the supplementary material) to ensure that t* = 1/(σ_V z) stays constant. This parameter t* controls the
temporal frequency bandwidth, as illustrated on the middle of Figure 2. We used a two alternative
forced choice (2AFC) paradigm. In each trial a grey fixation screen with a small dark fixation spot
was followed by two stimulus intervals of 250 ms each, separated by a grey 250 ms inter-stimulus
interval. The first stimulus had parameters (v1 , z1 ) and the second had parameters (v2 , z2 ). At the
end of the trial, a grey screen appeared asking the participant to report which one of the two intervals
was perceived as moving faster by pressing one of two buttons, that is whether v1 > v2 or v2 > v1 .
Given reference values (v*, z*), for each trial, (v₁, z₁) and (v₂, z₂) are selected so that
$$v_i = v^\star,\ z_i \in z^\star + \Delta_Z \qquad \text{and} \qquad v_j \in v^\star + \Delta_V,\ z_j = z^\star,$$
where Δ_V = {−2, −1, 0, 1, 2} and Δ_Z = {−0.48, −0.21, 0, 0.32, 0.85},
where (i, j) = (1, 2) or (i, j) = (2, 1) (i.e. the ordering is randomized across trials), and where z
values are expressed in cycles per degree (c/°) and v values in °/s. Ten repetitions of each of the 25
possible combinations of these parameters are made per block of 250 trials and at least four such
blocks were collected per condition tested. The outcome of these experiments are summarized by
psychometric curves φ̄_{v*,z*}, where for all (v − v*, z − z*) ∈ Δ_V × Δ_Z, the value φ̄_{v*,z*}(v, z) is
the empirical probability (each averaged over the typically 40 trials) that a stimulus generated with
parameters (v*, z) is moving faster than a stimulus with parameters (v, z*).
To assess the validity of our model, we tested four different scenarios by considering all possible
choices among z* = 1.28 c/°, v* ∈ {5°/s, 10°/s}, and t* ∈ {0.1 s, 0.2 s}, which corresponds
to combinations of low/high speeds and a pair of temporal frequency parameters. Stimuli were generated on a Mac running OS 10.6.8 and displayed on a 20″ Viewsonic p227f monitor with resolution
1024 × 768 at 100 Hz. Routines were written using Matlab 7.10.0, and Psychtoolbox 3.0.9 controlled the stimulus display. Observers sat 57 cm from the screen in a dark room. Three observers
with normal or corrected to normal vision took part in these experiments. They gave their informed
consent and the experiments received ethical approval from the Aix-Marseille Ethics Committee in
accordance with the declaration of Helsinki.
3.2 Bayesian modeling
To make full use of our MC paradigm in analyzing the obtained results, we follow the methodology
of the Bayesian observer used for instance in [13, 12, 8]. We assume the observer makes its decision using a Maximum A Posteriori (MAP) estimator
$$\hat v_z(m) = \operatorname*{argmin}_{v} \left[-\log(P_{M|V,Z}(m|v, z)) - \log(P_{V|Z}(v|z))\right]$$
computed from some internal representation m ∈ ℝ of the observed stimulus. For
simplicity, we assume that the observer estimates z from m without bias. To simplify the numerical
analysis, we assume that the likelihood is Gaussian, with a variance independent of v. Furthermore,
we assume that the prior is Laplacian as this gives a good description of the a priori statistics of
speeds in natural images [2]:
$$P_{M|V,Z}(m|v, z) = \frac{1}{\sqrt{2\pi}\,\sigma_z}\, e^{-\frac{|m-v|^2}{2\sigma_z^2}} \qquad \text{and} \qquad P_{V|Z}(v|z) \propto e^{a_z v}\, \mathbb{1}_{[0, v_{\max}]}(v), \qquad (7)$$
where v_max > 0 is a cutoff speed ensuring that P_{V|Z} is a well-defined density even if a_z > 0.
Both a_z and σ_z are unknown parameters of the model, and are obtained from the outcome of the
experiments by a fitting process we now explain.
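Note that under the parameterization (7) the MAP estimate has a simple closed form: the exponential prior shifts the peak of the Gaussian likelihood by a_z σ_z², so v̂_z(m) = m + a_z σ_z² (clipped to [0, v_max]). A small numpy sanity check of this claim (all numeric values are arbitrary):

```python
import numpy as np

def map_speed(m, a_z, sigma_z, v_max=20.0):
    """MAP under (7): argmin_v (m - v)^2 / (2 sigma_z^2) - a_z v, i.e. the
    likelihood peak shifted by a_z * sigma_z^2, clipped to [0, v_max]."""
    return float(np.clip(m + a_z * sigma_z**2, 0.0, v_max))

# brute-force check on a dense grid
m, a_z, sigma_z = 6.0, -0.2, 1.5
v = np.linspace(0.0, 20.0, 200001)
neg_log_post = (m - v) ** 2 / (2 * sigma_z**2) - a_z * v
print(map_speed(m, a_z, sigma_z), v[np.argmin(neg_log_post)])  # both ~5.55
```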
3.3 Likelihood and Prior Estimation
Following for instance [13, 12, 8], the theoretical psychophysical curve obtained by a Bayesian
decision model is
$$\varphi_{v^\star,z^\star}(v, z) \stackrel{\text{def.}}{=} \mathbb{E}\big(\hat v_{z^\star}(M_{v,z^\star}) > \hat v_z(M_{v^\star,z})\big)$$
where M_{v,z} ∼ 𝒩(v, σ_z²) is a Gaussian variable having the distribution P_{M|V,Z}(·|v, z).
The following proposition shows that in our special case of a Laplacian prior and a Gaussian likelihood,
it can be computed in closed form. Its proof follows closely the derivation of [12, Appendix A], and
can be found in the supplementary materials.
Proposition 3. In the special case of the estimator (3.2) with the parameterization (7), one has
$$\varphi_{v^\star,z^\star}(v, z) = \Phi\!\left(\frac{v - v^\star - a_{z^\star}\sigma_{z^\star}^2 + a_z\sigma_z^2}{\sqrt{\sigma_{z^\star}^2 + \sigma_z^2}}\right) \qquad (8)$$
where $\Phi(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t} e^{-s^2/2}\, ds$ is a sigmoid function.
One can fit the experimental psychometric function to compute the perceptual bias term μ_{z,z*} ∈ ℝ
and an uncertainty σ_{z,z*} such that
$$\bar\varphi_{v^\star,z^\star}(v, z) \approx \Phi\!\left(\frac{v - v^\star - \mu_{z,z^\star}}{\sigma_{z,z^\star}}\right).$$
Remark 5. Note that in practice we perform the fit in the log-speed domain, i.e. we consider φ̄_{ṽ*,z*}(ṽ, z)
where ṽ = ln(1 + v/v₀) with v₀ = 0.3°/s, following [13].
By comparing the theoretical and experimental psychophysical curves (8) and (3.3), one thus obtains
the following expressions:
$$\sigma_z^2 = \sigma_{z,z^\star}^2 - \tfrac{1}{2}\sigma_{z^\star,z^\star}^2 \qquad \text{and} \qquad a_z = \frac{a_{z^\star}\sigma_{z^\star}^2 - \mu_{z,z^\star}}{\sigma_z^2}.$$
The only remaining unknown is a_{z*}, which can be set to any negative number based on previous work on low-speed priors
or, alternatively, estimated in future work by performing a wiser fitting method.
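A sketch of the fitting pipeline implied by (8) and the relations above: fit (μ_{z,z*}, σ_{z,z*}) to each empirical psychometric curve by maximum likelihood, then invert the closed-form relations to recover σ_z² and a_z given a choice of a_{z*}. The grid-search optimizer and the toy data are placeholders, not the authors' procedure.

```python
import numpy as np
from math import erf

def Phi(t):
    """Standard normal CDF, the sigmoid of eq. (8)."""
    return 0.5 * (1.0 + np.vectorize(erf)(t / np.sqrt(2.0)))

def fit_psychometric(v_vals, k_faster, n_trials, v_star):
    """Grid-search ML fit of Phi((v - v* - mu) / sig) to binomial counts."""
    best, best_ll = (0.0, 1.0), -np.inf
    for mu in np.linspace(-2.0, 2.0, 201):
        for sig in np.linspace(0.05, 3.0, 120):
            p = np.clip(Phi((v_vals - v_star - mu) / sig), 1e-6, 1 - 1e-6)
            ll = np.sum(k_faster * np.log(p) + (n_trials - k_faster) * np.log(1 - p))
            if ll > best_ll:
                best_ll, best = ll, (mu, sig)
    return best

# toy data around v* = 10 deg/s, 40 trials per test speed
rng = np.random.default_rng(4)
v_star, mu_true, sig_true, n = 10.0, 0.4, 0.8, 40
v_vals = v_star + np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
k = rng.binomial(n, Phi((v_vals - v_star - mu_true) / sig_true))
mu, sig = fit_psychometric(v_vals, k, n, v_star)

# invert the closed-form relations; sig_ref plays the role of sigma_{z*,z*}
sig_ref, a_ref = 0.7, -0.1
sigma_ref2 = 0.5 * sig_ref**2            # sigma_{z*}^2 = sigma_{z*,z*}^2 / 2
sigma_z2 = sig**2 - 0.5 * sig_ref**2
a_z = (a_ref * sigma_ref2 - mu) / sigma_z2
print(mu, sig, sigma_z2, a_z)
```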
3.4 Psychophysical Results
The main results are summarized in Figure 3, showing the parameters μ_{z,z*} in Figure 3(a) and the
parameters σ_z in Figure 3(b). Spatial frequency has a positive effect on perceived speed: speed is
systematically perceived as faster as spatial frequency is increased. Moreover, this shift cannot simply
be explained as the result of an increase in the likelihood width (Figure 3(b)) at the tested spatial
frequency, as previously observed for contrast changes [13, 12]. Therefore the positive effect could
be explained by a negative effect in the prior slopes a_z as the spatial frequency increases. However, we
do not have any explanation for the observed constant likelihood width, as it is not consistent with
the speed width of the stimuli σ_V = 1/(t* z), which is decreasing with spatial frequency.
3.5 Discussion
We exploited the principled and ecologically motivated parameterization of MC to ask about the effect of scene scaling on speed judgements. In the experimental task, MC stimuli, in which the spatial
scale content was systematically varied (via frequency manipulations) around a central frequency of
1.28 c/°, were found to be perceived as slightly faster at higher frequencies and slightly slower at lower
frequencies. The effects were most prominent at the faster speed tested, 10°/s, relative to those at
5°/s. The fitted psychometric functions were compared to those predicted by a Bayesian model in
which the likelihood, or the observer's sensory representation, was characterised by a simple Gaussian. Indeed, for this small data set intended as a proof of concept, the model was able to explain
[Figure 3 graphics: (a) PSE bias (μ_{z,z*}) and (b) likelihood width (σ_z) as functions of spatial frequency (z) in cycles/deg, for Subject 1 and Subject 2, under the conditions v* = 5 or 10°/s and t* = 100 or 200 ms.]
Figure 3: 2AFC speed discrimination results. (a) Task generates psychometric functions which
show shifts in the point of subjective equality for the range of test z. Stimuli of lower frequency
with respect to the reference (intersection of dotted horizontal and vertical lines gives the reference stimulus) are perceived as going slower, those with greater mean frequency are perceived
as going relatively faster. This effect is observed under all conditions but is stronger at the
highest speed and for subject 1. (b) The estimated σ_z appear noisy but roughly constant as a
function of z for each subject. Widths are generally higher for v = 5 (red) than v = 10 (blue)
traces. The parameter t* does not show a significant effect across the conditions tested.
these systematic biases for spatial frequency as shifts in the a priori on speed during the perceptual
judgements, as the likelihood widths are constant across tested frequencies but lower at the higher of
the tested speeds. Thus having a larger measured bias given the case of the smaller likelihood width
(faster speed) is consistent with a key role for the prior in the observed perceptual bias.
A larger data set, including more standard spatial frequencies and the use of more observers, is
needed to disambiguate the model's predicted prior function.
4 Conclusions
We have proposed and detailed a generative model for the estimation of the motion of images based
on a formalization of small perturbations from the observer's point of view during parameterized
rotations, zooms and translations. We connected these transformations to descriptions of ecologically motivated movements of both observers and the dynamic world. The fast synthesis of naturalistic textures optimized to probe motion perception was then demonstrated, through fast GPU
implementations applying auto-regression techniques with much potential for future experimentation. This extends previous work from [10] by providing an axiomatic formulation. Finally, we
used the stimuli in a psychophysical task and showed that these textures allow one to further understand the processes underlying speed estimation. By linking them directly to the standard Bayesian
formalism, we show that the sensory representations of the stimulus (the likelihoods) in such models can be described directly from the generative MC model. In our case we showed this through
the influence of spatial frequency on speed estimation. We have thus provided just one example
of how the optimized motion stimulus and accompanying theoretical work might serve to improve
our understanding of inference behind perception. The code associated to this work is available at
https://jonathanvacher.github.io.
Acknowledgements
We thank Guillaume Masson for useful discussions during the development of the experiments. We
also thank Manon Bouyé and Élise Amfreville for proofreading. LUP was supported by EC FP7-269921,
"BrainScaleS". The work of JV and GP was supported by the European Research Council
(ERC project SIGMA-Vision). AIM and LUP were supported by SPEED ANR-13-SHS2-0006.
References
[1] Adelson, E. H. and Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284–99.
[2] Dong, D. (2010). Maximizing causal information of natural scenes in motion. In Ilg, U. J. and Masson, G. S., editors, Dynamics of Visual Motion Processing, pages 261–282. Springer US.
[3] Doretto, G., Chiuso, A., Wu, Y. N., and Soatto, S. (2003). Dynamic textures. International Journal of Computer Vision, 51(2):91–109.
[4] Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379–2394.
[5] Galerne, B. (2011). Stochastic image models and texture synthesis. PhD thesis, ENS de Cachan.
[6] Galerne, B., Gousseau, Y., and Morel, J. M. (2011). Micro-texture synthesis by phase randomization. Image Processing On Line, 1.
[7] Gregory, R. L. (1980). Perceptions as hypotheses. Philosophical Transactions of the Royal Society B: Biological Sciences, 290(1038):181–197.
[8] Jogan, M. and Stocker, A. A. (2015). Signal integration in human visual speed perception. The Journal of Neuroscience, 35(25):9381–9390.
[9] Nestares, O., Fleet, D., and Heeger, D. (2000). Likelihood functions and confidence bounds for total-least-squares problems. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000, volume 1, pages 523–530. IEEE Comput. Soc.
[10] Sanz-Leon, P., Vanzetta, I., Masson, G. S., and Perrinet, L. U. (2012). Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception. Journal of Neurophysiology, 107(11):3217–3226.
[11] Simoncini, C., Perrinet, L. U., Montagnini, A., Mamassian, P., and Masson, G. S. (2012). More is not always better: adaptive gain control explains dissociation between perception and action. Nature Neuroscience, 15(11):1596–1603.
[12] Sotiropoulos, G., Seitz, A. R., and Seriès, P. (2014). Contrast dependency and prior expectations in human speed perception. Vision Research, 97:16–23.
[13] Stocker, A. A. and Simoncelli, E. P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4):578–585.
[14] Unser, M. and Tafti, P. (2014). An Introduction to Sparse Stochastic Processes. Cambridge University Press, Cambridge, UK. 367 p.
[15] Unser, M., Tafti, P. D., Amini, A., and Kirshner, H. (2014). A unified formulation of Gaussian versus sparse stochastic processes - part II: discrete-domain theory. IEEE Transactions on Information Theory, 60(5):3036–3051.
[16] Wei, L. Y., Lefebvre, S., Kwatra, V., and Turk, G. (2009). State of the art in example-based texture synthesis. In Eurographics 2009, State of the Art Report, EG-STAR. Eurographics Association.
[17] Wei, X.-X. and Stocker, A. A. (2012). Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference. In Bartlett, P. L., Pereira, F. C. N., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, NIPS, pages 1313–1321.
[18] Weiss, Y. and Fleet, D. J. (2001). Velocity likelihoods in biological and machine vision. In Probabilistic Models of the Brain: Perception and Neural Function, pages 81–100.
[19] Weiss, Y., Simoncelli, E. P., and Adelson, E. H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598–604.
[20] Xia, G. S., Ferradans, S., Peyré, G., and Aujol, J. F. (2014). Synthesizing and mixing stationary Gaussian texture models. SIAM Journal on Imaging Sciences, 7(1):476–508.
[21] Young, R. A. and Lesperance, R. M. (2001). The Gaussian derivative model for spatial-temporal vision: II. Cortical data. Spatial Vision, 14(3):321–390.
5,268 | 577 | Reverse TDNN: An Architecture for Trajectory
Generation
Patrice Simard
AT&T Bell Laboratories
101 Crawford Corner Rd
Holmdel, NJ 07733
Yann Le Cun
AT&T Bell Laboratories
101 Crawford Corner Rd
Holmdel, NJ 07733
Abstract
The backpropagation algorithm can be used for both recognition and generation of time trajectories. When used as a recognizer, it has been shown
that the performance of a network can be greatly improved by adding
structure to the architecture. The same is true in trajectory generation.
In particular a new architecture corresponding to a "reversed" TDNN is
proposed . Results show dramatic improvement of performance in the generation of hand-written characters. A combination of TDNN and reversed
TDNN for compact encoding is also suggested.
1 INTRODUCTION
Trajectory generation finds interesting applications in the field of robotics, automation, filtering, or time series prediction. Neural networks, with their ability to learn
from examples, have been proposed very early on for solving non-linear control problems adaptively. Several neural net architectures have been proposed for trajectory
generation, most notably recurrent networks, either with discrete time and external loops (Jordan, 1986), or with continuous time (Pearlmutter, 1988). Aside from
being recurrent, these networks are not specifically tailored for trajectory generation. It has been shown that specific architectures, such as the Time Delay Neural
Networks (Lang and Hinton, 1988), or convolutional networks in general, are better
than fully connected networks at recognizing time sequences such as speech (Waibel
et al., 1989), or pen trajectories (Guyon et al., 1991). We show that special architectures can also be devised for trajectory generation, with dramatic performance
improvement.
Two main ideas are presented in this paper. The first one rests on the assumption
that most trajectory generation problems deal with continuous trajectories. Following (Pearlmutter, 1988), we present the "differential units", in which the total
input to the neuron controls the rate of change (time derivative) of that unit
state, instead of directly controlling its state. As will be shown the "differential
units" can be implemented in terms of regular units.
The second idea comes from the fact that trajectories are usually come from a
plan, resulting in the execution of a "motor program". Executing a complete motor
program will typically involve executing a hierarchy of sub-programs, modified by
the information coming from sensors. For example drawing characters on a piece
of paper involves deciding which character to draw (and what size), then drawing
each stroke of the character. Each stroke involves particular sub-programs which
are likely to be common to several characters (straight lines of various orientations,
curved lines, loops ... ). Each stroke is decomposed in precise motor patterns. In
short, a plan can be described in a hierarchical fashion, starting from the most
abstract level (which object to draw), which changes every half second or so, to
the lower level (the precise muscle activation patterns) which changes every 5 or
10 milliseconds. It seems that this scheme can be particularly well embodied by
an "Oversampled Reverse TDNN". a multilayer architecture in which the states
of the units in the higher layers are updated at a faster rate than the states of
units in lower layers. The ORTDNN resembles a Subsampled TDNN (Bottou et al.,
1990)(Guyon et al., 1991), or a subsampled weight-sharing network (Le Cun et al.,
1990a), in which all the connections have been reversed, and the input and output
have been interchanged. The advantage of using the ORTDNN, as opposed to a
table lookup, or a memory intensive scheme, is the ability to generalize the learned
trajectories to unseen inputs (plans). With this new architecture it is shown that
trajectory generation problems of large complexity can be solved with relatively
small resources.
2 THE DIFFERENTIAL UNITS
In a time-continuous network, the forward propagation can be written as:
$$T \frac{\partial x(t)}{\partial t} = -x(t) + g(w x(t)) + I(t) \qquad (1)$$
where x(t) is the activation vector for the units, T is a diagonal matrix such that
T_ii is the time constant for unit i, I(t) is the input vector at time t, w is a weight
matrix such that w_ij is the connection from unit j to unit i, and g is a differentiable
(multi-valued) function.
A reasonable discretization of this equation is:
$$x^{t+1} = x^t + \Delta t\, T^{-1}\left(-x^t + g(w x^t) + I^t\right) \qquad (2)$$
where Δt is the time step used in the discretization, and the superscript t means at time
tΔt (i.e. x^t = x(tΔt)). x^0 is the starting point and is a constant. t ranges from 0
to M, with I^0 = 0.
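A minimal numpy sketch of the forward pass defined by equation 2 (the network size, the tanh nonlinearity and the zero inputs are illustrative choices, not the paper's setup):

```python
import numpy as np

def forward(x0, W, T, I, dt=1.0, g=np.tanh):
    """Iterate eq. (2): x^{t+1} = x^t + dt * T^{-1} (-x^t + g(W x^t) + I^t).
    x0: (n,) initial state; W: (n, n); T: (n,) time constants; I: (M, n)."""
    xs = [np.asarray(x0, dtype=float)]
    for It in I:
        x = xs[-1]
        xs.append(x + dt / T * (-x + g(W @ x) + It))
    return np.stack(xs)                      # shape (M + 1, n)

rng = np.random.default_rng(5)
n, M = 4, 50
W = rng.uniform(-1.0, 1.0, (n, n))
T = np.full(n, 10.0)                         # keep time constants >= 1
xs = forward(np.zeros(n), W, T, np.zeros((M, n)))
print(xs.shape)
```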
The cost function to be minimized is:
$$E = \frac{1}{2} \sum_{t=1}^{M} (S^t x^t - d^t)^T (S^t x^t - d^t) \qquad (3)$$
where D^t is the desired output, and S^t is a rectangular matrix which has a 0 if
the corresponding x_i^t is unconstrained and 1 otherwise. Each pattern is composed
of pairs (I^t, D^t) for t ∈ [1..M]. To minimize equation 3 with the constraints given
by equation 2 we express the Lagrange function (Le Cun, 1988):
$$L = \frac{1}{2} \sum_{t=1}^{M} (S^t x^t - D^t)^T (S^t x^t - D^t) + \sum_{t=0}^{M-1} (b^{t+1})^T \left(-x^{t+1} + x^t + \Delta t\, T^{-1}(-x^t + g(w x^t) + I^t)\right) \qquad (4)$$
where b^{t+1} are Lagrange multipliers (for t ∈ [1..M]). The superscript T means that
the corresponding matrix is transposed. If we differentiate with respect to x^t we
get:
$$\left(\frac{\partial L}{\partial x^t}\right)^T = 0 = (S^t x^t - d^t) - b^t + b^{t+1} - \Delta t\, T^{-1} b^{t+1} + \Delta t\, T^{-1} w^T g'(w x^t)\, b^{t+1} \qquad (5)$$
for t ∈ [1..M−1], and ∂L/∂x^M = 0 = (S^M x^M − D^M) − b^M for the boundary condition.
g′ is a diagonal matrix containing the derivatives of g (g′(wx)w is the Jacobian of g).
From this an update rule for b^t can be derived:
$$b^M = (S^M x^M - d^M)$$
$$b^t = (S^t x^t - d^t) + (1 - \Delta t\, T^{-1})\, b^{t+1} + \Delta t\, T^{-1} w^T g'(w x^t)\, b^{t+1} \quad \text{for } t \in [1..M-1] \qquad (6)$$
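Equation 6 translates directly into a backward recursion over the unfolded network. The sketch below assumes S^t is stored as a 0/1 mask per unit and time step, matching its definition above, and uses a tanh unit function; these are illustrative choices.

```python
import numpy as np

def backward(xs, W, T, S, d, dt=1.0, gprime=lambda u: 1.0 - np.tanh(u) ** 2):
    """Backward recursion (6) for the multipliers b^t, t = M .. 1.
    xs: (M+1, n) forward states; S: (M+1, n) 0/1 constraint masks; d: targets."""
    M = xs.shape[0] - 1
    b = np.zeros_like(xs)
    b[M] = S[M] * (xs[M] - d[M])
    for t in range(M - 1, 0, -1):
        b[t] = (S[t] * (xs[t] - d[t])
                + (1.0 - dt / T) * b[t + 1]
                + dt / T * (W.T @ (gprime(W @ xs[t]) * b[t + 1])))
    return b
```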
This is the rule used to compute the gradient (backpropagation). If the Lagrangian
is differentiated with respect to w_ij, the standard updating rule for the weights is
obtained:
$$\frac{\partial L}{\partial w_{ij}} = \Delta t\, T_{ii}^{-1} \sum_{t=1}^{M-1} b_i^{t+1}\, x_j^t\, g_i'\Big(\sum_k w_{ik} x_k^t\Big) \qquad (7)$$
If the Lagrangian is differentiated with respect to T, we get:
$$\frac{\partial L}{\partial T_{ii}^{-1}} = T_{ii} \sum_{t=0}^{M-1} (x_i^{t+1} - x_i^t)\, b_i^{t+1} \qquad (8)$$
From the last two equations, we can derive a learning algorithm by gradient descent:
$$w_{ij} \leftarrow w_{ij} - \eta_w \frac{\partial L}{\partial w_{ij}} \qquad (9)$$
$$T_{ii}^{-1} \leftarrow T_{ii}^{-1} - \eta_T \frac{\partial L}{\partial T_{ii}^{-1}} \qquad (10)$$
where η_w and η_T are respectively the learning rates for the weights and the time
constants (in practice better results are obtained by having different learning rates
η_{w_ij} and η_{T_ii} per connection). The constant η_T must be chosen with caution
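Putting (7)-(10) together, here is a sketch of the gradient computation and the descent step. Following the text, the time-constant update acts on T^{-1}; it is additionally clipped here so that all time constants stay at least 1, since smaller values make the system unstable.

```python
import numpy as np

def gradients(xs, b, W, T, dt=1.0, gprime=lambda u: 1.0 - np.tanh(u) ** 2):
    """Gradients (7) and (8) from forward states xs and multipliers b."""
    M = xs.shape[0] - 1
    dW = np.zeros_like(W)
    for t in range(1, M):                                    # eq. (7)
        dW += np.outer(dt / T * gprime(W @ xs[t]) * b[t + 1], xs[t])
    dTinv = np.sum(T * (xs[1:] - xs[:-1]) * b[1:], axis=0)   # eq. (8)
    return dW, dTinv

def step(W, T, dW, dTinv, eta_w=0.1, eta_T=0.01):
    """Descent updates (9)-(10); the time-constant step acts on T^{-1},
    clipped so that all time constants stay >= 1."""
    Tinv = np.clip(1.0 / T - eta_T * dTinv, 1e-3, 1.0)
    return W - eta_w * dW, 1.0 / Tinv
```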
Figure 1: A backpropagation implementation of equation 2 for a two-unit network
between time t and t + 1. This figure repeats itself vertically for every time step
from t = 0 to t = M. The quantities x_1^{t+1}, x_2^{t+1}, d_1^t = −x_1^t + g_1(wx^t) + I_1^t and
d_2^t = −x_2^t + g_2(wx^t) + I_2^t are computed with linear units.
since if any time constants T_ii were to become less than one, the system would
be unstable. Performing gradient descent in T_ii^{-1} instead of in T_ii is preferable for
numerical stability reasons.
Equation 2 is implemented with a feed forward backpropagation network. It should
first be noted that this equation can be written as a linear combination of xt (the
activation at the previous time), the input, and a non-linear function g of wx^t.
Therefore, this can be implemented with two linear units and one nonlinear unit
with activation function g. To keep the time constraint, the network is "unfolded"
in time, with the weights shared from one time step to another. For instance a
simple fully connected two-unit network with no threshold can be implemented
as in Fig. 1 (only the layer between time t and t + 1 is shown). The network
repeats itself vertically for each time step with the weights shared between time
steps. The main advantage of this implementation is that all equations 6, 7 and 8
are implemented implicitly by the back-propagation algorithm.
3 CHARACTER GENERATION: LEARNING TO GENERATE A SINGLE LETTER
In this section we describe a simple experiment designed to 1) illustrate how trajectory generation can be implemented with a recurrent network, 2) to show the
advantages of using differential units instead of traditional non-linear units, and
3) to show how the fully connected architecture (with differential units) severely
limits the learning capacity of the network. The task is to draw the letter "A" with
[Figure 2 graphics: Target drawing; Output trajectories; Network drawing; output units 0, 1 and 2 plotted against time.]
Figure 2: Top left: Trajectory representing the letter "A". Bottom left: Trajectory
produced by the network after learning. The dots correspond to the target points of
the original trajectory. The curve is produced by drawing output unit 2 as a function
of output unit 1, using output unit 0 for deciding when the pen is up or down. Right:
Trajectories of the three output units (pen-up/pen-down, X coordinate of the pen
and Y coordinate of the pen) as a function of time. The dots corresponds to the
target points of the original trajectory.
a pen. The network has 3 output units, two for the X and Y position of the pen,
and one to code whether the pen is up or down. The network has a total 21 units,
no input unit, 18 hidden units and 3 output units. The network is fully connected.
Character glyphs are obtained from a tablet which records points at successive
instants of time. The data therefore is a sequence of triplets indicating the time,
and the X and Y positions. When the pen is up, or if there are no constraint for
some specific time steps (misreading of the tablet), the activation of the unit is left
unconstrained. The letter to be learned is taken from a handwritten letter database
and is displayed in figure 2 (top left) .
The letter trajectory covers a maximum of 90 time stamps. The network is unfolded
135 steps (10 unconstrained steps are left at the beginning to allow the network to
settle and 35 additional steps are left at the end to monitor the network activity).
The learning rate η_w is set to 1.0 (the actual learning rate is per connection and is
obtained by dividing the global learning rate by the fan-in to the destination unit,
and by dividing by the number of connections sharing the same weight). The time
constants are set to 10 to produce a smooth trajectory on the output. The learning
rate η_T is equal to zero (no learning on the time constants). The initial values for
the weights are picked from a uniform distribution between -1 and +1.
583
584
Simard and Le Cun
The trajectories of units 0, 1 and 2 are shown in figure 2 (right). The top graphs
represent the state of the pen as a function of time. The straight lines are the desired
positions (1 means pen down, -1 means pen up). The middle and bottom graphs
are the X and Y positions of the pen respectively. The network is unconstrained
after time step 100. Even though the time constants are large, the output units
reach the right values before time step 10. The top trajectory (pen-up/pen-down),
however, is difficult to learn with time constants as large as 10 because it is not
smooth.
The letter drawn by the network after learning is shown in figure 2 (left bottom).
The network successfully learned to draw the letter on the fully connected network.
Different fixed time constants were tried. For small time constant (like 1.0), the
network was unable to learn the pattern for any learning rate η_w we tried. This
is not surprising since the (vertical) weight sharing makes the trajectories very
sensitive to any variation of the weights. This fact emphasizes the importance of
using differential units. Larger time constants allow larger learning rate for the
weights. Of course, if those are too large, fast trajectories can not be learned.
The error can be further improved by letting the time constant adapt as well.
However the gain in doing so is minimal. If the learning rate η_T is small, the gain
over η_T = 0 is negligible. If η_T is too big, learning becomes quickly unstable.
This simulation was done with no input, and the target trajectories were for the
drawing of a single letter. In the next section, the problem is extended to that of
learning to draw multiple letters, depending on an input vector.
4 LEARNING TO GENERATE MULTIPLE LETTERS: THE REVERSE TDNN ARCHITECTURE
In a first attempt, the fully connected network of the previous section was used to
try to generate the eight first letters of the alphabet. Eight units were used for
the input, 3 for the output, and various numbers of hidden units were tried. Every
time, all the units, visible and hidden, were fully interconnected. Each input unit
was associated to one letter, and the input patterns consisted of one +1 at the
unit corresponding to the letter, and -1/7 for all other input units. No success was
achieved for any of the parameter settings that were tried. The error curves reached
plateaus, and the letter glyphs were not recognizable. Even bringing the number of
letter to two (one "A" and one "B") was unsuccessful. In all cases the network was
acting like it was ignoring its input: the activation of the output units were almost
identical for all input patterns. This was attributed to the network architecture.
A new kind of architecture was then used, which we call "Oversampled Reverse
TDNN" because of its resemblance with a Subsampled TDNN with input and output interchanged. Subsampled TDNN have been used in speech recognition (Bottou
et al., 1990), and on-line character recognition (Guyon et al., 1991). They can be
seen one-dimensional versions of locally-connected, weight sharing networks (Le
Cun, 1989 )(Le Cun et al., 1990b). Time delay connections allow units to be connected to unit at an earlier time. Weight sharing in time implements a convolution
of the input layer. In the Subsampled TDNN, the rate at which the units states
are updated decreases gradually with the layer index. The subsampling provides
[Figure 3 diagram: modules Input, Hidden1, Hidden 2 and Output, unfolded in time (labels t = 5 and t = 13).]
Figure 3: Architecture of a simple reverse TDNN. Time goes from bottom to top,
data flows from left to right. The left module is the input and has 2 units. The
next module (hidden1) has 3 units and is undersampled every 4 time steps. The
following module (hidden2) has 4 units and is undersampled every 2 time steps. The
right module is the output, has 3 units and is not undersampled. All modules have
time delay connections from the preceding module. Thus hidden1 is connected
to hidden2 over a window of 5 time steps, and hidden2 to the output over a window
of 3 time steps. For each pattern presented on the 2 input units, a trajectory of 8
time steps is produced by the network on each of the 3 units of the output.
Figure 4: Letters drawn by the reverse TDNN network after 10,000 iterations of
learning.
a gradual reduction of the time resolution. In a reverse TDNN the subsampling
starts from the units of the output (which have no subsampling) toward the input. Equivalently, each layer is oversampled when compared to the previous layer.
This is illustrated in Figure 3 which shows a small reverse TDNN. The input is
applied to the 2 units in the lower left. The next layer is unfolded in time two steps
and has time delay connections toward step zero of the input. The next layer after
this is unfolded in time 4 steps (with again time delay connections), and finally the
output is completely unfolded in time. The advantage of such an architecture is
its ability to generate trajectories progressively, starting with the lower frequency
components at each layer. This parallels recognition TDNN's which extract features
progressively. Since the weights are shared between time steps, the network on the
figures has only 94 free weights.
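For illustration, here is a hedged numpy sketch of an oversampled reverse TDNN forward pass: each layer runs at a finer time resolution than the previous one, and each of its steps sees a short time-delay window of the held-and-repeated previous layer, with weights shared across time. Layer sizes loosely follow the alphabet network described above; the oversampling factors, window widths and random weights are otherwise arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def ortdnn_layer(h_prev, n_out, oversample, window):
    """Map a (T, n_in) sequence to a (T * oversample, n_out) sequence.
    Each output step sees a `window`-wide slice of the repeated previous
    layer ending at its own time index; weights are shared across time."""
    T, n_in = h_prev.shape
    up = np.repeat(h_prev, oversample, axis=0)      # hold each coarse state
    pad = np.vstack([np.zeros((window - 1, n_in)), up])
    W = rng.uniform(-0.5, 0.5, (n_out, window * n_in))
    out = np.empty((T * oversample, n_out))
    for t in range(T * oversample):
        ctx = pad[t:t + window].ravel()             # time-delay connections
        out[t] = np.tanh(W @ ctx)
    return out

# 26-way "plan" input -> pen trajectory (3 outputs, 135 time steps)
plan = -np.ones((1, 26)) / 7.0
plan[0, 0] = 1.0                                    # draw the letter "A"
h1 = ortdnn_layer(plan, n_out=30, oversample=5, window=1)
h2 = ortdnn_layer(h1, n_out=25, oversample=3, window=3)
pen = ortdnn_layer(h2, n_out=3, oversample=9, window=3)
print(h1.shape, h2.shape, pen.shape)   # (5, 30) (15, 25) (135, 3)
```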
With the reverse TDNN architecture, it was easy to learn the 26 letters of the
alphabet. We found that the learning is easier if all the weights are initialized to 0
except those with the shortest time delay. As a result, the network initially only sees
its fastest connections. The influence of the remaining connections starts at zero
and increase as the network learns. The glyphs drawn by the network after 10,000
training epochs are shown in figure 4. To avoid ambiguity, we give subsampling
rates with respect to the output, although it would be more natural to mention
oversampling rates with respect to the input. The network has 26 input units, 30
hidden units in the first layer subsampled at every 27 time steps, 25 units at the next
layer subsampled at every 9 time steps, and 3 output units with no subsampling.
Every layer has time delay connections from the previous layer, and is connected
with 3 different updates of the previous layer. The time constants were not subject
to learning and were initialized to 10 for the x and y output units, and to 1 for the
remaining units. No effort was made to optimize these values.
Big initial time constants prevent the network from making fast variations on the
output units and in general slow down the learning process. On the other hand,
small time constants make learning more difficult. The correct strategy is to adapt
the time constants to the intrinsic frequencies of the trajectory. With all the time
constants equal to one, the network was not able to learn the alphabet (as was
the case in the experiment of the previous section). Good results are obtained with
time constants of 10 for the two x-y output units and time constants of 1 for all
other units.
5 VARIATIONS OF THE ORTDNN
Many variations of the Oversampled Reverse TDNN architecture can be imagined.
For example, recurrent connections can be added: connections can go from right to
left in Figure 3, as long as they go up. Recurrent connections become necessary when
information needs to be stored for an arbitrarily long time. Another variation would
be to add sensor inputs at various stages of the network, to allow adjustment of the
trajectory based on sensor data, either on a global scale (first layers) or locally (last
layers). Tasks requiring recurrent ORTDNNs and/or sensor input include dynamic
robot control or speech synthesis.
Another interesting variation is an encoder network consisting of a Subsampled
TDNN and an Oversampled Reverse TDNN connected back to back. The Subsampled TDNN encodes the time sequence shown on its input, and the ORTDNN
reconstructs a time sequence from the output of the TDNN. The main application
of this network would be the compact encoding of time series. This network can be
trained to reproduce its input on its output (auto-encoder), in which case the state
of the middle layer can be used as a compact code of the input sequence.
6 CONCLUSION
We have presented a new architecture capable of learning to generate trajectories
efficiently. The architecture is designed to favor hierarchical representations of trajectories in terms of subtasks.
The experiment shows how the ORTDNN can produce different letters as a function
of the input. Although this application does not have practical consequences, it
shows the learning capabilities of the model for generating trajectories. The task
presented here was particularly difficult because there is no correlation between
the patterns. The inputs for an A or a Z only differ on 2 of the 26 input units.
Yet, the network produces totally different trajectories on the output units. This is
promising since typical neural net applications have very correlated patterns, which
are in general much easier to learn.
References
Bottou, L., Fogelman, F., Blanchet, P., and Lienard, J. S. (1990). Speaker independent isolated digit recognition: Multilayer perceptron vs Dynamic Time
Warping. Neural Networks, 3:453-465.
Guyon, I., Albrecht, P., Le Cun, Y., Denker, J. S., and Hubbard, W. (1991). Design of a
neural network character recognizer for a touch terminal. Pattern Recognition,
24(2):105-119.
Jordan, M. I. (1986). Serial Order: A Parallel Distributed Processing Approach.
Technical Report ICS-8604, Institute for Cognitive Science, University of California at San Diego, La Jolla, CA.
Lang, K. J. and Hinton, G. E. (1988). A Time Delay Neural Network Architecture
for Speech Recognition. Technical Report CMU-cs-88-152, Carnegie-Mellon
University, Pittsburgh PA.
Le Cun, Y. (1988). A theoretical framework for Back-Propagation. In Touretzky,
D., Hinton, G., and Sejnowski, T., editors, Proceedings of the 1988 Connectionist Models Summer School, pages 21-28, CMU, Pittsburgh, Pa. Morgan
Kaufmann.
Le Cun, Y. (1989). Generalization and Network Design Strategies. In Pfeifer, R.,
Schreter, Z., Fogelman, F., and Steels, L., editors, Connectionism in Perspective, Zurich, Switzerland. Elsevier. An extended version was published as a
technical report of the University of Toronto.
Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard,
W., and Jackel, L. D. (1990a). Handwritten digit recognition with a backpropagation network. In Touretzky, D., editor, Advances in Neural Information
Processing Systems 2 (NIPS*89), Denver, CO. Morgan Kaufmann.
Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W.,
and Jackel, L. D. (1990b). Back-Propagation Applied to Handwritten Zipcode
Recognition. Neural Computation.
Pearlmutter, B. (1988). Learning State Space Trajectories in Recurrent Neural
Networks. Neural Computation, 1(2).
Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K. (1989). Phoneme
Recognition Using Time-Delay Neural Networks. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:328-339.
| 577 |@word version:2 middle:2 seems:1 simulation:1 tried:4 gradual:1 dramatic:2 mention:1 reduction:1 initial:2 series:2 discretization:2 surprising:1 lang:3 activation:6 yet:1 written:3 must:1 visible:1 numerical:1 wx:3 motor:3 designed:2 update:2 progressively:2 aside:1 v:1 half:1 tjw:1 short:1 record:1 lr:1 provides:1 toronto:1 successive:1 differential:6 become:2 recognizable:1 notably:1 multi:1 ol:2 terminal:1 decomposed:1 unfolded:5 actual:1 window:2 totally:1 becomes:1 what:1 kind:1 kaufman:1 caution:1 nj:2 every:9 preferable:1 control:3 unit:62 before:1 negligible:1 vertically:2 limit:1 consequence:1 encoding:2 resembles:1 co:1 fastest:1 range:1 practical:1 practice:1 implement:1 backpropagation:5 digit:2 bell:2 regular:1 get:2 influence:1 optimize:1 lagrangian:2 go:3 starting:3 rectangular:1 resolution:1 rule:3 stability:1 coordinate:2 variation:6 updated:2 controlling:1 hierarchy:1 target:4 diego:1 tablet:2 pa:2 recognition:10 particularly:2 updating:1 database:1 bottom:4 module:6 solved:1 connected:11 decrease:1 complexity:1 wil:1 wjj:1 dynamic:2 trained:1 solving:1 completely:1 various:3 alphabet:3 fast:2 describe:1 sejnowski:1 larger:2 valued:1 drawing:6 otherwise:1 encoder:2 ability:3 favor:1 unseen:1 itself:2 hanazawa:1 patrice:1 superscript:2 zipcode:1 differentiate:1 sequence:5 advantage:4 differentiable:1 net:2 interconnected:1 coming:1 loop:1 produce:3 generating:1 executing:2 object:1 illustrate:1 recurrent:7 depending:1 school:1 pendent:1 dividing:2 implemented:6 c:1 involves:2 come:2 differ:1 switzerland:1 hidden2:3 correct:1 settle:1 generalization:1 connectionism:1 tjt:3 ic:1 deciding:2 interchanged:2 early:1 recognizer:2 jackel:2 sensitive:1 hubbard:2 wl:1 successfully:1 ttk:1 eats:1 sensor:4 modified:1 avoid:1 derived:2 improvement:2 greatly:1 elsevier:1 typically:1 bt:5 initially:1 hidden:7 wij:2 reproduce:1 fogelman:2 orientation:1 plan:3 special:1 field:1 equal:2 having:1 identical:1 minimized:1 report:3 connectionist:1 composed:1 subsampled:9 consisting:1 attempt:1 henderson:2 capable:1 necessary:1 initialized:2 desired:2 isolated:1 theoretical:1 minimal:1 instance:1 earlier:1 cover:1 cost:1 uniform:1 delay:9 recognizing:1 too:2 stored:1 adaptively:1 st:1 destination:1 synthesis:1 quickly:1 again:1 ambiguity:1 opposed:1 containing:1 reconstructs:1 corner:2 cognitive:1 simard:6 derivative:2 albrecht:1 tii:3 lookup:1 automation:1 piece:1 try:1 picked:1 doing:1 reached:1 start:2 parallel:2 capability:1 minimize:1 il:2 ni:1 convolutional:1 kaufmann:1 phoneme:1 efficiently:1 correspond:1 generalize:1 handwritten:3 produced:3 emphasizes:1 trajectory:40 straight:2 published:1 stroke:3 plateau:1 fo:1 reach:1 touretzky:2 sharing:5 frequency:2 dm:2 associated:1 attributed:1 transposed:1 gain:2 back:5 feed:1 higher:1 dt:5 improved:2 done:1 though:1 stage:1 correlation:1 hand:2 touch:1 nonlinear:1 propagation:4 resemblance:1 glyph:3 consisted:1 true:1 multiplier:1 requiring:1 laboratory:2 illustrated:1 deal:1 ll:3 noted:1 speaker:1 complete:1 tt:3 pearlmutter:3 common:1 denver:1 imagined:1 mellon:1 rd:2 unconstrained:4 dot:2 robot:1 add:1 wxt:4 perspective:1 jolla:1 reverse:15 success:1 muscle:1 seen:1 morgan:2 additional:1 preceding:1 shortest:1 signal:1 ii:4 multiple:2 smooth:2 technical:3 faster:1 adapt:2 long:2 devised:1 serial:1 prediction:1 multilayer:2 cmu:2 iteration:1 represent:1 tailored:1 robotics:1 achieved:1 ot:1 rest:1 bringing:1 subject:1 flow:1 jordan:2 call:1 easy:1 architecture:23 idea:2 intensive:1 whether:1 effort:1 speech:5 involve:1 locally:2 
generate:5 millisecond:1 oversampling:1 per:2 discrete:1 carnegie:1 express:1 begining:1 threshold:1 monitor:1 drawn:3 prevent:1 graph:2 sti:1 letter:19 almost:1 guyon:4 reasonable:1 yann:1 rer:1 draw:5 holmdel:2 layer:18 summer:1 activity:1 constraint:3 encodes:1 schreter:1 performing:1 relatively:1 waibel:2 combination:2 em:1 character:9 wi:1 cun:15 making:1 gradually:1 xo:1 taken:1 resource:1 equation:8 zurich:1 letting:1 end:1 eight:2 denker:3 hierarchical:2 differentiated:2 original:2 top:5 remaining:2 subsampling:5 include:1 instant:1 warping:1 added:1 quantity:1 strategy:2 md:1 diagonal:2 traditional:1 gradient:3 ow:1 reversed:3 unable:1 capacity:1 unstable:2 reason:1 toward:2 code:2 index:1 equivalently:1 difficult:3 steel:1 implementation:2 design:2 vertical:1 neuron:1 convolution:1 howard:2 descent:2 curved:1 displayed:1 hinton:4 extended:2 precise:2 lltt:3 arbitrary:1 subtasks:1 pair:1 kl:1 oversampled:4 connection:15 california:1 acoustic:1 learned:4 boser:2 nip:1 able:1 suggested:1 usually:1 pattern:10 program:4 unsuccessful:1 memory:1 oj:1 natural:1 undersampled:3 representing:1 scheme:2 lk:1 tdnn:26 extract:1 auto:1 embodied:1 crawford:2 epoch:1 l2:1 fully:7 inde:1 generation:15 interesting:2 filtering:1 blanchet:1 editor:3 course:1 gl:1 last:2 repeat:1 free:1 l_:1 allow:4 perceptron:1 institute:1 distributed:1 boundary:1 curve:2 forward:2 made:1 san:1 bm:2 transaction:1 compact:3 implicitly:1 keep:1 global:2 pittsburgh:2 severly:1 xi:1 shikano:1 continuous:3 pen:16 triplet:1 table:1 promising:1 learn:6 ca:1 ignoring:1 bottou:3 main:3 big:2 fig:1 tl:1 fashion:1 slow:1 sub:2 position:4 lq:1 stamp:1 jacobian:1 pfeifer:1 learns:1 down:6 specific:2 xt:6 jt:1 intrinsic:1 adding:1 importance:1 execution:1 easier:2 fanin:1 likely:1 lagrange:1 adjustment:1 g2:1 corresponds:1 shared:3 change:3 specifically:1 except:1 typical:1 wt:1 acting:1 total:2 la:1 indicating:1 correlated:1 |
5,269 | 5,770 | Large-Scale Bayesian Multi-Label Learning via
Topic-Based Label Embeddings
Piyush Rai?? , Changwei Hu? , Ricardo Henao? , Lawrence Carin?
?
?
CSE Dept, IIT Kanpur
ECE Dept, Duke University
piyush@cse.iitk.ac.in, {ch237,r.henao,lcarin}@duke.edu
Abstract
We present a scalable Bayesian multi-label learning model based on learning lowdimensional label embeddings. Our model assumes that each label vector is generated as a weighted combination of a set of topics (each topic being a distribution
over labels), where the combination weights (i.e., the embeddings) for each label
vector are conditioned on the observed feature vector. This construction, coupled
with a Bernoulli-Poisson link function for each label of the binary label vector,
leads to a model with a computational cost that scales in the number of positive labels in the label matrix. This makes the model particularly appealing for
real-world multi-label learning problems where the label matrix is usually very
massive but highly sparse. Using a data-augmentation strategy leads to full local
conjugacy in our model, facilitating simple and very efficient Gibbs sampling, as
well as an Expectation Maximization algorithm for inference. Also, predicting
the label vector at test time does not require doing an inference for the label embeddings and can be done in closed form. We report results on several benchmark
data sets, comparing our model with various state-of-the-art methods.
1 Introduction
Multi-label learning refers to the problem setting in which the goal is to assign to an object (e.g., a
video, image, or webpage) a subset of labels (e.g., tags) from a (possibly very large) set of labels.
The label assignments of each example can be represented using a binary label vector, indicating the
presence/absence of each label. Despite a significant amount of prior work, multi-label learning [7,
6] continues to be an active area of research, with a recent surge of interest [1, 25, 18, 13, 10] in
designing scalable multi-label learning methods to address the challenges posed by problems such as
image/webpage annotation [18], computational advertising [1, 18], medical coding [24], etc., where
not only the number of examples and data dimensionality are large but the number of labels can also
be massive (several thousands to even millions).
Often, in multi-label learning problems, many of the labels tend to be correlated with each other.
To leverage the label correlations and also handle the possibly massive number of labels, a common
approach is to reduce the dimensionality of the label space, e.g., by projecting the label vectors to
a subspace [10, 25, 21], learning a prediction model in that space, and then projecting back to the
original space. However, as the label space dimensionality increases and/or the sparsity in the label
matrix becomes more pronounced (i.e., very few ones), and/or if the label matrix is only partially
observed, such methods tend to suffer [25] and can also become computationally prohibitive.
To address these issues, we present a scalable, fully Bayesian framework for multi-label learning.
Our framework is similar in spirit to the label embedding methods based on reducing the label space
dimensionality [10, 21, 25]. However, our framework offers the following key advantages: (1)
computational cost of training our model scales in the number of ones in the label matrix, which
makes our framework easily scale in cases where the label matrix is massive but sparse; (2) our
likelihood model for the binary labels, based on a Bernoulli-Poisson link, more realistically models
the extreme sparsity of the label matrix as compared to the commonly employed logistic/probit link;
and (3) our model is more interpretable - embeddings naturally correspond to topics where each
topic is a distribution over labels. Moreover, at test time, unlike other Bayesian methods [10], we do
not need to infer the label embeddings of the test example, thereby leading to faster predictions.
In addition to the modeling flexibility that leads to a robust, interpretable, and scalable model, our
framework enjoys full local conjugacy, which allows us to develop simple Gibbs sampling, as well
as an Expectation Maximization (EM) algorithm for the proposed model, both of which are simple
to implement in practice (and amenable for parallelization).
2 The Model
We assume that the training data are given in the form of $N$ examples represented by a feature matrix $X \in \mathbb{R}^{D \times N}$, along with their labels in a (possibly incomplete) label matrix $Y \in \{0,1\}^{L \times N}$. The goal is to learn a model that can predict the label vector $y_* \in \{0,1\}^L$ for a test example $x_* \in \mathbb{R}^D$.

We model the binary label vector $y_n$ of the $n$th example by thresholding a count-valued vector $m_n$:
$$y_n = \mathbb{1}(m_n \geq 1) \quad (1)$$
which, for each individual binary label $y_{ln} \in y_n$, $l = 1, \ldots, L$, can also be written as $y_{ln} = \mathbb{1}(m_{ln} \geq 1)$. In Eq. (1), $m_n = [m_{1n}, \ldots, m_{Ln}] \in \mathbb{Z}^L$ denotes a latent count vector of size $L$ and is assumed drawn from a Poisson
$$m_n \sim \text{Poisson}(\lambda_n) \quad (2)$$
Eq. (2) denotes drawing each component of $m_n$ independently, from a Poisson distribution, with rate equal to the corresponding component of $\lambda_n \in \mathbb{R}_+^L$, which is defined as
$$\lambda_n = V u_n \quad (3)$$
Here $V \in \mathbb{R}_+^{L \times K}$ and $u_n \in \mathbb{R}_+^K$ (typically $K \ll L$). Note that the $K$ columns of $V$ can be thought of as atoms of a label dictionary (or "topics" over labels) and $u_n$ can be thought of as the atom weights or embedding of the label vector $y_n$ (or "topic proportions", i.e., how active each of the $K$ topics is for example $n$). Also note that Eqs. (1)-(3) can be combined as
$$y_n = f(\lambda_n) = f(V u_n) \quad (4)$$
where $f$ jointly denotes drawing the latent counts $m_n$ from a Poisson (Eq. 2) with rate $\lambda_n = V u_n$, followed by thresholding $m_n$ at 1 (Eq. 1). In particular, note that marginalizing out $m_n$ from Eq. (1) leads to $y_n \sim \text{Bernoulli}(1 - \exp(-\lambda_n))$. This link function, termed as the Bernoulli-Poisson link [28, 9], has also been used recently in modeling relational data with binary observations.

In Eq. (4), expressing the label vector $y_n \in \{0,1\}^L$ in terms of $V u_n$ is equivalent to a low-rank assumption on the $L \times N$ label matrix $Y = [y_1 \ldots y_N]$: $Y = f(VU)$, where $V = [v_1 \ldots v_K] \in \mathbb{R}_+^{L \times K}$ and $U = [u_1 \ldots u_N] \in \mathbb{R}_+^{K \times N}$, which are modeled as follows
$$v_k \sim \text{Dirichlet}(\eta \mathbf{1}_L) \quad (5)$$
$$u_{kn} \sim \text{Gamma}(r_k, p_{kn}(1 - p_{kn})^{-1}) \quad (6)$$
$$p_{kn} = \sigma(w_k^\top x_n) \quad (7)$$
$$w_k \sim \text{Nor}(0, \Sigma) \quad (8)$$
where $\sigma(z) = 1/(1 + \exp(-z))$, $\Sigma = \text{diag}(\rho_1^{-1}, \ldots, \rho_D^{-1})$, and hyperparameters $r_k, \rho_1, \ldots, \rho_D$ are given improper gamma priors. Since columns of $V$ are Dirichlet drawn, they correspond to distributions (i.e., topics) over the labels. It is important to note here that the dependence of the label embedding $u_n = \{u_{kn}\}_{k=1}^K$ on the feature vector $x_n$ is achieved by making the scale parameter of the gamma prior on $\{u_{kn}\}_{k=1}^K$ depend on $\{p_{kn}\}_{k=1}^K$, which in turn depends on the features $x_n$ via regression weights $W = \{w_k\}_{k=1}^K$ (Eqs. 6 and 8).
Figure 1: Graphical model for the generative process of the label vector. Hyperpriors omitted for brevity.
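For concreteness, a minimal sketch of this generative process in NumPy; the variable names are ours, and the sketch assumes the gamma in Eq. (6) is in shape/scale form (scale $p_{kn}/(1 - p_{kn})$).

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_label_vector(V, W, x, r):
    """Sample one label vector y_n from the BMLPL generative process.
    V: (L, K) label topics (each column on the simplex), W: (D, K)
    regression weights, x: (D,) feature vector, r: (K,) gamma shapes."""
    p = 1.0 / (1.0 + np.exp(-(W.T @ x)))          # p_kn = sigma(w_k^T x_n), Eq. (7)
    u = rng.gamma(shape=r, scale=p / (1.0 - p))   # u_kn ~ Gamma(r_k, p/(1-p)), Eq. (6)
    lam = V @ u                                   # lambda_n = V u_n, Eq. (3)
    m = rng.poisson(lam)                          # latent counts, Eq. (2)
    return (m >= 1).astype(int)                   # y_n = 1(m_n >= 1), Eq. (1)
```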
2.1 Computational scalability in the number of positive labels
For the Bernoulli-Poisson likelihood model for binary labels, we can write the conditional posterior [28, 9] of the latent count vector $m_n$ as
$$(m_n | y_n, V, u_n) \sim y_n \odot \text{Poisson}_+(V u_n) \quad (9)$$
where $\text{Poisson}_+$ denotes the zero-truncated Poisson distribution with support only on the positive integers, and $\odot$ denotes the element-wise product. Eq. 9 suggests that the zeros in $y_n$ will result in the corresponding elements of the latent count vector $m_n$ being zero, almost surely (i.e., with probability one). As shown in Section 3, the sufficient statistics of the model parameters do not depend on latent counts that are equal to zero; such latent counts can be simply ignored during the inference. This aspect leads to substantial computational savings in our model, making it scale only in the number of positive labels in the label matrix. In the rest of the exposition, we will refer to our model as BMLPL to denote Bayesian Multi-label Learning via Positive Labels.
2.2 Asymmetric Link Function
In addition to the computational advantage (i.e., scaling in the number of non-zeros in the label matrix), another appealing aspect of our multi-label learning framework is that the Bernoulli-Poisson likelihood is also a more realistic model for highly sparse binary data as compared to the commonly used logistic/probit likelihood. To see this, note that the Bernoulli-Poisson model defines the probability of an observation $y$ being one as $p(y = 1|\lambda) = 1 - \exp(-\lambda)$, where $\lambda$ is the positive rate parameter. For a positive $\lambda$ on the X axis, the rate of growth of the plot of $p(y = 1|\lambda)$ on the Y axis from 0.5 to 1 is much slower than the rate it drops from 0.5 to 0. This behavior of the Bernoulli-Poisson link will encourage far fewer nonzeros in the observed data as compared to the number of zeros. On the other hand, the logistic and probit links approach both 0 and 1 at the same rate, and therefore cannot model the sparsity/skewness of the label matrix like the Bernoulli-Poisson link. Therefore, in contrast to multilabel learning models based on logistic/probit likelihood functions or standard loss functions such as the hinge-loss [25, 14] for the binary labels, our proposed model provides better robustness against label imbalance.
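A quick numeric check of this asymmetry (our own illustration, not from the paper): measuring distance along the rate axis from the point where $p(y=1) = 0.5$, it takes almost three times as far to climb to $p = 0.9$ as to drop to $p = 0.1$.

```python
import numpy as np

# Bernoulli-Poisson link: p(y = 1 | lam) = 1 - exp(-lam)
lam_half = -np.log(1 - 0.5)   # rate where p = 0.5 (= ln 2 ~ 0.693)
lam_p9   = -np.log(1 - 0.9)   # rate where p = 0.9 (~ 2.303)
lam_p1   = -np.log(1 - 0.1)   # rate where p = 0.1 (~ 0.105)
print(lam_p9 - lam_half)      # ~1.609: slow climb from 0.5 toward 1
print(lam_half - lam_p1)      # ~0.588: fast drop from 0.5 toward 0
```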
3 Inference
A key aspect of our framework is that the conditional posteriors of all the model parameters are available in closed form using data augmentation strategies that we will describe below. In particular, since we model the binary label matrix as thresholded counts, we are also able to leverage some of the inference methods proposed for Bayesian matrix factorization of count-valued data [27] to derive an efficient Gibbs sampler for our model.

Inference in our model requires estimating $V \in \mathbb{R}_+^{L \times K}$, $W \in \mathbb{R}^{D \times K}$, $U \in \mathbb{R}_+^{K \times N}$, and the hyperparameters of the model. As we will see below, the latent count vectors $\{m_n\}_{n=1}^N$ (which are functions of $V$ and $U$) provide sufficient statistics for the model parameters. Each element of $m_n$ (if the corresponding element in $y_n$ is one) is drawn from a truncated Poisson distribution
$$m_{ln} \sim \text{Poisson}_+(V_{l,:} u_n) = \text{Poisson}_+(\lambda_{ln}) \quad (10)$$
where $V_{l,:}$ denotes the $l$th row of $V$ and $\lambda_{ln} = \sum_{k=1}^K \lambda_{lkn} = \sum_{k=1}^K v_{lk} u_{kn}$. Thus we can also write $m_{ln} = \sum_{k=1}^K m_{lkn}$ where $m_{lkn} \sim \text{Poisson}_+(\lambda_{lkn}) = \text{Poisson}_+(v_{lk} u_{kn})$.

On the other hand, if $y_{ln} = 0$ then $m_{ln} = 0$ with probability one (Eq. (9)), and therefore need not be sampled because it does not affect the sufficient statistics of the model parameters.

Using the equivalence of the Poisson and multinomial distributions [27], we can express the decomposition $m_{ln} = \sum_{k=1}^K m_{lkn}$ as a draw from a multinomial
$$[m_{l1n}, \ldots, m_{lKn}] \sim \text{Mult}(m_{ln}; \zeta_{l1n}, \ldots, \zeta_{lKn}) \quad (11)$$
where $\zeta_{lkn} = \frac{v_{lk} u_{kn}}{\sum_{k'=1}^K v_{lk'} u_{k'n}}$. This allows us to exploit the Dirichlet-multinomial conjugacy and helps in designing efficient Gibbs sampling and EM algorithms for doing inference in our model. As discussed before, the computational cost of both algorithms scales in the number of ones in the label matrix $Y$, which makes our model especially appealing for dealing with multilabel learning problems where the label matrix is massive but highly sparse.
3.1 Gibbs Sampling
Gibbs sampling for our model proceeds as follows.

Sampling V: Using Eq. 11 and the Dirichlet-multinomial conjugacy, each column of $V \in \mathbb{R}_+^{L \times K}$ can be sampled as
$$v_k \sim \text{Dirichlet}(\eta + m_{1k}, \ldots, \eta + m_{Lk}) \quad (12)$$
where $m_{lk} = \sum_n m_{lkn}$, $\forall l = 1, \ldots, L$.

Sampling U: Using the gamma-Poisson conjugacy, each entry of $U \in \mathbb{R}_+^{K \times N}$ can be sampled as
$$u_{kn} \sim \text{Gamma}(r_k + m_{kn}, p_{kn}) \quad (13)$$
where $m_{kn} = \sum_l m_{lkn}$ and $p_{kn} = \sigma(w_k^\top x_n)$.

Sampling W: Since $m_{kn} = \sum_l m_{lkn}$ and $m_{lkn} \sim \text{Poisson}_+(v_{lk} u_{kn})$, $p(m_{kn} | u_{kn})$ is also Poisson. Further, since $p(u_{kn} | r, p_{kn})$ is gamma, we can integrate out $u_{kn}$ from $p(m_{kn} | u_{kn})$, which gives
$$m_{kn} \sim \text{NegBin}(r_k, p_{kn})$$
where $\text{NegBin}(\cdot, \cdot)$ denotes the negative Binomial distribution.

Although the negative Binomial is not conjugate to the Gaussian prior on $w_k$, we leverage the Pólya-Gamma data augmentation strategy [17] to "Gaussianify" the negative Binomial likelihood. Doing this, we are able to derive closed-form Gibbs sampling updates for $w_k$, $k = 1, \ldots, K$. The Pólya-Gamma (PG) strategy is based on sampling a set of auxiliary variables, one for each observation (which, in the context of sampling $w_k$, are the latent counts $m_{kn}$). For sampling $w_k$, we draw $N$ Pólya-Gamma random variables [17] $\omega_{k1}, \ldots, \omega_{kN}$ (one for each training example) as
$$\omega_{kn} \sim \text{PG}(m_{kn} + r_k, w_k^\top x_n) \quad (14)$$
where $\text{PG}(\cdot, \cdot)$ denotes the Pólya-Gamma distribution [17].

Given these PG variables, the posterior distribution of $w_k$ is Gaussian, $\text{Nor}(\mu_{w_k}, \Sigma_{w_k})$, where
$$\Sigma_{w_k} = (X \Omega_k X^\top + \Sigma^{-1})^{-1} \quad (15)$$
$$\mu_{w_k} = \Sigma_{w_k} X \kappa_k \quad (16)$$
where $\Omega_k = \text{diag}(\omega_{k1}, \ldots, \omega_{kN})$ and $\kappa_k = [(m_{k1} - r_k)/2, \ldots, (m_{kN} - r_k)/2]^\top$.

Sampling the hyperparameters: The hyperparameter $r_k$ is given a gamma prior and can be sampled easily. The other hyperparameters $\rho_1, \ldots, \rho_D$ are estimated using Type-II maximum likelihood estimation [22].
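A sketch of the resulting conditional draws for $V$ and $U$ (Eqs. 12-13), given counts allocated as in the earlier latent-count sketch. Reading $p_{kn}$ as the gamma scale parameter is our interpretation (consistent with Eq. 6 and with the columns of $V$ summing to one), and the Pólya-Gamma draw for $w_k$ is omitted since it requires a dedicated PG sampler.

```python
import numpy as np

def gibbs_update_V_U(rng, counts, eta, r, P, L, K, N):
    """Conditional draws for V and U (Eqs. 12-13) given the latent counts.
    counts: dict (l, n) -> length-K count vector of m_lkn values;
    eta: Dirichlet hyperparameter; r: (K,) gamma shapes;
    P: (K, N) matrix with P[k, n] = sigma(w_k^T x_n)."""
    M_lk = np.zeros((L, K))                  # m_lk = sum_n m_lkn
    M_kn = np.zeros((K, N))                  # m_kn = sum_l m_lkn
    for (l, n), c in counts.items():
        M_lk[l] += c
        M_kn[:, n] += c
    # Eq. (12): column k of V ~ Dirichlet(eta + m_1k, ..., eta + m_Lk)
    V = np.column_stack([rng.dirichlet(eta + M_lk[:, k]) for k in range(K)])
    # Eq. (13): u_kn ~ Gamma(r_k + m_kn, p_kn), p_kn read as the scale
    U = rng.gamma(shape=r[:, None] + M_kn, scale=P)
    return V, U
```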
3.2 Expectation Maximization
The Gibbs sampler described in Section 3.1 is efficient and has a computational complexity that scales in the number of ones in the label matrix. To further scale up the inference, we also develop an efficient Expectation-Maximization (EM) inference algorithm for our model. In the E-step, we need to compute the expectations of the local variables $U$, the latent counts, and the Pólya-Gamma variables $\omega_{k1}, \ldots, \omega_{kN}$, for $k = 1, \ldots, K$. These expectations are available in closed form and can thus easily be computed. In particular, the expectation of each Pólya-Gamma variable $\omega_{kn}$ is very efficient to compute and is available in closed form [20]
$$\mathbb{E}[\omega_{kn}] = \frac{m_{kn} + r_k}{2 w_k^\top x_n} \tanh(w_k^\top x_n / 2) \quad (17)$$
The M-step involves a maximization w.r.t. $V$ and $W$, which essentially involves solving for their maximum-a-posteriori (MAP) estimates, which are available in closed form. In particular, as shown in [20], estimating $w_k$ requires solving a linear system which, in our case, is of the form
$$S_k w_k = d_k \quad (18)$$
where $S_k = X \Omega_k X^\top + \Sigma^{-1}$ and $d_k = X \kappa_k$; $\Omega_k$ and $\kappa_k$ are defined as in Section 3.1, except that the Pólya-Gamma random variables are replaced by their expectations given by Eq. 17. Note that Eq. 18 can be straightforwardly solved as $w_k = S_k^{-1} d_k$. However, convergence of the EM algorithm [20] does not require solving for $w_k$ exactly in each EM iteration, and running a couple of iterations of any of the various iterative methods that solve a linear system of equations can be used for this step. We use the Conjugate Gradient [2] method to solve this, which also allows us to exploit the sparsity in $X$ and $\Omega_k$ to very efficiently solve this system of equations, even when $D$ and $N$ are very large.

Although in this paper we only use the batch EM, it is possible to speed it up even further using an online version of this EM algorithm, as shown in [20]. The online EM processes data in small minibatches and in each EM iteration updates the sufficient statistics of the global parameters. In our case, these sufficient statistics include $S_k$ and $d_k$, for $k = 1, \ldots, K$, and can be updated as
$$S_k^{(t+1)} = (1 - \gamma_t) S_k^{(t)} + \gamma_t X^{(t)} \Omega_k^{(t)} X^{(t)\top}$$
$$d_k^{(t+1)} = (1 - \gamma_t) d_k^{(t)} + \gamma_t X^{(t)} \kappa_k^{(t)}$$
where $\gamma_t$ is a step size, $X^{(t)}$ denotes the set of examples in the current minibatch, and $\Omega_k^{(t)}$ and $\kappa_k^{(t)}$ denote quantities that are computed using the data from the current minibatch.
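A sketch of the approximate M-step for $w_k$ (Eq. 18) using a couple of conjugate-gradient iterations. This is our illustration: $S_k$ is formed densely here for clarity, whereas in practice one would exploit the sparsity of $X$, as the text notes.

```python
import numpy as np
from scipy.sparse.linalg import cg

def m_step_w(X, omega_k, kappa_k, Sigma_inv, w_init, n_cg_iters=2):
    """Approximately solve S_k w_k = d_k (Eq. 18) with conjugate gradient,
    warm-started at the current w_k.
    X: (D, N) features; omega_k: (N,) PG expectations from Eq. (17);
    kappa_k: (N,) with entries (m_kn - r_k)/2; Sigma_inv: (D, D) prior precision."""
    S_k = X @ (omega_k[:, None] * X.T) + Sigma_inv   # S_k = X Omega_k X^T + Sigma^{-1}
    d_k = X @ kappa_k                                # d_k = X kappa_k
    w_k, _ = cg(S_k, d_k, x0=w_init, maxiter=n_cg_iters)
    return w_k
```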
3.3 Predicting Labels for Test Examples
Predicting the label vector $y_* \in \{0,1\}^L$ for a new test example $x_* \in \mathbb{R}^D$ can be done as
$$p(y_* = 1 | x_*) = \int_{u_*} (1 - \exp(-V u_*)) \, p(u_*) \, du_*$$
If using Gibbs sampling, the integral above can be approximated using samples $\{u_*^{(m)}\}_{m=1}^M$ from the posterior of $u_*$. It is also possible to integrate out $u_*$ (details skipped for brevity) and get closed form estimates of the probability of each label $y_{l*}$ in terms of the model parameters $V$ and $W$, and it is given by
$$p(y_{l*} = 1 | x_*) = 1 - \prod_{k=1}^{K} \frac{1}{[V_{lk} \exp(w_k^\top x_*) + 1]^{r_k}} \quad (19)$$
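Since Eq. (19) is in closed form, test-time prediction needs no inference over $u_*$; a minimal sketch (ours) computing all $L$ label probabilities at once:

```python
import numpy as np

def predict_label_probs(V, W, r, x):
    """Closed-form p(y_l = 1 | x) for all L labels, from Eq. (19).
    V: (L, K), W: (D, K), r: (K,), x: (D,)."""
    a = np.exp(W.T @ x)                      # a_k = exp(w_k^T x), shape (K,)
    # prod_k (V_lk a_k + 1)^{-r_k}, computed stably in log space
    log_prod = -(r * np.log1p(V * a)).sum(axis=1)
    return 1.0 - np.exp(log_prod)
```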
4 Computational Cost
Computing the latent count $m_{ln}$ for each nonzero entry $y_{ln}$ in $Y$ requires computing $[m_{l1n}, \ldots, m_{lKn}]$, which takes $O(K)$ time; therefore computing all the latent counts takes $O(\text{nnz}(Y)K)$ time, which is very efficient if $Y$ has very few nonzeros (which is true of most real-world multi-label learning problems). Estimating $V$, $U$, and the hyperparameters is relatively cheap and can be done very efficiently. The Pólya-Gamma variables, when doing Gibbs sampling, can be efficiently sampled using methods described in [17]; and when doing EM, these can be computed even more cheaply because the Pólya-Gamma expectations, which are available in closed form (as a hyperbolic tan function), can be evaluated very efficiently [20]. The most dominant step is estimating $W$; when doing Gibbs sampling, if done naïvely, it would take $O(DK^3)$ time if sampling $W$ row-wise, and $O(KD^3)$ time if sampling column-wise. However, if using the EM algorithm, estimating $W$ can be done much more efficiently, e.g., using Conjugate Gradient updates, because it is not even required to solve for $W$ exactly in each iteration of the EM algorithm [20]. Also note that since most of the parameter updates for different $k = 1, \ldots, K$, $n = 1, \ldots, N$ are all independent of each other, our Gibbs sampler and the EM algorithms can be easily parallelized/block-updated.
5 Connection: Topic Models with Meta-Data
As discussed earlier, our multi-label learning framework is similar in spirit to a topic model, as the label embeddings naturally correspond to topics: each Dirichlet-drawn column $v_k$ of the matrix $V \in \mathbb{R}_+^{L \times K}$ can be seen as representing a "topic". In fact, our model, interestingly, can directly be seen as a topic model [3, 27] where we have side-information associated with each document (e.g., document features). For example, each document $y_n \in \{0,1\}^L$ (in a bag-of-words representation with vocabulary of size $L$) may also have some meta-data $x_n \in \mathbb{R}^D$ associated with it. Our model can therefore also be used to perform topic modeling of text documents with such meta-data [15, 12, 29, 19] in a robust and scalable manner.
6 Related Work
Despite a significant number of methods proposed in recent years, learning from multi-label
data remains an active area of research, especially due to the recent surge of interest in
learning when the output space (i.e., the number of labels) is massive. To handle the huge dimensionality of the label space, a common approach is to embed the labels in a lower-dimensional space,
e.g., using methods such as Canonical Correlation Analysis or other methods for jointly embedding
feature and label vectors [26, 5, 23], Compressed Sensing [8, 10], or by assuming that the matrix
consisting of the weight vectors of all the labels is a low-rank matrix [25]. Another interesting line
of work on label embedding methods makes use of random projections to reduce the label space
dimensionality [11, 16], or uses methods such as multitask learning (each label is a task).
Our proposed framework is most similar in spirit to the aforementioned class of label embedding
based methods (we compare with some of these in our experiments). In contrast to these methods,
our framework reduces the label-space dimensionality via a nonlinear mapping (Section 2), has accompanying inference algorithms that scale in the number of positive labels (Section 2.1),
has an underlying generative model that more realistically models the imbalanced nature of the labels
in the label matrix (Section 2.2), can deal with missing labels, and is easily parallelizable. Also, the
connection to topic models provide a nice interpretability to the results, which is usually not possible
with the other methods (e.g., in our model, the columns of the matrix V can be seen as a set of topics
over the labels; in Section 7.2, we show an experiment on this). Moreover, although in this paper, we
have focused on the multi-label learning problem, our framework can also be applied for multiclass
problems via the one-vs-all reduction, in which case the label matrix is usually very sparse (each
column of the label matrix represents the labels of a single one-vs-all binary classification problem).
Finally, although not a focus of this paper, some other important aspects of the multi-label learning
problem have also been looked at in recent work. For example, fast prediction at test time is an
important concern when the label space is massive. To deal with this, some recent work focuses
on methods that only incur a logarithmic cost (in the number of labels) at test time [1, 18], e.g., by
inferring and leveraging a tree structure over the labels.
7 Experiments
We evaluate the proposed multi-label learning framework on four benchmark multi-label data sets - bibtex, delicious, compphys, eurlex [25] - with their statistics summarized in Table 1. The data sets we use in our experiments have both feature and label dimensions that range from a few hundred to several thousand. In addition, the feature and/or label matrices are also quite sparse.
| Data set  | D      | L    | Ntrain | L̄ (train) | D̄ (train) | Ntest | L̄ (test) | D̄ (test) |
|-----------|--------|------|--------|-----------|-----------|-------|----------|----------|
| bibtex    | 1836   | 159  | 4880   | 2.40      | 68.74     | 2515  | 2.40     | 68.50    |
| delicious | 500    | 983  | 12920  | 19.03     | 18.17     | 3185  | 19.00    | 18.80    |
| compphys  | 33,284 | 208  | 161    | 9.80      | 792.78    | 40    | 11.83    | 899.20   |
| eurlex    | 5000   | 3993 | 17413  | 5.30      | 236.69    | 1935  | 5.32     | 240.96   |

Table 1: Statistics of the data sets used in our experiments. L̄ denotes the average number of positive labels per example; D̄ denotes the average number of nonzero features per example.
We compare the proposed model BMLPL with four state-of-the-art methods. All these methods, just like our method, are based on the assumption that the label vectors live in a low dimensional space.
- CPLST: Conditional Principal Label Space Transformation [5]: CPLST is based on embedding the label vectors conditioned on the features.
- BCS: Bayesian Compressed Sensing for multi-label learning [10]: BCS is a Bayesian method that uses the idea of doing compressed sensing on the labels [8].
- WSABIE: It assumes that the feature as well as the label vectors live in a low dimensional space. The model is based on optimizing a weighted approximate ranking loss [23].
- LEML: Low rank Empirical risk minimization for multi-label learning [25]. For LEML, we report the best results across the three loss functions (squared, logistic, hinge) they propose.
Table 2 shows the results where we report the Area Under the ROC Curve (AUC) for each method on
all the data sets. For each method, as done in [25], we vary the label space dimensionality from 20%
- 100% of L, and report the best results. For BMLPL, both Gibbs sampling and EM based inference
perform comparably (though EM runs much faster than Gibbs); here we report results obtained with
EM inference only (Section 7.4 provides another comparison between these two inference methods).
The EM algorithms were run for 1000 iterations and they converged in all the cases.
As shown in the results in Table 2, in almost all of the cases, the proposed BMLPL model performs
better than the other methods (except for the compphys data set, where the AUC is slightly worse than
LEML). The better performance of our model justifies the flexible Bayesian formulation and also
shows the evidence of the robustness provided by the asymmetric link function against sparsity and
label imbalance in the label matrix (note that the data sets we use have very sparse label matrices).
| Data set  | CPLST  | BCS    | WSABIE | LEML   | BMLPL  |
|-----------|--------|--------|--------|--------|--------|
| bibtex    | 0.8882 | 0.8614 | 0.9182 | 0.9040 | 0.9210 |
| delicious | 0.8834 | 0.8000 | 0.8561 | 0.8894 | 0.8950 |
| compphys  | 0.7806 | 0.7884 | 0.8212 | 0.9274 | 0.9211 |
| eurlex    | -      | -      | 0.8651 | 0.9456 | 0.9520 |

Table 2: Comparison of the various methods in terms of AUC scores on all the data sets. Note: CPLST and BCS were not feasible to run on the eurlex data, so we are unable to report those numbers here.
7.1 Results with Missing Labels
Our generative model for the label matrix can also handle missing labels (the missing labels may
include both zeros and ones). We perform an experiment on two of the data sets - bibtex and compphys
- where only 20% of the labels from the label matrix are revealed (note that, of all these revealed
labels, our model uses only the positive labels), and compare our model with LEML and BCS (both
are capable of handling missing labels). The results are shown in Table 3. For each method, we
set K = 0.4L. As the results show, our model yields better results as compared to the competing
methods even in the presence of missing labels.
| Data set | BCS    | LEML   | BMLPL  |
|----------|--------|--------|--------|
| bibtex   | 0.7871 | 0.8332 | 0.8420 |
| compphys | 0.6442 | 0.7964 | 0.8012 |

Table 3: AUC scores with only 20% labels observed.
7.2 Qualitative Analysis: Topic Modeling on Eurlex Data
Since in our model each column of the $L \times K$ matrix $V$ represents a distribution (i.e., a "topic") over the labels, to assess its ability to discover meaningful topics, we run an experiment on the Eurlex data with $K = 20$ and look at each column of $V$. The Eurlex data consists of 3993 labels (each of which is a tag; a document can have a subset of the tags), so each column in $V$ is of that
size. In Table 4, we show five of the topics (and top five labels in each topic, based on the magnitude
of the entries in the corresponding column of V). As shown in Table 4, our model is able to discover
clear and meaningful topics from the Eurlex data, which shows its usefulness as a topic model when
each document yn ? {0, 1}L has features in form of meta data xn ? RD associated with it.
| Topic 1 (Nuclear)     | Topic 2 (Agreements) | Topic 3 (Environment)      | Topic 4 (Stats & Data) | Topic 5 (Fishing Trade)     |
|-----------------------|----------------------|----------------------------|------------------------|-----------------------------|
| nuclear safety        | EC agreement         | environmental protection   | community statistics   | fishing regulations         |
| nuclear power station | trade agreement      | waste management           | statistical method     | fishing agreement           |
| radioactive effluent  | EC interim agreement | env. monitoring            | agri. statistics       | fishery management          |
| radioactive waste     | trade cooperation    | dangerous substance        | statistics             | fishing area                |
| radioactive pollution | EC coop. agree.      | pollution control measures | data transmission      | conservation of fish stocks |

Table 4: Most probable words in different topics.
7.3 Scalability w.r.t. Number of Positive Labels
To demonstrate the linear scalability in the number of positive labels, we run an experiment on the
Delicious data set by varying the number of positive labels used for training the model from 20% to
100% (to simulate this, we simply treat all the other labels as zeros, so as to have a constant label
matrix size). We run each experiment for 100 iterations (using EM for the inference) and report
the running time for each case. Fig. 2 (left) shows the results, which demonstrate the roughly linear scalability w.r.t. the number of positive labels. This experiment is only meant as a small illustration. Note that the actual scalability will also depend on the relative values of $D$ and $L$ and the sparsity of $Y$. In any case, the computations that involve the labels (both positives and negatives) only depend on the positive labels, and this part, for our model, is clearly linear in the number of
positive labels in the label matrix.
[Figure 2: two panels. Left: running time (y-axis) vs. fraction of positive labels used (x-axis, 20%-100%). Right: AUC (y-axis) vs. running time on a log scale (x-axis), with curves for EM-CG, EM-Exact, and Gibbs.]
Figure 2: (Left) Scalability w.r.t. number of positive labels. (Right) Time vs accuracy comparison for Gibbs and EM (with exact and with CG based M steps).
7.4 Gibbs Sampling vs EM
We finally show another experiment comparing Gibbs sampling and EM for our model in terms of accuracy vs running time. We run each inference method only for 100 iterations. For EM, we try two settings: EM with an exact M step for $W$, and EM with an approximate M step where we run 2 steps of conjugate gradient (CG). Fig. 2 (right) shows a plot comparing each inference method in terms of accuracy vs running time. As Fig. 2 (right) shows, the EM algorithms (both the exact one as well as the one that uses CG) attain reasonably high AUC scores in a short amount of time, while the Gibbs sampler takes much longer per iteration and seems to converge rather slowly. Moreover, remarkably, EM with 2 iterations of CG in each M step seems to perform comparably to EM with an exact M step, while running considerably faster. As for the Gibbs sampler, although it runs slower than the EM based inference, it should be noted that it would still be considerably faster than other fully Bayesian methods for multi-label prediction (such as BCS [10]) because it only requires evaluating the likelihoods over the positive labels in the label matrix. Moreover, the step involving sampling of the $W$ matrix can be made more efficient by using Cholesky decompositions, which can avoid the matrix inversions needed for computing the covariance of the Gaussian posterior on $w_k$.
8 Discussion and Conclusion
We have presented a scalable Bayesian framework for multi-label learning. In addition to providing
a flexible model for sparse label matrices, our framework is also computationally attractive and
can scale to massive data sets. The model is easy to implement and easy to parallelize. Both full
Bayesian inference via simple Gibbs sampling and EM based inference can be carried out in this
model in a computationally efficient way. Possible future work includes developing online Gibbs
and online EM algorithms to further enhance the scalability of the proposed framework to handle
even bigger data sets. Another possible extension could be to additionally impose label correlations
more explicitly (in addition to the low-rank structure already imposed by the current model), e.g.,
by replacing the Dirichlet distribution on the columns of V with logistic normal distributions [4].
Because our framework allows efficiently computing the predictive distribution of the labels (as
shown in Section 3.3), it can be easily extended for doing active learning on the labels [10]. Finally,
although here we only focused on multi-label learning, our framework can be readily used as a robust
and scalable alternative to methods that perform binary matrix factorization with side-information.
Acknowledgements This research was supported in part by ARO, DARPA, DOE, NGA and ONR
References
[1] Rahul Agrawal, Archit Gupta, Yashoteja Prabhu, and Manik Varma. Multi-label learning with millions of
labels: Recommending advertiser bid phrases for web pages. In WWW, 2013.
[2] Dimitri P Bertsekas. Nonlinear programming. Athena scientific Belmont, 1999.
[3] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. JMLR, 2003.
[4] Jianfei Chen, Jun Zhu, Zi Wang, Xun Zheng, and Bo Zhang. Scalable inference for logistic-normal topic
models. In NIPS, 2013.
[5] Yao-Nan Chen and Hsuan-Tien Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, 2012.
[6] Eva Gibaja and Sebastián Ventura. Multilabel learning: A review of the state of the art and ongoing
research. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2014.
[7] Eva Gibaja and Sebastián Ventura. A tutorial on multilabel learning. ACM Comput. Surv., 2015.
[8] Daniel Hsu, Sham Kakade, John Langford, and Tong Zhang. Multi-label prediction via compressed
sensing. In NIPS, 2009.
[9] Changwei Hu, Piyush Rai, and Lawrence Carin. Zero-truncated poisson tensor factorization for massive
binary tensors. In UAI, 2015.
[10] Ashish Kapoor, Raajay Viswanathan, and Prateek Jain. Multilabel classification using bayesian compressed sensing. In NIPS, 2012.
[11] Nikos Karampatziakis and Paul Mineiro. Scalable multilabel prediction via randomized methods. arXiv
preprint arXiv:1502.02710, 2015.
[12] Dae I Kim and Erik B Sudderth. The doubly correlated nonparametric topic model. In NIPS, 2011.
[13] Xiangnan Kong, Zhaoming Wu, Li-Jia Li, Ruofei Zhang, Philip S Yu, Hang Wu, and Wei Fan. Large-scale
multi-label learning with incomplete label assignments. In SDM, 2014.
[14] Xin Li, Feipeng Zhao, and Yuhong Guo. Conditional restricted boltzmann machines for multi-label
learning with incomplete labels. In AISTATS, 2015.
[15] David Mimno and Andrew McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In UAI, 2008.
[16] Paul Mineiro and Nikos Karampatziakis. Fast label embeddings for extremely large output spaces. In
ICLR Workshop, 2015.
[17] Nicholas G Polson, James G Scott, and Jesse Windle. Bayesian inference for logistic models using Pólya-Gamma latent variables. Journal of the American Statistical Association, 108(504):1339-1349, 2013.
[18] Yashoteja Prabhu and Manik Varma. FastXML: a fast, accurate and stable tree-classifier for extreme
multi-label learning. In KDD, 2014.
[19] Maxim Rabinovich and David Blei. The inverse regression topic model. In ICML, 2014.
[20] James G Scott and Liang Sun. Expectation-maximization for logistic regression. arXiv preprint arXiv:1306.0040, 2013.
[21] Farbound Tai and Hsuan-Tien Lin. Multilabel classification with principal label space transformation.
Neural Computation, 2012.
[22] Michael E Tipping. Bayesian inference: An introduction to principles and practice in machine learning.
In Advanced lectures on machine Learning, pages 41?62. Springer, 2004.
[23] Jason Weston, Samy Bengio, and Nicolas Usunier. WSABIE: Scaling up to large vocabulary image
annotation. In IJCAI, 2011.
[24] Yan Yan, Glenn Fung, Jennifer G Dy, and Romer Rosales. Medical coding classification by leveraging
inter-code relationships. In KDD, 2010.
[25] Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit S Dhillon. Large-scale multi-label learning
with missing labels. In ICML, 2014.
[26] Yi Zhang and Jeff G Schneider. Multi-label output codes using canonical correlation analysis. In AISTATS,
2011.
[27] M. Zhou, L. A. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and poisson factor
analysis. In AISTATS, 2012.
[28] Mingyuan Zhou. Infinite edge partition models for overlapping community detection and link prediction.
In AISTATS, 2015.
[29] Jun Zhu, Ni Lao, Ning Chen, and Eric P Xing. Conditional topical coding: an efficient topic model
conditioned on rich features. In KDD, 2011.
| 5770 |@word multitask:1 kong:1 version:1 inversion:1 proportion:1 seems:2 hu:2 decomposition:1 covariance:1 pg:4 olyagamma:2 thereby:1 mlk:2 reduction:2 raajay:1 score:3 daniel:1 bibtex:5 document:6 interestingly:1 current:3 comparing:3 protection:1 written:1 readily:1 john:1 belmont:1 realistic:1 partition:1 kdd:3 cheap:1 plot:2 interpretable:1 drop:1 update:4 v:6 generative:3 prohibitive:1 fewer:1 discovering:1 ntrain:1 mln:8 mccallum:1 short:1 blei:2 provides:2 cse:2 zhang:4 five:2 along:1 become:1 beta:1 qualitative:1 consists:1 doubly:1 manner:1 inter:1 roughly:1 surge:2 nor:2 multi:30 olya:8 actual:1 becomes:1 provided:1 estimating:5 moreover:4 underlying:1 discover:1 prateek:2 skewness:1 transformation:2 growth:1 exactly:2 demonstrates:1 classifier:1 zl:1 control:1 medical:2 yn:14 bertsekas:1 positive:20 before:1 safety:1 local:3 treat:1 despite:2 parallelize:1 equivalence:1 suggests:1 factorization:3 range:1 vu:2 practice:2 block:1 implement:2 lcarin:1 area:4 nnz:1 empirical:1 yan:2 thought:2 mult:1 hyperbolic:1 projection:1 word:2 attain:1 refers:1 get:1 cannot:1 context:1 live:2 risk:1 www:1 equivalent:1 map:1 imposed:1 missing:7 fishing:4 jesse:1 independently:1 focused:2 hsuan:2 stats:1 fastxml:1 nuclear:3 varma:2 embedding:7 handle:4 updated:2 construction:1 tan:1 massive:9 exact:5 duke:2 programming:1 us:3 designing:2 samy:1 agreement:5 surv:1 element:4 approximated:1 particularly:1 continues:2 asymmetric:2 observed:4 preprint:2 solved:2 vlk:3 wang:1 thousand:2 eva:2 improper:1 sun:1 trade:3 substantial:1 environment:1 complexity:1 multilabel:7 depend:4 solving:3 incur:1 predictive:1 eric:1 easily:6 darpa:1 stock:1 iit:1 various:3 represented:2 jain:2 fast:3 describe:1 quite:1 posed:1 valued:2 solve:2 coop:1 drawing:2 compressed:5 ability:1 statistic:10 jointly:2 online:4 advantage:2 agrawal:1 sdm:1 propose:1 lowdimensional:1 aro:1 product:1 kapoor:1 flexibility:1 realistically:2 pronounced:1 xun:1 scalability:7 webpage:2 convergence:1 ijcai:1 transmission:1 object:1 piyush:3 derive:2 develop:2 ac:1 help:1 andrew:2 eq:13 solves:1 auxiliary:1 involves:2 rosales:1 ning:1 require:2 assign:1 probable:1 extension:1 accompanying:1 normal:2 exp:5 lawrence:2 mapping:1 predict:1 yashoteja:2 pkn:8 dictionary:1 vary:1 omitted:1 estimation:1 bag:1 label:166 tanh:1 weighted:2 minimization:1 clearly:1 gaussian:3 rather:1 avoid:1 zhou:2 varying:1 focus:2 vk:4 bernoulli:8 likelihood:8 rank:4 karampatziakis:2 contrast:2 skipped:1 cg:5 kim:1 posteriori:1 inference:22 vl:2 typically:1 henao:2 issue:1 aforementioned:1 classification:5 flexible:2 eurlex:8 art:3 equal:2 aware:1 saving:1 ng:1 sampling:24 atom:2 env:1 represents:2 look:1 yu:2 carin:3 icml:2 future:1 report:7 few:3 gamma:15 leml:6 individual:1 replaced:1 consisting:1 detection:1 ukn:11 interest:2 huge:1 highly:3 mining:1 zheng:1 male:1 extreme:2 amenable:1 accurate:1 integral:1 encourage:1 capable:1 fu:1 edge:1 vely:1 tree:2 incomplete:3 dae:1 column:12 modeling:4 earlier:1 assignment:2 maximization:6 phrase:1 cost:5 rabinovich:1 subset:2 entry:3 hundred:1 usefulness:1 kn:7 lkn:2 considerably:2 combined:1 randomized:1 interdisciplinary:1 yl:2 enhance:1 michael:2 fishery:1 ashish:1 yao:1 na:1 squared:1 augmentation:3 management:2 possibly:3 slowly:1 worse:1 american:1 zhao:1 leading:1 ricardo:1 dimitri:1 li:3 coding:3 wk:21 summarized:1 waste:2 includes:1 explicitly:1 ranking:1 depends:1 manik:2 tion:1 try:1 jason:1 closed:8 doing:8 xing:1 annotation:2 jia:1 ass:1 ni:1 accuracy:3 efficiently:6 correspond:3 yield:1 radioactive:3 
bayesian:15 comparably:2 advertising:1 monitoring:1 converged:1 parallelizable:1 against:2 james:2 naturally:2 associated:3 couple:1 sampled:5 hsu:1 knowledge:1 dimensionality:8 back:1 mingyuan:1 tipping:1 rahul:1 wei:1 formulation:1 done:6 though:1 changwei:2 just:1 correlation:4 langford:1 hand:2 web:1 replacing:1 nonlinear:2 overlapping:1 cplst:4 minibatch:2 defines:1 logistic:9 scientific:1 true:1 nonzero:2 dhillon:1 deal:2 attractive:1 during:1 auc:6 noted:1 demonstrate:1 performs:1 image:3 wise:3 recently:1 common:2 multinomial:4 rl:5 million:2 discussed:2 extend:1 association:1 significant:2 expressing:1 refer:1 gibbs:23 rd:6 stable:1 longer:1 etc:1 dominant:1 posterior:5 imbalanced:1 recent:5 purushottam:1 optimizing:1 termed:1 negbin:2 meta:4 binary:13 onr:1 kar:1 delicious:4 tien:2 yi:1 seen:3 impose:1 nikos:2 employed:1 parallelized:1 surely:1 converge:1 schneider:1 advertiser:1 ii:1 full:3 bcs:7 infer:1 nonzeros:2 reduces:1 sham:1 faster:4 offer:1 dept:2 lin:2 bigger:1 prediction:7 scalable:9 regression:4 mk1:1 involving:1 essentially:1 expectation:10 poisson:25 arxiv:4 iteration:9 achieved:1 dirichletmultinomial:1 addition:5 remarkably:1 sudderth:1 parallelization:1 rest:1 unlike:1 tend:2 leveraging:2 spirit:3 jordan:1 integer:1 presence:2 leverage:3 revealed:2 bengio:1 embeddings:8 easy:2 bid:1 affect:1 zi:1 competing:1 reduce:2 idea:1 multiclass:1 suffer:1 ignored:1 clear:1 involve:1 jianfei:1 amount:3 nonparametric:1 canonical:2 tutorial:1 fish:1 estimated:1 windle:1 per:3 vun:5 write:2 hyperparameter:1 express:1 key:2 four:2 drawn:4 thresholded:1 v1:1 fraction:1 year:1 nga:1 realworld:1 run:9 inverse:1 almost:2 wu:2 draw:2 dy:1 scaling:2 followed:1 nan:1 fan:1 dangerous:1 tag:3 u1:1 aspect:4 speed:1 simulate:1 extremely:1 interim:1 relatively:1 developing:1 rai:2 viswanathan:1 fung:1 combination:2 conjugate:4 remain:1 across:1 em:32 wsabie:3 slightly:1 appealing:3 kakade:1 making:2 projecting:2 restricted:1 taken:1 computationally:3 ln:2 conjugacy:5 equation:2 agree:1 turn:1 count:14 tai:1 jennifer:1 needed:1 usunier:1 available:5 hyperpriors:1 nicholas:1 romer:1 batch:1 robustness:2 alternative:1 slower:2 original:1 kln:2 denotes:11 dirichlet:8 assumes:2 l1n:1 binomial:4 graphical:1 running:5 hinge:2 include:2 top:1 exploit:2 archit:1 k1:3 especially:2 m1k:1 tensor:2 pollution:2 already:1 quantity:1 looked:1 strategy:4 dependence:1 gradient:3 iclr:1 subspace:1 link:10 unable:1 athena:1 philip:1 topic:35 prabhu:2 assuming:1 erik:1 code:2 modeled:1 relationship:1 illustration:1 providing:1 liang:1 regulation:1 dunson:1 ventura:2 m1n:1 negative:5 polson:1 boltzmann:1 perform:5 imbalance:2 observation:3 benchmark:2 yln:4 truncated:3 relational:1 y1:1 topical:1 station:1 arbitrary:1 community:2 david:3 required:1 connection:2 kd3:1 nip:5 address:2 able:3 proceeds:1 usually:3 below:2 scott:2 sparsity:6 challenge:1 interpretability:1 video:1 power:1 predicting:3 advanced:1 nth:1 mn:13 representing:1 zhu:2 lao:1 axis:2 lk:2 carried:1 jun:2 coupled:1 text:1 prior:5 nice:1 acknowledgement:1 review:2 discovery:1 marginalizing:1 relative:1 fully:2 probit:4 loss:4 lecture:1 interesting:1 allocation:1 integrate:2 sufficient:5 thresholding:2 principle:1 row:2 cooperation:1 supported:1 enjoys:1 side:2 sparse:8 mimno:1 curve:1 dimension:2 xn:9 world:1 vocabulary:2 evaluating:1 rich:1 commonly:2 made:1 ec:3 approximate:2 hang:1 iitk:1 dealing:1 global:1 active:4 uai:2 assumed:1 conservation:1 recommending:1 ruofei:1 un:6 latent:13 iterative:1 mineiro:2 sk:5 glenn:1 table:10 additionally:1 
learn:1 nature:1 robust:3 reasonably:1 nicolas:1 correlated:2 du:1 diag:2 aistats:4 pk:3 hyperparameters:5 paul:2 facilitating:1 sebasti:2 fig:3 roc:1 hsiang:1 wiley:1 tong:1 inferring:1 mkn:10 comput:1 jmlr:1 kanpur:1 rk:12 hannah:1 embed:1 yuhong:1 substance:1 sensing:5 dk:7 gupta:1 concern:1 evidence:1 workshop:1 maxim:1 magnitude:1 conditioned:4 justifies:1 chen:3 logarithmic:1 simply:2 cheaply:1 partially:1 bo:1 inderjit:1 springer:1 environmental:1 acm:1 minibatches:1 weston:1 conditional:5 goal:2 exposition:1 jeff:1 absence:1 feasible:1 infinite:1 except:2 reducing:1 sampler:5 principal:2 ece:1 ntest:1 xin:1 meaningful:2 indicating:1 support:1 cholesky:1 guo:1 meant:1 brevity:2 ongoing:1 evaluate:1 handling:1 |
5,270 | 5,771 | Closed-form Estimators for High-dimensional
Generalized Linear Models
Eunho Yang
IBM T.J. Watson Research Center
eunhyang@us.ibm.com
Aurélie C. Lozano
IBM T.J. Watson Research Center
aclozano@us.ibm.com
Pradeep Ravikumar
University of Texas at Austin
pradeepr@cs.utexas.edu
Abstract
We propose a class of closed-form estimators for GLMs under high-dimensional
sampling regimes. Our class of estimators is based on deriving variants of the vanilla unregularized MLE that are (a) well-defined even under
high-dimensional settings, and (b) available in closed form. We then perform
thresholding operations on this MLE variant to obtain our class of estimators. We
derive a unified statistical analysis of our class of estimators, and show that it enjoys strong statistical guarantees in both parameter error as well as variable selection, that surprisingly match those of the more complex regularized GLM MLEs,
even while our closed-form estimators are computationally much simpler. We derive instantiations of our class of closed-form estimators, as well as corollaries
of our general theorem, for the special cases of logistic, exponential and Poisson
regression models. We corroborate the surprising statistical and computational
performance of our class of estimators via extensive simulations.
1 Introduction
We consider the estimation of generalized linear models (GLMs) [1], under high-dimensional settings where the number of variables p may greatly exceed the number of observations n. GLMs are
a very general class of statistical models for the conditional distribution of a response variable given
a covariate vector, where the form of the conditional distribution is specified by any exponential
family distribution. Popular instances of GLMs include logistic regression, which is widely used
for binary classification, as well as Poisson regression, which, together with logistic regression, is
widely used in key tasks in genomics, such as classifying the status of patients based on genotype
data [2] and identifying genes that are predictive of survival [3], among others. Recently, GLMs
have also been used as a key tool in the construction of graphical models [4]. Overall, GLMs have
proven very useful in many modern applications involving prediction with high-dimensional data.
Accordingly, an important problem is the estimation of such GLMs under high-dimensional sampling regimes. Under such sampling regimes, it is now well-known that consistent estimators cannot be obtained unless low-dimensional structural constraints are imposed upon the underlying regression model parameter vector. Popular structural constraints include that of sparsity, which encourages parameter vectors supported with very few non-zero entries, group-sparse constraints, and
low-rank structure with matrix-structured parameters, among others. Several lines of work have
focused on consistent estimators for such structurally constrained high-dimensional GLMs. A popular instance, for the case of sparsity-structured GLMs, is the $\ell_1$ regularized maximum likelihood
estimator (MLE), which has been shown to have strong theoretical guarantees, ranging from risk
consistency [5], consistency in the $\ell_1$ and $\ell_2$-norm [6, 7, 8], and model selection consistency [9]. Another popular instance is the $\ell_1/\ell_q$ (for $q \geq 2$) regularized MLE for group-sparse-structured logistic regression, for which prediction consistency has been established [10]. All of these estimators
solve general non-linear convex programs involving non-smooth components due to regularization.
While a strong line of research has developed computationally efficient optimization methods for
solving these programs, these methods are iterative and their computational complexity scales polynomially with the number of variables and samples [10, 11, 12, 13], making them expensive for very
large-scale problems.
A key reason for the popularity of these iterative methods is that while the number of iterations
is some function of the required accuracy, each iteration itself consists of a small finite number
of steps, and can thus scale to very large problems. But what if we could construct estimators
that overall require only a very small finite number of steps, akin to a single iteration of popular
iterative optimization methods? The computational gains of such an approach would require that
the steps themselves be suitably constrained, and moreover that the steps could be suitably profiled
and optimized (e.g. efficient linear algebra routines implemented in BLAS libraries), a systematic
study of which we defer to future work. We are motivated on the other hand by the simplicity of
such a potential class of ?closed-form? estimators.
In this paper, we thus address the following question: ?Is it possible to obtain closed-form estimators
for GLMs under high-dimensional settings, that nonetheless have the sharp convergence rates of the
regularized convex programs and other estimators noted above?? This question was first considered
for linear regression models [14], and was answered in the affirmative. Our goal is to see whether
a positive response can be provided for the more complex statistical model class of GLMs as well.
In this paper we focus specifically on the class of sparse-structured GLMs, though our framework
should extend to more general structures as well.
One inkling of why closed-form estimation for high-dimensional GLMs is much trickier than for high-dimensional linear models is that, under small-sample settings, linear regression models do
have a statistically efficient closed-form estimator ? the ordinary least-squares (OLS) estimator,
which also serves as the MLE under Gaussian noise. For GLMs on the other hand, even under
small-sample settings, we do not yet have statistically efficient closed-form estimators. A classical
algorithm to solve for the MLE of logistic regression models for instance is the iteratively reweighted
least squares (IRLS) algorithm which, as its name suggests, is iterative and not available in closed form. Indeed, as we show in the sequel, developing our class of estimators for GLMs requires far
more advanced mathematical machinery (moment polytopes, and projections onto an interior subset
of these polytopes for instance) than the linear regression case.
Our starting point to devise a closed-form estimator for GLMs is to nonetheless revisit this classical
unregularized MLE estimator for GLMs from a statistical viewpoint, and investigate the reasons
why the estimator fails or is even ill-defined in the high-dimensional setting. These insights enable
us to propose variants of the MLE that are not only well-defined but can also be easily computed
in analytic-form. We provide a unified statistical analysis for our class of closed-form GLM estimators, and instantiate our theoretical results for the specific cases of logistic, exponential, and
Poisson regressions. Surprisingly, our results indicate that our estimators have comparable statistical guarantees to the regularized MLEs, in terms of both variable selection and parameter estimation
error, which we also corroborate via extensive simulations (which surprisingly even show a slight
statistical performance edge for our closed-form estimators). Moreover, our closed-form estimators
are much simpler and competitive computationally, as is corroborated by our extensive simulations.
With respect to the conditions we impose on the GLM models, we require that the population covariance matrix of our covariates be weakly sparse, which is a different condition than those typically
imposed for regularized MLE estimators; we discuss this further in Section 3.2. Overall, we hope
our simple class of statistically as well as computationally efficient closed-form estimators for GLMs
would open up the use of GLMs in large-scale machine learning applications even to lay users on the
one hand, and on the other hand, encourage the development of new classes of ?simple? estimators
with strong statistical guarantees extending the initial proposals in this paper.
2 Setup
We consider the class of generalized linear models (GLMs), where a response variable y \in \mathcal{Y}, conditioned on a covariate vector x \in R^p, follows an exponential family distribution:

P(y|x; \theta^*) = \exp\{ (h(y) + y\langle\theta^*, x\rangle - A(\langle\theta^*, x\rangle)) / c(\sigma) \}   (1)

where \sigma \in R, \sigma > 0, is a fixed and known scale parameter, \theta^* \in R^p is the GLM parameter of interest, and A(\langle\theta^*, x\rangle) is the log-partition function or log-normalization constant of the distribution. Our goal is to estimate the GLM parameter \theta^* given n i.i.d. samples \{(x^{(i)}, y^{(i)})\}_{i=1}^n. By properties of exponential families, the conditional moment of the response given the covariates can be written as \mu(\langle\theta^*, x\rangle) \equiv E(y|x; \theta^*) = A'(\langle\theta^*, x\rangle).
Examples. Popular instances of (1) include the standard linear regression model, the logistic regression model, and the Poisson regression model, among others. In the case of the linear regression model, we have a response variable y \in R, with the conditional distribution P(y|x, \theta^*) = \exp\{ (-y^2/2 + y\langle\theta^*, x\rangle - \langle\theta^*, x\rangle^2/2) / \sigma^2 \}, where the log-partition function (or log-normalization constant) A(a) of (1) in this specific case is given by A(a) = a^2/2.
A(a) of (1) in this specific case is given by A(a) = a2 /2. Another popular GLM instance is
?
the logistic regression
output variable y 2 Y ? { 1, 1},
? model ?P(y|x, ? ), for ?a categorical
?
?
exp yh? , xi log exp( h? , xi) + exp(h? , xi) where the log-partition function A(a) =
log exp( a) + exp(a) . The exponential regression model P(y|x, ?? ) in turn is given by:
exp yh?? , xi + log
h?? , xi . Here, the domain of response variable Y = R+ is the set
of non-negative real numbers (it is typically used to model time intervals between events for instance), and the log-partition function A(a) = log( a). Our final example is the Poisson regression model, P(y|x, ?? ): exp
log(y!) + yh?? , xi exp h?? , xi where the response variable is
count-valued with domain Y ? {0, 1, 2, ...}, and with log-partition function A(a) = exp(a).
Any exponential family distribution can be used to derive a canonical GLM regression model (1) of a response y conditioned on covariates x, by setting the canonical parameter of the exponential family distribution to \langle\theta^*, x\rangle. For the parameterization to be valid, the conditional density should be normalizable, so that A(\langle\theta^*, x\rangle) < +\infty.
High-dimensional Estimation. Suppose that we are given n covariate vectors x^{(i)} \in R^p, drawn i.i.d. from some distribution, and corresponding response variables y^{(i)} \in \mathcal{Y}, drawn from the distribution P(y|x^{(i)}, \theta^*) in (1). A key goal in statistical estimation is to estimate the parameters \theta^* \in R^p given just the samples \{(x^{(i)}, y^{(i)})\}_{i=1}^n. Such estimation becomes particularly challenging in a high-dimensional regime, where the dimension of the covariate vector p is potentially even larger than the number of samples n. In such high-dimensional regimes, it is well understood that structural constraints on \theta^* are necessary in order to find consistent estimators. In this paper, we focus on the structural constraint of element-wise sparsity, so that the number of non-zero elements in \theta^* is less than or equal to some value k much smaller than p: \|\theta^*\|_0 <= k.
Estimators: Regularized Convex Programs. The `1 norm is known to encourage the estimation of such sparse-structured parameters \theta^*. Accordingly, a popular class of M-estimators for sparse-structured GLM parameters is the `1-regularized maximum log-likelihood estimator for (1). Given n samples \{(x^{(i)}, y^{(i)})\}_{i=1}^n from P(y|x, \theta^*), the `1-regularized MLEs can be written as:

minimize_\theta  -\langle\theta, \frac{1}{n}\sum_{i=1}^n y^{(i)} x^{(i)}\rangle + \frac{1}{n}\sum_{i=1}^n A(\langle\theta, x^{(i)}\rangle) + \lambda_n \|\theta\|_1.

For notational simplicity, we collate the n observations in vector and matrix forms: we overload the notation y \in R^n to denote the vector of n responses, so that the i-th element of y, y_i, is y^{(i)}, and X \in R^{n x p} to denote the design matrix whose i-th row is [x^{(i)}]^T. With this notation we can rewrite the optimization problem characterizing the `1-regularized MLE simply as

minimize_\theta  -\frac{1}{n}\theta^T X^T y + \frac{1}{n} 1^T A(X\theta) + \lambda_n \|\theta\|_1,

where we overload the notation A(\cdot) for an input vector \eta \in R^n to denote A(\eta) \equiv (A(\eta_1), A(\eta_2), \ldots, A(\eta_n))^T, and 1 \equiv (1, \ldots, 1)^T \in R^n.
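In this matrix notation the regularized objective is direct to evaluate; a minimal sketch (the function name and signature are ours):

import numpy as np

def l1_mle_objective(theta, X, y, A, lam):
    # -(1/n) theta^T X^T y + (1/n) 1^T A(X theta) + lam * ||theta||_1
    n = X.shape[0]
    return (-theta @ (X.T @ y) / n
            + A(X @ theta).sum() / n
            + lam * np.abs(theta).sum())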
3 Closed-form Estimators for High-dimensional GLMs
The goal of this paper is to derive a general class of closed-form estimators for high-dimensional
GLMs, in contrast to solving huge, non-differentiable `1 regularized optimization problems. Before
introducing our class of such closed-form estimators, we first introduce some notation.
For any u \in R^p, we use [S_\lambda(u)]_i = sign(u_i) max(|u_i| - \lambda, 0) to denote the element-wise soft-thresholding operator, with thresholding parameter \lambda. For any given matrix M \in R^{p x p}, we denote by T_\nu(M): R^{p x p} -> R^{p x p} a family of matrix thresholding operators that are defined point-wise, so that they can be written as [T_\nu(M)]_{ij} := \rho_\nu(M_{ij}), for any scalar thresholding operator \rho_\nu(\cdot) that satisfies the following conditions for any input a \in R: (a) |\rho_\nu(a)| <= |a|; (b) |\rho_\nu(a)| = 0 for |a| <= \nu; (c) |\rho_\nu(a) - a| <= \nu. The standard soft-thresholding and hard-thresholding operators are both pointwise operators that satisfy these properties. See [15] for further discussion of such pointwise matrix thresholding operators.
For any \eta \in R^n, we let \nabla A(\eta) denote the vector of element-wise gradients: \nabla A(\eta) \equiv (A'(\eta_1), A'(\eta_2), \ldots, A'(\eta_n))^T. We assume that the exponential family underlying the GLM is minimal, so that this map is invertible, and so that for any \mu \in R^n in the range of \nabla A(\cdot), we can denote by [\nabla A]^{-1}(\mu) the element-wise inverse map: ((A')^{-1}(\mu_1), (A')^{-1}(\mu_2), \ldots, (A')^{-1}(\mu_n))^T.
Consider the response moment polytope M := \{\mu : \mu = E_p[y], for some distribution p over y \in \mathcal{Y}\}, and let M^o denote the interior of M. Our closed-form estimator will use a carefully selected subset

\bar{M} \subseteq M^o.   (2)

Denote the projection of a response variable y \in \mathcal{Y} onto this subset as \pi_{\bar{M}}(y) = arg min_{\mu \in \bar{M}} |y - \mu|, where the subset \bar{M} is selected so that the projection step is always well-defined and the minimum exists. Given a vector y \in \mathcal{Y}^n, we denote the vector of element-wise projections of the entries of y as \pi_{\bar{M}}(y), so that

[\pi_{\bar{M}}(y)]_i := \pi_{\bar{M}}(y_i).   (3)
As the conditions underlying our theorem will make clear, we will need the operator [\nabla A]^{-1}(\cdot) defined above to be both well-defined and Lipschitz on the subset \bar{M} of the interior of the response moment polytope. In later sections, we will show how to carefully construct such a subset \bar{M} for different GLM models.
We now have the machinery to describe our class of closed-form estimators:

\hat{\theta}_{Elem} = S_{\lambda_n}\left( \left[ T_\nu\left( \frac{X^T X}{n} \right) \right]^{-1} \frac{X^T [\nabla A]^{-1}(\pi_{\bar{M}}(y))}{n} \right),   (4)
where the various mathematical terms were defined above. It can be immediately seen that the
estimator is available in closed-form. In a later section, we will see instantiations of this class of
estimators for various specific GLM models, where we will see that these estimators take very
simple forms. Before doing so, we first describe some insights that led to our particular construction
of the high-dimensional GLM estimator above.
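Concretely, the estimator in (4) amounts to a few matrix operations. The sketch below is our own assembly (reusing soft_threshold and matrix_threshold from the snippet above); project and inv_link stand for \pi_{\bar{M}}(\cdot) and [\nabla A]^{-1}(\cdot) and are supplied per GLM family:

import numpy as np

def elem_glm(X, y, lam, nu, project, inv_link):
    # theta_hat = S_lam( [T_nu(X^T X / n)]^{-1} X^T inv_link(project(y)) / n ),  eq. (4)
    n = X.shape[0]
    cov = matrix_threshold(X.T @ X / n, nu)   # thresholded sample covariance
    z = X.T @ inv_link(project(y)) / n        # surrogate least-squares response
    return soft_threshold(np.linalg.solve(cov, z), lam)

For logistic regression, for instance, one would pass project = lambda y: y * (1 - eps) and inv_link = lambda m: 0.5 * np.log((1 + m) / (1 - m)), with eps the projection margin derived in Section 4.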
3.1 Insights Behind Construction of Our Closed-Form Estimator
We first revisit the classical unregularized MLE for GLMs:

\hat{\theta} \in arg min_\theta \left\{ -\frac{1}{n}\theta^T X^T y + \frac{1}{n} 1^T A(X\theta) \right\}.

Note that this optimization problem does not have a unique minimum in general, especially under high-dimensional sample settings where p > n. Nonetheless, it is instructive to study why this unregularized MLE is either ill-suited or even ill-defined under high-dimensional settings. The stationary condition of the unregularized MLE optimization problem can be written as:

X^T y = X^T \nabla A(X \hat{\theta}).   (5)

There are two main caveats to solving for a unique \hat{\theta} satisfying this stationary condition, which we clarify below.
(Mapping to mean parameters) In a high-dimensional sampling regime where p >= n, (5) can be seen to reduce to y = \nabla A(X\hat{\theta}) (so long as X has rank n). This then suggests solving for X\hat{\theta} = [\nabla A]^{-1}(y), where we recall the definition of the operator \nabla A(\cdot) in terms of element-wise operations involving A'(\cdot). The caveat however is that A'(\cdot) is onto only the interior M^o of the response moment polytope [16], so that [A'(\cdot)]^{-1} is well-defined only when given \mu \in M^o. When entries of the sample response vector y lie outside of M^o, as will typically be the case and as we will illustrate for multiple instances of GLM models in later sections, the inverse mapping would not be well-defined. We thus first project the sample response vector y onto \bar{M} \subseteq M^o to obtain \pi_{\bar{M}}(y) as defined in (3). Armed with this approximation, we then consider the more amenable \pi_{\bar{M}}(y) \approx \nabla A(X\hat{\theta}), instead of the original stationary condition in (5).
(Sample covariance) We thus now have the approximate characterization of the MLE as X\hat{\theta} \approx [\nabla A]^{-1}(\pi_{\bar{M}}(y)). This then suggests solving for an approximate MLE \hat{\theta} via least squares as \hat{\theta} = [X^T X]^{-1} X^T [\nabla A]^{-1}(\pi_{\bar{M}}(y)). The high-dimensional regime with p > n poses a caveat here, since the sample covariance matrix (X^T X)/n would then be rank-deficient, and hence not invertible. Our approach is to instead use the thresholded sample covariance matrix T_\nu(X^T X / n) defined in the previous subsection, which can be shown to be invertible and consistent for the population covariance matrix \Sigma with high probability [15, 17]. In particular, recent work [15] has shown that the thresholded sample covariance T_\nu(X^T X / n) is consistent with respect to the spectral norm, with convergence rate \| T_\nu(X^T X / n) - \Sigma \|_{op} <= O(c_0 \sqrt{\log p / n}), under some mild conditions detailed in our main theorem. Plugging in this thresholded sample covariance matrix to get an approximate least-squares solution for the GLM parameters \theta, and then performing soft-thresholding, precisely yields our closed-form estimator in (4).
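A toy numerical check of this caveat (ours, not from the paper): with p > n the raw sample covariance is singular, whereas its thresholded version is typically invertible when the population covariance is sparse.

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))            # population covariance Sigma = I (sparse)
S = X.T @ X / n
print(np.linalg.matrix_rank(S))            # at most n = 200 < p = 500: singular
nu = 2 * np.sqrt(np.log(p) / n)
S_nu = np.sign(S) * np.maximum(np.abs(S) - nu, 0.0)   # soft matrix thresholding
print(np.linalg.cond(S_nu))                # finite condition number: invertible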
Our class of closed-form estimators in (4) can thus be viewed as surgical approximations to the MLE
so that it is well-defined in high-dimensional settings, as well as being available in closed-form. But
would such an approximation actually yield rigorous consistency guarantees? Surprisingly, as we
show in the next section, not only is our class of estimators consistent, but in our corollaries we
show that their statistical guarantees are comparable to those of state-of-the-art iterative methods such as regularized MLEs.
We note that our class of closed-form estimators in (4) can also be written in an equivalent form that
is more amenable to analysis:
minimize_\theta \|\theta\|_1
s.t. \left\| \theta - \left[ T_\nu\left( \frac{X^T X}{n} \right) \right]^{-1} \frac{X^T [\nabla A]^{-1}(\pi_{\bar{M}}(y))}{n} \right\|_\infty <= \lambda_n.   (6)
The equivalence between (4) and (6) easily follows from the fact that the optimization problem (6)
is decomposable into independent element-wise sub-problems, and each sub-problem corresponds
to soft-thresholding. It can be seen that this form is also amenable to extending the framework in
this paper to structures beyond sparsity, by substituting in alternative regularizers. Due to space
constraints, the computational complexity is discussed in detail in the Appendix.
3.2 Statistical Guarantees
In this subsection, we provide a unified statistical analysis for the class of estimators (4) under the following standard conditions, namely sparse \theta^* and sub-Gaussian design X:

(C1) The parameter \theta^* in (1) is exactly sparse with k non-zero elements indexed by the support set S, so that \theta^*_{S^c} = 0.

(C2) Each row of the design matrix X \in R^{n x p} is i.i.d. sampled from a zero-mean distribution with covariance matrix \Sigma such that, for any v \in R^p, the variable \langle v, X_i \rangle is sub-Gaussian with parameter at most \sigma_u \|v\|_2, for every row X_i of X.
Our next assumption is on the covariance matrix of the covariate random vector:
(C3) The covariance matrix \Sigma of X satisfies \|\Sigma w\|_\infty >= \kappa_\ell \|w\|_\infty for all w \in R^p, with fixed constant \kappa_\ell > 0. Moreover, \Sigma is approximately sparse, along the lines of [17]: for some positive constant D, \Sigma_{ii} <= D for all diagonal entries, and moreover, for some 0 <= q < 1 and c_0, max_i \sum_{j=1}^p |\Sigma_{ij}|^q <= c_0. If q = 0, then this condition is equivalent to \Sigma being sparse.
We also introduce some notation used in the following theorem. Under condition (C2), we have that, with high probability, |\langle \theta^*, x^{(i)} \rangle| <= 2\sigma_u \|\theta^*\|_2 \sqrt{\log n} for all samples i = 1, \ldots, n. Let \tau^* := 2\sigma_u \|\theta^*\|_2 \sqrt{\log n}. We then let M_0 be the subset of M such that

M_0 := \{ \mu : \mu = A'(\eta), \ \eta \in [-\tau^*, \tau^*] \}.   (7)

We also define \kappa_{u,A} and \kappa_{\ell,A} as upper bounds on A''(\cdot) and on the derivative of the inverse map (A')^{-1}(\cdot), respectively:

max_{\eta \in [-\tau^*, \tau^*]} |A''(\eta)| <= \kappa_{u,A},   max_{a \in M_0 \cup \bar{M}} |((A')^{-1})'(a)| <= \kappa_{\ell,A}.   (8)
Armed with these conditions and notations, we derive our main theorem:
Theorem 1. Consider any generalized linear model in (1) where all the conditions (C1), (C2) and (C3) hold. Now, suppose that we solve the estimation problem (4) setting the thresholding parameter \nu = C_1 \sqrt{\log p' / n}, where C_1 := 16 (max_j \Sigma_{jj}) \sqrt{10\tau} for any constant \tau > 2, and p' := max\{n, p\}. Furthermore, suppose also that we set the constraint bound \lambda_n as C_2 \sqrt{\log p' / n} + E, where C_2 := \frac{2}{\kappa_\ell}\left( \sigma_u \kappa_{\ell,A} \sqrt{2\kappa_{u,A}} + C_1 \|\theta^*\|_1 \right) and where E depends on the approximation error induced by the projection (3), and is defined as E := max_{i=1,\ldots,n} \left| y^{(i)} - [\pi_{\bar{M}}(y)]_i \right| \cdot \frac{4\sigma_u \kappa_{\ell,A}}{\kappa_\ell} \sqrt{\log p' / n}.

(A) Then, as long as n > \left( \frac{2 c_1 c_0}{\kappa_\ell} \right)^{2/(1-q)} \log p', where c_1 is a constant depending only on \tau and max_i \Sigma_{ii}, any optimal solution \hat{\theta} of (4) is guaranteed to be consistent:

\|\hat{\theta} - \theta^*\|_\infty <= 2 \left( C_2 \sqrt{\log p'/n} + E \right),
\|\hat{\theta} - \theta^*\|_2 <= 4\sqrt{k} \left( C_2 \sqrt{\log p'/n} + E \right),
\|\hat{\theta} - \theta^*\|_1 <= 8k \left( C_2 \sqrt{\log p'/n} + E \right).

(B) Moreover, the support set of the estimate \hat{\theta} correctly excludes all true zero values of \theta^*. Moreover, when min_{s \in S} |\theta^*_s| >= 3\lambda_n, it correctly includes all non-zero true supports of \theta^*, with probability at least 1 - c p'^{-c'} for some universal constants c, c' > 0 depending on \tau and \sigma_u.
Remark 1. While our class of closed-form estimators and analyses consider sparse-structured parameters, these can be seamlessly extended to more general structures (such as group sparsity and
low rank), using appropriate thresholding functions.
Remark 2. The condition (C3) required in Theorem 1 is different from (and possibly stronger than) the restricted strong convexity [8] required for the `2 error bound of the `1-regularized MLE. A key facet of our analysis under Condition (C3), however, is that it provides much simpler and more clearly identified constants in our non-asymptotic error bounds. Deriving constant factors in the analysis
of the `1 -regularized MLE on the other hand, with its restricted strong convexity condition, involves
many probabilistic statements, and is non-trivial, as shown in [8].
Another key facet of our analysis in Theorem 1 is that it also provides an `\infty error bound, and guarantees the sparsistency of our closed-form estimator. For `1-regularized MLEs, this requires a separate sparsistency analysis. In the case of the simplest standard linear regression models, [18] showed that the incoherence condition |||\Sigma_{S^c S} \Sigma_{SS}^{-1}|||_\infty < 1 is required for sparsistency, where ||| \cdot |||_\infty is the maximum absolute row sum. As discussed in [18], instances of such incoherent covariance matrices \Sigma include the identity and Toeplitz matrices; these matrices can be seen to also satisfy our condition (C3). On the other hand, not all matrices that satisfy our condition (C3) need satisfy the stringent incoherence condition in turn. For example, consider \Sigma where \Sigma_{SS} = 0.95 I_3 + 0.05 \cdot 1_{3x3} for a matrix 1 of ones, \Sigma_{SS^c} is all zeros except that its last column is 0.4 \cdot 1_{3x1}, and \Sigma_{S^c S^c} = I_{(p-3)x(p-3)}. Then this positive definite \Sigma can be seen to satisfy our Condition (C3), since each row has only 4 non-zeros. However, |||\Sigma_{S^c S} \Sigma_{SS}^{-1}|||_\infty is equal to 1.0909, larger than 1, and consequently the incoherence condition required for the Lasso will not be satisfied. We defer relaxing our condition (C3) further, as well as a deeper investigation of all the above conditions, to future work.
Remark 3. The constant C_2 in the statement depends on \|\theta^*\|_1, which in the worst case, where only \|\theta^*\|_2 is bounded, may scale with \sqrt{k}. On the other hand, our theorem does not require an explicit sample complexity condition that n be larger than some function of k, while the analysis of `1-regularized MLEs does additionally require that n >= c k \log p for some constant c. In our experiments, we verify that our closed-form estimators outperform the `1-regularized MLEs even when k is fairly large (for instance, when (n, p, k) = (5000, 10^4, 1000)).
In order to apply Theorem 1 to a specific instance of GLMs, we need to specify the quantities in (8), as well as carefully construct a subset \bar{M} of the interior of the response moment polytope. In the case of the simplest linear models described in Section 2, we have the identity mapping \mu = A'(\eta) = \eta. The inequalities in (8) can thus be seen to be satisfied with \kappa_{\ell,A} = \kappa_{u,A} = 1. Moreover, we can set \bar{M} := M^o = R, so that \pi_{\bar{M}}(y) = y, and trivially recover the previous results in [14] as a special case. In the following sections, we will derive the consequences of our framework for the more complex instances of logistic and Poisson regression models, which are also important members of the GLM family.
4 Key Corollaries
In order to derive corollaries of our main Theorem 1, we need to specify the response polytope
subsets \bar{M} and M_0 in (2) and (7) respectively, as well as bound the two quantities \kappa_{\ell,A} and \kappa_{u,A} in (8).
Logistic regression models. The exponential family log-partition function of logistic regression models described in Section 2 can be seen to be A(\eta) = \log(\exp(-\eta) + \exp(\eta)). Consequently, its double derivative A''(\eta) = 4\exp(2\eta)/(\exp(2\eta) + 1)^2 <= 1 for any \eta, so that (8) holds with \kappa_{u,A} = 1. The response moment polytope for the binary response variable y \in \mathcal{Y} \equiv \{-1, 1\} is the interval M = [-1, 1], so that its interior is given by M^o = (-1, 1). For the subset of the interior, we define \bar{M} = [-1 + \epsilon, 1 - \epsilon], for some 0 < \epsilon < 1. At the same time, the forward mapping is given by A'(\eta) = (\exp(2\eta) - 1)/(\exp(2\eta) + 1), and hence M_0 becomes [-\frac{a-1}{a+1}, \frac{a-1}{a+1}] where a := n^{4\sigma_u \|\theta^*\|_2 / \sqrt{\log n}}. The inverse mapping of logistic models is given by (A')^{-1}(\mu) = \frac{1}{2}\log\frac{1+\mu}{1-\mu}, and given \bar{M} and M_0, it can be seen that (A')^{-1}(\mu) is Lipschitz on \bar{M} \cup M_0 with constant less than \kappa_{\ell,A} := max\{ \frac{1}{2} + \frac{1}{2} n^{4\sigma_u \|\theta^*\|_2 / \sqrt{\log n}}, 1/\epsilon \} in (8). Note that with this setting of the subset \bar{M}, we have that max_{i=1,\ldots,n} |y^{(i)} - [\pi_{\bar{M}}(y)]_i| = \epsilon, and moreover \pi_{\bar{M}}(y_i) = y_i (1 - \epsilon), which we will use in the corollary below.
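In code, the projection and inverse mapping just derived are one line each (a sketch with our own names):

import numpy as np

def project_logistic(y, eps=1e-4):
    # pi_Mbar for y in {-1, +1}: maps the labels onto [-1 + eps, 1 - eps]
    return y * (1.0 - eps)

def inv_link_logistic(mu):
    # (A')^{-1}(mu) = 0.5 * log((1 + mu) / (1 - mu)), the inverse of tanh
    return 0.5 * np.log((1.0 + mu) / (1.0 - mu))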
Poisson regression models. Another important instance of GLMs is the Poisson regression model, which is becoming increasingly relevant in modern big-data settings with varied multivariate count data. For the Poisson regression model, the double derivative of A(\eta) is not uniformly upper bounded: A''(u) = \exp(u). Denoting \tau^* := 2\sigma_u \|\theta^*\|_2 \sqrt{\log n}, we then have that for any \eta in [-\tau^*, \tau^*], A''(\eta) <= \exp(2\sigma_u \|\theta^*\|_2 \sqrt{\log n}) = n^{2\sigma_u \|\theta^*\|_2 / \sqrt{\log n}}, so that (8) is satisfied with \kappa_{u,A} = n^{2\sigma_u \|\theta^*\|_2 / \sqrt{\log n}}. The response moment polytope for the count-valued response variable y \in \mathcal{Y} \equiv \{0, 1, \ldots\} is given by M = [0, \infty), so that its interior is given by M^o = (0, \infty). For the subset of the interior, we define \bar{M} = [\epsilon, \infty) for some \epsilon such that 0 < \epsilon < 1. The forward mapping in this case is simply given by A'(\eta) = \exp(\eta), and M_0 in (7) becomes [a^{-1}, a] where a is n^{2\sigma_u \|\theta^*\|_2 / \sqrt{\log n}}. The inverse mapping for the Poisson regression model then is given by (A')^{-1}(\mu) = \log(\mu), which can be seen to be Lipschitz on \bar{M} with constant \kappa_{\ell,A} = max\{ n^{2\sigma_u \|\theta^*\|_2 / \sqrt{\log n}}, 1/\epsilon \} in (8). With this setting of \bar{M}, it can be seen that the projection operator is given by \pi_{\bar{M}}(y_i) = I(y_i = 0)\epsilon + I(y_i != 0) y_i.
Now, we are ready to recover the error bounds, as a corollary of Theorem 1, for logistic regression
and Poisson models when condition (C2) holds:
Corollary 1. Consider any logistic regression model or Poisson regression model where all conditions in Theorem 1 hold. Suppose that we solve our closed-form estimation problem (4), setting the thresholding parameter \nu = C_1 \sqrt{\log p' / n} and the constraint bound

\lambda_n = \frac{2}{\kappa_\ell} \frac{\sqrt{c \log p'}}{\sqrt{n}\,(1/2 - c_0/\sqrt{\log n})} + C_1 \|\theta^*\|_1 \sqrt{\frac{\log p'}{n}},

where c and c_0 are constants depending only on \sigma_u, \|\theta^*\|_2 and \epsilon. Then the optimal solution \hat{\theta} of (4) is guaranteed to be consistent:

\|\hat{\theta} - \theta^*\|_\infty <= \frac{4}{\kappa_\ell} \left( \frac{\sqrt{c \log p'}}{\sqrt{n}\,(1/2 - c_0/\sqrt{\log n})} + C_1 \|\theta^*\|_1 \sqrt{\frac{\log p'}{n}} \right),

\|\hat{\theta} - \theta^*\|_2 <= \frac{8\sqrt{k}}{\kappa_\ell} \left( \frac{\sqrt{c \log p'}}{\sqrt{n}\,(1/2 - c_0/\sqrt{\log n})} + C_1 \|\theta^*\|_1 \sqrt{\frac{\log p'}{n}} \right),

\|\hat{\theta} - \theta^*\|_1 <= \frac{16k}{\kappa_\ell} \left( \frac{\sqrt{c \log p'}}{\sqrt{n}\,(1/2 - c_0/\sqrt{\log n})} + C_1 \|\theta^*\|_1 \sqrt{\frac{\log p'}{n}} \right),

with probability at least 1 - c_1 p'^{-c_1'}, for some universal constants c_1, c_1' > 0 and p' := max\{n, p\}. Moreover, when min_{s \in S} |\theta^*_s| >= \frac{6}{\kappa_\ell} \left( \frac{\sqrt{c \log p'}}{\sqrt{n}\,(1/2 - c_0/\sqrt{\log n})} + C_1 \|\theta^*\|_1 \sqrt{\frac{\log p'}{n}} \right), \hat{\theta} is sparsistent.

Table 1: Comparisons on simulated datasets when parameters are tuned to minimize `2 error on independent validation sets.

(n, p, k)                       METHOD     TP      FP      `2 ERROR  TIME
(n = 2000, p = 5000, k = 10)    `1 MLE1    1       0.1094  4.5450    63.9
                                `1 MLE2    1       0.0873  4.0721    133.1
                                `1 MLE3    1       0.1000  3.4846    348.3
                                Elem       0.9900  0.0184  2.7375    26.5
(n = 4000, p = 5000, k = 10)    `1 MLE1    1       0.1626  4.2132    155.5
                                `1 MLE2    1       0.1327  3.6569    296.8
                                `1 MLE3    1       0.1112  2.9681    829.3
                                Elem       1       0.0069  2.6213    40.2
(n = 5000, p = 10^4, k = 100)   `1 MLE1    1       0.1301  18.9079   500.1
                                `1 MLE2    1       0.1695  18.5567   983.8
                                `1 MLE3    1       0.2001  18.2351   2353.3
                                Elem       0.9975  0.3622  16.4148   151.8
(n = 5000, p = 10^4, k = 1000)  `1 MLE1    0.7990  1       65.1895   520.7
                                `1 MLE2    0.7935  1       65.1165   1005.8
                                `1 MLE3    0.7965  1       65.1024   2560.1
                                Elem       0.8295  1       63.2359   152.1
(n = 8000, p = 10^4, k = 100)   `1 MLE1    1       0.1904  18.6186   810.6
                                `1 MLE2    1       0.2181  18.1806   1586.2
                                `1 MLE3    1       0.2364  17.6762   3568.9
                                Elem       0.9450  0.0359  11.9881   221.1
(n = 8000, p = 10^4, k = 1000)  `1 MLE1    0.7965  1       65.0714   809.5
                                `1 MLE2    0.7900  1       64.9650   1652.8
                                `1 MLE3    0.7865  1       64.8857   4196.6
                                Elem       0.7015  0.5103  61.0532   219.4
Remarkably, the rates in Corollary 1 are asymptotically comparable to those for the `1 -regularized
MLE (see for instance Theorem 4.2 and Corollary 4.4 in [7]). In Appendix A, we place slightly
more stringent condition than (C2) and guarantee error bounds with faster convergence rates.
5 Experiments
We corroborate the performance of our elementary estimators on simulated data over varied regimes
of sample size n, number of covariates p, and sparsity size k. We consider two popular instances
of GLMs, logistic and Poisson regression models. We compare against standard `1 regularized
MLE estimators with iteration bounds of 50, 100, and 500, denoted by `1 MLE1 , `1 MLE2 and `1
MLE3 respectively. We construct the n x p design matrices X by sampling the rows independently from N(0, \Sigma) where \Sigma_{i,j} = 0.5^{|i-j|}. For each simulation, the entries of the true model coefficient
vector ?? are set to be 0 everywhere, except for a randomly chosen subset of k coefficients, which
are chosen independently and uniformly in the interval (1, 3). We report results averaged over 100
independent trials. Noting that our theoretical results were not sensitive to the setting of \epsilon in \pi_{\bar{M}}(y), we simply report the results for \epsilon = 10^{-4} across all experiments.
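The simulated designs can be reproduced along the following lines (the helper name, seed and the clipping of the Poisson rate are our own choices):

import numpy as np

def make_glm_data(n, p, k, family="logistic", seed=0):
    rng = np.random.default_rng(seed)
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    theta = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    theta[support] = rng.uniform(1.0, 3.0, size=k)      # uniform on (1, 3)
    eta = X @ theta
    if family == "logistic":                            # y in {-1, +1}
        y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * eta)), 1, -1)
    else:                                               # Poisson counts
        y = rng.poisson(np.exp(np.clip(eta, None, 20.0)))
    return X, y, theta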
While our theorem specified an optimal setting of the regularization parameters \lambda_n and \nu, this optimal setting depends on unknown model parameters. Thus, as is standard with high-dimensional regularized estimators, we set the tuning parameters \lambda_n = c\sqrt{\log p/n} and \nu = c_0\sqrt{\log p/n} in a holdout-validated fashion, finding the parameter that minimizes the `2 error on an independent validation set.
Detailed experimental setup is described in the appendix.
Table 1 summarizes the performances of `1 MLE using 3 different stopping criteria and Elem-GLM.
Besides `2 errors, the target tuning metric, we also provide the true and false positives for the support
set recovery task on the new test set where the best tuning parameters are used. The computation
times, in seconds, indicate the overall training computation time summed over the whole parameter-tuning process. As we can see from our experiments, with respect to both statistical and computational performance our closed-form estimators are quite competitive with the classical `1-regularized MLE estimators, and in certain cases outperform them. Note that `1 MLE1 stops prematurely after only 50 iterations, so that its training computation time is sometimes comparable to that of the closed-form estimator. However, its statistical performance as measured by `2 error is much inferior to the other `1 MLEs with more iterations, as well as to the Elem-GLM estimator. Due to the space limit, ROC curves,
results for other settings of p and more experiments on real datasets are presented in the appendix.
References
[1] P. McCullagh and J.A. Nelder. Generalized linear models. Monographs on statistics and applied probability 37. Chapman and Hall/CRC, 1989.
[2] G. E. Hoffman, B. A. Logsdon, and J. G. Mezey. Puma: A unified framework for penalized multiple
regression analysis of gwas data. Plos computational Biology, 2013.
[3] D. Witten and R. Tibshirani. Survival analysis with high-dimensional covariates. Stat Methods Med Res.,
19:29?51, 2010.
[4] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In Neur.
Info. Proc. Sys. (NIPS), 25, 2012.
[5] S. Van de Geer. High-dimensional generalized linear models and the lasso. Annals of Statistics, 36(2):
614?645, 2008.
[6] F. Bach. Self-concordant analysis for logistic regression. Electron. J. Stat., 4:384?414, 2010.
[7] S. M. Kakade, O. Shamir, K. Sridharan, and A. Tewari. Learning exponential families in high-dimensions:
Strong convexity and sparsity. In Inter. Conf. on AI and Statistics (AISTATS), 2010.
[8] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional
analysis of M-estimators with decomposable regularizers. Arxiv preprint arXiv:1010.2731v1, 2010.
[9] F. Bunea. Honest variable selection in linear and logistic regression models via l1 and l1 + l2 penalization.
Electron. J. Stat., 2:1153?1194, 2008.
[10] L. Meier, S. Van de Geer, and P. B?uhlmann. The group lasso for logistic regression. Journal of the Royal
Statistical Society, Series B, 70:53?71, 2008.
[11] Y. Kim, J. Kim, and Y. Kim. Blockwise sparse regression. Statistica Sinica, 16:375?390, 2006.
[12] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1?22, 2010.
[13] K. Koh, S. J. Kim, and S. Boyd. An interior-point method for large-scale `1 -regularized logistic regression. Jour. Mach. Learning Res., 3:1519?1555, 2007.
[14] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for high-dimensional linear regression.
In International Conference on Machine learning (ICML), 31, 2014.
[15] A. J. Rothman, E. Levina, and J. Zhu. Generalized thresholding of large covariance matrices. Journal of
the American Statistical Association (Theory and Methods), 104:177?186, 2009.
[16] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families and variational inference.
Foundations and Trends in Machine Learning, 1(1?2):1?305, December 2008.
[17] P. J. Bickel and E. Levina. Covariance regularization by thresholding. Annals of Statistics, 36(6):2577?
2604, 2008.
[18] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using `1 -constrained
quadratic programming (Lasso). IEEE Trans. Information Theory, 55:2183?2202, May 2009.
[19] Daniel A. Spielman and Shang-Hua Teng. Solving sparse, symmetric, diagonally-dominant linear systems in time O(m^{1.31}). In 44th Symposium on Foundations of Computer Science (FOCS 2003), 11-14 October 2003, Cambridge, MA, USA, Proceedings, pages 416-427, 2003.
[20] Michael B. Cohen, Rasmus Kyng, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao, and Shen Chen Xu. Solving SDD linear systems in nearly m log^{1/2} n time. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, STOC '14, pages 343-352. ACM, 2014.
[21] Daniel A. Spielman and Shang-Hua Teng. Nearly linear time algorithms for preconditioning and solving
symmetric, diagonally dominant linear systems. SIAM J. Matrix Analysis Applications, 35(3):835?885,
2014.
[22] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by
minimizing `1 -penalized log-determinant divergence. Electronic Journal of Statistics, 5:935?980, 2011.
[23] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for sparse covariance matrices and other
structured moments. In International Conference on Machine learning (ICML), 31, 2014.
[24] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for graphical models. In Neur. Info.
Proc. Sys. (NIPS), 27, 2014.
Processes with Nonparametric Kernels
Felipe Tobar
ftobar@dim.uchile.cl
Center for Mathematical Modeling
Universidad de Chile
Thang D. Bui
tdb40@cam.ac.uk
Department of Engineering
University of Cambridge
Richard E. Turner
ret26@cam.ac.uk
Department of Engineering
University of Cambridge
Abstract
We introduce the Gaussian Process Convolution Model (GPCM), a two-stage nonparametric generative procedure to model stationary signals as the convolution
between a continuous-time white-noise process and a continuous-time linear filter
drawn from a Gaussian process. The GPCM is a continuous-time nonparametric-window moving average process and, conditionally, is itself a Gaussian process with a nonparametric kernel defined in a probabilistic fashion. The generative model can be equivalently considered in the frequency domain, where
the power spectral density of the signal is specified using a Gaussian process.
One of the main contributions of the paper is to develop a novel variational free-energy approach based on inter-domain inducing variables that efficiently learns
the continuous-time linear filter and infers the driving white-noise process. In
turn, this scheme provides closed-form probabilistic estimates of the covariance
kernel and the noise-free signal both in denoising and prediction scenarios. Additionally, the variational inference procedure provides closed-form expressions for
the approximate posterior of the spectral density given the observed data, leading
to new Bayesian nonparametric approaches to spectrum estimation. The proposed
GPCM is validated using synthetic and real-world signals.
1 Introduction
Gaussian process (GP) regression models have become a standard tool in Bayesian signal estimation
due to their expressiveness, robustness to overfitting and tractability [1]. GP regression begins with
a prior distribution over functions that encapsulates a priori assumptions, such as smoothness, stationarity or periodicity. The prior is then updated by incorporating information from observed data
points via their likelihood functions. The result is a posterior distribution over functions that can be
used for prediction. Critically for this work, the posterior and therefore the resultant predictions, is
sensitive to the choice of prior distribution. The form of the prior covariance function (or kernel) of
the GP is arguably the central modelling choice. Employing a simple form of covariance will limit
the GP's capacity to generalise. The ubiquitous radial basis function or squared exponential kernel,
for example, implies prediction is just a local smoothing operation [2, 3]. Expressive kernels are
needed [4, 5], but although kernel design is widely acknowledged as pivotal, it typically proceeds
via a ?black art? in which a particular functional form is hand-crafted using intuitions about the
application domain to build a kernel using simpler primitive kernels as building blocks (e.g. [6]).
Recently, some sophisticated automated approaches to kernel design have been developed that construct kernel mixtures on the basis of incorporating different measures of similarity [7, 8], or more
generally by both adding and multiplying kernels, thus mimicking the way in which a human would
search for the best kernel [5]. Alternatively, a flexible parametric kernel can be used as in the case
of the spectral mixture kernels, where the power spectral density (PSD) of the GP is parametrised
by a mixture of Gaussians [4].
We see two problems with this general approach: The first is that computational tractability limits the
complexity of the kernels that can be designed in this way. Such constraints are problematic when
searching over kernel combinations and to a lesser extent when fitting potentially large numbers of
kernel hyperparameters. Indeed, many naturally occurring signals contain more complex structure
than can comfortably be entertained using current methods, time series with complex spectra like
sounds being a case in point [9, 10]. The second limitation is that hyperparameters of the kernel
are typically fit by maximisation of the model marginal likelihood. For complex kernels with large
numbers of hyperparameters, this can easily result in overfitting rearing its ugly head once more (see
sec. 4.2).
This paper attempts to remedy the existing limitations of GPs in the time series setting using the
same rationale by which GPs were originally developed. That is, kernels themselves are treated
nonparametrically to enable flexible forms whose complexity can grow as more structure is revealed
in the data. Moreover, approximate Bayesian inference is used for estimation, thus side-stepping
problems with model structure search and protecting against overfitting. These benefits are achieved
by modelling time series as the output of a linear and time-invariant system defined by a convolution
between a white-noise process and a continuous-time linear filter. By considering the filter to be
drawn from a GP, the expected second-order statistics (and, as a consequence, the spectral density)
of the output signal are defined in a nonparametric fashion. The next section presents the proposed
model, its relationship to GPs and how to sample from it. In Section 3 we develop an analytic
approximate inference method using state-of-the-art variational free-energy approximations for performing inference and learning. Section 4 shows simulations using both synthetic and real-world
datasets. Finally, Section 5 presents a discussion of our findings.
2 Regression model: Convolving a linear filter and a white-noise process
We introduce the Gaussian Process Convolution Model (GPCM), which can be viewed as constructing a distribution over functions f(t) using a two-stage generative model. In the first stage, a continuous filter function h(t): R -> R is drawn from a GP with covariance function K_h(t_1, t_2). In the second stage, the function f(t) is produced by convolving the filter with continuous-time white-noise x(t). The white-noise can be treated informally as a draw from a GP with a delta-function
covariance,1
h(t) ~ GP(0, K_h(t_1, t_2)),   x(t) ~ GP(0, \sigma_x^2 \delta(t_1 - t_2)),   f(t) = \int_R h(t - \tau) x(\tau) d\tau.   (1)
This family of models can be motivated from several different perspectives due to the ubiquity of
continuous-time linear systems.
First, the model relates to linear time-invariant (LTI) systems [12]. The process x(t) is the input
to the LTI system, the function h(t) is the system?s impulse response (which is modelled as a draw
from a GP) and f (t) is its output. In this setting, as an LTI system is entirely characterised by its
impulse response [12], model design boils down to identifying a suitable function h(t). A second
perspective views the model through the lens of differential equations, in which case h(t) can be
considered to be the Green?s function of a system defined by a linear differential equation that is
driven by white-noise. In this way, the prior over h(t) implicitly defines a prior over the coefficients
of linear differential equations of potentially infinite order [13]. Third, the GPCM can be thought
of as a continuous-time generalisation of the discrete-time moving average process in which the
window is potentially infinite in extent and is produced by a GP prior [14].
A fourth perspective relates the GPCM to standard GP models. Consider the filter h(t) to be known.
In this case the process f (t)|h is distributed according to a GP, since f (t) is a linear combination
of Gaussian random variables. The mean function mf |h (f (t)) and covariance function Kf |h (t1 , t2 )
Rof the random variable f |h, t ? R, are then stationary and given by mf |h (f (t)) = E [f (t)|h] =
h(t ? ? )E [x(? )] d? = 0 and
R
Z
Kf |h (t1 , t2 ) = Kf |h (t) =
h(s)h(s + t)ds = (h(t) ? h(?t))(t)
(2)
R
1
Here we use informal notation common in the GP literature. A more formal treatment would use stochastic
integral notation [11], which replaces the differential element x(? )d? = dW (? ), so that eq. (1) becomes a
stochastic integral equation (w.r.t. the Brownian motion W ).
2
that is, the convolution between the filter h(t) and its mirrored version with respect to t = 0 ? see
sec. 1 of the supplementary material for the full derivation.
Since h(t) is itself is drawn from a nonparametric prior, the presented model (through the relationship above) induces a prior over nonparametric kernels. A particular case is obtained when h(t)
is chosen as the basis expansion of a reproducing kernel Hilbert space [15] with parametric kernel
(e.g., the squared exponential kernel), whereby Kf |h becomes such a kernel.
A fifth perspective considers the model in the frequency domain rather than the time domain. Here
the continuous-time linear filter shapes the spectral content of the input process x(t). As x(t) is
white-noise, it has positive PSD at all frequencies, which can potentially influence f (t). More
precisely, since the PSD of f |h is given by the Fourier transform of the covariance function (by
the Wiener?Khinchin theorem
[12]), the model places a nonparametric
R
R prior over the PSD, given
2
?
?
by F(Kf |h (t))(?) = R Kf |h (t)e?j?t dt = |h(?)|
, where h(?)
= R h(t)e?j?t dt is the Fourier
transform of the filter.
Armed with these different theoretical perspectives on the GPCM generative model, we next focus
on how to design appropriate covariance functions for the filter.
2.1 Sensible and tractable priors over the filter function
Real-world signals have finite power (which relates to the stability of the system) and potentially
complex spectral content. How can such knowledge be built into the filter covariance function
Kh (t1 , t2 )? To fulfil these conditions, we model the linear filter h(t) as a draw from a squared
exponential GP that is multiplied by a Gaussian window (centred on zero) in order to restrict its
extent. The resulting decaying squared exponential (DSE) covariance function is given by a squared
2
2
exponential (SE) covariance pre- and post-multiplied by e??t1 and e??t2 respectively, that is,
2
2
2
Kh (t1 , t2 ) = KDSE (t1 , t2 ) = ?h2 e??t1 e??(t1 ?t2 ) e??t2 , ?, ?, ?h > 0.
(3)
2
With p
the GP priors for x(t) and h(t), f (t) is zero-mean, stationary and has a variance E[f (t)] =
?x2 ?h2 ?/(2?). Consequently,
by Chebyshev?s inequality, f (t) is stochastically bounded, that is,
p
2 2
Pr(|f (t)| ? T ) ? ?x ?h ?/(2?)T ?2 , T ? R. Hence, the exponential decay of KDSE (controlled
by ?) plays a key role in the finiteness of the integral in eq. (1) ? and, consequently, of f (t).
Additionally, the DSE model for the filter h(t) provides a flexible prior distribution over linear systems, where the hyperparameters have physical meaning: \sigma_h^2 controls the power of the output f(t); 1/\sqrt{\gamma} is the characteristic timescale over which the filter varies which, in turn, determines the typical frequency content of the system; finally, 1/\sqrt{\alpha} is the temporal extent of the filter, which controls the length of time correlations in the output signal and, equivalently, the bandwidth characteristics in the frequency domain.
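The DSE covariance of eq. (3) translates directly into code (a sketch; argument names and defaults are ours):

import numpy as np

def k_dse(t1, t2, sigma_h=1.0, alpha=0.1, gamma=1.0):
    # decaying squared exponential covariance, eq. (3)
    return sigma_h**2 * np.exp(-alpha * t1**2 - gamma * (t1 - t2)**2 - alpha * t2**2)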
Although the covariance function is flexible, its Gaussian form facilitates analytic computation that
will be leveraged when (approximately) sampling from the DSE-GPCM and performing inference.
In principle, it is also possible in the framework that follows to add causal structure into the covariance function so that only causal filters receive non-zero prior probability density, but we leave that
extension for future work.
2.2 Sampling from the model
Exact sampling from the proposed model in eq. (1) is not possible, since it requires computation
of the convolution between infinite dimensional processes h(t) and x(t). It is possible to make
some analytic progress by considering, instead, the GP formulation of the GPCM in eq. (2) and
noting that sampling f(t)|h ~ GP(0, K_{f|h}) only requires knowledge of K_{f|h} = h(t) * h(-t)
and therefore avoids explicit representation of the troublesome white-noise process x(t). Further
progress requires approximation. The first key insight is that h(t) can be sampled at a finite number
of locations h = h(t) = [h(t1 ), . . . , h(tNh )] using a multivariate Gaussian and then exact analytic
inference can be performed to infer the entire function h(t) (via noiseless GP regression). Moreover,
since the filter is drawn from the DSE kernel h(t) ? GP (0, KDSE ) it is, with high probability,
temporally limited in extent and smoothly varying. Therefore, a relatively small number of samples
Nh can potentially enable accurate estimates of h(t). The second key insight is that it is possible,
when using the DSE kernel, to analytically compute the expected value of the covariance of f (t)|h,
\bar{K}_{f|h} = E[K_{f|h}|h] = E[h(t) * h(-t)|h], as well as the uncertainty in this quantity. The more values of the latent process h we consider, the lower the uncertainty in h and, as a consequence, \bar{K}_{f|h} -> K_{f|h}
almost surely. This is an example of a Bayesian numerical integration method since the approach
maintains knowledge of its own inaccuracy [16].
In more detail, the kernel approximation \bar{K}_{f|h}(t_1, t_2) is given by:

E[K_{f|h}(t_1, t_2)|h] = E\left[ \int_R h(t_1 - \tau) h(t_2 - \tau) d\tau \,\Big|\, h \right] = \int_R E[h(t_1 - \tau) h(t_2 - \tau)|h] d\tau
  = \int_R K_{DSE}(t_1 - \tau, t_2 - \tau) d\tau + \sum_{r,s=1}^{N_h} M_{r,s} \int_R K_{DSE}(t_1 - \tau, t_r) K_{DSE}(t_s, t_2 - \tau) d\tau,

where M_{r,s} is the (r, s)-th entry of the matrix (K^{-1} h h^T K^{-1} - K^{-1}), with K = K_{DSE}(t, t). The kernel
approximation and its Fourier transform, i.e., the PSD, can be calculated in closed form (see sec. 2
in the supplementary material). Fig. 1 illustrates the generative process of the proposed model.
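The two integrals above can also be approximated by simple quadrature (the paper evaluates them in closed form in its supplementary material; the following numerical sketch, with our own names, expects the DSE covariance bound into a two-argument callable, e.g. k = lambda a, b: k_dse(a, b, 1.0, 0.1, 1.0)):

import numpy as np

def expected_kernel(t1, t2, t_obs, h_obs, k, tau):
    # E[K_{f|h}(t1, t2) | h] for filter values h_obs at locations t_obs;
    # tau is a fine quadrature grid covering the filter's support.
    K = k(t_obs[:, None], t_obs[None, :])                # K = K_DSE(t, t)
    Kinv = np.linalg.inv(K)
    M = Kinv @ np.outer(h_obs, h_obs) @ Kinv - Kinv      # K^{-1} h h^T K^{-1} - K^{-1}
    dtau = tau[1] - tau[0]
    first = k(t1 - tau, t2 - tau).sum() * dtau           # prior term
    A = k(t1 - tau[:, None], t_obs[None, :])             # K_DSE(t1 - tau, t_r)
    B = k(t_obs[None, :], t2 - tau[:, None])             # K_DSE(t_s, t2 - tau)
    return first + np.einsum('rs,tr,ts->', M, A, B) * dtau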
[Figure 1: Sampling from the proposed regression model. From left to right: filter h(t) ~ GP(0, K_h) (latent process and observations), kernel K_{f|h}(t) = h(t) * h(-t) (true kernel and the approximation \bar{K}_{f|h} = E[K_{f|h}|h]), power spectral density F(K_{f|h})(\omega), and a sample of the output f(\cdot) ~ GP(0, K_{f|h}).]
3 Inference and learning using variational methods
One of the main contributions of this paper is to devise a computationally tractable method for learning the filter h(t) (known as system identification in the control community [17]) and inferring the
white-noise process x(t) from a noisy dataset y \in R^N produced by their convolution and additive Gaussian noise, y(t) = f(t) + \epsilon(t) = \int_R h(t - \tau) x(\tau) d\tau + \epsilon(t), with \epsilon(t) ~ N(0, \sigma_\epsilon^2). Performing inference and learning is challenging for three reasons: First, the convolution means that each observed datapoint depends on the entire unknown filter and white-noise process, which are infinite-dimensional functions. Second, the model is non-linear in the unknown functions since the filter and
the white-noise multiply one another in the convolution. Third, continuous-time white-noise must
be handled with care since formally it is only well-behaved inside integrals.
We propose a variational approach that addresses these three problems. First, the convolution is
made tractable by using variational inducing variables that summarise the infinite dimensional latent
functions into finite dimensional inducing points. This is the same approach that is used for scaling
GP regression [18]. Second, the product non-linearity is made tractable by using a structured meanfield approximation and leveraging the fact that the posterior is conditionally a GP when x(t) or
h(t) is fixed. Third, the direct representation of white-noise process is avoided by considering a
set of inducing variables instead, which are related to x(t) via an integral transformation (so-called
inter-domain inducing variables [19]). We outline the approach below.
In order to form the variational inter-domain approximation, we first expand the model with additional variables.
We use X to denote the set of all integral transformations of x(t), with members u_x(t) = \int_R w(t, \tau) x(\tau) d\tau (which includes the original white-noise process when w(t, \tau) = \delta(t - \tau)), and identically define the set H with members u_h(t) = \int_R w(t, \tau) h(\tau) d\tau. The variational lower
bound of the model evidence can be applied to this augmented model2 using Jensen's inequality:

L = \log p(y) = \log \int p(y, H, X) \, dH \, dX \ge \int q(H, X) \log \frac{p(y, H, X)}{q(H, X)} \, dH \, dX = F   (4)
2
This formulation can be made technically rigorous for latent functions [20], but we do not elaborate on that
here to simplify the exposition.
here q(H, X) is any variational distribution over the sets of processes X and H. The bound
can be written as the difference between the model evidence and the KL divergence between
the variational distribution over all integral transformed processes and the true posterior, F =
L - KL[q(H, X) || p(X, H|y)]. The bound is therefore saturated when q(H, X) = p(X, H|y),
but this is intractable. Instead, we choose a simpler parameterised form, similar in spirit to that used
in the approximate sampling procedure, that allows us to side-step these difficulties. In order to construct the variational distribution, we first partition the set X into the original white-noise process,
a finite set of variables called inter-domain inducing points ux that will be used to parameterise the
approximation, and the remaining variables X_{\ne x, u_x}, so that X = \{x, u_x, X_{\ne x, u_x}\}. The set H is partitioned identically: H = \{h, u_h, H_{\ne h, u_h}\}. We then choose a variational distribution q(H, X) that mirrors the form of the joint distribution,

p(y, H, X) = p(x, X_{\ne x, u_x} | u_x) p(h, H_{\ne h, u_h} | u_h) p(u_x) p(u_h) p(y | h, x)
q(H, X) = p(x, X_{\ne x, u_x} | u_x) p(h, H_{\ne h, u_h} | u_h) q(u_x) q(u_h) = q(H) q(X).
This is a structured mean-field approximation [21]. The approximating distribution over the inducing points q(ux )q(uh ) is chosen to be a multivariate Gaussian (the optimal parametric form given
the assumed factorisation). Intuitively, the variational approximation implicitly constructs a surrogate GP regression problem, whose posterior q(ux )q(uh ) induces a predictive distribution that best
captures the true posterior distribution as measured by the KL divergence.
Critically, the resulting bound is now tractable, as we now show. First, note that the shared prior terms in the joint and the approximation cancel, leading to an elegant form:

F = ∫ q(h, x, u_h, u_x) log [ p(y | h, x) p(u_h) p(u_x) / (q(u_h) q(u_x)) ] dh dx du_h du_x   (5)
  = E_q[log p(y | h, x)] − KL[q(u_h) || p(u_h)] − KL[q(u_x) || p(u_x)].   (6)
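The two KL terms in eq. (6) have the familiar closed form for multivariate Gaussians. The hypothetical helper below shows that computation for a zero-mean prior N(0, K_uu) and a variational posterior N(m, S); a practical implementation would use Cholesky solves rather than an explicit inverse.

```python
import numpy as np

def kl_gaussian(m, S, Kuu):
    """KL[ N(m, S) || N(0, Kuu) ] for a set of M inducing variables."""
    M = m.size
    Kinv = np.linalg.inv(Kuu)        # fine for a sketch; prefer Cholesky solves
    trace_term = np.trace(Kinv @ S)
    quad_term = m @ Kinv @ m
    logdet_term = np.linalg.slogdet(Kuu)[1] - np.linalg.slogdet(S)[1]
    return 0.5 * (trace_term + quad_term - M + logdet_term)
```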
The last two terms in the bound are simple to compute, being KL divergences between multivariate Gaussians. The first term, the average of the log-likelihood terms with respect to the variational distribution, is more complex:

E_q[log p(y | h, x)] = −(N/2) log(2πσ²) − (1 / 2σ²) Σ_{i=1}^N E_q[ ( y(t_i) − ∫_R h(t_i − τ) x(τ) dτ )² ].
Computation of the variational bound therefore requires the first and second moments of the convolution under the variational approximation. However, these can be computed analytically for
particular choices of covariance function such as the DSE, by taking the expectations inside the
integral (this is analogous to variational inference for the Gaussian Process Latent Variable Model
[22]). For example, the first moment of the convolution is

E_q[ ∫_R h(t_i − τ) x(τ) dτ ] = ∫_R E_{q(h,u_h)}[h(t_i − τ)] E_{q(x,u_x)}[x(τ)] dτ   (7)

where the expectations take the form of the predictive mean in GP regression,

E_{q(h,u_h)}[h(t_i − τ)] = K_{h,u_h}(t_i − τ) K_{u_h,u_h}⁻¹ μ_{u_h}   and   E_{q(x,u_x)}[x(τ)] = K_{x,u_x}(τ) K_{u_x,u_x}⁻¹ μ_{u_x},

where {K_{h,u_h}, K_{u_h,u_h}, K_{x,u_x}, K_{u_x,u_x}} are the covariance functions and {μ_{u_h}, μ_{u_x}} are the means of the approximate variational posterior.
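Concretely, the factorised first moment in eq. (7) reduces to a product of two sparse-GP predictive means followed by an integral over τ. The sketch below uses trapezoidal quadrature as a stand-in for the analytic convolution that is available for the SE and DSE kernels; mean_h and mean_x are hypothetical callables returning the posterior means above.

```python
import numpy as np

def predictive_mean(K_fu, K_uu, mu_u):
    # E_q[f(.)] = K_fu K_uu^{-1} mu_u, the standard sparse-GP posterior mean.
    return K_fu @ np.linalg.solve(K_uu, mu_u)

def first_moment_convolution(ti, tau_grid, mean_h, mean_x):
    # E_q[ int h(ti - tau) x(tau) dtau ] ~= sum_j E[h(ti - tau_j)] E[x(tau_j)] dtau
    Eh = mean_h(ti - tau_grid)   # posterior mean of the filter on the grid
    Ex = mean_x(tau_grid)        # posterior mean of the noise process on the grid
    return np.trapz(Eh * Ex, tau_grid)
```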
Crucially, the integral is tractable if the covariance functions can be convolved analytically, ∫_R K_{h,u_h}(t_i − τ) K_{x,u_x}(τ) dτ, which is the case for the SE and DSE covariances; see sec. 4 of the supplementary material for the derivation of the variational lower bound.
The fact that it is possible to compute the first and second moments of the convolution under the approximate posterior means that it is also tractable to compute the mean of the posterior distribution over the kernel, E_q[K_{f|h}(t₁, t₂)] = E_q[ ∫_R h(t₁ − τ) h(t₂ − τ) dτ ], and the associated error-bars. The method therefore supports full probabilistic inference and learning for nonparametric kernels, in addition to extrapolation, interpolation and denoising, in a tractable manner. The next section discusses sensible choices for the integral transforms that define the inducing variables u_h and u_x.
3.1 Choice of the inducing variables u_h and u_x
In order to choose the domain of the inducing variables, it is useful to consider inference for the white-noise process given a fixed window h(t). Typically, we assume that the window h(t) is smoothly varying, in which case the data y(t) are only determined by the low-frequency content of the white noise; conversely, in inference, the data can only reveal the low frequencies in x(t). In fact, since a continuous-time white-noise process contains power at all frequencies and infinite power in total, most of the white-noise content will be undeterminable, as it is suppressed by the filter (or filtered out). However, for the same reason, these components do not affect prediction of f(t).

Since we can only learn the low-frequency content of the white noise and this is all that is important for making predictions, we consider inter-domain inducing points formed by a Gaussian integral transform,

u_x = ∫_R exp( −(t_x − τ)² / (2l²) ) x(τ) dτ.

These inducing variables represent a local estimate of the white-noise process x around the inducing location t_x under a Gaussian window, and have a squared-exponential covariance by construction (these covariances are shown in sec. 3 of the supplementary material). In spectral terms, the process u_x is a low-pass version of the true process x. The variational parameters l and t_x affect the approximate posterior and can be optimised using the free energy, although this was not investigated here to minimise computational overhead. For the inducing variables u_h we chose not to use the flexibility of the inter-domain parameterisation and, instead, place the points in the same domain as the window.
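Numerically, such an inducing variable is just a Gaussian-weighted average of the white-noise path around its location t_x; the sketch below evaluates it by quadrature (the window width l and the grid are illustrative):

```python
import numpy as np

def interdomain_inducing(x_path, tau_grid, t_x, l):
    # u_x = int exp(-(t_x - tau)^2 / (2 l^2)) x(tau) dtau,
    # i.e. a local, low-pass summary of the white-noise process around t_x.
    w = np.exp(-0.5 * ((t_x - tau_grid) / l) ** 2)
    return np.trapz(w * x_path, tau_grid)
```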
4 Experiments
The DSE-GPCM was tested using synthetic data with known statistical properties and real-world
signals. The aim of these experiments was to validate the new approach to learn covariance functions
and PSDs while also providing error bars for the estimates, and to compare it against alternative
parametric and nonparametric approaches.
4.1 Learning known parametric kernels
We considered Gaussian processes with standard, parametric covariance kernels and verified that
our method is able to infer such kernels. Gaussian processes with squared exponential (GP-SE) and
spectral mixture (GP-SM) kernels, both of unit variance, were used to generate two time series on
the region [-44, 44] uniformly sampled at 10 Hz (i.e., 880 samples). We then constructed the observation signal by adding unit-variance white-noise. The experiment then consisted of (i) learning
the underlying kernel, (ii) estimating the latent process and (iii) performing imputation by removing
observations in the region [-4.4, 4.4] (10% of the observations).
Fig. 2 shows the results for the GP-SE case. We chose 88 inducing points for u_x, that is, 1/10 of the samples to be recovered, and 30 for u_h; the two hyperparameters in eq. (2) were set to 0.45 and 0.1, so as to allow for an uninformative prior on h(t). The variational objective F was optimised with respect to the filter hyperparameter and the variational parameters μ_h, μ_x (means) and the Cholesky factors of C_h, C_x (covariances) using conjugate gradients. The true SE kernel was reconstructed from the noisy data with an accuracy of 5%, while the estimation mean squared error (MSE) was within 1% of the (unit) noise variance for both the true GP-SE and the proposed model.

Fig. 3 shows the results for the GP-SM time series. Along the lines of the GP-SE case, the reconstruction of the true kernel and spectrum is remarkably accurate, and the estimate of the latent process has virtually the same mean squared error (MSE) as the true GP-SM model. These toy results indicate that the variational inference procedure can work well, in spite of known biases [23].
4.2 Learning the spectrum of real-world signals
The ability of the DSE-GPCM to provide Bayesian estimates of the PSD of real-world signals was verified next. This was achieved through a comparison of the proposed model to (i) the spectral mixture kernel (GP-SM) [4], (ii) tracking the Fourier coefficients using a Kalman filter (Kalman-Fourier [24]), (iii) the Yule-Walker method and (iv) the periodogram [25].

We first analysed the Mauna Loa monthly CO2 concentration (de-trended). We considered the GP-SM with 4 and 10 components, Kalman-Fourier with a partition of 500 points between zero and the Nyquist frequency, Yule-Walker with 250 lags, and the raw periodogram. All methods used all the data, and each PSD estimate was normalised w.r.t. its maximum (shown in fig. 4). All methods identified the three main frequency peaks at [0, year⁻¹, 2 year⁻¹]; however, notice that the Kalman-Fourier method does not provide sharp peaks and that GP-SM places Gaussians on frequencies with
[Figure 2 panels: filter h(t) with posterior mean and inducing points; filtered process u_x; normalised kernels (discrepancy 5.4%) comparing the true SE kernel and the DSE-GPCM kernel; observations, latent process and kernel estimates (SE kernel estimate MSE = 0.9984, DSE-GPCM estimate MSE = 1.0116).]
Figure 2: Joint learning of an SE kernel and data imputation using the proposed DSE-GPCM approach. Top: filter h(t) and inducing points uh (left), filtered white-noise process ux (centre) and
learnt kernel (right). Bottom: Latent signal and its estimates using both the DSE-GPCM and the
true model (GP-SE). Confidence intervals are shown in light blue (DSE-GPCM) and in between
dashed red lines (GP-SE) and they correspond to 99.7% for the kernel and 95% otherwise.
[Figure 3 panels: ground truth vs DSE-GPCM posterior kernels (normalised, discrepancy 18.6%); normalised PSDs (discrepancy 15.8%); data imputation region (SM estimate MSE = 1.0149, DSE-GPCM estimate MSE = 1.0507).]
Figure 3: Joint learning of an SM kernel and data imputation using a nonparametric kernel. True
and learnt kernel (left), true and learnt spectra (centre) and data imputation region (right).
negligible power; this is a known drawback of the GP-SM approach: it is sensitive to initialisation and gets trapped in noisy frequency peaks (in this experiment, the centres of the GP-SM were initialised as multiples of one tenth of the Nyquist frequency). This example shows that the GP-SM can overfit noise in the training data. Conversely, observe how the proposed DSE-GPCM approach (with N_h = 300 and N_x = 150) not only captured the first three peaks but also the spectral floor, and placed meaningful error bars (90%) where the raw periodogram lay.
Figure 4: Spectral estimation of the Mauna Loa CO2 concentration. DSE-GPCM with error bars
(90%) is shown with the periodogram at the left and all other methods at the right for clarity.
The next experiment consisted of recovering the spectrum of an audio signal from the TIMIT corpus, composed of 1750 samples (at 16 kHz), using only an irregularly-sampled 20% of the available data. We compared the proposed DSE-GPCM method to GP-SM (again with 4 and 10 components) and Kalman-Fourier; we used the periodogram and the Yule-Walker method as benchmarks, since these methods cannot handle unevenly-sampled data (therefore, they used all the data). Besides the PSD, we also computed the learnt kernel, shown alongside the autocorrelation function in fig. 5 (left).

Due to its sensitivity to initial conditions, the centres of the GP-SM were initialised every 100 Hz (the harmonics of the signal occur approximately every 114 Hz); however, it was only with 10 components that the GP-SM was able to find the four main lobes of the PSD. Notice also how the DSE-GPCM accurately finds the main lobes, both in location and width, together with the 90% error bars.
Figure 5: Audio signal from TIMIT. Induced kernel of DSE-GPCM and GP-SM alongside autocorrelation function (left). PSD estimate using DSE-GPCM and raw periodogram (centre). PSD
estimate using GP-SM, Kalman-Fourier, Yule-Walker and raw periodogram (right).
5 Discussion
The Gaussian Process Convolution Model (GPCM) has been proposed as a generative model for stationary time series based on the convolution between a filter function and a white-noise process. Learning the model from data is achieved via a novel variational free-energy approximation, which in turn allows us to perform predictions and inference on both the covariance kernel and the spectrum in a probabilistic, analytically and computationally tractable manner. The GPCM approach was validated on the recovery of spectral density from non-uniformly sampled time series; to our knowledge, this is the first probabilistic approach that places a nonparametric prior over the spectral density itself and recovers a posterior distribution over that density directly from the time series.
The encouraging results for both synthetic and real-world data shown in sec. 4 serve as a proof of concept for the nonparametric design of covariance kernels and PSDs using convolution processes. In this regard, extensions of the presented model can be identified in the following directions. First, for the proposed GPCM to perform well, the number of inducing points u_h and u_x needs to be increased with (i) the high-frequency content and (ii) the range of correlations of the data; therefore, to avoid the computational overhead associated with large numbers of inducing points, the filter prior or the inter-domain transformation can be designed to have a specific harmonic structure and thereby focus on a target spectrum. Second, the algorithm can be adapted to handle longer time series, for instance through the use of tree-structured approximations [26]. Third, the method can also be extended beyond time series to operate on higher-dimensional input spaces; this can be achieved by means of a factorisation of the latent kernel, whereby the number of inducing points for the filter only increases linearly with the dimension, rather than exponentially.
Acknowledgements
Part of this work was carried out when F.T. was with the University of Cambridge. F.T. thanks
CONICYT-PAI grant 82140061 and Basal-CONICYT Center for Mathematical Modeling (CMM).
R.T. thanks EPSRC grants EP/L000776/1 and EP/M026957/1. T.B. thanks Google. We thank Mark
Rowland, Shane Gu and the anonymous reviewers for insightful feedback.
References
[1] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. The MIT Press, 2006.
[2] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1-127, 2009.
[3] D. J. C. MacKay, "Introduction to Gaussian processes," in Neural Networks and Machine Learning (C. M. Bishop, ed.), NATO ASI Series, pp. 133-166, Kluwer Academic Press, 1998.
[4] A. G. Wilson and R. P. Adams, "Gaussian process kernels for pattern discovery and extrapolation," in Proc. of International Conference on Machine Learning, 2013.
[5] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani, "Structure discovery in nonparametric regression through compositional kernel search," in Proc. of International Conference on Machine Learning, pp. 1166-1174, 2013.
[6] D. Duvenaud, H. Nickisch, and C. E. Rasmussen, "Additive Gaussian processes," in Advances in Neural Information Processing Systems 24, pp. 226-234, 2011.
[7] M. Gönen and E. Alpaydin, "Multiple kernel learning algorithms," The Journal of Machine Learning Research, vol. 12, pp. 2211-2268, 2011.
[8] F. Tobar, S.-Y. Kung, and D. Mandic, "Multikernel least mean square algorithm," IEEE Trans. on Neural Networks and Learning Systems, vol. 25, no. 2, pp. 265-277, 2014.
[9] R. E. Turner, Statistical Models for Natural Sounds. PhD thesis, Gatsby Computational Neuroscience Unit, UCL, 2010.
[10] R. Turner and M. Sahani, "Time-frequency analysis as probabilistic inference," IEEE Trans. on Signal Processing, vol. 62, no. 23, pp. 6171-6183, 2014.
[11] B. Oksendal, Stochastic Differential Equations. Springer, 2003.
[12] A. V. Oppenheim and A. S. Willsky, Signals and Systems. Prentice-Hall, 1997.
[13] C. Archambeau, D. Cornford, M. Opper, and J. Shawe-Taylor, "Gaussian process approximations of stochastic differential equations," Journal of Machine Learning Research Workshop and Conference Proceedings, vol. 1, pp. 1-16, 2007.
[14] S. F. Gull, "Developments in maximum entropy data analysis," in Maximum Entropy and Bayesian Methods (J. Skilling, ed.), vol. 36, pp. 53-71, Springer Netherlands, 1989.
[15] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[16] T. P. Minka, "Deriving quadrature rules from Gaussian processes," tech. rep., Statistics Department, Carnegie Mellon University, 2000.
[17] A. H. Jazwinski, Stochastic Processes and Filtering Theory. New York: Academic Press, 1970.
[18] M. K. Titsias, "Variational learning of inducing variables in sparse Gaussian processes," in Proc. of International Conference on Artificial Intelligence and Statistics, pp. 567-574, 2009.
[19] A. Figueiras-Vidal and M. Lázaro-Gredilla, "Inter-domain Gaussian processes for sparse inference using inducing features," in Advances in Neural Information Processing Systems, pp. 1087-1095, 2009.
[20] A. G. d. G. Matthews, J. Hensman, R. E. Turner, and Z. Ghahramani, "On sparse variational methods and the Kullback-Leibler divergence between stochastic processes," arXiv preprint arXiv:1504.07027, 2015.
[21] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[22] M. K. Titsias and N. D. Lawrence, "Bayesian Gaussian process latent variable model," in Proc. of International Conference on Artificial Intelligence and Statistics, pp. 844-851, 2010.
[23] R. E. Turner and M. Sahani, "Two problems with variational expectation maximisation for time-series models," in Bayesian Time Series Models (D. Barber, T. Cemgil, and S. Chiappa, eds.), ch. 5, pp. 109-130, Cambridge University Press, 2011.
[24] Y. Qi, T. Minka, and R. W. Picard, "Bayesian spectrum estimation of unevenly sampled nonstationary data," in Proc. of IEEE ICASSP, vol. 2, pp. II-1473-II-1476, 2002.
[25] D. B. Percival and A. T. Walden, Spectral Analysis for Physical Applications. Cambridge University Press, 1993.
[26] T. D. Bui and R. E. Turner, "Tree-structured Gaussian process approximations," in Advances in Neural Information Processing Systems 27, pp. 2213-2221, 2014.
5,272 | 5,773 | Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
Emily Denton*
Dept. of Computer Science
Courant Institute
New York University
Soumith Chintala*
Arthur Szlam
Facebook AI Research
New York
Rob Fergus

*Denotes equal contribution.
Abstract
In this paper we introduce a generative parametric model capable of producing
high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in
a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach [11].
Samples drawn from our model are of significantly higher quality than alternate
approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for
samples drawn from a GAN baseline model. We also show samples from models
trained on the higher resolution images of the LSUN scene dataset.
1 Introduction
Building a good generative model of natural images has been a fundamental problem within computer vision. However, images are complex and high dimensional, making them hard to model well,
despite extensive efforts. Given the difficulties of modeling entire scenes at high resolution, most existing approaches instead generate image patches. In contrast, we propose an approach that is able to generate plausible-looking scenes at 32 × 32 and 64 × 64. To do this, we exploit the multi-scale structure of natural images, building a series of generative models, each of which captures
image structure at a particular scale of a Laplacian pyramid [1]. This strategy breaks the original
problem into a sequence of more manageable stages. At each scale we train a convolutional networkbased generative model using the Generative Adversarial Networks (GAN) approach of Goodfellow
et al. [11]. Samples are drawn in a coarse-to-fine fashion, commencing with a low-frequency residual image. The second stage samples the band-pass structure at the next level, conditioned on the
sampled residual. Subsequent levels continue this process, always conditioning on the output from
the previous scale, until the final level is reached. Thus drawing samples is an efficient and straightforward procedure: taking random vectors as input and running forward through a cascade of deep
convolutional networks (convnets) to produce an image.
Deep learning approaches have proven highly effective at discriminative tasks in vision, such as
object classification [4]. However, the same level of success has not been obtained for generative
tasks, despite numerous efforts [14, 26, 30]. Against this background, our proposed approach makes
a significant advance in that it is straightforward to train and sample from, with the resulting samples
showing a surprising level of visual fidelity.
1.1 Related Work
Generative image models are well studied, falling into two main approaches: non-parametric and
parametric. The former copy patches from training images to perform, for example, texture synthesis
[7] or super-resolution [9]. More ambitiously, entire portions of an image can be in-painted, given a
sufficiently large training dataset [13]. Early parametric models addressed the easier problem of texture synthesis [3, 33, 22], with Portilla & Simoncelli [22] making use of a steerable pyramid wavelet
representation [27], similar to our use of a Laplacian pyramid. For image processing tasks, models
based on marginal distributions of image gradients are effective [20, 25], but are only designed for
image restoration rather than being true density models (so cannot sample an actual image). Very
large Gaussian mixture models [34] and sparse coding models of image patches [31] can also be
used but suffer the same problem.
A wide variety of deep learning approaches involve generative parametric models. Restricted Boltzmann machines [14, 18, 21, 23], Deep Boltzmann machines [26, 8], Denoising auto-encoders [30]
all have a generative decoder that reconstructs the image from the latent representation. Variational
auto-encoders [16, 24] provide probabilistic interpretation which facilitates sampling. However, for
all these methods convincing samples have only been shown on simple datasets such as MNIST
and NORB, possibly due to training complexities which limit their applicability to larger and more
realistic images.
Several recent papers have proposed novel generative models. Dosovitskiy et al. [6] showed how a
convnet can draw chairs with different shapes and viewpoints. While our model also makes use of
convnets, it is able to sample general scenes and objects. The DRAW model of Gregor et al. [12]
used an attentional mechanism with an RNN to generate images via a trajectory of patches, showing
samples of MNIST and CIFAR10 images. Sohl-Dickstein et al. [28] use a diffusion-based process
for deep unsupervised learning and the resulting model is able to produce reasonable CIFAR10 samples. Theis and Bethge [29] employ LSTMs to capture spatial dependencies and show convincing
inpainting results of natural textures.
Our work builds on the GAN approach of Goodfellow et al. [11] which works well for smaller
images (e.g. MNIST) but cannot directly handle large ones, unlike our method. Most relevant to our
approach is the preliminary work of Mirza and Osindero [19] and Gauthier [10] who both propose
conditional versions of the GAN model. The former shows MNIST samples, while the latter focuses
solely on frontal face images. Our approach also uses several forms of conditional GAN model but
is much more ambitious in its scope.
2 Approach
The basic building block of our approach is the generative adversarial network (GAN) of Goodfellow
et al. [11]. After reviewing this, we introduce our LAPGAN model which integrates a conditional
form of GAN model into the framework of a Laplacian pyramid.
2.1 Generative Adversarial Networks
The GAN approach [11] is a framework for training generative models, which we briefly explain in
the context of image data. The method pits two networks against one another: a generative model G
that captures the data distribution and a discriminative model D that distinguishes between samples
drawn from G and images drawn from the training data. In our approach, both G and D are convolutional networks. The former takes as input a noise vector z drawn from a distribution pNoise (z) and
outputs an image h̃. The discriminative network D takes an image as input, stochastically chosen (with equal probability) to be either h̃, as generated from G, or h, a real image drawn from the training data p_Data(h). D outputs a scalar probability, which is trained to be high if the input was real and low if generated from G. A minimax objective is used to train both models together:

min_G max_D  E_{h∼p_Data(h)}[log D(h)] + E_{z∼p_Noise(z)}[log(1 − D(G(z)))]   (1)

This encourages G to fit p_Data(h) so as to fool D with its generated samples h̃. Both G and D are trained by backpropagating the loss in Eqn. 1 through both models to update the parameters.
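To make the objective concrete, the following sketch spells out the two minibatch losses implied by Eqn. 1; D and G stand for arbitrary differentiable discriminator and generator callables, so this is a sketch of the objective only, not of the backpropagation and update machinery:

```python
import numpy as np

def gan_losses(D, G, h_real, z, eps=1e-8):
    # Discriminator ascends: log D(h) + log(1 - D(G(z))).
    # Generator descends:    log(1 - D(G(z)))  (often replaced by -log D(G(z))).
    h_fake = G(z)
    d_loss = -np.mean(np.log(D(h_real) + eps) + np.log(1.0 - D(h_fake) + eps))
    g_loss = np.mean(np.log(1.0 - D(h_fake) + eps))
    return d_loss, g_loss
```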
The conditional generative adversarial net (CGAN) is an extension of the GAN where both networks
G and D receive an additional vector of information l as input. This might contain, say, information
about the class of the training example h. The loss function thus becomes
min_G max_D  E_{h,l∼p_Data(h,l)}[log D(h, l)] + E_{z∼p_Noise(z), l∼p_l(l)}[log(1 − D(G(z, l), l))]   (2)
where p_l(l) is, for example, the prior distribution over classes. This model allows the output of
the generative model to be controlled by the conditioning variable l. Mirza and Osindero [19] and
Gauthier [10] both explore this model with experiments on MNIST and faces, using l as a class
indicator. In our approach, l will be another image, generated from another CGAN model.
2.2 Laplacian Pyramid
The Laplacian pyramid [1] is a linear invertible image representation consisting of a set of band-pass images, spaced an octave apart, plus a low-frequency residual. Formally, let d(.) be a downsampling operation which blurs and decimates a j × j image I, so that d(I) is a new image of size j/2 × j/2. Also, let u(.) be an upsampling operator which smooths and expands I to be twice the size, so u(I) is a new image of size 2j × 2j. We first build a Gaussian pyramid G(I) = [I₀, I₁, . . . , I_K], where I₀ = I and I_k is k repeated applications of d(.) to I, i.e. I₂ = d(d(I)). K is the number of levels in the pyramid, selected so that the final level has very small spatial extent (≈ 8 × 8 pixels).

The coefficients h_k at each level k of the Laplacian pyramid L(I) are constructed by taking the difference between adjacent levels in the Gaussian pyramid, upsampling the smaller one with u(.) so that the sizes are compatible:

h_k = L_k(I) = G_k(I) − u(G_{k+1}(I)) = I_k − u(I_{k+1})   (3)

Intuitively, each level captures image structure present at a particular scale. The final level of the Laplacian pyramid h_K is not a difference image, but a low-frequency residual equal to the final Gaussian pyramid level, i.e. h_K = I_K. Reconstruction from the Laplacian pyramid coefficients [h₁, . . . , h_K] is performed using the backward recurrence:

I_k = u(I_{k+1}) + h_k   (4)

which is started with I_K = h_K, the reconstructed image being I = I₀. In other words, starting at the coarsest level, we repeatedly upsample and add the difference image h at the next finer level until we get back to the full-resolution image.
2.3 Laplacian Generative Adversarial Networks (LAPGAN)
Our proposed approach combines the conditional GAN model with a Laplacian pyramid representation. The model is best explained by first considering the sampling procedure. Following training (explained below), we have a set of generative convnet models {G₀, . . . , G_K}, each of which captures the distribution of coefficients h_k for natural images at a different level of the Laplacian pyramid. Sampling an image is akin to the reconstruction procedure in Eqn. 4, except that the generative models are used to produce the h_k's:

Ĩ_k = u(Ĩ_{k+1}) + h̃_k = u(Ĩ_{k+1}) + G_k(z_k, u(Ĩ_{k+1}))   (5)

The recurrence starts by setting Ĩ_{K+1} = 0 and using the model at the final level G_K to generate a residual image Ĩ_K using noise vector z_K: Ĩ_K = G_K(z_K). Note that models at all levels except the final are conditional generative models that take an upsampled version of the current image Ĩ_{k+1} as a conditioning variable, in addition to the noise vector z_k. Fig. 1 shows this procedure in action for a pyramid with K = 3, using 4 generative models to sample a 64 × 64 image.

The generative models {G₀, . . . , G_K} are trained using the CGAN approach at each level of the pyramid. Specifically, we construct a Laplacian pyramid from each training image I. At each level we make a stochastic choice (with equal probability) to either (i) construct the coefficients h_k using the standard procedure from Eqn. 3, or (ii) generate them using G_k:

h̃_k = G_k(z_k, u(I_{k+1}))   (6)
Figure 1: The sampling procedure for our LAPGAN model. We start with a noise sample z₃ (right side) and use a generative model G₃ to generate Ĩ₃. This is upsampled (green arrow) and then used as the conditioning variable (orange arrow) l₂ for the generative model at the next level, G₂. Together with another noise sample z₂, G₂ generates a difference image h̃₂ which is added to l₂ to create Ĩ₂. This process repeats across two subsequent levels to yield a final full-resolution sample I₀.
Figure 2: The training procedure for our LAPGAN model. Starting with a 64×64 input image I from our training set (top left): (i) we take I₀ = I and blur and downsample it by a factor of two (red arrow) to produce I₁; (ii) we upsample I₁ by a factor of two (green arrow), giving a low-pass version l₀ of I₀; (iii) with equal probability we use l₀ to create either a real or a generated example for the discriminative model D₀. In the real case (blue arrows), we compute high-pass h₀ = I₀ − l₀ which is input to D₀ that computes the probability of it being real vs generated. In the generated case (magenta arrows), the generative network G₀ receives as input a random noise vector z₀ and l₀. It outputs a generated high-pass image h̃₀ = G₀(z₀, l₀), which is input to D₀. In both the real/generated cases, D₀ also receives l₀ (orange arrow). Optimizing Eqn. 2, G₀ thus learns to generate realistic high-frequency structure h̃₀ consistent with the low-pass image l₀. The same procedure is repeated at scales 1 and 2, using I₁ and I₂. Note that the models at each level are trained independently. At level 3, I₃ is an 8×8 image, simple enough to be modeled directly with standard GANs G₃ & D₃.
Note that G_k is a convnet which uses a coarse scale version of the image l_k = u(I_{k+1}) as an input, as well as noise vector z_k. D_k takes as input h_k or h̃_k, along with the low-pass image l_k (which is explicitly added to h_k or h̃_k before the first convolution layer), and predicts if the image was real or generated. At the final scale of the pyramid, the low-frequency residual is sufficiently small that it can be directly modeled with a standard GAN: h̃_K = G_K(z_K), and D_K only has h_K or h̃_K as input. The framework is illustrated in Fig. 2.
Breaking the generation into successive refinements is the key idea in this work. Note that we give up any "global" notion of fidelity; we never make any attempt to train a network to discriminate between the output of a cascade and a real image and instead focus on making each step plausible. Furthermore, the independent training of each pyramid level has the advantage that it is far more difficult for the model to memorize training examples, a hazard when high-capacity deep networks are used.
As described, our model is trained in an unsupervised manner. However, we also explore variants that utilize class labels. This is done by adding a 1-hot vector c, indicating class identity, as another conditioning variable for G_k and D_k.
3 Model Architecture & Training
We apply our approach to three datasets: (i) CIFAR10 [17]: 32 × 32 pixel color images of 10 different classes, 100k training samples with tight crops of objects; (ii) STL10 [2]: 96 × 96 pixel color images of 10 different classes, 100k training samples (we use the unlabeled portion of the data); and (iii) LSUN [32]: ≈10M images of 10 different natural scene types, downsampled to 64 × 64 pixels.
For each dataset, we explored a variety of architectures for {G_k, D_k}. Model selection was performed using a combination of visual inspection and a heuristic based on ℓ2 error in pixel space. The heuristic computes the error for a given validation image at level k in the pyramid as

L_k(I_k) = min_{z_j} || G_k(z_j, u(I_{k+1})) − h_k ||₂

where {z_j} is a large set of noise vectors, drawn from p_noise(z). In other words, the heuristic is asking: are any of the generated residual images close to the ground truth? Torch training and evaluation code, along with model specification files, can be found at http://soumith.ch/eyescream/. For all models, the noise vector z_k is drawn from a uniform [-1,1] distribution.
3.1 CIFAR10 and STL10

Initial scale: This operates at 8 × 8 resolution, using densely connected nets for both G_K & D_K with 2 hidden layers and ReLU non-linearities. D_K uses Dropout and has 600 units/layer vs 1200 for G_K. z_K is a 100-d vector.

Subsequent scales: For CIFAR10, we boost the training set size by taking four 28 × 28 crops from the original images. Thus the two subsequent levels of the pyramid are 8 → 14 and 14 → 28. For STL10, we have 4 levels going from 8 → 16 → 32 → 64 → 96. For both datasets, G_k & D_k are convnets with 3 and 2 layers, respectively (see [5]). The noise input z_k to G_k is presented as a 4th "color plane" to the low-pass image l_k, hence its dimensionality varies with the pyramid level. For CIFAR10, we also explore a class conditional version of the model, where a vector c encodes the label. This is integrated into G_k & D_k by passing it through a linear layer whose output is reshaped into a single plane feature map which is then concatenated with the 1st layer maps. The loss in Eqn. 2 is trained using SGD with an initial learning rate of 0.02, decreased by a factor of (1 + 4 × 10⁻⁴) at each epoch. Momentum starts at 0.5, increasing by 0.0008 per epoch up to a maximum of 0.8. Training time depends on the model size and pyramid level, with smaller models taking hours to train and larger models taking up to a day.
3.2 LSUN

The larger size of this dataset allows us to train a separate LAPGAN model for each of the scene classes. The four subsequent scales 4 → 8 → 16 → 32 → 64 use a common architecture for G_k & D_k at each level. G_k is a 5-layer convnet with {64, 368, 128, 224} feature maps and a linear output layer. 7 × 7 filters, ReLUs, batch normalization [15] and Dropout are used at each hidden layer. D_k has 3 hidden layers with {48, 448, 416} maps plus a sigmoid output. See [5] for full details. Note that G_k and D_k are substantially larger than those used for CIFAR10 and STL10, as afforded by the larger training set.
4 Experiments

We evaluate our approach using 3 different methods: (i) computation of log-likelihood on a held-out image set; (ii) drawing sample images from the model; and (iii) a human subject experiment that compares (a) our samples, (b) those of baseline methods and (c) real images.
4.1 Evaluation of Log-Likelihood
Like Goodfellow et al. [11], we are compelled to use a Gaussian Parzen window estimator to compute log-likelihood, since there is no direct way of computing it using our model. Table 1 compares the log-likelihood on a validation set for our LAPGAN model and a standard GAN using 50k samples for each model (the Gaussian width σ was also tuned on the validation set). Our approach shows a marginal gain over a GAN. However, we can improve the underlying estimation technique by leveraging the multi-scale structure of the LAPGAN model. This new approach computes a probability at each scale of the Laplacian pyramid and combines them to give an overall image probability (see Appendix A in the supplementary material for details). Our multi-scale Parzen estimate, shown in Table 1, produces a big gain over the traditional estimator.

The shortcomings of both estimators are readily apparent when compared to a simple Gaussian, fit to the CIFAR-10 training set. Even with added noise, the resulting model can obtain a far higher log-likelihood than either the GAN or LAPGAN models, or other published models. More generally, log-likelihood is problematic as a performance measure due to its sensitivity to the exact representation used. Small variations in the scaling, noise and resolution of the image (much less changing from RGB to YUV, or more substantive changes in input representation) result in wildly different scores, making fair comparisons to other methods difficult.
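For reference, the Parzen-window estimate places an isotropic Gaussian kernel of width σ on each model sample and evaluates held-out images under the resulting mixture; a minimal, numerically stable sketch (σ is tuned on validation data, as above):

```python
import numpy as np

def parzen_log_likelihood(samples, test, sigma):
    # samples: (S, D) model samples; test: (N, D) held-out images (flattened).
    S, D = samples.shape
    d2 = ((test[:, None, :] - samples[None, :, :]) ** 2).sum(-1)   # (N, S)
    log_k = -0.5 * d2 / sigma**2 - 0.5 * D * np.log(2 * np.pi * sigma**2)
    # log (1/S) sum_s k_sigma(x - x_s), via a log-sum-exp per test point
    m = log_k.max(axis=1, keepdims=True)
    return m.squeeze(1) + np.log(np.exp(log_k - m).mean(axis=1))
```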
Model                                        | CIFAR10 (@32×32) | STL10 (@32×32)
GAN [11] (Parzen window estimate)            | -3617 ± 353      | -3661 ± 347
LAPGAN (Parzen window estimate)              | -3572 ± 345      | -3563 ± 311
LAPGAN (multi-scale Parzen window estimate)  | -1799 ± 826      | -2906 ± 728

Table 1: Log-likelihood estimates for a standard GAN and our proposed LAPGAN model on CIFAR10 and STL10 datasets. The mean and std. dev. are given in units of nats/image. Rows 1 and 2 use a Parzen-window approach at full resolution, while row 3 uses our multi-scale Parzen-window estimator.
4.2 Model Samples
We show samples from models trained on CIFAR10, STL10 and LSUN datasets. Additional samples can be found in the supplementary material [5]. Fig. 3 shows samples from our models trained
on CIFAR10. Samples from the class conditional LAPGAN are organized by class. Our reimplementation of the standard GAN model [11] produces slightly sharper images than those shown in the
original paper. We attribute this improvement to the introduction of data augmentation. The LAPGAN samples improve upon the standard GAN samples. They appear more object-like and have
more clearly defined edges. Conditioning on a class label improves the generations as evidenced
by the clear object structure in the conditional LAPGAN samples. The quality of these samples
compares favorably with those from the DRAW model of Gregor et al. [12] and also Sohl-Dickstein
et al. [28]. The rightmost column of each image shows the nearest training example to the neighboring sample (in L2 pixel-space). This demonstrates that our model is not simply copying the input
examples.
Fig. 4(a) shows samples from our LAPGAN model trained on STL10. Here, we lose clear object shape but the samples remain sharp. Fig. 4(b) shows the generation chain for random STL10
samples.
Fig. 5 shows samples from LAPGAN models trained on three LSUN categories (tower, bedroom, church front). To the best of our knowledge, no other generative model has been able to produce samples of this complexity. The substantial gain in quality over the CIFAR10 and STL10 samples is likely due to the much larger LSUN training set, which allows us to train bigger and deeper models. In the supplementary material we show additional experiments probing the models, e.g. drawing multiple samples using the same fixed 4 × 4 image, which illustrates the variation captured by the LAPGAN models.
4.3 Human Evaluation of Samples
To obtain a quantitative measure of quality of our samples, we asked 15 volunteers to participate
in an experiment to see if they could distinguish our samples from real images. The subjects were
presented with the user interface shown in Fig. 6(right) and shown at random four different types
of image: samples drawn from three different GAN models trained on CIFAR10 ((i) LAPGAN, (ii)
class conditional LAPGAN and (iii) standard GAN [11]) and also real CIFAR10 images. After being
presented with the image, the subject clicked the appropriate button to indicate if they believed the
image was real or generated. Since accuracy is a function of viewing time, we also randomly pick
the presentation time from one of 11 durations ranging from 50ms to 2000ms, after which a gray
mask image is displayed. Before the experiment commenced, they were shown examples of real
images from CIFAR10. After collecting ≈10k samples from the volunteers, we plot in Fig. 6 the
fraction of images believed to be real for the four different data sources, as a function of presentation
time. The curves show our models produce samples that are more realistic than those from standard
GAN [11].
5 Discussion
By modifying the approach in [11] to better respect the structure of images, we have proposed a
conceptually simple generative model that is able to produce high-quality sample images that are
qualitatively better than other deep generative modeling approaches. While they exhibit reasonable
diversity, we cannot be sure that they cover the full data distribution. Hence our models could
potentially be assigning low probability to parts of the manifold of natural images. Quantifying this is difficult, but could potentially be done via another human subject experiment. A key point in our work is giving up any "global" notion of fidelity, and instead breaking the generation into plausible
successive refinements. We note that many other signal modalities have a multiscale structure that
may benefit from a similar approach.
Acknowledgements
We would like to thank the anonymous reviewers for their insightful and constructive comments.
We also thank Andrew Tulloch, Wojciech Zaremba and the FAIR Infrastructure team for useful
discussions and support. Emily Denton was supported by an NSERC Fellowship.
[Figure 3 panels, top to bottom: class conditional CC-LAPGAN samples for Airplane, Automobile, Bird, Cat, Deer, Dog, Frog, Horse, Ship and Truck; LAPGAN samples; standard GAN samples.]
Figure 3: CIFAR10 samples: our class conditional CC-LAPGAN model, our LAPGAN model and
the standard GAN model of Goodfellow [11]. The yellow column shows the training set nearest
neighbors of the samples in the adjacent column.
Figure 4: STL10 samples: (a) Random 96×96 samples from our LAPGAN model. (b) Coarse-to-fine generation chain.
Figure 5: 64 × 64 samples from three different LSUN LAPGAN models (top: tower, middle: bedroom, bottom: church front)
Figure 6: Left: Human evaluation of real CIFAR10 images (red) and samples from Goodfellow et al. [11] (magenta), our LAPGAN (blue) and a class conditional LAPGAN (green). The error bars show ±1σ of the inter-subject variability. Around 40% of the samples generated by our class conditional LAPGAN model are realistic enough to fool a human into thinking they are real images. This compares with ≈10% of images from the standard GAN model [11], but is still a lot lower than the > 90% rate for real images. Right: The user interface presented to the subjects.
References
[1] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31:532-540, 1983.
[2] A. Coates, H. Lee, and A. Y. Ng. An analysis of single layer networks in unsupervised feature learning. In AISTATS, 2011.
[3] J. S. De Bonet. Multiresolution sampling procedure for analysis and synthesis of texture images. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 361-368. ACM Press/Addison-Wesley Publishing Co., 1997.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009.
[5] E. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks: Supplementary material. http://soumith.ch/eyescream.
[6] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. arXiv preprint arXiv:1411.5928, 2014.
[7] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In ICCV, volume 2, pages 1033-1038. IEEE, 1999.
[8] S. A. Eslami, N. Heess, C. K. Williams, and J. Winn. The shape Boltzmann machine: a strong model of object shape. International Journal of Computer Vision, 107(2):155-176, 2014.
[9] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution. Computer Graphics and Applications, IEEE, 22(2):56-65, 2002.
[10] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672-2680, 2014.
[12] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. CoRR, abs/1502.04623, 2015.
[13] J. Hays and A. A. Efros. Scene completion using millions of photographs. ACM Transactions on Graphics (TOG), 26(3):4, 2007.
[14] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167v3, 2015.
[16] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. ICLR, 2014.
[17] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[18] A. Krizhevsky, G. E. Hinton, et al. Factored 3-way restricted Boltzmann machines for modeling natural images. In AISTATS, pages 621-628, 2010.
[19] M. Mirza and S. Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
[20] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
[21] S. Osindero and G. E. Hinton. Modeling image patches with a directed hierarchy of Markov random fields. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS, pages 1121-1128, 2008.
[22] J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49-70, 2000.
[23] M. Ranzato, V. Mnih, J. M. Susskind, and G. E. Hinton. Modeling natural images using gated MRFs. IEEE Transactions on Pattern Analysis & Machine Intelligence, (9):2206-2222, 2013.
[24] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and variational inference in deep latent Gaussian models. arXiv preprint arXiv:1401.4082, 2014.
[25] S. Roth and M. J. Black. Fields of experts: A framework for learning image priors. In CVPR, pages 860-867, 2005.
[26] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, pages 448-455, 2009.
[27] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger. Shiftable multiscale transforms. Information Theory, IEEE Transactions on, 38(2):587-607, 1992.
[28] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585, 2015.
[29] L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. Dec 2015.
[30] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096-1103, 2008.
[31] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 98(6):1031-1044, 2010.
[32] Y. Zhang, F. Yu, S. Song, P. Xu, A. Seff, and J. Xiao. Large-scale scene understanding challenge. In CVPR Workshop, 2015.
[33] S. C. Zhu, Y. Wu, and D. Mumford. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2):107-126, 1998.
[34] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, 2011.
9
Shepard Convolutional Neural Networks
Jimmy SJ. Ren*
SenseTime Group Limited
rensijie@sensetime.com
Li Xu
SenseTime Group Limited
xuli@sensetime.com
Qiong Yan
SenseTime Group Limited
yanqiong@sensetime.com
Wenxiu Sun
SenseTime Group Limited
sunwenxiu@sensetime.com
Abstract
Deep learning has recently been introduced to the field of low-level computer
vision and image processing. Promising results have been obtained in a number of tasks including super-resolution, inpainting, deconvolution, filtering, etc.
However, previously adopted neural network approaches such as convolutional
neural networks and sparse auto-encoders are inherently built from translation invariant
operators. We found this property prevents the deep learning approaches from
outperforming the state-of-the-art if the task itself requires translation variant interpolation (TVI). In this paper, we draw on Shepard interpolation and design
Shepard Convolutional Neural Networks (ShCNN), which efficiently realize end-to-end trainable TVI operators in the network. We show that by adding only a few
feature maps in the new Shepard layers, the network is able to achieve stronger
results than a much deeper architecture. Superior performance on both image inpainting and super-resolution is obtained where our system outperforms previous
ones while keeping the running time competitive.
1 Introduction
In the past a few years, deep learning has been very successful in addressing many aspects of visual
perception problems such as image classification, object detection, face recognition [1, 2, 3], to name
a few. Inspired by the breakthrough in high-level computer vision, several attempts have been made
very recently to apply deep learning methods in low-level vision as well as image processing tasks.
Encouraging results have been obtained in a number of tasks including image super-resolution [4],
inpainting [5], denoising [6], image deconvolution [7], dirt removal [8], edge-aware filtering [9], etc.
Powerful models with multiple layers of nonlinearity such as convolutional neural networks (CNN),
sparse auto-encoders, etc. were used in the previous studies. Notwithstanding the rapid progress and
promising performance, we notice that the building blocks of these models are inherently translation
invariant when applied to images. This property makes the network architecture less efficient in
handling translation variant operators, exemplified by the image interpolation operation.
Figure 1 illustrates the problem of image inpainting, a typical translation variant interpolation (TVI)
task. The black region in figure 1(a) indicates the missing region where the four selected patches
with missing parts are visualized in figure 1(b). The interpolation process for the central pixel in
each patch is done by four different weighting functions shown in the bottom of figure 1(b). This
process cannot be simply modeled by a single kernel due to the inherent spatially varying property.
In fact, TVI operations are common in many vision applications. Image super-resolution, which
aims to interpolate a high resolution image from a low resolution observation, also suffers from the
* Project page: http://www.deeplearning.cc/shepardcnn
Figure 1: Illustration of translation variant interpolation. (a) The application of inpainting. The black regions
indicate the missing part. (b) Four selected patches. The bottom row shows the kernels for interpolating the
central pixel of each patch.
same problem: different local patches have different patterns of anchor points. We will show that it
is thus less optimal to use the traditional convolutional neural network to do the translation variant
operations for the super-resolution task.
In this paper, we draw on the Shepard method [10] and devise a novel CNN architecture named Shepard Convolutional Neural Networks (ShCNN), which efficiently equips a conventional CNN with the
ability to learn translation variant operations for irregularly spaced data. By adding only a few
feature maps in the new Shepard layer and optimizing a more powerful TVI procedure in an end-to-end fashion, the network is able to achieve stronger results than a much deeper architecture. We
demonstrate that the resulting system is general enough to benefit a number of applications with TVI
operations.
2 Related Work
Deep learning methods have recently been introduced to the area of low-level computer vision and
image processing. Burger et al. [6] used a simple multi-layer neural network to directly learn a
mapping between noisy and clear image patches. Xie et al. [5] adopted a sparse auto-encoder and
demonstrated its ability to do blind image inpainting. A three-layer CNN was used in [8] to tackle
the problem of rain drops and dirt. It demonstrated the ability of CNNs to blindly handle translation
variant problems in real-world challenges.
Xu et al. [7] advocated the use of generative approaches to guide the design of the CNN for deconvolution tasks. In [9], edge-aware filters can be well approximated using CNN. While it is feasible
to use the translation invariant operators, such as convolution, to obtain the translation variant results
in a deep neural network architecture, it is less effective in achieving high quality results for interpolation operations. The first attempt using CNN to perform image super-resolution [4] connected
the CNN approach to the sparse coding ones. But it failed to beat the state-of-the-art super resolution system [11]. In this paper, we focus on the design of deep neural network layer that better fits
the translation variant interpolation tasks. We note that TVI is the essential step for a wide range of
low-level vision applications including inpainting, dirt removal, noise suppression, super-resolution,
to name a few.
3 Analysis
Deep learning approaches without an explicit TVI mechanism have generated reasonable results in a few
tasks requiring the translation variant property. To some extent, a deep architecture with multiple layers of
nonlinearity is expressive enough to approximate certain TVI operations given a sufficient amount of training
data. It is, however, non-trivial to beat non-CNN based approaches while ensuring high efficiency
and simplicity.
To see this, we experimented with the CNN architecture in [4] and [8] and trained a CNN with three
convolutional layers by using 1 million synthetic corrupted/clear image pairs. Network and training
details as well as the concrete statistics of the data will be covered in the experiment section. Typical
test images are shown in the left column of figure 2 whereas the results of this model are displayed
in the mid-left column of the same figure. We found that visually very similar results as in [5] are
obtained, namely obvious residues of the text are still left in the images. We also experimented with
a much deeper network by adding more convolutional layers, virtually replicating the network in
[8] by 2,3, and 4 times. Although slight visual differences are found in the results, no fundamental
improvement in the missing regions is observed, namely residue still remains.
A sensible next step is to explicitly inform the network about where the missing pixels are so that
the network has the opportunity to figure out more plausible solutions for TVI operations. For many
applications, the underlying mask indicating the processed regions can be detected or be known
in advance. Sample applications include image completion/inpainting, image matting, dirt/impulse
noise removal, etc. Other applications such as sparse point propagation and super resolution by
nature have the masks for unknown regions.
One way to incorporate the mask into the network is to treat it as an additional channel of the input.
We tested this idea with the same set of network and experimental settings as the previous trial.
The results showed that such an additional piece of information did bring about improvement but was still
considerably far from satisfactory in removing the residues. Results are visualized in the mid-right
column of figure 2. To learn a tractable TVI model, we devise in the next section a novel architecture
with an effective mechanism to exploit the information contained in the mask.
4 Shepard Convolutional Neural Networks
We initiate the attempt to leverage the traditional interpolation framework to guide the design of
neural network architecture for TVI. We turn to the Shepard framework [10] which weighs known
pixels differently according to their spatial distances to the processed pixel. Specifically, the Shepard
method can be re-written in a convolution form:

J_p = (K * I)_p / (K * M)_p    if M_p = 0
J_p = I_p                      if M_p = 1        (1)
where I and J are the input and output images, respectively. p indexes the image coordinates. M is
the binary indicator: M_p = 0 indicates the pixel values are unknown. * is the convolution operation.
K is the kernel function, with weights inversely proportional to the distance between a pixel
with M_p = 1 and the pixel to process. The element-wise division between the convolved image
and the convolved mask naturally controls the way pixel information is propagated across the regions. It thus enables interpolation of irregularly-spaced data and makes the operation translation variant. The key element of the Shepard method affecting the interpolation result is the definition of the convolution kernel. We thus propose a new convolutional layer in the light of the Shepard method, but allow for a more flexible, data-driven kernel design. The layer is referred to as
the Shepard interpolation layer.
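To make Eq. (1) concrete, here is a small NumPy sketch of classical Shepard interpolation with a fixed kernel. The kernel size, the inverse-distance exponent, and the convention that unknown pixels of I are stored as zeros are illustrative assumptions, not details fixed by the paper.

import numpy as np
from scipy.signal import convolve2d

def inverse_distance_kernel(size=5, p=2.0):
    # Weights inversely proportional to distance from the center pixel;
    # the kernel size and exponent p are illustrative choices.
    c = size // 2
    yy, xx = np.mgrid[-c:c + 1, -c:c + 1]
    d = np.hypot(xx, yy)
    K = np.zeros_like(d, dtype=float)
    K[d > 0] = 1.0 / d[d > 0] ** p   # center pixel gets zero weight
    return K

def shepard_interpolate(I, M, K):
    """Eq. (1): J_p = (K*I)_p / (K*M)_p where M_p = 0, and J_p = I_p otherwise.

    Unknown pixels of I are assumed to be zero, so K*I sums known values only.
    """
    num = convolve2d(I, K, mode="same")
    den = convolve2d(M.astype(float), K, mode="same")
    filled = num / np.maximum(den, 1e-8)   # guard against empty neighborhoods
    return np.where(M == 1, I, filled)

The Shepard interpolation layer introduced next replaces the fixed kernel K above with learned, layer-specific kernels.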
Figure 2: Comparison between ShCNN and CNN in image inpainting. Input images (Left). Results from a
regular CNN (Mid-left). Results from a regular CNN trained with masks (Mid-right). Our results (Right).
4.1 The Shepard Interpolation Layer
The feed-forward pass of the trainable interpolation layer can be mathematically described as the
following equation,
F_i^n(F^{n-1}, M^n) = σ( Σ_j (K_ij^n * F_j^{n-1}) / (K_ij^n * M^n) + b^n ),   n = 1, 2, 3, ...        (2)
where n is the index of layers. The subscript i in F_i^n indexes the feature maps in layer n, and j in F_j^{n-1} indexes the feature maps in layer n-1. F^{n-1} and M^n are the input and the mask of the current layer, respectively, and F^{n-1} represents all the feature maps in layer n-1. K_ij are the trainable kernels, which are shared between the numerator and the denominator: the same K_ij is convolved with the activations of the previous layer in the numerator and with the mask of the current layer M^n in the denominator. F^{n-1} could be the output feature maps of regular layers in a CNN, such as a convolutional layer or a pooling layer. It could also be a previous Shepard interpolation layer, which is itself a function of F^{n-2} and M^{n-1}; thus, Shepard interpolation layers can be stacked together to form a highly nonlinear interpolation operator. b is the bias term and σ is the nonlinearity imposed on the network. F is a smooth and differentiable function, so standard back-propagation can be used to train the parameters.
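As a sanity check on Eq. (2), the following NumPy sketch implements the forward pass of one Shepard interpolation layer. The ReLU choice for σ follows the experiments reported later; the explicit loops (rather than a batched GPU convolution) and the small epsilon guard are illustrative simplifications.

import numpy as np
from scipy.signal import convolve2d

def shepard_layer_forward(F_prev, M, K, b, eps=1e-8):
    """Eq. (2). F_prev: (J, H, W) input maps; M: (H, W) binary mask;
    K: (I, J, kh, kw) trainable kernels; b: (I,) biases."""
    Mf = M.astype(float)
    n_out, n_in = K.shape[0], K.shape[1]
    out = np.zeros((n_out,) + F_prev.shape[1:])
    for i in range(n_out):
        acc = np.zeros(F_prev.shape[1:])
        for j in range(n_in):
            num = convolve2d(F_prev[j], K[i, j], mode="same")
            den = convolve2d(Mf, K[i, j], mode="same")
            acc += num / (den + eps)   # shared kernel in numerator and denominator
        out[i] = np.maximum(acc + b[i], 0.0)   # sigma = ReLU
    return out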
Figure 3 illustrates our neural network architecture with Shepard interpolation layers. The inputs of
the Shepard interpolation layer are images/feature maps as well as masks indicating where interpolation should occur. Note that the interpolation layer can be applied repeatedly to construct more
complex interpolation functions with multiple layers of nonlinearity. The mask is a binary map of
value one for the known area and zero for the missing area. The same kernel is applied to the image and
the mask. We note that the mask for layer n+1 can be automatically generated from the convolved mask
K^n * M^n of the previous layer, by zeroing out insignificant values and thresholding it. It is
important for tasks with relatively large missing areas, such as inpainting, where sophisticated ways of propagation may be learned from data by multi-stage Shepard interpolation layers with nonlinearity.
This is also a flexible way to balance the kernel size and the depth of the network. We refer to a convolutional neural network with Shepard interpolation layers as a Shepard convolutional neural network (ShCNN).

Figure 3: Illustration of ShCNN architecture for multiple layers of interpolation.
4.2 Discussion
Although standard back-propagation can be used, because F is a function of both K's in the fraction, the matrix form of the quotient rule for derivatives needs to be used in deriving the back-propagation equations of the interpolation layer. To make the implementation efficient, we unroll the two convolution operations K * F and K * M into two matrix multiplications, denoted W * I and W * M, where I and M are the unrolled versions of F and M, and W is the rearrangement of the kernels with each kernel listed in a single row. E is the error function that computes the distance between the network output and the ground truth; the L2 norm is used as this distance. We also denote Z^n = (K^n * F^{n-1}) / (K^n * M^n). The derivative of the error function E with respect to Z^n, δ^n = ∂E/∂Z^n, can be
computed the same way as in previous CNN papers [12, 1]. Once this value is computed, we show
that the derivative of E with respect to the kernels W connecting the j-th node in the (n-1)-th layer to the i-th node in the n-th layer can be computed by
∂E/∂W_ij^n = Σ_m [ ((W_ij^n * M_jm) · I_jm - (W_ij^n * I_jm) · M_jm) / (W_ij^n * M_jm)^2 ] · δ_im,        (3)
where m is the column index in I, M, and δ.
The denominator of each element in the outer summation in Eq. 3 is different. Therefore, the
numerator of each summation element has to be computed separately. While this operation can still
be efficiently parallelized by vectorization, it requires significantly more memory and computations
than the regular CNNs. Though it brings extra workload in training, the new interpolation layer only adds a fraction of additional computation at test time. We can discern this from Eq. 2: the only added operations are the convolution of the mask with K and the point-wise division. Because the two convolutions share the same kernel, they can be efficiently implemented as a single convolution over a batch of size 2, stacking the feature maps with the mask. This keeps the computation of the Shepard interpolation layer competitive compared to the traditional convolution layer.
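The two implementation notes above can be sketched as follows: the shared-kernel pair K*F and K*M computed as one "batch of 2" convolution, and the next-layer mask obtained by thresholding the convolved mask. The threshold value tau is an illustrative assumption.

import numpy as np
from scipy.signal import convolve2d

def conv_pair(F_j, M, K_ij):
    # One kernel, two inputs: in a GPU framework this is a single
    # convolution over a batch of size 2 (feature map stacked with mask).
    stacked = np.stack([F_j, M.astype(float)])
    num, den = (convolve2d(x, K_ij, mode="same") for x in stacked)
    return num, den

def next_layer_mask(M, K_ij, tau=1e-3):
    # Mask for layer n+1 from the convolved mask K^n * M^n: zero out
    # insignificant responses, then threshold (tau is illustrative).
    resp = convolve2d(M.astype(float), K_ij, mode="same")
    return (resp > tau).astype(np.uint8)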
We note that it is also natural to integrate the interpolation layer into any previous CNN architecture.
This is because the new layer only adds a mask input to the convolutional layer, keeping all other
interfaces the same. This layer can also degenerate to a fully connected layer because the unrolled
version of Eq. 2 merely contains matrix multiplication in the fraction. Therefore, as long as the TVI
operators are necessary in the task, no matter where it is needed in the architecture and the type of
layer before or after it, the interpolation layer can be seamlessly plugged in.
Last but not least, the interpolation kernels in the layer are learned from data rather than hand-crafted,
therefore it is more flexible and could be more powerful than pre-designed kernels. On the other
hand, it is end-to-end trainable so that the learned interpolation operators are embedded in the overall
optimization objective of the model.
5 Experiments
We conducted experiments on two applications involving TVI: inpainting and super-resolution. The training data was generated by randomly sampling 1 million patches from 1000 natural images scraped from Flickr. Grayscale patches of size 48x48 were used for both tasks to facilitate the comparison with previous studies. All PSNR comparisons in the experiments are based on grayscale results. Our model can be directly extended to process color images.
5.1 Inpainting
The natural images are contaminated by masks containing text of different sizes and fonts as shown
in figure 2. We assume the binary masks indicating missing regions are known in advance. The
ShCNN for inpainting consists of five layers, two of which are Shepard interpolation layers. We use the ReLU function [1] to impose nonlinearity in all our experiments. 4x4 filters were used in the first Shepard layer to generate 8 feature maps, followed by another Shepard interpolation layer with 4x4 filters. The rest of the ShCNN is a conventional CNN architecture. The filters for the third layer are of size 9x9x8 and are used to generate 128 feature maps. 1x1x128 filters are used in the fourth layer. 8x8 filters are used to carry out the reconstruction of image details. Visual results are shown in the last column of figure 2. The results for the comparisons are generated using the architecture in [8]. More examples are provided on the project webpage.
(a) Ground Truth / PSNR
(b) Bicubic / 22.10dB
(c) KSVD / 23.57dB
(d) NE+LLE / 23.38dB
(e) ANR / 23.52dB
(f) A+ / 24.42dB
(g) SRCNN / 25.07dB
(h) ShCNN / 25.63dB
Figure 4: Visual comparison. Factor 4 upscaling of the butterfly image in Set5 [14].
5.2 Super Resolution
The quantitative evaluation of super resolution is conducted using synthetic data where the high
resolution images are first downscaled by a factor to generate low resolution patches. To perform
super resolution, we upscale the low resolution patches and zero out the pixels in the upscaled
images, leaving one copy of the pixels from the low resolution images. In this regard, super resolution can be seen as a special form of inpainting with a repeated pattern of missing areas.
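The construction described above can be sketched directly: the upscaled image keeps one copy of the low-resolution pixels on a regular coarse grid and zeros everywhere else, and the mask marks the known positions. Placing the known pixels at stride-s offsets starting from the corner is our illustrative convention.

import numpy as np

def sr_input_and_mask(lr, s):
    """Form the super-resolution input and mask of Section 5.2: upscale a
    low-resolution image `lr` by factor `s`, keeping one copy of its pixels."""
    h, w = lr.shape
    hi = np.zeros((h * s, w * s), dtype=lr.dtype)
    mask = np.zeros((h * s, w * s), dtype=np.uint8)
    hi[::s, ::s] = lr     # the known pixels, copied from the LR image
    mask[::s, ::s] = 1    # repeated pattern of known/missing positions
    return hi, mask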
(All values are PSNR in dB.)

Set14 (×2)    Bicubic  K-SVD  NE+NNLS  NE+LLE  ANR    A+     SRCNN  ShCNN
baboon        24.86    25.47  25.40    25.52   25.54  25.65  25.62  25.79
barbara       28.00    28.70  28.56    28.63   28.59  28.70  28.59  28.59
bridge        26.58    27.55  27.38    27.51   27.54  27.78  27.70  27.92
coastguard    29.12    30.41  30.23    30.38   30.44  30.57  30.49  30.82
comic         26.46    27.89  27.61    27.72   27.80  28.65  28.27  28.70
face          34.83    35.57  35.46    35.61   35.63  35.74  35.61  35.75
flowers       30.37    32.28  31.93    32.19   32.29  33.02  33.03  33.53
foreman       34.14    36.18  35.93    36.41   36.40  36.94  36.20  36.14
lenna         34.70    36.21  36.00    36.30   36.32  36.60  36.50  36.71
man           29.25    30.44  30.29    30.43   30.47  30.87  30.82  31.06
monarch       32.94    35.75  35.26    35.58   35.71  37.01  37.18  38.09
pepper        34.97    36.59  36.18    36.36   36.39  37.02  36.75  37.03
ppt3          26.87    29.30  28.98    28.97   28.97  30.09  30.40  31.07
zebra         30.63    33.21  32.59    33.00   33.07  33.59  33.29  33.51
Avg PSNR      30.23    31.81  31.55    31.76   31.80  32.28  32.18  32.48

Set14 (×3)    Bicubic  K-SVD  NE+NNLS  NE+LLE  ANR    A+     SRCNN  ShCNN
baboon        23.21    23.52  23.49    23.55   23.56  23.62  23.60  23.69
barbara       26.25    26.76  26.67    26.74   26.69  26.47  26.66  26.54
bridge        24.40    25.02  24.86    24.98   25.01  25.17  25.07  25.28
coastguard    26.55    27.15  27.00    27.07   27.08  27.27  27.20  27.43
comic         23.12    23.96  23.83    23.98   24.04  24.38  24.39  24.70
face          32.82    33.53  33.45    33.56   33.62  33.76  33.58  33.71
flowers       27.23    28.43  28.21    28.38   28.49  29.05  28.97  29.42
foreman       31.18    33.19  32.87    33.21   33.23  34.30  33.35  34.45
lenna         31.68    33.00  32.82    33.01   33.08  33.52  33.39  33.68
man           27.01    27.90  27.72    27.87   27.92  28.28  28.18  28.41
monarch       29.43    31.10  30.76    30.95   31.09  32.14  32.39  33.37
pepper        32.39    34.07  33.56    33.80   33.82  34.74  34.35  34.77
ppt3          23.71    25.23  24.81    24.94   25.03  26.09  26.02  26.89
zebra         26.63    28.49  28.12    28.31   28.43  28.98  28.87  29.10
Avg PSNR      27.54    28.67  28.44    28.60   28.65  29.13  29.00  29.39

Set14 (×4)    Bicubic  K-SVD  NE+NNLS  NE+LLE  ANR    A+     SRCNN  ShCNN
baboon        22.44    22.66  22.63    22.67   22.69  22.74  22.70  22.75
barbara       25.15    25.58  25.53    25.58   25.60  25.74  25.70  25.80
bridge        23.15    23.65  23.54    23.60   23.63  23.77  23.66  23.83
coastguard    25.48    25.81  25.82    25.81   25.80  25.98  25.93  26.13
comic         21.69    22.31  22.19    22.26   22.33  22.59  22.53  22.74
face          31.55    32.18  32.09    32.19   32.23  32.44  32.12  32.35
flowers       25.52    26.44  26.28    26.38   26.47  26.90  26.84  27.18
foreman       29.41    31.01  30.90    30.90   30.83  32.24  31.47  32.30
lenna         29.84    30.92  30.82    30.93   30.99  31.41  31.20  31.45
man           25.70    26.46  26.30    26.38   26.43  26.78  26.65  26.82
monarch       27.46    28.72  28.48    28.58   28.70  29.39  29.89  30.30
pepper        30.60    32.13  31.78    31.87   31.93  32.87  32.34  32.82
ppt3          21.98    23.05  22.61    22.77   22.85  23.64  23.84  24.49
zebra         24.08    25.47  25.17    25.36   25.47  25.94  25.97  26.21
Avg PSNR      26.00    26.88  26.72    26.81   26.85  27.32  27.20  27.51
Table 1: PSNR comparison on the Set14 [13] image set for upscaling of factor 2, 3 and 4. Methods compared:
Bicubic, K-SVD [13], NE+NNLS [14], NE+LLE [15], ANR [16], A+ [11], SRCNN [4], Our ShCNN
We use one Shepard interpolation layer at the top, with a kernel size of 8x8 and 16 feature maps. The other configuration of the network is the same as in our inpainting network. During training, weights were randomly initialized by drawing from a Gaussian distribution with zero mean and standard deviation of 0.03. AdaGrad [17] was used in all experiments with a learning rate of 0.001 and a fudge factor of 1e-6. Table 1 shows the quantitative results of our ShCNN on a widely used super-resolution data set [13] for upscaling images 2, 3, and 4 times, respectively.
We compared our method with 7 methods including the two current state-of-the-art systems [11, 4].
Clear improvement over the state-of-the-art systems can be observed. Visual comparison between
our method and the previous methods is illustrated in figure 4 and figure 5.
6 Conclusions
In this paper, we disclosed the limitation of previous CNN architectures in image processing tasks
in need of translation variant interpolation. A new architecture based on Shepard interpolation was
proposed and successfully applied to image inpainting and super-resolution. The effectiveness of
(a) Ground Truth / PSNR
(b) Bicubic / 36.81dB
(c) KSVD / 39.93dB
(d) NE+LLE / 40.00dB
(e) ANR / 40.04dB
(f) A+ / 41.12dB
(g) SRCNN / 40.64dB
(h) ShCNN / 41.30dB
Figure 5: Visual comparison. Factor 2 upscaling of the bird image in Set5 [14].
the ShCNN with Shepard interpolation layers has been demonstrated by its state-of-the-art performance.
References
[1] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional
neural networks. In: NIPS. (2012) 1106–1114
[2] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V.,
Rabinovich, A.: Going deeper with convolutions. In: CVPR. (2015)
[3] Sun, Y., Liang, D., Wang, X., Tang, X.: Deepid3: Face recognition with very deep neural
networks. In: arXiv:1502.00873. (2015)
[4] Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image
super-resolution. In: ECCV. (2014)
[5] Xie, J., Xu, L., Chen, E.: Image denoising and inpainting with deep neural networks. In:
NIPS. (2012)
[6] Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: Can plain neural networks
compete with BM3D? In: CVPR. (2012)
[7] Xu, L., Ren, J.S., Liu, C., Jia, J.: Deep convolutional neural network for image deconvolution.
In: NIPS. (2014)
[8] Eigen, D., Krishnan, D., Fergus, R.: Restoring an image taken through a window covered with
dirt or rain. In: ICCV. (2013)
[9] Xu, L., Ren, J.S., Yan, Q., Liao, R., Jia, J.: Deep edge-aware filters. In: ICML. (2015)
[10] Shepard, D.: A two-dimensional interpolation function for irregularly-spaced data. In: 23rd
ACM national conference. (1968)
[11] Timofte, R., Smet, V.D., Gool, L.V.: A+: Adjusted anchored neighborhood regression for fast
super-resolution. In: ACCV. (2014)
[12] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document
recognition. In: Proceedings of IEEE. (1998)
[13] Zeyde, R., Elad, M., Protter, M.: On single image scale-up using sparse-representations.
Curves and Surfaces 6920 (2012) 711–730
[14] Bevilacqua, M., Roumy, A., Guillemot, C., Morel, M.L.A.: Low-complexity single-image
super-resolution based on nonnegative neighbor embedding. In: BMVC. (2012)
[15] Chang, H., Yeung, D.Y., Xiong, Y.: Super-resolution through neighbor embedding. In: CVPR.
(2004)
[16] Timofte, R., Smet, V.D., Gool, L.V.: Anchored neighborhood regression for fast examplebased super-resolution. In: ICCV. (2013)
[17] Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (2011) 2121?2159
Learning Structured Output Representation
using Deep Conditional Generative Models
Kihyuk Sohn*†    Xinchen Yan†    Honglak Lee†
* NEC Laboratories America, Inc.
† University of Michigan, Ann Arbor
ksohn@nec-labs.com, {xcyan,honglak}@umich.edu
Abstract
Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a
large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional
generative model for structured output prediction using Gaussian latent variables.
The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction
algorithms, such as input noise-injection and multi-scale prediction objective at
training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in
generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complimentary, which
leads to strong pixel-level object segmentation and semantic labeling performance
on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.
1
Introduction
In structured output prediction, it is important to learn a model that can perform probabilistic inference and make diverse predictions. This is because we are not simply modeling a many-to-one
function as in classification tasks, but we may need to model a mapping from single input to many
possible outputs. Recently, the convolutional neural networks (CNNs) have been greatly successful
for large-scale image classification tasks [17, 30, 27] and have also demonstrated promising results
for structured prediction tasks (e.g., [4, 23, 22]). However, the CNNs are not suitable in modeling a
distribution with multiple modes [32].
To address this problem, we propose novel deep conditional generative models (CGMs) for output
representation learning and structured prediction. In other words, we model the distribution of highdimensional output space as a generative model conditioned on the input observation. Building
upon recent development in variational inference and learning of directed graphical models [16,
24, 15], we propose a conditional variational auto-encoder (CVAE). The CVAE is a conditional
directed graphical model whose input observations modulate the prior on Gaussian latent variables
that generate the outputs. It is trained to maximize the conditional log-likelihood, and we formulate
the variational learning objective of the CVAE in the framework of stochastic gradient variational
Bayes (SGVB) [16]. In addition, we introduce several strategies, such as input noise-injection and
multi-scale prediction training methods, to build a more robust prediction model.
In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the
deterministic neural network counterparts in generating diverse but realistic output predictions using
stochastic inference. We demonstrate the importance of stochastic neurons in modeling the structured output when the input data is partially provided. Furthermore, we show that the proposed
training schemes are complementary, leading to strong pixel-level object segmentation and labeling
performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.
In summary, the contribution of the paper is as follows:
- We propose CVAE and its variants that are efficiently trainable in the SGVB framework, and introduce novel strategies to enhance robustness of the models for structured prediction.
- We demonstrate the effectiveness of our proposed algorithm with Gaussian stochastic neurons in modeling multi-modal distributions of structured output variables.
- We achieve strong semantic object segmentation performance on CUB and LFW datasets.
The paper is organized as follows. We first review related work in Section 2. We provide preliminaries in Section 3 and develop our deep conditional generative model in Section 4. In Section 5,
we evaluate our proposed models and report experimental results. Section 6 concludes the paper.
2 Related work
Since the recent success of supervised deep learning on large-scale visual recognition [17, 30, 27],
there have been many approaches to tackle mid-level computer vision tasks, such as object detection [6, 26, 31, 9] and semantic segmentation [4, 3, 23, 22], using supervised deep learning
techniques. Our work falls into this category of research in developing advanced algorithms for
structured output prediction, but we incorporate the stochastic neurons to model the conditional distributions of complex output representation whose distribution possibly has multiple modes. In this
sense, our work shares a similar motivation to the recent work on image segmentation tasks using
hybrid models of CRF and Boltzmann machine [13, 21, 37]. Compared to these, our proposed model
is an end-to-end system for segmentation using convolutional architecture and achieves significantly
improved performance on challenging benchmark tasks.
Along with the recent breakthroughs in supervised deep learning methods, there has been a progress
in deep generative models, such as deep belief networks [10, 20] and deep Boltzmann machines [25].
Recently, the advances in inference and learning algorithms for various deep generative models
significantly enhanced this line of research [2, 7, 8, 18]. In particular, the variational learning
framework of deep directed graphical model with Gaussian latent variables (e.g., variational autoencoder [16, 15] and deep latent Gaussian models [24]) has been recently developed. Using the
variational lower bound of the log-likelihood as the training objective and the reparameterization
trick, these models can be easily trained via stochastic optimization. Our model builds upon this
framework, but we focus on modeling the conditional distribution of output variables for structured
prediction problems. Here, the main goal is not only to model the complex output representation but
also to make a discriminative prediction. In addition, our model can effectively handle large-sized
images by exploiting the convolutional architecture.
The stochastic feed-forward neural network (SFNN) [32] is a conditional directed graphical model
with a combination of real-valued deterministic neurons and the binary stochastic neurons. The
SFNN is trained using the Monte Carlo variant of generalized EM by drawing multiple samples
from the feed-forward proposal distribution and weighing them differently with importance weights.
Although our proposed Gaussian stochastic neural network (which will be described in Section 4.2)
looks similar on surface, there are practical advantages in optimization of using Gaussian latent
variables over the binary stochastic neurons. In addition, thanks to the recognition model used in
our framework, it is sufficient to draw only a few samples during training, which is critical in training
very deep convolutional networks.
3 Preliminary: Variational Auto-encoder
The variational auto-encoder (VAE) [16, 24] is a directed graphical model with certain types of
latent variables, such as Gaussian latent variables. A generative process of the VAE is as follows: a
set of latent variables z is generated from the prior distribution p_θ(z), and the data x is generated by
the generative distribution p_θ(x|z) conditioned on z: z ~ p_θ(z), x ~ p_θ(x|z).
In general, parameter estimation of directed graphical models is often challenging due to intractable
posterior inference. However, the parameters of the VAE can be estimated efficiently in the stochastic gradient variational Bayes (SGVB) [16] framework, where the variational lower bound of the
log-likelihood is used as a surrogate objective function. The variational lower bound is written as:
log p_θ(x) = KL(q_φ(z|x) || p_θ(z|x)) + E_{q_φ(z|x)}[ -log q_φ(z|x) + log p_θ(x, z) ]        (1)
           ≥ -KL(q_φ(z|x) || p_θ(z)) + E_{q_φ(z|x)}[ log p_θ(x|z) ]        (2)
In this framework, a proposal distribution q_φ(z|x), also known as a "recognition" model, is introduced to approximate the true posterior p_θ(z|x). Multilayer perceptrons (MLPs) are used to model the recognition and generation models. Assuming Gaussian latent variables, the first term of Equation (2) can be marginalized, while the second term cannot. Instead, the second term can be approximated by drawing samples z^(l) (l = 1, ..., L) from the recognition distribution q_φ(z|x), and the empirical objective of the VAE with Gaussian latent variables is written as follows:
L̃_VAE(x; θ, φ) = -KL(q_φ(z|x) || p_θ(z)) + (1/L) Σ_{l=1}^{L} log p_θ(x|z^(l)),        (3)
where z^(l) = g_φ(x, ε^(l)), ε^(l) ~ N(0, I). Note that the recognition distribution q_φ(z|x) is reparameterized with a deterministic, differentiable function g_φ(·, ·), whose arguments are the data x and the noise variable ε. This trick allows error backpropagation through the Gaussian latent variables, which is essential in VAE training as the model is composed of multiple MLPs for the recognition and generation models. As a result, the VAE can be trained efficiently using stochastic gradient descent (SGD).
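A small NumPy sketch of Eq. (3) for a single example follows. The closed-form KL term assumes a diagonal-Gaussian q_φ(z|x) against a standard normal prior, and the reconstruction term assumes a Bernoulli decoder; both are common modeling choices but are assumptions here, as is the placeholder `decode` callable.

import numpy as np

def vae_lower_bound(mu, logvar, decode, x, L=1, rng=None):
    """Eq. (3). (mu, logvar) parameterize q(z|x); decode(z) returns
    Bernoulli means for p(x|z) (an illustrative likelihood choice)."""
    rng = rng or np.random.default_rng()
    # -KL(q(z|x) || N(0, I)), closed form for a diagonal Gaussian
    neg_kl = 0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    recon = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * logvar) * eps      # reparameterization trick
        px = np.clip(decode(z), 1e-7, 1 - 1e-7)
        recon += np.sum(x * np.log(px) + (1 - x) * np.log(1 - px))
    return neg_kl + recon / L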
4 Deep Conditional Generative Models for Structured Output Prediction
As illustrated in Figure 1, there are three types of variables in a deep conditional generative model
(CGM): input variables x, output variables y, and latent variables z. The conditional generative
process of the model is given in Figure 1(b) as follows: for given observation x, z is drawn from the
prior distribution p_θ(z|x), and the output y is generated from the distribution p_θ(y|x, z). Compared to the baseline CNN (Figure 1(a)), the latent variables z allow for modeling multiple modes in the conditional distribution of output variables y given input x, making the proposed CGM suitable for modeling one-to-many mappings. The prior of the latent variables z is modulated by the input x in our formulation; however, the constraint can be easily relaxed to make the latent variables statistically independent of the input variables, i.e., p_θ(z|x) = p_θ(z) [15].
Deep CGMs are trained to maximize the conditional log-likelihood. Often the objective function is
intractable, and we apply the SGVB framework to train the model. The variational lower bound of
the model is written as follows (complete derivation can be found in the supplementary material):
log p_θ(y|x) ≥ -KL(q_φ(z|x, y) || p_θ(z|x)) + E_{q_φ(z|x,y)}[ log p_θ(y|x, z) ]        (4)
and the empirical lower bound is written as:
L̃_CVAE(x, y; θ, φ) = -KL(q_φ(z|x, y) || p_θ(z|x)) + (1/L) Σ_{l=1}^{L} log p_θ(y|x, z^(l)),        (5)
where z^(l) = g_φ(x, y, ε^(l)), ε^(l) ~ N(0, I), and L is the number of samples. We call this model the conditional variational auto-encoder¹ (CVAE). The CVAE is composed of multiple MLPs, such as the recognition network q_φ(z|x, y), the (conditional) prior network p_θ(z|x), and the generation network p_θ(y|x, z). In designing the network architecture, we build the network components of the CVAE on top of the baseline CNN. Specifically, as shown in Figure 1(d), not only the direct input x but also the initial guess ŷ made by the CNN is fed into the prior network. Such a recurrent connection has
been applied for structured output prediction problems [23, 13, 28] to sequentially update the prediction by revising the previous guess while effectively deepening the convolutional network. We also
found that a recurrent connection, even one iteration, showed significant performance improvement.
Details about network architectures can be found in the supplementary material.
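For Eq. (5), the only structural change from the VAE bound is that the KL is taken against the conditional prior p_θ(z|x) rather than N(0, I). Below is a sketch assuming diagonal Gaussians for both the recognition and prior networks; `recog`, `prior`, and `decode` are placeholder callables standing in for the three networks.

import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) for two diagonal Gaussians (first term of Eq. (5))
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def cvae_lower_bound(recog, prior, decode, x, y, L=1, rng=None):
    """Eq. (5). recog(x, y) -> (mu_q, logvar_q); prior(x) -> (mu_p, logvar_p);
    decode(x, y, z) -> log p(y|x, z)."""
    rng = rng or np.random.default_rng()
    mu_q, logvar_q = recog(x, y)
    mu_p, logvar_p = prior(x)
    kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
    recon = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu_q.shape)
        z = mu_q + np.exp(0.5 * logvar_q) * eps   # z ~ q(z|x, y), reparameterized
        recon += decode(x, y, z)
    return -kl + recon / L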
4.1 Output inference and estimation of the conditional likelihood
Once the model parameters are learned, we can make a prediction of an output y from an input x by
following the generative process of the CGM. To evaluate the model on structured output prediction
tasks (i.e., at testing time), we can measure the prediction accuracy by performing a deterministic inference without sampling z, i.e., ŷ = argmax_y p_θ(y|x, z*), where z* = E[z|x].²
¹ Although the model is not trained to reconstruct the input x, our model can be viewed as a type of VAE that performs auto-encoding of the output variables y conditioned on the input x at training time.
² Alternatively, we can draw multiple z's from the prior distribution and use the average of the posteriors to make a prediction, i.e., ŷ = argmax_y (1/L) Σ_{l=1}^{L} p_θ(y|x, z^(l)), z^(l) ~ p_θ(z|x).
Figure 1: Illustration of the conditional graphical models (CGMs). (a) the predictive process of
output Y for the baseline CNN; (b) the generative process of CGMs; (c) an approximate inference
of Z (also known as recognition process [16]); (d) the generative process with recurrent connection.
Another way to evaluate the CGMs is to compare the conditional likelihoods of the test data. A
straightforward approach is to draw samples z's using the prior network and take the average of the
likelihoods. We call this method the Monte Carlo (MC) sampling:
p_θ(y|x) ≈ (1/S) Σ_{s=1}^{S} p_θ(y|x, z^(s)),   z^(s) ~ p_θ(z|x)        (6)
It usually requires a large number of samples for the Monte Carlo log-likelihood estimation to be
accurate. Alternatively, we use importance sampling to estimate the conditional likelihoods [24]:
p_θ(y|x) ≈ (1/S) Σ_{s=1}^{S} [ p_θ(y|x, z^(s)) p_θ(z^(s)|x) / q_φ(z^(s)|x, y) ],   z^(s) ~ q_φ(z|x, y)        (7)
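Both estimators can be sketched in log-space; the log-sum-exp formulation for numerical stability is our addition, since the paper states Eqs. (6) and (7) in probability space. The density and sampler callables are placeholders.

import numpy as np
from scipy.special import logsumexp

def cll_monte_carlo(log_py_xz, prior_sampler, S):
    """Eq. (6): log of the average of p(y|x, z^(s)) over prior samples."""
    logs = [log_py_xz(prior_sampler()) for _ in range(S)]
    return logsumexp(logs) - np.log(S)

def cll_importance_sampling(log_py_xz, log_pz_x, log_qz_xy, q_sampler, S):
    """Eq. (7): importance-weighted estimate with proposal q(z|x, y)."""
    logs = []
    for _ in range(S):
        z = q_sampler()
        logs.append(log_py_xz(z) + log_pz_x(z) - log_qz_xy(z))
    return logsumexp(logs) - np.log(S)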
4.2 Learning to predict structured output
Although the SGVB learning framework has been shown to be effective in training deep generative models [16, 24], the conditional auto-encoding of output variables at training may not be optimal to
make a prediction at testing in deep CGMs. In other words, the CVAE uses the recognition network
q_φ(z|x, y) at training, but it uses the prior network p_θ(z|x) at testing to draw samples z's and make
an output prediction. Since y is given as an input for the recognition network, the objective at training can be viewed as a reconstruction of y, which is an easier task than prediction. The negative KL
divergence term in Equation (5) tries to close the gap between two pipelines, and one could consider
allocating more weights on the negative KL term of an objective function to mitigate the discrepancy
in the encoding of latent variables at training and testing, i.e., -(1 + β) KL(q_φ(z|x, y) || p_θ(z|x)) with β ≥ 0. However, we found this approach ineffective in our experiments.
Instead, we propose to train the networks in a way that the prediction pipelines at training and testing
are consistent. This can be done by setting the recognition network the same as the prior network,
i.e., q_φ(z|x, y) = p_θ(z|x), and we get the following objective function:
L̃_GSNN(x, y; θ, φ) = (1/L) Σ_{l=1}^{L} log p_θ(y|x, z^(l)),  where z^(l) = g_θ(x, ε^(l)), ε^(l) ~ N(0, I)        (8)
We call this model the Gaussian stochastic neural network (GSNN).³ Note that the GSNN can be derived from the CVAE by setting the recognition network and the prior network equal. Therefore, the learning tricks of the CVAE, such as the reparameterization trick, can be used to train the GSNN.
Similarly, the inference (at testing) and the conditional likelihood estimation are the same as those
of CVAE. Finally, we combine the objective functions of two models to obtain a hybrid objective:
L̃_hybrid = α L̃_CVAE + (1 - α) L̃_GSNN,        (9)
where α balances the two objectives. Note that when α = 1, we recover the CVAE objective; when α = 0, the trained model is simply a GSNN without the recognition network.
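In a sketch, the hybrid objective of Eq. (9) is just a convex combination of the two bounds, with the GSNN term of Eq. (8) drawing its latent samples from the prior network; the callables mirror those in the CVAE sketch above and remain placeholders.

import numpy as np

def gsnn_objective(prior, decode, x, y, L=1, rng=None):
    # Eq. (8): the recognition network is replaced by the prior network.
    rng = rng or np.random.default_rng()
    mu_p, logvar_p = prior(x)
    recon = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu_p.shape)
        z = mu_p + np.exp(0.5 * logvar_p) * eps   # z ~ p(z|x)
        recon += decode(x, y, z)
    return recon / L

def hybrid_objective(cvae_bound, gsnn_bound, alpha):
    # Eq. (9): alpha = 1 recovers the CVAE; alpha = 0 recovers the GSNN.
    return alpha * cvae_bound + (1.0 - alpha) * gsnn_bound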
4.3 CVAE for image segmentation and labeling
Semantic segmentation [5, 23, 6] is an important structured output prediction task. In this section, we provide strategies to train a robust prediction model for semantic segmentation problems.
Specifically, to learn a high-capacity neural network that generalizes well to unseen data, we
propose to train the network with 1) multi-scale prediction objective and 2) structured input noise.
³ If we set the covariance matrix of the auxiliary Gaussian latent variables to 0, we obtain a deterministic counterpart of the GSNN, which we call a Gaussian deterministic neural network (GDNN).
4.3.1 Training with multi-scale prediction objective
As the image size gets larger (e.g., 128 ? 128), it becomes
1/4
1/2
1
...
X
more challenging to make a fine-grained pixel-level prediction (e.g., image reconstruction, semantic label prediction).
The multi-scale approaches have been used in the sense of
Y1/4
Y1/2
Y
forming a multi-scale image pyramid for an input [5], but not
much for multi-scale output prediction. Here, we propose to
train the network to predict outputs at different scales. By doloss
+
loss
+
loss
ing so, we can make a global-to-local, coarse-to-fine-grained
Figure 2: Multi-scale prediction.
prediction of pixel-level semantic labels. Figure 2 describes
the multi-scale prediction at 3 different scales (1/4, 1/2, and original) for the training.
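One possible implementation of the multi-scale objective is sketched below, assuming a network head that emits logits at each scale; the dictionary layout and the nearest-neighbor downsampling of the label map are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multiscale_loss(logits_by_scale, target, scales=(0.25, 0.5, 1.0)):
    """Sum of cross-entropy losses at 1/4, 1/2, and original resolution.
    logits_by_scale maps scale -> (B, C, H*s, W*s) logits; target is the
    full-resolution (B, H, W) integer label map."""
    total = 0.0
    for s in scales:
        t = target
        if s != 1.0:
            # Downsample labels with nearest-neighbor interpolation.
            t = F.interpolate(target[:, None].float(), scale_factor=s,
                              mode="nearest")[:, 0].long()
        total = total + F.cross_entropy(logits_by_scale[s], t)
    return total
```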
4.3.2 Training with input omission noise

Adding noise to neurons is a widely used technique to regularize deep neural networks during training [17, 29]. Similarly, we propose a simple regularization technique for semantic segmentation: corrupt the input data x into x̃ according to a noise process and optimize the network with the following objective: L̃(x̃, x, y). The noise process could be arbitrary, but for semantic image segmentation, we consider random block omission noise. Specifically, we randomly generate a square mask of width and height less than 40% of the image width and height, respectively, at a random position, and set the pixel values of the input image inside the mask to 0. This can be viewed as providing a more challenging output prediction task during training that simulates block occlusion or missing input. The proposed training strategy is also related to the denoising training methods [34], but in our case we inject noise into the input data only and do not reconstruct the missing input.
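The omission-noise process itself is straightforward to reproduce. Below is a small NumPy sketch under the 40% width/height constraint described above; the exact sampling scheme the authors used may differ in detail.

```python
import numpy as np

def block_omission(image, max_frac=0.4, rng=np.random):
    """Zero out one random rectangular block whose height and width are at
    most max_frac of the image's height and width, respectively."""
    h, w = image.shape[:2]
    bh = rng.randint(1, int(max_frac * h) + 1)
    bw = rng.randint(1, int(max_frac * w) + 1)
    top = rng.randint(0, h - bh + 1)
    left = rng.randint(0, w - bw + 1)
    noisy = image.copy()
    noisy[top:top + bh, left:left + bw] = 0
    return noisy
```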
5 Experiments
We demonstrate the effectiveness of our approach in modeling the distribution of structured output variables. As a proof of concept, we create an artificial experimental setting for structured output prediction using the MNIST database [19]. Then, we evaluate the proposed CVAE models on several benchmark datasets for visual object segmentation and labeling, such as Caltech-UCSD Birds (CUB) [36] and Labeled Faces in the Wild (LFW) [12]. Our implementation is based on MatConvNet [33], a MATLAB toolbox for convolutional neural networks, with Adam [14] as the adaptive learning-rate scheduling algorithm for SGD optimization.
5.1 Toy example: MNIST

To highlight the importance of probabilistic inference through stochastic neurons for structured output variables, we perform an experiment using the MNIST database. Specifically, we divide each digit image into four quadrants, and take one, two, or three quadrant(s) as an input and the remaining quadrants as an output.⁴ As we increase the number of quadrants for an output, the input-to-output mapping becomes more diverse (in terms of one-to-many mapping).

We trained the proposed models (CVAE, GSNN) and the baseline deep neural network and compared their performance. The same network architecture, an MLP with two layers of 1,000 ReLUs for the recognition, conditional prior, and generation networks, followed by 200 Gaussian latent variables, was used for all models in the various experimental settings. Early stopping was used during training, based on the estimated conditional likelihoods on the validation set.
negative CLL                  1 quadrant          2 quadrants         3 quadrants
                              validation  test    validation  test    validation  test
NN (baseline)                   100.03    99.75     62.14     62.18     26.01     25.99
GSNN (Monte Carlo)              100.03    99.82     62.48     62.41     26.20     26.29
CVAE (Monte Carlo)               68.62    68.39     45.57     45.34     20.97     20.96
CVAE (Importance Sampling)       64.05    63.91     44.96     44.73     20.97     20.95
Performance gap                  35.98    35.91     17.51     17.68      5.23      5.33
  - per pixel                     0.061    0.061     0.045     0.045     0.027     0.027

Table 1: The negative CLL on the MNIST database. We increase the number of quadrants for an input from 1 to 3. The performance gap between the CVAE (importance sampling) and the NN is reported.
⁴ A similar experimental setting has been used in the multimodal learning framework, where the left and right halves of the digit images are used as two data modalities [1, 28].
Figure 3: Visualization of generated samples with (left) 1 quadrant and (right) 2 quadrants for an input. We show in each row the input and the ground-truth output overlaid with gray color (first), samples generated by the baseline NNs (second), and samples drawn from the CVAEs (rest).
For qualitative analysis, we visualize the generated output samples in Figure 3. As we can see, the baseline NNs can only make a single deterministic prediction, and as a result the output looks blurry and doesn't look realistic in many cases. In contrast, the samples generated by the CVAE models are more realistic and diverse in shape; sometimes they even change their identity (digit labels), such as from 3 to 5 or from 4 to 9, and vice versa.

We also provide quantitative evidence by estimating the conditional log-likelihoods (CLLs) in Table 1. The CLLs of the proposed models are estimated in the two ways described in Section 4.1. For the MC estimation, we draw 10,000 samples per example to get an accurate estimate. For importance sampling, however, 100 samples per example were enough to obtain an accurate estimate of the CLL. We observe that the estimated CLLs of the CVAE significantly outperform those of the baseline NN. Moreover, as measured by the per-pixel performance gap, the improvement becomes more significant as we use a smaller number of quadrants for an input, which is expected since the input-output mapping becomes more diverse.
5.2 Visual Object Segmentation and Labeling
The Caltech-UCSD Birds (CUB) database [36] includes 6,033 images of birds from 200 species with annotations such as a bounding box of birds and a segmentation mask. Later, Yang et al. [37] annotated these images with more fine-grained segmentation masks by cropping the bird patches using the bounding boxes and resizing them into 128 × 128 pixels. The training/test split proposed in [36] was used in our experiment; for validation purposes, we partition the training set into 10 folds and cross-validate with the mean intersection over union (IoU) score over the folds. The final prediction on the test set was made by averaging the posteriors of an ensemble of 10 networks trained on each of the 10 folds separately. We increase the number of training examples via 'data augmentation' by horizontally flipping the input and output images.

We extensively evaluate the variations of our proposed methods, such as the CVAE, GSNN, and the hybrid model, and provide summary results on the segmentation mask prediction task in Table 2. Specifically, we report the performance of the models with different network architectures and training methods (e.g., multi-scale prediction or noise-injection training).

First, we note that the baseline CNN already beats the previous state-of-the-art obtained by the max-margin Boltzmann machine (MMBM; pixel accuracy: 90.42, IoU: 75.92 with GraphCut for post-processing) [37], even without post-processing. On top of that, we observed significant performance improvement with our proposed deep CGMs.⁵ In terms of prediction accuracy, the GSNN performed the best among our proposed models, and performed even better when trained with the hybrid objective function. In addition, the noise-injection training (Section 4.3) further improves the performance. Compared to the baseline CNN, the proposed deep CGMs significantly reduce the prediction error (e.g., a 21% relative reduction in test pixel-level error) at the expense of 60% more inference time.⁶ Finally, the performances of our two winning entries (GSNN and hybrid) on the validation sets are both significantly better than their deterministic counterpart (GDNN), with p-values less than 0.05, which suggests the benefit of stochastic latent variables.
⁵ As in the case of the baseline CNNs, we found that using the multi-scale prediction was consistently better than the single-scale counterpart for all our models, so we used the multi-scale prediction by default.
⁶ Mean inference time per image: 2.32 ms for the CNN and 3.69 ms for the deep CGMs, measured using a GeForce GTX TITAN X card with MatConvNet; we provide more information in the supplementary material.
Model (training)      CUB (val)                        CUB (test)         LFW
                      pixel         IoU                pixel    IoU       pixel (val)    pixel (test)
MMBM [37]             —             —                  90.42    75.92     —              —
GLOC [13]             —             —                  —        —         —              90.70
CNN (baseline)        91.17 ±0.09   79.64 ±0.24        92.30    81.90     92.09 ±0.13    91.90 ±0.08
CNN (msc)             91.37 ±0.09   80.09 ±0.25        92.52    82.43     92.19 ±0.10    92.05 ±0.06
GDNN (msc)            92.25 ±0.09   81.89 ±0.21        93.24    83.96     92.72 ±0.12    92.54 ±0.04
GSNN (msc)            92.46 ±0.07   82.31 ±0.19        93.39    84.26     92.88 ±0.08    92.61 ±0.09
CVAE (msc)            92.24 ±0.09   81.86 ±0.23        93.03    83.53     92.80 ±0.30    92.62 ±0.06
hybrid (msc)          92.60 ±0.08   82.57 ±0.26        93.35    84.16     92.95 ±0.21    92.77 ±0.06
GDNN (msc, NI)        92.92 ±0.07   83.20 ±0.19        93.78    85.07     93.59 ±0.12    93.25 ±0.06
GSNN (msc, NI)        93.09 ±0.09   83.62 ±0.21        93.91    85.39     93.71 ±0.09    93.51 ±0.07
CVAE (msc, NI)        92.72 ±0.08   82.90 ±0.22        93.48    84.47     93.29 ±0.17    93.22 ±0.08
hybrid (msc, NI)      93.05 ±0.07   83.49 ±0.19        93.78    85.07     93.69 ±0.12    93.42 ±0.07

Table 2: Mean and standard error of labeling accuracy on the CUB and LFW databases. The performance of the best or statistically similar (i.e., p-value ≥ 0.05 relative to the best-performing model) models is bold-faced. 'msc' refers to multi-scale prediction training and 'NI' to noise-injection training.
Models              CUB (val)          CUB (test)         LFW (val)           LFW (test)
CNN (baseline)      4269.43 ±130.90    4329.94 ±91.71     6370.63 ±790.53     6434.09 ±756.57
GDNN (msc, NI)      3386.19 ±44.11     3450.41 ±33.36     4710.46 ±192.77     5170.26 ±166.81
GSNN (msc, NI)      3400.24 ±59.42     3461.87 ±25.57     4582.96 ±225.62     4829.45 ±96.98
CVAE (msc, NI)       801.48 ±4.34       801.31 ±1.86      1262.98 ±64.43      1267.58 ±57.92
hybrid (msc, NI)    1019.93 ±8.46      1021.44 ±4.81      1836.98 ±127.53     1867.47 ±111.26

Table 3: Mean and standard error of the negative CLL on the CUB and LFW databases. The performance of the best and statistically similar models is bold-faced.
We also evaluate the negative CLL and summarize the results in Table 3. As expected, the proposed CGMs significantly outperform the baseline CNN, with the CVAE showing the highest CLL.

The Labeled Faces in the Wild (LFW) database [12] has been widely used as a face recognition and verification benchmark. As mentioned in [11], face images that are segmented and labeled into semantically meaningful region labels (e.g., hair, skin, clothes) can greatly help understanding of the image through visual attributes, which can be easily obtained from the face shape.

Following the region labeling protocols [35, 13], we evaluate the performance of face-part labeling on the subset of the LFW database [35], which contains 1,046 images labeled into 4 semantic categories: hair, skin, clothes, and background. We resized the images into 128 × 128 and used the same network architecture as the one used in the CUB experiment.

We provide summary results of pixel-level segmentation accuracy in Table 2 and the negative CLL in Table 3. We observe a similar trend to that shown for the CUB database; the proposed deep CGMs outperform the baseline CNN in terms of segmentation accuracy as well as CLL. However, although the accuracies of the CGM variants are higher, the performance of the GDNN was not significantly behind those of the GSNN and hybrid models. This may be because the level of variation in the output space of the LFW database is lower than that of CUB, as the face shapes are more similar and better aligned across examples. Finally, our methods significantly outperform other existing methods, which report 90.0% in [35] or 90.7% in [13], setting the state-of-the-art performance on the LFW segmentation benchmark.
5.3 Object Segmentation with Partial Observations

We experimented on object segmentation under uncertainty (e.g., partial input and output observations) to highlight the importance of the recognition network in the CVAE and of the stochastic neurons for missing-value imputation. We randomly omit the input pixels at different levels of omission noise (25%, 50%, 70%) and different block sizes (1, 4, 8), and the task is to predict the output segmentation labels for the omitted pixel locations while given the partial labels for the observed input pixels. This can also be viewed as a segmentation task with noisy or partial observations (e.g., occlusions).

To make a prediction with the CVAE given a partial output observation (y_o), we perform iterative inference of the unobserved output (y_u) and the latent variables (z) (in a similar fashion to [24]), i.e.,

$$y_u \sim p_\theta(y_u | x, z) \; \leftrightarrow \; z \sim q_\phi(z | x, y_o, y_u). \qquad (10)$$
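A rough sketch of this alternating procedure is given below, with `sample_q` and `sample_p` standing in for the recognition and generation networks (hypothetical interfaces over NumPy label arrays); only the unobserved positions are updated between iterations.

```python
def iterative_inference(x, y_obs, obs_mask, sample_q, sample_p, n_iters=20):
    """Eq. (10): alternate z ~ q(z|x, y_o, y_u) and y_u ~ p(y_u|x, z),
    keeping the observed labels y_o fixed. y_obs and obs_mask are NumPy
    arrays of the same shape (labels, boolean observation mask)."""
    y = y_obs.copy()                       # observed labels; zeros elsewhere
    for _ in range(n_iters):
        z = sample_q(x, y)                 # z ~ q(z | x, y_o, y_u)
        y_new = sample_p(x, z)             # y_u ~ p(y_u | x, z)
        y[~obs_mask] = y_new[~obs_mask]    # update only unobserved positions
    return y
```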
Figure 4: Visualization of the conditionally generated samples: (first row) input image with omission noise (noise level: 50%, block size: 8), (second row) ground-truth segmentation, (third) prediction by the GDNN, and (fourth to sixth) samples generated by the CVAE on CUB (left) and LFW (right).
We report the summary results in Table 4. The CVAE performs well even when the noise level is high (e.g., 50%), where the GDNN fails significantly. This is because the CVAE utilizes the partial segmentation information to iteratively refine the prediction of the rest. We visualize the generated samples at a noise level of 50% in Figure 4. The prediction made by the GDNN is blurry, but the samples generated by the CVAE are sharper while maintaining reasonable shapes. This suggests that the CVAE can also be potentially useful for interactive segmentation (i.e., by iteratively incorporating partial output labels).

noise level   block size   CUB (IoU)          LFW (pixel)
                           GDNN     CVAE      GDNN     CVAE
25%           1            89.37    98.52     96.93    99.22
              4            88.74    98.07     96.55    99.09
              8            90.72    96.78     97.14    98.73
50%           1            74.95    95.95     91.84    97.29
              4            70.48    94.25     90.87    97.08
              8            76.07    89.10     92.68    96.15
70%           1            62.11    89.44     85.27    89.71
              4            57.68    84.36     85.70    93.16
              8            63.59    76.87     87.83    92.06

Table 4: Segmentation results with omission noise on the CUB and LFW databases. We report the IoU (CUB) and pixel-level accuracy (LFW) on the first validation set.
6 Conclusion

Modeling the multi-modal distribution of structured output variables is an important research question for achieving good performance on structured output prediction problems. In this work, we proposed stochastic neural networks for structured output prediction based on a conditional deep generative model with Gaussian latent variables. The proposed model is scalable and efficient in inference and learning. We demonstrated the importance of probabilistic inference when the distribution of the output space has multiple modes, and showed strong performance in terms of segmentation accuracy, estimation of the conditional log-likelihood, and visualization of generated samples.
Acknowledgments This work was supported in part by ONR grant N00014-13-1-0762 and NSF
CAREER grant IIS-1453651. We thank NVIDIA for donating a Tesla K40 GPU.
References
[1] G. Andrew, R. Arora, J. Bilmes, and K. Livescu. Deep canonical correlation analysis. In ICML, 2013.
[2] Y. Bengio, E. Thibodeau-Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable
by backprop. In ICML, 2014.
[3] D. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Deep neural networks segment neuronal
membranes in electron microscopy images. In NIPS, 2012.
[4] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Scene parsing with multiscale feature learning, purity
trees, and optimal covers. In ICML, 2012.
[5] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. T.
PAMI, 35(8):1915–1929, 2013.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Region-based convolutional networks for accurate
object detection and segmentation. T. PAMI, PP(99):1–1, 2015.
[7] I. Goodfellow, M. Mirza, A. Courville, and Y. Bengio. Multi-prediction deep Boltzmann machines. In
NIPS, 2013.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual
recognition. In ECCV, 2014.
[10] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation,
18(7):1527–1554, 2006.
[11] G. B. Huang, M. Narayana, and E. Learned-Miller. Towards unconstrained face recognition. In CVPR
Workshop on Perceptual Organization in Computer Vision, 2008.
[12] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for
studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, 2007.
[13] A. Kae, K. Sohn, H. Lee, and E. Learned-Miller. Augmenting CRFs with Boltzmann machine shape
priors for image labeling. In CVPR, 2013.
[14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[15] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[16] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2013.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In NIPS, 2012.
[18] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. JMLR, 15:29–37, 2011.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[20] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Unsupervised learning of hierarchical representations
with convolutional deep belief networks. Communications of the ACM, 54(10):95–103, 2011.
[21] Y. Li, D. Tarlow, and R. Zemel. Exploring compositional high order pattern potentials for structured
output learning. In CVPR, 2013.
[22] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR,
2015.
[23] P. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene parsing. In ICML, 2013.
[24] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in
deep generative models. In ICML, 2014.
[25] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
[26] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition,
localization and detection using convolutional networks. In ICLR, 2013.
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2014.
[28] K. Sohn, W. Shang, and H. Lee. Improved multimodal deep learning with variation of information. In
NIPS, 2014.
[29] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[31] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013.
[32] Y. Tang and R. Salakhutdinov. Learning stochastic feedforward neural networks. In NIPS, 2013.
[33] A. Vedaldi and K. Lenc. MatConvNet ? convolutional neural networks for MATLAB. In ACMMM, 2015.
[34] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with
denoising autoencoders. In ICML, 2008.
[35] N. Wang, H. Ai, and F. Tang. What are good parts for hair shape modeling? In CVPR, 2012.
[36] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds
200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[37] J. Yang, S. Sáfár, and M.-H. Yang. Max-margin Boltzmann machines for object segmentation. In CVPR,
2014.
Natural Sentences
Cesc Chunseong Park
Gunhee Kim
Seoul National University, Seoul, Korea
{park.chunseong,gunhee}@snu.ac.kr
https://github.com/cesc-park/CRCN
Abstract
We propose an approach for retrieving a sequence of natural sentences for an
image stream. Since general users often take a series of pictures on their special
moments, it would better take into consideration of the whole image stream to produce natural language descriptions. While almost all previous studies have dealt
with the relation between a single image and a single natural sentence, our work
extends both input and output dimension to a sequence of images and a sequence
of sentences. To this end, we design a multimodal architecture called coherent
recurrent convolutional network (CRCN), which consists of convolutional neural
networks, bidirectional recurrent neural networks, and an entity-based local coherence model. Our approach directly learns from vast user-generated resource of
blog posts as text-image parallel training data. We demonstrate that our approach
outperforms other state-of-the-art candidate methods, using both quantitative measures (e.g. BLEU and top-K recall) and user studies via Amazon Mechanical Turk.
1
Introduction
Recently there has been a hike of interest in automatically generating natural language descriptions
for images in the research of computer vision, natural language processing, and machine learning
(e.g. [5, 8, 9, 12, 14, 15, 26, 21, 30]). While most of existing work aims at discovering the relation
between a single image and a single natural sentence, we extend both input and output dimension to
a sequence of images and a sequence of sentences, which may be an obvious next step toward joint
understanding of the visual content of images and language descriptions, albeit under-addressed in
current literature. Our problem setup is motivated by that general users often take a series of pictures
on their memorable moments. For example, many people who visit New York City (NYC) would
capture their experiences with large image streams, and thus it would better take the whole photo
stream into consideration for the translation to a natural language description.
Figure 1: An intuition of our problem statement with a New York City example. We aim at expressing an image
stream with a sequence of natural sentences. (a) We leverage natural blog posts to learn the relation between
image streams and sentence sequences. (b) We propose coherent recurrent convolutional networks (CRCN)
that integrate convolutional networks, bidirectional recurrent networks, and the entity-based coherence model.
Fig.1 illustrates an intuition of our problem statement with an example of visiting NYC. Our objective
is, given a photo stream, to automatically produce a sequence of natural language sentences that
best describe the essence of the input image set. We propose a novel multimodal architecture named
coherent recurrent convolutional networks (CRCN) that integrate convolutional neural networks for
image description [13], bidirectional recurrent neural networks for the language model [20], and the
local coherence model [1] for a smooth flow of multiple sentences. Since our problem deals with
learning the semantic relations between long streams of images and text, it is more challenging to
obtain an appropriate text-image parallel corpus than in previous research on single-sentence generation.
Our idea to this issue is to directly leverage online natural blog posts as text-image parallel training
data, because usually a blog consists of a sequence of informative text and multiple representative
images that are carefully selected by authors in a way of storytelling. See an example in Fig.1.(a).
We evaluate our approach with blog datasets of NYC and Disneyland, consisting of more than
20K blog posts with 140K associated images. Although we focus on the tourism topics in our experiments, our approach is completely unsupervised and thus applicable to any domain that has a large
set of blog posts with images. We demonstrate the superior performance of our approach by comparing with other state-of-the-art alternatives, including [9, 12, 21]. We evaluate with quantitative
measures (e.g. BLEU and Top-K recall) and user studies via Amazon Mechanical Turk (AMT).
Related work. Due to a recent surge of volume of literature on this subject of generating natural language descriptions for image data, here we discuss a representative selection of ideas that are closely
related to our work. One of the most popular approaches is to pose the text generation as a retrieval
problem that learns ranking and embedding, in which the caption of a test image is transferred from
the sentences of its most similar training images [6, 8, 21, 26]. Our approach partly involves the
text retrieval, because we search for candidate sentences for each image of a query sequence from
training database. However, we then create a final paragraph by considering both compatibilities
between individual images and text, and the coherence that captures text relatedness at the level of
sentence-to-sentence transitions. There have also been video-to-sentence works (e.g. [23, 32]); our key novelty is that we explicitly include the coherence model. Unlike videos, consecutive images in the streams may show sharp changes of visual content, which cause abrupt discontinuities between consecutive sentences. Thus the coherence model is all the more necessary to make output passages fluent.
Many recent works have exploited multimodal networks that combine deep convolutional neural networks (CNN) [13] and recurrent neural network (RNN) [20]. Notable architectures in this category
integrate the CNN with bidirectional RNNs [9], long-term recurrent convolutional nets [5], longshort term memory nets [30], deep Boltzmann machines [27], dependency-tree RNN [26], and other
variants of multimodal RNNs [3, 19]. Although our method partly take advantage of such recent
progress of multimodal neural networks, our major novelty is that we integrate it with the coherence
model as a unified end-to-end architecture to retrieve fluent sequential multiple sentences.
In the following, we compare more previous work that bears a particular resemblance to ours.
Among multimodal neural network models, the long-term recurrent convolutional net [5] is related
to our objective because their framework explicitly models the relations between sequential inputs
and outputs. However, the model is applied to a video description task of creating a sentence for a
given short video clip and does not address the generation of multiple sequential sentences. Hence,
unlike ours, there is no mechanism for the coherence between sentences. The work of [11] addresses
the retrieval of image sequences for a query paragraph, which is the opposite direction of our problem. They propose a latent structural SVM framework to learn the semantic relevance relations from
text to image sequences. However, their model is specialized only for the image sequence retrieval,
and thus not applicable to the natural sentence generation.
Contributions. We highlight main contributions of this paper as follows. (1) To the best of our
knowledge, this work is the first to address the problem of expressing image streams with sentence
sequences. We extend both input and output to more elaborate forms with respect to a whole body
of existing methods: image streams instead of individual images and sentence sequences instead of
individual sentences. (2) We develop a multimodal architecture of coherent recurrent convolutional
networks (CRCN), which integrates convolutional networks for image representation, recurrent networks for sentence modeling, and the local coherence model for fluent transitions of sentences. (3)
We evaluate our method with large datasets of unstructured blog posts, consisting of 20K blog posts
with 140K associated images. With both quantitative evaluation and user studies, we show that our
approach is more successful than other state-of-the-art alternatives in verbalizing an image stream.
2 Text-Image Parallel Dataset from Blog Posts
We discuss how to transform blog posts into a training set $\mathcal{B}$ of image-text parallel data streams, each of which is a sequence of image-sentence pairs: $B^l = \{(I_1^l, T_1^l), \ldots, (I_{N^l}^l, T_{N^l}^l)\} \in \mathcal{B}$. The training set size is denoted by $L = |\mathcal{B}|$. Fig.2.(a) shows the summary of pre-processing steps for blog posts.
2.1 Blog Pre-processing
We assume that blog authors augment their text with multiple images in a semantically meaningful
manner. In order to decompose each blog into a sequence of images and associated text, we first
perform text segmentation and then text summarization. The purpose of text segmentation is to
divide the input blog text into a set of text segments, each of which is associated with a single
image. Thus, the number of segments is identical to the number of images in the blog. The objective
of text summarization is to reduce each text segment into a single key sentence. As a result of these
two processes, we can transform each blog into the form $B^l = \{(I_1^l, T_1^l), \ldots, (I_{N^l}^l, T_{N^l}^l)\}$.
Text segmentation. We first divide the blog passage into text blocks according to paragraphs. We
apply a standard paragraph tokenizer of NLTK [2] that uses rule-based regular expressions to detect
paragraph divisions. We then use the heuristics based on the image-to-text block distances proposed
in [10]. Simply, we assign each text block to the image that has the minimum index distance where
each text block and image is counted as a single index distance in the blog.
Text summarization. We summarize each text segment into a single key sentence. We apply the
Latent Semantic Analysis (LSA)-based summarization method [4], which uses the singular value
decomposition to obtain the concept dimension of sentences, and then recursively finds the most
representative sentences that maximize the inter-sentence similarity for each topic in a text segment.
Data augmentation. Data augmentation is a well-known technique for convolutional neural networks to improve image classification accuracy [13]. Its basic idea is to artificially increase the number of training examples by applying transformations, horizontal reflections, or added noise to training images. We empirically observe that this idea leads to better performance in our problem as well. For each image-sentence sequence $B^l = \{(I_1^l, T_1^l), \ldots, (I_{N^l}^l, T_{N^l}^l)\}$, we augment each sentence $T_n^l$ with multiple sentences for training. That is, when we perform the LSA-based text summarization, we select the top-$\alpha$ highest-ranked summary sentences, among which the top-ranked one becomes the summary sentence for the associated image, and all the top-$\alpha$ ones are used for training in our model. With a slight abuse of notation, we let $T_n^l$ denote both the single summary sentence and the $\alpha$ augmented sentences. We choose $\alpha = 3$ after thorough empirical tests.
2.2 Text Description

Once we represent each text segment with $\alpha$ sentences, we extract the paragraph vector [17] to represent the content of the text. The paragraph vector is a neural-network-based unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of passage. We learn 300-dimensional dense vector representations separately for the two classes of the blog dataset using the gensim doc2vec code. We use $p_n$ to denote the paragraph vector representation for text $T_n$.

We then extract a parse tree for each $T_n$ to identify coreferent entities and grammatical roles of the words. We use the Stanford Core NLP library [18]. The parse trees are used for the local coherence model, which will be discussed in Section 3.2.
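For reference, learning such paragraph vectors with gensim can look like the sketch below; it uses the gensim 4.x API, and the `segments` variable (a list of tokenized summary texts, one per image) and all hyperparameters other than the 300-dimensional vector size are illustrative assumptions.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# segments: list of token lists, one per summarized text segment.
docs = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(segments)]
model = Doc2Vec(docs, vector_size=300, window=5, min_count=2, epochs=20)

# 300-dimensional paragraph vectors p_n, one per text segment.
p = [model.dv[i] for i in range(len(segments))]
```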
3 Our Architecture
Many existing sentence generation models (e.g. [9, 19]) combine words or phrases from training
data to generate a sentence for a novel image. Our approach is one level higher; we use sentences
from training database to author a sequence of sentences for a novel image stream. Although our
model can be easily extended to use words or phrases as basic building blocks, such granularity
makes sequences too long to train the language model, which may cause several difficulties for
learning the RNN models. For example, the vanishing gradient effect is a well-known hardship to
backpropagate an error signal through a long-range temporal interval. Therefore, we design our
approach that retrieves individual candidate sentences for each query image from training database
and crafts a best sentence sequence, considering both the fitness of individual image-to-sentence
pairs and coherence between consecutive sentences.
Figure 2: Illustration of (a) pre-processing steps of blog posts, and (b) the proposed CRCN architecture.
Fig.2.(b) illustrates the structure of our CRCN. It consists of three main components, which are
convolutional neural networks (CNN) [13] for image representation, bidirectional recurrent neural
networks (BRNN) [24] for sentence sequence modeling, and the local coherence model [1] for a
smooth flow of multiple sentences. Each data stream is a variable-length sequence denoted by
$\{(I_1, T_1), \ldots, (I_N, T_N)\}$. We use $t \in \{1, \ldots, N\}$ to denote the position of a sentence/image in a
sequence. We define the CNN and BRNN models for each position separately, and the coherence
model for a whole data stream. For the CNN component, our choice is the VGGNet [25] that
represents images as 4,096-dimensional vectors. We discuss the details of our BRNN and coherence
model in section 3.1 and section 3.2 respectively, and finally present how to combine the output of
the three components to create a single compatibility score in section 3.3.
3.1 The BRNN Model
The role of BRNN model is to represent a content flow of text sequences. In our problem, the BRNN
is more suitable than the normal RNN, because the BRNN can simultaneously model forward and
backward streams, which allow us to consider both previous and next sentences for each sentence to
make the content of a whole sequence interact with one another. As shown in Fig.2.(b), our BRNN
has five layers: input layer, forward/backward layer, output layer, and ReLU activation layer, which
are finally merged with that of the coherence model into two fully connected layers. Note that each text is represented by the 300-dimensional paragraph vector $p_t$ discussed in Section 2.2. The exact form of our BRNN is as follows (see Fig.2.(b) for better understanding):

$$x_t^f = f(W_i^f p_t + b_i^f); \qquad x_t^b = f(W_i^b p_t + b_i^b); \qquad (1)$$
$$h_t^f = f(x_t^f + W_f h_{t-1}^f + b_f); \qquad h_t^b = f(x_t^b + W_b h_{t+1}^b + b_b); \qquad o_t = W_o(h_t^f + h_t^b) + b_o.$$

The BRNN takes a sequence of text vectors $p_t$ as input. We then compute $x_t^f$ and $x_t^b$, which are the activations of the input units to the forward and backward units. Unlike other BRNN models, we separate the input activation into forward and backward ones with different sets of parameters $W_i^f$ and $W_i^b$, which empirically leads to better performance. We set the activation function $f$ to the Rectified Linear Unit (ReLU), $f(x) = \max(0, x)$. Then, we create two independent forward and backward hidden units, denoted by $h_t^f$ and $h_t^b$. The final activation of the BRNN, $o_t$, can be regarded as a description of the content of the sentence at location $t$, which also implicitly encodes the flow of the sentence and its surrounding context in the sequence. The parameter sets to learn include the weights $\{W_i^f, W_i^b, W_f, W_b, W_o\} \in \mathbb{R}^{300 \times 300}$ and biases $\{b_i^f, b_i^b, b_f, b_b, b_o\} \in \mathbb{R}^{300 \times 1}$.
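The recurrences of Eq. (1) can be sketched directly in NumPy as below; the dictionary-based parameter container is an assumption made for readability.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def brnn_forward(P, W, b):
    """Eq. (1). P is an (N, 300) array of paragraph vectors p_t; W and b hold
    the parameters named in the text (keys 'if', 'ib', 'f', 'b', 'o')."""
    N, d = P.shape
    hf = np.zeros((N, d))
    hb = np.zeros((N, d))
    for t in range(N):                                   # forward chain
        xf = relu(W['if'] @ P[t] + b['if'])
        prev = hf[t - 1] if t > 0 else np.zeros(d)
        hf[t] = relu(xf + W['f'] @ prev + b['f'])
    for t in reversed(range(N)):                         # backward chain
        xb = relu(W['ib'] @ P[t] + b['ib'])
        nxt = hb[t + 1] if t < N - 1 else np.zeros(d)
        hb[t] = relu(xb + W['b'] @ nxt + b['b'])
    return (W['o'] @ (hf + hb).T).T + b['o']             # outputs o_t
```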
3.2 The Local Coherence Model
The BRNN model can capture the flow of text content, but it lacks learning the coherence of passage
that reflects distributional, syntactic, and referential information between discourse entities. Thus,
we explicitly include a local coherence model based on the work of [1], which focuses on resolving
the patterns of local transitions of discourse entities (i.e. coreferent noun phrases) in the whole
text. As shown in Fig.2.(b), we first extract parse trees for every summarized text denoted by Zt
and then concatenate all sequenced parse trees into one large one, from which we make an entity
grid for the whole sequence. The entity grid is a table where each row corresponds to a discourse
entity and each column represents a sentence. Grammatical roles are expressed by three categories plus one for absence (i.e., not referenced in the sentence): S (subject), O (object), X (other than subject or object), and − (absent). After making the entity grid, we enumerate the transitions of the grammatical roles of entities in the whole text. We set the history parameter to three, which means we can obtain $4^3 = 64$ transition descriptions (e.g., SO− or OOX). By computing the ratio of the occurrence frequency of each transition, we finally create a 64-dimensional representation that captures the coherence of a sequence. Finally, we extend this descriptor to a 300-dimensional vector by zero-padding, and forward it to a ReLU layer as done for the BRNN output.
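A small sketch of turning an entity grid into the 64-dimensional transition-frequency descriptor follows; representing each entity's row as a string over {'S','O','X','-'} is our encoding assumption.

```python
from itertools import product
import numpy as np

def coherence_features(grid, history=3):
    """grid: list of per-entity role strings, one symbol per sentence,
    drawn from {'S','O','X','-'}. Returns the 4**history (=64) dimensional
    vector of transition frequencies (e.g. 'SO-', 'OOX')."""
    roles = ['S', 'O', 'X', '-']
    transitions = [''.join(t) for t in product(roles, repeat=history)]
    counts = dict.fromkeys(transitions, 0)
    total = 0
    for row in grid:
        for i in range(len(row) - history + 1):
            counts[row[i:i + history]] += 1
            total += 1
    q = np.array([counts[t] for t in transitions], dtype=float)
    return q / max(total, 1)                 # occurrence-frequency ratios
```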
3.3 Combination of CNN, RNN, and Coherence Model
After the ReLU activation layers of the BRNN and the coherence model, their outputs (i.e., $\{o_t\}_{t=1}^N$ and $q$) go through two fully connected (FC) layers, whose role is to decide a proper combination of the BRNN language factors and the coherence factors. We drop the bias terms for the fully connected layers, and the dimensions of the variables are $W_{f1} \in \mathbb{R}^{512 \times 300}$, $W_{f2} \in \mathbb{R}^{4096 \times 512}$, $o_t, q \in \mathbb{R}^{300 \times 1}$, $s_t, g \in \mathbb{R}^{4096 \times 1}$, $O \in \mathbb{R}^{300 \times N}$, and $S \in \mathbb{R}^{4096 \times N}$.

$$O = [o_1 | o_2 | \cdots | o_N]; \qquad S = [s_1 | s_2 | \cdots | s_N]; \qquad W_{f2} W_{f1} [O | q] = [S | g]. \qquad (2)$$

We use shared parameters for $O$ and $q$ so that the output mixes well the interaction between the content flows and coherency. In our tests, joint learning outperforms learning the two terms with separate parameters. Note that the multiplication $W_{f2} W_{f1}$ of the last two FC layers does not reduce to a single linear mapping, thanks to dropout. We assign dropout rates of 0.5 and 0.7 to the two layers. Empirically, this improves generalization performance considerably over a single FC layer with dropout.
3.4 Training the CRCN
To train our CRCN model, we first define the compatibility score between an image stream and a paragraph sequence. While our score function is inspired by Karpathy et al. [9], there are two major differences. First, the score function of [9] operates between sentence fragments and image fragments, and thus the algorithm considers all combinations between them to find the best matching. In contrast, we define the score by an ordered and paired compatibility between a sentence sequence and an image sequence. Second, we also add a term that measures the coherency relevance between an image sequence and a text sequence. Finally, the score $S_{kl}$ for a sentence sequence $k$ and an image stream $l$ is defined by

$$S_{kl} = \sum_{t=1}^{N} \left( s_t^k \cdot v_t^l + g^k \cdot v_t^l \right) \qquad (3)$$
where $v_t^l$ denotes the CNN feature vector for the $t$-th image of stream $l$. We then define the cost function to train our CRCN model as follows [9]:

$$C(\theta) = \sum_{k} \left[ \sum_{l} \max(0, 1 + S_{kl} - S_{kk}) + \sum_{l} \max(0, 1 + S_{lk} - S_{kk}) \right], \qquad (4)$$

where $S_{kk}$ denotes the score between a training pair of corresponding image and sentence sequences. The objective, based on the max-margin structured loss, encourages aligned image-sentence sequence pairs to score higher, by a margin, than misaligned pairs. For each positive training example, we randomly sample 100 negative examples from the training set. Since each contrastive example has a random length and is sampled from a dataset with a wide range of content, it is extremely unlikely that the negative examples have the same length and the same content order of sentences as the positive examples.
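For concreteness, the cost of Eq. (4) over a minibatch of K aligned pairs can be sketched as below. We exclude the diagonal (aligned) terms from the hinges, which we assume is the intended reading of the max-margin loss.

```python
import numpy as np

def structured_loss(S):
    """S: (K, K) matrix of compatibility scores S_kl between sentence
    sequence k and image stream l; diagonal entries are aligned pairs."""
    diag = np.diag(S)
    row_term = np.maximum(0.0, 1.0 + S - diag[:, None])  # misaligned streams l for sequence k
    col_term = np.maximum(0.0, 1.0 + S - diag[None, :])  # misaligned sequences l for stream k
    np.fill_diagonal(row_term, 0.0)                      # drop the aligned pair itself
    np.fill_diagonal(col_term, 0.0)
    return row_term.sum() + col_term.sum()
```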
Optimization. We use the backpropagation through time (BPTT) algorithm [31] to train our model.
We apply the stochastic gradient descent (SGD) with mini-batches of 100 data streams. Among
many SGD techniques, we select RMSprop optimizer [28], which leads the best performance in
our experiments. We initialize the weights of our CRCN model using the method of He et al. [7],
which is robust in deep rectified models. We observe that it is better than a simple Gaussian random
initialization, although our model is not extremely deep. We use dropout regularization in all layers
except the BRNN, with 0.7 dropout for the last FC layer and 0.5 for the other remaining layers.
3.5 Retrieval of Sentence Sequences
At test time, the objective is to retrieve the best sentence sequence for a given query image stream $\{I_1^q, \ldots, I_N^q\}$. First, we select the $K$-nearest images for each query image from the training database using the $\ell_2$-distance on the CNN VGGNet fc7 features [25]. In our experiments $K = 5$ is successful. We then generate a set of sentence sequence candidates $\mathcal{C}$ by concatenating the sentences associated with the $K$-nearest images at each location $t$. Finally, we use our learned CRCN model to compute the compatibility score between the query image stream and each sequence candidate, according to which we rank the candidates.

However, one major difficulty of this scenario is that there are exponentially many candidates (i.e., $|\mathcal{C}| = K^N$). To resolve this issue, we use an approximate divide-and-conquer strategy; we recursively halve the problem into subproblems until the size of the subproblem is manageable. For example, if we halve the search candidate length $Q$ times, then the search space of each subproblem becomes $K^{N/2^Q}$. Using the beam search idea, we first find the top-$M$ best sequence candidates in the subproblems at the lowest level, and recursively increase the candidate lengths while the maximum candidate set size is limited to $M$. We set $M = 50$. Though it is an approximate search, our experiments show that it achieves almost optimal solutions with a plausible amount of combinatorial search, mainly because local fluency and coherence are undoubtedly necessary for global fluency and coherence. That is, in order for a whole sentence sequence to be fluent and coherent, all of its subparts must be as well.
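A compact sketch of this approximate search is given below; `cands` (the per-position lists of K candidate sentences) and `score_fn` (the learned CRCN scorer applied to a partial sequence) are hypothetical stand-ins.

```python
from itertools import product

def beam_retrieve(cands, score_fn, M=50):
    """Divide-and-conquer retrieval: recursively halve the positions,
    keep the top-M partial sequences at every level, and return the
    best full-length sentence sequence."""
    def solve(lo, hi):
        if hi - lo <= 2:                           # small enough to enumerate
            seqs = [list(s) for s in product(*cands[lo:hi])]
        else:
            mid = (lo + hi) // 2
            left, right = solve(lo, mid), solve(mid, hi)
            seqs = [l + r for l in left for r in right]
        seqs.sort(key=score_fn, reverse=True)      # prune to the M best
        return seqs[:M]
    return solve(0, len(cands))[0]
```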
4 Experiments
We compare the performance of our approach with other state-of-the-art candidate methods via
quantitative measures and user studies using Amazon Mechanical Turk (AMT). Please refer to the
supplementary material for more results and the details of implementation and experimental setting.
4.1 Experimental Setting
Dataset. We collect blog datasets of the two topics: NYC and Disneyland. We reuse the blog data
of Disneyland from the dataset of [11], and newly collect the data of NYC, using the same crawling
method as [11], in which we first crawl blog posts and their associated pictures from two popular
blog publishing sites, BLOGSPOT and WORDPRESS by changing query terms from Google search.
Then, we manually select the travelogue posts that describe stories and events with multiple images.
Finally, the dataset includes 11,863 unique blog posts and 78,467 images for NYC and 7,717 blog
posts and 60,545 images for Disneyland.
Task. For quantitative evaluation, we randomly split our dataset into 80% as a training set, 10% as
a validation, and the others as a test set. For each test post, we use the image sequence as a query
Iq and the sequence of summarized sentences as groundtruth TG . Each algorithm retrieves the best
sequences from training database for a query image sequence, and ideally the retrieved sequences
match well with TG . Since the training and test data are disjoint, each algorithm can only retrieve
similar (but not identical) sentences at best.
For quantitative measures, we exploit two types of metrics of language similarity (i.e. BLEU [22],
CIDEr [29], and METEOR [16] scores) and retrieval accuracies (i.e. top-K recall and median rank),
which are popularly used in text generation literature [8, 9, 19, 26]. The top-K recall R@K is
the recall rate of a groundtruth retrieval given top K candidates, and the median rank indicates the
median ranking value of the first retrieved groundtruth. A better performance is indicated by higher
BLEU, CIDEr, METEOR, R@K scores, and lower median rank values.
Baselines. Since the sentence sequence generation from image streams has not been addressed yet
in previous research, we instead extend several state-of-the-art single-sentence models that have
publicly available codes as baselines, including the log-bilinear multimodal models by Kiros et
al. [12], and recurrent convolutional models by Karpathy et al. [9] and Vinyals et al. [30]. For
[12], we use the three variants introduced in the paper, which are the standard log-bilinear model
(LBL), and two multi-modal extensions: modality-based LBL (MLBL-B) and factored three-way
LBL (MLBL-F). We use the NeuralTalk package authored by Karpathy et al. for the baseline
of [9] denoted by (CNN+RNN), and [30] denoted by (CNN+LSTM). As the simplest baseline, we
also compare with the global matching (GloMatch) in [21]. For all the baselines, we create final
sentence sequences by concatenating the sentences generated for each image in the query stream.
New York City
Method                B-1    B-2   B-3   B-4   CIDEr  METEOR  R@1    R@5    R@10   MedRank
(CNN+LSTM) [30]       16.24  5.79  1.38  0.10   9.1    5.73    0.95   7.38  13.33   88.5
(CNN+RNN) [9]          6.21  0.01  0.00  0.00   0.5    1.34    0.48   2.86   4.29  120.5
(MLBL-F) [12]         21.03  1.92  0.12  0.01   4.3    6.03    0.71   4.52   7.86   87.0
(MLBL-B) [12]         20.43  1.54  0.09  0.01   2.6    5.30    0.48   3.57   5.48  101.5
(LBL) [12]            20.96  1.68  0.08  0.01   2.6    5.29    1.19   4.52   7.38  100.5
(GloMatch) [21]       19.00  1.59  0.04  0.0    2.80   5.17    0.24   2.62   4.05   95.00
(1NN)                 25.97  3.42  0.60  0.22  15.9    7.06    5.95  13.57  20.71   63.50
(RCN)                 27.09  5.45  2.56  2.10  33.5    7.87    3.80  18.33  30.24   29.00
(CRCN)                26.83  5.37  2.57  2.08  30.9    7.69   11.67  31.19  43.57   14.00

Disneyland
Method                B-1    B-2   B-3   B-4   CIDEr  METEOR  R@1    R@5    R@10   MedRank
(CNN+LSTM) [30]       13.22  1.56  0.40  0.07  10.0    4.51    2.83  10.38  16.98   61.5
(CNN+RNN) [9]          6.04  0.00  0.00  0.00   0.4    1.34    1.02   3.40   5.78   88.0
(MLBL-F) [12]         15.75  1.61  0.07  0.01   4.9    7.12    0.68   4.08  10.54   63.0
(MLBL-B) [12]         15.65  1.32  0.05  0.00   3.8    5.83    0.34   2.72   6.80   69.0
(LBL) [12]            18.94  1.70  0.06  0.01   3.4    4.99    1.02   4.08   7.82   62.0
(GloMatch) [21]       11.94  0.37  0.01  0.00   2.2    4.31    2.04   5.78   7.48   73.0
(1NN)                 25.92  3.34  0.71  0.38  19.5    7.46    9.18  19.05  27.21   45.0
(RCN)                 28.15  6.84  4.11  3.52  51.3    8.87    5.10  20.07  28.57   29.5
(CRCN)                28.40  6.88  4.11  3.49  52.7    8.78   14.29  31.29  43.20   16.0

Table 1: Evaluation of sentence generation for the two datasets, New York City and Disneyland, with language similarity metrics (BLEU scores B-1 to B-4, CIDEr, METEOR) and retrieval metrics (R@K, median rank). A better performance is indicated by higher BLEU, CIDEr, METEOR, and R@K scores, and lower median rank values.
We also compare between different variants of our method to validate the contributions of key components of our method. We test the K-nearest search (1NN) without the RNN part as the simplest
variant; for each image in a test query, we find its K(= 1) most similar training images and simply
concatenate their associated sentences. The second variant is the BRNN-only method denoted by
(RCN) that excludes the entity-based coherence model from our approach. Our complete method is
denoted by (CRCN), and this comparison quantifies the improvement by the coherence model. To be
fair, we use the same VGGNet fc7 feature [25] for all the algorithms.
4.2 Quantitative Results
Table 1 shows the quantitative results of experiments using both language and retrieval metrics.
Our approach (CRCN) and (RCN) outperform, with large margins, other state-of-the-art baselines,
which generate passages without consideration of sentence-to-sentence transitions unlike ours. The
(MLBL-F) shows the best performance among the three models of [12] albeit with a small margin,
partly because they share the same word dictionary in training. Among mRNN-based models, the
(CNN+LSTM) significantly outperforms the (CNN+RNN), because the LSTM units help learn models
from irregular and lengthy data of natural blogs more robustly.
We also observe that (CRCN) outperforms (1NN) and (RCN), especially with the retrieval metrics.
It shows that the integration of two key components, the BRNN and the coherence model, indeed
contributes the performance improvement. The (CRCN) is only slightly better than the (RCN) in language metrics but significantly better in retrieval metrics. It means that (RCN) is fine with retrieving
fairly good solutions, but not good at ranking the only correct solution high compared to (CRCN).
The small margins in language metrics are also attributed by their inherent limitation; for example,
the BLEU focuses on counting the matches of n-gram words and thus is not good at comparing
between sentences, even worse between paragraphs for fully evaluating their fluency and coherency.
Fig.3 illustrates several examples of sentence sequence retrieval. In each set, we show a query
image stream and text results created by our method and baselines. Except Fig.3.(d), we show parts
of sequences because they are rather long for illustration. These qualitative examples demonstrate
that our approach is more successful to verbalize image sequences that include a variety of content.
4.3 User Studies via Amazon Mechanical Turk
We perform user studies using AMT to observe general users? preferences between text sequences
by different algorithms. Since our evaluation involves multiple images and long passages of text, we
design our AMT task to be sufficiently simple for general turkers with no background knowledge.
Figure 3: Examples of sentence sequence retrieval for NYC (top) and Disneyland (bottom). In each set, we
present a part of a query image stream, and its corresponding text output by our method and a baseline.
Baselines    (GloMatch)        (CNN+LSTM)        (MLBL-B)          (RCN)            (RCN N>=8)
NYC          92.7% (139/150)   80.0% (120/150)   69.3% (104/150)   54.0% (81/150)   57.0% (131/230)
Disneyland   95.3% (143/150)   82.0% (123/150)   70.7% (106/150)   56.0% (84/150)   60.1% (143/238)
Table 2: The results of AMT pairwise preference tests. We present the percentages of responses that turkers
vote for our (CRCN) over baselines. The length of query streams is 5 except the last column, which has 8-10.
We randomly sample 100 test streams from the two datasets, and set the maximum number
of images per query to 5. If a query is longer than that, we uniformly subsample it to 5. In an AMT
test, we show a query image stream Iq and a pair of passages, generated by our method (CRCN) and
one baseline, in a random order. We ask turkers to choose the text sequence that better agrees with Iq. We
design the test as a pairwise comparison instead of a multiple-choice question to make answering
and analysis easier. The questions look very similar to the examples of Fig.3. We obtain answers
from three different turkers for each query. We compare with four baselines; we choose (MLBL-B)
among the three variants of [12], and (CNN+LSTM) among mRNN-based methods. We also select
(GloMatch) and (RCN) as variants of our method.
Table 2 shows the results of AMT tests, which validate that AMT annotators prefer our results to
those of the baselines. The (GloMatch) is the worst because it uses too weak an image representation
(i.e. GIST and Tiny images). The differences between (CRCN) and (RCN) (i.e. the 4th column of Table
2) are not as significant as in the previous quantitative measures, mainly because each query image stream
is subsampled to a relatively short length of 5. The coherence becomes more critical as the passage gets longer. To
justify this argument, we run another set of AMT tests in which we use 8-10 images per query. As
shown in the last column of Table 2, the performance margins between (CRCN) and (RCN) become
larger as the lengths of query image streams increase. This result confirms that as passages get longer,
the coherence becomes more important, and thus (CRCN)'s output is more preferred by turkers.
5 Conclusion
We proposed an approach for retrieving sentence sequences for an image stream. We developed
the coherent recurrent convolutional network (CRCN), which consists of convolutional networks, bidirectional
recurrent networks, and an entity-based local coherence model. With quantitative evaluation
and user studies using AMT on large collections of blog posts, we demonstrated that our CRCN
approach outperformed other state-of-the-art candidate methods.
Acknowledgements. This research is partially supported by Hancom and Basic Science Research
Program through National Research Foundation of Korea (2015R1C1A1A02036562).
References
[1] R. Barzilay and M. Lapata. Modeling Local Coherence: An Entity-Based Approach. In ACL, 2008.
[2] S. Bird, E. Loper, and E. Klein. Natural Language Processing with Python. O'Reilly Media Inc., 2009.
[3] X. Chen and C. L. Zitnick. Mind's Eye: A Recurrent Visual Representation for Image Caption Generation.
In CVPR, 2015.
[4] F. Y. Y. Choi, P. Wiemer-Hastings, and J. Moore. Latent Semantic Analysis for Text Segmentation. In
EMNLP, 2001.
[5] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell.
Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015.
[6] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik. Improving Image-Sentence Embeddings
Using Large Weakly Annotated Photo Collections. In ECCV, 2014.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance
on ImageNet Classification. In arXiv, 2015.
[8] M. Hodosh, P. Young, and J. Hockenmaier. Framing Image Description as a Ranking Task: Data, Models
and Evaluation Metrics. JAIR, 47:853–899, 2013.
[9] A. Karpathy and L. Fei-Fei. Deep Visual-Semantic Alignments for Generating Image Descriptions. In
CVPR, 2015.
[10] G. Kim, S. Moon, and L. Sigal. Joint Photo Stream and Blog Post Summarization and Exploration. In
CVPR, 2015.
[11] G. Kim, S. Moon, and L. Sigal. Ranking and Retrieval of Image Sequences from Multiple Paragraph
Queries. In CVPR, 2015.
[12] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal Neural Language Models. In ICML, 2014.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet Classification with Deep Convolutional Neural
Networks. In NIPS, 2012.
[14] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Baby Talk: Understanding
and Generating Image Descriptions. In CVPR, 2011.
[15] P. Kuznetsova, V. Ordonez, T. L. Berg, and Y. Choi. TreeTalk: Composition and Compression of Trees
for Image Descriptions. In TACL, 2014.
[16] S. B. A. Lavie. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with
Human Judgments. In ACL, 2005.
[17] Q. Le and T. Mikolov. Distributed Representations of Sentences and Documents. In ICML, 2014.
[18] C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky. The Stanford CoreNLP
Natural Language Processing Toolkit. In ACL, 2014.
[19] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. L. Yuille. Deep Captioning with Multimodal Recurrent
Neural Networks (m-RNN). In ICLR, 2015.
[20] T. Mikolov. Statistical Language Models based on Neural Networks. In Ph. D. Thesis, Brno University
of Technology, 2012.
[21] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2Text: Describing Images Using 1 Million Captioned Photographs. In NIPS, 2011.
[22] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: A Method for Automatic Evaluation of Machine
Translation. In ACL, 2002.
[23] M. Rohrbach, W. Qiu, I. Titov, S. Thater, M. Pinkal, and B. Schiele. Translating Video Content to Natural
Language Descriptions. In ICCV, 2013.
[24] M. Schuster and K. K. Paliwal. Bidirectional Recurrent Neural Networks. In IEEE TSP, 1997.
[25] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition.
In ICLR, 2015.
[26] R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng. Grounded Compositional Semantics for
Finding and Describing Images with Sentences. In TACL, 2013.
[27] N. Srivastava and R. Salakhutdinov. Multimodal Learning with Deep Boltzmann Machines. In NIPS,
2012.
[28] T. Tieleman and G. E. Hinton. Lecture 6.5: RMSProp. In Coursera, 2012.
[29] R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based Image Description Evaluation. In
arXiv:1411.5726, 2014.
[30] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and Tell: A Neural Image Caption Generator. In
CVPR, 2015.
[31] P. J. Werbos. Generalization of Backpropagation with Application to a Recurrent Gas Market Model.
Neural Networks, 1:339–356, 1988.
[32] R. Xu, C. Xiong, W. Chen, and J. J. Corso. Jointly Modeling Deep Video and Compositional Text to
Bridge Vision and Language in a Unified Framework. In AAAI, 2015.
5,276 | 5,777 | VISALOGY: Answering Visual Analogy Questions
C. Lawrence Zitnick
Microsoft Research
larryz@microsoft.com
Fereshteh Sadeghi
University of Washington
fsadeghi@cs.washington.edu
Ali Farhadi
University of Washington, The Allen Institute for AI
ali@cs.washington.edu
Abstract
In this paper, we study the problem of answering visual analogy questions. These
questions take the form of image A is to image B as image C is to what. Answering these questions entails discovering the mapping from image A to image B and
then extending the mapping to image C and searching for the image D such that
the relation from A to B holds for C to D. We pose this problem as learning an embedding that encourages pairs of analogous images with similar transformations to
be close together using convolutional neural networks with a quadruple Siamese
architecture. We introduce a dataset of visual analogy questions in natural images,
and show the first results of its kind on solving analogy questions on natural images.
1 Introduction
Analogy is the task of mapping information from a source to a target. Analogical thinking is a
crucial component in problem solving and has been regarded as a core component of cognition [1].
Analogies have been extensively explored in cognitive sciences and explained by several theories
and models: shared structure [1], shared abstraction [2], identity of relation, hidden deduction [3],
etc. The common two components among most theories are the discovery of a form of relation or
mapping in the source and extension of the relation to the target. Such a process is very similar to
the tasks in analogy questions in standardized tests such as the Scholastic Aptitude Test (SAT): A is
to B as C is to what?
In this paper, we introduce VISALOGY to address the problem of solving visual analogy questions.
Three images Ia , Ib , and Ic are provided as input and a fourth image Id must be selected such that
Ia is to Ib as Ic is to Id . This involves discovering an extendable mapping from Ia to Ib and then
applying it to Ic to find Id . Estimating such a mapping for natural images using current feature
spaces would require careful alignment, complex reasoning, and potentially expensive training data.
Instead, we learn an embedding space where reasoning about analogies can be performed by simple
vector transformations. This is in fact aligned with the traditional logical understanding of analogy
as an arrow or homomorphism from source to the target.
Our goal is to learn a representation that, given a set of training analogies, can generalize to unseen
analogies across various categories and attributes. Figure 1 shows an example visual analogy question. Answering this question entails discovering the mapping from the brown bear to the white
bear (in this case a color change), applying the same mapping to the brown dog, and then searching
among a set of images (the middle row in Figure 1) to find an example that respects the discovered
mapping from the brown dog best. Such a mapping should ideally prefer white dogs. The bottom
row shows a ranking imposed by VISALOGY.
We propose learning an embedding that encourages pairs of analogous images with similar mappings
to be close together. Specifically, we learn a Convolutional Neural Network (CNN) with Siamese
quadruple architecture (Figure 2) to obtain an embedding space where analogical reasoning can be
[Figure 1 graphic: an analogy question (top); a test set of correct answers mixed with distractor negative images (middle); the top-ranked answer selections by our method (bottom).]
Figure 1: Visual analogy question asks for a missing image Id given three images Ia , Ib , Ic in the analogy
quadruple. Solving a visual analogy question entails discovering the mapping from Ia to Ib and applying it
to Ic and search among a set of images (the middle row) to find the best image for which the mapping holds.
The bottom row shows an ordering of the images imposed by V ISALOGY based on how likely they can be the
answer to the analogy question.
done with simple vector transformations. Doing so involves fine tuning the last layers of our network
so that the difference in the unit normalized activations between analogue images is similar for image
pairs with similar mapping and dissimilar for those that are not. We also evaluate V ISALOGY on
generalization to unseen analogies. To show the benefits of the proposed method, we compare
V ISALOGY against competitive baselines that use standard CNNs trained for classification. Our
experiments are conducted on datasets containing natural images as well as synthesized images and
the results include quantitative evaluations of V ISALOGY across different sizes of distractor sets.
The performance in solving analogy questions is directly affected by the size of the set from which
the candidate images are selected.
In this paper we study the problem of visual analogies for natural images and show the first results
of its kind on solving visual analogy questions for natural images. Our proposed method learns
an embedding where similarities are transferable across pairs of analogous images using a Siamese
network architecture. We introduce Visual Analogy Question Answering (VAQA), a dataset of natural images that can be used to generate analogies across different objects attributes and actions of
animals. We also compile a large set of analogy questions using the 3D chair dataset [4] containing analogies across viewpoint and style. Our experimental evaluations show promising results on
solving visual analogy questions. We explore different kinds of analogies with various numbers of
distracters, and show generalization to unseen analogies.
2 Related Work
The problem of solving analogy questions has been explored in NLP using word-pair connectives [5], supervised learning [6, 7, 8], distributional similarities [9], word vector representations
and linguistic regularities [10], and learning by reading [11].
Solving analogy questions for diagrams and sketches has been extensively explored in AI [12].
These papers either assume simple forms of drawings [13], require an abstract representation of
diagrams [14], or spatial reasoning [15]. In [16] an analogy-based framework is proposed to learn
"image filters" between a pair of images to create an "analogous" filtered result on a third image.
Related to analogies is learning how to separate category and style properties in images, which has
been studied using bilinear models [17]. In this paper, we study the problem of visual analogies for
natural images possessing different semantic properties where obtaining abstract representations is
extremely challenging.
Our work is also related to metric learning using deep neural networks. In [18] a convolutional
network is learned in a Siamese architecture for the task of face verification. Attributes have been
shown to be effective representations for semantical image understanding [19]. In [20], the relative
attributes are introduced to learn a ranking function per attribute. While these methods provide an
efficient feature representation to group similar objects and map similar images nearby each other in
an embedding space, they do not offer a semantic space that can capture object-to-object mapping
and cannot be directly used for object-to-object analogical inference. In [21] the relationships between multiple pairs of classes are modeled via analogies, which is shown to improve recognition
as well as GRE textual analogy tests. In our work we learn analogies without explicitly considering
categories and no textual data is provided in our analogy questions.
Learning representations using both textual and visual information has also been explored using
deep architectures. These representations show promising results for learning a mapping between
2
visual data [22] in the same way as was shown for text [23]. We differ from these methods in that
our objective is directly optimized for analogy questions and our method does not use textual
information.
Different forms of visual reasoning have been explored in the Question-Answering domain. Recently,
the visual question answering problem has been studied in several papers [24, 25, 26, 27, 28, 29].
In [25] a method is introduced for answering several types of textual questions grounded with images while [27] proposes the task of open-ended visual question answering. In another recent approach [26], knowledge extracted from web visual data is used to answer open-domain questions.
While these works all use visual reasoning to answer questions, none have considered solving analogy questions.
3 Our Approach
We pose answering a visual analogy question I1 : I2 :: I3 :? as the problem of discovering the
mapping from image I1 to image I2 and searching for an image I4 that has the same relation to
image I3 as I1 to I2. Specifically, we find a function T (parametrized by θ) that maps each pair of
images (I1, I2) to a vector x12 = T(X1, X2; θ). The goal is to solve for parameters θ such that
x12 ≈ x34 for positive image analogies I1 : I2 :: I3 : I4. As we describe below, T is computed
using the differences in ConvNet output features between images.
3.1 Quadruple Siamese Network
A positive training example for our network is an analogical quadruple of images [I1 : I2 :: I3 :
I4 ] where the transformation from I3 to I4 is the same as that of I1 to I2 . To be able to solve
the visual analogy problem, our learned parameters θ should map these two transformations to a
similar location. To formalize this, we use a contrastive loss function L to measure how well T is
capable of placing similar transformations nearby in the embedding space and pushing dissimilar
transformations apart. Given a d-dimensional feature vector x for each pair of input images, the
contrastive loss is defined as:
Lm(x12, x34) = y ||x12 − x34|| + (1 − y) max(m − ||x12 − x34||, 0)    (1)
where x12 and x34 refer to the embedding feature vectors for (I1, I2) and (I3, I4) respectively. The label
y is 1 if the input quadruple [I1 : I2 :: I3 : I4] is a correct analogy and 0 otherwise. Also, m > 0 is
the margin parameter: the loss pulls x12 and x34 close to each other in the embedding space if y = 1,
and forces the distance between x12 and x34 in wrong analogy pairs (y = 0) to be larger than m.
We train our network with both correct and wrong analogy quadruples, and
the error is backpropagated through stochastic gradient descent to adjust the network weights θ.
The overview of our network architecture is shown in Figure 2.
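A minimal sketch of this loss for a single quadruple is shown below; the use of NumPy and the default margin value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def contrastive_loss(x12, x34, y, m=0.4):
    """Single-margin contrastive loss of Eq. (1): pulls the pair embeddings
    of correct analogies (y = 1) together and pushes those of wrong
    analogies (y = 0) at least a margin m apart. m = 0.4 is illustrative."""
    d = np.linalg.norm(x12 - x34)
    return y * d + (1 - y) * max(m - d, 0.0)
```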
To compute the embedding vectors x we use the quadruple Siamese architecture shown in Figure 2.
Using this architecture, each image in the analogy quadruple is fed through a ConvNet (AlexNet
[30]) with shared parameters θ. The label y shows whether the input quadruple is a correct analogy
(y = 1) or a false analogy (y = 0) example. To capture the transformation between image pairs
(I1, I2) and (I3, I4), the outputs of the last fully connected layer are subtracted. We normalize our
embedding vectors to have unit L2 length, which results in the Euclidean distance being the same as
the cosine distance. If Xi are the outputs of the last fully connected layer in the ConvNet for image
Ii, xij = T(Xi, Xj; θ) is computed by:

T(Xi, Xj; θ) = (Xi − Xj) / ||Xi − Xj||    (2)
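A direct sketch of Eq. (2) follows; the small eps guard against division by zero is our own implementation assumption.

```python
import numpy as np

def pair_embedding(xi, xj, eps=1e-12):
    """Eq. (2): unit-normalized difference of the last fully connected
    layer outputs of two images, so that Euclidean comparisons between
    pair embeddings behave like cosine comparisons."""
    diff = xi - xj
    return diff / (np.linalg.norm(diff) + eps)
```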
Using the loss function defined in Equation (1) may lead to the network overfitting. Positive analogy
pairs in the training set can get pushed too close together in the embedding space during training.
To overcome this problem, we consider a margin mP > 0 for positive analogy quadruples. In this
case, x12 and x34 in the positive analogy pairs will be pushed close to each other only if the distance
between them is bigger than mP > 0. It is clear that 0 ≤ mP ≤ mN should hold between the two
margins.

LmP,mN(x12, x34) = y max(||x12 − x34|| − mP, 0) + (1 − y) max(mN − ||x12 − x34||, 0)    (3)
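For completeness, a sketch of the double-margin loss of Eq. (3); the default margins follow the values reported later in Section 4.1, and the implementation itself is ours.

```python
import numpy as np

def double_margin_loss(x12, x34, y, m_p=0.2, m_n=0.4):
    """Eq. (3): positive pairs (y = 1) are pulled together only while their
    distance exceeds m_p; negative pairs (y = 0) are pushed apart until
    their distance exceeds m_n, with 0 <= m_p <= m_n."""
    d = np.linalg.norm(x12 - x34)
    return y * max(d - m_p, 0.0) + (1 - y) * max(m_n - d, 0.0)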
[Figure 2 diagram: the quadruple Siamese network, with shared-parameter AlexNet branches (convolutional layers of width 96, 256, 384, 384, 256 and fully connected layers of width 4096) processing I1-I4; L2-normalized pair differences x12 and x34 are compared under single-margin and double-margin embedding losses for one positive and one negative analogy instance.]
Figure 2: VISALOGY Network has quadruple Siamese architecture with shared θ parameters. The network
is trained with correct analogy quadruples of images [I1 , I2 , I3 , I4 ] along with wrong analogy quadruples as
negative samples. The contrastive loss function pushes (I1 , I2 ) and (I3 , I4 ) of correct analogies close to each
other in the embedding space while forcing the distance between (I1 , I2 ) and (I3 , I4 ) in negative samples to
be more than margin m.
3.2 Building Analogy Questions
For creating a dataset of visual analogy questions we assume each training image has information
(c, p) where c ∈ C denotes its category and p ∈ P denotes its property. Example properties include
color, actions, and object orientation. A valid analogy quadruple should have the form:
[I1^(ci,p1) : I2^(ci,p2) :: I3^(co,p1) : I4^(co,p2)]
where the two input images I1 and I2 have the same category ci , but their properties are different.
That is, I1 has the property p1 while I2 has the property p2 . Similarly, the output images I3 and I4
share the same category co where ci ≠ co. Also, I3 has the property p1 while I4 has the property p2,
and p1 ≠ p2.
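These constraints can be summarized in a small validity check; the sketch below assumes each image carries a (category, property) label and is not the authors' code.

```python
def is_valid_analogy(l1, l2, l3, l4):
    """Check the constraints above for labels li = (category, property) of
    the quadruple [I1 : I2 :: I3 : I4]."""
    (c1, p1), (c2, p2), (c3, p3), (c4, p4) = l1, l2, l3, l4
    return (c1 == c2 and c3 == c4 and c1 != c3       # same category within pairs
            and p1 == p3 and p2 == p4 and p1 != p2)  # matching, distinct properties
```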
Generating Positive Quadruples: Given a set of labeled images, we construct our set of analogy
types. We select two distinct categories c, c′ ∈ C and two distinct properties p, p′ ∈ P which are
shared between c and c′. Using these selections, we can build 4 different analogy types (either
c or c′ can be considered as ci and co, and similarly for p and p′). For each analogy type (e.g.
[(ci , p1 ) : (ci , p2 ) :: (co , p1 ) : (co , p2 )]), we can generate a set of positive analogy samples by
combining corresponding images. This procedure provides a large number of positive analogy pairs.
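A possible sampler for one such positive quadruple is sketched below; images_by_label is an assumed index from a (category, property) pair to a list of image ids.

```python
import random

def sample_positive_quadruple(images_by_label, c, c_prime, p, p_prime):
    """Draw one positive quadruple for the analogy type
    [(c, p) : (c, p') :: (c', p) : (c', p')]."""
    return [random.choice(images_by_label[(c, p)]),
            random.choice(images_by_label[(c, p_prime)]),
            random.choice(images_by_label[(c_prime, p)]),
            random.choice(images_by_label[(c_prime, p_prime)])]
```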
Generating Negative Quadruples: Using only positive samples for training the network leads
to degenerate models, since the loss can be made zero by simply mapping each input image to a
constant vector. Therefore, we also generate quadruples that violate the analogy rules as negative
samples during training. To generate negative quadruples, we take two approaches. In the first
approach, we randomly select 4 images from the whole set of training images and each time check
that the generated quadruple is not a valid analogy. In the second approach, we first generate a
positive analogy quadruple, then we randomly replace either of I3 or I4 with an improper image to
break the analogy. Suppose we select I3 for replacement. Then we can either randomly select an
image with category co and property p̃ where p̃ ≠ p1 and p̃ ≠ p2, or we can randomly select an
image with property p1 but with a category c̃ where c̃ ≠ co. The second approach generates a set
of hard negatives to help improve training. During the training, we randomly sample from the whole
set of possible negatives.
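The second, hard-negative strategy might look as follows (the first strategy simply draws four random images and rejects valid analogies); the index structure and the 50/50 choice between the two corruption modes are our assumptions.

```python
import random

def sample_hard_negative(images_by_label, categories, properties):
    """Build a positive quadruple for a random analogy type, then corrupt
    I3 so the analogy breaks, following the two replacement rules above.
    Assumes every (category, property) label used here has images."""
    c, c2 = random.sample(categories, 2)
    p1, p2 = random.sample(properties, 2)
    quad = [random.choice(images_by_label[(c, p1)]),
            random.choice(images_by_label[(c, p2)]),
            random.choice(images_by_label[(c2, p1)]),
            random.choice(images_by_label[(c2, p2)])]
    if random.random() < 0.5:
        # keep category c2 but pick a property outside {p1, p2}
        p_bad = random.choice([p for p in properties if p not in (p1, p2)])
        quad[2] = random.choice(images_by_label[(c2, p_bad)])
    else:
        # keep property p1 but pick a category other than c2
        c_bad = random.choice([cc for cc in categories if cc != c2])
        quad[2] = random.choice(images_by_label[(c_bad, p1)])
    return quad
```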
4 Experiments
Testing Scenario and Evaluation Metric: To evaluate the performance of our method for solving
visual analogy questions, we create a set of correct analogy quadruples [I1 : I2 :: I3 :?] using the
(c, p) labels of images. Given a set D of images which contain both positive and distracter images,
we would like to rank each image Ii in D based on how well it completes the analogy. We compute
the corresponding feature embeddings x1 , x2 , x3 , for each of the input images as well as xi for each
image in D and we rank based on:
[Figure 3 plots: Recall vs. Top-k retrieval (log scale) for Ours, AlexNet ft, AlexNet, and Chance, one panel per distractor-set size D = 100, 500, 1000, 2000.]
Figure 3: Quantitative evaluation (log scale) on 3D chairs dataset. Recall as a function of the number (k) of
images returned (Recall at top-k). For each question the recall at top-k is either 0 or 1 and is averaged over
10,000 questions. The size of the distractor set D is varied D = [100, 500, 1000, 2000]. "AlexNet": AlexNet,
"AlexNet ft": AlexNet fine-tuned on the chairs dataset for categorizing view-points.
rank_i = T(I1, I2) · T(I3, Ii) / (||T(I1, I2)|| ||T(I3, Ii)||),   i ∈ 1, ..., n    (4)
where T(·) is the embedding obtained from our network as explained in Section 3. We consider the
images with the same category c as I3 and the same property p as I2 to be correct retrievals,
and thus positive images, and the rest of the images in D as negative images. We compute the recall
at top-k to measure whether or not an image with an appropriate label has appeared in the top k
retrieved images.
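Under the same notation, a sketch (ours, not the authors' code) combining the ranking of Eq. (4) with the top-k recall measurement; the helper _t mirrors Eq. (2), and the eps guard is an assumption.

```python
import numpy as np

def _t(xa, xb, eps=1e-12):
    # unit-normalized feature difference, as in Eq. (2)
    d = xa - xb
    return d / (np.linalg.norm(d) + eps)

def rank_candidates(x1, x2, x3, cand_feats):
    """Score each candidate Ii by the cosine between T(I1, I2) and
    T(I3, Ii), Eq. (4), and return candidate indices sorted best-first."""
    ref = _t(x1, x2)
    scores = np.array([float(ref @ _t(x3, xi)) for xi in cand_feats])
    return np.argsort(-scores)

def recall_at_top_k(ranking, positive_indices, k):
    """1 if any correct image appears among the top-k retrievals, else 0;
    the reported curves average this over many questions."""
    return float(any(i in positive_indices for i in ranking[:k]))
```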
Baseline: It has been shown that the output of the 7th layer in AlexNet produces high quality
state-of-the-art image descriptors [30]. In each of our experiments, we compare the performance of
solving visual analogy problems using the image embedding obtained from our network with the
image representation of AlexNet. In practice, we pass each test image through AlexNet and our
network, and extract the output from the last fully connected layer using both networks. Note that
for solving general analogy questions the set of properties and categories are not known at the test
time. Accordingly, our proposed network does not use any labels during training and aims to
generalize the transformations without explicitly using the category and property labels.
Dataset: To evaluate the capability of our trained network for solving analogy questions in the test
scenarios explained above, we use a large dataset of 3D chairs [4] as well as a novel dataset of
natural images (VAQA), that we collected for solving analogy questions on natural images.
4.1 Implementation Details
In all the experiments, we use stochastic gradient descent (SGD) to train our network. For initializing
the weights of our network, we use the AlexNet pre-trained network for the task of large-scale object
recognition (ILSVRC2012) provided by the BVLC Caffe website [31]. We fine-tune the last two
fully connected layers (fc6, fc7) and the last convolutional layer (conv5) unless stated otherwise. We
have also used the double margin loss function introduced in Equation 3 with mP = 0.2, mN = 0.4
which we empirically found to give the best results on a held-out validation set. The effect of using
a single margin vs. double margin loss function is also investigated in section 4.4.
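These settings can be collected into a small configuration sketch; the field names are ours, the values come from the text above, and details the paper does not state here (e.g. learning rate, stopping criterion) are deliberately omitted.

```python
# Hypothetical summary of the reported training setup; field names are
# ours, values come from the text above.
TRAIN_CONFIG = {
    "optimizer": "sgd",
    "init_weights": "bvlc_alexnet_ilsvrc2012",   # Caffe reference model [31]
    "finetuned_layers": ["conv5", "fc6", "fc7"],
    "loss": "double_margin_contrastive",         # Eq. (3)
    "margin_positive": 0.2,                      # m_P, from held-out validation
    "margin_negative": 0.4,                      # m_N
}
```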
4.2 Analogy Question Answering Using 3D Chairs
We use a large collection of 1,393 models of chairs with different styles introduced in [4]. To make
the dataset, the CAD models are downloaded from Google/Trimble 3D Warehouse and each chair style
is rendered on a white background from different view points. For making analogy quadruples, we use
31 different view points of each chair style which results in 1,393*31 = 43,183 synthesized images.
In this dataset, we treat different styles as different categories and different view points as different
properties of the images according to the explanations given in section 3.2. We randomly select 1000
styles and 16 view points for training and keep the rest for testing. We use the rest of 393 classes
of chairs with 15 view points (which are completely unseen during the training) to build unseen
analogy questions that test the generalization capability of our network at test time. To construct an
analogy question, we randomly select two different styles and two different view points. The first
part of the analogy quadruple (I1 , I2 ) contains two images with the same style and with two different
view points. The images from the second half of the analogy quadruple (I3 , I4 ), have another style
and I3 has the same viewpoint as I1 and I4 has the same view point as I2 . Together, I1 , I2 , I3 and I4
build an analogy question (I1 : I2 :: I3 : ?) where I4 is the correct answer. Using this approach, the
total number of positive analogies that could be used during training is
$\binom{1000}{2} \cdot \binom{16}{2} \cdot 4 = 999{,}240$.
Figure 4: Left: Several examples of analogy questions from the 3D chairs dataset. In each question, the first
and second chair have the same style while their view points change. The third image has the same view point
as the first image but in a different style. The correct answer to each question is retrieved from a set with 100
distractors and should have the same style as the third image while its view point should be similar to the second
image. Middle: Top-4 retrievals using the features obtained from our method. Right: Top-4 retrievals using
AlexNet features. All retrievals are sorted from left to right.
To train our network, we uniformly sampled 700,000 quadruples (of positive and negative analogies) and initialized the weights with the AlexNet pre-trained network and fine-tuned its parameters.
Figure 4 shows several samples of the analogy questions (left column) used at test time and the top-4
images retrieved by our method (middle column) compared with the baseline (right column). We
see that our proposed approach can retrieve images with a style similar to that of the third image and
with a view-point similar to the second image while the baseline approach is biased towards retrieving chairs with a style similar to that of the first and the second image. To quantitatively compare
the performance of our method with the baseline, we randomly generated 10,000 analogy questions
using the test images and report the average recall at top-k retrieval while varying the number of
irrelevant images (D) in the distractor set. Note that, since there is only one image corresponding
to each (style , view-point), there is only one positive answer image for each question. The performance of chance at the top-kth retrieval is nk where n is the size of D. The images of this dataset
are synthesized and do not follow natural image statistics. Therefore, to be fair at comparing the
results obtained from our network with that of the baseline (AlexNet), we fine-tune all layers of the
AlexNet via a soft-max loss for categorization of different view-points and using the set of images
seen during training. We then use the features obtained from the last fully connected layer (fc7) of
this network to solve analogy questions. As shown in Figure 3, fine-tuning all layers of AlexNet
(the violet curve referred to as ?AlexNet,ft? in the diagram) helps improve the performance of the
baseline. However, the recall of our network still outperforms it with a large margin.
4.3 Analogy Question Answering using VAQA Dataset
As explained in section 3.2, to construct a natural image analogy dataset we need to have images of
numerous object categories with distinguishable properties. We also need to have these properties
be shared amongst object categories so that we can make valid analogy quadruples using the (c, p)
labels. In natural images, we consider the property of an object to be either the action that it is doing
(for animate objects) or its attribute (for both animate and non-animate objects). Unfortunately, we
found that current datasets have a sparse number of object properties per class, which restricts the
number of possible analogy questions. For instance, many action datasets are human centric, and do
not have analogous actions for animals. As a result, we collected our own dataset VAQA for solving
visual analogy questions.
Data collection: We considered a list of ?attributes? and ?actions? along with a list of common
objects and paired them to make a list of (c, p) labels for collecting images. Out of this list, we
removed (c, p) combinations that are not common in the real world (e.g. (horse,blue) is not common
in the real world though there might be synthesized images of ?blue horse? in the web). We used
the remaining list of labels to query Google Image Search with phrases made from concatenation
of word c and p and downloaded 100 images for each phrase. The images are manually verified
to contain the concept of interest. However, we did not pose any restriction about the view-point
of the objects. After the pruning step, there exists around 70 images per category with a total of
7,500 images. The VAQA dataset consists of images corresponding to 112 phrases which are made
out of 14 different categories and 22 properties. Using the shared properties amongst categories we
can build 756 types of analogies. In our experiments, we used over 700,000 analogy questions for
training our network.
[Figure 5 plots: Recall vs. Top-k retrieval for Ours, AlexNet features, and Chance; panels: Seen Attribute Analogies, Seen Action Analogies, Unseen Attribute Analogies, Unseen Action Analogies.]
Figure 5: Quantitative evaluation (log scale) on the VAQA dataset using "attribute" and "action" analogy
questions. Recall as a function of the number (k) of images returned (Recall at top-k). For each question the
recall at top-k is averaged over 10,000 questions. The size of the distractor set is fixed at 250 in all experiments.
Results shown for analogy types seen in training are shown in the left two plots, and for analogy types not seen
in training in the two right plots.
Attribute analogy: Following the procedure explained in Section 3.2 we build positive and negative
quadruples to train our network. To be able to test the generalization of the learned embeddings for
solving analogy question types that are not seen during training, we randomly select 18 attribute
analogy types and remove samples of them from the training set of analogies. Using the remaining
analogy types, we sampled a total of 700,000 quadruples (positive and negative) that are used to
train the network.
Action analogy: Similarly, we trained our network to learn action analogies. For the generalization
test, we remove 12 randomly selected analogy types and make the training quadruples using the
remaining types. We sampled 700,000 quadruples (positive and negative) to train the network.
Evaluation on VAQA: Using the unseen images during the training, we make analogy quadruples
to test the trained networks for the "attribute" and "action" analogies. For evaluating the specification
and generalization of our trained network we generate analogy quadruples in two scenarios of "seen"
and "unseen" analogies using the analogy types seen during training and the ones in the withheld
sets respectively. In each of these scenarios, we generated 10,000 analogy questions and report the
average recall at top-k. For each question [I1 : I2 :: I3 :?], images that have property p equal to that
of I2 and category c equal to I3 are considered as correct answers. The result is around 4 positive
images for each question and we fix the distracter set to have 250 negative images for each question.
Given the small size of our distracter set, we report the average recall at top-10. The obtained
results for the different scenarios are summarized in Figure 5. In all cases, our method outperforms
the baseline.
Other than training separate networks for "attribute" and "action" analogies, we trained and tested
our network with a combined set of analogy questions and obtained promising results, with a gap
of 5% compared to our baseline on the top-5 retrievals of the seen analogy questions. Note that
our current dataset only has one property label per image (either for "attribute" or "action"). Thus,
a negative analogy for one property may be positive for the other. A more thorough analysis would
require multi-property data, which we leave for future work.
Qualitative Analysis: Figure 6 shows examples of attribute analogy questions that are used for
evaluating our network along with the top five retrieved images obtained from our method and the
baseline method. As explained above, during the data collection we only prune out images that
do not contain the (c, p) of interest. Also, we do not pose any restriction for generating positive
quadruples such as restricting the objects to have similar pose or having the same number of objects
of interest in the quadruples. However, as can be seen in Figure 6, our network has been able to
implicitly learn to generalize the count of objects. For example, in the first row of Figure 6, an
image pair is ["dog swimming" : "dog standing"] and the second part of the analogy has an image of
multiple horses swimming. Given this analogy question as input, our network has retrieved images
with multiple standing horses in the top five retrievals.
4.4 Ablation Study
In this section, we investigate the effect of training the network with double margins (mP , mN ) for
positive and negative analogy quadruples compared with only using one single margin for negative
quadruples. We perform an ablation experiment where we compare the performance of the network
at top-k retrieval while being trained using either of the loss functions explained in Section 4. Also,
in two different scenarios, we either fine-tune only the top fully connected layers fc6 and fc7 (re-
Figure 6: Left: Samples of test analogy questions from VAQA dataset. Middle: Top-4 retrievals using the
features obtained from our method. Right: Top-4 retrievals using AlexNet features.
[Figure 7 plots: Recall vs. Top-k retrieval for Ours[ft(fc6,fc7,c5)+(mP,mN)], Ours[ft(fc6,fc7)+(mP,mN)], Ours[ft(fc6,fc7,c5)+(mN)], Ours[ft(fc6,fc7)+(mN)], AlexNet features, and Chance; panels: Testing with Seen Analogy types and Testing with Unseen Analogy types.]
Figure 7: Quantitative comparison for the effect of using double margin vs. single margin for training the
VISALOGY network.
ferred to as "ft(fc6,fc7)" in Figure 7) or the top fully connected layers plus the last convolutional
layer c5 (referred to as "ft(fc6,fc7,c5)" in Figure 7). We use a fixed training sample set consisting of 700,000 quadruples generated from the VAQA dataset in this experiment. In each case, we
test the trained network using samples coming from the set of analogy questions whose types are
seen/unseen during the training. As can be seen from Figure 7, using double margins (mP , mN ) in
the loss function has resulted in better performance in both testing scenarios. While using double
margins results in a small increase in the "seen analogy types" testing scenario, it has considerably
increased the recall when the network was tested with "unseen analogy types". This demonstrates
that the use of double margins helps generalization.
5 Conclusion
In this work, we introduce the new task of solving visual analogy questions. For exploring the
task of visual analogy questions we provide a new dataset of natural images called VAQA. We
answer the questions using a Siamese ConvNet architecture that provides an image embedding that
maps together pairs of images that share similar property differences. We have demonstrated the
performance of our proposed network using two datasets and have shown that our network can
provide an effective feature representation for solving analogy problems compared to state-of-the-art image representations.
Acknowledgments: This work was in part supported by ONR N00014-13-1-0720, NSF IIS-1218683, NSF IIS-1338054, and an Allen Distinguished Investigator Award.
References
[1] Gentner, D., Holyoak, K.J., Kokinov, B.N.: The analogical mind: Perspectives from cognitive science.
MIT press (2001)
[2] Shelley, C.: Multiple analogies in science and philosophy. John Benjamins Publishing (2003)
[3] Juthe, A.: Argument by analogy. Argumentation (2005)
[4] Aubry, M., Maturana, D., Efros, A., Russell, B., Sivic, J.: Seeing 3d chairs: exemplar part-based 2d-3d
alignment using a large dataset of cad models. In: CVPR. (2014)
[5] Turney, P.D.: Similarity of semantic relations. Comput. Linguist. (2006)
[6] Turney, P.D., Littman, M.L.: Corpus-based learning of analogies and semantic relations. CoRR (2005)
[7] Baroni, M., Lenci, A.: Distributional memory: A general framework for corpus-based semantics. Comput. Linguist. (2010)
[8] Jurgens, D.A., Turney, P.D., Mohammad, S.M., Holyoak, K.J.: Semeval-2012 task 2: Measuring degrees
of relational similarity, ACL (2012)
[9] Turney, P.D., Pantel, P.: From frequency to meaning: Vector space models of semantics. J. Artif. Int. Res.
(2010)
[10] Levy, O., Goldberg, Y.: Linguistic regularities in sparse and explicit word representations. In: CoNLL,
ACL (2014)
[11] Barbella, D.M., Forbus, K.D.: Analogical dialogue acts: Supporting learning by reading analogies in
instructional texts. In: AAAI. (2011)
[12] Chang, M.D., Forbus, K.D.: Using analogy to cluster hand-drawn sketches for sketch-based educational
software. AI Magazine (2014)
[13] Forbus, K.D., Usher, J.M., Tomai, E.: Analogical learning of visual/conceptual relationships in sketches.
In: AAAI. (2005)
[14] Forbus, K., Usher, J., Lovett, A., Lockwood, K., Wetzel, J.: Cogsketch: Sketch understanding for cognitive science research and for education. Topics in Cognitive Science (2011)
[15] Chang, M.D., Wetzel, J.W., Forbus, K.D.: Spatial reasoning in comparative analyses of physics diagrams.
In: Spatial Cognition IX. (2014)
[16] Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: SIGGRAPH,
ACM (2001)
[17] Tenenbaum, J.B., Freeman, W.T.: Separating style and content with bilinear models. Neural computation
(2000)
[18] Chopra, S., Hadsell, R., LeCun, Y.: Learning a similarity metric discriminatively, with application to face
verification. In: CVPR. (2005)
[19] Farhadi, A., Endres, I., Hoiem, D., Forsyth, D.: Describing objects by their attributes. In: CVPR. (2009)
[20] Parikh, D., Grauman, K.: Relative attributes. In: ICCV. (2011)
[21] Hwang, S.J., Grauman, K., Sha, F.: Analogy-preserving semantic embedding for visual object categorization. In: ICML. (2013)
[22] Kiros, R., Salakhutdinov, R., Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural
language models. arXiv preprint arXiv:1411.2539 (2014)
[23] Mikolov, T., Yih, W.t., Zweig, G.: Linguistic regularities in continuous space word representations. In:
HLT-NAACL. (2013)
[24] Geman, D., Geman, S., Hallonquist, N., Younes, L.: Visual turing test for computer vision systems.
PNAS (2015)
[25] Malinowski, M., Fritz, M.: A multi-world approach to question answering about real-world scenes based
on uncertain input. In: NIPS. (2014)
[26] Sadeghi, F., Kumar Divvala, S., Farhadi, A.: VisKE: Visual Knowledge Extraction and Question Answering by Visual Verification of Relation Phrases. In: CVPR. (2015)
[27] Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: Visual question
answering. In: ICCV. (2015)
[28] Yu, L., Park, E., Berg, A.C., Berg, T.L.: Visual madlibs: Fill in the blank description generation and
question answering. In: ICCV. (2015)
[29] Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based approach to answering
questions about images. In: ICCV. (2015)
[30] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS. (2012)
[31] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
5,277 | 5,778 | Bidirectional Recurrent Convolutional Networks
for Multi-Frame Super-Resolution
Yan Huang¹, Wei Wang¹, Liang Wang¹,²
¹ Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition
² Center for Excellence in Brain Science and Intelligence Technology
Institute of Automation, Chinese Academy of Sciences
{yhuang, wangwei, wangliang}@nlpr.ia.ac.cn
Abstract
Super resolving a low-resolution video is usually handled by either single-image
super-resolution (SR) or multi-frame SR. Single-Image SR deals with each video
frame independently, and ignores intrinsic temporal dependency of video frames
which actually plays a very important role in video super-resolution. Multi-Frame
SR generally extracts motion information, e.g., optical flow, to model the temporal
dependency, which often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term contextual information of temporal sequences well, we propose a bidirectional recurrent convolutional network
for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used
recurrent full connections are replaced with weight-sharing convolutional connections and 2) conditional convolutional connections from previous input layers
to the current hidden layer are added for enhancing visual-temporal dependency
modelling. With the powerful temporal dependency modelling, our model can
super resolve videos with complex motions and achieve state-of-the-art performance. Due to the cheap convolution operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame
methods.
1 Introduction
Since large numbers of high-definition displays have sprung up, generating high-resolution videos
from previous low-resolution contents, namely video super-resolution (SR), is under great demand.
Recently, various methods have been proposed to handle this problem, which can be classified into
two categories: 1) single-image SR [10, 5, 9, 8, 12, 25, 23] super resolves each of the video frames
independently, and 2) multi-frame SR [13, 17, 3, 2, 14, 13] models and exploits temporal dependency
among video frames, which is usually considered as an essential component of video SR.
Existing multi-frame SR methods generally model the temporal dependency by extracting subpixel
motions of video frames, e.g., estimating optical flow based on sparse prior integration or variation
regularity [2, 14, 13]. But such accurate motion estimation can only be effective for video sequences
which contain small motions. In addition, the high computational cost of these methods limits the
real-world applications. Several solutions have been explored to overcome these issues by avoiding
the explicit motion estimation [21, 16]. Unfortunately, they still have to perform implicit motion
estimation to reduce temporal aliasing and achieve resolution enhancement when large motions are
encountered.
Given the fact that recurrent neural networks (RNNs) can well model long-term contextual information for video sequence, we propose a bidirectional recurrent convolutional network (BRCN)
to efficiently learn the temporal dependency for multi-frame SR. The proposed network exploits
three convolutions. 1) Feedforward convolution models visual spatial dependency between a low-resolution frame and its high-resolution result. 2) Recurrent convolution connects the hidden layers
of successive frames to learn temporal dependency. Different from the commonly-used full recurrent
connection in vanilla RNNs, it is a weight-sharing convolutional connection here. 3) Conditional
convolution connects input layers at the previous timestep to the current hidden layer, to further enhance visual-temporal dependency modelling. To simultaneously consider the temporal dependency
from both previous and future frames, we exploit a forward recurrent network and a backward recurrent network, respectively, and then combine them together for the final prediction. We apply the
proposed model to super resolve videos with complex motions. The experimental results demonstrate that the model can achieve state-of-the-art performance, as well as orders of magnitude faster
speed than other multi-frame SR methods.
Our main contributions can be summarized as follows. We propose a bidirectional recurrent convolutional network for multi-frame SR, where the temporal dependency can be efficiently modelled
by bidirectional recurrent and conditional convolutions. It is an end-to-end framework which does
not need pre-/post-processing. We achieve better performance and faster speed than existing multi-frame SR methods.
2 Related Work
We will review the related work from the following perspectives.
Single-Image SR. Irani and Peleg [10] propose the primary work for this problem, followed by
Freeman et al. [8] studying this problem in a learning-based way. To alleviate high computational
complexity, Bevilacqua et al. [4] and Chang et al. [5] introduce manifold learning techniques which
can reduce the required number of image patch exemplars. For further acceleration, Timofte et al.
[23] propose the anchored neighborhood regression method. Yang et al. [25] and Zeyde et al. [26]
exploit compressive sensing to encode image patches with a compact dictionary and obtain sparse
representations. Dong et al. [6] learn a convolutional neural network for single-image SR which
achieves the current state-of-the-art result. In this work, we focus on multi-frame SR by modelling
temporal dependency in video sequences.
Multi-Frame SR. Baker and Kanade [2] extract optical flow to model the temporal dependency in
video sequences for video SR. Then, various improvements [14, 13] around this work are explored
to better handle visual motions. However, these methods suffer from the high computational cost
due to the motion estimation. To deal with this problem, Protter et al. [16] and Takeda et al. [21]
avoid motion estimation by employing nonlocal mean and 3D steering kernel regression. In this
work, we propose bidirectional recurrent and conditional convolutions as an alternative to model
temporal dependency and achieve faster speed.
3 Bidirectional Recurrent Convolutional Network
3.1 Formulation
Given a low-resolution, noisy and blurry video, our goal is to obtain a high-resolution, noise-free
and blur-free version. In this paper, we propose a bidirectional recurrent convolutional network (BRCN) to map the low-resolution frames to high-resolution ones. As shown in Figure 1, the proposed
network contains a forward recurrent convolutional sub-network and a backward recurrent convolutional sub-network to model the temporal dependency from both previous and future frames. Note
that similar bidirectional scheme has been proposed previously in [18]. The two sub-networks of
BRCN are denoted by two black blocks with dash borders, respectively. In each sub-network, there
are four layers including the input layer, the first hidden layer, the second hidden layer and the output
layer, which are connected by three convolutional operations:
1. Feedforward Convolution. The multi-layer convolutions denoted by black lines learn
visual spatial dependency between a low-resolution frame and its high-resolution result.
Similar configurations have also been explored previously in [11, 7, 6].
[Figure 1 diagram: a backward sub-network and a forward sub-network, each mapping an input layer (low-resolution frames X_{i-1}, X_i, X_{i+1}) through a first and a second hidden layer to an output layer (high-resolution frame); legend: feedforward convolution, recurrent convolution, conditional convolution.]
Figure 1: The proposed bidirectional recurrent convolutional network (BRCN).
2. Recurrent Convolution. The convolutions denoted by blue lines aim to model long-term
temporal dependency across video frames by connecting adjacent hidden layers of successive frames, where the current hidden layer is conditioned on the hidden layer at the
previous timestep. We use the recurrent convolution in both forward and backward subnetworks. Such bidirectional recurrent scheme can make full use of the forward and backward temporal dynamics.
3. Conditional Convolution. The convolutions denoted by red lines connect input layer at
the previous timestep to the current hidden layer, and use previous inputs to provide longterm contextual information. They enhance visual-temporal dependency modelling with
this kind of conditional connection.
We denote the frame set of a low-resolution video¹ $\mathcal{X}$ as $\{X_i\}_{i=1,2,\ldots,T}$, and infer the other three
layers as follows.
First Hidden Layer. When inferring the first hidden layer $H_1^f(X_i)$ (or $H_1^b(X_i)$) at the $i$-th timestep in the forward (or backward) sub-network, three inputs are considered: 1) the current input layer $X_i$, connected by a feedforward convolution, 2) the hidden layer $H_1^f(X_{i-1})$ (or $H_1^b(X_{i+1})$) at the $(i-1)$-th (or $(i+1)$-th) timestep, connected by a recurrent convolution, and 3) the input layer $X_{i-1}$ (or $X_{i+1}$) at the $(i-1)$-th (or $(i+1)$-th) timestep, connected by a conditional convolution.
$$H_1^f(X_i) = \sigma(W_{v1}^f * X_i + W_{r1}^f * H_1^f(X_{i-1}) + W_{t1}^f * X_{i-1} + B_1^f)$$
$$H_1^b(X_i) = \sigma(W_{v1}^b * X_i + W_{r1}^b * H_1^b(X_{i+1}) + W_{t1}^b * X_{i+1} + B_1^b) \qquad (1)$$
where $W_{v1}^f$ (or $W_{v1}^b$) and $W_{t1}^f$ (or $W_{t1}^b$) represent the filters of feedforward and conditional convolutions in the forward (or backward) sub-network, respectively. Both of them have the size $c \times f_{v1} \times f_{v1} \times n_1$, where $c$ is the number of input channels, $f_{v1}$ is the filter size and $n_1$ is the number of filters. $W_{r1}^f$ (or $W_{r1}^b$) represents the filters of recurrent convolutions; their filter size $f_{r1}$ is set to 1 to avoid border effects. $B_1^f$ (or $B_1^b$) represents biases. The activation function is the rectified linear unit (ReLU): $\sigma(x) = \max(0, x)$ [15]. Note that in Equation 1, the filter responses of recurrent and
¹ Note that we upscale each low-resolution frame in the sequence to the desired size with bicubic interpolation in advance.
[Figure 2 diagrams: (a) TRBM, with hidden units H_{i-1}, H_i, inputs X_{i-1}, X_i and weight matrices A, B, C; (b) BRCN, with n_1-dimensional hidden vectors H_1^f(X_{i-1}), H_1^f(X_i) patch-connected to inputs X_{i-1}, X_i.]
Figure 2: Comparison between TRBM and the proposed BRCN.
conditional convolutions can be regarded as dynamically changing biases, which focus on modelling
the temporal changes across frames, while the filter responses of feedforward convolution focus on
learning visual content.
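To make Equation 1 concrete, below is a minimal NumPy/SciPy sketch of one timestep of the forward sub-network's first hidden layer. The helper names, the "same"-padding choice, and the toy shapes are our own illustrative assumptions rather than the authors' implementation; only the structure (feedforward, recurrent and conditional terms inside a ReLU) follows the equation.

```python
import numpy as np
from scipy.signal import correlate2d

def relu(x):
    return np.maximum(x, 0.0)

def first_hidden_forward(X, X_prev, H_prev, W_v, W_t, W_r, B):
    """Eq. (1), forward direction:
    H1(X_i) = relu(W_v * X_i + W_r * H1(X_{i-1}) + W_t * X_{i-1} + B).
    X, X_prev : (h, w) frames; H_prev : (n1, h, w) previous hidden maps;
    W_v, W_t  : (n1, f, f) feedforward / conditional filters;
    W_r       : (n1, n1) 1x1 recurrent filters; B : (n1,) biases."""
    n1, h, w = H_prev.shape
    H = np.empty((n1, h, w))
    for k in range(n1):
        feedfwd = correlate2d(X, W_v[k], mode="same")    # spatial content
        cond = correlate2d(X_prev, W_t[k], mode="same")  # temporal context
        recur = np.tensordot(W_r[k], H_prev, axes=1)     # 1x1 conv = channel mix
        H[k] = relu(feedfwd + cond + recur + B[k])
    return H

# Toy usage with the paper's sizes: n1 = 64 filters of size 9x9.
rng = np.random.default_rng(0)
h = w = 32
H1 = first_hidden_forward(rng.standard_normal((h, w)),
                          rng.standard_normal((h, w)),
                          np.zeros((64, h, w)),
                          1e-3 * rng.standard_normal((64, 9, 9)),
                          1e-3 * rng.standard_normal((64, 9, 9)),
                          1e-3 * rng.standard_normal((64, 64)),
                          np.zeros(64))
```

The backward sub-network is identical with $X_{i+1}$ in place of $X_{i-1}$; the second hidden layer (Equation 2) repeats the same pattern on the $n_1$-channel maps.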
Second Hidden Layer. This phase projects the obtained feature maps $H_1^f(X_i)$ (or $H_1^b(X_i)$) from $n_1$ to $n_2$ dimensions, which aims to capture the nonlinear structure in sequence data. In addition to intra-frame mapping by feedforward convolution, we also consider two inter-frame mappings using recurrent and conditional convolutions, respectively. The projected $n_2$-dimensional feature maps in the second hidden layer $H_2^f(X_i)$ (or $H_2^b(X_i)$) in the forward (or backward) sub-network can be obtained as follows:
$$H_2^f(X_i) = \sigma(W_{v2}^f * H_1^f(X_i) + W_{r2}^f * H_2^f(X_{i-1}) + W_{t2}^f * H_1^f(X_{i-1}) + B_2^f)$$
$$H_2^b(X_i) = \sigma(W_{v2}^b * H_1^b(X_i) + W_{r2}^b * H_2^b(X_{i+1}) + W_{t2}^b * H_1^b(X_{i+1}) + B_2^b) \qquad (2)$$
where $W_{v2}^f$ (or $W_{v2}^b$) and $W_{t2}^f$ (or $W_{t2}^b$) represent the filters of feedforward and conditional convolutions, respectively, both of which have the size $n_1 \times 1 \times 1 \times n_2$. $W_{r2}^f$ (or $W_{r2}^b$) represents the filters of recurrent convolution, whose size is $n_2 \times 1 \times 1 \times n_2$.
Note that the inference of the two hidden layers can be regarded as a representation learning phase,
where we could stack more hidden layers to increase the representability of our network to better
capture the complex data structure.
Output Layer. In this phase, we combine the projected n2 -dimensional feature maps in both forward and backward sub-networks to jointly predict the desired high-resolution frame:
$$O(X_i) = W_{v3}^f * H_2^f(X_i) + W_{t3}^f * H_2^f(X_{i-1}) + B_3^f + W_{v3}^b * H_2^b(X_i) + W_{t3}^b * H_2^b(X_{i+1}) + B_3^b \qquad (3)$$
where $W_{v3}^f$ (or $W_{v3}^b$) and $W_{t3}^f$ (or $W_{t3}^b$) represent the filters of feedforward and conditional convolutions, respectively. Their sizes are both $n_2 \times f_{v3} \times f_{v3} \times c$. We do not use any recurrent convolution for the output layer.
3.2 Connection with Temporal Restricted Boltzmann Machine
In this section, we discuss the connection between the proposed BRCN and temporal restricted
boltzmann machine (TRBM) [20] which is a widely used model in sequence modelling.
As shown in Figure 2, TRBM and BRCN contain similar recurrent connections (blue lines) between
hidden layers, and conditional connections (red lines) between input layer and hidden layer. They
share the common flexibility to model and propagate temporal dependency along the time. However, TRBM is a generative model while BRCN is a discriminative model, and TRBM contains an
additional connection (green line) between input layers for sample generation.
In fact, BRCN can be regarded as a deterministic, bidirectional and patch-based implementation of
TRBM. Specifically, when inferring the hidden layer in BRCN, as illustrated in Figure 2 (b), feedforward and conditional convolutions extract overlapped patches from the input, each of which is
fully connected to an $n_1$-dimensional vector in the feature maps $H_1^f(X_i)$. For recurrent convolutions, since each filter size is 1 and all the filters contain $n_1 \times n_1$ weights, an $n_1$-dimensional vector in $H_1^f(X_i)$ is fully connected to the corresponding $n_1$-dimensional vector in $H_1^f(X_{i-1})$ at the previous time step. Therefore, the patch connections of BRCN are actually those of a "discriminative"
TRBM. In other words, by setting the filter sizes of feedforward and conditional convolutions as the
size of the whole frame, BRCN is equivalent to TRBM.
Compared with TRBM, BRCN has the following advantages for handling the task of video superresolution. 1) BRCN restricts the receptive field of original full connection to a patch rather than the
whole frame, which can capture the temporal change of visual details. 2) BRCN replaces all the full
connections with weight-sharing convolutional ones, which largely reduces the computational cost.
3) BRCN is more flexible to handle videos of different sizes, once it is trained on a fixed-size video
dataset. Similar to TRBM, the proposed model can be generalized to other sequence modelling
applications, e.g., video motion modelling [22].
3.3 Network Learning
Through combining Equations 1, 2 and 3, we can obtain the desired prediction $O(\mathcal{X}; \theta)$ from the low-resolution video $\mathcal{X}$, where $\theta$ denotes the network parameters. Network learning proceeds by minimizing the Mean Square Error (MSE) between the predicted high-resolution video $O(\mathcal{X}; \theta)$ and the groundtruth $\mathcal{Y}$:
$$L = \|O(\mathcal{X}; \theta) - \mathcal{Y}\|^2 \qquad (4)$$
via stochastic gradient descent. Actually, stochastic gradient descent is enough to achieve satisfying
results, although we could exploit other optimization algorithms with more computational cost, e.g.,
L-BFGS. During optimization, all the filter weights of recurrent and conditional convolutions are
initialized by randomly sampling from a Gaussian distribution with mean 0 and standard deviation
0.001, whereas the filter weights of feedforward convolution are pre-trained on static images [6].
Note that the pretraining step only aims to speed up training by providing a better parameter initialization, due to the limited size of training set. This step can be avoided by alternatively using a
larger-scale dataset. We experimentally find that using a smaller learning rate (e.g., 1e-4) for the
weights in the output layer is crucial to obtain good performance.
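As a sanity check on the training rule (MSE loss minimized by stochastic gradient descent), here is a self-contained toy in which a single convolutional filter is fit to a synthetic target; the gradient of the squared error w.r.t. the filter is itself a cross-correlation. The 5x5 filter, data sizes and step count are arbitrary assumptions; only the 0.001 Gaussian initialization and the small 1e-4 learning rate mirror the text.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))          # one training patch
W_true = rng.standard_normal((5, 5))
Y = correlate2d(X, W_true, mode="valid")   # synthetic high-resolution target

W = 1e-3 * rng.standard_normal((5, 5))     # Gaussian init, std 0.001
lr = 1e-4                                  # small rate, as for the output layer
for _ in range(500):
    R = correlate2d(X, W, mode="valid") - Y          # residual O - Y
    W -= lr * correlate2d(X, 2.0 * R, mode="valid")  # dL/dW for L = ||O - Y||^2
```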
4 Experimental Results
To verify the effectiveness, we apply the proposed model to the task of video SR, and present both
quantitative and qualitative results as follows.
4.1 Datasets and Implementation Details
We use 25 YUV format video sequences² as our training set, which have been widely used in many
video SR methods [13, 16, 21]. To enlarge the training set, model training is performed in a volume-based way, i.e., cropping multiple overlapped volumes from training videos and then regarding each
volume as a training sample. During cropping, each volume has a spatial size of 32×32 and a
temporal step of 10. The spatial and temporal strides are 14 and 8, respectively. As a result, we
can generate roughly 41,000 volumes from the original dataset. We test our model on a variety
of challenging videos, including Dancing, Flag, Fan, Treadmill and Turbine [19], which contain
complex motions with severe motion blur and aliasing. Note that we do not have to extract volumes
during testing, since the convolutional operation can scale to videos of any spatial size and temporal
step. We generate the testing dataset with the following steps: 1) using Gaussian filter with standard
deviation 2 to smooth each original frame, and 2) downsampling the frame by a factor of 4 with the bicubic method³.
² http://www.codersvoice.com/a/webbase/video/08/152014/130.html
³ Here we focus on the factor of 4, which is usually considered as the most difficult case in super-resolution.
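To make the data preparation of Section 4.1 concrete, the sketch below implements the overlapped volume cropping with the stated sizes and strides, and the blur-plus-downsampling degradation used to synthesize test inputs. Border handling is our own assumption, and scipy's cubic spline zoom stands in for bicubic interpolation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def crop_volumes(video, spatial=32, t_len=10, s_stride=14, t_stride=8):
    """video: (T, H, W) luminance frames -> list of (t_len, spatial, spatial)
    overlapped training volumes (Section 4.1 sizes and strides)."""
    T, H, W = video.shape
    vols = []
    for t in range(0, T - t_len + 1, t_stride):
        for y in range(0, H - spatial + 1, s_stride):
            for x in range(0, W - spatial + 1, s_stride):
                vols.append(video[t:t + t_len, y:y + spatial, x:x + spatial])
    return vols

def degrade(frame, factor=4):
    """Gaussian smoothing (sigma 2), x4 downsampling, then upscaling back
    to the input size; zoom(order=3) approximates bicubic interpolation."""
    low = gaussian_filter(frame, sigma=2)[::factor, ::factor]
    return zoom(low, factor, order=3)
```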
Table 1: The results of PSNR (dB) and running time (sec) on the testing video sequences.
Video     | Bicubic     | SC [25]      | K-SVD [26]  | NE+NNLS [4]  | ANR [23]
          | PSNR  Time  | PSNR  Time   | PSNR  Time  | PSNR  Time   | PSNR  Time
Dancing   | 26.83  -    | 26.80  45.47 | 27.69  2.35 | 27.63  19.89 | 27.67  0.85
Flag      | 26.35  -    | 26.28  12.89 | 27.61  0.58 | 27.41   4.54 | 27.52  0.20
Fan       | 31.94  -    | 32.50  12.92 | 33.55  1.06 | 33.45   8.27 | 33.49  0.38
Treadmill | 21.15  -    | 21.27  15.47 | 22.22  0.35 | 22.08   2.60 | 22.24  0.12
Turbine   | 25.09  -    | 25.77  16.49 | 27.00  0.51 | 26.88   3.67 | 27.04  0.18
Average   | 26.27  -    | 26.52  20.64 | 27.61  0.97 | 27.49   7.79 | 27.59  0.35

Video     | NE+LLE [5]  | SR-CNN [6]  | 3DSKR [21]  | Enhancer [1] | BRCN
          | PSNR  Time  | PSNR  Time  | PSNR  Time  | PSNR  Time   | PSNR  Time
Dancing   | 27.64  4.20 | 27.81  1.41 | 27.81  1211 | 27.06  -     | 28.09  3.44
Flag      | 27.48  0.96 | 28.04  0.36 | 26.89   255 | 26.58  -     | 28.55  0.78
Fan       | 33.46  1.76 | 33.61  0.60 | 31.91   323 | 32.14  -     | 33.73  1.46
Treadmill | 22.22  0.57 | 22.42  0.15 | 22.32   127 | 21.20  -     | 22.63  0.46
Turbine   | 26.98  0.80 | 27.50  0.23 | 24.27   173 | 25.60  -     | 27.71  0.70
Average   | 27.52  1.66 | 27.87  0.55 | 26.64   418 | 26.52  -     | 28.15  1.36
Table 2: The results of PSNR (dB) by variants of BRCN on the testing video sequences. v: feedforward convolution, r: recurrent convolution, t: conditional convolution, b: bidirectional scheme.
Video     | BRCN {v} | BRCN {v,r} | BRCN {v,t} | BRCN {v,r,t} | BRCN {v,r,t,b}
Dancing   | 27.81    | 27.98      | 27.99      | 28.09        | 28.09
Flag      | 28.04    | 28.32      | 28.39      | 28.47        | 28.55
Fan       | 33.61    | 33.63      | 33.65      | 33.65        | 33.73
Treadmill | 22.42    | 22.59      | 22.56      | 22.59        | 22.63
Turbine   | 27.50    | 27.47      | 27.50      | 27.62        | 27.71
Average   | 27.87    | 27.99      | 28.02      | 28.09        | 28.15
Some important parameters of our network are illustrated as follows: $f_{v1}=9$, $f_{v3}=5$, $n_1=64$, $n_2=32$ and $c=1$⁴. Note that varying the number and size of filters does not have a significant impact on the
performance, because some filters with certain sizes are already in a regime where they can almost
reconstruct the high-resolution videos [24, 6].
4.2 Quantitative and Qualitative Comparison
We compare our BRCN with two multi-frame SR methods including 3DSKR [21] and a commercial
software namely Enhancer [1], and seven single-image SR methods including Bicubic, SC [25], K-SVD [26], NE+NNLS [4], ANR [23], NE+LLE [5] and SR-CNN [6].
The results of all the methods are compared in Table 1, where evaluation measures include both peak
signal-to-noise ratio (PSNR) and running time (Time). Specifically, compared with the state-of-the-art single-image SR methods (e.g., SR-CNN, ANR and K-SVD), our multi-frame-based method can surpass them by 0.28-0.54 dB, which is mainly attributed to the beneficial mechanism of temporal
dependency modelling. BRCN also performs much better than the two representative multi-frame
SR methods (3DSKR and Enhancer) by 1.51 dB and 1.63 dB, respectively. In fact, most existing
multi-frame-based methods tend to fail catastrophically when dealing with very complex motions,
because it is difficult for them to estimate the motions with pinpoint accuracy.
For the proposed BRCN, we also investigate the impact of model architecture on the performance.
We take a simplified network containing only feedforward convolution as a benchmark, and then
study its several variants by successively adding other operations including bidirectional scheme,
recurrent and conditional convolutions. The results by all the variants of BRCN are shown in Table
2, where the elements in the brace represent the included operations. As we can see, due to the benefit
⁴ Similar to [23], we only deal with the luminance channel in the YCrCb color space. Note that our model can be generalized to handle all three channels by setting c=3. Here we simply upscale the other two channels with the bicubic method for clear illustration.
[Figure 3 panels: (a) Original, (b) Bicubic, (c) ANR [23], (d) SR-CNN [6], (e) BRCN.]
Figure 3: Closeup comparison among original frames and super resolved results by Bicubic, ANR,
SR-CNN and BRCN, respectively.
of learning temporal dependency, exploiting either recurrent convolution {v, r} or conditional
convolution {v, t} can greatly improve the performance. When combining these two convolutions
together {v, r, t}, they obtain much better results. The performance can still be further promoted
when adding the bidirectional scheme {v, r, t, b}, which results from the fact that each video frame
is related to not only its previous frame but also the future one.
In addition to the quantitative evaluation, we also present some qualitative results in terms of single-frame (in Figure 3) and multi-frame (in Figure 5) comparisons. Please enlarge and view these figures on the
screen for better comparison. From these figures, we can observe that our method is able to recover
more image details than others under various motion conditions.
4.3 Running Time
[Figure 4 scatter plot: running time vs. PSNR for BRCN, SR-CNN, K-SVD, ANR, NE+LLE, NE+NNLS, 3DSKR and SC; markers distinguish single-image and multi-frame SR methods.]
Figure 4: Running time vs. PSNR for all the methods.
We present the comparison of running time in both Table 1 and Figure 4, where all the methods are implemented on the same machine (Intel CPU 3.10 GHz and 32 GB memory). The publicly available codes of compared methods are all in MATLAB while SR-CNN and ours are in Python. From the table and figure, we can see that our BRCN takes 1.36 sec per frame on average, which is orders of magnitude faster than the fast multi-frame SR method 3DSKR. It should be noted that the speed gap is not caused by the different MATLAB/Python implementations. As stated in [13, 21], the computational bottleneck for existing multi-frame SR methods is the accurate motion estimation, while our model explores an alternative based on efficient spatial-temporal convolutions which has lower computational complexity. Note that the speed of our method is worse
than the fastest single-image SR method ANR. It is likely that our method involves the additional
phase of temporal dependency modelling but we achieve better performance (28.15 vs. 27.59 dB).
[Figure 5 panels: (a) Original, (b) Bicubic, (c) ANR [23], (d) SR-CNN [6], (e) BRCN.]
Figure 5: Comparison among original frames (the 2nd, 3rd and 4th frames, from the top row to the
bottom) of the Dancing video and super resolved results by Bicubic, ANR, SR-CNN and BRCN,
respectively.
4.4 Filter Visualization
[Figure 6 panels: (a) $W_{v1}^f$, (b) $W_{t1}^f$, (c) $W_{v3}^f$, (d) $W_{t3}^f$.]
Figure 6: Visualization of learned filters by the proposed BRCN.
We visualize the learned filters of feedforward and conditional convolutions in Figure 6. The filters of $W_{v1}^f$ and $W_{t1}^f$ exhibit some strip-like patterns, which can be viewed as edge detectors. The filters of $W_{v3}^f$ and $W_{t3}^f$ show some centrally-averaging patterns, which indicate that the predicted high-resolution frame is obtained by averaging over the feature maps in the second hidden layer. This averaging operation is also consistent with the corresponding reconstruction phase in patch-based
SR methods (e.g., [25]), but the difference is that our filters are automatically learned rather than
pre-defined. When comparing the learned filters between feedforward and conditional convolutions,
we can also observe that the patterns in the filters of feedforward convolution are much more regular
and clear.
5 Conclusion and Future Work
In this paper, we have proposed the bidirectional recurrent convolutional network (BRCN) for multi-frame SR. Our main contribution is the novel use of a bidirectional scheme, recurrent and conditional
convolutions for temporal dependency modelling. We have applied our model to super resolve
videos containing complex motions, and achieved better performance and faster speed. In the future,
we will perform comparisons with other multi-frame SR methods.
Acknowledgments
This work is jointly supported by National Natural Science Foundation of China (61420106015,
61175003, 61202328, 61572504) and National Basic Research Program of China (2012CB316300).
References
[1] Video enhancer. http://www.infognition.com/videoenhancer/, version 1.9.10. 2014.
[2] S. Baker and T. Kanade. Super-resolution optical flow. Technical report, CMU, 1999.
[3] B. Bascle, A. Blake, and A. Zisserman. Motion deblurring and super-resolution from an image sequence. European Conference on Computer Vision, pages 571-582, 1996.
[4] M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. A. Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. British Machine Vision Conference, 2012.
[5] H. Chang, D.-Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. IEEE Conference on Computer Vision and Pattern Recognition, page I, 2004.
[6] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. European Conference on Computer Vision, pages 184-199, 2014.
[7] D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. IEEE International Conference on Computer Vision, pages 633-640, 2013.
[8] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael. Learning low-level vision. International Journal of Computer Vision, pages 25-47, 2000.
[9] D. Glasner, S. Bagon, and M. Irani. Super-resolution from a single image. IEEE International Conference on Computer Vision, pages 349-356, 2009.
[10] M. Irani and S. Peleg. Improving resolution by image registration. CVGIP: Graphical Models and Image Processing, pages 231-239, 1991.
[11] V. Jain and S. Seung. Natural image denoising with convolutional networks. Advances in Neural Information Processing Systems, pages 769-776, 2008.
[12] K. Jia, X. Wang, and X. Tang. Image transformation based on learning dictionaries across image spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 367-380, 2013.
[13] C. Liu and D. Sun. On Bayesian adaptive video super resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 346-360, 2014.
[14] D. Mitzel, T. Pock, T. Schoenemann, and D. Cremers. Video super resolution using duality based TV-L1 optical flow. Pattern Recognition, pages 432-441, 2009.
[15] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. International Conference on Machine Learning, pages 807-814, 2010.
[16] M. Protter, M. Elad, H. Takeda, and P. Milanfar. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Transactions on Image Processing, pages 36-51, 2009.
[17] R. R. Schultz and R. L. Stevenson. Extraction of high-resolution frames from video sequences. IEEE Transactions on Image Processing, pages 996-1011, 1996.
[18] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, pages 2673-2681, 1997.
[19] O. Shahar, A. Faktor, and M. Irani. Space-time super-resolution from a single video. IEEE Conference on Computer Vision and Pattern Recognition, pages 3353-3360, 2011.
[20] I. Sutskever and G. E. Hinton. Learning multilevel distributed representations for high-dimensional sequences. In International Conference on Artificial Intelligence and Statistics, pages 548-555, 2007.
[21] H. Takeda, P. Milanfar, M. Protter, and M. Elad. Super-resolution without explicit subpixel motion estimation. IEEE Transactions on Image Processing, pages 1958-1975, 2009.
[22] G. Taylor, G. Hinton, and S. Roweis. Modeling human motion using binary latent variables. Advances in Neural Information Processing Systems, pages 448-455, 2006.
[23] R. Timofte, V. De, and L. V. Gool. Anchored neighborhood regression for fast example-based super-resolution. IEEE International Conference on Computer Vision, pages 1920-1927, 2013.
[24] L. Xu, J. S. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems, pages 1790-1798, 2014.
[25] J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE Transactions on Image Processing, pages 2861-2873, 2010.
[26] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. Curves and Surfaces, pages 711-730, 2012.
5,278 | 5,779 | SubmodBoxes: Near-Optimal Search for a Set of
Diverse Object Proposals
Qing Sun, Virginia Tech, sunqing@vt.edu
Dhruv Batra, Virginia Tech, https://mlp.ece.vt.edu/
Abstract
This paper formulates the search for a set of bounding boxes (as needed in object
proposal generation) as a monotone submodular maximization problem over the
space of all possible bounding boxes in an image. Since the number of possible
bounding boxes in an image is very large O(#pixels2 ), even a single linear scan
to perform the greedy augmentation for submodular maximization is intractable.
Thus, we formulate the greedy augmentation step as a Branch-and-Bound scheme.
In order to speed up repeated application of B&B, we propose a novel generalization of Minoux's "lazy greedy" algorithm to the B&B tree. Theoretically, our
proposed formulation provides a new understanding to the problem, and contains
classic heuristic approaches such as Sliding Window+Non-Maximal Suppression
(NMS) and Efficient Subwindow Search (ESS) as special cases. Empirically, we show that our approach leads to state-of-the-art performance on object proposal
generation via a novel diversity measure.
1 Introduction
A number of problems in Computer Vision and Machine Learning involve searching for a set of
bounding boxes or rectangular windows. For instance, in object detection [9, 16, 17, 19, 34, 36, 37],
the goal is to output a set of bounding boxes localizing all instances of a particular object category.
In object proposal generation [2, 7, 39, 41], the goal is to output a set of candidate bounding boxes
that may potentially contain an object (of any category). Other scenarios include face detection,
multi-object tracking and weakly supervised learning [10].
Classical Approach: Enumeration + Diverse Subset Selection. In the context of object detection,
the classical paradigm for searching for a set of bounding boxes used to be:
? Sliding Window [9, 16, 40]: i.e., enumeration over all windows in an image with some
level of sub-sampling, followed by
? Non-Maximal Suppression (NMS): i.e., picking a spatially-diverse set of windows by
suppressing windows that are too close or overlapping.
As several previous works [3, 26, 40] have recognized, the problem with this approach is inefficiency: the number of possible bounding boxes or rectangular subwindows in an image is O(#pixels²).
Even a low-resolution (320 x 240) image contains more than one billion rectangular windows [26]!
As a result, modern object detection pipelines [17, 19, 36] often rely on object proposals as a preprocessing step to reduce the number of candidate object locations to a few hundreds or thousands
(rather than billions).
Interestingly, this migration to object proposals has simply pushed the problem (of searching for a
set of bounding boxes) upstream. Specifically, a number of object proposal techniques [8, 32, 41]
involve the same enumeration + NMS approach, except they typically use cheaper features to serve as a
fast proposal generation step.
Goal. The goal of this paper is to formally study the search for a set of bounding boxes as an optimization problem. Clearly, enumeration + post-processing for diversity (via NMS) is one widely-used heuristic approach. Our goal is to formulate a formal optimization objective and propose an
efficient algorithm, ideally with guarantees on optimization performance.
Challenge. The key challenge is the exponentially-large search space: the number of possible $M$-sized sets of bounding boxes is $\binom{O(\#\mathrm{pixels}^2)}{M} = O(\#\mathrm{pixels}^{2M})$ (assuming $M \le \#\mathrm{pixels}^2/2$).
Figure 1: Overview of our formulation: SubmodBoxes. We formulate the selection of a set of boxes as a constrained submodular maximization problem. The objective and marginal gains consist of two parts: relevance
and diversity. Figure (b) shows two candidate windows ya and yb . Relevance is the sum of edge strength
over all edge groups (black curves) wholly enclosed in the window. Figure (c) shows the diversity term. The
marginal gain in diversity due to a new window (ya or yb ) is the ability of the new window to cover the reference boxes that are currently not well-covered with the already chosen set Y = {y1 , y2 }. In this case, we can
see that ya covers a new reference box b1 . Thus, the marginal gain in diversity of ya will be larger than yb .
Overview of our formulation: SubmodBoxes. Let Y denote the set of all possible bounding boxes
or rectangular subwindows in an image. This is a structured output space [4, 21, 38], with the size of
this set growing quadratically with the size of the input image, |Y| = O(#pixels2 ).
We formulate the selection of a set of boxes as a search problem on the power set 2Y . Specifically,
given a budget of M windows, we search for a set Y of windows that are both relevant (e.g., have
high likelihood of containing an object) and diverse (to cover as many object instances as possible):
$$\operatorname*{argmax}_{Y \in 2^{\mathcal{Y}}} \;\; F(Y) = R(Y) + \lambda \, D(Y) \quad \text{s.t. } |Y| \le M \qquad (1)$$
Here $F$ is the objective, $R(Y)$ the relevance term, $\lambda$ a trade-off parameter, $D(Y)$ the diversity term, $|Y| \le M$ the budget constraint, and the maximization is a search over the power set $2^{\mathcal{Y}}$.
Crucially, when the objective function $F : 2^{\mathcal{Y}} \to \mathbb{R}$ is monotone and submodular, then a simple greedy algorithm (that iteratively adds the window with the largest marginal gain [24]) achieves a near-optimal approximation factor of $(1 - \frac{1}{e})$ [24, 30].
Unfortunately, although conceptually simple, this greedy augmentation step requires an enumeration
over the space of all windows $\mathcal{Y}$ and thus a naïve implementation is intractable.
In this work, we show that for a broad class of relevance and diversity functions, this greedy augmentation step may be efficiently formulated as a Branch-and-Bound (B&B) step [12, 26], with easily
computable upper-bounds. This enables an efficient implementation of greedy, with significantly
fewer evaluations than a linear scan over Y.
Finally, in order to speed up repeated application of B&B across iterations of the greedy algorithm,
we present a novel generalization of Minoux's "lazy greedy" algorithm [29] to the B&B tree, where
different branches are explored in a lazy manner in each iteration.
We apply our proposed technique SubmodBoxes to the task of generating object proposals [2, 7, 39,
41] on the PASCAL VOC 2007 [13], PASCAL VOC 2012 [14], and MS COCO [28] datasets. Our
results show that our approach outperforms all baselines.
Contributions. This paper makes the following contributions:
1. We formulate the search for a set of bounding boxes or subwindows as the constrained
maximization of a monotone submodular function. To the best of our knowledge, despite
the popularity of object recognition and object proposal generation, this is the first such
formal optimization treatment of the problem.
2. Our proposed formulation contains existing heuristics as special cases. Specifically, Sliding Window + NMS can be viewed as an instantiation of our approach under a specific
definition of the diversity function D(?).
3. Our work can be viewed as a generalization of the "Efficient Subwindow Search (ESS)"
of Lampert et al. [26], who proposed a B&B scheme for finding the single best bounding
box in an image. Their extension to detecting multiple objects consisted of a heuristic
for "suppressing" features extracted from the selected bounding box and re-running the
procedure. We show that this heuristic is a special case of our formulation under a specific
diversity function, thus providing theoretical justification to their intuitive heuristic.
4. To the best of our knowledge, our work presents the first generalization of Minoux's "lazy greedy" algorithm [29] to structured-output spaces (the space of bounding boxes).
5. Finally, our experimental contribution is a novel diversity measure which leads to state-of-the-art performance on the task of generating object proposals.
2 Related Work
Our work is related to a few different themes of research in Computer Vision and Machine Learning.
Submodular Maximization and Diversity. The task of searching for a diverse high-quality subset
of items from a ground set has been well-studied in a number of application domains [6, 11, 22,
25, 27, 31], and across these domains submodularity has emerged as a fundamental property of
set functions for measuring diversity of a subset of items. Most previous work has focussed on
submodular maximization on unstructured spaces, where the ground set is efficiently enumerable.
Our work is closest in spirit to Prasad et al. [31], who studied submodular maximization on structured output spaces, i.e. where each item in the ground set is itself a structured object (such as a
segmentation of an image). Unlike [31], our ground set $\mathcal{Y}$ is not exponentially large, only "quadratically" large. However, enumeration over the ground set for the greedy-augmentation step is still
infeasible, and thus we use B&B. Such structured output spaces and greedy-augmentation oracles
were not explored in [31].
Bounding Box Search in Object Detection and Object Proposals. As we mention in the introduction, the search for a set of bounding boxes via heuristics such as Sliding Window + NMS used to be
the dominant paradigm in object recognition [9, 16, 40]. Modern pipelines have shifted that search
step to object proposal algorithms [17, 19, 36]. A comparison and overview of object proposals may
be found in [20]. Zitnick et al. [41] generate candidate bounding boxes via Sliding Window + NMS
based on an "objectness" score, which is a function of the number of contours wholly enclosed by
a bounding box. We use this objectness score as our relevance term, thus making SubmodBoxes
directly comparable to NMS. Another closely related work is [18], which presents an "active search" strategy for reranking selective search [39] object proposals based on contextual cues. Unlike this
work, our formulation is not restricted to any pre-selected set of windows. We search over the entire
power set 2Y , and may generate any possible set of windows (up to convergence tolerance in B&B).
Branch-and-Bound. One key building block of our work is the "Efficient Subwindow Search (ESS)" B&B scheme of Lampert et al. [26]. ESS was originally proposed for single-instance object detection. Their extension to detecting multiple objects consisted of a heuristic for "suppressing" features
extracted from the selected bounding box and re-running the procedure. In this work, we extend
and generalize ESS in multiple ways. First, we show that relevance (objectness scores) and diversity
functions used in the object proposal literature are amenable to upper-bounding and thus B&B optimization. We also show that the "suppression" heuristic used by [26] is a special case of our formulation
under a specific diversity function, thus providing theoretical justification to their intuitive heuristic.
Finally, [3] also proposed the use of B&B for NMS in object detection. Unfortunately, as we explain
later in the paper, the NMS objective is submodular but not monotone, and the classical greedy algorithm does not have approximation guarantees in this setting. In contrast, our work presents a general
framework for bounding-box subset-selection based on monotone submodular maximization.
3 SubmodBoxes: Formulation and Approach
We begin by establishing the notation used in the paper.
Preliminaries and Notation. For an input image x, let Yx denote the set of all possible bounding
boxes or rectangular subwindows in this image. For simplicity, we drop the explicit dependence on $x$, and just use $\mathcal{Y}$. Uppercase letters refer to set functions $F(\cdot), R(\cdot), D(\cdot)$, and lowercase letters refer
to functions over individual items f (y), r(y).
A set function $F : 2^{\mathcal{Y}} \to \mathbb{R}$ is submodular if its marginal gains $F(b \mid S) \triangleq F(S \cup b) - F(S)$ are decreasing, i.e. $F(b \mid S) \ge F(b \mid T)$ for all sets $S \subseteq T \subseteq \mathcal{Y}$ and items $b \notin T$. The function $F$ is called monotone if adding an item to a set does not hurt, i.e. $F(S) \le F(T)$, $\forall S \subseteq T$.
Constrained Submodular Maximization. From the classical result of Nemhauser [30], it is known
that cardinality-constrained maximization of a monotone submodular $F$ can be performed near-optimally via a greedy algorithm. We start out with an empty set $Y^0 = \emptyset$, and iteratively add the next "best" item with the largest marginal gain over the chosen set:
$$Y^t = Y^{t-1} \cup y^t, \quad \text{where} \quad y^t = \operatorname*{argmax}_{y \in \mathcal{Y}} F(y \mid Y^{t-1}). \qquad (2)$$
The score of the final solution $Y^M$ is within a factor of $(1 - \frac{1}{e})$ of the optimal solution. The computational bottleneck is that in each iteration, we must find the item with the largest marginal gain. In our case, $\mathcal{Y}$ is the space of all rectangular windows in an image, and exhaustive enumeration
Figure 2: Priority queue in B&B scheme. Each vertex in the tree represents a set of windows. Blue rectangles
denote the largest and the smallest window in the set. Gray region denotes the rectangle set Yv . In each case,
the priority queue consists of all leaves in the B&B tree ranked by the upper bound Uv . Left: shows vertex v is
split along the right coordinate interval into equal halves: v1 and v2 . Middle: The highest priority vertex v1 in
Q1 is further split along bottom coordinate into v3 and v4 . Right: The highest priority vertex v4 in Q2 is split
along right coordinate into v5 and v6 . This procedure is repeated until the highest priority vertex in the queue
is a single rectangle.
is intractable. Instead of exploring subsampling as is done in Sliding Window methods, we will
formulate this greedy augmentation step as an optimization problem solved with B&B.
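For reference, the plain greedy of Equation 2 over an explicitly enumerable ground set is sketched below; the point of this paper is that for windows the inner argmax ranges over O(#pixels²) items, which is exactly the step B&B replaces. The toy maximum-coverage objective in the usage example is our own illustration.

```python
def greedy_max(ground_set, F, M):
    """Greedy for monotone submodular F (Nemhauser et al. [30]):
    repeatedly append the item with the largest marginal gain."""
    Y = []
    for _ in range(M):
        best = max(ground_set, key=lambda y: F(Y + [y]) - F(Y))
        Y.append(best)  # lists, not sets: ordering and repeats are allowed
    return Y

# Toy usage: maximum coverage, a classic monotone submodular objective.
items = [{1, 2}, {2, 3}, {3, 4}, {4, 5}]
cover = lambda Y: len(set().union(*Y)) if Y else 0
print(greedy_max(items, cover, M=2))
```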
Sets vs Lists. For pedagogical reasons, our problem setup is motivated with the language of sets ($\mathcal{Y}$, $2^{\mathcal{Y}}$) and subsets ($Y$). In practice, our work falls under submodular list prediction [11, 33, 35]. The generalization from sets to lists allows reasoning about an ordering of the items chosen and (potentially) repeated entries in the list. Our final solution $Y^M$ is an (ordered) list, not an (unordered)
set. All guarantees of greedy remain the same in this generalization [11, 33, 35].
3.1 Parameterization of Y and Branch-and-Bound Search
In this subsection, we briefly recap the Efficient Subwindow Search (ESS) of Lampert et al. [26],
which is used as a key building block in this work. The goal of [26] is to maximize a (potentially non-smooth) objective function over the space of all rectangular windows: $\max_{y \in \mathcal{Y}} f(y)$.
A rectangular window $y \in \mathcal{Y}$ is parameterized by its top, bottom, left, and right coordinates $y = (t, b, l, r)$. A set of windows is represented by using intervals for each coordinate instead of a single integer, for example $[T, B, L, R]$, where $T = [t_{\mathrm{low}}, t_{\mathrm{high}}]$ is a range. In this parameterization, the set of all possible boxes in an $(h \times w)$-sized image can be written as $\mathcal{Y} = [[1, h], [1, h], [1, w], [1, w]]$.
Branch-and-Bound over $\mathcal{Y}$. ESS creates a B&B tree, where each vertex $v$ in the tree is a rectangle set $Y_v$ with an associated upper-bound on the objective function achievable in this set, i.e. $\max_{y \in Y_v} f(y) \le U_v$. Initially, this tree consists of a single vertex, which is the entire search space
Y and (typically) a loose upper-bound. ESS proceeds in a best-first manner [26]. In each iteration,
the vertex/set with the highest upper-bound is chosen for branching, and then new upper-bounds
are computed on each of the two children/sub-sets created. In practice, this is implemented with a
priority queue over the vertices/sets that are currently leaves in the tree. Fig. 2 shows an illustration
of this procedure. The parent rectangle set is split along its largest coordinate interval into two equal
halves, thus forming disjoint children sets. B&B explores the tree in a best-first manner till a single
rectangle is identified with a score equal to the upper-bound at which point we have found a global
optimum. In our experiments, we show results with different convergence tolerances.
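The best-first search above can be sketched with a heap over interval-parameterized window sets. Here `upper_bound` is whatever bound the objective admits (Sections 3.2-3.3), and we assume, as in ESS, that it is exact on a single rectangle; the split rule and data layout are our own illustrative choices.

```python
import heapq

def ess_search(h, w, upper_bound):
    """Best-first B&B over window sets (T, B, L, R), each a coordinate
    interval (lo, hi). Returns the maximizing box (t, b, l, r) and score."""
    root = ((1, h), (1, h), (1, w), (1, w))
    heap = [(-upper_bound(root), root)]
    while heap:
        neg_u, v = heapq.heappop(heap)
        if all(lo == hi for lo, hi in v):       # a single rectangle on top
            return tuple(lo for lo, _ in v), -neg_u
        i = max(range(4), key=lambda k: v[k][1] - v[k][0])  # widest interval
        lo, hi = v[i]
        mid = (lo + hi) // 2
        for half in ((lo, mid), (mid + 1, hi)):             # branch in halves
            child = v[:i] + (half,) + v[i + 1:]
            heapq.heappush(heap, (-upper_bound(child), child))
```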
Objective. In our setup, the objective (at each greedy-augmentation step) is the marginal gain of the window $y$ w.r.t. the currently chosen list of windows $Y^{t-1}$, i.e. $f(y) = F(y \mid Y^{t-1}) = R(y \mid Y^{t-1}) + \lambda D(y \mid Y^{t-1})$. In the following subsections, we describe the relevance and diversity terms
in detail, and show how upper bounds can be efficiently computed over the sets of windows.
3.2 Relevance Function and Upper Bound
The goal of the relevance function $R(Y)$ is to quantify the "quality" or "relevance" of the windows chosen in $Y$. In our work, we define $R(Y)$ to be a modular function aggregating the quality of all chosen windows, i.e. $R(Y) = \sum_{y \in Y} r(y)$. Thus, the marginal gain of window $y$ is simply its individual quality regardless of what else has already been chosen, i.e. $R(y \mid Y^{t-1}) = r(y)$.
In our application of object proposal generation, we use the objectness score produced by EdgeBoxes [41] as our relevance function. The main intuition of EdgeBoxes is that the number of
contours or "edge groups" wholly contained in a box is indicative of its objectness score. Thus,
it first creates a grouping of edge pixels called edge groups, each associated with a real-valued edge
strength si .
Abstracting away some of the domain-specific details, EdgeBoxes essentially defines the score of a box as a weighted sum of the strengths of edge groups contained in it, normalized by the size of the box, i.e. $\mathrm{EdgeBoxesScore}(y) = \frac{\sum_{\text{edge group } i \in y} w_i s_i}{\text{size-normalization}}$, where with a slight abuse of notation, we use "edge group $i \in y$" to mean the edge groups contained in the rectangle $y$.
These weights and size normalizations were found to improve performance of EdgeBoxes. In our work, we use a simplification of the EdgeBoxesScore which allows for easy computation of upper bounds:
$$r(y) = \frac{\sum_{\text{edge group } i \in y} s_i}{\text{size-normalization}}, \qquad (3)$$
i.e., we ignore the weights. One simple upper-bound for a set of windows Yv can be computed by
accumulating all possible positive scores and the least necessary negative scores:
$$\max_{y \in Y_v} r(y) \le \frac{\sum_{\text{edge group } i \in y_{\max}} s_i \cdot [[s_i \ge 0]] \; + \; \sum_{\text{edge group } i \in y_{\min}} s_i \cdot [[s_i \le 0]]}{\text{size-normalization}(y_{\min})}, \qquad (4)$$
where $y_{\max}$ is the largest and $y_{\min}$ is the smallest box in the set $Y_v$, and $[[\cdot]]$ is the Iverson bracket.
Consistent with the experiments in [41], we found that this simplification indeed hurts performance
in the EdgeBoxes Sliding Window + NMS pipeline. However, interestingly we found that even
with this weaker relevance term, SubmodBoxes was able to outperform EdgeBoxes. Thus, the drop
in performance due to a weaker relevance term was more than compensated for by the ability to
perform B&B jointly on the relevance and diversity terms.
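A small sketch of Equations (3) and (4) follows, representing each edge group by a strength and its own bounding box (t, b, l, r); the containment test and the use of box area as the size normalization are our simplifying assumptions, not the exact EdgeBoxes normalization.

```python
def contained(g, box):
    """Edge-group bbox g = (t, b, l, r) wholly inside window `box`."""
    return g[0] >= box[0] and g[1] <= box[1] and g[2] >= box[2] and g[3] <= box[3]

def area(box):
    return (box[1] - box[0] + 1) * (box[3] - box[2] + 1)

def relevance(box, groups):
    """Eq. (3): summed strengths of enclosed edge groups, size-normalized.
    groups: list of (strength, bbox) pairs."""
    return sum(s for s, g in groups if contained(g, box)) / float(area(box))

def relevance_bound(y_max, y_min, groups):
    """Eq. (4): positive strengths counted over the largest box, negative
    ones over the smallest, normalized by the smallest box's size."""
    up = sum(s for s, g in groups if s >= 0 and contained(g, y_max))
    up += sum(s for s, g in groups if s < 0 and contained(g, y_min))
    return up / float(area(y_min))
```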
3.3 Diversity Function and Upper Bound
The goal of the diversity function D(Y ) is to encourage non-redundancy in the chosen set of windows and potentially capture different objects in the image. Before we introduce our own diversity
function, we show how existing heuristics in object detection and proposal generation can be written
as special cases of this formulation, under specific diversity functions.
Sliding Window + NMS. Non-Maximal Suppression (NMS) is the most popular heuristic for selecting diverse boxes in computer vision. NMS is typically explained procedurally: select the highest
scoring window y1 , suppress all windows that overlap with y1 by more than some threshold, select
the next highest scoring window y2 , rinse and repeat.
This procedure can be explained as a special case of our formulation. Sliding Window corresponds
to enumeration over Y with some level of sub-sampling (or stride), typically with a fixed aspect
ratio. Each step in NMS is precisely a greedy augmentation step under the following marginal gain:
$$\operatorname*{argmax}_{y \in \mathcal{Y}_{\text{sub-sampled}}} \; r(y) + \lambda D_{\mathrm{NMS}}(y \mid Y^{t-1}), \quad \text{where} \qquad (5a)$$
$$D_{\mathrm{NMS}}(y \mid Y^{t-1}) = \begin{cases} 0 & \text{if } \max_{y' \in Y^{t-1}} \mathrm{IoU}(y', y) \le \text{NMS-threshold} \\ -\infty & \text{else.} \end{cases} \qquad (5b)$$
Intuitively, the NMS diversity function imposes an infinite penalty if a new window y overlaps
with a previously chosen $y'$ by more than a threshold, and offers no reward for diversity beyond
that. This explains the NMS procedure of suppressing overlapping windows and picking the highest
scoring one among the unsuppressed ones. Notice that this diversity function is submodular but not
monotone (the marginal gains may be negative). A similar observation was made in [3]. For such
non-monotone functions, greedy does not have approximation guarantees and different techniques
are needed [5, 15]. This is an interesting perspective on the classical NMS heuristic.
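The marginal-gain reading of Equation (5) makes NMS one line per greedy round: feasible windows are those within the overlap threshold of everything chosen so far, and among them the best relevance wins. Boxes are (t, b, l, r) tuples, and this brute-force version over a pre-enumerated pool is only meant to mirror the equation.

```python
def iou(a, b):
    t, btm = max(a[0], b[0]), min(a[1], b[1])
    l, r = max(a[2], b[2]), min(a[3], b[3])
    inter = max(0, btm - t + 1) * max(0, r - l + 1)
    area = lambda x: (x[1] - x[0] + 1) * (x[3] - x[2] + 1)
    return inter / float(area(a) + area(b) - inter)

def nms_as_greedy(boxes, scores, M, thresh=0.5):
    """Each round maximizes r(y) + D_NMS(y | chosen): windows overlapping
    a chosen one beyond `thresh` carry an infinite penalty, so they are
    simply excluded; the best-scoring survivor is appended."""
    chosen = []
    for _ in range(M):
        feasible = [i for i in range(len(boxes)) if i not in chosen and
                    all(iou(boxes[i], boxes[j]) <= thresh for j in chosen)]
        if not feasible:
            break
        chosen.append(max(feasible, key=lambda i: scores[i]))
    return [boxes[i] for i in chosen]
```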
ESS Heuristic [26]. ESS was originally proposed for single-instance object detection. Their extension to detecting multiple instances consisted of a heuristic for suppressing the features extracted
from the selected bounding box and re-running the procedure. Since their scoring function was linear in the features, this heuristic of suppressing features and rerunning B&B can be expressed as a
greedy augmentation step under the following marginal gain:
$$\operatorname*{argmax}_{y \in \mathcal{Y}} \; r(y) + \lambda D_{\mathrm{ESS}}(y \mid Y^{t-1}), \quad \text{where} \quad D_{\mathrm{ESS}}(y \mid Y^{t-1}) = -r\big(y \cap (y_1 \cup y_2 \cup \ldots \cup y_{t-1})\big) \qquad (6)$$
i.e., the ESS diversity function subtracts the score contribution coming from the intersection region.
If $r(\cdot)$ is non-negative, it is easy to see that this diversity function is monotone and submodular: adding a new window never hurts, and since the marginal gain is the score contribution of the new
regions not covered by previous window, it is naturally diminishing. Thus, even though this heuristic
was not presented as such, the authors of [26] did in fact formulate a near-optimal greedy algorithm
for maximizing a monotone submodular function. Unfortunately, while $r(\cdot)$ is always positive in
our experiments, this was not the case in the experimental setup of [26].
Our Diversity Function. Instead of hand-designing an explicit diversity function, we use a function
that implicitly measures diversity in terms of coverage of a reference set of bounding boxes
B. This reference set of boxes may be a uniform sub-sampling of the space of windows as done
in Sliding Window methods, or may itself be the output of another object proposal method such as
Selective Search [39]. Specifically, each greedy augmentation step under our formulation given by:
    \operatorname*{argmax}_{y \in \mathcal{Y}} \; r(y) + \lambda D_{\text{coverage}}(y \mid Y^{t-1}), \quad \text{where} \quad D_{\text{coverage}}(y \mid Y^{t-1}) = \max_{b \in B} \Delta\mathrm{IoU}(y, b \mid Y^{t-1}) \tag{7a}

    \Delta\mathrm{IoU}(y, b \mid Y^{t-1}) = \max\Big\{ \mathrm{IoU}(y, b) - \max_{y' \in Y^{t-1}} \mathrm{IoU}(y', b), \; 0 \Big\}. \tag{7b}
Intuitively speaking, the marginal gain of a new window y under our diversity function is the largest gain in coverage of exactly one of the reference boxes. We can also formulate this diversity function as a maximum bipartite matching problem between the proposal boxes Y and the reference boxes B (in our experiments, we also study performance under top-k matches). We show in the supplement that this marginal gain is always non-negative and decreasing with larger Y^{t-1}, thus the diversity function is monotone submodular. All that remains is to compute an upper bound on this marginal gain. Ignoring constants, the key term to bound is IoU(y, b). We can upper-bound this term by computing the intersection w.r.t. the largest window in the window set, y_max, and the union w.r.t. the smallest window, y_min, i.e.,

    \max_{y \in Y_v} \mathrm{IoU}(y, b) \le \frac{\mathrm{area}(y_{\max} \cap b)}{\mathrm{area}(y_{\min} \cup b)}.
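As an illustration, here is a small self-contained sketch of both the coverage marginal gain of Eq. (7) and this IoU upper bound; the box format and all names are our own assumptions, not the paper's code.

```python
# Coverage marginal gain (Eq. 7) and the IoU upper bound used inside B&B.
# Boxes are (x1, y1, x2, y2) tuples.

def area(z):
    return max(0.0, z[2] - z[0]) * max(0.0, z[3] - z[1])

def inter(a, b):
    return area((max(a[0], b[0]), max(a[1], b[1]),
                 min(a[2], b[2]), min(a[3], b[3])))

def iou(a, b):
    u = area(a) + area(b) - inter(a, b)
    return inter(a, b) / u if u > 0 else 0.0

def coverage_gain(y, chosen, refs):
    """max_b max{ IoU(y, b) - max_{y' in chosen} IoU(y', b), 0 }."""
    best = 0.0
    for b in refs:
        covered = max((iou(yp, b) for yp in chosen), default=0.0)
        best = max(best, max(iou(y, b) - covered, 0.0))
    return best

def iou_upper_bound(y_max, y_min, b):
    """max_{y in Yv} IoU(y, b) <= area(y_max ∩ b) / area(y_min ∪ b)."""
    u = area(y_min) + area(b) - inter(y_min, b)
    return inter(y_max, b) / u if u > 0 else 0.0
```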
4 Speeding up Greedy with Minoux's "Lazy Greedy"
In order to speed up repeated application of B&B across iterations of the greedy algorithm, we now present an application of Minoux's "lazy greedy" algorithm [29] to the B&B tree.
The key insight of classical lazy greedy is that the marginal gain function F(y | Y^t) is a non-increasing function of t (due to submodularity of F). Thus, at time t - 1, we can cache the priority queue of marginal gains F(y | Y^{t-2}) for all items. At time t, lazy greedy does not recompute all marginal gains. Rather, the item at the front of the priority queue is picked, its marginal gain is updated to F(y | Y^{t-1}), and the item is reinserted into the queue. Crucially, if the item remains at the front of the priority queue, lazy greedy can stop, and we have found the item with the largest marginal gain.
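For concreteness, here is a minimal Python sketch of this caching scheme; the priority queue stores a staleness stamp alongside each cached gain. The names are ours, not Minoux's.

```python
# Lazy greedy (Minoux): pop the best cached gain; if it was computed for
# the current chosen set, it is exact and we take it; otherwise refresh
# and reinsert. heapq is a min-heap, so gains are negated.
import heapq, itertools

def lazy_greedy(items, gain, k):
    """gain(y, chosen) is the marginal gain F(y | chosen) of a monotone
    submodular F; returns k greedily chosen items."""
    chosen, tie = [], itertools.count()
    heap = [(-gain(y, []), 0, next(tie), y) for y in items]
    heapq.heapify(heap)
    while heap and len(chosen) < k:
        neg_g, stamp, _, y = heapq.heappop(heap)
        if stamp == len(chosen):     # cached gain still current: take it
            chosen.append(y)
        else:                        # stale: recompute and reinsert
            heapq.heappush(heap, (-gain(y, chosen), len(chosen), next(tie), y))
    return chosen
```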
Interleaving Lazy Greedy with B&B. In our work, the priority queue does not contain single items, but rather sets of windows Yv corresponding to the vertices in the B&B tree. Thus, we must interleave the lazy updates with the branch-and-bound steps. Specifically, we pick a set from the front of the queue and compute the upper bound on its marginal gain. We reinsert this set into the priority queue. Once a set remains at the front of the queue after reinsertion, we have found the set with the highest upper bound. This is when we perform a B&B step, i.e., split this set into two children, compute the upper bounds on the children, and insert them into the queue.
Figure 3: Interleaving Lazy Greedy with B&B. The first few steps update upper bounds, followed finally by branching on a set. Some sets, such as v2, are never updated or split, resulting in a speed-up.
Fig. 3 illustrates how the priority queue and B&B tree are updated in this process. Suppose at the end of iteration t - 1 and the beginning of iteration t, we have the priority queue shown on the left. The first few updates involve recomputing the upper bounds on the window sets (v6, v5, v3), followed by branching on v3 because it continues to stay on top of the queue, creating new vertices v7, v8. Notice that v2 is never explored (updated or split), resulting in a speed-up.
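A hedged sketch of one such greedy step follows, with the B&B tree operations passed in as callbacks; `bound`, `split`, and `is_single` are our placeholder names, not the paper's API.

```python
# One greedy step interleaving lazy updates with branch-and-bound. A popped
# vertex with a stale bound is refreshed and reinserted ("pick, recompute,
# reinsert"); a vertex that survives at the front with a current bound is
# either returned (singleton window) or split into two children.
import heapq, itertools

def lazy_bb_step(heap, chosen, bound, split, is_single, tie=itertools.count()):
    while True:
        _, stamp, _, v = heapq.heappop(heap)
        if stamp != len(chosen):                 # stale: refresh the bound
            heapq.heappush(heap, (-bound(v, chosen), len(chosen), next(tie), v))
        elif is_single(v):                       # exact gain on top: done
            return v
        else:                                    # branch the window set
            for child in split(v):
                heapq.heappush(heap, (-bound(child, chosen),
                                      len(chosen), next(tie), child))
```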
5 Experiments
Setup. We evaluate SubmodBoxes for object proposal generation on three datasets: PASCAL VOC
2007 [13], PASCAL VOC 2012 [14], and MS COCO [28]. The goal of the experiments is to validate our approach by testing the accuracy of generated object proposals and the ability to handle different kinds of reference boxes, and to observe trends as we vary multiple parameters.
[Figure 4: ABO vs. No. Proposals. Panels: (a) PASCAL VOC 2007, (b) PASCAL VOC 2012, (c) MS COCO. Each panel plots ABO (y-axis) against the number of proposals (x-axis, 200-1000) for SubmodBoxes, SubmodBoxes with λ = ∞, EB50/EB70/EB90 with and without affinities, SS, and SS-EB.]
Evaluation. To evaluate the quality of our object proposals, we use Mean Average Best Overlap
(MABO) score. Given a set of ground-truth boxes GTc for a class c, ABO is calculated by averaging
the best IoU between each ground truth bounding box and all object proposals:
    \mathrm{ABO}_c = \frac{1}{|GT^c|} \sum_{g \in GT^c} \max_{y \in \mathcal{Y}} \mathrm{IoU}(g, y) \tag{8}
MABO is a mean ABO over all classes.
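Computing Eq. (8) is straightforward; a short sketch follows, with a box-overlap function iou(g, y) (e.g., as in the earlier sketches) passed in. The names are ours.

```python
def abo(gt_boxes, proposals, iou):
    """Average Best Overlap for one class (Eq. 8)."""
    if not gt_boxes:
        return 0.0
    return sum(max(iou(g, y) for y in proposals) for g in gt_boxes) / len(gt_boxes)

def mabo(gt_by_class, proposals, iou):
    """Mean ABO over all classes."""
    vals = [abo(gt, proposals, iou) for gt in gt_by_class.values()]
    return sum(vals) / len(vals)
```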
Weighing the Reference Boxes. Recall that the marginal gain of our proposed diversity function
rewards covering the reference boxes with the chosen set of boxes. Instead of weighing all reference
boxes equally, we found it important to weigh different reference boxes differently. The exact form of the weighting rule is provided in the supplement. In our experiments, we present results with and without such a weighting to show the impact of our proposed scheme.
5.1 Accuracy of Object Proposals
In this section, we explore the performance of our proposed method in comparison to relevant object
proposal generators. For the two PASCAL datasets, we perform cross-validation on 2510 validation images of PASCAL VOC 2007 for the best parameter λ, then report accuracies on 4952 test images of PASCAL VOC 2007 and 5823 validation images of PASCAL VOC 2012. The MS COCO dataset is much larger, so we randomly select a subset of 5000 training images for tuning λ, and test on the complete validation dataset of 40138 images.
We use the 1000 top-ranked Selective Search windows [39] as reference boxes. In a manner similar to [23], we chose a different λ_M for M = 100, 200, 400, 600, 800, 1000 proposals. We compare our approach with several baselines: 1) λ = ∞, which essentially involves re-ranking Selective Search windows by considering their ability to cover other boxes. 2) Three variants of EdgeBoxes [41] at IoU = 0.5, 0.7 and 0.9, and the corresponding three variants without affinities in (3). 3) Selective Search: compute multiple hierarchical segments via grouping superpixels and placing bounding boxes around them. 4) SS-EB: use the EdgeBoxes score to re-rank Selective Search windows.
Fig. 4 shows that our approach at λ = ∞ and with validation-tuned λ both outperform all baselines. At M = 25, 100, and 500, our approach is 20%, 11%, and 3% better than Selective Search and 14%, 10%, and 6% better than EdgeBoxes70, respectively.
5.2 Ablation Studies.
We now study the performance of our system under different components and parameter settings.
Effect of λ and Reference Boxes. We test the performance of our approach as a function of λ using reference boxes from different object proposal generators (all reported at M = 200 on PASCAL VOC 2012). Our reference box generators are: 1) Selective Search [39]; 2) MCG [2]; 3) CPMC [7]; 4) EdgeBoxes [41] at IoU = 0.7; 5) Objectness [1]; and 6) Uniform-sampling [20]: i.e., uniformly sample the bounding box center position, square root area and log aspect ratio.
Table 1 shows the performance of SubmodBoxes when used with these different reference box
generators. Our approach shows improvement (over the corresponding method) for all reference boxes. Our approach outperforms the current state-of-the-art MCG by 2% and Selective Search by 5%. This is
significantly larger than previous improvements reported in the literature.
Fig. 5a shows more fine-grained behavior as λ is varied. At λ = 0 all methods produce the same (highest weighted) box M times. At λ = ∞, they all perform a reranking of the reference set of boxes. In nearly all curves, there is a peak at some intermediate setting of λ. The only exception is EdgeBoxes, which is expected since it is being used in both the relevance and diversity terms.
Effect of No. B&B Steps. We analyze the convergence trends of B&B. Fig. 5b shows that both the
optimization objective function value and the mABO increase with the number of B&B iterations.
                             Selective-Search   MCG      EB       CPMC     Objectness   Uniform-sampling
λ ≈ 0.4, weighting           0.7342             0.7377   0.6747   0.7125   0.6131       0.5937
λ ≈ 0.4, without weighting   0.5697             0.5042   0.6350   0.5681   0.6220       0.5136
λ = 10, weighting            0.7233             0.7417   0.6467   0.7130   0.5006       0.5478
λ = 10, without weighting    0.5844             0.5534   0.6232   0.5849   0.5920       0.5115
λ = ∞, weighting             0.7222             0.7409   0.6558   0.7116   0.4980       0.5453
Original method              0.6817             0.7206   0.6755   0.7032   0.6038       0.5295

Table 1: Comparison with/without weighting scheme (rows) with different reference boxes (columns). "Original method" row shows performance of directly using object proposals from these proposal generators. "≈" means we report the best performance from λ = 0.3, 0.4 and 0.5, considering the peak occurs at different λ for different object proposal generators.
[Figure 5: Experiments on different parameter settings. (a) Performance (mABO) vs. λ with different reference box generators (SS, MCG, EB, CPMC, Objectness, Uniform). (b) Objective value and mABO vs. No. of B&B iterations (1000-10000). (c) Performance (mABO) vs. No. of matching boxes (0-20).]
Effect of No. of Matching Boxes. Instead of allowing the chosen boxes to cover exactly one reference box, we analyze the effect of matching the top-k reference boxes. Fig. 5c shows that the performance decreases monotonically, if only slightly, as more matches are allowed.
Speed up via Lazy Greedy. Fig. 6 compares the number of B&B iterations required with and without our proposed Lazy Greedy generalization (averaged over 100 randomly chosen images): we can see that Lazy Greedy significantly reduces the number of B&B iterations required. The cost of each B&B evaluation is nearly the same, so the iteration speed-up is directly proportional to time speed-up.

[Figure 6: Comparison of the number of B&B iterations of our Lazy Greedy generalization and independent B&B runs; y-axis: No. evaluations (x 10^7), x-axis: No. proposals (0-100).]

6 Conclusions

To summarize, we formally studied the search for a set of diverse bounding boxes as an optimization problem and provided theoretical justification for greedy and heuristic approaches used in prior work. The key challenge of this problem is the large search space. Thus, we proposed a generalization of Minoux's "lazy greedy" on the B&B tree to speed up classical greedy. We tested our formulation on three datasets of object detection: PASCAL VOC 2007, PASCAL VOC 2012 and Microsoft COCO. Results show that our formulation outperforms all baselines with a novel diversity measure.
Acknowledgements. This work was partially supported by a National Science Foundation CAREER award, an Army Research Office YIP award, an Office of Naval Research grant, an AWS
in Education Research Grant, and GPU support by NVIDIA. The views and conclusions contained
herein are those of the authors and should not be interpreted as necessarily representing the official
policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor.
References
[1] B. Alexe, T. Deselaers, and V. Ferrari. Measuring the objectness of image windows. PAMI, 34(11):2189-2202, Nov 2012.
[2] P. Arbelaez, J. P. Tuset, J. T. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014.
[3] M. Blaschko. Branch and bound strategies for non-maximal suppression in object detection. In EMMCVPR, pages 385-398, 2011.
[4] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In ECCV, 2008.
[5] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A tight (1/2) linear-time approximation to unconstrained submodular maximization. In FOCS, 2012.
[6] J. Carbonell and J. Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, pages 335-336, 1998.
[7] J. Carreira and C. Sminchisescu. Constrained parametric min-cuts for automatic object segmentation. In CVPR, 2010.
[8] M.-M. Cheng, Z. Zhang, W.-Y. Lin, and P. Torr. BING: Binarized normed gradients for objectness estimation at 300fps. In CVPR, 2014.
[9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[10] T. Deselaers, B. Alexe, and V. Ferrari. Localizing objects while learning their appearance. In ECCV, 2010.
[11] D. Dey, T. Liu, M. Hebert, and J. A. Bagnell. Contextual sequence prediction with application to control library optimization. In Robotics Science and Systems (RSS), 2012.
[12] E. L. Lawler and D. E. Wood. Branch-and-bound methods: A survey. Operations Research, 14(4):699-719, 1966.
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
[14] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html.
[15] U. Feige, V. Mirrokni, and J. Vondrák. Maximizing non-monotone submodular functions. In FOCS, 2007.
[16] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 32(9):1627-1645, 2010.
[17] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[18] A. Gonzalez-Garcia, A. Vezhnevets, and V. Ferrari. An active search strategy for efficient object detection. In CVPR, 2015.
[19] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[20] J. Hosang, R. Benenson, and B. Schiele. How good are detection proposals, really? In BMVC, 2014.
[21] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[22] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2003.
[23] P. Krahenbuhl and V. Koltun. Learning to propose objects. In CVPR, 2015.
[24] A. Krause and D. Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, 2014.
[25] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. J. Mach. Learn. Res., 9:235-284, 2008.
[26] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound framework for object localization. TPAMI, 31(12):2129-2142, 2009.
[27] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In ACL, 2011.
[28] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[29] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. Optimization Techniques, pages 234-243, 1978.
[30] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265-294, 1978.
[31] A. Prasad, S. Jegelka, and D. Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In NIPS, 2014.
[32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[33] S. Ross, J. Zhou, Y. Yue, D. Dey, and J. A. Bagnell. Learning policies for contextual submodular prediction. In ICML, 2013.
[34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[35] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2008.
[36] C. Szegedy, S. Reed, and D. Erhan. Scalable, high-quality object detection. In CVPR, 2014.
[37] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013.
[38] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[39] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 2013.
[40] P. Viola and M. J. Jones. Robust real-time face detection. Int. J. Comput. Vision, 57(2):137-154, May 2004.
[41] C. Zitnick and P. Dollar. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
5,279 | 578 | A Comparison of Projection Pursuit and Neural
Network Regression Modeling
Jenq-Neng Hwang, Hang Li,
Information Processing Laboratory
Dept. of Elect. Engr., FT-10
University of Washington
Seattle WA 98195
Martin Maechler, R. Douglas Martin, Jim Schimert
Department of Statistics
Mail Stop: GN-22
University of Washington
Seattle, WA 98195
Abstract
Two projection-based feedforward network learning methods for model-free regression problems are studied and compared in this paper: one is
the popular back-propagation learning (BPL); the other is the projection
pursuit learning (PPL). Unlike the totally parametric BPL method, the
PPL non-parametrically estimates unknown nonlinear functions sequentially (neuron-by-neuron and layer-by-layer) at each iteration while jointly
estimating the interconnection weights. In terms of learning efficiency,
both methods have comparable training speed when based on a Gauss-Newton optimization algorithm, while the PPL is more parsimonious. In
terms of learning robustness toward noise outliers, the BPL is more sensitive to the outliers.
1 INTRODUCTION
The back-propagation learning (BPL) networks have been used extensively for essentially two distinct problem types, namely model-free regression and classification,
which have no a priori assumption about the unknown functions to be identified other than imposing a certain degree of smoothness. The projection pursuit learning
(PPL) networks have also been proposed for both types of problems (Friedman85
[3]), but to date there appears to have been much less actual use of PPLs for both
regression and classification than of BPLs. In this paper, we shall concentrate on regression modeling applications of BPLs and PPLs since the regression setting is one
in which some fairly deep theory is available for PPLs in the case of low-dimensional
regression (Donoho89 [2], Jones87 [6]).
A multivariate model-free regression problem can be stated as follows: given n pairs of vector observations, (y_l, x_l) = (y_{l1}, ..., y_{lq}; x_{l1}, ..., x_{lp}), which have been generated from the unknown models

    y_{li} = g_i(\mathbf{x}_l) + \epsilon_{li}, \qquad l = 1, 2, \ldots, n; \; i = 1, 2, \ldots, q \tag{1}

where {y_l} are called the multivariable "response" vectors and {x_l} are called the "independent variables" or the "carriers". The {g_i} are unknown smooth nonparametric (model-free) functions from p-dimensional Euclidean space to the real line, i.e., g_i: R^p → R, ∀i. The {ε_{li}} are random variables with zero mean, E[ε_{li}] = 0, and independent of {x_l}. Often the {ε_{li}} are assumed to be independent and identically distributed (iid) as well.

The goal of regression is to generate the estimators ĝ_1, ĝ_2, ..., ĝ_q to best approximate the unknown functions g_1, g_2, ..., g_q, so that they can be used for prediction of a new y given a new x: ŷ_i = ĝ_i(x), ∀i.
2 A TWO-LAYER PERCEPTRON AND BACK-PROPAGATION LEARNING
Several recent results have shown that a two-layer (one hidden layer) perceptron
with sigmoidal nodes can in principle represent any Borel-measurable function to
any desired accuracy, assuming "enough" hidden neurons are used. This, along with
the fact that theoretical results are known for the PPL in the analogous two-layer
case, justifies focusing on the two-layer perceptron for our studies here.
2.1 MATHEMATICAL FORMULATION
A two-layer perceptron can be mathematically formulated as follows:

    u_k = \sum_{j=1}^{p} w_{kj} x_j - \theta_k = \mathbf{w}_k^T \mathbf{x} - \theta_k, \qquad k = 1, 2, \ldots, m,

    \hat{y}_i = \sum_{k=1}^{m} \beta_{ik} f_k(u_k) = \sum_{k=1}^{m} \beta_{ik} f_k(\mathbf{w}_k^T \mathbf{x} - \theta_k), \qquad i = 1, 2, \ldots, q, \tag{2}
where u_k denotes the weighted-sum input of the kth neuron in the hidden layer; θ_k denotes the bias of the kth neuron in the hidden layer; w_kj denotes the input-layer weight linking the kth hidden neuron and the jth neuron of the input layer (or jth element of the input vector x); β_ik denotes the output-layer weight linking the ith output neuron and the kth hidden neuron; f_k is the nonlinear activation function, which is usually assumed to be a fixed monotonically increasing (logistic) sigmoidal function, σ(u) = 1/(1 + e^{-u}).
The above formulation defines quite explicitly the parametric representation of functions which are being used to approximate {g_i(x), i = 1, 2, ..., q}. A simple reparametrization allows us to write ĝ_i(x) in the form:

    \hat{g}_i(\mathbf{x}) = \sum_{k=1}^{m} \beta_{ik} \, \sigma\!\left(\frac{\mathbf{a}_k^T \mathbf{x} - \mu_k}{s_k}\right) \tag{3}

where a_k is a unit-length version of the weight vector w_k. This formulation reveals how the {ĝ_i} are built up as a linear combination of sigmoids evaluated at translated (by μ_k) and scaled (by s_k) projections of x onto the unit-length vector a_k.
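For concreteness, a minimal numpy sketch of the projection form in Eq. (3); the array-shape conventions and names are our own.

```python
# Forward pass of the two-layer perceptron written as a sum of sigmoids of
# scaled, shifted projections (Eq. 3).
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def two_layer_forward(x, A, mu, s, beta):
    """A: (m, p) with unit-length rows a_k; mu, s: (m,); beta: (q, m).
    Returns the q outputs g_hat_i(x)."""
    z = sigmoid((A @ x - mu) / s)   # hidden activations, shape (m,)
    return beta @ z                 # shape (q,)
```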
2.2 BACK-PROPAGATION LEARNING AND ITS VARIATIONS
Historically, the training of a multilayer perceptron uses back-propagation learning
(BPL). There are two common types of BPL: the batch one and the sequential one.
The batch BPL updates the weights after the presentation of the complete set of
training data. Hence, a training iteration incorporates one sweep through all the
training patterns. On the other hand, the sequential BPL adjusts the network
parameters as training patterns are presented, rather than after a complete pass
through the training set. The sequential approach is a form of Robbins-Monro
Stochastic Approximation.
While the two-layer perceptron provides a very powerful nonparametric modeling
capability, the BPL training can be slow and inefficient since only the first derivative
(or gradient) information about the training error is utilized. To speed up the training process, several second-order optimization algorithms, which take advantage of
second derivative (or Hessian matrix) information, have been proposed for training
perceptrons (Hwang90 [4]). For example, the Gauss-Newton method is also used in
the PPL (Friedman85 [3]).
The fixed nonlinear nodal (sigmoidal) function is a monotone nondecreasing differentiable function with a very simple first-derivative form, and possesses nice properties for numerical computation. However, it does not interpolate/extrapolate efficiently
in a wide variety of regression applications. Several attempts have been proposed to
improve the choice of nonlinear nodal functions; e.g., the Gaussian or bell-shaped
function, the locally tuned radial basis functions, and semi-parametric (non-fixed
nodal function) nonlinear functions used in PPLs and hidden Markov models.
2.3 RELATIONSHIP TO KERNEL APPROXIMATION AND DATA SMOOTHING
It is instructive to compare the two-layer perceptron approximation in Eq. (3)
with the well-known kernel method for regression. A kernel K(.) is a non-negative
symmetric function which integrates to unity. Most kernels are also unimodal, with
mode at the origin, K(t_1) ≥ K(t_2) for 0 < t_1 < t_2. A kernel estimate of g_i(x) has the form

    \hat{g}_{K,i}(\mathbf{x}) = \sum_{l=1}^{n} y_{li} \, \frac{1}{h^q} K\!\left(\frac{\|\mathbf{x} - \mathbf{x}_l\|}{h}\right), \tag{4}
where h is a bandwidth parameter and q is the dimension of the y_l vector. Typically a good value of h will be chosen by a data-based cross-validation method. Consider for a moment the special case of the kernel approximator and the two-layer perceptron in Eq. (3), respectively, with scalar y_l and x_l, i.e., with p = q = 1 (hence unit-length interconnection weight α = 1 by definition):
    \hat{g}_K(x) = \sum_{l=1}^{n} y_l \, \frac{1}{h} K\!\left(\frac{\|x - x_l\|}{h}\right) = \sum_{l=1}^{n} \frac{y_l}{h} K\!\left(\frac{x - x_l}{h}\right), \tag{5}

    \hat{g}(x) = \sum_{k=1}^{m} \beta_k \, \sigma\!\left(\frac{x - \mu_k}{s_k}\right). \tag{6}
This reveals some important connections between the two approaches.
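To make the one-dimensional comparison concrete, here is a small sketch of the two estimates, with a Gaussian kernel chosen purely for illustration; the choice of K and all names are our assumptions.

```python
import numpy as np

def kernel_estimate(x, xs, ys, h):
    """Fixed-bandwidth kernel estimate, Eq. (5), with Gaussian K."""
    t = (x - xs) / h
    return np.sum((ys / h) * np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi))

def sigmoid_mixture(x, beta, mu, s):
    """Trained sigmoid mixture, Eq. (6): sum_k beta_k * sigma((x - mu_k)/s_k)."""
    return np.sum(beta / (1.0 + np.exp(-(x - mu) / s)))
```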
Suppose that for ĝ(x), we set σ = K, i.e., σ is a kernel and in fact identical to the kernel K, and that the β_k, μ_k, s_k's have been chosen (trained), say by BPL. That is, all {s_k} are constrained to a single unknown parameter value s. In general, m < n, or even m is a modest fraction of n when the unknown function g(x) is reasonably smooth. Furthermore, suppose that h has been chosen by cross validation. Then one can expect ĝ_K(x) ≈ ĝ_σ(x), particularly in the event that the {μ_k} are close to the observed values {x_l} and x is close to a specific μ_k value (relative to h). However, in this case where we force s_k = s, one might expect ĝ_K(x) to be a somewhat better estimate overall than ĝ_σ(x), since the former is more local in character.
On the other hand, when one removes the restriction s_k = s, then BPL leads to a local bandwidth selection, and in this case one may expect ĝ_σ(x) to provide a better approximation than ĝ_K(x) when the function g(x) has considerably varying curvature, g''(x), and/or considerably varying error variance for the noise ε_{li} in Eq. (1). The reason is that a fixed-bandwidth kernel estimate cannot cope as well with changing curvature and/or noise variance as can a good smoothing method which uses a good local bandwidth selection method. A small caveat is in order: if m is fairly large, the estimation of a separate bandwidth for each kernel location, μ_k, may cause some increased variability in ĝ_σ(x) by virtue of using many more parameters than are needed to adequately represent a nearly optimal local bandwidth selection method. Typically a nearly optimal local bandwidth function will have some degree of smoothness, which reflects smoothly varying curvature and/or noise variance, and a good local bandwidth selection method should reflect the smoothness constraints. This is the case in the high-quality "supersmoother", designed for applications like the PPL (to be discussed), which uses cross-validation to select bandwidth locally (Friedman85 [3]), and combines this feature with considerable speed.
The above arguments are probably equally valid without the restriction u J(, because two sigmoids of opposite signs (via choice of two {,Bk}) that are appropriately
A Comparison of Projection Pursuit and Neural Network Regression Modeling
shifted, will approximate a kernel up to a scaling to enforce unity area. However,
there is a novel aspect: one can have a separate local bandwidth for each half of
the kernel, thereby using an asymmetric kernel, which might improve the approximation capabilities relative to symmetric kernels with a single local bandwidth in
some situations.
In the multivariate case, the curse of dimensionality will often render useless the kernel approximator ĝ_{K,i}(x) given by Eq. (4). Instead one might consider using a projection pursuit kernel (PPK) approximator:

    \hat{g}_{PPK,i}(\mathbf{x}) = \sum_{l=1}^{n} \sum_{k=1}^{m} \frac{y_{li}}{h_k} K\!\left(\frac{\alpha_k^T \mathbf{x} - \alpha_k^T \mathbf{x}_l}{h_k}\right) \tag{7}

where a different bandwidth h_k is used for each direction α_k. In this case, the similarities and differences between the PPK estimate and the BPL estimate ĝ_{σ,i}(x) become evident.
The main difference between the two methods is that PPK performs explicit smoothing in each direction α_k using a kernel smoother, whereas BPL does implicit smoothing, with both β_k (replacing y_{li}/h_k) and μ_k (replacing α_k^T x_l) being determined by nonlinear least squares optimization. In both PPK and BPL, the α_k and h_k are determined by nonlinear optimization (cross-validation choices of bandwidth parameters are inherently nonlinear optimization problems) (Friedman85 [3]).
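A corresponding sketch of the PPK estimate (7), again with a Gaussian kernel assumed purely for illustration; shapes and names are ours.

```python
import numpy as np

def ppk_estimate(x, X, y_i, alphas, hs):
    """Eq. (7): X is (n, p) training carriers, y_i is (n,) responses for
    output i, alphas is (m, p) unit directions, hs is (m,) bandwidths."""
    out = 0.0
    for a, h in zip(alphas, hs):
        t = (a @ x - X @ a) / h                          # projected distances
        out += np.sum((y_i / h) * np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi))
    return out
```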
3 PROJECTION PURSUIT LEARNING NETWORKS
The projection pursuit learning (PPL) is a statistical procedure proposed for multivariate data analysis using a two-layer network given in Eq. (2). This procedure
derives its name from the fact that it interprets high dimensional data through
well-chosen lower-dimensional projections. The "pursuit" part of the name refers
to optimization with respect to the projection directions.
3.1 COMPARATIVE STRUCTURES OF PPL AND BPL
Similar to a BPL perceptron, a PPL network forms projections of the data in
directions determined from the interconnection weights. However, unlike a BPL
perceptron, which employs a fixed set of nonlinear (sigmoidal) functions, a PPL
non-parametrically estimates the nonlinear nodal functions based on a nonlinear optimization approach which involves the use of a one-dimensional data smoother (e.g., a least squares estimator followed by a variable-window-span data averaging mechanism) (Friedman85 [3]). Therefore, it is important to note that a PPL network
is a semi-parametric learning network, which consists of both parametrically and
non-parametrically estimated elements. This is in contrast to a BPL perceptron,
which is a completely parametric model.
3.2 LEARNING STRATEGIES OF PPL
In comparison with a batch BPL, which employs either 1st-order gradient descent or
2nd-order Newton-like methods to estimate the weights of all layers simultaneously
after all the training patterns are presented, a PPL learns neuron-by-neuron and
layer-by-layer cyclically after all the training patterns are presented. Specifically, it
applies linear least squares to estimate the output-layer weights, a one-dimensional
data smoother to estimate the nonlinear nodal functions of each hidden neuron,
and the Gauss-Newton nonlinear least squares method to estimate the input-layer
weights.
The PPL procedure uses the batch learning technique to iteratively minimize the mean squared error, E, over all the training data. All the parameters to be estimated are hierarchically divided into m groups (each associated with one hidden neuron), and each group, say the kth group, is further divided into three subgroups: the output-layer weights, {β_ik, i = 1, ..., q}, connected to the kth hidden neuron; the nonlinear function, f_k(u), of the kth hidden neuron; and the input-layer weights, {w_kj, j = 1, ..., p}, connected to the kth hidden neuron. The PPL starts from updating the parameters associated with the first hidden neuron (group) by updating each subgroup, {β_i1}, f_1(u), and {w_1j}, consecutively (layer-by-layer) to minimize the mean squared error E. It then updates the parameters associated with the second hidden neuron by consecutively updating {β_i2}, f_2(u), and {w_2j}. A complete updating pass ends at the updating of the parameters associated with the mth (the last) hidden neuron by consecutively updating {β_im}, f_m(u), and {w_mj}. Repeated updating passes are made over all the groups until convergence (i.e., in our studies of Section 4, we use the stopping criterion that |E(new) - E(old)| / E(old) be smaller than a prespecified small constant, 0.005).
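Schematically, this updating schedule looks like the following loop; the three update callbacks stand in for the linear least squares fit, the one-dimensional smoother, and the Gauss-Newton step, and are placeholders of ours rather than the paper's implementation.

```python
def ppl_fit(model, data, update_beta, update_f, update_w, mse,
            tol=0.005, max_passes=100):
    """Cyclic neuron-by-neuron, layer-by-layer PPL updating with the
    relative-error stopping rule described above."""
    prev = mse(model, data)
    for _ in range(max_passes):
        for k in range(model.m):          # one group per hidden neuron
            update_beta(model, k, data)   # output-layer weights (LLS)
            update_f(model, k, data)      # nodal function (1-D smoother)
            update_w(model, k, data)      # input-layer weights (Gauss-Newton)
        cur = mse(model, data)
        if abs(cur - prev) / prev < tol:  # |E_new - E_old| / E_old < 0.005
            break
        prev = cur
    return model
```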
4 LEARNING EFFICIENCY IN BPL AND PPL
Having discussed the "parametric" BPL and the "semi-parametric" PPL from structural, computational, and theoretical viewpoints, we have also made a more practical comparison of learning efficiency via a simulation study. For simplicity of comparison, we confine the simulations to the two-dimensional univariate case, i.e., p = 2, q = 1. This is an important situation in practice, because the models can be visualized graphically as functions y = g(x_1, x_2).
4.1 PROTOCOLS OF THE SIMULATIONS
Nonlinear Functions: There are five nonlinear functions g^(j): [0,1]^2 → R investigated (Maechler90 [7]), which are scaled such that the standard deviation is 1 (for a large regular grid of 2500 points on [0,1]^2), and translated to make the range nonnegative.
Training and Test Data: Two independent variables (carriers) (x_{l1}, x_{l2}) were generated from the uniform distribution U([0,1]^2), i.e., the abscissa values {(x_{l1}, x_{l2})} were generated as uniform random variates on [0,1] and independent from each other. We generated 225 pairs {(x_{l1}, x_{l2})} of abscissa values, and used this same set for experiments on all five different functions, thus eliminating an unnecessary extra random component of the simulation. In addition to one set of noiseless training data, another set of noisy training data was also generated by adding iid Gaussian noise.
Algorithm Used: The PPL simulations were conducted using the S-Plus package (S-Plus90 [1]) implementation of PPL, where 3 and 5 hidden neurons were tried (with 5 and 7 maximum working hidden neurons used separately to avoid overfitting). The S-Plus implementation is based on the Friedman code (Friedman85 [3]), which uses a Gauss-Newton method for updating the lower-layer weights. To obtain a fair comparison, the BPL was implemented using a batch Gauss-Newton method (rather than the usual gradient descent, which is slower) on two-layer perceptrons with linear output neurons and nonlinear sigmoidal hidden neurons (Hwang90 [4], Hwang91 [5]), where 5 and 10 hidden neurons were tried.
Independent Test Data Set: The assessment of performance was done by comparing the fitted models with the "true" function counterparts on a large independent test set. Throughout all the simulations, we used the same set of test data for performance assessment, i.e., {g^(j)(x_{l1}, x_{l2})}, of size N = 10000, namely a regularly spaced grid on [0,1]^2, defined by its marginals.
4.2 SIMULATION RESULTS IN LEARNING EFFICIENCY
To summarize the simulation results in learning efficiency, we focused on the chosen
three aspects: accuracy, parsimony, and speed.
Learning Accuracy: The accuracies of the two learning methods, determined by the absolute L2 error measure on the independent test data, are quite comparable whether trained on noiseless or noisy data (Hwang91 [5]). Note that our comparisons are based on 5 & 10 hidden neurons for BPLs and 3 & 5 hidden neurons for PPLs. The reason for choosing different numbers of hidden neurons will be explained in the learning parsimony section.
Learning Parsimony: In comparison with BPL, the PPL is more parsimonious in training all types of nonlinear functions, i.e., in order to achieve accuracy comparable to the BPLs for two-layer perceptrons, the PPLs require fewer hidden neurons (more parsimonious) to approximate the desired true function (Hwang91 [5]). Several factors may contribute to this favorable performance. First and foremost, the data-smoothing technique creates more pertinent nonlinear nodal functions, so the network adapts more efficiently to the observation data without using too many terms (hidden neurons) of interpolative projections. Secondly, the batch Gauss-Newton BPL updates all the weights in the network simultaneously while the PPL updates cyclically (neuron-by-neuron and layer-by-layer), which allows the most recent updating information to be used in the subsequent updating. That is, more important projection directions can be determined first so that the less important projections can have an easier search (the same argument used in favoring the Gauss-Seidel method over the Jacobi method in an iterative linear equation solver).
Learning Speed: As we reported earlier (Maechler90 [7]), the PPL took much less time (1-2 orders of magnitude speedup) in achieving accuracy comparable with that of the sequential gradient descent BPL. Interestingly, when compared with the batch Gauss-Newton BPL, the PPL took a quite similar amount of time over all the simulations (under the same number of hidden neurons and the same convergence threshold of 0.005). In all simulations, both the BPLs and PPLs can converge under 100 iterations most of the time.
5 SENSITIVITY TO OUTLIERS
Both BPLs and PPLs are types of nonlinear least squares estimators. Hence, like all least squares procedures, they are sensitive to outliers. The outliers may come from large errors in measurements, generated by heavy-tailed deviations from a Gaussian distribution for the noise ε_{li} in Eq. (1).
In the presence of additive Gaussian noise without outliers, most functions can be well approximated by 5-10 hidden neurons using BPL or with 3-5 hidden neurons using PPL. When the Gaussian noise is altered by adding one outlier, the BPL with 5-10 hidden neurons can still approximate the desired function reasonably well in general, at the cost of magnified error in the vicinity of the outlier. If the number of outliers increases to 3 in the same corner, the BPL can only get a "distorted" approximation of the desired function. On the other hand, the PPL with 5 hidden neurons can successfully approximate the desired function and remove the single outlier. In the case of three outliers, the PPL using simple data smoothing techniques can no longer keep its robustness in accuracy of approximation.
Acknowledgements
This research was partially supported through grants from the National Science
Foundation under Grant No. ECS-9014243.
References
[1] S-Plus Users Manual (Version 3.0). Statistical Science Inc., Seattle, WA, 1990.
[2] D.L. Donoho and I.M. Johnstone. Projection-based approximation and a duality with kernel methods. The Annals of Statistics, Vol. 17, No.1, pp. 58-106,
1989.
[3] J .H. Friedman. Classification and multiple regression through projection pursuit. Technical Report No. 12, Department of Statistics, Stanford University,
January 1985.
[4] J. N. Hwang and P. S. Lewis. From nonlinear optimization to neural network
learning. In Proc. 24th Asilomar Conf. on Signals, Systems, & Computers, pp.
985-989, Pacific Grove, CA, November 1990.
[5] J. N. Hwang, H. Li, D. Martin, J. Schimert. The learning parsimony of projection pursuit and back-propagation networks. In 25th Asilomar Conf. on
Signals, Systems, & Computers, Pacific Grove, CA, November 1991.
[6] L.K. Jones. On a conjecture of Huber concerning the convergence of projection
pursuit regression. The Annals of Statistics, Vol. 15, No. 2, pp. 880-882, 1987.
[7] M. Maechler, D. Martin, J. Schimert, M. Csoppenszky and J. N. Hwang. Projection pursuit learning networks for regression. In Proc. 2nd Int'l Conf. Tools
for AI, pp. 350-358, Washington D.C., November 1990.
5,280 | 5,780 | Galileo: Perceiving Physical Object Properties by
Integrating a Physics Engine with Deep Learning
Jiajun Wu*
EECS, MIT
jiajunwu@mit.edu
Joseph J. Lim
EECS, MIT
lim@csail.mit.edu
Ilker Yildirim*
BCS MIT, The Rockefeller University
ilkery@mit.edu
William T. Freeman
EECS, MIT
billf@mit.edu
Joshua B. Tenenbaum
BCS, MIT
jbt@mit.edu
Abstract
Humans demonstrate remarkable abilities to predict physical events in dynamic
scenes, and to infer the physical properties of objects from static images. We
propose a generative model for solving these problems of physical scene understanding from real-world videos and images. At the core of our generative model
is a 3D physics engine, operating on an object-based representation of physical
properties, including mass, position, 3D shape, and friction. We can infer these
latent properties using relatively brief runs of MCMC, which drive simulations in
the physics engine to fit key features of visual observations. We further explore
directly mapping visual inputs to physical properties, inverting a part of the generative process using deep learning. We name our model Galileo, and evaluate it on a
video dataset with simple yet physically rich scenarios. Results show that Galileo
is able to infer the physical properties of objects and predict the outcome of a variety of physical events, with an accuracy comparable to human subjects. Our study
points towards an account of human vision with generative physical knowledge at
its core, and various recognition models as helpers leading to efficient inference.
1 Introduction
Our visual system is designed to perceive a physical world that is full of dynamic content. Consider
yourself watching a Rube Goldberg machine unfold: as the kinetic energy moves through the machine, you may see objects sliding down ramps, colliding with each other, rolling, entering other
objects, falling: many kinds of physical interactions between objects of different masses, materials and other physical properties. How does our visual system recover so much content from the
dynamic physical world? What is the role of experience in interpreting a novel dynamical scene?
Recent behavioral and computational studies of human physical scene understanding push forward
an account that people's judgments are best explained as probabilistic simulations of a realistic, but
mental, physics engine [2, 8]. Specifically, these studies suggest that the brain carries detailed but
noisy knowledge of the physical attributes of objects and the laws of physical interactions between
objects (i.e., Newtonian mechanics). To understand a physical scene, and more crucially, to predict
the future dynamical evolution of a scene, the brain relies on simulations from this mental physics
engine. Even though the probabilistic simulation account is very appealing, there are missing practical and conceptual leaps. First, as a practical matter, the probabilistic simulation approach is shown
to work only with synthetically generated stimuli: either in 2D worlds, or in 3D worlds but each
* Indicates equal contribution. The authors are listed in alphabetical order.
object is constrained to be a block and the joint inference of the mass and friction coefficient is not
handled [2]. Second, as a conceptual matter, previous research rarely clarifies how a mental physics
engine could take advantage of previous experience of the agent [11]. It is the case that humans have lifelong experience with dynamical scenes, and a fuller account of human physical scene understanding should address it.
Here, we build on the idea that humans utilize a realistic physics engine as part of a generative
model to interpret real-world physical scenes. We name our model Galileo. The first component of
our generative model is the physical object representations, where each object is a rigid body and
represented not only by its 3D geometric shape (or volume) and its position in space, but also by its
mass and its friction. All of these object attributes are treated as latent variables in the model, and
are approximated or estimated on the basis of the visual input.
The second part is a fully-fledged realistic physics engine ? in this paper, specifically the Bullet
physics engine [4]. The physics engine takes a scene setup as input (e.g., specification of each of the
physical objects in the scene, which constitutes a hypothesis in our generative model), and physically
simulates it forward in time, generating simulated velocity profiles and positions for each object.
The third part of Galileo is the likelihood function. We evaluate the observed real-world videos
with respect to the model's hypotheses using the velocity vectors of objects in the scene. We use a
standard tracking algorithm to map the videos to the velocity space.
Now, given a video as observation to the model, physical scene understanding in the model corresponds to inverting the generative model by probabilistic inference to recover the underlying physical object properties in the scene. Here, we build a video dataset to evaluate our model and humans
on real-world data, which contains 150 videos of different objects with a range of materials and
masses over a simple yet physically rich scenario: an object sliding down an inclined surface, and
potentially collide with another object on the ground. Note that in the fields of computer vision
and robotics, there have been studies on predicting physical interactions or inferring 3D properties
of objects for various purposes including 3D reasoning [6, 13] and tracking [9]. However, none
of them focused on learning physical properties directly, nor have they incorporated a physics
engine with representation learning.
Based on the estimates we derived from visual input with a physics engine, a natural extension is
to generate or synthesize training data for any automatic learning systems by bootstrapping from
the videos already collected, and labeling them with estimates of Galileo. This is a self-supervised
learning algorithm for inferring generic physical properties, and relates to the wake/sleep phases in
Helmholtz machines [5], and to the cognitive development of infants. Extensive studies suggest that
infants either are born with or can learn quickly physical knowledge about objects when they are very
young, even before they acquire more advanced high-level knowledge like semantic categories of
objects [3, 1]. Young babies are sensitive to physics of objects mainly from the motion of foreground
objects from background [1]; in other words, they learn by watching videos of moving objects. But
later in life, and clearly in adulthood, we can perceive physical attributes in just static scenes without
any motion.
Here, building upon the idea of Helmholtz machines [5], our approach suggests one potential computational path to the development of the ability to perceive physical content in static scenes. Following the recent work [12], we train a recognition model (i.e., sleep cycle) that is in the form of a
deep convolutional network, where the training data is generated in a self-supervised manner by the
generative model itself (i.e., wake cycle: real-world videos observed by our model and the resulting
physical inferences). Interestingly, this computational solution asserts that the infant starts with a
relatively reliable mental physics engine, or acquires it soon after birth.
Our work makes three contributions. First, we propose Galileo, a novel model for estimating physical properties of objects from visual inputs by incorporating the feedback of a physics engine in
the loop. We demonstrate that it achieves encouraging performance on a real-world video dataset.
Second, we train a deep learning based recognition model that leads to efficient inference in the
generative model, and enables the generative model to predict future dynamical evolution of static
scenes (e.g., how would that scene unfold in time). Third, we test our model and compare it to humans on a variety of physical judgment tasks. Our results indicate that humans are quite successful
in these tasks, and our model closely matches humans in performance, but also consistently makes
[Figure 1 graphics: (a) snapshot frames from the dataset; (b) model overview: two physical objects (mass m, friction coefficient k, 3D shape S, position offset x) are drawn from the hypothesis space and passed to the 3D physics engine, whose simulated velocities are scored against the observed velocities from the tracking algorithm by the likelihood function.]
Figure 1: (a) Snapshots of the dataset. (b) Overview of the model. Our model formalizes a hypothesis space of physical object representations, where each object is defined by its mass, friction
coefficient, 3D shape, and a positional offset w.r.t. an origin. To model videos, we draw exactly two
objects from that hypothesis space into the physics engine. The simulations from the physics engine
are compared to observations in the velocity space, a much "nicer" space than pixels.
similar errors as humans do, providing further evidence in favor of the probabilistic simulation account of human physical scene understanding.
2 Scenario
We seek to learn physical properties of objects by observing videos. Among many scenarios, we
consider an introductory setup: an object is put on an inclined surface; it may either slide down or
keep static due to gravity and friction, and may hit another object if it slides down.
This seemingly simple scenario is physically highly involved. The observed outcome of these scenario are physical values which help to describe the scenario, such as the velocity and moving
distance of objects. Causally underlying these observations are the latent physical properties of objects such as the material, density, mass and friction coefficient. As shown in Section 3, our Galileo
model intends to model the causal generative relationship between these observed and unobserved
variables.
We collect a real-world video dataset of about 100 objects sliding down a ramp, possibly hitting
another object. Figure 1a provides some exemplar videos in the dataset. The results of collisions,
including whether it will happen or not, are determined by multiple factors, such as material (density
and friction coefficient), size and shape (volume), and slope of surface (gravity). Videos in our
dataset vary in all these parameters.
Specifically, there are 15 different materials ? cardboard, dough, foam, hollow rubber, hollow
wood, metal coin, metal pole, plastic block, plastic doll, plastic ring, plastic toy, porcelain, rubber,
wooden block, and wooden pole. For each material, there are 4 to 12 objects of different sizes and
shapes. The angle between the inclined surface and the ground is either 10o or 20o . When an object
slides down, it may hit either a cardboard box, or a piece of foam, or neither.
3 Galileo: A Physical Object Model
The gist of our model can be summarized as probabilistically inverting a physics engine in order
to recover unobserved physical properties of objects. We collectively refer to the unobserved latent
variables of an object as its physical representation T . For each object i, Ti consists of its mass mi ,
friction coefficient ki , 3D shape Vi , and position offset pi w.r.t. an origin in 3D space.
We place uniform priors over the mass and the friction coefficient for each object: m_i ∼ Uniform(0.001, 1) and k_i ∼ Uniform(0, 1), respectively.
For 3D shape Vi , we have four variables: a shape type ti , and the scaling factors for three dimensions
xi , yi , zi . We simplify the possible shape space in our model by constraining each shape type ti to
be one of the three with equal probability: a box, a cylinder, and a torus. Note that applying scaling
differently on each dimension to these three basic shapes results in a large space of shapes.1 The
scaling factors are chosen to be uniform over the range of values to capture the extent of different
shapes in the dataset.
Remember that our scenario consists of an object on the ramp and another on the ground. The
position offset, p_i, for each object is uniform over the set {0, ±1, ±2, …, ±5}. This indicates that
for the object on the ramp, its position can be perturbed along the ramp (i.e., in 2D) at most 5 units
upwards or downwards from its starting position, which is 30 units upwards on the ramp from the
ground.
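As a concrete sketch of the generative side, the prior over one object's latent representation can be sampled as below. This is our own illustrative code: the scale-factor range is a placeholder (the paper only says the scales cover the extent of shapes in the dataset), and the offset is treated as a single scalar.

```python
import random

SHAPE_TYPES = ["box", "cylinder", "torus"]

def sample_object():
    """Draw one physical object representation T = (m, k, V, p) from the priors."""
    m = random.uniform(0.001, 1.0)                 # mass ~ Uniform(0.001, 1)
    k = random.uniform(0.0, 1.0)                   # friction coefficient ~ Uniform(0, 1)
    t = random.choice(SHAPE_TYPES)                 # shape type, uniform over the three types
    x, y, z = (random.uniform(0.5, 2.0) for _ in range(3))  # placeholder scale range (ours)
    if t == "torus":
        z = x                                      # footnote 1: tori constrained to x = z
    elif t == "cylinder":
        z = y                                      # footnote 1: cylinders constrained to y = z
    p = random.choice(range(-5, 6))                # position offset in {0, +-1, ..., +-5}
    return {"mass": m, "friction": k, "shape": (t, x, y, z), "offset": p}
```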
The next component of our generative model is a fully-fledged realistic physics engine that we
denote as φ. Specifically, we use the Bullet physics engine [4], following earlier related work.
The physics engine takes a specification of each of the physical objects in the scene within the
basic ramp setting as input, and simulates it forward in time, generating simulated velocity vectors
for each object in the scene, vs1 and vs2 respectively ? among other physical properties such as
position, rendered image of each simulation step, etc.
In light of initial qualitative analysis, we use velocity vectors as our feature representation in evaluating the hypothesis generated by the model against data. We employ a standard tracking algorithm
(KLT point tracker [10]) to ?lift? the visual observations to the velocity space. That is, for each
video, we first run the tracking algorithm, and we obtain velocities by simply using the center locations of each of the tracked moving objects between frames. This gives us the velocity vectors for
the object on the ramp and the object on the ground, vo1 and vo2 , respectively.
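The lifting step itself is simple once the tracker supplies per-frame object centers; a minimal sketch (the helper name is ours, and the actual tracker is the KLT implementation of [10]):

```python
import numpy as np

def velocities_from_centers(centers):
    """centers: (T, 2) array of one object's tracked center location per frame.
    Returns the (T-1, 2) frame-to-frame velocity vectors (pixels per frame)."""
    return np.diff(np.asarray(centers, dtype=float), axis=0)
```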
Given a pair of observed velocity vectors, v_o1 and v_o2, the recovery of the physical object representations T1 and T2 for the two objects via physics-based simulation can be formalized as:

    P(T1, T2 | v_o1, v_o2, φ(·)) ∝ P(v_o1, v_o2 | v_s1, v_s2) · P(v_s1, v_s2 | T1, T2, φ(·)) · P(T1, T2),    (1)

where we define the likelihood function as P(v_o1, v_o2 | v_s1, v_s2) = N(v_o | v_s, σ), where v_o is the concatenated vector of v_o1 and v_o2, and v_s is the concatenated vector of v_s1 and v_s2. The dimensionality of v_o and v_s is kept the same for a video by adjusting the number of simulation steps we use to obtain v_o according to the length of the video. But from video to video, the length of these vectors may vary. In all of our simulations, we fix σ to 0.05, which is the only free parameter in our model.
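With a spherical Gaussian likelihood and uniform priors, the unnormalized log-posterior of a hypothesis reduces to a scaled squared error in velocity space. A minimal sketch, assuming the priors contribute only a constant inside their support:

```python
import numpy as np

SIGMA = 0.05  # the model's only free parameter

def log_posterior(v_obs, v_sim):
    """Unnormalized log P(T1, T2 | v_o) under Eqn. 1, up to an additive constant.
    v_obs and v_sim are the concatenated observed and simulated velocity vectors;
    their lengths match because we simulate as many steps as the video has frames."""
    v_obs, v_sim = np.ravel(v_obs), np.ravel(v_sim)
    return -0.5 * np.sum((v_obs - v_sim) ** 2) / SIGMA ** 2
```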
3.1 Tracking algorithm as a recognition model
The posterior distribution in Equation 1 is intractable. In order to alleviate the burden of posterior
inference, we use the output of our recognition model to predict and fix some of the latent variables
in the model.
Specifically, we determine the Vi , or {ti , xi , yi , zi }, using the output of the tracking algorithm, and
fix these variables without further sampling them. Furthermore, we fix values of pi s also on the
basis of the output of the tracking algorithm.
¹For shape type box, x_i, y_i, and z_i could all be different values; for shape type torus, we constrained the scaling factors such that x_i = z_i; and for shape type cylinder, we constrained the scaling factors such that y_i = z_i.
[Figure 2 image grid: one row per example video (dough, cardboard, pole), with panels (a)–(f) as described in the caption.]
Figure 2: Simulation results. Each row represents one video in the data: (a) the first frame of the
video, (b) the last frame of the video, (c) the first frame of the simulated scene generated by Bullet,
(d) the last frame of the simulated scene, (e) the estimated object with larger mass, (f) the estimated
object with larger friction coefficient.
3.2 Inference
Once we initialize and fix the latent variables using the tracking algorithm as our recognition model,
we then perform single-site Metropolis Hasting updates on the remaining four latent variables,
m1 , m2 , k1 and k2 . At each MCMC sweep, we propose a new value for one of these random
variables, where the proposal distribution is Uniform(−0.05, 0.05). In order to help with mixing, we also use a broader proposal distribution, Uniform(−0.5, 0.5), at every 20th MCMC sweep.
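A sketch of one such sweep is below; `loglik` is assumed to rerun the physics engine and score Eqn. 1, and for brevity we omit rejecting proposals that fall outside the uniform priors' support:

```python
import math, random

LATENTS = ("m1", "m2", "k1", "k2")

def mh_sweep(state, loglik, sweep_idx):
    """One single-site Metropolis-Hastings sweep over the four remaining latents."""
    width = 0.5 if sweep_idx % 20 == 0 else 0.05   # broader proposal every 20th sweep
    cur_ll = loglik(state)
    for key in LATENTS:
        proposal = dict(state)
        proposal[key] = state[key] + random.uniform(-width, width)
        new_ll = loglik(proposal)
        # symmetric proposal, flat priors inside support: MH ratio = likelihood ratio
        if math.log(random.random()) < new_ll - cur_ll:
            state, cur_ll = proposal, new_ll
    return state, cur_ll
```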
4 Simulations
For each video, as mentioned earlier, we use the tracking algorithm to initialize and fix the shapes
of the objects, S1 and S2 , and the position offsets, p1 and p2 . We also obtain the velocity vector for
each object using the tracking algorithm. We determine the length of the physics engine simulation
by the length of the observed video ? that is, the simulation runs until it outputs a velocity vector
for each object that is as long as the input velocity vector from the tracking algorithm.
As mentioned earlier, we collect 150 videos, uniformly distributed across different object categories.
We perform 16 MCMC simulations for a single video, each of which was 75 MCMC sweeps long.
We report the results with the highest log-likelihood score across the 16 chains (i.e., the MAP estimate).
In Figure 2, we illustrate the results for three individual videos. For each video, the first pair of columns shows the first and the last frame of the video, and the next pair shows the corresponding frames from our model's simulation with the MAP estimate. We quantify different aspects of
our model in the following behavioral experiments, where we compare our model against human
subjects? judgments. Furthermore, we use the inferences made by our model here on the 150 videos
to train a recognition model to arrive at physical object perception in static scenes with the model.
Importantly, note that our model can generalize across a broad range of tasks beyond the ramp
scenario. For example, once we infer the density of our object, we can make a buoyancy prediction
about it by simulating a scenario in which we drop the object into a liquid. We test some of the
generalizations in Section 6.
5 Bootstrapping to efficiently see physical objects in static scenes
Based on the estimates we derived from the visual input with a physics engine, we bootstrap from the
videos already collected, by labeling them with estimates of Galileo. This is a self-supervised learning algorithm for inferring generic physical properties. As discussed in Section 1, this formulation
is also related to the wake/sleep phases in Helmholtz machines, and to the cognitive development of
infants.
Methods    MSE      Corr
Oracle     0.042    0.71
Galileo    0.052    0.44
Uniform    0.081    0

Figure 3: Mean squared errors of the oracle estimate, our estimate, and the uniform estimate of mass on a log-normalized scale, and the correlations between the estimates and the ground truths.

[Figure 4 plot: log-likelihood traces over 60 MCMC sweeps, comparing chains initialized with the recognition model against random initialization.]
Figure 4: The log-likelihood traces of several chains with and without recognition-model (LeNet) based initializations.
Here we focus on two physical properties: mass and friction coefficient. To do this, we first estimate
these physical properties using the method described in earlier sections. Then, we train LeNet [7], a
widely used deep neural network for small-scale datasets, using image patches cropped from videos
based on the output of the tracker as data, and estimated physical properties as labels. The trained
model can then be used to predict these physical properties of objects based on purely visual cues,
even though they might have never appeared in the training set.
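A minimal sketch of this bootstrapping step follows, written in PyTorch for concreteness. The exact architecture, optimizer, and the choice to regress mass and friction jointly are our own illustrative assumptions, not specifications from the paper; `loader` is assumed to yield 32x32 RGB patches paired with Galileo's estimates.

```python
import torch
import torch.nn as nn

# LeNet-style regressor for 32x32 RGB patches (architecture details are ours).
model = nn.Sequential(
    nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 2),                 # predicts (mass, friction coefficient)
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for patches, galileo_labels in loader:  # labels come from Galileo's MCMC estimates, not humans
    opt.zero_grad()
    loss = loss_fn(model(patches), galileo_labels)
    loss.backward()
    opt.step()
```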
We also measure masses of all objects in the dataset, which makes it possible for us to quantitatively
evaluate the predictions of the deep network. We choose one object per material as our test cases,
use all data of those objects as test data, and the others as training data. We compare our model
with a baseline, which always outputs a uniform estimate calculated by averaging the masses of
all objects in the test data, and with an oracle algorithm, which is a LeNet trained using the same
training data, but has access to the ground truth masses of training objects as labels. Apparently, the
performance of the oracle model can be viewed as an upper bound of our Galileo system.
Figure 3 compares the performance of Galileo, the oracle algorithm, and the baseline. We can observe
that Galileo is much better than baseline, although there is still some space for improvement.
Because we trained LeNet using static images to predict physical object properties such as friction
and mass ratios, we can use it to recognize those attributes in a quick bottom-up pass at the very first
frame of the video. To the extent that the trained LeNet is accurate, if we initialize the MCMC chains
with these bottom-up predictions, we expect to see an overall boost in our log-likelihood traces. We
test by running several chains with and without LeNet-based initializations. Results can be seen in
Figure 4. Despite the fact that LeNet is not achieving perfect performance by itself, we indeed get a
boost in speed and quality in the inference.
6 Experiments
In this section, we conduct experiments from multiple perspectives to evaluate our model. Specifically, we use the model to predict how far objects will move after the collision; whether the object
will remain stable in a different scene; and which of the two objects is heavier based on observations
of collisions. For every experiment, we also conduct behavioral experiments on Amazon Mechanical
Turk so that we may compare the performance of human and machine on these tasks.
6.1 Outcome Prediction
In the outcome prediction experiment, our goal is to measure and compare how well human and
machines can predict the moving distance of an object if only part of the video can be observed.
[Figure 5 bar chart: error in pixels (0–250) of Human, Galileo, and Uniform predictions per material (cardboard, dough, hollow wood, metal coin, metal pole, plastic block, plastic doll, plastic toy, porcelain, wooden block, wooden pole) and the overall mean.]
Figure 5: Mean errors in numbers of pixels of human predictions, Galileo outputs, and a uniform
estimate calculated by averaging ground truth ending points over all test cases
Figure 6: Heat maps of user predictions, Galileo outputs (orange crosses), and ground truths (white
crosses).
Specifically, for behavioral experiments on Amazon Mechanical Turk, we first provide users four full
videos of objects made of a certain material, which contain complete collisions. In this way, users
may infer the physical properties associated with that material in their mind. We select a different
object, but made of the same material, show users a video of the object, but only to the moment
of collision. We finally ask users to label where they believe the target object (either cardboard or
foam) will be after the collision, i.e., how far the target will move. We tested 30 users per case.
Given a partial video, for Galileo to generate predicted destinations, we first run it to fit the part of
the video to derive our estimate of its friction coefficient. We then estimate its density by averaging
the density values we derived from other objects with that material by observing collisions that they
are involved. We further estimate the density (mass) and friction coefficient of the target object by
averaging our estimates from other collisions. We now have all required information for the model
to predict the ending point of the target after the collision. Note that the information available to
Galileo is exactly the same as that available to humans.
We compare three kinds of predictions: human feedback, Galileo output, and, as a baseline, a uniform estimate calculated by averaging ground truth ending points over all test cases. Figure 5 shows
the Euclidean distance in pixels between each of them and the ground truth. We can see that human
predictions are much better than the uniform estimate, but still far from perfect. Galileo performs
similar to human in the average on this task. Figure 6 shows, for some test cases, heat maps of user
predictions, Galileo outputs (orange crosses), and ground truths (white crosses).
6.2 Mass Prediction
The second experiment is to predict which of two objects is heavier, after observing a video of their collision. For this task, we randomly choose 50 objects and test each of them on 50 users. For Galileo, we can directly obtain its guess based on the estimates of the masses of the
objects.
Figure 7 demonstrates that human and our model achieve about the same accuracy on this task. We
also calculate correlations between different outputs. Here, as the relation is highly nonlinear, we
calculate Spearman's coefficients. From Table 1, we notice that human responses, machine outputs, and ground truths are all positively correlated.

[Figure 7 bar chart: average accuracy (0–1) of Human and Galileo on the mass prediction and "will it move" tasks.]
Figure 7: Average accuracy of human predictions and Galileo outputs on the tasks of mass prediction and "will it move" prediction. Error bars indicate standard deviations of human accuracies.

                    Mass                 "Will it move"
                    (Spearman's coeff)   (Pearson's coeff)
Human vs Galileo    0.51                 0.56
Human vs Truth      0.68                 0.42
Galileo vs Truth    0.52                 0.20

Table 1: Correlations between pairs of outputs in the mass prediction experiment (in Spearman's coefficient) and in the "will it move" prediction experiment (in Pearson's coefficient).
6.3 "Will it move" prediction in a novel setup
Our third experiment is to predict whether a certain object will move in a different scene, after
observing one of its collisions. On Amazon Mechanical Turk, we show users a video containing a
collision of two objects. In this video, the angle between the inclined surface and the ground is 20
degrees. We then show users the first frame of a 10-degree video of the same object, and ask them to
predict whether the object will slide down the surface in this case. We randomly choose 50 objects
for the experiment, divide them into lists of 10 objects per user, and get each item tested
on 50 users overall.
For Galileo, it is straightforward to predict the stability of an object in the 10-degree case using
estimates from the 20-degree video. Interestingly, both humans and the model are at chance on this
task (Figure 7), and their responses are reasonably correlated (Table 1). Moreover, both subjects
and the model show a bias towards saying "it will move." Future controlled experimentation and
simulations will investigate what underlies this correspondence.
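For intuition, the stability question also has a closed-form rigid-body approximation, even though Galileo answers it by full simulation: an object at rest on an incline of angle θ begins to slide when tan θ exceeds its friction coefficient. A minimal sketch with our own helper name:

```python
import math

def will_slide(friction_coeff, incline_deg=10.0):
    """Rigid-body approximation: the object slides iff tan(theta) > k."""
    return math.tan(math.radians(incline_deg)) > friction_coeff
```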
7 Conclusion
This paper accomplishes three goals: first, it shows that a generative vision system with physical
object representations and a realistic 3D physics engine at its core can efficiently deal with real-world
data when proper recognition models and feature spaces are used. Second, it shows that humans'
intuitions about physical outcomes are often accurate, and our model largely captures these intuitions
; but crucially, humans and the model make similar errors. Lastly, the experience of the model,
that is, the inferences it makes on the basis of dynamical visual scenes, can be used to train a deep
learning model, which leads to more efficient inference and to the ability to see physical properties
in the static images. Our study points towards an account of human vision with generative physical
knowledge at its core, and various recognition models as helpers to induce efficient inference.
Acknowledgements
This work was supported by NSF Robust Intelligence 1212849 Reconstructive Recognition and the
Center for Brains, Minds, and Machines (funded by NSF STC award CCF-1231216).
References
[1] Renée Baillargeon. Infants' physical world. Current Directions in Psychological Science, 13(3):89–94, 2004.
[2] Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical scene understanding. PNAS, 110(45):18327–18332, 2013.
[3] Susan Carey. The origin of concepts. Oxford University Press, 2009.
[4] Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010.
[5] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Computation, 7(5):889–904, 1995.
[6] Zhaoyin Jia, Andy Gallagher, Ashutosh Saxena, and Tsuhan Chen. 3D reasoning from blocks to stability. IEEE TPAMI, 2014.
[7] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[8] Adam N Sanborn, Vikash K Mansinghka, and Thomas L Griffiths. Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychological Review, 120(2):411, 2013.
[9] John Schulman, Alex Lee, Jonathan Ho, and Pieter Abbeel. Tracking deformable objects with point clouds. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 1130–1137. IEEE, 2013.
[10] Carlo Tomasi and Takeo Kanade. Detection and tracking of point features. International Journal of Computer Vision, 1991.
[11] Tomer Ullman, Andreas Stuhlmüller, Noah Goodman, and Josh Tenenbaum. Learning physics from dynamical scenes. In CogSci, 2014.
[12] Ilker Yildirim, Tejas D Kulkarni, Winrich A Freiwald, and Joshua B Tenenbaum. Efficient analysis-by-synthesis in vision: A computational framework, behavioral tests, and modeling neuronal representations. In Thirty-Seventh Annual Conference of the Cognitive Science Society, 2015.
[13] Bo Zheng, Yibiao Zhao, Joey C Yu, Katsushi Ikeuchi, and Song-Chun Zhu. Detecting potential falling objects by inferring human action and natural disturbance. In ICRA, 2014.
5,281 | 5,781 | Learning visual biases from human imagination
Carl Vondrick
Hamed Pirsiavash†
Aude Oliva    Antonio Torralba
Massachusetts Institute of Technology    †University of Maryland, Baltimore County
{vondrick,oliva,torralba}@mit.edu hpirsiav@umbc.edu
Abstract
Although the human visual system can recognize many concepts under challenging conditions, it still has some biases. In this paper, we investigate whether
we can extract these biases and transfer them into a machine recognition system. We introduce a novel method that, inspired by well-known tools in human
psychophysics, estimates the biases that the human visual system might use for
recognition, but in computer vision feature spaces. Our experiments are surprising, and suggest that classifiers from the human visual system can be transferred
into a machine with some success. Since these classifiers seem to capture favorable biases in the human visual system, we further present an SVM formulation
that constrains the orientation of the SVM hyperplane to agree with the bias from
human visual system. Our results suggest that transferring this human bias into
machines may help object recognition systems generalize across datasets and perform better when very little training data is available.
1 Introduction
Computer vision researchers often go through great lengths to remove dataset biases from their
models [32, 20]. However, not all biases are adversarial. Even natural recognition systems, such as
the human visual system, have biases. Some of the most well known human biases, for example, are
the canonical perspective (prefer to see objects from a certain perspective) [26] and Gestalt laws of
grouping (tendency to see objects in collections of parts) [11].
We hypothesize that biases in the human visual system can be beneficial for visual understanding.
Since recognition is an underconstrained problem, the biases that the human visual system developed
may provide useful priors for perception. In this paper, we develop a novel method to learn some
biases from the human visual system and incorporate them into computer vision systems.
We focus our approach on learning the biases that people may have for the appearance of objects.
To illustrate our method, consider what may seem like an odd experiment. Suppose we sample i.i.d.
white noise from a standard normal distribution, and treat it as a point in a visual feature space, e.g.
CNN or HOG. What is the chance that this sample corresponds to visual features of a car image?
Fig.1a visualizes some samples [35] and, as expected, we see noise. But, let us not stop there. We
next generate one hundred fifty thousand points from the same distribution, and ask workers on
Amazon Mechanical Turk to classify visualizations of each sample as a car or not. Fig.1c visualizes
the average of visual features that workers believed were cars. Although our dataset consists of only
white noise, a car emerges!
Sampling noise may seem unusual to computer vision researchers, but a similar procedure, named
classification images, has gained popularity in human psychophysics [2] for estimating an approximate template the human visual system internally uses for recognition [18, 4]. In the procedure,
an observer looks at an image perturbed with random noise and indicates whether they perceive a
target category. After a large number of trials, psychophysics researchers can apply basic statistics
to extract an approximation of the internal template the observer used for recognition. Since the procedure is done with noise, the estimated template reveals some of the cues that the human visual system used for discrimination.

[Figure 1 panels: visualizations of white noise CNN features (left); the human visual system as the labeler (center); the resulting template for car (right).]
Figure 1: Although all image patches on the left are just noise, when we show thousands of them to online workers and ask them to find ones that look like cars, a car emerges in the average, shown on the right. This noise-driven method is based on well known tools in human psychophysics that estimates the biases that the human visual system uses for recognition. We explore how to transfer these biases into a machine.
We propose to extend classification images to estimate biases from the human visual system. However, our approach makes two modifications. Firstly, we estimate the template in state-of-the-art
computer vision feature spaces [8, 19], which allows us to incorporate these biases into learning algorithms in computer vision systems. To do this, we take advantage of algorithms that invert visual
features back to images [35]. By estimating these biases in a feature space, we can learn biases for
how humans may correspond mid-level features, such as shapes and colors, with objects. To our
knowledge, we are the first to estimate classification images in vision feature spaces. Secondly, we
want our template to be biased by the human visual system and not our choice of dataset. Unlike
classification images, we do not perturb real images; instead our approach only uses visualizations
of feature space noise to estimate the templates. We capitalize on the ability of people to discern
visual objects from random noise in a systematic manner [16].
2 Related Work
Mental Images: Our methods build upon work to extract mental images from a user's head for both
general objects [15], faces [23], and scenes [17]. However, our work differs because we estimate
mental images in state-of-the-art computer vision feature spaces, which allows us to integrate the
mental images into a machine recognition system.
Visual Biases: Our paper studies biases in the human visual system similar to [26, 11], but we wish
to transfer these biases into a computer recognition system. We extend ideas [24] to use computer
vision to analyze these biases. Our work is also closely related to dataset biases [32, 28], which
motivates us to try to transfer favorable biases into recognition systems.
Human-in-the-Loop: The idea to transfer biases from the human mind into object recognition
is inspired by many recent works that put a human in the computer vision loop [6, 27], train recognition systems with active learning [33], and study crowdsourcing [34, 31]. The primary
difference of these approaches and our work is, rather than using crowds as a workforce, we want to
extract biases from the worker?s visual systems.
Feature Visualization: Our work explores a novel application of feature visualizations [36, 35, 22].
Rather than using feature visualizations to diagnose computer vision systems, we use them to inspect
and learn biases in the human visual system.
Transfer Learning: We also build upon methods in transfer learning to incorporate priors into
learning algorithms. A common transfer learning method for SVMs is to change the regularization
term ||w||22 to ||w ? c||22 where c is the prior [29, 37]. However, this imposes a prior on both the
norm and orientation of w. In our case, since the visual bias does not provide an additional prior on
the norm, we present an SVM formulation that constrains only the orientation of w to be close to c.
[Figure 2 panels: visualizations of white noise in (a) RGB, (b) HOG, and (c) CNN feature spaces.]
Figure 2: We visualize white noise in RGB and feature spaces. To visualize white noise features, we use feature inversion algorithms [35]. White noise in feature space has correlations in image space that white noise in RGB does not. We capitalize on this structure to estimate visual biases in feature space without using real images.
Our approach extends sign constraints on SVMs [12], but instead enforces orientation constraints.
Our method enforces a hard orientation constraint, which builds on soft orientation constraints [3].
3 Classification Images Review
The procedure classification images is a popular method in human psychophysics that attempts to
estimate the internal template that the human visual system might use for recognition of a category
[18, 4]. We review classification images in this section as it is the inspiration for our method.
The goal is to approximate the template c* ∈ R^d that a human observer uses to discriminate between two classes A and B, e.g. male vs. female faces, or chair vs. not chair. Suppose we have intensity images a ∈ A ⊆ R^d and b ∈ B ⊆ R^d. If we sample white noise ε ∼ N(0_d, I_d) and ask an observer to indicate the class label for a + ε, most of the time the observer will answer with the correct class label A. However, there is a chance that ε might manipulate a to cause the observer to mistakenly label a + ε as class B.

The insight into classification images is that, if we perform a large number of trials, then we can estimate a decision function f(·) that discriminates between A and B, but makes the same mistakes as the observer. Since f(·) makes the same errors, it provides an estimate of the template that the observer internally used to discriminate A from B. By analyzing this model, we can then gain insight into how a visual system might recognize different categories.

Since psychophysics researchers are interested in models that are interpretable, classification images are often linear approximations of the form f(x; c*) = c*^T x. The template c* ∈ R^d can be estimated in many ways, but the most common is a sum of the stimulus images:

    c* = (μ_AA + μ_BA) − (μ_AB + μ_BB)    (1)

where μ_XY is the average image where the true class is X and the observer predicted class Y.

The template c* is fairly intuitive: it will have large positive value on locations that the observer
used to predict A, and large negative value for locations correlated with predicting B. Although
classification images is simple, this procedure has led to insights in human perception. For example,
[30] used classification images to study face processing strategies in the human visual system. For a
complete analysis of classification images, we refer readers to review articles [25, 10].
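A direct implementation of Eqn. 1 is only a few lines; the sketch below uses our own helper names and assumes the per-trial stimuli, true labels, and observer responses have been recorded:

```python
import numpy as np

def classification_image(stimuli, true_labels, responses):
    """Estimate the observer's template via Eqn. 1.
    stimuli: (n, d) array; true_labels, responses: length-n sequences over {'A', 'B'}."""
    stimuli = np.asarray(stimuli, dtype=float)
    true_labels, responses = np.asarray(true_labels), np.asarray(responses)

    def mean_of(t, p):  # average stimulus with true class t and predicted class p
        mask = (true_labels == t) & (responses == p)
        return stimuli[mask].mean(axis=0) if mask.any() else np.zeros(stimuli.shape[1])

    return (mean_of("A", "A") + mean_of("B", "A")) - (mean_of("A", "B") + mean_of("B", "B"))
```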
4 Estimating Human Biases in Feature Spaces
Standard classification images is performed with perturbing real images with white noise. However,
this approach may negatively bias the template by the choice of dataset. Instead, we are interested
in estimating templates that capture biases in the human visual system and not datasets.
We propose to estimate these templates by only sampling white noise (with no real images). Unfortunately, sampling just white noise in RGB is extremely unlikely to result in a natural image (see
Fig.2a). To overcome this, we can estimate the templates in feature spaces [8, 19] used in computer
vision. Feature spaces encode higher abstractions of images (such as gradients, shapes, or colors).
While sampling white noise in feature space may still not lay on the manifold of natural images, it
is more likely to capture statistics relevant for recognition. Since humans cannot directly interpret
abstract feature spaces, we can use feature inversion algorithms [35, 36] to visualize them.
Using these ideas, we first sample noise from a zero-mean, unit-covariance Gaussian distribution x ∼ N(0_d, I_d). We then invert the noise feature x back to an image φ⁻¹(x), where φ⁻¹(·) is the feature inverse. By instructing people to indicate whether a visualization of noise is a target category or not, we can build a linear template c ∈ R^d that approximates people's internal templates:

    c = μ_A − μ_B    (2)

where μ_A ∈ R^d is the average, in feature space, of white noise that workers incorrectly believe is the target object, and similarly μ_B ∈ R^d is the average of noise that workers believe is noise.

[Figure 3 panels: HOG (top row) and CNN (bottom row) bias templates for Car, Television, Person, Bottle, and Fire Hydrant.]
Figure 3: We visualize some biases estimated from trials by Mechanical Turk workers.
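Over the recorded trials, Eqn. 2 is just a difference of two means; a minimal sketch (function name ours):

```python
import numpy as np

def noise_template(noise_features, worker_said_yes):
    """Eqn. 2: mean of noise features workers called the target category,
    minus mean of those they called noise. Rows are i.i.d. N(0, I) samples."""
    X = np.asarray(noise_features, dtype=float)
    yes = np.asarray(worker_said_yes, dtype=bool)
    return X[yes].mean(axis=0) - X[~yes].mean(axis=0)
```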
Eqn.2 is a special case of the original classification images Eqn.1 where the background class B is
white noise and the positive class A is empty. Instead, we rely on humans to hallucinate objects in
noise to form μ_A. Since we build these biases with only white Gaussian noise and no real images,
our approach may be robust to many issues in dataset bias [32]. Instead, templates from our method
can inherit the biases for the appearances of objects present in the human visual system, which we
suspect provides advantageous signals about the visual world.
In order to estimate c from noise, we need to perform many trials, which we can conduct effectively
on Amazon Mechanical Turk [31]. We sampled 150,000 points from a standard normal multivariate
distribution, and inverted each sample with the feature inversion algorithm from HOGgles [35]. We
then instructed workers to indicate whether they see the target category or not in the visualization.
Since we found that the interpretation of noise visualizations depends on the scale, we show the
worker three different scales. We paid workers 10¢ to label 100 images, and workers often collectively solved the entire batch in a few hours. In order to assure quality, we occasionally gave
workers an easy example to which we knew the answer, and only retained work from workers who
performed well above chance. We only used the easy examples to qualify workers, and discarded
them when computing the final template.
5 Visualizing Biases
Although subjects are classifying zero-mean, identity covariance white Gaussian noise with no real
images, objects can emerge after many trials. To show this, we performed experiments with both
HOG [8] and the last convolutional layer (pool5) of a convolutional neural network (CNN) trained
on ImageNet [19, 9] for several common object categories. We visualize some of the templates
from our method in Fig.3. Although the templates are blurred, they seem to show significant detail
about the object. For example, in the car template, we can clearly see a vehicle-like object in the
center sitting on top of a dark road and lighter sky. The television template resembles a rectangular
structure, and the fire hydrant templates reveals a red hydrant with two arms on the side. The
templates seem to contain the canonical perspective of objects [26], but also extends them with
color and shape biases.
In these visualizations, we have assumed that all workers on Mechanical Turk share the same appearance bias of objects. However, this assumption is not necessarily true. To examine this, we
instructed workers on Mechanical Turk to find "sport balls" in CNN noise, and clustered workers
by their geographic location. Fig. 4 shows the templates for both India and the United States.

[Figure 4 panels: (a) India, (b) United States.]
Figure 4: We grouped users by their geographic location (US or India) and instructed each group to classify CNN noise as a sports ball or not, which allows us to see how biases can vary by culture. Indians seem to imagine a red ball, which is the standard color for a cricket ball and the predominant sport in India. Americans seem to imagine a brown or orange ball, which could be an American football or basketball, both popular sports in the U.S.

Even though both sets of workers were labeling noise from the same distribution, Indian workers seemed
to imagine red balls, while American workers tended to imagine orange/brown balls. Remarkably,
the most popular sport in India is cricket, which is played with a red ball, and popular sports in
the United States are American football and basketball, which are played with brown/orange balls.
We conjecture that Americans and Indians may have different mental images of sports balls in their
head and the color is influenced by popular sports in their country. This effect is likely attributed to
phenomena in social psychology where human perception can be influenced by culture [7, 5]. Since
environment plays a role in the development of the human vision system, people from different
cultures likely develop slightly different images inside their head.
6 Leveraging Human Biases for Recognition
If the biases we learn are beneficial for recognition, then we would expect them to perform above
chance at recognizing objects in real images. To evaluate this, we use the visual biases c directly
as a classifier for object recognition. We quantify their performance on object classification in realworld images using the PASCAL VOC 2011 dataset [13], evaluating against the validation set. Since
PASCAL VOC does not have a fire hydrant category, we downloaded 63 images from Flickr with
fire hydrants and added them to the validation set. We report performance as the average precision
on a precision-recall curve.
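Since the template is applied directly as a linear classifier with no training, the evaluation loop is trivial; a sketch using scikit-learn for the AP computation (helper name ours):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def template_ap(c, features, labels):
    """Score each image feature x by c^T x and report average precision.
    features: (n, d) array; labels: length-n binary ground truth."""
    scores = np.asarray(features) @ c
    return average_precision_score(labels, scores)
```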
The results in Fig.5 suggest that biases from the human visual system do capture some signals useful
for classifying objects in real images. Although the classifiers are estimated using only white noise,
in most cases the templates are significantly outperforming chance, suggesting that biases from the
human visual system may be beneficial computationally.
Our results suggest that shape is an important bias to discriminate objects in CNN feature space.
Notice how the top classifications in Fig.6 tend to share the same rough shape by category. For
example, the classifier for person finds people that are upright, and the television classifier fires on
rectangular shapes. The confusions are quantified in Fig. 7: bottles are often confused as people, and
cars are confused as buses. Moreover, some templates appear to rely on color as well. Fig.6 suggests
that the classifier for fire-hydrant correctly favors red objects, which is evidenced by it frequently
firing on people wearing red clothes. The bottle classifier seems to be incorrectly biased towards
blue objects, which contributes to its poor performance.
          car    person   f-hydrant   bottle   tv
HOG       22.9   45.5     0.8         15.9     27.0
CNN       27.5   65.6     5.9         6.0      23.8
Chance    7.3    32.3     0.3         4.5      2.6

[Figure 5 bar chart: the same AP values plotted for HOG, CNN, and Chance across the five categories.]
Figure 5: We show the average precision (AP) for object classification on PASCAL VOC 2011 using
templates estimated with noise. Even though the template is created without a dataset, it performs
significantly above chance.
[Figure 6 image grid: top classifications for Car, Person, Bottle, Fire Hydrant, and Television.]
Figure 6: We show some of the top classifications from the human biases estimated with CNN features. Note that real data is not used in building these models.

[Figure 7 plots: per-category confusion bar charts, with the predicted category on the vertical axis and the probability of retrieval (0–0.8) on the horizontal axis.]
Figure 7: We plot the class confusions for some human biases on top classifications with CNN
features. We show only the top 10 classes for visualization. Notice that many of the confusions
may be sensible, e.g. the classifier for car tends to retrieve vehicles, and the fire hydrant classifier
commonly mistakes people and bottles.
While the motivation of this experiment has been to study whether human biases are favorable
for recognition, our approach has some applications. Although templates estimated from white
noise will likely never be a substitute for massive labeled datasets, our approach can be helpful for
recognizing objects when no training data is available. Rather, our approach enables us to build
classifiers for categories that a person has only imagined and never seen. In our experiments, we
evaluated on common categories to make evaluation simpler, but in principle our approach can work
for rare categories as well. We also wish to note that the CNN features used here are trained to
classify images on ImageNet [9] LSVRC 2012, and hence had access to data. However, we showed
competitive results for HOG as well, which is a hand-crafted feature, as well as results for a category
that the CNN network did not see during training (fire hydrants).
7 Learning with Human Biases
Our experiments to visualize the templates and use them as object recognition systems suggest that
visual biases from the human visual system provide some signals that are useful for discriminating
objects in real world images. In this section, we investigate how to incorporate these signals into
learning algorithms when there is some training data available. We present an SVM that constrains
the separating hyperplane to have an orientation similar to the human bias we estimated.
7.1 SVM with Orientation Constraints
Let x_i ∈ R^m be a training point and y_i ∈ {−1, 1} be its label for 1 ≤ i ≤ n. A standard SVM seeks a separating hyperplane w ∈ R^m with a bias b ∈ R that maximizes the margin between positive and negative examples. We wish to add the constraint that the SVM hyperplane w must be at most cos^-1(θ) degrees away from the bias template c:

    min_{w,b,ξ}  (λ/2) w^T w + Σ_{i=1..n} ξ_i
    s.t.  y_i (w^T x_i + b) ≥ 1 − ξ_i,   ξ_i ≥ 0    (3a)
          w^T c / √(w^T w) ≥ θ                      (3b)

where ξ_i ∈ R are the slack variables, λ is the regularization hyperparameter, and Eqn. 3b is the orientation prior such that θ ∈ (0, 1] bounds the maximum angle that w is allowed to deviate from c. Note that we have assumed, without loss of generality, that ‖c‖₂ = 1. Fig. 8 shows a visualization of this orientation constraint: the feasible space for the solution is the grayed hypercone, and the SVM solution w is not allowed to deviate from the prior classifier c by more than cos^-1(θ) degrees.

[Figure 8 diagram: the feasible hypercone around c with half-angle cos^-1(θ).]
7.2 Optimization
We optimize Eqn. 3 efficiently by writing the objective as a conic program. We rewrite Eqn. 3b as √(w^T w) ≤ w^T c / θ and introduce an auxiliary variable a ∈ R such that √(w^T w) ≤ a ≤ w^T c / θ. Substituting these constraints into Eqn. 3 and replacing the SVM regularization term with (λ/2) a² leads to the conic program:

    min_{w,b,ξ,a}  (λ/2) a² + Σ_{i=1..n} ξ_i
    s.t.  y_i (w^T x_i + b) ≥ 1 − ξ_i,   ξ_i ≥ 0    (4a)
          √(w^T w) ≤ a,   a ≤ w^T c / θ             (4b)

Since at the minimum a² = w^T w, Eqn. 4 is equivalent to Eqn. 3, but in a standard conic program form. As conic programs are convex by construction, we can then optimize it efficiently using off-the-shelf solvers; we use MOSEK [1]. Note that removing Eqn. 4b makes it equivalent to the standard SVM. cos^-1(θ) specifies the angle of the cone. In our experiments, we found 30° to be reasonable. While this angle is not very restrictive in low dimensions, it becomes much more restrictive as the number of dimensions increases [21].
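As an alternative to MOSEK's native interface, the same second-order cone program can be expressed in a few lines with a modeling layer such as cvxpy; this sketch is ours, and any SOCP-capable backend (e.g. ECOS or MOSEK) can solve it:

```python
import cvxpy as cp
import numpy as np

def svm_with_orientation_prior(X, y, c, theta=np.cos(np.deg2rad(30)), lam=1.0):
    """Solve Eqn. 4: an SVM whose hyperplane lies within cos^-1(theta) of c.
    X: (n, m) features; y: (n,) labels in {-1, +1}; c: unit-norm bias template."""
    n, m = X.shape
    w, b, a, xi = cp.Variable(m), cp.Variable(), cp.Variable(), cp.Variable(n)
    objective = cp.Minimize(lam / 2 * cp.square(a) + cp.sum(xi))
    constraints = [
        cp.multiply(y, X @ w + b) >= 1 - xi,   # (4a) hinge constraints
        xi >= 0,
        cp.SOC(a, w),                          # (4b) ||w||_2 <= a
        theta * a <= w @ c,                    # (4b) a <= w^T c / theta
    ]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value
```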
7.3 Experiments
We previously used the bias template as a classifier for recognizing objects when there is no training
data available. However, in some cases, there may be a few real examples available for learning. We
can incorporate the bias template into learning using an SVM with orientation constraints. Using
the same evaluation procedure as the previous section, we compare three approaches: 1) a single
SVM trained with only a few positives and the entire negative set, 2) the same SVM with orientation
priors for cos(?) = 30? on the human bias, and 3) the human bias alone. We then follow the same
experimental setup as before. We show full results for the SVM with orientation priors in Fig.9.
In general, biases from the human visual system can assist the SVM when the amount of positive
training data is only a few examples. In these low data regimes, acquiring classifiers from the human
visual system can improve performance with a margin, sometimes 10% AP.
Furthermore, standard computer vision datasets often suffer from dataset biases that harm cross
dataset generalization performance [32, 28]. Since the template we estimate is biased by the human
visual system and not datasets (there is no dataset), we believe our approach may help cross dataset
generalization. We trained an SVM classifier with CNN features to recognize cars on Caltech 101
[14], but we tested it on object classification with PASCAL VOC 2011. Fig.10a suggest that, by
constraining the SVM to be close to the human bias for car, we are able to improve the generalization
performance of our classifiers, sometimes over 5% AP. We then tried the reverse experiment in
Fig.10b: we trained on PASCAL VOC 2011, but tested on Caltech 101. While PASCAL VOC
provides a much better sample of the visual world, the orientation priors still help generalization
performance when there is little training data available. These results suggest that incorporating the
biases from the human visual system may help alleviate some dataset bias issues in computer vision.
                     0 positives      1 positive            5 positives
Category   Chance      Human       SVM    SVM+Human      SVM    SVM+Human
car           7.3       27.5      11.6        29.0      37.8        43.5
person       32.3       65.6      55.2        69.3      70.1        73.7
f-hydrant     0.3        5.9       1.7         7.0      50.1        50.1
bottle        4.5        6.0      11.2        11.7      38.1        38.7
tv            2.6       23.8      38.6        43.1      66.7        68.8
Figure 9: We show AP for the SVM with orientation priors for object classification on PASCAL VOC 2011 for varying amounts of positive data with CNN features. All results are means over random subsamples of the training sets. SVM+Human refers to the SVM with the human bias as an orientation prior.
[Figure 10 plots: AP as a function of α for car classification with CNN features. (a) Train on Caltech 101, test on PASCAL VOC 2011. (b) Train on PASCAL VOC 2011, test on Caltech 101. Each panel compares the plain SVM, the bias template C alone, and SVM+C for #pos = 1, 5, and all available positives.]
Figure 10: Since bias from humans is estimated with only noise, it tends to be biased towards the human visual system instead of datasets. (a) We train an SVM to classify cars on Caltech 101 that is constrained towards the bias template, and evaluate it on PASCAL VOC 2011. For every training set size, constraining the SVM to the human bias with α ≥ 0.75 is able to improve generalization performance. (b) We train a constrained SVM on PASCAL VOC 2011 and test on Caltech 101. For low data regimes, the human bias may help boost performance.
8 Conclusion
Since the human visual system is one of the best recognition systems, we hypothesize that its biases
may be useful for visual understanding. In this paper, we presented a novel method to estimate some
biases that people have for the appearance of objects. By estimating these biases in state-of-the-art
computer vision feature spaces, we can transfer these templates into a machine, and leverage them
computationally. Our experiments suggest biases from the human visual system may provide useful
signals for computer vision systems, especially when little, if any, training data is available.
Acknowledgements: We thank Aditya Khosla for important discussions, and Andrew Owens and Zoya
Bylinskii for helpful comments. Funding for this research was partially supported by a Google PhD Fellowship
to CV, and a Google research award and ONR MURI N000141010933 to AT.
References
[1] The MOSEK Optimization Software. http://mosek.com/.
[2] A. Ahumada Jr. Perceptual classification images from vernier acuity masked by noise. 1996.
[3] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In ICCV, 2011.
[4] B. L. Beard and A. J. Ahumada Jr. A technique to extract relevant image features for visual tasks. In SPIE, 1998.
[5] C. Blais, R. E. Jack, C. Scheepers, D. Fiset, and R. Caldara. Culture shapes how we look at faces. PLoS One, 2008.
[6] S. Branson, C. Wah, F. Schroff, B. Babenko, P. Welinder, P. Perona, and S. Belongie. Visual recognition with humans in the loop. 2010.
[7] H. F. Chua, J. E. Boland, and R. E. Nisbett. Cultural variation in eye movements during scene perception. Proceedings of the National Academy of Sciences of the United States of America, 2005.
[8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
[10] M. P. Eckstein and A. J. Ahumada. Classification images: A tool to analyze visual strategies. Journal of Vision, 2002.
[11] W. D. Ellis. A source book of Gestalt psychology. Psychology Press, 1999.
[12] A. Epshteyn and G. DeJong. Rotational prior knowledge for svms. In ECML, 2005.
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge. IJCV, 2010.
[14] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 2006.
[15] M. Ferecatu and D. Geman. A statistical framework for image category search from a mental picture. PAMI, 2009.
[16] F. Gosselin and P. G. Schyns. Superstitious perceptions reveal properties of internal representations. Psychological Science, 2003.
[17] M. R. Greene, A. P. Botros, D. M. Beck, and L. Fei-Fei. Visual noise from natural scene statistics reveals human scene category representations. arXiv, 2014.
[18] A. Ahumada Jr and J. Lovell. Stimulus features in signal detection. The Journal of the Acoustical Society of America, 1971.
[19] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[20] B. Kulis, K. Saenko, and T. Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In CVPR, pages 1785–1792, 2011.
[21] S. Li. Concise formulas for the area and volume of a hyperspherical cap. Asian Journal of Mathematics and Statistics, 2011.
[22] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. CVPR, 2015.
[23] M. C. Mangini and I. Biederman. Making the ineffable explicit: Estimating the information employed for face classifications. Cognitive Science, 2004.
[24] E. Mezuman and Y. Weiss. Learning about canonical views from internet image collections. In NIPS, 2012.
[25] R. F. Murray. Classification images: A review. Journal of Vision, 2011.
[26] S. Palmer, E. Rosch, and P. Chase. Canonical perspective and the perception of objects. Attention and Performance IX, 1981.
[27] D. Parikh and C. Zitnick. Human-debugging of machines. In NIPS WCSSWC, 2011.
[28] J. Ponce, T. L. Berg, M. Everingham, D. A. Forsyth, M. Hebert, S. Lazebnik, M. Marszalek, C. Schmid, B. C. Russell, A. Torralba, et al. Dataset issues in object recognition. In Toward category-level object recognition, 2006.
[29] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR, 2011.
[30] A. B. Sekuler, C. M. Gaspar, J. M. Gold, and P. J. Bennett. Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 2004.
[31] A. Sorokin and D. Forsyth. Utility data annotation with amazon mechanical turk. In CVPR Workshops, 2008.
[32] A. Torralba and A. Efros. Unbiased look at dataset bias. In CVPR.
[33] S. Vijayanarasimhan and K. Grauman. Large-scale live active learning: Training object detectors with crawled data and crowds. In CVPR, 2011.
[34] L. Von Ahn, R. Liu, and M. Blum. Peekaboom: a game for locating objects in images. In SIGCHI Human Factors, 2006.
[35] C. Vondrick, A. Khosla, T. Malisiewicz, and A. Torralba. HOGgles: Visualizing Object Detection Features. ICCV, 2013.
[36] P. Weinzaepfel, H. Jégou, and P. Pérez. Reconstructing an image from its local descriptors. In CVPR, 2011.
[37] J. Yang, R. Yan, and A. G. Hauptmann. Adapting svm classifiers to data with shifted distributions. In ICDM Workshops, 2007.
| 5781 |@word trial:5 cnn:17 kulis:1 inversion:4 dalal:1 norm:2 advantageous:1 seems:1 triggs:1 everingham:2 mezuman:1 seek:1 tried:1 rgb:4 covariance:2 paid:1 egou:1 concise:1 shot:1 liu:1 united:4 current:1 com:1 babenko:1 surprising:1 must:1 shape:7 enables:1 remove:1 hypothesize:2 interpretable:1 plot:1 discrimination:1 v:2 cue:1 alone:1 hallucinate:1 chua:1 mental:6 provides:3 location:4 firstly:1 simpler:1 qualitative:1 consists:1 ijcv:1 inside:1 manner:1 introduce:2 expected:1 examine:1 frequently:1 hoggles:2 inspired:2 voc:9 salakhutdinov:1 little:3 solver:1 grayed:1 becomes:1 confused:2 estimating:6 moreover:1 cultural:1 maximizes:1 what:4 developed:1 dejong:1 clothes:1 sky:1 every:1 quantitative:1 grauman:1 classifier:17 rm:2 unit:1 internally:2 appear:1 positive:9 before:1 local:1 treat:1 tends:2 mistake:2 id:2 analyzing:1 firing:1 marszalek:1 ap:7 pami:2 might:4 bird:1 hpirsiav:1 resembles:1 quantified:1 suggests:1 challenging:1 co:5 branson:1 sekuler:1 palmer:1 malisiewicz:1 enforces:2 differs:1 procedure:6 area:1 yan:1 significantly:2 vedaldi:1 adapting:1 road:1 refers:1 suggest:8 diningtable:2 get:1 cannot:1 close:2 put:1 vijayanarasimhan:1 live:1 writing:1 optimize:2 equivalent:2 center:1 go:1 williams:1 attention:1 convex:1 rectangular:2 amazon:3 perceive:1 insight:3 retrieve:1 crowdsourcing:1 variation:1 construction:1 target:4 suppose:2 user:2 imagine:4 lighter:1 carl:1 us:4 play:1 massive:1 superstitious:1 assure:1 recognition:24 lay:1 asymmetric:1 muri:1 labeled:1 database:1 geman:1 role:1 solved:1 capture:4 thousand:2 plo:1 movement:1 russell:1 discriminates:1 environment:1 constrains:3 trained:5 rewrite:1 upon:2 negatively:1 po:6 cat:1 america:2 train:9 pool5:1 hydrant:11 labeling:1 horse:1 crowd:2 cvpr:9 football:2 ability:1 statistic:4 favor:1 final:1 online:1 subsamples:1 advantage:1 chase:1 propose:2 botros:1 adaptation:1 relevant:2 loop:3 academy:1 gold:1 intuitive:1 sutskever:1 empty:1 darrell:1 object:42 help:5 illustrate:1 develop:2 andrew:1 odd:1 auxiliary:1 predicted:4 indicate:3 quantify:1 closely:1 correct:1 peekaboom:1 human:70 clustered:1 generalization:5 county:1 alleviate:1 secondly:1 normal:2 great:1 predict:1 visualize:6 substituting:1 efros:1 vary:1 torralba:6 a2:1 favorable:3 sofa:2 schroff:1 label:5 saw:1 grouped:1 tool:3 mit:1 clearly:1 rough:1 gaussian:3 rather:3 shelf:1 varying:1 crawled:1 encode:1 focus:1 acuity:1 ponce:1 indicates:1 adversarial:1 helpful:2 abstraction:1 gaspar:1 unlikely:1 transferring:1 entire:2 perona:2 interested:2 issue:3 classification:28 orientation:16 pascal:14 development:1 art:3 special:1 psychophysics:6 fairly:1 orange:3 constrained:2 weinzaepfel:1 never:2 sampling:4 biology:1 look:4 capitalize:2 mosek:3 report:1 stimulus:2 few:4 oriented:1 recognize:3 national:1 asian:1 beck:1 fire:9 attempt:1 ab:1 detection:5 investigate:2 evaluation:2 male:1 predominant:1 worker:20 xy:1 culture:4 conduct:1 psychological:1 classify:4 soft:1 elli:1 rare:1 hundred:1 masked:1 recognizing:3 welinder:1 krizhevsky:1 answer:2 perturbed:1 offthe:1 person:9 explores:1 discriminating:1 systematic:1 dong:1 von:1 cognitive:1 book:1 american:5 imagination:1 li:3 suggesting:1 blurred:1 forsyth:2 depends:1 performed:3 try:1 observer:10 diagnose:1 vehicle:2 analyze:2 view:1 red:6 competitive:1 annotation:1 convolutional:3 descriptor:1 who:1 efficiently:2 correspond:1 sitting:1 generalize:1 researcher:4 visualizes:2 n000141010933:1 hamed:1 detector:1 influenced:2 tended:1 flickr:1 against:1 turk:6 attributed:1 spie:1 sampled:1 stop:1 umbc:1 
dataset:15 gain:1 massachusetts:1 ask:3 popular:5 color:6 car:24 emerges:2 knowledge:2 recall:1 cap:1 back:2 higher:1 follow:1 zisserman:2 wei:1 formulation:2 done:1 though:2 evaluated:1 generality:1 furthermore:1 just:2 correlation:1 hand:1 eqn:9 mistakenly:1 replacing:1 google:2 quality:1 reveal:1 aude:1 believe:3 building:1 effect:1 concept:1 true:2 contain:1 geographic:2 brown:3 regularization:3 inspiration:1 hence:1 unbiased:1 white:18 visualizing:2 during:2 basketball:2 game:1 lovell:1 complete:1 confusion:3 performs:1 vondrick:3 image:55 lazebnik:1 jack:1 novel:4 funding:1 parikh:1 common:4 perturbing:1 volume:1 imagined:1 extend:2 interpretation:1 approximates:1 interpret:1 refer:1 significant:1 cv:1 rd:7 mathematics:1 similarly:1 rasa:1 erez:1 aytar:1 had:1 access:1 ahn:1 add:1 multivariate:1 recent:1 female:1 perspective:4 showed:1 driven:1 reverse:1 occasionally:1 certain:1 outperforming:1 success:1 onr:1 qualify:1 yi:3 caltech:8 inverted:1 seen:1 minimum:1 additional:1 tabula:1 deng:1 employed:1 signal:6 full:1 believed:1 cross:2 retrieval:3 icdm:1 manipulate:1 award:1 basic:1 oliva:2 vision:19 arxiv:1 histogram:1 sometimes:2 kernel:1 invert:2 background:1 want:2 remarkably:1 fellowship:1 baltimore:1 winn:1 country:1 source:1 fifty:1 biased:4 unlike:1 comment:1 suspect:1 subject:1 tend:1 mahendran:1 leveraging:1 seem:7 leverage:1 yang:1 constraining:2 easy:2 gave:1 psychology:3 idea:3 cn:1 multiclass:1 whether:5 utility:1 assist:1 aeroplane:2 suffer:1 locating:1 cause:1 antonio:1 deep:2 useful:5 amount:2 transforms:1 dark:1 mid:1 tenenbaum:1 svms:3 category:21 generate:1 specifies:1 http:1 canonical:4 notice:2 shifted:1 sign:1 estimated:9 popularity:1 correctly:1 blue:1 hyperspherical:1 group:1 blum:1 boland:1 sum:1 cone:1 realworld:1 inverse:1 angle:3 you:2 named:1 discern:1 extends:2 reader:1 reasonable:1 patch:1 decision:1 prefer:1 layer:1 bound:1 internet:1 played:2 greene:1 sorokin:1 constraint:9 fei:6 scene:4 software:1 chair:4 extremely:1 min:2 conjecture:1 transferred:1 tv:3 debugging:1 ball:10 poor:1 jr:3 across:1 beneficial:3 slightly:1 reconstructing:1 modification:1 making:1 iccv:2 computationally:2 agree:1 visualization:11 bus:3 slack:1 previously:1 mind:1 unusual:1 available:7 apply:1 hierarchical:1 away:1 sigchi:1 batch:1 motorbike:2 tvmonitor:4 original:1 substitute:1 top:5 restrictive:2 perturb:1 build:6 especially:1 murray:1 society:1 objective:1 added:1 rosch:1 hum:1 bylinskii:1 strategy:2 primary:1 gradient:2 cricket:2 thank:1 maryland:1 separating:2 sensible:1 acoustical:1 manifold:1 evaluate:2 toward:1 gosselin:1 length:1 retained:1 rotational:1 setup:1 unfortunately:1 vernier:1 hog:6 negative:3 ba:1 motivates:1 perform:4 workforce:1 inspect:1 datasets:6 discarded:1 ecml:1 incorrectly:2 hinton:1 head:3 blais:1 intensity:1 biederman:1 evidenced:1 bottle:9 mechanical:6 dog:1 eckstein:1 inverting:1 imagenet:4 wah:1 instructing:1 hour:1 boost:1 nip:3 able:2 perception:6 regime:2 challenge:1 program:4 pirsiavash:1 gool:1 natural:4 rely:2 predicting:1 boat:2 arm:1 improve:3 technology:1 eye:1 picture:1 conic:4 created:1 extract:5 schmid:1 deviate:2 prior:13 understanding:3 review:4 acknowledgement:1 law:1 loss:1 expect:1 validation:2 integrate:1 downloaded:1 degree:2 imposes:1 article:1 principle:1 classifying:2 share:3 supported:1 last:1 hebert:1 bias:76 side:1 institute:1 india:5 template:39 face:6 emerge:1 van:1 overcome:1 curve:1 dimension:2 world:3 evaluating:1 seemed:1 instructed:3 collection:2 commonly:1 social:1 gestalt:2 bb:1 approximate:2 active:2 
reveals:3 nisbett:1 harm:1 assumed:2 belongie:1 knew:1 xi:3 fergus:1 search:1 khosla:2 learn:4 transfer:10 robust:1 contributes:1 ahumada:3 schyns:1 necessarily:1 domain:1 zitnick:1 inherit:1 did:1 motivation:1 noise:41 allowed:2 fig:13 crafted:1 beard:1 precision:3 wish:3 explicit:1 perceptual:1 ix:1 removing:1 formula:1 svm:36 grouping:1 incorporating:1 socher:1 workshop:2 underconstrained:1 effectively:1 gained:1 phd:1 hauptmann:1 television:4 margin:2 led:1 appearance:5 explore:1 likely:4 visual:54 aditya:1 sport:8 partially:1 collectively:1 acquiring:1 aa:1 corresponds:1 chance:9 goal:1 identity:1 towards:3 owen:1 bennett:1 feasible:1 change:2 hard:1 lsvrc:1 upright:1 hyperplane:4 wt:9 discriminate:3 tendency:1 experimental:1 saenko:1 berg:1 internal:4 people:10 indian:3 incorporate:5 wearing:1 tested:2 phenomenon:1 correlated:1 |
5,282 | 5,782 | Character-level Convolutional Networks for Text Classification*
Xiang Zhang
Junbo Zhao
Yann LeCun
Courant Institute of Mathematical Sciences, New York University
719 Broadway, 12th Floor, New York, NY 10003
{xiang, junbo.zhao, yann}@cs.nyu.edu
Abstract
This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several largescale datasets to show that character-level convolutional networks could achieve
state-of-the-art or competitive results. Comparisons are offered against traditional
models such as bag of words, n-grams and their TFIDF variants, and deep learning
models such as word-based ConvNets and recurrent neural networks.
1 Introduction
Text classification is a classic topic for natural language processing, in which one needs to assign
predefined categories to free-text documents. The range of text classification research goes from
designing the best features to choosing the best possible machine learning classifiers. To date,
almost all techniques of text classification are based on words, in which simple statistics of some
ordered word combinations (such as n-grams) usually perform the best [12].
On the other hand, many researchers have found convolutional networks (ConvNets) [17] [18] are
useful in extracting information from raw signals, ranging from computer vision applications to
speech recognition and others. In particular, time-delay networks used in the early days of deep
learning research are essentially convolutional networks that model sequential data [1] [31].
In this article we explore treating text as a kind of raw signal at character level, and applying temporal (one-dimensional) ConvNets to it. For this article we only used a classification task as a way
to exemplify ConvNets' ability to understand texts. Historically we know that ConvNets usually
require large-scale datasets to work, therefore we also build several of them. An extensive set of
comparisons is offered with traditional models and other deep learning models.
Applying convolutional networks to text classification or natural language processing at large was
explored in literature. It has been shown that ConvNets can be directly applied to distributed [6] [16]
or discrete [13] embedding of words, without any knowledge on the syntactic or semantic structures
of a language. These approaches have been proven to be competitive to traditional models.
There are also related works that use character-level features for language processing. These include using character-level n-grams with linear classifiers [15], and incorporating character-level
features to ConvNets [28] [29]. In particular, these ConvNet approaches use words as a basis, in
which character-level features extracted at word [28] or word n-gram [29] level form a distributed
representation. Improvements for part-of-speech tagging and information retrieval were observed.
This article is the first to apply ConvNets only on characters. We show that when trained on largescale datasets, deep ConvNets do not require the knowledge of words, in addition to the conclusion
* An early version of this work entitled "Text Understanding from Scratch" was posted in Feb 2015 as arXiv:1502.01710. The present paper has considerably more experimental results and a rewritten introduction.
from previous research that ConvNets do not require the knowledge about the syntactic or semantic
structure of a language. This simplification of engineering could be crucial for a single system that
can work for different languages, since characters always constitute a necessary construct regardless
of whether segmentation into words is possible. Working on only characters also has the advantage
that abnormal character combinations such as misspellings and emoticons may be naturally learnt.
2 Character-level Convolutional Networks
In this section, we introduce the design of character-level ConvNets for text classification. The design is modular, where the gradients are obtained by back-propagation [27] to perform optimization.
2.1 Key Modules
The main component is the temporal convolutional module, which simply computes a 1-D convolution. Suppose we have a discrete input function g(x) ∈ [1, l] → R and a discrete kernel function f(x) ∈ [1, k] → R. The convolution h(y) ∈ [1, ⌊(l − k + 1)/d⌋] → R between f(x) and g(x) with stride d is defined as
h(y) = \sum_{x=1}^{k} f(x) \cdot g(y \cdot d - x + c),
where c = k − d + 1 is an offset constant. Just as in traditional convolutional networks in vision, the module is parameterized by a set of such kernel functions f_ij(x) (i = 1, 2, ..., m and j = 1, 2, ..., n) which we call weights, on a set of inputs g_i(x) and outputs h_j(y). We call each g_i (or h_j) input (or output) features, and m (or n) input (or output) feature size. The outputs h_j(y) are obtained by a sum over i of the convolutions between g_i(x) and f_ij(x).
One key module that helped us to train deeper models is temporal max-pooling. It is the 1-D version of the max-pooling module used in computer vision [2]. Given a discrete input function g(x) ∈ [1, l] → R, the max-pooling function h(y) ∈ [1, ⌊(l − k + 1)/d⌋] → R of g(x) is defined as
h(y) = \max_{x=1}^{k} g(y \cdot d - x + c),
where c = k − d + 1 is an offset constant. This very pooling module enabled us to train ConvNets deeper than 6 layers, where all others fail. The analysis by [3] might shed some light on this.
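As a concreteness check, both definitions above can be transcribed directly into code. The following NumPy sketch is ours; it keeps the 1-based indexing of the text and is written for clarity rather than speed:

import numpy as np

def temporal_conv(g, f, d=1):
    # h(y) = sum_{x=1..k} f(x) * g(y*d - x + c), with offset c = k - d + 1
    l, k = len(g), len(f)
    c = k - d + 1
    out_len = (l - k + 1) // d
    h = np.zeros(out_len)
    for y in range(1, out_len + 1):          # 1-based indices, as in the text
        h[y - 1] = sum(f[x - 1] * g[y * d - x + c - 1] for x in range(1, k + 1))
    return h

def temporal_max_pool(g, k, d):
    # h(y) = max_{x=1..k} g(y*d - x + c), with the same offset c
    l, c = len(g), k - d + 1
    out_len = (l - k + 1) // d
    return np.array([max(g[y * d - x + c - 1] for x in range(1, k + 1))
                     for y in range(1, out_len + 1)])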
The non-linearity used in our model is the rectifier or thresholding function h(x) = max{0, x},
which makes our convolutional layers similar to rectified linear units (ReLUs) [24]. The algorithm
used is stochastic gradient descent (SGD) with a minibatch of size 128, using momentum [26] [30]
0.9 and initial step size 0.01 which is halved every 3 epoches for 10 times. Each epoch takes a fixed
number of random training samples uniformly sampled across classes. This number will later be
detailed for each dataset sparately. The implementation is done using Torch 7 [4].
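In modern PyTorch terms, this optimization recipe corresponds roughly to the following (the paper's own code is in Torch 7, so this translation is our assumption; `sched.step()` would be called once per epoch for the first 30 epochs to realize the 10 halvings):

import torch

def make_optimizer(model):
    # SGD, minibatch handled by the data loader: momentum 0.9, lr 0.01,
    # halved every 3 epochs (for 10 halvings total)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=3, gamma=0.5)
    return opt, sched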
2.2 Character quantization
Our models accept a sequence of encoded characters as input. The encoding is done by prescribing
an alphabet of size m for the input language, and then quantize each character using 1-of-m encoding
(or "one-hot" encoding). Then, the sequence of characters is transformed to a sequence of such m
sized vectors with fixed length l0 . Any character exceeding length l0 is ignored, and any characters
that are not in the alphabet including blank characters are quantized as all-zero vectors. The character
quantization order is backward so that the latest reading on characters is always placed near the begin
of the output, making it easy for fully connected layers to associate weights with the latest reading.
The alphabet used in all of our models consists of 70 characters, including 26 english letters, 10
digits, 33 other characters and the new line character. The non-space characters are:
abcdefghijklmnopqrstuvwxyz0123456789
-,;.!?:'"/\|_@#$%^&*~`+-=<>()[]{}
Later we also compare with models that use a different alphabet in which we distinguish between
upper-case and lower-case letters.
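For concreteness, here is a minimal sketch of the quantization step (helper names are ours; the backward ordering and the all-zero treatment of out-of-alphabet characters follow the description above):

import numpy as np

# 26 letters + 10 digits + 33 other characters + newline = 70 characters;
# the repeated '-' collapses harmlessly to one index in the lookup table
ALPHABET = ("abcdefghijklmnopqrstuvwxyz0123456789"
            "-,;.!?:'\"/\\|_@#$%^&*~`+-=<>()[]{}\n")
CHAR_TO_IX = {ch: i for i, ch in enumerate(ALPHABET)}

def quantize(text, l0=1014, m=len(ALPHABET)):
    # 1-of-m encode up to l0 characters; order is backward so the latest
    # reading sits near the beginning of the output
    x = np.zeros((m, l0), dtype=np.float32)
    for pos, ch in enumerate(reversed(text[:l0])):
        ix = CHAR_TO_IX.get(ch)    # characters outside the alphabet stay all-zero
        if ix is not None:
            x[ix, pos] = 1.0
    return x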
2.3 Model Design
We designed 2 ConvNets: one large and one small. They are both 9 layers deep with 6 convolutional layers and 3 fully-connected layers. Figure 1 gives an illustration.
[Figure 1: Illustration of our model. Quantized text of fixed feature size and length is fed through stacked convolution and max-pooling layers ("Conv. and Pool. layers"), then fully-connected layers.]
The input have number of features equal to 70 due to our character quantization method, and the
input feature length is 1014. It seems that 1014 characters could already capture most of the texts of
interest. We also insert 2 dropout [10] modules in between the 3 fully-connected layers to regularize.
They have dropout probability of 0.5. Table 1 lists the configurations for convolutional layers, and
table 2 lists the configurations for fully-connected (linear) layers.
Table 1: Convolutional layers used in our experiments. The convolutional layers have stride 1 and
pooling layers are all non-overlapping ones, so we omit the description of their strides.
Layer   Large Feature   Small Feature   Kernel   Pool
1       1024            256             7        3
2       1024            256             7        3
3       1024            256             3        N/A
4       1024            256             3        N/A
5       1024            256             3        N/A
6       1024            256             3        3
We initialize the weights using a Gaussian distribution. The mean and standard deviation used for
initializing the large model are (0, 0.02), and for the small model (0, 0.05).
Table 2: Fully-connected layers used in our experiments. The number of output units for the last
layer is determined by the problem. For example, for a 10-class classification problem it will be 10.
Layer   Output Units Large       Output Units Small
7       2048                     1024
8       2048                     1024
9       Depends on the problem   Depends on the problem
For different problems the input lengths may be different (for example in our case l_0 = 1014), and so are the frame lengths. From our model design, it is easy to know that given input length l_0, the output frame length after the last convolutional layer (but before any of the fully-connected layers) is l_6 = (l_0 − 96)/27. This number multiplied with the frame size at layer 6 will give the input dimension the first fully-connected layer accepts.
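Putting Tables 1 and 2 together, the large model can be re-sketched as follows. The paper's implementation is in Torch 7; this PyTorch rendering, including the exact placement of the ReLU and dropout modules, is our assumption:

import torch.nn as nn

def char_cnn_large(n_classes, m=70, feat=1024, fc=2048, l0=1014):
    conv = lambda c_in, c_out, k: [nn.Conv1d(c_in, c_out, k), nn.ReLU()]
    l6 = (l0 - 96) // 27    # output frame length after layer 6, as derived above
    return nn.Sequential(
        *conv(m, feat, 7), nn.MaxPool1d(3),     # layers 1-2: kernel 7, pool 3
        *conv(feat, feat, 7), nn.MaxPool1d(3),
        *conv(feat, feat, 3),                   # layers 3-6: kernel 3
        *conv(feat, feat, 3),
        *conv(feat, feat, 3),
        *conv(feat, feat, 3), nn.MaxPool1d(3),
        nn.Flatten(),
        nn.Linear(feat * l6, fc), nn.ReLU(), nn.Dropout(0.5),   # layers 7-9
        nn.Linear(fc, fc), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(fc, n_classes),
    )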
2.4 Data Augmentation using Thesaurus
Many researchers have found that appropriate data augmentation techniques are useful for controlling generalization error for deep learning models. These techniques usually work well when we
could find appropriate invariance properties that the model should possess. In terms of texts, it is not
reasonable to augment the data using signal transformations as done in image or speech recognition,
because the exact order of characters may form rigorous syntactic and semantic meaning. Therefore,
the best way to do data augmentation would have been using human rephrases of sentences, but this
is unrealistic and expensive due the large volume of samples in our datasets. As a result, the most
natural choice in data augmentation for us is to replace words or phrases with their synonyms.
We experimented with data augmentation by using an English thesaurus, which is obtained from the mytheas component used in the LibreOffice¹ project. That thesaurus in turn was obtained from WordNet [7], where every synonym to a word or phrase is ranked by the semantic closeness to the most
frequently seen meaning. To decide on how many words to replace, we extract all replaceable words from the given text and randomly choose r of them to be replaced. The probability of number r is determined by a geometric distribution with parameter p in which P[r] ∝ p^r. The index s of the synonym chosen given a word is also determined by another geometric distribution in which P[s] ∝ q^s. This way, the probability of a synonym chosen becomes smaller when it moves distant from the most frequently seen meaning. We will report the results using this new data augmentation technique with p = 0.5 and q = 0.5.
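A hedged sketch of this augmentation procedure follows. Names are ours, and `synonyms(word)` is assumed to return the WordNet synonym list ordered by semantic closeness; note that NumPy's geometric sampler takes a success probability, so P[r] ∝ p^r corresponds to `np.random.geometric(1 - p)`:

import random
import numpy as np

def augment(words, synonyms, p=0.5, q=0.5):
    replaceable = [i for i, w in enumerate(words) if synonyms(w)]
    if not replaceable:
        return list(words)
    r = min(np.random.geometric(1 - p), len(replaceable))  # P[r] ~ p^r
    out = list(words)
    for i in random.sample(replaceable, r):
        syns = synonyms(out[i])
        s = min(np.random.geometric(1 - q), len(syns))     # P[s] ~ q^s
        out[i] = syns[s - 1]    # s-th ranked synonym (1-indexed)
    return out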
3 Comparison Models
To offer fair comparisons to competitive models, we conducted a series of experiments with both traditional and deep learning methods. We tried our best to choose models that can provide comparable
and competitive results, and the results are reported faithfully without any model selection.
3.1 Traditional Methods
We refer to traditional methods as those that use a hand-crafted feature extractor and a linear classifier. The classifier used is a multinomial logistic regression in all these models.
Bag-of-words and its TFIDF. For each dataset, the bag-of-words model is constructed by selecting
50,000 most frequent words from the training subset. For the normal bag-of-words, we use the
counts of each word as the features. For the TFIDF (term-frequency inverse-document-frequency)
[14] version, we use the counts as the term-frequency. The inverse document frequency is the
logarithm of the division between total number of samples and number of samples with the word in
the training subset. The features are normalized by dividing the largest feature value.
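As a sketch of this exact TFIDF variant (50,000-word vocabulary, idf = log(total samples / samples containing the word), normalization by the largest feature value), the following uses scikit-learn's CountVectorizer for counting; everything else, including the names and the dense conversion for clarity, is our illustration:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def tfidf_features(train_texts):
    vec = CountVectorizer(max_features=50000)
    tf = vec.fit_transform(train_texts).toarray().astype(float)  # term counts
    n = tf.shape[0]
    df = np.maximum((tf > 0).sum(axis=0), 1)   # documents containing each word
    x = tf * np.log(n / df)                    # tf * inverse document frequency
    return x / max(x.max(), 1.0)               # divide by the largest feature value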
Bag-of-ngrams and its TFIDF. The bag-of-ngrams models are constructed by selecting the 500,000
most frequent n-grams (up to 5-grams) from the training subset for each dataset. The feature values
are computed the same way as in the bag-of-words model.
Bag-of-means on word embedding. We also have an experimental model that uses k-means on
word2vec [23] learnt from the training subset of each dataset, and then use these learnt means as
representatives of the clustered words. We take into consideration all the words that appeared more
than 5 times in the training subset. The dimension of the embedding is 300. The bag-of-means
features are computed the same way as in the bag-of-words model. The number of means is 5000.
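A possible sketch of this baseline, assuming `word_vecs` maps every word seen more than 5 times to its 300-dimensional word2vec vector (the names and the scikit-learn k-means choice are ours):

import numpy as np
from sklearn.cluster import KMeans

def fit_means(word_vecs, n_means=5000):
    # cluster the word embeddings into 5000 means
    return KMeans(n_clusters=n_means).fit(np.stack(list(word_vecs.values())))

def bag_of_means(doc_words, word_vecs, km):
    vecs = [word_vecs[w] for w in doc_words if w in word_vecs]
    if not vecs:
        return np.zeros(km.n_clusters)
    counts = np.bincount(km.predict(np.stack(vecs)), minlength=km.n_clusters)
    return counts / max(counts.max(), 1)   # same normalization as bag-of-words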
3.2 Deep Learning Methods
Recently deep learning methods have started to be applied to text classification. We choose two
simple and representative models for comparison, in which one is word-based ConvNet and the
other a simple long-short term memory (LSTM) [11] recurrent neural network model.
Word-based ConvNets. Among the large number of recent works on word-based ConvNets for
text classification, one of the differences is the choice of using pretrained or end-to-end learned word
representations. We offer comparisons with both using the pretrained word2vec [23] embedding [16]
and using lookup tables [5]. The embedding size is 300 in both cases, in the same way as our bagof-means model. To ensure fair comparison, the models for each case are of the same size as
our character-level ConvNets, in terms of both the number of layers and each layer?s output size.
Experiments using a thesaurus for data augmentation are also conducted.
¹ http://www.libreoffice.org/
Long-short term memory. We also offer a comparison with a recurrent neural network model, namely long-short term memory (LSTM) [11]. The LSTM model used in our case is word-based, using pretrained word2vec embedding of size 300 as in previous models. The model is formed by taking the mean of the outputs of all LSTM cells to form a feature vector, and then using multinomial logistic regression on this feature vector. The output dimension is 512. The variant of LSTM we used is the common "vanilla" architecture [8] [9]. We also used gradient clipping [25] in which the gradient norm is limited to 5. Figure 2 gives an illustration.
[Figure 2: long-short term memory. The outputs of all LSTM cells are averaged into a single feature vector.]
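This comparison model is simple enough to re-state in code; the following PyTorch sketch is our rendering of the description above (implementation details beyond the stated hyper-parameters are not specified in the text):

import torch.nn as nn

class MeanLSTM(nn.Module):
    def __init__(self, n_classes, emb_dim=300, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)   # multinomial logistic regression

    def forward(self, x):        # x: (batch, time, 300) pretrained word2vec inputs
        h, _ = self.lstm(x)      # h: (batch, time, 512), outputs of all LSTM cells
        return self.out(h.mean(dim=1))   # mean over time, then classify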
3.3 Choice of Alphabet
For the alphabet of English, one apparent choice is whether to distinguish between upper-case and
lower-case letters. We report experiments on this choice and observed that it usually (but not always)
gives worse results when such distinction is made. One possible explanation might be that semantics
do not change with different letter cases; therefore, there is a benefit of regularization.
4 Large-scale Datasets and Results
Previous research on ConvNets in different areas has shown that they usually work well with largescale datasets, especially when the model takes in low-level raw features like characters in our
case. However, most open datasets for text classification are quite small, and large-scale datasets are
split with a significantly smaller training set than testing [21]. Therefore, instead of confusing our
community more by using them, we built several large-scale datasets for our experiments, ranging
from hundreds of thousands to several millions of samples. Table 3 is a summary.
Table 3: Statistics of our large-scale datasets. Epoch size is the number of minibatches in one epoch
Dataset                  Classes   Train Samples   Test Samples   Epoch Size
AG's News                4         120,000         7,600          5,000
Sogou News               5         450,000         60,000         5,000
DBPedia                  14        560,000         70,000         5,000
Yelp Review Polarity     2         560,000         38,000         5,000
Yelp Review Full         5         650,000         50,000         5,000
Yahoo! Answers           10        1,400,000       60,000         10,000
Amazon Review Full       5         3,000,000       650,000        30,000
Amazon Review Polarity   2         3,600,000       400,000        30,000
AG's news corpus. We obtained the AG's corpus of news articles on the web². It contains 496,835
categorized news articles from more than 2000 news sources. We choose the 4 largest classes from
this corpus to construct our dataset, using only the title and description fields. The number of training
samples for each class is 30,000 and testing 1900.
Sogou news corpus. This dataset is a combination of the SogouCA and SogouCS news corpora [32],
containing in total 2,909,551 news articles in various topic channels. We then labeled each piece
of news using its URL, by manually classifying the their domain names. This gives us a large
corpus of news articles labeled with their categories. There are a large number categories but most
of them contain only few articles. We choose 5 categories ? ?sports?, ?finance?, ?entertainment?,
?automobile? and ?technology?. The number of training samples selected for each class is 90,000
and testing 12,000. Although this is a dataset in Chinese, we used pypinyin package combined
with the jieba Chinese segmentation system to produce Pinyin, a phonetic romanization of Chinese.
The models for English can then be applied to this dataset without change. The fields used are title
and content.
² http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
Table 4: Testing errors of all the models. Numbers are in percentage. "Lg" stands for "large" and "Sm" stands for "small". "w2v" is an abbreviation for "word2vec", and "Lk" for "lookup table". "Th" stands for thesaurus. ConvNets labeled "Full" are those that distinguish between lower and upper letters.
Model                 AG      Sogou   DBP.   Yelp P.  Yelp F.  Yah. A.  Amz. F.  Amz. P.
BoW                   11.19    7.15   3.39    7.76    42.01    31.11    45.36     9.60
BoW TFIDF             10.36    6.55   2.63    6.34    40.14    28.96    44.74     9.00
ngrams                 7.96    2.92   1.37    4.36    43.74    31.53    45.73     7.98
ngrams TFIDF           7.64    2.81   1.31    4.56    45.20    31.49    47.56     8.46
Bag-of-means          16.91   10.79   9.55   12.67    47.46    39.45    55.87    18.39
LSTM                  13.94    4.82   1.45    5.26    41.83    29.16    40.57     6.10
Lg. w2v Conv.          9.92    4.39   1.42    4.60    40.16    31.97    44.40     5.88
Sm. w2v Conv.         11.35    4.54   1.71    5.56    42.13    31.50    42.59     6.00
Lg. w2v Conv. Th.      9.91    -      1.37    4.63    39.58    31.23    43.75     5.80
Sm. w2v Conv. Th.     10.88    -      1.53    5.36    41.09    29.86    42.50     5.63
Lg. Lk. Conv.          8.55    4.95   1.72    4.89    40.52    29.06    45.95     5.84
Sm. Lk. Conv.         10.87    4.93   1.85    5.54    41.41    30.02    43.66     5.85
Lg. Lk. Conv. Th.      8.93    -      1.58    5.03    40.52    28.84    42.39     5.52
Sm. Lk. Conv. Th.      9.12    -      1.77    5.37    41.17    28.92    43.19     5.51
Lg. Full Conv.         9.85    8.80   1.66    5.25    38.40    29.90    40.89     5.78
Sm. Full Conv.        11.59    8.95   1.89    5.67    38.82    30.01    40.88     5.78
Lg. Full Conv. Th.     9.51    -      1.55    4.88    38.04    29.58    40.54     5.51
Sm. Full Conv. Th.    10.89    -      1.69    5.42    37.95    29.90    40.53     5.66
Lg. Conv.             12.82    4.88   1.73    5.89    39.62    29.55    41.31     5.51
Sm. Conv.             15.65    8.65   1.98    6.53    40.84    29.84    40.53     5.50
Lg. Conv. Th.         13.39    -      1.60    5.82    39.30    28.80    40.45     4.93
Sm. Conv. Th.         14.80    -      1.85    6.49    40.16    29.84    40.43     5.67
DBPedia ontology dataset. DBpedia is a crowd-sourced community effort to extract structured
information from Wikipedia [19]. The DBpedia ontology dataset is constructed by picking 14 nonoverlapping classes from DBpedia 2014. From each of these 14 ontology classes, we randomly
choose 40,000 training samples and 5,000 testing samples. The fields we used for this dataset
contain title and abstract of each Wikipedia article.
Yelp reviews. The Yelp reviews dataset is obtained from the Yelp Dataset Challenge in 2015. This
dataset contains 1,569,264 samples that have review texts. Two classification tasks are constructed
from this dataset: one predicting the full number of stars the user has given, and the other predicting a polarity label by considering stars 1 and 2 negative, and 3 and 4 positive. The full dataset
has 130,000 training samples and 10,000 testing samples in each star, and the polarity dataset has
280,000 training samples and 19,000 test samples in each polarity.
Yahoo! Answers dataset. We obtained Yahoo! Answers Comprehensive Questions and Answers
version 1.0 dataset through the Yahoo! Webscope program. The corpus contains 4,483,032 questions
and their answers. We constructed a topic classification dataset from this corpus using 10 largest
main categories. Each class contains 140,000 training samples and 5,000 testing samples. The fields
we used include question title, question content and best answer.
Amazon reviews. We obtained an Amazon review dataset from the Stanford Network Analysis
Project (SNAP), which spans 18 years with 34,686,770 reviews from 6,643,669 users on 2,441,053
products [22]. Similarly to the Yelp review dataset, we also constructed 2 datasets ? one full score
prediction and another polarity prediction. The full dataset contains 600,000 training samples and
130,000 testing samples in each class, whereas the polarity dataset contains 1,800,000 training samples and 200,000 testing samples in each polarity sentiment. The fields used are review title and
review content.
Table 4 lists all the testing errors we obtained from these datasets for all the applicable models. Note
that since we do not have a Chinese thesaurus, the Sogou News dataset does not have any results
using thesaurus augmentation. We labeled the best result in blue and worse result in red.
5 Discussion
[Figure 3: Relative errors with comparison models. Panels: (a) Bag-of-means, (b) n-grams TFIDF, (c) LSTM, (d) word2vec ConvNet, (e) Lookup table ConvNet, (f) Full alphabet ConvNet; each shows relative error (in percent) on AG News, DBPedia, Yelp P., Yelp F., Yahoo A., Amazon F. and Amazon P.]
To understand the results in table 4 further, we offer some empirical analysis in this section. To
facilitate our analysis, we present the relative errors in figure 3 with respect to comparison models.
Each of these plots is computed by taking the difference between the error of the comparison model and that of our character-level ConvNet model, then dividing by the comparison model error. All ConvNets in the figure are the respective large models with thesaurus augmentation.
Character-level ConvNet is an effective method. The most important conclusion from our experiments is that character-level ConvNets could work for text classification without the need for words.
This is a strong indication that language could also be thought of as a signal no different from
any other kind. Figure 4 shows 12 random first-layer patches learnt by one of our character-level
ConvNets for DBPedia dataset.
Figure 4: First layer weights. For each patch, height is the kernel size and width the alphabet size
Dataset size forms a dichotomy between traditional and ConvNets models. The most obvious
trend coming from all the plots in figure 3 is that larger datasets tend to perform better. Traditional methods like n-grams TFIDF remain strong candidates for datasets of size up to several hundreds of thousands, and only when the dataset goes to the scale of several millions do we observe that character-level ConvNets start to do better.
ConvNets may work well for user-generated data. User-generated data vary in the degree of how
well the texts are curated. For example, in our million scale datasets, Amazon reviews tend to be
raw user-inputs, whereas users might be extra careful in their writings on Yahoo! Answers. Plots
comparing word-based deep models (figures 3c, 3d and 3e) show that character-level ConvNets work
better for less curated user-generated texts. This property suggests that ConvNets may have better
applicability to real-world scenarios. However, further analysis is needed to validate the hypothesis
that ConvNets are truly good at identifying exotic character combinations such as misspellings and
emoticons, as our experiments alone do not show any explicit evidence.
Choice of alphabet makes a difference. Figure 3f shows that changing the alphabet by distinguishing between uppercase and lowercase letters could make a difference. For million-scale datasets, it
seems that not making such distinction usually works better. One possible explanation is that there
is a regularization effect, but this is to be validated.
Semantics of tasks may not matter. Our datasets consist of two kinds of tasks: sentiment analysis
(Yelp and Amazon reviews) and topic classification (all others). This dichotomy in task semantics
does not seem to play a role in deciding which method is better.
Bag-of-means is a misuse of word2vec [20]. One of the most obvious facts one could observe
from table 4 and figure 3a is that the bag-of-means model performs worse in every case. Comparing
with traditional models, this suggests such a simple use of a distributed word representation may not
give us an advantage for text classification. However, our experiments do not speak for any other language processing tasks or uses of word2vec in any other way.
There is no free lunch. Our experiments once again verify that there is no single machine learning model that can work for all kinds of datasets. The factors discussed in this section could all
play a role in deciding which method is the best for some specific application.
6 Conclusion and Outlook
This article offers an empirical study on character-level convolutional networks for text classification. We compared with a large number of traditional and deep learning models using several largescale datasets. On one hand, analysis shows that character-level ConvNet is an effective method.
On the other hand, how well our model performs in comparisons depends on many factors, such as
dataset size, whether the texts are curated and choice of alphabet.
In the future, we hope to apply character-level ConvNets for a broader range of language processing
tasks especially when structured outputs are needed.
Acknowledgement
We gratefully acknowledge the support of NVIDIA Corporation with the donation of 2 Tesla K40
GPUs used for this research. We gratefully acknowledge the support of Amazon.com Inc for an
AWS in Education Research grant used for this research.
References
[1] L. Bottou, F. Fogelman Soulié, P. Blanchet, and J. Lienard. Experiments with time delay networks and dynamic time warping for speaker independent isolated digit recognition. In Proceedings of EuroSpeech 89, volume 2, pages 537–540, Paris, France, 1989.
[2] Y.-L. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning mid-level features for recognition. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2559–2566. IEEE, 2010.
[3] Y.-L. Boureau, J. Ponce, and Y. LeCun. A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 111–118, 2010.
[4] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, Nov. 2011.
[6] C. dos Santos and M. Gatti. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 69–78, Dublin, Ireland, August 2014. Dublin City University and Association for Computational Linguistics.
[7] C. Fellbaum. Wordnet and wordnets. In K. Brown, editor, Encyclopedia of Language and Linguistics, pages 665–670, Oxford, 2005. Elsevier.
[8] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
[9] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A search space odyssey. CoRR, abs/1503.04069, 2015.
[10] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, Nov. 1997.
[12] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, pages 137–142. Springer-Verlag, 1998.
[13] R. Johnson and T. Zhang. Effective use of word order for text categorization with convolutional neural networks. CoRR, abs/1412.1058, 2014.
[14] K. S. Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21, 1972.
[15] I. Kanaris, K. Kanaris, I. Houvardas, and E. Stamatatos. Words versus character n-grams for anti-spam filtering. International Journal on Artificial Intelligence Tools, 16(06):1047–1067, 2007.
[16] Y. Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar, October 2014. Association for Computational Linguistics.
[17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, Winter 1989.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[19] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer, and C. Bizer. DBpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal, 2014.
[20] G. Lev, B. Klein, and L. Wolf. In defense of word embedding for generic text representation. In C. Biemann, S. Handschuh, A. Freitas, F. Meziane, and E. Métais, editors, Natural Language Processing and Information Systems, volume 9103 of Lecture Notes in Computer Science, pages 35–50. Springer International Publishing, 2015.
[21] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397, 2004.
[22] J. McAuley and J. Leskovec. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys '13, pages 165–172, New York, NY, USA, 2013. ACM.
[23] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. 2013.
[24] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
[25] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML 2013, volume 28 of JMLR Proceedings, pages 1310–1318. JMLR.org, 2013.
[26] B. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
[27] D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[28] C. D. Santos and B. Zadrozny. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1818–1826, 2014.
[29] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 101–110. ACM, 2014.
[30] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In S. Dasgupta and D. McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139–1147. JMLR Workshop and Conference Proceedings, May 2013.
[31] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang. Phoneme recognition using time-delay neural networks. Acoustics, Speech and Signal Processing, IEEE Transactions on, 37(3):328–339, 1989.
[32] C. Wang, M. Zhang, S. Ma, and L. Ru. Automatic online news issue construction in web environment. In Proceedings of the 17th International Conference on World Wide Web, WWW '08, pages 457–466, New York, NY, USA, 2008. ACM.
| 5782 |@word version:4 seems:2 norm:1 open:1 tried:1 sgd:1 outlook:1 mcauley:1 initial:1 configuration:2 series:1 contains:6 selecting:2 score:1 qatar:1 document:4 freitas:1 blank:1 comparing:2 com:1 lang:1 distant:1 treating:1 designed:1 plot:3 alone:1 intelligence:1 selected:1 short:6 quantized:1 pascanu:1 org:2 zhang:3 height:1 mathematical:2 constructed:7 framewise:1 junbo:2 consists:1 introduce:1 tagging:2 kuksa:1 ontology:3 frequently:2 salakhutdinov:1 considering:1 conv:17 begin:1 project:2 linearity:1 becomes:1 exotic:1 santos:2 kind:4 ag:5 transformation:1 corporation:1 temporal:3 every:3 shed:1 finance:1 classifier:4 unit:5 grant:1 omit:1 before:1 positive:1 engineering:1 yelp:11 encoding:3 mach:1 oxford:1 lev:1 might:3 initialization:1 misspelling:2 suggests:2 co:1 limited:1 range:2 lecun:5 testing:10 backpropagation:1 digit:2 area:1 empirical:4 significantly:1 thought:1 word:38 specificity:1 selection:1 applying:2 writing:1 www:3 misuse:1 dean:1 marten:1 go:2 regardless:1 latest:2 williams:1 shen:1 amazon:9 identifying:1 regularize:1 enabled:1 classic:1 embedding:6 construction:1 controlling:1 suppose:1 dbpedia:8 exact:1 user:7 play:2 us:1 designing:1 hypothesis:1 distinguishing:1 speak:1 associate:1 trend:1 documentation:1 recognition:9 expensive:1 rumelhart:1 curated:3 unipi:1 labeled:4 observed:2 role:2 module:7 preprint:1 initializing:1 capture:1 wang:1 thousand:2 connected:8 news:14 k40:1 mesnil:1 rose:1 environment:2 dynamic:1 trained:1 hintont:1 division:1 basis:1 various:1 alphabet:11 train:3 souli:1 effective:3 artificial:1 dichotomy:2 choosing:1 crowd:1 sourced:1 apparent:1 modular:1 encoded:1 quite:1 stanford:1 snap:1 larger:1 cvpr:1 ability:1 statistic:2 gi:3 syntactic:3 hanazawa:1 online:1 advantage:2 sequence:3 indication:1 product:1 coming:1 adaptation:1 frequent:2 relevant:1 date:1 bow:2 achieve:1 description:2 validate:1 sutskever:3 convergence:1 produce:1 categorization:3 donation:1 recurrent:4 propagating:1 strong:2 dividing:1 c:1 fij:2 stochastic:1 exploration:1 human:1 mcallester:1 education:1 require:3 odyssey:1 assign:1 generalization:1 clustered:1 emoticon:2 tfidf:8 koutn:1 insert:1 normal:1 deciding:2 vary:1 early:2 applicable:1 bag:14 label:1 jackel:1 title:5 hubbard:1 largest:3 faithfully:1 city:1 tool:1 hope:1 biglearn:1 always:3 gaussian:1 hj:3 broader:1 l0:5 validated:1 ponce:2 improvement:1 joachim:1 rigorous:1 kim:1 elsevier:1 lowercase:1 epfl:1 prescribing:1 torch:1 accept:1 hidden:2 transformed:1 france:1 semantics:3 fogelman:1 issue:1 classification:20 among:1 html:1 augment:1 yahoo:6 ussr:1 art:1 initialize:1 equal:1 construct:2 field:5 once:1 manually:1 jones:1 icml:5 future:1 others:3 report:2 few:1 randomly:2 winter:1 comprehensive:1 replaced:1 ab:2 interest:1 henderson:1 truly:1 light:1 uppercase:1 word2vec:7 predefined:1 necessary:1 logarithm:1 re:1 isolated:1 theoretical:1 leskovec:1 dublin:2 gatti:1 phrase:3 clipping:1 applicability:1 deviation:1 subset:5 hundred:2 delay:3 krizhevsky:1 conducted:2 johnson:1 eurospeech:1 reported:1 answer:7 learnt:4 considerably:1 combined:1 st:1 lstm:11 international:9 physic:1 pool:2 picking:1 again:1 augmentation:9 management:1 containing:1 choose:6 emnlp:1 worse:3 conf:1 zhao:2 li:1 lookup:3 stride:3 nonoverlapping:1 star:3 matter:1 inc:1 depends:2 collobert:2 piece:1 later:2 helped:1 red:1 competitive:4 relus:1 start:1 formed:1 convolutional:20 phoneme:2 raw:4 handwritten:1 kavukcuoglu:2 researcher:2 rectified:2 detector:1 splitted:1 farabet:1 against:1 web2:1 frequency:4 obvious:2 naturally:1 
di:1 sampled:1 dataset:31 jentzsch:1 exemplify:1 knowledge:5 segmentation:2 auer:1 back:2 fellbaum:1 bidirectional:1 courant:1 day:1 done:3 just:1 convnets:29 until:1 hand:4 working:1 web:3 overlapping:1 propagation:1 minibatch:1 logistic:2 name:1 facilitate:1 effect:1 normalized:1 contain:2 brown:1 usa:2 regularization:2 semantic:6 width:1 speaker:1 performs:2 greff:1 ranging:2 image:1 meaning:3 consideration:1 recently:1 common:1 wikipedia:3 multinomial:2 volume:5 million:4 discussed:1 association:2 interpretation:1 he:1 refer:1 rd:1 vanilla:1 doha:1 mathematics:1 similarly:1 automatic:1 language:14 gratefully:2 feb:1 base:1 halved:1 w2v:5 recent:1 scenario:1 phonetic:1 nvidia:1 schmidhuber:3 verlag:1 entitled:1 seen:2 floor:1 zip:1 deng:1 corrado:1 signal:5 full:12 karlen:1 technical:1 offer:6 long:5 retrieval:3 bach:1 divided:1 prediction:2 variant:2 regression:2 vision:4 essentially:1 arxiv:3 iteration:1 kernel:4 hochreiter:1 cell:1 addition:1 whereas:2 aws:1 source:1 crucial:1 extra:1 posse:1 webscope:1 pooling:8 tend:2 seem:1 call:2 extracting:1 near:1 yang:1 bengio:2 easy:2 architecture:2 polyak:1 haffner:1 whether:3 defense:1 url:1 torch7:1 effort:1 sentiment:3 speech:5 york:4 constitute:1 matlab:1 deep:13 ignored:1 useful:2 detailed:1 mid:1 encyclopedia:1 category:5 http:2 percentage:1 bagof:1 klein:1 blue:1 discrete:4 dasgupta:1 key:2 changing:1 dahl:1 backward:1 sum:1 year:1 inverse:2 parameterized:1 letter:6 package:1 lehmann:1 almost:2 reasonable:1 decide:1 yann:2 patch:2 thesaurus:8 confusing:1 comparable:1 dropout:2 layer:25 abnormal:1 simplification:1 distinguish:3 span:1 rcv1:1 mikolov:2 gpus:1 structured:2 waibel:1 combination:4 across:1 smaller:2 remain:1 character:44 lunch:1 making:2 restricted:1 pr:1 turn:1 count:2 fail:1 needed:2 know:2 end:2 rewritten:1 multiplied:1 apply:2 observe:2 denker:1 appropriate:2 generic:1 weinberger:1 dbp:1 include:2 ensure:1 entertainment:1 linguistics:4 publishing:1 l6:1 ghahramani:1 build:1 especially:2 chinese:4 warping:1 move:1 already:1 question:4 traditional:11 gradient:5 ireland:1 convnet:8 recsys:1 topic:5 ru:1 length:8 code:1 index:1 polarity:8 illustration:3 lg:9 october:1 broadway:1 negative:1 design:4 implementation:1 boltzmann:1 perform:3 upper:3 recommender:1 convolution:4 datasets:18 sm:9 benchmark:1 acknowledge:2 howard:1 descent:1 anti:1 november:1 zadrozny:1 hinton:4 dc:2 frame:3 jakob:1 august:1 community:2 rating:1 compositionality:1 namely:1 paris:1 extensive:1 sentence:2 acoustic:1 accepts:1 learned:1 bedding:1 distinction:2 boser:1 nip:1 usually:6 pattern:1 appeared:1 reading:2 challenge:1 program:1 built:1 max:6 including:2 memory:5 explanation:2 hot:1 unrealistic:1 natural:6 ranked:1 difficulty:1 predicting:2 largescale:4 improve:1 historically:1 technology:1 lk:5 started:1 extract:2 speeding:1 text:31 epoch:5 literature:1 understanding:2 geometric:2 review:16 amz:2 acknowledgement:1 xiang:2 relative:2 graf:1 fully:8 lecture:1 filtering:1 proven:1 versus:1 degree:1 offered:2 article:11 thresholding:1 blanchet:1 editor:4 classifying:1 summary:1 placed:1 last:2 free:2 english:4 understand:2 deeper:2 institute:1 burges:1 wide:1 taking:2 distributed:4 benefit:1 van:1 dimension:4 gram:9 stand:3 world:2 computes:1 preventing:1 made:1 collection:1 spam:1 welling:1 transaction:1 nov:2 multilingual:1 corpus:8 hellmann:1 shikano:1 search:1 latent:1 ngrams:4 table:13 channel:1 learn:1 nature:1 steunebrink:1 improving:1 quantize:1 automobile:1 bottou:4 posted:1 european:1 domain:1 main:2 synonym:4 verifies:1 fair:2 tesla:1 
categorized:1 crafted:1 representative:2 ny:3 momentum:2 exceeding:1 explicit:1 comput:1 candidate:1 jmlr:3 extractor:1 coling:1 biemann:1 rectifier:1 specific:1 nyu:1 explored:1 offset:2 list:3 experimented:1 closeness:1 evidence:1 incorporating:1 consist:1 quantization:4 workshop:2 sequential:1 corr:2 importance:1 boureau:2 chen:1 simply:1 explore:1 gao:1 visual:1 ordered:1 sport:1 pretrained:3 srivastava:2 springer:2 wolf:1 lewis:1 extracted:2 minibatches:1 acm:5 weston:1 abbreviation:1 nair:1 sized:1 ma:1 careful:1 replace:2 content:3 change:2 determined:3 uniformly:1 wordnet:3 total:2 invariance:1 experimental:2 support:2 scratch:2 mendes:1 |
5,283 | 5,783 | Winner-Take-All Autoencoders
Alireza Makhzani, Brendan Frey
University of Toronto
makhzani, frey@psi.toronto.edu
Abstract
In this paper, we propose a winner-take-all method for learning hierarchical sparse
representations in an unsupervised fashion. We first introduce fully-connected
winner-take-all autoencoders which use mini-batch statistics to directly enforce a
lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional
architectures and autoencoders for learning shift-invariant sparse representations.
We describe a way to train convolutional autoencoders layer by layer, where in
addition to lifetime sparsity, a spatial sparsity within each feature map is achieved
using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to to learn deep sparse representations from the MNIST,
CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets,
and achieve competitive classification performance.
1 Introduction
Recently, supervised learning has been developed and used successfully to produce representations
that have enabled leaps forward in classification accuracy for several tasks [1]. However, the question that has remained unanswered is whether it is possible to learn as "powerful" representations
from unlabeled data without any supervision. It is still widely recognized that unsupervised learning
algorithms that can extract useful features are needed for solving problems with limited label information. In this work, we exploit sparsity as a generic prior on the representations for unsupervised
feature learning. We first introduce the fully-connected winner-take-all autoencoders that learn to
do sparse coding by directly enforcing a winner-take-all lifetime sparsity constraint. We then introduce convolutional winner-take-all autoencoders that learn to do shift-invariant/convolutional sparse
coding by directly enforcing winner-take-all spatial and lifetime sparsity constraints.
2 Fully-Connected Winner-Take-All Autoencoders
Training sparse autoencoders has been well studied in the literature. For example, in [2], a "lifetime sparsity" penalty function proportional to the KL divergence between the hidden unit marginals (ρ̂) and the target sparsity probability (ρ) is added to the cost function: λKL(ρ‖ρ̂). A major drawback of this approach is that it only works for certain target sparsities and it is often very difficult to find the right λ parameter that results in a properly trained sparse autoencoder. Also, the KL divergence was originally proposed for sigmoidal autoencoders, and it is not clear how it can be applied to ReLU autoencoders, where ρ̂ could be larger than one (in which case the KL divergence cannot be evaluated). In this paper, we propose Fully-Connected Winner-Take-All (FC-WTA) autoencoders to
address these concerns. FC-WTA autoencoders can aim for any target sparsity rate, train very fast
(marginally slower than a standard autoencoder), have no hyper-parameter to be tuned (except the
target sparsity rate) and efficiently train all the dictionary atoms even when very aggressive sparsity
rates (e.g., 1%) are enforced.
(a) MNIST, 10%
(b) MNIST, 5%
(c) MNIST, 2%
Figure 1: Learnt dictionary (decoder) of FC-WTA with 1000 hidden units trained on MNIST
Sparse coding algorithms typically comprise two steps: a highly non-linear sparse encoding operation that finds the "right" atoms in the dictionary, and a linear decoding stage that reconstructs
the input with the selected atoms and update the dictionary. The FC-WTA autoencoder is a nonsymmetric autoencoder where the encoding stage is typically a stack of several ReLU layers and
the decoder is just a linear layer. In the feedforward phase, after computing the hidden codes of
the last layer of the encoder, rather than reconstructing the input from all of the hidden units, for
each hidden unit, we impose a lifetime sparsity by keeping the k percent largest activations of that hidden unit across the mini-batch samples and setting the rest of the activations of that hidden unit to zero. In the backpropagation phase, we only backpropagate the error through the k percent non-zero activations. In other words, we are using the mini-batch statistics to approximate the statistics of the activation of a particular hidden unit across all the samples, and finding a hard threshold value for which we can achieve a k% lifetime sparsity rate. In this setting, the highly nonlinear encoder of
the network (ReLUs followed by top-k sparsity) learns to do sparse encoding, and the decoder of
the network reconstructs the input linearly. At test time, we turn off the sparsity constraint and the
output of the deep ReLU network will be the final representation of the input. In order to train a
stacked FC-WTA autoencoder, we fix the weights and train another FC-WTA autoencoder on top of
the fixed representation of the previous network.
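To make the mini-batch gating just described concrete, here is a minimal NumPy sketch (ours, not the authors' code; the function name and the `rate` argument are our own) of the FC-WTA forward pass for one encoder output:

```python
import numpy as np

def fc_wta_lifetime(h, rate=0.05):
    """Winner-take-all lifetime sparsity over a mini-batch.

    h:    (batch, units) ReLU activations of the last encoder layer.
    rate: fraction of mini-batch samples allowed to stay active per unit
          (the paper's k%).
    """
    batch = h.shape[0]
    k = max(1, int(round(rate * batch)))
    # per-unit threshold: the k-th largest activation across the batch
    thresh = np.sort(h, axis=0)[-k, :]
    mask = (h >= thresh).astype(h.dtype)  # ties may keep slightly more than k
    return h * mask, mask                 # reuse mask to gate the gradient
```

During backpropagation, only the masked activations receive error, which is exactly the hard gating described above; at test time the mask is simply not applied.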
The learnt dictionaries of FC-WTA autoencoders trained on MNIST, CIFAR-10 and the Toronto Face dataset are visualized in Fig. 1 and Fig. 2. For large sparsity levels, the algorithm tends to learn
very local features that are too primitive to be used for classification (Fig. 1a). As we decrease
the sparsity level, the network learns more useful features (longer digit strokes) and achieves better
classification (Fig. 1b). Nevertheless, forcing too much sparsity results in features that are too global
and do not factor the input into parts (Fig. 1c). Section 4.1 reports the classification results.
Winner-Take-All RBMs. Besides autoencoders, WTA activations can also be used in Restricted
Boltzmann Machines (RBM) to learn sparse representations. Suppose h and v denote the hidden and
visible units of RBMs. For training WTA-RBMs, in the positive phase of the contrastive divergence, instead of sampling from P(h_i|v), we first keep the k% largest P(h_i|v) for each h_i across the mini-batch dimension and set the rest of the P(h_i|v) values to zero, and then sample h_i according to the sparsified P(h_i|v). Filters of a WTA-RBM trained on MNIST are visualized in Fig. 3. We
can see WTA-RBMs learn longer digit strokes on MNIST, which as will be shown in Section 4.1,
improves the classification rate. Note that the sparsity rate of WTA-RBMs (e.g., 30%) should not be
as aggressive as WTA autoencoders (e.g., 5%), since RBMs are already being regularized by having
binary hidden states.
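A sketch of the sparsified positive phase (again ours, not the authors' code; `rate=0.3` mirrors the 30% lifetime sparsity suggested above) could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def wta_rbm_positive_hidden(p_h_given_v, rate=0.3):
    """Sparsify P(h_i|v) across the mini-batch, then sample h.

    p_h_given_v: (batch, hidden) Bernoulli means P(h_i = 1 | v).
    """
    batch = p_h_given_v.shape[0]
    k = max(1, int(round(rate * batch)))
    thresh = np.sort(p_h_given_v, axis=0)[-k, :]     # per-unit cut-off
    p_sparse = np.where(p_h_given_v >= thresh, p_h_given_v, 0.0)
    return (rng.random(p_sparse.shape) < p_sparse).astype(np.float64)
```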
(a) Toronto Face Dataset (48 × 48)
(b) CIFAR-10 Patches (11 × 11)
Figure 2: Dictionaries (decoder) of FC-WTA autoencoder with 256 hidden units and sparsity of 5%
(a) Standard RBM
(b) WTA-RBM (sparsity of 30%)
Figure 3: Features learned on MNIST by 256 hidden unit RBMs.
3 Convolutional Winner-Take-All Autoencoders
There are several problems with applying conventional sparse coding methods on large images.
First, it is not practical to directly apply a fully-connected sparse coding algorithm on high-resolution
(e.g., 256 × 256) images. Second, even if we could do that, we would learn a very redundant dictionary whose atoms are just shifted copies of each other. For example, in Fig. 2a, the FC-WTA autoencoder has allocated different filters for the same patterns (i.e., mouths/noses/glasses/face
borders) occurring at different locations. One way to address this problem is to extract random image
patches from input images and then train an unsupervised learning algorithm on these patches in
isolation [3]. Once training is complete, the filters can be used in a convolutional fashion to obtain
representations of images. As discussed in [3, 4], the main problem with this approach is that if the
receptive field is small, this method will not capture relevant features (imagine the extreme of 1 × 1
patches). Increasing the receptive field size is problematic, because then a very large number of
features are needed to account for all the position-specific variations within the receptive field. For
example, we see that in Fig. 2b, the FC-WTA autoencoder allocates different filters to represent the
same horizontal edge appearing at different locations within the receptive field. As a result, the learnt
features are essentially shifted versions of each other, which results in redundancy between filters.
Unsupervised methods that make use of convolutional architectures can be used to address this
problem, including convolutional RBMs [5], convolutional DBNs [6, 5], deconvolutional networks
[7] and convolutional predictive sparse decomposition (PSD) [4, 8]. These methods learn features
from the entire image in a convolutional fashion. In this setting, the filters can focus on learning the
shapes (i.e., "what"), because the location information (i.e., "where") is encoded into feature maps
and thus the redundancy among the filters is reduced.
In this section, we propose Convolutional Winner-Take-All (CONV-WTA) autoencoders that learn
to do shift-invariant/convolutional sparse coding by directly enforcing winner-take-all spatial and
lifetime sparsity constraints. Our work is similar in spirit to deconvolutional networks [7] and convolutional PSD [4, 8]: whereas the approach in that work is to break apart the recognition pathway and the data generation pathway and learn them so that they remain consistent, we describe a technique for directly learning a sparse convolutional autoencoder.
A shallow convolutional autoencoder maps an input vector to a set of feature maps in a convolutional fashion. We assume that the boundaries of the input image are zero-padded, so that each
feature map has the same size as the input. The hidden representation is then mapped linearly to the
output using a deconvolution operation (Appendix A.1). The parameters are optimized to minimize
the mean square error. A non-regularized convolutional autoencoder learns useless delta function
filters that copy the input image to the feature maps and copy back the feature maps to the output.
Interestingly, we have observed that even in the presence of denoising[9]/dropout[10] regularizations, convolutional autoencoders still learn useless delta functions. Fig. 4a depicts the filters of a
convolutional autoencoder with 16 maps, 20% input and 50% hidden unit dropout trained on Street
View House Numbers dataset [11]. We see that the 16 learnt delta functions make 16 copies of the
input pixels, so even if half of the hidden units get dropped during training, the network can still
rely on the non-dropped copies to reconstruct the input. This highlights the need for new and more
aggressive regularization techniques for convolutional autoencoders.
The proposed architecture for CONV-WTA autoencoder is depicted in Fig. 4b. The CONV-WTA
autoencoder is a non-symmetric autoencoder where the encoder typically consists of a stack of
several ReLU convolutional layers (e.g., 5 × 5 filters) and the decoder is a linear deconvolutional layer of larger size (e.g., 11 × 11 filters). We chose to use a deep encoder with smaller filters (e.g., 5 × 5) instead of a shallow one with larger filters (e.g., 11 × 11), because the former introduces more
(a) Dropout CONV Autoencoder
(b) WTA-CONV Autoencoder
Figure 4: (a) Filters and feature maps of a denoising/dropout convolutional autoencoder, which
learns useless delta functions. (b) Proposed architecture for CONV-WTA autoencoder with spatial
sparsity (128conv5-128conv5-128deconv11).
non-linearity and regularizes the network by forcing it to have a decomposition over large receptive
fields through smaller filters. The CONV-WTA autoencoder is trained under two winner-take-all
sparsity constraints: spatial sparsity and lifetime sparsity.
3.1 Spatial Sparsity
In the feedforward phase, after computing the last feature maps of the encoder, rather than reconstructing the input from all of the hidden units of the feature maps, we identify the single largest
hidden activity within each feature map, and set the rest of the activities as well as their derivatives
to zero. This results in a sparse representation whose sparsity level is the number of feature maps.
The decoder then reconstructs the output using only the active hidden units in the feature maps and
the reconstruction error is only backpropagated through these hidden units as well.
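The spatial winner-take-all step admits a compact NumPy sketch (ours; the array layout and names are assumptions):

```python
import numpy as np

def conv_wta_spatial(f):
    """Keep only the single largest activation in each feature map.

    f: (batch, maps, height, width) ReLU feature maps of the encoder.
    Returns maps with exactly one nonzero entry per (sample, map); the same
    mask gates the gradient during training.
    """
    b, m, hgt, wid = f.shape
    flat = f.reshape(b, m, -1)
    winners = flat.argmax(axis=2)                      # (batch, maps)
    mask = np.zeros_like(flat)
    bi, mi = np.meshgrid(np.arange(b), np.arange(m), indexing="ij")
    mask[bi, mi, winners] = 1.0
    return (flat * mask).reshape(b, m, hgt, wid)
```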
Consistent with other representation learning approaches such as triangle k-means [3] and deconvolutional networks [7, 12], we observed that using a softer sparsity constraint at test time results in
a better classification performance. So, in the CONV-WTA autoencoder, in order to find the final
representation of the input image, we simply turn off the sparsity regularizer and use ReLU convolutions to compute the last layer feature maps of the encoder. After that, we apply max-pooling
(e.g., over 4 × 4 regions) on these feature maps and use this representation for classification tasks
or in training stacked CONV-WTA as will be discussed in Section 3.3. Fig. 5 shows a CONV-WTA
autoencoder that was trained on MNIST.
Figure 5: The CONV-WTA autoencoder with 16 first layer filters and 128 second layer filters trained
on MNIST: (a) Input image. (b) Learnt dictionary (deconvolution filters). (c) 16 feature maps while
training (spatial sparsity applied). (d) 16 feature maps after training (spatial sparsity turned off). (e)
16 feature maps of the first layer after applying local max-pooling. (f) 48 out of 128 feature maps of
the second layer after turning off the sparsity and applying local max-pooling (final representation).
(a) Spatial sparsity only
(b) Spatial & lifetime sparsity 20%
(c) Spatial & lifetime sparsity 5%
Figure 6: Learnt dictionary (deconvolution filters) of CONV-WTA autoencoder trained on MNIST
(64conv5-64conv5-64conv5-64deconv11).
3.2 Lifetime Sparsity
Although spatial sparsity is very effective in regularizing the autoencoder, it requires all the dictionary atoms to contribute in the reconstruction of every image. We can further increase the sparsity
by exploiting the winner-take-all lifetime sparsity as follows. Suppose we have 128 feature maps and
the mini-batch size is 100. After applying spatial sparsity, for each filter we will have 100 "winner"
hidden units corresponding to the 100 mini-batch images. During feedforward phase, for each filter,
we only keep the k% largest of these 100 values and set the rest of activations to zero. Note that
despite this aggressive sparsity, every filter is forced to get updated upon visiting every mini-batch,
which is crucial for avoiding the dead filter problem that often occurs in sparse coding.
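Continuing the 128-map/100-sample example, here is a sketch (ours) of the extra lifetime step on top of the spatial one:

```python
import numpy as np

def conv_wta_lifetime(f_spatial, rate=0.1):
    """Per-filter lifetime sparsity over the mini-batch winners.

    f_spatial: (batch, maps, height, width) output of the spatial WTA step,
               i.e. at most one nonzero activation per sample and map.
    rate:      fraction of mini-batch winners each filter may keep.
    """
    b, m = f_spatial.shape[:2]
    winners = f_spatial.reshape(b, m, -1).max(axis=2)  # (batch, maps)
    k = max(1, int(round(rate * b)))
    thresh = np.sort(winners, axis=0)[-k, :]           # per-filter cut-off
    keep = (winners >= thresh)[:, :, None, None]       # broadcast mask
    return f_spatial * keep
```

Because the threshold is computed per filter, every filter keeps k% of its winners on every mini-batch, which is the property the text credits with avoiding dead filters.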
Fig. 6 and Fig. 7 show the effect of the lifetime sparsity on the dictionaries trained on MNIST
and Toronto Face dataset. We see that similar to the FC-WTA autoencoders, by tuning the lifetime
sparsity of CONV-WTA autoencoders, we can aim for different sparsity rates. If no lifetime sparsity
is enforced, we learn local filters that contribute to every training point (Fig. 6a and 7a). As we
increase the lifetime sparsity, we can learn rare but useful features that result in better classification
(Fig. 6b). Nevertheless, forcing too much lifetime sparsity will result in features that are too diverse
and rare and do not properly factor the input into parts (Fig. 6c and 7b).
3.3 Stacked CONV-WTA Autoencoders
The CONV-WTA autoencoder can be used as a building block to form a hierarchy. In order to train
the hierarchical model, we first train a CONV-WTA autoencoder on the input images. Then we pass
all the training examples through the network and obtain their representations (last layer of the encoder after turning off sparsity and applying local max-pooling). Now we treat these representations
as a new dataset and train another CONV-WTA autoencoder to obtain the stacked representations.
Fig. 5(f) shows the deep feature maps of a stacked CONV-WTA that was trained on MNIST.
3.4 Scaling CONV-WTA Autoencoders to Large Images
The goal of convolutional sparse coding is to learn shift-invariant dictionary atoms and encoding
filters. Once the filters are learnt, they can be applied convolutionally to any image of any size,
and produce a spatial map corresponding to different locations at the input. We can use this idea
to efficiently train CONV-WTA autoencoders on datasets containing large images. Suppose we
want to train an AlexNet [1] architecture in an unsupervised fashion on ImageNet, ILSVRC-2012
(a) Spatial sparsity only
(b) Spatial and lifetime sparsity of 10%
Figure 7: Learnt dictionary (deconvolution filters) of CONV-WTA autoencoder trained on the
Toronto Face dataset (64conv7-64conv7-64conv7-64deconv15).
(a) Spatial sparsity
(b) Spatial and lifetime sparsity of 10%
Figure 8: Learnt dictionary (deconvolution filters) of CONV-WTA autoencoder trained on ImageNet
48 × 48 whitened patches (64conv5-64conv5-64conv5-64deconv11).
(224 × 224). In order to learn the first layer 11 × 11 shift-invariant filters, we can extract medium-size image patches of size 48 × 48 and train a CONV-WTA autoencoder with 64 dictionary atoms of size 11 on these patches. This will result in 64 shift-invariant filters of size 11 × 11 that can efficiently capture the statistics of 48 × 48 patches. Once the filters are learnt, we can apply them in a convolutional fashion with a stride of 4 to the entire images, and after max-pooling we will have a 64 × 27 × 27 representation of the images. Now we can train another CONV-WTA autoencoder
on top of these feature maps to capture the statistics of a larger receptive field at different location
of the input image. This process could be repeated for multiple layers. Fig. 8 shows the dictionary
learnt on the ImageNet using this approach. We can see that by imposing lifetime sparsity, we could
learn very diverse filters such as corner, circular and blob detectors.
4 Experiments
In all the experiments of this section, we evaluate the quality of unsupervised features of WTA
autoencoders by training a naive linear classifier (i.e., SVM) on top them. We did not fine-tune the
filters in any of the experiments. The implementation details of all the experiments are provided in
Appendix A (in the supplementary materials). An IPython demo for reproducing important results
of this paper is publicly available at http://www.comm.utoronto.ca/~makhzani/.
4.1 Winner-Take-All Autoencoders on MNIST
The MNIST dataset has 60K training points and 10K test points. Table 1 compares the performance
of FC-WTA autoencoder and WTA-RBMs with other permutation-invariant architectures. Table 2a
compares the performance of CONV-WTA autoencoder with other convolutional architectures. In
these experiments, we have used all the available training labels (N = 60000 points) to train a linear
SVM on top of the unsupervised features.
An advantage of unsupervised learning algorithms is the ability to use them in semi-supervised scenarios where labeled data is limited. Table 2b shows the semi-supervised performance of a CONV-WTA where we have assumed only N labels are available. In this case, the unsupervised features are
still trained on the whole dataset (60K points), but the SVM is trained only on the N labeled points
where N varies from 300 to 60K. We compare this with the performance of a supervised deep convnet (CNN) [17] trained only on the N labeled training points. We can see supervised deep learning
techniques fail to learn good representations when labeled data is limited, whereas our WTA algorithm can extract useful features from the unlabeled data and achieve a better classification. We also
compare our method with some of the best semi-supervised learning results recently obtained by
Method                                                                         | Error Rate
Shallow Denoising/Dropout Autoencoder (20% input and 50% hidden units dropout) | 1.60%
Stacked Denoising Autoencoder (3 layers) [9]                                   | 1.28%
Deep Boltzmann Machines [13]                                                   | 0.95%
k-Sparse Autoencoder [14]                                                      | 1.35%
Shallow FC-WTA Autoencoder, 2000 units, 5% sparsity                            | 1.20%
Stacked FC-WTA Autoencoder, 5% and 2% sparsity                                 | 1.11%
Restricted Boltzmann Machines                                                  | 1.60%
Winner-Take-All Restricted Boltzmann Machines (30% sparsity)                   | 1.38%
Table 1: Classification performance of FC-WTA autoencoder features + SVM on MNIST.
Method                                | Error
Deep Deconvolutional Network [7, 12]  | 0.84%
Convolutional Deep Belief Network [5] | 0.82%
Scattering Convolution Network [15]   | 0.43%
Convolutional Kernel Network [16]     | 0.39%
CONV-WTA Autoencoder, 16 maps         | 1.02%
CONV-WTA Autoencoder, 128 maps        | 0.64%
Stacked CONV-WTA, 128 & 2048 maps     | 0.48%

(a) Unsupervised features + SVM trained on N = 60000 labels (no fine-tuning)

N   | CNN [17] | CKN [16] | SC [15] | CONV-WTA
300 | 7.18%    | 4.15%    | 4.70%   | 3.47%
600 | 5.28%    | -        | -       | 2.37%
1K  | 3.21%    | 2.05%    | 2.30%   | 1.92%
2K  | 2.53%    | 1.51%    | 1.30%   | 1.45%
5K  | 1.52%    | 1.21%    | 1.03%   | 1.07%
10K | 0.85%    | 0.88%    | 0.88%   | 0.91%
60K | 0.53%    | 0.39%    | 0.43%   | 0.48%

(b) Unsupervised features + SVM trained on few labels N (semi-supervised)
Table 2: Classification performance of CONV-WTA autoencoder trained on MNIST.
convolutional kernel networks (CKN) [16] and convolutional scattering networks (SC) [15]. We see
CONV-WTA outperforms both these methods when very few labels are available (N < 1K).
4.2 CONV-WTA Autoencoder on Street View House Numbers
The SVHN dataset has about 600K training points and 26K test points. Table 3 reports the classification results of CONV-WTA autoencoder on this dataset. We first trained a shallow and stacked
CONV-WTA on all 600K training cases to learn the unsupervised features, and then performed two
sets of experiments. In the first experiment, we used all the N=600K available labels to train an SVM
on top of the CONV-WTA features, and compared the result with convolutional k-means [11]. We
see that the stacked CONV-WTA achieves a dramatic improvement over the shallow CONV-WTA
as well as k-means. In the second experiment, we trained an SVM by using only N = 1000 labeled data points and compared the result with deep variational autoencoders [18] trained in the same semi-supervised fashion. Fig. 9 shows the learnt dictionary of CONV-WTA on this dataset.
Method                                                          | Accuracy
Convolutional Triangle k-means [11]                             | 90.6%
CONV-WTA Autoencoder, 256 maps (N=600K)                         | 88.5%
Stacked CONV-WTA Autoencoder, 256 and 1024 maps (N=600K)        | 93.1%
Deep Variational Autoencoders (non-convolutional) [18] (N=1000) | 63.9%
Stacked CONV-WTA Autoencoder, 256 and 1024 maps (N=1000)        | 76.2%
Supervised Maxout Network [19] (N=600K)                         | 97.5%
Table 3: CONV-WTA unsupervised features + SVM trained on N labeled points of SVHN dataset.
(a) Contrast Normalized SVHN
(b) Learnt Dictionary (64conv5-64conv5-64conv5-64deconv11)
Figure 9: CONV-WTA autoencoder trained on the Street View House Numbers (SVHN) dataset.
4.3 CONV-WTA Autoencoder on CIFAR-10
Fig. 10a reports the classification results of CONV-WTA on CIFAR-10. We see that when a small number of feature maps (< 256) is used, considerable improvements over k-means can be achieved. This is because our method can learn a shift-invariant dictionary, as opposed to the redundant dictionaries learnt by patch-based methods such as k-means. In the largest deep network that we trained, we used 256, 1024, 4096 maps and achieved a classification rate of 80.1% without using fine-tuning, model averaging or data augmentation. Fig. 10b shows the learnt dictionary on the CIFAR-10 dataset. We can see that the network has learnt diverse shift-invariant filters such as point/corner
detectors as opposed to Fig. 2b that shows the position-specific filters of patch-based methods.
Method                                                | Accuracy
Shallow Convolutional Triangle k-means (64 maps) [3]  | 62.3%
Shallow CONV-WTA Autoencoder (64 maps)                | 68.9%
Shallow Convolutional Triangle k-means (256 maps) [3] | 70.2%
Shallow CONV-WTA Autoencoder (256 maps)               | 72.3%
Shallow Convolutional Triangle k-means (4000 maps) [3]| 79.6%
Deep Triangle k-means (1600, 3200, 3200 maps) [20]    | 82.0%
Convolutional Deep Belief Net (2 layers) [6]          | 78.9%
Exemplar CNN (300x Data Augmentation) [21]            | 82.0%
NOMP (3200, 6400, 6400 maps + Averaging 7 Models) [22]| 82.9%
Stacked CONV-WTA (256, 1024 maps)                     | 77.9%
Stacked CONV-WTA (256, 1024, 4096 maps)               | 80.1%
Supervised Maxout Network [19]                        | 88.3%
(a) Unsupervised features + SVM (without fine-tuning)
(b) Learnt dictionary (deconv-filters)
64conv5-64conv5-64conv5-64deconv7
Figure 10: CONV-WTA autoencoder trained on the CIFAR-10 dataset.
5 Discussion
Relationship of FC-WTA to k-sparse autoencoders. k-sparse autoencoders impose sparsity across
different channels (population sparsity), whereas the FC-WTA autoencoder imposes sparsity across
training examples (lifetime sparsity). When aiming for low sparsity levels, k-sparse autoencoders
use a scheduling technique to avoid the dead dictionary atom problem. WTA autoencoders, however,
do not have this problem since all the hidden units get updated upon visiting every mini-batch no
matter how aggressive the sparsity rate is (no scheduling required). As a result, we can train larger
networks and achieve better classification rates.
Relationship of CONV-WTA to deconvolutional networks and convolutional PSD. Deconvolutional networks [7, 12] are top-down models with no direct link from the image to the feature maps.
The inference of the sparse maps requires solving the iterative ISTA algorithm, which is costly.
Convolutional PSD [4] addresses this problem by training a parameterized encoder separately to
explicitly predict the sparse codes using a soft thresholding operator. Deconvolutional networks and
convolutional PSD can be viewed as the generative decoder and encoder paths of a convolutional
autoencoder. Our contribution is to propose a specific winner-take-all approach for training a convolutional autoencoder, in which both paths are trained jointly using direct backpropagation yielding
an algorithm that is much faster, easier to implement and can train much larger networks.
Relationship to maxout networks. Maxout networks [19] take the max across different channels,
whereas our method takes the max across space and mini-batch dimensions. Also the winner-take-all
feature maps retain the location information of the ?winners? within each feature map and different
locations have different connectivity on the subsequent layers, whereas the maxout activity is passed
to the next layer using weights that are the same regardless of which unit gave the maximum.
6 Conclusion
We proposed the winner-take-all spatial and lifetime sparsity methods to train autoencoders that
learn to do fully-connected and convolutional sparse coding. We observed that CONV-WTA autoencoders learn shift-invariant and diverse dictionary atoms as opposed to position-specific Gabor-like
atoms that are typically learnt by conventional sparse coding methods. Unlike related approaches,
such as deconvolutional networks and convolutional PSD, our method jointly trains the encoder and
decoder paths by direct back-propagation, and does not require an iterative EM-like optimization
technique during training. We described how our method can be scaled to large datasets such as
ImageNet and showed the necessity of the deep architecture to achieve better results. We performed
experiments on the MNIST, SVHN and CIFAR-10 datasets and showed that the classification rates
of winner-take-all autoencoders are competitive with the state-of-the-art.
Acknowledgments
We would like to thank Ruslan Salakhutdinov and Andrew Delong for the valuable comments. We
also acknowledge the support of NVIDIA with the donation of the GPUs used for this research.
References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in NIPS, vol. 1, p. 4, 2012.
[2] A. Ng, "Sparse autoencoder," CS294A Lecture notes, vol. 72, 2011.
[3] A. Coates, A. Y. Ng, and H. Lee, "An analysis of single-layer networks in unsupervised feature learning," in International Conference on Artificial Intelligence and Statistics, 2011.
[4] K. Kavukcuoglu, P. Sermanet, Y.-L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun, "Learning convolutional feature hierarchies for visual recognition," in NIPS, vol. 1, p. 5, 2010.
[5] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations," in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609-616, ACM, 2009.
[6] A. Krizhevsky, "Convolutional deep belief networks on CIFAR-10," Unpublished, 2010.
[7] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, "Deconvolutional networks," in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528-2535, IEEE, 2010.
[8] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun, "Pedestrian detection with unsupervised multi-stage feature learning," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 3626-3633, IEEE, 2013.
[9] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," The Journal of Machine Learning Research, vol. 11, pp. 3371-3408, 2010.
[10] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012.
[11] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, vol. 2011, p. 5, Granada, Spain, 2011.
[12] M. D. Zeiler and R. Fergus, "Differentiable pooling for hierarchical feature learning," arXiv preprint arXiv:1207.0151, 2012.
[13] R. Salakhutdinov and G. E. Hinton, "Deep Boltzmann machines," in International Conference on Artificial Intelligence and Statistics, pp. 448-455, 2009.
[14] A. Makhzani and B. Frey, "k-sparse autoencoders," International Conference on Learning Representations (ICLR), 2014.
[15] J. Bruna and S. Mallat, "Invariant scattering convolution networks," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 8, pp. 1872-1886, 2013.
[16] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid, "Convolutional kernel networks," in Advances in Neural Information Processing Systems, pp. 2627-2635, 2014.
[17] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun, "Unsupervised learning of invariant feature hierarchies with applications to object recognition," in Computer Vision and Pattern Recognition (CVPR '07), IEEE Conference on, pp. 1-8, IEEE, 2007.
[18] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, "Semi-supervised learning with deep generative models," in Advances in Neural Information Processing Systems, pp. 3581-3589, 2014.
[19] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, "Maxout networks," ICML, 2013.
[20] A. Coates and A. Y. Ng, "Selecting receptive fields in deep networks," in NIPS, 2011.
[21] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox, "Discriminative unsupervised feature learning with convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 766-774, 2014.
[22] T.-H. Lin and H. Kung, "Stable and efficient representation learning with nonnegativity constraints," in Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1323-1331, 2014.
Learning both Weights and Connections for Efficient Neural Networks
Song Han
Stanford University
songhan@stanford.edu

Jeff Pool
NVIDIA
jpool@nvidia.com

John Tran
NVIDIA
johntran@nvidia.com

William J. Dally
Stanford University / NVIDIA
dally@stanford.edu
Abstract
Neural networks are both computationally intensive and memory intensive, making
them difficult to deploy on embedded systems. Also, conventional networks fix
the architecture before training starts; as a result, training cannot improve the
architecture. To address these limitations, we describe a method to reduce the
storage and computation required by neural networks by an order of magnitude
without affecting their accuracy by learning only the important connections. Our
method prunes redundant connections using a three-step method. First, we train
the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the
remaining connections. On the ImageNet dataset, our method reduced the number
of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without
incurring accuracy loss. Similar experiments with VGG-16 found that the total
number of parameters can be reduced by 13×, from 138 million to 10.3 million,
again with no loss of accuracy.
1 Introduction
Neural networks have become ubiquitous in applications ranging from computer vision [1] to speech
recognition [2] and natural language processing [3]. We consider convolutional neural networks used
for computer vision tasks which have grown over time. In 1998, LeCun et al. designed a CNN model
LeNet-5 with less than 1M parameters to classify handwritten digits [4], while in 2012, Krizhevsky
et al. [1] won the ImageNet competition with 60M parameters. Deepface classified human faces with
120M parameters [5], and Coates et al. [6] scaled up a network to 10B parameters.
While these large neural networks are very powerful, their size consumes considerable storage,
memory bandwidth, and computational resources. For embedded mobile applications, these resource
demands become prohibitive. Figure 1 shows the energy cost of basic arithmetic and memory
operations in a 45nm CMOS process. From this data we see the energy per connection is dominated
by memory access and ranges from 5pJ for 32 bit coefficients in on-chip SRAM to 640pJ for 32 bit
coefficients in off-chip DRAM [7]. Large networks do not fit in on-chip storage and hence require
the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at
20Hz would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access - well beyond the power
envelope of a typical mobile device. Our goal in pruning networks is to reduce the energy required to
run such large networks so they can run in real time on mobile devices. The model size reduction
from pruning also facilitates storage and transmission of mobile applications incorporating DNNs.
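As a sanity check on the 12.8 W figure, the arithmetic can be spelled out directly (our illustration, using only the numbers quoted above):

```python
# 1 billion connections, refreshed 20 times per second, each weight fetched
# from off-chip DRAM at 640 pJ per access.
rate_hz = 20
connections = 1e9
energy_per_access_pj = 640

watts = rate_hz * connections * energy_per_access_pj * 1e-12  # pJ -> J
print(watts)  # 12.8
```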
Operation            | Energy [pJ] | Relative Cost
32 bit int ADD       | 0.1         | 1
32 bit float ADD     | 0.9         | 9
32 bit Register File | 1           | 10
32 bit int MULT      | 3.1         | 31
32 bit float MULT    | 3.7         | 37
32 bit SRAM Cache    | 5           | 50
32 bit DRAM Memory   | 640         | 6400
Figure 1: Energy table for 45nm CMOS process [7]. Memory access is 3 orders of magnitude more
energy expensive than simple arithmetic.
To achieve this goal, we present a method to prune network connections in a manner that preserves the
original accuracy. After an initial training phase, we remove all connections whose weight is lower
than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first
phase learns the topology of the networks: learning which connections are important and removing
the unimportant connections. We then retrain the sparse network so the remaining connections can
compensate for the connections that have been removed. The phases of pruning and retraining may
be repeated iteratively to further reduce network complexity. In effect, this training process learns
the network connectivity in addition to the weights - much as in the mammalian brain [8][9], where
synapses are created in the first few months of a child's development, followed by gradual pruning of
little-used connections, falling to typical adult values.
2 Related Work
Neural networks are typically over-parameterized, and there is significant redundancy for deep learning models [10]. This results in a waste of both computation and memory. There have been various
proposals to remove the redundancy: Vanhoucke et al. [11] explored a fixed-point implementation
with 8-bit integer (vs 32-bit floating point) activations. Denton et al. [12] exploited the linear
structure of the neural network by finding an appropriate low-rank approximation of the parameters
and keeping the accuracy within 1% of the original model. With similar accuracy loss, Gong et al.
[13] compressed deep convnets using vector quantization. These approximation and quantization
techniques are orthogonal to network pruning, and they can be used together to obtain further gains
[14].
There have been other attempts to reduce the number of parameters of neural networks by replacing
the fully connected layer with global average pooling. The Network in Network architecture [15]
and GoogLenet [16] achieves state-of-the-art results on several benchmarks by adopting this idea.
However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them
to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This
problem is noted by Szegedy et al. [16] and motivates them to add a linear layer on the top of their
networks to enable transfer learning.
Network pruning has been used both to reduce network complexity and to reduce over-fitting. An
early approach to pruning was biased weight decay [17]. Optimal Brain Damage [18] and Optimal
Brain Surgeon [19] prune networks to reduce the number of connections based on the Hessian of the
loss function and suggest that such pruning is more accurate than magnitude-based pruning such as
weight decay. However, second order derivative needs additional computation.
HashedNets [20] is a recent technique to reduce model sizes by using a hash function to randomly
group connection weights into hash buckets, so that all connections within the same hash bucket
share a single parameter value. This technique may benefit from pruning. As pointed out in Shi et al.
[21] and Weinberger et al. [22], sparsity will minimize hash collisions, making feature hashing even
more effective. HashedNets may be used together with pruning to give even better parameter savings.
[Figure 2: pipeline diagram, Train Connectivity -> Prune Connections -> Train Weights; Figure 3: synapses and neurons before/after pruning]

Figure 2: Three-Step Training Pipeline.
Figure 3: Synapses and neurons before and after pruning.

3 Learning Connections in Addition to Weights
Our pruning method employs a three-step process, as illustrated in Figure 2, which begins by learning
the connectivity via normal network training. Unlike conventional training, however, we are not
learning the final values of the weights, but rather we are learning which connections are important.
The second step is to prune the low-weight connections. All connections with weights below a
threshold are removed from the network ? converting a dense network into a sparse network, as
shown in Figure 3. The final step retrains the network to learn the final weights for the remaining
sparse connections. This step is critical. If the pruned network is used without retraining, accuracy is
significantly impacted.
3.1 Regularization
Choosing the correct regularization impacts the performance of pruning and retraining. L1 regularization penalizes non-zero parameters resulting in more parameters near zero. This gives better accuracy
after pruning, but before retraining. However, the remaining connections are not as good as with L2
regularization, resulting in lower accuracy after retraining. Overall, L2 regularization gives the best
pruning results. This is further discussed in the experiments section.
3.2 Dropout Ratio Adjustment
Dropout [23] is widely used to prevent over-fitting, and this also applies to retraining. During
retraining, however, the dropout ratio must be adjusted to account for the change in model capacity.
In dropout, each parameter is probabilistically dropped during training, but will come back during
inference. In pruning, parameters are dropped forever after pruning and have no chance to come back
during both training and inference. As the parameters get sparse, the classifier will select the most
informative predictors and thus have much less prediction variance, which reduces over-fitting. As
pruning already reduced model capacity, the retraining dropout ratio should be smaller.
Quantitatively, let C_i be the number of connections in layer i, C_i^o for the original network, C_i^r for the network after retraining, and N_i be the number of neurons in layer i. Since dropout works on neurons, and C_i varies quadratically with N_i according to Equation 1, the dropout ratio after pruning the parameters should follow Equation 2, where D_o represents the original dropout rate and D_r the dropout rate during retraining:

C_i = N_i N_{i-1}    (1)

D_r = D_o √(C_i^r / C_i^o)    (2)
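A minimal Python helper (ours; the function name and the example layer sizes are hypothetical) makes Equation 2 concrete:

```python
import math

def retrain_dropout(d_orig, c_orig, c_retrain):
    """Dropout rate for retraining a pruned layer (Equation 2).

    d_orig:    dropout rate D_o used for the original dense network.
    c_orig:    connection count C_i^o of the layer before pruning.
    c_retrain: connection count C_i^r kept after pruning.
    """
    return d_orig * math.sqrt(c_retrain / c_orig)

# Example: a layer pruned to 9% of its connections, originally trained with
# 50% dropout, is retrained with roughly 15% dropout.
print(retrain_dropout(0.5, 38_000_000, 3_420_000))  # ~0.15
```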
3.3 Local Pruning and Parameter Co-adaptation
During retraining, it is better to retain the weights from the initial training phase for the connections
that survived pruning than it is to re-initialize the pruned layers. CNNs contain fragile co-adapted
features [24]: gradient descent is able to find a good solution when the network is initially trained,
but not after re-initializing some layers and retraining them. So when we retrain the pruned layers,
we should keep the surviving parameters instead of re-initializing them.
Table 1: Network pruning can save 9× to 13× parameters with no drop in predictive performance.

Network              | Top-1 Error | Top-5 Error | Parameters | Compression Rate
LeNet-300-100 Ref    | 1.64%       | -           | 267K       | -
LeNet-300-100 Pruned | 1.59%       | -           | 22K        | 12×
LeNet-5 Ref          | 0.80%       | -           | 431K       | -
LeNet-5 Pruned       | 0.77%       | -           | 36K        | 12×
AlexNet Ref          | 42.78%      | 19.73%      | 61M        | -
AlexNet Pruned       | 42.77%      | 19.67%      | 6.7M       | 9×
VGG-16 Ref           | 31.50%      | 11.32%      | 138M       | -
VGG-16 Pruned        | 31.34%      | 10.88%      | 10.3M      | 13×
Retraining the pruned layers starting with retained weights requires less computation because we
don?t have to back propagate through the entire network. Also, neural networks are prone to suffer
the vanishing gradient problem [25] as the networks get deeper, which makes pruning errors harder to
recover for deep networks. To prevent this, we fix the parameters for CONV layers and only retrain
the FC layers after pruning the FC layers, and vice versa.
3.4 Iterative Pruning
Learning the right connections is an iterative process. Pruning followed by retraining is one iteration; after many such iterations, the minimum number of connections can be found. Without loss of accuracy, this method can boost the pruning rate from 5× to 9× on AlexNet compared with single-step aggressive pruning. Each iteration is a greedy search in that we find the best connections. We also experimented with probabilistically pruning parameters based on their absolute value, but this gave worse results.
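The prune/retrain alternation can be summarized in a short sketch (ours, not the actual Caffe implementation; `retrain_fn` is a hypothetical stand-in for the full training loop):

```python
import numpy as np

def prune_step(weights, masks, quality):
    """Drop connections whose magnitude falls below quality * std per layer."""
    for name, w in weights.items():
        alive = masks[name]
        thresh = quality * w[alive].std()
        masks[name] = (np.abs(w) >= thresh) & alive
        weights[name] = w * masks[name]

def iterative_prune(weights, retrain_fn, quality=1.0, iters=5):
    """Alternate greedy pruning and retraining (Section 3.4, sketched).

    weights:    dict of layer name -> np.ndarray.
    retrain_fn: closure that retrains the masked network and returns updated
                weights; it must keep masked positions at zero.
    """
    masks = {name: np.ones(w.shape, dtype=bool) for name, w in weights.items()}
    for _ in range(iters):
        prune_step(weights, masks, quality)
        weights = retrain_fn(weights, masks)
    return weights, masks
```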
3.5 Pruning Neurons
After pruning connections, neurons with zero input connections or zero output connections may be
safely pruned. This pruning is furthered by removing all connections to or from a pruned neuron.
The retraining phase automatically arrives at the result where dead neurons will have both zero input
connections and zero output connections. This occurs due to gradient descent and regularization.
A neuron that has zero input connections (or zero output connections) will have no contribution
to the final loss, leading the gradient to be zero for its output connection (or input connection),
respectively. Only the regularization term will push the weights to zero. Thus, the dead neurons will
be automatically removed during retraining.
4 Experiments
We implemented network pruning in Caffe [26]. Caffe was modified to add a mask which disregards
pruned parameters during network operation for each weight tensor. The pruning threshold is chosen
as a quality parameter multiplied by the standard deviation of a layer's weights. We carried out the
experiments on Nvidia TitanX and GTX980 GPUs.
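The mask semantics described above can be sketched as follows (our illustration of the idea, not the actual Caffe modification):

```python
import numpy as np

def prune_layer(w, quality=1.0):
    """Threshold = quality * std(w); weights below it are masked out."""
    mask = (np.abs(w) >= quality * w.std()).astype(w.dtype)
    return w * mask, mask

def masked_sgd_step(w, grad, mask, lr=0.01):
    """Retraining update: pruned weights get no gradient and stay at zero."""
    w -= lr * grad * mask
    return w * mask
```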
We pruned four representative networks: LeNet-300-100 and LeNet-5 on MNIST, together with AlexNet and VGG-16 on ImageNet. The network parameters and accuracy¹ before and after pruning are shown in Table 1.
4.1 LeNet on MNIST
We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks [4]. LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, which
achieves 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional
layers and two fully connected layers, which achieves 0.8% error rate on MNIST. After pruning,
the network is retrained with 1/10 of the original network's learning rate. Table 1 shows

¹ Reference model is from the Caffe model zoo; accuracy is measured without data augmentation.
Table 2: For LeNet-300-100, pruning reduces the number of weights by 12× and computation by 12×.

Layer | Weights | FLOP | Act% | Weights% | FLOP%
fc1   | 235K    | 470K | 38%  | 8%       | 8%
fc2   | 30K     | 60K  | 65%  | 9%       | 4%
fc3   | 1K      | 2K   | 100% | 26%      | 17%
Total | 266K    | 532K | 46%  | 8%       | 8%
Table 3: For LeNet-5, pruning reduces the number of weights by 12× and computation by 6×.

Layer | Weights | FLOP  | Act% | Weights% | FLOP%
conv1 | 0.5K    | 576K  | 82%  | 66%      | 66%
conv2 | 25K     | 3200K | 72%  | 12%      | 10%
fc1   | 400K    | 800K  | 55%  | 8%       | 6%
fc2   | 5K      | 10K   | 100% | 19%      | 10%
Total | 431K    | 4586K | 77%  | 8%       | 16%
Figure 4: Visualization of the first FC layer's sparsity pattern of LeNet-300-100. It has a banded structure repeated 28 times, which corresponds to the un-pruned parameters in the center of the images, since the digits are written in the center.
pruning saves 12× parameters on these networks. For each layer of the network the table shows (left
to right) the original number of weights, the number of floating point operations to compute that
layer?s activations, the average percentage of activations that are non-zero, the percentage of non-zero
weights after pruning, and the percentage of actually required floating point operations.
An interesting byproduct is that network pruning detects visual attention regions. Figure 4 shows the
sparsity pattern of the first fully connected layer of LeNet-300-100; the matrix size is 784 × 300. It has 28 bands, each band's width 28, corresponding to the 28 × 28 input pixels. The colored regions
of the figure, indicating non-zero parameters, correspond to the center of the image. Because digits
are written in the center of the image, these are the important parameters. The graph is sparse on the
left and right, corresponding to the less important regions on the top and bottom of the image. After
pruning, the neural network finds the center of the image more important, and the connections to the
peripheral regions are more heavily pruned.
4.2 AlexNet on ImageNet
We further examine the performance of pruning on the ImageNet ILSVRC-2012 dataset, which
has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the
reference model, which has 61 million parameters across 5 convolutional layers and 3 fully connected
layers. The AlexNet Caffe model achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%.
The original AlexNet took 75 hours to train on an NVIDIA Titan X GPU. After pruning, the whole network is retrained with 1/100 of the original network's initial learning rate. It took 173 hours to
retrain the pruned AlexNet. Pruning is not used when iteratively prototyping the model, but rather
used for model reduction when the model is ready for deployment. Thus, the retraining time is less of a concern. Table 1 shows that AlexNet can be pruned to 1/9 of its original size without impacting accuracy, and the amount of computation can be reduced by 3×.
Table 4: For AlexNet, pruning reduces the number of weights by 9× and computation by 3×.

[Bar chart: remaining vs. pruned parameters per layer (conv1-fc3 and total), y-axis 15M-60M]

Layer | Weights | FLOP | Act% | Weights% | FLOP%
conv1 | 35K     | 211M | 88%  | 84%      | 84%
conv2 | 307K    | 448M | 52%  | 38%      | 33%
conv3 | 885K    | 299M | 37%  | 35%      | 18%
conv4 | 663K    | 224M | 40%  | 37%      | 14%
conv5 | 442K    | 150M | 34%  | 37%      | 14%
fc1   | 38M     | 75M  | 36%  | 9%       | 3%
fc2   | 17M     | 34M  | 40%  | 9%       | 3%
fc3   | 4M      | 8M   | 100% | 25%      | 10%
Total | 61M     | 1.5B | 54%  | 11%      | 30%
Table 5: For VGG-16, pruning reduces the number of weights by 12× and computation by 5×.

Layer   | Weights | FLOP  | Act% | Weights% | FLOP%
conv1_1 | 2K      | 0.2B  | 53%  | 58%      | 58%
conv1_2 | 37K     | 3.7B  | 89%  | 22%      | 12%
conv2_1 | 74K     | 1.8B  | 80%  | 34%      | 30%
conv2_2 | 148K    | 3.7B  | 81%  | 36%      | 29%
conv3_1 | 295K    | 1.8B  | 68%  | 53%      | 43%
conv3_2 | 590K    | 3.7B  | 70%  | 24%      | 16%
conv3_3 | 590K    | 3.7B  | 64%  | 42%      | 29%
conv4_1 | 1M      | 1.8B  | 51%  | 32%      | 21%
conv4_2 | 2M      | 3.7B  | 45%  | 27%      | 14%
conv4_3 | 2M      | 3.7B  | 34%  | 34%      | 15%
conv5_1 | 2M      | 925M  | 32%  | 35%      | 12%
conv5_2 | 2M      | 925M  | 29%  | 29%      | 9%
conv5_3 | 2M      | 925M  | 19%  | 36%      | 11%
fc6     | 103M    | 206M  | 38%  | 4%       | 1%
fc7     | 17M     | 34M   | 42%  | 4%       | 2%
fc8     | 4M      | 8M    | 100% | 23%      | 9%
Total   | 138M    | 30.9B | 64%  | 7.5%     | 21%

4.3 VGG-16 on ImageNet
With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 [27],
on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three
fully-connected layers. Following a similar methodology, we aggressively pruned both convolutional
and fully-connected layers to realize a significant reduction in the number of weights, shown in
Table 5. We used five iterations of pruning and retraining.
The VGG-16 results are, like those for AlexNet, very promising. The network as a whole has
been reduced to 7.5% of its original size (13× smaller). In particular, note that the two largest
fully-connected layers can each be pruned to less than 4% of their original size. This reduction is
critical for real time image processing, where there is little reuse of fully connected layers across
images (unlike batch processing during training).
5 Discussion
The trade-off curve between accuracy and number of parameters is shown in Figure 5. The more parameters are pruned away, the more accuracy degrades. We experimented with L1 and L2 regularization, with and without retraining, together with iterative pruning, giving five trade-off lines. Comparing solid and dashed lines, the importance of retraining is clear: without retraining, accuracy begins dropping much sooner, with 1/3 of the original connections rather than with 1/10 of the original connections. It is interesting to see that we get the "free lunch" of removing 2× the connections without losing accuracy even without retraining, while with retraining we are able to reduce connections by 9×.
[Figure 5: accuracy loss (0% to -4.5%) vs. parameters pruned away (40%-100%); five curves: L2 regularization w/o retrain, L1 regularization w/ retrain, L2 regularization w/ iterative prune and retrain, L1 regularization w/o retrain, L2 regularization w/ retrain]
Figure 5: Trade-off curve for parameter reduction and loss in top-5 accuracy. L1 regularization
performs better than L2 at learning the connections without retraining, while L2 regularization
performs better than L1 at retraining. Iterative pruning gives the best result.
[Figure 6: accuracy loss vs. percentage of parameters kept, per layer; left panel conv1-conv5, right panel fc1-fc3]
Figure 6: Pruning sensitivity for CONV layer (left) and FC layer (right) of AlexNet.
L1 regularization gives better accuracy than L2 directly after pruning (dotted blue and purple lines)
since it pushes more parameters closer to zero. However, comparing the yellow and green lines shows
that L2 outperforms L1 after retraining, since there is no benefit to further pushing values towards
zero. One extension is to use L1 regularization for pruning and then L2 for retraining, but this did not
beat simply using L2 for both phases. Parameters from one mode do not adapt well to the other.
The biggest gain comes from iterative pruning (solid red line with solid circles). Here we take the
pruned and retrained network (solid green line with circles) and prune and retrain it again. The
leftmost dot on this curve corresponds to the point on the green line at 80% (5× pruning) pruned to 8×. There is no accuracy loss at 9×. Not until 10× does the accuracy begin to drop sharply.
Two green points achieve slightly better accuracy than the original model. We believe this accuracy
improvement is due to pruning finding the right capacity of the network and hence reducing overfitting.
Both CONV and FC layers can be pruned, but with different sensitivity. Figure 6 shows the sensitivity
of each layer to network pruning. The figure shows how accuracy drops as parameters are pruned on
a layer-by-layer basis. The CONV layers (on the left) are more sensitive to pruning than the fully
connected layers (on the right). The first convolutional layer, which interacts with the input image
directly, is most sensitive to pruning. We suspect this sensitivity is due to the input layer having only
3 channels and thus less redundancy than the other convolutional layers. We used the sensitivity
results to find each layer's threshold: for example, the smallest threshold was applied to the most
sensitive layer, which is the first convolutional layer.
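A sketch of how per-layer thresholds could be read off such a sensitivity sweep; evaluate and prune_layer are hypothetical helpers, and the accuracy tolerance tol is an assumed knob, not a value from the text.

def pick_thresholds(model, layers, candidate_fracs, evaluate, prune_layer, tol=0.01):
    base = evaluate(model)                        # accuracy before pruning
    chosen = {}
    for layer in layers:
        chosen[layer] = 0.0
        for frac in sorted(candidate_fracs):      # e.g. 10%, 20%, ..., 90% pruned
            pruned = prune_layer(model, layer, frac)   # prune this layer only
            if base - evaluate(pruned) <= tol:
                chosen[layer] = frac              # accuracy drop still acceptable
            else:
                break                             # layer too sensitive beyond here
    return chosen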
Storing the pruned layers as sparse matrices has a storage overhead of only 15.6%. Storing relative
rather than absolute indices reduces the space taken by the FC layer indices to 5 bits. Similarly,
CONV layer indices can be represented with only 8 bits.
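A small sketch of relative index encoding for one sparse row; the zero-valued filler entries used to bridge gaps larger than the 5-bit maximum are our assumption about how overflow would be handled, since the text does not spell this out.

def to_relative_indices(absolute_indices, values, bits=5):
    max_gap = (1 << bits) - 1                 # 31 for 5-bit FC indices
    rel, vals, prev = [], [], -1
    for idx, v in zip(absolute_indices, values):
        gap = idx - prev
        while gap > max_gap:                  # bridge long gaps with fillers
            rel.append(max_gap)
            vals.append(0.0)
            gap -= max_gap
        rel.append(gap)
        vals.append(v)
        prev = idx
    return rel, vals

For example, to_relative_indices([2, 40, 41], [0.5, -0.3, 0.8]) yields gaps [3, 31, 7, 1] with one zero-valued filler, and a cumulative sum over the gaps recovers the absolute positions.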
Table 6: Comparison with other model reduction methods on AlexNet. Data-free pruning [28]
saved only 1.5× parameters with much loss of accuracy. Deep Fried Convnets [29] worked on fully
connected layers only and reduced the parameters by less than 4×. [30] reduced the parameters by
4× with inferior accuracy. Naively cutting the layer size saves parameters but suffers from 4% loss
of accuracy. [12] exploited the linear structure of convnets and compressed each layer individually,
where model compression on a single layer incurred 0.9% accuracy penalty with biclustering + SVD.
Network                    Top-1 Error   Top-5 Error   Parameters   Compression Rate
Baseline Caffemodel [26]   42.78%        19.73%        61.0M        1×
Data-free pruning [28]     44.40%        -             39.6M        1.5×
Fastfood-32-AD [29]        41.93%        -             32.8M        2×
Fastfood-16-AD [29]        42.90%        -             16.4M        3.7×
Collins & Kohli [30]       44.40%        -             15.2M        4×
Naive Cut                  47.18%        23.23%        13.8M        4.4×
SVD [12]                   44.02%        20.56%        11.9M        5×
Network Pruning            42.77%        19.67%        6.7M         9×

[Figure 7 panels: two histograms of weight values, titled "Weight distribution before pruning" (y-axis: Count ×10^5) and "Weight distribution after pruning and retraining" (y-axis: Count ×10^4); x-axis: Weight Value, from -0.04 to 0.04.]
Figure 7: Weight distribution before and after parameter pruning. The right figure has a 10× smaller
scale.
After pruning, the storage requirements of AlexNet and VGGNet are small enough that all weights
can be stored on chip, instead of off-chip DRAM which takes orders of magnitude more energy to
access (Table 1). We are targeting our pruning method for fixed-function hardware specialized for
sparse DNN, given the limitation of general purpose hardware on sparse computation.
Figure 7 shows histograms of weight distribution before (left) and after (right) pruning. The weight
is from the first fully connected layer of AlexNet. The two panels have different y-axis scales.
The original distribution of weights is centered on zero with tails dropping off quickly. Almost all
parameters are between [-0.015, 0.015]. After pruning, the large center region is removed. The
network parameters adjust themselves during the retraining phase. The result is that the parameters
form a bimodal distribution and become more spread across the x-axis, between [-0.025, 0.025].
6
Conclusion
We have presented a method to improve the energy efficiency and storage of neural networks without
affecting accuracy by finding the right connections. Our method, motivated in part by how learning
works in the mammalian brain, operates by learning which connections are important, pruning
the unimportant connections, and then retraining the remaining sparse network. We highlight our
experiments on AlexNet and VGGNet on ImageNet, showing that both fully connected layer and
convolutional layer can be pruned, reducing the number of connections by 9× to 13× without loss of
accuracy. This leads to smaller memory capacity and bandwidth requirements for real-time image
processing, making it easier to be deployed on mobile systems.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[2] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional lstm and other
neural network architectures. Neural Networks, 18(5):602-610, 2005.
[3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa.
Natural language processing (almost) from scratch. JMLR, 12:2493-2537, 2011.
[4] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[5] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to
human-level performance in face verification. In CVPR, pages 1701-1708. IEEE, 2014.
[6] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Ng Andrew. Deep learning with
cots hpc systems. In 30th ICML, pages 1337-1345, 2013.
[7] Mark Horowitz. Energy table for 45nm process, Stanford VLSI wiki.
[8] JP Rauschecker. Neuronal mechanisms of developmental plasticity in the cat's visual system. Human
Neurobiology, 3(2):109-114, 1983.
[9] Christopher A Walsh. Peter huttenlocher (1931-2013). Nature, 502(7470):172-172, 2013.
[10] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning.
In Advances in Neural Information Processing Systems, pages 2148-2156, 2013.
[11] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus.
In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
[12] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure
within convolutional networks for efficient evaluation. In NIPS, pages 1269-1277, 2014.
[13] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks
using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[14] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with
pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[15] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[16] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru
Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint
arXiv:1409.4842, 2014.
[17] Stephen José Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with
back-propagation. In Advances in Neural Information Processing Systems, pages 177-185, 1989.
[18] Yann Le Cun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information
Processing Systems, pages 598-605. Morgan Kaufmann, 1990.
[19] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon.
Advances in Neural Information Processing Systems, pages 164-164, 1993.
[20] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural
networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
[21] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and SVN Vishwanathan. Hash
kernels for structured data. The Journal of Machine Learning Research, 10:2615-2637, 2009.
[22] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing
for large scale multitask learning. In ICML, pages 1113-1120. ACM, 2009.
[23] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A simple way to prevent neural networks from overfitting. JMLR, 15:1929-1958, 2014.
[24] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural
networks? In Advances in Neural Information Processing Systems, pages 3320-3328, 2014.
[25] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient
descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.
[26] Yangqing Jia, et al. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint
arXiv:1408.5093, 2014.
[27] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[28] Suraj Srinivas and R Venkatesh Babu. Data-free parameter pruning for deep neural networks. arXiv
preprint arXiv:1507.06149, 2015.
[29] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang.
Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014.
[30] Maxwell D Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. arXiv preprint
arXiv:1412.1442, 2014.
| 5784 |@word kohli:2 multitask:1 cnn:1 compression:4 retraining:30 shuicheng:1 gradual:1 propagate:1 pavel:1 solid:4 harder:1 reduction:6 initial:3 liu:3 document:1 outperforms:1 freitas:2 com:2 comparing:3 activation:3 must:1 written:2 john:4 gpu:1 realize:1 ronan:1 informative:1 plasticity:1 christian:1 remove:2 designed:1 drop:3 moczulski:1 v:1 hash:5 greedy:1 prohibitive:1 device:2 sram:2 fried:2 vanishing:1 colored:1 five:2 framewise:1 become:3 fitting:3 overhead:1 manner:1 mask:1 cot:1 kuksa:1 themselves:1 examine:1 brain:6 salakhutdinov:1 titanx:1 detects:1 ming:2 automatically:2 little:2 cpu:1 cache:1 conv:5 begin:3 bounded:1 panel:1 alexnet:19 dror:1 finding:3 safely:1 act:4 zaremba:1 scaled:1 classifier:1 before:8 dropped:2 local:1 laurent:1 sara:1 co:7 deployment:1 catanzaro:1 walsh:1 range:1 lecun:3 digit:3 survived:1 yan:1 mult:2 significantly:1 suggest:1 get:2 cannot:1 targeting:1 storage:7 applying:1 wenlin:1 conventional:2 shi:2 center:6 attention:1 starting:1 conv4:5 emily:1 embedding:1 construction:1 deploy:1 heavily:1 losing:1 trick:1 recognition:3 expensive:1 mammalian:2 cut:1 huttenlocher:1 bottom:1 preprint:9 initializing:2 wang:2 region:5 compressing:3 connected:15 kilian:2 ranzato:1 solla:1 trade:3 removed:4 consumes:1 developmental:1 complexity:2 babak:2 trained:2 surgeon:2 predictive:1 efficiency:1 basis:1 chip:5 various:1 represented:1 cat:1 grown:1 train:4 fast:1 describe:1 effective:1 choosing:1 caffe:6 whose:1 stanford:5 widely:1 larger:1 cvpr:1 compressed:2 simonyan:1 final:4 patrice:1 took:2 tran:1 adaptation:1 achieve:2 competition:1 billion:1 sutskever:2 yaniv:1 transmission:1 requirement:2 exploiting:1 cmos:2 adam:1 andrew:4 bourdev:1 gong:2 measured:1 conv5:5 implemented:1 come:3 correct:1 saved:1 cnns:1 centered:1 human:3 nando:2 enable:1 require:2 dnns:1 fix:2 adjusted:1 extension:1 normal:1 achieves:3 early:1 smallest:1 yixin:1 purpose:1 ruslan:1 proc:1 sensitive:3 individually:1 largest:1 hpc:1 vice:1 modified:1 rather:4 denil:2 mobile:5 wilson:1 probabilistically:2 clune:1 improvement:1 rank:1 baseline:1 inference:2 typically:1 entire:1 initially:1 hidden:1 vlsi:1 dnn:1 going:1 marcin:1 tao:1 pixel:1 overall:1 classification:2 impacting:1 development:1 art:1 initialize:1 urgen:1 saving:1 having:1 ng:1 koray:1 qiang:1 frasconi:1 denton:2 icml:2 unsupervised:1 yoshua:3 quantitatively:1 few:1 employ:1 randomly:1 preserve:1 petterson:1 floating:3 phase:7 william:2 attempt:1 ab:1 evaluation:1 adjust:1 arrives:1 misha:2 accurate:1 closer:1 byproduct:1 orthogonal:1 sooner:1 penalizes:1 re:3 circle:2 minimal:1 classify:1 rabinovich:1 cost:3 deviation:1 predictor:1 krizhevsky:3 stored:1 dependency:1 varies:1 lstm:1 sensitivity:5 retain:1 v4:1 off:6 pool:1 michael:1 together:4 quickly:1 ilya:2 jos:1 connectivity:3 again:2 augmentation:1 nm:3 lorien:1 dr:2 worse:1 dead:2 horowitz:1 derivative:2 leading:1 wojciech:1 simard:1 reusing:1 szegedy:2 aggressive:1 account:1 huval:1 de:2 coding:1 waste:1 babu:1 coefficient:2 titan:1 int:2 register:1 ad:2 collobert:1 jason:2 dally:3 red:1 start:1 recover:1 jia:2 lipson:1 ably:1 contribution:1 cio:2 shakibi:1 minimize:1 accuracy:35 convolutional:16 kaufmann:1 variance:1 ni:4 phoneme:1 correspond:2 purple:1 yellow:1 handwritten:1 vincent:2 kavukcuoglu:1 zoo:1 classified:1 synapsis:3 banded:1 suffers:1 energy:10 james:2 lior:1 gain:2 dataset:5 ubiquitous:1 actually:1 back:4 bidirectional:1 maxwell:1 hashing:3 ta:1 follow:1 methodology:1 impacted:1 wei:1 zisserman:1 huizi:1 just:1 smola:3 convnets:4 until:1 langford:2 
replacing:1 christopher:1 propagation:1 mode:1 quality:1 believe:1 effect:1 contain:1 lenet:15 hence:2 regularization:17 aggressively:1 iteratively:2 fc3:3 illustrated:1 during:10 width:1 inferior:1 noted:1 transferable:1 won:1 leftmost:1 performs:2 l1:9 dragomir:1 ranging:1 image:10 specialized:1 stork:1 jp:1 million:5 googlenet:1 discussed:1 tail:1 yosinski:1 significant:2 dinh:1 anguelov:1 versa:1 tuning:1 similarly:1 pointed:1 closing:1 language:2 dot:1 bruna:1 access:5 han:2 add:4 fc7:1 patrick:1 recent:2 schmidhuber:1 nvidia:7 exploited:2 morgan:1 minimum:1 additional:1 prune:8 converting:1 v3:1 redundant:1 dashed:1 arithmetic:2 stephen:2 reduces:6 karlen:1 adapt:1 compensate:1 lin:1 long:1 impact:1 prediction:1 basic:1 vision:2 arxiv:18 iteration:4 represent:2 adopting:1 histogram:1 bimodal:1 achieved:1 kernel:1 proposal:1 affecting:2 addition:2 fine:2 huffman:1 float:2 envelope:1 biased:1 unlike:2 file:1 hz:2 pooling:1 suspect:1 facilitates:1 integer:1 surviving:1 near:1 yang:3 bengio:3 enough:1 pratt:1 fit:1 gave:1 architecture:5 bandwidth:2 topology:1 reduce:9 idea:1 haffner:1 vgg:9 lubomir:1 intensive:2 svn:1 fragile:1 retrains:1 motivated:1 reuse:1 penalty:1 song:3 suffer:1 peter:1 karen:1 speech:1 hessian:1 deep:16 collision:1 clear:1 unimportant:3 tune:1 amount:1 band:2 hardware:2 reduced:7 wiki:1 percentage:3 coates:2 dotted:1 per:1 bryan:1 blue:1 dropping:2 dasgupta:1 paolo:1 group:1 redundancy:3 four:1 threshold:5 falling:1 yangqing:2 prevent:3 pj:4 v1:1 graph:1 convert:1 taigman:1 run:2 parameterized:1 powerful:1 almost:2 yann:3 wu:1 bit:13 dropout:10 layer:57 brody:1 followed:2 adapted:1 vishwanathan:1 sharply:1 worked:1 alex:6 dominated:1 speed:1 nitish:1 min:1 pruned:27 leon:1 gpus:1 structured:1 according:1 peripheral:1 anirban:1 smaller:4 across:3 slightly:1 cun:1 lunch:1 making:3 rob:1 bucket:2 pipeline:1 taken:1 computationally:1 resource:2 equation:2 visualization:1 count:2 mechanism:1 operation:5 incurring:1 multiplied:1 denker:1 v2:1 appropriate:1 away:2 pierre:1 attenberg:1 save:3 batch:1 weinberger:3 original:16 top:9 remaining:6 running:1 pushing:1 eon:1 tensor:1 already:1 v5:1 occurs:1 looked:1 damage:2 costly:1 furthered:1 yunchao:1 interacts:1 gradient:6 fc2:4 capacity:4 retained:1 index:3 reed:1 ratio:4 sermanet:1 zichao:1 difficult:3 dram:5 implementation:1 motivates:1 conv2:5 neuron:11 convolution:1 benchmark:1 descent:3 beat:1 flop:8 hinton:2 neurobiology:1 retrained:3 david:2 venkatesh:1 required:3 connection:50 imagenet:9 rauschecker:1 hanson:1 suraj:1 learned:1 quadratically:1 boost:1 hour:2 nip:2 address:1 beyond:1 adult:1 able:1 below:1 pattern:2 prototyping:1 scott:1 sparsity:3 gideon:1 green:4 memory:9 power:1 critical:2 natural:2 predicting:1 fc8:1 improve:2 cir:2 fc1:4 vggnet:2 axis:2 created:1 carried:1 ready:1 naive:1 joan:1 l2:12 relative:3 graf:1 embedded:2 fully:15 loss:14 highlight:1 interesting:2 limitation:2 geoffrey:2 validation:1 incurred:1 vanhoucke:3 verification:1 conv1:5 tyree:1 storing:2 share:1 prone:1 keeping:1 free:4 bias:1 senior:1 deeper:2 conv3:5 face:2 absolute:2 sparse:10 benefit:2 curve:3 far:1 erhan:1 pushmeet:1 transaction:1 pruning:79 forever:1 cutting:1 keep:1 global:1 overfitting:2 fergus:1 ziyu:1 don:1 search:1 iterative:6 un:1 table:14 scratch:1 promising:2 learn:2 transfer:2 fc6:1 channel:1 nature:1 improving:1 bottou:2 marc:1 did:1 dense:2 fastfood:2 spread:1 aurelio:1 whole:2 repeated:2 child:1 ref:4 neuronal:1 representative:1 retrain:11 biggest:1 deployed:1 hassibi:1 mao:2 jmlr:2 learns:2 removing:2 
dumitru:1 showing:1 explored:1 decay:2 experimented:3 concern:1 incorporating:1 naively:1 quantization:4 mnist:5 workshop:1 corr:1 importance:1 ci:3 magnitude:4 push:2 hod:1 demand:1 gap:1 easier:1 chen:3 fc:9 simply:1 visual:2 josh:1 adjustment:1 biclustering:1 applies:1 deepface:2 corresponds:1 wolf:1 chance:1 acm:1 weston:1 goal:2 month:1 towards:1 jeff:2 considerable:1 change:1 typical:2 qinfeng:1 reducing:3 operates:1 total:5 svd:2 disregard:1 indicating:1 select:1 ilsvrc:2 mark:2 hashednets:2 collins:2 srinivas:1 srivastava:1 |
5,285 | 5,785 | Unsupervised Learning by Program Synthesis
Kevin Ellis
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
ellisk@mit.edu
Armando Solar-Lezama
MIT CSAIL
Massachusetts Institute of Technology
asolar@csail.mit.edu
Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
jbt@mit.edu
Abstract
We introduce an unsupervised learning algorithm that combines probabilistic
modeling with solver-based techniques for program synthesis. We apply our techniques to both a visual learning domain and a language learning problem, showing
that our algorithm can learn many visual concepts from only a few examples and
that it can recover some English inflectional morphology. Taken together, these
results give both a new approach to unsupervised learning of symbolic compositional structures, and a technique for applying program synthesis tools to noisy
data.
1
Introduction
Unsupervised learning seeks to induce good latent representations of a data set. Nonparametric
statistical approaches such as deep autoencoder networks, mixture-model density estimators, or
nonlinear manifold learning algorithms have been very successful at learning representations of
high-dimensional perceptual input. However, it is unclear how they would represent more abstract
structures such as spatial relations in vision (e.g., inside of or all in a line) [2], or morphological rules
in language (e.g., the different inflections of verbs) [1, 13]. Here we give an unsupervised learning
algorithm that synthesizes programs from data, with the goal of learning such concepts. Our approach generalizes from small amounts of data, and produces interpretable symbolic representations
parameterized by a human-readable programming language.
Programs (deterministic or probabilistic) are a natural knowledge representation for many domains
[3], and the idea that inductive learning should be thought of as probabilistic inference over programs is at least 50 years old [6]. Recent work in learning programs has focused on supervised
learning from noiseless input/output pairs, or from formal specifications [4]. Our goal here is to
learn programs from noisy observations without explicit input/output examples. A central idea in
unsupervised learning is compression: finding data representations that require the fewest bits to
write down. We realize this by treating observed data as the output of an unknown program applied
to unknown inputs. By doing joint inference over the program and the inputs, we recover compressive encodings of the observed data. The induced program gives a generative model for the data,
and the induced inputs give an embedding for each data point.
Although a completely domain general method for program synthesis would be desirable, we believe this will remain intractable for the foreseeable future. Accordingly, our approach factors out
the domain-specific components of problems in the form of a grammar for program hypotheses, and
we show how this allows the same general-purpose tools to be used for unsupervised program synthesis in two very different domains. In a domain of visual concepts [5] designed to be natural for
humans but difficult for machines to learn, we show that our methods can synthesize simple graphics programs representing these visual concepts from only a few example images. These programs
outperform both previous machine-learning baselines and several new baselines we introduce. We
also study the domain of learning morphological rules in language, treating rules as programs and
inflected verb forms as outputs. We show how to encode prior linguistic knowledge as a grammar
over programs and recover human-readable linguistic rules, useful for both simple stemming tasks
and for predicting the phonological form of new words.
2
The unsupervised program synthesis algorithm
The space of all programs is vast and often unamenable to the optimization methods used in much
of machine learning. We extend two ideas from the program synthesis community to make search
over programs tractable:
Sketching: In the sketching approach to program synthesis, one manually provides a sketch of the
program to be induced, which specifies a rough outline of its structure [7]. Our sketches take the
form of a probabilistic context-free grammar and make explicit the domain specific prior knowledge.
Symbolic search: Much progress has been made in the engineering of general-purpose symbolic
solvers for Satisfiability Modulo Theories (SMT) problems [8]. We show how to translate our
sketches into SMT problems. Program synthesis is then reduced to solving an SMT problem. These
are intractable in general, but often solved efficiently in practice due to the highly constrained nature
of program synthesis which these solvers can exploit.
Prior work on symbolic search from sketches has not had to cope with noisy observations or probabilities over the space of programs and inputs. Demonstrating how to do this efficiently is our main
technical contribution.
2.1
Formalization as probabilistic inference
We formalize unsupervised program synthesis as Bayesian inference within the following generative
model: Draw a program f(·) from a description length prior over programs, which depends upon
the sketch. Draw N inputs {Ii}_{i=1}^N to the program f(·) from a domain-dependent description length
prior PI(·). These inputs are passed to the program to yield {zi}_{i=1}^N with zi ≜ f(Ii) (zi "defined
as" f(Ii)). Last, we compute the observed data {xi}_{i=1}^N by drawing from a noise model Px|z(·|zi).
Our objective is to estimate the unobserved f(·) and {Ii}_{i=1}^N from the observed dataset {xi}_{i=1}^N. We
use this probabilistic model to define the description length below, which we seek to minimize:
− log Pf(f) + Σ_{i=1}^{N} [ − log Px|z(xi | f(Ii)) − log PI(Ii) ]        (1)

where the three terms are, respectively, the program length, the data reconstruction error, and the
data encoding length.
2.2
Defining a program space
We sketch a space of allowed programs by writing down a context free grammar G, and write L to
mean the set of all programs generated by G. Placing uniform production probabilities over each
non-terminal symbol in G gives a PCFG that serves as a prior over programs: the Pf(·) of Eq. 1.
For example, a grammar over arithmetic expressions might contain rules that say: "expressions are
either the sum of two expressions, or a real number, or an input variable x," which we write as
E → E + E | R | x        (2)
Having specified a space of programs, we define the meaning of a program in terms of SMT primitives, which can include objects like tuples, real numbers, conditionals, booleans, etc. [8]. We write
Φ to mean the set of expressions built of SMT primitives. Formally, we assume G comes equipped
with a denotation for each rule, which we write as ⟦·⟧ : L → Φ → Φ. The denotation of a rule in G
is always written as a function of the denotations of that rule's children. For example, a denotation
for the grammar in Eq. 2 is (where I is a program input):
⟦E1 + E2⟧(I) = ⟦E1⟧(I) + ⟦E2⟧(I)        ⟦r ∈ R⟧(I) = r        ⟦x⟧(I) = I        (3)
Defining the denotations for a grammar is straightforward and analogous to writing a "wrapper
library" around the core primitives of the SMT solver. Our formalization factors out the grammar and
the denotation, but they are tightly coupled and, in other synthesis tools, written down together [7, 9].
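For instance, here is a minimal sketch of such a wrapper for the grammar of Eq. 2, using the Z3 Python API; this is our illustration of Eq. 3, not the authors' implementation, and the function names are ours.

from z3 import Real

def denote_sum(e1, e2):              # [[E1 + E2]](I) = [[E1]](I) + [[E2]](I)
    return lambda I: e1(I) + e2(I)

def denote_const(name):              # [[r in R]](I) = r, with r a solver variable
    r = Real(name)
    return lambda I: r

def denote_input():                  # [[x]](I) = I
    return lambda I: I

prog = denote_sum(denote_input(), denote_const('r'))
expr = prog(Real('I'))               # the Z3 expression I + r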
The denotation shows how to construct an SMT expression from a single program in L, and we
use it to build an SMT expression that represents the space of all programs such that its solution
tells which program in the space solves the synthesis problem. The SMT solver then solves jointly
for the program and its inputs, subject to an upper bound upon the total description length. This
builds upon prior work in program synthesis, such as [9], but departs in the quantitative aspect of
the constraints and in not knowing the program inputs. Due to space constraints, we only briefly
describe the synthesis algorithm, leaving a detailed discussion to the Supplement.
We use Algorithm 1 to generate an SMT formula that (1) defines the space of programs L; (2)
computes the description length of a program; and (3) computes the output of a program on a given
input. In Algorithm 1 the returned description length l corresponds to the − log Pf(f) term of Eq.
1, while the returned evaluator f(·) gives us the f(Ii) terms. The returned constraints A ensure that
the program computed by f(·) is a member of L.
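To make the encoding concrete, the following is a hedged Z3-flavored sketch of what a depth-one expansion of the grammar in Eq. 2 could compile to: fresh indicator booleans choose a production, the evaluator is an if-then-else over the choices, and the assertions force exactly one indicator on. The recursive production is omitted here by the depth bound, and the per-rule code lengths are hypothetical constants of our choosing.

from z3 import Bool, Real, If, Or, Not, And
import math

I = Real('I')
c1, c2 = Bool('c1'), Bool('c2')          # indicators: constant rule vs. input rule
r = Real('r')                            # the constant, solved for jointly
f_of_I = If(c1, r, I)                    # f(I) = if(c1, r, if(c2, I, ...))
l_const, l_input = 8.0, 1.0              # hypothetical per-rule description lengths
length = math.log(2) + If(c1, l_const, l_input)
A = And(Or(c1, c2), Not(And(c1, c2)))    # exactly one production is chosen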
The SMT formula generated by Algorithm 1 must be supplemented with constraints that compute
the data reconstruction error and data encoding length of Eq. 1. We handle infinitely recursive
grammars by bounding the depth of recursive calls to the Generate procedure, as in [7]. SMT
solvers are not designed to minimize loss functions, but to verify the satisfiability of a set of
constraints. We minimize Eq. 1 by first asking the solver for any solution, then adding a constraint
saying its solution must have smaller description length than the one found previously, etc., until it
can find no better solution.

Algorithm 1 SMT encoding of programs generated by production P of grammar G
function Generate(G, ⟦·⟧, P):
Input: Grammar G, denotation ⟦·⟧, non-terminal P
Output: Description length l : Φ, evaluator f : Φ → Φ, assertions A : 2^Φ
  choices ← {P → K(P′, P″, . . .) ∈ G}
  n ← |choices|
  for r = 1 to n do
    let K(P_r1, . . . , P_rk) = choices(r)
    for j = 1 to k do
      l_rj, f_rj, A_rj ← Generate(G, ⟦·⟧, P_rj)
    end for
    l_r ← Σ_j l_rj
    // Denotation is a function of child denotations
    // Let g_r be that function for choices(r)
    // Q1, . . . , Qk : L are arguments to constructor K
    let g_r(⟦Q1⟧(I), . . . , ⟦Qk⟧(I)) = ⟦K(Q1, . . . , Qk)⟧(I)
    f_r(I) ← g_r(f_r1(I), . . . , f_rk(I))
  end for
  // Indicator variables specifying which rule is used
  // Fresh variables unused in any existing formula
  c1, . . . , cn = FreshBooleanVariable()
  A1 ← ∨_j cj
  A2 ← ∀ j ≠ k : ¬(cj ∧ ck)
  A ← A1 ∧ A2 ∧ ⋃_{r,j} A_rj
  l = log n + if(c1, l1, if(c2, l2, . . .))
  f(I) = if(c1, f1(I), if(c2, f2(I), . . .))
  return l, f, A
3
Experiments
3.1
Visual concept learning
Humans quickly learn new visual concepts, often from only a few examples [2, 5, 10]. In this
section, we present evidence that an unsupervised program synthesis approach can also learn visual
concepts from a small number of examples.
Our approach is as follows: given a set of example images, we automatically parse them into a
symbolic form. Then, we synthesize a program that maximally compresses these parses. Intuitively,
this program encodes the common structure needed to draw each of the example images.
We take our visual concepts from the Synthetic Visual Reasoning Test (SVRT), a set of visual
classification problems which are easily parsed into distinct shapes. Fig. 1 shows three examples
of SVRT concepts. Fig. 2 diagrams the parsing procedure for another visual concept: two arbitrary
shapes bordering each other.
We defined a space of simple graphics programs that control a turtle [11] and whose primitives
include rotations, forward movement, rescaling of shapes, etc.; see Table 1. Both the learner's
observations and the graphics program outputs are image parses, which have three sections: (1) A
list of shapes. Each shape is a tuple of a unique ID, a scale from 0 to 1, and x, y coordinates:
⟨id, scale, x, y⟩. (2) A list of containment relations contains(i, j) where i, j range from one to the
number of shapes in the parse. (3) A list of reflexive borders relations borders(i, j) where i, j range
from one to the number of shapes in the parse.
The algorithm in Section 2.2 describes purely functional programs (programs without state), but
the grammar in Table 1 contains imperative commands that modify a turtle's state. We can think
of imperative programs as syntactic sugar for purely functional programs that pass around a state
variable, as is common in the programming languages literature [7].
The grammar of Table 1 leaves unspecified the number of program inputs. When synthesizing a
program from example images, we perform a grid search over the number of inputs. Given images
with N shapes and maximum shape ID D, the grid search considers D input shapes, 1 to N input
positions, 0 to 2 input lengths and angles, and 0 to 1 input scales. We set the number of imperative
draw commands (resp. borders, contains) to N (resp. number of topological relations).
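A sketch of that grid search; synthesize is an assumed wrapper around Algorithm 1 for a fixed input signature, returning a program and its description length, or None on failure.

from itertools import product

def search_input_counts(parses, N, D, synthesize):
    best, best_len = None, float('inf')
    for pos, lens, angs, scales in product(range(1, N + 1), range(3),
                                           range(3), range(2)):
        result = synthesize(parses, shapes=D, positions=pos,
                            lengths=lens, angles=angs, scales=scales)
        if result is not None:
            prog, length = result
            if length < best_len:        # keep the most compressive program
                best, best_len = prog, length
    return best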
We now define a noise model Px|z(·|·) that specifies how a program output z produces a parse x,
by defining a procedure for sampling x given z. First, the x and y coordinates of each shape are
perturbed by additive noise drawn uniformly from −σ to σ; in our experiments, we put σ = 3.
Then, optional borders and contains relations (see Table 1) are erased with probability 1/2. Last,
because the order of the shapes is unidentifiable, both the list of shapes and the indices of the
borders/containment relations are randomly permuted. The Supplement has the SMT encoding of
the noise model and priors over program inputs, which are uniform.
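A short sketch of this sampling procedure, under our reading of the noise model; the optional flag on a relation is an assumed attribute.

import random

def corrupt(shapes, relations, sigma=3):
    noisy = [(sid, scale,
              x + random.uniform(-sigma, sigma),
              y + random.uniform(-sigma, sigma))
             for (sid, scale, x, y) in shapes]
    kept = [rel for rel in relations
            if not rel.optional or random.random() < 0.5]
    random.shuffle(noisy)                 # shape order is unidentifiable
    random.shuffle(kept)
    return noisy, kept

A full implementation would also remap the relation indices after the shuffle; we omit that bookkeeping here.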
teleport(position[0],
initialOrientation)
draw(shape[0], scale = 1)
move(distance[0], 0deg)
draw(shape[0], scale = scale[0])
move(distance[0], 0deg)
draw(shape[0], scale = scale[0])
Figure 1: Left: Pairs of examples of three SVRT concepts taken from [5]. Right: the program we
synthesize from the leftmost pair. This is a turtle program capable of drawing this pair of pictures and
is parameterized by a set of latent variables: shape, distance, scale, initial position, initial orientation.
To encourage translational and rotational invariance,
the first turtle command is constrained to always be a
teleport to a new location, and the initial orientation of
the turtle, which we write as θ0, is made an input to the
synthesized graphics program.
We are introducing an unsupervised learning algorithm,
but the SVRT consists of supervised binary classification problems. So we chose to evaluate our visual
concept learner by having it solve these classification
problems. Given a test image t and a set of examples E1 (resp. E2) from class C1 (resp. C2), we use
the decision rule P(t|E1) ≷ P(t|E2), choosing C1 if the left side is larger and C2 otherwise, or
equivalently Px({t} ∪ E1) Px(E2) ≷ Px(E1) Px({t} ∪ E2). Each term in this decision rule is written
as a marginal probability, and we approximate each marginal by lower bounding it by the largest
term in its corresponding sum. This gives

−l({t} ∪ E1) − l(E2) ≷ −l(E1) − l({t} ∪ E2),        (4)

where each −l(·) term lower bounds the corresponding log Px(·).

Figure 2: The parser segments shapes and identifies their topological relations (contains, borders),
emitting their coordinates, topological relations, and scales. Example parse: s1 = Shape(id = 1,
scale = 1, x = 10, y = 15); s2 = Shape(id = 2, scale = 1, x = 27, y = 54); borders(s1, s2).
Grammar rule                      English description
E → (M; D)+; C+; B+               Alternate move/draw; containment relations; borders relations
M → teleport(R, θ0)               Move turtle to new location R, reset orientation to θ0
M → move(L, A)                    Rotate by angle A, go forward by distance L
M → flipX() | flipY()             Flip turtle over X/Y axis
M → jitter()                      Small perturbation to turtle position
D → draw(S, Z)                    Draw shape S at scale Z
Z → 1 | z1 | z2 | ···             Scale is either 1 (no rescaling) or a program input zj
A → 0° | ±90° | θ1 | θ2 | ···     Angle is either 0°, ±90°, or a program input θj
R → r1 | r2 | ···                 Positions are program inputs rj
S → s1 | s2 | ···                 Shapes are program inputs sj
L → ℓ1 | ℓ2 | ···                 Lengths are program inputs ℓj
C → contains(Z, Z)                Containment between integer indices into drawn shapes
C → contains?(Z, Z)               Optional containment between integer indices into drawn shapes
B → borders(Z, Z)                 Bordering between integer indices into drawn shapes
B → borders?(Z, Z)                Optional bordering between integer indices into drawn shapes

Table 1: Grammar for the vision domain. The non-terminal E is the start symbol for the grammar.
The token ; indicates sequencing of imperative commands. Optional bordering/containment holds
with probability 1/2. See the Supplement for denotations of each grammar rule.
where l(·) is

l(E) ≜ min_{f, {Ie}_{e∈E}} ( − log Pf(f) − Σ_{e∈E} [ log PI(Ie) + log Px|z(e | f(Ie)) ] ).        (5)
So, we induce 4 programs that maximally compress a different set of image parses: E1, E2, E1 ∪ {t},
E2 ∪ {t}. The maximally compressive program is found by minimizing Eq. 5, putting the
observations {xi} as the image parses, putting the inputs {Ie} as the parameters of the graphics
program, and generating the program f(·) by passing the grammar of Table 1 to Algorithm 1.
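In code, the resulting decision is a comparison of description lengths; description_length below is an assumed stand-in for minimizing Eq. 5 with the solver.

def classify(t, E1, E2, description_length):
    lhs = -description_length(E1 | {t}) - description_length(E2)
    rhs = -description_length(E1) - description_length(E2 | {t})
    return 'C1' if lhs > rhs else 'C2'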
We evaluated the classification accuracy across each of the 23 SVRT problems by sampling three
positive and negative examples from each class, and then evaluating the accuracy on a held out
test example. 20 such estimates were made for each problem. We compare with three baselines, as
shown in Fig. 3. (1) To control for the effect of our parser, we consider how well discriminative classification on the image parses performs. For each image parse, we extracted the following features:
number of distinct shapes, number of rescaled shapes, and number of containment/bordering relations, for 4 integer valued features. Following [5] we used Adaboost with decision stumps on these
parse features. (2) We trained two convolutional network architectures for each SVRT problem, and
found that a variant of LeNet5 [12] did best; we report those results here. The Supplement has the
network parameters and results for both architectures. (3) In [5] several discriminative baselines
are introduced. These models are trained on low-level image features; we compare with their bestperforming model, which fed 10000 examples to Adaboost with decision stumps. Unsupervised
program synthesis does best in terms of average classification accuracy, number of SVRT problems
solved at ≥ 90% accuracy,1 and correlation with the human data.
We do not claim to have solved the SVRT. For example, our representation does not model some geometric transformations needed for some of the concepts, such as rotations of shapes. Additionally,
our parsing procedure occasionally makes mistakes, which accounts for the many tasks we solve at
accuracies between 90% and 100%.
3.2
Morphological rule learning
How might a language learner discover the rules that inflect verbs? We focus on English inflectional
morphology, a system with a long history of computational modeling [13]. Viewed as an unsupervised learning problem, our objective is to find a compressive representation of English verbs.
1
Humans "learn the task" after seven consecutive correct classifications [5]. Seven correct classifications
are likely to occur when classification accuracy is ≥ 0.5^(1/7) ≈ 0.9.
Figure 3: Comparing human performance on the
SVRT with classification accuracy for machine
learning approaches. Human accuracy is the
fraction of humans that learned the concept: 0%
is chance level. Machine accuracy is the fraction
of correctly classified held out examples: 50% is
chance level. Area of circles is proportional to
the number of observations at that point. Dashed
line is average accuracy. Program synthesis: this
work trained on 6 examples. ConvNet: A variant
of LeNet5 trained on 2000 examples. Parse (Image) features: discriminative learners on features
of parse (pixels) trained on 6 (10000) examples.
Humans given an average of 6.27 examples and
solve an average of 19.85 problems [5].
We make the following simplification:
our learner is presented with triples of
⟨lexeme, tense, word⟩.2 This ignores many of the difficulties involved in language acquisition, but see [14] for an unsupervised approach to extracting similar information from corpora. We
can think of these triples as the entries of a matrix whose columns correspond to different tenses
and whose rows correspond to different lexemes; see Table 3. We regard each row of this matrix
as an observation (the {xi } of Eq. 1) and identify stems with the inputs to the program we are to
synthesize (the {Ii } of Eq. 1). Thus, our objective is to synthesize a program that maps a stem to a
tuple of inflections. We put a description length prior over the stem and detail its SMT encoding in
the the Supplement. We represent words as sequences of phonemes, and define a space of programs
that operate upon words, given in Table 2.
English inflectional verb morphology has a set of regular rules that apply for almost all words, as
well as a small set of words whose inflections do not follow a regular rule: the "irregular" forms.
We roll these irregular forms into the noise model: with some small probability ε, an inflected form
is produced not by applying a rule to the stem, but by drawing a sequence of phonemes from a
description length prior. In our experiments, we put ε = 0.1. This corresponds to a simple "rules
plus lexicon" model of morphology, which is oversimplified in many respects but has been proposed
in the past as a crude approximation to the actual system of English morphology [13]. See the
Supplement for the SMT encoding of our noise model.
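The likelihood this implies for a surface form is a two-component mixture, sketched below with ε = 0.1; rule applies the regular transformation and log_prior scores exception strings under the description length prior, both assumed helpers.

import math

def log_px(word, stem, rule, log_prior, epsilon=0.1):
    p_rule = (1 - epsilon) if word == rule(stem) else 0.0   # regular route
    p_lex = epsilon * math.exp(log_prior(word))             # memorized exception
    return math.log(p_rule + p_lex)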
In conclusion, the learning problem is as follows: given triples of ⟨lexeme, tense, word⟩, jointly infer
the regular rules, the stems, and which words are irregular exceptions.
We took five inflected forms of the top 5000 lexemes as measured by token frequency in the CELEX
lexical inventory [15]. We split this in half to give 2500 lexemes for training and testing, and
trained our model using Random Sample Consensus (RANSAC) [16]. Concretely, we sampled many
subsets of the data, each with 4, 5, 6, or 7 lexemes (thus 20, 25, 30, or 35 words), and synthesized
the program for each subset minimizing Eq. 1. We then took the program whose likelihood on the
training set was highest. Fig. 4 plots the likelihood on the testing set as a function of the number of
subsets (RANSAC iterations) and the size of the subsets (# of lexemes). Fig. 5 shows the program
that assigned the highest likelihood to the training data; it also had the highest likelihood on the
testing data. With 7 lexemes, the learner consistently recovers the regular linguistic rule, but with
less data, it recovers rules that are almost as good, degrading more as it receives less data.
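A compact sketch of this RANSAC-style loop; synthesize, which minimizes Eq. 1 on a subset, and log_likelihood are assumed stand-ins for the solver-based machinery.

import random

def ransac_train(train_lexemes, iters, synthesize, log_likelihood, k=7):
    best, best_ll = None, float('-inf')
    for _ in range(iters):
        subset = random.sample(train_lexemes, k)    # k lexemes, 5 words each
        model = synthesize(subset)                  # minimize Eq. 1 on the subset
        ll = log_likelihood(model, train_lexemes)   # score on the training set
        if ll > best_ll:
            best, best_ll = model, ll
    return best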
Most prior work on morphological rule learning falls into two regimes: (1) supervised learning of
the phonological form of morphological rules; and (2) unsupervised learning of morphemes from
corpora. Because we learn from the lexicon, our model is intermediate in terms of supervision. We
compare with representative systems from both regimes as follows:
2
The lexeme is the meaning of the stem or root; for example, run, ran, runs all share the same lexeme
Grammar rule                      English description
E → ⟨C, ···, C⟩                   Programs are tuples of conditionals, one for each tense
C → R | if (G) R else C           Conditionals have return value R, guard G, else condition C
R → stem + phoneme*               Return values append a suffix to a stem
G → [V P M S]                     Guards condition upon voicing, manner, place, sibilancy
V → V′ | ∅                        Voicing specifies a voicing V′ or doesn't care
V′ → VOICED | UNVOICED            Voicing options
P → P′ | ∅                        Place specifies a place of articulation P′ or doesn't care
P′ → LABIAL | ···                 Place of articulation features
M → M′ | ∅                        Manner specifies a manner of articulation M′ or doesn't care
M′ → FRICATIVE | ···              Manner of articulation features
S → S′ | ∅                        Sibilancy specifies a sibilancy S′ or doesn't care
S′ → SIBILANT | NOTSIBIL          Sibilancy is a binary feature

Table 2: Grammar for the morphology domain. The non-terminal E is the start symbol for
the grammar. Each guard G conditions on phonological properties of the end of the stem:
voicing, place, manner, and sibilancy. Sequences of phonemes are encoded as tuples of
⟨length, phoneme1, phoneme2, ···⟩. See the Supplement for denotations of each grammar rule.
Lexeme      Present      Past          3rd Sing. Pres.   Past Part.    Prog.
style       staIl        staIld        staIlz            staIld        staIlIN
run         r2n          ræn           r2nz              r2n           r2nIN
subscribe   s@bskraIb    s@bskraIbd    s@bskraIbz        s@bskraIbd    s@bskraIbIN
rack        ræk          rækt          ræks              rækt          rækIN

Table 3: Example input to the morphological rule learner.
The Morfessor system [17] induces morphemes from corpora which it then uses for segmentation.
We used Morfessor to segment phonetic forms of the inflections of our 5000 lexemes; compared
to the ground truth inflection transforms provided by CELEX, it has an error rate of 16.43%. Our
model segments the same verbs with an error rate of 3.16%. This experiment is best seen as a sanity
check: because our system knows a priori to expect only suffixes and knows which words must share
the same stem, we expect better performance due to our restricted hypothesis space. To be clear, we
are not claiming that we have introduced a stemmer that exceeds or even meets the state-of-the-art.
In [1] Albright and Hayes introduce a supervised morphological rule learner that induces phonological rules from examples of a stem being transformed into its inflected form. Because our model
learns a joint distribution over all of the inflected forms of a lexeme, we can use it to predict inflections conditioned upon their present tense. Our model recovers the regular inflections, but does not
recover the so-called "islands of reliability" modeled in [1]; e.g., our model predicts that the past
tense of the nonce word glee is gleed, but does not predict that a plausible alternative past tense is
gled, which the model of Albright and Hayes does. This deficiency is because the space of programs
in Table 2 lacks the ability to express this class of rules.
4
Discussion
4.1
Related Work
Inductive programming systems have a long and rich history [4]. Often these systems use stochastic
search algorithms, such as genetic programming [18] or MCMC [19]. Others sufficiently constrain
the hypothesis space to enable fast exact inference [20]. The inductive logic programming community has had some success inducing Prolog programs using heuristic search [4]. Our work is
motivated by the recent successes of systems that put program synthesis in a probabilistic framework [21, 22]. The program synthesis community introduced solver-based methods for learning
programs [7, 23, 9], and our work builds upon their techniques.
7
Figure 4: Learning curves for our morphology model trained using RANSAC. At each
iteration, we sample 4, 5, 6, or 7 lexemes from the training data, fit a model using
their inflections, and keep the model if it has higher likelihood on the training data than
other models found so far. Each line was run on a different permutation of the samples.

Figure 5: Program synthesized by the morphology learner (the Past Participle program was the
same as the past tense program):

PRESENT  = stem
PAST     = if [CORONAL STOP] stem + Id
           if [VOICED] stem + d
           else stem + t
PROG.    = stem + IN
3rd Sing = if [SIBILANT] stem + Iz
           if [VOICED] stem + z
           else stem + s
There is a vast literature on computational models of morphology. These include systems that learn
the phonological form of morphological rules [1, 13, 24], systems that induce morphemes from
corpora [17, 25], and systems that learn the productivity of different rules [26]. In using a general
framework, our model is similar in spirit to the early connectionist accounts [24], but our use of
symbolic representations is more in line with accounts proposed by linguists, like [1].
Our model of visual concept learning is similar to inverse graphics, but the emphasis upon synthesizing programs is more closely aligned with [2]. We acknowledge that convolutional networks are
engineered to solve classification problems qualitatively different from the SVRT, and that one could
design better neural network architectures for these problems. For example, it would be interesting
to see how the very recent DRAW network [27] performs on the SVRT.
4.2
A limitation of the approach: Large datasets
Synthesizing programs from large datasets is difficult, and complete symbolic solvers often do not
degrade gracefully as the problem size increases. Our morphology learner uses RANSAC to sidestep
this limitation, but we anticipate domains for which this technique will be insufficient. Prior work in
program synthesis introduced Counter-Example Guided Inductive Synthesis (CEGIS) [7] for learning from a large or possibly infinite family of examples, but it cannot accommodate noise in the data.
We suspect that a hypothetical RANSAC/CEGIS hybrid would scale to large, noisy training sets.
4.3
Future Work
The two key ideas in this work are (1) the encoding of soft probabilistic constraints as hard constraints for symbolic search, and (2) crafting a domain specific grammar that serves both to guide
the symbolic search and to provide a good inductive bias. Without a strong inductive bias, one cannot possibly generalize from a small number of examples. Yet humans can, and AI systems should,
learn over time what constitutes a good prior, hypothesis space, or sketch. Learning a good inductive
bias, as done in [22], and then providing that inductive bias to a solver, may be a way of advancing
program synthesis as a technology for artificial intelligence.
Acknowledgments
We are grateful for discussions with Timothy O'Donnell on morphological rule learners, for advice
from Brendan Lake and Tejas Kulkarni on the convolutional network baselines, and for the suggestions of our anonymous reviewers. This material is based upon work supported by funding from
NSF award SHF-1161775, from the Center for Minds, Brains and Machines (CBMM) funded by
NSF STC award CCF-1231216, and from ARO MURI contract W911NF-08-1-0242.
References
[1] Adam Albright and Bruce Hayes. Rules vs. analogy in english past tenses: A computational/experimental
study. Cognition, 90:119-161, 2003.
[2] Brenden M Lake, Ruslan R Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems, pages 2526-2534, 2013.
[3] Noah D. Goodman, Vikash K. Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum.
Church: a language for generative models. In UAI, pages 220-229, 2008.
[4] Sumit Gulwani, Jose Hernandez-Orallo, Emanuel Kitzelmann, Stephen Muggleton, Ute Schmid, and Ben
Zorn. Inductive programming meets the real world. Commun. ACM, 2015.
[5] François Fleuret, Ting Li, Charles Dubout, Emma K Wampler, Steven Yantis, and Donald Geman. Comparing machines and humans on a visual categorization test. PNAS, 108(43):17621-17625, 2011.
[6] Ray J Solomonoff. A formal theory of inductive inference. Information and Control, 7(1):1-22, 1964.
[7] Armando Solar Lezama. Program Synthesis By Sketching. PhD thesis, EECS Department, University of
California, Berkeley, Dec 2008.
[8] Leonardo De Moura and Nikolaj Bjørner. Z3: An efficient smt solver. In Tools and Algorithms for the
Construction and Analysis of Systems, pages 337-340. Springer, 2008.
[9] Emina Torlak and Rastislav Bodik. Growing solver-aided languages with rosette. In Proceedings of the
2013 ACM international symposium on New ideas, new paradigms, and reflections on programming &
software, pages 135-152. ACM, 2013.
[10] Stanislas Dehaene, Véronique Izard, Pierre Pica, and Elizabeth Spelke. Core knowledge of geometry in
an amazonian indigene group. Science, 311(5759):381-384, 2006.
[11] David D. Thornburg. Friends of the turtle. Compute!, March 1983.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[13] Mark S Seidenberg and David C Plaut. Quasiregularity and its discontents: the legacy of the past tense
debate. Cognitive Science, 38(6):1190-1228, 2014.
[14] Erwin Chan and Constantine Lignos. Investigating the relationship between linguistic representation and
computation through an unsupervised model of human morphology learning. Research on Language and
Computation, 8(2-3):209-238, 2010.
[15] R. H. Baayen, R. Piepenbrock, and L. Gulikers. CELEX2 LDC96L14. Philadelphia: Linguistic Data Consortium, 1995. Web download.
[16] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with
applications to image analysis and automated cartography. Commun. ACM, 24(6):381-395, June 1981.
[17] Sami Virpioja, Peter Smit, Stig-Arne Grönroos, and Mikko Kurimo. Morfessor 2.0: Python implementation
and extensions for morfessor baseline. Technical report, Aalto University, Helsinki, 2013.
[18] John R. Koza. Genetic programming - on the programming of computers by means of natural selection.
Complex adaptive systems. MIT Press, 1993.
[19] Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. In ACM SIGARCH Computer Architecture News, volume 41, pages 305-316. ACM, 2013.
[20] Sumit Gulwani. Automating string processing in spreadsheets using input-output examples. In POPL,
pages 317-330, New York, NY, USA, 2011. ACM.
[21] Yarden Katz, Noah D. Goodman, Kristian Kersting, Charles Kemp, and Joshua B. Tenenbaum. Modeling
semantic cognition as logical dimensionality reduction. In CogSci, pages 71-76, 2008.
[22] Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical bayesian approach.
In Johannes Fürnkranz and Thorsten Joachims, editors, ICML, pages 639-646. Omnipress, 2010.
[23] Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkatesan. Synthesis of loop-free programs. In PLDI, pages 62-73, New York, NY, USA, 2011. ACM.
[24] D. E. Rumelhart and J. L. McClelland. On learning the past tenses of english verbs. In Parallel distributed processing: Explorations in the microstructure of cognition, pages Volume 2, 216-271. Bradford
Books/MIT Press, 1986.
[25] John Goldsmith. Unsupervised learning of the morphology of a natural language. Comput. Linguist.,
27(2):153-198, June 2001.
[26] Timothy J. O'Donnell. Productivity and Reuse in Language: A Theory of Linguistic Computation and
Storage. The MIT Press, 2015.
[27] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for
image generation. CoRR, abs/1502.04623, 2015.
| 5785 |@word briefly:1 compression:1 seek:2 q1:2 shot:1 reduction:1 initial:3 wrapper:1 contains:7 daniel:1 genetic:2 document:1 past:11 existing:1 z2:1 comparing:2 superoptimization:1 yet:1 written:3 must:3 parsing:2 realize:1 stemming:1 additive:1 john:2 shape:29 piepenbrock:1 treating:2 interpretable:1 designed:2 prk:1 plot:1 v:1 generative:3 leaf:1 half:1 intelligence:1 ivo:1 accordingly:1 core:2 foreseeable:1 lr:1 provides:1 plaut:1 location:2 lexicon:2 evaluator:2 five:1 rc:2 guard:3 c2:4 wierstra:1 symposium:1 consists:1 combine:1 fitting:1 ray:1 inside:1 emma:1 manner:5 introduce:3 dan:1 pr1:1 growing:1 morphology:12 brain:3 terminal:4 salakhutdinov:1 oversimplified:1 automatically:1 actual:1 pf:4 solver:12 equipped:1 provided:1 discover:1 inflectional:3 what:1 unspecified:1 string:1 degrading:1 compressive:3 finding:1 unobserved:1 transformation:1 quantitative:1 berkeley:1 hypothetical:1 control:3 danihelka:1 positive:1 engineering:1 modify:1 mistake:1 encoding:8 id:5 meet:2 hernandez:1 might:2 chose:1 plus:1 emphasis:1 k:1 specifying:1 range:2 unique:1 acknowledgment:1 lecun:1 testing:3 practice:1 recursive:2 procedure:4 area:1 thought:1 schkufza:1 word:9 induce:3 regular:5 donald:1 symbolic:10 consortium:1 cannot:2 selection:1 put:4 context:2 applying:2 writing:2 storage:1 deterministic:1 map:1 lexical:1 reviewer:1 center:1 primitive:4 straightforward:1 go:1 focused:1 estimator:1 rule:34 ity:1 embedding:1 handle:1 coordinate:3 constructor:1 analogous:1 resp:4 construction:1 parser:2 modulo:1 exact:1 programming:9 us:2 mikko:1 hypothesis:4 synthesize:5 roy:1 recognition:1 jk:1 rumelhart:1 predicts:1 muri:1 geman:1 observed:4 steven:1 solved:3 news:1 morphological:9 lrj:2 movement:1 rescaled:1 highest:3 counter:1 ran:1 sugar:1 fischler:1 lezama:2 trained:7 grateful:1 solving:1 segment:3 ror:1 purely:2 upon:9 f2:1 learner:11 completely:1 eric:1 easily:1 joint:2 fewest:1 distinct:2 fast:1 describe:1 coronal:1 artificial:1 cogsci:1 tell:1 kevin:1 sanity:1 whose:5 encoded:1 heuristic:1 solve:4 valued:1 say:1 drawing:3 plausible:1 grammar:23 ability:1 think:2 jointly:2 noisy:4 syntactic:1 sequence:3 took:2 reconstruction:2 aro:1 reset:1 fr:2 hid:1 aligned:1 loop:1 translate:1 description:12 inducing:1 prj:1 r1:1 produce:2 generating:1 adam:1 categorization:1 ben:1 object:1 karol:1 friend:1 recurrent:1 measured:1 mansinghka:1 keith:1 progress:1 strong:1 eq:9 solves:2 ois:1 come:1 guided:1 closely:1 correct:2 stochastic:2 exploration:1 human:13 engineered:1 enable:1 material:1 require:1 f1:1 microstructure:1 anonymous:1 anticipate:1 extension:1 hold:1 around:2 sufficiently:1 teleport:3 cbmm:1 ground:1 cognition:3 predict:2 bj:1 claim:1 m0:3 consecutive:1 early:1 purpose:2 ruslan:1 largest:1 pres:1 tool:4 mit:7 rough:1 always:2 fricative:1 kersting:1 command:4 linguistic:6 encode:1 focus:1 june:2 joachim:1 consistently:1 sequencing:1 indicates:1 likelihood:5 check:1 cartography:1 aalto:1 brendan:1 inflection:8 baseline:6 inference:6 dependent:1 suffix:2 relation:11 transformed:1 pixel:1 translational:1 classification:11 orientation:3 morpheme:3 priori:1 spatial:1 constrained:2 art:1 frk:1 marginal:2 construct:1 phonological:5 having:2 armando:2 manually:1 sampling:2 placing:1 represents:1 unsupervised:17 constitutes:1 icml:1 future:2 stanislas:1 report:2 others:1 connectionist:1 few:3 franc:1 randomly:1 tightly:1 geometry:1 ab:1 highly:1 mixture:1 held:2 kt:2 tuple:2 capable:1 encourage:1 old:1 bestperforming:1 circle:1 causal:1 column:1 modeling:3 elli:1 asking:1 soft:1 assertion:1 ar:1 
w911nf:1 reflexive:1 introducing:1 imperative:4 entry:1 subset:4 uniform:2 successful:1 gr:3 graphic:6 sumit:3 perturbed:1 eec:1 synthetic:1 density:1 international:1 ie:4 csail:2 subscribe:1 probabilistic:8 donnell:2 contract:1 automating:1 michael:1 ashish:1 synthesis:24 together:2 sketching:3 quickly:1 thesis:2 central:1 possibly:2 cognitive:3 book:1 sidestep:1 style:1 return:3 rescaling:2 li:1 prolog:1 account:3 de:1 stump:2 jha:1 depends:1 root:1 doing:1 start:2 recover:4 option:1 parallel:1 solar:2 bruce:1 voiced:3 contribution:1 minimize:3 accuracy:10 convolutional:3 qk:2 phoneme:4 efficiently:2 roll:1 yield:1 correspond:2 identify:1 generalize:1 bayesian:2 produced:1 history:2 classified:1 moura:1 acquisition:1 frequency:1 involved:1 e2:11 recovers:3 con:1 sampled:1 stop:1 dataset:1 emanuel:1 massachusetts:3 logical:1 knowledge:4 dimensionality:1 satisfiability:1 segmentation:1 formalize:1 tiwari:1 nikolaj:1 higher:1 supervised:4 follow:1 adaboost:2 maximally:3 rahul:1 unidentifiable:1 evaluated:1 done:1 dubout:1 until:1 correlation:1 sketch:7 receives:1 parse:8 web:1 bordering:5 nonlinear:1 lack:1 rack:1 defines:1 believe:1 usa:2 effect:1 concept:13 contain:1 verify:1 tense:11 inductive:10 ccf:1 assigned:1 jbt:1 semantic:1 leftmost:1 outline:1 complete:1 goldsmith:1 bolles:1 performs:2 l1:1 reflection:1 percy:1 omnipress:1 reasoning:1 image:15 meaning:2 funding:1 charles:2 common:2 rotation:2 permuted:1 functional:2 volume:2 extend:1 katz:1 synthesized:3 orallo:1 ai:1 rd:1 grid:2 pldi:1 language:12 had:3 reliability:1 funded:1 ute:1 specification:1 supervision:1 etc:3 sibilant:2 recent:3 chan:1 constantine:1 commun:2 occasionally:1 phonetic:1 binary:2 success:2 yi:1 joshua:3 seen:1 care:4 sharma:1 paradigm:2 venkatesan:1 dashed:1 ii:8 arithmetic:1 stephen:1 desirable:1 rj:1 infer:1 stem:15 pnas:1 exceeds:1 technical:2 muggleton:1 long:2 arne:1 e1:10 award:2 variant:2 ransac:5 vision:2 noiseless:1 erwin:1 iteration:2 represent:2 spreadsheet:1 dec:1 c1:6 irregular:3 conditionals:3 diagram:1 else:4 leaving:1 goodman:2 operate:1 popl:1 induced:3 smt:17 subject:1 suspect:1 dehaene:1 member:1 spirit:1 jordan:1 call:1 integer:5 ee:1 extracting:1 unused:1 intermediate:1 split:1 bengio:1 sami:1 automated:1 fit:1 zi:4 architecture:4 idea:5 cn:1 knowing:1 haffner:1 vikash:1 jxk:1 expression:6 motivated:1 gulwani:3 passed:1 solomonoff:1 reuse:1 peter:1 returned:3 passing:1 york:2 compositional:2 linguist:2 deep:1 useful:1 fleuret:1 detailed:1 clear:1 johannes:1 amount:1 nonparametric:1 transforms:1 tenenbaum:4 induces:2 mcclelland:1 reduced:1 generate:4 specifies:6 outperform:1 zj:1 nsf:2 koza:1 correctly:1 klein:1 write:6 urnkranz:1 iz:1 express:1 group:1 putting:2 key:1 inflected:5 demonstrating:1 spelke:1 drawn:5 aiken:1 advancing:1 vast:2 zorn:1 fraction:2 year:1 sum:2 run:4 inverse:1 jose:1 angle:3 parameterized:2 jitter:1 place:5 saying:1 almost:2 prog:2 family:1 lake:2 draw:12 decision:4 bit:1 bound:1 simplification:1 topological:3 frj:1 denotation:12 occur:1 constraint:8 deficiency:1 constrain:1 noah:2 leonardo:1 software:1 encodes:1 helsinki:1 alex:2 aspect:1 turtle:9 argument:1 min:1 px:12 martin:1 department:3 alternate:1 pica:1 march:1 jr:1 remain:1 smaller:1 idence:1 describes:1 across:1 vpms:1 island:1 elizabeth:1 s1:3 intuitively:1 restricted:1 thorsten:1 taken:2 previously:1 needed:2 mind:1 know:2 flip:1 participle:1 tractable:1 cepts:2 serf:2 end:3 fed:1 generalizes:1 apply:2 hierarchical:1 voicing:4 stig:1 pierre:1 alternative:1 voice:1 ajr:1 compress:2 top:1 include:3 
ensure:1 readable:2 sigarch:1 exploit:1 parsed:1 ting:1 build:3 gregor:1 lenet5:2 crafting:1 objective:3 move:5 unclear:1 gradient:1 distance:4 convnet:1 gracefully:1 degrade:1 seven:2 manifold:1 considers:1 consensus:2 kemp:1 fresh:1 length:15 index:5 relationship:1 modeled:1 rotational:1 minimizing:2 insufficient:1 providing:1 liang:1 equivalently:1 difficult:2 z3:1 robert:1 claiming:1 debate:1 negative:1 synthesizing:3 append:1 design:1 implementation:1 unknown:2 perform:1 upper:1 observation:6 unvoiced:1 datasets:2 sing:1 acknowledge:1 daan:1 november:1 optional:4 defining:3 perturbation:1 verb:7 arbitrary:1 brenden:1 community:3 download:1 introduced:4 inverting:1 pair:4 david:2 specified:1 z1:1 california:1 learned:1 below:1 articulation:4 regime:2 program:100 built:1 natural:4 difficulty:1 hybrid:1 predicting:1 indicator:1 representing:1 technology:4 library:1 picture:1 identifies:1 axis:1 church:1 autoencoder:1 coupled:1 schmid:1 philadelphia:1 rner:1 prior:13 literature:2 l2:1 geometric:1 python:1 graf:1 loss:1 par:5 expect:2 permutation:1 interesting:1 limitation:2 proportional:1 suggestion:1 analogy:1 generation:1 triple:3 editor:1 pi:3 share:2 production:2 row:2 r2n:2 token:2 supported:1 last:2 free:3 english:9 legacy:1 formal:2 guide:1 bias:4 institute:3 fall:1 stemmer:1 distributed:1 regard:1 curve:1 depth:1 evaluating:1 world:1 rich:1 computes:2 ignores:1 forward:2 made:3 concretely:1 doesn:4 qualitatively:1 adaptive:1 far:1 cope:1 sj:1 approximate:1 logic:1 keep:1 deg:2 hayes:3 uai:1 investigating:1 corpus:4 containment:7 tuples:3 xi:5 discriminative:3 search:9 latent:2 seidenberg:1 table:11 additionally:1 bonawitz:1 learn:10 nature:1 synthesizes:1 inventory:1 hc:1 bottou:1 complex:1 domain:13 stc:1 did:1 main:1 bounding:2 noise:7 border:10 s2:3 allowed:1 child:2 celex:2 fig:5 representative:1 advice:1 ny:2 formalization:2 position:5 explicit:2 comput:1 crude:1 perceptual:1 learns:1 kin:1 down:3 rk:1 departs:1 formula:3 specific:3 showing:1 supplemented:1 yantis:1 symbol:3 list:4 r2:1 intractable:2 cjk:1 pcfg:1 adding:1 smit:1 ci:1 supplement:7 phd:1 corr:1 accomodate:1 conditioned:1 timothy:2 likely:1 infinitely:1 visual:14 josh:1 springer:1 kristian:1 corresponds:2 truth:1 chance:2 extracted:1 acm:8 tejas:1 goal:2 viewed:1 erased:1 hard:1 aided:1 infinite:1 uniformly:1 total:1 lexeme:12 pas:1 invariance:1 albright:3 called:1 experimental:1 bradford:1 productivity:2 exception:1 formally:1 mark:1 rotate:1 kulkarni:1 evaluate:1 mcmc:1 |
5,286 | 5,786 | Deep Poisson Factor Modeling
Ricardo Henao, Zhe Gan, James Lu and Lawrence Carin
Department of Electrical and Computer Engineering
Duke University, Durham, NC 27708
{r.henao,zhe.gan,james.lu,lcarin}@duke.edu
Abstract
We propose a new deep architecture for topic modeling, based on Poisson Factor Analysis (PFA) modules. The model is composed of a Poisson distribution to
model observed vectors of counts, as well as a deep hierarchy of hidden binary
units. Rather than using logistic functions to characterize the probability that a
latent binary unit is on, we employ a Bernoulli-Poisson link, which allows PFA
modules to be used repeatedly in the deep architecture. We also describe an approach to build discriminative topic models, by adapting PFA modules. We derive
efficient inference via MCMC and stochastic variational methods, that scale with
the number of non-zeros in the data and binary units, yielding significant efficiency, relative to models based on logistic links. Experiments on several corpora
demonstrate the advantages of our model when compared to related deep models.
1 Introduction
Deep models, understood as multilayer modular networks, have been gaining significant interest
from the machine learning community, in part because of their ability to obtain state-of-the-art performance in a wide variety of tasks. Their modular nature is another reason for their popularity.
Commonly used modules include, but are not limited to, Restricted Boltzmann Machines (RBMs)
[10], Sigmoid Belief Networks (SBNs) [22], convolutional networks [18], feedforward neural networks, and Dirichlet Processes¹ (DPs). Perhaps the two most well-known deep model architectures
are the Deep Belief Network (DBN) [11] and the Deep Boltzmann Machine (DBM) [25], the former
composed of RBM and SBN modules, whereas the latter is purely built using RBMs.
Deep models are often employed in topic modeling. Specifically, hierarchical tree-structured models
have been widely studied over the last decade, often composed of DP modules. Examples of these
include the nested Chinese Restaurant Process (nCRP) [1], the hierarchical DP (HDP) [27], and
the nested HDP (nHDP) [23]. Alternatively, topic models built using modules other than DPs have
been proposed recently, for instance the Replicated Softmax Model (RSM) [12] based on RBMs,
the Neural Autoregressive Density Estimator (NADE) [17] based on neural networks, the Over-replicated Softmax Model (OSM) [26] based on DBMs, and Deep Poisson Factor Analysis (DPFA)
[6] based on SBNs.
DP-based models have attractive characteristics from the standpoint of interpretability, in the sense
that their generative mechanism is parameterized in terms of distributions over topics, with each
topic characterized by a distribution over words. Alternatively, non-DP-based models, in which
modules are parameterized by a deep hierarchy of binary units [12, 17, 26], do not have parameters
that are as readily interpretable in terms of topics of this type, although model performance is often
excellent. The DPFA model in [6] is one of the first representations that characterizes documents
based on distributions over topics and words, while simultaneously employing a deep architecture
based on binary units. Specifically, [6] integrates the capabilities of Poisson Factor Analysis (PFA)
¹ Deep models based on DP priors are usually called hierarchical models.
[32] with a deep architecture composed of SBNs [7]. PFA is a nonnegative matrix factorization
framework closely related to DP-based models. Results in [6] show that DPFA outperforms other
well-known deep topic models.
Building upon the success of DPFA, this paper proposes a new deep architecture for topic modeling, based entirely on PFA modules. Our model fundamentally merges two key aspects of DP and
non-DP-based architectures, namely: (i) its fully nonnegative formulation relies on Dirichlet distributions, and is thus readily interpretable throughout all its layers, not just at the base layer as in
DPFA [6]; (ii) it adopts the rationale of traditional non-DP-based models such as DBNs and DBMs,
by connecting layers via binary units, to enable learning of high-order statistics and structured correlations. The probability of a binary unit being on is controlled by a Bernoulli-Poisson link [30]
(rather than a logistic link, as in the SBN), allowing repeated application of PFA modules at all
layers of the deep architecture.
The main contributions of this paper are: (i) A deep architecture for topic models based entirely on
PFA modules. (ii) Unlike DPFA, which is based on SBNs, our model has inherent shrinkage in all
its layers, thanks to the DP-like formulation of PFA. (iii) DPFA requires sequential updates for its
binary units, while in our formulation these are updated in block, greatly improving mixing. (iv)
We show how PFA modules can be used to easily build discriminative topic models. (v) An efficient
MCMC inference procedure is developed that scales as a function of the number of non-zeros in the
data and binary units. In contrast, models based on RBMs and SBNs scale with the size of the data
and binary units. (vi) We also employ a scalable Bayesian inference algorithm based on the recently
proposed Stochastic Variational Inference (SVI) framework [15].
2 Model
2.1 Poisson factor analysis as a module
We present the model in terms of document modeling and word counts, but the basic setup is applicable to other problems characterized by vectors of counts (and we consider such a non-document
application when presenting results). Assume xn is an M -dimensional vector containing word
counts for the n-th of N documents, where M is the vocabulary size. We impose the model,
xn ∼ Poisson(Φ(θn ∘ hn)), where Φ ∈ R₊^{M×K} is the factor loadings matrix with K factors,
θn ∈ R₊^K are factor intensities, hn ∈ {0, 1}^K is a vector of binary units indicating which factors
are active for observation n, and ∘ represents the element-wise (Hadamard) product. One possible
prior specification for this model, recently introduced in [32], is

    xmn = Σ_{k=1}^K xmkn ,    xmkn ∼ Poisson(λmkn) ,    λmkn = φmk θkn hkn ,          (1)
    φk ∼ Dirichlet(γ1M) ,     θkn ∼ Gamma(rk, (1 − b)b⁻¹) ,    hkn ∼ Bernoulli(πkn) ,
where 1M is an M-dimensional vector of ones, and we have used the additive property of the Poisson
distribution to decompose the m-th observed count of xn as K latent counts, {xmkn}_{k=1}^K. Here, φk
is column k of Φ, xmn is component m of xn, θkn is component k of θn, and hkn is component k
of hn. Furthermore, we let γ = 1/K, b = 0.5 and rk ∼ Gamma(1, 1). Note that γ controls the
sparsity of Φ, while rk accommodates over-dispersion in xn via θn (see [32] for details).
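As a concrete illustration of (1), the following minimal NumPy sketch draws a toy corpus from the PFA prior. It is our own transcription (the function and variable names are not from the paper), and it fixes the activation probability πkn to a constant `pi` rather than placing the priors discussed next:

```python
import numpy as np

def sample_pfa(M, K, N, b=0.5, pi=0.5, rng=np.random.default_rng(0)):
    """Draw a toy corpus from the PFA generative model in Eq. (1)."""
    gamma = 1.0 / K                                    # gamma = 1/K, as in the text
    Phi = rng.dirichlet(gamma * np.ones(M), size=K).T  # M x K; columns are phi_k
    r = rng.gamma(1.0, 1.0, size=K)                    # r_k ~ Gamma(1, 1)
    # theta_kn ~ Gamma(r_k, rate (1 - b)/b); NumPy takes scale = b / (1 - b)
    Theta = rng.gamma(r[:, None], b / (1.0 - b), size=(K, N))
    H = rng.binomial(1, pi, size=(K, N))               # h_kn ~ Bernoulli(pi), fixed pi
    return rng.poisson(Phi @ (Theta * H))              # x_n ~ Poisson(Phi (theta_n o h_n))
```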
There is one parameter in (1) for which we have not specified a prior distribution, specifically
E[p(hkn = 1)] = πkn. In [32], hkn is provided with a beta-Bernoulli process prior by letting
πkn = πk ∼ Beta(cε, c(1 − ε)), meaning that every document has on average the same probability
of seeing a particular topic as active, based on corpus-wide popularity. It further assumes topics are
independent of each other. These two assumptions are restrictive because: (i) in practice, documents
belong to a rather heterogeneous population, in which themes naturally occur within a corpus; letting
documents have individual topic activation probabilities will allow the model to better accommodate
for heterogeneity in the data. (ii) Some topics are likely to co-occur systematically, so being able to
harness such correlation structures can improve the ability of the model for fitting the data.
The hierarchical model in (1), which in the following we denote as xn ∼ PFA(Φ, θn, hn; γ, rk, b),
short for Poisson Factor Analysis (PFA), represents documents, xn, as purely additive combinations
of up to K topics (distributions over words), where hn indicates what topics are active and θn is the
intensity of each one of the active topics that is manifested in document xn. It is also worth noting
that the model in (1) is closely related to other widely known topic model approaches, such as Latent
Dirichlet Allocation (LDA) [3], HDP [27] and Focused Topic Modeling (FTM) [29]. Connections
between these models are discussed in Section 4.
2.2 Deep representations with PFA modules
Several models have been proposed recently to address the limitations described above [1, 2, 6, 27].
In particular, [6] proposed using multilayer SBNs [22], to impose correlation structure across topics,
while providing each document with the ability to control its topic activation probabilities, without
the need of a global beta-Bernoulli process [32]. Here we follow the same rationale as [6], but
without SBNs. We start by noting that for a binary vector hn with elements hkn, we can write

    hkn = 1(zkn ≥ 1) ,    zkn ∼ Poisson(λ̃kn) ,          (2)

where zkn is a latent count for variable hkn, parameterized by a Poisson distribution with rate λ̃kn;
1(·) = 1 if the argument is true, and 1(·) = 0 otherwise. The model in (2), recently proposed in
[30], is known as the Bernoulli-Poisson Link (BPL) and is denoted hn ∼ BPL(λ̃n), for λ̃n ∈ R₊^K.
After marginalizing out the latent count zkn [30], the model in (2) has the interesting property that
p(hkn = 1) = πkn, where πkn = 1 − exp(−λ̃kn). Hence, rather than using the logistic
function to represent binary unit probabilities, we employ πkn = 1 − exp(−λ̃kn).
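A hedged sketch of the BPL in (2) (our notation; `lam` plays the role of λ̃kn): sampling reduces to thresholding Poisson counts, and the marginal activation probability is available in closed form:

```python
import numpy as np

def bpl_sample(lam, rng=np.random.default_rng(0)):
    """h ~ BPL(lam): threshold a latent Poisson count, Eq. (2)."""
    return (rng.poisson(lam) >= 1).astype(int)

def bpl_prob(lam):
    """p(h = 1) = 1 - exp(-lam), after marginalizing the latent count."""
    return 1.0 - np.exp(-lam)
```

For example, bpl_prob(np.array([0.1, 1.0, 3.0])) returns approximately [0.095, 0.632, 0.950], so small rates keep units off and large rates saturate them on.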
In (1) and (2) we have represented the Poisson rates as λmkn and λ̃kn, respectively, to distinguish
between the two. However, the fact that the count vector in (1) and the binary variable in (2) are
both represented in terms of Poisson distributions suggests the following deep model, based on PFA
modules (see graphical model in Supplementary Material):
    xn    ∼ PFA(Φ(1), θn(1), hn(1); γ(1), rk(1), b(1)) ,      hn(1) = 1(zn(2) ≥ 1) ,
    zn(2) ∼ PFA(Φ(2), θn(2), hn(2); γ(2), rk(2), b(2)) ,      hn(2) = 1(zn(3) ≥ 1) ,
      ⋮                                                          ⋮                          (3)
    hn(L−1) = 1(zn(L) ≥ 1) ,
    zn(L) ∼ PFA(Φ(L), θn(L), hn(L); γ(L), rk(L), b(L)) ,      hn(L) = 1(zn(L+1) ≥ 1) ,
where L is the number of layers in the model, and 1(·) is a vector operation in which each component
imposes the left operation in (2). In this Deep Poisson Factor Model (DPFM), the binary units at
layer ℓ ∈ {1, . . . , L} are drawn hn(ℓ) ∼ BPL(λ̃n(ℓ)), for λ̃n(ℓ) = Φ(ℓ+1)(θn(ℓ+1) ∘ hn(ℓ+1)). The form of
the model in (3) introduces latent variables {zn(ℓ)}_{ℓ=2}^{L+1} and the element-wise function 1(·), rather
than explicitly drawing {hn(ℓ)}_{ℓ=1}^L from the BPL distribution. Concerning the top layer, we let
zkn(L+1) ∼ Poisson(λk(L+1)) and λk(L+1) ∼ Gamma(a0, b0).
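To make the top-down generative flow in (3) concrete, here is a sketch (ours, not the authors' code) that samples a DPFM with layer widths K1, . . . , KL; hyperparameters follow the text, and Gamma(a0, b0) is read as shape–rate (an assumption on our part):

```python
import numpy as np

def sample_dpfm(layer_sizes, M, N, b=0.5, a0=1.0, b0=1.0,
                rng=np.random.default_rng(0)):
    """Top-down draw from the DPFM in Eq. (3); layer_sizes = [K1, ..., KL]."""
    L = len(layer_sizes)
    # Top layer: z^(L+1) ~ Poisson(lambda^(L+1)), lambda_k^(L+1) ~ Gamma(a0, b0).
    lam_top = rng.gamma(a0, 1.0 / b0, size=layer_sizes[-1])
    h = (rng.poisson(lam_top[:, None], size=(layer_sizes[-1], N)) >= 1).astype(int)
    for ell in reversed(range(L)):                  # ell = L-1, ..., 0 (0-based layer index)
        K = layer_sizes[ell]
        rows = M if ell == 0 else layer_sizes[ell - 1]
        Phi = rng.dirichlet(np.ones(rows) / K, size=K).T   # rows x K loadings
        r = rng.gamma(1.0, 1.0, size=K)
        Theta = rng.gamma(r[:, None], b / (1.0 - b), size=(K, N))
        z = rng.poisson(Phi @ (Theta * h))          # counts at layer ell+1; x_n when ell == 0
        if ell == 0:
            return z                                # observed word counts x_n
        h = (z >= 1).astype(int)                    # h_n^(ell) = 1(z_n^(ell+1) >= 1)
```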
2.3 Model interpretation
Consider layer 1 of (3), from which xn is drawn. Assuming hn(1) is known, this corresponds to a
focused topic model [29]. The columns of Φ(1) correspond to topics, with the k-th column φk(1)
defining the probability with which words are manifested for topic k (each φk(1) is drawn from a
Dirichlet distribution, as in (1)). Generalizing the notation from (1), λkn(1) = φk(1) θkn(1) hkn(1) ∈ R₊^M is
the rate vector associated with topic k and document n, and it is active when hkn(1) = 1. The word-count
vector for document n manifested from topic k is xkn ∼ Poisson(λkn(1)), and xn = Σ_{k=1}^{K1} xkn,
where K1 is the number of topics in the model. The columns of Φ(1) define correlation among the
words associated with the topics; for a given topic (column of Φ(1)), some words co-occur with high
probability, and other words are likely jointly absent.
We now consider a two-layer model, with hn(2) assumed known. To generate hn(1), we first draw zn(2),
which, analogous to above, may be expressed as zn(2) = Σ_{k=1}^{K2} zkn(2), with zkn(2) ∼ Poisson(λkn(2)) and
λkn(2) = φk(2) θkn(2) hkn(2). Column k of Φ(2) corresponds to a meta-topic, with φk(2) a K1-dimensional
probability vector, denoting the probability with which each of the layer-1 topics is "on" when
layer-2 "meta-topic" k is on (i.e., when hkn(2) = 1). The columns of Φ(2) define correlation among
the layer-1 topics; for a given layer-2 meta-topic (column of Φ(2)), some layer-1 topics co-occur
with high probability, and other layer-1 topics are likely jointly absent.
As one moves up the hierarchy, to layers ℓ > 2, the meta-topics become increasingly more abstract
and sophisticated, manifested in terms of probabilistic combinations of topics and meta-topics at
the layers below. Because of the properties of the Dirichlet distribution, each column of a particular
Φ(ℓ) is encouraged to be sparse, implying that a column of Φ(ℓ) encourages use of a small subset
of columns of Φ(ℓ−1), with this repeated all the way down to the data layer, and the topics reflected
in the columns of Φ(1). This deep architecture imposes correlation across the layer-1 topics, and it
does it through use of PFA modules at all layers of the deep architecture, unlike [6] which uses an
SBN for layers 2 through L, and a PFA at the bottom layer. In addition to the elegance of using a
single class of modules at each layer, the proposed deep model has important computational benefits,
as later discussed in Section 3.
2.4 PFA modules for discriminative tasks
Assume that there is a label yn ∈ {1, . . . , C} associated with document n. We seek to learn the
model for mapping xn → yn simultaneously with learning the above deep topic representation. In
fact, the mapping xn → yn is based on the deep generative process for xn in (3). We represent yn
via the C-dimensional one-hot vector ŷn, which has all elements equal to zero except one, with the
non-zero value (which is set to one) located at the position of the label. We impose the model

    ŷn ∼ Multinomial(1, π̂n) ,    π̂cn = ψcn / Σ_{c=1}^C ψcn ,          (4)

where π̂cn is element c of π̂n, ψn = B(θn(1) ∘ hn(1)) and B ∈ R₊^{C×K} is a matrix of nonnegative
classification weights, with prior distribution bk ∼ Dirichlet(γ1C), where bk is a column of B.
Combining (3) with (4) allows us to learn the mapping xn → yn via the shared first-layer local
representation, θn(1) ∘ hn(1), that encodes topic usage for document n. This sharing mechanism
allows the model to learn topics, Φ(1), and meta-topics, {Φ(ℓ)}_{ℓ=2}^L, biased towards discrimination,
as opposed to just explaining the data, xn . We call this construction discriminative deep Poisson
factor modeling. It is worth noting that this is the first time that PFA and multi-class classification
have been combined into a joint model. Although other DP-based discriminative topic models have
been proposed [16, 21], they rely on approximations in order to combine the topic model, usually
LDA, with softmax-based classification approaches.
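The label model (4) in code form, as a sketch with placeholder inputs (our notation): the class probabilities are a normalized nonnegative linear map of the first-layer representation θn(1) ∘ hn(1):

```python
import numpy as np

def label_probs(B, Theta1, H1):
    """Eq. (4): normalize the nonnegative scores psi_n = B (theta_n^(1) o h_n^(1))."""
    Psi = B @ (Theta1 * H1)            # C x N; assumes each document has an active topic
    return Psi / Psi.sum(axis=0)       # columns are pi-hat_n, summing to one

def sample_labels(B, Theta1, H1, rng=np.random.default_rng(0)):
    P = label_probs(B, Theta1, H1)
    return np.array([rng.choice(len(P), p=P[:, n]) for n in range(P.shape[1])])
```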
3 Inference
A very convenient feature of the model in (3) is that all its conditional posterior distributions can be
written in closed form due to local conjugacy. In this section, we focus on Markov chain Monte Carlo
(MCMC) via Gibbs sampling as a reference implementation, and a stochastic variational inference
approach for large datasets, where the fully Bayesian treatment becomes prohibitive.
Other alternatives for scaling up inference in Bayesian models such as the parameter server [13,
19], conditional density filtering [9] and stochastic gradient-based approaches [4, 5, 28] are left as
interesting future work.
MCMC Due to local conjugacy, Gibbs sampling for the model in (3) amounts to sampling in sequence from the conditional posterior of all the parameters of the model, namely
{Φ(ℓ), θn(ℓ), hn(ℓ), rk(ℓ)}_{ℓ=1}^L and λ(L+1). The remaining parameters of the model are set to fixed
values: γ = 1/K, b = 0.5 and a0 = b0 = 1. We note that priors for γ, b, a0 and b0 exist that
result in Gibbs-style updates, and can be easily incorporated into the model if desired; however, we
opted to keep the model as simple as possible, without compromising flexibility. The most unique
conditional posteriors are shown below, without layer index for clarity,
    φk ∼ Dirichlet(γ + x1k·, . . . , γ + xMk·) ,
    θkn ∼ Gamma(rk hkn + x·kn, b⁻¹) ,                                            (5)
    hkn ∼ δ(x·kn = 0) Bernoulli(π̃kn (π̃kn + 1 − πkn)⁻¹) + δ(x·kn ≥ 1) ,

where xmk· = Σ_{n=1}^N xmkn, x·kn = Σ_{m=1}^M xmkn and π̃kn = πkn(1 − b)^rk. Omitted details,
including those for the discriminative DPFM in Section 2.4, are given in the Supplementary Material.
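For concreteness, a single-layer sketch of one Gibbs sweep implementing the conditionals in (5) (our notation, layer index dropped; it assumes a consistent state in which every document with counts has at least one active factor). Note that the latent-count allocation loops only over the non-zero entries of X, which is the source of the sparsity argument made below:

```python
import numpy as np

def gibbs_sweep(X, Phi, Theta, H, Pi, r, gamma, b, rng=np.random.default_rng(0)):
    """One Gibbs sweep for a single PFA layer, following Eq. (5)."""
    K = Phi.shape[1]
    Xmk = np.zeros_like(Phi)                    # x_{mk.}
    Xkn = np.zeros_like(Theta)                  # x_{.kn}
    for m, n in zip(*np.nonzero(X)):            # allocate only non-zero counts
        p = Phi[m] * Theta[:, n] * H[:, n]
        x_mkn = rng.multinomial(X[m, n], p / p.sum())
        Xmk[m] += x_mkn
        Xkn[:, n] += x_mkn
    for k in range(K):                          # phi_k ~ Dirichlet(gamma + x_{.k.})
        Phi[:, k] = rng.dirichlet(gamma + Xmk[:, k])
    # theta_kn ~ Gamma(r_k h_kn + x_{.kn}, rate 1/b); shape 0 yields theta = 0 when inactive
    Theta = rng.gamma(r[:, None] * H + Xkn, b)
    Pi_t = Pi * (1.0 - b) ** r[:, None]         # pi-tilde_kn = pi_kn (1 - b)^{r_k}
    H = np.where(Xkn > 0, 1, rng.binomial(1, Pi_t / (Pi_t + 1.0 - Pi)))
    return Phi, Theta, H
```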
Initialization is done at random from prior distributions, followed by layer-wise fitting (pre-training).
In the experiments, we run 100 Gibbs sampling cycles per layer. In preliminary trials we observed
that 50 cycles are usually enough to obtain good initial values of the global parameters of the model,
namely {Φ(ℓ), rk(ℓ)}_{ℓ=1}^L and λ(L+1).
Stochastic variational inference (SVI) SVI is a scalable algorithm for approximating posterior distributions consisting of EM-style local-global updates, in which subsets of a dataset (minibatches) are used to update in closed-form the variational parameters controlling both the local and
global structure of the model in an iterative fashion [15]. This is done by using stochastic optimization with noisy natural gradients to optimize the variational objective function. Additional details
and theoretical foundations of SVI can be found in [15].
In practice the algorithm proceeds as follows, where again we have omitted the layer index for
clarity: (i) let {Φ(t), rk(t), λ(t)} be the global variables at iteration t. (ii) Sample a mini-batch from
the full dataset. (iii) Compute updates for the variational parameters of the local variables using

    λmkn ∝ exp(E[log φmk] + E[log θkn]) ,
    θkn ∼ Gamma(E[rk]E[hkn] + Σ_{m=1}^M λmkn, b⁻¹) ,
    hkn ∼ E[p(x·kn = 0)] Bernoulli(E[π̃kn](E[π̃kn] + 1 − E[πkn])⁻¹) + E[p(x·kn ≥ 1)] ,

where E[xmkn] = λmkn and E[π̃kn] = E[πkn](1 − b)^{E[rk]}. In practice, expectations for θkn and
hkn are computed in log-domain. (iv) Compute a local update for the variational parameters of the
global variables (only Φ is shown) using

    φ̂mk = γ + N N_B⁻¹ Σ_{n=1}^{N_B} λmkn ,          (6)

where N and N_B are the sizes of the corpus and mini-batch, respectively. Finally, we update the global
variables as φk(t+1) = (1 − ρt)φk(t) + ρt φ̂k, where ρt = (t + τ)^{−κ}. The forgetting rate, κ ∈
(0.5, 1], controls how fast previous information is forgotten and the delay, τ ≥ 0, down-weights
early iterations. These conditions for κ and τ guarantee that the iterative algorithm converges to a
local optimum of the variational objective function. In the experiments, we set κ = 0.7 and τ = 128.
Additional details of the SVI algorithm for the model in (3) are given in the Supplementary Material.
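A sketch of the SVI global step (our notation): the mini-batch intermediate estimate of (6) is blended into the current variational parameters of Φ with step size ρt = (t + τ)^(−κ):

```python
import numpy as np

def svi_global_step(Phi_var, lam_batch, t, N, NB, gamma, kappa=0.7, tau=128.0):
    """Eq. (6): Phi_hat = gamma + (N/NB) * sum_n lam_mkn, then a rho_t-weighted blend."""
    Phi_hat = gamma + (N / NB) * lam_batch.sum(axis=2)  # M x K from an M x K x NB batch
    rho = (t + tau) ** (-kappa)
    return (1.0 - rho) * Phi_var + rho * Phi_hat        # new variational parameters of Phi
```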
Importance of computations scaling as a function of number of non-zeros From a practical
standpoint, the most important feature of the model in (3) is that inference does not scale as a
function of the size of the corpus, but as a function of its number of non-zero elements, which is
advantageous in cases where the input data is sparse (often the case). For instance, 2% of the entries
in the widely studied 20 Newsgroup corpus are non-zero; similar proportions are also observed in
the Reuters and Wikipedia data. Furthermore, this feature also extends to all the layers of the model
regardless of {hn(ℓ)} being latent. Similarly, for the discriminative DPFM in Section 2.4, inference
scales with N, not CN, because the binary vector ŷn has a single non-zero entry. This is particularly
appealing in cases where C is large.
In order to show that this scaling behavior holds, it is enough to see that by construction, from (1),
if xmn = Σ_{k=1}^K xmkn = 0 (or zmn(ℓ) for ℓ > 1), then xmkn = 0, ∀k with probability 1. Besides,
from (2) we see that if hkn = 0 then zkn = 0 with probability 1. As a result, update equations for
all parameters of the model except for {hn(ℓ)} depend only on the non-zero elements of xn and {zn(ℓ)}.
Updates for the binary variables can be cheaply obtained in block from hkn(ℓ) ∼ Bernoulli(πkn(ℓ)) via
λ̃kn(ℓ), as previously described.
It is worth mentioning that models based on multinomial or Poisson likelihoods such as LDA [3],
HDP [27], FTM [29] and PFA [32], also enjoy this property. However, the recently proposed deep
PFA [6], does not use PFA modules on layers other than the first one. It uses SBNs or RBMs that
are known to scale with the number of binary variables as opposed to their non-zero elements.
4 Related work
Connections to other DP-based topic models PFA is a nonnegative matrix factorization model
with Poisson link that is closely related to other DP-based models. Specifically, [32] showed that
by making p(hkn = 1) = 1 and letting θkn have a Dirichlet, instead of a Gamma distribution as
in (1), we can recover LDA by using the equivalence between Poisson and multinomial distributions.
By looking at (5)-(6), we see that PFA and LDA have the same blocked Gibbs [3] and SVI [14]
updates, respectively, when Dirichlet distributions for ?kn are used. In [32], the authors showed that
using the Poisson-gamma representation of the negative binomial distribution and a beta-Bernoulli
specification for p(hkn ) in (1), we can recover the FTM formulation and inference in [29]. More
recently, [31] showed that PFA is comparable to HDP in that the former builds group-specific DPs
with normalized gamma processes. A more direct relationship between a three-layer HDP [27] and a
two-layer version of (3) can be established by grouping documents by categories. In the HDP, three
DPs are set for topics, document-wise topic usage and category-wise topic usage. In our model,
Φ(1) represents K1 topics, θn(1) ∘ hn(1) encodes document-wise topic usage and Φ(2) encodes topic
usage for K2 categories. In HDP, documents are assigned to categories a priori, but in our model
document-category soft assignments are estimated and encoded via θn(2) ∘ hn(2). As a result, the
model in (3) is a more flexible alternative to HDP in that it groups documents into categories in an
unsupervised manner.
Similar models Non-DP-based deep models for topic modeling employed in the deep learning
literature typically utilize RBMs or SBNs as building blocks. For instance, [12] and [20] extended
RBMs via DBNs to topic modeling and [26] proposed the over-replicated softmax model, a deep
version of RSM that generalizes RBMs.
Recently, [24] proposed a framework for generative deep models using exponential family modules.
Although they consider Poisson-Poisson and Gamma-Gamma factorization modules akin to our
PFA modules, their model lacks the explicit binary unit linking between layers commonly found in
traditional deep models. Besides, their inference approach, black-box variational inference, is not as
conceptually simple, but it scales with the number of non-zeros, as does our model.
DPFA, proposed in [6], is the model closest to ours. Nevertheless, our proposed model has a number of key differentiating features. (i) Both of them learn topic correlations by building a multilayer
modular representation on top of PFA. Our model uses PFA modules throughout all layers in a conceptually simple and easy to interpret way. DPFA uses Gaussian distributed weight matrices within
SBN modules; these are hard to interpret in the context of topic modeling. (ii) SBN architectures
have the shortcoming of not having block closed-form conditional posteriors for their binary variables, making them difficult to estimate, especially as the number of variables increases. (iii) Factor
loading matrices in PFAs have natural shrinkage to counter overfitting, thanks to the Dirichlet prior
used for their columns. In SBN-based models, shrinkage has to be added via variable augmentation at the cost of increasing inference complexity. (iv) Inference for SBN modules scales with the
number of hidden variables in the model, not with the number of non-zero elements, as in our case.
5 Experiments
Benchmark corpora We present experiments on three corpora: 20 Newsgroups (20 News),
Reuters corpus volume I (RCV1) and Wikipedia (Wiki). 20 News is composed of 18,845 documents and 2,000 words, partitioned into an 11,315-document training set and a 7,531-document test set. RCV1 has
804,414 newswire articles containing 10,000 words. A random 10,000 subset of documents is used
for testing. For Wiki, we obtained 10⁷ random documents, from which a subset of 1,000 is set aside
for testing. Following [14], we keep a vocabulary consisting of 7,702 words taken from the top
10,000 words in the Project Gutenberg Library.
As a performance measure we use held-out perplexity, defined as the geometric mean of the inverse
marginal likelihood of every word in the set. We cannot evaluate the intractable marginal for our
model, thus we compute the predictive perplexity on a 20% subset of the held-out set. The remaining
80% is used to learn document-specific variables of the model. The training set is used to estimate
the global parameters of the model. Further details on perplexity evaluation for PFA models can be
found in [6, 32].
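As a reference point, the perplexity measure reduces to one line: given the predictive probability assigned to each held-out word, the geometric mean of the inverse likelihoods is (a generic sketch, not the authors' evaluation code)

```python
import numpy as np

def perplexity(word_probs):
    """Geometric mean of inverse per-word likelihoods: exp(-mean(log p))."""
    return float(np.exp(-np.mean(np.log(word_probs))))
```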
We compare our model (denoted DPFM) against LDA [3], FTM [29], RSM [12], nHDP [23] and
DPFA with SBNs (DPFA-SBN) and RBMs (DPFA-RBM) [6]. For all these models we use the
settings described in [6]. Inference methods for RSM and DPFA are contrastive divergence with
Table 1: Held-out perplexities for 20 News, RCV1 and Wiki. Size indicates number of topics and/or
binary units, accordingly.
| Model    | Method | Size         | 20 News | RCV1 | Wiki |
|----------|--------|--------------|---------|------|------|
| DPFM     | SVI    | 128-64       | 818     | 961  | 791  |
| DPFM     | MCMC   | 128-64       | 780     | 908  | 783  |
| DPFA-SBN | SGNHT  | 1024-512-256 | –       | 942  | 770  |
| DPFA-SBN | SGNHT  | 128-64-32    | 827     | 1143 | 876  |
| DPFA-RBM | SGNHT  | 128-64-32    | 896     | 920  | 942  |
| nHDP     | SVI    | (10,10,5)    | 889     | 1041 | 932  |
| LDA      | Gibbs  | 128          | 893     | 1179 | 1059 |
| FTM      | Gibbs  | 128          | 887     | 1155 | 991  |
| RSM      | CD5    | 128          | 877     | 1171 | 1001 |
step size 5 (CD5) and stochastic gradient Nosé-Hoover thermostats (SGNHT) [5], respectively. For
our model, we run 3,000 samples (first 2,000 as burn-in) for MCMC and 4,000 iterations with 200-document mini-batches for SVI. For the Wiki corpus, MCMC-based DPFM is run on a random
subset of 10⁶ documents. The code used, implemented in Matlab, will be made publicly available.
Table 1 shows results for the corpora being considered. Figures for methods other than DPFM were
taken from [6]. We see that multilayer models (DPFM, DPFA and nHDP) consistently outperform
single-layer ones (LDA, FTM and RSM), and that DPFM has the best performance across all corpora for models of comparable size. OSM results (not shown) are about 20 units better than RSM
in 20 News and RCV1, see [26]. We also see that MCMC yields better perplexities when compared to SVI. The difference in performance between these two inference methods is likely due
to the mean-field approximation and the online nature of SVI. We verified empirically (results not
shown) that doubling the number of hidden units, adding a third layer or increasing the number
of samples/iterations for DPFM does not significantly change the results in Table 1. As a note on
computational complexity, one iteration of the two-layer model on the 20 News corpus takes approximately 3 and 2 seconds, for MCMC and SVI, respectively. For comparison, we also ran the
DPFA-SBN model in [6] using a two-layer model of the same size; in their case it takes about 24, 4
and 5 seconds to run one iteration using MCMC, conditional density filtering (CDF) and SGNHT,
respectively. Runtimes for DPFA-RBM are similar to those of DPFA-SBN, LDA and RSM are faster
than 1-layer DPFM, FTM is comparable to the latter, and nHDP is slower than DPFM.
Figure 1 shows a representative meta-topic, φk(2), from the two-layer model for 20 News. For the
five largest weights in φk(2) (y-axis), which correspond to layer-1 topic indices (x-axis), we also
show the top five words in their layer-1 topic, φk(1). We observe that this meta-topic is loaded with
religion-specific topics, judging by the words in them. Additional graphs, and tables showing the
top words in each topic for 20 News and RCV1, are provided in the Supplementary Material.
[Figure 1 here: two panels of meta-topic weights φk(2) plotted against the first-layer topic index, with the top five words of the most-weighted layer-1 topics annotated; the left panel (20 News) highlights religion-related topics, the right panel (medical records) highlights medication topics such as Albuterol and Montelukast.]
Figure 1: Representative meta-topics obtained from (left) 20 News and (right) medical records.
Meta-topic weights φk(2) vs. layer-1 topic indices, with word lists corresponding to the top five
words in layer-1 topics, φk(1).
Classification We use 20 News for document classification, to evaluate the discriminative DPFM
model described in Section 2.4. We use test set accuracy on the 20-class task as the performance measure and compare our model against LDA, DocNADE [17], RSM and OSM. Results for these four
models were obtained from [26], where multinomial logistic regression with cross-entropy loss func-
Table 2: Test accuracy on 20 News. The subscript accompanying each model name indicates its size.

| Model        | LDA128 | DocNADE512 | RSM512 | OSM512 | DPFM128 | DPFM128-64 |
|--------------|--------|------------|--------|--------|---------|------------|
| Accuracy (%) | 65.7   | 68.4       | 67.7   | 69.1   | 72.11   | 72.67      |
tion was used as classification module. Test accuracies in Table 2 show that our model significantly
outperforms the others being considered. Note as well that our one-layer model still improves upon
the four times larger OSM, by more than 3%. We verified that our two-layer model outperforms
well known supervised methods like multinomial logistic regression, SVM, supervised LDA and
two-layer feedforward neural networks, for which test accuracies ranged from 67% to 72.14%, using
term frequency-inverse document frequency features. We could not improve results by increasing
the size of our model, however, we may be able to do so by following the approach of [33], where a
single classification module (SVM) is shared by 20 one-layer topic models (LDAs). Exploration of
more sophisticated deep model architectures for discriminative DPFMs is left as future work.
Medical records The Duke University Health System medical records database used here is a
5 year dataset generated within a large health system including three hospitals and an extensive
network of outpatient clinics. For this analysis, we utilized self-reported medication usage from
over 240,000 patients that had over 4.4 million patient visits. These patients reported over 34,000
different types of medications which were then mapped to one of 1,691 pharmaceutical active ingredients (AI) taken from RxNorm, a depository of medication information maintained by the National
Library of Medicine that includes trade names, brand names, dosage information and active ingredients. Counts for patient-medication usage reflected the number of times an AI appears in a patient's
record. Compound medications that include multiple active ingredients incremented counts for all
AI in that medication. Removing AIs with less than 10 overall occurrences and patients lacking
medication information results in a 1,019 × 131,264 matrix of AIs vs. patients.
Results for an MCMC-based DPFM of size 64-32, with the same setting used for the first experiment, indicate that pharmaceutical topics derived from this analysis form clinically reasonable
clusters of pharmaceuticals, that may be prescribed to patients for various ailments. In particular, we found that layer-1 topic 46 includes a cluster of insulin products: Insulin Glargine, Insulin
Lispro, Insulin Aspart, NPH Insulin and Regular Insulin. Insulin dependent type-2 diabetes patients
often rely on tailored mixtures of insulin products with different pharmacokinetic profiles to ensure glycemic control. In another example, we found in layer-1 topic 22, an Angiotensin Receptor
Blocker (ARB), Losartan with a HMGCoA Reductase inhibitor, Atorvastatin and a heart specific
beta blocker, Carvedilol. This combination of medications is commonly used to control hypertension and hyperlipidemia in patients with cardiovascular risk. The second layer correlation structure
between topics of drug products also provide interesting composites of patient types based on the
first-layer pharmaceutical topics. Specifically, layer-2 factor 22 in Figure 1 reveals correlation between layer-1 drug factors that would be used to treat types of respiratory patients that had chronic
obstructive respiratory disease and/or asthma (Albuterol, Montelukast) and seasonal allergies. Additional graphs, including top medications for all pharmaceutical topics found by our model are
provided in the Supplementary Material.
6 Conclusion
We presented a new deep model for topic modeling based on PFA modules. We have combined the
interpretability of DP-based specifications found in traditional topic models with deep hierarchies of
hidden binary units. Our model is elegant in that a single class of modules is used at each layer, but
at the same time, enjoys the computational benefit of scaling as a function of the number of zeros
in the data and binary units. We described a discriminative extension for our deep architecture, and
two inference methods: MCMC and SVI, the latter for large datasets. Compelling experimental
results on several corpora and on a new medical records database demonstrated the advantages of
our model.
Future directions include working towards alternatives for scaling up inference algorithms based on
gradient-based approaches, extending the use of PFA modules in deep architectures to more sophisticated discriminative models, multi-modal tasks with mixed data types, and time series modeling
using ideas similar to [8].
Acknowledgements
This research was supported in part by ARO, DARPA, DOE, NGA and ONR.
References
[1] D. M. Blei, D. M. Griffiths, M. I. Jordan, and J. B. Tenenbaum. Hierarchical topic models and the nested
Chinese restaurant process. In NIPS, 2004.
[2] D. M. Blei and J. D. Lafferty. A correlated topic model of science. AOAS, 2007.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[4] T. Chen, E. B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In ICML, 2014.
[5] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian sampling using stochastic
gradient thermostats. In NIPS, 2014.
[6] Z. Gan, C. Chen, R. Henao, D. Carlson, and L. Carin. Scalable deep Poisson factor analysis for topic
modeling. In ICML, 2015.
[7] Z. Gan, R. Henao, D. Carlson, and L. Carin. Learning deep sigmoid belief networks with data augmentation. In AISTATS, 2015.
[8] Z. Gan, C. Li, R. Henao, D. Carlson, and L. Carin. Deep temporal sigmoid belief networks for sequence
modeling. In NIPS, 2015.
[9] R. Guhaniyogi, S. Qamar, and D. B. Dunson. Bayesian conditional density filtering. arXiv:1401.3632,
2014.
[10] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 2002.
[11] G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation,
2006.
[12] G. E. Hinton and R. R. Salakhutdinov. Replicated softmax: an undirected topic model. In NIPS, 2009.
[13] Q. Ho, J. Cipar, H. Cui, S. Lee, J. K. Kim, P. B. Gibbons, G. A. Gibson, G. Ganger, and E. P. Xing. More
effective distributed ML via a stale synchronous parallel parameter server. In NIPS, 2013.
[14] M. Hoffman, F. R. Bach, and D. M. Blei. Online learning for latent Dirichlet allocation. In NIPS, 2010.
[15] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 2013.
[16] S. Lacoste-Julien, F. Sha, and M. I. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, 2009.
[17] H. Larochelle and S. Lauly. A neural autoregressive topic model. In NIPS, 2012.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 1998.
[19] M. Li, D. G. Andersen, A. J. Smola, and K. Yu. Communication efficient distributed machine learning
with the parameter server. In NIPS, 2014.
[20] L. Maaloe, M. Arngren, and O. Winther. Deep belief nets for topic modeling. arXiv:1501.04325, 2015.
[21] J. D. Mcauliffe and D. M. Blei. Supervised topic models. In NIPS, 2008.
[22] R. M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 1992.
[23] J. Paisley, C. Wang, D. M. Blei, and M. I. Jordan. Nested hierarchical Dirichlet processes. PAMI, 2015.
[24] R. Ranganath, L. Tang, L. Charlin, and D. M. Blei. Deep exponential families. In AISTATS, 2014.
[25] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
[26] R. R. S. Srivastava, Nitish and G. E. Hinton. Modeling documents with deep Boltzmann machines. In
UAI, 2013.
[27] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. JASA, 2006.
[28] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
[29] S. Williamson, C. Wang, K. Heller, and D. Blei. The IBP compound Dirichlet process and its application
to focused topic modeling. In ICML, 2010.
[30] M. Zhou. Infinite edge partition models for overlapping community detection and link prediction. In
AISTATS, 2015.
[31] M. Zhou and L. Carin. Negative binomial process count and mixture modeling. PAMI, 2015.
[32] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, 2012.
[33] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: maximum margin supervised topic models. JMLR, 2012.
5,287 | 5,787 | Tensorizing Neural Networks
Alexander Novikov1,4
Dmitry Podoprikhin1
Anton Osokin2
Dmitry Vetrov1,3
1
Skolkovo Institute of Science and Technology, Moscow, Russia
2
INRIA, SIERRA project-team, Paris, France
3
National Research University Higher School of Economics, Moscow, Russia
4
Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow, Russia
novikov@bayesgroup.ru podoprikhin.dmitry@gmail.com
anton.osokin@inria.fr vetrovd@yandex.ru
Abstract
Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms
of computational resources. In particular, a large amount of memory is required
by commonly used fully-connected layers, making it hard to use the models on
low-end devices and stopping the further increase of the model size. In this paper
we convert the dense weight matrices of the fully-connected layers to the Tensor
Train [17] format such that the number of parameters is reduced by a huge factor
and at the same time the expressive power of the layer is preserved. In particular,
for the Very Deep VGG networks [21] we report the compression factor of the
dense weight matrix of a fully-connected layer up to 200000 times leading to the
compression factor of the whole network up to 7 times.
1 Introduction
Deep neural networks currently demonstrate state-of-the-art performance in many domains of large-scale machine learning, such as computer vision, speech recognition, text processing, etc. These
advances have become possible because of algorithmic advances, large amounts of available data,
and modern hardware. For example, convolutional neural networks (CNNs) [13, 21] show by a large
margin superior performance on the task of image classification. These models have thousands of
nodes and millions of learnable parameters and are trained using millions of images [19] on powerful
Graphics Processing Units (GPUs).
The necessity of expensive hardware and long processing time are the factors that complicate the
application of such models on conventional desktops and portable devices. Consequently, a large
number of works tried to reduce both hardware requirements (e. g. memory demands) and running
times (see Sec. 2).
In this paper we consider probably the most frequently used layer of neural networks: the fully-connected layer. This layer consists of a linear transformation of a high-dimensional input signal to a
high-dimensional output signal with a large dense matrix defining the transformation. For example,
in modern CNNs the dimensions of the input and output signals of the fully-connected layers are
of the order of thousands, bringing the number of parameters of the fully-connected layers up to
millions.
We use a compact multilinear format, Tensor-Train (TT-format) [17], to represent the dense
weight matrix of the fully-connected layers using few parameters while keeping enough flexibility to perform signal transformations. The resulting layer is compatible with the existing training
algorithms for neural networks because all the derivatives required by the back-propagation algorithm [18] can be computed using the properties of the TT-format. We call the resulting layer a
TT-layer and refer to a network with one or more TT-layers as TensorNet.
We apply our method to popular network architectures proposed for several datasets of different
scales: MNIST [15], CIFAR-10 [12], ImageNet [13]. We experimentally show that the networks
with the TT-layers match the performance of their uncompressed counterparts but require up to
200 000 times fewer parameters, decreasing the size of the whole network by a factor of 7.
The rest of the paper is organized as follows. We start with a review of the related work in Sec. 2.
We introduce necessary notation and review the Tensor Train (TT) format in Sec. 3. In Sec. 4
we apply the TT-format to the weight matrix of a fully-connected layer and in Sec. 5 derive all
the equations necessary for applying the back-propagation algorithm. In Sec. 6 we present the
experimental evaluation of our ideas followed by a discussion in Sec. 7.
2 Related work
With a sufficient amount of training data, big models usually outperform smaller ones. However, state-of-the-art neural networks have reached the hardware limits both in terms of the computational power and
the memory.
In particular, modern networks reached the memory limit with 89% [21] or even 100% [25] memory
occupied by the weights of the fully-connected layers so it is not surprising that numerous attempts
have been made to make the fully-connected layers more compact. One of the most straightforward
approaches is to use a low-rank representation of the weight matrices. Recent studies show that
the weight matrix of the fully-connected layer is highly redundant and by restricting its matrix rank
it is possible to greatly reduce the number of parameters without significant drop in the predictive
accuracy [6, 20, 25].
An alternative approach to the problem of model compression is to tie random subsets of weights
using special hashing techniques [4]. The authors reported the compression factor of 8 for a twolayered network on the MNIST dataset without loss of accuracy. Memory consumption can also be
reduced by using lower numerical precision [1] or allowing fewer possible carefully chosen parameter values [9].
In our paper we generalize the low-rank ideas. Instead of searching for low-rank approximation of
the weight matrix we treat it as multi-dimensional tensor and apply the Tensor Train decomposition
algorithm [17]. This framework has already been successfully applied to several data-processing
tasks, e. g. [16, 27].
Another possible advantage of our approach is the ability to use more hidden units than was available
before. A recent work [2] shows that it is possible to construct wide and shallow (i. e. not deep)
neural networks with performance close to the state-of-the-art deep CNNs by training a shallow
network on the outputs of a trained deep network. They report the improvement of performance
with the increase of the layer size and used up to 30 000 hidden units while restricting the matrix
rank of the weight matrix in order to be able to keep and to update it during the training. Restricting
the TT-ranks of the weight matrix (in contrast to the matrix rank) allows to use much wider layers
potentially leading to the greater expressive power of the model. We demonstrate this effect by
training a very wide model (262 144 hidden units) on the CIFAR-10 dataset that outperforms other
non-convolutional networks.
Matrix and tensor decompositions were recently used to speed up the inference time of CNNs [7,
14]. While we focus on fully-connected layers, Lebedev et al. [14] used the CP-decomposition to
compress a 4-dimensional convolution kernel and then used the properties of the decomposition to
speed up the inference time. This work shares the same spirit with our method and the approaches
can be readily combined.
Gilboa et al. exploit the properties of the Kronecker product of matrices to perform fast matrix-by-vector multiplication [8]. These matrices have the same structure as TT-matrices with unit TT-ranks.
Compared to the Tucker format [23] and the canonical format [3], the TT-format is immune to
the curse of dimensionality and its algorithms are robust. Compared to the Hierarchical Tucker
format [11], TT is quite similar but has simpler algorithms for basic operations.
3 TT-format
Throughout this paper we work with arrays of different dimensionality. We refer to the one-dimensional arrays as vectors, the two-dimensional arrays as matrices, and the arrays of higher dimensions as tensors. Bold lower case letters (e.g. a) denote vectors, ordinary lower case letters (e.g. a(i) = ai) vector elements, bold upper case letters (e.g. A) matrices, ordinary upper case letters (e.g. A(i, j)) matrix elements, calligraphic bold upper case letters (e.g. 𝒜) tensors, and ordinary calligraphic upper case letters (e.g. 𝒜(i) = 𝒜(i1, . . . , id)) tensor elements, where d is the dimensionality of the tensor 𝒜.
We will call arrays explicit to highlight cases when they are stored explicitly, i. e. by enumeration of
all the elements.
A d-dimensional array (tensor) A is said to be represented in the TT-format [17] if for each dimension k = 1, ..., d and for each possible value of the k-th dimension index j_k = 1, ..., n_k there exists a matrix G_k[j_k] such that all the elements of A can be computed as the following matrix product:

\[
A(j_1, \dots, j_d) = G_1[j_1]\, G_2[j_2] \cdots G_d[j_d]. \tag{1}
\]

All the matrices G_k[j_k] related to the same dimension k are restricted to be of the same size r_{k-1} × r_k. The values r_0 and r_d equal 1 in order to keep the matrix product (1) of size 1 × 1.
what follows we refer to the representation of a tensor in the TT-format as the TT-representation or
d
the TT-decomposition. The sequence {rk }k=0 is referred to as the TT-ranks of the TT-representation
of A (or the ranks for short), its maximum ? as the maximal TT-rank of the TT-representation
n
of A: r = maxk=0,...,d rk . The collections of the matrices (Gk [jk ])jkk=1 corresponding to the same
dimension (technically, 3-dimensional arrays G k ) are called the cores.
Oseledets [17, Th. 2.1] shows that a TT-representation exists for an arbitrary tensor A, but it is not unique. The ranks among different TT-representations can vary, and it is natural to seek a representation with the lowest ranks.
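As a concrete illustration of Eq. (1), the following minimal Python/NumPy sketch (our own code and naming, not taken from any TT library) evaluates a single tensor element from a list of cores stored as 3-dimensional arrays:

```python
import numpy as np

def tt_element(cores, index):
    """Evaluate A(j_1, ..., j_d) = G_1[j_1] G_2[j_2] ... G_d[j_d], Eq. (1).

    cores[k] is a 3-dimensional array of shape (r_{k-1}, n_k, r_k)
    (0-based list, 1-based maths), with r_0 = r_d = 1.
    """
    res = np.ones((1, 1))
    for G, j in zip(cores, index):
        res = res @ G[:, j, :]  # (1 x r_{k-1}) @ (r_{k-1} x r_k)
    return res[0, 0]
```

Since r_0 = r_d = 1, the running product stays a 1 × r_k matrix throughout, and the final result is the scalar element.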
We use the symbol G_k[j_k](α_{k-1}, α_k) to denote the element of the matrix G_k[j_k] in the position (α_{k-1}, α_k), where α_{k-1} = 1, ..., r_{k-1} and α_k = 1, ..., r_k. Equation (1) can be equivalently rewritten as the sum of the products of the elements of the cores:

\[
A(j_1, \dots, j_d) = \sum_{\alpha_0, \dots, \alpha_d} G_1[j_1](\alpha_0, \alpha_1) \cdots G_d[j_d](\alpha_{d-1}, \alpha_d). \tag{2}
\]
The representation of a tensor A via the explicit enumeration of all its elements requires storing ∏_{k=1}^{d} n_k numbers, compared with Σ_{k=1}^{d} n_k r_{k-1} r_k numbers if the tensor is stored in the TT-format. Thus, the TT-format is very efficient in terms of memory if the ranks are small.
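As a quick numeric illustration (our own example, not taken from the text): for an 8-dimensional tensor with all mode sizes n_k = 4 and all intermediate TT-ranks equal to 8,

```python
from math import prod

n = [4] * 8              # mode sizes n_1, ..., n_d
r = [1] + [8] * 7 + [1]  # TT-ranks r_0, ..., r_d

explicit = prod(n)                                       # 65 536 numbers
tt = sum(n[k] * r[k] * r[k + 1] for k in range(len(n)))  # 1 600 numbers
print(explicit / tt)                                     # roughly 41x less memory
```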
An attractive property of the TT-decomposition is the ability to efficiently perform several types
of operations on tensors if they are in the TT-format: basic linear algebra operations, such as the
addition of a constant and the multiplication by a constant, the summation and the entrywise product
of tensors (the results of these operations are tensors in the TT-format generally with the increased
ranks); computation of global characteristics of a tensor, such as the sum of all elements and the
Frobenius norm. See [17] for a detailed description of all the supported operations.
3.1 TT-representations for vectors and matrices
The direct application of the TT-decomposition to a matrix (a 2-dimensional tensor) coincides with the low-rank matrix format, and the direct TT-decomposition of a vector is equivalent to explicitly storing its elements. To be able to efficiently work with large vectors and matrices, the TT-format for them is defined in a special manner. Consider a vector b ∈ R^N, where N = ∏_{k=1}^{d} n_k. We can establish a bijection µ between the coordinate ℓ ∈ {1, ..., N} of b and a d-dimensional vector-index µ(ℓ) = (µ_1(ℓ), ..., µ_d(ℓ)) of the corresponding tensor B, where µ_k(ℓ) ∈ {1, ..., n_k}. The tensor B is then defined by the corresponding vector elements: B(µ(ℓ)) = b_ℓ. Building a TT-representation of B allows us to establish a compact format for the vector b. We refer to it as a TT-vector.
Now we define a TT-representation of a matrix W ∈ R^{M×N}, where M = ∏_{k=1}^{d} m_k and N = ∏_{k=1}^{d} n_k. Let bijections ν(t) = (ν_1(t), ..., ν_d(t)) and µ(ℓ) = (µ_1(ℓ), ..., µ_d(ℓ)) map the row and column indices t and ℓ of the matrix W to d-dimensional vector-indices whose k-th dimensions are of length m_k and n_k respectively, k = 1, ..., d. From the matrix W we can form a d-dimensional tensor W whose k-th dimension is of length m_k n_k and is indexed by the tuple (ν_k(t), µ_k(ℓ)). The tensor W can then be converted into the TT-format:

\[
W(t, \ell) = \mathcal{W}((\nu_1(t), \mu_1(\ell)), \dots, (\nu_d(t), \mu_d(\ell))) = G_1[\nu_1(t), \mu_1(\ell)] \cdots G_d[\nu_d(t), \mu_d(\ell)], \tag{3}
\]

where the matrices G_k[ν_k(t), µ_k(ℓ)], k = 1, ..., d, serve as the cores with the tuple (ν_k(t), µ_k(ℓ)) being an index. Note that a matrix in the TT-format is not restricted to be square. Although the index-vectors ν(t) and µ(ℓ) are of the same length d, the sizes of the domains of the dimensions can vary. We call a matrix in the TT-format a TT-matrix.
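The element-wise evaluation of Eq. (3) extends the earlier sketch by storing each core as a 4-dimensional array indexed by the tuple (ν_k(t), µ_k(ℓ)); the code below is again our own illustrative naming, and it assumes the multi-indices have already been computed:

```python
import numpy as np

def tt_matrix_element(cores, row_index, col_index):
    """Evaluate W(t, l) via Eq. (3).

    cores[k] has shape (r_{k-1}, m_k, n_k, r_k); row_index and col_index
    are the multi-indices (nu_1(t), ..., nu_d(t)) and (mu_1(l), ..., mu_d(l)).
    """
    res = np.ones((1, 1))
    for G, i, j in zip(cores, row_index, col_index):
        res = res @ G[:, i, j, :]
    return res[0, 0]
```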
All operations available for TT-tensors are applicable to TT-vectors and TT-matrices as well (for example, one can efficiently sum two TT-matrices and get the result in the TT-format). Additionally, the TT-format allows one to efficiently perform the matrix-by-vector (matrix-by-matrix) product. If only one of the operands is in the TT-format, the result is an explicit vector (matrix); if both operands are in the TT-format, the operation is even more efficient and the result is given in the TT-format as well (generally with increased ranks). For the case of the TT-matrix-by-explicit-vector product c = Wb, the computational complexity is O(d r² m max{M, N}), where d is the number of cores of the TT-matrix W, m = max_{k=1,...,d} m_k, r is the maximal rank, and N = ∏_{k=1}^{d} n_k is the length of the vector b.
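A reference implementation of the TT-matrix-by-explicit-vector product can be written as a sequence of per-core contractions. The sketch below is ours (not a TT-Toolbox routine) and fixes a row-major index mapping for concreteness; it follows the contraction order behind the complexity stated above:

```python
import numpy as np

def tt_matvec(cores, b):
    """Compute c = W b for a TT-matrix W and an explicit vector b.

    cores[k] has shape (r_{k-1}, m_k, n_k, r_k) with r_0 = r_d = 1.
    Flat indices are mapped to multi-indices in row-major (C) order;
    any fixed bijection would do.
    """
    T = np.asarray(b).reshape(1, 1, -1)  # axes: (rank, rows done, columns left)
    M = 1
    for G in cores:
        r_prev, m_k, n_k, r_k = G.shape
        T = T.reshape(r_prev, M, n_k, -1)              # expose index j_k
        T = np.tensordot(T, G, axes=([0, 2], [0, 2]))  # sum over alpha_{k-1}, j_k
        T = T.transpose(3, 0, 2, 1)                    # -> (r_k, M, m_k, rest)
        M *= m_k
        T = T.reshape(r_k, M, -1)
    return T.reshape(-1)  # length M = m_1 * ... * m_d
```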
The ranks and, correspondingly, the efficiency of the TT-format for a vector (matrix) depend on the choice of the mapping µ(ℓ) (mappings ν(t) and µ(ℓ)) between vector (matrix) elements and the underlying tensor elements. In what follows we use a column-major MATLAB reshape command¹ to form a d-dimensional tensor from the data (e.g. from a multichannel image), but one can choose a different mapping.
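For instance, in NumPy the same column-major mapping is obtained with order='F' (a small illustration of the mapping choice, not code from the paper):

```python
import numpy as np

img = np.arange(32 * 32, dtype=np.float64).reshape(32, 32)

# MATLAB-style column-major mapping between the 1024 pixels and a
# 4 x 8 x 8 x 4 tensor; flattening back recovers the original ordering.
tensor = img.reshape(4, 8, 8, 4, order='F')
assert np.array_equal(tensor.reshape(-1, order='F'),
                      img.reshape(-1, order='F'))
```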
4 TT-layer
In this section we introduce the TT-layer of a neural network. In short, the TT-layer is a fully-connected layer with the weight matrix stored in the TT-format. We will refer to a neural network with one or more TT-layers as TensorNet.
Fully-connected layers apply a linear transformation to an N-dimensional input vector x:

\[
\mathbf{y} = W \mathbf{x} + \mathbf{b}, \tag{4}
\]

where the weight matrix W ∈ R^{M×N} and the bias vector b ∈ R^M define the transformation.
A TT-layer consists of storing the weights W of the fully-connected layer in the TT-format, allowing the use of hundreds of thousands (or even millions) of hidden units while keeping a moderate number of parameters. To control the number of parameters one can vary the number of hidden units as well as the TT-ranks of the weight matrix.
A TT-layer transforms a d-dimensional tensor X (formed from the corresponding vector x) into the d-dimensional tensor Y (which corresponds to the output vector y). We assume that the weight matrix W is represented in the TT-format with the cores G_k[i_k, j_k]. The linear transformation (4) of a fully-connected layer can then be expressed in the tensor form:

\[
\mathcal{Y}(i_1, \dots, i_d) = \sum_{j_1, \dots, j_d} G_1[i_1, j_1] \cdots G_d[i_d, j_d]\, \mathcal{X}(j_1, \dots, j_d) + \mathcal{B}(i_1, \dots, i_d). \tag{5}
\]

Direct application of the TT-matrix-by-vector operation to Eq. (5) yields a computational complexity of the forward pass of O(d r² m max{m, n}^d) = O(d r² m max{M, N}).
5 Learning
Neural networks are usually trained with the stochastic gradient descent algorithm, where the gradient is computed using the back-propagation procedure [18]. Back-propagation allows one to compute the gradient of a loss function L with respect to all the parameters of the network. The method starts with the computation of the gradient of L w.r.t. the output of the last layer and proceeds sequentially through the layers in reversed order, computing the gradient w.r.t. the parameters and the
input of the layer while making use of the gradients computed earlier. Applied to the fully-connected layer (4), the back-propagation method computes the gradients w.r.t. the input x and the parameters W and b given the gradient ∂L/∂y w.r.t. the output y:

\[
\frac{\partial L}{\partial \mathbf{x}} = W^{\top} \frac{\partial L}{\partial \mathbf{y}}, \qquad
\frac{\partial L}{\partial W} = \frac{\partial L}{\partial \mathbf{y}}\, \mathbf{x}^{\top}, \qquad
\frac{\partial L}{\partial \mathbf{b}} = \frac{\partial L}{\partial \mathbf{y}}. \tag{6}
\]
In what follows we derive the gradients required to use the back-propagation algorithm with the TT-layer. To compute the gradient of the loss function w.r.t. the bias vector b and w.r.t. the input vector x one can use equations (6). The latter can be applied using the matrix-by-vector product (where the matrix is in the TT-format) with a complexity of O(d r² n max{m, n}^d) = O(d r² n max{M, N}).
¹ http://www.mathworks.com/help/matlab/ref/reshape.html
Operation           Time                      Memory
FC forward pass     O(MN)                     O(MN)
TT forward pass     O(d r² m max{M, N})       O(r max{M, N})
FC backward pass    O(MN)                     O(MN)
TT backward pass    O(d² r⁴ m max{M, N})      O(r³ max{M, N})

Table 1: Comparison of the asymptotic complexity and memory usage of an M × N TT-layer and an M × N fully-connected layer (FC). The input and output tensor shapes are m_1 × ... × m_d and n_1 × ... × n_d respectively (m = max_{k=1...d} m_k) and r is the maximal TT-rank.
To perform a step of stochastic gradient descent one could use equation (6) to compute the gradient of the loss function w.r.t. the weight matrix W, convert the gradient matrix into the TT-format (with the TT-SVD algorithm [17]), and then add this gradient (multiplied by a step size) to the current estimate of the weight matrix: W_{k+1} = W_k + α_k ∂L/∂W. However, the direct computation of ∂L/∂W requires O(MN) memory. A better way to learn the TensorNet parameters is to compute the gradient of the loss function directly w.r.t. the cores of the TT-representation of W.
In what follows we use shortened notation for prefix and postfix sequences of indices: i_k^- := (i_1, ..., i_{k-1}), i_k^+ := (i_{k+1}, ..., i_d), i = (i_k^-, i_k, i_k^+). We also introduce notations for partial core products:

\[
P_k^-[i_k^-, j_k^-] := G_1[i_1, j_1] \cdots G_{k-1}[i_{k-1}, j_{k-1}], \qquad
P_k^+[i_k^+, j_k^+] := G_{k+1}[i_{k+1}, j_{k+1}] \cdots G_d[i_d, j_d]. \tag{7}
\]
We now rewrite the definition of the TT-layer transformation (5) for any k = 2, ..., d − 1:

\[
\mathcal{Y}(i) = \mathcal{Y}(i_k^-, i_k, i_k^+) = \sum_{j_k^-, j_k, j_k^+} P_k^-[i_k^-, j_k^-]\, G_k[i_k, j_k]\, P_k^+[i_k^+, j_k^+]\, \mathcal{X}(j_k^-, j_k, j_k^+) + \mathcal{B}(i). \tag{8}
\]
The gradient of the loss function L w.r.t. the k-th core in the position [\tilde{i}_k, \tilde{j}_k] can be computed using the chain rule:

\[
\frac{\partial L}{\partial G_k[\tilde{i}_k, \tilde{j}_k]} = \sum_{i} \frac{\partial L}{\partial \mathcal{Y}(i)}\, \frac{\partial \mathcal{Y}(i)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}, \tag{9}
\]

where each factor \partial \mathcal{Y}(i) / \partial G_k[\tilde{i}_k, \tilde{j}_k] is an r_{k-1} × r_k matrix. Given these gradient matrices, the summation (9) can be done explicitly in O(M r_{k-1} r_k) time, where M is the length of the output vector y.
We now show how to compute the matrix \partial \mathcal{Y}(i) / \partial G_k[\tilde{i}_k, \tilde{j}_k] for any values of the core index k ∈ {1, ..., d} and \tilde{i}_k ∈ {1, ..., m_k}, \tilde{j}_k ∈ {1, ..., n_k}. For any i = (i_1, ..., i_d) such that i_k ≠ \tilde{i}_k, the value of \mathcal{Y}(i) does not depend on the elements of G_k[\tilde{i}_k, \tilde{j}_k], making the corresponding gradient equal zero. Similarly, any summand in Eq. (8) such that j_k ≠ \tilde{j}_k does not affect the gradient. These observations allow us to consider only i_k = \tilde{i}_k and j_k = \tilde{j}_k.

\mathcal{Y}(i_k^-, \tilde{i}_k, i_k^+) is a linear function of the core G_k[\tilde{i}_k, \tilde{j}_k], and its gradient equals the following expression:

\[
\frac{\partial \mathcal{Y}(i_k^-, \tilde{i}_k, i_k^+)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}
= \sum_{j_k^-, j_k^+}
\underbrace{P_k^-[i_k^-, j_k^-]^{\top}}_{r_{k-1} \times 1}\;
\underbrace{P_k^+[i_k^+, j_k^+]^{\top}}_{1 \times r_k}\;
\mathcal{X}(j_k^-, \tilde{j}_k, j_k^+). \tag{10}
\]
We denote the partial sum vector as R_k[j_k^-, \tilde{j}_k, i_k^+] ∈ R^{r_k}:

\[
R_k[j_1, \dots, j_{k-1}, \tilde{j}_k, i_{k+1}, \dots, i_d] = R_k[j_k^-, \tilde{j}_k, i_k^+]
= \sum_{j_k^+} P_k^+[i_k^+, j_k^+]\, \mathcal{X}(j_k^-, \tilde{j}_k, j_k^+).
\]

The vectors R_k[j_k^-, \tilde{j}_k, i_k^+] for all possible values of k, j_k^-, \tilde{j}_k and i_k^+ can be computed via dynamic programming (by pushing the sums w.r.t. each j_{k+1}, ..., j_d inside the equation and summing out one index at a time) in O(d r² m max{M, N}). Substituting these vectors into (10) and using
(again) dynamic programming yields all the necessary matrices for the summation (9). The overall computational complexity of the backward pass is O(d² r⁴ m max{M, N}).
The presented algorithm reduces to a sequence of matrix-by-matrix products and permutations of dimensions and thus can be accelerated on a GPU device.

Figure 1: The experiment on the MNIST dataset. We use a two-layered neural network and substitute the first 1024 × 1024 fully-connected layer with the TT-layer (solid lines) and with the matrix rank decomposition based layer (dashed line). The solid lines of different colors correspond to different ways of reshaping the input and output vectors into tensors (shapes in the legend: 32 × 32, 4 × 8 × 8 × 4, 4 × 4 × 4 × 4 × 4, and 2 × 2 × 8 × 8 × 2 × 2); the axes show the test error (%) versus the number of parameters in the weight matrix of the first layer, with the uncompressed baseline marked. To obtain the points of the plots we vary the maximal TT-rank or the matrix rank.
6 Experiments
6.1 Parameters of the TT-layer
In this experiment we investigate the properties of the TT-layer and compare different strategies for setting its parameters: the dimensions of the tensors representing the input/output of the layer and the TT-ranks of the compressed weight matrix. We run the experiment on the MNIST dataset [15] for the task of handwritten-digit recognition. As a baseline we use a neural network with two fully-connected layers (1024 hidden units) and rectified linear unit (ReLU) activations, achieving 1.9% error on the test set. For more reshaping options we resize the original 28 × 28 images to 32 × 32.
We train several networks differing in the parameters of the single TT-layer. The networks contain the following layers: the TT-layer with weight matrix of size 1024 × 1024, ReLU, and the fully-connected layer with weight matrix of size 1024 × 10. We test different ways of reshaping the input/output tensors and try different ranks of the TT-layer. As a simple compression baseline in place of the TT-layer we use a fully-connected layer such that the rank of the weight matrix is bounded (implemented as two consecutive fully-connected layers with weight matrices of sizes 1024 × r and r × 1024, where r controls the matrix rank and the compression factor). The results of the experiment are shown in Figure 1. We conclude that the TT-ranks provide much better flexibility than the matrix rank when applied at the same compression level. In addition, we observe that TT-layers with too small a number of values for each tensor dimension, or with too few dimensions, perform worse than their more balanced counterparts.
Comparison with HashedNet [4]. We consider a two-layered neural network with 1024 hidden units and replace both fully-connected layers by TT-layers. By setting all the TT-ranks in the network to 8 we achieved a test error of 1.6% with 12 602 parameters in total, and by setting all the TT-ranks to 6, a test error of 1.9% with 7 698 parameters. Chen et al. [4] report results on the same architecture. By tying random subsets of weights they compressed the network by a factor of 64, down to 12 720 parameters in total, with a test error of 2.79%.
6.2 CIFAR-10
The CIFAR-10 dataset [12] consists of 32 × 32 3-channel images assigned to 10 different classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The dataset contains 50 000 train and 10 000 test images. Following [10], we preprocess the images by subtracting the mean and performing global contrast normalization and ZCA whitening.
As a baseline we use the CIFAR-10 Quick [22] CNN, which consists of convolutional, pooling and non-linearity layers followed by two fully-connected layers of sizes 1024 × 64 and 64 × 10. We fix the convolutional part of the network and substitute the fully-connected part by a 1024 × N TT-layer
Architecture   TT-layers compr.   vgg-16 compr.   vgg-19 compr.   vgg-16 top 1   vgg-16 top 5   vgg-19 top 1   vgg-19 top 5
FC FC FC       1                  1               1               30.9           11.2           29.0           10.1
TT4 FC FC      50 972             3.9             3.5             31.2           11.2           29.8           10.4
TT2 FC FC      194 622            3.9             3.5             31.5           11.5           30.4           10.9
TT1 FC FC      713 614            3.9             3.5             33.3           12.8           31.9           11.8
TT4 TT4 FC     37 732             7.4             6               32.2           12.3           31.6           11.7
MR1 FC FC      3 521              3.9             3.5             99.5           97.6           99.8           99
MR5 FC FC      704                3.9             3.5             81.7           53.9           79.1           52.4
MR50 FC FC     70                 3.7             3.4             36.7           14.9           34.5           15.8

Table 2: Substituting the fully-connected layers with the TT-layers in the vgg-16 and vgg-19 networks on the ImageNet dataset. FC stands for a fully-connected layer; TTr stands for a TT-layer with all the TT-ranks equal to r; MRr stands for a fully-connected layer with the matrix rank restricted to r. We report the compression rate of the TT-layer matrices and of the whole network in the second, third and fourth columns.
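The per-layer compression factors in Table 2 follow directly from counting core parameters. As a sanity check (our own arithmetic), for the 25088 × 4096 layer with input modes 2 × 7 × 8 × 8 × 7 × 4, output modes 4 × 4 × 4 × 4 × 4 × 4 and all TT-ranks equal to 4 (the TT4 row):

```python
from math import prod

n = [2, 7, 8, 8, 7, 4]     # input tensor modes,  prod(n) = 25088
m = [4, 4, 4, 4, 4, 4]     # output tensor modes, prod(m) = 4096
r = [1, 4, 4, 4, 4, 4, 1]  # TT-ranks r_0, ..., r_d

tt_params = sum(m[k] * n[k] * r[k] * r[k + 1] for k in range(6))  # 2016
print(prod(m) * prod(n) / tt_params)  # about 50 972, matching the TT4 row
```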
followed by ReLU and by an N × 10 fully-connected layer. With N = 3125 hidden units (compared to 64 in the original network) we achieve a test error of 23.13% without fine-tuning, which is slightly better than the test error of the baseline (23.25%). The TT-layer treated the input and output vectors as 4 × 4 × 4 × 4 × 4 and 5 × 5 × 5 × 5 × 5 tensors respectively. All the TT-ranks equal 8, making the number of parameters in the TT-layer equal to 4 160. The compression rate of the TensorNet compared with the baseline w.r.t. all the parameters is 1.24. In addition, substituting both fully-connected layers by TT-layers yields a test error of 24.39% and reduces the number of parameters of the fully-connected layer matrices by a factor of 11.9 and the total parameter number by a factor of 1.7.
For comparison, in [6] the fully-connected layers of a CIFAR-10 CNN were compressed by a factor of at most 4.7 with a loss of about 2% in accuracy.
6.2.1 Wide and shallow network
With a sufficient number of hidden units, even a neural network with two fully-connected layers and sigmoid non-linearity can approximate any decision boundary [5]. Traditionally, very wide shallow networks are not considered because of high computational and memory demands and the risk of overfitting. TensorNet can potentially address both issues. We use a three-layered TensorNet of the following architecture: a TT-layer with weight matrix of size 3 072 × 262 144, ReLU, a TT-layer with weight matrix of size 262 144 × 4 096, ReLU, and a fully-connected layer with weight matrix of size 4 096 × 10. We report a test error of 31.47%, which is (to the best of our knowledge) the best result achieved by a non-convolutional neural network.
6.3 ImageNet
In this experiment we evaluate the TT-layers on a large scale task. We consider the 1000-class ImageNet ILSVRC-2012 dataset [19], which consists of 1.2 million training images and 50 000 validation images. We use the deep CNNs vgg-16 and vgg-19 [21] as the reference models². Both networks consist of two parts: the convolutional and the fully-connected parts. In both networks the second part consists of 3 fully-connected layers with weight matrices of sizes 25088 × 4096, 4096 × 4096 and 4096 × 1000.
In each network we substitute the first fully-connected layer with the TT-layer. To do this we reshape the 25088-dimensional input vectors into tensors of size 2 × 7 × 8 × 8 × 7 × 4 and the 4096-dimensional output vectors into tensors of size 4 × 4 × 4 × 4 × 4 × 4. The remaining fully-connected layers are initialized randomly. The parameters of the convolutional parts are kept fixed as trained by Simonyan and Zisserman [21]. We train the TT-layer and the fully-connected layers on the training set. In Table 2 we vary the ranks of the TT-layer and report the compression factor of the TT-layers (vs. the original fully-connected layer), the resulting compression factor of the whole network, and the top 1 and top 5 errors on the validation set. In addition, we substitute the second fully-connected layer with the TT-layer. As a baseline compression method we constrain the matrix rank of the weight matrix of the first fully-connected layer using the approach of [2].
² After we had started to experiment on the vgg-16 network, the vgg-* networks were improved by the authors. Thus, we report the results on a slightly outdated version of vgg-16 and the up-to-date version of vgg-19.
Type                        1 im. time (ms)   100 im. time (ms)
CPU fully-connected layer   16.1              97.2
CPU TT-layer                1.2               94.7
GPU fully-connected layer   2.7               33
GPU TT-layer                1.9               12.9

Table 3: Inference time for a 25088 × 4096 fully-connected layer and its corresponding TT-layer with all the TT-ranks equal 4. The memory usage for feeding forward one image is 392 MB for the fully-connected layer and 0.766 MB for the TT-layer.
In Table 2 we observe that the TT-layer in the best case manages to reduce the number of parameters in the matrix W of the largest fully-connected layer by a factor of 194 622 (from 25088 × 4096 parameters to 528) while increasing the top 5 error from 11.2 to 11.5. The compression factor of the whole network remains at the level of 3.9 because the TT-layer stops being the storage bottleneck. By compressing the largest of the remaining layers the compression factor goes up to 7.4. The baseline method, when providing similar compression rates, significantly increases the error.
For comparison, consider the results of [26] obtained for the compression of the fully-connected layers of a Krizhevsky-type network [13] with the Fastfood method. That model achieves compression factors of 2-3 without decreasing the network error.
6.4 Implementation details
In all experiments we use our MATLAB extension³ of the MatConvNet framework⁴ [24]. For the operations related to the TT-format we use the TT-Toolbox⁵, implemented in MATLAB as well. The experiments were performed on a computer with a quad-core Intel Core i5-4460 CPU, 16 GB RAM and a single NVidia GeForce GTX 980 GPU. We report the running times and the memory usage of the forward pass of the TT-layer and the baseline fully-connected layer in Table 3.
We train all the networks with stochastic gradient descent with momentum (coefficient 0.9). We initialize all the parameters of the TT- and fully-connected layers with Gaussian noise and put L2-regularization (weight 0.0005) on them.
7 Discussion and future work
Recent studies indicate high redundancy in current neural network parametrizations. To exploit this redundancy we propose to use the TT-decomposition framework on the weight matrix of a fully-connected layer and to use the cores of the decomposition as the parameters of the layer. This allows us to train fully-connected layers compressed by up to 200 000× compared with the explicit parametrization, without significant error increase. Our experiments show that it is possible to capture complex dependencies within the data by using much more compact representations. On the other hand, it becomes possible to use much wider layers than was available before, and preliminary experiments on the CIFAR-10 dataset show that wide and shallow TensorNets achieve promising results (setting a new state-of-the-art for non-convolutional neural networks).
Another appealing property of the TT-layer is faster inference time (compared with the corresponding fully-connected layer). All in all, a wide and shallow TensorNet can become a time- and memory-efficient model to use in real time applications and on mobile devices.
The main limiting factor for an M × N fully-connected layer is its number of parameters MN. The limiting factor for an M × N TT-layer is the maximal linear size max{M, N}. As future work we plan to consider the inputs and outputs of layers in the TT-format, thus completely eliminating the dependency on M and N and allowing billions of hidden units in a TT-layer.
Acknowledgements. We would like to thank Ivan Oseledets for valuable discussions. A. Novikov, D. Podoprikhin, and D. Vetrov were supported by RFBR project No. 15-31-20596 (mol-a-ved) and by the Microsoft: Moscow State University Joint Research Center (RPD 1053945). A. Osokin was supported by the MSR-INRIA Joint Center. The results of the tensor toolbox application (in Sec. 6) are supported by Russian Science Foundation No. 14-11-00659.
³ https://github.com/Bihaqo/TensorNet
⁴ http://www.vlfeat.org/matconvnet/
⁵ https://github.com/oseledets/TT-Toolbox
References
[1] K. Asanović and N. Morgan, "Experimental determination of precision requirements for back-propagation training of artificial neural networks," International Computer Science Institute, Tech. Rep., 1991.
[2] J. Ba and R. Caruana, "Do deep nets really need to be deep?" in Advances in Neural Information Processing Systems 27 (NIPS), 2014, pp. 2654–2662.
[3] J. D. Carroll and J. J. Chang, "Analysis of individual differences in multidimensional scaling via n-way generalization of Eckart-Young decomposition," Psychometrika, vol. 35, pp. 283–319, 1970.
[4] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, "Compressing neural networks with the hashing trick," in International Conference on Machine Learning (ICML), 2015, pp. 2285–2294.
[5] G. Cybenko, "Approximation by superpositions of a sigmoidal function," Mathematics of Control, Signals and Systems, pp. 303–314, 1989.
[6] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. de Freitas, "Predicting parameters in deep learning," in Advances in Neural Information Processing Systems 26 (NIPS), 2013, pp. 2148–2156.
[7] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, "Exploiting linear structure within convolutional networks for efficient evaluation," in Advances in Neural Information Processing Systems 27 (NIPS), 2014, pp. 1269–1277.
[8] E. Gilboa, Y. Saati, and J. P. Cunningham, "Scaling multidimensional inference for structured Gaussian processes," arXiv preprint, no. 1209.4120, 2012.
[9] Y. Gong, L. Liu, M. Yang, and L. Bourdev, "Compressing deep convolutional networks using vector quantization," arXiv preprint, no. 1412.6115, 2014.
[10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, "Maxout networks," in International Conference on Machine Learning (ICML), 2013, pp. 1319–1327.
[11] W. Hackbusch and S. Kühn, "A new scheme for the tensor representation," J. Fourier Anal. Appl., vol. 15, pp. 706–722, 2009.
[12] A. Krizhevsky, "Learning multiple layers of features from tiny images," Master's thesis, Computer Science Department, University of Toronto, 2009.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25 (NIPS), 2012, pp. 1097–1105.
[14] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, "Speeding-up convolutional neural networks using fine-tuned CP-decomposition," in International Conference on Learning Representations (ICLR), 2014.
[15] Y. LeCun, C. Cortes, and C. J. C. Burges, "The MNIST database of handwritten digits," 1998.
[16] A. Novikov, A. Rodomanov, A. Osokin, and D. Vetrov, "Putting MRFs on a Tensor Train," in International Conference on Machine Learning (ICML), 2014, pp. 811–819.
[17] I. V. Oseledets, "Tensor-Train decomposition," SIAM J. Scientific Computing, vol. 33, no. 5, pp. 2295–2317, 2011.
[18] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[19] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," International Journal of Computer Vision (IJCV), 2015.
[20] T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran, "Low-rank matrix factorization for deep neural network training with high-dimensional output targets," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013, pp. 6655–6659.
[21] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations (ICLR), 2015.
[22] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," in Advances in Neural Information Processing Systems 25 (NIPS), 2012, pp. 2951–2959.
[23] L. R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
[24] A. Vedaldi and K. Lenc, "MatConvNet: convolutional neural networks for MATLAB," in Proceedings of the ACM Int. Conf. on Multimedia.
[25] J. Xue, J. Li, and Y. Gong, "Restructuring of deep neural network acoustic models with singular value decomposition," in Interspeech, 2013, pp. 2365–2369.
[26] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang, "Deep fried convnets," arXiv preprint, no. 1412.7149, 2014.
[27] Z. Zhang, X. Yang, I. V. Oseledets, G. E. Karniadakis, and L. Daniel, "Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, pp. 63–76, 2014.
Thouless-Anderson-Palmer Free Energy
Marylou Gabri?e
Eric W. Tramel
Florent Krzakala
Laboratoire de Physique Statistique, UMR 8550 CNRS
?
Ecole
Normale Sup?erieure & Universit?e Pierre et Marie Curie
75005 Paris, France
{marylou.gabrie, eric.tramel}@lps.ens.fr, florent.krzakala@ens.fr
Abstract
Restricted Boltzmann machines are undirected neural networks which have been
shown to be effective in many applications, including serving as initializations
for training deep multi-layer neural networks. One of the main reasons for their
success is the existence of efficient and practical stochastic algorithms, such as
contrastive divergence, for unsupervised training. We propose an alternative deterministic iterative procedure based on an improved mean field method from statistical physics known as the Thouless-Anderson-Palmer approach. We demonstrate
that our algorithm provides performance equal to, and sometimes superior to, persistent contrastive divergence, while also providing a clear and easy to evaluate
objective function. We believe that this strategy can be easily generalized to other
models as well as to more accurate higher-order approximations, paving the way
for systematic improvements in training Boltzmann machines with hidden units.
1
Introduction
A restricted Boltzmann machine (RBM) [1, 2] is a type of undirected neural network with surprisingly many applications. This model has been used in problems as diverse as dimensionality
reduction [3], classification [4], collaborative filtering [5], feature learning [6], and topic modeling
[7]. Also, quite remarkably, it has been shown that generative RBMs can be stacked into multi-layer
neural networks, forming an initialization for deep network architectures [8, 9]. Such deep architectures are believed to be crucial for learning high-order representations and concepts. Although the
amount of training data available in practice has made pretraining of deep nets dispensable for supervised tasks, RBMs remain at the core of unsupervised learning, a key area for future developments
in machine intelligence [10].
While the training procedure for RBMs can be written as a log-likelihood maximization, an exact implementation of this approach is computationally intractable for all but the smallest models.
However, fast stochastic Monte Carlo methods, specifically contrastive divergence (CD) [2] and persistent CD (PCD) [11, 12], have made large-scale RBM training both practical and efficient. These
methods have popularized RBMs even though it is not entirely clear why such approximate methods
should work as well as they do.
In this paper, we propose an alternative deterministic strategy for training RBMs, and neural networks with hidden units in general, based on the so-called mean-field, and extended mean-field,
methods of statistical mechanics. This strategy has been used to train neural networks in a number of earlier works [13, 14, 15, 16, 17]. In fact, for entirely visible networks, the use of adaptive
cluster expansion mean-field methods has lead to spectacular results in learning Boltzmann machine
representations [18, 19].
However, unlike these fully visible models, the hidden units of the RBM must be taken into account
during the training procedure. In 2002, Welling and Hinton [17] presented a similar deterministic
mean-field learning algorithm for general Boltzmann machines with hidden units, considering it a
priori as a potentially efficient extension of CD. In 2008, Tieleman [12] tested the method in detail
for RBMs and found it provided poor performance when compared to both CD and PCD. In the
wake of these two papers, little inquiry has been made in this direction, with the apparent consensus
being that the deterministic mean-field approach is ineffective for RBM training.
Our goal is to challenge this consensus by going beyond na??ve mean field, a mere first-order approximation, by introducing second-, and possibly third-, order terms. In principle, it is even possible to
extend the approach to arbitrary order. Using this extended mean-field approximation, commonly
known as the Thouless-Anderson-Palmer [20] approach in statistical physics, we find that RBM
training performance is significantly improved over the na??ve mean-field approximation and is even
comparable to PCD. The clear and easy to evaluate objective function, along with the extensible
nature of the approximation, paves the way for systematic improvements in learning efficiency.
2 Training restricted Boltzmann machines
A restricted Boltzmann machine, which can be viewed as a two-layer undirected bipartite neural network, is a specific case of an energy-based model wherein a layer of visible units is fully connected to a layer of hidden units. Let us denote the binary visible and hidden units, indexed by i and j respectively, as v_i and h_j. The energy of a given state, v = {v_i}, h = {h_j}, of the RBM is given by

\[
E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j, \tag{1}
\]
where Wij are the entries of the matrix specifying the weights, or couplings, between the visible and
hidden units, and ai and bj are the biases, or the external fields in the language of statistical physics,
of the visible and hidden units, respectively. Thus, the set of parameters {Wij , ai , bj } defines the
RBM model.
The joint probability distribution over the visible and hidden units is given by the Gibbs-Boltzmann measure P(v, h) = Z^{-1} e^{-E(v,h)}, where Z = Σ_{v,h} e^{-E(v,h)} is the normalization constant known as the partition function in physics. For a given data point, represented by v, the marginal of the RBM is calculated as P(v) = Σ_h P(v, h). Writing this marginal of v in terms of its log-likelihood results in the difference

\[
\mathcal{L} = \ln P(\mathbf{v}) = -F^c(\mathbf{v}) + F, \tag{2}
\]

where F = -ln Z is the free energy of the RBM, and F^c(v) = -ln( Σ_h e^{-E(v,h)} ) can be interpreted as a free energy as well, but with the visible units fixed to the training data point v. Hence, F^c is referred to as the clamped free energy.
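Concretely, because the hidden units decouple given v, the clamped free energy has the closed form F^c(v) = -Σ_i a_i v_i - Σ_j ln(1 + e^{b_j + Σ_i v_i W_ij}). A minimal NumPy sketch of this formula, with our own naming conventions and W stored as a visible-by-hidden matrix:

```python
import numpy as np

def clamped_free_energy(v, W, a, b):
    """F^c(v) = -a.v - sum_j ln(1 + exp(b_j + (v W)_j)).

    The hidden units are summed out analytically thanks to the
    bipartite structure; v is a binary visible configuration.
    """
    return -v @ a - np.sum(np.logaddexp(0.0, b + v @ W))
```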
One of the most important features of the RBM model is that F^c can be easily computed, as h may be summed out analytically since the hidden units are conditionally independent of the visible units, owing to the RBM's bipartite structure. However, calculating F is computationally intractable since the number of possible states to sum over scales combinatorially with the number of units in the model. This complexity frustrates the exact computation of the gradients of the log-likelihood
needed in order to train the RBM parameters via gradient ascent. Monte Carlo methods for RBM training rely on the observation that ∂F/∂W_ij = -P(v_i = 1, h_j = 1), which can be simulated at a lower computational cost. Nevertheless, drawing independent samples from the model in order to approximate this derivative is itself computationally expensive, and often approximate sampling algorithms, such as CD or PCD, are used instead.
3 Extended mean field theory of RBMs
Here, we present a physics-inspired tractable estimation of the free energy F of the RBM. This
approximation is based on a high temperature expansion of the free energy derived by Georges and
Yedidia in the context of spin glasses [21] following the pioneering works of [20, 22]. We refer the
reader to [23] for a review of this topic.
To apply the Georges-Yedidia expansion to the RBM free energy, we start with a general energy based model which possesses arbitrary couplings W_ij between undifferentiated binary spins s_i ∈ {0, 1}, such that the energy of the Gibbs-Boltzmann measure on the configuration s = {s_i} is defined by E(s) = -Σ_i a_i s_i - Σ_{(i,j)} W_ij s_i s_j¹. We also restore the role of the temperature, usually considered constant and for simplicity set to 1 in most energy based models, by multiplying the energy functional in the Boltzmann weight by the inverse temperature β.
Next, we apply a Legendre transform to the free energy, a standard procedure in statistical physics, by first writing the free energy as a function of a newly introduced auxiliary external field q = {q_i}: -βF[q] = ln Σ_s e^{-βE(s) + β Σ_i q_i s_i}. This external field will eventually be set to the value q = 0 in order to recover the true free energy. The Legendre transform Γ is then given as a function of the
conjugate variable m = {m_i} by maximizing over q:

\[
-\beta \Gamma[\mathbf{m}] = -\beta \max_{\mathbf{q}}\Big[ F[\mathbf{q}] + \sum_i q_i m_i \Big] = -\beta \Big( F[\mathbf{q}^*[\mathbf{m}]] + \sum_i q_i^*[\mathbf{m}]\, m_i \Big), \tag{3}
\]

where the maximizing auxiliary field q*[m], a function of the conjugate variables, is the inverse function of m[q] ≡ -dF/dq. Since the derivative dF/dq is exactly equal to -⟨s⟩, where the operator ⟨·⟩ refers to the average configuration under the Boltzmann measure, the conjugate variable m is in fact the equilibrium magnetization vector ⟨s⟩. Finally, we observe that the free energy is also the inverse Legendre transform of its Legendre transform at q = 0,

\[
-\beta F = -\beta F[\mathbf{q} = 0] = -\beta \min_{\mathbf{m}} \Gamma[\mathbf{m}] = -\beta \Gamma[\mathbf{m}^*], \tag{4}
\]

where m* minimizes Γ, which yields an expression of the free energy in terms of the magnetization
vector. Following [22, 21], this formulation allows us to perform a high temperature expansion of A(β, m) ≡ -βΓ[m] around β = 0 at fixed m,

\[
A(\beta, \mathbf{m}) = A(0, \mathbf{m}) + \beta \left.\frac{\partial A(\beta, \mathbf{m})}{\partial \beta}\right|_{\beta=0} + \frac{\beta^2}{2} \left.\frac{\partial^2 A(\beta, \mathbf{m})}{\partial \beta^2}\right|_{\beta=0} + \cdots, \tag{5}
\]
where the dependence on β of the product βq must carefully be taken into account. At infinite temperature, β = 0, the spins decorrelate, causing the average value of an arbitrary product of spins to equal the product of their local magnetizations; a useful property. Accounting for binary spins taking values in {0, 1}, one obtains the following expansion:

\[
\begin{aligned}
-\beta \Gamma(\mathbf{m}) = & -\sum_i \left[ m_i \ln m_i + (1 - m_i)\ln(1 - m_i) \right] + \beta \sum_i a_i m_i + \beta \sum_{(i,j)} W_{ij} m_i m_j \\
& + \frac{\beta^2}{2} \sum_{(i,j)} W_{ij}^2 \,(m_i - m_i^2)(m_j - m_j^2) \\
& + \frac{2\beta^3}{3} \sum_{(i,j)} W_{ij}^3 \,(m_i - m_i^2)\Big(\tfrac{1}{2} - m_i\Big)(m_j - m_j^2)\Big(\tfrac{1}{2} - m_j\Big) \\
& + \beta^3 \sum_{(i,j,k)} W_{ij} W_{jk} W_{ki} \,(m_i - m_i^2)(m_j - m_j^2)(m_k - m_k^2) + \cdots \tag{6}
\end{aligned}
\]
The zeroth-order term corresponds to the entropy of non-interacting spins with constrained magnetization values. Taking this expansion up to the first-order term, we recover the standard naïve mean-field theory. The second-order term is known as the Onsager reaction term in the TAP equations [20]. The higher-order terms are systematic corrections which were first derived in [21].
Returning to the RBM notation and truncating the expansion at second-order for the remainder of
the theoretical discussion, we have

\[
\Gamma(\mathbf{m}^v, \mathbf{m}^h) \approx -S(\mathbf{m}^v, \mathbf{m}^h) - \sum_i a_i m_i^v - \sum_j b_j m_j^h - \sum_{i,j} \Big[ W_{ij}\, m_i^v m_j^h + \frac{W_{ij}^2}{2} \big(m_i^v - (m_i^v)^2\big)\big(m_j^h - (m_j^h)^2\big) \Big], \tag{7}
\]

¹ The notations Σ_{(i,j)} and Σ_{(i,j,k)} refer to the sums over the distinct pairs and triplets of spins, respectively.
where S is the entropy contribution, m^v and m^h are introduced to denote the magnetizations of the visible and hidden units, and β is set equal to 1. Eq. (7) can be viewed as a weak coupling expansion in W_ij. To recover an estimate of the RBM free energy, Eq. (7) must be minimized with respect to its arguments, as in Eq. (4). Lastly, by writing the stationarity condition dΓ/dm = 0, we obtain the self-consistency constraints on the magnetizations. At second order we obtain the following constraint on the visible magnetizations,

\[
m_i^v \approx \mathrm{sigm}\Big[ a_i + \sum_j \Big( W_{ij}\, m_j^h - W_{ij}^2 \Big( m_i^v - \tfrac{1}{2} \Big)\Big( m_j^h - (m_j^h)^2 \Big) \Big) \Big], \tag{8}
\]
where sigm[x] = (1 + e^{-x})^{-1} is the logistic sigmoid function. A similar constraint must be satisfied for the hidden units as well. Clearly, the stationarity condition for Γ obtained at order n utilizes terms up to the n-th order within the sigmoid argument of these consistency relations. Whatever the order of the approximation, the magnetizations are the solutions of a set of non-linear coupled equations of the same cardinality as the number of units in the model. Finally, provided we can define a procedure to efficiently derive the value of the magnetizations satisfying these constraints, we obtain an extended mean-field approximation of the free energy which we denote as F^EMF.
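Once fixed-point magnetizations are available, the second-order estimate F^EMF = Γ(m^{v*}, m^{h*}) of Eq. (7) takes only a few lines of code; the sketch below uses our own conventions (W of shape visible-by-hidden):

```python
import numpy as np

def emf_free_energy(W, a, b, mv, mh):
    """Second-order EMF free energy, Eq. (7), at magnetizations (mv, mh)."""
    def neg_entropy(m):
        # m log m + (1 - m) log(1 - m), i.e., minus the binary entropy
        return np.sum(m * np.log(m) + (1.0 - m) * np.log(1.0 - m))

    onsager = 0.5 * (mv - mv**2) @ (W**2) @ (mh - mh**2)
    return (neg_entropy(mv) + neg_entropy(mh)
            - a @ mv - b @ mh - mv @ W @ mh - onsager)
```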
4 RBM evaluation and unsupervised training with EMF
4.1 An iteration for calculating F^EMF
Recalling the log-likelihood of the RBM, L = -F^c(v) + F, we have shown that a tractable approximation of F, F^EMF, is obtained via a weak coupling expansion so long as one can solve the coupled system of equations over the magnetizations shown in Eq. (8). In the spirit of iterative belief propagation [23], we propose that these self-consistency relations can serve as update rules for the magnetizations within an iterative algorithm. In fact, the convergence of this procedure has been rigorously demonstrated in the context of random spin glasses [24]. We expect that these convergence properties will remain present even for real data. The iteration over the self-consistency relations for both the hidden and visible magnetizations can be written using the time index t as

\[
m_j^h[t+1] \leftarrow \mathrm{sigm}\Big[ b_j + \sum_i \Big( W_{ij}\, m_i^v[t] - W_{ij}^2 \Big( m_j^h[t] - \tfrac{1}{2} \Big)\Big( m_i^v[t] - (m_i^v[t])^2 \Big) \Big) \Big], \tag{9}
\]
\[
m_i^v[t+1] \leftarrow \mathrm{sigm}\Big[ a_i + \sum_j \Big( W_{ij}\, m_j^h[t+1] - W_{ij}^2 \Big( m_i^v[t] - \tfrac{1}{2} \Big)\Big( m_j^h[t+1] - (m_j^h[t+1])^2 \Big) \Big) \Big], \tag{10}
\]
where the time indexing follows from application of [24]. The values of m^v and m^h minimizing Γ(m^v, m^h), and thus providing the value of F^EMF, are obtained by running Eqs. (9, 10) until they converge to a fixed point. We note that while we present an iteration to find F^EMF up to second order above, third-order terms can easily be introduced into the procedure.
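A compact vectorized sketch of the updates (9, 10) in NumPy (our own form; W is visible-by-hidden, and in practice a damping factor on the updates can help stability):

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def tap_iterate(W, a, b, mv, mh, n_iter=3):
    """Run Eqs. (9, 10): update the hidden magnetizations from mv[t],
    then the visible magnetizations from the fresh mh[t+1]."""
    W2 = W ** 2
    for _ in range(n_iter):
        mh = sigm(b + mv @ W - (mh - 0.5) * ((mv - mv**2) @ W2))
        mv = sigm(a + W @ mh - (mv - 0.5) * (W2 @ (mh - mh**2)))
    return mv, mh
```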
4.2 Deterministic EMF training
By using the EMF estimation of F , and the iterative algorithm detailed in the previous section to
calculate it, it is now possible to estimate the gradients of the log-likelihood used for unsupervised
training of the RBM model by substituting F with F^EMF. We note that the deterministic iteration
we propose for estimating F is in stark contrast with the stochastic sampling procedures utilized in
CD and PCD to the same end. The gradient ascent update of the weight W_ij is approximated as
\[
\Delta W_{ij} \propto \frac{\partial \mathcal{L}}{\partial W_{ij}} \approx -\frac{\partial F^c}{\partial W_{ij}} + \frac{\partial F^{\mathrm{EMF}}}{\partial W_{ij}}, \tag{11}
\]

where \partial F^{\mathrm{EMF}} / \partial W_{ij} can be computed by differentiating Eq. (7) at fixed m^v and m^h and evaluating this derivative at the fixed points of Eqs. (9, 10) obtained from the iterative procedure. The gradients with respect to the visible and hidden biases can be derived similarly. Interestingly, \partial F^{\mathrm{EMF}} / \partial a_i and \partial F^{\mathrm{EMF}} / \partial b_j are merely the fixed-point magnetizations of the visible and hidden units, m_i^v and m_j^h, respectively.
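Putting the pieces together, the weight gradient for a single data point combines the exact clamped statistics with the term obtained by differentiating Eq. (7) at the fixed point (since dΓ/dm = 0 there, only the explicit W-dependence contributes). A sketch under our conventions:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def emf_weight_gradient(v, W, b, mv, mh):
    """dL/dW = -dF^c/dW + dF^EMF/dW for one data point v, where
    (mv, mh) is a fixed point of Eqs. (9, 10)."""
    h_cond = sigm(b + v @ W)       # exact conditional hidden means given v
    clamped = np.outer(v, h_cond)  # -dF^c/dW
    # dF^EMF/dW from Eq. (7): mean-field term plus Onsager correction
    model = -np.outer(mv, mh) - W * np.outer(mv - mv**2, mh - mh**2)
    return clamped + model
```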
A priori, the training procedure sketched above can be used at any order of the weak coupling expansion. The training algorithm introduced in [17], which was shown to perform poorly for RBM training in [12], can be recovered by retaining only the first order of the expansion when calculating F^EMF. Taking F^EMF to second order, we expect that training efficiency and performance will be greatly improved over [17]. In fact, including the third-order term in the training algorithm is just as easy as including the second-order one, due to the fact that the particular structure of the RBM model does not admit triangles in its corresponding factor graphs. Although the third-order term in Eq. (6) does include a sum over distinct pairs of units, as well as a sum over coupled triplets of units, such triplets are excluded by the bipartite structure of the RBM. However, coupled quadruplets do contribute to the fourth-order term, and therefore fourth- and higher-order approximations require much more expensive computations [21], though it is possible to utilize adaptive procedures [19].
5 Numerical experiments
5.1 Experimental framework
To evaluate the performance of the proposed deterministic EMF RBM training algorithm¹, we perform a number of numerical experiments over two separate datasets and compare these results with both CD-1 and PCD. We first use the MNIST dataset of labeled handwritten digit images [25]. The dataset is split between 60 000 training images and 10 000 test images. Both subsets contain approximately the same fraction of the ten digit classes (0 to 9). Each image is comprised of 28 × 28 pixels taking values in the range [0, 255]. The MNIST dataset was binarized by setting all non-zero pixels to 1 in all experiments.
Second, we use the 28 × 28 pixel version of the Caltech 101 Silhouette dataset [26]. Constructed
from the Caltech 101 image dataset, the silhouette dataset consists of black regions of the primary
foreground scene objects on a white background. The images are labeled according to the object in
the original picture, of which there are 101 unevenly represented object labels. The dataset is split
between a training (4 100 images), a validation (2 264 images), and a test (2 304 images) set.
For both datasets, the RBM models require 784 visible units. Following previous studies evaluating
RBMs on these datasets, we fix the number of RBM hidden units to 500 in all our experiments. During training, we adopt the mini-batch learning procedure for gradient averaging, with 100 training
points per batch for MNIST and 256 training points per batch for Caltech 101 Silhouette.
We test the EMF learning algorithm presented in Section 4.2 in various settings. First, we compare implementations utilizing the first-order (MF), second-order (TAP2), and third-order (TAP3)
approximations of F . Higher orders were not considered due to their greater complexity. Next,
we investigate training quality when the self-consistency relations on the magnetizations are not converged when calculating the derivatives of F^EMF, but instead iterated for a small, fixed (3) number of times, an approach similar to CD. Furthermore, we also evaluate a "persistent" version of our algorithm, similar to [12]. As in PCD, the iterative EMF procedure possesses multiple initialization-dependent fixed-point magnetizations. Converging multiple chains allows us to collect proper statistics on these basins of attraction. In this implementation, the magnetizations of a set of points,
dubbed fantasy particles, are updated and maintained throughout the training in order to estimate
F . This persistent procedure takes advantage of the fact that the RBM-defined Boltzmann measure
changes only slightly between parameter updates. Convergence to the new fixed point magnetizations at each minibatch should therefore be sped up by initializing with the converged state from
the previous update. Our final experiments consist of persistent training algorithms using 3 iterations of the magnetization self-consistency relations (P-MF, P-TAP2 and P-TAP3) and one persistent
training algorithm using 30 iterations (P-TAP2-30) for comparison.
For comparison, we also train RBM models using CD-1, following the prescriptions of [27], and
PCD, as implemented in [12]. Given that our goal is to compare RBM training approaches rather
than achieving the best possible training across all free parameters, neither momentum nor adaptive
learning rates were included in any of the implementations tested. However, we do employ a weight
¹ Available as a Julia package at https://github.com/sphinxteam/Boltzmann.jl
Figure 1: Estimates of the per-sample log-likelihood over the MNIST test set, normalized by the total number of units, as a function of the number of training epochs. The results for the different training algorithms (CD-1, PCD, TAP2, P-MF, P-TAP2, P-TAP2-30, P-TAP3) are plotted in different colors, with the same color code used for both panels. Left panel: pseudo log-likelihood estimate. The difference between the EMF algorithms and the contrastive divergence algorithms is minimal. Right panel: EMF log-likelihood estimate at second order. The improvement from MF to TAP is clear. Perhaps reasonably, TAP demonstrates an advantage over CD and PCD. Notice how the second-order EMF approximation of L provides less noisy estimates, at a lower computational cost.
When comparing learning procedures on the same plot, all free parameters of the training (e.g., learning rate, weight decay, etc.) were set identically. All results are presented as averages over 10 independent trainings, with standard deviations reported as error bars.
5.2 Relevance of the EMF log-likelihood
Our first observation is that the EMF training algorithms do not require laborious tuning. The free parameters relevant for the PCD and CD-1 procedures were found to be equally
well suited for the EMF training algorithms. In fact, as shown in the left panel of Fig. 1, and the
right inset of Fig. 3, the ascent of the pseudo log-likelihood over training epochs is very similar
between the EMF training methods and both the CD-1 and PCD trainings.
Interestingly, for the Caltech 101 Silhouettes dataset, it seems that the persistent algorithms tested
have difficulties in ascending the pseudo-likelihood in the first epochs of training. This contradicts
the common belief that persistence yields more accurate approximations of the likelihood gradients.
The complexity of the training set, 101 classes unevenly represented over only 4,100 training points, might explain this unexpected behavior. The persistent fantasy particles all converge to similar non-informative blurs in the earliest training epochs, with many epochs being required to resolve the
particles to a distribution of values which are informative about the pseudo log-likelihood.
Examining the fantasy particles also gives an idea of the performance of the RBM as a generative
model. In Fig. 2, 24 randomly chosen fantasy particles from the 50th epoch of training with PCD,
P-MF, and P-TAP2 are displayed. The RBM trained with PCD generates recognizable digits, yet
the model seems to have trouble generating several digit classes, such as 3, 8, and 9. The fantasy
particles extracted from a P-MF training are of poorer quality, with half of the drawn particles
featuring non-identifiable digits. The P-TAP2 algorithm, however, appears to provide qualitative
improvements. All digits can be visually discerned, with visible defects found only in two of the
particles. These particles seem to indicate that it is indeed possible to efficiently persistently train
an RBM without converging on the fixed point of the magnetizations.
The relevance of the EMF log-likelihood for RBM training is further confirmed in the right panel
of Fig. 1, where we observe that both CD-1 and PCD ascend the second-order EMF log-likelihood,
even though they are not explicitly constructed to optimize over this objective. As expected, the
persistent TAP2 algorithm with 30 iterations of the magnetizations (P-TAP2-30) achieves the best
maximization of L^EMF. However, P-TAP2, with only 3 iterations of the magnetizations, achieves
very similar performance, perhaps making it preferable when a faster training algorithm is desired.
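To illustrate how such an estimate can be computed, here is a sketch of the two ingredients (our own Python/NumPy reconstruction under the second-order truncation; the exact bookkeeping in the authors' code may differ): ln Z is approximated by minus the TAP2 free energy at fixed-point magnetizations, while the clamped free energy of an RBM is available in closed form.

    import numpy as np

    def tap2_ln_z(W, a, b, mv, mh, eps=1e-12):
        """Approximate ln Z as minus the TAP2 Gibbs free energy at the fixed
        point: entropy + a.mv + b.mh + mv.W.mh + (1/2) sum_ij W_ij^2 var_i var_j,
        i.e. the naive mean field terms plus the second-order (Onsager) correction."""
        mv = np.clip(mv, eps, 1.0 - eps)
        mh = np.clip(mh, eps, 1.0 - eps)
        entropy = -(mv @ np.log(mv) + (1 - mv) @ np.log(1 - mv)
                    + mh @ np.log(mh) + (1 - mh) @ np.log(1 - mh))
        naive = a @ mv + b @ mh + mv @ W @ mh
        onsager = 0.5 * np.sum(W ** 2 * np.outer(mv - mv ** 2, mh - mh ** 2))
        return entropy + naive + onsager

    def emf_log_likelihood(W, a, b, v, ln_z):
        """EMF estimate of ln p(v). The clamped free energy of an RBM is exact:
        F(v) = -a.v - sum_j softplus(b_j + (W^T v)_j), and ln p(v) = -F(v) - ln Z."""
        f_clamped = -(v @ a) - np.sum(np.logaddexp(0.0, b + W.T @ v))
        return -f_clamped - ln_z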
Figure 2: Fantasy particles generated by a 500 hidden unit RBM after 50 epochs of training on
the MNIST dataset with PCD (top two rows), P-MF (middle two rows) and P-TAP2 (bottom two
rows). These fantasy particles represent typical samples generated by the trained RBM when used as
a generative prior for handwritten numbers. The samples generated by P-TAP2 are of similar subjective quality to those generated by PCD, and perhaps slightly preferable, while certainly preferable to those generated by P-MF.
Moreover, we note that although P-TAP2 demonstrates improvements with respect to P-MF, the P-TAP3 does not yield significantly better results than P-TAP2. This is perhaps not surprising, since the third-order term of the EMF expansion consists of a sum over as many terms as the second-order one, but at a smaller order in {W_ij}.
Lastly, we note the computation times for each of these approaches. For a Julia implementation of
the tested RBM training techniques running on a 3.2 GHz Intel i5 processor, we report the 10-trial average wall times for fitting a single 100-sample batch, normalized against the model complexity. PCD, which uses only a single sampling step, required 14.10 ± 0.97 μs/batch/unit. The three EMF techniques, P-MF, P-TAP2, and P-TAP3, each of which use 3 magnetization iterations, required 21.25 ± 0.22, 37.22 ± 0.34, and 64.88 ± 0.45 μs/batch/unit, respectively. If fewer magnetization iterations are required, as we have empirically observed in limited tests, then the run times of the P-MF and P-TAP2 approaches are commensurate with PCD.
5.3 Classification task performance
We also evaluate these RBM training algorithms from the perspective of supervised classification.
An RBM can be interpreted as a deterministic function mapping the binary visible unit values to
the real-valued hidden unit magnetizations. In this case, the hidden unit magnetizations represent
the contributions of some learned features. Although no supervised fine-tuning of the weights is
implemented, we tested the quality of the features learned by the different training algorithms by
their usefulness in classification tasks. For both datasets, a logistic regression classifier was calibrated on the hidden-unit magnetizations mapped from the labeled training images, using the scikit-learn toolbox [28]. We purposely avoid using more sophisticated classification algorithms in order to place emphasis on the quality of the RBM training, not the classification method.
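A minimal version of this pipeline (our own sketch; the random placeholder weights, data arrays, and the max_iter setting are illustrative only) maps each image to the conditional hidden means sigm(b + Wᵀv) and fits a linear classifier on top:

    import numpy as np
    from scipy.special import expit               # logistic sigmoid
    from sklearn.linear_model import LogisticRegression

    def hidden_magnetizations(W, b_hid, V):
        """Deterministic visible-to-hidden map of an RBM: the conditional
        hidden means sigm(b + W^T v), computed for one image per row of V."""
        return expit(V @ W + b_hid)

    # W, b_hid would come from a trained RBM; random placeholders keep this runnable
    rng = np.random.default_rng(0)
    W, b_hid = rng.normal(0.0, 0.01, (784, 500)), np.zeros(500)
    V_train, y_train = rng.integers(0, 2, (100, 784)), rng.integers(0, 10, 100)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(hidden_magnetizations(W, b_hid, V_train), y_train)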
In Fig. 3, we see that the MNIST classification accuracy of the RBMs trained with the P-TAP2 algorithms is roughly equivalent to that obtained when using PCD training, while CD-1 training yields markedly poorer classification accuracy. The slight decrease in performance of CD-1 and TAP2 as the training epochs increase might be emblematic of over-fitting by the non-persistent algorithms, although no decrease in the EMF test set log-likelihood was observed.
Finally, for the Caltech 101 Silhouettes dataset, the classification task, shown in the right panel of
Fig. 3, is much more difficult a priori. Interestingly, the persistent algorithms do not yield better
results on this task. However, we observe that the performance of deterministic EMF RBM training
is at least comparable with both CD-1 and PCD.
[Figure 3 plots: test-set classification accuracy versus training epoch for MNIST (left, epochs 0-50) and Caltech 101 Silhouette (right, epochs 0-100), for TAP2, P-TAP2, P-TAP2-30, P-TAP3, PCD, CD-1, and direct logistic regression; inset: pseudo log-likelihood versus epoch.]
Figure 3: Test set classification accuracy for the MNIST (left) and Caltech 101 Silhouette (right)
datasets using logistic regression on the hidden-layer marginal probabilities as a function of the number of epochs. As a baseline comparison, the classification accuracy of logistic regression performed
directly on the data is given as a black dashed line. The results for the different training algorithms
are displayed in different colors, with the same color code being used in both panels. (Right inset:)
Pseudo log-likelihood over training epochs for the Caltech 101 Silhouette dataset.
6 Conclusion
We have presented a method for training RBMs based on an extended mean field approximation. Although a naïve mean field learning algorithm had already been designed for RBMs, and judged unsatisfactory [17, 12], we have shown that extending beyond the naïve mean field to include terms of second order and above brings significant improvements over the first-order approach and allows
for practical and efficient deterministic RBM training with performance comparable to the stochastic
CD and PCD training algorithms.
The extended mean field theory also provides an estimate of the RBM log-likelihood which is easy
to evaluate and thus enables practical monitoring of the progress of unsupervised learning throughout the training epochs. Furthermore, training on real-valued magnetizations is theoretically well-founded within the presented approach, paving the way for many possible extensions. For instance,
it would be quite straightforward to apply the same kind of expansion to Gauss-Bernoulli RBMs, as
well as to multi-label RBMs.
The extended mean field approach might also be used to learn stacked RBMs jointly, rather than
separately, as is done in both deep Boltzmann machine and deep belief network pre-training, a
strategy that has shown some promise [29]. In fact, the approach can be generalized even to non-restricted Boltzmann machines with hidden variables with very little difficulty. Another interesting
possibility would be to make use of higher-order terms in the series expansion using adaptive cluster
methods such as those used in [19]. We believe our results show that the extended mean field
approach, and in particular the Thouless-Anderson-Palmer one, may be a good starting point to
theoretically analyze the performance of RBMs and deep belief networks.
Acknowledgments
We would like to thank F. Caltagirone and A. Decelle for many insightful discussions. This research
was funded by the European Research Council under the European Union's 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS).
References
[1] P. Smolensky. Chapter 6: Information Processing in Dynamical Systems: Foundations of Harmony Theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, 1986.
[2] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comp., 14:1771–1800, 2002.
[3] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[4] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In ICML, pages 536–543, 2008.
[5] R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann machines for collaborative filtering. In ICML, pages 791–798, 2007.
[6] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Intl. Conf. on Artificial Intelligence and Statistics, pages 215–223, 2011.
[7] G. Hinton and R. Salakhutdinov. Replicated softmax: an undirected topic model. In NIPS, pages 1607–1614, 2009.
[8] R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In Intl. Conf. on Artificial Intelligence and Statistics, pages 448–455, 2009.
[9] G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Comp., 18(7):1527–1554, 2006.
[10] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, May 2015.
[11] R. M. Neal. Connectionist learning of deep belief networks. Artificial Int., 56(1):71–113, 1992.
[12] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, pages 1064–1071, 2008.
[13] C. Peterson and J. R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987.
[14] G. Hinton. Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Comp., 1(1):143–150, 1989.
[15] C. C. Galland. The limitations of deterministic Boltzmann machine learning. Network, 4:355–379, 1993.
[16] H. J. Kappen and F. B. Rodríguez. Boltzmann machine learning using mean field theory and linear response correction. In NIPS, pages 280–286, 1998.
[17] M. Welling and G. Hinton. A new learning algorithm for mean field Boltzmann machines. In Intl. Conf. on Artificial Neural Networks, pages 351–357, 2002.
[18] S. Cocco, S. Leibler, and R. Monasson. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. PNAS, 106(33):14058–14062, 2009.
[19] S. Cocco and R. Monasson. Adaptive cluster expansion for inferring Boltzmann machines with noisy data. Physical Review Letters, 106(9):90601, 2011.
[20] D. J. Thouless, P. W. Anderson, and R. G. Palmer. Solution of 'Solvable model of a spin glass'. Philosophical Magazine, 35(3):593–601, 1977.
[21] A. Georges and J. S. Yedidia. How to expand around mean-field theory using high-temperature expansions. Journal of Physics A: Mathematical and General, 24(9):2173–2192, 1991.
[22] T. Plefka. Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. Journal of Physics A: Mathematical and General, 15(6):1971–1978, 1982.
[23] M. Opper and D. Saad. Advanced mean field methods: Theory and practice. MIT Press, 2001.
[24] E. Bolthausen. An iterative construction of solutions of the TAP equations for the Sherrington–Kirkpatrick model. Communications in Mathematical Physics, 325(1):333–366, 2014.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2323, 1998.
[26] B. M. Marlin, K. Swersky, B. Chen, and N. de Freitas. Inductive principles for restricted Boltzmann machine learning. In Intl. Conf. on Artificial Intelligence and Statistics, pages 509–516, 2010.
[27] G. Hinton. A practical guide to training restricted Boltzmann machines. Computer, 9:1, 2010.
[28] F. Pedregosa et al. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830, 2011.
[29] I. J. Goodfellow, A. Courville, and Y. Bengio. Joint training deep Boltzmann machines for classification. arXiv preprint: 1301.3568, 2013.
5,289 | 5,789 | The Brain Uses Reliability of Stimulus Information
when Making Perceptual Decisions
Sebastian Bitzer¹ (sebastian.bitzer@tu-dresden.de)    Stefan J. Kiebel¹ (stefan.kiebel@tu-dresden.de)
¹ Department of Psychology, Technische Universität Dresden, 01062 Dresden, Germany
Abstract
In simple perceptual decisions the brain has to identify a stimulus based on noisy
sensory samples from the stimulus. Basic statistical considerations state that the
reliability of the stimulus information, i.e., the amount of noise in the samples,
should be taken into account when the decision is made. However, for perceptual
decision making experiments it has been questioned whether the brain indeed uses
the reliability for making decisions when confronted with unpredictable changes
in stimulus reliability. We here show that even the basic drift diffusion model,
which has frequently been used to explain experimental findings in perceptual
decision making, implicitly relies on estimates of stimulus reliability. We then
show that only those variants of the drift diffusion model which allow stimulusspecific reliabilities are consistent with neurophysiological findings. Our analysis
suggests that the brain estimates the reliability of the stimulus on a short time scale
of at most a few hundred milliseconds.
1 Introduction
In perceptual decision making participants have to identify a noisy stimulus. In typical experiments,
only two possibilities are considered [1]. The amount of noise on the stimulus is usually varied to
manipulate task difficulty. With higher noise, participants' decisions are slower and less accurate.
Early psychology research established that biased random walk models explain the response distributions (choice and reaction time) of perceptual decision making experiments [2]. These models
describe decision making as an accumulation of noisy evidence until a bound is reached and correspond, in discrete time, to sequential analysis [3] as developed in statistics [4]. More recently,
electrophysiological experiments provided additional support for such bounded accumulation models, see [1] for a review.
There appears to be a general consensus that the brain implements the mechanisms required for
bounded accumulation, although different models were proposed for how exactly this accumulation
is employed by the brain [5, 6, 1, 7, 8]. An important assumption of all these models is that the
brain provides the input to the accumulation, the so-called evidence, but the most established models
actually do not define how this evidence is computed by the brain [3, 5, 9, 1]. In this contribution, we
will show that addressing this question offers a new perspective on how exactly perceptual decision
making may be performed by the brain.
Probabilistic models provide a precise definition of evidence: Evidence is the likelihood of a decision alternative under a noisy measurement where the likelihood is defined through a generative
model of the measurements under the hypothesis that the considered decision alternative is true. In
particular, this generative model implements assumptions about the expected distribution of measurements. Therefore, the likelihood of a measurement is large when measurements are assumed,
1
by the decision maker, to be reliable and small otherwise. For modelling perceptual decision making
experiments, the evidence input, which is assumed to be pre-computed by the brain, should similarly depend on the reliability of measurements as estimated by the brain. However, this has been
disputed before, e.g. [10]. The argument is that typical experimental setups make the reliability of
each trial unpredictable for the participant. Therefore, it was argued, the brain can have no correct
estimate of the reliability. This issue has been addressed in a neurally inspired, probabilistic model
based on probabilistic population codes (PPCs) [7]. The authors have shown that PPCs can implement perceptual decision making without having to explicitly represent reliability in the decision
process. This remarkable result has been obtained by making the comprehensible assumption that
reliability has a multiplicative effect on the tuning curves of the neurons in the PPCs.¹ Current stimulus reliability, therefore, was implicitly represented in the tuning curves of model neurons and still affected decisions.
In this paper we will investigate on a conceptual level whether the brain estimates measurement
reliability even within trials, while not considering the details of its neural representation. We
will show that even a simple, widely used bounded accumulation model, the drift diffusion model, is
based on some estimate of measurement reliability. Using this result, we will analyse the results of a
perceptual decision making experiment [11] and will show that the recorded behaviour together with
neurophysiological findings strongly favours the hypothesis that the brain weights evidence using
a current estimate of measurement reliability, even when reliability changes unpredictably across
trials.
This paper is organised as follows: We first introduce the notions of measurement, evidence and
likelihood in the context of the experimentally well-established random dot motion (RDM) stimulus.
We define these quantities formally by resorting to a simple probabilistic model which has been
shown to be equivalent to the drift diffusion model [12, 13]. This, in turn, allows us to formulate
three competing variants of the drift diffusion model that either do not use trial-dependent reliability
(variant CONST), or do use trial-dependent reliability of measurements during decision making
(variants DDM and DEPC, see below for definitions). Finally, using data of [11], we show that
only variants DDM and DEPC, which use trial-dependent reliability, are consistent with previous
findings about perceptual decision making in the brain.
2 Measurement, evidence and likelihood in the random dot motion stimulus
The widely used random dot motion (RDM) stimulus consists of a set of randomly located dots
shown within an invisible circle on a screen [14]. From one video frame to the next some of the
dots move in one direction which is fixed within a trial of an experiment, i.e., a subset of the dots
moves coherently in one direction. All other dots are randomly replaced within the circle. Although
there are many variants of how exactly to present the dots [15], the main idea is that the coherently
moving dots indicate a motion direction which participants have to decide upon. By varying the
proportion of dots which move coherently, also called the 'coherence' of the stimulus, the difficulty
of the task can be varied effectively.
We will now consider what kind of evidence the brain can in principle extract from the RDM stimulus in a short time window, for example, from one video frame to the next, within a trial. For
simplicity we call this time window 'time point' from here on, the idea being that evidence is accumulated over different time points, as postulated by bounded accumulation models in perceptual
decision making [3, 1].
At a single time point, the brain can measure motion directions from the dots in the RDM display. By
construction, a proportion of measurable motion directions will be into one specific direction, but,
through the random relocation of other dots, the RDM display will also contain motion in random
directions. Therefore, the brain observes a distribution of motion directions at each time point. This
distribution can be considered a 'measurement' of the RDM stimulus made by the brain. Due to the
randomness of each time frame, this distribution varies across time points and the variation in the
distribution reduces for increasing coherences. We have illustrated this using rose histograms in Fig.
1 for three different coherence levels.
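As a toy illustration of such a measurement (a sketch under simplified assumptions; real RDM displays differ in detail, cf. [15]), the motion directions visible in one frame can be simulated by letting a fraction c of dots move in the true direction and the rest in uniformly random directions:

    import numpy as np

    rng = np.random.default_rng(0)

    def measure_frame(coherence, n_dots=200, true_dir=np.pi):
        """Sample the motion directions (in radians) observable in one RDM frame."""
        dirs = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)  # randomly replaced dots
        dirs[rng.random(n_dots) < coherence] = true_dir    # coherently moving dots
        return dirs

    # an 8-bin histogram over directions, as in the rose plots of Fig. 1
    counts, _ = np.histogram(measure_frame(0.256), bins=8, range=(0.0, 2.0 * np.pi))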
¹ Note that the precise effect on tuning curves may depend on the particular distribution of measurements and its encoding by the neural population.
[Figure 1 plots: rose histograms of measured motion directions at two time points (rows) for coherences 3.2%, 9.0%, and 25.6% (columns).]
Figure 1: Illustration of possible motion direction distributions that the brain can measure from
an RDM stimulus. Rows are different time points, columns are different coherences. The true,
underlying motion direction was 'left', i.e., 180°. For low coherence (e.g., 3.2%) the measured distribution is very variable across time points and may indicate the presence of many different motion directions at any given time point. As coherence increases (from 9% to 25.6%), the true, underlying motion direction increasingly dominates the measured motion directions, simultaneously leading to decreased variation of the measured distribution across time points.
To compute the evidence for the decision whether the RDM stimulus contains predominantly motion
to one of the two considered directions, e.g., left and right, the brain must check how strongly these
directions are represented in the measured distribution, e.g., by estimating the proportion of motion
towards left and right. We call these proportions evidence for left, e_left, and evidence for right, e_right. As the measured distribution over motion directions may vary strongly across time points, the
computed evidences for each single time point may be unreliable. Probabilistic approaches weight
evidence by its reliability such that unreliable evidence is not over-interpreted. The question is: Does
the brain perform this reliability-based computation as well? More formally, for a given coherence,
c, does the brain weight evidence by an estimate of reliability that depends on c, l = e · r(c),² which we call 'likelihood', or does it ignore changing reliabilities and use a weighting unrelated to coherence, e′ = e · r̄?

² For convenience, we use imprecise denominations here. As will become clear below, l is in our case a Gaussian log-likelihood, hence the linear weighting of evidence by reliability.
3 Bounded accumulation models
Bounded accumulation models postulate that decisions are made based on a decision variable. In
particular, this decision variable is driven towards the correct alternative and is perturbed by noise.
A decision is made, when the decision variable reaches a specific value. In the drift diffusion model,
these three components are represented by drift, diffusion and bound [3]. We will now relate the
typical drift diffusion formalism to our notions of measurement, evidence and likelihood by linking
the drift diffusion model to probabilistic formulations.
In the drift diffusion model, the decision variable evolves according to a simple Wiener process with
drift. In discrete time the change in the decision variable y can be written as
Δy = y_t − y_{t−Δt} = v Δt + √Δt s ε_t    (1)

where v is the drift, ε_t ∼ N(0, 1) is Gaussian noise, and s controls the amount of diffusion. This
equation bears an interesting link to how the brain may compute the evidence. For example, it has
been stated in the context of an experiment with RDM stimuli with two decision alternatives that
the change in y, often called 'momentary evidence', "is thought to be a difference in firing rates of direction selective neurons with opposite direction preferences" [11, Supp. Fig. 6]. Formally:
Δy = ρ_left,t − ρ_right,t    (2)

where ρ_left,t is the firing rate of the population selective to motion towards left at time point t. Because the firing rates ρ depend on the considered decision alternative, they represent a form of evidence extracted from the stimulus measurement instead of the stimulus measurement itself (see our definitions in the previous section). It is unclear, however, whether the firing rates ρ just represent the evidence (ρ = e′) or whether they represent the likelihood, ρ = l, i.e., the evidence weighted by coherence-dependent reliability.
To clarify the relation between firing rates ρ, evidence e, and likelihood l we consider probabilistic
models of perceptual decision making. Several variants have been suggested and related to other
forms of decision making [6, 16, 9, 7, 12, 17, 18]. For its simplicity, which is sufficient for our
argument, we here consider the model presented in [13] for which a direct transformation from
probabilistic model to the drift diffusion model has already been shown. This model defines two
Gaussian generative models of measurements which are derived from the stimulus:
p(x_t | left) = N(−1, Δt σ̂²),    p(x_t | right) = N(+1, Δt σ̂²)    (3)

where σ̂ represents the variability of measurements expected by the brain. Similarly, it is assumed that the measurements x_t are sampled from a Gaussian with variance σ², which captures variance both from the stimulus and due to other noise sources in the brain:

x_t ∼ N(±1, Δt σ²)    (4)

where the mean is −1 for a 'left' stimulus and +1 for a 'right' stimulus. Evidence for a decision is
computed in this model by calculating the likelihood of a measurement xt under the hypothesised
generative models. To be precise we consider the log-likelihood which is
l_left = −log(√(2π Δt) σ̂) − (x_t + 1)² / (2 Δt σ̂²)
l_right = −log(√(2π Δt) σ̂) − (x_t − 1)² / (2 Δt σ̂²)    (5)
We note three important points: 1) The first term on the right-hand side means that for decreasing σ̂ the likelihood l increases when the measurement x_t is close to the means, i.e., −1 and 1. This contribution, however, cancels when the difference between the likelihoods for left and right is computed. 2) The likelihood is large for a measurement x_t when x_t is close to the corresponding mean. 3) The contribution of the stimulus is weighted by the assumed reliability r = σ̂⁻².
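In code, the Gaussian log-likelihoods of Eq. (5) are a few lines (a sketch; parameter values are arbitrary):

    import numpy as np

    def log_likelihoods(x, dt, sigma_hat):
        """Log-likelihoods of Eq. (5) for a measurement x under the two
        generative models of Eq. (3), with means -1 (left) and +1 (right)."""
        norm = -np.log(np.sqrt(2.0 * np.pi * dt) * sigma_hat)
        l_left = norm - (x + 1.0) ** 2 / (2.0 * dt * sigma_hat ** 2)
        l_right = norm - (x - 1.0) ** 2 / (2.0 * dt * sigma_hat ** 2)
        return l_left, l_right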
This model of the RDM stimulus is simple but captures the most important properties of the stimulus. In particular, a high coherence RDM stimulus has a large proportion of motion in the correct
direction with very low variability of measurements whereas a low coherence RDM stimulus tends
to have lower proportions of motion in the correct direction, with high variability (cf. Fig. 1). The
Gaussian model captures these properties by adjusting the noise variance such that a high coherence
corresponds to low noise and low coherence to high noise: Under high noise the values xt will vary
strongly and tend to be rather distant from −1 and 1, whereas for low noise the values x_t will be close to −1 or 1 with low variability. Hence, as expected, the model produces large evidences/likelihoods
for low noise and small evidences/likelihoods for high noise.
This intuitive relation between stimulus and probabilistic model is the basis for us to proceed to
show that the reliability of the stimulus r, connected to the coherence level c, appears at a prominent
position in the drift diffusion model. Crucially, the drift diffusion model can be derived as the sum
of log-likelihood ratios across time [3, 9, 12, 13]. In particular, a discrete time drift diffusion process
can be derived by subtracting the likelihoods of Eq. (5):
Δy = l_right − l_left = [(x_t + 1)² − (x_t − 1)²] / (2 Δt σ̂²) = 2 r x_t / Δt    (6)

Consequently, the change in y within a trial, in which the true stimulus is constant, is Gaussian: Δy ∼ N(2r/Δt, 4r²σ²/Δt). This replicates the model described in [11, Supp. Fig. 6] where the parameterisation of the model, however, more directly followed that of the Gaussian distribution
and did not explicitly take time into account: Δy ∼ N(Kc, S²), where K and S are free parameters
and c is coherence of the RDM stimulus. By analogy to the probabilistic model, we, therefore, see
that the model in [11] implicitly assumes that reliability r depends on coherence c.
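A quick numerical check of this correspondence (a sketch with arbitrary parameter values) confirms the stated Gaussian statistics of the momentary evidence:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, sigma, sigma_hat = 0.01, 1.0, 1.5
    r = sigma_hat ** -2                               # assumed reliability

    # 'right' stimulus: measurements follow Eq. (4) with mean +1
    x = rng.normal(1.0, np.sqrt(dt) * sigma, size=200_000)
    dy = 2.0 * r * x / dt                             # momentary evidence, Eq. (6)

    print(dy.mean(), 2.0 * r / dt)                    # both approx. 88.9
    print(dy.var(), 4.0 * r ** 2 * sigma ** 2 / dt)   # both approx. 79.0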
More generally, the parameters of the drift diffusion model of Eq. (1) and that of the probabilistic
model can be expressed as functions of each other [13]:
v = ± 2 / (Δt² σ̂²) = ± r · 2/Δt²    (7)

s = 2σ / (Δt σ̂²) = r · 2σ/Δt    (8)
These equations state that both drift v and diffusion s depend on the assumed reliability r of the
measurements x. Does the brain use and necessarily compute this reliability which depends on
coherence? In the following section we answer this question by comparing how well three variants
of the drift diffusion model, that implement different assumptions about r, conform to experimental
findings.
4 Use of reliability in perceptual decision making: experimental evidence
We first show that different assumptions about the reliability r translate to variants of the drift diffusion model. We then fit all variants to behavioural data (performances and mean reaction times)
of an experiment for which neurophysiological data has also been reported [11] and demonstrate
that only those variants which allow reliability to depend on coherence level lead to accumulation
mechanisms which are consistent with the neurophysiological findings.
4.1 Drift diffusion model variants
For the drift diffusion model of Eq. (1) the accuracy A and mean decision time T predicted by the
model can be determined analytically [9]:
A = 1 − 1 / (1 + exp(2vb/s²))    (9)

T = (b/v) tanh(vb/s²)    (10)
where b is the bound. These equations highlight an important caveat of the drift diffusion model:
Only two of the three parameters can be determined uniquely from behavioural data. For fitting
the model one of the parameters needs to be fixed. In most cases, the diffusion s is set arbitrarily to 0.1 [9], or is fit with a constant value across stimulus strengths [11]. We call this standard
variant of the drift diffusion model the DDM.
If s is constant across stimulus strengths, the other two parameters of the model must explain differences in behaviour, between stimulus strengths, by taking on values that depend on stimulus
strength. Indeed, it has been found that primarily drift v explains such differences, see also below. Eq. (7) states that drift depends on estimated reliability r. So, if drift varies across stimulus
strengths, this strongly suggests that r must vary across stimulus strengths, i.e., that r must depend
on coherence: r(c). However, the drift diffusion formalism allows for two other obvious variants
of parameterisation. One in which the bound b is constant across stimulus strengths, b = b̄, and, conversely, one in which the drift v is constant across stimulus strengths, v = v̄ ∝ r̄ (Eq. 7). We call these variants DEPC and CONST, respectively, for their property to weight evidence by reliability that either depends on coherence, r(c), or not, r̄.
4.2 Experimental data
In the following we will analyse the data presented in [11]. This data set has two major advantages
for our purposes: 1) Reported accuracies and mean reaction times (Fig. 1d,f) are averages based on
15,937 trials in total. Therefore, noise in this data set is minimal (cf. small error bars in Fig. 1d,f)
such that any potential effects of overfitting on found parameter values will be small, especially in
5
relation to the effect induced by different stimulus strengths. 2) The behavioural data is accompanied
by recordings of neurons which have been implicated in the decision making process. We can,
therefore, compare the accumulation mechanisms resulting from the fit to behaviour with the actual
neurophysiological recordings. Furthermore, the structure of the experiments was such that the
stimulus in subsequent trials had random strength, i.e., the brain could not have estimated stimulus
strength of a trial before the trial started.
In the experiment of [11], that we consider here, two monkeys performed a two-alternative forced
choice task based on the RDM stimulus. Data for eight different coherences were reported. To avoid
ceiling effects, which prevent the unique identification of parameter values in the drift diffusion
model, we exclude those coherences which lead to an accuracy of 0.5 (random choices) or to an
accuracy of 1 (perfect choices). The behavioural data of the remaining five coherence levels are
presented in Table 1.
Table 1: Behavioural data of [11] used in our analysis. RT = reaction time.
coherence (%):         3.2     6.4     9       12      25.6
accuracy (fraction):   0.63    0.76    0.79    0.89    0.99
mean RT (ms):          613     590     580     535     440
The analysis of [11] revealed a nondecision time of ca. 200 ms, i.e., a component of the reaction time that is unrelated to the decision process (cf. [3]). Using this estimate, we determined the mean decision time T by subtracting 200 ms from the mean reaction times shown in Table 1.
The main findings for the neural recordings, which replicated previous findings [19, 1], were that i)
firing rates at the end of decisions were similar and, particularly, showed no significant relation to
coherence [11, Fig. 5] whereas ii) the buildup rate of neural firing within a trial had an approximately
linear relation to coherence [11, Fig. 4].
4.3 Fits of drift diffusion model variants to behaviour
We can easily fit the model variants (DDM, DEPC and CONST) to accuracy A and mean decision
time T using Eqs. (9) and (10). In accordance with previous approaches we selected values for the
respective redundant parameters. Since the redundant parameter value, or its inverse, simply scales
the fitted parameter values (cf. Eqs. 9 and 10), the exact value is irrelevant and we fix, in each model
variant, the redundant parameter to 1.
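For the DDM variant this fit even has a closed form: with s fixed, Eqs. (9) and (10) can be inverted for v and b directly from accuracy and mean decision time. A sketch (our own derivation, using the values of Table 1 and the 200 ms nondecision time):

    import numpy as np

    def fit_ddm(acc, T, s=1.0):
        """Invert Eqs. (9)-(10): with L = log(acc/(1-acc)) = 2vb/s^2 and
        T = (b/v) tanh(vb/s^2), solve for drift v and bound b."""
        L = np.log(acc / (1.0 - acc))
        vb = 0.5 * s ** 2 * L
        b_over_v = T / np.tanh(0.5 * L)
        v = np.sqrt(vb / b_over_v)
        return v, vb / v                              # (drift, bound)

    coherence = np.array([3.2, 6.4, 9.0, 12.0, 25.6])
    accuracy = np.array([0.63, 0.76, 0.79, 0.89, 0.99])
    T = (np.array([613, 590, 580, 535, 440]) - 200) / 1000.0  # decision time (s)
    v, b = fit_ddm(accuracy, T)   # v grows roughly linearly with coherence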
[Figure 2 plots: fitted parameter values (drift v, bound b, diffusion s) as a function of coherence (%) for the DDM (left), DEPC (middle), and CONST (right) variants.]
Figure 2: Fitting results: values of the free parameters that replicate the accuracy and mean RT
recorded in the experiment (Table 1), in relation to coherence. The remaining, non-free parameter
was fixed to 1 for each variant. Left: the DDM variant with free parameters drift v (green) and
bound b (purple). Middle: the DEPC variant with free parameters v and diffusion s (orange). Right:
the CONST variant with free parameters s and b.
Fig. 2 shows the inferred parameter values. In congruence with previous findings, the DDM variant
explained variation in behaviour due to an increasing coherence mostly with an increasing drift v
(green in Fig. 2). Specifically, drift and coherence appear to have a straightforward, linear relation.
The same finding holds for the DEPC variant. In contrast to the DDM variant, however, which also
exhibited a slight increase in the bound b (purple in Fig. 2) with increasing coherence, the DEPC
6
variant explained the corresponding differences in behaviour by decreasing diffusion s (orange in
Fig. 2). As the drift v was fixed in CONST, this variant explained coherence-dependent behaviour
with large and almost identical changes in both diffusion s and bound b such that large parameter
values occurred for small coherences and the relation between parameters and coherence appeared
to be quadratic.
[Figure 3 plots: top row, 15 example trajectories of y per variant (DDM, DEPC, CONST) for 6.4% and 25.6% coherence; bottom row, trial-averaged trajectories of y for the 5 coherences (3.2, 6.4, 9.0, 12.0, 25.6%), aligned to the start of decision making (left) and to decision time (right).]
Figure 3: Drift-diffusion properties of fitted model variants. Top row: 15 example trajectories of y
for different model variants with fitted parameters for 6.4% (blue) and 25.6% (yellow) coherence.
Trajectories end when they reach the bound for the first time which corresponds to the decision
time in that simulated trial. Notice that the same random samples of ε_t were used across variants and coherences. Bottom row: Trajectories of y averaged over trials in which the first alternative (top
bound) was chosen for the three model variants. Format of the plots follows that of [8, Supp. Fig. 4]:
Left panels show the buildup of y from the start of decision making for the 5 different coherences.
Right panels show the averaged drift diffusion trajectories when aligned to the time that a decision
was made.
We further investigated the properties of the model variants with the fitted parameter values. The top
row of Fig. 3 shows example drift diffusion trajectories (y in Eq. (1)) simulated at a resolution of
1ms for two coherences. Following [11], we interpret y as the decision variables represented by the
firing rates of neurons in monkey area LIP. These plots exemplify that the DDM and DEPC variants
lead to qualitatively very similar predictions of neural responses whereas the trajectories produced
by the CONST variant stand out, because the neural responses to large coherences are predicted to
be smaller than those to small coherences.
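Such trajectories follow from a straightforward Euler discretisation of Eq. (1) (a sketch; the 1 ms step mirrors the text, other values are placeholders to be taken from the fits of Fig. 2):

    import numpy as np

    def simulate_trial(v, b, s, dt=0.001, max_t=2.0, rng=np.random.default_rng(2)):
        """Simulate one drift diffusion trajectory of Eq. (1) until it first
        reaches +b or -b; returns the decision time and the chosen bound."""
        y, t = 0.0, 0.0
        while abs(y) < b and t < max_t:
            y += v * dt + s * np.sqrt(dt) * rng.normal()
            t += dt
        return t, np.sign(y)

    # averaging many such trajectories, aligned to start or to decision time,
    # reproduces the buildup curves summarised in the bottom row of Fig. 3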
We have summarised predicted neural responses to all coherences in the bottom row of Fig. 3 where
we show averages of y across 5000 trials either aligned to the start of decision making (left panels) or aligned to the decision time (right panels). These plots illustrate that the DDM and DEPC
variants replicate the main neurophysiological findings of [11]: Neural responses at the end of the
decision were similar and independent of coherence. For the DEPC variant this was built into the
model, because the bound was fixed. For the DDM variant the bound shows a small dependence
on coherence, but the neural responses aligned to decision time were still very similar across coherences. The DDM and DEPC variants, further, replicate the finding that the buildup of neural firing
depends approximately linear on coherence (normalised mean square error of a corresponding linear
model was 0.04 and 0.03, respectively). In contrast, the CONST variant exhibited an inverse relation between coherence and buildup of predicted neural response, i.e., buildup was larger for small
coherences. Furthermore, neural responses at decision time strongly depended on coherence. Therefore, the CONST variant, as the only variant which does not use coherence-dependent reliability, is
also the only variant which is clearly inconsistent with the neurophysiological findings.
5 Discussion
We have investigated whether the brain uses online estimates of stimulus reliability when making
simple perceptual decisions. From a probabilistic perspective fundamental considerations suggest
that using accurate estimates of stimulus reliability leads to better decisions, but in the field of perceptual decision making it has been questioned whether the brain estimates stimulus reliability on the very short time scale of a few hundred milliseconds. By using a probabilistic formulation of the most
widely accepted model we were able to show that only those variants of the model which assume
online reliability estimation are consistent with reported experimental findings.
Our argument is based on a strict distinction between measurements, evidence and likelihood which
may be briefly summarised as follows: Measurements are raw stimulus features that do not relate to
the decision, evidence is a transformation of measurements into a decision relevant space reflecting
the decision alternatives and likelihood is evidence scaled by a current estimate of measurement
reliabilities. It is easy to overlook this distinction at the level of bounded accumulation models,
such as the drift diffusion model, because these models assume a pre-computed form of evidence as
input. However, this evidence has to be computed by the brain, as we have demonstrated based on
the example of the RDM stimulus and using behavioural data.
We chose one particular, simple probabilistic model, because this model has a direct equivalence
with the drift diffusion model which was used to explain the data of [11] before. Other models may
have not allowed conclusions about reliability estimates in the brain. In particular, [13] introduced
an alternative model that also leads to equivalence with the drift diffusion model, but explains differences in behaviour by different mean measurements and their representations in the generative
model. Instead of varying reliability across coherences, this model would vary the difference of
means in the second summand of Eq. (5) directly without leading to any difference on the drift
diffusion trajectories represented by y of Eq. (1) when compared to those of the probabilistic model
chosen here. The interpretation of the alternative model of [13], however, is far removed from basic
assumptions about the RDM stimulus: Whereas the alternative model assumes that the reliability of
the stimulus is fixed across coherences, the noise in the RDM stimulus clearly depends on coherence.
We, therefore, discarded the alternative model here.
As a slight caveat, the neurophysiological findings, on which we based our conclusion, could have
been the result of a search for neurons that exhibit the properties of the conventional drift diffusion
model (the DDM variant). We cannot exclude this possibility completely, but given the wide range
and persistence of consistent evidence for the standard bounded accumulation theory of decision
making [1, 20] we find it rather unlikely that the results in [19] and [11] were purely found by
chance. Even if our conclusion about the rapid estimation of reliability by the brain does not endure, our formal contribution holds: We clarified that the drift diffusion model in its most common
variant (DDM) is consistent with, and even implicitly relies on, coherence-dependent estimates of
measurement reliability.
In the experiment of [11] coherences of the RDM stimulus were chosen randomly for each trial.
Consequently, participants could not predict the reliability of the RDM stimulus for the upcoming
trial, i.e., the participants' brains could not have had a good estimate of stimulus reliability at the start of a trial. Yet, our analysis strongly suggests that coherence-dependent reliabilities were used during decision making. The brain, therefore, must have adapted reliability within trials even on the
short timescale of a few hundred milliseconds. On the level of analysis dictated by the drift diffusion
model we cannot observe this adaptation. It only manifests itself as a change in mean drift that is
assumed to be constant within a trial. First models of simultaneous decision making and reliability
estimation have been suggested [21], but clearly more work in this direction is needed to elucidate
the underlying mechanism used by the brain.
References
[1] Joshua I Gold and Michael N Shadlen. The neural basis of decision making. Annu Rev Neurosci, 30:535–574, 2007.
[2] I. D. John. A statistical decision theory of simple reaction time. Australian Journal of Psychology, 19(1):27–34, 1967.
[3] R. Duncan Luce. Response Times: Their Role in Inferring Elementary Mental Organization. Number 8 in Oxford Psychology Series. Oxford University Press, 1986.
[4] Abraham Wald. Sequential Analysis. Wiley, New York, 1947.
[5] Xiao-Jing Wang. Probabilistic decision making by slow reverberation in cortical circuits. Neuron, 36(5):955–968, Dec 2002.
[6] Rajesh P N Rao. Bayesian computation in recurrent neural circuits. Neural Comput, 16(1):1–38, Jan 2004.
[7] Jeffrey M Beck, Wei Ji Ma, Roozbeh Kiani, Tim Hanks, Anne K Churchland, Jamie Roitman, Michael N Shadlen, Peter E Latham, and Alexandre Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142–1152, December 2008.
[8] Anne K Churchland, R. Kiani, R. Chaudhuri, Xiao-Jing Wang, Alexandre Pouget, and M. N. Shadlen. Variance as a signature of neural computations during decision making. Neuron, 69(4):818–831, Feb 2011.
[9] Rafal Bogacz, Eric Brown, Jeff Moehlis, Philip Holmes, and Jonathan D. Cohen. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychol Rev, 113(4):700–765, October 2006.
[10] Michael N Shadlen, Roozbeh Kiani, Timothy D Hanks, and Anne K Churchland. Neurobiology of decision making: An intentional framework. In Christoph Engel and Wolf Singer, editors, Better Than Conscious? Decision Making, the Human Mind, and Implications For Institutions. MIT Press, 2008.
[11] Anne K Churchland, Roozbeh Kiani, and Michael N Shadlen. Decision-making with multiple alternatives. Nat Neurosci, 11(6):693–702, Jun 2008.
[12] Peter Dayan and Nathaniel D Daw. Decision theory, reinforcement learning, and the brain. Cogn Affect Behav Neurosci, 8(4):429–453, Dec 2008.
[13] Sebastian Bitzer, Hame Park, Felix Blankenburg, and Stefan J Kiebel. Perceptual decision making: Drift-diffusion model is equivalent to a Bayesian model. Frontiers in Human Neuroscience, 8(102), 2014.
[14] W. T. Newsome and E. B. Paré. A selective impairment of motion perception following lesions of the middle temporal visual area MT. J Neurosci, 8(6):2201–2211, June 1988.
[15] Praveen K. Pilly and Aaron R. Seitz. What a difference a parameter makes: a psychophysical comparison of random dot motion algorithms. Vision Res, 49(13):1599–1612, Jun 2009.
[16] Angela J. Yu and Peter Dayan. Inference, attention, and decision in a Bayesian neural architecture. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1577–1584. MIT Press, Cambridge, MA, 2005.
[17] Alec Solway and Matthew M. Botvinick. Goal-directed decision making as probabilistic inference: a computational framework and potential neural correlates. Psychol Rev, 119(1):120–154, January 2012.
[18] Yanping Huang, Abram Friesen, Timothy Hanks, Mike Shadlen, and Rajesh Rao. How prior probability influences decision making: A unifying probabilistic model. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1277–1285. 2012.
[19] Jamie D Roitman and Michael N Shadlen. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci, 22(21):9475–9489, Nov 2002.
[20] Timothy D. Hanks, Charles D. Kopec, Bingni W. Brunton, Chunyu A. Duan, Jeffrey C. Erlich, and Carlos D. Brody. Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature, Jan 2015.
[21] Sophie Denève. Making decisions with unknown sensory reliability. Front Neurosci, 6:75, 2012.
5,290 | 579 | Multimodular Architecture for Remote Sensing
Operations.
Sylvie Thiria(1,2)
Carlos Mejia(1)
Fouad Badran(1,2)
Michel Crepon(3)
(1) Laboratoire de Recherche en Informatique
Universite de Paris Sud, B 490 - 91405 ORSAY Cedex France
(2)
(3)
CEDRIC, Conservatoire National des Arts et Metiers
292 rue Saint Martin - 75003 PARIS
Laboratoire d'Oceanographie et de Climatologie (LODYC)
T14 Universite de PARIS 6 - 75005 PARIS (FRANCE)
Abstract
This paper deals with an application of Neural Networks to satellite
remote sensing observations. Because of the complexity of the
application and the large amount of data, the problem cannot be solved
by using a single method. The solution we propose is to build multimodules NN architectures where several NN cooperate together. Such
system suffer from generic problem for whom we propose solutions.
They allow to reach accurate performances for multi-valued function
approximations and probability estimations. The results are compared
with six other methods which have been used for this problem. We
show that the methodology we have developed is general and can be
used for a large variety of applications.
1
INTRODUCTION
Neural Networks have been used for many years to solve hard real world applications
which involve large amounts of data. Most of the time, these problems cannot be solved
with a unique technique and involve successive processing of the input data.
Sophisticated NN architectures have thus been designed to provide good performances e.g.
[Lecun et al. 90]. However this approach is limited for many reasons: the design of
these architectures requires a lot of a priori knowledge about the task and is complicated.
Such NN are difficult to train because of their large size and are dedicated to a specific
problem. Moreover if the task is slightly modified, these NN have to be entirely
redesigned and retrained. It is our feeling that complex problems cannot be solved
efficiently with a single NN, however sophisticated it is. A more fruitful approach is to
use modular architectures where several simple NN modules cooperate together. This
methodology is far more general and allows to easily build very sophisticated architectures
which are able to handle the different processing steps which are necessary for example in
speech or signal processing. These architectures can be easily modified to incorporate
some additional knowledge about the problem or some changes in its specifications.
We have used these ideas to build a multi-module NN for a satellite remote sensing
application. This is a hard problem which cannot be solved by a single NN. The
different modules of our architecture are thus dedicated to specific tasks and allow to
perform successive processing of the data. This approach allows to take into account in
successive steps different informations about the problem. Furthermore, errors which
may occur at the output of some modules may be corrected by others which allows to
reach very good performances. Making these different modules cooperate raises several
problems which appear to be generic for these architectures. It is thus interesting to study
different solutions for their design, training, and the efficient information exchanges
between modules. In the present paper, we first briefly describe the geophysical problem
and its difficulties, we then present the different modules of our architecture and their
cooperation, we compare our results to those of several other methods and discuss the
advantages of our method.
2
THE GEOPHYSICAL PROBLEM
Scatterometers are active microwave radars which accurately measure the power of
transmitted and backscatter signal radiations in order to compute the normalized radar cross
section (σ0) of the ocean surface. The σ0 depends on the wind speed, the incidence angle θ
(which is the angle between the radar beam and the vertical at the illuminated cell) and the
azimuth angle (which is the horizontal angle χ between the wind and the antenna of the
radar). The empirically based relationship between σ0 and the local wind vector can be
established which leads to the determination of a geophysical model function.
The model developed by A. Long gives a more precise form to this functional. It has
been shown that for an angle of incidence θ, the general expression for σ0 can be
satisfactorily represented by a Fourier series:

σ0 = U (1 + b1 cos(χ) + b2 cos(2χ))    (1)

with U = A·v^γ.
Long's model specifies that A and γ only depend on the angle of incidence θ, and that b1
and b2 are a function of both the wind speed v and the angle of incidence θ (Figure 1).
Figure 1 : Definition of the different geophysical scales.
For now, the different parameters b1, b2, A and γ used in this model are determined
experimentally.
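To make the model function concrete, the sketch below evaluates the bi-harmonic form of eq. (1) numerically. The coefficient values A, γ, b1 and b2 are arbitrary placeholders, not Long's fitted values; in the model they depend on the incidence angle θ (and, for b1 and b2, on the wind speed v).

```python
import numpy as np

def sigma0_long(v, chi, A=0.05, gamma=1.6, b1=0.3, b2=0.4):
    """Eq. (1): sigma0 = U (1 + b1 cos(chi) + b2 cos(2 chi)), with U = A * v**gamma.
    chi is the azimuth angle (radians) between the wind and the antenna;
    the coefficients here are placeholders, not fitted values."""
    U = A * v**gamma
    return U * (1.0 + b1 * np.cos(chi) + b2 * np.cos(2.0 * chi))

# One cell seen by three antennas with different orientations:
v, wind_dir = 8.0, np.deg2rad(120.0)
antenna_az = np.deg2rad([45.0, 90.0, 135.0])
print(sigma0_long(v, wind_dir - antenna_az))   # the triplet (sigma1, sigma2, sigma3)
```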
Conversely it becomes possible to compute the wind direction by using several antenna
with different orientations with respect to the satellite track. The geophysical model
function (1) can then be inverted using the three measurements of σ0 given by the three
antennas; it computes the wind vector (direction and speed). Evidence shows that for a given
trajectory within the swath (Figure 1), i.e. (θ1, θ2, θ3) fixed, θi being the incidence angle of
the beam linked to antenna i, the functional F is of the form presented in Fig. 2.
In the absence of noise, the determination of the wind direction would be unique in most
cases. Noise-free ambiguities arise due to the bi-harmonic nature of the model function
with respect to χ. The functional F presents singular points. At constant wind speed F
yields a Lissajous curve; at the singular points the direction is ambiguous with respect
to the triplet of measurements (σ1, σ2, σ3), as seen in Fig. 2. At these points F yields
two directions differing by 180°. In practice, since the backscatter signal is noisy, the
number and the frequency of ambiguities is increased.
677
678
Thiria, Mejia, Badran, and Crepon
270"
45 0
135 0
10"
(a)
170
0
(b)
Figure 2 : (a) Representation of the Functional F for a given trajectory (b) Graphics
obtained for a section of (a) at constant wind speed.
The problem is therefore how to set up an accurate (exact) wind map using the observed
measurements (σ1, σ2, σ3).
3 THE METHOD
We propose to use multi-layered quasi-linear networks (MLP) to carry out this inversion
phase. Indeed these nets are capable of approximating complex non-linear functional relations;
it becomes possible by using a set of measurements to determine F and to realize the
inversion.
The determination of the wind's speed and direction leads to two problems of different
complexity, each of which is solved using a dedicated multi-modular system. The two
modules are then linked together to build a two level architecture. To take into account
the strong dependence of the measurements with respect to the trajectory, each module (or
level) consists of n distinct but similar systems, a specific system being dedicated to each
satellite trajectory (n being the number of trajectories in a swath (Figure 1)).
The first level will allow the determination of the wind speed at every point of the swath.
The results obtained will then be supplied to the second level as supplementary data
which allow to compute the wind direction. Thus, we propose a two-level architecture
which constitutes an automatic method for the computation of wind maps (Figure 3).
The computation is performed sequentially between the different levels, each one
supplying the next with the parameters needed.
Owing to the space variability of the wind, the measurements at a point are closely related
to those performed in the neighbourhood. Taking into account this context must
therefore bring important supplementary information to dealias the ambiguities. At a
point, the input data for a given system are therefore the measurements observed at that
point and at its eight closest neighbours.
All the networks used by the different systems are MLP trained with the back-propagation
algorithm. The successive modifications were performed using a second-order stochastic
gradient, which is an approximation of the Levenberg-Marquardt rule.
[Figure 3 layout: level 3, ambiguities correction; level 2, wind direction computation; level 1, wind speed computation (with lower and upper speed networks).]
Figure 3 : The three systems S1, S2 and S3 for a given trajectory.
One system is dedicated to each trajectory. As a result the networks used on the same
level of the global architecture are of the same type; only the numerical values of the
learning set change from one system to another. Each network's learning set will therefore
consist of the data measured on its trajectory. We present here the results for the central
trajectory; performances for the others are similar.
3.1
THE NETWORK DECODING : FIRST LEVEL
A system (S1) in the first level computes the wind speed (in m/s) along a
trajectory. Because the function F1 to be learned (signal → wind speed) is highly nonlinear, each system is made of three networks (see Figure 3): R1 decides the
range of the wind speed (4 ≤ v < 12 or 12 ≤ v < 20); according to the R1 output, an
accurate value is computed using R2 for the first range and R3 for the other. The first
level is built from 10 of these systems (one for each trajectory).
Each network (R1, R2, R3) consists of four fully connected layers. For a given point, we
have introduced the knowledge of the radar measurements at the neighbouring points. When the
same experiments were performed without introducing this notion of vicinity, the
learning and test performances were reduced by 17%, which proves the advantage of this
approach. The input layer of each network consists of 27 automata: these 9×3 automata
correspond to the σ0 values relative to each antenna for the point to be considered and its
eight neighbours.
The R1 output layer has two cells: one for 4 ≤ v < 12 and the other for 12 ≤ v < 20; so its
4 layers are respectively built of 27, 25, 25, 2 automata.
R2 and R3 compute the exact wind speed. The output layer is represented by a unique
output automaton and codes this wind speed v at the point considered between [-1, +1].
The four layers of each network are respectively formed of 27, 25, 25, 1 automata.
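The routing inside one first-level system S1 can be sketched as follows. The networks below are untrained stand-ins with random weights, shown only to make the control flow between R1, R2 and R3 concrete; the layer sizes match the ones above, while the tanh activation and the decoding of the output back to m/s are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight stand-in for a trained fully connected network."""
    return [(rng.normal(0, 0.3, (m, n)), np.zeros(n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    for i, (W, b) in enumerate(net):
        x = x @ W + b
        if i < len(net) - 1:
            x = np.tanh(x)          # hidden-layer nonlinearity (assumed)
    return x

R1 = mlp([27, 25, 25, 2])   # range classifier: 4 <= v < 12  vs  12 <= v < 20
R2 = mlp([27, 25, 25, 1])   # speed regressor for the low range
R3 = mlp([27, 25, 25, 1])   # speed regressor for the high range

def decode_speed(x):
    """x: the 9 cells x 3 antennas of sigma0 measurements, flattened to 27 inputs."""
    low_range = forward(R1, x).argmax() == 0
    y = np.tanh(forward(R2 if low_range else R3, x).item())  # squash to [-1, +1] coding
    lo, hi = (4.0, 12.0) if low_range else (12.0, 20.0)
    return lo + (y + 1.0) / 2.0 * (hi - lo)                  # map back to m/s

print(decode_speed(rng.normal(size=27)))
```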
3.2
DECODING THE DIRECTION : SECOND LEVEL
Now the function F2 (signal → wind direction) has to be learned. This level is located
after the first one, so the wind speed has already been computed at all points. For each
trajectory a system S2 computes the wind direction; it is made of an MLP and a
Decision Direction Process (we call it D). As for F1, we used contextual information for
each point. Thus, the input layer of the MLP consists of 30 automata: the first 9×3
correspond to the σ0 values for each antenna, and the last three represent three times the first
level computed wind speed. However, because the original function has major ambiguities
it is more convenient to compute, for a given input, several output values with their
probabilities. For this reason we have discretized the desired output. It has been coded in
degrees and 36 possible classes have been considered, each representing a 10° interval
(between 0° and 360°). So, the MLP is four-layered with respectively 30, 25, 25, 36
automata. It can be shown, according to the coding of the desired output, that the network
approximates the Bayes discriminant function or the Bayes probability distribution related to the
discretized transfer function F2 [White, 89]. The interpretation of the MLP outputs using
the D process allows the required function F2 to be computed with accuracy. The network
outputs represent the 36 classes corresponding to the 36 10° intervals. For a given input,
a computed output is an R36 vector whose components can be interpreted to predict the
wind direction in degrees. Each component, which is a Bayes discriminant function
approximation, can be used as a coefficient of likelihood for each class. The Decision
Direction Process D (see Fig. 3) computes real directions using this information. It
performs the interpolation of the peaks' curve. D gives for each peak its wind direction
with its coefficients of likelihood.
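A minimal sketch of such a decision process is given below: it keeps the two largest local maxima of the 36 outputs and refines each direction by parabolic interpolation over the 10° bins. The paper only states that D interpolates the peaks' curve, so the specific interpolation rule is an assumption.

```python
import numpy as np

def decision_direction(out, n_peaks=2):
    """out: 36 network outputs, one per 10-degree bin (bin centres 5, 15, ..., 355).
    Returns (direction in degrees, coefficient of likelihood) for each kept peak."""
    K = len(out)
    peaks = [k for k in range(K)
             if out[k] >= out[(k - 1) % K] and out[k] >= out[(k + 1) % K]]
    peaks.sort(key=lambda k: out[k], reverse=True)
    results = []
    for k in peaks[:n_peaks]:
        ym, y0, yp = out[(k - 1) % K], out[k], out[(k + 1) % K]
        denom = ym - 2.0 * y0 + yp
        delta = 0.0 if denom == 0 else 0.5 * (ym - yp) / denom  # parabola vertex offset
        results.append((((k + delta) * 10.0 + 5.0) % 360.0, y0))
    return results

out = np.exp(-0.5 * ((np.arange(36) - 17.3) / 1.5) ** 2)  # synthetic single-peak output
print(decision_direction(out))
```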
Figure 4 : network's output. The points in the x-axis correspond to the 36 outputs. Each
represents an interval of 10° between 0° and 360°. The Y-axis points give the computed
outputs of the automata. The point indicated by d corresponds to the desired output angle;
the highest peak is the most likely solution proposed by D and p is the second one.
The computed wind speed and the most likely wind direction computed by the first two
levels allow a complete map to be built which still includes errors in the directions. As we
have seen in section 2, the physical problem has intrinsic ambiguities; they appear in the
results (Table 2). The removal of these errors is done by a third level of NN.
3.3
CORRECTING THE REMAINING ERRORS : THIRD LEVEL
This problem has been dealt with in [Badran & al 91] and is not discussed here. The
method is related to image processing using MLP as optimal filter. The use of different
filters taking into account the 5×5 vicinity of each point makes it possible to detect the
erroneous directions and to choose among the alternative proposed solutions. This method
enables to correct up to 99.5% of the errors.
4 RESULTS
As actual data does not exist yet, we have tested the method on values computed from real
meteorological models. The swaths of the scatterometer ERS 1 were simulated by flying
a satellite on wind fields given by the ECMWF forecasting model. The sea roughness
values (σ1, σ2, σ3) given by the three antennas were computed by inverting the Long
model. Noise was then added to the simulated measurements in order to reproduce the
errors made by the scatterometer (a Gaussian noise of zero mean and of standard
deviation 9.5% for both lateral antennas and 8.7% for the central antenna was added at
each measurement). Twenty-two maps obtained for the southern Atlantic Ocean were used
to establish the learning sets. The 22 maps were selected randomly during the 30 days of
September 1985 and nine remaining maps were used for the tests.
4.1
DECODING THE SPEED : FIRST LEVEL
In the results presented in Table 1, a predicted measurement is considered correct if it
differs from the desired output by at most 1 m/s. It has to be noticed that the oceanographers'
specification is 2 m/s; the present results illustrate the precision of the method.
Table 1: performances on the wind speed

Performances       learning     test
Accuracy 1 m/s     99.3 %       98.4 %
bias               0.045 m/s    0.038 m/s

4.2
DECODING THE DIRECTION : SECOND LEVEL
It is found that good performances are obtained after the interpretation of the best two
peaks only. When it is compared to usual methods which propose up to six possible
directions, this method appears to be very powerful. Table 2 shows the performances
using one or two peaks. The function F and its singularities have been recovered with a
good accuracy, the noise added during the simulations in order to reproduce the noise made
by the measuring devices has been removed.
Table 2: performances on the wind direction using the complete system

Performances (precision 20°)    one peak    two peaks
learning                        68.0 %      99.1 %
test                            72.0 %      99.2 %
5 VALIDATION OF THE RESULTS
In order to prove the power of the NN approach, Table 3 compares our results with six
classical methods [Chi & Li 88].
Table 3 shows that the NN results are very good compared to other techniques. Moreover,
all the classical methods are based on the assumption that a precise analytical function
(v, χ) → σ0 exists; the NN method is more general and does not depend on such an
assumption. Moreover, the decoding of a point with NN requires approximately 23 ms on
a SUN4 workstation. This time is to be compared with the 0.25 second necessary for
the decoding by present methods.
Table 3: simulation results Erms (in m/s) for different fixed wind speeds

Speed     ML      WLS     AWLS    L1      LS      LWSS    WLSL    N.N
Low       1.02    0.92    0.66    0.74    0.69    0.63    0.49    0.67
Middle    0.87    0.53    1.31    0.89    0.85    1.10    0.89    0.98
High      3.44    4.11    3.71    5.52    3.52    4.06    3.49    1.18
The wind vector error e is defined as follows: e = V1 - V2, where V1 is the true
wind vector and V2 is the estimated wind vector; Erms = E(||e||).
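This error measure follows directly from the definition:

```python
import numpy as np

def erms(v_true, v_est):
    """Erms = E(||e||), with e = v_true - v_est; wind vectors of shape (n, 2)."""
    return np.linalg.norm(np.asarray(v_true) - np.asarray(v_est), axis=1).mean()

print(erms([[3.0, 4.0], [0.0, 8.0]], [[2.5, 4.0], [0.5, 7.0]]))
```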
6 CONCLUSION
Performances reached when processing satellite remote sensing observations have proved
that multi-modular architectures where simple NN modules cooperate can cope with real
world applications. The methodology we have developed is general and can be used for a
large variety of applications, it provides solutions to generic problems arising when
dealing with NN cooperation.
References
Badran F, Thiria S, Crepon M (1991) : Wind ambiguity removal by the use of neural
network techniques, Journal of Geophysical Research, vol. 96, no. C11, pp. 20521-20529, November 15.
Chong-Yung Chi, Fuk K. Li (1988): A Comparative Study of Several Wind Estimation
Algorithms for Spaceborne Scatterometers. IEEE Transactions on Geoscience and Remote
Sensing, vol. 26, no. 2.
Le Cun Y., Boser B., & aI., (1990) : Handwritten Digit Recognition with a BackPropagation Network- in D.Touretzky (ed.) Advances in Neural Information Processing
Systems 2 , 396-404, Morgan Kaufmann
White H. (1989) : Learning in Artificial Neural Networks: A Statistical Perspective.
Neural Computation, 1, 425-464.
| 579 |@word … |
5,291 | 5,790 | Unlocking neural population non-stationarity
using a hierarchical dynamics model
Mijung Park1 , Gergo Bohner1 , Jakob H. Macke2
1 Gatsby Computational Neuroscience Unit, University College London
2
Research Center caesar, an associate of the Max Planck Society, Bonn
Max Planck Institute for Biological Cybernetics,
Bernstein Center for Computational Neuroscience Tübingen
{mijung, gbohner}@gatsby.ucl.ac.uk, jakob.macke@caesar.de
Abstract
Neural population activity often exhibits rich variability. This variability can arise
from single-neuron stochasticity, neural dynamics on short time-scales, as well as
from modulations of neural firing properties on long time-scales, often referred
to as neural non-stationarity. To better understand the nature of co-variability in
neural circuits and their impact on cortical information processing, we introduce
a hierarchical dynamics model that is able to capture both slow inter-trial modulations in firing rates as well as neural population dynamics. We derive a Bayesian
Laplace propagation algorithm for joint inference of parameters and population
states. On neural population recordings from primary visual cortex, we demonstrate that our model provides a better account of the structure of neural firing than
stationary dynamics models.
1
Introduction
Neural spiking activity recorded from populations of cortical neurons can exhibit substantial variability in response to repeated presentations of a sensory stimulus [1]. This variability is thought to
arise both from dynamics generated endogenously within the circuit [2] as well as from variations in
internal and behavioural states [3, 4, 5, 6, 7]. An understanding of how the interplay between sensory
inputs and endogenous dynamics shapes neural activity patterns is essential for our understanding
of how information is processed by neuronal populations. Multiple statistical [8, 9, 10, 11, 12, 13]
and mechanistic [14] models for characterising neuronal population dynamics have been developed.
In addition to these dynamics which take place on fast time-scales (milliseconds up to few seconds),
there are also processes modulating neural firing activity which take place on much slower timescales (seconds to hours). Slow drifts in rates across an experiment can be caused by fluctuations in
arousal, anaesthesia level or other physiological properties of the experimental preparation [15, 16,
17]. Furthermore, processes such as learning and short-term plasticity can lead to slow changes in
neural firing properties [18]. The statistical structure of these slow fluctuations has been modelled
using state-space models and related techniques [19, 20, 21, 22, 23]. Recent experimental findings
have shown that slow, multiplicative fluctuations in neural excitability are a dominant source of
neural covariability in extracellular multi-cell recordings from cortical circuits [5, 17, 24].
To accurately capture the the structure of neural dynamics and to disentangle the contributions of
slow and fast modulatory processes to neural variability and co-variability, it is therefore important
to develop models that can capture neural dynamics both on fast (i.e., within experimental trials) and
slow (i.e., across trials) time-scales. Few such models exist: Czanner et al. [25] presented a statistical
model of single-neuron firing in which within-trial dynamics are modelled by (generalised) linear
coupling from the recent spiking history of each neuron onto its instantaneous firing rate, and across-trial dynamics were modelled by defining a random walk model over parameters. More recently,
1
Mangion et al [26] presented a latent linear dynamical system model with Poisson observations
(PLDS, [8, 11, 13]) with a one-dimensional latent space, and used a heuristic filtering approach
for tracking parameters, again based on a random-walk model. Rabinowitz et al [27] presented
a technique for identifying slow modulatory inputs from the recordings of single neurons using a
Gaussian Process model and an efficient inference technique using evidence optimisation.
Here, we present a hierarchical model that consists of a latent dynamical system with Poisson observations (PLDS) to model neural population dynamics, combined with a Gaussian process (GP)
[28] to model modulations in firing rates or model-parameters across experimental trials. The use
of an exponential nonlinearity implies that latent modulations have a multiplicative effect on neural
firing rates. Compared to previous models using random walks over parameters, using a GP is a
more flexible and powerful way of modelling the statistical structure of non-stationarity, and makes
it possible to use hyper-parameters that model the variability and smoothness of parameter-changes
across time.
In this paper, we focus on a concrete variant of this general model: We introduce a new set of
variables which control neural firing rate on each trial to capture non-stationarity in firing rates.
We derive a Bayesian Laplace propagation method for inferring the posterior distributions over the
latent variables and the parameters from population recordings of spiking activity. Our approach
generalises the 1-dimensional latent states in [26] to models with multi-dimensional states, as well
as to a Bayesian treatment of non-stationarity based on Gaussian Process priors. The paper is organised as follows: In Sec. 2, we introduce our framework for constructing non-stationary neural
population models, as well as the concrete model we will use for analyses. In Sec. 3, we derive
the Bayesian Laplace propagation algorithm. In Sec. 4, we show applications to simulated data and
neural population recordings from visual cortex.
2
Hierarchical non-stationary models of neural population dynamics
We start by introducing a hierarchical model for capturing short time-scale population dynamics as
well as long time-scale non-stationarities in firing rates. Although we use the term 'non-stationary'
to mean that the system is best described by parameters that change over time (which is how the term
is often used in the context of neural data analysis), we note that the distribution over parameters
can be described by a stochastic process which might be strictly stationary in the statistical sense1 .
Modelling framework We assume that the neural population activity of p neurons yt ∈ R^p depends on a k-dimensional latent state xt ∈ R^k and a modulatory factor h(i) ∈ R^k which is different
for each trial i = {1, . . . , r}. The latent state x models short-term co-variability of spiking activity
and the modulatory factor h models slowly varying mean firing rates across experimental trials.
We model neural spiking activity as conditionally Poisson given the latent state xt and a modulator
h(i) , with a log firing rate which is linear in parameters and latent factors,
yt | xt, C, h(i), d ∼ Poiss(yt | exp(C(xt + h(i)) + d)),

where the loading matrix C ∈ R^{p×k} specifies how each neuron is related to the latent state and the
modulator, d ∈ R^p is an offset term that controls the mean firing rate of each cell, and Poiss(yt | w)
means that the ith entry of yt is drawn independently from a Poisson distribution with mean wi (the
ith entry of w). Because of the use of an exponential firing-rate nonlinearity, latent factors have a
multiplicative effect on neural firing rates, as has been observed experimentally [17, 5].
Following [11, 13, 26], we assume that the latent dynamics evolve according to a first-order autoregressive process with Gaussian innovations,
xt | xt-1, A, B, Q ∼ N(xt | A xt-1 + B ut, Q).

Here, we allow for sensory stimuli (or experimental covariates) ut ∈ R^d to influence the latent
states linearly. The dynamics matrix A ∈ R^{k×k} determines the state evolution, B ∈ R^{k×d} models
the dependence of latent states on external inputs, and Q ∈ R^{k×k} is the covariance of the innovation
noise. We set Q to be the identity matrix, Q = Ik as in [29], and we assume x0^(i) ∼ N(0, Ik).
1
A stochastic process is strict-sense stationary if its joint distribution over any two time-points t and s only
depends on the elapsed time t - s.
Figure 1: Schematic of hierarchical nonstationary Poisson observation Latent Dynamical System (N-PLDS) for capturing nonstationarity in mean firing rates. The parameter
h slowly varies across trials and leads to fluctuations in mean firing rates.
The parameters in this model are θ = {A, B, C, d, h(1:r)}. We refer to this general model as nonstationary PLDS (N-PLDS). Different variants of N-PLDS can be constructed by placing priors on
individual parameters which allow them to vary across trials (in which case they would then depend
on the trial index i) or by omitting different components of the model2 .
For the modulator h, we assume that it varies across trials according to a GP with mean mh and
(modified) squared exponential kernel, h(i) ∼ GP(mh, K(i, j)), where the (i, j)th block of K (size
k × k) is given by K(i, j) = (ρ² + ε δ_{i,j}) exp(-(i - j)²/(2τ²)) Ik. Here, we assume the independent
noise-variance on the diagonal (ε) to be constant and small as in [30]. When ρ² = ε = 0, the
modulator vanishes, which corresponds to the conventional PLDS model with fixed parameters [11,
13]. When ρ² > 0, the mean firing rates vary across trials, and the parameter τ determines the time-scale (in units of 'trials') of these fluctuations. We impose ridge priors on the model parameters (see
Appendix for details), so that the total set of hyperparameters of the model is φ = {mh, ρ², τ², λ},
where λ is the set of ridge parameters.
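To make the generative model concrete, here is a minimal sampler for N-PLDS following the equations above: per-trial modulators h(i) drawn from the GP, AR(1) latents within each trial, and Poisson observations. All parameter values are arbitrary placeholders, and the stimulus term B ut is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, r, T = 20, 2, 50, 100            # neurons, latent dim, trials, bins per trial
A = 0.9 * np.eye(k)                    # stable AR(1) dynamics
C = rng.normal(0, 0.4, (p, k))         # loading matrix
d = np.full(p, -1.5)                   # offsets set the baseline firing rates

# GP over per-trial modulators: K(i,j) = (rho^2 + eps*delta_ij) exp(-(i-j)^2 / (2 tau^2)) I_k
rho2, tau, eps = 0.3, 10.0, 1e-4
idx = np.arange(r)
Ktr = rho2 * np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / tau) ** 2) + eps * np.eye(r)
h = np.linalg.cholesky(Ktr) @ rng.normal(size=(r, k))   # each column is a smooth GP draw

Y = np.empty((r, T, p), dtype=int)
for i in range(r):
    x = rng.normal(size=k)                              # x_0 ~ N(0, I_k)
    for t in range(T):
        x = A @ x + rng.normal(size=k)                  # innovation covariance Q = I_k
        rate = np.exp(C @ (x + h[i]) + d)
        Y[i, t] = rng.poisson(rate)

print(Y.mean(axis=(1, 2)))   # per-trial mean counts drift slowly, following h
```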
3
Bayesian Laplace propagation
Our goal is to infer parameters and latent variables in the model. The exact posterior distribution
is analytically intractable due to the use of a Poisson likelihood, and we therefore assume the joint
posterior over the latent variables and parameters to be factorising,
p(θ, x_{1:T}^{(1:r)} | y_{1:T}^{(1:r)}, φ) ∝ p(y_{1:T}^{(1:r)} | x_{1:T}^{(1:r)}, θ) p(x_{1:T}^{(1:r)} | θ, φ) p(θ | φ) ≈ q(θ, x_{1:T}^{(1:r)}) = q_θ(θ) ∏_{i=1}^{r} q_x(x_{0:T}^{(i)}).
This factorisation simplifies computing the integrals involved in calculating a bound on the marginal
likelihood of the observations,
log p(y_{1:T}^{(1:r)} | φ) = log ∫ dθ dx_{1:T}^{(1:r)} p(θ, x_{1:T}^{(1:r)}, y_{1:T}^{(1:r)} | φ)
≥ ∫ dθ dx_{1:T}^{(1:r)} q(θ, x_{1:T}^{(1:r)}) log [ p(θ, x_{1:T}^{(1:r)}, y_{1:T}^{(1:r)} | φ) / q(θ, x_{1:T}^{(1:r)}) ].   (1)
Similar to variational Bayesian expectation maximization (VBEM) algorithm [29], our inference
procedure consists of the following three steps: (1) we compute the approximate posterior over
latent variables q_x(x_{0:T}^{(1:r)}) by integrating out the parameters

q_x(x_{0:T}^{(1:r)}) ∝ exp( ∫ dθ q_θ(θ) log p(x_{1:T}^{(1:r)}, y_{1:T}^{(1:r)} | θ) ),   (2)
which is performed by forward-backward message passing relying on the order-1 dependency in
latent states. Then, (2) we compute the approximate posterior over parameters q_θ(θ) by integrating
out the latent variables,

q_θ(θ) ∝ p(θ) exp( ∫ dx_{0:T}^{(1:r)} q_x(x_{0:T}^{(1:r)}) log p(x_{0:T}^{(1:r)}, y_{1:T}^{(1:r)} | θ) ),   (3)
and (3) we update the hyperparameters by computing the gradients of the bound in eq. 1 after
integrating out both latent variables and parameters. We iterate the three steps until convergence.
Unfortunately, the integrals in both eq. 2 and eq. 3 are not analytically tractable, even with Gaussian
distributions for q_x(x_{0:T}^{(1:r)}) and q_θ(θ). For tractability and fast computation of messages in
2
A second variant of the model, in which the dynamics matrix that determines the spatio-temporal
correlations in the population varies across trials, is described in the Appendix.
the forward-backward algorithm for eq. 2, we utilise the so-called Laplace propagation or Laplace
expectation propagation (Laplace-EP) [31, 32, 33], which makes a Gaussian approximation to each
message based on Laplace approximation, then propagates the messages forward and backward.
While Laplace propagation in the prior work is commonly coupled with point estimates of parameters, we consider the posterior distribution over parameters. For this reason, we refer to our inference
method as Bayesian Laplace propagation. The use of approximate message passing in the Laplace
propagation implies that there is no longer a guarantee that the lower bound will increase monotonically in each iteration, which is the main difference between our method and the VBEM algorithm.
We therefore monitored the convergence of our algorithm by computing one-step ahead prediction
scores [13]. The algorithm proceeds by iterating the following three steps:
(1) Approximating the posterior over latent states: Using the first-order dependency in latent
states, we derive a sequential forward/backward algorithm to obtain q_x(x_{0:T}^{(1:r)}), generalising the
approach of [26] to multi-dimensional latent states. Since this step decouples across trials, it is easy
to parallelize, and we omit the trial-indices for clarity. We note that computation of the approximate
posterior in this step is not more expensive than Bayesian inference of the latent state in a ?fixed
parameter? PLDS. The forward message ?(xt ) at time t is given by
Z
?(xt ) ? dxt?1 ?(xt?1 ) exp hlog(p(xt |xt?1 )p(yt |xt ))iq? (?) .
(4)
Assuming that the forward message at time t - 1, denoted by α(x_{t-1}), is Gaussian, the Poisson
likelihood term will render the forward message at time t non-Gaussian, but we approximate
α(x_t) as a Gaussian using the first and second derivatives of the right-hand side of eq. 4 with respect
to x_t.
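A sketch of one such Gaussian forward update is shown below. To keep the example short, the expectation over q_θ(θ) is replaced by fixed point values of the parameters, which is an assumption; the previous message is propagated through the dynamics, and the product with the Poisson likelihood is then Laplace-approximated by a few Newton steps.

```python
import numpy as np

def forward_step(m_prev, V_prev, y, A, C, d, n_newton=20):
    """One alpha-message update for a Poisson-observation LDS with Q = I:
    predict x_t from the previous Gaussian message, then Laplace-approximate
    the update after observing the spike counts y."""
    m_pred = A @ m_prev
    V_pred = A @ V_prev @ A.T + np.eye(len(m_prev))
    P = np.linalg.inv(V_pred)                  # predictive precision
    x = m_pred.copy()
    for _ in range(n_newton):                  # Newton ascent on the log posterior
        rate = np.exp(C @ x + d)
        g = C.T @ (y - rate) - P @ (x - m_pred)
        H = -C.T @ (rate[:, None] * C) - P     # Hessian of the log posterior
        x = x - np.linalg.solve(H, g)
    rate = np.exp(C @ x + d)
    V = np.linalg.inv(C.T @ (rate[:, None] * C) + P)
    return x, V                                # Gaussian approximation of alpha(x_t)

rng = np.random.default_rng(2)
k, p = 2, 10
A, C, d = 0.9 * np.eye(k), rng.normal(0, 0.3, (p, k)), np.full(p, -1.0)
y = rng.poisson(np.exp(d))
print(forward_step(np.zeros(k), np.eye(k), y, A, C, d))
```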
Similarly, the backward message at time t - 1 is given by

β(x_{t-1}) ∝ ∫ dx_t β(x_t) exp( ⟨log(p(x_t | x_{t-1}) p(y_t | x_t))⟩_{q_θ(θ)} ),   (5)

which we also approximate by a Gaussian for tractability in computing backward messages.
Using the forward/backward messages, we compute the posterior marginal distribution over latent
variables (See Appendix). We need to compute the cross-covariance between neighbouring latent
variables to obtain the sufficient statistics of latent variables (which we will need for updating the
posterior over parameters). The pairwise marginals of latent variables are given by
p(x_t, x_{t+1} | y_{1:T}) ∝ β(x_{t+1}) exp( ⟨log(p(y_{t+1} | x_{t+1}) p(x_{t+1} | x_t))⟩_{q_θ(θ)} ) α(x_t),   (6)
which we approximate as a joint Gaussian distribution by using the first/second derivatives of eq. 6
and extracting the cross-covariance term from the joint covariance matrix.
(2) Approximating the posterior over parameters: After inferring the posterior over latent
states, we update the posterior distribution over the parameters. The posterior over parameters factorizes as
q_θ(θ) = q_{a,b}(a, b) q_{c,d,h}(c, d, h^{(1:r)}),   (7)

where we used the vectorized notations b = vec(Bᵀ) and c = vec(Cᵀ). We set c, d to the maximum
likelihood estimates ĉ, d̂ for simplicity in inference. The computational cost of this algorithm is
dominated by the cost of calculating the posterior distribution over h(1:r) , which involves manipulation of a rk-dimensional Gaussian. While this was still tractable without further approximations for
the data-set sizes used in our analyses below (hundreds of trials), a variety of approximate methods
for GP-inference exist which could be used to improve efficiency of this computation. In particular,
we will typically be dealing with systems in which τ ≫ 1, which means that the kernel-matrix is
smooth and could be approximated using low-rank representations [28].
(3) Estimating hyperparameters:
Finally, after obtaining the approximate posterior
q(θ, x_{0:T}^{(1:r)}), we update the hyperparameters of the prior by maximizing the lower bound with respect to the hyperparameters. The variational lower bound simplifies to (see Ch. 5 in [29] for details;
note that the usage of Gaussian approximate posteriors ensures that this step is analogous to
hyperparameter updating in a fully Gaussian LDS)
log p(y_{1:T}^{(1:r)} | φ) ≥ -KL(φ) + c,   (8)
Figure 2: Illustration of non-stationarity in firing rates (simulated data). A, B Spike rates of 40
neurons are influenced by two slowly varying firing rate modulators. The log mean firing rates of the
two groups of neurons are z1 (red, group 1) and z2 (blue, group 2) across 100 trials. C, D Raster plots
show the extreme cases, i.e. trials 25 and 75. The traces show the posterior mean of z estimated
by N-PLDS (light blue for z2 , light red for z1 ), independent PLDSs (fit a PLDS to each trial data
individually, dark gray), and PLDS (light gray). E Total and conditional (on each trial) covariance of
recovered neural responses from each model (averaged across all neuron pairs, and then normalised
for visualisation). The covariances recovered by our model (red) well match the true ones (black),
while those by independent PLDSs (gray) and a single PLDS (light gray) do not.
where c is a constant. Here, the KL divergence between the prior and posterior over parameters,
denoted by N(μ_φ, Σ_φ) and N(μ, Σ), respectively, is given by

KL(φ) = -(1/2) log|Σ_φ^{-1} Σ| + (1/2) Tr(Σ_φ^{-1} Σ) + (1/2) (μ - μ_φ)ᵀ Σ_φ^{-1} (μ - μ_φ) + c,   (9)
where the prior mean and covariance depend on the hyperparameters. We update the hyperparameters by taking the derivative of KL w.r.t. each hyper parameter. For the prior mean, the first derivative
expression provides a closed-form update. For τ (the time scale of inter-trial fluctuations in firing rates)
and ρ² (the variance of inter-trial fluctuations), the derivative expressions do not provide a closed-form
update, in which case we compute the KL divergence on the grid defined in each hyperparameter
space and choose the value that minimises KL.
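The grid step for τ can be sketched as follows for a one-dimensional modulator (k = 1): evaluate the KL of eq. (9) between the Gaussian posterior over h(1:r) and the GP prior whose covariance depends on τ, and keep the minimiser. The grid range and the fixed values of ρ² and ε are assumptions.

```python
import numpy as np

def kl_gauss(mu, Sig, mu0, Sig0):
    """KL( N(mu, Sig) || N(mu0, Sig0) ) as in eq. (9), up to an additive constant."""
    Si0 = np.linalg.inv(Sig0)
    diff = mu - mu0
    return 0.5 * (-np.linalg.slogdet(Si0 @ Sig)[1]
                  + np.trace(Si0 @ Sig) + diff @ Si0 @ diff)

def prior_cov(r, tau, rho2=0.3, eps=1e-4):
    i = np.arange(r)
    return rho2 * np.exp(-0.5 * ((i[:, None] - i[None, :]) / tau) ** 2) + eps * np.eye(r)

def update_tau(mu_h, Sig_h, m_h, grid=np.linspace(2.0, 20.0, 37)):
    kls = [kl_gauss(mu_h, Sig_h, m_h, prior_cov(len(mu_h), tau)) for tau in grid]
    return grid[int(np.argmin(kls))]

rng = np.random.default_rng(3)
r = 40
mu_h = np.linalg.cholesky(prior_cov(r, 8.0)) @ rng.normal(size=r)  # stand-in posterior mean
print(update_tau(mu_h, 0.01 * np.eye(r), np.zeros(r)))             # should land near tau = 8
```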
Predictive distributions for test data. In our model, different trials are no longer considered to
be independent, so we can predict parameters for held-out trials. Using the GP model on h and our
approximations, we have Gaussian predictive distributions on h* for test data D* given training data
D:

p(h* | D, D*) = N( mh + K* K^{-1} (μ_h - mh),  K** - K* (K + H_h^{-1})^{-1} K*ᵀ ),   (10)

where K is the prior covariance matrix on D, K** the one on D*, and K* their prior cross-covariance
as introduced in Ch. 2 of [28]; μ_h denotes the approximate posterior mean of h on the training data,
and the negative Hessian H_h is defined as
H_h = - ∂²/∂(h^{(1:r)})² [ ∑_{i=1}^{r} ∫ dx_{0:T}^{(i)} q(x_{0:T}^{(i)}) ∑_{t=1}^{T} log p(y_t^{(i)} | x_t^{(i)}, ĉ, d̂, h^{(i)}) ].   (11)
In the applications to simulated and neurophysiological data described in the following, we used this
approach to predict the properties of neural dynamics on held-out trials.
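A sketch of eq. (10) for scalar modulators (k = 1) follows. The posterior mean μ_h and the negative Hessian H_h are taken as given and filled with synthetic values here, and the kernel hyperparameters are placeholders.

```python
import numpy as np

def kern(a, b, rho2=0.3, tau=10.0):
    return rho2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / tau) ** 2)

def predict_h(idx_tr, idx_te, mu_h, H_h, m_h=0.0, eps=1e-4):
    """Eq. (10): predictive mean and covariance of h at held-out trial indices."""
    K = kern(idx_tr, idx_tr) + eps * np.eye(len(idx_tr))
    Ks = kern(idx_te, idx_tr)                                   # K_*
    Kss = kern(idx_te, idx_te) + eps * np.eye(len(idx_te))      # K_**
    mean = m_h + Ks @ np.linalg.solve(K, mu_h - m_h)
    cov = Kss - Ks @ np.linalg.solve(K + np.linalg.inv(H_h), Ks.T)
    return mean, cov

idx_tr = np.array([i for i in range(100) if i % 10 != 0], float)  # 90 training trials
idx_te = np.array([i for i in range(100) if i % 10 == 0], float)  # 10 held-out trials
mu_h = 0.5 * np.sin(idx_tr / 15.0)          # stand-in posterior mean on training trials
H_h = 50.0 * np.eye(len(idx_tr))            # stand-in negative Hessian
mean, cov = predict_h(idx_tr, idx_te, mu_h, H_h)
print(mean[:5])
```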
4
Applications
Simulated data: We first illustrate the performance of N-PLDS on a simulated population recording from 40 neurons consisting of 100 trials of length T = 200 time steps each. We used a
4-dimensional latent state and assumed that the population consisted of two homogeneous subpopulations of size 20 each, with one modulatory input controlling rate fluctuations in each group
(See Fig. 2 A). In addition, we assumed that for half of each trial, there was a time-varying stimulus
(?drifting grating?), represented by a 3-dimensional vector which consisted of the sine and cosine
Figure 3: Non-stationary firing rates in a population of V1 neurons. A: Mean firing rates of
neurons (black trace) across trials. Left: The 5 most non-stationary neurons. Right: The 5 most
stationary neurons. The fitted (solid line) and the predicted (circles) mean firing rates are also shown
for N-PLDS (in red) and PLDS (in gray). B Left: The RMSE in predicting single neuron firing rates
across 5 most non-stationary neurons for varying latent dimensionalities k , where N-PLDS achieves
significantly lower RMSE. Middle: RMSE for the 5 most stationary neurons, where there is no
difference between two methods (apart from an outlier at k=8). Right: RMSE for the all 64 neurons.
of the time-varying phase of the stimulus (frequency 0.4 Hz) as well as an additional binary term
which indicated whether the stimulus was active.
We fit N-PLDS to the data, and found that it successfully captures the non-stationarity in (log) mean
firing rates, defined by z = C(x + h) + d, as shown in Fig. 2, and recovers the total and trial-conditioned covariances (the across-trial mean of the single-trial covariances of z). For comparison,
we also fit 100 separate PLDSs to the data from each trial, as well as a single PLDS to the entire
data. The naive approach of fitting an individual PLDS to each trial can, in principle, follow the
modulation. However, as each model is only fit to one trial, the parameter-estimates are very noisy
since they are not sufficiently constrained by the data from each trial.
We note that a single PLDS with fixed parameters (as is conventionally used in neural data analysis)
is able to track the modulations in firing rates in the posterior mean here; however, a single PLDS
would not be able to extrapolate firing rates for unseen trials (as we will demonstrate in our analyses
on neural data below). In addition, it will also fail to separate ?slow? and ?fast? modulations into
different parameters. By comparing the total covariance of the data (averaged across neuron pairs) to
the 'trial-conditioned' covariance (calculated by estimating the covariance on each trial individually,
and averaging covariances) one can calculate how much of the cross-neuron co-variability can be
explained by across-trials fluctuations in firing rates (see e.g., [17]). In this simulation shown in
Fig. 2 (which illustrates an extreme case dominated by strong across-trial effects), the conditional
covariance is much smaller than the full covariance.
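The comparison behind Fig. 2E can be sketched as follows for spike counts of shape (trials, time, neurons): the total covariance pools all trials, while the trial-conditioned covariance is computed within each trial and then averaged. The synthetic data below use a shared slow per-trial gain, so the total covariance dominates, as in the simulation above.

```python
import numpy as np

def avg_pairwise_cov(Y):
    """Mean off-diagonal covariance across neuron pairs; Y: (samples, neurons)."""
    Cv = np.cov(Y, rowvar=False)
    return Cv[~np.eye(Cv.shape[0], dtype=bool)].mean()

def total_vs_conditional(Y):
    """Y: (trials, time, neurons) spike counts."""
    total = avg_pairwise_cov(Y.reshape(-1, Y.shape[-1]))        # pool all trials
    conditional = np.mean([avg_pairwise_cov(Yt) for Yt in Y])   # per trial, then average
    return total, conditional

rng = np.random.default_rng(5)
gain = np.exp(rng.normal(0, 0.5, size=(50, 1, 1)))              # slow per-trial gain
Y = rng.poisson(2.0 * gain * np.ones((50, 100, 20)))
print(total_vs_conditional(Y))   # total >> conditional when across-trial effects dominate
```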
Neurophysiological data: How big are non-stationarities in neural population recordings, and
can our model successfully capture them? To address these questions, we analyzed a population
recording from anaesthetized macaque primary visual cortex consisting of 64 neurons stimulated by
sine grating stimuli. The details of data collection are described in [5], but our data-set also included
units not used in the original study. We binned the spikes recorded during 100 trials of length 4s
(stimulus was on for 2s) of the same orientation using 50ms bins, resulting in trials of length T = 80
bins. Analogously to the simulated dataset above, we parameterised the stimulus as a 3-dimensional
vector of the sine and cosine with the same temporal frequency of the drifting grating, as well as an
indicator that specifies whether there is a stimulus or not.
We used 10-fold cross validation to evaluate performance of the model, i.e. repeatedly divided the
data into test data (10 trials) and training data (the remaining 90 trials). We fit the model on each
training set, and using the estimated parameters from the training data, we made predictions on the
modulator h on test data by using the mean of the predictive distribution over h. We note that, in
contrast to conventional applications of cross-validation which assume i.i.d. trials, our model here
also takes into account correlations in firing rates across trials; therefore, we had to keep the trial-indices
in order to compute predictive distributions for test data using formulas in eq. 10. Using these
parameters, we drew samples for spikes for the entire trials to compute the mean firing rates of each
neuron at each trial. For comparison, we also fit a single PLDS to the data. As this model does not
allow for across-trial modulations of firing rates, we simply kept the parameters estimated from the
training data. For visualisation of results, we quantified the ?non-stationarity? of each neuron by first
smoothing its firing rate across trials (using a kernel of size 10 trials), calculating the variance of the
smoothed firing rate estimate, and displaying firing rates for the 5 most non-stationary neurons in
the population (Fig. 3A, left) as well as 5 most stationary neurons (Fig. 3A, right). Importantly, the
firing-rates were also correctly interpolated for held out trials (circles in Fig. 3A).
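The ranking used for Fig. 3A can be sketched as follows; the boxcar smoother is one simple choice for the kernel of size 10 trials, since the exact smoothing kernel is not specified.

```python
import numpy as np

def nonstationarity_scores(rates, width=10):
    """rates: (trials, neurons) firing rates. Score per neuron: variance of the
    across-trial rate after smoothing with a boxcar kernel of `width` trials."""
    kernel = np.ones(width) / width
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 0, rates)
    return smoothed.var(axis=0)

rng = np.random.default_rng(6)
drift = np.sin(np.arange(100) / 12.0)[:, None] * np.array([1.0, 0.0])  # neuron 0 drifts
rates = 5.0 + drift + rng.normal(0, 0.3, (100, 2))
scores = nonstationarity_scores(rates)
print(scores, scores.argsort()[::-1])   # neuron 0 ranks as most non-stationary
```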
To evaluate whether the additional parameters in N-PLDS result in a superior model compared to
conventional PLDS [13], we tested the model with different latent dimensionalities ranging from
k = 1 to k = 8, and compared each model against a 'fixed' PLDS of matched dimensionality
(Fig. 3B). We estimated predicted firing rates on held out trials by sampling 1000 replicate trials
from the predictive distribution for both models and compared the median (across samples) of the
mean firing rates of each neuron to those of the data. The shown RMSE values are the errors of
predicted firing rate (in Hz) per neuron per held out trial (population mean across all neurons and
trials is 4.54 Hz). We found that N-PLDS outperformed PLDS provided that we had sufficiently
many latent states, at least k > 3. For large latent dimensionalities (k > 8) performance degraded
again, which could be a consequence of overfitting. Furthermore, we show that for non-stationary
neurons there is a large gain in predictive power (Fig. 3B, left), whereas for stationary neurons PLDS
and N-PLDS have similar prediction accuracy (Fig. 3B, middle). The RMSE on firing rates for all
neurons (Fig. 3B, right) suggests that our model correctly identified the fluctuation in firing rates.
We also wanted to gain insights into the temporal scale of the underlying non-stationarities. We first
looked at the recovered time-scales τ of the latent modulators, and found them to be highly preserved
across multiple training folds, and, importantly, across different values of the latent dimensionalities,
consistently peaked near 10 trials (Fig. 4 A). We made sure that the peak near 10 trials is not merely
a consequence of parameter initialization; parameters were initialised by fitting a Gaussian Process
with an exponentiated quadratic one-dimensional kernel to each neuron's mean firing rate over trials
individually, then taking the mean time-scale over neurons as the initial global time-scale for our
kernel. The initial values were 8.12 ± 0.01, differing slightly between training sets. Similarly, we
checked that the parameters of the final model (after 30 iterations of Bayesian Laplace propagation)
were indeed superior to the initial values, by monitoring the prediction error on held-out trials.
Furthermore, due to introducing a smooth change with the correct time scale in the latent space
(e.g., the posterior mean of h across trials shown in Fig. 4B), we find that N-PLDS recovers more
of the time-lagged covariance of neurons compared to the fixed PLDS model (Fig. 4C).
5
Discussion
Non-stationarities are ubiquitous in neural data: Slow modulations in firing properties can result
from diverse processes such as plasticity and learning, fluctuations in arousal, cortical reorganisation
after injury as well as development and aging. In addition, non-stationarities in neural data can also
be a consequence of experimental artifacts, and can be caused by fluctuations in anaesthesia level,
Figure 4: Non-stationary firing rates in a population of V1 neurons (continued). A: Histogram
of time-constants across different latent dimensionalities and training sets. Mean at 10.4 is indicated
by the vertical red line. B: Estimated 7-dimensional modulator (the posterior mean of h). The
modulator with an estimated length scale of approximately 10 trials is smoothly varying across
trials. C: Comparison of normalized mean auto-covariance across neurons.
stability of the physiological preparation or electrode drift. Whatever the origins of non-stationarities
are, it is important to have statistical models which can identify them and disentangle their effects
from correlations and dynamics on faster time-scales [16].
We here presented a hierarchical model for neural population dynamics in the presence of nonstationarity. Specifically, we concentrated on a variant of this model which focuses on nonstationarity in firing rates. Recent experimental studies have shown that slow fluctuations in neural
excitability which have a multiplicative effect on neural firing rates are a dominant source of noise
correlations in anaesthetized visual cortex [17, 5, 24]. Because of the exponential spiking nonlinearity employed in our model, the latent additive fluctuations in the modulator-variables also have
a multiplicative effect on firing rates. Applied to a data-set of neurophysiological recordings, we
demonstrated that this modelling approach can successfully capture non-stationarities in neurophysiological recordings from primary visual cortex.
In our model, both neural dynamics and latent modulators are mediated by the same low-dimensional
subspace (parameterised by C). We note, however, that this assumption does not imply that neurons
with strong short-term correlations will also have strong long-term correlations, as different dimensions of this subspace (as long as it is chosen big enough) could be occupied by short and long term
correlations, respectively. In our applications to neural data, we found that the latent state had to be
at least three-dimensional for the non-stationary model to outperform a stationary dynamics model,
and it might be the case that at least three dimensions are necessary to capture both fast and slow
correlations. It is an open question of how correlations on fast and slow timescales are related [17],
and the techniques presented have the potential to be of use for mapping out their relationships.
There are limitations to the current study: (1) We did not address the question of how to select
amongst multiple different models which could be used to model neural non-stationarity for a given
dataset; (2) we did not present numerical techniques for how to scale up the current algorithm for
larger trial numbers (e.g., using low-rank approximations to the covariance matrix) or large neural
populations; and (3) we did not address the question of how to overcome the slow convergence
properties of GP kernel parameter estimation [34]. (4) While Laplace propagation is flexible, it is
an approximate inference technique, and the quality of its approximations might vary for different models or tasks. We believe that extending our method to address these questions provides an
exciting direction for future research, and will result in a powerful set of statistical methods for
investigating how neural systems operate in the presence of non-stationarity.
Acknowledgments
We thank Alexander Ecker and the lab of Andreas Tolias for sharing their data with us [5] (see
http://toliaslab.org/publications/ecker-et-al-2014/), and for allowing us
to use it in this publication, as well as Maneesh Sahani and Alexander Ecker for valuable comments.
This work was funded by the Gatsby Charitable Foundation (MP and GB) and the German Federal
Ministry of Education and Research (MP and JHM) through BMBF; FKZ:01GQ1002 (Bernstein
Center Tübingen). Code available at http://www.mackelab.org/code.
References
[1] A. Renart and C. K. Machens. Variability in neural activity and behavior. Curr Opin Neurobiol, 25:211-20, 2014.
[2] A. Destexhe. Intracellular and computational evidence for a dominant role of internal network activity in cortical computations. Curr Opin Neurobiol, 21(5):717-725, 2011.
[3] G. Maimon. Modulation of visual physiology by behavioral state in monkeys, mice, and flies. Curr Opin Neurobiol, 21(4):559-64, 2011.
[4] K. D. Harris and A. Thiele. Cortical state and attention. Nat Rev Neurosci, 12(9):509-523, 2011.
[5] Ecker et al. State dependence of noise correlations in macaque primary visual cortex. Neuron, 82(1):235-48, 2014.
[6] Ralf M. Haefner, Pietro Berkes, and József Fiser. Perceptual decision-making as probabilistic inference by neural sampling. arXiv preprint arXiv:1409.0257, 2014.
[7] Alexander S. Ecker, George H. Denfield, Matthias Bethge, and Andreas S. Tolias. On the structure of population activity under fluctuations in attentional state. bioRxiv, page 018226, 2015.
[8] A. C. Smith and E. N. Brown. Estimating a state-space model from point process observations. Neural Comput, 15(5):965-91, 2003.
[9] U. T. Eden, L. M. Frank, R. Barbieri, V. Solo, and E. N. Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput, 16(5):971-98, 2004.
[10] B. M. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. In NIPS 18, pages 1545-1552. MIT Press, Cambridge, MA, 2006.
[11] J. E. Kulkarni and L. Paninski. Common-input models for multiple neural spike-train data. Network, 18(4):375-407, 2007.
[12] W. Truccolo, L. R. Hochberg, and J. P. Donoghue. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nat Neurosci, 13(1):105-111, 2010.
[13] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. In NIPS, pages 1350-1358, 2011.
[14] C. van Vreeswijk and H. Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724-6, 1996.
[15] G. J. Tomko and D. R. Crapper. Neuronal variability: non-stationary responses to identical visual stimuli. Brain Res, 79(3):405-18, 1974.
[16] C. D. Brody. Correlations without synchrony. Neural Comput, 11(7):1537-51, 1999.
[17] R. L. T. Goris, J. A. Movshon, and E. P. Simoncelli. Partitioning neuronal variability. Nat Neurosci, 17(6):858-65, 2014.
[18] C. D. Gilbert and W. Li. Adult visual cortical plasticity. Neuron, 75(2):250-64, 2012.
[19] E. N. Brown, D. P. Nguyen, L. M. Frank, M. A. Wilson, and V. Solo. An analysis of neural receptive field plasticity by point process adaptive filtering. Proc Natl Acad Sci U S A, 98(21):12261-6, 2001.
[20] Frank et al. Contrasting patterns of receptive field plasticity in the hippocampus and the entorhinal cortex: an adaptive filtering approach. J Neurosci, 22(9):3817-30, 2002.
[21] N. A. Lesica and G. B. Stanley. Improved tracking of time-varying encoding properties of visual neurons by extended recursive least-squares. IEEE Trans Neural Syst Rehabil Eng, 13(2):194-200, 2005.
[22] V. Ventura, C. Cai, and R. E. Kass. Trial-to-trial variability and its effect on time-varying dependency between two neurons, 2005.
[23] C. S. Quiroga-Lombard, J. Hass, and D. Durstewitz. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation. J Neurophysiol, 110(2):562-72, 2013.
[24] Schölvinck et al. Cortical state determines global variability and correlations in visual cortex. J Neurosci, 35(1):170-8, 2015.
[25] Gabriela C., Uri T. E., Sylvia W., Marianna Y., Wendy A. S., and Emery N. B. Analysis of between-trial and within-trial neural spiking dynamics. Journal of Neurophysiology, 99(5):2672-2693, 2008.
[26] Mangion et al. Online variational inference for state-space models with point-process observations. Neural Comput, 23(8):1967-1999, 2011.
[27] Neil C. Rabinowitz, Robbe L. T. Goris, Johannes Ballé, and Eero P. Simoncelli. A model of sensory neural responses in the presence of unknown modulatory inputs. arXiv preprint arXiv:1507.01497, 2015.
[28] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, USA, 2006.
[29] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Unit, University College London, 2003.
[30] Yu et al. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. 102(1):614-635, 2009.
[31] A. J. Smola, V. Vishwanathan, and E. Eskin. Laplace propagation. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf, editors, NIPS, pages 441-448. MIT Press, 2003.
[32] A. Ypma and T. Heskes. Novel approximations for inference in nonlinear dynamical systems using expectation propagation. Neurocomput., 69(1-3):85-99, 2005.
[33] B. M. Yu, K. V. Shenoy, and M. Sahani. Expectation propagation for inference in non-linear dynamical models with Poisson observations. In Proc IEEE Nonlinear Statistical Signal Processing Workshop, 2006.
[34] I. Murray and R. P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models. In NIPS 23, pages 1723-1731, 2010.
Deeply Learning the Messages in Message Passing Inference
Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel
The University of Adelaide, Australia; and Australian Centre for Robotic Vision
E-mail: {guosheng.lin,chunhua.shen,ian.reid,anton.vandenhengel}@adelaide.edu.au
Abstract
Deep structured output learning shows great promise in tasks like semantic image segmentation. We proffer a new, efficient deep structured model learning
scheme, in which we show how deep Convolutional Neural Networks (CNNs)
can be used to directly estimate the messages in message passing inference for
structured prediction with Conditional Random Fields (CRFs). With such CNN
message estimators, we obviate the need to learn or evaluate potential functions
for message calculation. This confers significant efficiency for learning, since otherwise when performing structured learning for a CRF with CNN potentials it is
necessary to undertake expensive inference for every stochastic gradient iteration.
The network output dimension of message estimators is the same as the number
of classes, rather than exponentially growing in the order of the potentials. Hence
it is more scalable for cases that involve a large number of classes. We apply
our method to semantic image segmentation and achieve impressive performance,
which demonstrates the effectiveness and usefulness of our CNN message learning method.
1 Introduction
Learning deep structured models has attracted considerable research attention recently. One popular approach to deep structured models is formulating conditional random fields (CRFs) using deep
Convolutional Neural Networks (CNNs) for the potential functions. This combines the power of
CNNs for feature representation learning and of the ability for CRFs to model complex relations.
The typical approach for the joint learning of CRFs and CNNs [1, 2, 3, 4, 5], is to learn the CNN
potential functions by optimizing the CRF objective, e.g., maximizing the log-likelihood. The CNN
and CRF joint learning has shown impressive performance for semantic image segmentation.
For the joint learning of CNNs and CRFs, stochastic gradient descent (SGD) is typically applied for
optimizing the conditional likelihood. This approach requires the marginal inference for calculating
the gradient. For loopy graphs, marginal inference is generally expensive even when using approximate solutions. Given that learning the CNN potential functions typically requires a large number of
gradient iterations, repeated marginal inference would make the training intractably slow. Applying
an approximate training objective is a solution to avoid repeat inference; pseudo-likelihood learning
[6] and piecewise learning [7, 3] are examples of this kind of approach. In this work, we advocate a
new direction for efficient deep structured model learning.
In conventional CRF approaches, the final prediction is the result of inference based on the learned
potentials. However, our ultimate goal is the final prediction (not the potentials themselves), so we
propose to directly optimize the inference procedure for the final prediction. Our focus here is on
the extensively studied message passing based inference algorithms. As discussed in [8], we can
directly learn message estimators to output the required messages in the inference procedure, rather
than learning the potential functions as in conventional CRF learning approaches. With the learned
message estimators, we then obtain the final prediction by performing message passing inference.
Our main contributions are as follows:
1) We explore a new direction for efficient deep structured learning. We propose to directly learn the
messages in message passing inference as training deep CNNs in an end-to-end learning fashion.
Message learning does not require any inference step for the gradient calculation, which allows
efficient training. Furthermore, when cast as a tradiational classification task, the network output
dimension for message estimation is the same as the number of classes (K), while the network
output for general CNN potential functions in CRFs is K a , which is exponential in the order (a)
of the potentials (for example, a = 2 for pairwise potentials, a = 3 for triple-cliques, etc). Hence
CNN based message learning has significantly fewer network parameters and thus is more scalable,
especially in cases which involve a large number of classes.
2) The number of iterations in message passing inference can be explicitly taken into consideration
in the message learning procedure. In this paper, we are particularly interested in learning messages
that are able to offer high-quality CRF prediction results with only one message passing iteration,
making the message passing inference very fast.
3) We apply our method to semantic image segmentation on the PASCAL VOC 2012 dataset and
achieve impressive performance.
Related work Combining the strengths of CNNs and CRFs for segmentation has been explored in
several recent methods. Some methods resort to a simple combination of CNN classifiers and CRFs
without joint learning. DeepLab-CRF in [9] first trains a fully convolutional CNN for pixel classification and applies
a dense CRF [10] method as a post-processing step. Later the method in [2] extends DeepLab
by jointly learning the dense CRFs and CNNs. RNN-CRF in [1] also performs joint learning of
CNNs and the dense CRFs. They implement the mean-field inference as Recurrent Neural Networks
which facilitates the end-to-end learning. These methods usually use CNNs for modelling the unary
potentials only. The work in [3] trains CNNs to model both the unary and pairwise potentials in
order to capture contextual information. Jointly learning CNNs and CRFs has also been explored
for other applications like depth estimation [4, 11]. The work in [5] explores joint training of Markov
random fields and deep networks for predicting words from noisy images and image classification.
All these above-mentioned methods that combine CNNs and CRFs are based upon conventional
CRF approaches. They aim to jointly learn or incorporate pre-trained CNN potential functions, and
then perform inference/prediction using the potentials. In contrast, our method here directly learns
CNN message estimators for the message passing inference, rather than learning the potentials.
The inference machine proposed in [8] is relevant to our work in that it has discussed the idea of
directly learning message estimators instead of learning potential functions for structured prediction. They train traditional logistic regressors with hand-crafted features as message estimators.
Motivated by the tremendous success of CNNs, we propose to train deep CNNs based message estimators in an end-to-end learning style without using hand-crafted features. Unlike the approach in
[8] which aims to learn variable-to-factor message estimators, our proposed method aims to learn
the factor-to-variable message estimators. Thus we are able to naturally formulate the variable
marginals ? which is the ultimate goal for CRF inference ? as the training objective (see Sec. 3.3).
The approach in [12] jointly learns CNNs and CRFs for pose estimation, in which they learn the
marginal likelihood of body parts but ignore the partition function in the likelihood. Message learning is not discussed in that work, and the exact relationship between this pose estimation approach
and message learning remains unclear.
2 Learning CRF with CNN potentials
Before describing our message learning method, we review the CRF-CNN joint learning approach
and discuss limitations. An input image is denoted by $x \in \mathcal{X}$ and the corresponding labeling mask
is denoted by $y \in \mathcal{Y}$. The energy function is denoted by $E(y, x)$, which measures the score of the
prediction y given the input image x. We consider the following form of conditional likelihood:
$$P(y|x) = \frac{1}{Z(x)} \exp[-E(y,x)] = \frac{\exp[-E(y,x)]}{\sum_{y'} \exp[-E(y',x)]}. \quad (1)$$
Here Z is the partition function. The CRF model is decomposed by a factor graph over a set of
factors $\mathcal{F}$. Generally, the energy function is written as a sum of potential functions (factor functions):
$$E(y,x) = \sum_{F \in \mathcal{F}} E_F(y_F, x_F). \quad (2)$$
Here $F$ indexes one factor in the factor graph; $y_F$ denotes the variable nodes which are connected
to the factor $F$; $E_F$ is the (log-) potential function (factor function). The potential function can be
a unary, pairwise, or high-order potential function. The recent method in [3] describes examples of
constructing general CNN based unary and pairwise potentials.
Take semantic image segmentation as an example. To predict the pixel labels of a test image, we can
find the mode of the joint label distribution by solving the maximum a posteriori (MAP) inference
problem: $y^* = \operatorname{argmax}_y P(y|x)$. We can also obtain the final prediction by calculating the label
marginal distribution of each variable, which requires to solve a marginal inference problem:
$$\forall p \in \mathcal{N}: \quad P(y_p|x) = \sum_{y \setminus y_p} P(y|x). \quad (3)$$
Here $y \setminus y_p$ indicates the output variables $y$ excluding $y_p$. For a general CRF graph with cycles,
the above inference problems is known to be NP-hard, thus approximate inference algorithms are
applied. Message passing is a type of widely applied algorithms for approximate inference: loopy
belief propagation (BP) [13], tree-reweighted message passing [14] and mean-field approximation
[13] are examples of the message passing methods.
CRF-CNN joint learning aims to learn CNN potential functions by optimizing the CRF objective,
typically, the negative conditional log-likelihood, which is:
$$-\log P(y|x;\theta) = E(y,x;\theta) + \log Z(x;\theta). \quad (4)$$
The energy function $E(y, x)$ is constructed by CNNs, for which all the network parameters are
denoted by $\theta$. Adding regularization, minimizing negative log-likelihood for CRF learning is:
$$\min_\theta \ \frac{\lambda}{2}\|\theta\|_2^2 + \sum_{i=1}^{N} \left[ E(y^{(i)}, x^{(i)};\theta) + \log Z(x^{(i)};\theta) \right]. \quad (5)$$
Here $x^{(i)}, y^{(i)}$ denote the i-th training image and its segmentation mask; N is the number of training
images; $\lambda$ is the weight decay parameter. We can apply stochastic gradient descent (SGD) to optimize the above problem for learning $\theta$. The energy function $E(y, x; \theta)$ is constructed from CNNs,
and its gradient $\nabla_\theta E(y, x; \theta)$ can be easily computed by applying the chain rule as in conventional
CNNs. However, the partition function Z brings difficulties for optimization. Its gradient is:
?? log Z(x; ?) =
X
exp [?E(y, x; ?)]
?? [?E(y, x; ?)]
0
y 0 exp [?E(y , x; ?)]
P
y
= ? Ey?P (y|x;?) ?? E(y, x; ?).
(6)
Direct calculation of the above gradient is computationally infeasible for general CRF graphs. Usually it is necessary to perform approximate marginal inference to calculate the gradients at each SGD
iteration [13]. However, repeated marginal inference can be extremely expensive, as discussed in
[3]. CNN training usually requires a huge number of SGD iterations (hundreds of thousands, or even
millions), hence this inference based learning approach is in general not scalable or even infeasible.
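To make this cost concrete, the toy sketch below (our own illustration, not the paper's code) evaluates the gradient in (6) exactly by enumerating all $K^n$ labelings of a small unary-only model; for a general CRF this enumeration must be replaced by (approximate) marginal inference at every SGD iteration:

import itertools
import numpy as np

K, n = 3, 4  # K classes, n variables: exact enumeration needs K**n = 81 terms

def energy(y, theta):
    # Toy unary-only energy; theta[p, k] stands in for a CNN potential.
    return sum(theta[p, yp] for p, yp in enumerate(y))

def grad_log_Z(theta):
    # Eq. (6): grad_theta log Z = -E_{y ~ P(y|x)}[grad_theta E(y, x)].
    # Here grad_theta E(y)[p, k] = 1{y_p = k}, so the expectation reduces
    # to the variable marginals P(y_p = k | x).
    ys = list(itertools.product(range(K), repeat=n))
    s = np.array([-energy(y, theta) for y in ys])
    w = np.exp(s - s.max())
    w /= w.sum()
    marginals = np.zeros((n, K))
    for wi, y in zip(w, ys):
        for p, yp in enumerate(y):
            marginals[p, yp] += wi
    return -marginals

theta = np.random.randn(n, K)
print(grad_log_Z(theta))  # exact, but the cost grows as K**n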
3 Learning CNN message estimators
In conventional CRF approaches, the potential functions are first learned, and then inference is
performed based on the learned potential functions to generate the final prediction. In contrast, our
approach directly optimizes the inference procedure for final prediction. We propose to learn CNN
estimators to directly output the required intermediate values in an inference algorithm.
Here we focus on the message passing based inference algorithm which has been extensively studied
and widely applied. In the CRF prediction procedure, the 'message' vectors are recursively calculated based on the learned potentials. We propose to construct and learn CNNs to directly estimate
these messages in the message passing procedure, rather than learning the potential functions. In
particular, we directly learn factor-to-variable message estimators. Our message learning framework
is general and can accommodate all message passing based algorithms such as loopy belief propagation (BP) [13], mean-field approximation [13] and their variants. Here we discuss using loopy BP
for calculating variable marginals. As shown by Yedidia et al. [15], loopy BP has a close relation
with Bethe free energy approximation.
Typically, the message is a K-dimensional vector (K is the number of classes) which encodes the
information of the label distribution. For each variable-factor connection, we need to recursively
compute the variable-to-factor message $\beta_{p\to F} \in \mathbb{R}^K$, and the factor-to-variable message $\beta_{F\to p} \in \mathbb{R}^K$. The unnormalized variable-to-factor message is computed as:
$$\bar{\beta}_{p\to F}(y_p) = \sum_{F' \in F_p \setminus F} \beta_{F'\to p}(y_p). \quad (7)$$
Here $F_p$ is the set of factors connected to the variable $p$; $F_p \setminus F$ is the set of factors $F_p$ excluding the
factor $F$. For loopy graphs, the variable-to-factor message is normalized at each iteration:
$$\beta_{p\to F}(y_p) = \log \frac{\exp \bar{\beta}_{p\to F}(y_p)}{\sum_{y_p'} \exp \bar{\beta}_{p\to F}(y_p')}. \quad (8)$$
The factor-to-variable message is computed as:
$$\beta_{F\to p}(y_p) = \log \sum_{y_F' \setminus y_p',\; y_p'=y_p} \exp\Big[ -E_F(y_F') + \sum_{q \in N_F \setminus p} \beta_{q\to F}(y_q') \Big]. \quad (9)$$
Here $N_F$ is the set of variables connected to the factor $F$; $N_F \setminus p$ is the set of variables $N_F$ excluding
the variable $p$. Once we get all the factor-to-variable messages of one variable node, we are able to
calculate the marginal distribution (beliefs) of that variable:
$$P(y_p|x) = \sum_{y\setminus y_p} P(y|x) = \frac{1}{Z_p} \exp\Big[ \sum_{F\in F_p} \beta_{F\to p}(y_p) \Big], \quad (10)$$
in which $Z_p$ is a normalizer: $Z_p = \sum_{y_p} \exp[\sum_{F\in F_p} \beta_{F\to p}(y_p)]$.
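To make the recursion in (7)-(10) concrete, here is a minimal NumPy sketch for the special case of pairwise factors, with synchronous updates in log space; the function and variable names are our own, and this is an illustration rather than the paper's implementation:

import numpy as np
from scipy.special import logsumexp

def loopy_bp(unary, pairwise, n_iter=1):
    # unary:    (n, K) log-potentials -E_p(y_p); each acts as a fixed
    #           message from a unary factor to its variable.
    # pairwise: dict {(p, q): (K, K) log-potentials -E_F(y_p, y_q)}.
    # Returns the approximate marginals P(y_p | x) of Eq. (10).
    n, K = unary.shape
    edges = list(pairwise)
    beta = {(e, v): np.zeros(K) for e in edges for v in e}  # beta_{F->v}

    for _ in range(n_iter):
        new_beta = {}
        for e in edges:
            p, q = e
            for src, dst in ((q, p), (p, q)):
                # Eqs. (7)-(8): normalized variable-to-factor message
                mu = unary[src] + sum((beta[(f, src)] for f in edges
                                       if src in f and f != e), np.zeros(K))
                mu -= logsumexp(mu)
                # Eq. (9): sum out y_src against the pairwise potential
                pot = pairwise[e] if dst == p else pairwise[e].T
                new_beta[(e, dst)] = logsumexp(pot + mu[None, :], axis=1)
        beta = new_beta

    # Eq. (10): beliefs from all incoming factor-to-variable messages
    logb = unary + np.stack([sum((beta[(f, v)] for f in edges if v in f),
                                 np.zeros(K)) for v in range(n)])
    return np.exp(logb - logsumexp(logb, axis=1, keepdims=True))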
3.1 CNN message estimators
The calculation of the factor-to-variable message $\beta_{F\to p}$ depends on the variable-to-factor messages
$\beta_{q\to F}$. Substituting the definition of $\beta_{q\to F}$ in (8), $\beta_{F\to p}$ can be re-written as:
$$\beta_{F\to p}(y_p) = \log \sum_{y_F' \setminus y_p',\; y_p'=y_p} \exp\Big[ -E_F(y_F') + \sum_{q \in N_F \setminus p} \log \frac{\exp \bar{\beta}_{q\to F}(y_q')}{\sum_{y_q''} \exp \bar{\beta}_{q\to F}(y_q'')} \Big]$$
$$= \log \sum_{y_F' \setminus y_p',\; y_p'=y_p} \exp\Big[ -E_F(y_F') + \sum_{q \in N_F \setminus p} \log \frac{\exp \sum_{F' \in F_q \setminus F} \beta_{F'\to q}(y_q')}{\sum_{y_q''} \exp \sum_{F' \in F_q \setminus F} \beta_{F'\to q}(y_q'')} \Big] \quad (11)$$
Here $q$ denotes a variable node which is connected to the node $p$ by the factor $F$ in the factor
graph. We refer to the variable node $q$ as a neighboring node of $p$. $N_F \setminus p$ is the set of variables
connected to the factor $F$ excluding the node $p$. Clearly, for a pairwise factor which only connects
two variables, the set $N_F \setminus p$ contains only one variable node. The above equations show that
the factor-to-variable message $\beta_{F\to p}$ depends on the potential $E_F$ and $\beta_{F'\to q}$. Here $\beta_{F'\to q}$ is the
factor-to-variable message which is calculated from a neighboring node $q$ and a factor $F' \neq F$.
Conventional CRF learning approaches learn the potential function then follow the above equations
to compute the messages for calculating marginals. As discussed in [8], given that the goal is to
estimate the marginals, it is not necessary to exactly follow the above equations, which involve
learning potential functions, to calculate messages. We can directly learn message estimators, rather
than indirectly learning the potential functions as in conventional methods.
Consider the calculation in (11). The message $\beta_{F\to p}$ depends on the observation $x_{pF}$ and the
messages $\beta_{F'\to q}$. Here $x_{pF}$ denotes the observations that correspond to the node $p$ and the factor
$F$. We are able to formulate a factor-to-variable message estimator which takes $x_{pF}$ and $\beta_{F'\to q}$ as
inputs and outputs the message vector, and we directly learn such estimators. Since one message
$\beta_{F\to p}$ depends on a number of previous messages $\beta_{F'\to q}$, we can formulate a sequence of message
estimators to model the dependence. Thus the output from a previous message estimator will be the
input of the following message estimator.
There are two message passing strategies for loopy BP: synchronous and asynchronous passing.
We here focus on the synchronous message passing, for which all messages are computed before
passing them to the neighbors. The synchronous passing strategy results in much simpler message
dependences than the asynchronous strategy, which simplifies the training procedure. We define one
inference iteration as one pass of the graph with the synchronous passing strategy.
We propose to learn CNN based factor-to-variable message estimator. The message estimator models the interaction between neighboring variable nodes. We denote by M a message estimator. The
factor-to-variable message is calculated as:
$$\beta_{F\to p}(y_p) = M_F(x_{pF}, d_{pF}, y_p). \quad (12)$$
We refer to $d_{pF}$ as the dependent message feature vector, which encodes all dependent messages
from the neighboring nodes that are connected to the node $p$ by $F$. Note that the dependent messages
are the output of message estimators at the previous inference iteration. In the case of running only
one message passing iteration, there are no dependent messages for $M_F$, and thus we do not need
to incorporate $d_{pF}$. To have a general exposition, we here describe the case of running arbitrarily
many inference iterations.
We can choose any effective strategy to generate the feature vector $d_{pF}$ from the dependent messages. Here we discuss a simple example. According to (11), we define the feature vector $d_{pF}$ as a
K-dimensional vector which aggregates all dependent messages. In this case, $d_{pF}$ is computed as:
$$d_{pF}(y) = \sum_{q \in N_F \setminus p} \log \frac{\exp \sum_{F' \in F_q \setminus F} M_{F'}(x_{qF'}, d_{qF'}, y)}{\sum_{y'} \exp \sum_{F' \in F_q \setminus F} M_{F'}(x_{qF'}, d_{qF'}, y')}. \quad (13)$$
With the definition of $d_{pF}$ in (13) and $\beta_{F\to p}$ in (12), it is clear that the message estimation requires evaluating a sequence of message estimators. Another example is to concatenate all
dependent messages to construct the feature vector $d_{pF}$.
There are different strategies to formulate the message estimators in different iterations. One strategy
is using the same message estimator across all inference iterations. In this case the message estimator
becomes a recursive function, and thus the CNN based estimator becomes a recurrent neural network
(RNN). Another strategy is to formulate a different estimator for each inference iteration.
3.2 Details for message estimator networks
We formulate the estimator $M_F$ as a CNN; thus the estimation is the network output:
$$\beta_{F\to p}(y_p) = M_F(x_{pF}, d_{pF}, y_p; \theta_F) = \sum_{k=1}^{K} \delta(k = y_p)\, z_{pF,k}(x, d_{pF}; \theta_F). \quad (14)$$
Here $\theta_F$ denotes the network parameters which we need to learn. $\delta(\cdot)$ is the indicator function, which
equals 1 if the input is true and 0 otherwise. We denote by $z_{pF} \in \mathbb{R}^K$ the K-dimensional output
vector (K is the number of classes) of the message estimator network for the node $p$ and the factor
$F$; $z_{pF,k}$ is the k-th value in the network output $z_{pF}$, corresponding to the k-th class.
We can consider any possible strategies for implementing $z_{pF}$ with CNNs. For example, we here
describe a strategy which is analogous to the network design in [3]. We denote by $C^{(1)}$ a fully
convolutional network (FCNN) [16] for convolutional feature generation, and by $C^{(2)}$ a traditional
fully connected network for message estimation.
Given an input image $x$, the network output $C^{(1)}(x) \in \mathbb{R}^{N_1 \times N_2 \times r}$ is a convolutional feature map,
in which $N_1 \times N_2 = N$ is the feature map size and $r$ is the dimension of one feature vector. Each
spatial position (each feature vector) in the feature map $C^{(1)}(x)$ corresponds to one variable node
in the CRF graph. We denote by $C^{(1)}(x, p) \in \mathbb{R}^r$ the feature vector corresponding to the variable
node $p$. Likewise, $C^{(1)}(x, N_F \setminus p) \in \mathbb{R}^r$ is the averaged vector of the feature vectors that correspond
to the set of nodes $N_F \setminus p$. Recall that $N_F \setminus p$ is the set of nodes connected by the factor $F$ excluding
the node $p$. For pairwise factors, $N_F \setminus p$ contains only one node.
We construct the feature vector $z^{C^{(1)}}_{pF} \in \mathbb{R}^{2r}$ for the node-factor pair $(p, F)$ by concatenating
$C^{(1)}(x, p)$ and $C^{(1)}(x, N_F \setminus p)$. Finally, we concatenate the node-factor feature vector $z^{C^{(1)}}_{pF}$ and
the dependent message feature vector $d_{pF}$ as the input for the second network $C^{(2)}$. Thus the input
dimension for $C^{(2)}$ is $(2r + K)$. For running only one inference iteration, the input for $C^{(2)}$ is $z^{C^{(1)}}_{pF}$
alone. The final output from the second network $C^{(2)}$ is the K-dimensional message vector $z_{pF}$.
To sum up, we generate the final message vector $z_{pF}$ as:
$$z_{pF} = C^{(2)}\big\{ \big[ C^{(1)}(x, p)^\top;\; C^{(1)}(x, N_F \setminus p)^\top;\; d_{pF}^\top \big]^\top \big\}. \quad (15)$$
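As an illustration of Eq. (15), a possible PyTorch sketch of the second-stage network $C^{(2)}$; the class name, layer sizes, and hidden width are our own choices, not the paper's exact architecture (which uses two fully connected layers on top of a multi-scale FCNN):

import torch
import torch.nn as nn

class MessageEstimatorHead(nn.Module):
    # Maps the concatenated node-factor features and dependent messages to a
    # K-dimensional factor-to-variable message, as in Eq. (15).
    def __init__(self, r, K, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * r + K, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, K))

    def forward(self, feat_p, feat_nbr, d_pF):
        # feat_p, feat_nbr: (B, r) features C1(x, p) and C1(x, N_F \ p)
        # d_pF:             (B, K) dependent messages (zeros when running a
        #                   single inference iteration)
        z = torch.cat([feat_p, feat_nbr, d_pF], dim=1)
        return self.net(z)  # (B, K) message vector z_pF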
For a general CNN based potential function in conventional CRFs, the potential network is usually
required to have a large number of output units (exponential in the order of the potentials). For
example, it requires $K^2$ (K is the number of classes) outputs for the pairwise potentials [3]. A large
number of output units would significantly increase the number of network parameters. It leads to
expensive computations and tends to over-fit the training data. In contrast, for learning our CNN
message estimator, we only need to formulate K output units for the network. Clearly it is more
scalable in the cases of a large number of classes.
3.3 Training CNN message estimators
Our goal is to estimate the variable marginals in (3), which can be re-written with the estimators:
$$P(y_p|x) = \sum_{y\setminus y_p} P(y|x) = \frac{1}{Z_p} \exp\Big[ \sum_{F\in F_p} \beta_{F\to p}(y_p) \Big] = \frac{1}{Z_p} \exp\Big[ \sum_{F\in F_p} M_F(x_{pF}, d_{pF}, y_p; \theta_F) \Big].$$
Here $Z_p$ is the normalizer. The ideal variable marginal, for example, has the probability of 1 for the
ground truth class and 0 for the remaining classes. Here we consider the cross entropy loss between
the ideal marginal and the estimated marginal.
$$J(x, \bar{y}; \theta) = -\sum_{p \in \mathcal{N}} \sum_{y_p=1}^{K} \delta(y_p = \bar{y}_p) \log P(y_p|x;\theta)$$
$$= -\sum_{p \in \mathcal{N}} \sum_{y_p=1}^{K} \delta(y_p = \bar{y}_p) \log \frac{\exp \sum_{F\in F_p} M_F(x_{pF}, d_{pF}, y_p; \theta_F)}{\sum_{y_p'} \exp \sum_{F\in F_p} M_F(x_{pF}, d_{pF}, y_p'; \theta_F)}, \quad (16)$$
in which $\bar{y}_p$ is the ground truth label for the variable node $p$. Given a set of N training images and
label masks, the optimization problem for learning the message estimator network is:
$$\min_\theta \ \frac{\lambda}{2}\|\theta\|_2^2 + \sum_{i=1}^{N} J(x^{(i)}, \bar{y}^{(i)}; \theta). \quad (17)$$
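The following sketch shows one way (16)-(17) could be implemented in PyTorch (our own illustrative code, not the authors'): the incoming factor-to-variable messages of each node are summed and passed to a standard softmax cross entropy, while the weight-decay term $\frac{\lambda}{2}\|\theta\|_2^2$ is typically handled by the optimizer:

import torch
import torch.nn.functional as F

def message_learning_loss(incoming_messages, labels):
    # incoming_messages: list of (B, K) estimator outputs, one per factor
    #                    incident on each node (e.g. one unary and several
    #                    pairwise estimators); labels: (B,) ground truth.
    # Eq. (16): cross entropy between the ideal marginal and the softmax of
    # the summed factor-to-variable messages.
    logits = torch.stack(incoming_messages, dim=0).sum(dim=0)
    return F.cross_entropy(logits, labels)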
The work in [8] proposed to learn the variable-to-factor message ($\beta_{p\to F}$). Unlike their approach, we
aim to learn the factor-to-variable message ($\beta_{F\to p}$), for which we are able to naturally formulate the
variable marginals, which is the ultimate goal for prediction, as the training objective. Moreover, for
learning $\beta_{p\to F}$ in their approach, the message estimator will depend on all neighboring nodes (connected by any factors). Given that variable nodes will have different numbers of neighboring nodes,
they only consider a fixed number of neighboring nodes (e.g., 20) and concatenate their features to
generate a fixed-length feature vector for classification. In our case for learning $\beta_{F\to p}$, the message
estimator only depends on a fixed number of neighboring nodes (connected by one factor), thus we
do not have this problem. Most importantly, they learn message estimators by training traditional
probabilistic classifiers (e.g., simple logistic regressors) with hand-crafted features, and in contrast, we
train deep CNNs in an end-to-end learning style without using hand-crafted features.
3.4 Message learning with inference-time budgets
One advantage of message learning is that we are able to explicitly incorporate the expected number
of inference iterations into the learning procedure. The number of inference iterations defines the
learning sequence of message estimators. This is particularly useful if we aim to learn the estimators
which are capable of high-quality predictions within only a few inference iterations. In contrast,
Table 1: Segmentation results on the PASCAL VOC 2012 'val' set. We compare with several recent CNN
based methods with available results on the 'val' set. Our method performs the best.

method            training set       # train (approx.)   IoU val set
ContextDCRF [3]   VOC extra          10k                 70.3
Zoom-out [17]     VOC extra          10k                 63.5
Deep-struct [2]   VOC extra          10k                 64.1
DeepLab-CRF [9]   VOC extra          10k                 63.7
DeepLab-MCL [9]   VOC extra          10k                 68.7
BoxSup [18]       VOC extra          10k                 63.8
BoxSup [18]       VOC extra + COCO   133k                68.1
ours              VOC extra          10k                 71.1
ours+             VOC extra          10k                 73.3
conventional potential function learning in CRFs is not able to directly incorporate the expected
number of inference iterations.
We are particularly interested in learning message estimators for use with only one message passing
iteration, because of the speed of such inference. In this case it might be preferable to have large-range neighborhood connections, so that large range interaction can be captured within one inference
pass.
4 Experiments
We evaluate the proposed CNN message learning method for semantic image segmentation. We
use the publicly available PASCAL VOC 2012 dataset [19]. There are 20 object categories and one
background category in the dataset. It contains 1464 images in the training set, 1449 images in the
'val' set and 1456 images in the test set. Following the common practice in [20, 9], the training
set is augmented to 10582 images by including the extra annotations provided in [21] for the VOC
images. We use intersection-over-union (IoU) score [19] to evaluate the segmentation performance.
For the learning and prediction of our method, we only use one message passing iteration.
The recent work in [3] (referred to as ContextDCRF) learns multi-scale fully convolutional CNNs
(FCNNs) for unary and pairwise potential functions to capture contextual information. We follow
this CRF learning method and replace the potential functions by the proposed message estimators.
We consider 2 types of spatial relations for constructing the pairwise connections of variable nodes.
One is the 'surrounding' spatial relation, for which one node is connected to its surrounding nodes. The
other one is the 'above/below' spatial relation, for which one node is connected to the nodes that lie
above. For the pairwise connections, the neighborhood size is defined by a range box. We learn one
type of unary message estimator and 3 types of pairwise message estimators in total. One type of
pairwise message estimator is for the 'surrounding' spatial relations, and the other two are for the
'above/below' spatial relations. We formulate one network for one type of message estimator.
We formulate our message estimators as multi-scale FCNNs, for which we apply a similar network
configuration as in [3]. The network $C^{(1)}$ (see Sec. 3.2 for details) has 6 convolution blocks and $C^{(2)}$
has 2 fully connected layers (with K output units). Our networks are initialized using the VGG-16
model [22]. We train all layers using back-propagation. Our system is built on MatConvNet [23].
We first evaluate our method on the VOC 2012 'val' set. We compare with several recent CNN
based methods with available results on the 'val' set. Results are shown in Table 1. Our method
achieves the best performance. The comparing method ContextDCRF follows a conventional CRF
learning and prediction scheme: they first learn potentials and then perform inference based on
the learned potentials to output final predictions. The result shows that learning the CNN message
estimators is able to achieve similar performance compared to learning CNN potential functions in
CRFs. Note that since here we only use one message passing iteration for the training and prediction,
the inference is particularly efficient.
To further improve the performance, we perform simple data augmentation in training. We generate
extra 4 scales ([0.8, 0.9, 1.1, 1.2]) of the training images and their flipped images for training. This
result is denoted by 'ours+' in the result table.
Table 2: Category results on the PASCAL VOC 2012 test set. Our method performs the best.

method           aero  bike  bird  boat  bottle bus   car   cat   chair cow   table dog   horse mbike person potted sheep sofa  train tv    mean
DeepLab-CRF [9]  78.4  33.1  78.2  55.6  65.3   81.3  75.5  78.6  25.3  69.2  52.7  75.2  69.0  79.1  77.6   54.7   78.3  45.1  73.3  56.2  66.4
DeepLab-MCL [9]  84.4  54.5  81.5  63.6  65.9   85.1  79.1  83.4  30.7  74.1  59.8  79.0  76.1  83.2  80.8   59.7   82.2  50.4  73.1  63.7  71.6
FCN-8s [16]      76.8  34.2  68.9  49.4  60.3   75.3  74.7  77.6  21.4  62.5  46.8  71.8  63.9  76.5  73.9   45.2   72.4  37.4  70.9  55.1  62.2
CRF-RNN [1]      87.5  39.0  79.7  64.2  68.3   87.6  80.8  84.4  30.4  78.2  60.4  80.5  77.8  83.1  80.6   59.5   82.8  47.8  78.3  67.1  72.0
ours             90.1  38.6  77.8  61.3  74.3   89.0  83.4  83.3  36.2  80.2  56.4  81.2  81.4  83.1  82.9   59.2   83.4  54.3  80.6  70.8  73.4
Table 3: Segmentation results on the PASCAL VOC 2012 test set. Compared to methods that use the same
augmented VOC dataset, our method has the best performance.

method               training set       # train (approx.)   IoU test set
ContextDCRF [3]      VOC extra          10k                 70.7
Zoom-out [17]        VOC extra          10k                 64.4
FCN-8s [16]          VOC extra          10k                 62.2
SDS [20]             VOC extra          10k                 51.6
DeconvNet-CRF [24]   VOC extra          10k                 72.5
DeepLab-CRF [9]      VOC extra          10k                 66.4
DeepLab-MCL [9]      VOC extra          10k                 71.6
CRF-RNN [1]          VOC extra          10k                 72.0
DeepLab-CRF [25]     VOC extra + COCO   133k                70.4
DeepLab-MCL [25]     VOC extra + COCO   133k                72.7
BoxSup (semi) [18]   VOC extra + COCO   133k                71.0
CRF-RNN [1]          VOC extra + COCO   133k                74.7
ours                 VOC extra          10k                 73.4
We further evaluate our method on the VOC 2012 test set. We compare with recent state-of-the-art
CNN methods with competitive performance. The results are described in Table 3. Since the ground
truth labels are not available for the test set, we evaluate our method through the VOC evaluation
server. We achieve very competitive performance on the test set: 73.4 IoU score^1, which is to date
the best performance amongst methods that use the same augmented VOC training dataset [21]
(marked as 'VOC extra' in the table). These results validate the effectiveness of direct message
learning with CNNs. We also include a comparison with methods which are trained on the much
larger COCO dataset (around 133K training images). Our performance is comparable with these
methods, even though we make use of many fewer training images.
The results for each category are shown in Table 2. We compare with several recent methods which
transfer layers from the same VGG-16 model and use the same training data. Our method performs
the best for 13 out of 20 categories.
5 Conclusion
We have proposed a new deep message learning framework for structured CRF prediction. Learning
deep message estimators for the message passing inference reveals a new direction for learning deep
structured model. Learning CNN message estimators is efficient, which does not involve expensive
inference steps for gradient calculation. The network output dimension for message estimation is
the same as the number of classes, which does not increase with the order of the potentials, and thus
CNN message learning has fewer network parameters and is more scalable in the number of classes
compared to conventional potential function learning. Our impressive performance for semantic
segmentation demonstrates the effectiveness and usefulness of the proposed deep message learning.
Our framework is general and can be readily applied to other structured prediction applications.
Acknowledgements This research was supported by the Data to Decisions Cooperative Research
Centre and by the Australian Research Council through the ARC Centre for Robotic Vision
CE140100016 and through a Laureate Fellowship FL130100102 to I. Reid. Correspondence should
be addressed to C. Shen.
^1 The result link provided by the VOC evaluation server: http://host.robots.ox.ac.uk:8080/anonymous/DBD0SI.html
References
[1] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr, "Conditional random fields as recurrent neural networks," 2015. [Online]. Available: http://arxiv.org/abs/1502.03240
[2] A. Schwing and R. Urtasun, "Fully connected deep structured networks," 2015. [Online]. Available: http://arxiv.org/abs/1503.02351
[3] G. Lin, C. Shen, I. Reid, and A. van den Hengel, "Efficient piecewise training of deep structured models for semantic segmentation," 2015. [Online]. Available: http://arxiv.org/abs/1504.01013
[4] F. Liu, C. Shen, and G. Lin, "Deep convolutional neural fields for depth estimation from a single image," in Proc. IEEE Conf. Comp. Vis. Pattern Recogn., 2015.
[5] L. Chen, A. Schwing, A. Yuille, and R. Urtasun, "Learning deep structured models," 2014. [Online]. Available: http://arxiv.org/abs/1407.2538
[6] J. Besag, "Efficiency of pseudolikelihood estimation for simple Gaussian fields," Biometrika, 1977.
[7] C. Sutton and A. McCallum, "Piecewise training for undirected models," in Proc. Conf. Uncertainty Artificial Intelli, 2005.
[8] S. Ross, D. Munoz, M. Hebert, and J. Bagnell, "Learning message-passing inference machines for structured prediction," in Proc. IEEE Conf. Comp. Vis. Pattern Recogn., 2011.
[9] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. Yuille, "Semantic image segmentation with deep convolutional nets and fully connected CRFs," 2014. [Online]. Available: http://arxiv.org/abs/1412.7062
[10] P. Krähenbühl and V. Koltun, "Efficient inference in fully connected CRFs with Gaussian edge potentials," in Proc. Adv. Neural Info. Process. Syst., 2012.
[11] F. Liu, C. Shen, G. Lin, and I. Reid, "Learning depth from single monocular images using deep convolutional neural fields," 2015. [Online]. Available: http://arxiv.org/abs/1502.07411
[12] J. Tompson, A. Jain, Y. LeCun, and C. Bregler, "Joint training of a convolutional network and a graphical model for human pose estimation," in Proc. Adv. Neural Info. Process. Syst., 2014.
[13] S. Nowozin and C. Lampert, "Structured learning and prediction in computer vision," Found. Trends. Comput. Graph. Vis., 2011.
[14] V. Kolmogorov, "Convergent tree-reweighted message passing for energy minimization," IEEE T. Pattern Analysis & Machine Intelligence, 2006.
[15] J. S. Yedidia, W. T. Freeman, Y. Weiss et al., "Generalized belief propagation," in Proc. Adv. Neural Info. Process. Syst., 2000.
[16] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conf. Comp. Vis. Pattern Recogn., 2015.
[17] M. Mostajabi, P. Yadollahpour, and G. Shakhnarovich, "Feedforward semantic segmentation with zoom-out features," 2014. [Online]. Available: http://arxiv.org/abs/1412.0774
[18] J. Dai, K. He, and J. Sun, "BoxSup: exploiting bounding boxes to supervise convolutional networks for semantic segmentation," 2015. [Online]. Available: http://arxiv.org/abs/1503.01640
[19] M. Everingham, L. V. Gool, C. Williams, J. Winn, and A. Zisserman, "The pascal visual object classes (VOC) challenge," Int. J. Comp. Vis., 2010.
[20] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik, "Simultaneous detection and segmentation," in Proc. European Conf. Computer Vision, 2014.
[21] B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik, "Semantic contours from inverse detectors," in Proc. Int. Conf. Comp. Vis., 2011.
[22] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014. [Online]. Available: http://arxiv.org/abs/1409.1556
[23] A. Vedaldi and K. Lenc, "MatConvNet: convolutional neural networks for matlab," in Proceeding of the ACM Int. Conf. on Multimedia, 2015.
[24] H. Noh, S. Hong, and B. Han, "Learning deconvolution network for semantic segmentation," in Proc. IEEE Conf. Comp. Vis. Pattern Recogn., 2015.
[25] G. Papandreou, L. Chen, K. Murphy, and A. Yuille, "Weakly- and semi-supervised learning of a DCNN for semantic image segmentation," 2015. [Online]. Available: http://arxiv.org/abs/1502.02734
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Yu-Ying Liu, Shuang Li, Fuxin Li, Le Song, and James M. Rehg
College of Computing
Georgia Institute of Technology
Atlanta, GA
Abstract
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter
learning algorithm for CT-HMM restricts its use to very small models or requires
unrealistic constraints on the state transitions. In this paper, we present the first
complete characterization of efficient EM-based learning methods for CT-HMM
models. We demonstrate that the learning problem consists of two challenges: the
estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation
problem in terms of an equivalent discrete time-inhomogeneous hidden Markov
model. The second challenge is addressed by adapting three approaches from the
continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict
disease progression using a glaucoma dataset and an Alzheimer's disease dataset.
1 Introduction
The goal of disease progression modeling is to learn a model for the temporal evolution of a disease
from sequences of clinical measurements obtained from a longitudinal sample of patients. By distilling population data into a compact representation, disease progression models can yield insights
into the disease process through the visualization and analysis of disease trajectories. In addition,
the models can be used to predict the future course of disease in an individual, supporting the development of individualized treatment schedules and improved treatment efficiencies. Furthermore,
progression models can support phenotyping by providing a natural similarity measure between
trajectories which can be used to group patients based on their progression.
Hidden variable models are particularly attractive for modeling disease progression for three reasons: 1) they support the abstraction of a disease state via the latent variables; 2) they can deal with
noisy measurements effectively; and 3) they can easily incorporate dynamical priors and constraints.
While conventional hidden Markov models (HMMs) have been used to model disease progression,
they are not suitable in general because they assume that measurement data is sampled regularly
at discrete intervals. However, in reality patient visits are irregular in time, as a consequence of
scheduling issues, missed visits, and changes in symptomatology.
A Continuous-Time HMM (CT-HMM) is an HMM in which both the transitions between hidden
states and the arrival of observations can occur at arbitrary (continuous) times [1, 2]. It is therefore
suitable for irregularly-sampled temporal data such as clinical measurements [3, 4, 5]. Unfortunately, the additional modeling flexibility provided by CT-HMM comes at the cost of a more complex inference procedure. In CT-HMM, not only are the hidden states unobserved, but the transition
times at which the hidden states are changing are also unobserved. Moreover, multiple unobserved
hidden state transitions can occur between two successive observations. A previous method addressed these challenges by directly maximizing the data likelihood [2], but this approach is limited
to very small model sizes. A general EM framework for continuous-time dynamic Bayesian networks, of which CT-HMM is a special case, was introduced in [6], but that work did not address the
question of efficient learning. Consequently, there is a need for efficient CT-HMM learning methods
that can scale to large state spaces (e.g. hundreds of states or more) [7].
A key aspect of our approach is to leverage the existing literature for continuous time Markov chain
(CTMC) models [8, 9, 10]. These models assume that states are directly observable, but retain
the irregular distribution of state transition times. EM approaches to CTMC learning compute the
expected state durations and transition counts conditioned on each pair of successive observations.
The key computation is the evaluation of integrals of the matrix exponential (Eqs. 12 and 13). Prior
work by Wang et al. [5] used a closed-form estimator due to [8] which assumes that the transition rate matrix can be diagonalized through an eigendecomposition. Unfortunately, this is frequently not achievable in practice, limiting the usefulness of the approach. We explore two additional CTMC approaches [9] which use (1) an alternative matrix exponential on an auxiliary matrix (Expm method); and (2) a direct truncation of the infinite sum expansion of the exponential (Unif method). Neither of these approaches has been previously exploited for CT-HMM learning.
We present the first comprehensive framework for efficient EM-based parameter learning in CT-HMM, which both extends and unifies prior work on CTMC models. We show that a CT-HMM can be conceptualized as a time-inhomogeneous HMM which yields posterior state distributions at the
observation times, coupled with CTMCs that govern the distribution of hidden state transitions between observations (Eqs. 9 and 10). We explore both soft (forward-backward) and hard (Viterbi decoding) approaches to estimating the posterior state distributions, in combination with three methods
for calculating the conditional expectations. We validate these methods in simulation and evaluate
our approach on two real-world datasets for glaucoma and Alzheimer?s disease, including visualizations of the progression model and predictions of future progression. Our approach outperforms
a state-of-the-art method [11] for glaucoma prediction, which demonstrates the practical utility of
CT-HMM for clinical data modeling.
2
Continuous-Time Markov Chain
A continuous-time Markov chain (CTMC) is defined by a finite and discrete state space S, a state
transition rate matrix Q, and an initial state probability distribution π. The elements q_{ij} in Q describe the rate at which the process transitions from state i to j for i ≠ j, and the q_{ii} are specified such that each row of Q sums to zero (q_i = ∑_{j≠i} q_{ij}, q_{ii} = -q_i) [1]. In a time-homogeneous process, in which the q_{ij}
are independent of t, the sojourn time in each state i is exponentially distributed with parameter q_i, i.e., f(t) = q_i e^{-q_i t} with mean 1/q_i. The probability that the process's next move from state i is to state j is q_{ij}/q_i. When a realization of the CTMC is fully observed, meaning that one can observe every transition time (t'_0, t'_1, ..., t'_{V'}) and the corresponding states Y' = {y_0 = s(t'_0), ..., y_{V'} = s(t'_{V'})}, where s(t) denotes the state at time t, the complete likelihood (CL) of the data is
CL = ∏_{v'=0}^{V'-1} (q_{y_{v'}, y_{v'+1}} / q_{y_{v'}}) (q_{y_{v'}} e^{-q_{y_{v'}} τ_{v'}}) = ∏_{v'=0}^{V'-1} q_{y_{v'}, y_{v'+1}} e^{-q_{y_{v'}} τ_{v'}} = ∏_{i=1}^{|S|} ∏_{j=1, j≠i}^{|S|} q_{ij}^{n_{ij}} e^{-q_i τ_i}    (1)
where τ_{v'} = t'_{v'+1} - t'_{v'} is the time interval between two transitions, n_{ij} is the number of transitions from state i to j, and τ_i is the total amount of time the chain remains in state i.
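To make the generative process concrete, the following is a minimal sketch (ours, not part of the paper; the function name and interface are illustrative) of simulating a time-homogeneous CTMC trajectory from a rate matrix Q, using exactly the exponential sojourn times and jump probabilities q_{ij}/q_i described above.

```python
import numpy as np

def sample_ctmc_path(Q, pi0, T, rng=np.random.default_rng(0)):
    """Simulate one CTMC trajectory on [0, T].

    Q   : (S, S) rate matrix; rows sum to zero, off-diagonals >= 0.
    pi0 : (S,) initial state distribution.
    Returns the jump times t'_v and the visited states y_v.
    """
    S = Q.shape[0]
    times, states = [0.0], [int(rng.choice(S, p=pi0))]
    t = 0.0
    while True:
        i = states[-1]
        qi = -Q[i, i]                      # total leaving rate q_i of state i
        if qi <= 0:                        # absorbing state
            break
        t += rng.exponential(1.0 / qi)     # exponential sojourn, mean 1/q_i
        if t > T:
            break
        jump_p = Q[i].copy()
        jump_p[i] = 0.0
        jump_p /= qi                       # next-state probabilities q_ij / q_i
        states.append(int(rng.choice(S, p=jump_p)))
        times.append(t)
    return times, states
```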
In general, a realization of the CTMC is observed only at discrete and irregular time points
(t0 , t1 , ..., tV ), corresponding to a state sequence Y , which are distinct from the switching times.
As a result, the Markov process between two consecutive observations is hidden, with potentially
many unobserved state transitions. Thus both n_{ij} and τ_i are unobserved. In order to express the likelihood of the incomplete observations, we can utilize a discrete time hidden Markov model by defining a state transition probability matrix for each distinct time interval t, P(t) = e^{Qt}, where
Pij (t), the entry (i, j) in P (t), is the probability that the process is in state j after time t given that
it is in state i at time 0. This quantity takes into account all possible intermediate state transitions
and timing between i and j which are not observed. Then the likelihood of the data is
L = ∏_{v=0}^{V-1} P_{y_v, y_{v+1}}(Δ_v) = ∏_{v=0}^{V-1} ∏_{i,j=1}^{|S|} P_{ij}(Δ_v)^{I(y_v = i, y_{v+1} = j)} = ∏_{τ=1}^{r} ∏_{i,j=1}^{|S|} P_{ij}(t_τ)^{C(Δ_v = t_τ, y_v = i, y_{v+1} = j)}    (2)
where Δ_v = t_{v+1} - t_v is the time interval between two observations, I(y_v = i, y_{v+1} = j) is an indicator function that is 1 if the condition is true and 0 otherwise, t_τ, τ = 1, ..., r, denotes the r unique values among all time intervals Δ_v, and C(Δ_v = t_τ, y_v = i, y_{v+1} = j) is the total count
from all successive visits when the condition is true. Note that there is no analytic maximizer of L,
due to the structure of the matrix exponential, and direct numerical maximization with respect to Q
is computationally challenging. This motivates the use of an EM-based approach.
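As a concrete illustration of Eq. 2, the sketch below (ours, not the authors' code; names are illustrative) evaluates the observed-data log-likelihood of an irregularly sampled state sequence, caching P(t) = expm(Qt) for each distinct interval as suggested above.

```python
import numpy as np
from scipy.linalg import expm

def ctmc_observed_loglik(Q, obs_times, obs_states):
    """Log-likelihood of states observed at irregular times (Eq. 2).

    Caches P(t) = expm(Q t) once per distinct interval length.
    """
    dts = np.diff(obs_times)
    P_cache = {dt: expm(Q * dt) for dt in np.unique(dts)}
    ll = 0.0
    for v, dt in enumerate(dts):
        ll += np.log(P_cache[dt][obs_states[v], obs_states[v + 1]])
    return ll
```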
An EM algorithm for CTMC is described in [8]. Based on Eq. 1, the expected complete log-likelihood takes the form ∑_{i=1}^{|S|} ∑_{j=1, j≠i}^{|S|} { log(q_{ij}) E[n_{ij} | Y, Q̂_0] - q_i E[τ_i | Y, Q̂_0] }, where Q̂_0 is the current estimate for Q, and E[n_{ij} | Y, Q̂_0] and E[τ_i | Y, Q̂_0] are the expected state transition count and total duration given the incomplete observation Y and the current transition rate matrix Q̂_0, respectively. Once these two expectations are computed in the E-step, the updated Q̂ parameters can be obtained via the M-step as

q̂_{ij} = E[n_{ij} | Y, Q̂_0] / E[τ_i | Y, Q̂_0] for i ≠ j, and q̂_{ii} = -∑_{j≠i} q̂_{ij}.    (3)
Now the main computational challenge is to evaluate E[n_{ij} | Y, Q̂_0] and E[τ_i | Y, Q̂_0]. By exploiting the properties of the Markov process, the two expectations can be decomposed as [12]:

E[n_{ij} | Y, Q̂_0] = ∑_{v=0}^{V-1} E[n_{ij} | y_v, y_{v+1}, Q̂_0] = ∑_{v=0}^{V-1} ∑_{k,l=1}^{|S|} I(y_v = k, y_{v+1} = l) E[n_{ij} | y_v = k, y_{v+1} = l, Q̂_0]

E[τ_i | Y, Q̂_0] = ∑_{v=0}^{V-1} E[τ_i | y_v, y_{v+1}, Q̂_0] = ∑_{v=0}^{V-1} ∑_{k,l=1}^{|S|} I(y_v = k, y_{v+1} = l) E[τ_i | y_v = k, y_{v+1} = l, Q̂_0]
where I(y_v = k, y_{v+1} = l) = 1 if the condition is true and 0 otherwise. Thus, the computation reduces to computing the end-state conditioned expectations E[n_{ij} | y_v = k, y_{v+1} = l, Q̂_0] and E[τ_i | y_v = k, y_{v+1} = l, Q̂_0] for all k, l, i, j ∈ S. These expectations are also a key step in CT-HMM learning, and Section 4 presents our approach to computing them.
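For reference, a minimal sketch (ours, not the authors' implementation) of the CTMC M-step in Eq. 3, given arrays holding the two expected statistics:

```python
import numpy as np

def ctmc_m_step(E_n, E_tau):
    """M-step of Eq. 3.

    E_n   : (S, S) expected transition counts E[n_ij].
    E_tau : (S,) expected state durations E[tau_i].
    """
    Q = E_n / np.maximum(E_tau[:, None], 1e-12)  # q_ij = E[n_ij] / E[tau_i]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))          # q_ii = -sum_{j != i} q_ij
    return Q
```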
3 Continuous-Time Hidden Markov Model
In this section, we describe the continuous-time hidden Markov model (CT-HMM) for disease progression and the proposed framework for CT-HMM learning.
3.1 Model Description
In contrast to CTMC, where the states are directly observed, none of the states are directly observed
in CT-HMM. Instead, the available observational data o depends on the hidden states s via the
measurement model p(o|s). In contrast to a conventional HMM, the observations (o0 , o1 , . . . , oV )
are only available at irregularly-distributed continuous points in time (t0 , t1 , . . . , tV ). As a consequence, there are two levels of hidden information in a CT-HMM. First, at observation time, the
state of the Markov chain is hidden and can only be inferred from measurements. Second, the state
transitions in the Markov chain between two consecutive observations are also hidden. As a result, a
Markov chain may visit multiple hidden states before reaching a state that emits a noisy observation.
This additional complexity makes CT-HMM a more effective model for event data, in comparison
to HMM and CTMC. But as a consequence the parameter learning problem is more challenging.
We believe we are the first to present a comprehensive and systematic treatment of efficient EM
algorithms to address these challenges.
A fully observed CT-HMM contains four sequences of information: the underlying state transition times (t'_0, t'_1, ..., t'_{V'}), the corresponding states Y' = {y_0 = s(t'_0), ..., y_{V'} = s(t'_{V'})} of the hidden Markov chain, and the observed data O = (o_0, o_1, ..., o_V) at times T = (t_0, t_1, ..., t_V). Their joint complete likelihood can be written as

CL = ∏_{v'=0}^{V'-1} q_{y_{v'}, y_{v'+1}} e^{-q_{y_{v'}} τ_{v'}} ∏_{v=0}^{V} p(o_v | s(t_v)) = ∏_{i=1}^{|S|} ∏_{j=1, j≠i}^{|S|} q_{ij}^{n_{ij}} e^{-q_i τ_i} ∏_{v=0}^{V} p(o_v | s(t_v)).    (4)

We will focus our development on the estimation of the transition rate matrix Q. Estimates for the parameters of the emission model p(o|s) and the initial state distribution π can be obtained from the standard discrete time HMM formulation [13], but with time-inhomogeneous transition probabilities (described below).
3.2 Parameter Estimation

Given a current estimate of the parameter Q̂_0, the expected complete log-likelihood takes the form

L(Q) = ∑_{i=1}^{|S|} ∑_{j=1, j≠i}^{|S|} { log(q_{ij}) E[n_{ij} | O, T, Q̂_0] - q_i E[τ_i | O, T, Q̂_0] } + ∑_{v=0}^{V} E[ log p(o_v | s(t_v)) | O, T, Q̂_0 ].    (5)

In the M-step, taking the derivative of L with respect to q_{ij}, we have

q̂_{ij} = E[n_{ij} | O, T, Q̂_0] / E[τ_i | O, T, Q̂_0] for i ≠ j, and q̂_{ii} = -∑_{j≠i} q̂_{ij}.    (6)
The challenge lies in the E-step, where we compute the expectations of n_{ij} and τ_i conditioned on the observation sequence. The statistic for n_{ij} can be expressed in terms of the expectations between successive pairs of observations as follows:

E[n_{ij} | O, T, Q̂_0] = ∑_{s(t_0), ..., s(t_V)} p(s(t_0), ..., s(t_V) | O, T, Q̂_0) E[n_{ij} | s(t_0), ..., s(t_V), Q̂_0]    (7)
= ∑_{s(t_0), ..., s(t_V)} p(s(t_0), ..., s(t_V) | O, T, Q̂_0) ∑_{v=0}^{V-1} E[n_{ij} | s(t_v), s(t_{v+1}), Q̂_0]    (8)
= ∑_{v=0}^{V-1} ∑_{k,l=1}^{|S|} p(s(t_v) = k, s(t_{v+1}) = l | O, T, Q̂_0) E[n_{ij} | s(t_v) = k, s(t_{v+1}) = l, Q̂_0].    (9)

In a similar way, we can obtain an expression for the expectation of τ_i:

E[τ_i | O, T, Q̂_0] = ∑_{v=0}^{V-1} ∑_{k,l=1}^{|S|} p(s(t_v) = k, s(t_{v+1}) = l | O, T, Q̂_0) E[τ_i | s(t_v) = k, s(t_{v+1}) = l, Q̂_0].    (10)
In Section 4, we present our approach to computing the end-state conditioned statistics E[n_{ij} | s(t_v) = k, s(t_{v+1}) = l, Q̂_0] and E[τ_i | s(t_v) = k, s(t_{v+1}) = l, Q̂_0]. The remaining step is to compute the posterior state distribution at two consecutive observation times: p(s(t_v) = k, s(t_{v+1}) = l | O, T, Q̂_0).
3.3 Computing the Posterior State Probabilities
The challenge in efficiently computing p(s(t_v) = k, s(t_{v+1}) = l | O, T, Q̂_0) is to avoid the explicit enumeration of all possible state transition sequences and the variable time intervals between intermediate state transitions (from k to l). The key is to note that the posterior state probabilities are only needed at the times where we have observation data. We can exploit this insight to reformulate the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. Specifically, given the current estimate Q̂_0, O and T, we divide the time into V intervals, each with duration Δ_v = t_v - t_{v-1}. We then make use of the transition property of CTMC, and associate each interval v with a state transition matrix P^v(Δ_v) := e^{Q̂_0 Δ_v}. Together with the emission model p(o|s), we then have a discrete time-inhomogeneous hidden Markov model with joint likelihood:

∏_{v=1}^{V} [P^v(Δ_v)]_{(s(t_{v-1}), s(t_v))} ∏_{v=0}^{V} p(o_v | s(t_v)).    (11)
The formulation in Eq. 11 allows us to reduce the computation of p(s(t_v) = k, s(t_{v+1}) = l | O, T, Q̂_0) to familiar operations. The forward-backward algorithm [13] can be used to compute the posterior distribution of the hidden states, which we refer to as the Soft method. Alternatively, the MAP assignment of hidden states obtained from the Viterbi algorithm can provide an approximate distribution, which we refer to as the Hard method.
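The reformulation in Eq. 11 is what makes the E-step tractable. The sketch below (ours, not the authors' code) computes the pairwise posteriors p(s(t_v) = k, s(t_{v+1}) = l | O, T) with a standard scaled forward-backward pass, using per-interval transition matrices expm(Q Δ_v); the emission-likelihood matrix is assumed to be supplied by the caller.

```python
import numpy as np
from scipy.linalg import expm

def pairwise_posteriors(Q, pi, emit_lik, obs_times):
    """emit_lik: (V+1, S) matrix with emit_lik[v, s] = p(o_v | state s).
    Returns xi with xi[v, k, l] = p(s(t_v)=k, s(t_{v+1})=l | O, T)."""
    V1, S = emit_lik.shape
    P = [expm(Q * dt) for dt in np.diff(obs_times)]   # Eq. 11 transition matrices
    alpha = np.zeros((V1, S))
    beta = np.zeros((V1, S))
    alpha[0] = pi * emit_lik[0]
    alpha[0] /= alpha[0].sum()
    for v in range(V1 - 1):                           # forward pass (normalized)
        alpha[v + 1] = (alpha[v] @ P[v]) * emit_lik[v + 1]
        alpha[v + 1] /= alpha[v + 1].sum()
    beta[-1] = 1.0
    for v in range(V1 - 2, -1, -1):                   # backward pass (normalized)
        beta[v] = P[v] @ (emit_lik[v + 1] * beta[v + 1])
        beta[v] /= beta[v].sum()
    xi = np.zeros((V1 - 1, S, S))
    for v in range(V1 - 1):
        m = alpha[v][:, None] * P[v] * (emit_lik[v + 1] * beta[v + 1])[None, :]
        xi[v] = m / m.sum()                           # normalize each pair joint
    return xi
```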
4 EM Algorithms for CT-HMM
Pseudocode for the EM algorithm for CT-HMM parameter learning is shown in Algorithm 1.
Multiple variants of the basic algorithm are possible, depending on the choice of method for
computing the end-state conditioned expectations along with the choice of Hard or Soft decoding for obtaining the posterior state probabilities in Eq. 11. Note that in line 7 of Algorithm 1,
Algorithm 1 CT-HMM Parameter learning (Soft/Hard)
1: Input: data O = (o_0, ..., o_V) and T = (t_0, ..., t_V), state set S, edge set L, initial guess of Q
2: Output: transition rate matrix Q = (q_{ij})
3: Find all distinct time intervals t_τ, τ = 1, ..., r, from T
4: Compute P(t_τ) = e^{Q t_τ} for each t_τ
5: repeat
6:   Compute p(v, k, l) = p(s(t_v) = k, s(t_{v+1}) = l | O, T, Q) for all v, and the complete/state-optimized data likelihood l, by using Forward-Backward (soft) or Viterbi (hard)
7:   Create the soft count table C(τ, k, l) from p(v, k, l) by summing probabilities from visits with the same t_τ
8:   Use the Expm, Unif or Eigen method to compute E[n_{ij} | O, T, Q] and E[τ_i | O, T, Q]
9:   Update q_{ij} = E[n_{ij} | O, T, Q] / E[τ_i | O, T, Q], and q_{ii} = -∑_{j≠i} q_{ij}
10: until the likelihood l converges
we group probabilities from successive visits with the same time interval and the same specified end-states in order to save computation time. This is valid because in a time-homogeneous CT-HMM, E[n_{ij} | s(t_v) = k, s(t_{v+1}) = l, Q̂_0] = E[n_{ij} | s(0) = k, s(t_τ) = l, Q̂_0], where t_τ = t_{v+1} - t_v, so that the expectations only need to be evaluated for each distinct time interval, rather than for each different visiting time (also see the discussion below Eq. 2).
4.1 Computing the End-State Conditioned Expectations
The remaining step in finalizing the EM algorithm is to discuss the computation of the end-state
conditioned expectations for n_{ij} and τ_i from Eqs. 9 and 10, respectively. The first step is to express
the expectations in integral form, following [14]:
E[n_{ij} | s(0) = k, s(t) = l, Q] = (q_{ij} / P_{k,l}(t)) ∫_0^t P_{k,i}(x) P_{j,l}(t-x) dx    (12)

E[τ_i | s(0) = k, s(t) = l, Q] = (1 / P_{k,l}(t)) ∫_0^t P_{k,i}(x) P_{i,l}(t-x) dx.    (13)

From Eq. 12, we define Ψ_{k,l}^{i,j}(t) = ∫_0^t P_{k,i}(x) P_{j,l}(t-x) dx = ∫_0^t (e^{Qx})_{k,i} (e^{Q(t-x)})_{j,l} dx, while Ψ_{k,l}^{i,i}(t) can be similarly defined for Eq. 13 (see [6] for a similar construction). Several methods for computing Ψ_{k,l}^{i,j}(t) and Ψ_{k,l}^{i,i}(t) have been proposed in the CTMC literature. Metzner et al. observe that closed-form expressions can be obtained when Q is diagonalizable [8]. Unfortunately, this property is not guaranteed to hold, and in practice we find that the intermediate Q matrices are frequently not diagonalizable during EM iterations. We refer to this approach as Eigen.
An alternative is to leverage a classic method of Van Loan [15] for computing integrals of matrix exponentials. In this approach, an auxiliary matrix A is constructed as

A = [ Q  B ]
    [ 0  Q ],

where B is a matrix with dimensions identical to Q. It is shown in [15] that ∫_0^t e^{Qx} B e^{Q(t-x)} dx = (e^{At})_{(1:n),(n+1):(2n)}, where n is the dimension of Q. Following [9], we set B = I(i, j), where I(i, j) is the matrix with a 1 in the (i, j)th entry and 0 elsewhere. The left hand side then reduces to Ψ_{k,l}^{i,j}(t) for all k, l in the corresponding matrix entries, and we can leverage the substantial literature on numerical computation of the matrix exponential. We refer to this approach as Expm, after the popular Matlab function. A third approach for computing the expectations, introduced by Hobolth and Jensen [9] for CTMCs, is called uniformization (Unif) and is described in the supplementary material, along with additional details for Expm.
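A minimal numpy sketch of the Van Loan construction (ours; function names are illustrative, not the authors'): it builds the 2n×2n auxiliary matrix and reads the required integral out of the upper-right block of its exponential, then assembles the end-state conditioned statistics of Eqs. 12-13.

```python
import numpy as np
from scipy.linalg import expm

def vanloan_integral(Q, i, j, t):
    """Return the matrix of integrals int_0^t (e^{Qx})_{k,i} (e^{Q(t-x)})_{j,l} dx
    for all (k, l), via the upper-right block of expm(t * [[Q, B], [0, Q]])."""
    n = Q.shape[0]
    B = np.zeros_like(Q)
    B[i, j] = 1.0                       # B = I(i, j)
    A = np.block([[Q, B], [np.zeros_like(Q), Q]])
    return expm(A * t)[:n, n:]          # (e^{At})_{(1:n),(n+1):(2n)}

def end_state_expectations(Q, i, j, t):
    """End-state conditioned statistics for one interval t:
    E[n_ij | s(0)=k, s(t)=l] and E[tau_i | s(0)=k, s(t)=l] for all k, l.
    Assumes P_t[k, l] > 0 for every end-state pair (k, l) that is used."""
    P_t = expm(Q * t)
    E_n = Q[i, j] * vanloan_integral(Q, i, j, t) / P_t     # Eq. 12, entrywise
    E_tau = vanloan_integral(Q, i, i, t) / P_t             # Eq. 13, entrywise
    return E_n, E_tau
```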
Expm Based Algorithm Algorithm 2 presents pseudocode for the Expm method for computing
end-state conditioned statistics. The algorithm exploits the fact that the A matrix does not change
with time t_τ. Therefore, when using the scaling and squaring method [16] for computing matrix exponentials, one can easily cache and reuse the intermediate powers of A to efficiently compute e^{tA} for different values of t.
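Putting the pieces together, here is a compact sketch (ours, not the authors' implementation) of the Expm E-step accumulation in Algorithm 2 below, given the soft count table C(τ, k, l) over distinct intervals; the power-caching optimization just described is omitted for clarity.

```python
import numpy as np
from scipy.linalg import expm

def _vl(Q, i, j, t):
    # Van Loan block: int_0^t (e^{Qx})_{:,i} (e^{Q(t-x)})_{j,:} dx for all (k, l)
    n = Q.shape[0]
    A = np.zeros((2 * n, 2 * n))
    A[:n, :n] = Q
    A[n:, n:] = Q
    A[i, n + j] = 1.0                                 # B = I(i, j) block
    return expm(A * t)[:n, n:]

def expm_e_step(Q, t_distinct, C, edges):
    """Accumulate E[tau_i | O,T,Q] and E[n_ij | O,T,Q] (Algorithm 2).

    t_distinct : length-r array of distinct observation intervals t_tau.
    C          : (r, S, S) soft count table C(tau, k, l).
    edges      : list of (i, j) pairs with q_ij > 0.
    """
    S = Q.shape[0]
    P = {t: expm(Q * t) for t in t_distinct}          # cached P(t_tau)
    E_tau = np.zeros(S)
    E_n = np.zeros((S, S))
    for tau, t in enumerate(t_distinct):
        for i in range(S):
            D_i = _vl(Q, i, i, t) / P[t]              # Eq. 13, entrywise in (k, l)
            E_tau[i] += (C[tau] * D_i).sum()
        for (i, j) in edges:
            N_ij = Q[i, j] * _vl(Q, i, j, t) / P[t]   # Eq. 12, entrywise in (k, l)
            E_n[i, j] += (C[tau] * N_ij).sum()
    return E_n, E_tau
```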
Algorithm 2 The Expm Algorithm for Computing End-State Conditioned Statistics
1: for each state i in S do
2:   for τ = 1 to r do
3:     D_i = (e^{t_τ A})_{(1:n),(n+1):(2n)} / P_{k,l}(t_τ), where A = [ Q  I(i,i) ; 0  Q ] and the division is entrywise over end-state pairs (k, l)
4:     E[τ_i | O, T, Q] += ∑_{(k,l)∈L} C(τ, k, l) (D_i)_{k,l}
5:   end for
6: end for
7: for each edge (i, j) in L do
8:   for τ = 1 to r do
9:     N_{ij} = q_{ij} (e^{t_τ A})_{(1:n),(n+1):(2n)} / P_{k,l}(t_τ), where A = [ Q  I(i,j) ; 0  Q ]
10:   E[n_{ij} | O, T, Q] += ∑_{(k,l)∈L} C(τ, k, l) (N_{ij})_{k,l}
11:  end for
12: end for

4.2 Analysis of Time Complexity and Run-Time Comparisons

We conducted an asymptotic complexity analysis for all six combinations of Hard and Soft EM with the methods Expm, Unif, and Eigen for computing the conditional expectations. For both hard and
soft variants, the time complexity of Expm is O(rS^4 + rLS^3), where r is the number of distinct time intervals between observations, S is the number of states, and L is the number of edges. The soft
version of Eigen has the same time complexity, but since the eigendecomposition of non-symmetric
matrices can be ill-conditioned in any EM iteration [17], this method is not attractive. Unif is
based on truncating an infinite sum, and the truncation point M varies with max_{i,τ} q_i t_τ, with the
result that the cost of Unif varies significantly with both the data and the parameters. In comparison,
Expm is much less sensitive to these values (log versus quadratic dependency). See the supplemental
material for the details. We conclude that Expm is the most robust method available for the soft EM
case. When the state space is large, hard EM can be used to trade off accuracy with time. In the hard
EM case, Unif can be more efficient than Expm, because Unif can evaluate only the expectations
specified by the required end-states from the best decoded paths, whereas Expm must always produce
results from all end-states.
These asymptotic results are consistent with our experimental findings. On the glaucoma dataset
from Section 5.2, using a model with 105 states, Soft Expm requires 18 minutes per iteration on a
2.67 GHz machine with unoptimized MATLAB code, while Soft Unif spends more than 105 minutes
per iteration, Hard Unif spends 2 minutes per iteration, and Eigen fails.
5 Experimental results
We evaluated our EM algorithms in simulation (Sec. 5.1) and on two real-world datasets: a glaucoma
dataset (Sec. 5.2) in which we compare our prediction performance to a state-of-the-art method, and
a dataset for Alzheimer's disease (AD, Sec. 5.3) where we compare visualized progression trends to
recent findings in the literature. Our disease progression models employ 105 (Glaucoma) and 277
(AD) states, representing a significant advance in the ability to work with large models (previous
CT-HMM works [2, 7, 5] employed fewer than 100 states).
5.1 Simulation on a 5-state Complete Digraph
We test the accuracy of all methods on a 5-state complete digraph with synthetic data generated under different noise levels. Each q_i is randomly drawn from [1, 5] and then each q_{ij} is drawn from [0, 1] and renormalized such that ∑_{j≠i} q_{ij} = q_i. The state chains are generated from Q, such that each chain has a total duration around T = 100 / min_i q_i, where 1 / min_i q_i is the largest mean holding time. The data emission model for state i is set as N(i, σ^2), where σ varies under different noise level settings. The observations are then sampled from the state chains with rate 0.5 / max_i q_i, where 1 / max_i q_i is the smallest mean holding time, which should be dense enough to make the chain identifiable. A total of 10^5 observations are sampled. The average 2-norm relative error ||q̂ - q|| / ||q|| is used as the performance metric, where q̂ is a vector containing all learned q_{ij} parameters, and q is the ground truth.
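A small sketch of this synthetic setup (our illustration; the seed and helper name are arbitrary):

```python
import numpy as np

def random_digraph_Q(S=5, rng=np.random.default_rng(0)):
    """Random rate matrix for the simulation: q_i ~ U[1, 5], off-diagonal
    q_ij ~ U[0, 1] renormalized so the off-diagonals of row i sum to q_i."""
    q = rng.uniform(1.0, 5.0, size=S)                 # leaving rates q_i
    Q = rng.uniform(0.0, 1.0, size=(S, S))
    np.fill_diagonal(Q, 0.0)
    Q *= (q / Q.sum(axis=1))[:, None]                 # sum_{j != i} q_ij = q_i
    np.fill_diagonal(Q, -q)
    return Q
```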
The simulation results from 5 random runs are listed in Table 1. Expm and Unif produce nearly identical results, so they are combined in the table. Eigen fails at least once for each setting, but when it works it produces similar results. All Soft methods achieve significantly better accuracy than Hard methods, especially when the noise level becomes higher. This can be attributed to the maintenance of the full hidden state distribution, which makes it more robust to noise.

Table 1: The average 2-norm relative error from 5 random runs on a 5-state complete digraph under varying noise levels. The convergence threshold is ≤ 10^{-8} on relative data likelihood change.

Error           σ = 1/4       σ = 3/8       σ = 1/2       σ = 1         σ = 2
S(Expm,Unif)    0.026±0.008   0.032±0.008   0.042±0.012   0.199±0.084   0.510±0.104
H(Expm,Unif)    0.031±0.009   0.197±0.062   0.476±0.100   0.857±0.080   0.925±0.030

[Figure 1 graphics omitted. Figure 1: (a) The 2D-grid state structure for glaucoma progression modeling. (b) Illustration of the prediction of future states from s(0) = i. (c) One fold of convergence behavior of Soft(Expm) on the glaucoma dataset.]
5.2 Application of CT-HMM to Predicting Glaucoma Progression
In this experiment we used CT-HMM to visualize a real-world glaucoma dataset and predict glaucoma progression. Glaucoma is a leading cause of blindness and visual morbidity worldwide [18].
This disease is characterized by a slowly progressing optic neuropathy with associated irreversible
structural and functional damage. There are conflicting findings in the temporal ordering of detectable structural and functional changes, which confound glaucoma clinical assessment and treatment plans [19]. Here, we use a 2D-grid state space model with 105 states, defined by successive
value bands of the two main glaucoma markers, Visual Field Index (VFI) (functional marker) and
average RNFL (Retinal Nerve Fiber Layer) thickness (structural marker) with forwarding edges (see
Fig. 1(a)). More details of the dataset and model can be found in the supplementary material. We
utilize Soft Expm for the following experiments, since it converges quickly (see Fig. 1(c)), has an
acceptable computational cost, and exhibits the best performance.
To predict future continuous measurements, we follow a simple procedure illustrated in Fig. 1(b).
Given a testing patient, Viterbi decoding is used to decode the best hidden state path for the past
visits. Then, given a future time t, the most probable future state is predicted by j = arg max_j P_{ij}(t)
(blue node), where i is the current state (black node). To predict the continuous measurements, we
search for the future times t1 and t2, at a desired resolution, at which the patient enters and leaves a state having the same value range as state j, for each disease marker separately. The measurement at time t
can then be computed by linear interpolation between t1 and t2 and the two data bounds of state j for
the specified marker ([b1, b2] in Fig. 1(b)). The mean absolute error (MAE) between the predicted
values and the actual measurements was used for performance assessment. The performance of CT-HMM was compared to both conventional linear regression and Bayesian joint linear regression [11].
For the Bayesian method, the joint prior distribution of the four parameters (two intercepts and two
slopes) computed from the training set [11] is used alongside the data likelihood. The results in
Table 2 demonstrate the significantly improved performance of CT-HMM.
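A sketch of this prediction rule (ours; the band-interpolation bookkeeping from the paper is omitted):

```python
import numpy as np
from scipy.linalg import expm

def predict_future_state(Q, i, t):
    """Most probable state a time t ahead of the current state i:
    j = argmax_j P_ij(t), with P(t) = expm(Q t)."""
    return int(np.argmax(expm(Q * t)[i]))
```

The band-entry times t1 and t2 used for interpolating the continuous measurements can then be found by scanning this rule over a grid of future times.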
In Fig. 2(a), we visualize the model trained using the entire dataset. Several dominant paths can be
identified: there is an early stage containing RNFL thinning with intact vision (blue vertical path in
the first column), and at around RNFL range [80, 85] the transition trend reverses and VFI changes
become more evident (blue horizontal paths). This L shape in the disease progression supports the
finding in [20] that RNFL thickness of around 77 microns is a tipping point at which functional
deterioration becomes clinically observable with structural deterioration. Our 2D CT-HMM model
can be used to visualize the non-linear relationship between structural and functional degeneration,
yielding insights into the progression process.
5.3 Application of CT-HMM to Exploratory Analysis of Alzheimer's Disease
We now demonstrate the use of CT-HMM as an exploratory tool to visualize the temporal interaction
of disease markers of Alzheimer's Disease (AD). AD is an irreversible neuro-degenerative disease
that results in a loss of mental function due to the degeneration of brain tissues. An estimated 5.3
Table 2: The mean absolute error (MAE) of predicting the two glaucoma measures. (* indicates that CT-HMM performs significantly better than the competing method under a Student t-test.)

MAE                                 VFI                           RNFL
CT-HMM                              4.64 ± 10.06                  7.05 ± 6.57
Bayesian Joint Linear Regression    5.57 ± 11.11 * (p = 0.005)    9.65 ± 8.42 * (p ≈ 0.000)
Linear Regression                   7.00 ± 12.22 * (p ≈ 0.000)    18.13 ± 20.70 * (p ≈ 0.000)
million Americans have AD, yet no prevention or cures have been found [21]. It could be beneficial
to visualize the relationship between clinical, imaging, and biochemical markers as the pathology
evolves, in order to better understand AD progression and develop treatments.
A 277 state CT-HMM model was constructed from a cohort of AD patients (see the supplementary
material for additional details). The 3D visualization result is shown in Fig. 2(b). The state transition
trends show that abnormality of the Aβ level emerges first (blue lines) when cognition scores are still normal. Hippocampus atrophy happens more often (green lines) when Aβ levels are already low and cognition has started to show abnormality. Most cognition degeneration happens (red lines) when both Aβ levels and hippocampus volume are already in abnormal stages. Our quantitative visualization results support recent findings that the decrease of the Aβ level in CSF is an early marker preceding detectable hippocampus atrophy in cognition-normal elderly [22]. The CT-HMM disease model with interactive visualization can be utilized as an exploratory tool to gain insights into the disease progression and generate hypotheses to be further investigated by medical researchers.
[Figure 2 graphics omitted; panel labels: (a) Glaucoma progression (structural degeneration (RNFL) vs. functional degeneration (VFI)); (b) Alzheimer's disease progression (biochemical (Aβ), structural (Hippocampus), functional (Cognition)).]

Figure 2: Visualization scheme: (a) The strongest transition among the three instantaneous links from each state is shown in blue, while other transitions are drawn in dotted black. The line width and the node size reflect the expected count. The node color represents the average sojourn time (red to green: 0 to 5 years and above). (b) Similar to (a), but the strongest transition from each state is color coded as follows: Aβ direction (blue), hippo (green), cog (red), Aβ+hippo (cyan), Aβ+cog (magenta), hippo+cog (yellow), Aβ+hippo+cog (black). The node color represents the average sojourn time (red to green: 0 to 3 years and above).
6 Conclusion
In this paper, we present novel EM algorithms for CT-HMM learning which leverage recent approaches [9] for evaluating the end-state conditioned expectations in CTMC models. To our knowledge, we are the first to develop and test the Expm and Unif methods for CT-HMM learning. We also
analyze their time complexity and provide experimental comparisons among the methods under soft
and hard EM frameworks. We find that soft EM is more accurate than hard EM, and Expm works
the best under soft EM. We evaluated our EM algorithms on two disease progression datasets for
glaucoma and AD. We show that CT-HMM outperforms the state-of-the-art Bayesian joint linear
regression method [11] for glaucoma progression prediction. This demonstrates the practical value
of CT-HMM for longitudinal disease modeling and prediction.
Acknowledgments
Portions of this work were supported in part by NIH R01 EY13178-15 and by grant U54EB020404 awarded
by the National Institute of Biomedical Imaging and Bioengineering through funds provided by the Big Data
to Knowledge (BD2K) initiative (www.bd2k.nih.gov). Additionally, the collection and sharing of the
Alzheimers data was funded by ADNI under NIH U01 AG024904 and DOD award W81XWH-12-2-0012. The
research was also supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR N00014-15-1-2340, NSF
IIS-1218749, and NSF CAREER IIS-1350983.
References
[1] D. R. Cox and H. D. Miller, The Theory of Stochastic Processes. London: Chapman and Hall, 1965.
[2] C. H. Jackson, "Multi-state models for panel data: the msm package for R," Journal of Statistical Software, vol. 38, no. 8, 2011.
[3] N. Bartolomeo, P. Trerotoli, and G. Serio, "Progression of liver cirrhosis to HCC: an application of hidden Markov model," BMC Med Research Methodol., vol. 11, no. 38, 2011.
[4] Y. Liu, H. Ishikawa, M. Chen, et al., "Longitudinal modeling of glaucoma progression using 2-dimensional continuous-time hidden Markov model," Med Image Comput Comput Assist Interv, vol. 16, no. 2, pp. 444-51, 2013.
[5] X. Wang, D. Sontag, and F. Wang, "Unsupervised learning of disease progression models," Proceedings of KDD, vol. 4, no. 1, pp. 85-94, 2014.
[6] U. Nodelman, C. R. Shelton, and D. Koller, "Expectation maximization and complex duration distributions for continuous time Bayesian networks," in Proc. Uncertainty in AI (UAI 05), 2005.
[7] J. M. Leiva-Murillo, A. Artés-Rodríguez, and E. Baca-García, "Visualization and prediction of disease interactions with continuous-time hidden Markov models," in NIPS, 2011.
[8] P. Metzner, I. Horenko, and C. Schütte, "Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time," Physical Review E, vol. 76, no. 066702, 2007.
[9] A. Hobolth and J. L. Jensen, "Summary statistics for endpoint-conditioned continuous-time Markov chains," Journal of Applied Probability, vol. 48, no. 4, pp. 911-924, 2011.
[10] P. Tataru and A. Hobolth, "Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains," BMC Bioinformatics, vol. 12, no. 465, 2011.
[11] F. Medeiros, L. Zangwill, C. Girkin, et al., "Combining structural and functional measurements to improve estimates of rates of glaucomatous progression," Am J Ophthalmol, vol. 153, no. 6, pp. 1197-205, 2012.
[12] M. Bladt and M. Sørensen, "Statistical inference for discretely observed Markov jump processes," J. R. Statist. Soc. B, vol. 39, no. 3, pp. 395-410, 2005.
[13] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, 1989.
[14] A. Hobolth and J. L. Jensen, "Statistical inference in evolutionary models of DNA sequences via the EM algorithm," Statistical Applications in Genetics and Molecular Biology, vol. 4, no. 1, 2005.
[15] C. Van Loan, "Computing integrals involving the matrix exponential," IEEE Trans. Automatic Control, vol. 23, pp. 395-404, 1978.
[16] N. Higham, Functions of Matrices: Theory and Computation. SIAM, 2008.
[17] P. Metzner, I. Horenko, and C. Schütte, "Generator estimation of Markov jump processes," Journal of Computational Physics, vol. 227, pp. 353-375, 2007.
[18] S. Kingman, "Glaucoma is second leading cause of blindness globally," Bulletin of the World Health Organization, vol. 82, no. 11, 2004.
[19] G. Wollstein, J. Schuman, L. Price, et al., "Optical coherence tomography longitudinal evaluation of retinal nerve fiber layer thickness in glaucoma," Arch Ophthalmol, vol. 123, no. 4, pp. 464-70, 2005.
[20] G. Wollstein, L. Kagemann, R. Bilonick, et al., "Retinal nerve fibre layer and visual function loss in glaucoma: the tipping point," Br J Ophthalmol, vol. 96, no. 1, pp. 47-52, 2012.
[21] The Alzheimer's Disease Neuroimaging Initiative, http://adni.loni.usc.edu.
[22] A. M. Fagan, D. Head, A. R. Shah, et al., "Decreased CSF Aβ42 correlates with brain atrophy in cognitively normal elderly," Ann Neurol., vol. 65, no. 2, pp. 176-183, 2009.
| 5792 | [vw_text bag-of-words duplicate of raw_text omitted] |
5,294 | 5,793 | The Population Posterior
and Bayesian Modeling on Streams
James McInerney
Columbia University
james@cs.columbia.edu
Rajesh Ranganath
Princeton University
rajeshr@cs.princeton.edu
David Blei
Columbia University
david.blei@columbia.edu
Abstract
Many modern data analysis problems involve inferences from streaming data. However, streaming data is not easily amenable to the standard probabilistic modeling
approaches, which require conditioning on finite data. We develop population
variational Bayes, a new approach for using Bayesian modeling to analyze streams
of data. It approximates a new type of distribution, the population posterior, which
combines the notion of a population distribution of the data with Bayesian inference in a probabilistic model. We develop the population posterior for latent
Dirichlet allocation and Dirichlet process mixtures. We study our method with
several large-scale data sets.
1 Introduction
Probabilistic modeling has emerged as a powerful tool for data analysis. It is an intuitive language
for describing assumptions about data and provides efficient algorithms for analyzing real data under
those assumptions. The main idea comes from Bayesian statistics. We encode our assumptions about
the data in a structured probability model of hidden and observed variables; we condition on a data
set to reveal the posterior distribution of the hidden variables; and we use the resulting posterior as
needed, for example to form predictions through the posterior predictive distribution or to explore the
data through the posterior expectations of the hidden variables.
Many modern data analysis problems involve inferences from streaming data. Examples include
exploring the content of massive social media streams (e.g., Twitter, Facebook), analyzing live video
streams, estimating the preferences of users on an online platform for recommending new items, and
predicting human mobility patterns for anticipatory computing. Such problems, however, cannot
easily take advantage of the standard approach to probabilistic modeling, which requires that we
condition on a finite data set.
This might be surprising to some readers; after all, one of the tenets of the Bayesian paradigm is that
we can update our posterior when given new information. ("Yesterday's posterior is today's prior.")
But there are two problems with using Bayesian updating on data streams. The first problem is that
Bayesian inference computes posterior uncertainty under the assumption that the model is correct.
In theory this is sensible, but only in the impossible scenario where the data truly came from the
proposed model. In practice, all models provide approximations to the data-generating distribution,
and when the model is incorrect, the uncertainty that maximizes predictive likelihood may be larger or
smaller than the Bayesian posterior variance. This problem is exacerbated in potentially never-ending
streams; after seeing only a few data points, uncertainty is high, but eventually the model becomes
overconfident.
The second problem is that the data stream might change over time. This is an issue because,
frequently, our goal in applying probabilistic models to streams is not to characterize how they
change, but rather to accommodate it. That is, we would like for our current estimate of the latent
variables to be accurate to the current state of the stream and to adapt to how the stream might slowly
change. (This is in contrast, for example, to time series modeling.) Traditional Bayesian updating
cannot handle this. Either we explicitly model the time series, and pay a heavy inferential cost, or we
tacitly assume that the data are exchangeable, i.e., that the underlying distribution does not change.
In this paper we develop new ideas for analyzing data streams with probabilistic models. Our
approach combines the frequentist notion of the population distribution with probabilistic models and
Bayesian inference.
Main idea: The population posterior. Consider a latent variable model of α data points. (This is unconventional notation; we will describe why we use it below.) Following [14], we define the model to have two kinds of hidden variables: global hidden variables β contain latent structure that potentially governs any data point; local hidden variables z_i contain latent structure that only governs the ith data point. Such models are defined by the joint,

p(β, z, x) = p(β) ∏_{i=1}^{α} p(x_i, z_i | β),    (1)

where x = x_{1:α} and z = z_{1:α}. Traditional Bayesian statistics conditions on a fixed data set x to obtain the posterior distribution of the hidden variables p(β, z | x). As we discussed, this framework cannot accommodate data streams. We need a different way to use the model.
We define a new distribution, the population posterior, which enables us to consider Bayesian modeling of streams. Suppose we observe α data points independently from the underlying population distribution, X ∼ F_α. This induces a posterior p(β, z | X), which is a function of the random data. The population posterior is the expected value of this distribution,

E_{F_α}[p(z, β | X)] = E_{F_α}[ p(β, z, X) / p(X) ].    (2)

Notice that this distribution is not a function of observed data; it is a function of the population distribution F and the data size α. The data size is a hyperparameter that can be set; it effectively controls the variance of the population posterior. How to best set it depends on how close the model is to the true data distribution.
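To build intuition for Eq. 2, here is a small Monte Carlo illustration (ours, not the paper's) for a conjugate Beta-Bernoulli model: repeatedly draw α data points from an assumed population F, form the usual conjugate posterior, and average the resulting densities. The function name, grid, and population choice are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

def population_posterior_density(grid, alpha_size, p_true=0.3,
                                 n_mc=2000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of E_F[p(theta | X)] on a grid, for a
    Beta(1,1)-Bernoulli model with population F = Bernoulli(p_true)."""
    dens = np.zeros_like(grid)
    for _ in range(n_mc):
        x = rng.binomial(1, p_true, size=alpha_size)   # X ~ F_alpha
        k = x.sum()
        dens += beta.pdf(grid, 1 + k, 1 + alpha_size - k)
    return dens / n_mc
```

For this toy model the population posterior is an average of Beta densities; the data-size hyperparameter controls how concentrated it is, rather than the amount of data seen so far.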
We have defined a new problem. Given an endless stream of data points coming from F and a value
for α, our goal is to approximate the corresponding population posterior. In this paper, we will
approximate it through an algorithm based on variational inference and stochastic optimization. As
we will show, our algorithm justifies applying a variant of stochastic variational inference [14] to
a data stream. We used our method to analyze several data streams with two modern probabilistic
models, latent Dirichlet allocation [5] and Dirichlet process mixtures [11]. With held-out likelihood
as a measure of model fitness, we found our method to give better models of the data than approaches
based on full Bayesian inference [14] or Bayesian updating [8].
Related work. Researchers have proposed several methods for inference on streams of data.
Refs. [1, 9, 27] propose extending Markov chain Monte Carlo methods for streaming data. However,
sampling-based approaches do not scale to massive datasets; the variational approximation enables
more scalable inference. In variational inference, Ref. [15] proposes online variational inference by exponentially forgetting the variational parameters associated with old data. Stochastic variational inference (SVI) [14] also decays parameters derived from old data, but interprets this in the context of
stochastic optimization. Neither of these methods applies to streaming data; both implicitly rely on
the data being of known size (even when subsampling data to obtain noisy gradients).
To apply the variational approximation to streaming data, Ref. [8] and Ref. [12] both propose
Bayesian updating of the approximating family; Ref. [22] adapts this framework to nonparametric
mixture models. Here we take a different approach, changing the variational objective to incorporate
a population distribution and then following stochastic gradients of this new objective. In Section 3
we show that this generally performs better than Bayesian updating.
Independently, Ref. [23] applied SVI to streaming data by accumulating new data points into a
growing window and then uniformly sampling from this window to update the variational parameters.
Our method justifies that approach. Further, they propose updating parameters along a trust region,
instead of following (natural) gradients, as a way of mitigating local optima. This innovation can be
incorporated into our method.
2 Variational Inference for the Population Posterior
We develop population variational Bayes, a method for approximating the population posterior in
Eq. 2. Our method is based on variational inference and stochastic optimization.
The F-ELBO. The idea behind variational inference is to approximate difficult-to-compute distributions through optimization [16, 25]. We introduce an approximating family of distributions over the
latent variables q(β, z) and try to find the member of q(·) that minimizes the Kullback-Leibler (KL)
divergence to the target distribution.
Population variational Bayes (VB) uses variational inference to approximate the population posterior
in Eq. 2. It aims to minimize the KL divergence from an approximating family,
q*(β, z) = arg min_q KL( q(β, z) || E_{F_α}[p(β, z | X)] ).    (3)
As for the population posterior, this objective is a function of the population distribution of α data points F_α. Notice the difference to classical VB. In classical VB, we optimize the KL divergence between q(·) and a posterior, KL(q(β, z) || p(β, z | x)); its objective is a function of a fixed data set x. In contrast, the objective in Eq. 3 is a function of the population distribution F_α.

We will use the mean-field variational family, where each latent variable is independent and governed by a free parameter,

q(β, z) = q(β | λ) ∏_{i=1}^{α} q(z_i | φ_i).    (4)
The free variational parameters are the global parameters λ and the local parameters φ_i. Though we
focus on the mean-field family, extensions could consider structured families [13, 20], where there is
dependence between variables.
In classical VB, where we approximate the usual posterior, we cannot compute the KL. Thus, we
optimize a proxy objective called the ELBO (evidence lower bound) that is equal to the negative KL
up to an additive constant. Maximizing the ELBO is equivalent to minimizing the KL divergence to
the posterior.
In population VB we also optimize a proxy objective, the F-ELBO. The F-ELBO is an expectation of the ELBO under the population distribution of the data,

L(λ, φ; F_α) = E_{F_α}[ E_q[ log p(β) - log q(β | λ) + ∑_{i=1}^{α} ( log p(X_i, Z_i | β) - log q(Z_i) ) ] ].    (5)
The F-ELBO is a lower bound on the population evidence log E_{F_α}[p(X)] and a lower bound on the negative KL to the population posterior. (See Appendix A.) The inner expectation is over the latent variables β and Z, and is a function of the variational distribution q(·). The outer expectation is over the α random data points X, and is a function of the population distribution F_α(·). The F-ELBO is thus a function of both the variational distribution and the population distribution.

As we mentioned, classical VB maximizes the (classical) ELBO, which is equivalent to minimizing the KL. The F-ELBO, in contrast, is only a bound on the negative KL to the population posterior. Thus maximizing the F-ELBO is suggestive but is not guaranteed to minimize the KL. That said, our studies show that this is a good quantity to optimize, and in Appendix A we show that the F-ELBO does minimize E_{F_α}[KL(q(β, z) || p(β, z | X))], the population KL.
Conditionally conjugate models. In the next section we will develop a stochastic optimization
algorithm to maximize Eq. 5. First, we describe the class of models that we will work with.
Following [14] we focus on conditionally conjugate models. A conditionally conjugate model is one
where each complete conditional (the conditional distribution of a latent variable given all the other latent variables and the observations) is in the exponential family. This class includes many models
in modern machine learning, such as mixture models, topic models, many Bayesian nonparametric
models, and some hierarchical regression models. Using conditionally conjugate models simplifies
many calculations in variational inference.
Under the joint in Eq. 1, we can write a conditionally conjugate model with two exponential families:

p(z_i, x_i | β) = h(z_i, x_i) exp{ β^⊤ t(z_i, x_i) - a(β) }    (6)
p(β | ξ) = h(β) exp{ ξ^⊤ t(β) - a(ξ) }.    (7)

We overload notation for the base measures h(·), sufficient statistics t(·), and log normalizers a(·). Note that ξ is the hyperparameter and that t(β) = [β, -a(β)] [3].
In conditionally conjugate models each complete conditional is in an exponential family, and we
use these families as the factors in the variational distribution in Eq. 4. Thus λ indexes the same family as p(β | z, x) and φ_i indexes the same family as p(z_i | x_i, β). For example, in latent Dirichlet
allocation [5], the complete conditional of the topics is a Dirichlet; the complete conditional of
the per-document topic mixture is a Dirichlet; and the complete conditional of the per-word topic
assignment is a categorical. (See [14] for details.)
Population variational Bayes. We have described the ingredients of our problem. We are given a
conditionally conjugate model, described in Eqs. 6 and 7, a parameterized variational family in Eq. 4,
and a stream of data from an unknown population distribution F. Our goal is to optimize the F-ELBO
in Eq. 5 with respect to the variational parameters.
The F-ELBO is a function of the population distribution, which is an unknown quantity. To overcome
this hurdle, we will use the stream of data from F to form noisy gradients of the F-ELBO; we then
update the variational parameters with stochastic optimization (a technique to find a local optimum
by following noisy unbiased gradients [7]).
Before describing the algorithm, however, we acknowledge one technical detail. Mirroring [14], we
optimize an F-ELBO that is only a function of the global variational parameters. The one-parameter
population VI objective is L_{F_α}(λ) = max_φ L_{F_α}(λ, φ). This implicitly optimizes the local parameter
as a function of the global parameter and allows us to convert the potentially infinite-dimensional
optimization problem in Eq. 5 to a finite one. The resulting objective is identical to Eq. 5, but with φ replaced by φ(λ). (Details are in Appendix B.)
The next step is to form a noisy gradient of the F-ELBO so that we can use stochastic optimization
to maximize it. Stochastic optimization maximizes an objective by following noisy and unbiased
gradients [7, 19]. We will write the gradient of the F-ELBO as an expectation with respect to F_α, and
then use Monte Carlo estimates to form noisy gradients.
We compute the gradient of the F-ELBO by bringing the gradient operator inside the expectations of
Eq. 5.¹ This results in a population expectation of the classical VB gradient with α data points.
We take the natural gradient [2], which has a simple form in conditionally conjugate models [14].
Specifically, the natural gradient of the F-ELBO is

∇̂_λ L(λ; F_α) = ξ − λ + E_{F_α}[ Σ_{i=1}^{α} E_{φ_i(λ)}[ t(x_i, Z_i) ] ].    (8)
We approximate this expression using Monte Carlo to compute noisy, unbiased natural gradients at ? .
To form the Monte Carlo estimate, we collect α data points from F; for each we compute the optimal
local parameters φ_i(λ), which are a function of the sampled data point and the variational parameters; we
then compute the quantity inside the brackets in Eq. 8. Averaging these results gives the Monte Carlo
estimate of the natural gradient. We follow the noisy natural gradient and repeat.
The algorithm is summarized in Algorithm 1. Because Eq. 8 is a Monte Carlo estimate, we are free to
draw B data points from F_α (where B ≪ α) and rescale the sufficient statistics by α/B. This makes
the natural gradient estimate noisier, but faster to calculate. As highlighted in [14], this strategy is
more computationally efficient because early iterations of the algorithm have inaccurate values of λ.
It is wasteful to pass through a lot of data before making updates to ? .
Discussion. Thus far, we have defined the population posterior and showed how to approximate
it with population variational inference. Our derivation justifies using an algorithm like stochastic
variational inference (SVI) [14] on a stream of data. It is nearly identical to SVI, but includes an
additional parameter: the number of data points in the population posterior, α.
¹ For most models of interest, this is justified by the dominated convergence theorem.
Algorithm 1 Population Variational Bayes
Randomly initialize the global variational parameter λ^(0)
Set iteration t ← 0
repeat
    Draw data minibatch x_{1:B} ~ F_α
    Optimize local variational parameters φ_1(λ^(t)), ..., φ_B(λ^(t))
    Calculate the minibatch natural-gradient estimate ∇̂_B of ∇_λ L(λ^(t); F_α) [see Eq. 8]
    Update the global variational parameter with learning rate ρ^(t):
        λ^(t+1) = λ^(t) + ρ^(t) ∇̂_B
    Update the iteration count t ← t + 1
until forever
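To make Algorithm 1 concrete, here is a minimal sketch for a toy model where the local step is trivial: inferring a Gaussian mean β with known unit variance under a conjugate N(0, 1) prior. The synthetic stream, the value of α, and the Robbins-Monro learning-rate schedule are all illustrative assumptions, not the LDA or DP setups used in the paper.

```python
import numpy as np

# Toy stream: the (unknown) population F is N(2.0, 1.0); each call yields a minibatch.
rng = np.random.default_rng(0)
def draw_minibatch(B):
    return rng.normal(loc=2.0, scale=1.0, size=B)

# Model: x_i ~ N(beta, 1) with conjugate prior beta ~ N(0, 1).
# Natural parameters of a Gaussian are (mean/var, -1/(2*var)).
xi = np.array([0.0, -0.5])        # prior natural parameters, i.e., N(0, 1)
lam = np.array([0.0, -0.5])       # initialize the global variational parameter
alpha = 1_000                     # population-posterior data-set size (hyperparameter)
B = 10                            # minibatch size

for t in range(1, 5001):
    x = draw_minibatch(B)
    # Sufficient statistics t(x_i) = (x_i, -1/2) for this model; rescale the
    # minibatch sum by alpha / B as described in the text.
    stats = np.array([x.sum(), -0.5 * B]) * (alpha / B)
    nat_grad = xi + stats - lam   # noisy natural gradient of the F-ELBO (Eq. 8)
    rho = (t + 10) ** -0.7        # Robbins-Monro learning rate
    lam = lam + rho * nat_grad

post_var = -1.0 / (2.0 * lam[1])
post_mean = lam[0] * post_var
print(f"q(beta) = N({post_mean:.3f}, {post_var:.5f})")  # mean -> 2.0, var -> 1/(alpha+1)
```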
Note we can recover the original SVI algorithm as an instance of population VI, thus reinterpreting it
as minimizing the KL divergence to the population posterior. We recover SVI by setting α equal to
the number of data points in the data set and replacing the stream of data F with F̂_X, the empirical
distribution of the observations. The "stream" in this case comes from sampling with replacement
from F̂_X, which results in precisely the original SVI algorithm.²
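The difference between the two regimes can be stated in a few lines of code. A hypothetical sketch (the generator names are ours) of the two ways of producing minibatches:

```python
import numpy as np

rng = np.random.default_rng(1)

def population_stream(draw, B):
    """Population VB: each minibatch is a genuinely fresh draw from F."""
    while True:
        yield draw(B)

def bootstrap_stream(dataset, B):
    """SVI as population VB: alpha = len(dataset) and F is replaced by the
    empirical distribution, i.e., sampling with replacement from the data."""
    dataset = np.asarray(dataset)
    while True:
        yield dataset[rng.integers(0, len(dataset), size=B)]
```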
We focused on the conditionally conjugate family for convenience, i.e., the simple gradient in Eq. 8.
We emphasize, however, that by using recent tools for nonconjugate inference [17, 18, 24], we
can adapt the new ideas described above (the population posterior and the F-ELBO) outside of
conditionally conjugate models.
Finally, we analyze the population posterior distribution under the assumption that the only way
the stream affects the model is through the data. Formally, this means the unobserved variables in the model and the stream F_α are independent given the data X. The population posterior without the local latent variables
z (which can be marginalized out) is E_{F_α}[ p(β | X) ].
Expanding the expectation gives ∫ p(β | X) p(X | F_α) dX, showing that the population posterior distribution can be written as p(β | F_α). This can be depicted as a graphical model:
[Graphical model: F_α → X → β]
This means, first, that the population posterior is well defined even when the model does not specify
the marginal distribution of the data and, second, that rather than the classical Bayesian setting, where the
posterior is conditioned on a finite fixed dataset, the population posterior is a distributional posterior
conditioned on the stream F_α.
3 Empirical Evaluation
We study the performance of population variational Bayes (population VB) against SVI and streaming
variational Bayes (SVB) [8]. With large real-world data we study two models, latent Dirichlet
allocation [5] and Bayesian nonparametric mixture models, comparing the held-out predictive
performance of the algorithms. All three methods share the same local variational update, which
is the dominating computational cost. We study the data coming in a true ordered stream, and in a
permuted stream (to better match the assumptions of SVI). Across data and models, population VB
usually outperforms the existing approaches.
Models. We study two models. The first is latent Dirichlet allocation (LDA) [5]. LDA is a
mixed-membership model of text collections and is frequently used to find its latent topics. LDA
assumes that there are K topics β_k ~ Dir(η), each of which is a multinomial distribution over a fixed
vocabulary. Documents are drawn by first choosing a distribution over topics θ_d ~ Dir(γ) and then
² This derivation of SVI is an application of Efron's plug-in principle [10] applied to inference of the
population posterior. The plug-in principle says that we can replace the population F with the empirical
distribution of the data F̂ to make population inferences. In our empirical study, however, we found that
population VI often outperforms stochastic VI. Treating the data in a true stream, and setting the number of data
points different to the true number, can improve predictive accuracy.
[Figure 1: line plots of held-out log likelihood versus number of documents seen (×10^5), for a time-ordered stream and a random time-permuted stream, on New York Times, Science, and Twitter, comparing Population-VB (α = 1M), Streaming-VB [8], and SVI [15].]
Figure 1: Held-out predictive log likelihood for LDA on large-scale streamed text corpora. Population-VB outperforms existing methods for two out of the three settings. We use the best settings of α.
drawing each word by choosing a topic assignment z_dn ~ Mult(θ_d) and finally choosing a word from
the corresponding topic, w_dn ~ β_{z_dn}. The joint distribution is
p(β, θ, z, w | η, γ) = p(β | η) ∏_{d=1}^{D} p(θ_d | γ) ∏_{i=1}^{N} p(z_di | θ_d) p(w_di | β, z_di).    (9)
Fixing hyperparameters, the inference problem is to estimate the conditional distribution of the topics
given a large collection of documents.
The second model is a Dirichlet process (DP) mixture [11]. Loosely, DP mixtures are mixture models
with a potentially infinite number of components; thus choosing the number of components is part
of the posterior inference problem. When using variational inference for DP mixtures [4], we take
advantage of the stick breaking representation to construct a truncated variational approximation [21].
The variables are mixture proportions σ ~ Stick(ζ), mixture components θ_k ~ H(τ) (for infinite k),
mixture assignments z_i ~ Mult(σ), and observations x_i ~ G(θ_{z_i}). The joint is

p(θ, σ, z, x | ζ, τ) = p(σ | ζ) p(θ | τ) ∏_{i=1}^{N} p(z_i | σ) p(x_i | θ, z_i).    (10)
The likelihood and prior on the components are general to the observations at hand. In our study
of real-valued data we use normal priors and normal likelihoods; in our study of text data we use
Dirichlet priors and multinomial likelihoods.
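As a concrete illustration of the generative process just described, here is a hypothetical numpy sketch that draws from a truncated stick-breaking approximation of a DP mixture with Gaussian components. The truncation level T and all hyperparameter values are our own illustrative choices, mirroring the truncated variational construction of [4, 21]:

```python
import numpy as np

rng = np.random.default_rng(2)
T, zeta, N = 50, 1.0, 500          # truncation level, DP concentration, sample size

# Stick-breaking: sigma ~ Stick(zeta), truncated at T components.
v = rng.beta(1.0, zeta, size=T)
v[-1] = 1.0                        # close the stick at the truncation level
sticks = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
sigma = v * sticks                 # mixture proportions; sums to 1

theta = rng.normal(0.0, 5.0, size=T)    # components theta_k ~ H = N(0, 25)
z = rng.choice(T, size=N, p=sigma)      # assignments z_i ~ Mult(sigma)
x = rng.normal(theta[z], 1.0)           # observations x_i ~ G = N(theta_{z_i}, 1)
```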
For both models we vary α, which is usually fixed to the number of data points in traditional analyses.
Datasets. With LDA we analyze three large-scale streamed corpora: 1.7M articles from the New
York Times spanning 10 years, 130K Science articles written over 100 years, and 7.4M tweets
collected from Twitter on Feb 2nd, 2014. We processed them all in a similar way, choosing a
vocabulary based on the most frequent words in the corpus (with stop words removed): 8,000 for the
New York Times, 5,855 for Science, and 13,996 for Twitter. On Twitter, each tweet is a document,
and we removed duplicate tweets and tweets that did not contain at least 2 words in the vocabulary.
For each data stream, all algorithms took a few hours to process all the examples we collected.
With DP mixtures, we analyze human location behavior data. These data allow us to build periodic
models of human population mobility, with applications to disaster response and urban planning.
Such models account for periodicity by including the hour of the week as one of the dimensions of the
[Figure 2: line plots of held-out log likelihood versus number of data points seen (×10^5), for a time-ordered stream and a random time-permuted stream, on Ivory Coast locations, Geolife locations, and New York Times, comparing Population-VB (best α), Streaming-VB [8], and SVI [15].]
Figure 2: Held-out predictive log likelihood for Dirichlet process mixture models on large-scale
streamed location and text data sets. Note that we apply Gaussian likelihoods on the Geolife dataset,
so the reported predictive performance is measured by probability density. We chose the best α for
each population-VB curve.
[Figure 3: held-out log likelihood versus log10(α) for LDA (New York Times, Science, Twitter) and for the DP mixture (Ivory Coast locations, Geolife locations, New York Times), with a marker at α = true N.]
Figure 3: We show the sensitivity of population-VB to the hyperparameter α (based on final log
likelihoods in the time-ordered stream) and find that the best setting of α often differs from the true
number of data points (which may not be known in any case in practice).
data to be modeled. The Ivory Coast location data contains 18M discrete cell tower locations for 500K
users recorded over 6 months [6]. The Microsoft Geolife dataset contains 35K latitude-longitude
GPS locations for 182 users over 5 years. For both data sets, our observations reflect down-sampling
the data to ensure that each individual is seen no more than once every 15 minutes.
Results. We compare population VB with SVI [14] and SVB [8] for LDA [8] and DP mixtures [22].
SVB updates the variational approximation of the global parameter using density filtering with
exponential families. The complexity of the approximation remains fixed as the expected sufficient
statistics from minibatches observed in a stream are combined with those of the current approximation.
(Here we give the final results. We include details of how we set and fit hyperparameters below.)
We measure model fitness by evaluating the average predictive log likelihood on held-out data. This
involves splitting held-out observations (that were not involved in the posterior approximation of β)
into two equal halves, inferring the local component distribution based on the first half, and testing
with the second half [14, 26]. For DP-mixtures, we condition on the observed hour of the week and
predict the geographic location of the held-out data point.
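For the toy Gaussian model sketched earlier, this evaluation reduces to scoring held-out points under the predictive density implied by q; for LDA and DP mixtures the extra split-half step described above is also needed. A minimal, hypothetical sketch of the scoring step:

```python
import numpy as np
from scipy.stats import norm

def heldout_predictive_ll(heldout, post_mean, post_var, noise_var=1.0):
    """Average predictive log likelihood under q(beta) = N(post_mean, post_var)
    for the Gaussian-mean toy model: the predictive is N(post_mean, post_var + noise_var).
    For models with local variables, infer locals on the first half of each
    held-out item and score only the second half, as described in the text."""
    return norm.logpdf(heldout, loc=post_mean,
                       scale=np.sqrt(post_var + noise_var)).mean()
```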
In standard offline studies, the held-out set is randomly selected from the data. With streams, however,
we test on the next 10K documents (for New York Times, Science), 500K tweets (for Twitter), or 25K
locations (on Geo data). This is a valid held-out set because the data ahead of the current position in
the stream have not yet been seen by the inference algorithms.
Figure 1 shows the performance for LDA. We looked at two types of streams: one in which the data
appear in order and the other in which they have been permuted (i.e., an exchangeable stream). The
time permuted stream reveals performance when each data minibatch is safely assumed to be an
i.i.d. sample from F; this results in smoother improvements to predictive likelihood. On our data, we
found that population VB outperformed SVI and SVB on two of the data sets and outperformed SVI
on all of the data. SVB performed better than population VB on Twitter.
Figure 2 shows a similar study for DP mixtures. We analyzed the human mobility data and the
New York Times. (Ref. [22] also analyzed the New York Times.) On these data population VB
outperformed SVB and SVI in all settings.³
Hyperparameters. Unlike traditional Bayesian methods, the data set size α is a hyperparameter to
population VB. It helps control the variance of the population posterior. Figure 3 reports
sensitivity to α for all studies (for the time-ordered stream). These plots indicate that the optimal
setting of α is often different from the true number of data points; the best-performing population-posterior
variance is not necessarily the one implied by the data. The other hyperparameters to our
experiments are reported in Appendix C.
4 Conclusions and Future Work
We introduced the population posterior, a distribution over latent variables that combines traditional
Bayesian inference with the frequentist idea of the population distribution. With this idea, we derived
population variational Bayes, an efficient algorithm for probabilistic inference on streams. On two
complex Bayesian models and several large data sets, we found that population variational Bayes
usually performs better than existing approaches to streaming inference.
In this paper, we made no assumptions about the structure of the population distribution. Making
assumptions, such as the ability to obtain streams conditional on queries, can lead to variants of
our algorithm that learn which data points to see next during inference. Finally, understanding the
theoretical properties of the population posterior is also an avenue of interest.
Acknowledgments. We thank Allison Chaney, John Cunningham, Alp Kucukelbir, Stephan Mandt,
Peter Orbanz, Theo Weber, Frank Wood, and the anonymous reviewers for their comments. This work
is supported by NSF IIS-0745520, IIS-1247664, IIS-1009542, ONR N00014-11-1-0651, DARPA
FA8750-14-2-0009, N66001-15-C-4032, NDSEG, Facebook, Adobe, Amazon, and the Siebel Scholar
and John Templeton Foundations.
³ Though our purpose is to compare algorithms, we make one note about a specific data set. The predictive
accuracy for the Ivory Coast data set plummets after 14M data points. This is because of the data collection
policy. For privacy reasons, the data set provides the cell tower locations of a randomly selected cohort of 50K
users every 2 weeks [6]. The new cohort at 14M data points behaves differently to previous cohorts in a way that
affects predictive performance. However, both algorithms steadily improve after this shock.
References
[1] A. Ahmed, Q. Ho, C. H. Teo, J. Eisenstein, E. P. Xing, and A. J. Smola. Online inference for the infinite topic-cluster model: Storylines from streaming text. In International Conference on Artificial Intelligence and Statistics, pages 101–109, 2011.
[2] S. I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[3] J. M. Bernardo and A. F. Smith. Bayesian Theory, volume 405. John Wiley & Sons, 2009.
[4] D. M. Blei, M. I. Jordan, et al. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–143, 2006.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
[6] V. D. Blondel, M. Esch, C. Chan, F. Clérot, P. Deville, E. Huens, F. Morlot, Z. Smoreda, and C. Ziemlicki. Data for development: the D4D challenge on mobile phone data. arXiv preprint arXiv:1210.0137, 2012.
[7] L. Bottou. Online learning and stochastic approximations. Online Learning in Neural Networks, 17:9, 1998.
[8] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013.
[9] A. Doucet, S. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3):197–208, 2000.
[10] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. CRC Press, 1994.
[11] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90(430):577–588, 1995.
[12] Z. Ghahramani and H. Attias. Online variational Bayesian learning. In Slides from talk presented at NIPS 2000 Workshop on Online Learning, 2000.
[13] M. D. Hoffman and D. M. Blei. Structured stochastic variational inference. In International Conference on Artificial Intelligence and Statistics, 2015.
[14] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[15] A. Honkela and H. Valpola. On-line variational Bayesian learning. In 4th International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803–808, 2003.
[16] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[17] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[18] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, pages 805–813, 2014.
[19] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[20] L. K. Saul and M. I. Jordan. Exploiting tractable substructures in intractable networks. Advances in Neural Information Processing Systems, pages 486–492, 1996.
[21] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[22] A. Tank, N. Foti, and E. Fox. Streaming variational inference for Bayesian nonparametric mixture models. In International Conference on Artificial Intelligence and Statistics, 2015.
[23] L. Theis and M. D. Hoffman. A trust-region method for stochastic variational inference with applications to streaming data. In International Conference on Machine Learning, 2015.
[24] M. Titsias and M. Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning, pages 1971–1979, 2014.
[25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, Jan. 2008.
[26] H. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In International Conference on Machine Learning, 2009.
[27] L. Yao, D. Mimno, and A. McCallum. Efficient methods for topic model inference on streaming document collections. In Conference on Knowledge Discovery and Data Mining, pages 937–946. ACM, 2009.
Probabilistic Curve Learning: Coulomb Repulsion
and the Electrostatic Gaussian Process
David Dunson
Department of Statistics
Duke University
Durham, NC, USA, 27705
dunson@stat.duke.edu
Ye Wang
Department of Statistics
Duke University
Durham, NC, USA, 27705
eric.ye.wang@duke.edu
Abstract
Learning of low dimensional structure in multidimensional data is a canonical
problem in machine learning. One common approach is to suppose that the observed data are close to a lower-dimensional smooth manifold. There are a rich
variety of manifold learning methods available, which allow mapping of data
points to the manifold. However, there is a clear lack of probabilistic methods
that allow learning of the manifold along with the generative distribution of the
observed data. The best attempt is the Gaussian process latent variable model
(GP-LVM), but identifiability issues lead to poor performance. We solve these
issues by proposing a novel Coulomb repulsive process (Corp) for locations of
points on the manifold, inspired by physical models of electrostatic interactions
among particles. Combining this process with a GP prior for the mapping function
yields a novel electrostatic GP (electroGP) process. Focusing on the simple case
of a one-dimensional manifold, we develop efficient inference algorithms, and illustrate substantially improved performance in a variety of experiments including
filling in missing frames in video.
1 Introduction
There is broad interest in learning and exploiting lower-dimensional structure in high-dimensional
data. A canonical case is when the low dimensional structure corresponds to a p-dimensional smooth
Riemannian manifold M embedded in the d-dimensional ambient space Y of the observed data y .
Assuming that the observed data are close to M, it becomes of substantial interest to learn M along
with the mapping ? from M ? Y. This allows better data visualization and for one to exploit the
lower-dimensional structure to combat the curse of dimensionality in developing efficient machine
learning algorithms for a variety of tasks.
The current literature on manifold learning focuses on estimating the coordinates x ∈ M corresponding to y by optimization, finding x's on the manifold M that preserve distances between the
corresponding y's in Y. There are many such methods, including Isomap [1], locally-linear embedding [2] and Laplacian eigenmaps [3]. Such methods have seen broad use, but have some clear
limitations relative to probabilistic manifold learning approaches, which allow explicit learning of
M, the mapping ? and the distribution of y .
There has been some considerable focus on probabilistic models, which would seem to allow learning of M and ?. Two notable examples are mixtures of factor analyzers (MFA) [4, 5] and Gaussian
process latent variable models (GP-LVM) [6]. Bayesian GP-LVM [7] is a Bayesian formulation
of GP-LVM which automatically learns the intrinsic dimension p and handles missing data. Such
approaches are useful in exploiting lower-dimensional structure in estimating the distribution of y ,
but unfortunately have critical problems in terms of reliable estimation of the manifold and mapping
1
function. MFA is not smooth in approximating the manifold with a collage of lower dimensional
hyper-planes, and hence we focus further discussion on Bayesian GP-LVM. Similar problems occur
for MFA and other probabilistic manifold learning methods.
In general form, for the ith data vector, Bayesian GP-LVM lets y_i = φ(x_i) + ε_i, with φ assigned
a Gaussian process prior, x_i generated from a pre-specified Gaussian or uniform distribution over
a p-dimensional space, and the residual ε_i drawn from a d-dimensional Gaussian centered on zero
with diagonal or spherical covariance. While this model seems appropriate for manifold learning,
identifiability problems lead to extremely poor performance in estimating M and φ. To give an
intuition for the root cause of the problem, consider the case in which the x_i are drawn independently
from a uniform distribution over [0, 1]^p. The model is so flexible that we could fit the training data
y_i, for i = 1, ..., n, just as well if we did not use the entire hypercube but instead placed all the x_i values
in a small subset of [0, 1]^p. The uniform prior will not discourage this tendency to not spread out the
latent coordinates, which unfortunately has disastrous consequences, illustrated in our experiments.
The structure of the model is just too flexible, and further constraints are needed. Replacing the
uniform with a standard Gaussian does not solve the problem. Constrained likelihood methods [8, 9]
mitigate the issue to some extent, but do not correspond to a proper Bayesian generative model.
To make the problem more tractable, we focus on the case in which M is a one-dimensional smooth
compact manifold. Assume y_i = φ(x_i) + ε_i, with ε_i Gaussian noise, and φ : (0, 1) → M a smooth
mapping such that φ_j(·) ∈ C^∞ for j = 1, ..., d, where φ(x) = (φ_1(x), ..., φ_d(x)). We focus on
finding a good estimate of φ, and hence the manifold, via a probabilistic learning framework. We
refer to this problem as probabilistic curve learning (PCL) motivated by the principal curve literature
[10]. PCL differs substantially from the principal curve learning problem, which seeks to estimate a
non-linear curve through the data, which may be very different from the true manifold.
Our proposed approach builds on GP-LVM; in particular, our primary innovation is to generate the
latent coordinates x_i from a novel repulsive process. There is an interesting literature on repulsive
point process modeling, ranging from various Matérn processes [11] to the determinantal point process (DPP) [12]. In our very different context, these processes lead to unnecessary complexity,
computationally and otherwise, and we propose a new Coulomb repulsive process (Corp) motivated by Coulomb's law of electrostatic interaction between electrically charged particles. Using
Corp for the latent positions has the effect of strongly favoring spread out locations on the manifold,
effectively solving the identifiability problem mentioned above for the GP-LVM. We refer to the GP
with Corp on the latent positions as an electrostatic GP (electroGP).
The remainder of the paper is organized as follows. The Coulomb repulsive process is proposed
in §2 and the electroGP is presented in §3, with a comparison between electroGP and GP-LVM
demonstrated via simulations. The performance is further evaluated via real-world datasets in §4.
A discussion is reported in §5.
2 Coulomb repulsive process
2.1 Formulation
Definition 1. A univariate process is a Coulomb repulsive process (Corp) if and only if, for every
finite set of indices t_1, ..., t_k in the index set N_+,

X_{t_1} ~ Unif(0, 1),
p( X_{t_i} | X_{t_1}, ..., X_{t_{i−1}} ) ∝ ∏_{j=1}^{i−1} sin^{2r}( πX_{t_i} − πX_{t_j} ) 1_{ X_{t_i} ∈ [0,1] },  i > 1,    (1)

where r > 0 is the repulsive parameter. The process is denoted X_t ~ Corp(r).
The process is named by analogy with electrostatic physics, where, by Coulomb's law, two positive charges repel each other with a force proportional to the reciprocal of their squared
distance. Letting d(x, y) = sin|πx − πy|, the above conditional probability of X_{t_i} given X_{t_j} is
proportional to d^{2r}(X_{t_i}, X_{t_j}), shrinking the probability exponentially fast as two states get closer to
each other. Note that the periodicity of the sine function eliminates the edges of [0, 1], making the
electrostatic energy field homogeneous everywhere on [0, 1].
Several observations related to the Kolmogorov extension theorem can be made immediately, ensuring
that Corp is well defined. Firstly, the conditional density defined in (1) is positive and integrable,
Figure 1: Each facet consists of 5 rows, with each row representing a 1-dimensional scatterplot of
a random realization of Corp under the indicated n and r.
since the X_t's are constrained to a compact interval and sin^{2r}(·) is positive and bounded. Hence, the
finite-dimensional distributions are well defined.
Secondly, the joint finite-dimensional p.d.f. for X_{t_1}, ..., X_{t_k} can be derived as

p( X_{t_1}, ..., X_{t_k} ) ∝ ∏_{i<j} sin^{2r}( πX_{t_i} − πX_{t_j} ).    (2)
As can be easily seen, any permutation of t_1, ..., t_k results in the same joint finite-dimensional distribution;
hence this finite-dimensional distribution is exchangeable.
Thirdly, it can be easily checked that, for any finite set of indices t_1, ..., t_{k+m},

p( X_{t_1}, ..., X_{t_k} ) = ∫_0^1 ··· ∫_0^1 p( X_{t_1}, ..., X_{t_k}, X_{t_{k+1}}, ..., X_{t_{k+m}} ) dX_{t_{k+1}} ··· dX_{t_{k+m}},

by observing that

p( X_{t_1}, ..., X_{t_{k+m}} ) = p( X_{t_1}, ..., X_{t_k} ) ∏_{j=1}^{m} p( X_{t_{k+j}} | X_{t_1}, ..., X_{t_{k+j−1}} ).
2.2 Properties
Assuming X_t, t ∈ N_+, is a realization from Corp, the following lemmas hold.
Lemma 1. For any n ∈ N_+, any 1 ≤ i ≤ n and any ε > 0, we have

p( X_n ∈ B(X_i, ε) | X_1, ..., X_{n−1} ) ≤ 2π² ε^{2r+1} / (2r + 1),

where B(X_i, ε) = { X ∈ (0, 1) : d(X, X_i) ≤ ε }.
Lemma 2. For any n ∈ N_+, the p.d.f. (2) of X_1, ..., X_n (due to exchangeability, we can assume
X_1 ≤ X_2 ≤ ··· ≤ X_n without loss of generality) is maximized when and only when

d( X_i, X_{i−1} ) = sin( π / (n + 1) )  for all 2 ≤ i ≤ n.
According to Lemmas 1 and 2, Corp nudges the x's to spread out within [0, 1], and
penalizes the case when two x's get too close. Figure 1 presents some simulations from Corp.
This nudge becomes stronger as the sample size n grows, or as the repulsive parameter r grows.
These properties make Corp ideal for strongly favoring spread-out latent positions across the
manifold, avoiding the gaps and the clustering in small regions that plague GP-LVM-type methods. The
proofs of the lemmas and a simulation algorithm based on rejection sampling can be found in the
supplement.
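The supplement's sampler is not reproduced here, but since each sin^{2r} factor in (1) is at most 1, the unnormalized conditional density is bounded by the uniform density, and a simple rejection sampler is immediate. A hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_corp(n, r):
    """Sequentially sample X_1, ..., X_n from Corp(r) by rejection.

    The unnormalized conditional density prod_j |sin(pi*u - pi*x_j)|^{2r}
    is <= 1, so Unif(0, 1) proposals accepted with that probability are
    exact draws from the conditional in (1)."""
    x = [rng.uniform()]
    for _ in range(1, n):
        while True:
            u = rng.uniform()
            accept_prob = np.prod(np.abs(np.sin(np.pi * u - np.pi * np.asarray(x))) ** (2 * r))
            if rng.uniform() < accept_prob:
                x.append(u)
                break
    return np.array(x)

print(np.sort(sample_corp(10, r=1)))   # points tend toward near-equal spacing
```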
2.3 Multivariate Corp
Definition 2. A p-dimensional multivariate process is a Coulomb repulsive process if and only if, for
every finite set of indices t_1, ..., t_k in the index set N_+,

X_{m,t_1} ~ Unif(0, 1),  for m = 1, ..., p,
p( X_{t_i} | X_{t_1}, ..., X_{t_{i−1}} ) ∝ ∏_{j=1}^{i−1} [ Σ_{m=1}^{p+1} ( Y_{m,t_i} − Y_{m,t_j} )² ]^r  1_{ X_{t_i} ∈ (0,1)^p },  i > 1,
where the p-dimensional spherical coordinates X_t have been converted into the (p+1)-dimensional Cartesian coordinates Y_t:

Y_{1,t} = cos(2πX_{1,t})
Y_{2,t} = sin(2πX_{1,t}) cos(2πX_{2,t})
...
Y_{p,t} = sin(2πX_{1,t}) sin(2πX_{2,t}) ··· sin(2πX_{p−1,t}) cos(2πX_{p,t})
Y_{p+1,t} = sin(2πX_{1,t}) sin(2πX_{2,t}) ··· sin(2πX_{p−1,t}) sin(2πX_{p,t}).
The multivariate Corp maps the hyper-cube (0, 1)^p through a spherical coordinate system to a unit
hyper-ball in R^{p+1}. The repulsion is then defined as the reciprocal of the squared Euclidean distances
between these mapped points in R^{p+1}. Based on this construction of the multivariate Corp, a straightforward generalization of the electroGP model to a p-dimensional manifold, p ≥ 1, can be made.
3 Electrostatic Gaussian Process
3.1 Formulation and Model Fitting
In this section, we propose the electrostatic Gaussian process (electroGP) model. Assuming n d-dimensional data vectors y_1, ..., y_n are observed, the model is given by

y_{i,j} = φ_j(x_i) + ε_{i,j},   x_i ~ Corp(r),   ε_{i,j} ~ N(0, σ_j²),   i = 1, ..., n,
φ_j ~ GP(0, K^j),   j = 1, ..., d,    (3)

where y_i = (y_{i,1}, ..., y_{i,d}) for i = 1, ..., n and GP(0, K^j) denotes a Gaussian process prior with
covariance function K^j(x, y) = a_j exp{ −b_j (x − y)² }.
Letting Θ = (σ_1², a_1, b_1, ..., σ_d², a_d, b_d) denote the model hyperparameters, model (3) can be
fitted by maximizing the joint posterior distribution of x = (x_1, ..., x_n) and Θ,

( x̂, Θ̂ ) = argmax_{x, Θ} p( x | y_{1:n}, Θ, r ),    (4)

where the repulsive parameter r is fixed and can be tuned using cross-validation. Based on our
experience, setting r = 1 always yields good results, and hence is used as the default throughout this
paper. For simplicity of notation, r is omitted in the remainder. The above optimization
problem can be rewritten as

( x̂, Θ̂ ) = argmax_{x, Θ} { ℓ( y_{1:n} | x, Θ ) + log π(x) },
Hence the Corp prior can also be viewed as a repulsive constraint in the optimization problem.
?
?
It can be easily checked that log ?pxi ? xj q ? ?8, for any i and j. Starting at initial values
x0 , the optimizer will converge to a local solution that maintains the same order as the initial x0 ?s.
We refer to this as the self-truncation property. We find that conditionally on the starting order,
the optimization algorithm converges rapidly and yields stable results. Although the x?s are not
identifiable, since the target function (4) is invariant under rotation, a unique solution does exist
conditionally on the specified order.
Self-truncation raises the necessity of finding good initial values, or at least a good initial ordering
for x?s. Fortunately, in our experience, simply applying any standard manifold learning algorithm
to estimate x0 in a manner that preserves distances in Y yields good performance. We find very
similar results using LLE, Isomap and eigenmap, but focus on LLE in all our implementations. Our
algorithm can be summarized as follows.
1. Learn the one-dimensional coordinates x_0 by your favorite distance-preserving manifold learning algorithm and rescale x_0 into (0, 1);
Figure 2: Visualization of three simulation experiments where the data (triangles) are simulated
from a bivariate Gaussian (left), a rotated parabola with Gaussian noise (middle) and a spiral with
Gaussian noise (right). The dotted shading denotes the 95% posterior predictive uncertainty band
of (y_1, y_2) under electroGP. The black curve denotes the posterior mean curve under electroGP and
the red curve denotes the P-curve. The three dashed curves denote three realizations from GP-LVM.
The middle panel shows a zoom-in region and the full figure is shown in the embedded box.
2. Solve Θ_0 = argmax_Θ p( y_{1:n} | x_0, Θ, r ) using scaled conjugate gradient descent (SCG);
3. Using SCG, with x_0 and Θ_0 as the initial values, solve for x̂ and Θ̂ w.r.t. (4).
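For intuition, here is a minimal numpy/scipy sketch of the objective in step 3, optimizing only the latent coordinates x with the kernel hyperparameters held fixed. Fixing a_j = 1, a shared b and σ², and r = 1 is our simplification; the paper optimizes Θ jointly by SCG.

```python
import numpy as np
from scipy.optimize import minimize

def log_posterior(x, Y, b=20.0, sig2=0.01, r=1):
    """GP marginal log-likelihood of each output dimension plus the Corp
    log-prior log pi(x) = 2r * sum_{i<j} log|sin(pi x_i - pi x_j)| (up to constants)."""
    n, d = Y.shape
    K = np.exp(-b * (x[:, None] - x[None, :]) ** 2) + sig2 * np.eye(n)
    sign, logdet = np.linalg.slogdet(K)
    ll = -0.5 * d * logdet
    for j in range(d):
        ll -= 0.5 * Y[:, j] @ np.linalg.solve(K, Y[:, j])
    i, j = np.triu_indices(n, k=1)
    log_prior = 2 * r * np.sum(np.log(np.abs(np.sin(np.pi * x[i] - np.pi * x[j])) + 1e-12))
    return ll + log_prior

# Toy data on a noisy half-circle; equally spaced initial values stand in for
# the LLE initialization of step 1.
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, np.pi, 40))
Y = np.column_stack([np.cos(t), np.sin(t)]) + 0.03 * rng.normal(size=(40, 2))
x0 = np.linspace(0.05, 0.95, 40)

res = minimize(lambda x: -log_posterior(x, Y), x0, method="L-BFGS-B",
               bounds=[(1e-3, 1 - 1e-3)] * 40)
x_hat = res.x
```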
3.2 Posterior Mean Curve and Uncertainty Bands
In this subsection, we describe how to obtain a point estimate of the curve φ and how to characterize its uncertainty under electroGP. Such point and interval estimation is as of yet unsolved in the literature, and is of critical importance. In particular, it is difficult to interpret a single point
estimate without some quantification of how uncertain that estimate is. We use the posterior mean
curve φ̂ = E( φ | x̂, y_{1:n}, Θ̂ ) as the Bayes optimal estimator under squared error loss. As a curve, φ̂
has infinitely many dimensions. Hence, in order to store and visualize it, we discretize [0, 1] to obtain n*
equally spaced grid points x*_i = (i − 1)/(n* − 1) for i = 1, ..., n*. Using basic multivariate Gaussian theory,
the following expectation is easy to compute:

( φ̂(x*_1), ..., φ̂(x*_{n*}) ) = E[ ( φ(x*_1), ..., φ(x*_{n*}) ) | x̂, y_{1:n}, Θ̂ ].

Then φ̂ is approximated by linear interpolation through { ( x*_i, φ̂(x*_i) ) }_{i=1}^{n*}. For ease of notation, we use
φ̂ to denote this interpolated piecewise-linear curve later on. Examples can be found in Figure 2,
where all the mean curves (black solid) were obtained using the above method.
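A hypothetical sketch of this computation under the same fixed-kernel assumptions as the earlier fitting sketch (the standard GP predictive-mean formula, applied to each output dimension):

```python
import numpy as np

def posterior_mean_curve(x_hat, Y, n_star=200, b=20.0, sig2=0.01):
    """Evaluate the posterior mean curve on an equally spaced grid:
    phi_hat(x*) = K(x*, x_hat) (K(x_hat, x_hat) + sig2 I)^{-1} Y, per dimension."""
    n = len(x_hat)
    grid = np.linspace(0.0, 1.0, n_star)
    K = np.exp(-b * (x_hat[:, None] - x_hat[None, :]) ** 2) + sig2 * np.eye(n)
    K_star = np.exp(-b * (grid[:, None] - x_hat[None, :]) ** 2)
    mean = K_star @ np.linalg.solve(K, Y)   # (n_star, d) array of phi_hat(x*_i)
    return grid, mean
```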
Estimating an uncertainty region including data points with γ probability is much more challenging.
We address this problem with the following heuristic algorithm.
Step 1. Draw x*_i from Unif(0, 1) independently for i = 1, ..., n_1;
Step 2. Sample the corresponding y*_i from the posterior predictive distribution conditional on these
latent coordinates, p( y*_1, ..., y*_{n_1} | x*_{1:n_1}, x̂, y_{1:n}, Θ̂ );
Step 3. Repeat steps 1–2 n_2 times, collecting all n_1 × n_2 samples y*;
Step 4. Find the shortest distances from these y* to the posterior mean curve φ̂, and find the
γ-quantile of these distances, denoted by δ;
Step 5. Moving a radius-δ ball along the entire curve φ̂([0, 1]), the envelope of the moving trace
defines the γ% uncertainty band.
Note that step 4 can be easily solved since φ̂ is a piecewise-linear curve. Examples can be found in
Figure 2, where the 95% uncertainty bands (dotted shading) were found using the above algorithm.
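A compact sketch of steps 1–4 under the same assumed kernel, reusing posterior_mean_curve from the previous sketch. It approximates the distance to the curve by the distance to a dense grid of points on φ̂; exact point-to-segment distances are easy for a piecewise-linear curve, but the grid version keeps the sketch short.

```python
import numpy as np

def band_radius(x_hat, Y, gamma=0.95, n1=100, n2=50, b=20.0, sig2=0.01, seed=5):
    rng = np.random.default_rng(seed)
    grid, mean_curve = posterior_mean_curve(x_hat, Y, n_star=500, b=b, sig2=sig2)
    n = len(x_hat)
    K = np.exp(-b * (x_hat[:, None] - x_hat[None, :]) ** 2) + sig2 * np.eye(n)
    dists = []
    for _ in range(n2):
        xs = rng.uniform(size=n1)                              # step 1
        Ks = np.exp(-b * (xs[:, None] - x_hat[None, :]) ** 2)
        mu = Ks @ np.linalg.solve(K, Y)                        # predictive mean at xs
        var = 1.0 + sig2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        ys = mu + np.sqrt(np.maximum(var, 0))[:, None] * rng.normal(size=mu.shape)  # step 2
        # step 4 (approximate): distance from each y* to the discretized mean curve
        d = np.min(np.linalg.norm(ys[:, None, :] - mean_curve[None, :, :], axis=2), axis=1)
        dists.extend(d)
    return np.quantile(dists, gamma)                           # the band radius delta
```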
Figure 3: The zoom-in of the spiral case 3 (left) and the corresponding coordinate function φ_2(x)
under electroGP (middle) and GP-LVM (right). The gray shading denotes the heatmap of the posterior
distribution of (x, y_2) and the black curve denotes the posterior mean.
3.3 Simulation
In this subsection, we compare the performance of electroGP with GP-LVM and principal curves (P-curve) in simple simulation experiments. 100 data points were sampled from each of the following
three 2-dimensional distributions: a Gaussian distribution, a rotated parabola with Gaussian noise,
and a spiral with Gaussian noise. ElectroGP and GP-LVM were fitted using the same initial values
obtained from LLE, and the P-curve was fitted using the princurve package in R.
The performance of the three methods is compared in Figure 2. The dotted shading represents a
95% posterior predictive uncertainty band for a new data point y_{n+1} under the electroGP model.
This illustrates that electroGP obtains an excellent fit to the data, provides a good characterization of
uncertainty, and accurately captures the concentration near a 1-d manifold embedded in two dimensions. The P-curve is plotted in red. The extremely poor representation of the P-curve is as expected
based on our experience fitting principal curves in a wide variety of cases; the behavior is highly
unstable. In the first two cases, the P-Curve corresponds to a smooth curve through the center of
the data, but for the more complex manifold in the third case, the P-Curve is an extremely poor
representation. This tendency to cut across large regions of near zero data density for highly curved
manifolds is common for P-Curve.
For GP-LVM, we show three random realizations (dashed) from the posterior in each case. It is
clear that the results are completely unreliable, with the tendency being to place part of the curve
where the data have high density, while also erratically adding extra curve segments outside the range of the data.
The GP-LVM model does not appropriately penalize such extra parts, and the very poor performance
shown in the top right of Figure 2 is not unusual. We find that electroGP in general performs
dramatically better than competitors. More simulation results can be found in the supplement. To
better illustrate the results for the spiral case 3, we zoom in and present some further comparisons
of GP-LVM and electroGP in Figure 3.
As can be seen in the right panel, optimizing the x's without any constraint results in "holes" on [0, 1].
The trajectories of the Gaussian process over these holes become arbitrary, as illustrated by the
three realizations. This arbitrariness is further projected into the input space Y, resulting in
the erratic curve observed in the left panel. Failing to have well spread out x's over [0, 1] not only
causes trouble in learning the curve, but also makes the posterior predictive distribution of y_{n+1}
overly diffuse near these holes, e.g., the large gray shaded area in the right panel. The middle panel
shows that electroGP fills in these holes by softly constraining the latent coordinates x to spread
out while still allowing the flexibility of moving them around to find a smooth curve snaking through
them.
3.4 Prediction
Broad prediction problems can be formulated as the following missing-data problem. Assume m new
data vectors z_i, for i = 1, ..., m, are partially observed and the missing entries are to be filled in. Letting
z^O_i denote the observed part and z^M_i denote the missing part, the conditional distribution of
[Figure 4 image panels, labeled: Original, Observed, electroGP, GP-LVM.]
Figure 4: Left panel: three randomly selected reconstructions using electroGP compared with
those using Bayesian GP-LVM. Right panel: another three reconstructions from electroGP, with
the first row presenting the original images, the second row presenting the observed images, and the
third row presenting the reconstructions.
the missing data is given by

p( z^M_{1:m} | z^O_{1:m}, x̂, y_{1:n}, Θ̂ ) = ∫ ··· ∫ p( z^M_{1:m} | x^z_{1:m}, x̂, y_{1:n}, Θ̂ ) p( x^z_{1:m} | z^O_{1:m}, x̂, y_{1:n}, Θ̂ ) dx^z_1 ··· dx^z_m,

where x^z_i is the corresponding latent coordinate of z_i, for i = 1, ..., m. However, dealing with
( x^z_1, ..., x^z_m ) jointly is intractable due to the high non-linearity of the Gaussian process, which
motivates the following approximation:
p( x^z_{1:m} | z^O_{1:m}, x̂, y_{1:n}, Θ̂ ) ≈ ∏_{i=1}^{m} p( x^z_i | z^O_i, x̂, y_{1:n}, Θ̂ ).

The approximation assumes ( x^z_1, ..., x^z_m ) to be conditionally independent. This assumption is more
accurate if x̂ is well spread out on (0, 1), as is favored by Corp.
The univariate distribution p( x^z_i | z^O_i, x̂, y_{1:n}, Θ̂ ), though still intractable, is much easier to deal with.
Depending on the purpose of the application, either a Metropolis–Hastings algorithm can be adopted
to sample from the predictive distribution, or an optimization method can be used to find the MAP
of the x^z's. The details of both algorithms can be found in the supplement.
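A hypothetical sketch of the MAP route for a single partially observed vector, using a grid search over (0, 1) as a stand-in for a gradient method, reusing the fixed kernel from the earlier sketches, and omitting the Corp prior terms for brevity:

```python
import numpy as np

def impute(z_obs, obs_dims, x_hat, Y, b=20.0, sig2=0.01, n_grid=1000):
    """MAP imputation: pick the latent coordinate x^z maximizing the Gaussian
    predictive log-density of the observed dimensions, then fill the missing
    dimensions with the predictive mean at that coordinate."""
    n, d = Y.shape
    K = np.exp(-b * (x_hat[:, None] - x_hat[None, :]) ** 2) + sig2 * np.eye(n)
    Kinv_Y = np.linalg.solve(K, Y)
    grid = np.linspace(1e-3, 1 - 1e-3, n_grid)
    Ks = np.exp(-b * (grid[:, None] - x_hat[None, :]) ** 2)
    mu = Ks @ Kinv_Y                                          # (n_grid, d)
    var = 1.0 + sig2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    var = np.maximum(var, 1e-12)
    # Gaussian log-density of the observed coordinates at each grid point
    resid2 = np.sum((mu[:, obs_dims] - z_obs) ** 2, axis=1)
    loglik = -0.5 * resid2 / var - 0.5 * len(obs_dims) * np.log(var)
    best = np.argmax(loglik)
    z_full = mu[best].copy()
    z_full[obs_dims] = z_obs                                  # keep the observed entries
    return grid[best], z_full
```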
4 Experiments
Video inpainting. 200 consecutive frames (of size 76 × 101, RGB color) [13] were collected
from a video of a teapot rotating 180°. Clearly these images roughly lie on a curve. 190 of the frames
were assumed to be fully observed in the natural time order of the video, while the other 10 frames
were given without any ordering information. Moreover, half of the pixels of these 10 frames were
missing. The electroGP was fitted based on the other 190 frames and was used to reconstruct the
broken frames and impute the reconstructed frames into the whole frame series with the correct
order. The reconstruction results are presented in Figure 4. As can be seen, the reconstructed
images are almost indistinguishable from the original ones. Note that these 10 frames were also
correctly imputed into the video with respect to their latent coordinates x. ElectroGP was compared
with Bayesian GP-LVM [7] with the latent dimension set to 1. The reconstruction mean square
error (MSE) using electroGP is 70.62, compared to 450.75 using GP-LVM. The comparison is
also presented in Figure 4. It can be seen that electroGP outperforms Bayesian GP-LVM in high-resolution precision (e.g., how well they reconstruct the handle of the teapot) since it obtains a
much tighter and more precise estimate of the manifold.
Super-resolution & denoising. 100 consecutive frames (of size 100 × 100, gray-scale) were
collected from a video of a shrinking shockwave. Frames 51 to 55 were assumed completely missing
and the other 95 frames were observed, in the original time order, with strong white noise. The
shockwave is homogeneous in all directions from the center; hence, the frames roughly lie on a
curve. The electroGP was applied for two tasks: 1. frame denoising; 2. improving resolution by
interpolating frames in between the existing frames. Note that the second task is hard since there are
[Figure 5 image panels, labeled: Original, Noisy, electroGP, NLM, IsD (row 1); Original, electroGP, residual images for electroGP and LI (row 2).]
Figure 5: Row 1: from left to right, the original 95th frame, its noisy observation, and its denoised
result by electroGP, NLM and IsD. Row 2: from left to right, the original 53rd frame, its regeneration by electroGP, and the residual image (10 times the absolute error between the imputation and
the original) for electroGP and LI. The blank area denotes its missing observation.
5 consecutive frames missing, and they can be interpolated only if the electroGP correctly learns the
underlying manifold.
The denoising performance was compared with the non-local means filter (NLM) [14] and isotropic
diffusion (IsD) [15]. The interpolation performance was compared with linear interpolation (LI).
The comparison is presented in Figure 5. As can be clearly seen, electroGP greatly outperforms the
other methods since it correctly learned this one-dimensional manifold. To be specific, the denoising
MSE using electroGP is only 1.8 × 10^{-3}, compared to 63.37 using NLM and 61.79 using IsD. The
MSE of reconstructing the entirely missing frame 53 using electroGP is 2 × 10^{-5}, compared to 13
using LI. An online video of the super-resolution result using electroGP can be found at this link¹.
The frames per second (fps) of the generated video under electroGP was tripled compared to the
original one. Though over two thirds of the frames are pure generations from electroGP, this new
video flows quite smoothly. Another noticeable thing is that the 5 missing frames were perfectly
regenerated by electroGP.
5 Discussion
Manifold learning has dramatic importance in many applications where high-dimensional data are
collected with unknown low dimensional manifold structure. While most of the methods focus on
finding lower dimensional summaries or characterizing the joint distribution of the data, there is (to
our knowledge) no reliable method for probabilistic learning of the manifold. This turns out to be
a daunting problem due to major issues with identifiability leading to unstable and generally poor
performance for current probabilistic non-linear dimensionality reduction methods. It is not obvious
how to incorporate appropriate geometric constraints to ensure identifiability of the manifold without
also enforcing overly-restrictive assumptions about its form.
We tackled this problem in the one-dimensional manifold (curve) case and built a novel electrostatic Gaussian process model, based on the general framework of GP-LVM, by introducing a new
Coulomb repulsive process. Both simulations and real-world data experiments showed excellent
performance of the proposed model in accurately estimating the manifold while characterizing uncertainty. Indeed, performance gains relative to competitors were dramatic. The proposed electroGP
is shown to be applicable to many learning problems including video-inpainting, super-resolution
and video-denoising. There are many interesting areas for future study including the development
of efficient algorithms for applying the model for multidimensional manifolds, while learning the
dimension.
¹ https://youtu.be/N1BG220J5Js (this online video contains no information regarding the authors).
References
[1] J. B. Tenenbaum, V. De Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[2] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pages 585–591, 2001.
[4] M. Chen, J. Silva, J. Paisley, C. Wang, D. B. Dunson, and L. Carin. Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds. Signal Processing, IEEE Transactions on, 58(12):6140–6155, 2010.
[5] Y. Wang, A. Canale, and D. B. Dunson. Scalable multiscale density estimation. arXiv preprint arXiv:1410.7692, 2014.
[6] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. The Journal of Machine Learning Research, 6:1783–1816, 2005.
[7] M. Titsias and N. Lawrence. Bayesian Gaussian process latent variable model. The Journal of Machine Learning Research, 9:844–851, 2010.
[8] N. D. Lawrence and J. Quiñonero-Candela. Local distance preservation in the GP-LVM through back constraints. In Proceedings of the 23rd International Conference on Machine Learning, pages 513–520. ACM, 2006.
[9] R. Urtasun, D. J. Fleet, A. Geiger, J. Popović, T. J. Darrell, and N. D. Lawrence. Topologically-constrained latent variable models. In Proceedings of the 25th International Conference on Machine Learning, pages 1080–1087. ACM, 2008.
[10] T. Hastie and W. Stuetzle. Principal curves. Journal of the American Statistical Association, 84(406):502–516, 1989.
[11] V. Rao, R. P. Adams, and D. B. Dunson. Bayesian inference for Matérn repulsive processes. arXiv preprint arXiv:1308.1136, 2013.
[12] J. B. Hough, M. Krishnapur, Y. Peres, et al. Zeros of Gaussian Analytic Functions and Determinantal Point Processes, volume 51. American Mathematical Society, 2009.
[13] K. Q. Weinberger and L. K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In AAAI, volume 6, pages 1683–1686, 2006.
[14] A. Buades, B. Coll, and J. M. Morel. A non-local algorithm for image denoising. In Computer Vision and Pattern Recognition (CVPR 2005), volume 2, pages 60–65. IEEE, 2005.
[15] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 12(7):629–639, 1990.
5,296 | 5,795 | Preconditioned Spectral Descent for Deep Learning
David E. Carlson,1 Edo Collins,2 Ya-Ping Hsieh,2 Lawrence Carin,3 Volkan Cevher2
1 Department of Statistics, Columbia University
2 Laboratory for Information and Inference Systems (LIONS), EPFL
3 Department of Electrical and Computer Engineering, Duke University
Abstract
Deep learning presents notorious computational challenges. These challenges include, but are not limited to, the non-convexity of learning objectives and estimating the quantities needed for optimization algorithms, such as gradients. While we
do not address the non-convexity, we present an optimization solution that exploits
the so far unused "geometry" in the objective function in order to best make use
of the estimated gradients. Previous work attempted similar goals with preconditioned methods in the Euclidean space, such as L-BFGS, RMSprop, and ADAgrad. In stark contrast, our approach combines a non-Euclidean gradient method
with preconditioning. We provide evidence that this combination more accurately
captures the geometry of the objective function compared to prior work. We theoretically formalize our arguments and derive novel preconditioned non-Euclidean
algorithms. The results are promising in both computational time and quality
when applied to Restricted Boltzmann Machines, Feedforward Neural Nets, and
Convolutional Neural Nets.
1
Introduction
In spite of the many great successes of deep learning, efficient optimization of deep networks remains a challenging open problem due to the complexity of the model calculations, the non-convex
nature of the implied objective functions, and their inhomogeneous curvature [6]. It is established
both theoretically and empirically that finding a local optimum in many tasks often gives comparable performance to the global optima [4], so the primary goal is to find a local optimum quickly. It
is speculated that an increase in computational power and training efficiency will drive performance
of deep networks further by utilizing more complicated networks and additional data [14].
Stochastic Gradient Descent (SGD) is the most widespread algorithm of choice for practitioners
of machine learning. However, the objective functions typically found in deep learning problems,
such as feed-forward neural networks and Restricted Boltzmann Machines (RBMs), have inhomogeneous curvature, rendering SGD ineffective. A common technique for improving efficiency is to
use adaptive step-size methods for SGD [25], where each layer in a deep model has an independent
step-size. Quasi-Newton methods have shown promising results in networks with sparse penalties
[16], and factorized second order approximations have also shown improved performance [18]. A
popular alternative to these methods is to use an element-wise adaptive learning rate, which has
shown improved performance in ADAgrad [7], ADAdelta [30], and RMSprop [5].
The foundation of all of the above methods lies in the hope that the objective function can be well-approximated by Euclidean (e.g., Frobenius or ℓ2) norms. However, recent work demonstrated that
the matrix of connection weights in an RBM has a tighter majorization bound on the objective
function with respect to the Schatten-∞ norm compared to the Frobenius norm [1]. A majorization-minimization approach with the non-Euclidean majorization bound leads to an algorithm denoted
as Stochastic Spectral Descent (SSD), which sped up the learning of RBMs and other probabilistic
models. However, this approach does not directly generalize to other deep models, as it can suffer
from loose majorization bounds.
In this paper, we combine recent non-Euclidean gradient methods with element-wise adaptive learning rates, and show their applicability to a variety of models. Specifically, our contributions are:
i) We demonstrate that the objective function in feedforward neural nets is naturally bounded by
the Schatten-∞ norm. This motivates the application of the SSD algorithm developed in [1],
which explicitly treats the matrix parameters with matrix norms as opposed to vector norms.
ii) We develop a natural generalization of adaptive methods (ADAgrad, RMSprop) to the non-Euclidean gradient setting that combines adaptive step-size methods with non-Euclidean gradient methods. These algorithms have robust tuning parameters and greatly improve the convergence and the solution quality of the SSD algorithm via local adaptation. We denote these new
algorithms as RMSspectral and ADAspectral to mark the relationships to Stochastic Spectral
Descent and RMSprop and ADAgrad.
iii) We develop a fast approximation to our algorithm iterates based on the randomized SVD algorithm [9]. This greatly reduces the per-iteration overhead when using the Schatten-∞ norm.
iv) We empirically validate these ideas by applying them to RBMs, deep belief nets, feedforward
neural nets, and convolutional neural nets. We demonstrate major speedups on all models, and
demonstrate improved fit for the RBM and the deep belief net.
We denote vectors as bold lower-case letters, and matrices as bold upper-case letters. Operations ⊙ and ⊘ denote element-wise multiplication and division, and √X the element-wise square root of X. 1 denotes the matrix with all 1 entries. ‖x‖_p denotes the standard ℓ_p norm of x. ‖X‖_{S^p} denotes the Schatten-p norm of X, which is ‖s‖_p with s the singular values of X. ‖X‖_{S^∞} is the largest singular value of X, which is also known as the matrix 2-norm or the spectral norm.
2
Preconditioned Non-Euclidean Algorithms
We first review non-Euclidean gradient descent algorithms in Section 2.1. Section 2.2 motivates and
discusses preconditioned non-Euclidean gradient descent. Dynamic preconditioners are discussed
in Section 2.3, and fast approximations are discussed in Section 2.4.
2.1
Non-Euclidean Gradient Descent
Unless otherwise mentioned, proofs for this section may be found in [13]. Consider the minimization of a closed proper convex function F(x) with Lipschitz gradient ‖∇F(x) − ∇F(y)‖_q ≤ L_p ‖x − y‖_p, ∀x, y, where p and q are dual to each other, and L_p > 0 is the smoothness constant. This Lipschitz gradient implies the following majorization bound, which is useful in optimization:
F(y) ≤ F(x) + ⟨∇F(x), y − x⟩ + (L_p/2) ‖y − x‖_p².   (1)
A natural strategy to minimize F(x) is to iteratively minimize the right-hand side of (1). Defining the #-operator as s# ≜ arg max_x ⟨s, x⟩ − ½‖x‖_p², this approach yields the algorithm:
x_{k+1} = x_k − (1/L_p) [∇F(x_k)]#, where k is the iteration count.   (2)
For p = q = 2, (2) is simply gradient descent, and s# = s. In general, (2) can be viewed as gradient
descent in a non-Euclidean norm.
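As a toy illustration of iterate (2) (our sketch, not from the paper): for the ℓ∞ norm, the #-operator has the closed form s# = ‖s‖₁ sign(s) (given in Section 2.4), and a quadratic objective with a valid L∞ smoothness constant converges under the update below. The objective and constants here are illustrative assumptions.

```python
import numpy as np

def sharp_linf(s):
    # argmax_x <s, x> - 0.5 * ||x||_inf^2  =  ||s||_1 * sign(s)
    return np.linalg.norm(s, 1) * np.sign(s)

A = np.diag([1.0, 2.0, 4.0])         # illustrative quadratic F(x) = 0.5 x'Ax
L_inf = np.abs(A).sum()              # a valid L_infinity smoothness constant
x = np.array([1.0, -0.5, 0.3])

for _ in range(500):
    x = x - (1.0 / L_inf) * sharp_linf(A @ x)   # iterate (2)

print(np.round(x, 4))                # close to the minimizer at the origin
```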
To explore which norm ‖x‖_p leads to the fastest convergence, we note the convergence rate of (2) is F(x_k) − F(x*) = O(L_p ‖x_0 − x*‖_p² / k), where x* is a minimizer of F(·). If we have an L_p such that (1) holds and L_p ‖x_0 − x*‖_p² ≪ L_2 ‖x_0 − x*‖_2², then (2) can lead to superior convergence. One such example is presented in [13], where the authors proved that L_∞ ‖x_0 − x*‖_∞² improves a dimension-dependent factor over gradient descent for a class of problems in computer science. Moreover, they showed that the algorithm in (2) demands very little computational overhead for their problems, and hence ‖·‖_∞ is favored over ‖·‖_2.
[Figure 1 here: three panels plotting s2 against s1 (axes from 0 to 20); legend: ‖·‖²_F and ‖·‖²_{S∞}; panel annotations include "Preconditioned Gradient" and "Norm Shape". See the caption below.]
Figure 1: Updates from parameters W_k for a multivariate logistic regression. (Left) 1st order approximation error at parameter W_k + s1 u1 v1ᵀ + s2 u2 v2ᵀ, with {u1, u2, v1, v2} singular vectors of ∇_W f(W). (Middle) 1st order approximation error at parameter W_k + s1 û1 v̂1ᵀ + s2 û2 v̂2ᵀ, with {û1, û2, v̂1, v̂2} singular vectors of D ⊙ ∇_W f(W), with D a preconditioner matrix. (Right) Shape of the error implied by the Frobenius norm and the Schatten-∞ norm. After preconditioning, the error surface matches the shape implied by the Schatten-∞ norm and not the Frobenius norm.
As noted in [1], for the log-sum-exp function, lse(β) = log Σ_{i=1}^N exp(β_i), the constant L_2 is ≤ 1/2 and Ω(1/log(N)) whereas the constant L_∞ is ≤ 1. If β are (possibly dependent) N zero-mean sub-Gaussian random variables, the convergence for the log-sum-exp objective function is improved by at least N/log²N (see Supplemental Section A.1 for details). As well, non-Euclidean gradient descent can be adapted to the stochastic setting [2].
The log-sum-exp function reoccurs frequently in the cost function of deep learning models. Analyzing the majorization bounds that are dependent on the log-sum-exp function with respect to the model parameters in deep learning reveals majorization functions dependent on the Schatten-∞ norm. This was shown previously for the RBM in [1], and we show a general approach in Supplemental Section A.2 and specific results for feed-forward neural nets in Section 3.2. Hence, we propose to optimize these deep networks with the Schatten-∞ norm.
2.2
Preconditioned Non-Euclidean Gradient Descent
It has been established that the loss functions of neural networks exhibit pathological curvature [19]:
the loss function is essentially flat in some directions, while it is highly curved in others. The regions
of high curvature dominate the step-size in gradient descent. A solution to the above problem is to
rescale the parameters so that the loss function has similar curvature along all directions. The basis
of recent adaptive methods (ADAgrad, RMSprop) is in preconditioned gradient descent, with iterates
x_{k+1} = x_k − ε_k D_k^{−1} ∇F(x_k).   (3)
We restrict without loss of generality the preconditioner D_k to a positive definite diagonal matrix, and ε_k > 0 is a chosen step-size. Letting ⟨y, x⟩_D ≜ ⟨y, Dx⟩ and ‖x‖²_D ≜ ⟨x, x⟩_D, we note that the iteration in (3) corresponds to the minimizer of
F̂(y) ≜ F(x_k) + ⟨∇F(x_k), y − x_k⟩ + (1/(2ε_k)) ‖y − x_k‖²_{D_k}.   (4)
Consequently, for (3) to perform well, F̂(y) has to either be a good approximation or a tight upper bound of F(y), the true function value. This is equivalent to saying that the first order approximation error F(y) − F(x_k) − ⟨∇F(x_k), y − x_k⟩ is better approximated by the scaled Euclidean norm. The
preconditioner Dk controls the scaling, and the choice of Dk depends on the objective function.
As we are motivated to use Schatten-∞ norms for our models, the above reasoning leads us to
consider a variable metric non-Euclidean approximation. For a matrix X, let us denote D to be
an element-wise preconditioner. Note that D is not a diagonal matrix in this case. Because the
operations here are element-wise, this would correspond to the case above with a vectorized form of X and a preconditioner of diag(vec(D)). Let ‖X‖_{D,S∞} = ‖√D ⊙ X‖_{S∞}. We consider the following surrogate of F,
F(Y) ≈ F(X_k) + ⟨∇F(X_k), Y − X_k⟩ + (1/(2ε_k)) ‖Y − X_k‖²_{D_k,S∞}.   (5)
Using the #-operator from Section 2.1, the minimizer of (5) takes the form (see Supplementary Section C for the proof):
X_{k+1} = X_k − ε_k [∇F(X_k) ⊘ √D_k]# ⊘ √D_k.   (6)
We note that classification with a softmax link naturally operates on the Schatten-∞ norm. As an illustrative example of the applicability of this norm, we show the first order approximation error for the objective function in this model, where the distribution on the class y depends on covariates x, y ∼ categorical(softmax(Wx)). Figure 1 (left) shows the error surfaces on W without the
preconditioner, where the uneven curvature will lead to poor updates. The Jacobi (diagonal of the
Hessian) preconditioned error surface is shown in Figure 1 (middle), where the curvature has been
made homogeneous. However, the shape of the error does not follow the Euclidean (Frobenius) norm, but instead the geometry from the Schatten-∞ norm shown in Figure 1 (right). Since many deep networks use the softmax and log-sum-exp to define a probability distribution over possible classes, adapting to the inherent geometry of this function can benefit learning in deeper layers.
2.3
Dynamic Learning of the Preconditioner
Our algorithms amount to choosing an ε_k and preconditioner D_k. We propose to use the preconditioners from ADAgrad [7] and RMSprop [5]. These preconditioners are given below:
V_{k+1} = λ V_k + (1 − λ) (∇f(X_k)) ⊙ (∇f(X_k))   (RMSprop),
V_{k+1} = V_k + (∇f(X_k)) ⊙ (∇f(X_k))   (ADAgrad),
D_{k+1} = δ1 + √V_{k+1}.
The δ term is a tuning parameter controlling the extremes of the curvature in the preconditioner. The updates in ADAgrad have provably improved regret bound guarantees for convex problems over gradient descent with the iterates in (3) [7]. ADAgrad and ADAdelta [30] have been applied successfully to neural nets. The updates in RMSprop were shown in [5] to approximate the equilibration preconditioner, and have also been successfully applied in autoencoders and supervised neural nets. Both methods require a tuning parameter δ, and RMSprop also requires a term λ that controls historical smoothing.
We propose two novel algorithms that both use the iterate in (6). The first uses the ADAgrad preconditioner which we call ADAspectral. The second uses the RMSprop preconditioner which we
call RMSspectral.
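In numpy, the two accumulators read as follows (a minimal sketch of the equations above; the function names are ours):

```python
import numpy as np

def rmsprop_precond(V, G, lam=0.9, delta=1e-6):
    # V_{k+1} = lam V_k + (1 - lam) G . G ;  D_{k+1} = delta 1 + sqrt(V_{k+1})
    V = lam * V + (1.0 - lam) * G * G
    return V, delta + np.sqrt(V)

def adagrad_precond(V, G, delta=1e-6):
    # V_{k+1} = V_k + G . G ;  D_{k+1} = delta 1 + sqrt(V_{k+1})
    V = V + G * G
    return V, delta + np.sqrt(V)
```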
2.4
The #-Operator and Fast Approximations
Letting X = U diag(s) Vᵀ be the SVD of X, the #-operator for the Schatten-∞ norm (also known as the spectral norm) can be computed as follows [1]: X# = ‖s‖₁ UVᵀ.
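A minimal numpy rendering of this closed form (our sketch; the helper name is ours):

```python
import numpy as np

def sharp(X):
    # X# = ||s||_1 * U V^T, with X = U diag(s) V^T the thin SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s.sum() * (U @ Vt)
```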
Depending on the cost of the gradient estimation, this computation may be relatively cheap [1] or
quite expensive. In situations where the gradient estimate is relatively cheap, the exact #-operator
demands significant overhead. Instead of calculating the full SVD, we utilize a randomized SVD
algorithm [9, 22]. For N ≤ M, this reduces the cost from O(MN²) to O(MK² + MN log(k)) with k the number of projections used in the algorithm. Letting Ũ diag(s̃) Ṽᵀ ≈ X represent the rank-(k+1) approximate SVD, the approximate #-operator corresponds to the low-rank approximation and the reweighted remainder,
X# ≈ ‖s̃‖₁ (Ũ_{1:k} Ṽᵀ_{1:k} + s̃_{k+1}^{−1} (X − Ũ_{1:k} diag(s̃_{1:k}) Ṽᵀ_{1:k})).
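A sketch of the approximate #-operator (ours, not the authors' code): we stand in a plain Gaussian range finder for the randomized SVD of [9, 22] and apply the rank-(k+1) formula above.

```python
import numpy as np

def randomized_svd(X, r, oversample=5, seed=0):
    # Gaussian range finder, then an exact SVD of the small projected matrix
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(X @ rng.standard_normal((X.shape[1], r + oversample)))
    Us, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ Us)[:, :r], s[:r], Vt[:r]

def sharp_approx(X, k):
    # rank-(k+1) SVD; top-k part treated exactly, remainder rescaled by 1/s_{k+1}
    U, s, Vt = randomized_svd(X, k + 1)
    low_rank = U[:, :k] @ Vt[:k]
    remainder = (X - U[:, :k] @ np.diag(s[:k]) @ Vt[:k]) / s[k]
    return s.sum() * (low_rank + remainder)
```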
We note that the #-operator is also defined for the ℓ∞ norm; however, for notational clarity, we will denote this as x♭ and leave the # notation for the Schatten-∞ case. This x♭ solution was given in [13, 1] as x♭ = ‖x‖₁ sign(x). Pseudocode for these operations is in the Supplementary Materials.
3
Applicability of Schatten-∞ Bounds to Models
3.1
Restricted Boltzmann Machines (RBM)
RBMs [26, 11] are bipartite Markov Random Field models that form probabilistic generative models over a collection of data. They are useful both as generative models and for "pre-training" deep networks [11, 8]. In the binary case, the observations are binary v ∈ {0, 1}^M with connections to latent (hidden) binary units, h ∈ {0, 1}^J. The probability for each state {v, h} is defined
by parameters θ = {W, c, b} with the energy −E_θ(v, h) ≜ cᵀv + vᵀWh + hᵀb and probability p_θ(v, h) ∝ exp(−E_θ(v, h)). The maximum likelihood estimator implies the objective function
min_θ F(θ) = −(1/N) Σ_n log Σ_h exp(−E_θ(v_n, h)) + log Σ_v Σ_h exp(−E_θ(v, h)).
This objective function is generally intractable, although an accurate but computationally intensive estimator is given via Annealed Importance Sampling (AIS) [21, 24]. The gradient can be comparatively quickly estimated by taking a small number of Gibbs sampling steps in a Monte Carlo integration scheme (Contrastive Divergence) [12, 28]. Due to the noisy nature of the gradient estimation and the intractable objective function, second order methods and line search methods are inappropriate and SGD has traditionally been used [16]. [1] proposed an upper bound on perturbations to W of

F({W + U, b, c}) ≤ F({W, b, c}) + ⟨∇_W F({W, b, c}), U⟩ + (MJ/2) ‖U‖²_{S∞}.

This majorization motivated the Stochastic Spectral Descent (SSD) algorithm, which uses the #-operator in Section 2.4. In addition, bias parameters b and c were bound on the ℓ∞ norm and use the ♭ updates from Section 2.4 [1]. In their experiments, this method showed significantly improved performance over competing algorithms for mini-batches of 2J and CD-25 (number of Gibbs sweeps), where the computational cost of the #-operator is insignificant. This motivates using the preconditioned spectral descent methods, and we show our proposed RMSspectral method in Algorithm 1.

Algorithm 1 RMSspectral for RBMs
Inputs: ε_{1,…}, λ, δ, N_b
Parameters: θ = {W, b, c}
History terms: V_W, v_b, v_c
for i = 1, … do
    Sample a minibatch of size N_b
    Estimate gradient (dW, db, dc)
    % Update matrix parameter
    V_W = λ V_W + (1 − λ) dW ⊙ dW
    D_W^{1/2} = δ + √V_W
    W = W − ε_i (dW ⊘ D_W^{1/2})# ⊘ D_W^{1/2}
    % Update bias term b
    v_b = λ v_b + (1 − λ) db ⊙ db
    d_b^{1/2} = δ + √v_b
    b = b − ε_i (db ⊘ d_b^{1/2})♭ ⊘ d_b^{1/2}
    % Same for c
end for
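For concreteness, the following is a minimal numpy transcription of the W-update in Algorithm 1 (our sketch, not the authors' code): the CD/PCD gradient estimate dW is assumed given, and we use the exact SVD in place of the randomized approximation from Section 2.4.

```python
import numpy as np

def rmsspectral_rbm_w_step(W, dW, V_W, eps, lam=0.9, delta=1e-6):
    # Accumulate squared gradients and form the preconditioner D_W^{1/2}
    V_W = lam * V_W + (1.0 - lam) * dW * dW
    D_half = delta + np.sqrt(V_W)
    # #-operator applied to the preconditioned gradient dW / D_half
    U, s, Vt = np.linalg.svd(dW / D_half, full_matrices=False)
    # W <- W - eps * (dW / D_half)# / D_half
    W = W - eps * (s.sum() * (U @ Vt)) / D_half
    return W, V_W
```

The bias updates are analogous, with the ♭-operator ‖x‖₁ sign(x) in place of the SVD-based #.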
When the RBM is used to "pre-train" deep models, CD-1 is typically used (1 Gibbs sweep). One
such model is the Deep Belief Net, where parameters are effectively learned by repeatedly learning
RBM models [11, 24]. In this case, the SVD operation adds significant overhead. Therefore, the
fast approximation of Section 2.4 and the adaptive methods result in vast improvements. These
enhancements naturally extend to the Deep Belief Net, and results are detailed in Section 4.1.
3.2
Supervised Feedforward Neural Nets
Feedforward Neural Nets are widely used models for classification problems. We consider L layers of hidden variables with deterministic nonlinear link functions with a softmax classifier at the final layer. Ignoring bias terms for clarity, an input x is mapped through a linear transformation and a nonlinear link function σ(·) to give the first layer of hidden nodes, η_1 = σ(W_0 x). This process continues with η_ℓ = σ(W_{ℓ−1} η_{ℓ−1}). At the last layer, we set h = W_L η_L and a J-dimensional class vector is drawn y ∼ categorical(softmax(h)). The standard approach for parameter learning is to minimize the objective function that corresponds to the (penalized) maximum likelihood objective function over the parameters θ = {W_0, …, W_L} and data examples {x_1, …, x_N}, which is given by:

θ_ML = arg min_θ f(θ) = (1/N) Σ_{n=1}^N ( −y_nᵀ h_{n,θ} + log Σ_{j=1}^J exp(h_{n,θ,j}) )   (7)

Algorithm 2 RMSspectral for FNN
Inputs: ε_{1,…}, λ, δ, N_b
Parameters: θ = {W_0, …, W_L}
History terms: V_0, …, V_L
for i = 1, … do
    Sample a minibatch of size N_b
    Estimate gradient by backprop (dW_ℓ)
    for ℓ = 0, …, L do
        V_ℓ = λ V_ℓ + (1 − λ) dW_ℓ ⊙ dW_ℓ
        D_ℓ^{1/2} = δ + √V_ℓ
        W_ℓ = W_ℓ − ε_i (dW_ℓ ⊘ D_ℓ^{1/2})# ⊘ D_ℓ^{1/2}
    end for
end for
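To fix the notation, here is a minimal numpy evaluation of the network and objective (7) (our sketch; the logistic link, the shapes, and the numerically stabilized log-sum-exp are our choices):

```python
import numpy as np

def f_theta(Ws, X, Y):
    # Ws: [W_0, ..., W_L]; X: (N, d) inputs; Y: (N, J) one-hot labels
    H = X.T
    for W in Ws[:-1]:
        H = 1.0 / (1.0 + np.exp(-(W @ H)))      # eta_l = sigma(W_{l-1} eta_{l-1})
    H = Ws[-1] @ H                               # h = W_L eta_L, shape (J, N)
    Hmax = H.max(axis=0)                         # stabilized log-sum-exp
    lse = Hmax + np.log(np.exp(H - Hmax).sum(axis=0))
    return np.mean(-np.sum(Y.T * H, axis=0) + lse)
```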
While there have been numerous recent papers detailing different optimization approaches to this
objective [7, 6, 5, 16, 19], we are unaware of any approaches that attempt to derive non-Euclidean
bounds. As a result, we explore the properties of this objective function. We show the key results
here and provide further details on the general framework in Supplemental Section A.2 and the
specific derivation in Supplemental Section D. By using properties of the log-sum-exp function
[Figure 2 here: three panels of training curves vs. normalized time (thousands). (Left) "MNIST, CD-1 Training", y-axis Reconstruction Error; (Middle) "Caltech-101, PCD-25 Training", y-axis log p(v); (Right) "MNIST, PCD-25 Training", y-axis log p(v). Curves: SGD, ADAgrad, RMSprop, SSD, SSD-F, ADAspectral, RMSspectral. See the caption below.]
Figure 2: A normalized time unit is 1 SGD iteration. (Left) Reconstruction error from training the MNIST dataset using CD-1. (Middle) Log-likelihood of training Caltech-101 Silhouettes using Persistent CD-25. (Right) Log-likelihood of training MNIST using Persistent CD-25.
from [1, 2], the objective function from (7) has an upper bound,
f(θ̃) ≤ f(θ) + ⟨∇_θ f(θ), θ̃ − θ⟩ + (1/N) Σ_{n=1}^N ( ½ max_j (h_{n,θ̃,j} − h_{n,θ,j})² + 2 max_j |h_{n,θ̃,j} − h_{n,θ,j} − ⟨∇_θ h_{n,θ,j}, θ̃ − θ⟩| ).   (8)
We note that this implicitly requires the link function to have a Lipschitz continuous gradient. Many
commonly used links, including logistic, hyperbolic tangent, and smoothed rectified linear units,
have Lipschitz continuous gradients, but rectified linear units do not. In this case, we will just
proceed with the subgradient. A strict upper bound on these parameters is highly pessimistic, so instead we propose to take a local approximation around the parameter W_ℓ in each layer individually.
Considering a perturbation U around W_ℓ, the terms in (8) have the following upper bounds:
(h_{θ̃,j} − h_{θ,j})² ≤ ‖U‖²_{S∞} ‖η_ℓ‖₂² ‖∇_{η_{ℓ+1}} h_j‖₂² max_x σ′(x)²,
|h_{θ̃,j} − h_{θ,j} − ⟨∇_θ h_{θ,j}, θ̃ − θ⟩| ≤ ½ ‖U‖²_{S∞} ‖η_ℓ‖₂² ‖∇_{η_{ℓ+1}} h_j‖_∞ ‖∇_{η_ℓ} h_j‖_∞ max_x |σ″(x)|,
where σ′(x) = (d/dt) σ(t)|_{t=x} and σ″(x) = (d²/dt²) σ(t)|_{t=x}. Because both η_ℓ and ∇_{η_{ℓ+1}} h_j can easily
be calculated during the standard backpropagation procedure for gradient estimation, this can be
calculated without significant overhead. Since these equations are bounded on the Schatten-∞ norm, this motivates using the Stochastic Spectral Descent algorithm with the #-operator applied to the weight matrix for each layer individually.
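As a quick sanity check on these link-function constants (our sketch, not from the paper): for the logistic link σ(x) = 1/(1 + e^{−x}), max_x σ′(x) = 1/4 and max_x |σ″(x)| = 1/(6√3) ≈ 0.0962, which a coarse grid search confirms.

```python
import numpy as np

x = np.linspace(-10, 10, 200001)
s = 1.0 / (1.0 + np.exp(-x))
s1 = s * (1 - s)                   # sigma'(x)
s2 = s1 * (1 - 2 * s)              # sigma''(x)
print(s1.max(), np.abs(s2).max())  # ~0.25 and ~0.0962
```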
However, the proposed updates require the calculation of many additional terms; as well, they are
pessimistic and do not consider the inhomogeneous curvature. Instead of attempting to derive the
step-sizes, both RMSspectral and ADAspectral will learn appropriate element-wise step-sizes by
using the gradient history. Then, the preconditioned #-operator is applied to the weights from each
layer individually. The RMSspectral method for feed-forward neural nets is shown in Algorithm 2.
It is unclear how to use non-Euclidean geometry for convolution layers [14], as the pooling and
convolution create alternative geometries. However, the ADAspectral and RMSspectral algorithms
can be applied to convolutional neural nets by using the non-Euclidean steps on the dense layers
and linear updates from ADAgrad and RMSprop on the convolutional filters. The benefits from the
dense layers then propagate down to the convolutional layers.
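A sketch of this split (our construction, with hypothetical parameter names): dense 2-D weight matrices get the RMSspectral step, while everything else (convolution filters, biases) gets plain RMSprop.

```python
import numpy as np

def sharp(X):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s.sum() * (U @ Vt)                   # X# = ||s||_1 U V^T

def mixed_step(params, grads, V, eps=1e-4, lam=0.9, delta=1e-6):
    # params, grads, V: dicts of numpy arrays keyed by layer name
    for name, G in grads.items():
        V[name] = lam * V[name] + (1.0 - lam) * G * G
        D_half = delta + np.sqrt(V[name])
        if params[name].ndim == 2:              # dense layer -> RMSspectral
            params[name] -= eps * sharp(G / D_half) / D_half
        else:                                    # conv filter / bias -> RMSprop
            params[name] -= eps * G / D_half
```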
4
Experiments
4.1
Restricted Boltzmann Machines
To show the use of the approximate #-operator from Section 2.4 as well as RMSspec and ADAspec,
we first perform experiments on the MNIST dataset. The dataset was binarized as in [24]. We detail
the algorithmic setting used in these experiments in Supplemental Table 1, which are chosen to
match previous literature on the topic. The batch size was chosen to be 1000 data points, which
matches [1]. This is larger than is typical in the RBM literature [24, 10], but we found that all
algorithms improved their final results with larger batch-sizes due to reduction in sampling noise.
The analysis supporting the SSD algorithm does not directly apply to the CD-1 learning procedure,
so it is of interest to examine how well it generalizes to this framework. To examine the effect of
CD-1 learning, we used reconstruction error with J=500 hidden, latent variables. Reconstruction
error is a standard heuristic for analyzing convergence [10], and is defined by taking ‖v − v̂‖₂, where v is an observation and v̂ is the mean value for a CD-1 pass from that sample. This result
is shown in Figure 2 (left), with all algorithms normalized to the amount of time it takes for a
single SGD iteration. The full #-operator in the SSD algorithm adds significant overhead to each
iteration, so the SSD algorithm does not provide competitive performance in this situation. The
SSD-F, ADAspectral, and RMSspectral algorithms use the approximate #-operator. Combining
the adaptive nature of RMSprop with non-Euclidean optimization provides dramatically improved
performance, seemingly converging faster and to a better optimum.
High CD orders are necessary to fit the ML estimator of an RBM [24]. To this end, we use the
Persistent CD method of [28] with 25 Gibbs sweeps per iteration. We show the log-likelihood of the
training data as a function of time in Figure 2(middle). The log-likelihood is estimated using AIS
with the parameters and code from [24]. There is a clear divide with improved performance from the
Schatten-∞ based methods. There is further improved performance by including preconditioners.
As well as showing improved training, the test set has an improved log-likelihood of -85.94 for
RMSspec and -86.04 for SSD.
For further exploration, we trained a Deep Belief Net with two hidden layers of size 500-2000 to
match [24]. We trained the first hidden layer with CD-1 and RMSspectral, and the second layer
with PCD-25 and RMSspectral. We used the same model sizes, tuning parameters, and evaluation
parameters and code from [24], so the only change is due to the optimization methods. Our estimated
lower-bound on the performance of this model is -80.96 on the test set. This compares to -86.22 from
[24] and -84.62 for a Deep Boltzmann Machine from [23]; however, we caution that these numbers
no longer reflect true performance on the test set due to bias from AIS and repeated overfitting [23].
However, this is a fair comparison because we use the same settings and the evaluation code.
For further evidence, we performed the same maximum-likelihood experiment on the Caltech-101
Silhouettes dataset [17]. This dataset was previously used to demonstrate the effectiveness of an
adaptive gradient step-size and Enhanced Gradient method for Restricted Boltzmann Machines [3].
The training curves for the log-likelihood are shown in Figure 2 (right). Here, the methods based on
the Schatten-∞ norm give state-of-the-art results in under 1000 iterations, and thoroughly dominate
the learning. Furthermore, both ADAspectral and RMSspectral saturate to a higher value on the
training set and give improved testing performance. On the test set, the best result from the non-Euclidean methods gives a testing log-likelihood of -106.18 for RMSspectral, and a value of -109.01
for RMSprop. These values all improve over the best reported value from SGD of -114.75 [3].
4.2
Standard and Convolutional Neural Networks
Compared to RBMs and other popular machine learning models, standard feed-forward neural nets
are cheap to train and evaluate. The following experiments show that even in this case where the
computation of the gradient is efficient, our proposed algorithms produce a major speed up in convergence, in spite of the per-iteration cost associated with approximating the SVD of the gradient.
We demonstrate this claim using the well-known MNIST and Cifar-10 [15] image datasets.
Both datasets are similar in that they pose a classification task over 10 possible classes. However,
CIFAR-10, consisting of 50K RGB images of vehicles and animals, with an additional 10K images
reserved for testing, poses a considerably more difficult problem than MNIST, with its 60K greyscale
images of hand-written digits, plus 10K test samples. This fact is indicated by the state-of-the-art
accuracy on the MNIST test set reaching 99.79% [29], with the same architecture achieving "only"
90.59% accuracy on CIFAR-10.
To obtain the state-of-the-art performance on these datasets, it is necessary to use various types of
data pre-processing methods, regularization schemes and data augmentation, all of which have a big
impact on model generalization [14]. In our experiments we only employ ZCA whitening on the
CIFAR-10 data [15], since these methods are not the focus of this paper. Instead, we focus on the
comparative performance of the various algorithms on a variety of models.
We trained neural networks with zero, one and two hidden layers, with various hidden layer sizes,
and with both logistic and rectified linear units (ReLU) non-linearities [20]. Algorithm parameters
[Figure 3 here: three panels. (Left) "MNIST, 2-Layer NN", log p(v) vs. seconds; (Middle) "Cifar, 2-Layer CNN", log p(v) vs. seconds; (Right) "Cifar-10, 5-Layer CNN", test accuracy vs. seconds. Curves: SGD, ADAgrad, RMSprop, SSD, ADAspectral, RMSspectral (right panel: RMSprop and RMSspectral only). See the caption below.]
Figure 3: (Left) Log-likelihood of the current training batch on the MNIST dataset. (Middle) Log-likelihood of the current training batch on CIFAR-10. (Right) Accuracy on the CIFAR-10 test set.
can be found in Supplemental Table 2. We observed fairly consistent performance across the various
configurations, with spectral methods yielding greatly improved performance over their Euclidean
counterparts. Figure 3 shows convergence curves in terms of log-likelihood on the training data as
learning proceeds. For both MNIST and CIFAR-10, SSD with estimated Lipschitz steps outperforms
SGD. Also clearly visible is the big impact of using local preconditioning to fit the local geometry
of the objective, amplified by using the spectral methods.
Spectral methods also improve convergence of convolutional neural nets (CNN). In this setting, we
apply the #-operator only to fully connected linear layers. Preconditioning is performed for all
layers, i.e., when using RMSspectral for linear layers, the convolutional layers are updated via RMSprop. We applied our algorithms to CNNs with one, two and three convolutional layers, followed
by two fully-connected layers. Each convolutional layer was followed by max pooling and a ReLU
non-linearity. We used 5 × 5 filters, ranging from 32 to 64 filters per layer.
We evaluated the MNIST test set using a two-layer convolutional net with 64 kernels. The best
generalization performance on the test set after 100 epochs was achieved by both RMSprop and
RMSspectral, with an accuracy of 99.15%. RMSspectral obtained this level of accuracy after only 40 epochs, less than half of what RMSprop required.
To further demonstrate the speed up, we trained on CIFAR-10 using a deeper net with three convolutional layers, following the architecture used in [29]. In Figure 3 (Right) the test set accuracy is
shown as training proceeds with both RMSprop and RMSspectral. While they eventually achieve
similar accuracy rates, RMSspectral reaches that rate four times faster.
5
Discussion
In this paper we have demonstrated that many deep models naturally operate with non-Euclidean
geometry, and exploiting this gives remarkable improvements in training efficiency, as well as finding improved local optima. Also, by using adaptive methods, algorithms can use the same tuning
parameters across different model sizes and configurations. We find that in the RBM and DBN, improving the optimization can give dramatic performance improvements on both the training and the
test set. For feedforward neural nets, the training efficiency of the proposed methods gives staggering
improvements to the training performance.
While the training performance is drastically better via the non-Euclidean quasi-Newton methods,
the performance on the test set is improved for RBMs and DBNs, but not in feedforward neural
networks. However, because our proposed algorithms fit the model significantly faster, they can
help improve Bayesian optimization schemes [27] to learn appropriate penalization strategies and
model configurations. Furthermore, these methods can be adapted to dropout [14] and other recently
proposed regularization schemes to help achieve state-of-the-art performance.
Acknowledgements The research reported here was funded in part by ARO, DARPA, DOE, NGA
and ONR, and in part by the European Commission under grants MIRG-268398 and ERC Future
Proof, by the Swiss Science Foundation under grants SNF 200021-146750, SNF CRSII2-147633,
and the NCCR Marvel. We thank the reviewers for their helpful comments.
References
[1] D. Carlson, V. Cevher, and L. Carin. Stochastic Spectral Descent for Restricted Boltzmann Machines.
AISTATS, 2015.
[2] D. Carlson, Y.-P. Hsieh, E. Collins, L. Carin, and V. Cevher. Stochastic Spectral Descent for Discrete
Graphical Models. IEEE J. Special Topics in Signal Processing, 2016.
[3] K. Cho, T. Raiko, and A. Ilin. Enhanced Gradient for Training Restricted Boltzmann Machines. Neural
Computation, 2013.
[4] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The Loss Surfaces of Multilayer
Networks. AISTATS 2015.
[5] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSProp and equilibrated adaptive learning rates
for non-convex optimization. arXiv:1502.04390 2015.
[6] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking
the saddle point problem in high-dimensional non-convex optimization. In NIPS, 2014.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic
optimization. JMLR, 2010.
[8] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio. Why Does Unsupervised
Pre-training Help Deep Learning? JMLR 2010.
[9] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. SIAM Review 2011.
[10] G. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. U. Toronto Technical Report,
2010.
[11] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
[12] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
2002.
[13] J. A. Kelner, Y. T. Lee, L. Orecchia, and A. Sidford. An Almost-Linear-Time Algorithm for Approximate
Max Flow in Undirected Graphs, and its Multicommodity Generalizations 2013.
[14] A. Krizhevsky and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks.
NIPS, 2012.
[15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. University of
Toronto, Tech. Rep, 2009.
[16] Q. V. Le, A. Coates, B. Prochnow, and A. Y. Ng. On Optimization Methods for Deep Learning. ICML,
2011.
[17] B. Marlin and K. Swersky. Inductive principles for restricted Boltzmann machine learning. ICML, 2010.
[18] J. Martens and R. Grosse. Optimizing Neural Networks with Kronecker-factored Approximate Curvature.
arXiv:1503.05671 2015.
[19] J. Martens and I. Sutskever. Parallelizable Sampling of Markov Random Fields. AISTATS, 2010.
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.
[21] R. M. Neal. Annealed Importance Sampling. U. Toronto Technical Report, 1998.
[22] V. Rokhlin, A. Szlam, and M. Tygert. A Randomized Algorithm for Principal Component Analysis. SIAM
Journal on Matrix Analysis and Applications 2010.
[23] R. Salakhutdinov and G. Hinton. Deep Boltzmann Machines. AISTATS, 2009.
[24] R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
[25] T. Schaul, S. Zhang, and Y. LeCun. No More Pesky Learning Rates. arXiv 1206.1106 2012.
[26] P. Smolensky. Information Processing in Dynamical Systems: Foundations of Harmony Theory, 1986.
[27] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. In NIPS, 2012.
[28] T. Tieleman and G. Hinton. Using fast weights to improve persistent contrastive divergence. ICML, 2009.
[29] L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of neural networks using dropconnect. In ICML, 2013.
[30] M. D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv 1212.5701 2012.
| 5795 |@word h:1 cnn:3 middle:5 norm:31 open:1 propagate:1 rgb:1 hsieh:2 decomposition:1 contrastive:3 dramatic:1 sgd:13 multicommodity:1 arous:1 wellapproximated:1 reduction:1 configuration:3 outperforms:1 current:2 written:1 visible:1 wx:1 shape:4 cheap:3 update:8 generative:2 half:1 xk:25 volkan:1 iterates:3 provides:1 node:1 pascanu:1 toronto:3 kelner:1 zhang:2 along:1 persistent:4 ilin:1 combine:3 overhead:6 x0:4 theoretically:2 snoek:1 nor:1 frequently:1 examine:2 salakhutdinov:2 little:1 inappropriate:1 considering:1 estimating:1 bounded:2 moreover:1 notation:1 factorized:1 linearity:2 what:1 developed:1 supplemental:6 caution:1 finding:3 transformation:1 marlin:1 guarantee:1 quantitative:1 binarized:1 scaled:1 classifier:1 control:2 unit:6 grant:2 szlam:1 positive:1 engineering:1 local:7 treat:1 analyzing:2 plus:1 challenging:1 fastest:1 limited:1 practical:2 lecun:2 testing:3 regret:1 definite:1 pesky:1 backpropagation:1 swiss:1 digit:1 procedure:2 snf:2 maxx:3 adapting:1 significantly:2 projection:1 hyperbolic:1 pre:4 spite:2 operator:15 nb:4 applying:1 adient:1 optimize:1 equivalent:1 deterministic:1 demonstrated:2 reviewer:1 marten:2 annealed:2 convex:5 equilibration:1 identifying:1 factored:1 estimator:1 utilizing:1 dominate:2 dw:11 updated:1 controlling:1 enhanced:2 dbns:1 exact:1 duke:1 homogeneous:1 us:3 element:7 adadelta:3 approximated:1 expensive:1 mirg:1 continues:1 observed:1 electrical:1 capture:1 thousand:3 region:1 connected:2 mentioned:1 convexity:2 complexity:1 rmsprop:24 covariates:1 ui:1 dynamic:2 trained:4 tight:1 division:1 efficiency:4 bipartite:1 basis:1 preconditioning:4 easily:1 darpa:1 m2j:1 various:4 derivation:1 train:2 fast:6 monte:1 choosing:1 quite:1 heuristic:1 supplementary:2 widely:1 larger:2 loglikelihood:1 otherwise:1 statistic:1 noisy:1 final:2 seemingly:1 online:1 net:23 propose:5 reconstruction:4 aro:1 product:1 adaptation:1 remainder:1 combining:1 date:1 achieve:2 amplified:1 schaul:1 frobenius:5 validate:1 udiag:2 exploiting:1 convergence:9 enhancement:1 optimum:5 sutskever:1 produce:1 comparative:1 adam:1 leave:1 help:3 derive:3 develop:2 depending:1 pose:2 rescale:1 equilibrated:1 implies:2 larochelle:1 direction:2 inhomogeneous:2 filter:3 stochastic:9 vc:1 exploration:1 cnns:1 material:1 xid:2 backprop:1 require:2 hx:1 generalization:4 tighter:1 pessimistic:2 hold:1 around:2 exp:10 great:1 lawrence:1 algorithmic:1 claim:1 major:2 estimation:3 harmony:1 individually:3 largest:1 wl:3 create:1 successfully:2 hope:1 minimization:1 clearly:1 gaussian:1 reaching:1 pn:3 hj:4 focus:2 vk:5 notational:1 rank:2 likelihood:12 improvement:4 greatly:3 contrast:1 tech:1 zca:1 helpful:1 inference:1 ganguli:1 dependent:3 epfl:1 vl:1 typically:2 nn:1 hidden:8 quasi:2 choromanska:1 provably:1 arg:2 dual:1 classification:4 dauphin:2 denoted:1 favored:1 animal:1 art:4 special:1 smoothing:1 softmax:5 noneuclidean:2 field:2 integration:1 fairly:1 ng:1 sampling:5 unsupervised:1 carin:3 icml:6 future:1 others:1 report:2 inherent:1 employ:1 pathological:1 divergence:3 maxj:1 geometry:8 consisting:1 n1:3 attempt:1 interest:1 highly:2 evaluation:2 extreme:1 yielding:1 accurate:1 necessary:2 unless:1 iv:1 euclidean:24 detailing:1 divide:1 cevher:2 sidford:1 applicability:3 cost:5 entry:1 krizhevsky:2 osindero:1 reported:2 commission:1 considerably:1 cho:2 thoroughly:1 st:2 randomized:2 siam:2 probabilistic:3 lee:1 quickly:2 augmentation:1 reflect:1 opposed:1 hn:7 possibly:1 wan:1 dropconnect:1 expert:1 nccr:1 chung:1 stark:1 bfgs:1 de:1 bold:2 wk:3 explicitly:1 
depends:2 performed:2 root:1 vehicle:1 closed:1 hazan:1 competitive:1 complicated:1 majorization:7 contribution:1 square:1 minimize:3 accuracy:8 convolutional:13 ynt:1 reserved:1 yield:1 correspond:1 generalize:1 bayesian:2 vincent:1 accurately:1 carlo:1 drive:1 rectified:4 randomness:1 history:3 ping:1 reach:1 parallelizable:1 edo:1 rbms:7 energy:1 naturally:4 proof:3 rbm:9 dxi:1 jacobi:1 associated:1 proved:1 dataset:6 popular:2 wh:1 improves:1 formalize:1 feed:4 higher:1 dt:2 supervised:2 follow:1 improved:16 evaluated:1 generality:1 furthermore:2 just:1 preconditioner:13 autoencoders:1 hand:2 tropp:1 nonlinear:2 widespread:1 minibatch:2 logistic:3 quality:2 indicated:1 effect:1 normalized:5 true:2 counterpart:1 inductive:1 hence:2 regularization:3 laboratory:1 iteratively:1 staggering:1 neal:1 reweighted:1 during:1 noted:1 illustrative:1 demonstrate:5 duchi:1 reasoning:1 lse:1 image:5 wise:7 ranging:1 novel:2 recently:1 common:1 superior:1 pseudocode:1 sped:1 empirically:2 discussed:2 extend:1 martinsson:1 significant:4 vec:1 ai:3 gibbs:4 smoothness:1 tuning:5 dbn:1 erc:1 ssd:16 funded:1 longer:1 surface:4 v0:1 whitening:1 add:2 tygert:1 curvature:10 multivariate:1 recent:4 showed:2 henaff:1 optimizing:1 binary:3 success:1 onr:1 vt:1 rep:1 caltech:3 additional:3 attacking:1 signal:1 ii:1 full:2 multiple:1 reduces:2 technical:2 match:4 faster:3 calculation:2 cifar:10 impact:2 converging:1 regression:1 multilayer:1 essentially:1 metric:1 arxiv:4 iteration:9 represent:1 tral:1 kernel:1 achieved:1 uvt:1 whereas:1 addition:1 singular:4 operate:1 ineffective:1 strict:1 pooling:2 comment:1 db:6 orecchia:1 undirected:1 flow:1 effectiveness:1 practitioner:1 call:2 vw:2 unused:1 feedforward:7 iii:1 bengio:4 rendering:1 variety:2 iterate:1 fit:4 relu:2 architecture:2 restrict:1 competing:1 idea:1 intensive:1 motivated:2 penalty:1 suffer:1 hessian:1 proceed:1 repeatedly:1 deep:27 dramatically:1 useful:2 generally:1 detailed:1 clear:1 marvel:1 amount:2 coates:1 sign:1 estimated:5 per:4 discrete:1 key:1 four:1 achieving:1 drawn:1 clarity:2 pj:1 ht:1 utilize:1 v1:3 vast:1 graph:1 subgradient:2 sum:6 nga:1 letter:2 swersky:1 saying:1 almost:1 vn:2 scaling:1 vb:4 comparable:1 dropout:1 bound:15 layer:32 ct:1 followed:2 courville:1 adapted:2 kronecker:1 pcd:3 prochnow:1 flat:1 hy:2 u1:2 speed:2 argument:1 min:2 preconditioners:3 attempting:1 relatively:2 speedup:1 department:2 combination:1 poor:1 across:2 lp:7 cun:1 s1:6 restricted:10 pr:1 inhomogenous:1 notorious:1 computationally:1 equation:1 remains:1 previously:2 discus:1 loose:1 count:1 eventually:1 needed:1 singer:1 letting:3 end:2 gulcehre:1 generalizes:1 operation:4 apply:2 v2:2 spectral:12 appropriate:2 alternative:2 batch:5 denotes:3 include:1 zeiler:2 graphical:1 newton:2 carlson:3 calculating:1 exploit:1 murray:1 approximating:1 comparatively:1 implied:3 objective:20 sweep:3 quantity:1 strategy:2 primary:1 randomize:1 diagonal:3 surrogate:1 unclear:1 exhibit:1 gradient:35 link:5 mapped:1 schatten:17 thank:1 w0:3 topic:2 preconditioned:10 code:3 relationship:1 mini:1 manzagol:1 minimizing:1 difficult:1 adative:1 greyscale:1 motivates:4 boltzmann:12 proper:1 perform:2 teh:1 upper:6 observation:2 reoccurs:1 markov:2 convolution:2 datasets:3 descent:20 curved:1 supporting:1 defining:1 situation:2 hinton:8 dc:1 perturbation:2 smoothed:1 david:1 required:1 connection:2 imagenet:1 learned:1 established:2 nip:3 address:1 proceeds:2 lion:1 below:1 dynamical:1 smolensky:1 challenge:2 max:3 including:2 belief:7 power:1 natural:2 scheme:4 improve:6 
numerous:1 mathieu:1 raiko:1 categorical:2 columbia:1 prior:1 review:2 l2:2 tangent:1 literature:2 multiplication:1 adagrad:16 epoch:2 acknowledgement:1 loss:5 fully:2 remarkable:1 penalization:1 foundation:3 vectorized:1 consistent:1 principle:1 tiny:1 cd:12 penalized:1 last:1 majorizationminimization:1 drastically:1 side:1 bias:4 deeper:2 guide:1 taking:2 sparse:1 benefit:2 curve:2 calculated:2 xn:1 unaware:1 crsii2:1 forward:4 author:1 adaptive:12 made:1 collection:1 dard:1 historical:1 far:1 commonly:1 erhan:1 approximate:8 implicitly:1 silhouette:2 ml:1 global:1 overfitting:1 reveals:1 xi:2 fergus:1 search:1 latent:2 continuous:2 why:1 table:2 promising:2 nature:3 learn:2 robust:1 ignoring:1 improving:2 european:1 constructing:1 diag:2 aistats:4 dense:2 s2:5 noise:1 big:2 repeated:1 fair:1 x1:1 grosse:1 sub:1 lie:1 jmlr:2 down:1 saturate:1 specific:2 showing:1 dk:10 insignificant:1 evidence:2 intractable:2 mnist:12 effectively:1 importance:2 vries:1 demand:2 halko:1 simply:1 explore:2 ditionally:1 saddle:1 u2:2 speculated:1 corresponds:3 minimizer:3 tieleman:1 nair:1 fnn:1 goal:2 viewed:1 consequently:1 lipschitz:5 change:1 specifically:1 typical:1 operates:1 principal:1 pas:1 svd:7 ya:1 attempted:1 uneven:1 rokhlin:1 mark:1 collins:2 evaluate:1 |
5,297 | 5,796 | Learning Continuous Control Policies by
Stochastic Value Gradients
Nicolas Heess? , Greg Wayne? , David Silver, Timothy Lillicrap, Yuval Tassa, Tom Erez
Google DeepMind
{heess, gregwayne, davidsilver, countzero, tassa, etom}@google.com
* These authors contributed equally.
Abstract
We present a unified framework for learning continuous control policies using
backpropagation. It supports stochastic control by treating stochasticity in the
Bellman equation as a deterministic function of exogenous noise. The product
is a spectrum of general policy gradient algorithms that range from model-free
methods with value functions to model-based methods without value functions.
We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact
of compounded model errors. We apply these algorithms first to a toy stochastic
control problem and then to several physics-based control problems in simulation.
One of these variants, SVG(1), shows the effectiveness of learning models, value
functions, and policies simultaneously in continuous domains.
1
Introduction
Policy gradient algorithms maximize the expectation of cumulative reward by following the gradient
of this expectation with respect to the policy parameters. Most existing algorithms estimate this gradient in a model-free manner by sampling returns from the real environment and rely on a likelihood
ratio estimator [32, 26]. Such estimates tend to have high variance and require large numbers of
samples or, conversely, low-dimensional policy parameterizations.
A second approach to estimate a policy gradient relies on backpropagation instead of likelihood ratio
methods. If a differentiable environment model is available, one can link together the policy, model,
and reward function to compute an analytic policy gradient by backpropagation of reward along a
trajectory [18, 11, 6, 9]. Instead of using entire trajectories, one can estimate future rewards using a
learned value function (a critic) and compute policy gradients from subsequences of trajectories. It
is also possible to backpropagate analytic action derivatives from a Q-function to compute the policy
gradient without a model [31, 21, 23]. Following Fairbank [8], we refer to methods that compute
the policy gradient through backpropagation as value gradient methods.
In this paper, we address two limitations of prior value gradient algorithms. The first is that, in
contrast to likelihood ratio methods, value gradient algorithms are only suitable for training deterministic policies. Stochastic policies have several advantages: for example, they can be beneficial for
partially observed problems [24]; they permit on-policy exploration; and because stochastic policies
can assign probability mass to off-policy trajectories, we can train a stochastic policy on samples
from an experience database in a principled manner. When an environment model is used, value
gradient algorithms have also been critically limited to operation in deterministic environments. By
exploiting a mathematical tool known as "re-parameterization" that has found recent use for generative models [20, 12], we extend the scope of value gradient algorithms to include the optimization
of stochastic policies in stochastic environments. We thus describe our framework as Stochastic
Value Gradient (SVG) methods. Secondly, we show that an environment dynamics model, value
function, and policy can be learned jointly with neural networks based only on environment interaction. Learned dynamics models are often inaccurate, which we mitigate by computing value
gradients along real system trajectories instead of planned ones, a feature shared by model-free
methods [32, 26]. This substantially reduces the impact of model error because we only use models
to compute policy gradients, not for prediction, combining advantages of model-based and model-free methods with fewer of their drawbacks.
We present several algorithms that range from model-based to model-free methods, flexibly combining models of environment dynamics with value functions to optimize policies in stochastic or deterministic environments. Experimentally, we demonstrate that SVG methods can be applied using
generic neural networks with tens of thousands of parameters while making minimal assumptions
about plants or environments. By examining a simple stochastic control problem, we show that
SVG algorithms can optimize policies where model-based planning and likelihood ratio methods
cannot. We provide evidence that value function approximation can compensate for degraded models, demonstrating the increased robustness of SVG methods over model-based planning. Finally,
we use SVG algorithms to solve a variety of challenging, under-actuated, physical control problems,
including swimming of snakes, reaching, tracking, and grabbing with a robot arm, fall-recovery for
a monoped, and locomotion for a planar cheetah and biped.
2
Background
We consider discrete-time Markov Decision Processes (MDPs) with continuous states and actions
and denote the state and action at time step t by s_t ∈ R^{N_S} and a_t ∈ R^{N_A}, respectively. The MDP has an initial state distribution s⁰ ∼ p⁰(·), a transition distribution s_{t+1} ∼ p(·|s_t, a_t), and a (potentially time-varying) reward function r_t = r(s_t, a_t, t).¹ We consider time-invariant stochastic policies a ∼ p(·|s; θ), parameterized by θ. The goal of policy optimization is to find policy parameters θ that maximize the expected sum of future rewards. We optimize either finite-horizon or infinite-horizon sums, i.e., J(θ) = E[Σ_{t=0}^T γᵗ rᵗ | θ] or J(θ) = E[Σ_{t=0}^∞ γᵗ rᵗ | θ], where γ ∈ [0, 1] is a discount factor.² When possible, we represent a variable at the next time step using the "tick" notation, e.g., s′ ≜ s_{t+1}.
In what follows, we make extensive use of the state-action-value Q-function and state-value V-function:
Qᵗ(s, a) = E[ Σ_{τ=t} γ^{τ−t} r^τ | sᵗ = s, aᵗ = a, θ ];   Vᵗ(s) = E[ Σ_{τ=t} γ^{τ−t} r^τ | sᵗ = s, θ ].   (1)
For finite-horizon problems, the value functions are time-dependent, e.g., V′ ≜ V^{t+1}(s′), and for infinite-horizon problems the value functions are stationary, V′ ≜ V(s′). The relevant meaning should be clear from the context. The state-value function can be expressed recursively using the stochastic Bellman equation
Vᵗ(s) = ∫ [ rᵗ + γ ∫ V^{t+1}(s′) p(s′|s, a) ds′ ] p(a|s; θ) da.   (2)
We abbreviate partial differentiation using subscripts, g_x ≜ ∂g(x, y)/∂x.
3
Deterministic value gradients
The deterministic Bellman equation takes the form V(s) = r(s, a) + γV′(f(s, a)) for a deterministic model s′ = f(s, a) and deterministic policy a = π(s; θ). Differentiating the equation with respect to the state and policy yields an expression for the value gradient
V_s = r_s + r_a π_s + γV′_{s′}(f_s + f_a π_s),   (3)
V_θ = r_a π_θ + γ(V′_{s′} f_a π_θ + V′_θ).   (4)
In eq. 4, the term V′_θ arises because the total derivative includes policy gradient contributions from subsequent time steps (full derivation in Appendix A). For a purely model-based formalism, these equations are used as a pair of coupled recursions that, starting from the termination of a trajectory, proceed backward in time to compute the gradient of the value function with respect to the state and policy parameters. V_θ⁰ returns the total policy gradient. When a state-value function is used after one step in the recursion, r_a π_θ + γV′_{s′} f_a π_θ directly expresses the contribution of the current time step to the policy gradient. Summing these gradients over the trajectory gives the total policy gradient. When a Q-function is used, the per-time step contribution to the policy gradient takes the form Q_a π_θ.
¹ We make use of a time-varying reward function only in one problem to encode a terminal reward.
² γ < 1 for the infinite-horizon case.
4
Stochastic value gradients
One limitation of the gradient computation in eqs. 3 and 4 is that the model and policy must be
deterministic. Additionally, the accuracy of the policy gradient V? is highly sensitive to modeling
errors. We introduce two critical changes: First, in section 4.1, we transform the stochastic Bellman
equation (eq. 2) to permit backpropagating value information in a stochastic setting. This also
enables us to compute gradients along real trajectories, not ones sampled from a model, making the
approach robust to model error, leading to our first algorithm "SVG(∞)," described in section 4.2.
Second, in section 4.3, we show how value function critics can be integrated into this framework,
leading to the algorithms "SVG(1)" and "SVG(0)", which expand the Bellman recursion for 1 and
0 steps, respectively. Value functions further increase robustness to model error and extend our
framework to infinite-horizon control.
4.1
Differentiating the stochastic Bellman equation
Re-parameterization of distributions Our goal is to backpropagate through the stochastic Bellman equation. To do so, we make use of a concept called "re-parameterization", which permits us to compute derivatives of deterministic and stochastic models in the same way. A very simple example of re-parameterization is to write a conditional Gaussian density p(y|x) = N(y|μ(x), σ²(x)) as the function y = μ(x) + σ(x)ξ, where ξ ∼ N(0, 1). From this point of view, one produces samples procedurally by first sampling ξ, then deterministically constructing y. Here, we consider conditional densities whose samples are generated by a deterministic function of an input noise variable and other conditioning variables: y = f(x, ξ), where ξ ∼ ρ(·), a fixed noise distribution. Rich density models can be expressed in this form [20, 12]. Expectations of a function g(y) become E_{p(y|x)} g(y) = ∫ g(f(x, ξ)) ρ(ξ) dξ.
The advantage of working with re-parameterized distributions is that we can now obtain a simple Monte-Carlo estimator of the derivative of an expectation with respect to x:
∇_x E_{p(y|x)} g(y) = E_{ρ(ξ)} [ g_y f_x ] ≈ (1/M) Σ_{i=1}^M g_y f_x |_{ξ=ξ_i}.   (5)
In contrast to likelihood ratio-based Monte Carlo estimators, ∇_x log p(y|x) g(y), this formula makes direct use of the Jacobian of g.
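As a quick numerical check of eq. 5 (our own, with illustrative choices): for p(y|x) = N(y | 2x, 1) and g(y) = y², the expectation is 4x² + 1, so the true gradient is 8x.

```python
import numpy as np

# Re-parameterized Monte-Carlo gradient estimator of eq. (5) for an
# assumed Gaussian p(y|x) = N(y | mu(x)=2x, sigma=1) and g(y) = y^2.
rng = np.random.default_rng(1)
x, M = 0.7, 200_000
xi = rng.normal(size=M)                  # xi ~ rho(xi) = N(0, 1)
y = 2.0 * x + xi                         # y = f(x, xi)
g_y = 2.0 * y                            # derivative of g at each sample
f_x = 2.0                                # df/dx = mu'(x); sigma constant
grad_est = np.mean(g_y * f_x)            # (1/M) sum_i g_y f_x |_{xi=xi_i}
print(grad_est, "vs analytic", 8.0 * x)  # both ~5.6
```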
Re-parameterization of the Bellman equation We now re-parameterize the Bellman equation. When re-parameterized, the stochastic policy takes the form a = π(s, η; θ), and the stochastic environment the form s' = f(s, a, ξ) for noise variables η ∼ ρ(η) and ξ ∼ ρ(ξ), respectively. Inserting these functions into eq. (2) yields
V(s) = E_{ρ(η)} [ r(s, π(s, η; θ)) + γ E_{ρ(ξ)} [ V'(f(s, π(s, η; θ), ξ)) ] ].   (6)
Differentiating eq. 6 with respect to the current state s and policy parameters θ gives
V_s = E_{ρ(η)} [ r_s + r_a π_s + γ E_{ρ(ξ)} V'_{s'}( f_s + f_a π_s ) ],   (7)
V_θ = E_{ρ(η)} [ r_a π_θ + γ E_{ρ(ξ)} [ V'_{s'} f_a π_θ + V'_θ ] ].   (8)
We are interested in controlling systems with a priori unknown dynamics. Consequently, in the following, we replace instances of f or its derivatives with a learned model f̂.
Gradient evaluation by planning A planning method to compute a gradient estimate is to compute a trajectory by running the policy in a loop with a model while sampling the associated noise variables, yielding a trajectory τ = (s¹, η¹, a¹, ξ¹, s², η², a², ξ², ...). On this sampled trajectory, a Monte-Carlo estimate of the policy gradient can be computed by the backward recursions:
v_s = [ r_s + r_a π_s + γ v'_{s'}( f̂_s + f̂_a π_s ) ] |_{η,ξ},   (9)
v_θ = [ r_a π_θ + γ( v'_{s'} f̂_a π_θ + v'_θ ) ] |_{η,ξ},   (10)
where we have written lower-case v to emphasize that the quantities are one-sample estimates³, and "|_x" means "evaluated at x".
Gradient evaluation on real trajectories An important advantage of stochastic over deterministic models is that they can assign probability mass to observations produced by the real environment. In a deterministic formulation, there is no principled way to account for mismatch between model predictions and observed trajectories. In this case, the policy and environment noise (η, ξ) that produced the observed trajectory are considered unknown. By an application of Bayes' rule, which we explain in Appendix B, we can rewrite the expectations in equations 7 and 8 given the observations (s, a, s') as
V_s = E_{p(a|s)} E_{p(s'|s,a)} E_{p(η,ξ|s,a,s')} [ r_s + r_a π_s + γV'_{s'}( f̂_s + f̂_a π_s ) ],   (11)
V_θ = E_{p(a|s)} E_{p(s'|s,a)} E_{p(η,ξ|s,a,s')} [ r_a π_θ + γ( V'_{s'} f̂_a π_θ + V'_θ ) ],   (12)
where we can now replace the two outer expectations with samples derived from interaction with the real environment. In the special case of additive noise, s' = f̂(s, a) + ξ, it is possible to use a deterministic model to compute the derivatives (f̂_s, f̂_a). The noise's influence is restricted to the gradient of the value of the next state, V'_{s'}, and does not affect the model Jacobian. If we consider it desirable to capture more complicated environment noise, we can use a re-parameterized generative model and infer the missing noise variables, possibly by sampling from p(η, ξ|s, a, s').
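A minimal sketch of this inference in the additive-noise case (our own naming; f_hat, mu_pi, and sigma_pi stand in for the learned model and a Gaussian policy):

```python
# Infer the noise variables that explain an observed real transition,
# assuming s' = f_hat(s, a) + xi and a = mu_pi(s) + sigma_pi * eta.
def infer_noise(s, a, s_next, f_hat, mu_pi, sigma_pi):
    eta = (a - mu_pi(s)) / sigma_pi   # policy noise eta | (s, a)
    xi = s_next - f_hat(s, a)         # model noise xi | (s, a, s')
    return eta, xi
```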
4.2 SVG(∞)
SVG(∞) computes value gradients by backward recursions on finite-horizon trajectories. After every episode, we train the model, f̂, followed by the policy, π. We provide pseudocode for this in Algorithm 1 but discuss further implementation details in section 5 and in the experiments.
Algorithm 1 SVG(∞)
1: Given empty experience database D
2: for trajectory = 0 to ∞ do
3:   for t = 0 to T do
4:     Apply control a = π(s, η; θ), η ∼ ρ(η)
5:     Insert (s, a, r, s') into D
6:   end for
7:   Train generative model f̂ using D
8:   v'_s = 0 (finite-horizon)
9:   v'_θ = 0 (finite-horizon)
10:  for t = T down to 0 do
11:    Infer ξ|(s, a, s') and η|(s, a)
12:    v_θ = [ r_a π_θ + γ( v'_{s'} f̂_a π_θ + v'_θ ) ] |_{η,ξ}
13:    v_s = [ r_s + r_a π_s + γ v'_{s'}( f̂_s + f̂_a π_s ) ] |_{η,ξ}
14:  end for
15:  Apply gradient-based update using v_θ^0
16: end for

Algorithm 2 SVG(1) with Replay
1: Given empty experience database D
2: for t = 0 to ∞ do
3:   Apply control π(s, η; θ), η ∼ ρ(η)
4:   Observe r, s'
5:   Insert (s, a, r, s') into D
6:   // Model and critic updates
7:   Train generative model f̂ using D
8:   Train value function V̂ using D (Alg. 4)
9:   // Policy update
10:  Sample (s^k, a^k, r^k, s^{k+1}) from D (k ≤ t)
11:  w = p(a^k|s^k; θ^t) / p(a^k|s^k; θ^k)
12:  Infer ξ^k|(s^k, a^k, s^{k+1}) and η^k|(s^k, a^k)
13:  v_θ = w ( r_a + γ V̂'_{s'} f̂_a ) π_θ |_{η^k,ξ^k}
14:  Apply gradient-based update using v_θ
15: end for
4.3 SVG(1) and SVG(0)
In our framework, we may learn a parametric estimate of the expected value V̂(s; ν) (critic) with parameters ν. The derivative of the critic value with respect to the state, V̂_s, can be used in place of the sample gradient estimate given in eq. (9). The critic can reduce the variance of the gradient estimates because V̂ approximates the expectation of future rewards while eq. (9) provides only a
³In the finite-horizon formulation, the gradient calculation starts at the end of the trajectory, for which the only terms remaining in eq. (9) are v_s^T ≜ r_s^T + r_a^T π_s^T. After the recursion, the total derivative of the value function with respect to the policy parameters is given by v_θ^0, which is a one-sample estimate of ∇_θ J.
single-trajectory estimate. Additionally, the value function can be used at the end of an episode to approximate the infinite-horizon policy gradient. Finally, eq. (9) involves the repeated multiplication of Jacobians of the approximate model f̂_s, f̂_a. Just as model error can compound in forward planning, model gradient error can compound during backpropagation. Furthermore, SVG(∞) is on-policy. That is, after each episode, a single gradient-based update is made to the policy, and the policy optimization does not revisit those trajectory data again. To increase data-efficiency, we construct an off-policy, experience replay [15, 29] algorithm that uses models and value functions, SVG(1) with Experience Replay (SVG(1)-ER). This algorithm also has the advantage that it can perform an infinite-horizon computation.
To construct an off-policy estimator, we perform importance-weighting of the current policy distribution with respect to a proposal distribution, q(s, a):
V̂_θ = E_{q(s,a)} E_{p(s'|s,a)} E_{p(η,ξ|s,a,s')} [ ( p(a|s; θ) / q(a|s) ) ( r_a π_θ + γ V̂'_{s'} f̂_a π_θ ) ].   (13)
Specifically, we maintain a database with tuples of past state transitions (s^k, a^k, r^k, s^{k+1}). Each proposal drawn from q is a sample of a tuple from the database. At time t, the importance-weight w ≜ p/q = p(a^k|s^k; θ^t) / p(a^k|s^k; θ^k), where θ^k comprise the policy parameters in use at the historical time step k. We do not importance-weight the marginal distribution over states q(s) generated by a policy; this is widely considered to be intractable.
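For concreteness, a sketch (ours) of this weight for a diagonal-Gaussian policy; in practice both log-densities would come from the policy network evaluated with the current parameters θ^t and the stored parameters θ^k.

```python
import numpy as np

# Importance weight w = p(a^k | s^k; theta_t) / p(a^k | s^k; theta_k)
# for an assumed diagonal-Gaussian policy.
def gaussian_log_prob(a, mu, sigma):
    return np.sum(-0.5 * ((a - mu) / sigma) ** 2
                  - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))

def importance_weight(a_k, mu_now, sig_now, mu_then, sig_then):
    return np.exp(gaussian_log_prob(a_k, mu_now, sig_now)
                  - gaussian_log_prob(a_k, mu_then, sig_then))
```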
Similarly, we use experience replay for value function learning. Details can be found in Appendix
C. Pseudocode for the SVG(1) algorithm with Experience Replay is in Algorithm 2.
We also provide a model-free stochastic value gradient algorithm, SVG(0) (Algorithm 3 in the Appendix). This algorithm is very similar to SVG(1) and is the stochastic analogue of the recently introduced Deterministic Policy Gradient algorithm (DPG) [23, 14, 4]. Unlike DPG, instead of assuming a deterministic policy, SVG(0) estimates the derivative around the policy noise: E_{ρ(η)} [ Q_a π_θ |_η ].⁴ This, for example, permits learning policy noise variance. The relative merit of SVG(1) versus SVG(0) depends on whether the model or value function is easier to learn and is task-dependent.
We expect that model-based algorithms such as SVG(1) will show the strongest advantages in multitask settings where the system dynamics are fixed, but the reward function is variable. SVG(1)
performed well across all experiments, including ones introducing capacity constraints on the value
function and model. SVG(1)-ER demonstrated a significant advantage over all other tested algorithms.
5 Model and value learning
We can use almost any kind of differentiable, generative model. In our work, we have parameterized the models as neural networks. Our framework supports nonlinear state- and action-dependent noise, notable properties of biological actuators. For example, this can be described by the parametric form f̂(s, a, ξ) = μ̂(s, a) + σ̂(s, a)ξ. Model learning amounts to a purely supervised problem based on observed state transitions. Our model and policy training occur jointly. There is no "motor-babbling" period used to identify the model. As new transitions are observed, the model is trained first, followed by the value function (for SVG(1)), followed by the policy. To ensure that the model does not forget information about state transitions, we maintain an experience database and cull batches of examples from the database for every model update. Additionally, we model the state change by s' = f̂(s, a, ξ) + s and have found that constructing models as separate sub-networks per predicted state dimension improved model quality significantly.
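A shape-level sketch of this model structure (ours): one small sub-network per predicted state dimension, combined as f̂(s, a, ξ) = μ̂(s, a) + σ̂(s, a)ξ. Here make_subnet is an untrained stand-in for the paper's networks, and parameterizing log σ̂ is our choice for positivity, not the paper's.

```python
import numpy as np

def make_subnet(n_in, n_hidden, rng):
    # Stand-in for a small trained regressor; random weights, shape only.
    W1 = rng.normal(size=(n_hidden, n_in))
    W2 = rng.normal(size=n_hidden)
    return lambda x: W2 @ np.tanh(W1 @ x)

def predict_next_state(s, a, xi, mu_nets, log_sigma_nets):
    x = np.concatenate([s, a])
    mu = np.array([net(x) for net in mu_nets])       # one net per state dim
    sigma = np.exp([net(x) for net in log_sigma_nets])
    return s + mu + sigma * xi                       # model the state change

rng = np.random.default_rng(0)
ds, da = 4, 2
mu_nets = [make_subnet(ds + da, 10, rng) for _ in range(ds)]
log_sigma_nets = [make_subnet(ds + da, 10, rng) for _ in range(ds)]
s_next = predict_next_state(rng.normal(size=ds), rng.normal(size=da),
                            rng.normal(size=ds), mu_nets, log_sigma_nets)
```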
Our framework also permits a variety of means to learn the value function models. We can use temporal difference learning [25] or regression to empirical episode returns. Since SVG(1) is model-based, we can also use Bellman residual minimization [3]. In practice, we used a version of "fitted" policy evaluation. Pseudocode is available in Appendix C, Algorithm 4.
6 Experiments
Figure 1: From left to right: 7-Link Swimmer; Reacher; Gripper; Monoped; Half-Cheetah; Walker
⁴Note that π is a function of the state and noise variable.
We tested the SVG algorithms in two sets of experiments. In the first set of experiments (section 6.1), we test whether evaluating gradients on real environment trajectories and value function approximation can reduce the impact of model error. In our second set (section 6.2), we show that
SVG(1) can be applied to several complicated, multidimensional physics environments involving
contact dynamics (Figure 1) in the MuJoCo simulator [28]. Below we only briefly summarize the
main properties of each environment: further details of the simulations can be found in Appendix
D and supplement. In all cases, we use generic, 2 hidden-layer neural networks with tanh activation functions to represent models, value functions, and policies. A video montage is available at
https://youtu.be/PYdL7bcn_cM.
6.1 Analyzing SVG
Gradient evaluation on real trajectories vs. planning To demonstrate the difficulty of planning
with a stochastic model, we first present a very simple control problem for which SVG(∞) easily
learns a control policy but for which an otherwise identical planner fails entirely. Our example is
based on a problem due to [16]. The policy directly controls the velocity of a point-mass "hand"
on a 2D plane. By means of a spring-coupling, the hand exerts a force on a ball mass; the ball
additionally experiences a gravitational force and random forces (Gaussian noise). The goal is to
bring hand and ball into one of two randomly chosen target configurations with a relevant reward
being provided only at the final time step.
With simulation time step 0.01s, this demands controlling and backpropagating the distal reward
along a trajectory of 1,000 steps. Because this experiment has a non-stationary, time-dependent value function, this problem also favors model-based value gradients over methods using value functions. SVG(∞) easily learns this task, but the planner, which uses trajectories from the model,
shows little improvement. The planner simulates trajectories using the learned stochastic model
and backpropagates along those simulated trajectories (eqs. 9 and 10) [18]. The extremely long
time-horizon lets prediction error accumulate and thus renders roll-outs highly inaccurate, leading
to much worse final performance (c.f. Fig. 2, left).5
Robustness to degraded models and value functions We investigated the sensitivity of SVG(∞) and SVG(1) to the quality of the learned model on Swimmer. Swimmer is a chain body with multiple
links immersed in a fluid environment with drag forces that allow the body to propel itself [5, 27].
We build chains of 3, 5, or 7 links, corresponding to 10, 14, or 18-dimensional state spaces with 2,
4, or 6-dimensional action spaces. The body is initialized in random configurations with respect to
a central goal location. Thus, to solve the task, the body must turn to re-orient and then produce an
undulation to move to the goal.
To assess the impact of model quality, we learned to control a link-3 swimmer with SVG(∞) and SVG(1) while varying the capacity of the network used to model the environment (5, 10, or 20 hidden units for each state-dimension subnetwork; Appendix D); i.e., in this task we intentionally shrink the neural network model to investigate the sensitivity of our methods to model inaccuracy. While with a high capacity model (20 hidden units per state dimension) both SVG(∞) and SVG(1) successfully learn to solve the task, the performance of SVG(∞) drops significantly as model capacity is reduced (c.f. Fig. 3, middle). SVG(1) still works well for models with only 5 hidden units, and it also scales up to 5 and 7-link versions of the swimmer (Figs. 3, right and 4, left). To compare
SVG(1) to conventional model-free approaches, we also tested a state-of-the-art actor-critic algorithm that learns a V-function and updates the policy using the TD-error δ = r + γV' − V as an estimate of the advantage, yielding the policy gradient v_θ = δ ∇_θ log π [30]. (SVG(1) and the AC
algorithm used the same code for learning V .) SVG(1) outperformed the model-free approach in
the 3-, 5-, and 7-link swimmer tasks (c.f. Fig. 3, left, right; Fig. 4, top left). In figure panels 2,
middle, 3, right, and 4, left column, we show that experience replay for the policy can improve the
data efficiency and performance of SVG(1).
⁵We also tested REINFORCE on this problem but achieved very poor results due to the long horizon.
[Figure 2 panel titles: Hand; Cartpole; Cartpole]
Figure 2: Left: Backpropagation through a model along observed stochastic trajectories is able
to optimize a stochastic policy in a stochastic environment, but an otherwise equivalent planning
algorithm that simulates the transitions with a learned stochastic model makes little progress due to
compounding model error. Middle: SVG and DPG algorithms on cart-pole. SVG(1)-ER learns the
fastest. Right: When the value function capacity is reduced from 200 hidden units in the first layer to
100 and then again to 50, SVG(1) exhibits less performance degradation than the Q-function-based
DPG, presumably because the dynamics model contains auxiliary information about the Q function.
[Figure 3 panel titles: Swimmer-3; Swimmer-3; Swimmer-5]
Figure 3: Left: For a 3-link swimmer, with relatively simple dynamics, the compared methods yield similar results and possibly a slight advantage to the purely model-based SVG(∞). Middle: However, as the environment model's capacity is reduced from 20 to 10 then to 5 hidden units per state-dimension subnetwork, SVG(∞) dramatically deteriorates, whereas SVG(1) shows undisturbed performance. Right: For a 5-link swimmer, SVG(1)-ER learns faster and asymptotes at
higher performance than the other tested algorithms.
Similarly, we tested the impact of varying the capacity of the value function approximator (Fig. 2,
right) on a cart-pole. The V-function-based SVG(1) degrades less severely than the Q-function-based DPG, presumably because it computes the policy gradient with the aid of the dynamics model.
6.2 SVG in complex environments
In a second set of experiments we demonstrated that SVG(1)-ER can be applied to several challenging physical control problems with stochastic, non-linear, and discontinuous dynamics due to
contacts. Reacher is an arm stationed within a walled box with 6 state dimensions and 3 action
dimensions and the (x, y) coordinates of a target site, giving 8 state dimensions in total. In 4-Target
Reacher, the site was randomly placed at one of the four corners of the box, and the arm in a random
configuration at the beginning of each trial. In Moving-Target Reacher, the site moved at a randomized speed and heading in the box with reflections at the walls. Solving this latter problem implies
that the policy has generalized over the entire work space. Gripper augments the reacher arm with a
manipulator that can grab a ball in a randomized position and return it to a specified site. Monoped
has 14 state dimensions, 4 action dimensions, and ground contact dynamics. The monoped begins
falling from a height and must remain standing. Additionally, we apply Gaussian random noise
to the torques controlling the joints with a standard deviation of 5% of the total possible actuator
strength at all points in time, reducing the stability of upright postures. Half-Cheetah is a planar cat
robot designed to run based on [29] with 18 state dimensions and 6 action dimensions. Half-Cheetah
has a version with springs to aid balanced standing and a version without them. Walker is a planar
biped, based on the environment from [22].
Results Figure 4 shows learning curves for several repeats for each of the tasks. We found that
in all cases SVG(1) solved the problem well; we provide videos of the learned policies in the supplemental material. The 4-target reacher reliably finished at the target site, and in the tracking task
followed the moving target successfully. SVG(1)-ER has a clear advantage on this task as also borne
out in the cart-pole and swimmer experiments. The cheetah gaits varied slightly from experiment to
experiment but in all cases made good forward progress. For the monoped, the policies were able
to balance well beyond the 200 time steps of training episodes and were able to resist significantly
[Figure 4 panel titles: Swimmer-7; Monoped; Half-Cheetah; Gripper; 2D-Walker; 4-Target Reacher. y-axes: Avg. reward (arbitrary units)]
Figure 4: Across several different domains, SVG(1)-ER reliably optimizes policies, clearly settling
into similar local optima. On the 4-target Reacher, SVG(1)-ER shows a noticeable efficiency and
performance gain relative to the other algorithms.
higher adversarial noise levels than used during training (up to 25% noise). We were able to learn
gripping and walking behavior, although walking policies that achieved similar reward levels did not
always exhibit equally good walking phenotypes.
7 Related work
Writing the noise variables as exogenous inputs to the system to allow direct differentiation with
respect to the system state (equation 7) is a known device in control theory [10, 7] where the model is
given analytically. The idea of using a model to optimize a parametric policy around real trajectories
is presented heuristically in [17] and [1] for deterministic policies and models. Also in the limit of
deterministic policies and models, the recursions we have derived in Algorithm 1 reduce to those
of [2]. Werbos defines an actor-critic algorithm called Heuristic Dynamic Programming that uses a
deterministic model to roll-forward one step to produce a state prediction that is evaluated by a value
function [31]. Deisenroth et al. have used Gaussian process models to compute policy gradients that
are sensitive to model-uncertainty [6], and Levine et al. have optimized impressive policies with the
aid of a non-parametric trajectory optimizer and locally-linear models [13]. Our work in contrast
has focused on using global, neural network models conjoined to value function approximators.
8 Discussion
We have shown that two potential problems with value gradient methods, their reliance on planning
and restriction to deterministic models, can be exorcised, broadening their relevance to reinforcement learning. We have shown experimentally that the SVG framework can train neural network
policies in a robust manner to solve interesting continuous control problems. The framework includes algorithm variants beyond the ones tested in this paper, for example, ones that combine a
value function with k steps of back-propagation through a model (SVG(k)). Augmenting SVG(1)
with experience replay led to the best results, and a similar extension could be applied to any SVG(k).
Furthermore, we did not harness sophisticated generative models of stochastic dynamics, but one
could readily do so, presenting great room for growth.
Acknowledgements We thank Arthur Guez, Danilo Rezende, Hado van Hasselt, John Schulman, Jonathan
Hunt, Nando de Freitas, Martin Riedmiller, Remi Munos, Shakir Mohamed, and Theophane Weber for helpful
discussions and John Schulman for sharing his walker model.
References
[1] P. Abbeel, M. Quigley, and A. Y. Ng. Using inaccurate models in reinforcement learning. In ICML, 2006.
[2] C. G. Atkeson. Efficient robust policy optimization. In ACC, 2012.
[3] L. Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.
[4] D. Balduzzi and M. Ghifary. Compatible value gradients for reinforcement learning of continuous deep policies. arXiv preprint arXiv:1509.03005, 2015.
[5] R. Coulom. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2002.
[6] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, 2011.
[7] M. Fairbank. Value-gradient learning. PhD thesis, City University London, 2014.
[8] M. Fairbank and E. Alonso. Value-gradient learning. In IJCNN, 2012.
[9] I. Grondman. Online Model Learning Algorithms for Actor-Critic Control. PhD thesis, TU Delft, Delft University of Technology, 2015.
[10] D. H. Jacobson and D. Q. Mayne. Differential dynamic programming. 1970.
[11] M. I. Jordan and D. E. Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16(3):307-354, 1992.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[14] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[15] L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3-4):293-321, 1992.
[16] R. Munos. Policy gradient in continuous time. Journal of Machine Learning Research, 7:771-791, 2006.
[17] K. S. Narendra and K. Parthasarathy. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, 1(1):4-27, 1990.
[18] D. H. Nguyen and B. Widrow. Neural networks for self-learning control systems. IEEE Control Systems Magazine, 10(3):18-23, 1990.
[19] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
[20] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[21] M. Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pages 317-328. Springer, 2005.
[22] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
[23] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
[24] S. P. Singh. Learning without state-estimation in partially observable Markovian decision processes. In ICML, 1994.
[25] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.
[26] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1999.
[27] Y. Tassa, T. Erez, and W. D. Smart. Receding horizon differential dynamic programming. In NIPS, 2008.
[28] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IROS, 2012.
[29] P. Wawrzyński. A cat-like robot real-time learning to run. In Adaptive and Natural Computing Algorithms, pages 380-390. Springer, 2009.
[30] P. Wawrzyński. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22(10):1484-1497, 2009.
[31] P. J. Werbos. A menu of designs for reinforcement learning over time. Neural Networks for Control, pages 67-95, 1990.
[32] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
5,298 | 5,797 | Path-SGD: Path-Normalized Optimization in Deep Neural Networks
Behnam Neyshabur
Toyota Technological Institute at Chicago
bneyshabur@ttic.edu
Ruslan Salakhutdinov
Departments of Statistics and Computer Science
University of Toronto
rsalakhu@cs.toronto.edu
Nathan Srebro
Toyota Technological Institute at Chicago
nati@ttic.edu
Abstract
We revisit the choice of SGD for training deep neural networks by reconsidering
the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network,
and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is
easy and efficient to implement and leads to empirical gains over SGD and AdaGrad.
1 Introduction
Training deep networks is a challenging problem [16, 2] and various heuristics and optimization
algorithms have been suggested in order to improve the efficiency of the training [5, 9, 4]. However,
training deep architectures is still considerably slow and the problem has remained open. Many
of the current training methods rely on good initialization and then performing Stochastic Gradient
Descent (SGD), sometimes together with an adaptive stepsize or momentum term [16, 1, 6].
Revisiting the choice of gradient descent, we recall that optimization is inherently tied to a choice of
geometry or measure of distance, norm or divergence. Gradient descent, for example, is tied to the ℓ₂ norm, as it is the steepest descent with respect to the ℓ₂ norm in the parameter space, while coordinate descent corresponds to steepest descent with respect to the ℓ₁ norm, and exp-gradient (multiplicative weight) updates are tied to an entropic divergence. Moreover, at least when the objective function is
convex, convergence behavior is tied to the corresponding norms or potentials. For example, with
gradient descent, or SGD, convergence speeds depend on the `2 norm of the optimum. The norm
or divergence can be viewed as a regularizer for the updates. There is therefore also a strong link
between regularization for optimization and regularization for learning: optimization may provide
implicit regularization in terms of its corresponding geometry, and for ideal optimization performance the optimization geometry should be aligned with inductive bias driving the learning [14].
Is the `2 geometry on the weights the appropriate geometry for the space of deep networks? Or
can we suggest a geometry with more desirable properties that would enable faster optimization and
perhaps also better implicit regularization? As suggested above, this question is also linked to the
choice of an appropriate regularizer for deep networks.
Focusing on networks with RELU activations, we observe that scaling down the incoming edges to
a hidden unit and scaling up the outgoing edges by the same factor yields an equivalent network
[Figure 1 panels: (a) Training on MNIST; (b) Weight explosion in an unbalanced network; (c) Poor updates in an unbalanced network]
Figure 1: (a): Evolution of the cross-entropy error function when training a feed-forward network on MNIST with two hidden layers, each containing 4000 hidden units. The unbalanced initialization (blue curve) is generated by applying a sequence of rescaling functions on the balanced initialization (red curve). (b): Updates for a simple case where the input is x = 1, thresholds are set to zero (constant), the stepsize is 1, and the gradient with respect to the output is δ = −1. (c): Updated network for the case where the input is x = (1, 1), thresholds are set to zero (constant), the stepsize is 1, and the gradient with respect to the output is δ = (−1, −1).
computing the same function. Since predictions are invariant to such rescalings, it is natural to seek
a geometry, and corresponding optimization method, that is similarly invariant.
We consider here a geometry inspired by max-norm regularization (regularizing the maximum norm
of incoming weights into any unit) which seems to provide a better inductive bias compared to the
ℓ₂ norm (weight decay) [3, 15]. But to achieve rescaling invariance, we use not the max-norm itself,
but rather the minimum max-norm over all rescalings of the weights. We discuss how this measure
can be expressed as a "path regularizer" and can be computed efficiently.
We therefore suggest a novel optimization method, Path-SGD, that is an approximate steepest descent method with respect to path regularization. Path-SGD is rescaling-invariant and we demonstrate that Path-SGD outperforms gradient descent and AdaGrad for classification tasks on several benchmark datasets.
Notations A feedforward neural network that computes a function f : ℝ^D → ℝ^C can be represented by a directed acyclic graph (DAG) G(V, E) with D input nodes v_in[1], ..., v_in[D] ∈ V, C output nodes v_out[1], ..., v_out[C] ∈ V, weights w : E → ℝ and an activation function σ : ℝ → ℝ that is applied on the internal nodes (hidden units). We denote the function computed by this network as f_{G,w,σ}. In this paper we focus on the RELU (REctified Linear Unit) activation function σ_RELU(x) = max{0, x}. We refer to the depth d of the network, which is the length of the longest directed path in G. For any 0 ≤ i ≤ d, we define V_in^i to be the set of vertices with longest path of length i to an input unit, and V_out^i is defined similarly for paths to output units. In layered networks, V_in^i = V_out^{d−i} is the set of hidden units in hidden layer i.
2 Rescaling and Unbalanceness
One of the special properties of the RELU activation function is non-negative homogeneity. That is, for any scalar c ≥ 0 and any x ∈ ℝ, we have σ_RELU(c · x) = c · σ_RELU(x). This interesting property allows the network to be rescaled without changing the function computed by the network. We define the rescaling function ρ_{c,v}(w), such that given the weights of the network w : E → ℝ, a constant c > 0, and a node v, the rescaling function multiplies the incoming edges and divides the outgoing edges of v by c. That is, ρ_{c,v}(w) maps w to the weights w̃ for the rescaled network, where for any (u₁ → u₂) ∈ E:
w̃_{(u₁→u₂)} = { c · w_{(u₁→u₂)} if u₂ = v;  (1/c) · w_{(u₁→u₂)} if u₁ = v;  w_{(u₁→u₂)} otherwise. }   (1)
It is easy to see that the rescaled network computes the same function, i.e. f_{G,w,σ_RELU} = f_{G,ρ_{c,v}(w),σ_RELU}. We say that the two networks with weights w and w̃ are rescaling equivalent, denoted by w ∼ w̃, if and only if one of them can be transformed to the other by applying a sequence of rescaling functions ρ_{c,v}.
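A quick numerical check of this equivalence for a two-layer RELU network (a sketch under our own naming; row v of W1 holds the incoming edges of hidden unit v, column v of W2 its outgoing edges):

```python
import numpy as np

def rescale(W1, W2, v, c):
    # rho_{c,v} of eq. (1): scale incoming edges of hidden unit v by c,
    # and its outgoing edges by 1/c.
    W1, W2 = W1.copy(), W2.copy()
    W1[v, :] *= c
    W2[:, v] /= c
    return W1, W2

def forward(W1, W2, x):
    return W2 @ np.maximum(0.0, W1 @ x)   # two-layer RELU network

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)
W1r, W2r = rescale(W1, W2, v=1, c=10.0)
print(np.allclose(forward(W1, W2, x), forward(W1r, W2r, x)))  # True
```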
Given a training set S = {(x₁, y₁), ..., (x_n, y_n)}, our goal is to minimize the following objective function:
L(w) = (1/n) Σ_{i=1}^n ℓ( f_w(x_i), y_i ).   (2)
Let w^(t) be the weights at step t of the optimization. We consider update steps of the following form: w^(t+1) = w^(t) + Δw^(t+1). For example, for gradient descent, we have Δw^(t+1) = −η∇L(w^(t)), where η is the step-size. In the stochastic setting, such as SGD or mini-batch gradient descent, we calculate the gradient on a small subset of the training set.
Since rescaling equivalent networks compute the same function, it is desirable to have an update rule
that is not affected by rescaling. We call an optimization method rescaling invariant if the updates
of rescaling equivalent networks are rescaling equivalent. That is, if we start at either one of the two
rescaling equivalent weight vectors w
? (0) ? w(0) , after applying t update steps separately on w
? (0)
and w(0) , they will remain rescaling equivalent and we have w
? (t) ? w(t) .
Unfortunately, gradient descent is not rescaling invariant. The main problem with the gradient updates is that scaling down the weights of an edge will also scale up the gradient which, as we see
later, is exactly the opposite of what is expected from a rescaling invariant update.
Furthermore, gradient descent performs very poorly on "unbalanced" networks. We say that a network is balanced if the norms of incoming weights to different units are roughly the same or within a small range. For example, Figure 1(a) shows a huge gap in the performance of SGD initialized with a randomly generated balanced network w^(0), when training on MNIST, compared to a network initialized with unbalanced weights w̃^(0). Here w̃^(0) is generated by applying a sequence of random rescaling functions on w^(0) (and therefore w^(0) ∼ w̃^(0)).
In an unbalanced network, gradient descent updates could blow up the smaller weights, while keeping the larger weights almost unchanged. This is illustrated in Figure 1(b). If this were the only
issue, one could scale down all the weights after each update. However, in an unbalanced network,
the relative changes in the weights are also very different compared to a balanced network. For
example, Figure 1(c) shows how two rescaling equivalent networks could end up computing a very
different function after only a single update.
3
Magnitude/Scale measures for deep networks
Following [12], we consider the grouping of weights going into each node of the network. This
forms the following generic group-norm type regularizer, parametrized by 1 ? p, q ? ?:
?
?
?q/p ?1/q
p
? X ? X
?
?p,q (w) = ?
w(u?v) ? ? .
(3)
v?V
(u?v)?E
Two simple cases of above group-norm are p = q = 1 and p = q = 2 that correspond to overall
`1 regularization and weight decay respectively. Another form of regularization that is shown to
be very effective in RELU networks is the max-norm regularization, which is the maximum over
all units of norm of incoming edge to the unit1 [3, 15]. The max-norm correspond to ?per-unit?
regularization when we set q = ? in equation (4) and can be written in the following form:
?
?1/p
X
p
w(u?v) ?
?p,? (w) = sup ?
(4)
v?V
(u?v)?E
¹This definition of max-norm is a bit different than the one used in the context of matrix factorization [13]. The latter is similar to the minimum upper bound over the ℓ₂ norm of both outgoing edges from the input units and incoming edges to the output units in a two layer feed-forward network.
Weight decay is probably the most commonly used regularizer. On the other hand, per-unit regularization might not seem ideal as it is very extreme in the sense that the value of regularizer corresponds to the highest value among all nodes. However, the situation is very different for networks
with RELU activations (and other activation functions with non-negative homogeneity property). In
these cases, per-unit `2 regularization has shown to be very effective [15]. The main reason could be
because RELU networks can be rebalanced in such a way that all hidden units have the same norm.
Hence, per-unit regularization will not be a crude measure anymore.
Since μ_{p,∞} is not rescaling invariant and the values of the scale measure are different for rescaling equivalent networks, it is desirable to look for the minimum value of a regularizer among all
rescaling equivalent networks. Surprisingly, for a feed-forward network, the minimum `p per-unit
regularizer among all rescaling equivalent networks can be efficiently computed by a single forward
step. To see this, we consider the vector π(w), the path vector, where the number of coordinates of π(w) is equal to the total number of paths from the input to output units and each coordinate of π(w) is equal to the product of weights along a path from an input node to an output node. The ℓ_p-path regularizer is then defined as the ℓ_p norm of π(w) [12]:
φ_p(w) = ‖π(w)‖_p = ( Σ_{v_in[i] →^{e₁} v₁ →^{e₂} v₂ ⋯ →^{e_d} v_out[j]} | Π_{k=1}^d w_{e_k} |^p )^{1/p}.   (5)
The following Lemma establishes that the ℓ_p-path regularizer corresponds to the minimum over all equivalent networks of the per-unit ℓ_p norm:
Lemma 3.1 ([12]). φ_p(w) = min_{w̃ ∼ w} ( μ_{p,∞}(w̃) )^d
The definition (5) of the ℓ_p-path regularizer involves an exponential number of terms. But it can be computed efficiently by dynamic programming in a single forward step using the following equivalent form as nested sums:
φ_p(w) = ( Σ_{(v_in[i]→v₁)∈E} |w_{(v_in[i]→v₁)}|^p Σ_{(v₁→v₂)∈E} |w_{(v₁→v₂)}|^p ⋯ Σ_{(v_{d−1}→v_out[j])∈E} |w_{(v_{d−1}→v_out[j])}|^p )^{1/p}
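For a layered network this forward dynamic program is only a few lines; the sketch below (ours, assuming weight matrices of shape fan-out × fan-in) propagates the accumulated |w|^p path mass layer by layer:

```python
import numpy as np

def path_regularizer(weights, p=2.0):
    # phi_p(w) of eq. (5) via the nested-sum forward recursion: gamma(v)
    # accumulates, over all paths into v, the product of |w_e|^p.
    gamma = np.ones(weights[0].shape[1])     # one entry per input unit
    for W in weights:
        gamma = (np.abs(W) ** p) @ gamma
    return np.sum(gamma) ** (1.0 / p)
```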
A straightforward consequence of Lemma 3.1 is that the ℓ_p path-regularizer φ_p is invariant to rescaling, i.e. for any w̃ ∼ w, φ_p(w̃) = φ_p(w).
4 Path-SGD: An Approximate Path-Regularized Steepest Descent
Motivated by the empirical performance of max-norm regularization and the fact that the path-regularizer is invariant to rescaling, we are interested in deriving the steepest descent direction with respect to the path regularizer φ_p(w):
w^(t+1) = arg min_w  η ⟨∇L(w^(t)), w⟩ + (1/2) ‖π(w) − π(w^(t))‖_p²   (6)
        = arg min_w  η ⟨∇L(w^(t)), w⟩ + (1/2) ( Σ_{v_in[i] →^{e₁} v₁ →^{e₂} v₂ ⋯ →^{e_d} v_out[j]} | Π_{k=1}^d w_{e_k} − Π_{k=1}^d w_{e_k}^(t) |^p )^{2/p}
        ≜ arg min_w J^(t)(w)
The steepest descent step (6) is hard to calculate exactly. Instead, we will update each coordinate w_e independently (and synchronously) based on (6). That is:
w_e^(t+1) = arg min_{w_e} J^(t)(w)   s.t.  ∀_{e′≠e}  w_{e′} = w_{e′}^(t)   (7)
Taking the partial derivative with respect to w_e and setting it to zero, we obtain:
0 = η (∂L/∂w_e)(w^(t)) + ( w_e − w_e^(t) ) ( Σ_{v_in[i] → ⋯ e ⋯ → v_out[j]} Π_{e′≠e} |w_{e′}^(t)|^p )^{2/p}
where v_in[i] → ⋯ e ⋯ → v_out[j] denotes the paths from any input unit i to any output unit j that include e. Solving for w_e gives us the following update rule:
w_e^(t+1) = w_e^(t) − ( η / γ_p(w^(t), e) ) (∂L/∂w_e)(w^(t))   (8)
where γ_p(w, e) is given as
γ_p(w, e) = ( Σ_{v_in[i] → ⋯ e ⋯ → v_out[j]} Π_{e′≠e} |w_{e′}|^p )^{2/p}   (9)
We call the optimization using the update rule (8) path-normalized gradient descent. When used in stochastic settings, we refer to it as Path-SGD.

Algorithm 1 Path-SGD update rule
1: ∀ v ∈ V_in^0:  γ_in(v) = 1                                  ▷ Initialization
2: ∀ v ∈ V_out^0: γ_out(v) = 1
3: for i = 1 to d do
4:   ∀ v ∈ V_in^i:  γ_in(v) = Σ_{(u→v)∈E} γ_in(u) |w_{(u,v)}|^p
5:   ∀ v ∈ V_out^i: γ_out(v) = Σ_{(v→u)∈E} |w_{(v,u)}|^p γ_out(u)
6: end for
7: ∀ (u→v) ∈ E: γ(w^(t), (u,v)) = γ_in(u)^{2/p} γ_out(v)^{2/p}
8: ∀ e ∈ E: w_e^(t+1) = w_e^(t) − ( η / γ(w^(t), e) ) (∂L/∂w_e)(w^(t))   ▷ Update Rule
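A compact sketch of Algorithm 1 for the layered case (our simplification of the general DAG version; the learning rate and shapes are illustrative):

```python
import numpy as np

def path_sgd_step(weights, grads, lr=0.1, p=2.0):
    # weights[i]: layer-i matrix of shape (fan_out, fan_in); grads[i]:
    # dL/dW for that layer. gin[i] / gout[i] hold gamma_in / gamma_out at
    # the nodes of layer i (alg. lines 1-6).
    gin = [np.ones(weights[0].shape[1])]
    for W in weights:                        # forward accumulation (line 4)
        gin.append((np.abs(W) ** p) @ gin[-1])
    gout = [np.ones(weights[-1].shape[0])]
    for W in reversed(weights):              # backward accumulation (line 5)
        gout.append(gout[-1] @ (np.abs(W) ** p))
    gout.reverse()
    updated = []
    for i, (W, G) in enumerate(zip(weights, grads)):
        # Edge (u -> v): scaling gamma_in(u)^(2/p) * gamma_out(v)^(2/p)
        scale = np.outer(gout[i + 1] ** (2.0 / p), gin[i] ** (2.0 / p))
        updated.append(W - lr * G / scale)   # alg. line 8
    return updated
```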
Now that we know Path-SGD is an approximate steepest descent with respect to the path-regularizer, we can ask whether or not this makes Path-SGD a rescaling invariant optimization method. The next theorem proves that Path-SGD is indeed rescaling invariant.
Theorem 4.1. Path-SGD is rescaling invariant.
Proof. It is sufficient to prove that using the update rule (8), for any c > 0 and any v ∈ V, if w̃^(t) = ρ_{c,v}(w^(t)), then w̃^(t+1) = ρ_{c,v}(w^(t+1)). For any edge e in the network, if e is neither an incoming nor an outgoing edge of the node v, then w̃(e) = w(e), and since the gradient is also the same for edge e, we have w̃_e^(t+1) = w_e^(t+1). However, if e is an incoming edge to v, we have that w̃^(t)(e) = c w^(t)(e). Moreover, since the outgoing edges of v are divided by c, we get γ_p(w̃^(t), e) = γ_p(w^(t), e)/c² and (∂L/∂w_e)(w̃^(t)) = (1/c)(∂L/∂w_e)(w^(t)). Therefore,
w̃_e^(t+1) = c w_e^(t) − ( c² η / γ_p(w^(t), e) ) (1/c)(∂L/∂w_e)(w^(t)) = c ( w_e^(t) − ( η / γ_p(w^(t), e) ) (∂L/∂w_e)(w^(t)) ) = c w_e^(t+1).
A similar argument proves the invariance of the Path-SGD update rule for outgoing edges of v. Therefore, Path-SGD is rescaling invariant.
Efficient Implementation: The Path-SGD update rule (8), in the way it is written, needs to consider all the paths, which is exponential in the depth of the network. However, it can be calculated in
a time that is no more than a forward-backward step on a single data point. That is, in a mini-batch
setting with batch size B, if the backpropagation on the mini-batch can be done in time BT, the running time of Path-SGD on the mini-batch will be roughly (B + 1)T, a very moderate runtime
increase with typical mini-batch sizes of hundreds or thousands of points. Algorithm 1 shows an
efficient implementation of the Path-SGD update rule.
We next compare Path-SGD to other optimization methods in both balanced and unbalanced settings.
Table 1: General information on datasets used in the experiments.
Data Set     Dimensionality            Classes   Training Set   Test Set
CIFAR-10     3072 (32 × 32 color)      10        50000          10000
CIFAR-100    3072 (32 × 32 color)      100       50000          10000
MNIST        784 (28 × 28 grayscale)   10        60000          10000
SVHN         3072 (32 × 32 color)      10        73257          26032
5 Experiments
In this section, we compare ℓ₂-Path-SGD to two commonly used optimization methods in deep learning, SGD and AdaGrad. We conduct our experiments on four common benchmark datasets: the standard MNIST dataset of handwritten digits [8]; CIFAR-10 and CIFAR-100 datasets of tiny images
of natural scenes [7]; and Street View House Numbers (SVHN) dataset containing color images of
house numbers collected by Google Street View [10]. Details of the datasets are shown in Table 1.
In all of our experiments, we trained feed-forward networks with two hidden layers, each containing
4000 hidden units. We used mini-batches of size 100 and a step-size of 10^{−α}, where α is an integer between 0 and 10. To choose α, for each dataset, we considered the validation errors over
the validation set (10000 randomly chosen points that are kept out during the initial training) and
picked the one that reaches the minimum error faster. We then trained the network over the entire
training set. All the networks were trained both with and without dropout. When training with
dropout, at each update step, we retained each unit with probability 0.5.
We tried both balanced and unbalanced initializations. In balanced initialization, incoming weights to each unit v are initialized to i.i.d. samples from a Gaussian distribution with standard deviation √(1/fan-in(v)). In the unbalanced setting, we first initialized the weights to be the same as the balanced weights. We then picked 2000 hidden units randomly with replacement. For each unit, we multiplied its incoming edges and divided its outgoing edges by 10c, where c was chosen randomly from a log-normal distribution.
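A sketch of generating the two initializations for the first two layers (ours; layer sizes and the MNIST input dimension are for illustration). The per-unit rescalings preserve the computed function, as in the rescale() check above.

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, n_hidden, n_out = 784, 4000, 10
# Balanced: i.i.d. Gaussian with std sqrt(1 / fan-in(v)).
W1 = rng.normal(scale=np.sqrt(1.0 / fan_in), size=(n_hidden, fan_in))
W2 = rng.normal(scale=np.sqrt(1.0 / n_hidden), size=(n_out, n_hidden))
# Unbalanced: rescale 2000 randomly chosen hidden units by 10c with
# c ~ log-normal (incoming edges multiplied, outgoing edges divided).
for u in rng.choice(n_hidden, size=2000, replace=True):
    c = 10.0 * rng.lognormal()
    W1[u, :] *= c
    W2[:, u] /= c
```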
The optimization results without dropout are shown in Figure 2. For each of the four datasets, the
plots for objective function (cross-entropy), the training error and the test error are shown from
left to right where in each plot the values are reported on different epochs during the optimization.
Although we proved that Path-SGD updates are the same for balanced and unbalanced initializations, to verify that despite numerical issues they are indeed identical, we trained Path-SGD with both
balanced and unbalanced initializations. Since the curves were exactly the same we only show a
single curve.
We can see that as expected, the unbalanced initialization considerably hurts the performance of
SGD and AdaGrad (in many cases their training and test errors are not even in the range of the plot
to be displayed), while Path-SGD performs essentially the same. Another interesting observation is that even in the balanced settings, not only does Path-SGD often get to the same value of the objective function, training error and test error faster, but also the final generalization error for Path-SGD is sometimes considerably lower than for SGD and AdaGrad (except CIFAR-100, where the generalization error
for SGD is slightly better compared to Path-SGD). The plots for test errors could also imply that
implicit regularization due to steepest descent with respect to path-regularizer leads to a solution that
generalizes better. This view is similar to observations in [11] on the role of implicit regularization
in deep learning.
The results for training with dropout are shown in Figure 3, where here we suppressed the (very poor)
results on unbalanced initializations. We observe that except for MNIST, Path-SGD converges
much faster than SGD or AdaGrad. It also generalizes better to the test set, which again shows the
effectiveness of path-normalized updates.
The results suggest that Path-SGD outperforms SGD and AdaGrad in two different ways. First, it can achieve the same accuracy much faster; second, the implicit regularization by Path-SGD leads to a local minimum that can generalize better even when the training error is zero. This can be better analyzed by looking at the plots for more epochs, which we have provided in the supplementary material. We should also point out that Path-SGD can be easily combined with AdaGrad to take
6
[Figure 2 plots: columns = cross-entropy training loss, 0/1 training error, 0/1 test error; rows = CIFAR-10, CIFAR-100, MNIST, SVHN; curves = Path-SGD, SGD, AdaGrad, each balanced and unbalanced; x-axis = Epoch]
Figure 2: Learning curves using different optimization methods for 4 datasets without dropout. Left panel displays the cross-entropy objective function; middle and right panels show the corresponding values of the training and test errors, where the values are reported on different epochs during the course of optimization. Best viewed in color.
advantage of the adaptive stepsize or used together with a momentum term. This could potentially
perform even better compared to Path-SGD.
6 Discussion
We revisited the choice of the Euclidean geometry on the weights of RELU networks, suggested an
alternative optimization method approximately corresponding to a different geometry, and showed
that using such an alternative geometry can be beneficial. In this work we show proof-of-concept
success, and we expect Path-SGD to be beneficial also in large-scale training for very deep convolutional networks. Combining Path-SGD with AdaGrad, with momentum or with other optimization
heuristics might further enhance results.
Although we do believe Path-SGD is a very good optimization method, and is an easy plug-in for
SGD, we hope this work will also inspire others to consider other geometries, other regularizers and
perhaps better, update rules. A particular property of Path-SGD is its rescaling invariance, which we
[Figure 3 plots: columns = cross-entropy training loss, 0/1 training error, 0/1 test error; rows = CIFAR-10, CIFAR-100, MNIST, SVHN; curves = Path-SGD + Dropout, SGD + Dropout, AdaGrad + Dropout; x-axis = Epoch]
Figure 3: Learning curves using different optimization methods for 4 datasets with dropout. Left panel displays the cross-entropy objective function; middle and right panels show the corresponding values of the training and test errors. Best viewed in color.
A particular property of Path-SGD is its rescaling invariance, which we argue is appropriate for RELU networks. But Path-SGD is certainly not the only rescaling-invariant update possible, and other invariant geometries might be even better.
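As a quick check of this invariance (a self-contained sketch, not the paper's code), rescaling the incoming weights of a hidden ReLU unit by c > 0 and its outgoing weights by 1/c leaves the network function unchanged:

```python
import numpy as np

def relu_net(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)   # one hidden ReLU layer

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((20, 5)), rng.standard_normal((3, 20))
x = rng.standard_normal(5)

c, unit = 10.0, 7                          # rebalance hidden unit 7
W1r, W2r = W1.copy(), W2.copy()
W1r[unit, :] *= c                          # incoming weights scaled by c
W2r[:, unit] /= c                          # outgoing weights scaled by 1/c

print(np.allclose(relu_net(x, W1, W2), relu_net(x, W1r, W2r)))  # True
```

Vanilla gradient descent takes different steps from these two equivalent parameterizations, whereas a rescaling-invariant update such as Path-SGD does not distinguish them.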
Path-SGD can also be viewed as a tractable approximation to natural gradient, which ignores the activations, the input distribution and dependencies between different paths. Natural gradient updates
are also invariant to rebalancing but are generally computationally intractable.
Finally, we choose to use steepest descent because of its simplicity of implementation. A better
choice might be mirror descent with respect to an appropriate potential function, but such a construction seems particularly challenging considering the non-convexity of neural networks.
Acknowledgments
Research was partially funded by NSF award IIS-1302662 and Intel ICRI-CI. We thank Ryota
Tomioka and Hao Tang for insightful discussions and Leon Bottou for pointing out the connection
to natural gradient.
References
[1] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[2] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[3] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C. Courville, and Yoshua Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning, ICML, pages 1319–1327, 2013.
[4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv preprint arXiv:1502.01852, 2015.
[5] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[6] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[7] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.
[8] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[9] James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, 2015.
[10] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[11] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. International Conference on Learning Representations (ICLR) workshop track, 2015.
[12] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. COLT, 2015.
[13] Nathan Srebro and Adi Shraibman. Rank, trace-norm and max-norm. In Learning Theory, pages 545–560. Springer, 2005.
[14] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. On the universality of online mirror descent. In Advances in Neural Information Processing Systems, pages 2645–2653, 2011.
[15] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[16] I. Sutskever, J. Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Learning with Group Invariant Features:
A Kernel Perspective.
Youssef Mroueh
IBM Watson Group
mroueh@us.ibm.com
Stephen Voinea*
CBMM, MIT.
voinea@mit.edu
*Co-first author
Tomaso Poggio
CBMM, MIT.
tp@ai.mit.edu
Abstract
We analyze in this paper a random feature map based on a theory of invariance
(I-theory) introduced in [1]. More specifically, a group invariant signal signature
is obtained through cumulative distributions of group-transformed random projections. Our analysis bridges invariant feature learning with kernel methods, as
we show that this feature map defines an expected Haar-integration kernel that is
invariant to the specified group action. We show how this non-linear random feature map approximates this group invariant kernel uniformly on a set of N points.
Moreover, we show that it defines a function space that is dense in the equivalent
Invariant Reproducing Kernel Hilbert Space. Finally, we quantify error rates of
the convergence of the empirical risk minimization, as well as the reduction in the
sample complexity of a learning algorithm using such an invariant representation
for signal classification, in a classical supervised learning setting.
1 Introduction
Encoding signals or building similarity kernels that are invariant to the action of a group is a key
problem in unsupervised learning, as it reduces the complexity of the learning task and mimics how
our brain represents information invariantly to symmetries and various nuisance factors (change in
lighting in image classification and pitch variation in speech recognition) [1, 2, 3, 4]. Convolutional
neural networks [5, 6] achieve state of the art performance in many computer vision and speech
recognition tasks, but require a large amount of labeled examples as well as augmented data, where
we reflect symmetries of the world through virtual examples [7, 8] obtained by applying identity-preserving transformations such as shearing, rotation, translation, etc., to the training data. In this
work, we adopt the approach of [1], where the representation of the signal is designed to reflect
the invariant properties and model the world symmetries with group actions. The ultimate aim is
to bridge unsupervised learning of invariant representations with invariant kernel methods, where
we can use tools from classical supervised learning to easily address the statistical consistency and
sample complexity questions [9, 10]. Indeed, many invariant kernel methods and related invariant
kernel networks have been proposed. We refer the reader to the related work section for a review
(Section 5), and we start by showing how to accomplish this invariance through group-invariant Haar-integration kernels [11], and then show how random features derived from a memory-based theory
of invariances introduced in [1] approximate such a kernel.
1.1 Group Invariant Kernels
We start by reviewing group-invariant Haar-integration kernels introduced in [11], and their use in a
binary classification problem. This section highlights the conceptual advantages of such kernels as
well as their practical inconvenience, putting into perspective the advantage of approximating them
with explicit and invariant random feature maps.
Invariant Haar-Integration Kernels. We consider a subset X of the hypersphere in d dimensions, S^{d−1}. Let ρ_X be a measure on X. Consider a kernel k_0 on X, such as a radial basis function kernel. Let G be a group acting on X, with a normalized Haar measure η. G is assumed to be a compact and unitary group. Define an invariant kernel K between x, z ∈ X through Haar integration [11] as follows:

K(x, z) = ∫_G ∫_G k_0(gx, g′z) dη(g) dη(g′).   (1)

As we are integrating over the entire group, it is easy to see that K(g′x, gz) = K(x, z), ∀g, g′ ∈ G, ∀x, z ∈ X. Hence the Haar-integration kernel is invariant to the group action. The symmetry of K is obvious. Moreover, if k_0 is a positive definite kernel, it follows that K is positive definite as well [11]. One can see the Haar-integration kernel framework as another form of data augmentation, since we have to produce group-transformed points in order to compute the kernel.
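For intuition, such a kernel can be approximated by Monte Carlo: sample group elements, transform both arguments, and average the base kernel. The sketch below does this for the 2D rotation group SO(2) with an RBF base kernel; the group, base kernel, and sample sizes are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def rotate(theta, x):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

def haar_kernel(x, z, n_samples=2000, seed=2):
    """Monte Carlo estimate of K(x, z) = E_{g,g'} k0(g x, g' z) over SO(2)."""
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(0.0, 2 * np.pi, size=(n_samples, 2))
    return float(np.mean([rbf(rotate(a, x), rotate(b, z)) for a, b in thetas]))

x = np.array([1.0, 0.0])
z = np.array([0.0, 1.0])
print(haar_kernel(x, z), haar_kernel(rotate(0.3, x), z))  # approximately equal
```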
Invariant Decision Boundary. Turning now to a binary classification problem, we assume that we are given a labeled training set S = {(x_i, y_i) | x_i ∈ X, y_i ∈ Y = {±1}}_{i=1}^N. In order to learn a decision function f : X → Y, we minimize the following empirical risk induced by an L-Lipschitz, convex loss function V with V′(0) < 0 [12]:

min_{f ∈ H_K} Ê_V(f) := (1/N) Σ_{i=1}^N V(y_i f(x_i)),

where we restrict f to belong to the hypothesis class induced by the invariant kernel K, the so-called Reproducing Kernel Hilbert Space H_K. The representer theorem [13] shows that the solution of such a problem, i.e. the optimal decision boundary f*_N, has the following form: f*_N(x) = Σ_{i=1}^N α*_i K(x, x_i). Since the kernel K is group-invariant, it follows that

f*_N(gx) = Σ_{i=1}^N α_i K(gx, x_i) = Σ_{i=1}^N α_i K(x, x_i) = f*_N(x), ∀g ∈ G.

Hence the decision boundary f* is group-invariant as well, and we have f*_N(gx) = f*_N(x), ∀g ∈ G, ∀x ∈ X.
Reduced Sample Complexity. We have shown that a group-invariant kernel induces a group-invariant decision boundary, but how does this translate to the sample complexity of the learning algorithm? To answer this question, we will assume that the input set X has the following structure: X = X_0 ∪ GX_0, where GX_0 = {z | z = gx, x ∈ X_0, g ∈ G \ {e}} and e is the identity group element. This structure implies that for a function f in the invariant RKHS H_K we have:

∀z ∈ GX_0, ∃x ∈ X_0, ∃g ∈ G such that z = gx, and f(z) = f(x).

Let ρ_y(x) = P(Y = y | x) be the label posteriors. We assume that ρ_y(gx) = ρ_y(x), ∀g ∈ G. This is a natural assumption, since the label is unchanged by the group action. Assume that the set X is endowed with a measure ρ_X that is also group-invariant. Let f be the group-invariant decision function and consider the expected risk induced by the loss V, E_V(f), defined as follows:

E_V(f) = ∫_X Σ_{y∈Y} V(y f(x)) ρ_y(x) ρ_X(x) dx.   (2)

E_V(f) is a proxy for the misclassification risk [12]. Using the invariance properties of the function class and the data distribution, we have, by invariance of f, ρ_y, and ρ_X:

E_V(f) = ∫_{X_0} Σ_{y∈Y} V(yf(x)) ρ_y(x) ρ_X(x) dx + ∫_{GX_0} Σ_{y∈Y} V(yf(z)) ρ_y(z) ρ_X(z) dz
       = ∫_G dη(g) ∫_{X_0} Σ_{y∈Y} V(yf(gx)) ρ_y(gx) ρ_X(x) dx
       = ∫_G dη(g) ∫_{X_0} Σ_{y∈Y} V(yf(x)) ρ_y(x) ρ_X(x) dx   (by invariance of f, ρ_y, and ρ_X)
       = ∫_{X_0} Σ_{y∈Y} V(yf(x)) ρ_y(x) ρ_X(x) dx.

Hence, given a kernel invariant to a group action that is identity-preserving on the labels, it is sufficient to minimize the empirical risk on the core set X_0, and the solution generalizes to samples in GX_0.

Let us imagine that X is finite with cardinality |X|; the cardinality of the core set X_0 is a small fraction of the cardinality of X: |X_0| = α|X|, where 0 < α < 1. Hence, when we sample training points from X_0, the maximum size of the training set is N = α|X| << |X|, yielding a reduction in the sample complexity.
1.2 Contributions
We have just reviewed the group-invariant Haar-integration kernel. In summary, a group-invariant kernel implies the existence of a decision function that is invariant to the group action, as well as a reduction in the sample complexity due to sampling training points from a reduced set, a.k.a. the core set X_0.
Kernel methods with Haar-integration kernels come at a very expensive computational price at both training and test time: computing the kernel is computationally cumbersome, as we have to integrate over the group and produce virtual examples by transforming points explicitly through the group action. Moreover, the training complexity of kernel methods scales cubically in the sample size. These practical considerations make the usefulness of such kernels very limited.
The contributions of this paper are the following:
1. We first show that a non-linear random feature map Φ : X → R^D derived from a memory-based theory of invariances introduced in [1] induces an expected group-invariant Haar-integration kernel K. For fixed points x, z ∈ X, we have E⟨Φ(x), Φ(z)⟩ = K(x, z), where K satisfies K(gx, g′z) = K(x, z), ∀g, g′ ∈ G, x, z ∈ X.
2. We show a Johnson–Lindenstrauss type result that holds uniformly on a set of N points and assesses the concentration of this random feature map around its expected induced kernel. For sufficiently large D, we have ⟨Φ(x), Φ(z)⟩ ≈ K(x, z), uniformly on a set of N points.
3. We show that, with a linear model, an invariant decision function can be learned in this random feature space by sampling points from the core set X_0, i.e. f*_N(x) ≈ ⟨w*, Φ(x)⟩, and generalizes to unseen points in GX_0, reducing the sample complexity. Moreover, we show that these features define a function space that approximates a dense subset of the invariant RKHS, and we assess the error rates of the empirical risk minimization using such random features.
4. We demonstrate the validity of these claims on three datasets: text (artificial), vision (MNIST), and speech (TIDIGITS).
2 From Group Invariant Kernels to Feature Maps
In this paper we show that a random feature map based on I-theory [1], Φ : X → R^D, approximates a group-invariant Haar-integration kernel K of the form given in Equation (1):

⟨Φ(x), Φ(z)⟩ ≈ K(x, z).

We start with some notation that will be useful for defining the feature map. Denote the cumulative distribution function (CDF) of a random variable X by

F_X(τ) = P(X ≤ τ).

Fix x ∈ X, let g ∈ G be a random variable drawn according to the normalized Haar measure η, and let t be a random template whose distribution will be defined later. For s > 0, define the following truncated cumulative distribution function of the dot product ⟨x, gt⟩:

ψ(x, t, τ) = P_g(⟨x, gt⟩ ≤ τ) = F_{⟨x,gt⟩}(τ), τ ∈ [−s, s], x ∈ X.

Let δ ∈ (0, 1). We consider the following Gaussian vectors (sampling with rejection) for the templates t:

t = n ∼ N(0, (1/d) I_d) if ‖n‖_2^2 < 1 + δ; otherwise the draw is rejected.

The reason behind this sampling is to keep the range of ⟨x, gt⟩ under control: the squared norm ‖n‖_2^2 is bounded by 1 + δ with high probability, by a classical concentration result (see the proof of Theorem 1 for more details). The group being unitary and x ∈ S^{d−1}, we know that |⟨x, gt⟩| ≤ ‖n‖_2 < √(1 + δ) ≤ 1 + δ, for δ ∈ (0, 1).

Remark 1. We can also consider templates t drawn uniformly on the unit sphere S^{d−1}. Uniform templates on the sphere can be drawn as follows:

t = ζ / ‖ζ‖_2, ζ ∼ N(0, I_d).

Since the norm of a Gaussian vector is highly concentrated around its mean √d, we can use the Gaussian sampling with rejection. Results proved for Gaussian templates (with rejection) hold true for templates drawn uniformly on the sphere, with different constants.

Define the following kernel function:

K_s(x, z) = E_t ∫_{−s}^{s} ψ(x, t, τ) ψ(z, t, τ) dτ,

where s is fixed throughout the paper to s = 1 + δ, since the Gaussian sampling with rejection controls the dot product to lie in that range.

Let g̃ ∈ G. As the group is closed, we have ψ(g̃x, t, τ) = ∫_G 1{⟨g g̃ x, t⟩ ≤ τ} dη(g) = ∫_G 1{⟨g x, t⟩ ≤ τ} dη(g) = ψ(x, t, τ), and hence K_s(gx, g′z) = K_s(x, z) for all g, g′ ∈ G. It is clear now that K_s is a group-invariant kernel.

In order to approximate K_s, we sample |G| elements uniformly and independently from the group G, i.e. g_i, i = 1 . . . |G|, and define the normalized empirical CDF:

ψ̂(x, t, τ) = (1 / (|G| √m)) Σ_{i=1}^{|G|} 1{⟨g_i t, x⟩ ≤ τ}, −s ≤ τ ≤ s.

We discretize the continuous threshold τ as follows:

ψ̂(x, t, sk/n) = (√s / (√(nm) |G|)) Σ_{i=1}^{|G|} 1{⟨g_i t, x⟩ ≤ sk/n}, −n ≤ k ≤ n.

We sample m templates independently according to the Gaussian sampling with rejection, t_j, j = 1 . . . m. We are now ready to define the random feature map Φ:

Φ(x) = [ψ̂(x, t_j, sk/n)]_{j=1...m, k=−n...n} ∈ R^{(2n+1)×m}.

It is easy to see that

lim_{n→∞} E_{t,g} ⟨Φ(x), Φ(z)⟩ = lim_{n→∞} E_{t,g} Σ_{j=1}^{m} Σ_{k=−n}^{n} ψ̂(x, t_j, sk/n) ψ̂(z, t_j, sk/n) = K_s(x, z).

In Section 3 we study the geometric information captured by this kernel by stating explicitly the similarity it computes.

Remark 2 (Efficiency of the representation). 1) The main advantage of such a feature map, as outlined in [1], is that we store transformed templates in order to compute Φ, whereas computing an invariant kernel of type K (Equation (1)) would require explicitly transforming the points. The latter is computationally expensive; storing transformed templates and computing the signature Φ is much more efficient. It falls in the category of memory-based learning and is biologically plausible [1].
2) As |G|, m, and n get large enough, the feature map Φ approximates a group-invariant kernel, as we will see in the next section.
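A minimal sketch of Φ for the same illustrative rotation group is given below (reusing `rotate` from the earlier sketch): project onto group-transformed random templates and pool with a binned empirical CDF, with the √(s/(nm))/|G| normalization above. All parameter values are illustrative.

```python
import numpy as np

def feature_map(x, templates, group_elems, n_bins, s):
    """Phi(x)[j,k] = sqrt(s/(n*m)) * (1/|G|) * #{i : <g_i t_j, x> <= s*k/n}."""
    m = len(templates)
    taus = s * np.arange(-n_bins, n_bins + 1) / n_bins          # 2n+1 thresholds
    phi = np.zeros((m, 2 * n_bins + 1))
    for j, t in enumerate(templates):
        dots = np.array([np.dot(rotate(g, t), x) for g in group_elems])
        phi[j] = (dots[:, None] <= taus[None, :]).mean(axis=0)  # empirical CDF
    return np.sqrt(s / (n_bins * m)) * phi.ravel()

rng = np.random.default_rng(3)
d, delta = 2, 0.1
templates = []                 # Gaussian sampling with rejection: t ~ N(0, I_d/d)
while len(templates) < 20:
    t = rng.standard_normal(d) / np.sqrt(d)
    if t @ t < 1 + delta:      # keep only templates with ||t||^2 < 1 + delta
        templates.append(t)
group_elems = rng.uniform(0.0, 2 * np.pi, size=50)   # |G| sampled rotations
x = np.array([1.0, 0.0])
print(feature_map(x, templates, group_elems, n_bins=16, s=1 + delta).shape)
```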
3 An Equivalent Expected Kernel and a Uniform Concentration Result
In this section we present our main results, with proofs given in the supplementary material. Theorem 1 shows that the random feature map Φ, defined in the previous section, corresponds in expectation to a group-invariant Haar-integration kernel K_s(x, z). Moreover, s − K_s(x, z) computes the average pairwise distance between all points in the orbits of x and z, where the orbit is defined as the collection of all group transformations of a given point x: O_x = {gx, g ∈ G}.

Theorem 1 (Expectation). Let δ ∈ (0, 1) and x, z ∈ X. Define the distance d_G between the orbits O_x and O_z,

d_G(x, z) = (1/√(2πd)) ∫_G ∫_G ‖gx − g′z‖_2 dη(g) dη(g′),

and the group-invariant expected kernel

K_s(x, z) = lim_{n→∞} E_{t,g} ⟨Φ(x), Φ(z)⟩ = E_t ∫_{−s}^{s} ψ(x, t, τ) ψ(z, t, τ) dτ, s = 1 + δ.

1. The following inequality holds with probability 1:

−δ − ε_2(d, δ) ≤ K_s(x, z) − (1 − d_G(x, z)) ≤ δ + ε_1(d, δ),   (3)

where ε_1(δ, d) = e^{−dδ²/16}/√d − (1+δ)^{d/2} e^{−δd/2}/√d and ε_2(δ, d) = e^{−dδ²/16}/√d + (1+δ) e^{−dδ²/8}.

2. For any δ ∈ (0, 1), as the dimension d → ∞ we have ε_1(δ, d) → 0 and ε_2(δ, d) → 0, so that asymptotically K_s(x, z) ≈ 1 − d_G(x, z) + δ = s − d_G(x, z).

3. K_s is symmetric and K_s is positive semi-definite.
Remark 3. 1) δ, ε_1(d, δ), and ε_2(d, δ) are not errors due to results holding with high probability, but are due to the truncation and are a technical artifact of the proof. 2) Local invariance can be defined by restricting the sampling of the group elements to a subset G_0 ⊂ G. Assuming that for each g ∈ G_0 we also have g^{−1} ∈ G_0, the equivalent kernel asymptotically has the following form:

K_s(x, z) ≈ s − (1/√(2πd)) ∫_{G_0} ∫_{G_0} ‖gx − g′z‖_2 dη(g) dη(g′).

3) The norm-one constraint can be relaxed: let R = sup_{x∈X} ‖x‖_2 < ∞; we can then set s = R(1 + δ), and

−ε_2(d, δ) ≤ K_s(x, z) − (R(1 + δ) − d_G(x, z)) ≤ ε_1(d, δ),   (4)

where ε_1(δ, d) = R e^{−dδ²/16}/√d − R (1+δ)^{d/2} e^{−δd/2}/√d and ε_2(δ, d) = R e^{−dδ²/16}/√d + R(1+δ) e^{−dδ²/8}.
Theorem 2 is, in a sense, an invariant Johnson–Lindenstrauss [14] type result, where we show that the dot product defined by the random feature map Φ, i.e. ⟨Φ(x), Φ(z)⟩, is concentrated around the invariant expected kernel uniformly on a data set of N points, given a sufficiently large number of templates m, a large number of sampled group elements |G|, and a large bin number n. The error naturally decomposes into a numerical error ε_0 and statistical errors ε_1, ε_2 due to the sampling of the templates and the group elements, respectively.

Theorem 2 (Johnson–Lindenstrauss type theorem, N-point set). Let D = {x_i | x_i ∈ X}_{i=1}^N be a finite dataset. Fix ε_0, ε_1, ε_2, δ_1, δ_2 ∈ (0, 1). For a number of bins n ≥ 1/ε_0, a number of templates m ≥ (C_1/ε_1²) log(N/δ_1), and a number of group elements |G| ≥ (C_2/ε_2²) log(Nm/δ_2), where C_1, C_2 are universal numeric constants, we have:

|⟨Φ(x_i), Φ(x_j)⟩ − K_s(x_i, x_j)| ≤ ε_0 + ε_1 + ε_2, i = 1 . . . N, j = 1 . . . N,   (5)

with probability 1 − δ_1 − δ_2.
Putting together Theorems 1 and 2, the following corollary shows how the group-invariant random feature map Φ captures the invariant distance between points uniformly on a dataset of N points.

Corollary 1 (Invariant feature maps and distances between orbits). Let D = {x_i | x_i ∈ X}_{i=1}^N be a finite dataset. Fix ε_0, δ ∈ (0, 1). For a number of bins n ≥ 3/ε_0, a number of templates m ≥ (9C_1/ε_0²) log(N/δ), and a number of group elements |G| ≥ (9C_2/ε_0²) log(Nm/δ), where C_1, C_2 are universal numeric constants, we have:

−δ − ε_2(d, δ) − ε_0 ≤ ⟨Φ(x_i), Φ(x_j)⟩ − (1 − d_G(x_i, x_j)) ≤ ε_0 + δ + ε_1(d, δ),   (6)

for i = 1 . . . N, j = 1 . . . N, with probability 1 − 2δ.
Remark 4. Assuming that the templates are unitary and drawn from a general distribution p(t), the equivalent kernel has the following form:

K_s(x, z) = ∫_G ∫_G dη(g) dη(g′) ( ∫ (s − max(⟨x, gt⟩, ⟨z, g′t⟩)) p(t) dt ).

Indeed, when we use the Gaussian sampling with rejection for the templates, the integral ∫ max(⟨x, gt⟩, ⟨z, g′t⟩) p(t) dt is asymptotically proportional to ‖g^{−1}x − g′^{−1}z‖_2. It is interesting to consider different, domain-specific distributions for the templates and to assess the number of templates needed to approximate such kernels. It is also interesting to find the optimal templates that achieve the minimum distortion in Equation (6) in a data-dependent way, but we will address these points in future work.
4 Learning with Group Invariant Random Features
In this section, we show that learning a linear model in the invariant random feature space, on a training set sampled from the reduced core set X_0, has a low expected risk and generalizes to unseen test points generated from the distribution on X = X_0 ∪ GX_0. The architecture of the proof follows ideas from [15] and [16]. Recall that, given an L-Lipschitz convex loss function V, our aim is to minimize the expected risk given in Equation (2). Denote the CDF by ψ(x, t, τ) = P(⟨gt, x⟩ ≤ τ), and the empirical CDF by ψ̂(x, t, τ) = (1/|G|) Σ_{i=1}^{|G|} 1{⟨g_i t, x⟩ ≤ τ}. Let p(t) be the distribution of the templates t. The RKHS defined by the invariant kernel K_s,

K_s(x, z) = ∫ ∫_{−s}^{s} ψ(x, t, τ) ψ(z, t, τ) p(t) dτ dt,

denoted H_{K_s}, is the completion of the set of all finite linear combinations of the form

f(x) = Σ_i α_i K_s(x, x_i), x_i ∈ X, α_i ∈ R.   (7)

Similarly to [16], we define the following infinite-dimensional function space:

F_p = { f(x) = ∫ ∫_{−s}^{s} w(t, τ) ψ(x, t, τ) dτ dt  |  sup_{t,τ} |w(t, τ)| / p(t) ≤ C }.

Lemma 1. F_p is dense in H_{K_s}. For f ∈ F_p we have E_V(f) = ∫_{X_0} Σ_{y∈Y} V(yf(x)) ρ_y(x) dρ_X(x), where X_0 is the reduced core set.

Since F_p is dense in H_{K_s}, we can learn an invariant decision function in the space F_p instead of learning in H_{K_s}. Let Φ(x) = [ψ̂(x, t_j, sk/n)]_{j=1...m, k=−n...n}; Φ and ψ̂ are equivalent up to constants. We approximate the set F_p as follows:

F̂ = { f(x) = ⟨w, Φ(x)⟩ = (√s/√n) Σ_{j=1}^{m} Σ_{k=−n}^{n} w_{j,k} ψ̂(x, t_j, sk/n), t_j ∼ p, j = 1 . . . m  |  ‖w‖_∞ ≤ C√s/(m√n) }.

Hence, we learn the invariant decision function via empirical risk minimization, where we restrict the function to belong to F̂ and the sampling of the training set is restricted to the core set X_0. Note that with this function space we regularize, for convenience, the infinity norm of the weights, but this can be relaxed in practice to a classical Tikhonov regularization.
Theorem 3 (Learning with group-invariant features). Let S = {(x_i, y_i) | x_i ∈ X_0, y_i ∈ Y, i = 1 . . . N} be a training set sampled from the core set X_0. Let f̂_N = arg min_{f∈F̂} Ê_V(f) = (1/N) Σ_{i=1}^N V(y_i f(x_i)). Fix δ > 0; then

E_V(f̂_N) ≤ min_{f∈F_p} E_V(f) + (2/√N)(4LsC + 2V(0) + LC√((1/2) log(1/δ))) + (2LsC/√m)(1 + √(2 log(1/δ))) + (2sLC/√|G|)(1 + √(2 log(m/δ))) + 2sC/n,

with probability at least 1 − 3δ over the training set and the choice of templates and group elements.

The proof of Theorem 3 is given in Appendix B. Theorem 3 shows that learning a linear model in the invariant random feature space defined by Φ (or, equivalently, ψ̂) has a low expected risk. More importantly, this risk is arbitrarily close to the optimal risk achieved in an infinite-dimensional class of functions, namely F_p. The training set is sampled from the reduced core set X_0, and invariant learning generalizes to unseen test points generated from the distribution on X = X_0 ∪ GX_0, hence the reduction in the sample complexity. Recall that F_p is dense in the RKHS of the invariant Haar-integration kernel, and so the expected risk achieved by a linear model in the invariant random feature space is not far from the one attainable in the invariant RKHS. Note that the error decomposes into two parts. The first, O(1/√N), is statistical, and it depends on the training sample complexity N. The other is governed by the approximation error of functions in F_p by functions in F̂, and depends on the number of templates m, the number of sampled group elements |G|, and the number of bins n; it has the form O(1/√m) + O(√(log m / |G|) + 1/n).
5 Relation to Previous Work
We now put our contributions in perspective by outlining some of the previous work on invariant kernels and on approximating kernels with random features.
Approximating kernels. Several schemes have been proposed for approximating a non-linear kernel with an explicit non-linear feature map in conjunction with linear methods, such as the Nyström method [17] or random sampling techniques in the Fourier domain for translation-invariant kernels [15]. Our features fall under the random sampling techniques, where, unlike previous work, we sample both projections and group elements to induce invariance with an integral representation. We note that the relation between random features and quadrature rules has been thoroughly studied in [18], where sharper bounds and error rates are derived that can apply to our setting.
Invariant kernels. We focused in this paper on Haar-integration kernels [11], since they have an integral representation and hence can be represented with random features [18]. Other invariant kernels have been proposed: in [19] the authors introduce transformation-invariant kernels, but, unlike our general setting, the analysis is concerned with dilation invariance. In [20], multilayer arc-cosine kernels are built by composing kernels that have an integral representation, but invariance is not explicitly induced. More closely related to our work is [21], where kernel descriptors are built for visual recognition by introducing a kernel view of histograms of gradients, which corresponds in our case to the cumulative distribution on the group variable. There, explicit feature maps are obtained via kernel PCA, while our features are obtained via random sampling. Finally, the convolutional kernel network of [22] builds a sequence of multilayer kernels that have an integral representation, by convolution over spatial neighborhoods in an image. Our future work will consider the composition of Haar-integration kernels, where the convolution is applied not only to the spatial variable but also to the group variable, akin to [2].
6 Numerical Evaluation
In this paper, and specifically in Theorems 2 and 3, we showed that the random, group-invariant feature map Φ captures the invariant distance between points, and that a linear model trained in the invariant random feature space will generalize well to unseen test points. In this section, we validate these claims through three experiments. For the claims of Theorem 2 we use a nearest-neighbor classifier, while for Theorem 3 we rely on the regularized least squares (RLS) classifier, one of the simplest algorithms for supervised learning. While our proofs focus on norm-infinity regularization, RLS corresponds to Tikhonov regularization with the square loss. Specifically, for performing T-way classification on a batch of N training points in R^d, summarized in the data matrix X ∈ R^{N×d} and label matrix Y ∈ R^{N×T}, RLS performs the optimization

min_{W ∈ R^{D×T}} (1/N) ‖Y − Φ(X)W‖_F² + λ‖W‖_F²,

where ‖·‖_F is the Frobenius norm, λ is the regularization parameter, D is the feature dimension, and Φ is the feature map, which for the representation described in this paper is a CDF pooling of the data projected onto group-transformed random templates. All RLS experiments in this paper were completed with the GURLS toolbox [23]. A minimal sketch of this pipeline is given below, followed by the three datasets we explore.
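The sketch is hedged: it uses a closed-form ridge-regression solve rather than the GURLS toolbox, and `feature_map` refers to the illustrative implementation sketched in Section 2, not the exact experimental code.

```python
import numpy as np

def rls_fit(Phi, Y, lam):
    """Closed form for min_W (1/N)||Y - Phi W||_F^2 + lam ||W||_F^2."""
    n_pts, D = Phi.shape
    return np.linalg.solve(Phi.T @ Phi + n_pts * lam * np.eye(D), Phi.T @ Y)

# Illustrative usage with one-hot labels Y (N x T):
# Phi_tr = np.stack([feature_map(x, templates, group_elems, 25, s) for x in X_tr])
# W = rls_fit(Phi_tr, Y_tr, lam=1e-3)
# Phi_te = np.stack([feature_map(x, templates, group_elems, 25, s) for x in X_te])
# preds = (Phi_te @ W).argmax(axis=1)   # argmax over the T columns
```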
Xperm (Figure 1): An artificial dataset consisting of all sequences of length 5 whose elements
come from an alphabet of 8 characters. We want to learn a function which assigns a positive value
to any sequence that contains a target set of characters (in our case, two of them) regardless of their
position. Thus, the target function is globally invariant to permutation, and so we project our data
onto all permuted versions of our random template sequences.
MNIST (Figure 2): We seek local invariance to translation and rotation, and so all random templates
are translated by up to 3 pixels in all directions and rotated between -20 and 20 degrees.
TIDIGITS (Figure 3): We use a subset of TIDIGITS consisting of 326 speakers (men, women,
children) reading the digits 0-9 in isolation, and so each datapoint is a waveform of a single word.
We seek local invariance to pitch and speaking rate [25], and so all random templates are pitch
shifted up and down by 400 cents and time-warped to play at half and double speed. The task is 10-way classification with one class per digit. See [24] for more detail.
Acknowledgements: Stephen Voinea acknowledges the support of a Nuance Foundation Grant.
This work was also supported in part by the Center for Brains, Minds and Machines (CBMM),
funded by NSF STC award CCF 1231216.
[Figure 1 plots: Xperm sample complexity for RLS (left) and 1-NN (right); accuracy vs. number of training points per class for raw features, bag-of-words, a Haar-integration-kernel baseline, and CDF feature maps CDF(25,1), CDF(25,10), CDF(25,25).]
Figure 1: Classification accuracy as a function of training set size, averaged over 100 random training samples at each size. Φ = CDF(n, m) refers to a random feature map with n bins and m templates. With 25 templates, the random feature map outperforms the raw features and a bag-of-words representation (also invariant to permutation) and even approaches an RLS classifier with a Haar-integration kernel. Error bars were removed from the RLS plot for clarity. See the supplement.
[Figure 2 plots: left, MNIST RLS accuracy (1000 points per class) as a function of the number of templates, for 5 and 25 bins; right, MNIST sample complexity for RLS, comparing raw features with CDF(50,500).]
Figure 2: (Left) Mean classification accuracy as a function of the number of bins and templates, averaged over 30 random sets of templates. (Right) Classification accuracy as a function of training set size, averaged over 100 random samples of the training set at each size. At 1000 examples per class, we achieve an accuracy of 98.97%.
[Figure 3 plots: TIDIGITS gender and speaker RLS accuracy as a function of the number of templates, for 5, 25, and 100 bins.]
Figure 3: Mean classification accuracy as a function of the number of bins and templates, averaged over 30 random sets of templates. In the "Speaker" dataset, we test on unseen speakers, and in the "Gender" dataset, we test on a new gender, giving us an extreme train/test mismatch [25].
References
[1] F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013.
[2] J. Bruna and S. Mallat. Invariant scattering convolution networks. CoRR, abs/1203.1513, 2012.
[3] G. Hinton, A. Krizhevsky, and S. Wang. Transforming auto-encoders. ICANN, 2011.
[4] Y. Bengio, A. C. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, 2013.
[5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, 86:2278–2324, 1998.
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[7] P. Niyogi, F. Girosi, and T. Poggio. Incorporating prior information in machine learning by creating virtual examples. In Proceedings of the IEEE, pages 2196–2209, 1998.
[8] Y. S. Abu-Mostafa. Learning from hints in neural networks. Journal of Complexity, 6:192–198, June 1990.
[9] V. N. Vapnik. Statistical Learning Theory. A Wiley-Interscience Publication, 1998.
[10] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer, New York, 2008.
[11] B. Haasdonk, A. Vossen, and H. Burkhardt. Invariance in kernel methods by Haar-integration kernels. In SCIA. Springer, 2005.
[12] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[13] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, PA, 1990.
[14] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Conference in Modern Analysis and Probability, 1984.
[15] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In NIPS, 2008.
[16] A. Rahimi and B. Recht. Uniform approximation of functions with random bases. In Proceedings of the 46th Annual Allerton Conference, 2008.
[17] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, 2001.
[18] F. R. Bach. On the equivalence between quadrature rules and random features. CoRR, abs/1502.06800, 2015.
[19] C. Walder and O. Chapelle. Learning with transformation invariant kernels. In NIPS, 2007.
[20] Y. Cho and L. K. Saul. Kernel methods for deep learning. In NIPS, pages 342–350, 2009.
[21] L. Bo, X. Ren, and D. Fox. Kernel descriptors for visual recognition. In NIPS, 2010.
[22] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In NIPS, 2014.
[23] A. Tacchetti, P. K. Mallapragada, M. Santoro, and L. Rosasco. GURLS: a least squares library for supervised learning. CoRR, abs/1303.0934, 2013.
[24] S. Voinea, C. Zhang, G. Evangelopoulos, L. Rosasco, and T. Poggio. Word-level invariant representations from acoustic waveforms. In INTERSPEECH, pages 3201–3205, September 2014.
[25] M. Benzeghiba, R. De Mori, O. Deroo, S. Dupont, T. Erbes, D. Jouvet, L. Fissore, P. Laface, A. Mertins, C. Ris, R. Rose, V. Tyagi, and C. Wellekens. Automatic speech recognition and speech variability: A review. Speech Communication, 49:763–786, 2007.